A Research on the Generative Learning Model Supported by Context-Based Learning
ERIC Educational Resources Information Center
Ulusoy, Fatma Merve; Onen, Aysem Seda
2014-01-01
This study is based on the generative learning model, which involves context-based learning. Using this model, we taught the topic of halogens, which is covered in the grade 10 chemistry curriculum, through activities designed in accordance with the generative learning model supported by context-based learning. The…
Model Based Analysis and Test Generation for Flight Software
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep
2009-01-01
We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.
Different Manhattan project: automatic statistical model generation
NASA Astrophysics Data System (ADS)
Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore
2002-03-01
We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscapes). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models, based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan, but the method is generally applicable to the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc.) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.
A Model-Based Method for Content Validation of Automatically Generated Test Items
ERIC Educational Resources Information Center
Zhang, Xinxin; Gierl, Mark
2016-01-01
The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…
Testing Strategies for Model-Based Development
NASA Technical Reports Server (NTRS)
Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.
2006-01-01
This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro
2013-01-01
Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108
Preserving Differential Privacy in Degree-Correlation based Graph Generation
Wang, Yue; Wu, Xintao
2014-01-01
Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as cluster coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enforce differential privacy on graph model parameters learned from the original network and then generate the graphs for releasing using the graph model with the private parameters. In particular, we develop a differential privacy preserving graph generator based on the dK-graph generation model. We first derive from the original graph various parameters (i.e., degree correlations) used in the dK-graph model, then enforce edge differential privacy on the learned parameters, and finally use the dK-graph model with the perturbed parameters to generate graphs. For the 2K-graph model, we enforce the edge differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We conduct experiments on four real networks and compare the performance of our private dK-graph models with the stochastic Kronecker graph generation model in terms of utility and privacy tradeoff. Empirical evaluations show the developed private dK-graph generation models significantly outperform the approach based on the stochastic Kronecker generation model. PMID:24723987
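As a rough illustration of the perturbation step described in this abstract, the sketch below adds Laplace noise to the dK-2 statistics (joint degree counts) of a graph. It is a minimal sketch only: the sensitivity bound is supplied by the caller as a global bound, whereas the paper calibrates noise via the tighter smooth sensitivity for the 2K model.

```python
import numpy as np
import networkx as nx
from collections import Counter

def private_dk2_counts(G, epsilon, sensitivity):
    """Perturb dK-2 statistics (edge counts between degree pairs) with
    Laplace noise scaled to sensitivity/epsilon (Laplace mechanism)."""
    deg = dict(G.degree())
    counts = Counter(tuple(sorted((deg[u], deg[v]))) for u, v in G.edges())
    scale = sensitivity / epsilon
    return {pair: max(0.0, c + np.random.laplace(0.0, scale))
            for pair, c in counts.items()}

# Example: perturbed joint-degree statistics of a random graph
G = nx.erdos_renyi_graph(200, 0.05, seed=1)
noisy = private_dk2_counts(G, epsilon=1.0, sensitivity=4.0)
```

A dK-graph generator would then be fed these noisy statistics to synthesize the graphs for release.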
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
Supervised Learning Based Hypothesis Generation from Biomedical Literature.
Sang, Shengtian; Yang, Zhihao; Li, Zongyao; Lin, Hongfei
2015-01-01
Nowadays, the amount of biomedical literature is growing at an explosive speed, and much useful knowledge in it remains undiscovered. Researchers can form biomedical hypotheses by mining this literature. In this paper, we propose a supervised learning based approach to generate hypotheses from biomedical literature. This approach splits the traditional processing of hypothesis generation with the classic ABC model into an AB model and a BC model, which are constructed with supervised learning methods. Compared with concept co-occurrence and grammar engineering-based approaches such as SemRep, machine learning based models usually achieve better performance in information extraction (IE) from texts. By combining the two models, the approach then reconstructs the ABC model and generates biomedical hypotheses from the literature. Experimental results on the three classic Swanson hypotheses show that our approach outperforms the SemRep system.
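To make the AB/BC split concrete, here is a hedged Python sketch of chaining two trained link classifiers to propose A-B-C hypotheses. The classifier objects and the `feats` feature extractor are hypothetical placeholders, not the paper's actual models or features.

```python
def generate_hypotheses(a_terms, b_terms, c_terms, ab_clf, bc_clf, feats):
    """Swanson-style ABC hypothesis generation from two supervised models:
    propose (A, B, C) when both the A-B and B-C links are predicted."""
    hypotheses = set()
    for a in a_terms:
        for b in b_terms:
            if ab_clf.predict([feats(a, b)])[0] != 1:  # no A-B link
                continue
            for c in c_terms:
                if bc_clf.predict([feats(b, c)])[0] == 1:
                    hypotheses.add((a, b, c))
    return hypotheses
```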
Kaminsky, Jan; Rodt, Thomas; Gharabaghi, Alireza; Forster, Jan; Brand, Gerd; Samii, Madjid
2005-06-01
FE modeling of complex anatomical structures has not yet been solved satisfactorily. Voxel-based, as opposed to contour-based, algorithms allow automated mesh generation based on the image data; nonetheless, their geometric precision is limited. We developed an automated mesh generator that combines the advantages of voxel-based generation with an improved representation of the geometry obtained by displacing nodes onto the object surface. Models of an artificial 3D pipe section and a skull base were generated with different mesh densities using the newly developed geometric, unsmoothed and smoothed voxel generators. Compared to the analytic calculation for the 3D pipe-section model, the normalized RMS error of the surface stress was 0.173-0.647 for the unsmoothed voxel models, 0.111-0.616 for the smoothed voxel models with small volume error, and 0.126-0.273 for the geometric models. The highest element-energy error, as a criterion for mesh quality, was 2.61×10⁻² N mm, 2.46×10⁻² N mm and 1.81×10⁻² N mm for the unsmoothed, smoothed and geometric voxel models, respectively. The geometric model of the 3D skull base resulted in the lowest element-energy error and volume error; this algorithm also allowed the best representation of anatomical details. The presented geometric mesh generator is universally applicable and allows automated, accurate modeling by combining the advantages of the voxel technique with improved surface modeling.
Research on Generating Method of Embedded Software Test Document Based on Dynamic Model
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper presents a dynamic model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test document. The method allows dynamic test requirements to be implemented in dynamic models, so that dynamic test requirement tracking can be generated easily. It can automatically produce standardized test requirements and test documentation, addressing inconsistency and incompleteness in document-related content and improving efficiency.
Model Based Document and Report Generation for Systems Engineering
NASA Technical Reports Server (NTRS)
Delp, Christopher; Lam, Doris; Fosse, Elyse; Lee, Cin-Young
2013-01-01
As Model Based Systems Engineering (MBSE) practices gain adoption, various approaches have been developed in order to simplify and automate the process of generating documents from models. Essentially, all of these techniques can be unified around the concept of producing different views of the model according to the needs of the intended audience. In this paper, we will describe a technique developed at JPL of applying SysML Viewpoints and Views to generate documents and reports. An architecture of model-based view and document generation will be presented, and the necessary extensions to SysML with associated rationale will be explained. A survey of examples will highlight a variety of views that can be generated, and will provide some insight into how collaboration and integration is enabled. We will also describe the basic architecture for the enterprise applications that support this approach.
NASA Astrophysics Data System (ADS)
Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.
2017-11-01
In order to select effective samples from the many years of historical PV power generation data and improve the accuracy of the PV power generation forecasting model, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on neural networks. Based on three different types of weather (sunny, cloudy and rainy days), this research screens samples of historical data by the clustering analysis method. After screening, it establishes BP neural network prediction models using the screened data as training data. The six types of photovoltaic power generation prediction models before and after the data screening are then compared. Results show that a prediction model combining clustering analysis with BP neural networks is an effective method to improve the precision of photovoltaic power generation forecasting.
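A minimal scikit-learn sketch of the screen-then-train idea, using synthetic stand-in data: days are clustered into three weather types, and a separate BP (multilayer perceptron) regressor is trained per type. The features and network settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# Stand-in data: weather features (e.g. irradiance, temperature, humidity)
# and PV output for 300 historical days.
rng = np.random.default_rng(0)
X = rng.random((300, 3))
y = X @ np.array([0.7, 0.2, 0.1]) + 0.05 * rng.standard_normal(300)

# 1) Screen samples: cluster days into weather types (sunny/cloudy/rainy).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# 2) Train one BP neural network per weather type on its screened samples.
models = {k: MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X[labels == k], y[labels == k])
          for k in range(3)}
```

At prediction time, a new day would be assigned to the nearest cluster and forecast with that cluster's model.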
NASA Astrophysics Data System (ADS)
Boakye-Boateng, Nasir Abdulai
The growing demand for wind power integration into the generation mix prompts the need to subject these systems to stringent performance requirements. This study sought to identify the tools and procedures needed to perform real-time simulation studies of Doubly-Fed Induction Generator (DFIG) based wind generation systems as a basis for performing more practical tests of reliability and performance for both grid-connected and islanded wind generation systems. The author focused on developing a platform for wind generation studies and, in addition, tested the performance of two DFIG models on the platform's real-time simulation model: an average SimPowerSystems DFIG wind turbine, and a detailed DFIG-based wind turbine using ARTEMiS components. The platform model implemented here consists of a high-voltage transmission system with four integrated wind farm models comprising in total 65 DFIG-based wind turbines; it was developed and tested on OPAL-RT's eMEGASim Real-Time Digital Simulator.
Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models
NASA Astrophysics Data System (ADS)
Xu, Shiming
2015-04-01
We propose new grid generation algorithms for global ocean general circulation models (OGCMs). Contrary to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the conventional grid design problem of pole relocation, they also address more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling when complex land-ocean distribution is present.
Patch-Based Generative Shape Model and MDL Model Selection for Statistical Analysis of Archipelagos
NASA Astrophysics Data System (ADS)
Ganz, Melanie; Nielsen, Mads; Brandt, Sami
We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image, where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation of calcifications, where the area overlap with the ground truth shapes improved significantly compared to the case where the prior was not used.
Alterations in choice behavior by manipulations of world model.
Green, C S; Benson, C; Kersten, D; Schrater, P
2010-09-14
How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa
2015-04-13
Current methods for the development of pelvic finite element (FE) models are generally based on specimen-specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient, validated approach for pelvic FE model generation without the need for segmentation.
Min, Yul Ha; Park, Hyeoun-Ae; Chung, Eunja; Lee, Hyunsook
2013-12-01
The purpose of this paper is to describe the components of a next-generation electronic nursing records system ensuring full semantic interoperability and integrating evidence into the nursing records system. A next-generation electronic nursing records system based on detailed clinical models and clinical practice guidelines was developed at Seoul National University Bundang Hospital in 2013. This system has two components, a terminology server and a nursing documentation system. The terminology server manages nursing narratives generated from entity-attribute-value triplets of detailed clinical models using a natural language generation system. The nursing documentation system provides nurses with a set of nursing narratives arranged around the recommendations extracted from clinical practice guidelines. An electronic nursing records system based on detailed clinical models and clinical practice guidelines was successfully implemented in a hospital in Korea. The next-generation electronic nursing records system can support nursing practice and nursing documentation, which in turn will improve data quality.
Measurement-based quantum communication with resource states generated by entanglement purification
NASA Astrophysics Data System (ADS)
Wallnöfer, J.; Dür, W.
2017-01-01
We investigate measurement-based quantum communication with noisy resource states that are generated by entanglement purification. We consider the transmission of encoded information via noisy quantum channels using a measurement-based implementation of encoding, error correction, and decoding. We show that such an approach offers advantages over direct transmission, gate-based error correction, and measurement-based schemes with direct generation of resource states. We analyze the noise structure of resource states generated by entanglement purification and show that a local error model, i.e., noise acting independently on all qubits of the resource state, is a good approximation in general, and provides an exact description for Greenberger-Horne-Zeilinger states. The latter are resources for a measurement-based implementation of error-correction codes for bit-flip or phase-flip errors. This provides an approach to link the recently found very high thresholds for fault-tolerant measurement-based quantum information processing based on local error models for resource states with error thresholds for gate-based computational models.
Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S
2017-10-01
The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, improving classifiers for the diagnosis of such skin lesions remains a challenge. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. The input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied to a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model, which generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model, with very promising results.
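The feature-subset ensemble can be sketched as follows; the column indices for the feature groups are hypothetical, and a decision tree stands in for the optimum-path forest classifier used in the paper. Majority voting here assumes non-negative integer class labels.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature groups: column indices for shape, colour, texture.
GROUPS = {"shape": [0, 1, 2], "colour": [3, 4], "texture": [5, 6, 7]}

def fit_ensemble(X, y):
    # One base classifier per feature subset to create ensemble diversity.
    return {name: DecisionTreeClassifier(random_state=0).fit(X[:, cols], y)
            for name, cols in GROUPS.items()}

def predict_majority(ensemble, X):
    votes = np.stack([clf.predict(X[:, GROUPS[name]])
                      for name, clf in ensemble.items()])
    # Majority vote across base classifiers for each sample.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```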
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
NASA Astrophysics Data System (ADS)
Xu, S.; Wang, B.; Liu, J.
2015-02-01
In this article we propose two conformal-mapping-based grid generation algorithms for global ocean general circulation models (OGCMs). Contrary to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the basic grid design problem of pole relocation, these new algorithms also address more advanced issues such as a smoothed scaling factor and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling where complex land-ocean distribution is present.
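For reference, both algorithms build on the classical Schwarz-Christoffel formula, which conformally maps the unit disk onto a polygon whose vertices can encode the prescribed boundary (e.g. coastline) information:

```latex
f(z) = A + C \int_{0}^{z} \prod_{k=1}^{n}
       \left( 1 - \frac{\zeta}{z_k} \right)^{\alpha_k - 1} d\zeta
```

where the prevertices z_k lie on the unit circle, \alpha_k \pi are the interior angles of the target polygon, and A and C fix translation, rotation and scale.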
Capacity expansion model of wind power generation based on ELCC
NASA Astrophysics Data System (ADS)
Yuan, Bo; Zong, Jin; Wu, Shengyu
2018-02-01
Capacity expansion is an indispensable prerequisite for power system planning and construction, and a reasonable, efficient and accurate capacity expansion model (CEM) is crucial to power system planning. In most current CEMs, the capacity of wind power generation is treated as a boundary condition rather than a decision variable, which may lead to curtailment or over-construction of flexible resources, especially in high renewable energy penetration scenarios. This paper proposes a wind power generation capacity value (CV) calculation method based on effective load-carrying capability (ELCC), and a CEM that co-optimizes wind power generation and conventional power sources. Wind power generation is a decision variable in this model, and the model can accurately reflect the uncertain nature of wind power.
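A minimal sketch of the ELCC idea behind the capacity-value calculation, under simplifying assumptions (chronological load and wind series, fixed conventional capacity, and loss-of-load probability as the reliability metric; the paper's formulation may differ in detail):

```python
import numpy as np

def lolp(load, conv_capacity, wind=0.0):
    """Loss-of-load probability over a chronological hourly series."""
    return np.mean(load > conv_capacity + wind)

def elcc(load, conv_capacity, wind, tol=1e-3):
    """Effective load-carrying capability: the constant load increase the
    wind series can support while restoring the no-wind reliability."""
    target = lolp(load, conv_capacity)        # reliability without wind
    lo, hi = 0.0, float(np.max(wind)) + 1.0   # ELCC cannot exceed peak output
    while hi - lo > tol:                      # bisection on the added load
        mid = 0.5 * (lo + hi)
        if lolp(load + mid, conv_capacity, wind) > target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example with synthetic hourly series (MW)
rng = np.random.default_rng(0)
load = 900.0 + 200.0 * rng.random(8760)
wind = 200.0 * rng.random(8760)
print(elcc(load, conv_capacity=1000.0, wind=wind))
```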
NASA Astrophysics Data System (ADS)
Teodor, V. G.; Baroiu, N.; Susac, F.; Oancea, N.
2016-11-01
The modelling of a curl (family) of surfaces associated with a pair of rolling centrodes, when the profile of the rack-gear's teeth is known by direct measurement as a coordinate matrix, has as its goal the determination of the generating quality for an imposed kinematics of the relative motion of the tool with respect to the blank. In this way, it is possible to determine the geometrical generating error, as a component of the total error. Modelling the generation process allows potential errors of the generating tool to be highlighted, so that its profile can be corrected before the tool is used in the machining process. A method developed in CATIA is proposed, based on a new approach, the method of "relative generating trajectories". The analytical foundations are presented, along with applications for known models of rack-gear type tools used on Maag teething machines.
Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds
NASA Astrophysics Data System (ADS)
Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu
2016-06-01
3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, airborne laser scanning sensors or multi-view satellite images are generally used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive, and trained persons are needed to use them for point cloud acquisition. A potentially effective 3D model can instead be generated from a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone-based 3D model of an example structure with a terrestrial laser scanning point cloud of the structure. This comparison gives us the opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone-based point clouds can help to solve further problems in 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings might be insightful for future studies in fast, easy and low-cost 3D urban model generation.
NASA Technical Reports Server (NTRS)
Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui; Yenne, Britt; Vansickle, Larry; Ballantyne, Michael
1992-01-01
Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model.
Research on complex 3D tree modeling based on L-system
NASA Astrophysics Data System (ADS)
Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li
2018-03-01
The L-system, as a fractal iterative system, can simulate complex geometric patterns. Based on field observation data of trees and the knowledge of forestry experts, this paper extracted modeling constraint rules and obtained an L-system rule set. Using self-developed L-system modeling software, the L-system rule set was parsed to generate complex 3D tree models. The results showed that the geometrical modeling method based on L-systems can be used to describe the morphological structure of complex trees and generate 3D tree models.
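As a toy illustration of the string-rewriting core of an L-system (the paper's field-calibrated rule set is not reproduced here), consider this minimal Python sketch:

```python
def expand(axiom, rules, n):
    """Apply the L-system production rules n times to the axiom."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Toy bracketed L-system for a branching tree: F = grow forward,
# +/- = turn, [ ] = push/pop turtle state at a branch point.
rules = {"F": "FF+[+F-F-F]-[-F+F+F]"}
print(expand("F", rules, 2)[:60])
```

A turtle-graphics interpreter would then turn the expanded string into 3D branch geometry, with angles and lengths constrained by the extracted forestry rules.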
Generating Systems Biology Markup Language Models from the Synthetic Biology Open Language.
Roehner, Nicholas; Zhang, Zhen; Nguyen, Tramy; Myers, Chris J
2015-08-21
In the context of synthetic biology, model generation is the automated process of constructing biochemical models based on genetic designs. This paper discusses the use cases for model generation in genetic design automation (GDA) software tools and introduces the foundational concepts of standards and model annotation that make this process useful. Finally, this paper presents an implementation of model generation in the GDA software tool iBioSim and provides an example of generating a Systems Biology Markup Language (SBML) model from a design of a 4-input AND sensor written in the Synthetic Biology Open Language (SBOL).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Chen; Gupta, Vipul; Huang, Shenyan
The goal of this project is to model long-term creep performance for nickel-base superalloy weldments in high temperature power generation systems. The project uses physics-based modeling methodologies and algorithms for predicting alloy properties in heterogeneous material structures. The modeling methodology will be demonstrated on a gas turbine combustor liner weldment of Haynes 282 precipitate-strengthened nickel-base superalloy. The major developments are: (1) microstructure-property relationships under creep conditions and microstructure characterization; (2) modeling of the inhomogeneous microstructure in the superalloy weld; (3) modeling of mesoscale plastic deformation in the superalloy weld; and (4) a constitutive creep model that accounts for weld and base metal microstructure and their long-term evolution. The developed modeling technology aims to provide a more efficient and accurate assessment of a material's long-term performance compared with current testing and extrapolation methods. This modeling technology will also accelerate development and qualification of new materials in advanced power generation systems. This document is a final technical report for the project, covering efforts conducted from October 2014 to December 2016.
Automatic 3d Building Model Generations with Airborne LiDAR Data
NASA Astrophysics Data System (ADS)
Yastikli, N.; Cetin, Z.
2017-11-01
LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3 dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and has major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications; thus, an approach for automatic 3D building model generation is needed, in a simple and quick way, for the many studies which include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is addressed. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification includes hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules have been performed to improve classification results using different test areas identified in the study area. The proposed approach has been tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The results of this research on the study area verified that 3D building models can be generated automatically and successfully using raw LiDAR point cloud data.
Application for managing model-based material properties for simulation-based engineering
Hoffman, Edward L [Alameda, CA
2009-03-03
An application for generating a property set associated with a constitutive model of a material includes a first program module adapted to receive test data associated with the material and to extract loading conditions from the test data. A material model driver is adapted to receive the loading conditions and a property set and operable in response to the loading conditions and the property set to generate a model response for the material. A numerical optimization module is adapted to receive the test data and the model response and operable in response to the test data and the model response to generate the property set.
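The patent describes the modules abstractly; as a hedged sketch of the overall loop, a numerical optimizer can adjust the property set until a stand-in material model driver reproduces the test data. The bilinear hardening law and parameter names below are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

def model_driver(props, strain):
    """Stand-in material model driver: stress from a toy bilinear
    (elastic / linear-hardening) law under the extracted loading."""
    E, H = props
    yield_strain = 0.002
    return (E * np.minimum(strain, yield_strain)
            + H * np.maximum(strain - yield_strain, 0.0))

def fit_property_set(strain, measured_stress, initial_props):
    """Numerical optimization module: minimize model-vs-test residual."""
    residual = lambda p: model_driver(p, strain) - measured_stress
    return least_squares(residual, initial_props).x

# Example: recover (E, H) from synthetic test data
strain = np.linspace(0.0, 0.01, 50)
stress = model_driver([200e3, 5e3], strain)
print(fit_property_set(strain, stress, initial_props=[150e3, 1e3]))
```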
NASA Astrophysics Data System (ADS)
Rodriguez Marco, Albert
Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery-cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption, which is a fundamental necessity in order to make transfer functions, and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to render an approach to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption, and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics well if operated near the setpoint at which they were generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) across the cell's state-of-charge, temperature and C-rate range.
NASA Astrophysics Data System (ADS)
Lim, Chen Kim; Tan, Kian Lam; Yusran, Hazwanni; Suppramaniam, Vicknesh
2017-10-01
Visual language, or visual representation, has been used in the past few years to express knowledge graphically. One of the important graphical elements is the fractal, and the L-System is a mathematics-based grammatical model for modelling cell development and plant topology. From the plant model, L-Systems can be interpreted as music sound and score. In this paper, LSound, a Visual Language Programming (VLP) framework, has been developed to model plants to music sound and generate music scores and vice versa. The objective of this research is threefold: (i) to expand the grammar dictionary of L-Systems music based on visual programming, (ii) to design and produce a user-friendly, icon-based visual language framework for L-Systems musical score generation which helps beginning learners in the musical field, and (iii) to generate music scores from plant models and vice versa using the L-Systems method. This research follows a four-phase methodology in which the plant is first modelled, then the music is interpreted, followed by the output of music sound through MIDI, and finally the score is generated. LSound is technically compared to other existing applications in terms of its capability of modelling the plant, rendering the music and generating the sound. It has been found that LSound is a flexible framework in which the plant can be easily altered through arrow-based programming and the music score can be altered through music symbols and notes. This work encourages non-experts to understand L-Systems and music hand-in-hand.
NASA Technical Reports Server (NTRS)
Parrott, Edith L.; Weiland, Karen J.
2017-01-01
The ability of systems engineers to use model-based systems engineering (MBSE) to generate self-consistent, up-to-date systems engineering products for project life-cycle and technical reviews is an important aspect for the continued and accelerated acceptance of MBSE. Currently, many review products are generated using labor-intensive, error-prone approaches based on documents, spreadsheets, and chart sets; a promised benefit of MBSE is that users will experience reductions in inconsistencies and errors. This work examines features of SysML that can be used to generate systems engineering products. Model elements, relationships, tables, and diagrams are identified for a large number of the typical systems engineering artifacts. A SysML system model can contain and generate most systems engineering products to a significant extent and this paper provides a guide on how to use MBSE to generate products for project life-cycle and technical reviews. The use of MBSE can reduce the schedule impact usually experienced for review preparation, as in many cases the review products can be auto-generated directly from the system model. These approaches are useful to systems engineers, project managers, review board members, and other key project stakeholders.
LISP based simulation generators for modeling complex space processes
NASA Technical Reports Server (NTRS)
Tseng, Fan T.; Schroer, Bernard J.; Dwan, Wen-Shing
1987-01-01
The development of a simulation assistant for modeling discrete event processes is presented. Included are an overview of the system, a description of the simulation generators, and a sample process generated using the simulation assistant.
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
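The general shape of the approach can be sketched in a few lines of Python: a parametric (Gaussian) likelihood approximation is built from repeated stochastic simulations and embedded in a Metropolis sampler. This is a minimal illustration assuming a flat prior and independent summary statistics, not the FORMIND setup itself.

```python
import numpy as np

def log_synthetic_likelihood(theta, obs_stats, simulate, n_rep=50):
    """Parametric likelihood approximation: fit a normal distribution to
    summary statistics of repeated stochastic model runs."""
    sims = np.array([simulate(theta) for _ in range(n_rep)])
    mu, sd = sims.mean(axis=0), sims.std(axis=0) + 1e-9
    return -0.5 * np.sum(((obs_stats - mu) / sd) ** 2
                         + np.log(2.0 * np.pi * sd ** 2))

def metropolis(theta0, obs_stats, simulate, step, n_iter=1000):
    """MCMC sampler using the simulation-based likelihood (flat prior)."""
    theta, ll = np.asarray(theta0, dtype=float), -np.inf
    chain = []
    for _ in range(n_iter):
        prop = theta + step * np.random.standard_normal(theta.shape)
        ll_prop = log_synthetic_likelihood(prop, obs_stats, simulate)
        if np.log(np.random.rand()) < ll_prop - ll:  # accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)
```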
Knowledge-based approach for generating target system specifications from a domain model
NASA Technical Reports Server (NTRS)
Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan
1992-01-01
Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-12-18
This paper presents four algorithms to generate random forecast error time series: a truncated-normal distribution model, a state-space based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, for use in variable generation integration studies. A comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. This paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
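Of the four generators, the ARMA approach is the simplest to sketch. Below is a minimal non-seasonal ARMA(1,1) error generator in Python with illustrative parameter values; in practice the coefficients would be fitted to the historical forecast errors, with seasonal terms added as in the paper.

```python
import numpy as np

def arma11_errors(n, phi, theta, sigma, seed=None):
    """Generate a synthetic forecast-error series from an ARMA(1,1)
    model: e[t] = phi*e[t-1] + eps[t] + theta*eps[t-1]."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, n)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + eps[t] + theta * eps[t - 1]
    return e

# Synthetic day-ahead forecast = actual + generated error (per-unit)
errors = arma11_errors(8760, phi=0.8, theta=0.2, sigma=0.03, seed=1)
```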
A generative tool for building health applications driven by ISO 13606 archetypes.
Menárguez-Tortosa, Marcos; Martínez-Costa, Catalina; Fernández-Breis, Jesualdo Tomás
2012-10-01
The use of Electronic Healthcare Records (EHR) standards in the development of healthcare applications is crucial for achieving the semantic interoperability of clinical information. Advanced EHR standards make use of the dual model architecture, which provides a solution for clinical interoperability based on the separation of the information and knowledge. However, the impact of such standards is biased by the limited availability of tools that facilitate their usage and practical implementation. In this paper, we present an approach for the automatic generation of clinical applications for the ISO 13606 EHR standard, which is based on the dual model architecture. This generator has been generically designed, so it can be easily adapted to other dual model standards and can generate applications for multiple technological platforms. Such good properties are based on the combination of standards for the representation of generic user interfaces and model-driven engineering techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard, M.A.; Sommer, S.C.
1995-04-01
AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.
NASA Astrophysics Data System (ADS)
Zangori, Laura; Forbes, Cory T.; Schwarz, Christina V.
2015-10-01
Opportunities to generate model-based explanations are crucial for elementary students, yet are rarely foregrounded in elementary science learning environments despite evidence that early learners can reason from models when provided with scaffolding. We used a quasi-experimental research design to investigate the comparative impact of a scaffold test condition, consisting of embedded physical scaffolds within a curricular modeling task, on third-grade (age 8-9) students' formulation of model-based explanations for the water cycle. This condition was contrasted with a control condition in which third-grade students used a curricular modeling task with no embedded physical scaffolds. Students from each condition (n = 60 scaffolded; n = 56 unscaffolded) generated models of the water cycle before and after completion of a 10-week water unit. Results from quantitative analyses suggest that students in the scaffolded condition represented and linked more subsurface water process sequences with surface water process sequences than did students in the unscaffolded condition. However, results of qualitative analyses indicate that students in the scaffolded condition were less likely to build upon these process sequences to generate model-based explanations and experienced difficulties understanding their models as abstracted representations rather than recreations of real-world phenomena. We conclude that embedded curricular scaffolds may support students to consider non-observable components of the water cycle but, alone, may be insufficient for generation of model-based explanations about subsurface water movement.
Automation of route identification and optimisation based on data-mining and chemical intuition.
Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G
2017-09-21
Data-mining of Reaxys and network analysis of the combined literature and in-house reaction sets were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.
Synthetic Training Data Generation for Activity Monitoring and Behavior Analysis
NASA Astrophysics Data System (ADS)
Monekosso, Dorothy; Remagnino, Paolo
This paper describes a data generator that produces synthetic data to simulate observations from an array of environment monitoring sensors. The overall goal of our work is to monitor the well-being of one occupant in a home. Sensors are embedded in a smart home to unobtrusively record environmental parameters. Based on the sensor observations, behavior analysis and modeling are performed. However, behavior analysis and modeling require large data sets, collected over long periods of time, to achieve the expected level of accuracy. A data generator was therefore developed, based on initial data (i.e., data collected over periods lasting weeks), to facilitate concurrent data collection and algorithm development. The data generator is based on statistical inference techniques. Variation is introduced into the data using perturbation models.
Parametric vs. non-parametric daily weather generator: validation and comparison
NASA Astrophysics Data System (ADS)
Dubrovsky, Martin
2016-04-01
As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most popular downscaling methods capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run in various time steps and for multiple weather variables (the generators reproduce the correlations among variables), and (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maximum durations of hot/cold/dry/wet spells); and (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series from several European stations available from the ECA&D database.
2013-09-01
… partner agencies and nations, detects, tracks, and interdicts illegal drug-trafficking in this region. In this thesis, we develop a probability model based on intelligence inputs to generate a spatial-temporal heat map specifying the … complement and vet such complicated simulation by developing more analytically tractable models. We develop probability models to generate a heat map …
NASA Technical Reports Server (NTRS)
Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)
1991-01-01
An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
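To make the aggregation step concrete, here is a small sketch of composing low-level reliability values through an architecture description using textbook series/parallel rules. The composition rules and component values are standard assumptions for illustration, not the patent's actual algorithm.

```python
# Sketch: aggregating low-level reliability models via an architecture description.
def series(*reliabilities):
    """All components must work."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(*reliabilities):
    """System works if any redundant component works."""
    q = 1.0
    for x in reliabilities:
        q *= (1.0 - x)
    return 1.0 - q

# Low-level models: component -> reliability over the mission time.
low_level = {"cpu": 0.999, "bus": 0.995, "sensor": 0.98}

# Architecture description: triple-redundant sensors feeding one CPU via one bus.
system_reliability = series(low_level["cpu"], low_level["bus"],
                            parallel(*[low_level["sensor"]] * 3))
print(f"{system_reliability:.6f}")
```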
A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure.
Zhou, Yang; Wu, Dewei
2016-01-01
Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruiting of a new place cell. According to this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant characteristics of the image, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of firing field (AFFF) and the firing rate's threshold (FRT).
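A compact sketch of the similarity measure and recruit-if-unfamiliar rule follows: the firing rate is a Gaussian of the Euclidean distance between landmark feature vectors, and a new cell is recruited when no existing cell fires above threshold. Feature extraction and the parameter values are illustrative assumptions.

```python
# Sketch: Gaussian-of-Euclidean-distance similarity and place-cell recruitment.
import numpy as np

SIGMA = 0.5   # width of the Gaussian firing field (assumed)
FRT = 0.6     # firing rate threshold (assumed)

def similarity(f_current, f_stored, sigma=SIGMA):
    """Gaussian of the Euclidean distance between two landmark feature vectors."""
    d = np.linalg.norm(f_current - f_stored)
    return np.exp(-d**2 / (2 * sigma**2))

place_cells = []  # each entry stores the feature vector of its field centre

def perceive(features):
    rates = [similarity(features, c) for c in place_cells]
    if not rates or max(rates) < FRT:
        place_cells.append(features)  # recruit a new place cell
    return rates
```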
Generating Multimodal References
ERIC Educational Resources Information Center
van der Sluis, Ielka; Krahmer, Emiel
2007-01-01
This article presents a new computational model for the generation of multimodal referring expressions (REs), based on observations in human communication. The algorithm is an extension of the graph-based algorithm proposed by Krahmer, van Erk, and Verleg (2003) and makes use of a so-called Flashlight Model for pointing. The Flashlight Model…
River Devices to Recover Energy with Advanced Materials (River DREAM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMahon, Daniel P.
2013-07-03
The purpose of this project is to develop a generator called a Galloping Hydroelectric Energy Extraction Device (GHEED). It uses a galloping prism to convert water flow into linear motion, which is converted into electricity via a dielectric elastomer generator (DEG). The galloping mechanism and the DEG are combined to create a system to effectively generate electricity. This project has three research objectives: (1) oscillator development and design: characterize galloping behavior, evaluate the effect of control-surface shape change on oscillator performance, and demonstrate shape change with water flow change; (2) dielectric elastomer generator (DEG) characterization and modeling: characterize and model the performance of the DEG based on the oscillator design; (3) GHEED system modeling and integration: create numerical models for construction of a system performance model and define operating capabilities for this approach. Accomplishing these three objectives will result in the creation of a model that can be used to fully define the operating parameters and performance capabilities of a generator based on the GHEED design. This information will be used in the next phase of product development, the creation of an integrated laboratory-scale generator to confirm model predictions.
NASA Technical Reports Server (NTRS)
Parrott, Edith L.; Weiland, Karen J.
2017-01-01
The ability of systems engineers to use model-based systems engineering (MBSE) to generate self-consistent, up-to-date systems engineering products for project life-cycle and technical reviews is important for the continued and accelerated acceptance of MBSE. Currently, many review products are generated using labor-intensive, error-prone approaches based on documents, spreadsheets, and chart sets; a promised benefit of MBSE is that users will experience reductions in inconsistencies and errors. This work examines features of SysML that can be used to generate systems engineering products. Model elements, relationships, tables, and diagrams are identified for a large number of the typical systems engineering artifacts. A SysML system model can contain and generate most systems engineering products to a significant extent, and this paper provides a guide on how to use MBSE to generate products for project life-cycle and technical reviews. The use of MBSE can reduce the schedule impact usually experienced for review preparation, as in many cases the review products can be auto-generated directly from the system model. These approaches are useful to systems engineers, project managers, review board members, and other key project stakeholders.
NASA Astrophysics Data System (ADS)
Kunz, Robert; Haworth, Daniel; Dogan, Gulkiz; Kriete, Andres
2006-11-01
Three-dimensional, unsteady simulations of multiphase flow, gas exchange, and particle/aerosol deposition in the human lung are reported. Surface data for human tracheo-bronchial trees are derived from CT scans and are used to generate three-dimensional CFD meshes for the first several generations of branching. One-dimensional meshes for the remaining generations down to the respiratory units are generated using branching algorithms based on those that have been proposed in the literature, and a zero-dimensional respiratory unit (pulmonary acinus) model is attached at the end of each terminal bronchiole. The process is automated to facilitate rapid model generation. The model is exercised through multiple breathing cycles to compute the spatial and temporal variations in flow, gas exchange, and particle/aerosol deposition. The depth of the 3D/1D transition (at branching generation n) is a key parameter and can be varied. High-fidelity models (large n) are run on massively parallel distributed-memory clusters, and are used to generate physical insight and to calibrate/validate the 1D and 0D models. Suitably validated lower-order models (small n) can be run on single-processor PCs with run times that allow model-based clinical intervention for individual patients.
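A minimal sketch of the automated 1-D tree generation step is given below: recursive symmetric bifurcation down to a terminal generation, with a 0-D acinus placeholder attached at each terminal bronchiole. The diameter ratio, symmetric branching, and depths are illustrative assumptions, not the authors' calibrated algorithm.

```python
# Sketch: generating a 1-D airway tree by recursive branching.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Airway:
    generation: int
    diameter: float                      # mm
    children: List["Airway"] = field(default_factory=list)
    acinus: Optional[str] = None         # placeholder for a 0-D respiratory unit

def build_tree(generation, diameter, n_max, ratio=0.79):
    """Recursively bifurcate until generation n_max, then attach acini."""
    node = Airway(generation, diameter)
    if generation == n_max:
        node.acinus = "acinus-model"     # 0-D pulmonary acinus attached here
        return node
    for _ in range(2):                   # symmetric bifurcation (assumption)
        node.children.append(build_tree(generation + 1, diameter * ratio, n_max))
    return node

# 3-D CFD covers generations 0..n; the 1-D tree continues from generation n+1.
one_d_root = build_tree(generation=4, diameter=3.0, n_max=16)
```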
A theory of the n-i-p silicon solar cell
NASA Technical Reports Server (NTRS)
Goradia, C.; Weinberg, I.; Baraona, C.
1981-01-01
A computer model has been developed, based on an analytical theory of the high-base-resistivity BSF n⁺-π-p⁺ or p⁺-ν-n⁺ silicon solar cell. The model makes very few assumptions and accounts for nonuniform optical generation, generation and recombination in the junction space-charge region, and bandgap narrowing in the heavily doped regions. The paper presents calculated results based on this model and compares them to available experimental data. Also discussed is radiation damage in high-base-resistivity n⁺-π-p⁺ space solar cells.
A Point Rainfall Generator With Internal Storm Structure
NASA Astrophysics Data System (ADS)
Marien, J. L.; Vandewiele, G. L.
1986-04-01
A point rainfall generator is a probabilistic model for the time series of rainfall as observed at one geographical point. The main purpose of such a model is to generate long synthetic sequences of rainfall for simulation studies. The present generator is a continuous-time model based on 13.5 years of 10-min point rainfalls observed in Belgium and digitized with a resolution of 0.1 mm. The generator attempts to model, as accurately as possible, all features of the rainfall time series which are important for flood studies. The original aspects of the model are, on the one hand, the way in which storms are defined and, on the other hand, the theoretical model for the internal storm characteristics. The storm definition has the advantage that the important characteristics of successive storms are fully independent and very precisely modelled, even on time bases as small as 10 min. The model of the internal storm characteristics has a strong theoretical structure, which better justifies extrapolating the model to severe storms, for which data are very sparse. This can be important when using the model to simulate severe flood events.
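For orientation, here is a minimal continuous-time point rainfall generator with independent storms. Exponential inter-storm times and gamma storm depths are common textbook choices assumed here for illustration; the paper's internal-storm model is considerably richer.

```python
# Sketch: a minimal stochastic point rainfall generator (toy parameters).
import numpy as np

rng = np.random.default_rng(0)

def generate_storms(t_end_h, mean_gap_h=30.0, mean_dur_h=4.0,
                    depth_shape=0.7, depth_scale_mm=8.0):
    """Return (start, duration, depth) tuples for storms up to t_end_h hours."""
    t = 0.0
    storms = []
    while True:
        t += rng.exponential(mean_gap_h)          # dry spell before next storm
        dur = rng.exponential(mean_dur_h)         # storm duration
        if t + dur > t_end_h:
            break
        depth = rng.gamma(depth_shape, depth_scale_mm)  # total storm depth, mm
        storms.append((t, dur, depth))
        t += dur
    return storms

synthetic = generate_storms(t_end_h=24 * 365 * 10)   # ten years of storms
```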
Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks
NASA Astrophysics Data System (ADS)
Karpatne, A.; Kumar, V.
2017-12-01
Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data such as computer vision, offer huge potential for modeling the dynamics of physical processes that have been traditionally studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g., groundwater flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology, where the number of data samples is small (relative to Internet-scale applications such as image recognition, where machine learning methods have found great success) and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.
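The following sketch shows one of the hybrid schemes named above, a generator conditioned on physics-based model output alongside the usual noise vector. Network sizes, the stand-in tensors, and the training loop are illustrative assumptions, not the authors' implementation.

```python
# Sketch: a GAN generator conditioned on physics-based model output (PyTorch).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, n_phys, n_noise, n_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_phys + n_noise, 64), nn.ReLU(),
            nn.Linear(64, n_out))

    def forward(self, phys_out, z):
        # Physics-model output enters the generator alongside the noise vector.
        return self.net(torch.cat([phys_out, z], dim=1))

class Discriminator(nn.Module):
    def __init__(self, n_in):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

G, D = Generator(n_phys=8, n_noise=4, n_out=8), Discriminator(n_in=8)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

phys = torch.randn(32, 8)     # stand-in for physics-based model output
obs = torch.randn(32, 8)      # stand-in for observed system states
for _ in range(100):
    z = torch.randn(32, 4)
    fake = G(phys, z)
    # Discriminator step: real observations vs generated states.
    loss_d = bce(D(obs), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: fool the discriminator.
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```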
Modeling and Simulation of U-tube Steam Generator
NASA Astrophysics Data System (ADS)
Zhang, Mingming; Fu, Zhongguang; Li, Jinyao; Wang, Mingfei
2018-03-01
This article focuses on the modeling and simulation of a U-tube natural-circulation steam generator. The research is based on the simuworks system simulation software platform. By analyzing the structural characteristics and operating principle of the U-tube steam generator, the model is built from 14 control volumes, including the primary side, secondary side, downcomer channel, steam plenum, etc. The model is built entirely on conservation laws and is used to run simulation tests. The results show that the model is capable of properly simulating the dynamic response of the U-tube steam generator.
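To illustrate the control-volume idea at its simplest, the sketch below integrates a two-volume lumped energy balance (primary and secondary sides coupled by a constant heat-transfer coefficient). All parameter values are illustrative assumptions; the paper's 14-volume model resolves far more structure.

```python
# Sketch: two-volume lumped energy balance for a steam generator (toy values).
from scipy.integrate import solve_ivp

UA = 5.0e6      # overall heat transfer coefficient x area, W/K (assumed)
m_cp_p = 2.0e7  # primary-side thermal capacity, J/K (assumed)
m_cp_s = 1.5e7  # secondary-side thermal capacity, J/K (assumed)
Q_in = 3.0e8    # heat delivered to the primary side, W (assumed)
Q_out = 2.9e8   # heat removed with steam from the secondary side, W (assumed)

def rhs(t, T):
    Tp, Ts = T
    q = UA * (Tp - Ts)                 # primary-to-secondary heat transfer
    return [(Q_in - q) / m_cp_p,       # primary-side energy balance
            (q - Q_out) / m_cp_s]      # secondary-side energy balance

sol = solve_ivp(rhs, (0, 2000), [583.0, 553.0], max_step=1.0)  # K, ~PWR levels
```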
Jiang, Guoqian; Evans, Julie; Endle, Cory M; Solbrig, Harold R; Chute, Christopher G
2016-01-01
The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in the standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model requires new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study is to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. We developed a template generation and visualization system based on an open source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map" based tool for the visualization of generated domain-specific templates. We also developed a RESTful Web Service informed by the Clinical Information Modeling Initiative (CIMI) reference model for access to the generated domain-specific templates. A preliminary usability study was performed, and all reviewers (n = 3) gave very positive responses to the evaluation questions in terms of usability and the capability of meeting the system requirements (with an average score of 4.6). Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models in the intersection of health care and clinical research.
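As a small illustration of the RDF-store-plus-template pattern, the sketch below stores a few model elements as triples and pulls a flat template out with SPARQL. The namespace, class, and property names are invented for illustration; they are not the BRIDG or CIMI vocabularies.

```python
# Sketch: RDF triples for model elements and a SPARQL query to emit a template.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/bridg-demo#")
g = Graph()

# A domain class with two attributes typed by ISO-21090-style datatypes.
g.add((EX.StudySubject, RDF.type, EX.DomainClass))
g.add((EX.StudySubject, EX.hasAttribute, EX.birthDate))
g.add((EX.birthDate, EX.datatype, Literal("TS")))
g.add((EX.StudySubject, EX.hasAttribute, EX.statusCode))
g.add((EX.statusCode, EX.datatype, Literal("CD")))

# Generate a flat template for the domain class.
q = """
SELECT ?attr ?dt WHERE {
  ?cls <http://example.org/bridg-demo#hasAttribute> ?attr .
  ?attr <http://example.org/bridg-demo#datatype> ?dt .
}"""
for attr, dt in g.query(q):
    print(f"field: {attr.split('#')[-1]}  datatype: {dt}")
```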
NASA Astrophysics Data System (ADS)
Li, Chunguang; Maini, Philip K.
2005-10-01
The Penna bit-string model successfully encompasses many phenomena of population evolution, including inheritance, mutation, evolution, and aging. If we consider social interactions among individuals in the Penna model, the population will form a complex network. In this paper, we first modify the Verhulst factor to control only the birth rate, and introduce activity-based preferential reproduction of offspring in the Penna model. The social interactions among individuals are generated by both inheritance and activity-based preferential increase. Then we study the properties of the complex network generated by the modified Penna model. We find that the resulting complex network has a small-world effect and the assortative mixing property.
Multi-Fidelity Framework for Modeling Combustion Instability
2016-07-27
… generated from the reduced-domain dataset. Evaluations of the framework are performed based on simplified test problems for a model rocket combustor, showing …
Generational sex work and HIV risk among Indigenous women in a street-based urban Canadian setting.
Bingham, Brittany; Leo, Diane; Zhang, Ruth; Montaner, Julio; Shannon, Kate
2014-01-01
In Canada, Indigenous women are over-represented among new HIV infections and street-based sex workers. Scholars suggest that Aboriginal women's HIV risk stems from intergenerational effects of colonisation and racial policies. This research examined generational sex work involvement among Aboriginal and non-Aboriginal women and the effect on risk for HIV acquisition. The sample included 225 women in street-based sex work and enrolled in a community-based prospective cohort, in partnership with local sex work and Aboriginal community partners. Bivariate and multivariate logistic regression modeled an independent relationship between Aboriginal ancestry and generational sex work, and the impact of generational sex work on HIV infection among Aboriginal sex workers. Aboriginal women (48%) were more likely to be HIV-positive, with 34% living with HIV compared to 24% of non-Aboriginal women. In a multivariate logistic regression model, Aboriginal women remained three times more likely to experience generational sex work (AOR: 2.97; 95% CI: 1.5, 5.8). Generational sex work was significantly associated with HIV (AOR = 3.01, 95% CI: 1.67-4.58) in a confounder model restricted to Aboriginal women. The high prevalence of generational sex work among Aboriginal women and the three-fold increased risk for HIV infection are concerning. Policy reforms and community-based, culturally safe and trauma-informed HIV-prevention initiatives are required for Indigenous sex workers. PMID:24654881
Knowledge-Based Planning Model for Courses of Action Generation,
1986-04-07
Knowledge-Based Planning Model for Courses of Action Generation (U). Army War College, Carlisle Barracks, PA; Colonel D. R. Collins and Lieutenant Colonel (P) T. A. Baucum; 7 April 1986. This document may not be released for open publication until it has been cleared by the appropriate military service or government agency.
UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces
NASA Technical Reports Server (NTRS)
Shiffman, Smadar; Degani, Asaf; Heymann, Michael
2004-01-01
In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
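A toy sketch of an adequacy check of this kind follows, under the assumption that adequacy can be phrased as: every machine transition on a user-visible event must be mirrored by the interface model through a state abstraction. The machines and abstraction map are invented examples, not the tool's actual formalism.

```python
# Sketch: checking that an interface model faithfully abstracts a machine model.
def transitions_ok(machine, interface, abstraction):
    """machine/interface: dict[state][event] -> next state.
    abstraction: maps each machine state to an interface state."""
    for m_state, events in machine.items():
        for event, m_next in events.items():
            i_state = abstraction[m_state]
            i_next = interface.get(i_state, {}).get(event)
            if i_next != abstraction[m_next]:
                return False, (m_state, event)   # interface would mislead the user
    return True, None

machine = {"armed_low": {"toggle": "armed_high"},
           "armed_high": {"toggle": "armed_low"}}
interface = {"armed": {"toggle": "armed"}}
abstraction = {"armed_low": "armed", "armed_high": "armed"}
print(transitions_ok(machine, interface, abstraction))  # (True, None)
```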
Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.
Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo
2013-01-01
To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operation characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code is then programmed based on this calculation model. As an example, a two-stage flux trapping generator is simulated using this computer code. Good agreement is achieved when comparing the simulation results with the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.
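The core of such a fast model is a lumped circuit equation with a time-varying generator inductance; a minimal sketch is below. The linear inductance collapse, the flux-conservation coefficient, and the load values are assumptions for illustration, not the paper's calibrated model.

```python
# Sketch: lumped circuit with collapsing inductance, d/dt[(L_gen+L_load) i] = -R i.
from scipy.integrate import solve_ivp

L_load = 50e-9          # load inductance, H (assumed)
R = 1e-3                # total dc resistance, ohm (assumed)
alpha = 0.8             # flux conservation coefficient (assumed, <1 for losses)
t_run = 20e-6           # generator run time, s (assumed)

def L_gen(t):
    """Generator inductance collapsing linearly from 2 uH towards zero."""
    return 2e-6 * max(0.0, 1.0 - t / t_run)

def dLdt(t):
    return -2e-6 / t_run if t < t_run else 0.0

def rhs(t, y):
    (i,) = y
    L_tot = L_gen(t) + L_load
    # alpha scales the inductance-collapse drive term to model flux losses.
    return [-(R + alpha * dLdt(t)) * i / L_tot]

sol = solve_ivp(rhs, (0, t_run), [10e3], max_step=t_run / 2000)  # 10 kA seed
print(f"current gain ~ {sol.y[0, -1] / 10e3:.1f}x")
```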
Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators
NASA Astrophysics Data System (ADS)
Nesarajah, Marco; Frey, Georg
This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEG), and the validation of the model by means of a test bench. TEGs are capable of improving the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model, any TEG can be described and simulated given the material properties and the physical dimensions. This model was then extended with the surrounding components into a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.
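For intuition, here is a quasi-static electrical sketch of a TEG module: open-circuit voltage from the Seebeck effect and power delivered into a load. The material and geometry values are illustrative, not the validated Modelica model.

```python
# Sketch: quasi-static TEG electrical model (toy parameters).
def teg_power(T_hot, T_cold, seebeck_V_per_K=0.05, R_internal=2.0, R_load=2.0):
    """Return (voltage across load, electrical power) for one TEG module."""
    v_oc = seebeck_V_per_K * (T_hot - T_cold)   # open-circuit Seebeck voltage
    i = v_oc / (R_internal + R_load)
    v_load = i * R_load
    return v_load, v_load * i

# Oil-fired heating system: flue side ~420 K, cooled side ~330 K (assumed).
v, p = teg_power(420.0, 330.0)
print(f"{v:.2f} V, {p:.2f} W")   # a matched load extracts maximum power
```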
Validation of two (parametric vs non-parametric) daily weather generators
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Skalak, P.
2015-12-01
As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most popular downscaling methods capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run in various time steps and for multiple weather variables (the generators reproduce the correlations among variables), and (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maximum durations of hot/cold/dry/wet spells); and (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series from several European stations available from the ECA&D database. Acknowledgements: The weather generator is developed and validated within the frame of the projects WG4VALUE (sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).
A comprehensive study on urban true orthorectification
Zhou, G.; Chen, W.; Kelmelis, J.A.; Zhang, Dongxiao
2005-01-01
To provide some advanced technical bases (algorithms and procedures) and experience needed for national large-scale digital orthophoto generation and revision of the Standards for National Large-Scale City Digital Orthophoto in the National Digital Orthophoto Program (NDOP), this paper presents a comprehensive study on theories, algorithms, and methods of large-scale urban orthoimage generation. The procedures of orthorectification for digital terrain model (DTM)-based and digital building model (DBM)-based orthoimage generation, and their merging for true orthoimage generation, are discussed in detail. A method of compensating for building occlusions using photogrammetric geometry is developed. The data structure needed to model urban buildings for accurately generating urban orthoimages is presented. Shadow detection and removal, seamline optimization for automatic mosaicking, and the radiometric balancing of neighboring images are discussed. Street visibility analysis, including the relationship between flight height, building height, street width, and the relative location of the street to the imaging center, is analyzed for complete true orthoimage generation. The experimental results demonstrated that our method can effectively and correctly orthorectify the displacements caused by terrain and buildings in urban large-scale aerial images. © 2005 IEEE.
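As a schematic of the DTM-based orthorectification step, the sketch below projects one output ground cell through a simple pinhole camera into the source image. The camera model and numbers are invented for illustration; production systems use full collinearity equations with calibrated interior and exterior orientation.

```python
# Sketch: mapping one ground point (with DTM height) to source-image pixels.
import numpy as np

K = np.array([[1000.0, 0.0, 500.0],        # focal length / principal point, px
              [0.0, 1000.0, 500.0],
              [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])              # nadir-looking: camera z-axis down
C = np.array([2000.0, 2000.0, 1500.0])      # camera centre in ground coords, m

def ground_to_image(x, y, dtm_height):
    """Map a ground point (x, y, z from the DTM) to image pixel coordinates."""
    p_cam = R @ (np.array([x, y, dtm_height]) - C)
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# For each orthoimage cell: look up the DTM height, project, and resample the
# aerial image at the returned pixel (nearest neighbour would suffice here).
u, v = ground_to_image(2100.0, 1950.0, dtm_height=35.0)
```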
Model-Based GUI Testing Using Uppaal at Novo Nordisk
NASA Astrophysics Data System (ADS)
Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne
This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML state machine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
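To show the shape of such test generation, the sketch below derives an edge-coverage suite from a state machine by breadth-first search and emits each path as a flat script. The toy GUI model and script syntax are illustrative assumptions, not the medical device's UML model or the tool's output format.

```python
# Sketch: edge-coverage test suite generation from a state-machine model.
from collections import deque

model = {  # state -> {event: next_state}
    "home": {"menu": "menu"},
    "menu": {"up": "home", "select": "dose"},
    "dose": {"plus": "dose", "confirm": "home"},
}

def edge_coverage_suite(model, init="home"):
    uncovered = {(s, e) for s, events in model.items() for e in events}
    suite = []
    while uncovered:
        frontier, seen, found = deque([(init, [])]), {init}, None
        while frontier and found is None:
            state, path = frontier.popleft()
            for event, nxt in model[state].items():
                step = path + [(state, event)]
                if (state, event) in uncovered:
                    found = step
                    break
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, step))
        if found is None:
            break                      # remaining edges unreachable from init
        uncovered -= set(found)        # a test covers every edge on its path
        suite.append([event for _, event in found])
    return suite

# Convert each abstract test into a flat script-like line.
for i, test in enumerate(edge_coverage_suite(model)):
    print(f"test_{i}: " + "; ".join(f"press('{e}')" for e in test))
```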
Integrated Mode Choice, Small Aircraft Demand, and Airport Operations Model User's Guide
NASA Technical Reports Server (NTRS)
Yackovetsky, Robert E. (Technical Monitor); Dollyhigh, Samuel M.
2004-01-01
A mode choice model that generates on-demand air travel forecasts at a set of GA airports based on changes in economic characteristics, vehicle performance characteristics such as speed and cost, and demographic trends has been integrated with a model to generate itinerant aircraft operations by airplane category at a set of 3227 airports. Numerous intermediate outputs can be generated, such as the number of additional trips diverted from automobiles and scheduled air service by the improved performance and cost of on-demand air vehicles. The total number of transported passenger miles that are diverted is also available. From these results, the number of new aircraft required to service the increased demand can be calculated. Output from the models discussed is in a format suitable for generating the origin and destination traffic flow between the 3227 airports based on solutions to a gravity model.
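For reference, the sketch below distributes forecast trips between airports with a simple gravity model of the kind referenced above. The impedance exponent and the toy airport data are illustrative assumptions.

```python
# Sketch: gravity-model trip distribution, T_ij ~ O_i * O_j / d_ij^beta.
import numpy as np

ops = np.array([1200.0, 800.0, 300.0])           # forecast operations per airport
dist = np.array([[0.0, 250.0, 600.0],             # inter-airport distances, miles
                 [250.0, 0.0, 400.0],
                 [600.0, 400.0, 0.0]])

def gravity_flows(ops, dist, beta=2.0):
    """Rows are scaled so each origin's flows sum to its forecast operations."""
    with np.errstate(divide="ignore"):
        attract = np.where(dist > 0, np.outer(ops, ops) / dist**beta, 0.0)
    return attract * (ops / attract.sum(axis=1))[:, None]  # row-normalise

flows = gravity_flows(ops, dist)   # flows[i, j]: trips from airport i to j
```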
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is found to be different from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse-resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equation and the Benard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse-grid measurements and the fine-grid model solution, is added to the model equations to constrain the model's large-scale variability by available measurements. Soil moisture fields generated at a fine resolution by a physically-based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse-resolution observations. This enables nudging of the model outputs towards values that honor the coarse-resolution dynamics while still being generated at the fine scale. Results show that the approach is a feasible way to generate fine-scale soil moisture fields across large extents based on coarse-scale observations. This approach is likely applicable to generating fine- and intermediate-resolution soil moisture fields conditioned on the radiometer-based, coarse-resolution products from remote sensing satellites.
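The nudging mechanism can be shown in a few lines: add to the model equations a term proportional to the misfit between coarse-grid interpolants of the observations and of the model state. The 1-D diffusion stand-in, static target field, and nudging gain below are illustrative assumptions, not the HYDRUS setup.

```python
# Sketch: CDA nudging of a fine-grid model towards coarse-grid observations.
import numpy as np

nx, dx, dt, D, mu = 200, 1.0, 0.1, 1.0, 0.5
x = np.arange(nx) * dx
coarse_idx = np.arange(0, nx, 20)              # coarse observation locations

def interp_coarse(field):
    """I_h(field): interpolant built only from coarse-grid samples."""
    return np.interp(x, x[coarse_idx], field[coarse_idx])

truth = np.sin(2 * np.pi * x / nx) + 0.3 * np.sin(10 * np.pi * x / nx)
u = np.zeros(nx)                               # model starts from the wrong state
for _ in range(2000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    nudge = mu * (interp_coarse(truth) - interp_coarse(u))
    u = u + dt * (D * lap + nudge)             # model dynamics + CDA nudging term
```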
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
1997-01-01
The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
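To illustrate the reduction step named above, here is a square-root balanced truncation of a large stable linear model. The random test system stands in for a CFD-derived state-space model; this is a textbook algorithm, not the specific techniques developed under the grant.

```python
# Sketch: balanced truncation of a 50-state linear model down to 6 states.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(1)
n, k = 50, 6                          # full and reduced orders (assumed)
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # make stable
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Gramians from the Lyapunov equations A P + P A' = -B B', A' Q + Q A = -C' C.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)
P += 1e-10 * np.eye(n)                # numerical safeguard for the
Q += 1e-10 * np.eye(n)                # Cholesky factorisations below

# Square-root balanced truncation.
S = cholesky(P, lower=True)
L = cholesky(Q, lower=True)
U, sig, Vt = svd(L.T @ S)
T = S @ Vt.T[:, :k] / np.sqrt(sig[:k])     # right projection
W = L @ U[:, :k] / np.sqrt(sig[:k])        # left projection, W.T @ T = I_k
Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T   # 6-state model approximating 50
```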
Automated knowledge generation
NASA Technical Reports Server (NTRS)
Myler, Harley R.; Gonzalez, Avelino J.
1988-01-01
The general objectives of the NASA/UCF Automated Knowledge Generation Project were the development of an intelligent software system that could access CAD design data bases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is the diagnosis of the process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed, based on an object-oriented language (Flavors).
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high capacity data manipulation required by the most complex real time models.
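A tiny sketch of the knowledge-network idea follows: procedural assignments are decomposed into rules (output, inputs, function) and evaluated reactively, firing whenever inputs become known. The toy model and evaluation loop are illustrative assumptions, not the compiler described above.

```python
# Sketch: a rule network built from decomposed procedural calculations.
# Each rule: output variable -> (list of input variables, function).
rules = {
    "area": (["radius"], lambda r: 3.14159 * r * r),
    "mass": (["area", "thickness", "density"], lambda a, t, d: a * t * d),
}

def evaluate(values, rules):
    """Fire rules whose inputs are known until no more rules can fire."""
    values = dict(values)
    fired = True
    while fired:
        fired = False
        for out, (ins, fn) in rules.items():
            if out not in values and all(i in values for i in ins):
                values[out] = fn(*[values[i] for i in ins])
                fired = True
    return values

state = evaluate({"radius": 0.5, "thickness": 0.01, "density": 2700.0}, rules)
print(state["mass"])
```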
Alfred, Michael; Chung, Christopher A
2012-12-01
This paper describes a second-generation Simulator for Engineering Ethics Education. Details describing the first-generation activities of this overall effort are published in Chung and Alfred (Sci Eng Ethics 15:189-199, 2009). The second-generation research effort represents a major development in the interactive simulator educational approach. As with the first-generation effort, the simulator places students in first-person perspective scenarios involving different types of ethical situations. Students must still gather data, assess the situation, and make decisions. The approach still requires students to develop their own ability to identify and respond to ethical engineering situations. However, whereas the generation-one effort involved the use of a dogmatic model based on the National Society of Professional Engineers' Code of Ethics, the new generation-two model is based on a mathematical model of the actual experiences of engineers involved in ethical situations. This approach also allows the use of feedback in the form of decision effectiveness and professional career impact. Statistical comparisons indicate a 59 percent increase in overall knowledge and a 19 percent improvement in teaching effectiveness over an Internet engineering-ethics resource-based approach.
An ontology model for nursing narratives with natural language generation technology.
Min, Yul Ha; Park, Hyeoun-Ae; Jeon, Eunjoo; Lee, Joo Yun; Jo, Soo Jung
2013-01-01
The purpose of this study was to develop an ontology model to generate nursing narratives as natural as human language from the entity-attribute-value triplets of a detailed clinical model using natural language generation technology. The model was based on the types of information and the time at which the information is documented along the nursing process. The types of information are data characterizing the patient status, inferences made by the nurse from the patient data, and nursing actions selected by the nurse to change the patient status. This information was linked to the nursing process based on the time of documentation. We describe a case study illustrating the application of this model in an acute-care setting. The proposed model provides a strategy for designing an electronic nursing record system.
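As a minimal illustration of triplet-to-narrative generation, the sketch below fills phase-specific templates from entity-attribute-value data. The templates and triplets are invented examples, not the study's ontology or NLG engine.

```python
# Sketch: turning EAV triplets into a nursing narrative with phase templates.
templates = {
    "assessment": "The patient's {attribute} was {value}.",
    "diagnosis": "The nurse identified {value} based on the {attribute}.",
    "intervention": "The nurse performed {value} for the {attribute}.",
}

triplets = [
    ("assessment", {"attribute": "pain score", "value": "7/10"}),
    ("diagnosis", {"attribute": "pain score", "value": "acute pain"}),
    ("intervention", {"attribute": "acute pain", "value": "analgesic administration"}),
]

narrative = " ".join(templates[phase].format(**slots) for phase, slots in triplets)
print(narrative)
```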
Gray correlation analysis and prediction models of living refuse generation in Shanghai city.
Liu, Gousheng; Yu, Jianguo
2007-01-01
A better understanding of the factors that affect the generation of municipal living refuse (MLF) and the accurate prediction of its generation are crucial for municipal planning projects and city management. Up to now, most of the design efforts have been based on a rough prediction of MLF without any actual support. In this paper, based on published data of socioeconomic variables and MLF generation from 1990 to 2003 in the city of Shanghai, the main factors that affect MLF generation have been quantitatively studied using the method of gray correlation coefficient. Several gray models, such as GM(1,1), GIM(1), GPPM(1) and GLPM(1), have been studied, and predicted results are verified with subsequent residual test. Results show that, among the selected seven factors, consumption of gas, water and electricity are the largest three factors affecting MLF generation, and GLPM(1) is the optimized model to predict MLF generation. Through this model, the predicted MLF generation in 2010 in Shanghai will be 7.65 million tons. The methods and results developed in this paper can provide valuable information for MLF management and related municipal planning projects.
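For readers unfamiliar with gray models, here is the classic GM(1,1) procedure referenced above: accumulate the series, fit dx/dt + ax = b by least squares, and forecast from the restored series. The sample series is invented; it is not the Shanghai MLF data.

```python
# Sketch: GM(1,1) gray prediction on a toy annual refuse series.
import numpy as np

x0 = np.array([5.2, 5.6, 5.9, 6.3, 6.8, 7.1])   # annual refuse, million tons
x1 = np.cumsum(x0)                               # accumulated generating series

# Background values z and least-squares estimate of (a, b).
z = 0.5 * (x1[:-1] + x1[1:])
Bm = np.column_stack([-z, np.ones(len(z))])
a, b = np.linalg.lstsq(Bm, x0[1:], rcond=None)[0]

def predict(k):
    """Restored forecast x0_hat(k) for k = 1, 2, ... (k = 0 is the first datum)."""
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev

future = [predict(k) for k in range(len(x0), len(x0) + 3)]  # next 3 years
```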
Semantic attributes based texture generation
NASA Astrophysics Data System (ADS)
Chi, Huifang; Gan, Yanhai; Qi, Lin; Dong, Junyu; Madessa, Amanuel Hirpa
2018-04-01
Semantic attributes are commonly used for texture description. They can be used to describe the information of a texture, such as patterns, textons, distributions, brightness, and so on. Generally speaking, semantic attributes are more concrete descriptors than perceptual features. Therefore, it is practical to generate texture images from semantic attributes. In this paper, we propose to generate high-quality texture images from semantic attributes. Over the last two decades, several works have been done on texture synthesis and generation, most of them focusing on example-based texture synthesis and procedural texture generation. Semantic-attribute-based texture generation still deserves more attention. Gan et al. proposed a useful joint model for perception-driven texture generation. However, perceptual features are nonobjective spatial statistics used by humans to distinguish different textures in pre-attentive situations. To give more descriptive information about texture appearance, semantic attributes, which are more in line with human description habits, are desired. In this paper, we use a sigmoid cross-entropy loss in an auxiliary model to provide enough information for the generator. Consequently, the discriminator is released from the relatively intractable mission of figuring out the joint distribution of condition vectors and samples. To demonstrate the validity of our method, we compare our method to Gan et al.'s method on generating textures in experiments on PTD and DTD. All experimental results show that our model can generate textures from semantic attributes.
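A compact sketch of the auxiliary-loss idea follows: an auxiliary head predicts each sample's semantic attribute vector with sigmoid cross-entropy, so the discriminator itself need not model the joint distribution of attributes and images. Shapes and networks are illustrative assumptions, not the paper's architecture.

```python
# Sketch: auxiliary sigmoid cross-entropy over semantic attributes (PyTorch).
import torch
import torch.nn as nn

n_attr = 10                               # e.g. "banded", "bumpy", "veined", ...
aux_head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, n_attr))
bce = nn.BCEWithLogitsLoss()              # sigmoid cross-entropy over attributes

fake_images = torch.randn(8, 1, 32, 32)   # stand-in generator output
target_attrs = torch.randint(0, 2, (8, n_attr)).float()  # conditioning vector

# Added to the generator loss: generated textures must carry their attributes.
aux_loss = bce(aux_head(fake_images), target_attrs)
```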
Error Generation in CATS-Based Agents
NASA Technical Reports Server (NTRS)
Callantine, Todd
2003-01-01
This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.
Flapping wing applied to wind generators
NASA Astrophysics Data System (ADS)
Colidiuc, Alexandra; Galetuse, Stelian; Suatean, Bogdan
2012-11-01
New international conditions for the distribution of energy sources and continuously increasing energy consumption call for alternative resources that keep the environment clean. This paper offers a new approach to a wind generator based on a theoretical aerodynamic model. The new model was used to test the effect of substituting a bird airfoil for a conventional wind-generator airfoil. The aim is to calculate the efficiency of the new wind-generator model. A representative direction for using renewable energy is the transformation of wind energy into electrical energy with the help of wind turbines; the development of such systems leads to new solutions with high efficiency and reduced costs that are suited to the implementation conditions.
NASA Astrophysics Data System (ADS)
Żukowicz, Marek; Markiewicz, Michał
2016-09-01
The aim of the article is to present a mathematical definition of the object model known in computer science as TreeList, and to show how this model can be applied to design an evolutionary algorithm whose purpose is to generate structures based on this object. The first chapter introduces the reader to the problem of presenting data using the TreeList object. The second chapter describes the problem of testing data structures based on TreeList. The third shows a mathematical model of the TreeList object and the parameters used in determining the utility of structures created through this model and in the evolutionary strategy that generates these structures for testing purposes. The last chapter provides a brief summary and plans for future research related to the algorithm presented in the article.
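For concreteness, here is a minimal TreeList-style structure (a tree whose nodes carry lists of items) together with a random generator of the kind an evolutionary strategy could mutate. This representation is an assumption based on the description above, not the article's formal definition.

```python
# Sketch: a hypothetical TreeList structure and random instance generator.
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class TreeListNode:
    items: List[str]
    children: List["TreeListNode"] = field(default_factory=list)

def random_treelist(depth, max_children=3, max_items=4):
    """Generate a random TreeList instance for test-data purposes."""
    node = TreeListNode(items=[f"item{random.randint(0, 99)}"
                               for _ in range(random.randint(1, max_items))])
    if depth > 0:
        for _ in range(random.randint(0, max_children)):
            node.children.append(random_treelist(depth - 1, max_children, max_items))
    return node

candidate = random_treelist(depth=3)   # one individual in the evolving population
```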
Evaluation of Generation Alternation Models in Evolutionary Robotics
NASA Astrophysics Data System (ADS)
Oiso, Masashi; Matsumura, Yoshiyuki; Yasuda, Toshiyuki; Ohkura, Kazuhiro
For efficient implementation of Evolutionary Algorithms (EA) in a desktop grid computing environment, we propose a new generation alternation model called Grid-Oriented-Deletion (GOD), based on comparison with the conventional techniques. In previous research, generation alternation models have generally been evaluated using test functions; however, their exploration performance on real problems such as Evolutionary Robotics (ER) has not yet been made clear. Therefore we investigate the relationship between the exploration performance of an EA on an ER problem and its generation alternation model. We applied four generation alternation models to an Evolutionary Multi-Robotics (EMR) package-pushing problem to investigate their exploration performance. The results show that GOD is more effective than the other conventional models.
Semantics-Based Composition of Integrated Cardiomyocyte Models Motivated by Real-World Use Cases.
Neal, Maxwell L; Carlson, Brian E; Thompson, Christopher T; James, Ryan C; Kim, Karam G; Tran, Kenneth; Crampin, Edmund J; Cook, Daniel L; Gennari, John H
2015-01-01
Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen's semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the "Pandit-Hinch-Niederer" (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach.
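A toy sketch of the semantics-based merging idea follows: variables from two component models are unified when their machine-readable annotations point at the same biological entity. The URIs and variable names are invented; this is not SemGen's actual merging engine.

```python
# Sketch: unify model variables across components by shared annotation.
model_a = {"V_m": "ont:MembranePotential", "Ca_i": "ont:CytosolicCalcium"}
model_b = {"Vm": "ont:MembranePotential", "T_active": "ont:ActiveTension"}

def merge(models):
    """Group variables across models by shared biological annotation."""
    merged = {}
    for name, vars_ in models.items():
        for var, annot in vars_.items():
            merged.setdefault(annot, []).append(f"{name}.{var}")
    return merged

for annot, vars_ in merge({"pandit": model_a, "niederer": model_b}).items():
    tag = "UNIFIED" if len(vars_) > 1 else "kept"
    print(f"{annot}: {vars_} ({tag})")
```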
An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2005-01-01
An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.
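As a rough illustration of how a vortex strength might be estimated from vane geometry and local flow, the hedged sketch below uses a thin-airfoil Kutta-Joukowski estimate with an empirical knock-down factor. The formula and constant are assumptions for illustration, not the Wind-US model's actual correlation.

```python
# Sketch: circulation of the vortex trailed by one vane (toy estimate).
import math

def vane_circulation(u_local, chord, alpha_deg, k=0.6):
    """Peak circulation (m^2/s) of the vortex trailed by one vane."""
    alpha = math.radians(alpha_deg)
    # Kutta-Joukowski with thin-airfoil lift slope: Gamma = 0.5 * c * U * Cl,
    # Cl ~ 2*pi*sin(alpha); k < 1 accounts for the finite, wall-mounted vane.
    return k * math.pi * chord * u_local * math.sin(alpha)

gamma = vane_circulation(u_local=60.0, chord=0.03, alpha_deg=16.0)
```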
Universal Verification Methodology Based Register Test Automation Flow.
Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu
2016-05-01
In today's SoC design, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in shorter time. In this work, we propose a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or integrate models in the test-bench environment because it requires knowledge of SystemVerilog and UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe register specifications in IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template which is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. On the other hand, we also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use the register model without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing test sequences for each type of register test-case. With the proposed flow, designers can save a considerable amount of time in verifying the functionality of registers.
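The spreadsheet-to-IP-XACT translation step can be sketched as below: each spreadsheet row becomes an XML register element. The column names and the simplified XML schema are illustrative assumptions; real IP-XACT uses the full IEEE 1685 schema.

```python
# Sketch: translating spreadsheet-style register rows into IP-XACT-like XML.
import csv
import io
import xml.etree.ElementTree as ET

sheet = io.StringIO(
    "name,offset,width,access,reset\n"
    "CTRL,0x00,32,read-write,0x00000000\n"
    "STATUS,0x04,32,read-only,0x00000001\n")

regs = ET.Element("registers")
for row in csv.DictReader(sheet):
    reg = ET.SubElement(regs, "register")
    ET.SubElement(reg, "name").text = row["name"]
    ET.SubElement(reg, "addressOffset").text = row["offset"]
    ET.SubElement(reg, "size").text = row["width"]
    ET.SubElement(reg, "access").text = row["access"]
    ET.SubElement(reg, "reset").text = row["reset"]

print(ET.tostring(regs, encoding="unicode"))
```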
Multiple-generator errors are unavoidable under model misspecification.
Jewett, D L; Zhang, Z
1995-08-01
Model misspecification poses a major problem for dipole source localization (DSL) because it causes insidious multiple-generator errors (MulGenErrs) to occur in the fitted dipole parameters. This paper describes how and why this occurs, based upon simple algebraic considerations. MulGenErrs must occur, to some degree, in any DSL analysis of real data because there is model misspecification and mathematically the equations used for the simultaneously active generators must be of a different form than the equations for each generator active alone.
ERIC Educational Resources Information Center
Lennett, Benjamin; Morris, Sarah J.; Byrum, Greta
2012-01-01
Based on a request for information (RFI) submitted to The University Community Next Generation Innovation Project (Gig.U), the paper describes a model for universities to develop next generation broadband infrastructure in their communities. In our view, universities can play a critical role in spurring next generation networks into their…
Exploiting the functional and taxonomic structure of genomic data by probabilistic topic modeling.
Chen, Xin; Hu, Xiaohua; Lim, Tze Y; Shen, Xiajiong; Park, E K; Rosen, Gail L
2012-01-01
In this paper, we present a method that enables both the homology-based approach and the composition-based approach to further study the functional core (i.e., the microbial core and the gene core, respectively). In the proposed method, the identification of major functionality groups is achieved by generative topic modeling, which is able to extract useful information from unlabeled data. We first show that a generative topic model can be used to model the taxon abundance information obtained by the homology-based approach and study the microbial core. The model considers each sample as a “document,” which has a mixture of functional groups, while each functional group (also known as a “latent topic”) is a weighted mixture of species. Therefore, estimating the generative topic model for taxon abundance data will uncover the distribution over latent functions (latent topics) in each sample. Second, we show that a generative topic model can also be used to study the genome-level composition of “N-mer” features (DNA subreads obtained by composition-based approaches). The model considers each genome as a mixture of latent genetic patterns (latent topics), while each pattern is a weighted mixture of the “N-mer” features; thus the existence of core genomes can be indicated by a set of common N-mer features. After studying the mutual information between latent topics and gene regions, we provide an explanation of the functional roles of the uncovered latent genetic patterns. The experimental results demonstrate the effectiveness of the proposed method.
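To make the "samples as documents, taxa as words" framing concrete, the sketch below fits a topic model to a tiny sample-by-taxon count matrix. The counts are invented, and scikit-learn's LatentDirichletAllocation stands in for the paper's generative model.

```python
# Sketch: topic modeling of taxon abundance data (toy counts).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Rows: metagenomic samples ("documents"); columns: taxa ("words").
counts = np.array([[40, 35, 2, 1, 0],
                   [38, 30, 5, 0, 2],
                   [1, 2, 50, 45, 3],
                   [0, 4, 48, 40, 6]])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)        # per-sample functional-group mixture
phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
print(theta.round(2))                    # distribution over latent functions
print(phi.round(2))                      # each group as a mixture of taxa
```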
Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria
2015-12-01
Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates while still requiring inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%); average rainfall rates of 2, 6, and 12 mm/day; and temperatures of 20, 30, and 37°C, according to a statistical experimental design. Refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R² = 0.75) was developed to predict first-order methane generation rate constant values k as functions of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was developed by incorporating both regression equations into the first-order decay based model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills, and to estimates from LandGEM and IPCC. For 4 of the 6 cases, CLEEN model estimates were the closest to actual. Copyright © 2015 Elsevier Ltd. All rights reserved.
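For reference, here is the first-order-decay calculation that CLEEN builds on, with k supplied per waste stream: each year's waste cohort decays exponentially and the cohorts are summed. The values of k and L0 are illustrative assumptions, not the paper's regression output.

```python
# Sketch: first-order-decay landfill methane generation (LandGEM-style).
import numpy as np

k = 0.05        # first-order rate constant, 1/yr (assumed)
L0 = 100.0      # methane generation potential, m^3 CH4 per Mg waste (assumed)
waste = np.full(20, 50_000.0)   # Mg of waste accepted in each of 20 years

def methane_rate(year, waste_by_year, k=k, L0=L0):
    """Annual CH4 generation (m^3/yr) summed over all waste cohorts in place."""
    ages = year - np.arange(len(waste_by_year))
    in_place = ages > 0
    return float(np.sum(k * L0 * waste_by_year[in_place] * np.exp(-k * ages[in_place])))

profile = [methane_rate(y, waste) for y in range(1, 41)]  # 40-year gas curve
```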
Testolin, Alberto; Zorzi, Marco
2016-01-01
Connectionist models can be characterized within the more general framework of probabilistic graphical models, which make it possible to describe efficiently complex statistical distributions involving a large number of interacting variables. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions for investigating neuropsychological disorders within this approach. Though further efforts are required to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage. PMID:27468262
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations, or model forms, based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distance on the space of such model forms, which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance, in conjunction with classical machine learning techniques, to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in a power grid application.
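One standard, widely used way to estimate Koopman spectral properties directly from data is exact dynamic mode decomposition (DMD); the sketch below shows that route, though the paper's specific model forms may be computed differently.

```python
# Dynamic mode decomposition: a data-driven approximation of Koopman spectra.
import numpy as np

def dmd(X, rank):
    """X: snapshot matrix, one time sample per column."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)          # Koopman eigenvalues (discrete-time)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Toy series: two damped oscillators observed through 10 random channels.
t = np.linspace(0, 10, 200)
latent = np.vstack([np.exp(-0.05 * t) * np.sin(2 * t),
                    np.exp(-0.02 * t) * np.cos(5 * t)])
X = np.random.default_rng(1).normal(size=(10, 2)) @ latent
eigvals, modes = dmd(X, rank=4)
print(np.log(eigvals) / (t[1] - t[0]))           # continuous-time eigenvalue estimates
```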
Model-based adaptive 3D sonar reconstruction in reverberating environments.
Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le
2015-10-01
In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side-scan sonar arrays in complex and highly reverberating environments like shallow-water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction-of-arrival trajectories of multiple echoes impinging on the array. Echo tracking is treated as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness-of-fit tests, and demonstrates the importance of model-based processing for bathymetry reconstruction.
Digital relief generation from 3D models
NASA Astrophysics Data System (ADS)
Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian
2016-09-01
It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
Compartmental and Spatial Rule-Based Modeling with Virtual Cell.
Blinov, Michael L; Schaff, James C; Vasilescu, Dan; Moraru, Ion I; Bloom, Judy E; Loew, Leslie M
2017-10-03
In rule-based modeling, molecular interactions are systematically specified in the form of reaction rules that serve as generators of reactions. This provides a way to account for all the potential molecular complexes and interactions among multivalent or multistate molecules. Recently, we introduced rule-based modeling into the Virtual Cell (VCell) modeling framework, permitting graphical specification of rules and merger of networks generated automatically (using the BioNetGen modeling engine) with hand-specified reaction networks. VCell provides a number of ordinary differential equation and stochastic numerical solvers for single-compartment simulations of the kinetic systems derived from these networks, and agent-based network-free simulation of the rules. In this work, compartmental and spatial modeling of rule-based models has been implemented within VCell. To enable rule-based deterministic and stochastic spatial simulations and network-free agent-based compartmental simulations, the BioNetGen and NFSim engines were each modified to support compartments. In the new rule-based formalism, every reactant and product pattern and every reaction rule are assigned locations. We also introduce the rule-based concept of molecular anchors. This assures that any species that has a molecule anchored to a predefined compartment will remain in this compartment. Importantly, in addition to formulation of compartmental models, this now permits VCell users to seamlessly connect reaction networks derived from rules to explicit geometries to automatically generate a system of reaction-diffusion equations. These may then be simulated using either the VCell partial differential equations deterministic solvers or the Smoldyn stochastic simulator. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Model learning for robot control: a survey.
Nguyen-Tuong, Duy; Peters, Jan
2011-11-01
Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot's own body and controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control on a kinematic as well as dynamical level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we need to study the different possible model learning architectures for robotics. Second, we discuss what kinds of problems these architectures and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions of real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.
C code generation from Petri-net-based logic controller specification
NASA Astrophysics Data System (ADS)
Grobelny, Michał; Grobelna, Iwona; Karatkevich, Andrei
2017-08-01
The article focuses on the programming of logic controllers. It is important that the program code of a logic controller executes flawlessly according to the primary specification. In the presented approach, we generate C code for an AVR microcontroller from a rule-based logical model of a control process derived from a control-interpreted Petri net. The same logical model is also used for formal verification of the specification by means of the model checking technique. The proposed rule-based logical model and formal transformation rules ensure that the obtained implementation is consistent with the already verified specification. The approach is validated by practical experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallo, Giulia
Integrating increasingly high levels of variable generation in U.S. electricity markets requires addressing not only power system and grid modeling challenges but also an understanding of how market participants react and adapt to them. Key elements of current and future wholesale power markets can be modeled using an agent-based approach, which may prove to be a useful paradigm for researchers studying and planning for power systems of the future.
NASA Astrophysics Data System (ADS)
Li, Pai; Huang, Yuehui; Jia, Yanbing; Liu, Jichun; Niu, Yi
2018-02-01
This article studies generation investment decisions in the context of global energy interconnection. A generation investment decision model considering multi-agent benefits is proposed. Under global energy interconnection, generation investors in different clean-energy bases not only compete with other investors but also face being selected by the power-receiving central areas; a generation investment decision model that considers multi-agent benefits therefore comes closer to meeting these interest demands. Using game theory, a complete-information game model is adopted to solve for the strategies of the different agents in the equilibrium state.
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Hirschi, M.; Spirig, C.
2014-12-01
To quantify the impact of climate change on a specific pest (or any weather-dependent process) at a specific site, we may use a site-calibrated pest (or other) model and compare its outputs obtained with site-specific weather data representing present vs. perturbed climates. The input weather data may be produced by a stochastic weather generator. Apart from the quality of the pest model, the reliability of the results obtained in such an experiment depends on the ability of the generator to represent the statistical structure of real-world weather series, and on the sensitivity of the pest model to possible imperfections of the generator. This contribution deals with the multivariate HOWGH weather generator, which is based on a combination of parametric and non-parametric statistical methods. Here, HOWGH is used to generate synthetic hourly series of three weather variables (solar radiation, temperature and precipitation) required by the dynamic pest model SOPRA to simulate the development of codling moth. The contribution presents results of the direct and indirect validation of HOWGH. In the direct validation, the synthetic series generated by HOWGH (under various settings of its underlying model) are validated in terms of multiple climatic characteristics, focusing on subdaily wet/dry and hot/cold spells. In the indirect validation, we assess the generator in terms of characteristics derived from the outputs of the SOPRA model fed by observed vs. synthetic series. The weather generator may be used to produce weather series representing present and future climates. In the latter case, the parameters of the generator may be modified by climate change scenarios based on Global or Regional Climate Models. To demonstrate this feature, results of codling moth simulations for a future climate will be shown. Acknowledgements: The weather generator is developed and validated within the frame of projects WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES1102 action).
NASA Technical Reports Server (NTRS)
Goel, Narendra S.; Rozehnal, Ivan; Thompson, Richard L.
1991-01-01
A computer-graphics-based model, named DIANA, is presented for generation of objects of arbitrary shape and for calculating bidirectional reflectances and scattering from them, in the visible and infrared region. The computer generation is based on a modified Lindenmayer system approach which makes it possible to generate objects of arbitrary shapes and to simulate their growth, dynamics, and movement. Rendering techniques are used to display an object on a computer screen with appropriate shading and shadowing and to calculate the scattering and reflectance from the object. The technique is illustrated with scattering from canopies of simulated corn plants.
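The Lindenmayer-system core of such generators is a string-rewriting process; the sketch below expands a classic bracketed L-system for a branching plant (the rules and depth are arbitrary textbook choices, not DIANA's modified system).

```python
# Classic bracketed L-system: repeated string rewriting generates a branching plant.
RULES = {"X": "F[+X][-X]FX", "F": "FF"}  # illustrative rules, not DIANA's

def expand(axiom: str, rules: dict, depth: int) -> str:
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

s = expand("X", RULES, 3)
print(len(s), s[:60])
# When rendering with turtle graphics: 'F' = draw forward, '+'/'-' = turn,
# '[' / ']' = push/pop the turtle state (this creates the branching).
```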
DOUBLE SHELL TANK (DST) HYDROXIDE DEPLETION MODEL FOR CARBON DIOXIDE ABSORPTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
OGDEN DM; KIRCH NW
2007-10-31
This document presents a supernatant hydroxide ion depletion model based on mechanistic principles. The carbon dioxide absorption mechanistic model is developed in this report. The report also benchmarks the model against historical tank supernatant hydroxide data and vapor space carbon dioxide data. A comparison of the newly generated mechanistic model with previously applied empirical hydroxide depletion equations is also performed.
A random spatial network model based on elementary postulates
Karlinger, Michael R.; Troutman, Brent M.
1989-01-01
A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.
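For readers who want to experiment with the "all spanning trees equally likely" property, the sketch below samples a uniform spanning tree of a grid with Wilson's algorithm; this is one standard sampler for that distribution, not necessarily the construction used in the paper.

```python
# Uniform spanning tree of a grid via Wilson's algorithm (loop-erased random walks).
import random

def wilson_ust(n, m, seed=0):
    rng = random.Random(seed)
    nodes = [(i, j) for i in range(n) for j in range(m)]
    in_tree = {nodes[0]}                 # root the tree at an arbitrary node
    parent = {}
    for start in nodes:
        if start in in_tree:
            continue
        # Random walk from `start` until it hits the tree, recording only the
        # last exit direction from each node (implicit loop erasure).
        path, v = {}, start
        while v not in in_tree:
            i, j = v
            nbrs = [(a, b) for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= a < n and 0 <= b < m]
            path[v] = rng.choice(nbrs)
            v = path[v]
        # Retrace the loop-erased walk and attach it to the tree.
        v = start
        while v not in in_tree:
            in_tree.add(v)
            parent[v] = path[v]
            v = path[v]
    return parent                        # child -> parent edges of the spanning tree

tree = wilson_ust(8, 8)
print(len(tree))                         # 63 edges span an 8x8 grid
```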
NASA Technical Reports Server (NTRS)
Margaria, Tiziana (Inventor); Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor); Steffen, Bernard (Inventor)
2010-01-01
Systems, methods and apparatus are provided through which in some embodiments, automata learning algorithms and techniques are implemented to generate a more complete set of scenarios for requirements based programming. More specifically, a CSP-based, syntax-oriented model construction, which requires the support of a theorem prover, is complemented by model extrapolation, via automata learning. This may support the systematic completion of the requirements, the nature of the requirement being partial, which provides focus on the most prominent scenarios. This may generalize requirement skeletons by extrapolation and may indicate by way of automatically generated traces where the requirement specification is too loose and additional information is required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhou, S; Cai, W; Hurwitz, M
2015-06-15
Purpose: Respiratory-correlated cone-beam CT (4DCBCT) images acquired immediately prior to treatment have the potential to represent patient motion patterns and anatomy during treatment, including both intra- and inter-fractional changes. We develop a method to generate patient-specific motion models based on 4DCBCT images acquired with existing clinical equipment and use them to generate time-varying volumetric images (3D fluoroscopic images) representing motion during treatment delivery. Methods: Motion models are derived by deformably registering each 4DCBCT phase to a reference phase, and performing principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated by optimizing the resulting PCA coefficients iteratively through comparison of the cone-beam projections simulating kV treatment imaging and digitally reconstructed radiographs generated from the motion model. Patient and physical phantom datasets are used to evaluate the method in terms of tumor localization error compared to manually defined ground truth positions. Results: 4DCBCT-based motion models were derived and used to generate 3D fluoroscopic images at treatment time. For the patient datasets, the average tumor localization error and the 95th percentile were 1.57 and 3.13, respectively, in subsets of four patient datasets. For the physical phantom datasets, the average tumor localization error and the 95th percentile were 1.14 and 2.78, respectively, in two datasets. 4DCBCT motion models are shown to perform well in the context of generating 3D fluoroscopic images due to their ability to reproduce anatomical changes at treatment time. Conclusion: This study showed the feasibility of deriving 4DCBCT-based motion models and using them to generate 3D fluoroscopic images at treatment time in real clinical settings. 4DCBCT-based motion models were found to account for the 3D non-rigid motion of the patient anatomy during treatment and have the potential to localize tumors and other patient anatomical structures at treatment time even when inter-fractional changes occur. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA. The project was also supported, in part, by Award Number R21CA156068 from the National Cancer Institute.
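A toy version of the motion-model construction step, assuming made-up displacement fields: PCA over the registered 4DCBCT phases yields a mean deformation plus a few principal modes, and a new deformation is parameterized by the mode coefficients.

```python
# Toy PCA motion model over displacement vector fields (DVFs) from 4DCBCT phases.
import numpy as np

rng = np.random.default_rng(0)
n_phases, n_voxels = 10, 5000            # 10 breathing phases, flattened 3-component DVFs
dvfs = rng.normal(size=(n_phases, 3 * n_voxels))   # stand-in for registration output

mean = dvfs.mean(axis=0)
U, s, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
n_modes = 2                              # a few modes usually capture breathing motion
modes = Vt[:n_modes]                     # principal displacement patterns

# A new 3D deformation (one "3D fluoroscopic" frame) is parameterized by a few
# coefficients; at treatment time these would be optimized iteratively so that
# simulated projections match the measured kV projections.
coeffs = np.array([1.5, -0.3])
dvf_t = mean + coeffs @ modes
print(dvf_t.shape)
```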
Model-Driven Architecture for Agent-Based Systems
NASA Technical Reports Server (NTRS)
Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.
2004-01-01
The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected as the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared, and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. Software artifact generation is based on a metamodel. Each component maps to a UML structured component, which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.
Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model
NASA Astrophysics Data System (ADS)
Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man
2017-03-01
Computer-generated holography (CGH) is becoming increasingly important for 3-D displays in various applications, including virtual reality. In CGH, holographic fringe patterns are generated by calculating them numerically on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of a 3D object. This paper proposes a new fast CGH generation method based on the sparsity of the CGH for 3D point cloud models. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane using sparse FFT (sFFT). We observe that the CGH of one layer of a 3D object is sparse, so a dominant CGH can be rapidly generated from a small set of signals by sFFT. Experimental results show that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.
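For orientation, the sketch below implements the plain layer-based pipeline with a dense FFT (angular-spectrum propagation of each depth layer to the hologram plane, then summation); the paper's contribution is to replace the dense FFT in this step with a sparse FFT. Wavelength, pixel pitch, and layer depths are arbitrary.

```python
# Layer-based hologram sketch: propagate each depth layer to the CGH plane with
# the angular-spectrum method and sum the contributions.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0)))
    H[arg < 0] = 0                       # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

n, wavelength, pitch = 256, 532e-9, 8e-6
layers = {0.10: np.zeros((n, n), complex), 0.12: np.zeros((n, n), complex)}
layers[0.10][100, 100] = 1.0             # a point of the 3D model at z = 10 cm
layers[0.12][150, 80] = 1.0              # another point at z = 12 cm

cgh = sum(angular_spectrum(f, wavelength, pitch, z) for z, f in layers.items())
fringe = np.angle(cgh)                   # phase-only fringe pattern
print(fringe.shape)
```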
Network approaches for expert decisions in sports.
Glöckner, Andreas; Heinen, Thomas; Johnson, Joseph G; Raab, Markus
2012-04-01
This paper focuses on a model comparison to explain choices based on gaze behavior via simulation procedures. We tested two classes of models, a parallel constraint satisfaction (PCS) artificial neural network model and an accumulator model, in a handball decision-making task from a lab experiment. Both models predict action in an option-generation task in which options can be chosen from the perspective of a playmaker in handball (i.e., passing to another player or shooting at the goal). Model simulations are based on a dataset of generated options together with gaze behavior measurements from 74 expert handball players for 22 pieces of video footage. We implemented both classes of models as deterministic vs. probabilistic models, including and excluding fitted parameters. Results indicated that both classes of models can fit and predict participants' initially generated options based on gaze behavior data, and that overall the classes of models performed about equally well. Early fixations were thereby particularly predictive of choices. We conclude that the analysis of complex environments via network approaches can be successfully applied to the field of experts' decision making in sports and provides perspectives for further theoretical developments. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nijland, Linda; Arentze, Theo; Timmermans, Harry
2014-01-01
Modeling multi-day planning has received scarce attention in activity-based transport demand modeling so far. However, new dynamic activity-based approaches are currently being developed. The frequency and inflexibility of planned activities and events in the activity schedules of individuals indicate the importance of incorporating pre-planned activities in the new generation of dynamic travel demand models. Elaborating and combining previous work on event-driven activity generation, the aim of this paper is to develop and illustrate an extension of a need-based model of activity generation that takes into account possible influences of pre-planned activities and events. This paper describes the theory and shows the results of simulations of the extension. The simulation was conducted for six different activities, and the parameter values used were consistent with an earlier estimation study. The results show that the model works well and that the influences of the parameters are consistent, logical, and have clear interpretations. These findings offer further evidence of face and construct validity for the suggested modeling approach.
n-D shape/texture optimal synthetic description and modeling by GEOGINE
NASA Astrophysics Data System (ADS)
Fiorini, Rodolfo A.; Dacquino, Gianfranco F.
2004-12-01
GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for multidimensional shape/texture optimal synthetic description and learning, is presented. Robust characterization of elementary geometric shapes subjected to geometric transformations, on a rigorous mathematical level, is a key problem in many computer applications in different areas of interest. The past four decades have seen solutions based almost exclusively on the use of n-Dimensional Moment and Fourier descriptor invariants. The present paper introduces a new approach for automatic model generation based on n-Dimensional Tensor Invariants as a formal dictionary. An ontological model is the kernel used for specifying ontologies, so how close an ontology can come to the real world depends on the possibilities offered by the ontological model. By this approach, even chromatic information content can be easily and reliably decoupled from target geometric information and computed into robust colour shape parameter attributes. The main operational advantages of GEOGINE over previous approaches are: 1) automated model generation, 2) an invariant minimal complete set for computational efficiency, 3) arbitrary model precision for robust object description.
NASA Astrophysics Data System (ADS)
Myers, B.; Beard, T. D.; Weiskopf, S. R.; Jackson, S. T.; Tittensor, D.; Harfoot, M.; Senay, G. B.; Casey, K.; Lenton, T. M.; Leidner, A. K.; Ruane, A. C.; Ferrier, S.; Serbin, S.; Matsuda, H.; Shiklomanov, A. N.; Rosa, I.
2017-12-01
Biodiversity and ecosystems services underpin political targets for the conservation of biodiversity; however, previous incarnations of these biodiversity-related targets have not relied on integrated model based projections of possible outcomes based on climate and land use change. Although a few global biodiversity models are available, most biodiversity models lie along a continuum of geography and components of biodiversity. Model-based projections of the future of global biodiversity are critical to support policymakers in the development of informed global conservation targets, but the scientific community lacks a clear strategy for integrating diverse data streams in developing, and evaluating the performance of, such biodiversity models. Therefore, in this paper, we propose a framework for ongoing testing and refinement of model-based projections of biodiversity trends and change, by linking a broad variety of biodiversity models with data streams generated by advances in remote sensing, coupled with new and emerging in-situ observation technologies to inform development of essential biodiversity variables, future global biodiversity targets, and indicators. Our two main objectives are to (1) develop a framework for model testing and refining projections of a broad range of biodiversity models, focusing on global models, through the integration of diverse data streams and (2) identify the realistic outputs that can be developed and determine coupled approaches using remote sensing and new and emerging in-situ observations (e.g., metagenomics) to better inform the next generation of global biodiversity targets.
NASA Astrophysics Data System (ADS)
Wang, Dai; Gao, Junyu; Li, Pan; Wang, Bin; Zhang, Cong; Saxena, Samveg
2017-08-01
Modeling PEV travel and charging behavior is key to estimating charging demand and further exploring the potential for providing grid services. This paper presents a stochastic simulation methodology to generate itineraries and charging load profiles for a population of PEVs based on real-world vehicle driving data. In order to describe the sequence of daily travel activities, we use the trip-chain model, which contains the detailed information of each trip, namely start time, end time, trip distance, start location, and end location. A trip chain generation method is developed based on the Naive Bayes model to generate a large number of trips which are temporally and spatially coupled. We apply the proposed methodology to investigate multi-location charging loads in three different scenarios. Simulation results show that home charging can meet the energy demand of the majority of PEVs under average conditions. In addition, we calculate the lower bound of the charging load peak on the premise of lowest charging cost. The results are instructive for the design and construction of charging facilities to avoid excessive infrastructure.
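A stripped-down illustration of generating temporally and spatially coupled trip chains by sampling each trip conditioned on the previous one; the conditional tables and speed assumptions are invented placeholders for distributions that the paper learns from real driving data with a Naive Bayes model.

```python
# Toy trip-chain sampler: each trip is drawn conditioned on the previous trip's
# end location, keeping the chain spatially and temporally coupled.
import random

NEXT_LOC = {  # P(next end location | current location), illustrative numbers
    "home": [("work", 0.6), ("other", 0.3), ("home", 0.1)],
    "work": [("home", 0.7), ("other", 0.3)],
    "other": [("home", 0.5), ("work", 0.2), ("other", 0.3)],
}

def sample_day(rng, start="home", n_trips=3):
    chain, loc, clock = [], start, 7.0         # day starts at 7:00 at home
    for _ in range(n_trips):
        dest = rng.choices(*zip(*NEXT_LOC[loc]))[0]
        dist = rng.uniform(2, 30)              # trip distance in km (placeholder)
        duration = dist / 40.0                 # assume ~40 km/h average speed
        chain.append({"start_h": round(clock, 2), "from": loc, "to": dest, "km": round(dist, 1)})
        clock += duration + rng.uniform(1, 6)  # dwell time before the next trip
        loc = dest
    return chain

rng = random.Random(0)
for trip in sample_day(rng):
    print(trip)
```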
NASA Astrophysics Data System (ADS)
Curcó, David; Casanovas, Jordi; Roca, Marc; Alemán, Carlos
2005-07-01
A method for generating atomistic models of dense amorphous polymers is presented. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, a relaxation algorithm is applied to minimize the non-bonding interactions. Two alternative relaxation methods, based on simple minimization and Concerted Rotation techniques, have been implemented. The performance of the method has been checked by simulating polyethylene, polypropylene, nylon 6, poly(L,D-lactic acid) and polyglycolic acid.
Haas, Magali; Stephenson, Diane; Romero, Klaus; Gordon, Mark Forrest; Zach, Neta; Geerts, Hugo
2016-09-01
Many disease-modifying clinical development programs in Alzheimer's disease (AD) have failed to date, and development of new and advanced preclinical models that generate actionable knowledge is desperately needed. This review reports on computer-based modeling and simulation approach as a powerful tool in AD research. Statistical data-analysis techniques can identify associations between certain data and phenotypes, such as diagnosis or disease progression. Other approaches integrate domain expertise in a formalized mathematical way to understand how specific components of pathology integrate into complex brain networks. Private-public partnerships focused on data sharing, causal inference and pathway-based analysis, crowdsourcing, and mechanism-based quantitative systems modeling represent successful real-world modeling examples with substantial impact on CNS diseases. Similar to other disease indications, successful real-world examples of advanced simulation can generate actionable support of drug discovery and development in AD, illustrating the value that can be generated for different stakeholders. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bonne, François; Alamir, Mazen; Hoa, Christine; Bonnay, Patrick; Bon-Mardion, Michel; Monteiro, Lionel
2015-12-01
In this article, we present a new Simulink library of cryogenic components (such as valves, phase separators, mixers, heat exchangers...) that can be assembled to generate model-based control schemes. Every component is described by its algebraic or differential equations and can be assembled with others to build the dynamical model of a complete refrigerator or of a subpart of it. The obtained model can be used to automatically design advanced model-based control schemes. It can also be used to design a model-based PI controller. Advanced control schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in future fusion reactors such as the cryogenic cooling systems of the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). The paper gives the example of the generation of the dynamical model of the 400W@1.8K refrigerator and shows how to build a constrained model predictive controller for it. Experimental results based on this scheme will be given. This work is supported by the French national research agency (ANR) through the ANR-13-SEED-0005 CRYOGREEN program.
Effect of material constants on power output in piezoelectric vibration-based generators.
Takeda, Hiroaki; Mihara, Kensuke; Yoshimura, Tomohiro; Hoshina, Takuya; Tsurumi, Takaaki
2011-09-01
A method for estimating power output from material constants in piezoelectric vibration-based generators is proposed. A modified equivalent circuit model of the generator was built and validated by measurement results from a generator fabricated using potassium sodium niobate-based and lead zirconate titanate (PZT) ceramics. Subsequently, generators with the same structure using other PZT-based and bismuth-layered structure ferroelectric ceramics were fabricated and tested. The power outputs of these generators were expressed as linear functions of a term composed of the electromechanical coupling coefficient k_sys^2 and the mechanical quality factor Q*_m of the generator. The relationship between device constants (k_sys^2 and Q*_m) and material constants (k_31^2 and Q_m) was clarified. Estimation of the power output using material constants is demonstrated, and an appropriate piezoelectric material for the generator is suggested.
Erdoğdu, Utku; Tan, Mehmet; Alhajj, Reda; Polat, Faruk; Rokne, Jon; Demetrick, Douglas
2013-01-01
The availability of enough samples for effective analysis and knowledge discovery has been a challenge in the research community, especially in the area of gene expression data analysis. Thus, the approaches being developed for data analysis have mostly suffered from a lack of enough data to train and test the constructed models. We argue that the process of sample generation could be successfully automated by employing sophisticated machine learning techniques. An automated sample generation framework could successfully complement actual sample generation from real cases. This argument is validated in this paper by describing a framework that integrates multiple models (perspectives) for sample generation. We illustrate its applicability for producing new gene expression data samples, a highly demanding area that has not received attention. The three perspectives employed in the process are based on models that are not closely related. This independence eliminates the bias of having the produced approach cover only certain characteristics of the domain and lead to samples skewed toward one direction. The first model is based on the Probabilistic Boolean Network (PBN) representation of the gene regulatory network underlying the given gene expression data. The second model integrates a Hierarchical Markov Model (HIMM), and the third model employs a genetic algorithm. Each model learns as many characteristics of the domain being analysed as possible and tries to incorporate the learned characteristics in generating new samples. In other words, the models base their analysis on domain knowledge implicitly present in the data itself. The developed framework has been extensively tested by checking how well the new samples complement the original samples. The produced results are very promising in showing the effectiveness, usefulness and applicability of the proposed multi-model framework.
Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei
2014-01-01
A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weight a local chaotic model, an artificial neural network (ANN), and a partial least squares support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built, and its multistep-ahead prediction ability was tested, based on daily MSW generation data from Seattle, Washington, the United States. The hybrid forecast model was shown to produce more accurate and reliable results, and to degrade less in longer predictions, than the three individual models. The average one-week-ahead prediction error was reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%. The five-week average was reduced from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508
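The combination idea in bare-bones form: simulated annealing searching over convex weights for three base forecasts so as to minimize MAPE; the base predictions here are random placeholders rather than the chaotic/ANN/PLS-SVM outputs used in the study.

```python
# Simulated annealing over convex combination weights for three base forecasts.
import numpy as np

rng = np.random.default_rng(0)
y = rng.uniform(50, 150, size=100)                      # "observed" MSW generation
preds = np.stack([y * rng.normal(1.0, s, 100) for s in (0.12, 0.13, 0.13)])

def mape(w):
    yhat = w @ preds
    return np.mean(np.abs((y - yhat) / y)) * 100

w = np.ones(3) / 3
best, best_err = w, mape(w)
T = 1.0
for step in range(2000):
    cand = np.abs(w + rng.normal(0, 0.05, 3))
    cand /= cand.sum()                                  # keep weights convex
    d = mape(cand) - mape(w)
    if d < 0 or rng.random() < np.exp(-d / T):          # Metropolis acceptance
        w = cand
        if mape(w) < best_err:
            best, best_err = w, mape(w)
    T *= 0.998                                          # cooling schedule
print(best, round(best_err, 2))
```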
Accessing and constructing driving data to develop fuel consumption forecast model
NASA Astrophysics Data System (ADS)
Yamashita, Rei-Jo; Yao, Hsiu-Hsen; Hung, Shih-Wei; Hackman, Acquah
2018-02-01
In this study, we develop forecasting models to estimate fuel consumption from driving behavior when the vehicle and route are known. First, driving data are collected via telematics and OBD-II. Then, a driving fuel consumption formula is used to calculate estimated fuel consumption, and driving behavior indicators (DBIs) are generated for analysis. Based on statistical analysis methods, the driving fuel consumption forecasting model is constructed. Field experiments were conducted in this study to generate hundreds of DBIs. Following a data mining approach, Pearson correlation analysis is used to filter the DBIs most strongly related to fuel consumption; only highly correlated DBIs are used in the model. These DBIs are divided into four classes: a speed class, an acceleration class, a left/right/U-turn class, and an 'other' category. We then use K-means cluster analysis to group drivers and routes into classes. Finally, more than 12 aggregate models (AMs) are generated from the highly correlated DBIs, using neural network models and regression analysis. Mean Absolute Percentage Error (MAPE) is used to evaluate the developed AMs; the best MAPE value among them is below 5%.
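The feature-screening step described above, in miniature: keep only indicators whose Pearson correlation with fuel consumption exceeds a threshold, then fit a regression on the survivors; data, threshold, and model choice are all illustrative.

```python
# Pearson screening of driving-behavior indicators, then a regression fit.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_trips, n_dbis = 200, 50
X = rng.normal(size=(n_trips, n_dbis))              # driving behavior indicators
fuel = 8.0 + 1.5 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(0, 0.3, n_trips)

r = np.array([np.corrcoef(X[:, j], fuel)[0, 1] for j in range(n_dbis)])
keep = np.abs(r) > 0.3                              # illustrative threshold
print("kept DBIs:", np.flatnonzero(keep))

model = LinearRegression().fit(X[:, keep], fuel)
pred = model.predict(X[:, keep])
mape = np.mean(np.abs((fuel - pred) / fuel)) * 100  # evaluation metric from the paper
print(f"MAPE: {mape:.1f}%")
```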
Supervised guiding long-short term memory for image caption generation based on object classes
NASA Astrophysics Data System (ADS)
Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan
2018-03-01
Present models of image caption generation suffer from attenuation of image visual semantic information and from errors in guidance information. In order to solve these problems, we propose a supervised guiding Long Short-Term Memory model based on object classes, named S-gLSTM for short. It uses object detection results from R-FCN as supervisory information with high confidence, and updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the currently relevant information from the image visual semantic information based on the guidance word set. This information is fed into the S-gLSTM at each iteration as guidance information, to guide the caption generation. To acquire text-related visual semantic information, the S-gLSTM fine-tunes the weights of the network through back-propagation of the guiding loss. Complementing the guidance information at each iteration solves the problem of visual semantic information attenuation in the traditional LSTM model. Besides, the supervised guidance information in our model reduces the impact of mismatched words on caption generation. We test our model on the MSCOCO 2014 dataset and obtain better performance than the state-of-the-art models.
Visual Persons Behavior Diary Generation Model based on Trajectories and Pose Estimation
NASA Astrophysics Data System (ADS)
Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li
2018-03-01
The behavior patterns of persons are an important output of surveillance analysis. This paper focuses on a generation model for visual person behavior diaries. The pipeline includes person detection, tracking, and behavior classification. This paper adopts the deep convolutional neural model YOLOv2 (You Only Look Once) for the person detection module. Multi-person tracking is built on the detection framework, and the Hungarian assignment algorithm is used for matching. The person appearance model combines an HSV color model and a hash code model. Person motion is estimated by a Kalman filter. Detected objects are matched with existing tracklets through appearance and motion-location distances using the Hungarian assignment method. A long continuous trajectory for each person is obtained by a spatial-temporal linking algorithm, and face recognition information is used to identify the trajectory. The identified trajectories can then be used to generate a visual diary of person behavior based on scene context information and person action estimation. The relevant modules are tested on public datasets and our own captured video sets. The test results show that the method can generate visual person behavior diaries with reasonable accuracy.
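The association step of tracking-by-detection, sketched in code: build a cost matrix from motion and appearance distances and solve it with the Hungarian algorithm via SciPy; the two distance functions are simplified placeholders for the paper's HSV/hash appearance model and Kalman prediction.

```python
# Detection-to-tracklet association with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
tracks = rng.uniform(0, 100, size=(4, 2))      # predicted positions (e.g., Kalman)
dets = rng.uniform(0, 100, size=(5, 2))        # detected positions this frame
track_app = rng.random((4, 16))                # appearance features (placeholder
det_app = rng.random((5, 16))                  # for HSV histograms / hash codes)

motion_cost = np.linalg.norm(tracks[:, None] - dets[None], axis=2)
app_cost = np.linalg.norm(track_app[:, None] - det_app[None], axis=2)
cost = motion_cost / motion_cost.max() + app_cost / app_cost.max()

rows, cols = linear_sum_assignment(cost)       # optimal one-to-one matching
for r, c in zip(rows, cols):
    if cost[r, c] < 1.2:                       # gate out implausible matches
        print(f"track {r} <- detection {c} (cost {cost[r, c]:.2f})")
```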
Individual-based models for adaptive diversification in high-dimensional phenotype spaces.
Ispolatov, Iaroslav; Madhok, Vaibhav; Doebeli, Michael
2016-02-07
Most theories of evolutionary diversification are based on equilibrium assumptions: they are either based on optimality arguments involving static fitness landscapes, or they assume that populations first evolve to an equilibrium state before diversification occurs, as exemplified by the concept of evolutionary branching points in adaptive dynamics theory. Recent results indicate that adaptive dynamics may often not converge to equilibrium points and instead generate complicated trajectories if evolution takes place in high-dimensional phenotype spaces. Even though some analytical results on diversification in complex phenotype spaces are available, to study this problem in general we need to reconstruct individual-based models from the adaptive dynamics generating the non-equilibrium dynamics. Here we first provide a method to construct individual-based models such that they faithfully reproduce the given adaptive dynamics attractor without diversification. We then show that a propensity to diversify can be introduced by adding Gaussian competition terms that generate frequency dependence while still preserving the same adaptive dynamics. For sufficiently strong competition, the disruptive selection generated by frequency-dependence overcomes the directional evolution along the selection gradient and leads to diversification in phenotypic directions that are orthogonal to the selection gradient. Copyright © 2015 Elsevier Ltd. All rights reserved.
Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms
NASA Astrophysics Data System (ADS)
Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.
2016-06-01
Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
Aggregation Trade Offs in Family Based Recommendations
NASA Astrophysics Data System (ADS)
Berkovsky, Shlomo; Freyne, Jill; Coombe, Mac
Personalized information access tools are frequently based on collaborative filtering recommendation algorithms. Collaborative filtering recommender systems typically suffer from a data sparsity problem, where systems do not have sufficient user data to generate accurate and reliable predictions. Prior research suggested using group-based user data in the collaborative filtering recommendation process to generate group-based predictions and partially resolve the sparsity problem. Although group recommendations are less accurate than personalized recommendations, they are more accurate than general non-personalized recommendations, which are the natural fall back when personalized recommendations cannot be generated. In this work we present initial results of a study that exploits the browsing logs of real families of users gathered in an eHealth portal. The browsing logs allowed us to experimentally compare the accuracy of two group-based recommendation strategies: aggregated group models and aggregated predictions. Our results showed that aggregating individual models into group models resulted in more accurate predictions than aggregating individual predictions into group predictions.
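The two aggregation strategies compared in the study, in minimal form: average the members' rating vectors into a group model before predicting, versus predicting per member and averaging the predictions; the blend-with-item-means predictor is a deliberate simplification of a real collaborative filtering algorithm.

```python
# Two group-recommendation strategies: aggregate models vs. aggregate predictions.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.uniform(1, 5, size=(3, 8))       # 3 family members x 8 items
global_item_means = rng.uniform(2, 4, size=8)  # stand-in for community data

def predict(profile, alpha=0.5):
    """Toy predictor: blend a profile with community item means."""
    return alpha * profile + (1 - alpha) * global_item_means

# Strategy 1: aggregate individual profiles into a group model, then predict.
group_model = ratings.mean(axis=0)
pred_from_model = predict(group_model)

# Strategy 2: predict for each member, then aggregate the predictions.
pred_from_preds = np.mean([predict(u) for u in ratings], axis=0)

print(np.allclose(pred_from_model, pred_from_preds))  # True only because this toy
# predictor is linear; with a real CF algorithm the two strategies diverge.
```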
NASA Astrophysics Data System (ADS)
Peck, Jaron Joshua
Water is used in power generation for cooling processes in thermoelectric power plants, and power generation currently withdraws more water than any other sector in the U.S. Reducing water use from power generation will help to alleviate water stress in at-risk areas, where droughts have the potential to strain water resources. The amount of water used for power varies depending on many climatic aspects as well as plant operation factors. This work presents a model that quantifies the water use for power generation for two regions representing different generation fuel portfolios, California and Utah. The analysis of the California Independent System Operator introduces the methods of water-energy modeling by creating an overall water use factor, in volume of water per unit of energy produced, based on the fuel generation mix of the area. The idea of water monitoring based on energy used by a building or region is explored based on live fuel mix data. This is for the purposes of increasing public awareness of the water associated with personal energy use and helping to promote greater energy efficiency. The Utah case study explores the effects more renewable, and less water-intensive, forms of energy will have on the overall water use from power generation for the state. Using a similar model to that of the California case study, total water savings are quantified based on power reduction scenarios involving increased use of renewable energy. The plausibility of implementing more renewable energy into Utah's power grid is also discussed. Data resolution, as well as dispatch methods, economics, and solar variability, introduces some uncertainty into the analysis.
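The core bookkeeping of such a model is a mix-weighted water-use factor; a minimal sketch follows, assuming illustrative per-fuel withdrawal factors (real factors vary widely by plant and cooling technology).

```python
# Mix-weighted water-use factor: liters of water per kWh for a given fuel mix.
# Per-fuel factors below are illustrative placeholders, not measured values.
WATER_L_PER_KWH = {"coal": 1.9, "natural_gas": 1.0, "nuclear": 2.3,
                   "hydro": 0.0, "solar_pv": 0.1, "wind": 0.0}

def water_factor(fuel_mix):
    """fuel_mix: {fuel: share of generation}, shares summing to 1."""
    return sum(share * WATER_L_PER_KWH[f] for f, share in fuel_mix.items())

mix = {"natural_gas": 0.45, "solar_pv": 0.15, "hydro": 0.10,
       "nuclear": 0.10, "wind": 0.10, "coal": 0.10}
wf = water_factor(mix)                 # L/kWh for this mix
print(f"{wf:.2f} L/kWh; a 500 kWh/month home implies {wf * 500:.0f} L/month")
```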
NASA Astrophysics Data System (ADS)
Prada, Jose Fernando
Keeping a contingency reserve in power systems is necessary to preserve the security of real-time operations. This work studies two different approaches to the optimal allocation of energy and reserves in the day-ahead generation scheduling process. Part I presents a stochastic security-constrained unit commitment model to co-optimize energy and the locational reserves required to respond to a set of uncertain generation contingencies, using a novel state-based formulation. The model is applied in an offer-based electricity market to allocate contingency reserves throughout the power grid, in order to comply with the N-1 security criterion under transmission congestion. The objective is to minimize expected dispatch and reserve costs, together with post contingency corrective redispatch costs, modeling the probability of generation failure and associated post contingency states. The characteristics of the scheduling problem are exploited to formulate a computationally efficient method, consistent with established operational practices. We simulated the distribution of locational contingency reserves on the IEEE RTS96 system and compared the results with the conventional deterministic method. We found that assigning locational spinning reserves can guarantee an N-1 secure dispatch accounting for transmission congestion at a reasonable extra cost. The simulations also showed little value of allocating downward reserves but sizable operating savings from co-optimizing locational nonspinning reserves. Overall, the results indicate the computational tractability of the proposed method. Part II presents a distributed generation scheduling model to optimally allocate energy and spinning reserves among competing generators in a day-ahead market. The model is based on the coordination between individual generators and a market entity. The proposed method uses forecasting, augmented pricing and locational signals to induce efficient commitment of generators based on firm posted prices. It is price-based but does not rely on multiple iterations, minimizes information exchange and simplifies the market clearing process. Simulations of the distributed method performed on a six-bus test system showed that, using an appropriate set of prices, it is possible to emulate the results of a conventional centralized solution, without need of providing make-whole payments to generators. Likewise, they showed that the distributed method can accommodate transactions with different products and complex security constraints.
Software Tools for Developing and Simulating the NASA LaRC CMF Motion Base
NASA Technical Reports Server (NTRS)
Bryant, Richard B., Jr.; Carrelli, David J.
2006-01-01
The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base has provided many design and analysis challenges. In the process of addressing these challenges, a comprehensive suite of software tools was developed. The software tools development began with a detailed MATLAB/Simulink model of the motion base, which was used primarily for safety loads prediction, design of the closed-loop compensator, and development of the motion base safety systems. A Simulink model of the digital control law, from which a portion of the embedded code is directly generated, was later added to this model to form a closed-loop system model. Concurrently, software that runs on a PC was created to display and record motion base parameters. It includes a user interface for controlling time history displays, strip chart displays, data storage, and initializing of function generators used during motion base testing. Finally, a software tool was developed for kinematic analysis and prediction of mechanical clearances for the motion system. These tools work together in an integrated package to support normal operations of the motion base, simulate the end-to-end operation of the motion base system, providing facilities for software-in-the-loop testing, mechanical geometry and sensor data visualizations, and function generator setup and evaluation.
Introductory Biology Students' Conceptual Models and Explanations of the Origin of Variation
ERIC Educational Resources Information Center
Bray Speth, Elena; Shaw, Neil; Momsen, Jennifer; Reinagel, Adam; Le, Paul; Taqieddin, Ranya; Long, Tammy
2014-01-01
Mutation is the key molecular mechanism generating phenotypic variation, which is the basis for evolution. In an introductory biology course, we used a model-based pedagogy that enabled students to integrate their understanding of genetics and evolution within multiple case studies. We used student-generated conceptual models to assess…
ERIC Educational Resources Information Center
Beard, John; Yaprak, Attila
A content analysis model for assessing advertising themes and messages generated primarily for United States markets to overcome barriers in the cultural environment of international markets was developed and tested. The model is based on three primary categories for generating, evaluating, and executing advertisements: rational, emotional, and…
Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Rehman, Naveed Ur; Siddiqui, Mubashir Ali
2017-03-01
In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
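The model-building recipe described above in miniature: draw Latin hypercube samples of the inputs, evaluate a placeholder performance function, and fit a least-squares response surface; SciPy's qmc module provides the LHS sampler, and the function below is not the paper's thermodynamic model.

```python
# Latin hypercube sampling + least-squares response surface, in miniature.
import numpy as np
from scipy.stats import qmc

def simulate_power(x):
    """Placeholder for the thermodynamic SCTEG model (invented, not the paper's)."""
    flux, load, t_amb = x
    return 0.05 * flux * load / (1 + load) - 0.01 * (t_amb - 25)

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=200)
# Scale unit-cube samples to realistic ranges: flux [W/m^2], load [ohm], T [deg C].
X = qmc.scale(unit, l_bounds=[200, 0.5, 0], u_bounds=[1000, 5.0, 45])
y = np.array([simulate_power(x) for x in X])

A = np.column_stack([np.ones(len(X)), X])        # linear response surface
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1 - resid.var() / y.var()                   # coefficient of determination
print(coef, round(r2, 3))
```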
Smith-Osborne, Alexa; Felderhoff, Brandi
2014-01-01
Social work theory advanced the formulation of the construct of the sandwich generation to apply to the emerging generational cohort of caregivers, most often middle-aged women, who were caring for maturing children and aging parents simultaneously. This systematic review extends that focus by synthesizing the literature on sandwich generation caregivers for the general aging population with dementia and for veterans with dementia and polytrauma. It develops potential protective mechanisms based on empirical literature to support an intervention resilience model for social work practitioners. This theoretical model addresses adaptive coping of sandwich-generation families facing ongoing challenges related to caregiving demands.
An Infrastructure for UML-Based Code Generation Tools
NASA Astrophysics Data System (ADS)
Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.
The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance as a way to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach, which uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models and also performs weaving of aspects that have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a PIM created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.
Auto Code Generation for Simulink-Based Attitude Determination Control System
NASA Technical Reports Server (NTRS)
MolinaFraticelli, Jose Carlos
2012-01-01
This paper details the work done to auto-generate C code from a Simulink-based Attitude Determination Control System (ADCS) for use on target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component of the flight software of a satellite. The generated code can be used for hardware-in-the-loop testing of satellite components in a convenient manner, with easily tunable parameters. Because of the nature of embedded hardware components such as microcontrollers, this simulation code cannot be used directly, as is, on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can introduce new complications into the simulation: the execution order of these models can change as a result of the modifications. Great care must be taken to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus, the process is a success, since all the output requirements are met. Based on these results, it can be argued that the generated C code can be used effectively by any desired platform as long as it follows the specific memory requirements established in the Simulink model.
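The acceptance check described above, comparing the simulation output with the generated code's output against a tolerance, can be sketched in a few lines; the file names and tolerances here are hypothetical:

```python
# Minimal sketch of the output-comparison step: compare time histories from
# the Simulink simulation and from the auto-generated C code (both assumed to
# have been exported to CSV arrays) against an agreed tolerance.
import numpy as np

sim_out = np.loadtxt("adcs_sim_output.csv", delimiter=",")   # reference model
code_out = np.loadtxt("adcs_code_output.csv", delimiter=",") # generated C code

abs_err = np.max(np.abs(sim_out - code_out))
ok = np.allclose(sim_out, code_out, rtol=1e-5, atol=1e-8)
print(f"max |error| = {abs_err:.3e}; within acceptable bounds: {ok}")
```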
Hayashi, Hideaki; Nakamura, Go; Chin, Takaaki; Tsuji, Toshio
2017-01-01
This paper proposes an artificial electromyogram (EMG) signal generation model based on signal-dependent noise, which has been ignored in existing methods, by introducing the stochastic construction of EMG signals. In the proposed model, an EMG signal variance value is first generated from a probability distribution whose shape is determined by a commanded muscle force and signal-dependent noise. Artificial EMG signals are then generated from the associated Gaussian distribution with a zero mean and the generated variance. This facilitates representation of artificial EMG signals with signal-dependent noise superimposed according to the muscle activation levels. The frequency characteristics of the EMG signals are also simulated via a shaping filter with parameters determined by an autoregressive model. An estimation method that determines the EMG variance distribution using rectified and smoothed EMG signals, thereby allowing model parameter estimation with a small number of samples, is also incorporated in the proposed model. Moreover, the prediction of the variance distribution under strong muscle contraction from EMG signals with low muscle contraction, and the related artificial EMG generation, are also described. The results of experiments, in which the reproduction capability of the proposed model was evaluated through comparison with measured EMG signals in terms of amplitude, frequency content, and EMG distribution, demonstrate that the proposed model can reproduce the features of measured EMG signals. Further, utilizing the generated EMG signals as training data for a neural network resulted in the classification of upper limb motion with higher precision than learning from measured EMG signals alone. This indicates that the proposed model is also applicable to motion classification. PMID:28640883
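The generation scheme lends itself to a compact sketch. The following is an illustrative simplification rather than the authors' code: the variance-distribution shape, the gain constants, and the AR coefficients are invented placeholders.

```python
# Illustrative EMG generation: a variance series whose spread grows with the
# commanded force (signal-dependent noise), zero-mean Gaussian sampling, and
# an AR shaping filter for the frequency content.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
n = 5000
force = np.linspace(0.1, 1.0, n)             # commanded muscle force (0..1)

# Variance drawn from a lognormal whose mean and spread scale with force.
k_gain, k_sdn = 1.0, 0.4                     # assumed model constants
sigma2 = rng.lognormal(mean=np.log(k_gain * force), sigma=k_sdn)

emg_white = rng.normal(0.0, np.sqrt(sigma2)) # zero mean, generated variance

# AR(2) shaping filter; coefficients are placeholders, not fitted values.
ar = [1.0, -1.2, 0.5]
emg = lfilter([1.0], ar, emg_white)
```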
MATTS- A Step Towards Model Based Testing
NASA Astrophysics Data System (ADS)
Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.
2016-08-01
In this paper we describe a model-based approach to testing on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious demands on application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve these problems, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time- and resource-consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can be partially automated. The basic idea behind the presented study was to start from a formal model (e.g. state machines), generate abstract test cases, and then convert them to concrete executable test cases (input and expected output pairs). The generated concrete test cases were applied to on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
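The pipeline from a formal model to concrete input/expected-output pairs can be illustrated on a toy state machine; the machine, its inputs, and the bounded-depth strategy below are invented for illustration and are much simpler than what a satellite application would need.

```python
# Toy model-based test generation: walk a small Mealy machine and emit
# concrete (input sequence, expected output sequence) pairs.
from itertools import product

# Mealy machine: (state, input) -> (next_state, output)
FSM = {
    ("IDLE", "arm"):     ("ARMED", "ack"),
    ("ARMED", "fire"):   ("ACTIVE", "started"),
    ("ARMED", "disarm"): ("IDLE", "ack"),
    ("ACTIVE", "stop"):  ("IDLE", "stopped"),
}
INPUTS = ["arm", "fire", "disarm", "stop"]

def generate_tests(start="IDLE", depth=3):
    tests = []
    for seq in product(INPUTS, repeat=depth):
        state, outputs = start, []
        for inp in seq:
            if (state, inp) not in FSM:          # abstract case infeasible
                break
            state, out = FSM[(state, inp)]
            outputs.append(out)
        else:
            tests.append((list(seq), outputs))   # concrete input/expected pair
    return tests

for t in generate_tests():
    print(t)
```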
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-06-24
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric: when considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, it faces a major challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combine multiple performance metrics for global climate models and their dynamically downscaled regional climate simulations over North America and to generate a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly, with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and to make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
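For contrast, here is a sketch of the conventional single-metric baseline the abstract describes, with weights inversely proportional to each model's error; the data are synthetic stand-ins. The study's contribution replaces this single RMSE criterion with a Pareto set of trade-off solutions across several metrics.

```python
# Single-metric weighted ensemble: inverse-RMSE weights vs. arithmetic mean.
import numpy as np

rng = np.random.default_rng(2)
obs = np.sin(np.linspace(0, 4 * np.pi, 240))                   # "observations"
models = obs + rng.normal(0, [[0.2], [0.5], [1.0]], (3, 240))  # 3 model runs

rmse = np.sqrt(np.mean((models - obs) ** 2, axis=1))
w = (1 / rmse) / np.sum(1 / rmse)            # inverse-error weights, sum to 1

weighted = w @ models
arithmetic = models.mean(axis=0)
for name, ens in [("weighted", weighted), ("arithmetic", arithmetic)]:
    print(name, np.sqrt(np.mean((ens - obs) ** 2)))
```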
Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.
2011-01-01
Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
NASA Astrophysics Data System (ADS)
Xu, S.; Wang, B.; Liu, J.
2015-10-01
In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.
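For reference, the classical Schwarz-Christoffel formula that underlies both methods, in its textbook half-plane-to-polygon form (the paper's specific constructions for disk and slit target domains differ in detail):

```latex
% Standard Schwarz--Christoffel map f from the upper half-plane to the
% interior of a polygon with interior angles \alpha_k \pi and prevertices z_k
% (A, C are constants fixing position, scale, and rotation):
f(z) = A + C \int^{z} \prod_{k=1}^{n} (\zeta - z_k)^{\alpha_k - 1} \, d\zeta
```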
NASA Astrophysics Data System (ADS)
Cardenas, Jesus Alvaro
An energy and environmental crisis will emerge throughout the world if we continue with our current practices of generating and distributing electricity. A possible solution to this problem is based on the smart grid concept, which is heavily influenced by Information and Communication Technology (ICT). Although the electricity industry is mostly regulated, there are global models used as roadmaps for smart grid implementation, focusing on technologies and the basic generation-distribution-transmission model. This project aims to further develop a business model for future global deployment. It takes into consideration the many factors interacting in the energy provision process, based on the diffusion of technologies and on literature surveys of documents available on the Internet as well as peer-reviewed publications. Tariffs and regulations, distributed energy generation, integration of service providers, consumers becoming producers, self-healing devices, and many other elements are shifting this industry towards liberalization and deregulation, in a sector that has been heavily protected by government because of the importance of electricity to consumers. We propose an Energy Management Business Model composed of four basic elements: Supply Chain, Information and Communication Technology (ICT), Stakeholders Response, and the resulting Green Efficient Energy (GEE). We ground the model in the literature survey, support it with a diffusion analysis of these elements, and validate the overall model with two surveys, one for peers and professionals and the other for experts in the field, based on the Carnegie Mellon Smart Grid Maturity Model (CMU SEI SGMM). The contribution of this model is a simple path to follow for entities that want to achieve environmentally friendly energy with the involvement of technology and all stakeholders.
A model of oil-generation in a waterlogged and closed system
NASA Astrophysics Data System (ADS)
Zhigao, He
This paper presents a new model of the synthetic effects of oil generation in a waterlogged and closed system. It is based on information about oil in high-pressure layers (including gas dissolved in oil), marsh gas and its fermentative solution, fermentation processes and mechanisms, gaseous hydrocarbons of carbonate rocks obtained by acid treatment, oil-field water, recent and ancient sediments, and simulation experiments of artificial marsh gas and biological action. The model differs completely from the theory of oil generation by thermal degradation of kerogen; instead it stresses the synthetic effects of oil generation in special waterlogged and closed geological systems, the importance of pressure in oil-forming processes, and direct oil generation by micro-organisms. Oil generation directly by micro-organisms is a distinctive biochemical reaction. Another feature of this model is that the generation, migration, and accumulation of petroleum are considered as a whole.
NASA Technical Reports Server (NTRS)
Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.
1992-01-01
A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
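A compact sketch of the approach, assuming a toy stochastic model in place of FORMIND and invented summary statistics: simulated summaries are fitted with a Gaussian (the parametric likelihood approximation), and that approximate likelihood drives a plain Metropolis sampler.

```python
# Simulation-based (synthetic) likelihood inside a Metropolis MCMC, in the
# spirit described above; all numbers and the toy model are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def simulate(theta, n_rep=50):
    """Toy stochastic model: returns summary statistics per replicate."""
    x = rng.normal(theta, 1.0, size=(n_rep, 100))
    return np.column_stack([x.mean(axis=1), x.std(axis=1)])

def synthetic_loglik(theta, s_obs):
    """Fit a Gaussian to simulated summaries; evaluate the observed ones."""
    s = simulate(theta)
    mu, cov = s.mean(axis=0), np.cov(s, rowvar=False)
    diff = s_obs - mu
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + np.log(np.linalg.det(cov)))

s_obs = np.array([0.8, 1.0])               # "field data" summaries (made up)
theta, ll = 0.0, synthetic_loglik(0.0, s_obs)
chain = []
for _ in range(2000):                      # plain Metropolis random walk
    prop = theta + rng.normal(0, 0.3)
    ll_prop = synthetic_loglik(prop, s_obs)
    if np.log(rng.random()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta)
print("posterior mean ~", np.mean(chain[500:]))
```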
Reducing a Knowledge-Base Search Space When Data Are Missing
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that, without heuristics, generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustine, Chad
Existing methodologies for estimating the electricity generation potential of Enhanced Geothermal Systems (EGS) assume thermal recovery factors of 5% or less, resulting in relatively low volumetric electricity generation potentials for EGS reservoirs. This study proposes and develops a methodology for calculating EGS electricity generation potential based on the Gringarten conceptual model and analytical solution for heat extraction from fractured rock. The electricity generation potential of a cubic kilometer of rock as a function of temperature is calculated assuming limits on the allowed produced water temperature decline and reservoir lifetime based on surface power plant constraints. The resulting estimates of EGS electricity generation potential can be one to nearly two orders of magnitude larger than those from existing methodologies. The flow per unit fracture surface area from the Gringarten solution is found to be a key term in describing the conceptual reservoir behavior. The methodology can be applied to aid in the design of EGS reservoirs by giving minimum reservoir volume, fracture spacing, number of fractures, and flow requirements for a target reservoir power output. Limitations of the idealized model compared to actual reservoir performance and the implications on reservoir design are discussed.
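The conventional volumetric ("heat in place") estimate that the study improves upon can be sketched in a few lines; the rock properties, recovery factor, conversion efficiency, and lifetime below are typical assumed values, not figures from the paper.

```python
# Volumetric baseline: stored heat in 1 km^3 of rock times a recovery factor
# and a heat-to-electricity conversion efficiency, averaged over a lifetime.
RHO_C = 2.5e6        # volumetric heat capacity of rock, J/(m^3 K) (assumed)
VOLUME = 1.0e9       # one cubic kilometre, m^3

def egs_potential_mwe(t_rock_c, t_ref_c=80.0, recovery=0.05, eta=0.12,
                      lifetime_s=30 * 3.15e7):
    """Average electric power (MW) from one km^3 over the plant lifetime."""
    heat_j = RHO_C * VOLUME * (t_rock_c - t_ref_c)  # thermal energy in place
    electric_j = heat_j * recovery * eta
    return electric_j / lifetime_s / 1e6

for t in (150, 200, 250):
    print(t, "degC ->", round(egs_potential_mwe(t), 1), "MW")
```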
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurwitz, M; Williams, C; Dhou, S
Purpose: Respiratory motion can vary significantly over the course of simulation and treatment. Our goal is to use volumetric images generated with a respiratory motion model to improve the definition of the internal target volume (ITV) and the estimate of delivered dose. Methods: Ten irregular patient breathing patterns spanning 35 seconds each were incorporated into a digital phantom. Ten images over the first five seconds of breathing were used to emulate a 4DCT scan, build the ITV, and generate a patient-specific respiratory motion model which correlated the measured trajectories of markers placed on the patients' chests with the motion of the internal anatomy. This model was used to generate volumetric images over the subsequent thirty seconds of breathing. The increase in the ITV taking into account the full 35 seconds of breathing was assessed with ground-truth and model-generated images. For one patient, a treatment plan based on the initial ITV was created and the delivered dose was estimated using images from the first five seconds as well as ground-truth and model-generated images from the next 30 seconds. Results: The increase in the ITV ranged from 0.2 cc to 6.9 cc for the ten patients based on ground-truth information. The model predicted this increase in the ITV with an average error of 0.8 cc. The delivered dose to the tumor (D95) changed significantly from 57 Gy to 41 Gy when estimated using 5 seconds and 30 seconds, respectively. The model captured this effect, giving an estimated D95 of 44 Gy. Conclusion: A respiratory motion model generating volumetric images of the internal patient anatomy could be useful in estimating the increase in the ITV due to irregular breathing during simulation and in assessing delivered dose during treatment. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc. and Radiological Society of North America Research Scholar Grant #RSCH1206.
Second Generation Models for Strain-Based Design
DOT National Transportation Integrated Search
2011-08-30
This project covers the development of tensile strain design models which form a key part of the strain-based design of pipelines. The strain-based design includes at least two limit states, tensile rupture, and compressive buckling. The tensile stra...
Dynamic Models Applied to Landslides: Study Case Angangueo, MICHOACÁN, MÉXICO.
NASA Astrophysics Data System (ADS)
Torres Fernandez, L.; Hernández Madrigal, V. M., , Dr; Capra, L.; Domínguez Mota, F. J., , Dr
2017-12-01
Most existing models for landslide zonation are static: they do not consider the dynamic behavior of the triggering factor. This results in a limited representation of the actual zonation of slope instability; such models have only short-term validity, cannot be applied to the design of early warning systems, and so on. In Mexico in particular, these models are static because they do not consider triggering factors such as precipitation. In this work, we present a numerical evaluation of landslide susceptibility based on probabilistic methods. These are based on the generation of time series from meteorological stations; given the limited information available, an interpolation is made to simulate precipitation across the zone. The resulting information is integrated in PCRaster and, together with the conditioning factors, makes it possible to generate a dynamic model. This model will be applied to landslide zonation in the municipality of Angangueo, which is characterized by frequent debris and mud flows and by translational and rotational landslides triggered by atypical precipitation, such as that recorded in 2010, which caused economic and human losses. With these models, it would be possible to generate probable scenarios that help the population of Angangueo reduce risk and carry out ongoing resilience activities.
Limits and Economic Effects of Distributed PV Generation in North and South Carolina
NASA Astrophysics Data System (ADS)
Holt, Kyra Moore
The variability of renewable sources such as wind and solar, when integrated into the electrical system, must be compensated by traditional generation sources in order to maintain the constant balance of supply and demand required for grid stability. The goal of this study is to analyze the effects of increasingly large levels of solar photovoltaic (PV) penetration (in terms of a percentage of annual energy production) on a test grid with characteristics similar to the Duke Energy Carolinas (DEC) and Progress Energy Carolinas (PEC) regions of North and South Carolina. PV production is modeled entering the system at the distribution level, and regional PV capacity is based on household density. A gridded hourly global horizontal irradiance (GHI) dataset is used to capture the variable nature of PV generation. A unit commitment model (UCM) is then used to determine the hourly dispatch of generators, based on generator parameters and costs, to supply generation to meet demand. Annual modeled results for six different scenarios are evaluated to determine the technical, environmental, and economic effects of varying levels of distributed PV penetration on the system. This study finds that the main limiting factor for PV integration in the DEC and PEC balancing authority regions is the large generating capacity of base-load nuclear plants within the system. This threshold starts to affect system stability at integration levels of 5.7%. System errors, defined by imbalances caused by over- or under-generation with respect to demand, are identified in the model; however, the validity of these errors in a real-world context needs further examination, given the lack of high-frequency irradiance data and modeling limitations. Operational system costs decreased as expected with PV integration, although further research is needed to explore the impacts of the capital costs required to achieve the penetration levels found in this study. PV system generation was found mainly to displace coal generation, creating a loss of revenue for generator owners. In all scenarios, CO2 emissions were reduced with PV integration. This reduction could be used to meet impending EPA state-specific CO2 emissions targets.
Accuracy of latent-variable estimation in Bayesian semi-supervised learning.
Yamazaki, Keisuke
2015-09-01
Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than that in unsupervised learning, and one concern is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of the estimation of latent variables. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified.
Mathematical modeling to predict residential solid waste generation.
Benítez, Sara Ojeda; Lozano-Olvera, Gabriela; Morelos, Raúl Adalberto; Vega, Carolina Armijo de
2008-01-01
One of the challenges faced by waste management authorities is determining the amount of waste generated by households in order to establish waste management systems, charge rates compatible with principles applied worldwide, and design a fair payment system for households according to the amount of residential solid waste (RSW) they generate. The goal of this research work was to establish mathematical models that correlate per-capita generation of RSW with the following variables: education, income per household, and number of residents. This work was based on data from a three-stage study on the generation, quantification, and composition of residential waste in a Mexican city. In order to define prediction models, five variables were identified and included in the modeling. For each waste sampling stage a different mathematical model was developed, in order to find the model that showed the best linear relation for predicting residential solid waste generation. Models were then established to explore combinations of the included variables, and those that showed a higher R² were selected. The tests applied were for normality, multicollinearity, and heteroskedasticity. Another model, formulated with four variables, was generated, and the Durbin-Watson test was applied to it. Finally, a general mathematical model is proposed to predict residential waste generation, which accounts for 51% of the total.
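A schematic of the model-building steps on synthetic data, with invented coefficients: an ordinary-least-squares fit in the three reported variables, plus a hand-computed Durbin-Watson statistic of the kind the study applies.

```python
# OLS regression of per-capita waste on household variables (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
n = 120
education = rng.integers(0, 16, n)          # years of schooling
income = rng.normal(8000, 2500, n)          # income per household
residents = rng.integers(1, 8, n)           # number of residents

waste = (0.3 + 0.01 * education + 2e-5 * income - 0.04 * residents
         + rng.normal(0, 0.05, n))          # kg per capita per day (invented)

X = np.column_stack([np.ones(n), education, income, residents])
beta, *_ = np.linalg.lstsq(X, waste, rcond=None)
resid = waste - X @ beta

r2 = 1 - resid @ resid / np.sum((waste - waste.mean()) ** 2)
dw = np.sum(np.diff(resid) ** 2) / (resid @ resid)  # ~2 = no autocorrelation
print(f"R^2 = {r2:.3f}, Durbin-Watson = {dw:.2f}")
```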
Infrared radiation scene generation of stars and planets in celestial background
NASA Astrophysics Data System (ADS)
Guo, Feng; Hong, Yaohui; Xu, Xiaojian
2014-10-01
An infrared (IR) radiation generation model of stars and planets in a celestial background is proposed in this paper. Cohen's spectral template [1] is modified for higher spectral resolution and accuracy. Based on the improved spectral template for stars and the blackbody assumption for planets, an IR radiation model is developed that is able to generate the celestial IR background for stars and planets appearing in the sensor's field of view (FOV) for a specified observing date and time, location, viewpoint, and spectral band over 1.2 μm to 35 μm. In the current model, the initial locations of stars are calculated based on the midcourse space experiment (MSX) IR astronomical catalogue (MSX-IRAC) [2], while the initial locations of planets are calculated using the secular variations of the planetary orbits (VSOP) theory. Simulation results show that the new IR radiation model has higher resolution and accuracy than conventional models.
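The planet component rests on the blackbody assumption, which reduces to evaluating Planck's law; the helper below is standard physics rather than code from the paper, and the 288 K example temperature is arbitrary.

```python
# Planck's law for spectral radiance over the paper's 1.2-35 um band.
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B_lambda(T) in W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

wl = np.linspace(1.2e-6, 35e-6, 500)
print(planck_radiance(wl, 288.0).max())   # e.g., an Earth-like body at 288 K
```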
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2017-11-01
Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined by generating high-density point clouds as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out, and the comparison results are reported.
The Knowledge Building Paradigm: A Model of Learning for Net Generation Students
ERIC Educational Resources Information Center
Philip, Donald
2005-01-01
In this article Donald Philip describes Knowledge Building, a pedagogy based on the way research organizations function. The global economy, Philip argues, is driving a shift from older, industrial models to the model of the business as a learning organization. The cognitive patterns of today's Net Generation students, formed by lifetime exposure…
Inferring a District-Based Hierarchical Structure of Social Contacts from Census Data
Yu, Zhiwen; Liu, Jiming; Zhu, Xianjun
2015-01-01
Researchers have recently paid attention to social contact patterns among individuals due to their useful applications in such areas as epidemic evaluation and control, public health decisions, chronic disease research and social network research. Although some studies have estimated social contact patterns from social networks and surveys, few have considered how to infer the hierarchical structure of social contacts directly from census data. In this paper, we focus on inferring an individual's social contact patterns from detailed census data, and generate various types of social contact patterns such as hierarchical-district-structure-based, cross-district and age-district-based patterns. We evaluate the newly generated contact patterns, derived from detailed 2011 Hong Kong census data, by incorporating them into a model and simulation of the 2009 Hong Kong H1N1 epidemic. We then compare the newly generated social contact patterns with the mixing patterns that are often used in the literature, and draw the following conclusions. First, the generation of social contact patterns based on a hierarchical district structure allows for simulations at different district levels. Second, the newly generated social contact patterns reflect individuals' social contacts. Third, the newly generated social contact patterns improve the accuracy of the SEIR-based epidemic model. PMID:25679787
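A minimal sketch of the kind of SEIR integration such contact patterns feed; the rates, population, and single effective contact rate below are placeholders, not the paper's calibrated 2009 H1N1 values.

```python
# Discrete-time SEIR model; a district-structured contact pattern would
# modulate beta per district, but a single effective beta stands in here.
import numpy as np

def seir(beta, sigma=1 / 3.0, gamma=1 / 5.0, n=7.0e6, days=200, i0=10):
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    out = []
    for _ in range(days):
        new_e = beta * s * i / n      # infections scale with contact rate
        s, e, i, r = (s - new_e,
                      e + new_e - sigma * e,
                      i + sigma * e - gamma * i,
                      r + gamma * i)
        out.append(i)
    return np.array(out)

epidemic = seir(beta=0.4)
print("peak infectious:", int(epidemic.max()),
      "on day", int(epidemic.argmax()))
```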
EOID System Model Validation, Metrics, and Synthetic Clutter Generation
2003-09-30
Our long-term goal is to accurately predict the capability of the current generation of laser-based underwater imaging sensors to perform Electro-Optic Identification (EOID) against relevant targets in a variety of realistic environmental conditions. The models will predict the impact of
Method of performing computational aeroelastic analyses
NASA Technical Reports Server (NTRS)
Silva, Walter A. (Inventor)
2011-01-01
Computational aeroelastic analyses typically use a mathematical model for the structural modes of a flexible structure and a nonlinear aerodynamic model that can generate a plurality of unsteady aerodynamic responses based on the structural modes for conditions defining an aerodynamic condition of the flexible structure. In the present invention, a linear state-space model is generated using a single execution of the nonlinear aerodynamic model for all of the structural modes where a family of orthogonal functions is used as the inputs. Then, static and dynamic aeroelastic solutions are generated using computational interaction between the mathematical model and the linear state-space model for a plurality of periodic points in time.
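The linear state-space model referred to here has the standard form below (generic notation; the patent's particular A, B, C, D matrices are identified from the single nonlinear execution, with orthogonal-function inputs u and unsteady aerodynamic responses y):

```latex
% Generic continuous-time linear state-space form (standard notation):
\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t)
```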
Model for energy transfer in the solar wind: Model results
NASA Technical Reports Server (NTRS)
Barnes, A. A., Jr.; Hartle, R. E.
1972-01-01
A description is given of the results of solar wind flow in which the heating is due to (1) propagation and dissipation of hydromagnetic waves generated near the base of the wind, and (2) thermal conduction. A series of models is generated for fixed values of density, electron and proton temperature, and magnetic field at the base by varying the wave intensity at the base of the model. This series of models predicts the observed correlation between flow speed and proton temperature for a large range of velocities. The wave heating takes place in a shell about the sun greater than or approximately equal to 10 solar radii thick. We conclude that large-scale variations observed in the solar wind are probably due mainly to variation in the hydromagnetic wave flux near the sun.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yuping; Zheng, Qipeng P.; Wang, Jianhui
2014-11-01
This paper presents a two-stage stochastic unit commitment (UC) model, which integrates non-generation resources such as demand response (DR) and energy storage (ES) while including risk constraints to balance cost and system reliability under the fluctuation of variable generation such as wind and solar power. The paper uses conditional value-at-risk (CVaR) measures to model risks associated with the decisions in a stochastic environment. In contrast to chance-constrained models requiring extra binary variables, risk constraints based on CVaR involve only linear constraints and continuous variables, making them more computationally attractive. The proposed models with risk constraints are able to avoid over-conservative solutions but still ensure system reliability as represented by loss of loads. Numerical experiments are conducted to study the effects of non-generation resources on generator schedules and the difference in total expected generation costs with risk consideration. Sensitivity analysis based on reliability parameters is also performed to test the decision preferences of confidence levels and load-shedding loss allowances on generation cost reduction.
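The CVaR measure referred to here is commonly written in the Rockafellar-Uryasev form, which is what makes a linear, continuous-variable formulation possible (standard result, not the paper's full model):

```latex
% Rockafellar--Uryasev form of CVaR for loss L at confidence level \alpha,
% with (\cdot)^+ = \max(\cdot, 0):
\mathrm{CVaR}_{\alpha}(L) \;=\; \min_{\eta \in \mathbb{R}}
  \left\{ \eta + \frac{1}{1-\alpha}\, \mathbb{E}\big[(L - \eta)^{+}\big] \right\}
```

Over a finite scenario set, the inner expectation linearizes with one nonnegative auxiliary variable per scenario, so, consistent with the abstract's claim, no extra binary variables are needed.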
From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild.
Asthana, Akshay; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Cheng, Shiyang; Pantic, Maja
2015-06-01
We propose a face alignment framework that relies on the texture model generated by the responses of discriminatively trained part-based filters. Unlike standard texture models built from pixel intensities or responses generated by generic filters (e.g. Gabor), our framework has two important advantages. First, by virtue of discriminative training, invariance to external variations (like identity, pose, illumination and expression) is achieved. Second, we show that the responses generated by discriminatively trained filters (or patch-experts) are sparse and can be modeled using a very small number of parameters. As a result, the optimization methods based on the proposed texture model can better cope with unseen variations. We illustrate this point by formulating both part-based and holistic approaches for generic face alignment and show that our framework outperforms the state-of-the-art on multiple "wild" databases. The code and dataset annotations are available for research purposes from http://ibug.doc.ic.ac.uk/resources.
NASA Astrophysics Data System (ADS)
Shurupov, A. V.; Zavalova, V. E.; Kozlov, A. V.; Shurupov, M. A.; Povareshkin, M. N.; Kozlov, A. A.; Shurupova, N. P.
2018-01-01
Experimental models of powerful microsecond-duration current-pulse generators, based on explosive magnetic generators and a voltage impulse generator, have been developed to produce electromagnetic pulse effects on energy facilities in order to verify their stability. Sharpening of the voltage pulse is carried out using an electro-explosive current interrupter made of copper wires with diameters of 80 and 120 μm. Experimental results from the investigation of these models are presented. Voltage fronts of about 100 ns and electric field strengths of 800 kV/m were registered.
Community-based Rehabilitation in the Philippines: Using Income Generation Projects.
ERIC Educational Resources Information Center
Santos Valdez, Luzviminda Joy
1998-01-01
Provides examples of people with disabilities who are self-employed and able to generate their own income through community-based models of rehabilitation. Suggests that these small-scale efforts demonstrate the importance of understanding the local socioeconomic context of rehabilitation. (SK)
Developing models for the prediction of hospital healthcare waste generation rate.
Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe
2016-01-01
An increase in the number of health institutions, along with the frequent use of disposable medical products, has contributed to an increasing healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generation beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines, and evaluate the prevailing strategies for healthcare waste handling and disposal. However, no mathematical model has been developed for Ethiopian hospitals to predict the healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of the healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition and generation rate and to develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R² = 0.965), and a weak one with the number of outpatients (R² = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital type (public, teaching and private). In these models, the numbers of inpatients and outpatients were revealed to be significant factors in the quantity of waste generated. The influence of the number of inpatients and outpatients treated varies across hospitals. Therefore, different models were developed based on the types of hospitals.
NASA Astrophysics Data System (ADS)
Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao
2018-02-01
Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load, and for ensuring the quality of the power supply. Power grids require each power generation unit to have satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method of calculating these indices is based on particular data samples from AGC responses and can lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between the performance indices and the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.
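A sketch of the identification idea under a simple assumed model structure (a delayed first-order lag; neither the structure nor the numbers come from the paper): fit the unit's response to a load command, then read indices such as delay and time constant from the fitted parameters.

```python
# Fit a delayed first-order step response to a synthetic AGC response.
import numpy as np
from scipy.optimize import curve_fit

def first_order_lag(t, gain, tau, delay):
    """Step response of a delayed first-order system."""
    y = gain * (1 - np.exp(-(t - delay) / tau))
    return np.where(t >= delay, y, 0.0)

t = np.linspace(0, 120, 241)                       # seconds
measured = (first_order_lag(t, 50.0, 20.0, 5.0)
            + np.random.default_rng(5).normal(0, 0.5, t.size))  # MW, synthetic

(gain, tau, delay), _ = curve_fit(first_order_lag, t, measured,
                                  p0=(40.0, 10.0, 1.0))
print(f"gain={gain:.1f} MW, time constant={tau:.1f} s, delay={delay:.1f} s")
```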
NASA Astrophysics Data System (ADS)
Vaidyanathan, A.; Yip, F.
2017-12-01
Context: Studies that have explored the impacts of environmental exposure on human health have mostly relied on data from weather stations, which can be limited in geographic scope. For this assessment, we: (1) evaluated the performance of the meteorological data from the North American Land Data Assimilation System Phase 2 (NLDAS) model against measurements from weather stations for public health purposes, and specifically for CDC's Environmental Public Health Tracking Program; and (2) conducted a health assessment to explore the relationship between heat exposure and mortality, and examined region-specific differences in heat-mortality (H-M) relationships when using model-based estimates in place of measurements from weather stations. Methods: Meteorological data from the NLDAS Phase 2 model were evaluated against measurements from weather stations. A time-series analysis was conducted, using both station- and model-based data, to generate H-M relationships for counties in the U.S. The county-specific risk information was pooled to characterize regional relationships for both station- and model-based data, which were then compared to identify degrees of overlap and discrepancies between results generated using the two data sources. Results: NLDAS-based heat metrics were in agreement with those generated using weather station data. In general, the H-M relationship tended to be non-linear and varied by region, particularly in the heat index value at which the health risks become positively significant. However, there was a high degree of overlap between region-specific H-M relationships generated from weather stations and from the NLDAS model. Interpretation: Heat metrics from the NLDAS model are available for all counties in the coterminous U.S. from 1979 to 2015. These data can facilitate health research and surveillance activities exploring health impacts associated with long-term heat exposure at finer geographic scales. Conclusion: High spatiotemporal coverage of environmental health data is an important attribute in understanding potential public health impacts. Given the limited geographic scope of station-based measurements, adopting NLDAS-based modeled estimates in CDC's Tracking Network would provide a more comprehensive understanding of the effects of specific meteorological exposures on human health.
Collective bubble oscillations as a component of surf infrasound.
Park, Joseph; Garcés, Milton; Fee, David; Pawlak, Geno
2008-05-01
Plunging surf is a known generator of infrasound, though the mechanisms have not been clearly identified. A model based on collective bubble oscillations created by demise of the initially entrained air pocket is examined. Computed spectra are compared to infrasound data from the island of Kauai during periods of medium, large, and extreme surf. Model results suggest that bubble oscillations generated by plunging waves are plausible generators of infrasound, and that dynamic bubble plume evolution on a temporal scale comparable to the breaking wave period may contribute to the broad spectral lobe of dominant infrasonic energy observed in measured data. Application of an inverse model has potential to characterize breaking wave size distributions, energy, and temporal changes in seafloor morphology based on remotely sensed infrasound.
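The single-bubble resonance behind such collective-oscillation models is the classical Minnaert frequency (a standard result, not derived in the paper); collective plume modes shift this frequency downward as the void fraction grows, which is how large plumes reach infrasonic frequencies:

```latex
% Minnaert resonance of an air bubble of radius a, with \gamma the ratio of
% specific heats of air, p_0 the ambient pressure, \rho the water density:
f_0 \;=\; \frac{1}{2\pi a}\sqrt{\frac{3\gamma\, p_0}{\rho}}
```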
Learning a generative model of images by factoring appearance and shape.
Le Roux, Nicolas; Heess, Nicolas; Shotton, Jamie; Winn, John
2011-03-01
Computer vision has grown tremendously in the past two decades. Despite all efforts, existing attempts at matching parts of the human visual system's extraordinary ability to understand visual scenes lack either scope or power. By combining the advantages of general low-level generative models and powerful layer-based and hierarchical models, this work aims at being a first step toward richer, more flexible models of images. After comparing various types of restricted Boltzmann machines (RBMs) able to model continuous-valued data, we introduce our basic model, the masked RBM, which explicitly models occlusion boundaries in image patches by factoring the appearance of any patch region from its shape. We then propose a generative model of larger images using a field of such RBMs. Finally, we discuss how masked RBMs could be stacked to form a deep model able to generate more complicated structures and suitable for various tasks such as segmentation or object recognition.
Stochastic Watershed Models for Risk Based Decision Making
NASA Astrophysics Data System (ADS)
Vogel, R. M.
2017-12-01
Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) which could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision-making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
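A schematic SWM in a few lines, assuming a stand-in deterministic hydrograph and an invented lognormal AR(1) error model: the deterministic output is perturbed with autocorrelated multiplicative errors to yield an ensemble of plausible traces.

```python
# Perturb a deterministic watershed-model hydrograph with stochastic,
# autocorrelated multiplicative errors to produce an ensemble of traces.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(365)
deterministic_q = 20 + 15 * np.sin(2 * np.pi * t / 365) ** 2  # stand-in output

def swm_ensemble(q, n_traces=100, rho=0.8, sigma=0.3):
    eps = np.zeros((n_traces, q.size))
    eps[:, 0] = rng.normal(0, sigma, n_traces)
    for k in range(1, q.size):                # AR(1) in log space
        eps[:, k] = (rho * eps[:, k - 1]
                     + rng.normal(0, sigma * np.sqrt(1 - rho**2), n_traces))
    return q * np.exp(eps - sigma**2 / 2)     # bias-corrected multiplicative error

traces = swm_ensemble(deterministic_q)
print("ensemble 90% band on day 180:",
      np.percentile(traces[:, 180], [5, 95]).round(1))
```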
Ren, Jiaping; Wang, Xinjie; Manocha, Dinesh
2016-01-01
We present a biologically plausible dynamics model to simulate swarms of flying insects. Our formulation, which is based on biological conclusions and experimental observations, is designed to simulate large insect swarms of varying densities. We use a force-based model that captures different interactions between the insects and the environment and computes collision-free trajectories for each individual insect. Furthermore, we model noise as a constructive force at the collective level and present a technique to generate noise-induced insect movements in a large swarm that are similar to those observed in real-world trajectories. We use a data-driven formulation that is based on pre-recorded insect trajectories. We also present a novel evaluation metric and a statistical validation approach that takes into account various characteristics of insect motions. In practice, the combination of a curl noise function with our dynamics model generates realistic swarm simulations and emergent behaviors. We highlight its performance for simulating large flying swarms of midges, fruit flies, locusts and moths and demonstrate many collective behaviors, including aggregation, migration, phase transition, and escape responses. PMID:27187068
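The curl-noise ingredient can be sketched directly: taking the curl of a smooth random potential gives a divergence-free velocity field, which keeps noise-driven motion from artificially compressing or dispersing the swarm. The potential construction below (random Fourier modes) is one common choice, not necessarily the authors'.

```python
# 2D curl noise: v = (d psi/dy, -d psi/dx) of a smooth random potential psi.
import numpy as np

rng = np.random.default_rng(7)
K = rng.normal(0, 2.0, (8, 2))      # random wave vectors
PHI = rng.uniform(0, 2 * np.pi, 8)  # random phases

def potential(p):
    return np.sum(np.cos(p @ K.T + PHI), axis=-1)

def curl_noise(p, h=1e-4):
    """Divergence-free field from finite-difference derivatives of psi."""
    dx = (potential(p + [h, 0]) - potential(p - [h, 0])) / (2 * h)
    dy = (potential(p + [0, h]) - potential(p - [0, h])) / (2 * h)
    return np.stack([dy, -dx], axis=-1)

pos = rng.uniform(-1, 1, (500, 2))  # a swarm of 500 agents
for _ in range(100):                # advect the swarm along the noise field
    pos += 0.01 * curl_noise(pos)
```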
Space-based laser-driven MHD generator: Feasibility study
NASA Technical Reports Server (NTRS)
Choi, S. H.
1986-01-01
The feasibility of a laser-driven MHD generator, as a candidate receiver for a space-based laser power transmission system, was investigated. On the basis of reasonable parameters obtained from the literature, a model of the laser-driven MHD generator was developed under the assumption of steady, turbulent, two-dimensional flow. These assumptions were based on the continuous and steady generation of plasmas by exposure to the continuous-wave laser beam, which induces a steady back pressure that enables the medium to flow steadily. The model considered here took the turbulent nature of plasmas into account in the two-dimensional geometry of the generator. For these conditions, with plasma parameters defining the thermal conductivity, viscosity, and electrical conductivity of the plasma flow, a generator efficiency of 53.3% was calculated. If turbulent effects and nonequilibrium ionization are taken into account, the efficiency is 43.2%. The study shows that the laser-driven MHD system has potential as a laser power receiver for space applications because of its high energy conversion efficiency, high energy density and relatively simple mechanism as compared to other energy conversion cycles.
Modeling the Impacts of Solar Distributed Generation on U.S. Water Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amanda, Smith; Omitaomu, Olufemi A; Jaron, Peck
2015-01-01
Distributed electric power generation technologies typically use little or no water per unit of electrical energy produced; in particular, renewable energy sources such as solar PV systems do not require cooling systems and present an opportunity to reduce water usage for power generation. Within the US, the fuel mix used for power generation varies regionally, and certain areas use more water for power generation than others. The need to reduce water usage for power generation is even more urgent in view of climate change uncertainties. In this paper, we present an example case within the state of Tennessee, one of the top four states in water consumption for power generation and one of the states with little or no potential for developing centralized renewable energy generation. The potential for developing PV generation within Knox County, Tennessee, is studied, along with the potential for reducing water withdrawal and consumption within the Tennessee Valley stream region. Electric power generation plants in the region are quantified for their electricity production and expected water withdrawal and consumption over one year, where electrical generation data is provided over one year and water usage is modeled based on the cooling system(s) in use. Potential solar PV electrical production is modeled based on LiDAR data and weather data for the same year. Our proposed methodology can be summarized as follows: First, the potential solar generation is compared against the local grid demand. Next, electrical generation reductions are specified that would result in a given reduction in water withdrawal and a given reduction in water consumption, and compared with the current water withdrawal and consumption rates for the existing fuel mix. The increase in solar PV development that would produce an equivalent amount of power is then determined. In this way, we consider how targeted local actions may affect the larger stream region through thoughtful energy development. This model can be applied to other regions and other types of distributed generation, and used as a framework for modeling alternative growth scenarios in power production capacity in addition to modeling adjustments to existing capacity.
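A minimal sketch of the water-offset bookkeeping step in the methodology above: PV generation displaces thermoelectric generation, and the avoided withdrawal and consumption follow from the displaced fuel mix. The water-intensity factors and plant mix here are hypothetical placeholders, not values from the paper.

```python
# Hypothetical water-intensity factors (gal/MWh); actual values depend on the
# cooling systems of the displaced plants, as modeled in the paper.
WITHDRAWAL = {"coal_once_through": 36000, "coal_recirculating": 1100}
CONSUMPTION = {"coal_once_through": 250, "coal_recirculating": 950}

def water_offset(pv_generation_mwh, displaced_mix):
    """Water withdrawal/consumption avoided by displacing thermoelectric
    generation with solar PV (which uses ~0 water in operation)."""
    withdrawal = sum(pv_generation_mwh * share * WITHDRAWAL[plant]
                     for plant, share in displaced_mix.items())
    consumption = sum(pv_generation_mwh * share * CONSUMPTION[plant]
                      for plant, share in displaced_mix.items())
    return withdrawal, consumption

w, c = water_offset(50_000, {"coal_once_through": 0.6, "coal_recirculating": 0.4})
print(f"avoided withdrawal: {w:.3g} gal, avoided consumption: {c:.3g} gal")
```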
Physico-Chemical Dynamics of Nanoparticle Formation during Laser Decontamination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, M.D.
2005-06-01
Laser-ablation based decontamination is a new and effective approach for simultaneous removal and characterization of contaminants from surfaces (e.g., building interior and exterior walls, ground floors, etc.). The scientific objectives of this research are to: (1) characterize particulate matter generated during the laser-ablation based decontamination, (2) develop a technique for simultaneous cleaning and spectroscopic verification, and (3) develop an empirical model for predicting particle generation for the size range from 10 nm to tens of micrometers. This research project provides fundamental data obtained through a systematic study on the particle generation mechanism, and also provides a working model for prediction of particle generation such that an effective operational strategy can be devised to facilitate worker protection.
Realistic facial animation generation based on facial expression mapping
NASA Astrophysics Data System (ADS)
Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe
2014-01-01
Facial expressions reflect the internal emotional states of a character or arise in response to social communication. Though much effort has been devoted to generating realistic facial expressions, this remains a challenging topic due to human beings' sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames based on FACS to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.
Next generation of weather generators on web service framework
NASA Astrophysics Data System (ADS)
Chinnachodteeranun, R.; Hung, N. D.; Honda, K.; Ines, A. V. M.
2016-12-01
A weather generator is a statistical model that synthesizes possible realizations of long-term historical weather for the future. It stochastically generates several tens to hundreds of realizations based on statistical analysis. These realizations are essential inputs to crop models for simulating crop growth and yield. Moreover, they contribute to analyzing the uncertainty that weather imposes on crop development stages and to decision support systems for, e.g., water and fertilizer management. Performing crop modeling requires multidisciplinary skills, which has limited the use of a weather generator to the research group that developed it and raised a barrier for newcomers. To improve the procedures for running weather generators and to standardize the way realizations are acquired, we implemented a framework that provides weather generators as web services supporting service interoperability. Legacy weather generator programs were wrapped in the web service framework. The service interfaces were implemented based on an international standard, the Sensor Observation Service (SOS) defined by the Open Geospatial Consortium (OGC). Clients can request realizations generated by the model through the SOS web service. Hierarchical data preparation processes required by the weather generator are also implemented as web services and seamlessly wired. Analysts and applications can easily invoke the services over a network. The services facilitate the development of agricultural applications, reduce the workload of analysts on iterative data preparation, and encapsulate the legacy weather generator programs. This architectural design and implementation can serve as a prototype for constructing further services on top of an interoperable sensor network system. The framework opens an opportunity for other sectors, such as application developers and scientists in other fields, to utilize weather generators.
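As a hedged illustration of the client side of such a framework, the snippet below issues an OGC SOS 2.0 GetObservation request over HTTP using key-value-pair encoding. The endpoint, offering, and observed-property names are hypothetical; a real deployment would define its own identifiers and temporal-filter encoding.

```python
import requests

# Hypothetical SOS endpoint; the interface follows the OGC SOS 2.0 KVP binding.
ENDPOINT = "http://example.org/weather-generator/sos"

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "wgen-realizations",       # hypothetical offering name
    "observedProperty": "daily-rainfall",  # hypothetical property name
    # Temporal filter syntax may differ between server implementations.
    "temporalFilter": "om:phenomenonTime,2017-01-01/2017-12-31",
}

response = requests.get(ENDPOINT, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])  # O&M-encoded realizations, one per generated trace
```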
Garcia, Marie-Paule; Villoing, Daphnée; McKay, Erin; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel
2015-12-01
The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of the approach lies in the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. The proposed platform offers a generic framework to implement any scintigraphic imaging protocol and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities could be supported in the future, such as positron emission tomography.
NASA Astrophysics Data System (ADS)
Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III
2005-11-01
Model-based Optical Proximity correction has become an indispensable tool for achieving wafer pattern to design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states such as would occur with a dose or focus variation - to understands and manipulate the final pattern correction to a more process robust configuration. The study will first examine and validate the process of generating an off-target model, then examine the quality of the off-target model. Once the off-target model is proven, it will be used to demonstrate methods of generating process robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process robust corrections. With these off-target models as tools, mask production cycle times can be reduced.
Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms
Gao, Connie W.; Allen, Joshua W.; Green, William H.; ...
2016-02-24
Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
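A minimal sketch of the rate-based enlargement idea described above: edge species whose formation flux exceeds a tolerance fraction of a characteristic flux are promoted into the core model. The flux values and the simplified characteristic-flux definition are illustrative; RMG's actual criterion is more elaborate.

```python
def enlarge(core, edge_fluxes, epsilon=0.05):
    """One step of a rate-based enlargement: promote edge species whose
    formation flux exceeds a fraction of a characteristic flux.
    Here the characteristic flux is simplified to the largest edge flux."""
    r_char = max(abs(f) for f in edge_fluxes.values())
    promoted = {s for s, f in edge_fluxes.items() if abs(f) > epsilon * r_char}
    return core | promoted

core = {"CH4", "O2"}                                   # current core species
edge_fluxes = {"CH3": 5e-3, "HO2": 2e-6, "H2O2": 8e-4}  # illustrative fluxes
print(enlarge(core, edge_fluxes))  # CH3 and H2O2 are promoted; HO2 is not
```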
Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime; Liebschner, Michael A K; Xia, James J
2018-04-01
Accurate surgical planning and prediction of craniomaxillofacial surgery outcomes requires simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art in model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for patients depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analyses. In this study, we propose a new framework for patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving accuracy in anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parameterization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedron mesh to best reflect clinicians' needs. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE meshes showed high surface matching accuracy, element quality, and internal structure matching accuracy. They can be directly and effectively used for clinical simulation of facial soft tissue change.
NASA Astrophysics Data System (ADS)
Xiao, Heng; Gou, Xiaolong; Yang, Suwen
2011-05-01
Thermoelectric (TE) power generation technology, due to its several advantages, is becoming a noteworthy research direction. Many researchers conduct performance analysis and optimization of TE devices and related applications based on the generalized thermoelectric energy balance equations. These generalized TE equations account for the internal irreversibility of Joule heating inside the thermoelectric device and heat leakage through the thermoelectric couple leg. However, they assume that the thermoelectric generator (TEG) is thermally isolated from the surroundings except for the heat flows at the cold and hot junctions. Since a practical thermoelectric generator is a multi-element device composed of many fundamental TE couple legs, the effect of heat transfer between the TE couple leg and the ambient environment is not negligible. In this paper, based on basic theories of thermoelectric power generation and thermal science, detailed modeling of a thermoelectric generator that accounts for energy loss from the TE couple leg is reported. The revised generalized thermoelectric energy balance equations, considering the effect of heat transfer between the TE couple leg and the ambient environment, have been derived. Furthermore, characteristics of a multi-element thermoelectric generator with irreversibility have been investigated on the basis of the newly derived TE equations. In the present investigation, second-law-based thermodynamic analysis (exergy analysis) has been applied to the irreversible heat transfer process in particular. It is found that the existence of the irreversible heat convection process causes a large loss of heat exergy in the TEG system, and that using thermoelectric generators for low-grade waste heat recovery has promising potential. The results of the irreversibility analysis, especially the effects of irreversibility on generator system performance, provide guidance for the development and application of thermoelectric generators, particularly the design and optimization of TE modules.
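For reference, the generalized thermoelectric energy balance equations that such analyses build on take the following standard form, with Seebeck coefficient α, thermal conductance K, internal resistance R, and junction temperatures T_h and T_c; the paper's revision adds leg-to-ambient heat transfer terms whose exact form is given in the article itself:

```latex
\begin{aligned}
Q_h &= \alpha I T_h + K\,(T_h - T_c) - \tfrac{1}{2} I^{2} R,\\
Q_c &= \alpha I T_c + K\,(T_h - T_c) + \tfrac{1}{2} I^{2} R,\\
P   &= Q_h - Q_c = \alpha I\,(T_h - T_c) - I^{2} R.
\end{aligned}
```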
Generating target system specifications from a domain model using CLIPS
NASA Technical Reports Server (NTRS)
Sugumaran, Vijayan; Gomaa, Hassan; Kerschberg, Larry
1991-01-01
The quest for reuse in software engineering is still being pursued and researchers are actively investigating the domain modeling approach to software construction. There are several domain modeling efforts reported in the literature and they all agree that the components that are generated from domain modeling are more conducive to reuse. Once a domain model is created, several target systems can be generated by tailoring the domain model or by evolving the domain model and then tailoring it according to the specified requirements. This paper presents the Evolutionary Domain Life Cycle (EDLC) paradigm in which a domain model is created using multiple views, namely, aggregation hierarchy, generalization/specialization hierarchies, object communication diagrams and state transition diagrams. The architecture of the Knowledge Based Requirements Elicitation Tool (KBRET) which is used to generate target system specifications is also presented. The preliminary version of KBRET is implemented in the C Language Integrated Production System (CLIPS).
Lebersorger, S; Beigl, P
2011-01-01
Waste management planning requires reliable data concerning waste generation, influencing factors on waste generation and forecasts of waste quantities based on facts. This paper aims at identifying and quantifying differences between different municipalities' municipal solid waste (MSW) collection quantities based on data from waste management and on socio-economic indicators. A large set of 116 indicators from 542 municipalities in the Province of Styria was investigated. The resulting regression model included municipal tax revenue per capita, household size and the percentage of buildings with solid fuel heating systems. The model explains 74.3% of the MSW variation and the model assumptions are met. Other factors such as tourism, home composting or age distribution of the population did not significantly improve the model. According to the model, 21% of MSW collected in Styria was commercial waste and 18% of the generated MSW was burned in domestic heating systems. While the percentage of commercial waste is consistent with literature data, practically no literature data are available for the quantity of MSW burned, which seems to be overestimated by the model. The resulting regression model was used as basis for a waste prognosis model (Beigl and Lebersorger, in preparation).
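A sketch of the model-fitting step with the three retained indicators; the data below are synthetic stand-ins (the Styrian indicator values are not reproduced in the abstract), so the fitted coefficients and R² are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 542  # one row per municipality, as in the Styrian dataset

# Hypothetical stand-ins for the three retained indicators.
X = np.column_stack([
    rng.normal(450, 120, n),   # municipal tax revenue per capita
    rng.normal(2.4, 0.3, n),   # mean household size
    rng.uniform(0, 60, n),     # % buildings with solid fuel heating
])
# Synthetic response: MSW collected per capita with noise.
msw = 180 + 0.2 * X[:, 0] - 25 * X[:, 1] - 1.1 * X[:, 2] + rng.normal(0, 20, n)

model = LinearRegression().fit(X, msw)
print(model.coef_, model.score(X, msw))  # R^2 analogous to the reported 74.3%
```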
Evidence-based health policy: three generations of reform in Mexico.
Frenk, Julio; Sepúlveda, Jaime; Gómez-Dantés, Octavio; Knaul, Felicia
2003-11-15
The Mexican health system has evolved through three generations of reform. The creation of the Ministry of Health and the main social security agency in 1943 marked the first generation of health reforms. In the late 1970s, a second generation of reforms was launched around the primary health-care model. Third-generation reforms favour systemic changes to reorganise the system through the horizontal integration of basic functions: stewardship, financing, and provision. The stability of leadership in the health sector is emphasised as a key element that allowed for reform during the past 60 years. Furthermore, there has been a transition in the second generation of reforms to a model that is increasingly based on evidence; this has been intensified and extended in the third generation of reforms. We also examine policy developments that will provide social protection in health for all. These developments could be of interest for countries seeking to provide their citizens with universal access to health care that incorporates equity, quality, and financial protection.
Power Control for Direct-Driven Permanent Magnet Wind Generator System with Battery Storage
Guang, Chu Xiao; Ying, Kong
2014-01-01
The objective of this paper is to construct a wind generator system (WGS) loss model that addresses the losses of the wind turbine and the generator, with the aim of optimizing the maximum effective output power and turbine speed. Given that the wind generator system has inertia and is nonlinear, the dynamic model of the wind generator system takes advantage of the duty cycle of the Buck converter and employs feedback linearization to design the optimized turbine speed tracking controller and the load power controller. On this basis, the paper proposes a dual-mode dynamic coordination strategy based on an auxiliary load to reduce the influence of mode conversion on the lifetime of the battery. Rapid tracking of optimized speed and power, as well as the reduction of redundant power during mode conversion, was tested on a 5 kW wind generator system test platform. Using the generator output power as the capture target has also proved effective. PMID:25050405
Pain expressiveness and altruistic behavior: an exploration using agent-based modeling.
de C Williams, Amanda C; Gallagher, Elizabeth; Fidalgo, Antonio R; Bentley, Peter J
2016-03-01
Predictions which invoke evolutionary mechanisms are hard to test. Agent-based modeling in artificial life offers a way to simulate behaviors and interactions in specific physical or social environments over many generations. The outcomes have implications for understanding the adaptive value of behaviors in context. Pain-related behavior in animals is communicated to other animals that might protect or help, or might exploit or predate. An agent-based model simulated the effects of displaying or not displaying pain (expresser/nonexpresser strategies) when injured, and of helping, ignoring, or exploiting another in pain (altruistic/nonaltruistic/selfish strategies). Agents modeled in MATLAB interacted at random while foraging (gaining energy); random injury interrupted foraging for a fixed time unless help from an altruistic agent, who paid an energy cost, speeded recovery. Environmental and social conditions also varied, and each model ran for 10,000 iterations. Findings were meaningful in that, in general, contingencies that are evident from experimental work with a variety of mammals, over a few interactions, were replicated in the agent-based model after selection pressure over many generations. More energy-demanding expression of pain reduced its frequency in successive generations, and increasing injury frequency resulted in fewer expressers and altruists. Allowing exploitation of injured agents decreased expression of pain to near zero, but altruists remained. Decreasing the costs or increasing the benefits of helping hardly changed its frequency, whereas increasing the interaction rate between injured agents and helpers diminished the benefits to both. Agent-based modeling allows simulation of complex behaviors and environmental pressures over evolutionary time.
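The original agents were modeled in MATLAB; the toy Python loop below sketches the same contingencies (foraging gain, random injury, costly expression, costly help) under assumed costs and probabilities. The parameter values and strategy frequencies are illustrative, not the published parameterization.

```python
import random

class Agent:
    def __init__(self, expresser, altruist):
        self.expresser, self.altruist = expresser, altruist
        self.energy, self.injured = 0.0, 0

def step(agents, injury_p=0.05, express_cost=0.5, help_cost=1.0, recovery=5):
    for a in agents:
        if a.injured:                       # injured agents cannot forage
            a.injured -= 1
            continue
        a.energy += 1.0                     # foraging gain per time step
        if random.random() < injury_p:
            a.injured = recovery
            if a.expresser:
                a.energy -= express_cost    # cost of displaying pain
                helper = random.choice(agents)
                if helper is not a and helper.altruist and not helper.injured:
                    helper.energy -= help_cost  # altruist pays to help
                    a.injured = 1               # help speeds recovery

agents = [Agent(random.random() < 0.5, random.random() < 0.5) for _ in range(200)]
for _ in range(1000):
    step(agents)
expressers = [a.energy for a in agents if a.expresser]
print(sum(expressers) / max(1, len(expressers)))  # mean payoff of expressers
```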
Fault diagnosis based on continuous simulation models
NASA Technical Reports Server (NTRS)
Feyock, Stefan
1987-01-01
The results of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems are described, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can we make the model behave like that? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.
Model-based VQ for image data archival, retrieval and distribution
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1995-01-01
An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is the Laplacian distribution with mean lambda, computed from a sample of the input image. Laplacian-distributed values with mean lambda are generated with a uniform random number generator, and these random numbers are grouped into vectors. The vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
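A sketch of the MVQ codebook construction as described: Laplacian residual vectors are synthesized from uniform random numbers via the inverse CDF, weighted in the DCT domain, and inverse-transformed. The HVS weight vector and the Laplacian scale below are assumed placeholders for the paper's perceptually optimal weight matrix and image-derived lambda.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
lam = 12.0          # Laplacian parameter, estimated from the input image (assumed)
n_vec, dim = 256, 16

# Laplacian samples via the inverse CDF of uniform random numbers.
u = np.clip(rng.uniform(-0.5, 0.5, size=(n_vec, dim)), -0.499999, 0.499999)
residuals = -lam * np.sign(u) * np.log1p(-2.0 * np.abs(u))

# Hypothetical HVS weighting: attenuate high-frequency DCT coefficients.
weights = 1.0 / (1.0 + 0.25 * np.arange(dim))
codebook = idct(dct(residuals, norm="ortho") * weights, norm="ortho")

def encode(vector):
    """Return the index of the nearest codevector; only indices and lambda need
    be transmitted, since the decoder regenerates the same codebook."""
    return int(np.argmin(np.linalg.norm(codebook - vector, axis=1)))

print(encode(residuals[0]))
```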
Terai, Asuka; Nakagawa, Masanori
2007-08-01
The purpose of this paper is to construct a model that represents the human process of understanding metaphors, focusing specifically on similes of the form "an A like B". Generally speaking, human beings are able to generate and understand many sorts of metaphors. This study constructs the model based on a probabilistic knowledge structure for concepts, which is computed from a statistical analysis of a large-scale corpus. Consequently, the model is able to cover the many kinds of metaphors that human beings can generate. Moreover, the model implements the dynamic process of metaphor understanding by using a neural network with dynamic interactions. Finally, the validity of the model is confirmed by comparing model simulations with the results of a psychological experiment.
An Optimization-Based System Model of Disturbance-Generated Forest Biomass Utilization
ERIC Educational Resources Information Center
Curry, Guy L.; Coulson, Robert N.; Gan, Jianbang; Tchakerian, Maria D.; Smith, C. Tattersall
2008-01-01
Disturbance-generated biomass results from endogenous and exogenous natural and cultural disturbances that affect the health and productivity of forest ecosystems. These disturbances can create large quantities of plant biomass on predictable cycles. A systems analysis model has been developed to quantify aspects of system capacities (harvest,…
Dark matter stability and one-loop neutrino mass generation based on Peccei-Quinn symmetry
NASA Astrophysics Data System (ADS)
Suematsu, Daijiro
2018-01-01
We propose a model which is a simple extension of the KSVZ invisible axion model with an inert doublet scalar. Peccei-Quinn symmetry forbids tree-level neutrino mass generation and its remnant Z_2 symmetry guarantees dark matter stability. The neutrino masses are generated by one-loop effects as a result of the breaking of Peccei-Quinn symmetry through a nonrenormalizable interaction. Although the low energy effective model coincides with an original scotogenic model which contains right-handed neutrinos with large masses, it is free from the strong CP problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Chen, H; Mutic, S
Purpose: A persistent challenge for the quality assessment of radiation therapy treatments (e.g. contouring accuracy) is the absence of a known ground truth for patient data. Moreover, assessment results are often patient-dependent. Computer simulation studies utilizing numerical phantoms can be performed for quality assessment with a known ground truth. However, previously reported numerical phantoms do not include the statistical properties of inter-patient variations, as their models are based on only one patient. In addition, these models do not incorporate tumor data. In this study, a methodology was developed for generating numerical phantoms which encapsulate the statistical variations of patients within radiation therapy, including tumors. Methods: Based on previous work in contouring assessment, geometric attribute distribution (GAD) models were employed to model both the deterministic and stochastic properties of individual organs via principal component analysis. Using pre-existing radiation therapy contour data, the GAD models are trained to model the shape and centroid distributions of each organ. Then, organs with different shapes and positions can be generated by assigning statistically sound weights to the GAD model parameters. Organ contour data from 20 retrospective prostate patient cases were manually extracted and utilized to train the GAD models. As a demonstration, computer-simulated CT images of generated numerical phantoms were calculated and assessed subjectively and objectively for realism. Results: A cohort of numerical phantoms of the male human pelvis was generated. CT images were deemed realistic both subjectively and objectively in terms of the image noise power spectrum. Conclusion: A methodology has been developed to generate realistic numerical anthropomorphic phantoms using pre-existing radiation therapy data. The GAD models guarantee that generated organs span the statistical distribution of observed radiation therapy patients, according to the training dataset. The methodology enables radiation therapy treatment assessment with multi-modality imaging and a known ground truth, and without patient-dependent bias.
Habitat classification modeling with incomplete data: Pushing the habitat envelope
Zarnetske, P.L.; Edwards, T.C.; Moisen, Gretchen G.
2007-01-01
Habitat classification models (HCMs) are invaluable tools for species conservation, land-use planning, reserve design, and metapopulation assessments, particularly at broad spatial scales. However, species occurrence data are often lacking and typically limited to presence points at broad scales. This lack of absence data precludes the use of many statistical techniques for HCMs. One option is to generate pseudo-absence points so that the many available statistical modeling tools can be used. Traditional techniques generate pseudo-absence points at random across broadly defined species ranges, often failing to include biological knowledge concerning the species-habitat relationship. We incorporated biological knowledge of the species-habitat relationship into pseudo-absence points by creating habitat envelopes that constrain the region from which points were randomly selected. We define a habitat envelope as an ecological representation of a species', or species feature's (e.g., nest), observed distribution (i.e., realized niche) based on a single attribute, or the spatial intersection of multiple attributes. We created HCMs for Northern Goshawk (Accipiter gentilis atricapillus) nest habitat during the breeding season across Utah forests with extant nest presence points and ecologically based pseudo-absence points using logistic regression. Predictor variables were derived from 30-m USDA Landfire and 250-m Forest Inventory and Analysis (FIA) map products. These habitat-envelope-based models were then compared to null envelope models, which use traditional practices for generating pseudo-absences. Models were assessed for fit and predictive capability using metrics such as kappa, threshold-independent receiver operating characteristic (ROC) plots, adjusted deviance (D²adj), and cross-validation, and were also assessed for ecological relevance. For all cases, habitat-envelope-based models outperformed null envelope models and were more ecologically relevant, suggesting that incorporating biological knowledge into pseudo-absence point generation is a powerful tool for species habitat assessments. Furthermore, given some a priori knowledge of the species-habitat relationship, ecologically based pseudo-absence points can be applied to any species, ecosystem, data resolution, and spatial extent.
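A minimal sketch of envelope-constrained pseudo-absence generation: candidate cells are restricted to the spatial intersection of suitability attributes before random sampling. The raster layers, thresholds, and presence points below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical raster layers over a study area (True = attribute suitable).
forest = rng.random((200, 200)) > 0.4
elevation_ok = rng.random((200, 200)) > 0.3
envelope = forest & elevation_ok            # spatial intersection of attributes

presence = [(10, 12), (50, 60), (120, 88)]  # known nest locations (cells)
candidates = np.argwhere(envelope)          # cells inside the habitat envelope
mask = np.ones(len(candidates), dtype=bool)
for p in presence:
    mask &= ~(candidates == p).all(axis=1)  # exclude presence cells

# Ecologically constrained pseudo-absences (vs. range-wide random points).
idx = rng.choice(np.flatnonzero(mask), size=len(presence) * 10, replace=False)
pseudo_absences = candidates[idx]
print(pseudo_absences[:5])
```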
Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces
Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.
2013-01-01
A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can as well be used for motor imagery task discriminations, with performance (8–23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and the AR model parameters used directly. An optimal MPC has significant implications for high-performance BCI applications. PMID:21267657
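A sketch of the model-based feature extraction step, assuming a least-squares AR fit to a single EEG epoch; the model order, epoch length, and synthetic signal are placeholders.

```python
import numpy as np

def ar_coeffs(x, order=6):
    """Least-squares AR(p) fit; the coefficient vector serves as the
    model-based feature for task discrimination / controller design."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(3)
eeg_epoch = rng.standard_normal(512)        # stand-in for a scalp EEG epoch
features = ar_coeffs(eeg_epoch, order=6)    # feed to classifier or MPC design
print(features)
```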
NASA Astrophysics Data System (ADS)
Lanusse, Francois; Ravanbakhsh, Siamak; Mandelbaum, Rachel; Schneider, Jeff; Poczos, Barnabas
2017-01-01
Weak gravitational lensing has long been identified as one of the most powerful probes to investigate the nature of dark energy. As such, weak lensing is at the heart of the next generation of cosmological surveys such as LSST, Euclid or WFIRST. One particularly critical source of systematic errors in these surveys comes from the shape measurement algorithms tasked with estimating galaxy shapes. GREAT3, the last community challenge to assess the quality of state-of-the-art shape measurement algorithms, demonstrated in particular that all current methods are biased to various degrees and, more importantly, that these biases depend on the details of the galaxy morphologies. These biases can be measured and calibrated by generating mock observations where a known lensing signal has been introduced and comparing the resulting measurements to the ground truth. Producing these mock observations, however, requires input galaxy images of higher resolution and S/N than the simulated survey, which typically implies acquiring extremely expensive space-based observations. The goal of this work is to train a deep generative model on already available Hubble Space Telescope data which can then be used to sample new galaxy images conditioned on parameters such as magnitude, size or redshift and exhibiting complex morphologies. Such a model allows us to inexpensively produce large sets of realistic images for calibration purposes. We implement a conditional generative model based on state-of-the-art deep learning methods and fit it to deep galaxy images from the COSMOS survey. The quality of the model is assessed by computing an extensive set of galaxy morphology statistics on the generated images. Beyond simple second-moment statistics such as size and ellipticity, we apply more complex statistics specifically designed to be sensitive to disturbed galaxy morphologies. We find excellent agreement between the morphologies of real and model-generated galaxies. Our results suggest that such deep generative models represent a reliable alternative to the acquisition of expensive high-quality observations for generating the calibration data needed by the next generation of weak lensing surveys.
Renewable generation technology choice and policies in a competitive electricity supply industry
NASA Astrophysics Data System (ADS)
Sarkar, Ashok
Renewable energy generation technologies have lower externality costs but higher private costs than fossil fuel-based generation. As a result, the choice of renewables in the future generation mix could be affected by the industry's future market-oriented structure because market objectives based on private value judgments may conflict with social policy objectives toward better environmental quality. This research assesses how renewable energy generation choices would be affected in a restructured electricity generation market. A multi-period linear programming-based model (Resource Planning Model) is used to characterize today's electricity supply market in the United States. The model simulates long-range (2000-2020) generation capacity planning and operation decisions under alternative market paradigms. Price-sensitive demand is used to simulate customer preferences in the market. Dynamically changing costs for renewables and a two-step load duration curve are used. A Reference Case represents the benchmark for a socially-optimal diffusion of renewables and a basis for comparing outcomes under alternative market structures. It internalizes externality costs associated with emissions of sulfur dioxide (SO2), nitrous oxides (NOx), and carbon dioxide (CO2). A Competitive Case represents a market with many generation suppliers and decision-making based on private costs. Finally, a Market Power Case models the extreme case of market power: monopoly. The results suggest that the share of renewables would decrease (and emissions would increase) considerably in both the Competitive and the Market Power Cases with respect to the Reference Case. The reduction is greater in the Market Power Case due to pricing decisions under existing supply capability. The research evaluates the following environmental policy options that could overcome market failures in achieving an appropriate level of renewable generation: CO2 emissions tax, SO2 emissions cap, renewable portfolio standards (RPS), and enhanced research and development (R&D). RPS would best ensure an appropriate share of renewables, whereas SO2 emissions caps would not support a shift to renewables in an era of inexpensive natural gas. The effectiveness of the policies is dependent on the market structure. If market power exists, the analyses indicate that generally higher levels of intervention would be necessary to achieve a shift to renewables.
Pattern formation in individual-based systems with time-varying parameters
NASA Astrophysics Data System (ADS)
Ashcroft, Peter; Galla, Tobias
2013-12-01
We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.
NASA Astrophysics Data System (ADS)
Hayat, T.; Ullah, Siraj; Khan, M. Ijaz; Alsaedi, A.; Zaigham Zia, Q. M.
2018-03-01
Here, modeling and computations are presented to introduce the novel concept of Darcy-Forchheimer three-dimensional flow of water-based carbon nanotubes with nonlinear thermal radiation and heat generation/absorption. A bidirectional stretching surface induces the flow. Darcy's law is replaced by the Forchheimer relation. The Xue model is implemented for the nanoliquid transport mechanism. A nonlinear formulation based upon the conservation laws of mass, momentum and energy is first developed and then solved by the optimal homotopy analysis technique. Optimal estimations of the auxiliary variables are obtained. The importance of influential variables for the velocity and thermal fields is interpreted graphically. Moreover, velocity and temperature gradients are discussed and analyzed. A physical interpretation of the influential variables is given.
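For context, the Forchheimer relation referred to above augments the linear Darcy drag with an inertial term quadratic in velocity; in the usual notation (permeability K, drag coefficient C_b), the textbook one-dimensional form is the following, which stands in for the paper's exact formulation:

```latex
-\frac{\partial p}{\partial x} \;=\; \frac{\mu}{K}\,u \;+\; \frac{C_b\,\rho}{\sqrt{K}}\,u^{2}.
```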
Yang, Hao; Xu, Xiangyang; Neumann, Ingo
2014-11-19
Terrestrial laser scanning technology (TLS) is a new technique for rapidly acquiring three-dimensional information. In this paper we study the health assessment of concrete structures with a Finite Element Method (FEM) model based on TLS. The work focuses on the benefits of 3D TLS in the generation and calibration of FEM models, in order to build a convenient, efficient and intelligent model which can be widely used for the detection and assessment of bridges, buildings, subways and other objects. After comparing the finite element simulation with surface-based measurement data from TLS, the FEM model is determined to be acceptable, with an error of less than 5%. The benefit of TLS lies mainly in the possibility of a surface-based validation of the results predicted by the FEM model.
The TimeGeo modeling framework for urban mobility without travel surveys
Jiang, Shan; Yang, Yingxiang; Gupta, Siddharth; Veneziano, Daniele; Athavale, Shounak; González, Marta C.
2016-01-01
Well-established fine-scale urban mobility models today depend on detailed but cumbersome and expensive travel surveys for their calibration. Not much is known, however, about the set of mechanisms needed to generate complete mobility profiles if only using passive datasets with mostly sparse traces of individuals. In this study, we present a mechanistic modeling framework (TimeGeo) that effectively generates urban mobility patterns with resolution of 10 min and hundreds of meters. It ties together the inference of home and work activity locations from data, with the modeling of flexible activities (e.g., other) in space and time. The temporal choices are captured by only three features: the weekly home-based tour number, the dwell rate, and the burst rate. These combined generate for each individual: (i) stay duration of activities, (ii) number of visited locations per day, and (iii) daily mobility networks. These parameters capture how an individual deviates from the circadian rhythm of the population, and generate the wide spectrum of empirically observed mobility behaviors. The spatial choices of visited locations are modeled by a rank-based exploration and preferential return (r-EPR) mechanism that incorporates space in the EPR model. Finally, we show that a hierarchical multiplicative cascade method can measure the interaction between land use and generation of trips. In this way, urban structure is directly related to the observed distance of travels. This framework allows us to fully embrace the massive amount of individual data generated by information and communication technologies (ICTs) worldwide to comprehensively model urban mobility without travel surveys. PMID:27573826
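A sketch of the rank-based exploration and preferential return (r-EPR) choice rule described above; rho and gamma are typical EPR values from the literature, and the inverse-rank weighting stands in for TimeGeo's distance-rank formulation.

```python
import random

def next_location(visits, all_locs, rho=0.6, gamma=0.21):
    """Rank-based EPR step: explore a new place with probability rho*S^(-gamma),
    otherwise return to a known place proportionally to past visit frequency."""
    S = len(visits)                       # number of distinct visited locations
    if random.random() < rho * S ** (-gamma):
        # Exploration: pick among unvisited places with probability ~ 1/rank,
        # where rank would order candidates by distance from the current place.
        unvisited = [l for l in all_locs if l not in visits]
        ranked = sorted(unvisited)        # stand-in for a distance ranking
        weights = [1.0 / (r + 1) for r in range(len(ranked))]
        return random.choices(ranked, weights=weights)[0]
    # Preferential return: revisit proportionally to visit counts.
    locs, counts = zip(*visits.items())
    return random.choices(locs, weights=counts)[0]

visits = {"home": 20, "work": 15, "gym": 3}
print(next_location(visits, [f"loc{i}" for i in range(50)] + list(visits)))
```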
3D active shape models of human brain structures: application to patient-specific mesh generation
NASA Astrophysics Data System (ADS)
Ravikumar, Nishant; Castro-Mateos, Isaac; Pozo, Jose M.; Frangi, Alejandro F.; Taylor, Zeike A.
2015-03-01
The use of biomechanics-based numerical simulations has attracted growing interest in recent years for computer-aided diagnosis and treatment planning. With this in mind, a method for automatic mesh generation of brain structures of interest, using statistical models of shape (SSM) and appearance (SAM), for personalised computational modelling is presented. SSMs are constructed as point distribution models (PDMs) while SAMs are trained using intensity profiles sampled from a training set of T1-weighted magnetic resonance images. The brain structures of interest are the cortical surface (cerebrum, cerebellum & brainstem), lateral ventricles and falx-cerebri membrane. Two methods for establishing correspondences across the training set of shapes are investigated and compared (based on SSM quality): the Coherent Point Drift (CPD) point-set registration method and a B-spline mesh-to-mesh registration method. The MNI-305 (Montreal Neurological Institute) average brain atlas is used to generate the template mesh, which is deformed and registered to each training case to establish correspondence over the training set of shapes. 18 healthy patients' T1-weighted MR images form the training set used to generate the SSM and SAM. Both model-training and model-fitting are performed over multiple brain structures simultaneously. Compactness and generalisation errors of the BSpline-SSM and CPD-SSM are evaluated and used to quantitatively compare the SSMs. Leave-one-out cross-validation is used to evaluate SSM quality in terms of these measures. The mesh-based SSM is found to generalise better and is more compact than the CPD-based SSM. The quality of the best-fit model instance from the trained SSMs on test cases is evaluated using the Hausdorff distance (HD) and mean absolute surface distance (MASD) metrics.
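A minimal sketch of the point distribution model underlying such SSMs: PCA over corresponded landmark sets yields a mean shape and deformation modes from which plausible new instances are sampled. The training shapes below are random placeholders rather than registered brain meshes.

```python
import numpy as np

def train_pdm(shapes, n_modes=5):
    """Point distribution model: PCA on corresponded landmark sets."""
    X = shapes.reshape(len(shapes), -1)        # (n_shapes, 3 * n_points)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigvals = s**2 / (len(shapes) - 1)         # variance per mode
    return mean, Vt[:n_modes], eigvals[:n_modes]

def sample_shape(mean, modes, eigvals, rng=np.random.default_rng()):
    """Generate a plausible instance with coefficients within ±3 sqrt(eigval)."""
    b = rng.uniform(-3, 3, len(eigvals)) * np.sqrt(eigvals)
    return (mean + b @ modes).reshape(-1, 3)

shapes = np.random.default_rng(0).normal(size=(18, 500, 3))  # 18 training cases
mean, modes, eigvals = train_pdm(shapes)
print(sample_shape(mean, modes, eigvals).shape)  # (500, 3) new mesh vertices
```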
Modeling Renewable Penetration Using a Network Economic Model
NASA Astrophysics Data System (ADS)
Lamont, A.
2001-03-01
This paper evaluates the accuracy of a network economic modeling approach in designing energy systems with renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources. The model reaches a solution by exchanging price and quantity information between the nodes of the system. This formulation is very flexible, and models take very little time to build and modify. This paper reports an experiment designing a system with photovoltaic generators and base- and peak-load fossil generators. The level of PV penetration as a function of its price, and the capacities of the fossil generators, were determined using both the network approach and an exact, analytic approach. It is found that the two methods agree very closely in terms of the optimal capacities and are nearly identical in terms of annual system costs.
ERIC Educational Resources Information Center
Dickes, Amanda Catherine; Sengupta, Pratim; Farris, Amy Voss; Satabdi, Basu
2016-01-01
In this paper, we present a third-grade ecology learning environment that integrates two forms of modeling--embodied modeling and agent-based modeling (ABM)--through the generation of mathematical representations that are common to both forms of modeling. The term "agent" in the context of ABMs indicates individual computational objects…
NASA Astrophysics Data System (ADS)
Ghose, Soumya; Greer, Peter B.; Sun, Jidi; Pichler, Peter; Rivest-Henault, David; Mitra, Jhimli; Richardson, Haylea; Wratten, Chris; Martin, Jarad; Arm, Jameen; Best, Leah; Dowling, Jason A.
2017-11-01
In MR-only radiation therapy planning, generation of the tissue-specific HU map directly from the MRI would eliminate the need for CT image acquisition and may improve radiation therapy planning. The aim of this work is to generate and validate substitute CT (sCT) scans, generated from standard T2-weighted MR pelvic scans, in prostate radiation therapy dose planning. A Siemens Skyra 3T MRI scanner with laser bridge, flat couch and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole-pelvis MRI (1.6 mm 3D isotropic T2w SPACE sequence) was acquired. Patients received a routine planning CT scan. Co-registered whole-pelvis CT and T2w MRI pairs were used as training images. Advanced tissue-specific non-linear regression models to predict HU for fat, muscle, bladder and air were created from the co-registered CT-MRI image pairs. On a test-case T2w MRI, the bones and bladder were automatically segmented using a novel statistical shape and appearance model, while other soft tissues were separated using an Expectation-Maximization-based clustering model. The CT bone in the training database that was most ‘similar’ to the segmented bone was then transformed with deformable registration to create the sCT component of the test-case bone tissue. Predictions for the bone, air and soft tissue from the separate regression models were successively combined to generate a whole-pelvis sCT. The change in monitor units between the sCT-based plans relative to the gold-standard CT plan for the same IMRT dose plan was 0.3% ± 0.9% (mean ± standard deviation) for the 39 patients. The 3D Gamma pass rate was 99.8 ± 0.00 (2 mm/2%). The novel hybrid model is computationally efficient, generating an sCT in 20 min from standard T2w images for prostate cancer radiation therapy dose planning and DRR generation.
A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture
NASA Technical Reports Server (NTRS)
Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.
2005-01-01
Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of a system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes, and a set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
NASA Technical Reports Server (NTRS)
Ting, Eric; Nguyen, Nhan; Trinh, Khanh
2014-01-01
This paper presents a static aeroelastic model and longitudinal trim model for the analysis of a flexible-wing transport aircraft. The static aeroelastic model is built using a finite-element structural model coupled to an aerodynamic model that uses a vortex-lattice solution. An automatic geometry generation tool is used to close the loop between the structural and aerodynamic models. The aeroelastic model is extended for the development of a three-degree-of-freedom longitudinal trim model for an aircraft with flexible wings. The resulting flexible-aircraft longitudinal trim model is used to simultaneously compute the static aeroelastic shape of the aircraft model and the longitudinal state inputs needed to maintain a trim state. The framework is applied to an aircraft model based on the NASA Generic Transport Model (GTM) with wing structures allowed to deform flexibly, referred to as the Elastically Shaped Aircraft Concept (ESAC). The ESAC wing mass and stiffness properties are based on baseline "stiff" values representative of current-generation transport aircraft.
NASA Astrophysics Data System (ADS)
Kubalska, J. L.; Preuss, R.
2013-12-01
Digital Surface Models (DSMs) are increasingly used in GIS databases as standalone products. They are also necessary for creating other products such as 3D city models, true-orthophotos and object-oriented classifications. This article presents results of DSM generation for the classification of vegetation in urban areas. The source data allowed producing DSMs using both an image-matching method and ALS data. The creation of the DSM from digital images, obtained with the Vexcel UltraCam-D digital camera, was carried out in INPHO's Match-T. This program optimizes the configuration of the image-matching process, which ensures high accuracy and minimizes gap areas. The accuracy of this process was analyzed by comparing the DSM generated in Match-T with the DSM generated from ALS data. Given the further purpose of the generated DSM, it was decided to create the model in a GRID structure with a cell size of 1 m. At this resolution, a differential model from both DSMs was also built, which allowed the relative accuracy of the compared models to be determined. The analysis indicates that DSM generation with the multi-image matching method is competitive with surface model creation from ALS data. Thus, when digital images with high overlap are available, the additional acquisition of ALS data seems to be unnecessary.
Geospatial Modelling Approach for 3D Urban Densification Developments
NASA Astrophysics Data System (ADS)
Koziatek, O.; Dragićević, S.; Li, S.
2016-06-01
With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying developments in the vertical built dimension with mid- and high-rise buildings. The location of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units that operate in two spatial dimensions. Therefore, the objective of this research is to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D). The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE), and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present the simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI's CityEngine software and the Computer Generated Architecture (CGA) language.
Mathematical modeling of electrical activity of uterine muscle cells.
Rihana, Sandy; Terrien, Jeremy; Germain, Guy; Marque, Catherine
2009-06-01
The uterine electrical activity is an efficient parameter for studying uterine contractility. In order to understand the ionic mechanisms responsible for its generation, we aimed at building a mathematical model of the uterine cell electrical activity based upon physiological mechanisms. First, based on voltage-clamp experiments found in the literature, we focus on the principal ionic channels and their cognate currents involved in the generation of this electrical activity. Second, we provide the methodology for formulating uterine ionic currents derived from a wide range of electrophysiological data. The model is validated step by step by comparing simulated voltage-clamp results with experimental ones. The model successfully reproduces the generation of single spikes or trains of action potentials that fit the experimental data, and it allows the implications of individual ionic channels to be analyzed; notably, the calcium-dependent conductance significantly influences the cellular oscillatory behavior.
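The ingredients described here (a fast inward calcium-like current, a slower outward potassium current, and voltage-dependent activation curves) are the same ones that drive classic conductance-based excitability models. As a hedged illustration of the mechanism rather than the paper's actual parameter set, here is a Morris-Lecar-style membrane that produces trains of action potentials:

```python
import numpy as np

def morris_lecar(i_stim=90.0, t_end=1000.0, dt=0.05):
    """Morris-Lecar-style excitable membrane: a generic stand-in for the
    paper's uterine cell model (classic textbook parameters, not the
    published uterine set). Units: mV, ms, uA/cm^2."""
    g_ca, g_k, g_l = 4.4, 8.0, 2.0
    e_ca, e_k, e_l = 120.0, -84.0, -60.0
    c_m, phi = 20.0, 0.04
    v1, v2, v3, v4 = -1.2, 18.0, 2.0, 30.0
    v, w = -60.0, 0.0
    vs = []
    for _ in range(int(t_end / dt)):
        m_inf = 0.5 * (1 + np.tanh((v - v1) / v2))  # fast Ca activation
        w_inf = 0.5 * (1 + np.tanh((v - v3) / v4))  # slow K activation
        tau_w = 1.0 / (phi * np.cosh((v - v3) / (2 * v4)))
        i_ion = (g_ca * m_inf * (v - e_ca) + g_k * w * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_stim - i_ion) / c_m            # membrane equation
        w += dt * (w_inf - w) / tau_w               # K gate relaxation
        vs.append(v)
    return np.asarray(vs)  # a train of action potentials at this i_stim
```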
Gleeson, Matthew Paul; Montanari, Dino
2012-11-01
The most desirable chemical starting point in drug discovery is a hit or lead with a good overall profile; where issues exist, a clear SAR strategy should be identifiable to minimize them. Filtering based on drug-likeness concepts is a first step, but more accurate theoretical methods are needed to i) estimate the biological profile of the molecule in question and ii) estimate, based on the underlying structure-activity relationships used by the model, whether the molecule can be altered to remove these liabilities. In this paper, the authors discuss the generation of ADMET models and their practical use in decision making. They discuss the issues surrounding data collation, experimental errors, the model assessment and validation steps, as well as the different types of descriptors and statistical models that can be used. This is followed by a discussion of how model accuracy dictates when and where a model can be used in the drug discovery process. The authors also discuss how models can be developed to more effectively enable multiple-parameter optimization. Models can be applied in the lead generation and lead optimization steps to i) rank-order a collection of hits, ii) prioritize the experimental assays needed for different hit series, iii) assess the likelihood of resolving a problem that might be present in a particular series in lead optimization and iv) screen a virtual library based on a hit or lead series to assess the impact of diverse structural changes on the predicted properties.
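The rank-ordering and virtual-library screening uses in the last sentence map directly onto a standard supervised-learning workflow. Below is a minimal sketch with synthetic data standing in for a real descriptor table; the descriptor set, endpoint binning and model family are our assumptions, not the authors' protocol:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical setup: rows are compounds, columns are precomputed molecular
# descriptors (e.g. logP, MW, PSA); y bins an ADMET endpoint into good/bad.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())  # validate before deployment

model.fit(X, y)
library = rng.normal(size=(50, 20))               # a virtual library of hits
ranking = np.argsort(model.predict_proba(library)[:, 1])[::-1]  # rank-order hits
```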
CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN
2013-01-01
After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein, but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can significantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras
2018-05-01
The aim of the study was to create a hybrid forecasting method that could produce higher-accuracy forecasts than previously used 'pure' time-series methods. These methods had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated at least a 6% error rate in different cases, and efforts were made to decrease it further. The newly developed hybrid models used a random start generation method to combine the advantages of different time-series methods, which increased forecast accuracy by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not increase the accuracy of total automotive waste generation forecasts. The developed models' short- and mid-term forecasting abilities were tested over different prediction horizons.
A Small Aircraft Transportation System (SATS) Demand Model
NASA Technical Reports Server (NTRS)
Long, Dou; Lee, David; Johnson, Jesse; Kostiuk, Peter; Yackovetsky, Robert (Technical Monitor)
2001-01-01
The Small Aircraft Transportation System (SATS) demand model is a tool for decision-makers to analyze SATS demand at both airports and in airspace. We constructed a series of models following general top-down, modular principles in systems engineering. There are three principal models: the SATS Airport Demand Model (SATS-ADM), the SATS Flight Demand Model (SATS-FDM), and LMINET-SATS. SATS-ADM models SATS operations, by aircraft type, from forecasts of fleet, configuration and performance, utilization, and traffic mixture. Given the SATS airport operations such as the ones generated by SATS-ADM, SATS-FDM constructs the SATS origin and destination (O&D) traffic flow based on the solution of a gravity model, from which it then generates SATS flights by Monte Carlo simulation based on the departure time-of-day profile. LMINET-SATS, an extension of LMINET, models SATS demand in airspace and at airports for all aircraft operations in the US. The models are parameterized to give the user flexibility and ease of use in generating SATS demand for different scenarios. Several case studies illustrate the use of the models, which help identify the need for a new air traffic management system to cope with SATS.
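The gravity-model and Monte Carlo steps can be sketched compactly. The following is an illustrative reading of those two steps only; the functional form, the distance exponent, and the scaling are our assumptions, and the report's calibration is not reproduced:

```python
import numpy as np

def gravity_od(ops, dist, beta=1.5):
    """Distribute airport operations into an O&D trip matrix with a simple
    gravity model. ops: operations per airport (e.g. from SATS-ADM);
    dist: matrix of inter-airport distances."""
    attract = np.outer(ops, ops) / np.maximum(dist, 1.0) ** beta
    np.fill_diagonal(attract, 0.0)               # no self trips
    return attract / attract.sum() * ops.sum()   # scale to total demand

def sample_departures(n_flights, profile, rng=np.random.default_rng(0)):
    """Monte Carlo departure times from a 24-bin time-of-day profile."""
    hours = rng.choice(24, size=n_flights, p=profile / profile.sum())
    return hours + rng.random(n_flights)         # uniform within the hour

ops = np.array([120.0, 80.0, 40.0])              # hypothetical daily operations
dist = np.array([[0, 200, 500], [200, 0, 350], [500, 350, 0]], dtype=float)
print(gravity_od(ops, dist))
```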
Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M
2017-06-01
Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied a model, which we call "full dominance," for estimating dominance and additive genetic variances and their covariance from pedigree and phenotypic data. Estimates with this model such as presented here are very scarce both in livestock and in wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number weaned at the Polytechnic University of Valencia. Pedigrees and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 in the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large, increases with inbreeding, and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations.
Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N
2017-12-01
Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands, such simulations may become difficult or even infeasible, especially when considering nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under-)integration. To elucidate the expense of full and selective-reduced integration (which are more reliable), we implement both and compare the corresponding solution times against those generated using underintegration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300[Formula: see text] while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
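For readers unfamiliar with the material model being benchmarked, the strain-energy function of the Gasser-Ogden-Holzapfel (GOH) model is easy to state in code. This sketch evaluates the standard published isochoric form for a given deformation gradient; the parameter values and the trial deformation below are placeholders, not the paper's benchmark setup:

```python
import numpy as np

def goh_energy(F, c10, k1, k2, kappa, fiber_dirs):
    """Isochoric strain energy of the GOH dispersed fiber-reinforced model."""
    J = np.linalg.det(F)
    Fbar = J ** (-1.0 / 3.0) * F          # isochoric split
    Cbar = Fbar.T @ Fbar                  # modified right Cauchy-Green tensor
    I1 = np.trace(Cbar)
    psi = c10 * (I1 - 3.0)                # neo-Hookean ground matrix
    for a0 in fiber_dirs:                 # unit fiber direction vectors
        I4 = a0 @ Cbar @ a0
        E = kappa * (I1 - 3.0) + (1.0 - 3.0 * kappa) * (I4 - 1.0)
        E = max(E, 0.0)                   # fibers support tension only
        psi += k1 / (2.0 * k2) * (np.exp(k2 * E * E) - 1.0)
    return psi

# e.g. two symmetric fiber families at +/-45 degrees in the 1-2 plane:
a = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0])
b = np.array([np.cos(np.pi / 4), -np.sin(np.pi / 4), 0.0])
F = np.diag([1.1, 0.95, 0.96])            # a trial deformation gradient
print(goh_energy(F, c10=3.0e3, k1=2.0e3, k2=10.0, kappa=0.2, fiber_dirs=[a, b]))
```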
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2011-08-01
World water resources have primarily been analysed by global-scale hydrological models in recent decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and the shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm, with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
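The core of the algorithm, turning the within-cell distribution of a topographic index into a storage-capacity distribution, can be sketched in a few lines. This is a schematic reading of the abstract; the mapping, its single scale parameter, and the saturation rule below are our simplifications, not the authors' code:

```python
import numpy as np

def storage_capacity_curve(topo_index, smax_scale):
    """Map a grid cell's topographic-index values to a storage-capacity
    distribution: wetter cells (high index) get low capacity and saturate
    first; the capacity range is the index range times one scale parameter."""
    ti = np.sort(np.ravel(topo_index))
    capacity = np.sort(smax_scale * (ti.max() - ti))  # ascending capacities
    frac_area = np.linspace(0.0, 1.0, ti.size)        # cumulative area fraction
    return capacity, frac_area

def saturated_fraction(storage, capacity, frac_area):
    """Fraction of the grid cell saturated at a given average storage:
    cells whose capacity lies below the current storage are saturated."""
    return float(np.interp(storage, capacity, frac_area))

# e.g. a synthetic cell with a log-normally distributed topographic index:
rng = np.random.default_rng(0)
cap, fa = storage_capacity_curve(rng.lognormal(2.0, 0.4, 10_000), smax_scale=20.0)
print(saturated_fraction(80.0, cap, fa))
```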
Smart Grid Maturity Model: Model Definition. A Framework for Smart Grid Transformation
2010-09-01
…adoption of more efficient and reliable generation sources and would allow consumer-generated electricity (e.g., solar power and wind) to be connected to… program that pays customers (or credits their accounts) for customer-provided electricity, such as from solar panels or electric vehicles, to the grid… deployed. CUST-5.3 Plug-and-play customer-based generation (e.g., wind and solar) is supported. This includes the necessary infrastructure, such…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neylon, J; Min, Y; Qi, S
2014-06-15
Purpose: Deformable image registration (DIR) plays a pivotal role in head and neck adaptive radiotherapy, but systematic validation of DIR algorithms has been limited by a lack of quantitative high-resolution ground truth. We address this limitation by developing a GPU-based framework that provides systematic DIR validation by generating (a) model-guided synthetic CTs representing posture and physiological changes, and (b) model-guided landmark-based validation. Method: The GPU-based framework was developed to generate massive mass-spring biomechanical models from patient simulation CTs and contoured structures. The biomechanical model represented soft-tissue deformations for known rigid skeletal motion. Posture changes were simulated by articulating the skeletal anatomy, which subsequently applied elastic corrective forces upon the soft tissue. Physiological changes such as tumor regression and weight loss were simulated in a biomechanically precise manner. Synthetic CT data were then generated from the deformed anatomy. The initial and final positions of one hundred randomly chosen mass elements inside each of the internal contoured structures were recorded as ground-truth data. The process was automated to create 45 synthetic CT datasets for a given patient CT. For instance, the head rotation was varied between +/- 4 degrees along each axis, and tumor volumes were systematically reduced by up to 30%. Finally, the original CT and deformed synthetic CT were registered using an optical-flow-based DIR. Results: Each synthetic data creation took approximately 28 seconds of computation time. The number of landmarks per dataset varied between two and three thousand. The validation method is able to perform sub-voxel analysis of the DIR and report the results by structure, giving a much more in-depth investigation of the error. Conclusions: We presented a GPU-based high-resolution biomechanical head and neck model to validate DIR algorithms by generating CT-equivalent 3D volumes with simulated posture changes and physiological regression.
A simple topography-driven, calibration-free runoff generation model
NASA Astrophysics Data System (ADS)
Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.
2017-12-01
Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remains the focus of much hydrological research. In this study, we created a new method to estimate runoff generation, the HAND-based Storage Capacity curve (HSC), which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and, in part, the saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) method to estimate root zone storage capacity (SuMax), obtaining the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of field-mapped saturation area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States, with HBV and TOPMODEL used as benchmarks. We found that the HSC performed better than TOPMODEL, which is based on the topographic wetness index (TWI), in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment. The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader geoscience studies beyond hydrology.
Ko, Hee-Sang; Lee, Kwang Y; Kang, Min-Jae; Kim, Ho-Chan
2008-12-01
Wind power generation is gaining popularity as the power industry worldwide moves toward more liberalized trade of energy along with public concern for more environmentally friendly modes of electricity generation. The weakness of wind power generation is its dependence on nature: the power output varies over quite a wide range due to changes in wind speed, which is difficult to model and predict. Excess fluctuation of power output and voltages can negatively influence the quality of electricity in the distribution system connected to the wind power generation plant. In this paper, the authors propose an intelligent adaptive system to control the output of a wind power generation plant so as to maintain the quality of electricity in the distribution system. The target wind generator is a cost-effective induction generator, while the plant is equipped with a small-capacity energy storage based on conventional batteries, a heater load for co-generation and braking, and a voltage-smoothing device such as a static Var compensator (SVC). A fuzzy logic controller provides a flexible controller covering a wide range of energy/voltage compensation. A neural network inverse model is designed to provide the compensating control amount for the system. The system can be optimized to cope with fluctuating market-based electricity prices, to lower the cost of electricity consumption or to maximize power sales opportunities from the wind generation plant.
Hybrid modeling of nitrate fate in large catchments using fuzzy-rules
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Haberlandt, Uwe
2010-05-01
Especially for nutrient balance simulations, physically based ecohydrological modeling needs an abundance of measured data and model parameters, which for large catchments all too often are not available in sufficient spatial or temporal resolution or are simply unknown. For efficient large-scale studies it is thus beneficial to have methods at one's disposal which are parsimonious concerning the number of model parameters and the necessary input data. One such method is fuzzy-rule based modeling, which, compared to other machine-learning techniques, has the advantages of producing models (the fuzzy rules) that are physically interpretable to a certain extent and of allowing the explicit introduction of expert knowledge through pre-defined rules. The study focuses on the application of fuzzy-rule based modeling to nitrate simulation in large catchments, in particular for decision support. Fuzzy-rule based modeling enables the generation of simple, efficient, easily understandable models with nevertheless satisfactory accuracy for problems of decision support. The chosen approach encompasses hybrid metamodeling, which includes the generation of fuzzy rules from data originating from physically based models as well as coupling with a physically based water balance model. The ecohydrological model SWAT is employed both to generate the needed training data and as the coupled water balance model. The conceptual model divides the nitrate pathway into three parts: the first fuzzy module calculates nitrate leaching with the percolating water from the soil surface to the groundwater, the second module simulates groundwater passage, and the final module replaces the in-stream processes. The aim of this modularization is to create flexibility for using each of the modules on its own and for changing or completely replacing it. For fuzzy-rule based modeling this explicitly means that one module can be re-trained with newly available data without modifying the module assembly. Apart from the concept of hybrid metamodeling, first results are presented for the fuzzy module for nitrate passage through the unsaturated zone.
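To make the fuzzy-rule machinery concrete, here is a minimal Mamdani-style inference step for a hypothetical nitrate-leaching module. The inputs, breakpoints, rules, and output levels are invented for illustration; the study derives its rules from SWAT training data rather than hand-coding them:

```python
import numpy as np

def ramp_up(x, a, b):    # membership 0 below a, rising linearly to 1 at b
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def ramp_down(x, a, b):  # membership 1 below a, falling linearly to 0 at b
    return 1.0 - ramp_up(x, a, b)

def nitrate_leaching(percolation_mm, n_surplus_kg_ha):
    """Two-input rule base with min as AND and weighted-mean defuzzification."""
    p_lo, p_hi = ramp_down(percolation_mm, 50, 200), ramp_up(percolation_mm, 50, 200)
    n_lo, n_hi = ramp_down(n_surplus_kg_ha, 20, 80), ramp_up(n_surplus_kg_ha, 20, 80)
    rules = [  # (firing strength, crisp output level in kg N/ha)
        (min(p_lo, n_lo), 5.0),
        (min(p_hi, n_lo), 20.0),
        (min(p_lo, n_hi), 30.0),
        (min(p_hi, n_hi), 60.0),
    ]
    w = sum(s for s, _ in rules)
    return sum(s * out for s, out in rules) / w if w else 0.0

print(nitrate_leaching(120.0, 55.0))
```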
Logs Analysis of Adapted Pedagogical Scenarios Generated by a Simulation Serious Game Architecture
ERIC Educational Resources Information Center
Callies, Sophie; Gravel, Mathieu; Beaudry, Eric; Basque, Josianne
2017-01-01
This paper presents an architecture designed for simulation serious games, which automatically generates game-based scenarios adapted to the learner's learning progression. We present three central modules of the architecture: (1) the learner model, (2) the adaptation module and (3) the logs module. The learner model estimates the progression of the…
Dynamics of a Flywheel Energy Storage System Supporting a Wind Turbine Generator in a Microgrid
NASA Astrophysics Data System (ADS)
Nair S, Gayathri; Senroy, Nilanjan
2016-02-01
Integration of an induction-machine-based flywheel energy storage system with a wind energy conversion system is implemented in this paper. The nonlinear and linearized models of the flywheel are studied and compared, and a reduced-order model is simulated to analyze the influence of the flywheel inertia and control on the system response during a wind power change. A quantification of the relation between the flywheel inertia and the controller gain is obtained, which allows the system to be treated as a reduced-order model that is more controllable. A microgrid setup comprising the flywheel energy storage system, a two-mass model of a DFIG-based wind turbine generator, and a reduced-order model of a diesel generator is used to accurately analyse the microgrid dynamics in the event of frequency variations arising from wind power changes. The response of the microgrid with and without the flywheel is studied.
Entropy generation method to quantify thermal comfort.
Boregowda, S C; Tiwari, S N; Chaturvedi, S K
2001-12-01
The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves the development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and entropy generation. As a result of this logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well-established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite-element-based KSU human thermal computer model is utilized as a "Computational Environmental Chamber" to conduct a series of simulations examining human thermal responses to different environmental conditions. The output from the simulation, which includes the human thermal responses, and the input data consisting of environmental conditions are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values; the PMV values are generated by inserting the corresponding air temperatures and vapor pressures used in the computer simulation into the regression equation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One practical application of this index is that it could be integrated into thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles.
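The core second-law bookkeeping is simple to state. As a minimal sketch, assuming only a single sensible heat flow from skin to environment (the paper's full balance also includes physiological and evaporative terms we omit here):

```python
def entropy_generation(q_skin_w, t_skin_k, t_env_k):
    """Entropy generation rate (W/K) for heat q flowing from skin at T_skin
    to the environment at T_env: the heat leaves at one temperature and is
    absorbed at a lower one, so the net entropy change is positive."""
    return q_skin_w / t_env_k - q_skin_w / t_skin_k

# e.g. ~100 W of sensible heat loss, skin at 306 K, room at 296 K:
print(entropy_generation(100.0, 306.0, 296.0))  # ~0.011 W/K, > 0 as required
```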
Riemannian multi-manifold modeling and clustering in brain networks
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Salsabilian, Shiva; Wack, David S.; Muldoon, Sarah F.; Baidoo-Williams, Henry E.; Vettel, Jean M.; Cieslak, Matthew; Grafton, Scott T.
2017-08-01
This paper introduces Riemannian multi-manifold modeling in the context of brain-network analytics: brain-network time series yield features which are modeled as points lying in or close to a union of a finite number of submanifolds within a known Riemannian manifold. Distinguishing disparate time series thus amounts to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for brain-network time series are put forth. The first is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points on the Grassmann manifold. The second utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Based on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their Riemannian-geometry properties. Numerical tests on time series, synthetically generated from real brain-network structural connectivity matrices, reveal that the proposed scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.
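Working with points in the manifold of positive-definite matrices requires a manifold-aware distance. As a small, hedged illustration, here is one common choice, the log-Euclidean metric; the paper's own clustering algorithm is not reproduced:

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_dist(p, q):
    """Distance between two symmetric positive-definite (SPD) matrices under
    the log-Euclidean metric: Frobenius norm of the difference of matrix logs."""
    return np.linalg.norm(logm(p) - logm(q), ord='fro')

# e.g. two ridge-regularized covariance matrices standing in for the
# kernel-based partial-correlation features:
rng = np.random.default_rng(0)
a, b = rng.normal(size=(5, 20)), rng.normal(size=(5, 20))
P = np.cov(a) + 1e-3 * np.eye(5)
Q = np.cov(b) + 1e-3 * np.eye(5)
print(log_euclidean_dist(P, Q))
```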
High Speed Civil Transport Aircraft Simulation: Reference-H Cycle 1, MATLAB Implementation
NASA Technical Reports Server (NTRS)
Sotack, Robert A.; Chowdhry, Rajiv S.; Buttrill, Carey S.
1999-01-01
The mathematical model and associated code to simulate a high-speed civil transport aircraft - the Boeing Reference H configuration - are described. The simulation was constructed in support of advanced control law research. In addition to providing time histories of the dynamic response, the code includes capabilities for calculating trim solutions and for generating linear models. The simulation relies on the nonlinear, six-degree-of-freedom equations which govern the motion of a rigid aircraft in atmospheric flight. The 1962 Standard Atmosphere Tables are used along with a turbulence model to simulate the Earth's atmosphere. The aircraft model has three parts - an aerodynamic model, an engine model, and a mass model. These models use data from the Boeing Reference H cycle 1 simulation database. Models for the actuator dynamics, landing gear, and flight control system are not included in this aircraft model. Dynamic responses generated by the nonlinear simulation are presented and compared with results generated from alternate simulations at Boeing Commercial Aircraft Company and NASA Langley Research Center. Dynamic responses generated using linear models are also presented and compared with responses generated using the nonlinear simulation.
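Generating linear models from a nonlinear simulation is typically done by numerically perturbing the state and control vectors about a trim point. A generic finite-difference sketch of that step follows; the report's exact perturbation sizes and scheme are not specified here, so treat the details as assumptions:

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Finite-difference linearization x_dot ~ A(x - x0) + B(u - u0) about a
    trim point (x0, u0), where f(x, u) returns the state derivative vector."""
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    f0 = np.asarray(f(x0, u0))
    for i in range(n):                       # perturb each state in turn
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (np.asarray(f(x0 + dx, u0)) - f0) / eps
    for j in range(m):                       # perturb each control in turn
        du = np.zeros(m); du[j] = eps
        B[:, j] = (np.asarray(f(x0, u0 + du)) - f0) / eps
    return A, B
```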
Tanabe, Katsuaki
2016-01-01
We modeled the dynamics of hydrogen and deuterium adsorbed on palladium nanoparticles, including the heat generation induced by chemical adsorption and desorption as well as palladium-catalyzed reactions. Our calculations based on the proposed model reproduce the experimental time evolution of pressure and temperature with a single set of fitting parameters for hydrogen and deuterium injection. The model, built on a highly generalized set of formulations, can be applied to any combination of a gas species and a catalytic adsorbent/absorbent. Our model can be used as a basis for future research into hydrogen storage and solid-state nuclear fusion technologies.
ERIC Educational Resources Information Center
Zimmerlin, Timothy A.; And Others
An effort to construct a model of the thermal properties of materials based on theoretical thermo-electromagnetic models, to construct a data base of the dense cultural hospital scene according to Defense Mapping Agency Aerospace Center (DMAAC) specifications, and to design and implement a program to evaluate the tonal model and generate imagery…
Constructing Agent Model for Virtual Training Systems
NASA Astrophysics Data System (ADS)
Murakami, Yohei; Sugimoto, Yuki; Ishida, Toru
Constructing highly realistic agents is essential if agents are to be employed in virtual training systems. In training for collaboration based on face-to-face interaction, the generation of emotional expressions is one key. In training for guidance based on one-to-many interaction, such as direction giving for evacuations, emotional expressions must be supplemented by diverse agent behaviors to make the training realistic. To reproduce diverse behavior, we characterize agents by using various combinations of operation rules instantiated by the user operating the agent. To accomplish this goal, we introduce a user modeling method based on participatory simulations. These simulations enable us to acquire the information observed by each user in the simulation and the operating history. Using these data and domain knowledge including known operation rules, we can generate an explanation for each behavior. Moreover, applying hypothetical reasoning, which offers consistent selection of hypotheses, to the generation of explanations allows us to use otherwise incompatible operation rules as domain knowledge. In order to validate the proposed modeling method, we apply it to the acquisition of an evacuee's model in a fire-drill experiment. We successfully acquire a subject's model corresponding to the results of an interview with the subject.
NASA Astrophysics Data System (ADS)
Arya, L. D.; Koshti, Atul
2018-05-01
This paper investigates Distributed Generation (DG) capacity optimization at locations selected on incremental voltage sensitivity criteria for a sub-transmission network. The Modified Shuffled Frog Leaping Algorithm (MSFLA) has been used to optimize the DG capacity. An induction generator model of DG (wind-based generating units) has been considered for the study, using the standard IEEE 30-bus test system. The obtained results are also validated against the shuffled frog leaping algorithm and a modified version of bare-bones particle swarm optimization (BBExp). MSFLA has been found to be more efficient than the other two algorithms for the real power loss minimization problem.
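For context, the canonical shuffled-frog-leaping update that MSFLA modifies is easy to sketch: the population is sorted by fitness, dealt into memeplexes, and each memeplex's worst frog leaps toward its local best, then toward the global best, and is randomized if neither leap improves it. Below is one minimal, illustrative generation; the paper's specific modifications and its power-flow fitness function are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def sfla_step(pop, fitness, n_memeplex=4):
    """One canonical shuffled-frog-leaping generation.
    pop: (n, dim) array of candidate solutions (e.g. DG capacities);
    fitness: callable returning a loss to minimize (e.g. real power loss)."""
    pop = pop[np.argsort([fitness(p) for p in pop])]   # sort, best first
    g_best = pop[0]
    for m in range(n_memeplex):
        idx = np.arange(m, len(pop), n_memeplex)       # deal into memeplexes
        local = pop[idx]
        b, w = local[0], local[-1]                     # memeplex best / worst
        for leader in (b, g_best):                     # local then global leap
            trial = w + rng.random() * (leader - w)
            if fitness(trial) < fitness(w):
                w = trial
                break
        else:                                          # censor: random frog
            w = rng.uniform(pop.min(0), pop.max(0))
        pop[idx[-1]] = w
    return pop
```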
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2016-03-01
How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association with either PFS or OS in the group of patients not receiving maintenance bevacizumab. This study therefore demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
Label fusion based brain MR image segmentation via a latent selective model
NASA Astrophysics Data System (ADS)
Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu
2018-04-01
Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirements for higher accuracy, faster segmentation, and robustness remain a great challenge. In this paper, we propose a novel algorithm that combines the two branches, using a global weighted fusion strategy based on a patch latent selective model, to segment specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function for the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and treated as a separate label, putting the background and the regions of interest on an equal footing. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.
Agent-Based Simulation for Interconnection-Scale Renewable Integration and Demand Response Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, David P.; Behboodi, Sahand; Crawford, Curran
This paper collects and synthesizes the technical requirements, implementation, and validation methods for quasi-steady agent-based simulations of interconnection-scale models, with particular attention to the integration of renewable generation and controllable loads. Approaches for modeling aggregated controllable loads are presented and placed in the same control and economic modeling framework as generation resources for interconnection planning studies. Model performance is examined with system parameters that are typical for an interconnection approximately the size of the Western Electricity Coordinating Council (WECC) and a control area about 1/100 the size of the system. These results are used to demonstrate and validate the methods presented.
Model authoring system for fail safe analysis
NASA Technical Reports Server (NTRS)
Sikora, Scott E.
1990-01-01
The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object-oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which are then searched to generate either a graphical fault tree analysis or a failure modes and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.
Generating human-like movements on an anthropomorphic robot using an interior point method
NASA Astrophysics Data System (ADS)
Costa e Silva, E.; Araújo, J. P.; Machado, D.; Costa, M. F.; Erlhagen, W.; Bicho, E.
2013-10-01
In previous work we presented a model for generating human-like arm and hand movements on an anthropomorphic robot involved in human-robot collaboration tasks. This model was inspired by the Posture-Based Motion-Planning Model of human movements. Numerical results and simulations for reach-to-grasp movements with two different grip types were presented previously. In this paper we extend our model to address the generation of more complex movement sequences in scenarios cluttered with obstacles. The numerical results were obtained using the IPOPT solver, which was integrated into our MATLAB simulator of an anthropomorphic robot.
A Structural Model Decomposition Framework for Hybrid Systems Diagnosis
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2015-01-01
Nowadays, a large number of practical systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete modes of behavior, each defined by a set of continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task very challenging. In this work, we present a new modeling and diagnosis framework for hybrid systems. Models are composed from sets of user-defined components using a compositional modeling approach. Submodels for residual generation are then generated for a given mode, and reconfigured efficiently when the mode changes. Efficient reconfiguration is established by exploiting causality information within the hybrid system models. The submodels can then be used for fault diagnosis based on residual generation and analysis. We demonstrate the efficient causality reassignment, submodel reconfiguration, and residual generation for fault diagnosis using an electrical circuit case study.
[A dynamic model of the extravehicular activity space suit].
Yang, Feng; Yuan, Xiu-gan
2002-12-01
Objective. To establish a dynamic model of the space suit based on its particular configuration. Method. The mass of the space suit components, the moments of inertia, the mobility of the space suit joints, and the suit-generated torques were considered in this model. Expressions to calculate the moments of inertia were developed by simplifying the geometry of the space suit. A modified Preisach model was used to mathematically describe the hysteretic torque characteristics of joints in a pressurized space suit, and it was implemented numerically based on the observed suit parameters. Result. A dynamic model considering mass, moment of inertia and suit-generated torques was established. Conclusion. This dynamic model provides elements for the dynamic simulation of astronaut extravehicular activity.
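The Preisach construction referenced above superposes many two-state relay hysterons with different switching thresholds. Below is a generic discretized scalar Preisach operator for a joint's torque-angle hysteresis; it is a textbook form with made-up weights, not the paper's modified model or measured suit parameters:

```python
import numpy as np

class PreisachJoint:
    """Scalar Preisach operator: joint torque as a weighted sum of relays."""
    def __init__(self, n=40, weight=1.0, lo=-1.0, hi=1.0):
        a, b = np.meshgrid(np.linspace(lo, hi, n), np.linspace(lo, hi, n))
        self.alpha, self.beta = a, b          # up/down switching thresholds
        self.mask = a >= b                    # valid hysterons only
        self.state = -np.ones_like(a)         # all relays start 'down'
        self.w = weight / self.mask.sum()     # uniform (made-up) weights

    def torque(self, angle):
        self.state[angle >= self.alpha] = 1.0   # relays switch up
        self.state[angle <= self.beta] = -1.0   # relays switch down
        return float((self.w * self.state[self.mask]).sum())

joint = PreisachJoint()
up = [joint.torque(x) for x in np.linspace(-1, 1, 50)]
down = [joint.torque(x) for x in np.linspace(1, -1, 50)]
# 'up' and 'down' trace different branches: the loading history is remembered.
```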
NASA Astrophysics Data System (ADS)
Ijjas, Anna; Steinhardt, Paul J.
2015-10-01
We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.
Optimization and Validation of Rotating Current Excitation with GMR Array Sensors for Riveted
2016-09-16
distribution. Simulation results, using both an optimized coil and a conventional coil, are generated using the finite element method (FEM) model. The signal magnitude for an optimized coil is seen to be... A 3D finite element model (FEM) is used to analyze the performance of the optimized coil and...
NASA Astrophysics Data System (ADS)
Nardi, F.; Grimaldi, S.; Petroselli, A.
2012-12-01
Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS) provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of fully continuous hydrologic-hydraulic modelling optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series that are routed along the channel using a two-dimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events of channel runoff generation, propagation, and overland flow within the floodplain domain. This physically-based model obviates the need for the synthetic design hyetograph and hydrograph estimation that constitutes the main source of subjective analysis and uncertainty in standard methods for flood mapping. Selected case studies show results and performances of the proposed procedure with respect to standard event-based approaches.
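The model chain can be sketched end to end in a few lines: a stochastic rainfall generator feeds a continuous rainfall-runoff store whose outflow would then be routed hydraulically. The intermittent exponential rainfall and single linear reservoir below are simplifying assumptions, not the components of the proposed procedure.

# Minimal sketch of "rainfall generator -> continuous rainfall-runoff".
import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 365
# Wet hours occur with 5% probability; wet-hour depth is exponential (mm/h).
rain = rng.exponential(2.0, hours) * (rng.random(hours) < 0.05)

k, storage, runoff = 20.0, 0.0, []    # linear reservoir, residence time k (h)
for p in rain:
    storage += p                      # recharge from rainfall
    q = storage / k                   # linear-reservoir outflow
    storage -= q
    runoff.append(q)                  # continuous flow series to be routed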
NASA Astrophysics Data System (ADS)
Millstein, D.; Zhai, P.; Menon, S.
2011-12-01
Over the past decade significant reductions of NOx and SOx emissions from coal burning power plants in the U.S. have been achieved due to regulatory action and substitution of new generation towards natural gas and wind power. Low natural gas prices, ever decreasing solar generation costs, and proposed regulatory changes, such as to the Cross State Air Pollution Rule, promise further long-run coal power plant emission reductions. Reduced power plant emissions have the potential to affect ozone and particulate air quality and influence regional climate through aerosol-cloud interactions and visibility effects. Here we investigate, on a national scale, the effects on future (~2030) air quality and regional climate of power plant emission regulations in contrast to and in combination with policies designed to aggressively promote solar electricity generation. A sophisticated, economic and engineering based, hourly power generation dispatch model is developed to explore the integration of significant solar generation resources (>10% on an energy basis) at various regions across the country, providing detailed estimates of the substitution of solar generation for fossil fuel generation resources. Future air pollutant emissions from all sectors of the economy are scaled based on the U.S. Environmental Protection Agency's National Emission Inventory to account for activity changes based on population and economic projections derived from county-level U.S. Census data and the Energy Information Administration's Annual Energy Outlook. Further adjustments are made for technological and regulatory changes applicable within various sectors, for example, emission intensity adjustments to on-road diesel trucking due to exhaust treatment and improved engine design. The future year 2030 is selected for the emissions scenarios to allow for the development of significant solar generation resources. A regional climate and air quality model (the Weather Research and Forecasting, WRF, model) is used to investigate the effects of the various solar generation scenarios given emissions projections that account for the changing regulatory environment, economic and population growth, and technological change. The results will help to quantify the potential air quality benefits of promoting solar electricity generation in regions with high penetration of coal-fired power generation. Note that current national solar incentives are based only on solar generation capacity. Further investigation of changes to regional climate due to emission reductions of aerosols and relevant precursors will provide insight into the environmental effects that may occur if solar power generation becomes widespread.
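The dispatch logic at the core of such a model can be sketched as an hourly merit order in which zero-marginal-cost solar displaces the most expensive units first; the unit list, capacities, and costs below are illustrative assumptions.

# Minimal sketch of an hourly merit-order dispatch with solar.
units = [("coal", 6000, 20.0), ("gas_cc", 4000, 35.0), ("gas_ct", 2000, 60.0)]
units.sort(key=lambda u: u[2])                 # (name, MW, $/MWh), cheapest first

def dispatch(load_mw, solar_mw):
    remaining = max(load_mw - solar_mw, 0.0)   # solar is taken first (zero cost)
    schedule = {}
    for name, cap, _cost in units:
        g = min(cap, remaining)
        schedule[name] = g
        remaining -= g
    return schedule

print(dispatch(load_mw=9000, solar_mw=1500))   # solar backs out marginal gas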
A model-based exploration of the role of pattern generating circuits during locomotor adaptation.
Marjaninejad, Ali; Finley, James M
2016-08-01
In this study, we used a model-based approach to explore the potential contributions of central pattern generating circuits (CPGs) during adaptation to external perturbations during locomotion. We constructed a neuromechanical model of locomotion using a reduced-phase CPG controller and an inverted pendulum mechanical model. Two different forms of locomotor adaptation were examined in this study: split-belt treadmill adaptation and adaptation to a unilateral, elastic force field. For each simulation, we first examined the effects of phase resetting and varying the model's initial conditions on the resulting adaptation. After evaluating the effect of phase resetting on the adaptation of step length symmetry, we examined the extent to which the results from these simple models could explain previous experimental observations. We found that adaptation of step length symmetry during split-belt treadmill walking could be reproduced using our model, but this model failed to replicate patterns of adaptation observed in response to force field perturbations. Given that spinal animal models can adapt to both of these types of perturbations, our findings suggest that there may be distinct features of pattern generating circuits that mediate each form of adaptation.
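A reduced-phase CPG of the kind used here can be sketched as two coupled phase oscillators held in anti-phase, with an optional phase reset applied at a perturbation event; the coupling form, gains, and reset rule below are assumptions for illustration, not the paper's controller.

# Minimal sketch of a two-oscillator reduced-phase CPG with phase reset.
import numpy as np

def simulate(T=20.0, dt=1e-3, omega=2 * np.pi, K=2.0, reset_at=10.0):
    phi = np.array([0.0, np.pi])                     # left/right leg phases
    out = []
    for step in range(int(T / dt)):
        t = step * dt
        coup = K * np.sin(phi[::-1] - phi - np.pi)   # pulls toward anti-phase
        phi = phi + dt * (omega + coup)
        if reset_at is not None and abs(t - reset_at) < dt / 2:
            phi[0] = phi[1] - np.pi                  # phase reset of one leg
        out.append(phi.copy())
    return np.array(out)

phases = simulate()   # step timing follows the phase difference phi[0]-phi[1]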
Antiresonance induced spin-polarized current generation
NASA Astrophysics Data System (ADS)
Yin, Sun; Min, Wen-Jing; Gao, Kun; Xie, Shi-Jie; Liu, De-Sheng
2011-12-01
According to the one-dimensional antiresonance effect (Wang X R, Wang Y and Sun Z Z 2003 Phys. Rev. B 65 193402), we propose a possible spin-polarized current generation device. Our proposed model consists of one chain and an impurity coupling to the chain. The energy level of the impurity can be occupied by an electron with a specific spin, and the electron with such a spin is blocked because of the antiresonance effect. Based on this phenomenon our model can generate the spin-polarized current flowing through the chain due to different polarization rates. On the other hand, the device can also be used to measure the generated spin accumulation. Our model is feasible with today's technology.
On models of the genetic code generated by binary dichotomic algorithms.
Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz
2015-02-01
In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). A BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
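The BDA idea can be sketched directly: each dichotomic question splits the 64 codons into two classes, and applying questions sequentially refines the partition. The simplified position-and-base-set question form below is an assumption for illustration; the BDAs of the paper are more general.

# Minimal sketch of sequential binary dichotomic partitions of the codons.
from itertools import product

CODONS = ["".join(c) for c in product("ACGU", repeat=3)]

def dichotomy(codon, pos, bases):
    return 1 if codon[pos] in bases else 0

def classify(codon, questions):
    # The answer vector is the class label; k questions -> up to 2**k classes.
    return tuple(dichotomy(codon, p, b) for p, b in questions)

questions = [(0, {"C", "G"}), (1, {"A", "U"})]     # two illustrative BDAs
classes = {}
for c in CODONS:
    classes.setdefault(classify(c, questions), []).append(c)
print({k: len(v) for k, v in classes.items()})     # four classes of 16 codons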
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-07-25
This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve other statistical characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
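One of the four families, the ARMA generator, can be sketched as follows: simulate an ARMA recursion and rescale it to the target mean and standard deviation of the historical forecast errors. The coefficients and the 3% error level are illustrative assumptions.

# Minimal sketch of an ARMA(1,1) forecast-error generator.
import numpy as np

def arma_errors(n, phi=0.8, theta=0.2, mu=0.0, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    e, x = rng.standard_normal(n), np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t] + theta * e[t - 1]   # ARMA(1,1) recursion
    x = (x - x.mean()) / x.std()        # rescale to the target moments
    return mu + sigma * x

errors = arma_errors(8760, sigma=0.03)  # e.g., 3% day-ahead load error series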
Comparing supply and demand models for future photovoltaic power generation in the USA
Basore, Paul A.; Cole, Wesley J.
2018-02-22
We explore the plausible range of future deployment of photovoltaic generation capacity in the USA using a supply-focused model based on supply-chain growth constraints and a demand-focused model based on minimizing the overall cost of the electricity system. Both approaches require assumptions based on previous experience and anticipated trends. For each of the models, we assign plausible ranges for the key assumptions and then compare the resulting PV deployment over time. Each model was applied to 2 different future scenarios: one in which PV market penetration is ultimately constrained by the uncontrolled variability of solar power and one in which low-cost energy storage or some equivalent measure largely alleviates this constraint. The supply-focused and demand-focused models are in substantial agreement, not just in the long term, where deployment is largely determined by the assumed market penetration constraints, but also in the interim years. For the future scenario without low-cost energy storage or equivalent measures, the 2 models give an average plausible range of PV generation capacity in the USA of 150 to 530 GWdc in 2030 and 260 to 810 GWdc in 2040. With low-cost energy storage or equivalent measures, the corresponding ranges are 160 to 630 GWdc in 2030 and 280 to 1200 GWdc in 2040. The latter range is enough to supply 10% to 40% of US electricity demand in 2040, based on current demand growth.
An empirical generative framework for computational modeling of language acquisition.
Waterfall, Heidi R; Sandbank, Ben; Onnis, Luca; Edelman, Shimon
2010-06-01
This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of generative grammars from raw CHILDES data and give an account of the generative performance of the acquired grammars. Next, we summarize findings from recent longitudinal and experimental work that suggests how certain statistically prominent structural properties of child-directed speech may facilitate language acquisition. We then present a series of new analyses of CHILDES data indicating that the desired properties are indeed present in realistic child-directed speech corpora. Finally, we suggest how our computational results, behavioral findings, and corpus-based insights can be integrated into a next-generation model aimed at meeting the four requirements of our modeling framework.
Simulation for Grid Connected Wind Turbines with Fluctuating
NASA Astrophysics Data System (ADS)
Ye, Ying; Fu, Yang; Wei, Shurong
This paper establishes the whole dynamic model of a wind turbine generator system, which contains the wind speed model and the DFIG wind turbine model. A simulation sample based on the mathematical models is built using MATLAB. The performance characteristics of doubly-fed induction generators (DFIG) connected to the power grid are studied under a three-phase ground fault and under disturbance by gust and mixed wind. The capacity of the wind farm, which consists of doubly-fed wind generators (DFIG), is 9 MW. Simulation results demonstrate that a three-phase ground fault occurring on the grid side has little effect on the stability of the doubly-fed wind generators. However, as a power source, fluctuations of the wind speed have a large impact on the stability of the doubly-fed wind generators. The results also show that if the two disturbances occur at the same time, the situation becomes very serious.
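A composite wind-speed input of the kind used to disturb such a model can be sketched as a base speed plus gust and noise terms; the amplitudes, timing, and gust shape below are illustrative assumptions, not the paper's wind model.

# Minimal sketch of a composite wind-speed signal (base + gust + noise).
import numpy as np

def wind_speed(t, base=12.0, gust_amp=3.0, t0=20.0, t1=30.0, noise=0.5):
    rng = np.random.default_rng(1)
    gust = np.where((t >= t0) & (t <= t1),
                    gust_amp * np.sin(np.pi * (t - t0) / (t1 - t0)) ** 2, 0.0)
    return base + gust + noise * rng.standard_normal(t.shape)

t = np.linspace(0, 60, 6001)
v = wind_speed(t)   # fed to the turbine/DFIG model as the wind input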
On agent-based modeling and computational social science.
Conte, Rosaria; Paolucci, Mario
2014-01-01
In the first part of the paper, the field of agent-based modeling (ABM) is discussed focusing on the role of generative theories, aiming at explaining phenomena by growing them. After a brief analysis of the major strengths of the field some crucial weaknesses are analyzed. In particular, the generative power of ABM is found to have been underexploited, as the pressure for simple recipes has prevailed and shadowed the application of rich cognitive models. In the second part of the paper, the renewal of interest for Computational Social Science (CSS) is focused upon, and several of its variants, such as deductive, generative, and complex CSS, are identified and described. In the concluding remarks, an interdisciplinary variant, which takes after ABM, reconciling it with the quantitative one, is proposed as a fundamental requirement for a new program of the CSS.
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
Pharmacophore-Map-Pick: A Method to Generate Pharmacophore Models for All Human GPCRs.
Dai, Shao-Xing; Li, Gong-Hua; Gao, Yue-Dong; Huang, Jing-Fei
2016-02-01
GPCR-based drug discovery is hindered by a lack of effective screening methods for most GPCRs that have neither ligands nor high-quality structures. With the aim to identify lead molecules for these GPCRs, we developed a new method called Pharmacophore-Map-Pick to generate pharmacophore models for all human GPCRs. The model of ADRB2 generated using this method not only predicts the binding mode of ADRB2-ligands correctly but also performs well in virtual screening. Findings also demonstrate that this method is powerful for generating high-quality pharmacophore models. The average enrichment for the pharmacophore models of the 15 targets in different GPCR families reached 15-fold at 0.5 % false-positive rate. Therefore, the pharmacophore models can be applied in virtual screening directly with no requirement for any ligand information or shape constraints. A total of 2386 pharmacophore models for 819 different GPCRs (99 % coverage (819/825)) were generated and are available at http://bsb.kiz.ac.cn/GPCRPMD. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Arai, Yukiko; Aoki, Hitoshi; Abe, Fumitaka; Todoroki, Shunichiro; Khatami, Ramin; Kazumi, Masaki; Totsuka, Takuya; Wang, Taifeng; Kobayashi, Haruo
2015-04-01
1/f noise is one of the most important characteristics for designing analog/RF circuits, including operational amplifiers and oscillators. We have analyzed and developed a novel 1/f noise model in the strong inversion, saturation, and sub-threshold regions based on the SPICE2-type model used in the public metal-oxide-semiconductor field-effect transistor (MOSFET) models developed by the University of California, Berkeley. Our model contains two noise generation mechanisms, namely mobility fluctuation and interface trap number fluctuation. Noise variability dependent on gate voltage is also newly implemented in our model. The proposed model has been implemented in the BSIM4 model of a SPICE3-compatible circuit simulator. Parameters of the proposed model are extracted from 1/f noise measurements for simulation verification. The simulation results show excellent agreement between measurements and simulations.
An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hudec, Ján; Gramatová, Elena
2015-07-01
The paper presents a new functional test generation method for processor testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description; such functional tests belong to software-oriented testing. Quality of the tests is evaluated by code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. Rules, parameters and fitness functions were defined for the various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC-type processor DP32.
Fossett, Mark
2011-01-01
This paper considers the potential for using agent models to explore theories of residential segregation in urban areas. Results of generative experiments conducted using an agent-based simulation of segregation dynamics document that varying a small number of model parameters representing constructs from urban-ecological theories of segregation can generate a wide range of qualitatively distinct and substantively interesting segregation patterns. The results suggest how complex, macro-level patterns of residential segregation can arise from a small set of simple micro-level social dynamics operating within particular urban-demographic contexts. The promise and current limitations of agent simulation studies are noted and optimism is expressed regarding the potential for such studies to engage and contribute to the broader research literature on residential segregation. PMID:21379372
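The kind of micro-to-macro dynamic described here can be sketched with a Schelling-style model: agents relocate when too few neighbors share their group. Grid size, tolerance, and the random-move rule are illustrative assumptions, not the paper's calibrated urban-ecological parameters.

# Minimal sketch of an agent-based segregation dynamic in the Schelling
# tradition.
import numpy as np

rng = np.random.default_rng(0)
N, tol = 50, 0.5
grid = rng.choice([0, 1, 2], size=(N, N), p=[0.1, 0.45, 0.45])  # 0 = vacant

def unhappy(g, i, j):
    nb = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
    same = np.sum(nb == g[i, j]) - 1          # similar neighbors (minus self)
    occ = np.sum(nb != 0) - 1                 # occupied neighbors (minus self)
    return occ > 0 and same / occ < tol       # too few similar neighbors

for _ in range(100_000):                      # random sequential moves
    i, j = rng.integers(N, size=2)
    if grid[i, j] != 0 and unhappy(grid, i, j):
        vi, vj = rng.integers(N, size=2)
        if grid[vi, vj] == 0:                 # relocate to a random vacant cell
            grid[vi, vj], grid[i, j] = grid[i, j], 0

Even with a mild tolerance of 0.5, repeated local moves produce strongly clustered macro-level patterns, which is the qualitative point of such experiments.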
Next-generation genome-scale models for metabolic engineering.
King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O
2015-12-01
Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods is now being developed, encompassing many biological processes and simulation strategies, and these next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.
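The core COBRA computation, flux balance analysis, can be sketched as a linear program: maximize a biomass flux subject to steady-state mass balance S v = 0 and flux bounds. The three-reaction toy network below is an illustrative assumption standing in for a genome-scale reconstruction.

# Minimal sketch of flux balance analysis as a linear program.
import numpy as np
from scipy.optimize import linprog

# Toy network: A_ext -> A -> B -> biomass
S = np.array([[1, -1,  0],      # metabolite A balance
              [0,  1, -1]])     # metabolite B balance
c = np.array([0, 0, -1])        # linprog minimizes, so negate biomass flux
bounds = [(0, 10), (0, 10), (0, 10)]

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", -res.fun, "fluxes:", res.x)

Strain-design methods in this family then ask which bound changes (gene knockouts or overexpressions) shift this optimum toward the desired product.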
Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads
Moon, Jae; Manuel, Lance; Churchfield, Matthew; ...
2017-12-28
Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.
Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2004-01-01
A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
A bootstrap based space-time surveillance model with an application to crime occurrences
NASA Astrophysics Data System (ADS)
Kim, Youngho; O'Kelly, Morton
2008-06-01
This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, using population-at-risk data to generate expected values, have resulting hotspots bounded by administrative area units and are of limited use for near-real-time applications because of the population data needed. However, this study generates expected values for local hotspots from past occurrences rather than population at risk. Also, bootstrap permutations of previous occurrences are used for significance tests. Consequently, the bootstrap-based model, without the requirement of population-at-risk data, (1) is free from administrative area restriction, (2) enables more frequent surveillance for continuously updated registry databases, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, year 2000.
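The significance test can be sketched as follows: today's count in a cell is compared against bootstrap resamples of that cell's own history. The window length, number of resamples, and threshold below are illustrative assumptions, not the paper's exact procedure.

# Minimal sketch of a bootstrap test for an emerging local hotspot.
import numpy as np

def emerging_hotspot(todays_count, past_counts, n_boot=999, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Bootstrap the expected count for this cell from its own history.
    boot = rng.choice(past_counts, size=(n_boot, len(past_counts)),
                      replace=True).mean(axis=1)
    p = (np.sum(boot >= todays_count) + 1) / (n_boot + 1)
    return p < alpha                          # flag cell as emerging hotspot

print(emerging_hotspot(9, past_counts=np.array([1, 2, 0, 3, 1, 2, 1])))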
Xu, Jiajia; Li, Yuanyuan; Ma, Xiuling; Ding, Jianfeng; Wang, Kai; Wang, Sisi; Tian, Ye; Zhang, Hui; Zhu, Xin-Guang
2013-09-01
Setaria viridis is an emerging model species for genetic studies of C4 photosynthesis, and many basic molecular resources need to be developed to support work on this species. In this paper, we performed a comprehensive transcriptome analysis of multiple developmental stages and tissues of S. viridis using next-generation sequencing technologies. Sequencing of the transcriptome from multiple tissues across three developmental stages (seed germination, vegetative growth, and reproduction) yielded a total of 71 million single-end 100-bp reads. Reference-based assembly using the Setaria italica genome as a reference generated 42,754 transcripts; de novo assembly generated 60,751 transcripts. In addition, 9,576 and 7,056 potential simple sequence repeats (SSRs) covering the S. viridis genome were identified using the reference-based assembled transcripts and the de novo assembled transcripts, respectively. The transcripts and SSRs identified by this study can be used for both reverse and forward genetic studies based on S. viridis.
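SSR detection of this kind can be sketched with tandem-repeat regular expressions over the assembled transcripts; the minimum-repeat thresholds below are illustrative assumptions in the spirit of common SSR-search tools, not necessarily those used in the study.

# Minimal sketch of simple-sequence-repeat (SSR) detection.
import re

MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}   # motif length -> copies

def find_ssrs(seq):
    hits = []
    for k, n in MIN_REPEATS.items():
        # e.g. for k=2, n=6: ([ACGT]{2})\1{5,} matches 6+ tandem copies
        for m in re.finditer(r"([ACGT]{%d})\1{%d,}" % (k, n - 1), seq):
            hits.append((m.start(), m.group(1), len(m.group(0)) // k))
    return hits                                # (position, motif, copy count)

print(find_ssrs("TTATATATATATATGCGCGCGCGCGCAAA"))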
A Tool for Model-Based Generation of Scenario-driven Electric Power Load Profiles
NASA Technical Reports Server (NTRS)
Rozek, Matthew L.; Donahue, Kenneth M.; Ingham, Michel D.; Kaderka, Justin D.
2015-01-01
Power consumption during all phases of spacecraft flight is of great interest to the aerospace community. As a result, significant analysis effort is exerted to understand the rates of electrical energy generation and consumption under many operational scenarios of the system. Previously, no standard tool existed for creating and maintaining a power equipment list (PEL) of spacecraft components that consume power, and no standard tool existed for generating power load profiles based on this PEL information during mission design phases. This paper presents the Scenario Power Load Analysis Tool (SPLAT) as a model-based systems engineering tool aiming to solve those problems. SPLAT is a plugin for MagicDraw (No Magic, Inc.) that aids in creating and maintaining a PEL, and also generates a power and temporal variable constraint set, in Maple language syntax, based on specified operational scenarios. The constraint set can be solved in Maple to show electric load profiles (i.e. power consumption from loads over time). SPLAT creates these load profiles from three modeled inputs: 1) a list of system components and their respective power modes, 2) a decomposition hierarchy of the system into these components, and 3) the specification of at least one scenario, which consists of temporal constraints on component power modes. In order to demonstrate how this information is represented in a system model, a notional example of a spacecraft planetary flyby is introduced. This example is also used to explain the overall functionality of SPLAT, and how this is used to generate electric power load profiles. Lastly, a cursory review of the usage of SPLAT on the Cold Atom Laboratory project is presented to show how the tool was used in an actual space hardware design application.
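The PEL-plus-scenario idea can be sketched in a few lines: a power equipment list maps each component mode to a draw, and a scenario is a timeline of mode switches. The component names and wattages below are illustrative assumptions, and this sketch evaluates the profile directly rather than emitting Maple constraints as the actual tool does.

# Minimal sketch of load-profile generation from a PEL and a scenario.
PEL = {  # component -> mode -> watts (illustrative values)
    "transponder": {"off": 0.0, "standby": 2.0, "transmit": 18.0},
    "camera":      {"off": 0.0, "imaging": 9.5},
}
SCENARIO = [  # (time_s, component, new_mode)
    (0, "camera", "off"), (0, "transponder", "standby"),
    (60, "camera", "imaging"), (120, "transponder", "transmit"),
]

def load_profile(pel, scenario, t_end, dt=1.0):
    modes = {c: "off" for c in pel}
    events, k = sorted(scenario), 0
    profile, t = [], 0.0
    while t <= t_end:
        while k < len(events) and events[k][0] <= t:
            _, comp, mode = events[k]
            modes[comp] = mode                 # apply mode switch
            k += 1
        profile.append((t, sum(pel[c][m] for c, m in modes.items())))
        t += dt
    return profile

print(load_profile(PEL, SCENARIO, t_end=180)[::60])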
Op den Akker, Harm; Cabrita, Miriam; Op den Akker, Rieks; Jones, Valerie M; Hermens, Hermie J
2015-06-01
This paper presents a comprehensive and practical framework for automatic generation of real-time tailored messages in behavior change applications. Basic aspects of motivational messages are time, intention, content and presentation. Tailoring of messages to the individual user may involve all aspects of communication. A linear modular system is presented for generating such messages. It is explained how properties of user and context are taken into account in each of the modules of the system and how they affect the linguistic presentation of the generated messages. The model of motivational messages presented is based on an analysis of existing literature as well as the analysis of a corpus of motivational messages used in previous studies. The model extends existing 'ontology-based' approaches to message generation for real-time coaching systems found in the literature. Practical examples are given on how simple tailoring rules can be implemented throughout the various stages of the framework. Such examples can guide further research by clarifying what it means to use e.g. user targeting to tailor a message. As primary example we look at the issue of promoting daily physical activity. Future work is pointed out in applying the present model and framework, defining efficient ways of evaluating individual tailoring components, and improving effectiveness through the creation of accurate and complete user- and context models. Copyright © 2015 Elsevier Inc. All rights reserved.
Shirai, Hiroki; Ikeda, Kazuyoshi; Yamashita, Kazuo; Tsuchiya, Yuko; Sarmiento, Jamica; Liang, Shide; Morokata, Tatsuaki; Mizuguchi, Kenji; Higo, Junichi; Standley, Daron M; Nakamura, Haruki
2014-08-01
In the second antibody modeling assessment, we used a semiautomated template-based structure modeling approach for 11 blinded antibody variable region (Fv) targets. The structural modeling method involved several steps, including template selection for framework and canonical structures of complementary determining regions (CDRs), homology modeling, energy minimization, and expert inspection. The submitted models for Fv modeling in Stage 1 had the lowest average backbone root mean square deviation (RMSD) (1.06 Å). Comparison to crystal structures showed the most accurate Fv models were generated for 4 out of 11 targets. We found that the successful modeling in Stage 1 mainly was due to expert-guided template selection for CDRs, especially for CDR-H3, based on our previously proposed empirical method (H3-rules) and the use of position specific scoring matrix-based scoring. Loop refinement using fragment assembly and multicanonical molecular dynamics (McMD) was applied to CDR-H3 loop modeling in Stage 2. Fragment assembly and McMD produced putative structural ensembles with low free energy values that were scored based on the OSCAR all-atom force field and conformation density in principal component analysis space, respectively, as well as the degree of consensus between the two sampling methods. The quality of 8 out of 10 targets improved as compared with Stage 1. For 4 out of 10 Stage-2 targets, our method generated top-scoring models with RMSD values of less than 1 Å. In this article, we discuss the strengths and weaknesses of our approach as well as possible directions for improvement to generate better predictions in the future. © 2014 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Obschonka, Martin; Silbereisen, Rainer K.; Schmitt-Rodermund, Eva
2012-01-01
Applying a life-span approach of human development and using the example of science-based business idea generation, the authors used structural equation modeling to test a mediation model for predicting entrepreneurial behavior in a sample of German scientists (2 measurement occasions; Time 1, N = 488). It was found that recalled early…
NASA Astrophysics Data System (ADS)
Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu
2017-05-01
Wind power has the advantages of being clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means to improve the wind power accommodation rate and implement a "clean alternative" on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking short-term wind speed forecasting results on the generation side and load characteristics on the demand side into account, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of BWTGSs as the objective function, with supply-demand balance and generator operation as the constraints, the first-stage optimization model is developed with chance-constrained programming theory. Using the operation cost of BWTGSs as the objective function, the second-stage optimization model is developed with a greedy algorithm. An improved PSO algorithm is employed to solve the model, and numerical tests verify the effectiveness of the proposed strategy.
Mechanism of the free charge carrier generation in the dielectric breakdown
NASA Astrophysics Data System (ADS)
Rahim, N. A. A.; Ranom, R.; Zainuddin, H.
2017-12-01
Many studies have been conducted to investigate the effect of environmental, mechanical and electrical stresses on insulators. However, studies on the physical process of the discharge phenomenon leading to the breakdown of the insulator surface are lacking and difficult to comprehend. Therefore, this paper analyses the charge carrier generation mechanisms that can produce free charge carriers and lead toward surface discharge development. It also develops a model of surface discharge based on the charge generation mechanism on the outdoor insulator. Nernst-Planck theory was used to model the behaviour of the charge carriers, while Poisson's equation was used to determine the distribution of the electric field on the insulator surface. In the modelling of surface discharge on the outdoor insulator, electric-field-dependent molecular ionization was used as the charge generation mechanism. The mathematical model of the surface discharge was solved using the method of lines (MOL). The results from the mathematical model showed that the behaviour of the net space charge density was correlated with the electric field distribution.
Medium term municipal solid waste generation prediction by autoregressive integrated moving average
NASA Astrophysics Data System (ADS)
Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan
2014-09-01
Generally, solid waste handling and management are performed by a municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and from insufficient data and strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even when data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
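As a sketch of the reported model class, the snippet below fits an ARIMA(6,1,0) to a monthly series with statsmodels; the synthetic series standing in for the municipal data is an assumption for illustration.

# Minimal sketch of fitting ARIMA(6,1,0) to a monthly generation series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
monthly_tonnes = 500 + np.cumsum(rng.normal(2.0, 5.0, 120))  # stand-in data

model = ARIMA(monthly_tonnes, order=(6, 1, 0)).fit()
forecast = model.forecast(steps=12)             # next 12 months
rmse_in_sample = np.sqrt(np.mean(model.resid ** 2))
print(forecast[:3], rmse_in_sample)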
Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction
NASA Astrophysics Data System (ADS)
Aarts, Fides; Jonsson, Bengt; Uijen, Johan
In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.
NASA Astrophysics Data System (ADS)
Tan, Yimin; Lin, Kejian; Zu, Jean W.
2018-05-01
The Halbach permanent magnet (PM) array has attracted tremendous research attention in the development of electromagnetic generators because of its unique properties. This paper proposes a generalized analytical model for linear generators, combining slotted stator pole-shifting and the implementation of a Halbach array for the first time. Initially, the magnetization components of the Halbach array are determined using Fourier decomposition. Then, based on the magnetic scalar potential method, the magnetic field distribution is derived employing specially treated boundary conditions. FEM analysis is conducted to verify the analytical model. A slotted linear PM generator with a Halbach PM array was constructed to validate the model and further improved using piece-wise springs to trigger full-range reciprocating motion. A dynamic model was developed to characterize the dynamic behavior of the slider. This analytical method provides an effective tool for the development and optimization of Halbach PM generators. The experimental results indicate that piece-wise springs can be employed to improve generator performance under low excitation frequency.
NASA Astrophysics Data System (ADS)
Rasztovits, S.; Dorninger, P.
2013-07-01
Terrestrial Laser Scanning (TLS) is an established method to reconstruct the geometrical surface of given objects. Current systems allow for fast and efficient determination of 3D models with high accuracy and richness in detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, upcoming free software services as well as open source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web-services operate completely automatically. An indisputable advantage of image-based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models are lower than those of laser scanning data. Within this contribution, we investigate the results of automated web-services for image-based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and image series from different digital cameras. Two different web-services, namely Arc3D and Autodesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economical comparison of the generation of the models is given, considering the interactive and processing time costs.
NASA Astrophysics Data System (ADS)
Mitra, Joydeep; Torres, Andres; Ma, Yuansheng; Pan, David Z.
2018-01-01
Directed self-assembly (DSA) has emerged as one of the most compelling next-generation patterning techniques for sub-7 nm via or contact layers. A key issue in enabling DSA as a mainstream patterning technique is the generation of grapho-epitaxy-based guiding pattern (GP) shapes to assemble the contact patterns on target with high fidelity and resolution. Current GP generation is mostly empirical and limited to a very small number of via configurations. We propose the first model-based GP synthesis algorithm and methodology for on-target and robust DSA on general via pattern configurations. The final printed GPs, derived from our synthesized GPs after optical proximity correction, are resilient to process variations and maintain the same DSA fidelity in terms of placement error and target shape.
NASA Astrophysics Data System (ADS)
Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.
2017-06-01
The recent popularity of the use of synthetic data as a Statistical Disclosure Control technique has enabled the development of several methods of generating and analyzing such data, but these methods almost always rely on asymptotic distributions and are in consequence not adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via Posterior Predictive Sampling. Since it is based on exact distributions, this procedure may be used even in small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.
Optimal placement and sizing of wind / solar based DG sources in distribution system
NASA Astrophysics Data System (ADS)
Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng
2017-06-01
Proper placement and sizing of Distributed Generation (DG) in a distribution system can obtain maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of distribution systems. Performance modeling of the wind and solar generation systems is described, and the models are classified into PQ, PQ(V) and PI types in power flow. Considering that WTGU- and PV-based DGs in a distribution system are geographically restricted, the optimal area and the DG capacity limits of each bus in the setting area need to be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate its performance and effectiveness.
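The QPSO search itself can be sketched as follows: particles are drawn around a local attractor with a spread proportional to their distance from the mean best position. The toy objective below stands in for the real power-flow loss and voltage-stability evaluation, which is an assumption for illustration.

# Minimal sketch of the QPSO update used to search DG sizes.
import numpy as np

def qpso(obj, dim, n=30, iters=200, lb=0.0, ub=2.0, beta=0.75, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n, dim))
    pbest, pval = x.copy(), np.array([obj(v) for v in x])
    for _ in range(iters):
        g = pbest[np.argmin(pval)]                  # global best
        mbest = pbest.mean(axis=0)                  # mean best position
        phi = rng.random((n, dim))
        p = phi * pbest + (1 - phi) * g             # local attractor
        u = 1.0 - rng.random((n, dim))              # u in (0, 1]
        sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
        x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1 / u), lb, ub)
        f = np.array([obj(v) for v in x])
        improved = f < pval
        pbest[improved], pval[improved] = x[improved], f[improved]
    return pbest[np.argmin(pval)]

best = qpso(lambda s: np.sum((s - 1.2) ** 2), dim=2)  # toy DG-sizing stand-in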
Wildlife tradeoffs based on landscape models of habitat preference
Loehle, C.; Mitchell, M.S.; White, M.
2000-01-01
Wildlife tradeoffs based on landscape models of habitat preference are presented. Multiscale logistic regression models were used, and based on these models a spatial optimization technique was applied to generate optimal habitat maps. Tradeoffs were analyzed by gradually increasing the weighting on a single species in the objective function over a series of simulations. Results indicated that the efficiency of habitat management for species diversity could be maximized for small landscapes by incorporating spatial context.
Optimization Research of Generation Investment Based on Linear Programming Model
NASA Astrophysics Data System (ADS)
Wu, Juan; Ge, Xueqian
Linear programming is an important branch of operations research and a mathematical method that supports scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP) and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, optimized generation investment decision-making is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational basis for optimized investment decisions.
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottoms-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
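The AHP step used in such trade studies can be illustrated as follows: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix and checked with a consistency ratio. The matrix entries below are illustrative assumptions, not values from the NASA GRC/Boeing model.

# Minimal sketch of AHP criterion weighting with a consistency check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],       # cost vs performance vs schedule
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                          # criterion weights summing to 1

n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)     # consistency index
cr = ci / 0.58                        # Saaty's random index for n=3 is 0.58
print("weights:", w, "consistency ratio:", cr)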
TSARINA: A Computer Model for Assessing Conventional and Chemical Attacks on Airbases
1990-09-01
IV, and has been updated to FORTRAN 77; it has been adapted to various computer systems, as was the widely used AIDA model and the previous versions of... conventional and chemical attacks on sortie generation. In the first version of TSARINA [1, 2], several key additions were made to the AIDA model so that (1) various on-base resources, in addition to the estimates of hits and facility damage that are generated by the original AIDA model... The second version
Modeling the Webgraph: How Far We Are
NASA Astrophysics Data System (ADS)
Donato, Debora; Laura, Luigi; Leonardi, Stefano; Millozzi, Stefano
The following sections are included: * Introduction * Preliminaries * WebBase * In-degree and out-degree * PageRank * Bipartite cliques * Strongly connected components * Stochastic models of the webgraph * Models of the webgraph * A multi-layer model * Large scale simulation * Algorithmic techniques for generating and measuring webgraphs * Data representation and multifiles * Generating webgraphs * Traversal with two bits for each node * Semi-external breadth first search * Semi-external depth first search * Computation of the SCCs * Computation of the bow-tie regions * Disjoint bipartite cliques * PageRank * Summary and outlook
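One of the stochastic webgraph models surveyed in chapters of this kind is the copying model, sketched below under assumed parameters: each new page either copies an out-link of a random prototype page or links uniformly at random, which yields heavy-tailed in-degree distributions of the sort measured on WebBase crawls.

# Minimal sketch of a "copying" webgraph generator.
import random

def copying_model(n, d=3, alpha=0.3, seed=0):
    rng = random.Random(seed)
    out = {0: [0] * d}                            # seed page with self-links
    for v in range(1, n):
        proto = rng.randrange(v)                  # prototype page to copy from
        links = []
        for i in range(d):
            if rng.random() < alpha:
                links.append(rng.randrange(v))    # fresh uniform link
            else:
                links.append(out[proto][i])       # copied link
        out[v] = links
    return out

g = copying_model(10_000)   # in-degrees come out heavy-tailed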
Ozone reference models for the middle atmosphere (new CIRA)
NASA Technical Reports Server (NTRS)
Keating, G. M.; Pitts, M. C.; Young, D. F.
1989-01-01
Models of ozone vertical structure were generated based on multiple data sets from satellites. The very good absolute accuracy of the individual data sets allowed the data to be directly combined to generate these models. The data used for generation of these models are from some of the most recent satellite measurements over the period 1978 to 1983. A discussion is provided of validation and error analyses of these data sets. Also, inconsistencies in data sets brought about by temporal variations or other factors are indicated. The models cover the pressure range from 20 to 0.003 mb (25 to 90 km). The models for pressures less than 0.5 mb represent only the day side and are only provisional since there was limited longitudinal coverage at these levels. The models start near 25 km in accord with previous COSPAR international reference atmosphere (CIRA) models. Models are also provided of ozone mixing ratio as a function of height. The monthly standard deviation and interannual variations relative to zonal means are also provided. In addition to the models of monthly latitudinal variations in vertical structure based on satellite measurements, monthly models of total column ozone and its characteristic variability as a function of latitude based on four years of Nimbus 7 measurements, models of the relationship between vertical structure and total column ozone, and a midlatitude annual mean model are incorporated in this set of ozone reference atmospheres. Various systematic variations are discussed, including the annual, semiannual, and quasibiennial oscillations, diurnal and longitudinal variations, and the response to solar activity.
Aspects of Hydrological Modelling In The Punjab Himalayan and Karakoram Ranges, Pakistan
NASA Astrophysics Data System (ADS)
Loukas, A.; Khan, M. I.; Quick, M. C.
Various aspects of hydrologic modelling of high mountainous basins in the Punjab Himalayan and Karakoram ranges of Northern Pakistan were studied. The runoff from three basins in this region was simulated using the U.B.C. watershed model, which requires only limited meteorological data: minimum and maximum daily temperature and precipitation. The structure of the model is based on the concept that hydrological behavior is a function of elevation; a watershed is therefore conceptualized as a number of elevational zones. A simplified energy budget approach, which is based on daily maximum and minimum temperature and can account for forested and open areas, aspect, and latitude, is used in the U.B.C. model for the estimation of snowmelt and glacier melt. The studied basins have different hydrological responses and limited data. The runoff from the first basin, the Astore basin, is mainly generated by snowmelt. In the second basin, the Kunhar basin, the runoff is generated by snowmelt, but significant redistribution of snow, caused by snow avalanches, affects the runoff generation. The third basin, the Hunza basin, is highly glacierized and its runoff is mainly generated by glacier melt. The application of the U.B.C. watershed model to these three basins showed that the model could estimate reasonably well the runoff generated by the different components.
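The elevation-zone idea is the part of this setup that lends itself to a compact illustration. The sketch below is a toy temperature-index melt computation by zone, not the U.B.C. model's energy budget; the lapse rate, melt factor, and zone data are assumptions.

```python
# Toy illustration of the elevation-zone idea (not the U.B.C. model itself):
# station temperatures are lapsed to each zone and a simple temperature-index
# melt is computed per zone. Lapse rate, melt factor and zone data are assumed.
LAPSE = 6.5 / 1000.0   # degC per m, assumed
MELT_FACTOR = 3.0      # mm per degC-day, assumed

def zone_melt(tmax_stn, tmin_stn, stn_elev, zone_elevs, snowpack_mm):
    melt = []
    for elev, swe in zip(zone_elevs, snowpack_mm):
        dz = elev - stn_elev
        tmean = 0.5 * ((tmax_stn - LAPSE * dz) + (tmin_stn - LAPSE * dz))
        m = max(0.0, MELT_FACTOR * tmean)   # melt only above 0 degC
        melt.append(min(m, swe))            # cannot melt more than stored
    return melt

# One day, three zones at 2500/3500/4500 m with assumed snow storage (mm SWE).
print(zone_melt(12.0, 2.0, 2000.0, [2500, 3500, 4500], [300, 500, 800]))
```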
Multi-Topic Tracking Model for dynamic social network
NASA Astrophysics Data System (ADS)
Li, Yuhua; Liu, Changzheng; Zhao, Ming; Li, Ruixuan; Xiao, Hailing; Wang, Kai; Zhang, Jun
2016-07-01
The topic tracking problem has attracted much attention over the last few decades. However, existing approaches rarely consider network structures and textual topics together. In this paper, we propose a novel statistical model based on dynamic Bayesian networks, namely the Multi-Topic Tracking Model for Dynamic Social Network (MTTD). It takes the influence phenomenon, the selection phenomenon, the document generative process and the evolution of textual topics into account. Specifically, in our MTTD model, a Gibbs Random Field is defined to model the influence of the historical status of users in the network and the interdependency between them, in order to capture the influence phenomenon. To address the selection phenomenon, a stochastic block model is used to model the link generation process based on the users' interest in topics. Probabilistic Latent Semantic Analysis (PLSA) is used to describe the document generative process according to the users' interests. Finally, the dependence on the historical topic status is also considered to ensure the continuity of the topic itself in the topic evolution model. The Expectation Maximization (EM) algorithm is utilized to estimate parameters in the proposed MTTD model. Empirical experiments on real datasets show that the MTTD model performs better than Popular Event Tracking (PET) and the Dynamic Topic Model (DTM) in generalization performance, topic interpretability, topic content evolution and topic popularity evolution.
NASA Astrophysics Data System (ADS)
Bosca, Ryan J.; Jackson, Edward F.
2016-01-01
Assessing and mitigating the various sources of bias and variance associated with image quantification algorithms is essential to the use of such algorithms in clinical research and practice. Assessment is usually accomplished with grid-based digital reference objects (DRO) or, more recently, digital anthropomorphic phantoms based on normal human anatomy. Publicly available digital anthropomorphic phantoms can provide a basis for generating realistic model-based DROs that incorporate the heterogeneity commonly found in pathology. Using a publicly available vascular input function (VIF) and digital anthropomorphic phantom of a normal human brain, a methodology was developed to generate a DRO based on the general kinetic model (GKM) that represented realistic and heterogeneously enhancing pathology. GKM parameters were estimated from a deidentified clinical dynamic contrast-enhanced (DCE) MRI exam. This clinical imaging volume was co-registered with a discrete tissue model, and model parameters estimated from clinical images were used to synthesize a DCE-MRI exam that consisted of normal brain tissues and a heterogeneously enhancing brain tumor. An example application of spatial smoothing was used to illustrate potential applications in assessing quantitative imaging algorithms. A voxel-wise Bland-Altman analysis demonstrated negligible differences between the parameters estimated with and without spatial smoothing (using a small radius Gaussian kernel). In this work, we reported an extensible methodology for generating model-based anthropomorphic DROs containing normal and pathological tissue that can be used to assess quantitative imaging algorithms.
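As a rough illustration of the synthesis step, the sketch below generates a tissue concentration curve with the general kinetic (Tofts) model; the vascular input function and parameter values are illustrative stand-ins, not the published VIF or the clinically estimated parameters.

```python
import numpy as np

# Minimal sketch of synthesizing a tissue enhancement curve with the
# general kinetic (Tofts) model: Ct(t) = Ktrans * conv(Cp, exp(-kep*t)).
t = np.arange(0, 300, 1.0)                           # s
Cp = 5.0 * (np.exp(-t / 60.0) - np.exp(-t / 10.0))   # toy VIF, mM (assumed)
Cp[Cp < 0] = 0.0

def gkm(Ktrans, kep, t, Cp):
    dt = t[1] - t[0]
    irf = np.exp(-kep * t)                           # impulse response
    return Ktrans * np.convolve(Cp, irf)[:len(t)] * dt

# Voxel-wise parameters could be varied over a tumor mask to mimic the
# heterogeneous enhancement described above (values here are illustrative).
Ct = gkm(Ktrans=0.25 / 60.0, kep=0.8 / 60.0, t=t, Cp=Cp)   # units: 1/s
print(Ct.max().round(4))
```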
Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2016-11-01
Background: The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CV_A): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods: The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CV_A based on log-Gaussian distributions of CV_I as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results: Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion: The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CV_I and CV_A, is generally useful.
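For orientation, the sketch below contrasts the classical Gaussian reference change value with one common log-Gaussian formulation; the CV values are illustrative, not the prolactin or glucose figures from the study.

```python
import math

# Classical (Gaussian) RCV versus a log-Gaussian form. CVs are fractions;
# Z = 1.96 gives 95% two-sided significance. Example CVs are assumed.
def rcv_gaussian(cv_a, cv_i, z=1.96):
    return z * math.sqrt(2.0) * math.sqrt(cv_a**2 + cv_i**2)

def rcv_lognormal(cv_a, cv_i, z=1.96):
    # SD of the log-transformed variable from the combined CV
    sigma = math.sqrt(math.log(1.0 + cv_a**2 + cv_i**2))
    up = math.exp(z * math.sqrt(2.0) * sigma) - 1.0      # rising change
    down = 1.0 - math.exp(-z * math.sqrt(2.0) * sigma)   # falling change
    return up, down

print(rcv_gaussian(0.05, 0.10))    # symmetric limit, about 0.31 (31%)
print(rcv_lognormal(0.05, 0.10))   # asymmetric upper/lower limits
```

The log-Gaussian version is asymmetric: the allowed rising change exceeds the falling change, which matters mainly when the combined CV is large.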
Composite Pseudoclassical Models of Quarks
NASA Astrophysics Data System (ADS)
Musin, Yu. R.
2018-05-01
Composite models of quarks are proposed, analogous to composite models of leptons. A model-based explanation of the appearance of generations of fundamental particles in the Standard Model is given. New empirical formulas are proposed for the quark masses, modifying Barut's well-known formula.
NASA Astrophysics Data System (ADS)
Sommer, Philipp; Kaplan, Jed
2016-04-01
Accurate modelling of large-scale vegetation dynamics, hydrology, and other environmental processes requires meteorological forcing on daily timescales. While meteorological data with high temporal resolution is becoming increasingly available, simulations for the future or distant past are limited by lack of data and poor performance of climate models, e.g., in simulating daily precipitation. To overcome these limitations, we may temporally downscale monthly summary data to a daily time step using a weather generator. Parameterization of such statistical models has traditionally been based on a limited number of observations. Recent developments in the archiving, distribution, and analysis of "big data" datasets provide new opportunities for the parameterization of a temporal downscaling model that is applicable over a wide range of climates. Here we parameterize a WGEN-type weather generator using more than 50 million individual daily meteorological observations, from over 10,000 stations covering all continents, based on the Global Historical Climatology Network (GHCN) and Synoptic Cloud Reports (EECRA) databases. Using the resulting "universal" parameterization and driven by monthly summaries, we downscale mean temperature (minimum and maximum), cloud cover, and total precipitation, to daily estimates. We apply a hybrid gamma-generalized Pareto distribution to calculate daily precipitation amounts, which overcomes much of the inability of earlier weather generators to simulate high amounts of daily precipitation. Our globally parameterized weather generator has numerous applications, including vegetation and crop modelling for paleoenvironmental studies.
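A minimal sketch of the hybrid precipitation sampler described above, with all distribution parameters assumed rather than fitted to the GHCN/EECRA data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy hybrid sampler: gamma body with a generalized Pareto tail above a
# threshold. All parameter values are illustrative, not fitted ones.
SHAPE, SCALE = 0.8, 6.0        # gamma parameters (mm)
THRESH = 20.0                  # mm, where the GP tail takes over
XI, GP_SCALE = 0.2, 8.0        # GP shape and scale
P_TAIL = 0.05                  # probability a wet day falls in the tail

def wet_day_amount(n):
    amounts = rng.gamma(SHAPE, SCALE, size=n)
    tail = rng.random(n) < P_TAIL
    # GP sample via inverse CDF: x = scale/xi * ((1-u)^(-xi) - 1)
    u = rng.random(tail.sum())
    amounts[tail] = THRESH + GP_SCALE / XI * ((1.0 - u) ** (-XI) - 1.0)
    return amounts

print(wet_day_amount(10).round(1))
```

The heavy GP tail is what lets the generator occasionally produce the extreme daily totals that a pure gamma model under-represents.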
Case Problems for Problem-Based Pedagogical Approaches: A Comparative Analysis
ERIC Educational Resources Information Center
Dabbagh, Nada; Dass, Susan
2013-01-01
A comparative analysis of 51 case problems used in five problem-based pedagogical models was conducted to examine whether there are differences in their characteristics and the implications of such differences on the selection and generation of ill-structured case problems. The five pedagogical models were: situated learning, goal-based scenario,…
A School-Based Mental Health Consultation Curriculum.
ERIC Educational Resources Information Center
Sandoval, Jonathan; Davis, John M.
1984-01-01
Presents one position on consultation that integrates a theoretical model, a process model, and a curriculum for training school-based mental health consultants. Elements of the proposed curriculum include: ethics, relationship building, maintaining rapport, defining problems, gathering data, sharing information, generating and supporting…
Base Case v.5.15 Documentation Supplement to Support the Clean Power Plan
Learn about several modeling assumptions used as part of EPA's analysis of the Clean Power Plan (Carbon Pollution Guidelines for Existing Electric Generating Units) using the EPA v.5.15 Base Case using Integrated Planning Model (IPM).
NASA Astrophysics Data System (ADS)
Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan
2015-12-01
In this paper, multi-kernel learning (MKL) is used for classifying drug-related webpages. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based BOW model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Last, the text and image representations are fused with several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than those of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.
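The fusion idea reduces to combining one kernel per modality. The sketch below shows such a combination fed to an SVM with a precomputed kernel; the fixed weights stand in for learned MKL weights, and the random matrices are placeholders for real BOW features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

# One kernel per modality (text BOW, image BOW), combined as a weighted sum.
# Fixed weights stand in for learned MKL weights; random features are
# placeholders for real BOW vectors.
rng = np.random.default_rng(0)
X_text = rng.random((40, 300))    # text BOW features (placeholder)
X_img = rng.random((40, 500))     # image BOW features (placeholder)
y = rng.integers(0, 2, 40)

K = 0.6 * linear_kernel(X_text) + 0.4 * rbf_kernel(X_img, gamma=0.01)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))            # training accuracy on the toy data
```

Full MKL would additionally learn the kernel weights, e.g. by alternating between SVM training and weight updates.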
Computer Model Of Fragmentation Of Atomic Nuclei
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; KHAN FERDOUS; Badavi, Francis F.
1995-01-01
High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program developed to be computationally efficient, user-friendly, physics-based program for generating data bases on fragmentation of atomic nuclei. Data bases generated used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. Provides cross sections for production of individual elements and isotopes in breakups of high-energy heavy ions by combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.
NASA Astrophysics Data System (ADS)
Hassan, Mahmoud A.
2004-02-01
Digital elevation models (DEMs) are important tools in the planning, design and maintenance of mobile communication networks. This research paper proposes a method for generating high-accuracy DEMs based on SPOT satellite 1A stereo pair images, ground control points (GCP) and Erdas OrthoBASE Pro image processing software. DEMs with a mean error of 0.2911 m were achieved for the hilly and heavily populated city of Amman. The generated DEM was used to design a mobile communication network, resulting in a minimum number of radio base transceiver stations, a maximum number of covered regions and less than 2% dead zones.
NASA Technical Reports Server (NTRS)
Stankovic, Ana V.
2003-01-01
Professor Stankovic will be developing and refining Simulink-based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM alternator in a nuclear electric propulsion vehicle. The control techniques will first be simulated using the validated models and then tried experimentally with hardware available at NASA. Testing and simulation of a 2 kW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.
Evaluation of gravitational gradients generated by Earth's crustal structures
NASA Astrophysics Data System (ADS)
Novák, Pavel; Tenzer, Robert; Eshagh, Mehdi; Bagherbandi, Mohammad
2013-02-01
Spectral formulas for the evaluation of gravitational gradients generated by the Earth's upper mass components are presented. The spectral approach allows for numerical evaluation of global gravitational gradient fields that can be used to constrain gravitational gradients either synthesised from global gravitational models or directly measured by the spaceborne gradiometer on board the GOCE satellite. Gravitational gradients generated by static atmospheric, topographic and continental ice masses are evaluated numerically based on available global models of the Earth's topography, bathymetry and continental ice sheets. CRUST2.0 data are then applied for the numerical evaluation of gravitational gradients generated by mass density contrasts within soft and hard sediments and the upper, middle and lower crust layers. Combined gravitational gradients are compared to disturbing gravitational gradients derived from a global gravitational model and an idealised Earth model represented by the geocentric homogeneous biaxial ellipsoid GRS80. The methodology could be used for improved modelling of the Earth's inner structure.
Inference from clustering with application to gene-expression microarrays.
Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M
2002-01-01
There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
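The core simulation loop of the toolbox, generating points as means plus noise, clustering them, and counting errors against the generating processes, can be sketched compactly; the means, noise level, and choice of K-means here are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

# Generate points as model means plus independent noise, cluster them, and
# count points assigned to the wrong generating process (cluster labels are
# matched to true labels by an optimal permutation).
rng = np.random.default_rng(1)
means = np.array([[0.0, 0.0], [3.0, 3.0]])   # hand-picked model means
n_per, sigma = 100, 1.0

X = np.vstack([m + sigma * rng.standard_normal((n_per, 2)) for m in means])
truth = np.repeat(np.arange(len(means)), n_per)

pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Confusion matrix and best label permutation (Hungarian algorithm).
C = np.zeros((2, 2), dtype=int)
for t, p in zip(truth, pred):
    C[t, p] += 1
rows, cols = linear_sum_assignment(-C)
errors = len(truth) - C[rows, cols].sum()
print("misclustered points:", errors)
```

Repeating this loop over many noise draws and algorithms gives the error tables and replication curves the abstract describes.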
NASA Astrophysics Data System (ADS)
Luo, Keqin
1999-11-01
The electroplating industry, with over 10,000 plating plants nationwide, is one of the major waste generators in industry. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which costs the industry tremendously for waste treatment and disposal and hinders the further development of the industry. It has therefore become an urgent need for the industry to identify the technically most effective and economically most attractive methodologies and technologies to minimize waste while maintaining production competitiveness. This dissertation aims at developing a novel waste minimization (WM) methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, together with an intelligent decision support tool. The WM methodology consists of two parts: a heuristic knowledge-based qualitative WM decision analysis and support methodology, and a fundamental knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. These become the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows us to perform a thorough process analysis of bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation, which are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time the electroplating industry can (i) use systematically available WM strategies, (ii) know quantitatively and accurately what is going on in each tank, and (iii) identify all WM opportunities through process improvement. This work has formed a solid foundation for the further development of powerful WM technologies for comprehensive WM in the following decade.
Automatic Traffic-Based Internet Control Message Protocol (ICMP) Model Generation for ns-3
2015-12-01
…through visiting the inferred automata; fuzzing of an implementation by generating altered message formats. We tested with 3 versions of Netzob. First… relationships. Afterwards, we used the Automata module to generate state machines using different functions: "generateChainedStateAutomata"… The "generatePTAAutomata" function takes as input several communication sessions and then identifies common paths and merges these into a single automaton.
Developing a semantic web model for medical differential diagnosis recommendation.
Mohammed, Osama; Benlamri, Rachid
2014-10-01
In this paper we describe a novel model for differential diagnosis designed to make recommendations by utilizing semantic web technologies. The model is a response to a number of requirements, ranging from incorporating essential clinical diagnostic semantics to the integration of data mining for the process of identifying candidate diseases that best explain a set of clinical features. We introduce two major components, which we find essential to the construction of an integral differential diagnosis recommendation model: the evidence-based recommender component and the proximity-based recommender component. Both approaches are driven by disease diagnosis ontologies designed specifically to enable the process of generating diagnostic recommendations. These ontologies are the disease symptom ontology and the patient ontology. The evidence-based diagnosis process develops dynamic rules based on standardized clinical pathways. The proximity-based component employs data mining to provide clinicians with diagnosis predictions, as well as generates new diagnosis rules from provided training datasets. This article describes the integration between these two components along with the developed diagnosis ontologies to form a novel medical differential diagnosis recommendation model. This article also provides test cases from the implementation of the overall model, which shows quite promising diagnostic recommendation results.
Automatic Generation of Analogy Questions for Student Assessment: An Ontology-Based Approach
ERIC Educational Resources Information Center
Alsubait, Tahani; Parsia, Bijan; Sattler, Uli
2012-01-01
Different computational models for generating analogies of the form "A is to B as C is to D" have been proposed over the past 35 years. However, analogy generation is a challenging problem that requires further research. In this article, we present a new approach for generating analogies in Multiple Choice Question (MCQ) format that can be used…
Conditioning 3D object-based models to dense well data
NASA Astrophysics Data System (ADS)
Wang, Yimin C.; Pyrcz, Michael J.; Catuneanu, Octavian; Boisvert, Jeff B.
2018-06-01
Object-based stochastic simulation models are used to generate categorical variable models with a realistic representation of complicated reservoir heterogeneity. A limitation of object-based modeling is the difficulty of conditioning to dense data. One method to achieve data conditioning is to apply optimization techniques. Optimization algorithms can utilize an objective function measuring the conditioning level of each object while also considering the geological realism of the object. Here, an objective function is optimized with implicit filtering which considers constraints on object parameters. Thousands of objects conditioned to data are generated and stored in a database. A set of objects are selected with linear integer programming to generate the final realization and honor all well data, proportions and other desirable geological features. Although any parameterizable object can be considered, objects from fluvial reservoirs are used to illustrate the ability to simultaneously condition multiple types of geologic features. Channels, levees, crevasse splays and oxbow lakes are parameterized based on location, path, orientation and profile shapes. Functions mimicking natural river sinuosity are used for the centerline model. Channel stacking pattern constraints are also included to enhance the geological realism of object interactions. Spatial layout correlations between different types of objects are modeled. Three case studies demonstrate the flexibility of the proposed optimization-simulation method. These examples include multiple channels with high sinuosity, as well as fragmented channels affected by limited preservation. In all cases the proposed method reproduces input parameters for the object geometries and matches the dense well constraints. The proposed methodology expands the applicability of object-based simulation to complex and heterogeneous geological environments with dense sampling.
Utilization of DIRSIG in support of real-time infrared scene generation
NASA Astrophysics Data System (ADS)
Sanders, Jeffrey S.; Brown, Scott D.
2000-07-01
Real-time infrared scene generation for hardware-in-the-loop has been a traditionally difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real-time by ray-tracing programs such as the Digital Imaging and Remote Sensing Scene Generation (DIRSIG) program. However, executing DIRSIG in real-time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first-principles-based synthetic image generation model that produces multi- or hyper-spectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The DIRSIG model is an integrated collection of independent first-principles-based sub-models, each of which works in conjunction to produce radiance field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled) and path transmission predictions. This radiometry submodel utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path-length-dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) which may be present in any target, background or solar path. This detailed environmental modeling greatly enhances the number of rendered features and hence the fidelity of a rendered scene. While DIRSIG itself cannot currently be executed in real-time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target-to-background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures. All of these features represent significant improvements over the current state of the art in real-time IR scene generation.
NASA Astrophysics Data System (ADS)
Wang, Rong; Moreno-Cruz, Juan; Caldeira, Ken
2017-05-01
Integrated assessment models are commonly used to generate optimal carbon prices based on an objective function that maximizes social welfare. Such models typically project an initially low carbon price that increases with time. This framework does not reflect the incentives of decision makers who are responsible for generating tax revenue. If a rising carbon price is to result in near-zero emissions, it must ultimately result in near-zero carbon tax revenue. That means that at some point, policy makers will be asked to increase the tax rate on carbon emissions to such an extent that carbon tax revenue will fall. Therefore, there is a risk that the use of a carbon tax to generate revenue could eventually create a perverse incentive to continue carbon emissions in order to provide a continued stream of carbon tax revenue. Using the Dynamic Integrated Climate Economy (DICE) model, we provide evidence that this risk is not a concern for the immediate future but that a revenue-generating carbon tax could create this perverse incentive as time goes on. This incentive becomes perverse at about year 2085 under the default configuration of DICE, but the timing depends on a range of factors including the cost of climate damages and the cost of decarbonizing the global energy system. While our study is based on a schematic model, it highlights the importance of considering a broader spectrum of incentives in studies using more comprehensive integrated assessment models. Our study demonstrates that the use of a carbon tax for revenue generation could potentially motivate implementation of such a tax today, but this source of revenue generation risks motivating continued carbon emissions far into the future.
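A toy calculation (not DICE) makes the revenue argument concrete: if emissions decline as the tax rises, revenue equals tax times emissions and eventually peaks and then falls, which is the region where the perverse incentive appears. The abatement response below is a made-up exponential, for illustration only.

```python
import math

# Toy revenue model: revenue = tax * emissions, with emissions declining
# as the tax rises. Both parameters are assumptions, not DICE values.
E0 = 40.0          # GtCO2/yr baseline emissions, assumed
K = 0.01           # abatement sensitivity per $/tCO2, assumed

def revenue(tax):                       # $/tCO2 -> $bn/yr (Gt * $/t)
    emissions = E0 * math.exp(-K * tax)
    return tax * emissions

for tax in (25, 50, 100, 150, 200):
    print(tax, round(revenue(tax), 1))
# Revenue peaks at tax = 1/K = 100 $/t here; above that, dR/dtax < 0 and a
# revenue-motivated policy maker has no fiscal reason to raise the tax further.
```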
Intrusion Detection Systems with Live Knowledge System
2016-05-31
Ripple -down Rule (RDR) to maintain the knowledge from human experts with knowledge base generated by the Induct RDR, which is a machine-learning based RDR...propose novel approach that uses Ripple -down Rule (RDR) to maintain the knowledge from human experts with knowledge base generated by the Induct RDR...detection model by applying Induct RDR approach. The proposed induct RDR ( Ripple Down Rules) approach allows to acquire the phishing detection
US National Large-scale City Orthoimage Standard Initiative
Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.
2003-01-01
The early procedures and algorithms for National digital orthophoto generation in National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), the quarter-quadrangle-centered (3.75 minutes of longitude and latitude in geographic extent), 1:40,000 aerial photographs, and 2.5 D digital elevation models. However, large-scale city orthophotos using early procedures have disclosed many shortcomings, e.g., ghost image, occlusion, shadow. Thus, to provide the technical base (algorithms, procedure) and experience needed for city large-scale digital orthophoto creation is essential for the near future national large-scale digital orthophoto deployment and the revision of the Standards for National Large-scale City Digital Orthophoto in National Digital Orthophoto Program (NDOP). This paper will report our initial research results as follows: (1) High-precision 3D city DSM generation through LIDAR data processing, (2) Spatial objects/features extraction through surface material information and high-accuracy 3D DSM data, (3) 3D city model development, (4) Algorithm development for generation of DTM-based orthophoto, and DBM-based orthophoto, (5) True orthophoto generation by merging DBM-based orthophoto and DTM-based orthophoto, and (6) Automatic mosaic by optimizing and combining imagery from many perspectives.
Quantum random number generator based on quantum nature of vacuum fluctuations
NASA Astrophysics Data System (ADS)
Ivanova, A. E.; Chivilikhin, S. A.; Gleim, A. V.
2017-11-01
A quantum random number generator (QRNG) allows true random bit sequences to be obtained. In QRNGs based on the quantum nature of vacuum, an optical beam splitter with two inputs and two outputs is normally used. We compare mathematical descriptions of a spatial beam splitter and a fiber Y-splitter in the quantum model for a QRNG based on homodyne detection. These descriptions are identical, which allows fiber Y-splitters to be used in practical QRNG schemes, simplifying the setup. We also derive relations between the input radiation and the resulting differential current in the homodyne detector. We experimentally demonstrate the possibility of true random bit generation using a QRNG based on homodyne detection with a Y-splitter.
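A toy simulation of the harvesting step described above: homodyne measurement of vacuum yields Gaussian-distributed quadrature samples, and a comparator turns them into raw bits. Real devices add classical noise and post-processing, both omitted here.

```python
import numpy as np

# Gaussian samples stand in for vacuum quadrature measurements; a comparator
# against the sample median produces one raw bit per measurement.
rng = np.random.default_rng()
samples = rng.standard_normal(10_000)      # stand-in for quadrature data
bits = (samples > np.median(samples)).astype(np.uint8)

# Quick sanity checks on the raw stream (not a substitute for NIST 800-22).
print("ones fraction:", bits.mean())
byte_vals = np.packbits(bits)
print("first bytes:", byte_vals[:8])
```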
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ijjas, Anna; Steinhardt, Paul J., E-mail: aijjas@princeton.edu, E-mail: steinh@princeton.edu
We introduce ''anamorphic'' cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.
Parameters of oscillation generation regions in open star cluster models
NASA Astrophysics Data System (ADS)
Danilov, V. M.; Putkov, S. I.
2017-07-01
We determine the masses and radii of the central regions of open star cluster (OCL) models with small or zero entropy production and estimate the masses of the oscillation generation regions in cluster models based on data on the phase-space coordinates of stars. The radii of such regions are close to the core radii of the OCL models. We develop a new method for estimating the total OCL mass based on the cluster core mass, the cluster and cluster core radii, and the radial distribution of stars. This method yields estimates of the dynamical masses of the Pleiades, Praesepe, and M67, which agree well with estimates of the total masses of the corresponding clusters based on proper motions and spectroscopic data for cluster stars. We construct the spectra and dispersion curves of the oscillations of the field of azimuthal velocities v_φ in OCL models. Weak, low-amplitude unstable oscillations of v_φ develop in cluster models near the cluster core boundary, and weak damped oscillations of v_φ often develop at frequencies close to the frequencies of more powerful oscillations, which may reduce the degree of non-stationarity in OCL models. We determine the number and parameters of such oscillations near the core boundaries of cluster models. Such oscillations point to the possible role that gradient instability near the core of cluster models plays in the decrease of the mass of the oscillation generation regions and the production of entropy in the cores of OCL models with massive extended cores.
NASA Astrophysics Data System (ADS)
Oh, Sehyeong; Lee, Boogeon; Park, Hyungmin; Choi, Haecheon
2017-11-01
We investigate a hovering rhinoceros beetle using numerical simulation and blade element theory. Numerical simulations are performed using an immersed boundary method. In the simulation, the hindwings are modeled as a rigid flat plate, and three-dimensionally scanned elytra and body are used. The results of simulation indicate that the lift force generated by the hindwings alone is sufficient to support the weight, and the elytra generate negligible lift force. Considering the hindwings only, we present a blade element model based on quasi-steady assumptions to identify the mechanisms of aerodynamic force generation and power expenditure in the hovering flight of a rhinoceros beetle. We show that the results from the present blade element model are in excellent agreement with numerical ones. Based on the current blade element model, we find the optimal wing kinematics minimizing the aerodynamic power requirement using a hybrid optimization algorithm combining a clustering genetic algorithm with a gradient-based optimizer. We show that the optimal wing kinematics reduce the aerodynamic power consumption, generating enough lift force to support the weight. This research was supported by a Grant to Bio-Mimetic Robot Research Center Funded by Defense Acquisition Program Administration, and by Agency for Defense Development (UD130070ID) and NRF-2016R1E1A1A02921549 of the MSIP of Korea.
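For readers unfamiliar with the blade element approach, the sketch below computes a quasi-steady lift estimate by integrating strip forces along the span; the geometry, kinematics, and lift-coefficient model are generic placeholders, not the measured beetle values.

```python
import numpy as np

# Quasi-steady blade element sketch: per-strip lift from local velocity
# (omega * r) and an angle-of-attack dependent lift coefficient.
RHO = 1.2                      # air density, kg/m^3
R, C = 0.04, 0.012             # wing length and mean chord (m), assumed
omega = 120.0                  # instantaneous flapping rate (rad/s), assumed
alpha = np.deg2rad(35.0)       # mid-stroke angle of attack, assumed

r = np.linspace(0.0, R, 50)    # spanwise stations
dr = r[1] - r[0]
U = omega * r                  # local velocity at each station
CL = 0.9 * np.sin(2 * alpha) + 0.2   # generic translational CL model, assumed

L = np.sum(0.5 * RHO * U**2 * C * CL * dr)   # integrate strip lift
print(f"instantaneous lift per wing: {L*1000:.2f} mN")
```

Averaging such instantaneous estimates over a full stroke cycle is what lets the model be compared against the weight-support requirement mentioned above.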
Modeling and simulation research on electromagnetic and energy-recycled damper based on Adams
NASA Astrophysics Data System (ADS)
Zhou, C. F.; Zhang, K.; Zhang, Pengfei
2018-05-01
In order to study the voltage and power output characteristics of an electromagnetic, energy-recycling damper consisting of a gear, a rack and a generator, an Adams model of the damper and a Simulink model of the generator are established, and a co-simulation is carried out with these two models. Output indexes such as gear speed and generator power are obtained from the simulation. The simulation results demonstrate that the voltage peak of the damper is 25 V and the maximum output power of the damper is 8 W. This research provides a basis for prototype development of an electromagnetic, energy-recycling damper with gear and rack.
Prediction of Spatiotemporal Patterns of Neural Activity from Pairwise Correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marre, O.; El Boustani, S.; Fregnac, Y.
We designed a model-based analysis to predict the occurrence of population patterns in distributed spiking activity. Using a maximum entropy principle with a Markovian assumption, we obtain a model that accounts for both spatial and temporal pairwise correlations among neurons. This model is tested on data generated with a Glauber spin-glass system and is shown to correctly predict the occurrence probabilities of spatiotemporal patterns significantly better than Ising models based only on spatial correlations. This increase in predictability was also observed on experimental data recorded in parietal cortex during slow-wave sleep. This approach can also be used to generate surrogates that reproduce the spatial and temporal correlations of a given data set.
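The model's scoring of a spatiotemporal pattern can be sketched with an Ising-like energy that adds a one-step temporal coupling to the usual spatial terms; the parameters below are random stand-ins for fitted values.

```python
import numpy as np

# Unnormalized log-probability of a binary pattern under a pairwise
# maximum-entropy model with a one-step Markov term:
# E = -(h.s_t + s_t.J.s_t/2 + s_{t-1}.M.s_t); h, J, M would be fitted.
rng = np.random.default_rng(3)
N = 5
h = rng.normal(0, 0.5, N)          # per-neuron biases
J = rng.normal(0, 0.2, (N, N)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
M = rng.normal(0, 0.2, (N, N))     # temporal couplings (not symmetric)

def energy(s_prev, s_now):
    return -(h @ s_now + s_now @ J @ s_now / 2 + s_prev @ M @ s_now)

s0 = rng.integers(0, 2, N)
s1 = rng.integers(0, 2, N)
print("unnormalized log-probability:", -energy(s0, s1))
```

Dropping the M term recovers the purely spatial Ising model that the abstract reports as the weaker baseline.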
Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan
2007-05-01
In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thus achieve effective ECT. In this paper, we present a model-based optimization approach for determining optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model at selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computed tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was pulse amplitude; the distance between electrode pairs was optimized as a discrete parameter. The optimization also considered the pulse generator constraints on voltage and current. During optimization the two constraints were reached, preventing the exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be achieved by adding constraints on the maximum electric field intensity in the optimization procedure.
TALEN-based generation of a cynomolgus monkey disease model for human microcephaly
Ke, Qiong; Li, Weiqiang; Lai, Xingqiang; Chen, Hong; Huang, Lihua; Kang, Zhuang; Li, Kai; Ren, Jie; Lin, Xiaofeng; Zheng, Haiqing; Huang, Weijun; Ma, Yunhan; Xu, Dongdong; Chen, Zheng; Song, Xinming; Lin, Xinyi; Zhuang, Min; Wang, Tao; Zhuang, Fengfeng; Xi, Jianzhong; Mao, Frank Fuxiang; Xia, Huimin; Lahn, Bruce T; Zhou, Qi; Yang, Shihua; Xiang, Andy Peng
2016-01-01
Gene editing in non-human primates may lead to valuable models for exploring the etiologies and therapeutic strategies of genetically based neurological disorders in humans. However, a monkey model of neurological disorders that closely mimics pathological and behavioral deficits in humans has not yet been successfully generated. Microcephalin 1 (MCPH1) is implicated in the evolution of the human brain, and MCPH1 mutation causes microcephaly accompanied by mental retardation. Here we generated a cynomolgus monkey (Macaca fascicularis) carrying biallelic MCPH1 mutations using transcription activator-like effector nucleases. The monkey recapitulated most of the important clinical features observed in patients, including marked reductions in head circumference, premature chromosome condensation (PCC), hypoplasia of the corpus callosum and upper limb spasticity. Moreover, overexpression of MCPH1 in mutated dermal fibroblasts rescued the PCC syndrome. This monkey model may help us elucidate the role of MCPH1 in the pathogenesis of human microcephaly and better understand the function of this protein in the evolution of primate brain size. PMID:27502025
Enabling full-field physics-based optical proximity correction via dynamic model generation
NASA Astrophysics Data System (ADS)
Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas
2017-07-01
As extreme ultraviolet lithography becomes closer to reality for high volume production, its peculiar modeling challenges related to both inter and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement errors. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.
Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings
NASA Astrophysics Data System (ADS)
Tsai, F.; Chang, H.; Lin, Y.-W.
2017-08-01
This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantages of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.
Nanoscale inhomogeneity and photoacid generation dynamics in extreme ultraviolet resist materials
NASA Astrophysics Data System (ADS)
Wu, Ping-Jui; Wang, Yu-Fu; Chen, Wei-Chi; Wang, Chien-Wei; Cheng, Joy; Chang, Vencent; Chang, Ching-Yu; Lin, John; Cheng, Yuan-Chung
2018-03-01
The development of extreme ultraviolet (EUV) lithography towards the 22 nm node and beyond depends critically on the availability of resist materials that meet stringent control requirements in resolution, line edge roughness, and sensitivity. However, the molecular mechanisms that govern the structure-function relationships in current EUV resist systems are not well understood. In particular, the nanoscale structure of the polymer base and the distribution of photoacid generators (PAGs) should play a critical role in the performance of a resist system, yet currently available models for photochemical reactions in EUV resist systems are exclusively based on homogeneous bulk models that ignore molecular-level details of solid resist films. In this work, we investigate how microscopic molecular organization in EUV resists affects photoacid generation in a bottom-up approach that describes structure-dependent electron-transfer dynamics in a solid film model. To this end, molecular dynamics simulations and simulated annealing are used to obtain structures of a large simulation box containing poly(4-hydroxystyrene) (PHS) base polymers and triphenylsulfonium-based PAGs. Our calculations reveal that ion-pair interactions govern the microscopic distributions of the polymer base and PAG molecules, resulting in a highly inhomogeneous system with nonuniform nanoscale chemical domains. Furthermore, the theoretical structures were used in combination with quantum chemical calculations and Marcus theory to evaluate electron transfer rates between molecular sites, and then kinetic Monte Carlo simulations were carried out to model electron transfer dynamics with molecular structure details taken into consideration. As a result, the portion of thermalized electrons that are absorbed by the PAGs and the nanoscale spatial distribution of generated acids can be estimated. Our data reveal that the nanoscale inhomogeneous distributions of base polymers and PAGs strongly affect the electron transfer and the performance of the resist system. The implications for the performance of EUV resists and key engineering requirements for improved resist systems are also discussed in this work. Our results shed light on the fundamental structure dependence of photoacid generation and the control of nanoscale structures as well as base polymer-PAG interactions in EUV resist systems, and we expect this knowledge will be useful for the future development of improved EUV resist systems.
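The Marcus-theory step is the most self-contained piece of this pipeline. The sketch below evaluates a site-to-site electron transfer rate of the kind fed into the kinetic Monte Carlo stage; the coupling, reorganization energy, and driving force are placeholders, not values from the study.

```python
import math

# Marcus-theory electron transfer rate between two sites.
HBAR = 6.582e-16      # eV*s
KB_T = 0.02585        # eV at 300 K

def marcus_rate(H_ab, lam, dG):
    """H_ab: electronic coupling (eV); lam: reorganization energy (eV);
    dG: driving force (eV). Returns a rate in 1/s."""
    pref = (2.0 * math.pi / HBAR) * H_ab**2
    gauss = math.exp(-(dG + lam)**2 / (4.0 * lam * KB_T))
    return pref * gauss / math.sqrt(4.0 * math.pi * lam * KB_T)

# Placeholder values: 10 meV coupling, 0.8 eV reorganization, -0.3 eV driving force.
print(f"{marcus_rate(0.01, 0.8, -0.3):.3e} 1/s")
```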
Tsao, Liuxing; Ma, Liang
2016-11-01
Digital human modelling enables ergonomists and designers to consider ergonomic concerns and design alternatives in a timely and cost-efficient manner in the early stages of design. However, the reliability of the simulation could be limited due to the percentile-based approach used in constructing the digital human model. To enhance the accuracy of the size and shape of the models, we proposed a framework to generate digital human models using three-dimensional (3D) anthropometric data. The 3D scan data from specific subjects' hands were segmented based on the estimated centres of rotation. The segments were then driven in forward kinematics to perform several functional postures. The constructed hand models were then verified, thereby validating the feasibility of the framework. The proposed framework helps generate accurate subject-specific digital human models, which can be utilised to guide product design and workspace arrangement. Practitioner Summary: Subject-specific digital human models can be constructed under the proposed framework based on three-dimensional (3D) anthropometry. This approach enables more reliable digital human simulation to guide product design and workspace arrangement.
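The forward-kinematics step can be illustrated in miniature: each segment rotates about its parent joint and positions accumulate along the chain. The sketch below is a 2D toy with assumed segment lengths and angles, not the scanned hand geometry.

```python
import numpy as np

# Minimal 2D forward kinematics for a segmented finger: joint angles are
# accumulated along the chain and each segment is laid out at the running
# orientation. Lengths and angles are illustrative assumptions.
def finger_positions(lengths, angles):
    pts, theta, p = [np.zeros(2)], 0.0, np.zeros(2)
    for L, a in zip(lengths, angles):
        theta += a
        p = p + L * np.array([np.cos(theta), np.sin(theta)])
        pts.append(p)
    return np.array(pts)

# An index finger posed in a grasp-like posture (values assumed, in metres).
print(finger_positions([0.05, 0.03, 0.02],
                       np.deg2rad([30, 45, 30])).round(3))
```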
Latino Definitions of Success: A Cultural Model of Intercultural Competence
Torres, Lucas
2010-01-01
The present study sought to examine Latino intercultural competence via two separate methodologies. Phase 1 entailed discovering and generating themes regarding the features of intercultural competence based on semistructured interviews of 15 Latino adults. Phase 2 included conducting a cultural consensus analysis from the quantitative responses of 46 Latino adults to determine the cultural model of intercultural competence. The major results indicated that the participants, despite variations in socioeconomic and generational statuses, shared a common knowledge base regarding the competencies needed for Latinos to successfully navigate different cultures. Overall, the cultural model of Latino intercultural competence includes a set of skills that integrates traditional cultural values along with attributes of self-efficacy. The findings are discussed within a competence-based conceptualization of cultural adaptation and potential advancements in acculturation research. PMID:20333325
Text Summarization Model based on Facility Location Problem
NASA Astrophysics Data System (ADS)
Takamura, Hiroya; Okumura, Manabu
We propose a novel multi-document generic summarization model based on the budgeted median problem, which is a facility location problem. The summarization method based on our model is an extractive method: it selects sentences from the given document cluster and generates a summary. Each sentence in the document cluster is assigned to one of the selected sentences, where the former sentence is taken to be represented by the latter. Our method selects sentences so as to generate a summary that yields a good sentence assignment and hence covers the whole content of the document cluster. An advantage of this method is that it can incorporate asymmetric relations between sentences, such as textual entailment. Through experiments, we show that the proposed method yields good summaries on the DUC'04 dataset.
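For intuition, the sketch below implements the selection-with-assignment idea greedily under a word budget; the paper's model solves the budgeted median problem exactly rather than greedily, and the word-overlap similarity is a stand-in for the paper's sentence relations.

```python
# Greedy sketch: pick sentences so every sentence in the cluster is well
# represented by some selected sentence, under a total word budget.
def summarize(sentences, budget):
    def sim(a, b):
        wa, wb = set(a.split()), set(b.split())
        return len(wa & wb) / max(1, len(wa | wb))

    selected = []
    while True:
        best, best_gain = None, 0.0
        used = sum(len(s.split()) for s in selected)
        for cand in sentences:
            if cand in selected or used + len(cand.split()) > budget:
                continue
            # gain = improvement in total best-assignment similarity
            gain = sum(
                max(sim(s, c) for c in selected + [cand]) -
                (max(sim(s, c) for c in selected) if selected else 0.0)
                for s in sentences)
            if gain > best_gain:
                best, best_gain = cand, gain
        if best is None:
            return selected
        selected.append(best)

sents = ["the cat sat on the mat", "a dog barked at the cat",
         "stocks rose on earnings news", "the mat was red"]
print(summarize(sents, budget=12))
```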
NASA Astrophysics Data System (ADS)
Li, Xuping; Ogden, Joan; Yang, Christopher
2013-11-01
This study models the operation of molten carbonate fuel cell (MCFC) tri-generation systems for "big box" store businesses that combine grocery and retail business, and sometimes gasoline retail. Efficiency accounting methods and parameters for MCFC tri-generation systems have been developed. Interdisciplinary analysis and an engineering/economic model were applied to evaluate the technical, economic, and environmental performance of distributed MCFC tri-generation systems and to explore the optimal system design. Model results show that tri-generation is economically competitive with the conventional system, in which the stores purchase grid electricity and NG for heat and sell gasoline fuel. The results are robust under sensitivity analysis considering the uncertainty in energy prices and capital cost. Varying system sizes with base-case engineering inputs, energy prices, and cost assumptions reveals a clear tradeoff between the portion of electricity demand covered and the capital cost increase of a bigger system. MCFC tri-generation technology provides lower-emission electricity, heat, and H2 fuel. With NG as feedstock, CO2 emissions can be reduced by 10%-43.6%, depending on how the grid electricity is generated. With renewable methane as feedstock, CO2 emissions can be further reduced to near zero.
Freight model improvement project for ECWRPC.
DOT National Transportation Integrated Search
2011-08-01
In early 2009 WisDOT, HNTB and ECWRPC completed the first phase of the Northeast Region Travel Demand Model. While the model includes truck trip generation based on the Quick Response Freight Manual, the model lacks enough truck classification ...
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on Weibull mixtures for characterizing the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values and wind power curtailment.
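A minimal sketch of the model-selection loop: fit a K-component Weibull mixture by direct likelihood maximization and compare AIC/BIC across K. The initialization and bounds are deliberately simple, and the synthetic sample stands in for aggregated wind power data.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize

def fit_mixture(x, K):
    def nll(theta):
        w = np.abs(theta[:K]); w = w / w.sum()
        pdf = sum(w[k] * weibull_min.pdf(x, c=theta[K + 2*k],
                                         scale=theta[K + 2*k + 1])
                  for k in range(K))
        return -np.sum(np.log(pdf + 1e-300))

    x0 = list(np.ones(K) / K)
    for k in range(K):                   # spread initial components apart
        x0 += [1.0 + k, float(np.quantile(x, (k + 0.5) / K))]
    bounds = [(1e-3, 1.0)] * K + [(0.1, 10.0), (1e-3, None)] * K
    res = minimize(nll, np.array(x0), bounds=bounds, method="L-BFGS-B")
    n_par = 3 * K - 1                    # weights are constrained to sum to 1
    return 2 * n_par + 2 * res.fun, n_par * np.log(len(x)) + 2 * res.fun

x = weibull_min.rvs(c=2.0, scale=5.0, size=500, random_state=7)
for K in (1, 2, 3):
    aic, bic = fit_mixture(x, K)
    print(f"K={K}  AIC={aic:.1f}  BIC={bic:.1f}")
```

On this single-component synthetic sample, AIC and BIC should both favor K = 1; on genuinely multi-modal power data they would favor larger K.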
Field-circuit analysis and measurements of a single-phase self-excited induction generator
NASA Astrophysics Data System (ADS)
Makowski, Krzysztof; Leicht, Aleksander
2017-12-01
The paper deals with a single-phase induction machine operating as a stand-alone self-excited single-phase induction generator for the generation of electrical energy from renewable energy sources. By changing the number of turns and the size of the wires in the auxiliary stator winding, improvements in the performance characteristics of the generator were obtained with regard to the no-load and load voltages of the stator windings as well as the stator winding currents. Field-circuit simulation models of the generator were developed using the Flux2D software package for the generator with a shunt capacitor in the main stator winding. The obtained results have been validated experimentally on a laboratory setup using a single-phase capacitor induction motor of 1.1 kW rated power and 230 V voltage as the base model of the generator.
NOAA/West coast and Alaska Tsunami warning center Atlantic Ocean response criteria
Whitmore, P.; Refidaff, C.; Caropolo, M.; Huerfano-Moreno, V.; Knight, W.; Sammler, W.; Sandrik, A.
2009-01-01
West Coast/Alaska Tsunami Warning Center (WCATWC) response criteria for earthquakes occurring in the Atlantic and Caribbean basins are presented. Initial warning center decisions are based on an earthquake's location, magnitude, depth, distance from coastal locations, and precomputed threat estimates based on tsunami models computed from similar events. The new criteria will help limit the geographical extent of warnings and advisories to threatened regions, and complement the new operational tsunami product suite. Criteria are set for tsunamis generated by earthquakes, which are by far the main cause of tsunami generation (either directly through sea floor displacement or indirectly by triggering sub-sea landslides). The new criteria require development of a threat database which sets warning or advisory zones based on location, magnitude, and pre-computed tsunami models. The models determine coastal tsunami amplitudes based on likely tsunami source parameters for a given event. Based on the computed amplitude, warning and advisory zones are pre-set.
Armanini, D G; Monk, W A; Carter, L; Cote, D; Baird, D J
2013-08-01
Evaluation of the ecological status of river sites in Canada is supported by building models using the reference condition approach. However, geography, data scarcity and inter-operability constraints have frustrated attempts to monitor national-scale status and trends. This issue is particularly true in Atlantic Canada, where no ecological assessment system is currently available. Here, we present a reference condition model based on the River Invertebrate Prediction and Classification System approach with regional-scale applicability. To achieve this, we used biological monitoring data collected from wadeable streams across Atlantic Canada together with freely available, nationally consistent geographic information system (GIS) environmental data layers. For the first time, we demonstrated that it is possible to use data generated from different studies, even when collected using different sampling methods, to generate a robust predictive model. This model was successfully generated and tested using GIS-based rather than local habitat variables and showed improved performance when compared to a null model. In addition, ecological quality ratio data derived from the model responded to observed stressors in a test dataset. Implications for future large-scale implementation of river biomonitoring using a standardised approach with global application are presented.
Integrated performance and reliability specification for digital avionics systems
NASA Technical Reports Server (NTRS)
Brehm, Eric W.; Goettge, Robert T.
1995-01-01
This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies the traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process, and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.
Path generation algorithm for UML graphic modeling of aerospace test software
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao
2018-03-01
Traditionally, aerospace software testing engineers rely on their own work experience and on communication with software development personnel to describe the software under test and to write test cases manually, which is time-consuming, inefficient, and prone to omissions. Using the high-reliability model-based testing (MBT) tools developed by our company, a one-time modeling effort can automatically generate test case documents, which is efficient and accurate. Accurately describing the process with a UML model depends on path generation, but existing path generation algorithms are either too simple, unable to combine branch paths with loop paths, or so cumbersome that they generate meaningless path arrangements, which are superfluous for aerospace software testing. Drawing on our experience with ten aerospace payloads, we developed a tailored path generation algorithm for UML graphic models of aerospace test software.
On the design of henon and logistic map-based random number generator
NASA Astrophysics Data System (ADS)
Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah
2017-10-01
The key sequence is one of the main elements in a cryptosystem. The True Random Number Generator (TRNG) method is one of the approaches to generating the key sequence. The randomness sources of TRNGs divide into three main groups: electrical-noise based, jitter based, and chaos based. The chaos-based approach utilizes a non-linear dynamic system (continuous time or discrete time) as an entropy source. In this study, a new design of TRNG based on a discrete-time chaotic system is proposed and simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems. A mathematical model is implemented for numerical simulations. We used a comparator process as the harvesting method to obtain the series of random bits. Without any post-processing, the proposed design generated random bit sequences with high entropy values and passed all NIST 800.22 statistical tests.
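As a rough illustration of the combined 2D/1D design described above (the title points to Hénon and logistic maps), the following Python sketch iterates both maps and harvests bits with a comparator; the map parameters, rescaling, and threshold are illustrative assumptions, not the authors' calibrated values:

```python
import numpy as np

def henon_logistic_bits(n_bits, a=1.4, b=0.3, r=3.99, seed=(0.1, 0.1, 0.5)):
    """Illustrative chaos-based bit harvester: a 2D Henon map combined
    with a 1D logistic map; bits come from comparing the two states."""
    x, y, z = seed
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x, y = 1.0 - a * x * x + y, b * x      # Henon map (2D chaotic system)
        z = r * z * (1.0 - z)                  # logistic map (1D chaotic system)
        # Comparator harvesting: emit 1 when the (rescaled) Henon state
        # exceeds the logistic state, else 0.
        bits[i] = 1 if (x + 1.5) / 3.0 > z else 0
    return bits

bits = henon_logistic_bits(10000)
print("ones fraction:", bits.mean())   # should be near 0.5 for a balanced source
```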
A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2017-01-01
Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. For each generator, a total of 200 sequences, each having 601 random numbers, were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
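The Fourier synthesis generator can be sketched as follows: prescribe a flat amplitude spectrum, draw random phases, and inverse-transform. This minimal Python version (the normalization and exact spectrum construction are assumptions; the paper's implementation may differ) yields an approximately Gaussian white sequence of the 601-sample length used in the study:

```python
import numpy as np

def fourier_synthesis_white_noise(n, rng=np.random.default_rng()):
    """Sketch of Fourier-synthesis noise generation: prescribe a flat
    amplitude spectrum, draw random phases, and inverse-FFT to get a
    real-valued, approximately Gaussian white sequence."""
    n_freq = n // 2 + 1
    amplitude = np.ones(n_freq)                    # flat (white) amplitude spectrum
    phase = rng.uniform(0.0, 2.0 * np.pi, n_freq)  # random phases
    spectrum = amplitude * np.exp(1j * phase)
    spectrum[0] = 0.0                              # zero mean (no DC component)
    x = np.fft.irfft(spectrum, n)
    return x / x.std()                             # normalize to unit variance

x = fourier_synthesis_white_noise(601)             # one 601-point sequence
print(x.mean(), x.var())
```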
Efficient generation of mouse models of human diseases via ABE- and BE-mediated base editing.
Liu, Zhen; Lu, Zongyang; Yang, Guang; Huang, Shisheng; Li, Guanglei; Feng, Songjie; Liu, Yajing; Li, Jianan; Yu, Wenxia; Zhang, Yu; Chen, Jia; Sun, Qiang; Huang, Xingxu
2018-06-14
A recently developed adenine base editor (ABE) efficiently converts A to G and is potentially useful for clinical applications. However, its precision and efficiency in vivo remain to be addressed. Here we achieve A-to-G conversion in vivo at frequencies up to 100% by microinjection of ABE mRNA together with sgRNAs. We then generate mouse models harboring clinically relevant mutations at Ar and Hoxd13, which recapitulate the respective clinical defects. Furthermore, we achieve both C-to-T and A-to-G base editing by using a combination of ABE and SaBE3, thus creating a mouse model harboring multiple mutations. We also demonstrate the specificity of ABE by deep sequencing and whole-genome sequencing (WGS). Taken together, ABE is highly efficient and precise in vivo, making it feasible to model and potentially cure relevant genetic diseases.
NASA Astrophysics Data System (ADS)
Li, Shuang; Peng, Yuming
2012-01-01
In order to accurately deliver an entry vehicle through the Martian atmosphere to the prescribed parachute deployment point, active Mars entry guidance is essential. This paper addresses the issue of Mars atmospheric entry guidance using the command generator tracker (CGT) based direct model reference adaptive control to reduce the adverse effect of the bounded uncertainties on atmospheric density and aerodynamic coefficients. Firstly, the nominal drag acceleration profile meeting a variety of constraints is planned off-line in the longitudinal plane as the reference model to track. Then, the CGT based direct model reference adaptive controller and the feed-forward compensator are designed to robustly track the aforementioned reference drag acceleration profile and to effectively reduce the downrange error. Afterwards, the heading alignment logic is adopted in the lateral plane to reduce the crossrange error. Finally, the validity of the guidance algorithm proposed in this paper is confirmed by Monte Carlo simulation analysis.
A new physics-based modeling approach for tsunami-ionosphere coupling
NASA Astrophysics Data System (ADS)
Meng, X.; Komjathy, A.; Verkhoglyadova, O. P.; Yang, Y.-M.; Deng, Y.; Mannucci, A. J.
2015-06-01
Tsunamis can generate gravity waves propagating upward through the atmosphere, inducing total electron content (TEC) disturbances in the ionosphere. To capture this process, we have implemented tsunami-generated gravity waves into the Global Ionosphere-Thermosphere Model (GITM) to construct a three-dimensional physics-based model, WP (Wave Perturbation)-GITM. WP-GITM takes tsunami wave properties, including the wave height, wave period, wavelength, and propagation direction, as inputs and characterizes the time-dependent responses of the upper atmosphere between 100 km and 600 km altitude. We apply WP-GITM to simulate the ionosphere above the West Coast of the United States around the time when the tsunami associated with the March 2011 Tohoku-Oki earthquake arrived. The simulated TEC perturbations agree with Global Positioning System observations reasonably well. For the first time, a fully self-consistent and physics-based model has reproduced the GPS-observed traveling ionospheric signatures of an actual tsunami event.
Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui
2017-12-01
Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience, where the attention shifts among the visual regions; such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast it with published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves systems without those components. Furthermore, combining these two modeling ingredients attains the state-of-the-art performance.
Generating description with multi-feature fusion and saliency maps of image
NASA Astrophysics Data System (ADS)
Liu, Lisha; Ding, Yuxuan; Tian, Chunna; Yuan, Bo
2018-04-01
Generating a description for an image can be regarded as visual understanding. The task spans artificial intelligence, machine learning, natural language processing and many other areas. In this paper, we present a model that generates descriptions for images based on an RNN (recurrent neural network) with object attention and multiple image features. Deep recurrent neural networks have shown excellent performance in machine translation, so we use them to generate natural-sentence descriptions for images. The proposed method uses a single CNN (convolutional neural network) trained on ImageNet to extract image features. However, we believe a single feature cannot adequately capture the content of an image, as it may focus only on the object areas. We therefore add scene information to the image features using a CNN trained on Places205. Experiments show that the model with multiple features extracted by two CNNs performs better than the one with a single feature. In addition, we apply saliency weights on images to emphasize the salient objects. We evaluate our model on MSCOCO using public metrics, and the results show that our model performs better than several state-of-the-art methods.
High-Fidelity Roadway Modeling and Simulation
NASA Technical Reports Server (NTRS)
Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit
2010-01-01
Roads are an essential feature in our daily lives. With the advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated manually by professional artists using modeling software tools such as Maya and 3ds Max. This approach requires both highly specialized and sophisticated skills and massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, dramatically reducing the tedious manual editing needed for road modeling. But most existing procedural modeling methods for road generation put emphasis on the visual effects of the generated roads, not their geometrical and architectural fidelity. This limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals; as a result, the generated roads can support not only general applications such as games and simulations, in which roads are used as 3D assets, but also demanding civil engineering applications, which require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and the surrounding environment. The proposed method then generates, in real time, 3D roads that have both high visual and geometrical fidelity. This paper discusses in detail the procedures that convert 2D roads specified in shape files into 3D roads and the civil engineering road design principles involved. The proposed method can be used in many applications that have stringent requirements on high-precision 3D models, such as driving simulations and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.
Traffic Flow Management Using Aggregate Flow Models and the Development of Disaggregation Methods
NASA Technical Reports Server (NTRS)
Sun, Dengfeng; Sridhar, Banavar; Grabbe, Shon
2010-01-01
A linear time-varying aggregate traffic flow model can be used to develop Traffic Flow Management (TFM) strategies based on optimization algorithms. However, there are no methods available in the literature to translate these aggregate solutions into actions involving individual aircraft. This paper describes and implements a computationally efficient disaggregation algorithm, which converts an aggregate (flow-based) solution to a flight-specific control action. Numerical results generated by the optimization method and the disaggregation algorithm are presented and illustrated by applying them to generate TFM schedules for a typical day in the U.S. National Airspace System. The results show that the disaggregation algorithm generates control actions for individual flights while keeping the air traffic behavior very close to the optimal solution.
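A toy version of such a disaggregation step, under the strong simplifying assumptions that flights are served first-scheduled-first-served and that the aggregate solution is just a departure count per time bin, might look like this in Python (the paper's actual algorithm is more elaborate):

```python
def disaggregate(agg_departures, scheduled):
    """Assign individual flights to time bins so that the number of
    departures per bin matches the aggregate (flow-based) solution;
    overflow flights are delayed to later bins."""
    order = sorted(range(len(scheduled)), key=lambda f: scheduled[f])
    assignments, queue, i = {}, [], 0
    for t, slots in enumerate(agg_departures):
        while i < len(order) and scheduled[order[i]] <= t:
            queue.append(order[i]); i += 1        # flights now ready to depart
        for _ in range(int(slots)):
            if queue:
                assignments[queue.pop(0)] = t     # first-scheduled, first-served
    return assignments                            # flight index -> assigned bin

sched = [0, 0, 1, 1, 2]                # scheduled departure bins for 5 flights
print(disaggregate([1, 2, 2], sched))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2}
```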
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Jia-Mian; Wang, Bo; Ji, Yanzhou
2017-09-07
Modeling the effective ion conductivities of heterogeneous solid electrolytes typically involves the use of a computer-generated microstructure consisting of randomly or uniformly oriented fillers in a matrix. However, the structural features of the filler/matrix interface, which critically determine the interface ion conductivity and the microstructure morphology, have not been considered during microstructure generation. Using nanoporous β-Li₃PS₄ electrolyte as an example, we develop a phase-field model that enables generating nanoporous microstructures of different porosities and connectivity patterns based on the depth and the energy of the surface (pore/electrolyte interface), both of which are predicted through density functional theory (DFT) calculations. Room-temperature effective ion conductivities of the generated microstructures are then calculated numerically, using the DFT-estimated surface Li-ion conductivity (3.14×10⁻³ S/cm) and the experimentally measured bulk Li-ion conductivity (8.93×10⁻⁷ S/cm) of β-Li₃PS₄ as inputs. We also use the generated microstructures to inform effective medium theories to rapidly predict the effective ion conductivity via analytical calculations. Furthermore, when porosity approaches the percolation threshold, both the numerical and analytical methods predict a significantly enhanced Li-ion conductivity (1.74×10⁻⁴ S/cm) that is in good agreement with experimental data (1.64×10⁻⁴ S/cm). The present phase-field based multiscale model is generally applicable to predict both the microstructure patterns and the effective properties of heterogeneous solid electrolytes.
Numerical investigation of wake-collapse internal waves generated by a submerged moving body
NASA Astrophysics Data System (ADS)
Liang, Jianjun; Du, Tao; Huang, Weigen; He, Mingxia
2017-07-01
The state-of-the-art OpenFOAM technology is used to develop a numerical model that can be used to numerically investigate wake-collapse internal waves generated by a submerged moving body. The model incorporates body geometry, propeller forcing, and the stratification magnitude of seawater. The generation mechanism and wave properties are discussed based on model results. It was found that the generation of the wave and its properties depend greatly on the body speed. Only when that speed exceeds some critical value, between 1.5 and 4.5 m/s, can the moving body generate wake-collapse internal waves, and as this speed increases, the onset of generation occurs earlier and the wave amplitude increases. The generated wake-collapse internal waves are confirmed to have characteristics of the second baroclinic mode. As the body speed increases, wave amplitude and length increase and the waveform tends to take on a regular sinusoidal shape. For the three linearly temperature-stratified profiles examined, the weaker the stratification, the stronger the wake-collapse internal wave.
Principles of health economic evaluations of lipid-lowering strategies.
Ara, Roberta; Basarir, Hasan; Ward, Sue Elizabeth
2012-08-01
Policy decision-making in cardiovascular disease is increasingly informed by the results generated from decision-analytic models (DAMs). The methodological approaches and assumptions used in these DAMs impact on the results generated and can influence a policy decision based on a cost per quality-adjusted life year (QALY) threshold. Decision makers need to be provided with a clear understanding of the key sources of evidence and how they are used in the DAM to make an informed judgement on the quality and appropriateness of the results generated. Our review identified 12 studies exploring the cost-effectiveness of pharmaceutical lipid-lowering interventions published since January 2010. All studies used Markov models with annual cycles to represent the long-term clinical pathway. Important differences in the model structures and evidence base used within the DAMs were identified. Whereas the reporting standards were reasonably good, there were many instances when reporting of methods could be improved, particularly relating to baseline risk levels, long-term benefit of treatment and health state utility values. There is scope for improvement in the reporting of evidence and modelling approaches used within DAMs to provide decision makers with a clearer understanding of the quality and validity of the results generated. This would be assisted by fuller publication of models, perhaps through detailed web appendices.
A Stirling engine for use with lower quality fuels
NASA Astrophysics Data System (ADS)
Paul, Christopher J.
There is increasing interest in using renewable fuels from biomass or alternative fuels such as municipal waste to reduce the need for fossil-based fuels. Due to the lower heating values and higher levels of impurities, small-scale electricity generation with these fuels is more problematic, and there are currently not many technologically mature options for it. Even though there are few manufacturers of Stirling engines, two centuries of development history offer significant guidance in developing a viable small-scale generator set using lower quality fuels. The history, development, and modeling of Stirling engines were reviewed to identify possible model and engine configurations. A Stirling engine model based on the finite-volume, ideal adiabatic model was developed. Flow dissipation losses are shown to need correcting, as they increase significantly at low mean engine pressure and high engine speed. The complete engine, including external components, was developed. A simple yet effective method of evaluating the external heat transfer to the Stirling engine was created that can be used with any second-order Stirling engine model. A derivative of the General Motors Ground Power Unit 3 was designed. By significantly increasing heater, cooler and regenerator size at the expense of increased dead volume, and adding combustion gas recirculation, a generator set with good efficiency was designed.
Photogrammetry for rapid prototyping: development of noncontact 3D reconstruction technologies
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.
2002-04-01
An important stage of rapid prototyping technology is generating a computer 3D model of the object to be reproduced. A wide variety of techniques for 3D model generation exists, ranging from manual 3D model generation to fully automated reverse engineering systems. Progress in CCD sensors and computers provides the background for integrating photogrammetry, as an accurate 3D data source, with CAD/CAM. The paper presents the results of developing photogrammetric methods for non-contact spatial coordinate measurements and generation of computer 3D models of real objects. The technology is based on processing convergent images of the object to calculate its 3D coordinates and reconstruct its surface. The hardware used for spatial coordinate measurements is based on a PC as the central processing unit and a video camera as the image acquisition device. The original software for Windows 9X implements the complete technology of 3D reconstruction for rapid input of geometry data into CAD/CAM systems. Technical characteristics of the developed systems are given, along with the results of applying them to various 3D reconstruction tasks. The paper describes the techniques used for non-contact measurements and the methods providing metric characteristics of the reconstructed 3D model. The results of applying the system to 3D reconstruction of complex industrial objects are also presented.
Optimal Solar PV Arrays Integration for Distributed Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omitaomu, Olufemi A; Li, Xueping
2012-01-01
Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise from the variability and non-dispatchability of PV systems, which affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that maximizes solar energy output and minimizes system variability. In particular, we propose mean-variance optimization models to choose suitable rooftops for PV integration based on the Markowitz mean-variance portfolio selection model. We further introduce quantity and cardinality constraints, resulting in a mixed integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.
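To make the portfolio analogy concrete, here is a minimal Python sketch of the continuous relaxation (the paper's full formulation adds the quantity and cardinality constraints, yielding an MIQP; the rooftop statistics below are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: expected daily PV output (normalized) per rooftop and
# the covariance of outputs, e.g. estimated from historical irradiance.
mu = np.array([0.18, 0.15, 0.21, 0.12])        # expected generation per rooftop
cov = np.array([[0.040, 0.012, 0.015, 0.008],
                [0.012, 0.030, 0.010, 0.006],
                [0.015, 0.010, 0.050, 0.009],
                [0.008, 0.006, 0.009, 0.020]])
target = 0.17                                   # desired mean generation level

def variance(w):
    return w @ cov @ w                          # output variance to minimize

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},       # capacity fully allocated
        {"type": "ineq", "fun": lambda w: w @ mu - target})   # meet generation target
res = minimize(variance, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=cons)
print("rooftop shares:", res.x.round(3), "variance:", round(float(res.fun), 4))
```

Sweeping `target` over a range of values traces out the efficient frontier mentioned in the abstract.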
New insight on petroleum system modeling of Ghadames basin, Libya
NASA Astrophysics Data System (ADS)
Bora, Deepender; Dubey, Siddharth
2015-12-01
Underdown and Redfern (2008) performed detailed petroleum system modeling of the Ghadames basin along an E-W section. However, hydrocarbon generation, migration and accumulation change significantly across the basin due to its complex geological history, so a single section cannot be considered representative of the whole basin. This study aims at bridging this gap by performing petroleum system modeling along a N-S section, providing new insights on source rock maturation and on generation and migration of hydrocarbons using 2D basin modeling. In conjunction with the earlier work, this study provides a 3D context for petroleum system modeling in the Ghadames basin. Hydrocarbon generation from the lower Silurian Tanezzuft formation and the Upper Devonian Aouinet Ouenine started during the late Carboniferous. However, the high subsidence rate during the middle to late Cretaceous and elevated heat flow in the Cenozoic had the greatest impact on source rock transformation and hydrocarbon generation, whereas large-scale uplift and erosion during the Alpine orogeny had a significant impact on migration and accumulation. Visible migration is observed along faults that were reactivated during the Austrian unconformity. Peak hydrocarbon expulsion was reached during the Oligocene for both the Tanezzuft and the Aouinet Ouenine source rocks. Based on the modeling results, capillary-entry-pressure-driven downward expulsion of hydrocarbons from the lower Silurian Tanezzuft formation to the underlying Bir Tlacsin formation is observed during the middle Cretaceous. Kinetic modeling helped to model the composition and distribution of the hydrocarbons generated from both source rocks. Application of source-to-reservoir tracking technology suggests that some accumulations at shallow stratigraphic levels received hydrocarbons from both the Tanezzuft and Aouinet Ouenine source rocks, implying charge mixing. Five petroleum systems were identified based on source-to-reservoir correlation technology in Petromod*. This study builds upon the original work of Underdown and Redfern (2008) and offers new insights and interpretation of the data.
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
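The decorator idea can be sketched in a few lines of Python: a basic building block exposes a random_position() method, and a decorator wraps any component behind the same interface. The Plummer profile and the clump parameters below are illustrative stand-ins, not SKIRT's actual (C++) components:

```python
import numpy as np

class PlummerModel:
    """Basic building block: a toy spherical density with an analytic
    random-position generator (inverse-transform sampling in radius)."""
    def __init__(self, scale=1.0, rng=None):
        self.scale, self.rng = scale, rng or np.random.default_rng()
    def random_position(self):
        u = self.rng.uniform(1e-12, 1.0)                 # avoid u = 0
        r = self.scale / np.sqrt(u ** (-2.0 / 3.0) - 1.0)
        v = self.rng.normal(size=3)                      # isotropic direction
        return r * v / np.linalg.norm(v)

class ClumpyDecorator:
    """Decorator: with probability f, relocate a sampled position into a
    randomly chosen clump; the wrapped component is left untouched."""
    def __init__(self, component, f=0.3, n_clumps=10, clump_radius=0.05, rng=None):
        self.component, self.f, self.clump_radius = component, f, clump_radius
        self.rng = rng or np.random.default_rng()
        self.centers = [component.random_position() for _ in range(n_clumps)]
    def random_position(self):
        p = self.component.random_position()
        if self.rng.uniform() < self.f:
            c = self.centers[self.rng.integers(len(self.centers))]
            return c + self.clump_radius * self.rng.normal(size=3)
        return p

model = ClumpyDecorator(PlummerModel(scale=1.0))    # decorators chain freely
positions = np.array([model.random_position() for _ in range(1000)])
```

Because decorators share the component interface, a clumpy model can itself be wrapped again, which is the chaining property the abstract highlights.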
SU-F-T-447: The Impact of Treatment Planning Methods On RapidPlan Modeling for Rectum Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, S; Peng, J; Li, K
2016-06-15
Purpose: To investigate dose volume histogram (DVH) prediction variations between intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) plan models in RapidPlan. Methods: Two DVH prediction models were generated in this study: an IMRT model trained from 83 IMRT rectum plans and a VMAT model trained from 60 VMAT rectum plans. In the internal validation, 20 plans from each training database were selected to verify the clinical feasibility of the model. Then, 10 IMRT plans generated from the IMRT model (PIMRT-by-IMRT-model) and 10 IMRT plans generated from the VMAT model (PIMRT-by-VMAT-model) were compared on the dose to organs at risk (OARs), which included the bladder and the left and right femoral heads. A similar comparison was performed on the VMAT plans generated from the IMRT model (PVMAT-by-IMRT-model) and the VMAT plans generated from the VMAT model (PVMAT-by-VMAT-model). Results: In the internal validation, all plans from the IMRT or VMAT model showed significant improvement in OAR sparing compared with the corresponding clinical ones. Compared to the PIMRT-by-VMAT-model, the PIMRT-by-IMRT-model had a reduction of 6.90±3.87% (p<0.001) in V40, 6.63±3.62% (p<0.001) in V45 and 4.74±2.26% (p<0.001) in V50 of the bladder, and a mean dose reduction of 2.12±1.75 Gy (p=0.004) and 2.84±1.53 Gy (p<0.001) in the right and left femoral heads, respectively. There was no significant difference in OAR sparing between PVMAT-by-IMRT-model and PVMAT-by-VMAT-model. Conclusion: The IMRT model for rectal cancer in RapidPlan can be applied to VMAT planning. However, the VMAT model is not suggested for use in IMRT planning. Caution should be taken that a planning model based on one technique may not be feasible for other planning techniques.
The use of Meteonorm weather generator for climate change studies
NASA Astrophysics Data System (ADS)
Remund, J.; Müller, S. C.; Schilter, C.; Rihm, B.
2010-09-01
The global climatological database Meteonorm (www.meteonorm.com) is widely used as meteorological input for the simulation of solar applications and buildings. It combines a climate database, a spatial interpolation tool and a stochastic weather generator. In this way, typical years with hourly or minute time resolution can be calculated for any site. The global radiation input of Meteonorm is the Global Energy Balance Archive (GEBA, http://proto-geba.ethz.ch). All other meteorological parameters are taken from databases of WMO and NCDC (periods 1961-90 and 1996-2005). The stochastic generation of global radiation is based on a Markov chain model for daily values and an autoregressive model for hourly and minute values (Aguiar and Collares-Pereira, 1988 and 1992). The generation of temperature is based on global radiation and the measured distribution of daily temperature values of approx. 5000 sites. Meteonorm also generates additional parameters like precipitation and wind speed, as well as radiation parameters like diffuse and direct normal irradiance. Meteonorm can also be used for climate change studies. Instead of climate values, the results of the IPCC AR4 are used as input. An average of all 18 public models has been computed at a resolution of 1°. The anomalies of the parameters temperature, precipitation and global radiation and the three scenarios B1, A1B and A2 have been included. By combining Meteonorm's current database 1961-90, the interpolation algorithms and the stochastic generation, typical years can be calculated for any site, for different scenarios and for any period between 2010 and 2200. From the analysis of year-to-year and month-to-month variations of temperature, precipitation and global radiation over the past ten years, as well as of climate model forecasts (from project PRUDENCE, http://prudence.dmi.dk), a simple autoregressive model has been formed which is used to generate realistic monthly time series for future periods. Meteonorm can therefore be used as a relatively simple method to enhance the spatial and temporal resolution, instead of using complicated and time-consuming downscaling methods based on regional climate models. The combination of Meteonorm, gridded historical data (based on the work of Luterbacher et al.) and IPCC results has been used for studies of vegetation simulation between 1660 and 2600 (publication of the first version, based on the IS92a scenario and the limited time period 1950-2100: http://www.pbl.nl/images/H5_Part2_van%20CCE_opmaak%28def%29_tcm61-46625.pdf). It is also applicable to other adaptation studies, e.g. for road surfaces or building simulation. In Meteonorm 6.1, one scenario (IS92a) and one climate model (Hadley CM3) have been included. In the new Meteonorm 7 (coming spring 2011), the model averages of the three above-mentioned IPCC AR4 scenarios will be included.
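A minimal sketch of the stochastic step, assuming a simple AR(1) residual superimposed on climatological monthly means shifted by a scenario anomaly (the coefficients are illustrative, not Meteonorm's calibrated values):

```python
import numpy as np

def monthly_series(clim_monthly, scenario_anomaly, phi=0.6, sigma=0.8,
                   n_years=10, rng=np.random.default_rng()):
    """Sketch of stochastic generation: climatological monthly means,
    shifted by a scenario anomaly, plus AR(1) month-to-month variability.
    phi and sigma are illustrative, not Meteonorm's fitted values."""
    out, eps = [], 0.0
    for _ in range(n_years):
        for m in range(12):
            eps = phi * eps + rng.normal(0.0, sigma)   # autoregressive residual
            out.append(clim_monthly[m] + scenario_anomaly + eps)
    return np.array(out)

clim = np.array([0.5, 1.8, 5.4, 9.2, 13.7, 16.9,
                 19.1, 18.5, 14.6, 9.9, 4.7, 1.5])     # hypothetical 1961-90 means, degC
series = monthly_series(clim, scenario_anomaly=1.2)    # e.g. an A1B-type warming shift
```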
Simulink-Based Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV)
NASA Technical Reports Server (NTRS)
Christhilf, David M.; Bacon, Barton J.
2006-01-01
The Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV) is a Simulink-based approach to providing an engineering-quality desktop simulation capability for finding trim solutions, extracting linear models for vehicle analysis and control law development, and generating open-loop and closed-loop time history responses for control system evaluation. It represents a useful level of maturity rather than a finished product. The layout is hierarchical and supports concurrent component development and validation, with support from the Concurrent Versions System (CVS) software management tool. Real Time Workshop (RTW) is used to generate pre-compiled code for substantial component modules, and templates permit switching seamlessly between original Simulink and code compiled for various platforms. Two previous limitations are addressed. Turnaround time for incorporating tabular model components was improved through auto-generation of the required Simulink diagrams based on data received in XML format. The layout was modified to exploit a Simulink "compile once, evaluate multiple times" capability, evaluating the model at zero elapsed time for use in trimming and linearizing. Trim is achieved through a Graphical User Interface (GUI) with a narrow, script-definable interface to the vehicle model, which facilitates incorporating new models.
Automated diagnosis of coronary artery disease based on data mining and fuzzy modeling.
Tsipouras, Markos G; Exarchos, Themis P; Fotiadis, Dimitrios I; Kotsia, Anna P; Vakalis, Konstantinos V; Naka, Katerina K; Michalis, Lampros K
2008-07-01
A fuzzy rule-based decision support system (DSS) is presented for the diagnosis of coronary artery disease (CAD). The system is automatically generated from an initial annotated dataset, using a four stage methodology: 1) induction of a decision tree from the data; 2) extraction of a set of rules from the decision tree, in disjunctive normal form and formulation of a crisp model; 3) transformation of the crisp set of rules into a fuzzy model; and 4) optimization of the parameters of the fuzzy model. The dataset used for the DSS generation and evaluation consists of 199 subjects, each one characterized by 19 features, including demographic and history data, as well as laboratory examinations. Tenfold cross validation is employed, and the average sensitivity and specificity obtained is 62% and 54%, respectively, using the set of rules extracted from the decision tree (first and second stages), while the average sensitivity and specificity increase to 80% and 65%, respectively, when the fuzzification and optimization stages are used. The system offers several advantages since it is automatically generated, it provides CAD diagnosis based on easily and noninvasively acquired features, and is able to provide interpretation for the decisions made.
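The first two stages (tree induction and crisp rule extraction) can be sketched with scikit-learn; the data below are randomly generated stand-ins for the 199-subject, 19-feature dataset, and the fuzzification and optimization stages (3 and 4) are not shown:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in data: 199 subjects x 19 features, as in the study, but randomly
# generated here; the labels play the role of the CAD diagnosis.
X, y = make_classification(n_samples=199, n_features=19, random_state=0)

# Stage 1: induce a decision tree from the annotated dataset.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Stage 2: each root-to-leaf path is a crisp rule in disjunctive normal
# form; export_text lists the threshold tests that stages 3 and 4 would
# then fuzzify and optimize.
print(export_text(tree, feature_names=[f"f{i}" for i in range(19)]))
```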
The Global Modeling and Assimilation Office (GMAO) 4d-Var and its Adjoint-based Tools
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Tremolet, Yannick
2008-01-01
The fifth generation of the Goddard Earth Observing System (GEOS-5) Data Assimilation System (DAS) is a 3d-var system that uses the Grid-point Statistical Interpolation (GSI) system developed in collaboration with NCEP, and a general circulation model developed at Goddard that includes the finite-volume hydrodynamics of GEOS-4 wrapped in the Earth System Modeling Framework and physical packages tuned to provide a reliable hydrological cycle for the integration of the Modern Era Retrospective-analysis for Research and Applications (MERRA). This MERRA system is essentially complete and the next generation GEOS is under intense development. A prototype next generation system is now complete and has been producing preliminary results. This prototype system replaces the GSI-based Incremental Analysis Update procedure with a GSI-based 4d-var which uses the adjoint of the finite-volume hydrodynamics of GEOS-4 together with a vertical diffusion scheme as simplified physics. As part of this development we have kept the GEOS-5 IAU procedure as an option and have added the capability to experiment with a First Guess at the Appropriate Time (FGAT) procedure, thus allowing for at least three modes of running the data assimilation experiments. The prototype system is a large extension of GEOS-5 as it also includes various adjoint-based tools, namely a forecast sensitivity tool, a singular vector tool, and an observation impact tool that combines the model sensitivity tool with a GSI-based adjoint tool. These features bring the global data assimilation effort at Goddard up to date with technologies used in data assimilation systems at major meteorological centers elsewhere. Various aspects of the next generation GEOS will be discussed during the presentation at the Workshop, and preliminary results will illustrate the discussion.
Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana
2015-05-01
Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of the design process and the production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data on eighteen residential buildings. The resulting statistical model relates the dependent variable (the amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data resulted in an adjusted R² value of 0.694, which means that it explains approximately 69% of the variation in waste generation in similar constructions. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable.
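A minimal sketch of such a multiple-regression waste model, with invented predictors standing in for the design and production-system variables (the study's actual variables are not listed in the abstract):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 18                                    # eighteen buildings, as in the case study
# Hypothetical predictors: built area (m2), share of prefabricated elements (%),
# and number of design changes; the response is waste generated (m3).
X = np.column_stack([rng.uniform(2e3, 2e4, n),
                     rng.uniform(0, 60, n),
                     rng.integers(0, 15, n)])
y = 0.02 * X[:, 0] - 1.5 * X[:, 1] + 8.0 * X[:, 2] + rng.normal(0, 40, n)

# Ordinary least squares with an intercept; the adjusted R-squared plays the
# same role as the 0.694 reported in the study.
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.rsquared_adj)
```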
ERIC Educational Resources Information Center
Zangori, Laura; Forbes, Cory T.; Schwarz, Christina V.
2015-01-01
Opportunities to generate model-based explanations are crucial for elementary students, yet are rarely foregrounded in elementary science learning environments despite evidence that early learners can reason from models when provided with scaffolding. We used a quasi-experimental research design to investigate the comparative impact of a scaffold…
Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo
2009-01-01
New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
Bakas, Spyridon; Zeng, Ke; Sotiras, Aristeidis; Rathore, Saima; Akbari, Hamed; Gaonkar, Bilwaj; Rozycki, Martin; Pati, Sarthak; Davatzikos, Christos
2016-01-01
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.
A vertical handoff decision algorithm based on ARMA prediction model
NASA Astrophysics Data System (ADS)
Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan
2012-01-01
With the development of computer technology and the increasing demand for mobile communications, next generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next generation wireless networks. During the vertical handoff procedure, the handoff decision is a crucial issue for efficient mobility. Based on the auto-regressive moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm that aims to improve handoff performance and avoid unnecessary handoffs. Based on the current received signal strength (RSS) and the previous RSS, the proposed approach adopts an ARMA model to predict the next RSS. According to the predicted RSS, it then determines whether to trigger the link-layer triggering event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the threshold-based RSS scheme in handoff performance and the number of handoffs.
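A minimal sketch of the prediction step, using the ARIMA class from statsmodels with d=0 (i.e., a pure ARMA fit); the RSS trace and threshold are invented for illustration:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical RSS trace (dBm): a terminal moving away from an access
# point, i.e. a decaying trend plus shadow-fading noise.
rng = np.random.default_rng(1)
rss = -60.0 - 0.15 * np.arange(200) + rng.normal(0, 2, 200)

# Fit an ARMA(2,1) model on the RSS history (d=0 gives a pure ARMA fit)
# and predict the next sample.
fit = ARIMA(rss, order=(2, 0, 1)).fit()
next_rss = fit.forecast(steps=1)[0]

THRESHOLD = -85.0          # illustrative link-layer trigger level
if next_rss < THRESHOLD:
    print("predicted RSS below threshold -> trigger vertical handoff")
```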
Generating strain signals under consideration of road surface profiles
NASA Astrophysics Data System (ADS)
Putra, T. E.; Abdullah, S.; Schramm, D.; Nuawi, M. Z.; Bruckmann, T.
2015-08-01
The current study aimed to develop a mechanism for generating strain signals using computer-based simulation. The strain data, caused by acceleration, were taken from a fatigue data acquisition process involving car movements. Using a mathematical model, the measured strain signals were converted into acceleration data describing the bumpiness of road surfaces. The acceleration signals were treated as an external disturbance in generating the strain signals. A comparison shows that the actual and simulated strain data have similar patterns. The results are expected to provide new knowledge for generating strain signals via simulation.
A GENERATIVE SKETCH OF BURMESE.
ERIC Educational Resources Information Center
BURLING, ROBBINS
Assuming that a generative approach provides a fairly direct and simple description of linguistic data, the author takes a traditional Burmese grammar (W. Cornyn's "Outline of Burmese Grammar," referred to as OBG throughout the paper) and reworks it into a generative framework based on a model by Chomsky. The study is divided into five sections,…
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
NASA Astrophysics Data System (ADS)
Chen, Long-chao; Fan, Wen-hui
2011-08-01
The numerical simulation of terahertz generation and detection in the interaction between femtosecond laser pulse and photoconductive material has been reported in this paper. The simulation model based on the Drude-Lorentz theory is used, and takes into account the phenomena that photo-generated electrons and holes are separated by the external bias field, which is screened by the space-charge field simultaneously. According to the numerical calculation, the terahertz time-domain waveforms and their Fourier-transformed spectra are presented under different conditions. The simulation results indicate that terahertz generation and detection properties of photoconductive antennas are largely influenced by three major factors, including photo-carriers' lifetime, laser pulse width and pump laser power. Finally, a simple model has been applied to simulate the detected terahertz pulses by photoconductive antennas with various photo-carriers' lifetimes, and the results show that the detected terahertz spectra are very different from the spectra radiated from the emitter.
Singlet delta oxygen generation for Chemical Oxygen-Iodine Lasers
NASA Astrophysics Data System (ADS)
Georges, E.; Mouthon, A.; Barraud, R.
1991-10-01
The development of Chemical Oxygen-Iodine Lasers is based on the generation of singlet delta oxygen. To improve the overall efficiency of these lasers, it is necessary to increase the generator production and yield of singlet delta oxygen at low and high pressure, for subsonic and supersonic lasers respectively. Furthermore, the water vapor content must be as low as possible. A generator model, based on gas-liquid reaction and liquid-vapor equilibrium theories combined with thermophysical evaluations, is presented. From model predictions, operating conditions have been derived to attain the following experimental results in a bubble column: by increasing the superficial gas velocity, the production of singlet delta oxygen is largely improved at low pressure; by mixing chlorine with an inert gas before injection into the reactor, the yield is maintained constant up to higher pressures. A theoretical analysis of these experimental results and their consequences for both subsonic and supersonic lasers is presented.
Monte Carlo generators for hadron physics: updates on PHOKHARA and EKHARA generators
NASA Astrophysics Data System (ADS)
Czyż, Henryk; Kisza, Patrycja; Tracz, Szymon
2017-04-01
A short review of current upgrades of the PHOKHARA and EKHARA generators is presented together with a report on the work in progress. The upgrades are based on a newly constructed model of the χci - γ* - γ* and χci - J/ψ* - γ* form factors. Within this model, predictions were made for the electronic widths of χc1 and χc2 and, based on the event generators' results, the cross sections e+e- → χc1 (→ J/ψ(→ μ+μ-)γ), e+e- → e+e-χc1 and e+e- → e+e-χc1 (→ J/ψ(→ μ+μ-)γ). Work supported in part by the Polish National Science Centre, grant number DEC-2012/07/B/ST2/03867 and German Research Foundation DFG under Contract No. Collaborative Research Center CRC-1044.
Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex.
Ulloa, Antonio; Horwitz, Barry
2016-01-01
A number of recent efforts have used large-scale, biologically realistic, neural models to help understand the neural basis for the patterns of activity observed in both resting state and task-related functional neural imaging data. An example of the former is The Virtual Brain (TVB) software platform, which allows one to apply large-scale neural modeling in a whole brain framework. TVB provides a set of structural connectomes of the human cerebral cortex, a collection of neural processing units for each connectome node, and various forward models that can convert simulated neural activity into a variety of functional brain imaging signals. In this paper, we demonstrate how to embed a previously or newly constructed task-based large-scale neural model into the TVB platform. We tested our method on a previously constructed large-scale neural model (LSNM) of visual object processing that consisted of interconnected neural populations that represent primary and secondary visual, inferotemporal, and prefrontal cortex. Some neural elements in the original model were "non-task-specific" (NS) neurons that served as noise generators to "task-specific" neurons that processed shapes during a delayed match-to-sample (DMS) task. We replaced the NS neurons with an anatomical TVB connectome model of the cerebral cortex comprising 998 regions of interest interconnected by white matter fiber tract weights. We embedded our LSNM of visual object processing into corresponding nodes within the TVB connectome. Reciprocal connections between TVB nodes and our task-based modules were included in this framework. We ran visual object processing simulations and showed that the TVB simulator successfully replaced the noise generation originally provided by NS neurons; i.e., the DMS tasks performed with the hybrid LSNM/TVB simulator generated equivalent neural and fMRI activity to that of the original task-based models. Additionally, we found partial agreement between the functional connectivities using the hybrid LSNM/TVB model and the original LSNM. Our framework thus presents a way to embed task-based neural models into the TVB platform, enabling a better comparison between empirical and computational data, which in turn can lead to a better understanding of how interacting neural populations give rise to human cognitive behaviors.
Is there a single best estimator? Selection of home range estimators using area-under-the-curve
Walter, W. David; Onorato, Dave P.; Fischer, Justin W.
2015-01-01
Comparisons of the fit of home range contours with collected locations suggest that VHF technology is not as accurate as GPS technology for estimating home range size for large mammals. Estimators of home range based on GPS data performed better than those based on VHF data regardless of the estimator used. Furthermore, estimators that incorporate a temporal component (third-generation estimators) appeared to be the most reliable, regardless of whether kernel-based or Brownian bridge-based algorithms were used, in comparison to first- and second-generation estimators. We defined third-generation estimators of home range as any estimator that incorporates time, space, animal-specific parameters, and habitat. Such estimators include movement-based kernel density, Brownian bridge movement models, and dynamic Brownian bridge movement models, among others that have yet to be evaluated.
Infrared dim small target segmentation method based on ALI-PCNN model
NASA Astrophysics Data System (ADS)
Zhao, Shangnan; Song, Yong; Zhao, Yufei; Li, Yun; Li, Xu; Jiang, Yurong; Li, Lin
2017-10-01
A Pulse Coupled Neural Network (PCNN) is improved by Adaptive Lateral Inhibition (ALI), and a method of infrared (IR) dim small target segmentation based on the ALI-PCNN model is proposed in this paper. Firstly, the feeding input signal is modulated by a lateral inhibition network to suppress the background. Then, the linking input is modulated by ALI, and the linking weight matrix is generated adaptively by calculating the ALI coefficient of each pixel. Finally, the binary image is generated through the nonlinear modulation and the pulse generator in the PCNN. The experimental results show that the segmentation effect, as well as the contrast-across-region and uniformity-across-region values, of the proposed method are better than those of the Otsu method, the maximum entropy method, and methods based on conventional PCNN and visual attention, and the proposed method has excellent performance in extracting IR dim small targets from complex backgrounds.
Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans
NASA Astrophysics Data System (ADS)
Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj
2016-06-01
This work deals with the development of an algorithm for physical replication of patient-specific human bones and construction of corresponding implant/insert RP models, using a Reverse Engineering approach from non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular-facet based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other hand, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for design, prototyping and manufacturing of any object having freeform surfaces, based on boundary representation techniques. This work presents a process for physical replication of 3D rapid prototyping (RP) models of human bone from various CAD modeling techniques, developed using 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. The point cloud data is used for construction of a 3D CAD model by fitting B-spline curves through the points and then fitting a surface between these curve networks using swept blend techniques. The same result can also be achieved by generating a triangular mesh directly from the 3D point cloud data, without developing any surface model in commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process. The Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for the generation of the 3D RP model. A 3D physical model of the human bone is generated on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in the replication of human bone in the medical field.
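As a simplified illustration of the mesh-generation step, the sketch below triangulates a point cloud projected onto the x-y plane (a 2.5D shortcut; the paper uses full 3D Delaunay tetrahedralization with surface extraction) and writes ASCII STL facets:

```python
import numpy as np
from scipy.spatial import Delaunay

def point_cloud_to_ascii_stl(points, path="bone_patch.stl"):
    """Triangulate the point cloud in the x-y plane and emit ASCII STL
    facets suitable as rapid-prototyping input; a 2.5D simplification."""
    tri = Delaunay(points[:, :2])             # Delaunay on projected points
    with open(path, "w") as f:
        f.write("solid patch\n")
        for simplex in tri.simplices:
            a, b, c = points[simplex]
            n = np.cross(b - a, c - a)
            n = n / (np.linalg.norm(n) + 1e-12)   # facet normal
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid patch\n")

pts = np.random.default_rng(4).uniform(0, 1, (200, 3))  # stand-in for CT points
point_cloud_to_ascii_stl(pts)
```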
Wu, Chung-Hsien; Chiu, Yu-Hsien; Guo, Chi-Shiang
2004-12-01
This paper proposes a novel approach to the generation of Chinese sentences from ill-formed Taiwanese Sign Language (TSL) for people with hearing impairments. First, a sign icon-based virtual keyboard is constructed to provide a visualized interface for retrieving sign icons from a sign database. A proposed language model (LM), based on a predictive sentence template (PST) tree, integrates a statistical variable n-gram LM and linguistic constraints to deal with the translation problem from ill-formed sign sequences to grammatical written sentences. The PST tree, trained on a corpus collected from deaf schools, was used to model the correspondence between signed and written Chinese. In addition, a set of phrase formation rules, based on trigger-pair categories, was derived for sentence pattern expansion. These approaches improved the efficiency of text generation and the accuracy of word prediction and, therefore, improved the input rate. For the assessment of practical communication aids, a reading-comprehension training program with ten profoundly deaf students was undertaken in a deaf school in Tainan, Taiwan. Evaluation results show that the literacy aptitude test scores and subjective satisfaction levels were significantly improved.
A Measurement and Power Line Communication System Design for Renewable Smart Grids
NASA Astrophysics Data System (ADS)
Kabalci, E.; Kabalci, Y.
2013-10-01
Data communication over electric power lines can be managed easily and economically, since grid connections are already spread all over the world. This paper investigates the applicability of Power Line Communication (PLC) in an energy generation system based on photovoltaic (PV) panels through a modeling study in Matlab/Simulink. The Simulink model covers the designed PV panels, a boost converter with a Perturb and Observe (P&O) control algorithm, a full-bridge inverter, and the binary phase shift keying (BPSK) modem used to transfer the measured data over the power lines. This study proposes a novel method that uses the electrical power lines not only to carry the line voltage but also to transmit measurements from renewable energy generation plants, thereby aiming to minimize the additional monitoring costs incurred by SCADA, Ethernet-based, or GSM-based systems. Although this study is performed with solar power plants, the proposed model can be applied to other renewable generation systems. Consequently, using the proposed technique instead of SCADA or Ethernet-based systems eliminates additional monitoring costs.
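A minimal sketch of the BPSK link described above, assuming an additive-noise channel and a coherent correlator receiver; the carrier frequency, sample rate, and noise level are illustrative, not values from the Simulink model.

```python
# Hedged sketch: bits modulate the carrier phase, noise is added, and a
# coherent receiver recovers the bits by correlating with the carrier.
import numpy as np

fs, fc, spb = 100_000, 10_000, 100           # sample rate, carrier, samples/bit
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 64)

t = np.arange(len(bits) * spb) / fs
carrier = np.cos(2 * np.pi * fc * t)
symbols = np.repeat(2 * bits - 1, spb)       # 0 -> -1, 1 -> +1
tx = symbols * carrier                       # BPSK: phase 0 or pi

rx = tx + 0.5 * rng.standard_normal(tx.size) # noisy power-line channel

# Coherent demodulation: correlate with the carrier over each bit period.
corr = (rx * carrier).reshape(len(bits), spb).sum(axis=1)
decoded = (corr > 0).astype(int)
print("bit errors:", np.sum(decoded != bits))
```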
NASA Astrophysics Data System (ADS)
Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.
2018-02-01
The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems requires appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology is treated as a capacitated vehicle routing problem (CVRP) combined with local search metaheuristics. We also consider current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of the number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.
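A minimal sketch of the CVRP step described above, assuming a nearest-neighbour construction heuristic under a feeder capacity limit; coordinates, demands, and the capacity are invented, and the local-search refinement stage is only noted in the comments.

```python
# Hedged sketch: build feasible MV feeder routes from a substation to demand
# nodes; a local-search pass (e.g. 2-opt per route) would refine the result.
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, (20, 2))         # demand-area centroids
demand = rng.uniform(0.1, 1.0, 20)           # MW per node (assumed)
depot = np.array([5.0, 5.0])                 # HV/MV substation
capacity = 4.0                               # feeder loading limit in MW (assumed)

unserved = set(range(len(coords)))
routes = []
while unserved:
    pos, load, route = depot, 0.0, []
    while True:
        feasible = [i for i in unserved if load + demand[i] <= capacity]
        if not feasible:
            break
        nxt = min(feasible, key=lambda i: np.linalg.norm(coords[i] - pos))
        route.append(nxt)
        load += demand[nxt]
        pos = coords[nxt]
        unserved.remove(nxt)
    routes.append(route)
print(f"{len(routes)} feeders:", routes)
```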
Large-scale building scenes reconstruction from close-range images based on line and plane feature
NASA Astrophysics Data System (ADS)
Ding, Yi; Zhang, Jianqing
2007-11-01
Automatic generation of 3D models of buildings and other man-made structures from images has become a topic of increasing importance; such models may be used in applications such as virtual reality, the entertainment industry, and urban planning. In this paper we address the main problems and available solutions for the generation of 3D models from terrestrial images. We first generate a coarse planar model of the principal scene planes and then reconstruct windows to refine the building model. There are several points of novelty: first, we reconstruct the coarse wire-frame model using line segment matching under an epipolar geometry constraint; second, we detect the positions of all windows in the image and reconstruct the windows by establishing corner-point correspondences between images, then add the windows to the coarse model to refine the building model. The strategy is illustrated on an image triple of a college building.
3D Surface Generation from Aerial Thermal Imagery
NASA Astrophysics Data System (ADS)
Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.
2015-12-01
Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are challenges in precise 3D measurement of objects. In this paper the potential of thermal video for 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video; then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied, and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, equipped with a 25 mm lens, and was mounted on an Unmanned Aerial Vehicle (UAV). The results show that the accuracy of the 3D model generated from thermal images is comparable to that of a DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparison of the generated DSM with the 9 measured GCPs in the area shows that the Root Mean Square Error (RMSE) is smaller than 5 decimetres in the X and Y directions and 1.6 metres in the Z direction.
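A minimal sketch of the tie-point step described above, using OpenCV's SIFT with Lowe's ratio test; the frame file names are placeholders, and calibration, bundle adjustment, and dense matching are omitted.

```python
# Hedged sketch: SIFT tie points between two extracted video frames.
import cv2

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder names
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on 2-nearest-neighbour matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} tie points between frames")
```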
Modular, Semantics-Based Composition of Biosimulation Models
ERIC Educational Resources Information Center
Neal, Maxwell Lewis
2010-01-01
Biosimulation models are valuable, versatile tools used for hypothesis generation and testing, codification of biological theory, education, and patient-specific modeling. Driven by recent advances in computational power and the accumulation of systems-level experimental data, modelers today are creating models with an unprecedented level of…
NASA Astrophysics Data System (ADS)
Rai, Aakash C.; Lin, Chao-Hsin; Chen, Qingyan
2015-02-01
Ozone-terpene reactions are important sources of indoor ultrafine particles (UFPs), a potential health hazard for human beings. Humans themselves act as possible sites for ozone-initiated particle generation through reactions with squalene (a terpene) present in their skin, hair, and clothing. This investigation developed a numerical model to probe particle generation from ozone reactions with clothing worn by humans. The model was based on particle generation measured in an environmental chamber as well as physical formulations of particle nucleation, condensational growth, and deposition. In five out of the six test cases, the model predicted particle size distributions reasonably well. The failure in the remaining case demonstrates the fundamental limitations of nucleation models. The developed model was used to predict particle generation under various building and airliner-cabin conditions. These predictions indicate that ozone reactions with human-worn clothing could be an important source of UFPs in densely occupied classrooms and airliner cabins; such reactions could account for about 40% of the total UFPs measured on a Boeing 737-700 flight. The model predictions at this stage are indicative and should be improved further.
Bromochloromethane (BCM) is a volatile compound and a by-product of disinfection of water by chlorination. Physiologically based pharmacokinetic (PBPK) models are used in risk assessment applications. An updated PBPK model for BCM is generated and applied to hypotheses testing c...
Enforcing elemental mass and energy balances for reduced order models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J.; Agarwal, K.; Sharma, P.
2012-01-01
Development of economically feasible gasification and carbon capture, utilization, and storage (CCUS) technologies requires a variety of software tools to optimize the designs of not only the key devices involved (e.g., gasifier, CO2 adsorber) but also the entire power generation system. High-fidelity models such as Computational Fluid Dynamics (CFD) models are capable of accurately simulating the detailed flow dynamics, heat transfer, and chemistry inside the key devices. However, the integration of CFD models within steady-state process simulators, and subsequent optimization of the integrated system, still presents significant challenges due to the differences in time and length scales, as well as the high computational cost. A reduced order model (ROM) generated from a high-fidelity model can serve as a bridge between the models of different scales. While high-fidelity models are built upon the principles of mass, momentum, and energy conservation, ROMs are usually developed from regression-type equations, and hence their predictions may violate the mass and energy conservation laws. A high-fidelity model may also have mass and energy balance problems if it is not tightly converged. Conservation of mass and energy is important when a ROM is integrated into a flowsheet for process simulation of an entire chemical or power generation system, especially when recycle streams are connected to the modeled device. As part of the Carbon Capture Simulation Initiative (CCSI) project supported by the U.S. Department of Energy, we developed a software framework for generating ROMs from CFD simulations and integrating them with Process Modeling Environments (PMEs) for system-wide optimization. This paper presents a method to correct the results of a high-fidelity model or a ROM such that the elemental mass and energy are conserved exactly. Correction factors for the flow rates of individual species in the product streams are solved using a minimization algorithm based on the Lagrange multiplier method. Enthalpies of product streams are also modified to enforce the energy balance. The approach is illustrated for two ROMs, one based on a CFD model of an entrained-flow gasifier and the other based on a CFD model of a multiphase CO2 adsorber.
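A minimal sketch of the species-flow correction described above, assuming a small CO/CO2/H2/H2O system; it uses a generic constrained minimization (scipy's SLSQP) in place of the paper's Lagrange multiplier formulation, and all flows and the elemental matrix are illustrative.

```python
# Hedged sketch: minimally adjust ROM product flows so elemental balances close.
import numpy as np
from scipy.optimize import minimize

species = ["CO", "CO2", "H2", "H2O"]
# Rows: elements C, O, H; columns: moles of each element per mole of species.
E = np.array([[1, 1, 0, 0],     # C
              [1, 2, 0, 1],     # O
              [0, 0, 2, 2]])    # H
n_in = np.array([1.0, 0.5, 2.0, 1.0])    # feed molar flows (assumed)
n_rom = np.array([0.8, 0.75, 1.6, 1.35]) # raw ROM product flows (unbalanced)

target = E @ n_in                         # elemental flows that must be conserved

def objective(n):                         # stay as close as possible to the ROM
    return np.sum(((n - n_rom) / n_rom) ** 2)

cons = {"type": "eq", "fun": lambda n: E @ n - target}
res = minimize(objective, n_rom, constraints=cons,
               bounds=[(0, None)] * len(species))
print(dict(zip(species, np.round(res.x, 4))))
```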
A skeleton family generator via physics-based deformable models.
Krinidis, Stelios; Chatzis, Vassilios
2009-01-01
This paper presents a novel approach for object skeleton family extraction. The introduced technique utilizes a 2-D physics-based deformable model that parameterizes the object's shape. The deformation equations are solved using modal analysis, and, depending on the model's physical characteristics, a different skeleton is produced each time, generating in this way a family of skeletons. The theoretical properties and the experiments presented demonstrate that the obtained skeletons match hand-labeled skeletons provided by human subjects, even in the presence of significant noise, shape variations, cuts, and tears, and have the same topology as the original skeletons. In particular, the proposed approach produces no spurious branches without the need for any skeleton pruning method.
NASA Astrophysics Data System (ADS)
Trinks, I.; Wallner, M.; Kucera, M.; Verhoeven, G.; Torrejón Valdelomar, J.; Löcker, K.; Nau, E.; Sevara, C.; Aldrian, L.; Neubauer, E.; Klein, M.
2017-02-01
The excavated architecture of the exceptional prehistoric site of Akrotiri on the Greek island of Thera/Santorini is endangered by gradual decay, damage due to accidents, and seismic shocks, being located on an active volcano in an earthquake-prone area. Therefore, in 2013 and 2014 a digital documentation project was conducted with the support of the National Geographic Society in order to generate a detailed digital model of Akrotiri's architecture using terrestrial laser scanning and image-based modeling. Additionally, non-invasive geophysical prospection was tested in order to investigate its potential to explore and map yet-buried archaeological remains. This article describes the project and the generated results.
Physics of thermo-acoustic sound generation
NASA Astrophysics Data System (ADS)
Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.
2013-09-01
We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.
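Read as a hedged restatement in formula form (the symbols here are assumptions, not the paper's notation): with conversion efficiency \(\eta\), supplied thermal power \(P_{\mathrm{th}}\), and excitation frequency \(f\), the reported trends amount to

```latex
% Hedged restatement of the reported scaling trends; proportionality only.
\eta \;\propto\; P_{\mathrm{th}}\, f^{2}
```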
Stylized facts in social networks: Community-based static modeling
NASA Astrophysics Data System (ADS)
Jo, Hang-Hyun; Murase, Yohsuke; Török, János; Kertész, János; Kaski, Kimmo
2018-06-01
The past analyses of datasets of social networks have enabled us to make empirical findings of a number of aspects of human society, which are commonly featured as stylized facts of social networks, such as broad distributions of network quantities, existence of communities, assortative mixing, and intensity-topology correlations. Since the understanding of the structure of these complex social networks is far from complete, for deeper insight into human society more comprehensive datasets and modeling of the stylized facts are needed. Although the existing dynamical and static models can generate some stylized facts, here we take an alternative approach by devising a community-based static model with heterogeneous community sizes and larger communities having smaller link density and weight. With these few assumptions we are able to generate realistic social networks that show most stylized facts for a wide range of parameters, as demonstrated numerically and analytically. Since our community-based static model is simple to implement and easily scalable, it can be used as a reference system, benchmark, or testbed for further applications.
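A minimal sketch of the static construction described above, assuming power-law community sizes and a link density that decreases with community size; the exponent, density law, and inter-community linking are illustrative choices, not the paper's calibrated values.

```python
# Hedged sketch: communities with heterogeneous sizes, where larger
# communities get a smaller internal link density.
import itertools
import numpy as np

rng = np.random.default_rng(42)
n_comm = 50
sizes = np.clip(rng.zipf(2.5, n_comm), 3, 200)   # heterogeneous sizes

edges, node = set(), 0
for s in sizes:
    members = range(node, node + int(s))
    p_link = min(1.0, 2.0 / s)                   # density decreasing with size
    for i, j in itertools.combinations(members, 2):
        if rng.random() < p_link:
            edges.add((i, j))
    node += int(s)

# Sparse random inter-community links tie the communities together.
for _ in range(node // 10):
    i, j = rng.integers(0, node, 2)
    if i != j:
        edges.add((min(i, j), max(i, j)))
print(f"{node} nodes, {len(edges)} edges")
```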
NASA Astrophysics Data System (ADS)
Guarracino, L.; Jougnot, D.
2018-01-01
Among the different contributions generating self-potential, the streaming potential is of particular interest in hydrogeology for its sensitivity to water flow. Estimating water flux in porous media using streaming potential data relies on our capacity to understand, model, and upscale the electrokinetic coupling at the mineral-solution interface. Different approaches have been proposed to predict streaming potential generation in porous media. One of these approaches is flux averaging, which is based on determining the excess charge that is effectively dragged in the medium by the water flow. In this study, we develop a physically based analytical model to predict the effective excess charge in saturated porous media using a flux-averaging approach in a bundle of capillary tubes with a fractal pore size distribution. The proposed model allows the determination of the effective excess charge as a function of pore water ionic concentration and hydrogeological parameters like porosity, permeability, and tortuosity. The new model has been successfully tested against different sets of experimental data from the literature. One of the main findings of this study is a mechanistic explanation of the empirical dependence between the effective excess charge and the permeability that has been reported by several researchers. The proposed model also highlights the link to other lithological properties, and it is able to reproduce the evolution of effective excess charge with electrolyte concentration.
Regulation, cell differentiation and protein-based inheritance.
Malagnac, Fabienne; Silar, Philippe
2006-11-01
Recent research using fungi as models provides new insight into the ability of regulatory networks to generate cellular states that are sufficiently stable to be faithfully transmitted to daughter cells, thereby generating epigenetic inheritance. Such protein-based inheritance is driven by infectious factors endowed with properties usually displayed by prions. We emphasize the contribution of regulatory networks to the emerging properties displayed by cells.
Liu, Zhen; Cai, Yijun; Sun, Qiang
2017-01-01
Gene-modified monkey models would be particularly valuable in biomedical and neuroscience research. Virus-based transgenic methods and programmable nuclease-based site-specific gene editing methods (TALEN, CRISPR-Cas9) enable the generation of gene-modified monkeys with gain or loss of function of specific genes. Here, we describe the generation of transgenic and knock-out (KO) monkeys with high efficiency by lentivirus and programmable nucleases.
Modenese, Luca; Montefiori, Erica; Wang, Anqi; Wesarg, Stefan; Viceconti, Marco; Mazzà, Claudia
2018-05-17
The generation of subject-specific musculoskeletal models of the lower limb has become a feasible task thanks to improvements in medical imaging technology and musculoskeletal modelling software. Nevertheless, clinical use of these models in paediatric applications is still limited with regard to the estimation of muscle and joint contact forces. Aiming to improve the current state of the art, a methodology to generate highly personalized subject-specific musculoskeletal models of the lower limb based on magnetic resonance imaging (MRI) scans was codified as a step-by-step procedure and applied to data from eight juvenile individuals. The generated musculoskeletal models were used to simulate 107 gait trials using stereophotogrammetric and force platform data as input. To ensure completeness of the modelling procedure, the muscles' architecture needs to be estimated. Four methods to estimate the muscles' maximum isometric force and two methods to estimate musculotendon parameters (optimal fiber length and tendon slack length) were assessed and compared, in order to quantify their influence on the models' output. The reported results represent the first comprehensive subject-specific model-based characterization of juvenile gait biomechanics, including profiles of joint kinematics and kinetics, muscle forces, and joint contact forces. Our findings suggest that, when musculotendon parameters were linearly scaled from a reference model and the muscle force-length-velocity relationship was accounted for in the simulations, realistic knee contact forces could be estimated, and these forces were not sensitive to the method used to compute muscle maximum isometric force.
Generation of High Resolution Land Surface Parameters in the Community Land Model
NASA Astrophysics Data System (ADS)
Ke, Y.; Coleman, A. M.; Wigmosta, M. S.; Leung, L.; Huang, M.; Li, H.
2010-12-01
The Community Land Model (CLM) is the land surface model used for the Community Atmosphere Model (CAM) and the Community Climate System Model (CCSM). It examines the physical, chemical, and biological processes across a variety of spatial and temporal scales. Currently, efforts are being made to improve the spatial resolution of the CLM, in part, to represent finer scale hydrologic characteristics. Current land surface parameters of CLM4.0, in particular plant functional types (PFT) and leaf area index (LAI), are generated from MODIS and calculated at a 0.05 degree resolution. These MODIS-derived land surface parameters have also been aggregated to coarser resolutions (e.g., 0.5, 1.0 degrees). To evaluate the response of CLM across various spatial scales, higher spatial resolution land surface parameters need to be generated. In this study we examine the use of Landsat TM/ETM+ imagery and data fusion techniques for generating land surface parameters at a 1 km resolution within the Pacific Northwest United States. Land cover types and PFTs are classified based on Landsat multi-season spectral information, DEM, National Land Cover Database (NLCD) and the USDA-NASS Crop Data Layer (CDL). For each PFT, relationships between MOD15A2 high quality LAI values, Landsat-based vegetation indices, climate variables, terrain, and laser-altimeter derived vegetation height are used to generate monthly LAI values at a 30 m resolution. The high-resolution PFT and LAI data are aggregated to create a 1 km model grid resolution. An evaluation and comparison of CLM land surface response at both fine and moderate scale is presented.
A method for the computational modeling of the physics of heart murmurs
NASA Astrophysics Data System (ADS)
Seo, Jung Hee; Bakhshaee, Hani; Garreau, Guillaume; Zhu, Chi; Andreou, Andreas; Thompson, William R.; Mittal, Rajat
2017-05-01
A computational method for direct simulation of the generation and propagation of blood-flow-induced sounds is proposed. This computational hemoacoustic method is based on the immersed boundary approach and employs high-order finite difference methods to resolve wave propagation and scattering accurately. The current method employs a two-step, one-way coupled approach for the sound generation and its propagation through the tissue. The blood flow is simulated by solving the incompressible Navier-Stokes equations using the sharp-interface immersed boundary method, and the equations governing the generation and propagation of the three-dimensional elastic wave corresponding to the murmur are resolved with a high-order, immersed-boundary-based, finite-difference method in the time domain. The proposed method is applied to a model problem of an aortic stenosis murmur, and the simulation results are verified and validated by comparison with known solutions as well as experimental measurements. The murmur propagation in a realistic model of a human thorax is also simulated using the computational method. The roles of hemodynamics and elastic wave propagation in the murmur are discussed based on the simulation results.
Simulation of load-sharing in standalone distributed generation system
NASA Astrophysics Data System (ADS)
Ajewole, Titus O.; Craven, Robert P. M.; Kayode, Olakunle; Babalola, Olufisayo S.
2018-05-01
This paper presents a study on load-sharing among the component generating units of a multi-source electric microgrid that is operated as an autonomous AC supply-mode system. An emerging trend in power system development permits the deployment of microgrids for standalone or stand-by applications, thereby requiring active- and reactive-power sharing among the discrete generating units contained in hybrid-source microgrids. In this study, therefore, a laboratory-scale model of a microgrid energized by three renewable energy-based sources is employed as a simulation platform to investigate power sharing among the power-generating units. Each source is represented by a source emulator that captures the real operational characteristics of the mimicked generating unit. With real-life weather data and load profiles implemented on the model, the sharing of the load among the generating units is investigated. The three source emulators generate power proportionately, with their frequencies synchronized at the point of common coupling as a result of balanced power flow among them. This hybrid topology of renewable energy-based microgrid could therefore be seamlessly adopted into the national energy mix by indigenous electric utility providers in Nigeria.
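The abstract does not name the control scheme behind the proportional sharing it reports; a standard mechanism that produces exactly this behaviour is P-f droop control. A minimal sketch, with droop constants, ratings, and load as assumptions:

```python
# Hedged sketch: steady-state P-f droop sharing among three units.
import numpy as np

ratings = np.array([10.0, 20.0, 30.0])   # unit ratings in kW (assumed)
k_droop = 0.5 / ratings                  # Hz per kW, scaled to each rating
f0, load = 50.0, 42.0                    # nominal frequency, total load (kW)

# At steady state all units settle at one frequency f: sum_i (f0 - f)/k_i = load.
f = f0 - load / np.sum(1.0 / k_droop)
P = (f0 - f) / k_droop
print(f"f = {f:.3f} Hz, shares = {np.round(P, 2)} kW")  # proportional to ratings
```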
Stone, Vathsala I; Lane, Joseph P
2012-05-16
Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact, that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs. This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor. The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach. Given the emphasis on evidence-based practice in the medical and health fields and "bench to bedside" expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes. High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits.
NASA Astrophysics Data System (ADS)
Liu, Lu; Hejazi, Mohamad; Li, Hongyi; Forman, Barton; Zhang, Xiao
2017-08-01
Previous modelling studies suggest that thermoelectric power generation is vulnerable to climate change, whereas studies based on historical data suggest the impact will be less severe. Here we explore the vulnerability of thermoelectric power generation in the United States to climate change by coupling an Earth system model with a thermoelectric power generation model, including state-level representation of environmental regulations on thermal effluents. We find that the impact of climate change is lower than in previous modelling estimates due to the inclusion of a spatially disaggregated representation of environmental regulations and of provisional variances that temporarily relieve power plants from permit requirements. More specifically, our results indicate that climate change alone may reduce average generating capacity by 2-3% by the 2060s, while reductions of up to 12% are expected if environmental requirements are enforced without waivers for thermal variation. Our work highlights the significance of accounting for legal constructs and underscores the effects of provisional variances in addition to environmental requirements.
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph
2017-04-01
Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion, and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variability on these processes, long time series of rainfall with high resolution are required. Yet observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscaling rainfall data with a constant time step, such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): first, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions; second, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous-time-step rainfall generator has been developed and tested using 10 min and hourly rainfall data from four stations in north-eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference: Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19-30, 1998.
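A minimal sketch of the coupling described above, assuming exponential distributions for the Poisson rectangular pulse components and a two-branch microcanonical cascade; all parameters (and the fixed five cascade levels) are illustrative, and the paper's sigmoid parameterization and boundary-box constraints are omitted.

```python
# Hedged sketch: Poisson rectangular pulse events disaggregated by a
# mass-conserving (microcanonical) multiplicative cascade.
import numpy as np

rng = np.random.default_rng(7)

def poisson_events(n, mean_dry_h=30.0, mean_dur_h=8.0, mean_int=0.5):
    """Alternating dry spells and rectangular rain events (exponential laws)."""
    dry = rng.exponential(mean_dry_h, n)
    dur = rng.exponential(mean_dur_h, n)
    inten = rng.exponential(mean_int, n)     # mm/h
    return dry, dur, inten

def cascade(depth, levels=5, p01=0.3):
    """Split mass into halves; with prob. p01 all mass goes to one half,
    otherwise a random uniform split. Totals are conserved at every level."""
    series = np.array([depth])
    for _ in range(levels):
        w = np.where(rng.random(series.size) < p01,
                     rng.integers(0, 2, series.size).astype(float),
                     rng.uniform(0.2, 0.8, series.size))
        series = np.column_stack((series * w, series * (1 - w))).ravel()
    return series

dry, dur, inten = poisson_events(3)
for d, i in zip(dur, inten):
    fine = cascade(d * i)                    # event depth -> 2**5 sub-steps
    assert np.isclose(fine.sum(), d * i)     # mass conserved by construction
    print(np.round(fine, 3))
```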
Formal methods for test case generation
NASA Technical Reports Server (NTRS)
Rushby, John (Inventor); De Moura, Leonardo Mendonga (Inventor); Hamon, Gregoire (Inventor)
2011-01-01
The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to use of the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.
Dissecting psychiatric spectrum disorders by generative embedding
Brodersen, Kay H.; Deserno, Lorenz; Schlagenhauf, Florian; Lin, Zhihao; Penny, Will D.; Buhmann, Joachim M.; Stephan, Klaas E.
2013-01-01
This proof-of-concept study examines the feasibility of defining subgroups in psychiatric spectrum disorders by generative embedding, using dynamical system models which infer neuronal circuit mechanisms from neuroimaging data. To this end, we re-analysed an fMRI dataset of 41 patients diagnosed with schizophrenia and 42 healthy controls performing a numerical n-back working-memory task. In our generative-embedding approach, we used parameter estimates from a dynamic causal model (DCM) of a visual–parietal–prefrontal network to define a model-based feature space for the subsequent application of supervised and unsupervised learning techniques. First, using a linear support vector machine for classification, we were able to predict individual diagnostic labels significantly more accurately (78%) from DCM-based effective connectivity estimates than from functional connectivity between (62%) or local activity within the same regions (55%). Second, an unsupervised approach based on variational Bayesian Gaussian mixture modelling provided evidence for two clusters which mapped onto patients and controls with nearly the same accuracy (71%) as the supervised approach. Finally, when restricting the analysis only to the patients, Gaussian mixture modelling suggested the existence of three patient subgroups, each of which was characterised by a different architecture of the visual–parietal–prefrontal working-memory network. Critically, even though this analysis did not have access to information about the patients' clinical symptoms, the three neurophysiologically defined subgroups mapped onto three clinically distinct subgroups, distinguished by significant differences in negative symptom severity, as assessed on the Positive and Negative Syndrome Scale (PANSS). In summary, this study provides a concrete example of how psychiatric spectrum diseases may be split into subgroups that are defined in terms of neurophysiological mechanisms specified by a generative model of network dynamics such as DCM. The results corroborate our previous findings in stroke patients that generative embedding, compared to analyses of more conventional measures such as functional connectivity or regional activity, can significantly enhance both the interpretability and performance of computational approaches to clinical classification. PMID:24363992
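A minimal sketch of the supervised step described above, assuming a linear SVM over DCM parameter estimates; the feature matrix here is synthetic, and the group sizes merely mirror the study's 41/42 split.

```python
# Hedged sketch: cross-validated linear SVM on model-based features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_params = 12                                # stand-in for DCM parameter count
X = np.vstack([rng.normal(0.0, 1.0, (41, n_params)),    # "patients"
               rng.normal(0.4, 1.0, (42, n_params))])   # "controls"
y = np.array([0] * 41 + [1] * 42)

clf = SVC(kernel="linear", C=1.0)
acc = cross_val_score(clf, X, y, cv=10)
print(f"cross-validated accuracy ~ {acc.mean():.2f}")
```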
Brief history of agricultural systems modeling.
Jones, James W; Antle, John M; Basso, Bruno; Boote, Kenneth J; Conant, Richard T; Foster, Ian; Godfray, H Charles J; Herrero, Mario; Howitt, Richard E; Janssen, Sander; Keating, Brian A; Munoz-Carpena, Rafael; Porter, Cheryl H; Rosenzweig, Cynthia; Wheeler, Tim R
2017-07-01
Agricultural systems science generates knowledge that allows researchers to consider complex problems or take informed agricultural decisions. The rich history of this science exemplifies the diversity of systems and scales over which they operate and have been studied. Modeling, an essential tool in agricultural systems science, has been accomplished by scientists from a wide range of disciplines, who have contributed concepts and tools over more than six decades. As agricultural scientists now consider the "next generation" models, data, and knowledge products needed to meet the increasingly complex systems problems faced by society, it is important to take stock of this history and its lessons to ensure that we avoid re-invention and strive to consider all dimensions of associated challenges. To this end, we summarize here the history of agricultural systems modeling and identify lessons learned that can help guide the design and development of next generation of agricultural system tools and methods. A number of past events combined with overall technological progress in other fields have strongly contributed to the evolution of agricultural system modeling, including development of process-based bio-physical models of crops and livestock, statistical models based on historical observations, and economic optimization and simulation models at household and regional to global scales. Characteristics of agricultural systems models have varied widely depending on the systems involved, their scales, and the wide range of purposes that motivated their development and use by researchers in different disciplines. Recent trends in broader collaboration across institutions, across disciplines, and between the public and private sectors suggest that the stage is set for the major advances in agricultural systems science that are needed for the next generation of models, databases, knowledge products and decision support systems. The lessons from history should be considered to help avoid roadblocks and pitfalls as the community develops this next generation of agricultural systems models.
Scalability of grid- and subbasin-based land surface modeling approaches for hydrologic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tesfa, Teklu K.; Ruby Leung, L.; Huang, Maoyi
2014-03-27
This paper investigates the relative merits of grid- and subbasin-based land surface modeling approaches for hydrologic simulations, with a focus on their scalability (i.e., their ability to perform consistently across a range of spatial resolutions) in simulating runoff generation. Simulations produced by the grid- and subbasin-based configurations of the Community Land Model (CLM) are compared at four spatial resolutions (0.125°, 0.25°, 0.5°, and 1°) over the topographically diverse region of the U.S. Pacific Northwest. Using the 0.125° resolution simulation as the "reference", statistical skill metrics are calculated and compared across simulations at 0.25°, 0.5°, and 1° spatial resolutions of each modeling approach at basin and topographic region levels. Results suggest a significant scalability advantage for the subbasin-based approach compared to the grid-based approach for runoff generation. Basin-level annual average relative errors of surface runoff at 0.25°, 0.5°, and 1° compared to 0.125° are 3%, 4%, and 6% for the subbasin-based configuration and 4%, 7%, and 11% for the grid-based configuration, respectively. The scalability advantages of the subbasin-based approach are more pronounced during winter/spring and over mountainous regions. The source of runoff scalability is found to be related to the scalability of major meteorological and land surface parameters of runoff generation. More specifically, the subbasin-based approach is more consistent across spatial scales than the grid-based approach in snowfall/rainfall partitioning, which is related to air temperature and surface elevation. Scalability of a topographic parameter used in the runoff parameterization also contributes to the improved scalability of the rain-driven saturated surface runoff component, particularly during winter. Hence this study demonstrates the importance of spatial structure for multi-scale modeling of hydrological processes, with implications for surface heat fluxes in coupled land-atmosphere modeling.
Do we need demographic data to forecast plant population dynamics?
Tredennick, Andrew T.; Hooten, Mevin B.; Adler, Peter B.
2017-01-01
Rapid environmental change has generated growing interest in forecasts of future population trajectories. Traditional population models built with detailed demographic observations from one study site can address the impacts of environmental change at particular locations, but are difficult to scale up to the landscape and regional scales relevant to management decisions. An alternative is to build models using population-level data that are much easier to collect over broad spatial scales than individual-level data. However, it is unknown whether models built using population-level data adequately capture the effects of density-dependence and environmental forcing that are necessary to generate skillful forecasts. Here, we test the consequences of aggregating individual responses when forecasting the population states (percent cover) and trajectories of four perennial grass species in a semi-arid grassland in Montana, USA. We parameterized two population models for each species, one based on individual-level data (survival, growth and recruitment) and one on population-level data (percent cover), and compared their forecasting accuracy and forecast horizons with and without the inclusion of climate covariates. For both models, we used Bayesian ridge regression to weight the influence of climate covariates for optimal prediction. In the absence of climate effects, we found no significant difference between the forecast accuracy of models based on individual-level data and models based on population-level data. Climate effects were weak, but increased forecast accuracy for two species. Increases in accuracy with climate covariates were similar between model types. In our case study, percent cover models generated forecasts as accurate as those from a demographic model. For the goal of forecasting, models based on aggregated individual-level data may offer a practical alternative to data-intensive demographic models. Long time series of percent cover data already exist for many plant species. Modelers should exploit these data to predict the impacts of environmental change.
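A minimal sketch of the population-level modelling idea described above, assuming a Gompertz-type growth form and sklearn's BayesianRidge as the ridge-regression step; all data and coefficients are synthetic stand-ins for the observed cover time series.

```python
# Hedged sketch: next year's cover regressed on current cover (density
# dependence) plus climate covariates, with Bayesian ridge regularization.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(11)
T = 30
cover = np.empty(T)
cover[0] = 20.0
climate = rng.normal(size=(T, 2))            # e.g., precipitation, temperature
for t in range(T - 1):                       # Gompertz-like synthetic dynamics
    cover[t + 1] = cover[t] * np.exp(0.4 * (np.log(25) - np.log(cover[t]))
                                     + 0.1 * climate[t, 0]
                                     + rng.normal(0, 0.05))

X = np.column_stack([np.log(cover[:-1]), climate[:-1]])
y = np.log(cover[1:])
model = BayesianRidge().fit(X, y)
print("one-step cover forecast:", np.exp(model.predict(X[-1:]))[0])
```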
Formal Methods for Automated Diagnosis of Autosub 6000
NASA Technical Reports Server (NTRS)
Ernits, Juhan; Dearden, Richard; Pebody, Miles
2009-01-01
This is a progress report on applying formal methods in the context of building an automated diagnosis and recovery system for Autosub 6000, an Autonomous Underwater Vehicle (AUV). The diagnosis task involves building abstract models of the control system of the AUV. The diagnosis engine is based on Livingstone 2, a model-based diagnoser originally built for aerospace applications. Large parts of the diagnosis model can be built without concrete knowledge about each mission, but actual mission scripts and configuration parameters that carry important information for diagnosis are changed for every mission. Thus we use formal methods for generating the mission control part of the diagnosis model automatically from the mission script and perform a number of invariant checks to validate the configuration. After the diagnosis model is augmented with the generated mission control component model, it needs to be validated using verification techniques.
NASA Astrophysics Data System (ADS)
Bai, Xian-Xu; Zhong, Wei-Min; Zou, Qi; Zhu, An-Ding; Sun, Jun
2018-07-01
Based on the structural design concept of 'functional integration', this paper proposes the principle of a power-generated magnetorheological energy absorber with velocity self-sensing capability (PGMREA), which integrates a controllable damping mechanism and a mechanical-to-electrical energy conversion mechanism within a single structure and provides multiple functions: controllable damping, power generation, and velocity self-sensing. The controllable damping mechanism consists of an annular gap and a ball screw. The annular gap is filled with MR fluid that operates in pure shear mode under a controllable electromagnetic field. The rotational damping torque generated by the controllable damping mechanism is translated into a linear damping force via the ball screw. The mechanical-to-electrical energy conversion mechanism is realized by the ball screw and a generator composed of a permanent magnet rotor and a stator; it converts the mechanical energy of excitations into electrical energy, either for storage or to directly power the controllable damping mechanism of the PGMREA. The velocity self-sensing capability of the PGMREA is achieved via signal processing of the energy conversion information. Based on the principle of the proposed PGMREA, a mathematical model is established, covering the damping force, generated power, and self-sensed velocity. The electromagnetic circuit of the PGMREA is simulated and verified using the finite element analysis software ANSYS. A PGMREA prototype is experimentally tested on a servo-hydraulic testing system, and the model-based predictions are compared with the experimental results and analyzed.
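A minimal sketch of the velocity self-sensing idea described above: the ball screw maps linear velocity to shaft speed, so the measured back EMF can be inverted for velocity. The lead and EMF constant are assumptions, not the prototype's parameters.

```python
# Hedged sketch: invert generator back EMF for the excitation velocity.
import numpy as np

LEAD = 0.01      # ball screw lead in m per revolution (assumed)
K_E = 0.05       # generator back-EMF constant in V per rad/s (assumed)

def self_sensed_velocity(emf_volts):
    """Measured EMF -> shaft speed -> linear excitation velocity."""
    omega = emf_volts / K_E                  # rad/s from the measured EMF
    return omega * LEAD / (2 * np.pi)        # m/s along the absorber axis

v = 0.2                                      # true excitation velocity (m/s)
omega = 2 * np.pi * v / LEAD                 # forward model of the ball screw
emf = K_E * omega + np.random.default_rng(5).normal(0, 0.05)  # noisy reading
print(f"estimated velocity: {self_sensed_velocity(emf):.4f} m/s")
```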
Schiek, Richard [Albuquerque, NM
2006-06-20
A method of generating two-dimensional masks from a three-dimensional model comprises providing a three-dimensional model representing a micro-electro-mechanical structure for manufacture and a description of process mask requirements, reducing the three-dimensional model to a topological description of unique cross sections, and selecting candidate masks from the unique cross sections and the cross section topology. The method further can comprise reconciling the candidate masks based on the process mask requirements description to produce two-dimensional process masks.
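A minimal sketch of the reduction step described in the claim, assuming a triangle-mesh input: slice the model at a series of heights and keep one representative height per cross-section signature (here just the segment count, a crude stand-in for the patent's topological description).

```python
# Hedged sketch: plane/mesh intersection and naive cross-section dedup.
import numpy as np

def slice_segments(verts, faces, z):
    """Intersect each triangle with the plane z = const; return line segments."""
    segs = []
    for f in faces:
        pts = verts[f]
        cross = []
        for a, b in ((0, 1), (1, 2), (2, 0)):
            za, zb = pts[a, 2], pts[b, 2]
            if (za - z) * (zb - z) < 0:          # edge straddles the plane
                t = (z - za) / (zb - za)
                cross.append(pts[a] + t * (pts[b] - pts[a]))
        if len(cross) == 2:
            segs.append(cross)
    return segs

# A full MEMS mesh would go here; one triangle keeps the sketch short.
verts = np.array([[0, 0, 0], [1, 0, 2], [0, 1, 2]], float)
faces = [[0, 1, 2]]
unique = {}
for z in np.linspace(0.1, 1.9, 10):
    key = len(slice_segments(verts, faces, z))   # crude topological signature
    unique.setdefault(key, z)                    # first height per signature
print(unique)                                    # candidate mask heights
```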
Surgical motion characterization in simulated needle insertion procedures
NASA Astrophysics Data System (ADS)
Holden, Matthew S.; Ungi, Tamas; Sargent, Derek; McGraw, Robert C.; Fichtinger, Gabor
2012-02-01
PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, to both promote patient safety and improve the efficiency and effectiveness of training. The purpose of this study was to determine if a Markov model-based algorithm can more accurately segment a needle-based surgical procedure into its five constituent tasks than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated with known ground truth segmentation by a synthetic procedural data generator, with random noise added to each degree of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For amplitudes of translational noise greater than 0.01 mm, the Markov model-based algorithm was significantly more accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p < 0.001 for amplitude 10.0 mm). For amplitudes less than 0.01 mm, the two algorithms produced insignificantly different results. CONCLUSION: Task segmentation of simulated needle insertion procedures was improved by using a Markov model-based algorithm as opposed to a threshold-based algorithm for procedures involving translational noise.
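The Markov approach amounts to decoding the most likely task sequence with the Viterbi algorithm. Below is a minimal Python sketch with Gaussian emissions per task, plus the threshold baseline; the one-dimensional observation and all parameters are illustrative assumptions, not the study's actual models.

    import numpy as np

    def viterbi(obs, log_A, means, stds, log_pi):
        """Hedged sketch: segment a 1-D motion signal into tasks with an HMM.

        obs    : (T,) observations (e.g., needle velocity)
        log_A  : (K, K) log transition matrix between task models
        means, stds : per-task Gaussian emission parameters (assumed)
        log_pi : (K,) log initial distribution
        Returns the most likely task label per sample.
        """
        T, K = len(obs), len(log_pi)
        log_b = -0.5 * ((obs[:, None] - means) / stds) ** 2 - np.log(stds)
        delta = log_pi + log_b[0]
        psi = np.zeros((T, K), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_A          # (from, to)
            psi[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_b[t]
        path = np.empty(T, dtype=int)
        path[-1] = delta.argmax()
        for t in range(T - 2, -1, -1):
            path[t] = psi[t + 1, path[t + 1]]
        return path

    def threshold_segment(obs, cut=0.5):
        """Baseline: a task change is declared whenever obs crosses `cut`."""
        return (obs > cut).astype(int)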
The Performance Evaluation of Multi-Image 3D Reconstruction Software with Different Sensors
NASA Astrophysics Data System (ADS)
Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.
2015-12-01
Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods, such as structured-light systems and laser scanners, have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided as commercial software, open-source software and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method for generating three-dimensional models. Much research has been conducted to identify suitable software and algorithms for achieving an accurate and complete model, but little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to examine and introduce an appropriate combination of sensor and software to provide a complete model with the highest accuracy. To do this, different software packages used in previous studies were compared and the most popular ones in each category were selected (Arc 3D, Visual SfM, Sure, Agisoft). Four small objects with distinct geometric properties and particular complexities were chosen, and their accurate models, serving as reliable ground truth, were created using an ATOS Compact Scan 2M 3D scanner. Images were taken using a Fujifilm Real 3D stereo camera, an Apple iPhone 5 and a Nikon D3200 professional camera, and three-dimensional models of the objects were obtained using each software package. Finally, a comprehensive comparison of the detailed results on the data set showed that the best combination of software and sensor for generating three-dimensional models is directly related to the object shape as well as the expected accuracy of the final model. Generally, better quantitative and qualitative results were obtained with the Nikon D3200 professional camera, while the Fujifilm Real 3D stereo camera and the Apple iPhone 5 ranked second and third, respectively. On the other hand, the three software packages Visual SfM, Sure and Agisoft competed closely for the most accurate and complete model of the objects, and the best software differed according to the geometric properties of the object.
Design and Test of Pseudorandom Number Generator Using a Star Network of Lorenz Oscillators
NASA Astrophysics Data System (ADS)
Cho, Kenichiro; Miyano, Takaya
We have recently developed a chaos-based stream cipher based on augmented Lorenz equations forming a star network of Lorenz subsystems. In our method, the augmented Lorenz equations are used as a pseudorandom number generator. In this study, we propose a new method based on the augmented Lorenz equations for generating binary pseudorandom numbers and evaluate its security using the SP800-22 statistical tests published by the National Institute of Standards and Technology, in comparison with the performance of other chaotic dynamical models used as binary pseudorandom number generators. We further propose a faster version of the proposed method and evaluate its security using the TestU01 statistical tests published by L’Ecuyer and Simard.
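The general idea of chaos-based bit generation can be shown with a single Lorenz oscillator. The Python toy below integrates the classical equations and thresholds one state variable; the extraction rule, step size and Euler integration are illustrative assumptions, and the paper's star-network construction and whitening are not reproduced.

    def lorenz_bits(n_bits, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                    dt=0.01, x0=(1.0, 1.0, 1.0)):
        """Hedged sketch: derive a bit stream from a single Lorenz oscillator.

        Thresholding x > 0 yields correlated bits; a real generator would
        decimate and whiten the stream (and, per the paper, couple many
        Lorenz subsystems in a star network).
        """
        x, y, z = x0
        bits = []
        while len(bits) < n_bits:
            # explicit Euler step; adequate only for a sketch
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            bits.append(1 if x > 0.0 else 0)
        return bits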
Cognitive/emotional models for human behavior representation in 3D avatar simulations
NASA Astrophysics Data System (ADS)
Peterson, James K.
2004-08-01
Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex, based on the new isocortex models recently presented by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher-level abstract (meta-level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators, which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.
NASA Astrophysics Data System (ADS)
Fiorini, Rodolfo A.; Dacquino, Gianfranco
2005-03-01
GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D tensor invariants for optimal synthetic representation, description and learning of n-dimensional shape/texture, was presented at recent conferences. Here, improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space are presented, together with a demo application, and progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval and data mining, among other technological areas. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of world objects. According to this approach, "n-D tensor calculus" can be considered a "formal language" for reliably computing optimized "n-dimensional tensor invariants" as specific object "invariant parameter and attribute words" for automated optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "formal vocabulary" learned from a "generalized formal dictionary" of the "computational tensor invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour-shape invariant characteristics. Any sophisticated application needing effective, robust capture and parameterization of geometric/colour invariant object attributes for reliable automated object learning and discrimination can benefit greatly from the GEOGINE progressive automated model generation computational kernel. The main operational advantages over previous, similar approaches are: 1) progressive automated invariant model generation, 2) an invariant minimal complete description set for computational efficiency, and 3) arbitrary model precision for robust object description and identification.
3D Modelling of an Indoor Space Using a Rotating Stereo Frame Camera System
NASA Astrophysics Data System (ADS)
Kang, J.; Lee, I.
2016-06-01
Sophisticated indoor design and growing development in urban architecture are making indoor spaces more complex, and these spaces are often directly connected to public transportation such as subway and train stations. These phenomena shift many outdoor activities into indoor spaces. Constant technological development also has a significant impact on people's expectations of services such as location-aware services indoors. It is therefore necessary to develop a low-cost system for creating 3D models of indoor spaces to support services based on such indoor models. In this paper, we introduce a rotating stereo frame camera system with two cameras and use it to generate an indoor 3D model. First, we selected a test site and acquired images eight times during one day at different positions and heights of the system. The measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights, it was possible to form various combinations of data and to choose several suitable combinations as input data. Next, we generated a 3D model of the test site using commercial software with the chosen input data. The last step was to evaluate the accuracy of the indoor model generated from the selected input data. In summary, this paper introduces a low-cost system for acquiring indoor spatial data and generating 3D models from the images it acquires. Through these experiments, we confirm that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system can be applied to indoor services based on indoor spatial information.
NASA Technical Reports Server (NTRS)
Spekreijse, S. P.; Boerstoel, J. W.; Vitagliano, P. L.; Kuyvenhoven, J. L.
1992-01-01
About five years ago, joint development began on a flow simulation system for engine-airframe integration studies of both propeller and jet aircraft. The initial system was based on the Euler equations and was made operational for industrial aerodynamic design work. The system consists of three major components: a domain modeller, for the graphical interactive subdivision of flow domains into an unstructured collection of blocks; a grid generator, for the graphical interactive computation of structured grids in blocks; and a flow solver, for the computation of flows on multi-block grids. The industrial partners of the collaboration and NLR have demonstrated that the domain modeller, grid generator and flow solver can be applied to simulate Euler flows around complete aircraft, including propulsion system simulation. Extension to Navier-Stokes flows is in progress. Delft Hydraulics has shown that both the domain modeller and the grid generator can also be applied successfully to hydrodynamic configurations. An overview is given of the main aspects of both domain modelling and grid generation.
A Method for Generating Reduced Order Linear Models of Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1997-01-01
For the modeling of high speed propulsion systems, there are at least two major categories of models. One is based on computational fluid dynamics (CFD), and the other on the design and analysis of control systems. CFD is accurate and gives a complete view of the internal flow field, but it typically has many states and runs much slower than real-time. Models based on control design typically run near real-time but do not always capture the fundamental dynamics. To provide improved control models, methods are needed that are based on CFD techniques but yield models small enough for control analysis and design.
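One standard route from a large CFD-derived linear system to a control-sized model is projection onto a low-dimensional basis. The Python sketch below shows a crude Krylov/POD-style Galerkin projection; it is a generic stand-in under stated assumptions, not the reduction method developed in the paper.

    import numpy as np

    def reduce_linear_model(A, B, C, r):
        """Hedged sketch: project a large linear model (e.g., from linearized
        CFD) onto r dominant modes via Galerkin projection.

        A, B, C : full-order state-space matrices (x' = Ax + Bu, y = Cx)
        r       : desired reduced order
        """
        # Basis from an SVD of a controllability-like Krylov block
        # [B, AB, A^2 B, ...] -- a crude Krylov/POD hybrid (assumed).
        n = A.shape[0]
        K = np.hstack([np.linalg.matrix_power(A, k) @ B
                       for k in range(min(n, 20))])
        Phi, _, _ = np.linalg.svd(K, full_matrices=False)
        Phi = Phi[:, :r]
        # Galerkin projection onto span(Phi)
        Ar = Phi.T @ A @ Phi
        Br = Phi.T @ B
        Cr = C @ Phi
        return Ar, Br, Cr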
UDE-based control of variable-speed wind turbine systems
NASA Astrophysics Data System (ADS)
Ren, Beibei; Wang, Yeqin; Zhong, Qing-Chang
2017-01-01
In this paper, the control of a PMSG (permanent magnet synchronous generator)-based variable-speed wind turbine system with a back-to-back converter is considered. The uncertainty and disturbance estimator (UDE)-based control approach is applied to the regulation of the DC-link voltage and the control of the RSC (rotor-side converter) and the GSC (grid-side converter). For the rotor-side controller, the UDE-based vector control is developed for the RSC with PMSG control to facilitate the application of the MPPT (maximum power point tracking) algorithm for the maximum wind energy capture. For the grid-side controller, the UDE-based vector control is developed to control the GSC with the power reference generated by a UDE-based DC-link voltage controller. Compared with the conventional vector control, the UDE-based vector control can achieve reliable current decoupling control with fast response. Moreover, the UDE-based DC-link voltage regulation can achieve stable DC-link voltage under model uncertainties and external disturbances, e.g. wind speed variations. The effectiveness of the proposed UDE-based control approach is demonstrated through extensive simulation studies in the presence of coupled dynamics, model uncertainties and external disturbances under varying wind speeds. The UDE-based control is able to generate more energy, e.g. by 5% for the wind profile tested.
Incorporating structure from motion uncertainty into image-based pose estimation
NASA Astrophysics Data System (ADS)
Ludington, Ben T.; Brown, Andrew P.; Sheffler, Michael J.; Taylor, Clark N.; Berardi, Stephen
2015-05-01
A method for generating and utilizing structure from motion (SfM) uncertainty estimates within image-based pose estimation is presented. The method is applied to a class of problems in which SfM algorithms are used to form a geo-registered reference model of a particular ground area from imagery gathered in flight by a small unmanned aircraft. The model is then used to form camera pose estimates in near real-time from imagery gathered later. The resulting pose estimates can be utilized by other onboard systems (e.g., as a replacement for GPS data) or by downstream exploitation systems, e.g., image-based object trackers. However, many consumers of pose estimates require an assessment of pose accuracy, and the method for generating that accuracy assessment is presented. First, the uncertainty in the reference model is estimated. Bundle adjustment (BA) is utilized for model generation. While the high-level approach for generating a covariance matrix of the BA parameters is straightforward, typical computing hardware cannot support the required operations due to the scale of the optimization problem within BA. Therefore, a series of sparse matrix operations is utilized to form an exact covariance matrix for only the parameters needed at a particular moment. Once the uncertainty in the model has been determined, it is used to augment Perspective-n-Point pose estimation algorithms to improve pose accuracy and to estimate the resulting pose uncertainty. The implementation of the described method is presented along with results, including results gathered from flight test data.
State-of-the-Art: DTM Generation Using Airborne LIDAR Data
Chen, Ziyue; Gao, Bingbo; Devereux, Bernard
2017-01-01
Digital terrain model (DTM) generation is the fundamental application of airborne Lidar data. In past decades, a large body of studies has been conducted to present and evaluate a variety of DTM generation methods. Although great progress has been made, DTM generation, especially in specific terrain situations, remains challenging. This research introduces the general principles of DTM generation and reviews diverse mainstream DTM generation methods. In accordance with the filtering strategy, these methods are classified into six categories: surface-based adjustment, morphology-based filtering, triangulated irregular network (TIN)-based refinement, segmentation and classification, statistical analysis, and multi-scale comparison. Typical methods for each category are briefly introduced, and the merits and limitations of each category are discussed accordingly. Despite their different filtering strategies, these DTM generation methods present similar difficulties when implemented in sharply changing terrain, areas with dense non-ground features and complicated landscapes. This paper suggests that the fusion of multiple sources and the integration of different methods can be effective ways to improve the performance of DTM generation. PMID:28098810
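To make the morphology-based category concrete, here is a minimal progressive morphological filter in Python, in the spirit of the methods the review covers; the window progression and slope threshold are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def progressive_morphological_ground_mask(dsm, cell=1.0, max_win=16, slope=0.3):
        """Hedged sketch of a morphology-based ground filter: progressively
        opened surfaces reject non-ground cells whose height drop exceeds a
        slope-scaled threshold.

        dsm  : 2-D array of rasterized lowest-return elevations
        cell : raster cell size [m]; max_win and slope are tuning assumptions
        """
        ground = np.ones(dsm.shape, dtype=bool)
        prev = dsm.copy()
        win = 3
        while win <= max_win:
            opened = ndimage.grey_opening(prev, size=(win, win))
            thresh = slope * (win * cell)      # tolerance grows with window size
            ground &= (prev - opened) <= thresh
            prev = opened
            win = 2 * win + 1
        return ground   # boolean mask of likely ground cells; interpolate for DTM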
Test pattern generation for ILA sequential circuits
NASA Technical Reports Server (NTRS)
Feng, YU; Frenzel, James F.; Maki, Gary K.
1993-01-01
An efficient method of generating test patterns for sequential machines implemented using one-dimensional, unilateral, iterative logic arrays (ILA's) of BTS pass transistor networks is presented. Based on a transistor level fault model, the method affords a unique opportunity for real-time fault detection with improved fault coverage. The resulting test sets are shown to be equivalent to those obtained using conventional gate level models, thus eliminating the need for additional test patterns. The proposed method advances the simplicity and ease of the test pattern generation for a special class of sequential circuitry.
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
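The efficiency of such samplers comes from skipping over non-edges rather than testing every vertex pair. A minimal Python sketch of that skipping idea for the Chung-Lu special case (uniform kernel; weights sorted in decreasing order) follows; it illustrates the technique, not the paper's full kernel-graph sampler.

    import math
    import random

    def chung_lu_edges(w, seed=0):
        """Hedged sketch: near-linear-time edge sampling for a Chung-Lu
        random graph, a special case of random kernel graphs.

        w : vertex weights sorted in DECREASING order (required so that the
            acceptance ratio q/p below never exceeds 1).
        """
        rng = random.Random(seed)
        n, S = len(w), float(sum(w))
        edges = []
        for u in range(n - 1):
            v = u + 1
            p = min(w[u] * w[v] / S, 1.0)
            while v < n and p > 0.0:
                if p < 1.0:
                    # geometric skip over candidates rejected with prob. 1 - p
                    v += int(math.log(1.0 - rng.random()) / math.log(1.0 - p))
                if v < n:
                    q = min(w[u] * w[v] / S, 1.0)
                    if rng.random() < q / p:   # correct for the skip probability
                        edges.append((u, v))
                    p = q
                    v += 1
        return edges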
Sentence Paraphrasing from a Conceptual Base
ERIC Educational Resources Information Center
Goldman, Neil M.
1975-01-01
A model of natural language generation based on an underlying language-free representation of meaning is described. A computer implementation of this model, called BABEL, has been developed at Stanford University. It is able to produce sentence paraphrases which demonstrate understanding with respect to a given context. Available from Association…
Manufacturing Licorice: Modeling with Data in Third Grade
ERIC Educational Resources Information Center
English, Lyn D.
2017-01-01
This paper reports on a study of 3rd-grade students' modeling with data, which involves comprehensive investigations that draw upon STEM-based concepts, contexts, and questions, and generate products supported by evidence and open to informal inferential thinking. Within a real-world STEM-based context of licorice manufacturing, students…
Improving Conceptual Understanding and Representation Skills through Excel-Based Modeling
ERIC Educational Resources Information Center
Malone, Kathy L.; Schunn, Christian D.; Schuchardt, Anita M.
2018-01-01
The National Research Council framework for science education and the Next Generation Science Standards have created a need for additional research on, and development of, curricula that are both technologically model-based and incorporate engineering practices. This is especially the case for biology education. This paper describes a quasi-experimental…
Prediction of energy balance and utilization for solar electric cars
NASA Astrophysics Data System (ADS)
Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.
2017-11-01
Solar irradiation and ambient temperature vary with region, season and time of day, which directly affects the performance of a solar-energy-based car system. In this paper, the solar electric car is modelled for conditions in Xi'an. First, meteorological data are modelled to simulate the variation of solar irradiation and ambient temperature, and the temperature change of the solar cells is then calculated using a thermal equilibrium relation. Building on the driving resistance and solar cell power generation models, the system is simulated under the varying radiation conditions of a day. Daily power generation and the cruise mileage of the solar electric car can be predicted by calculating solar cell efficiency and power. The theoretical approach and results can be used for solar electric car program design and optimization in future developments.
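The prediction chain (irradiance profile → cell temperature → efficiency → daily energy) can be illustrated with a toy integration in Python; the half-sine irradiance, thermal coefficient and efficiency figures below are illustrative assumptions, not the paper's Xi'an data.

    import math

    def daily_energy(area=6.0, eta_ref=0.20, gamma=-0.004, t_ref=25.0,
                     g_peak=900.0, hours=12.0, dt=0.1):
        """Hedged sketch: integrate daily PV output with a temperature-derated
        efficiency (all numbers are illustrative).

        area   : cell area [m^2]; eta_ref : efficiency at t_ref
        gamma  : efficiency temperature coefficient [1/K]
        g_peak : peak irradiance [W/m^2]; hours : daylight duration
        Returns daily energy in Wh.
        """
        energy, t = 0.0, 0.0
        while t < hours:
            g = g_peak * math.sin(math.pi * t / hours)   # half-sine irradiance
            t_cell = 20.0 + 0.03 * g                     # crude thermal balance
            eta = eta_ref * (1.0 + gamma * (t_cell - t_ref))
            energy += eta * g * area * dt                # dt in hours -> Wh
            t += dt
        return energy

    # e.g. cruise range ~ daily_energy() / consumption_Wh_per_km (assumed figure)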
ERIC Educational Resources Information Center
Roduta Roberts, Mary; Alves, Cecilia B.; Chu, Man-Wai; Thompson, Margaret; Bahry, Louise M.; Gotzmann, Andrea
2014-01-01
The purpose of this study was to evaluate the adequacy of three cognitive models, one developed by content experts and two generated from student verbal reports for explaining examinee performance on a grade 3 diagnostic mathematics test. For this study, the items were developed to directly measure the attributes in the cognitive model. The…
Generating Performance Models for Irregular Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav
2017-05-30
Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; PageRank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.
The Economic Impact of the President’s 2013 Budget
2012-04-01
…and capital. According to the Solow-type model, people base their decisions about working and saving primarily on current economic… model developed by Robert Solow. CBO's life-cycle growth model is an overlapping-generations general-equilibrium model that is based on a standard… services produced in a given period by the labor and capital supplied by the country's residents, regardless of where the labor…
Modelling of internal architecture of kinesin nanomotor as a machine language.
Khataee, H R; Ibrahim, M Y
2012-09-01
Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. The kinesin nanomotor is considered a bio-nanoagent able to sense the cell through its sensors (its heads and tail), make decisions internally and perform actions on the cell through its actuator (its motor domain). The study maps the agent-based architectural model of the internal decision-making process of the kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of the kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was accepted by the architectural DFA model of the nanomotor and was in good agreement with its natural behaviour. The internal agent-based architectural model of the kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor's interactions with its cell. Thus, the developed regular machine language can model, as a language, the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell. Modelling the internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation for the concept of bio-nanoswarms and the next phases of bio-nanorobotic systems development.
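The DFA-to-language mapping rests on ordinary automaton acceptance. The toy Python check below illustrates the mechanism; the two-state "walk cycle" automaton is an invented example, not the paper's kinesin model.

    def dfa_accepts(transitions, start, accepting, word):
        """Hedged sketch: acceptance test for a DFA given as a transition
        table (states and symbols below are illustrative).

        transitions : dict mapping (state, symbol) -> state
        """
        state = start
        for sym in word:
            key = (state, sym)
            if key not in transitions:
                return False            # undefined move: reject
            state = transitions[key]
        return state in accepting

    # Hypothetical walk cycle: ATP binding (a) then power stroke (p), repeated.
    dfa = {("waiting", "a"): "bound", ("bound", "p"): "waiting"}
    assert dfa_accepts(dfa, "waiting", {"waiting"}, "apap")
    assert not dfa_accepts(dfa, "waiting", {"waiting"}, "apa")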
Dynamic Gate Product and Artifact Generation from System Models
NASA Technical Reports Server (NTRS)
Jackson, Maddalena; Delp, Christopher; Bindschadler, Duane; Sarrel, Marc; Wollaeger, Ryan; Lam, Doris
2011-01-01
Model Based Systems Engineering (MBSE) is gaining acceptance as a way to formalize systems engineering practice through the use of models. The traditional method of producing and managing a plethora of disjointed documents and presentations ("PowerPoint engineering") has proven both costly and limiting as a means of managing the complex and sophisticated specifications of modern space systems. We have developed a tool and method to produce sophisticated artifacts as views and by-products of integrated models, allowing us to minimize the practice of "PowerPoint engineering" on model-based projects and to demonstrate the ability of MBSE to work within, and supersede, traditional engineering practices. This paper describes how we have created and successfully used model-based document generation techniques to extract paper artifacts from complex SysML and UML models in support of successful project reviews. The use of formal SysML and UML models for architecture and system design enables the production of review documents, textual artifacts and analyses that are consistent with one another and require virtually no labor-intensive maintenance across small-scale design changes and multiple authors. This effort thus enables approaches that focus more on rigorous engineering work and less on "PowerPoint engineering" and the production of paper-based documents or their "office-productivity" file equivalents.
ENU mutagenesis to generate genetically modified rat models.
van Boxtel, Ruben; Gould, Michael N; Cuppen, Edwin; Smits, Bart M G
2010-01-01
The rat is one of the most preferred model organisms in biomedical research and has been extremely useful for linking physiology and pathology to the genome. However, approaches to genetically modify specific genes in the rat germ line remain relatively scarce. To date, the most efficient approach for generating genetically modified rats has been the target-selected N-ethyl-N-nitrosourea (ENU) mutagenesis-based technology. Here, we describe the detailed protocols for ENU mutagenesis and mutant retrieval in the rat model organism.
Mechanical Aspects of Interfaces and Surfaces in Ceramic Containing Systems.
1984-12-14
…of a computer model to simulate the crack damage. The model is based on the fracture mechanics of cracks engulfed by the short stress pulse generated… by drop impact. Inertial effects of the crack faces are a particularly important aspect of the model. The computer scheme thereby allows the stress… W. R. Beaumont, "On the Toughness of Particulate Filled Polymers." Water Drop Impact. E. D. Case and A. G. Evans, "A Computer-Generated Simulation…
A CellML simulation compiler and code generator using ODE solving schemes
2012-01-01
Models written in description languages such as CellML are becoming a popular solution for handling complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach in which the system first generates the equation set relating the physiological model variable values at a time t to the values at t + Δt, and then generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. For the FHN model simulation, results showed good qualitative and quantitative correspondence with the theoretical predictions, while results for the Luo-Rudy 1991 model showed that only first-order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
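The two-stage idea (first emit the update equations mapping the state at t to t + Δt, then wrap them in a simulation loop) can be illustrated with the FitzHugh-Nagumo (FHN) model mentioned above. The Python sketch below uses a forward-Euler scheme and common default parameters; it is hand-written for illustration, not output of the CellML Compiler.

    def fhn_step(v, w, dt, i_ext=0.5, a=0.7, b=0.8, tau=12.5):
        """Stage one, by hand: explicit update equations relating the
        FitzHugh-Nagumo state at t to t + dt (forward Euler; parameter
        values are common defaults, not CellML's)."""
        dv = v - v ** 3 / 3.0 - w + i_ext
        dw = (v + a - b * w) / tau
        return v + dt * dv, w + dt * dw

    # Stage two: the "generated" simulation loop.
    v, w = -1.0, 1.0
    trace = []
    for _ in range(10000):
        v, w = fhn_step(v, w, dt=0.01)
        trace.append(v)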
Robust optimization-based DC optimal power flow for managing wind generation uncertainty
NASA Astrophysics Data System (ADS)
Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn
2012-11-01
Integrating wind generation into the wider grid poses a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market while considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Preliminary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.
Luboz, Vincent; Chabanas, Matthieu; Swider, Pascal; Payan, Yohan
2005-08-01
This paper addresses an important issue for the clinical relevance of computer-assisted surgical applications, namely the methodology used to automatically build patient-specific finite element (FE) models of anatomical structures. A method is proposed based on a technique called mesh-matching, followed by a process that corrects mesh irregularities. The mesh-matching algorithm generates patient-specific volume meshes from an existing generic model. The mesh regularization process is based on the Jacobian matrix transform relating the FE reference element and the current element. This method for generating patient-specific FE models is first applied to computer-assisted maxillofacial surgery, and more precisely to the FE elastic modelling of patient facial soft tissues. For each patient, the planned bone osteotomies (mandible, maxilla, chin) are used as boundary conditions to deform the FE face model in order to predict the aesthetic outcome of the surgery. Seven patient-specific FE models were successfully generated by our method. For one patient, the FE model prediction is qualitatively compared with the patient's post-operative appearance, measured from a computed tomography scan. The methodology is then applied to computer-assisted orbital surgery and evaluated for the generation of 11 patient-specific FE poroelastic models of the orbital soft tissues. These models are used to predict the consequences of surgical decompression of the orbit. More precisely, an average law is extrapolated from the simulations carried out for each patient model; this law links the size of the osteotomy (the surgical gesture) and the backward displacement of the eyeball (the consequence of the surgical gesture).
A New Regulatory Policy for FTTx-Based Next-Generation Access Networks
NASA Astrophysics Data System (ADS)
Makarovič, Boštjan
2013-07-01
This article critically assesses the latest European Commission policies on next-generation access investment, which focus on regulated prices and the relaxing of wholesale access obligations. Drawing on vital socio-legal and economic arguments, it challenges the assumptions of the current EU regulatory framework and calls for a more contractual, utility-based model of regulation instead of the current system, which relies heavily on market-driven, infrastructure-based competition.
A Personal Navigation System Based on Inertial and Magnetic Field Measurements
2010-09-01
[Front-matter fragment: the thesis includes a MATLAB implementation, a pendulum model for generating simulated motion sensor data, and an evaluation of filter performance with real pendulum data.]
NEXT GENERATION MULTIMEDIA/MULTIPATHWAY EXPOSURE MODELING
The Stochastic Human Exposure and Dose Simulation model for pesticides (SHEDS-Pesticides) supports the efforts of EPA to better understand human exposures and doses to multimedia, multipathway pollutants. It is a physically-based, probabilistic computer model that predicts, for u...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Y.; Edwards, R.M.; Lee, K.Y.
1997-03-01
In this paper, a simplified lower-order model is first developed for a nuclear steam generator system and verified against realistic environments. Based on this simplified model, a hybrid multi-input multi-output (MIMO) control system, consisting of feedforward control (FFC) and feedback control (FBC), is designed for wide-range conditions using the genetic algorithm (GA) technique. The FFC control, obtained by the GA optimization method, injects an a priori command input into the system to achieve optimal performance for the designed system, while the GA-based FBC control provides the necessary compensation for any disturbances or uncertainties in a real steam generator. The FBC control is an optimal design of a PI-based control system, which would be more acceptable for industrial practice and power plant control system upgrades. The designed hybrid MIMO FFC/FBC control system is first applied to the simplified model and then to a more complicated higher-order model, used as a substitute for the real system, to test the efficacy of the designed control system. Results from computer simulations show that the designed GA-based hybrid MIMO FFC/FBC control can achieve good responses and robust performance. Hence, it can be considered a viable alternative for current control system upgrades.
Anatomy guided automated SPECT renal seed point estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar; Kumar, Sailendra
2010-04-01
Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting ROIs from SPECT images is challenging due to poor image resolution. SPECT is utilized to study kidney function, and the challenge involved is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating the seed point locations of both kidneys using the anatomical locations of the kidneys and bladder. The motivation for this work is the premise that the anatomical location of the bladder relative to the kidneys does not differ much between patients. A model is generated based on manual segmentation of the bladder and both kidneys on 10 patient datasets (including sum and max images), and centroids are estimated for the manually segmented bladder and kidneys. The comparatively easier bladder segmentation is performed first, and the bladder centroid coordinates are then fed into the model to generate seed points for the kidneys. The percentage errors observed between organ centroid coordinates from ground truth and the values estimated by our approach are acceptable: approximately 1%, 6% and 2% in the X coordinates and approximately 2%, 5% and 8% in the Y coordinates of the bladder, left kidney and right kidney, respectively. Using a regression model and the location of the bladder, ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation in noisy cases.
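The regression step can be as simple as an affine least-squares map from bladder centroid to kidney centroid, fitted on the manually segmented training cases. A minimal Python sketch follows; the affine form is an assumption for illustration, not necessarily the paper's exact model.

    import numpy as np

    def fit_seed_model(bladder_xy, kidney_xy):
        """Hedged sketch: learn a linear map from bladder centroids to kidney
        seed points using manually segmented training cases (data assumed).

        bladder_xy : (N, 2) bladder centroids
        kidney_xy  : (N, 2) corresponding kidney centroids
        """
        X = np.hstack([bladder_xy, np.ones((len(bladder_xy), 1))])  # affine term
        W, *_ = np.linalg.lstsq(X, kidney_xy, rcond=None)
        return W                                                    # shape (3, 2)

    def predict_seed(W, bladder_centroid):
        """Seed point estimate for a new patient from the bladder centroid."""
        x = np.append(bladder_centroid, 1.0)
        return x @ W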
Batool, Fozia; Iqbal, Shahid; Akbar, Jamshed
2018-04-03
The present study describes quantitative structure-property relationship (QSPR) modeling to relate metal ion characteristics to the adsorption potential of Ficus carica leaves for 13 selected metal ions (Ca²⁺, Cr³⁺, Co²⁺, Cu²⁺, Cd²⁺, K⁺, Mg²⁺, Mn²⁺, Na⁺, Ni²⁺, Pb²⁺, Zn²⁺ and Fe²⁺) and to generate a QSPR model. A set of 21 characteristic descriptors was selected, and the relationship of these metal characteristics with the adsorptive behavior of the metal ions was investigated. Stepwise multiple linear regression (SMLR) analysis and an artificial neural network (ANN) were applied for descriptor selection and model generation. Langmuir and Freundlich isotherms were also applied to the adsorption data to establish a proper correlation for the experimental findings. The model generated indicated the covalent index as the most significant descriptor, responsible for more than 90% of the predicted adsorption (α = 0.05). Internal validation of the model was performed by measuring [Formula: see text] (0.98). The results indicate that the present model is a useful tool for predicting the adsorptive behavior of different metal ions based on their ionic characteristics.
Yuan, Jintao; Yu, Shuling; Zhang, Ting; Yuan, Xuejie; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu
2016-06-01
Octanol/water (K_OW) and octanol/air (K_OA) partition coefficients are two important physicochemical properties of organic substances. In current practice, K_OW and K_OA values of some polychlorinated biphenyls (PCBs) are measured using the generator column method. Quantitative structure-property relationship (QSPR) models can serve as a valuable alternative, replacing or reducing the experimental steps in the determination of K_OW and K_OA. In this paper, two different methods, i.e., multiple linear regression based on Dragon descriptors and hologram quantitative structure-activity relationship, were used to predict generator-column-derived log K_OW and log K_OA values of PCBs. The predictive ability of the developed models was validated using a test set, and the performance of all generated models was compared with that of three previously reported models. All results indicated that the proposed models are robust and satisfactory and can thus be used as alternative models for the rapid assessment of the K_OW and K_OA of PCBs.
Bing, Zhenshan; Cheng, Long; Chen, Guang; Röhrbein, Florian; Huang, Kai; Knoll, Alois
2017-04-04
Snake-like robots with 3D locomotion ability have significant advantages over traditional legged or wheeled mobile robots for adaptive travel in diverse complex terrain. Despite numerous developed gaits, these snake-like robots suffer from unsmooth gait transitions when changing locomotion speed, direction and body shape, which can potentially cause undesired movement and abnormal torque. Hence, there exists a knowledge gap for snake-like robots to achieve autonomous locomotion. To address this problem, this paper presents smooth slithering gait transition control based on a lightweight central pattern generator (CPG) model for snake-like robots. First, based on the convergence behavior of the gradient system, a lightweight CPG model with fast computing time was designed and compared with other widely adopted CPG models. Then, by reshaping the body into a more stable geometry, the slithering gait was modified and studied based on the proposed CPG model, including gait transitions of locomotion speed, moving direction and body shape. In contrast to the sinusoid-based method, extensive simulations and prototype experiments demonstrated that smooth slithering gait transitions can be effectively achieved using the proposed CPG-based control method without generating undesired locomotion or abnormal torque.
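The essential property of a CPG over direct sinusoid commands is that gait parameters evolve through smooth internal dynamics rather than jumping to new setpoints. The Python sketch below shows a minimal phase-oscillator chain with first-order parameter dynamics; its structure is an illustrative assumption, not the paper's gradient-system model.

    import math

    class SlitherCPG:
        """Hedged sketch: chain of phase oscillators whose amplitude and
        frequency relax smoothly toward commanded values, so gait parameters
        change without torque jumps (structure assumed)."""

        def __init__(self, n_joints=8, phase_lag=0.8, tau=0.3):
            self.n = n_joints
            self.lag = phase_lag      # phase offset between adjacent joints
            self.tau = tau            # time constant of smooth transitions
            self.phase = 0.0
            self.amp = 0.0
            self.freq = 0.0

        def step(self, amp_cmd, freq_cmd, turn_bias=0.0, dt=0.01):
            # first-order dynamics: parameters approach commands smoothly
            self.amp += dt / self.tau * (amp_cmd - self.amp)
            self.freq += dt / self.tau * (freq_cmd - self.freq)
            self.phase += dt * 2.0 * math.pi * self.freq
            # joint angles: travelling wave plus a turning offset
            return [self.amp * math.sin(self.phase - i * self.lag) + turn_bias
                    for i in range(self.n)]

    # cpg = SlitherCPG(); angles = cpg.step(amp_cmd=0.6, freq_cmd=1.0, turn_bias=0.1)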
NASA Astrophysics Data System (ADS)
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.
2018-04-01
Species distribution models (SDMs) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generating SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species' niche: predictions of correlative models approximate species' realized niches, while predictions of process-based models are more akin to the fundamental niche. Here, we integrated predictions of the fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distribution were estimated using the same regression approaches (logistic regression and support vector machines), considering both macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested within its fundamental distribution, reinforcing the theoretical assumption that a species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than thermal tolerances alone.
Raghuram, Jayaram; Miller, David J; Kesidis, George
2014-07-01
We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been white listed by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (white listed) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates.
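The null model over whitelisted names can be realized with something as simple as a smoothed character-bigram model whose length-normalized log-likelihood serves as the anomaly score. The Python sketch below is a minimal illustration of that idea; the smoothing constant, boundary markers and threshold rule are invented for the example, not the paper's exact model.

    import math
    from collections import defaultdict

    class DomainNameModel:
        """Hedged sketch: character-bigram generative (null) model trained on
        whitelisted names; low log-likelihood flags putative algorithmically
        generated domains."""

        def __init__(self, alpha=0.1):
            self.alpha = alpha                                  # smoothing (assumed)
            self.counts = defaultdict(lambda: defaultdict(float))

        def fit(self, names):
            for name in names:
                for a, b in zip("^" + name, name + "$"):        # ^,$ = boundaries
                    self.counts[a][b] += 1.0

        def log_likelihood(self, name):
            ll = 0.0
            for a, b in zip("^" + name, name + "$"):
                row = self.counts[a]
                total = sum(row.values()) + self.alpha * 40     # ~alphabet size
                ll += math.log((row[b] + self.alpha) / total)
            return ll / (len(name) + 1)                         # length-normalized

    model = DomainNameModel()
    model.fit(["google", "wikipedia", "weather"])
    # flag names scoring far below typical whitelist scores (threshold assumed)
    print(model.log_likelihood("google") > model.log_likelihood("xkqzvbnrt"))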
Integration of Irma tactical scene generator into directed-energy weapon system simulation
NASA Astrophysics Data System (ADS)
Owens, Monte A.; Cole, Madison B., III; Laine, Mark R.
2003-08-01
Integrated high-fidelity physics-based simulations that include engagement models, image generation, electro-optical hardware models and control system algorithms have previously been developed by Boeing-SVS for various tracking and pointing systems. These simulations, however, had always used images with featureless or random backgrounds and simple target geometries. With the requirement to engage tactical ground targets in the presence of cluttered backgrounds, a new type of scene generation tool was required to fully evaluate system performance in this challenging environment. To answer this need, Irma was integrated into the existing suite of Boeing-SVS simulation tools, allowing scene generation capabilities with unprecedented realism. Irma is a US Air Force research tool used for high-resolution rendering and prediction of target and background signatures. The MATLAB/Simulink-based simulation achieves closed-loop tracking by running track algorithms on the Irma-generated images, processing the track errors through optical control algorithms, and moving simulated electro-optical elements. The geometry of these elements determines the sensor orientation with respect to the Irma database containing the three-dimensional background and target models. This orientation is dynamically passed to Irma through a Simulink S-function to generate the next image. This integrated simulation provides a test-bed for development and evaluation of tracking and control algorithms against representative images including complex background environments and realistic targets calibrated using field measurements.
NASA Astrophysics Data System (ADS)
Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long
2014-11-01
The authors have developed a method to automatically generate non-uniform CFD meshes for image-based human airway models. The sizes of the generated tetrahedral elements vary in both the radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for the surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of wall distance, element size on the wall, and element size at the center of the airway lumen. The element sizes on the wall are computed based on local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and a smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rates. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
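The size field driving such a background mesh can be expressed as a simple function of wall distance that grades from a fine wall size to a coarser value at the lumen centre. The Python sketch below uses a linear blend as one plausible form; the blend and its parameters are assumptions for illustration, not the authors' actual size function.

    def element_size(d_wall, h_wall, h_center, lumen_radius):
        """Hedged sketch: background-mesh size field grading elements from a
        fine wall size to a coarser value at the lumen centre (linear blend
        assumed; Gmsh and TetGen accept such fields as background meshes).

        d_wall      : distance of the query point from the airway wall
        h_wall      : target element size on the wall (flow-rate dependent)
        h_center    : target element size at the lumen centre
        lumen_radius: local airway radius
        """
        t = min(max(d_wall / lumen_radius, 0.0), 1.0)   # 0 at wall, 1 at centre
        return h_wall + t * (h_center - h_wall)

    # e.g. finer wall resolution for a high-flow branch:
    # element_size(d_wall=0.1, h_wall=0.05, h_center=0.4, lumen_radius=2.0)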
Improving Power System Modeling. A Tool to Link Capacity Expansion and Production Cost Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diakov, Victor; Cole, Wesley; Sullivan, Patrick
2015-11-01
Capacity expansion models (CEMs) provide a high-level, long-term view of the prospects of the evolving power system. In simulating the possibilities of long-term capacity expansion, it is important to maintain the viability of power system operation on short-term (daily, hourly and sub-hourly) scales. Production cost models (PCMs) simulate routine power system operation on these shorter time scales using detailed load, transmission and generation fleet data, minimizing production costs while following reliability requirements. When based on CEM 'predictions' about generating unit retirements and buildup, PCMs provide a more detailed simulation of short-term system operation and, consequently, may confirm the validity of the capacity expansion predictions. Further, production cost model simulations of a system based on a capacity expansion model solution are 'evolutionarily' sound: the generator mix is the result of a logical sequence of unit retirement and buildup resulting from policy and incentives. The above has motivated us to bridge CEMs with PCMs by building a capacity-expansion-to-production-cost-model Linking Tool (CEPCoLT). The Linking Tool is built to map capacity expansion model prescriptions onto production cost model inputs. NREL's ReEDS and Energy Exemplar's PLEXOS are the capacity expansion and production cost models, respectively. Via the Linking Tool, PLEXOS provides details of operation for the regionally defined ReEDS scenarios.
Completing and Adapting Models of Biological Processes
NASA Technical Reports Server (NTRS)
Margaria, Tiziana; Hinchey, Michael G.; Raffelt, Harald; Rash, James L.; Rouff, Christopher A.; Steffen, Bernhard
2006-01-01
We present a learning-based method for model completion and adaptation, which is based on the combination of two approaches: 1) R2D2C, a technique for mechanically transforming system requirements, via provably equivalent models, into running code, and 2) automata-learning-based model extrapolation. The intended impact of this new combination is to make model completion and adaptation accessible to experts in the field, such as biologists or engineers. The principle is briefly illustrated by generating models of biological procedures concerning gene activities in the production of proteins, although the main application will concern autonomic systems for space exploration.
Campbell, J Q; Petrella, A J
2016-09-06
Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability and specificity. The primary shape modes were analyzed visually, quantitatively and biomechanically. The biomechanical analysis was performed by combining the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and of the extreme shapes (±3 standard deviations) of all 17 shape modes were created, demonstrating the robustness of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will enable population-based modeling in the future.
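At its core, such a statistical shape model is a principal component analysis of corresponding landmarks, from which new instances are synthesized by weighting the shape modes. The Python sketch below illustrates that construction under the assumption that landmark correspondence and alignment have already been established; it is not the study's specific pipeline.

    import numpy as np

    def build_shape_model(landmarks):
        """Hedged sketch: point-distribution (statistical shape) model from
        corresponding landmarks (alignment/correspondence assumed done).

        landmarks : (N_subjects, N_points * 3) flattened landmark coordinates
        """
        mean = landmarks.mean(axis=0)
        X = landmarks - mean
        # PCA via SVD; rows of Vt are the shape modes
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        stds = s / np.sqrt(len(landmarks) - 1)   # per-mode standard deviations
        return mean, Vt, stds

    def synthesize(mean, modes, stds, b):
        """New shape instance from mode weights b (in standard deviations),
        e.g. b = [+3] or [-3] for the extreme shapes of the first mode."""
        b = np.asarray(b, dtype=float)
        return mean + (b * stds[: len(b)]) @ modes[: len(b)]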
Predicting intensity ranks of peptide fragment ions.
Frank, Ari M
2009-05-01
Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that get combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal multiple reaction monitoring (MRM) transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html.
Learners' Epistemic Criteria for Good Scientific Models
ERIC Educational Resources Information Center
Pluta, William J.; Chinn, Clark A.; Duncan, Ravit Golan
2011-01-01
Epistemic criteria are the standards used to evaluate scientific products (e.g., models, evidence, arguments). In this study, we analyzed epistemic criteria for good models generated by 324 middle-school students. After evaluating a range of scientific models, but before extensive instruction or experience with model-based reasoning practices,…
Modal Survey of ETM-3, A 5-Segment Derivative of the Space Shuttle Solid Rocket Booster
NASA Technical Reports Server (NTRS)
Nielsen, D.; Townsend, J.; Kappus, K.; Driskill, T.; Torres, I.; Parks, R.
2005-01-01
The complex interactions between internal motor generated pressure oscillations and motor structural vibration modes associated with the static test configuration of a Reusable Solid Rocket Motor have the potential to generate significant dynamic thrust loads in the 5-segment configuration (Engineering Test Motor 3). Finite element model load predictions for worst-case conditions were generated based on extrapolation of a previously correlated 4-segment motor model. A modal survey was performed on the largest rocket motor to date, Engineering Test Motor #3 (ETM-3), to provide data for finite element model correlation and validation of model generated design loads. The modal survey preparation included pretest analyses to determine an efficient analysis set selection using the Effective Independence Method, and test simulations to assure critical test stand component loads did not exceed design limits. Historical Reusable Solid Rocket Motor modal testing, ETM-3 test analysis model development and pre-test loads analyses, as well as test execution and a comparison of results to pre-test predictions, are discussed.
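The Effective Independence Method mentioned for analysis set selection ranks candidate sensor locations by their contribution to the linear independence of the target mode shapes (via the Fisher information matrix) and prunes the weakest locations iteratively. A minimal sketch with random stand-in mode shapes:

```python
import numpy as np

def effective_independence(phi, n_sensors):
    """Iteratively prune candidate DOFs, keeping those that contribute most
    to the independence of the target mode shapes (Effective Independence)."""
    idx = np.arange(phi.shape[0])
    while len(idx) > n_sensors:
        A = phi[idx]
        # Per-DOF effective independence: diagonal of A (A^T A)^-1 A^T
        ed = np.einsum('ij,jk,ik->i', A, np.linalg.inv(A.T @ A), A)
        idx = np.delete(idx, np.argmin(ed))
    return idx

# Stand-in mode shape matrix: 200 candidate DOFs, 8 target modes.
rng = np.random.default_rng(2)
phi = rng.normal(size=(200, 8))
sensors = effective_independence(phi, 30)   # indices of the retained locations
```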
Three-dimensional modeling of the cochlea by use of an arc fitting approach.
Schurzig, Daniel; Lexow, G Jakob; Majdani, Omid; Lenarz, Thomas; Rau, Thomas S
2016-12-01
A cochlea modeling approach is presented allowing for a user defined degree of geometry simplification which automatically adjusts to the patient specific anatomy. Model generation can be performed in a straightforward manner due to error estimation prior to the actual generation, thus minimizing modeling time. Therefore, the presented technique is well suited for a wide range of applications including finite element analyses where geometrical simplifications are often inevitable. The method is presented for n=5 cochleae which were segmented using custom software for increased accuracy. The linear basilar membrane cross sections are expanded to areas while the scalae contours are reconstructed by a predefined number of arc segments. Prior to model generation, geometrical errors are evaluated locally for each cross section as well as globally for the resulting models and their basal turn profiles. The final combination of all reconditioned features into a 3D volume is performed in Autodesk Inventor using the loft feature. Due to the volume generation being based on cubic splines, low errors could be achieved even for low numbers of arc segments and provided cross sections, both of which correspond to a strong degree of model simplification. Model generation could be performed in a time efficient manner. The proposed simplification method was proven to be well suited for the helical cochlea geometry. The generated output data can be imported into commercial software tools for various analyses, representing a time efficient way to create cochlea models optimally suited for the desired task.
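The arc fitting at the heart of the method reduces each cross-sectional contour to circular arc segments, with a local error check before model generation. A minimal sketch of an algebraic (Kasa) least-squares circle fit, with synthetic points standing in for a segmented scala contour:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solves 2ax + 2by + c = x^2 + y^2
    for center (a, b), then recovers the radius from c = r^2 - a^2 - b^2."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# Synthetic noisy arc standing in for one segmented contour.
rng = np.random.default_rng(3)
t = np.linspace(0.2, 1.8, 40)
x = 5 + 3 * np.cos(t) + rng.normal(0, 0.02, t.size)
y = -2 + 3 * np.sin(t) + rng.normal(0, 0.02, t.size)

a, b, r = fit_circle(x, y)
max_error = np.abs(np.hypot(x - a, y - b) - r).max()  # local error before generation
```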
Research of PV Power Generation MPPT based on GABP Neural Network
NASA Astrophysics Data System (ADS)
Su, Yu; Lin, Xianfu
2018-05-01
Photovoltaic power generation has become a main research direction in new energy power generation, but the high investment cost and low efficiency of the photovoltaic industry remain a concern. Maximum power point tracking (MPPT) for photovoltaic power generation has therefore been a popular research topic. To address the slow response, oscillation around the maximum power point, and low precision of conventional approaches, an algorithm based on a genetic algorithm combined with a BP neural network is designed in detail in this paper, and the modeling and simulation are completed in MATLAB/SIMULINK. The results show that the algorithm is effective and that the maximum power point can be tracked accurately and quickly.
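A minimal sketch of the GA+BP idea: a genetic algorithm searches the weights of a small neural network mapping operating conditions to the maximum power point (in GABP schemes the GA result typically seeds subsequent back-propagation training, omitted here; all data, rates, and network sizes below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy training data: (irradiance, temperature) -> voltage at the maximum power point.
X = rng.uniform([200, 10], [1000, 45], size=(200, 2))
y = 0.03 * X[:, 0] - 0.2 * X[:, 1] + 25          # stand-in for measured MPP voltages

def mlp(w, X):
    """Tiny 2-4-1 network; w packs all 17 weights and biases."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w):
    return -np.mean((mlp(w, X / X.max(0)) - y / y.max())**2)

# Plain genetic algorithm: truncation selection, blend crossover, Gaussian mutation.
pop = rng.normal(size=(60, 17))
for _ in range(100):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-30:]]
    children = 0.5 * parents[rng.integers(0, 30, 30)] + 0.5 * parents[rng.integers(0, 30, 30)]
    pop = np.vstack([parents, children + rng.normal(0, 0.1, children.shape)])

best = pop[np.argmax([fitness(w) for w in pop])]  # weights to seed BP training
```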
Forward modeling of gravity data using geostatistically generated subsurface density variations
Phelps, Geoffrey
2016-01-01
Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
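The forward calculation itself is a sum of per-cell contributions over the discretized subsurface. A coarse sketch treating each cell as a point mass (production implementations use prism formulas, and the density field below is one illustrative correlated realization, not a conditioned geostatistical simulation):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def forward_gravity(density, cell, stations, depth0=0.0):
    """Vertical gravity anomaly of a density grid (kg/m^3), each cell treated
    as a point mass at its center; returns mGal at each surface station."""
    nz, ny, nx = density.shape
    z, y, x = np.meshgrid(depth0 + (np.arange(nz) + 0.5) * cell,
                          (np.arange(ny) + 0.5) * cell,
                          (np.arange(nx) + 0.5) * cell, indexing='ij')
    mass = density * cell**3
    gz = []
    for sx, sy in stations:
        r2 = (x - sx)**2 + (y - sy)**2 + z**2
        gz.append(G * np.sum(mass * z / r2**1.5))
    return np.array(gz) * 1e5  # m/s^2 -> mGal

# One stochastic realization: background density plus random variations (stand-in).
rng = np.random.default_rng(6)
rho = 2670 + rng.normal(0, 50, size=(10, 20, 20))
anomaly = forward_gravity(rho, 100.0, stations=[(1000.0, 1000.0), (500.0, 1500.0)])
```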
FacetModeller: Software for manual creation, manipulation and analysis of 3D surface-based models
NASA Astrophysics Data System (ADS)
Lelièvre, Peter G.; Carter-McAuslan, Angela E.; Dunham, Michael W.; Jones, Drew J.; Nalepa, Mariella; Squires, Chelsea L.; Tycholiz, Cassandra J.; Vallée, Marc A.; Farquharson, Colin G.
2018-01-01
The creation of 3D models is commonplace in many disciplines. Models are often built from a collection of tessellated surfaces. To apply numerical methods to such models it is often necessary to generate a mesh of space-filling elements that conforms to the model surfaces. While there are meshing algorithms that can do so, they place restrictive requirements on the surface-based models that are rarely met by existing 3D model building software. Hence, we have developed a Java application named FacetModeller, designed for efficient manual creation, modification and analysis of 3D surface-based models destined for use in numerical modelling.
NASA Astrophysics Data System (ADS)
Singh, Nidhi; Chevé, Gwénaël; Ferguson, David M.; McCurdy, Christopher R.
2006-08-01
Combined ligand-based and target-based drug design approaches provide a synergistic advantage over either method individually. Therefore, we set out to develop a powerful virtual screening model to identify novel molecular scaffolds as potential leads for the human KOP (hKOP) receptor employing a combined approach. Utilizing a set of recently reported derivatives of salvinorin A, a structurally unique KOP receptor agonist, a pharmacophore model was developed that consisted of two hydrogen bond acceptor and three hydrophobic features. The model was cross-validated by randomizing the data using the CatScramble technique. Further validation was carried out using a test set that performed well in classifying active and inactive molecules correctly. Simultaneously, a bovine rhodopsin based "agonist-bound" hKOP receptor model was also generated. The model provided more accurate information about the putative binding site of salvinorin A based ligands. Several protein structure-checking programs were used to validate the model. In addition, this model was in agreement with the mutation experiments carried out on the KOP receptor. The predictive ability of the model was evaluated by docking a set of known KOP receptor agonists into the active site of this model. The docked scores correlated reasonably well with experimental pKi values. It is hypothesized that the integration of these two independently generated models would enable a swift and reliable identification of new lead compounds that could reduce the time and cost of hit finding within the drug discovery and development process, particularly in the case of GPCRs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masi, K; Ditman, M; Marsh, R
Purpose: There is potentially a wide variation in plan quality for a certain disease site, even for clinics located in the same system of hospitals. We have used a prostate-specific knowledge-based planning (KBP) model as a quality control tool to investigate the variation in prostate treatment planning across a network of affiliated radiation oncology departments. Methods: A previously created KBP model was applied to 10 patients each from 4 community-based clinics (Clinics A, B, C, and D). The KBP model was developed using RapidPlan (Eclipse v13.5, Varian Medical Systems) from 60 prostate/prostate bed IMRT plans that were originally planned using an in-house treatment planning system at the central institution of the community-based clinics. The dosimetric plan quality (target coverage and normal-tissue sparing) of each model-generated plan was compared to the respective clinically-used plan. Each community-based clinic utilized the same planning goals to develop the clinically-used plans that were used at the main institution. Results: Across all 4 clinics, the model-generated plans decreased the mean dose to the rectum by varying amounts (on average, 12.5, 2.6, 4.5, and 2.7 Gy for Clinics A, B, C, and D, respectively). The mean dose to the bladder also decreased with the model-generated plans (5.4, 2.3, 3.0, and 4.1 Gy, respectively). The KBP model also identified that target coverage (D95%) improvements were possible for Clinics A, B, and D (0.12, 1.65, and 2.75%), while target coverage decreased by 0.72% for Clinic C, demonstrating potentially different trade-offs made in clinical plans at different institutions. Conclusion: Quality control of dosimetric plan quality across a system of radiation oncology practices is possible with knowledge-based planning. By using a quality KBP model, smaller community-based clinics can potentially identify the areas of their treatment plans that may be improved, whether it be in normal-tissue sparing or improved target coverage. M. Matuszak has research funding for KBP from Varian Medical Systems.
Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David
2016-12-06
There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its different parameters, including vortex span and distance between the bird and the laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values mid-downstroke compared to independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead. This would also enable much needed meta studies of animal flight to derive bioinspired design principles for quasi-steady lift generation with flapping wings.
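For concreteness, the three quasi-steady estimates reduce to simple formulas. The sketch below evaluates them with invented but parrotlet-scale numbers; every value, including the ring-shedding time, is an assumption for illustration only:

```python
import numpy as np

rho = 1.225        # air density, kg m^-3
m, g = 0.030, 9.81 # bird mass (~30 g, assumed) and gravity
U = 1.0            # reference flight speed, m s^-1 (assumed)
Gamma = 0.12       # measured circulation, m^2 s^-1 (assumed)
b = 0.20           # vortex span, m (assumed)
R = 0.11           # actuator disk radius, m (assumed)
w = 1.5            # induced velocity through the disk, m s^-1 (assumed)
dt = 0.05          # time to shed one vortex ring, s (assumed)

L_kj = rho * U * Gamma * b                        # Kutta-Joukowski theorem
L_ring = rho * Gamma * np.pi * (b / 2)**2 / dt    # vortex ring: impulse rho*Gamma*A per dt
L_disk = 2 * rho * np.pi * R**2 * w**2            # actuator disk (momentum theory)

weight = m * g
support = {name: L / weight for name, L in
           [('Kutta-Joukowski', L_kj), ('vortex ring', L_ring), ('actuator disk', L_disk)]}
print(support)  # fraction of body weight each model attributes to the downstroke
```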
Faville, R A; Pullan, A J; Sanders, K M; Koh, S D; Lloyd, C M; Smith, N P
2009-06-17
Spontaneously rhythmic pacemaker activity produced by interstitial cells of Cajal (ICC) is the result of the entrainment of unitary potential depolarizations generated at intracellular sites termed pacemaker units. In this study, we present a mathematical modeling framework that quantitatively represents the transmembrane ion flows and intracellular Ca2+ dynamics from a single ICC operating over the physiological membrane potential range. The mathematical model presented here extends our recently developed biophysically based pacemaker unit modeling framework by including mechanisms necessary for coordinating unitary potential events, such as a T-Type Ca2+ current, Vm-dependent K+ currents, and global Ca2+ diffusion. Model simulations produce spontaneously rhythmic slow wave depolarizations with an amplitude of 65 mV at a frequency of 17.4 cpm. Our model predicts that activity at the spatial scale of the pacemaker unit is fundamental for ICC slow wave generation, and Ca2+ influx from activation of the T-Type Ca2+ current is required for unitary potential entrainment. These results suggest that intracellular Ca2+ levels, particularly in the region local to the mitochondria and endoplasmic reticulum, significantly influence pacing frequency and synchronization of pacemaker unit discharge. Moreover, numerical investigations show that our ICC model is capable of qualitatively replicating a wide range of experimental observations.
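The full biophysical model is extensive; as a structural stand-in only, the sketch below integrates a generic two-variable relaxation oscillator (FitzHugh-Nagumo, not the authors' equations) to show how a slow-wave frequency is extracted from a simulated membrane-potential-like trace:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic relaxation oscillator standing in for the biophysical ICC model
# (which couples a T-type Ca2+ current, K+ currents, and Ca2+ diffusion).
def slow_wave(t, state, a=0.7, b=0.8, tau=12.5, I=0.5):
    v, w = state
    return [v - v**3 / 3 - w + I, (v + a - b * w) / tau]

sol = solve_ivp(slow_wave, (0, 400), [-1.0, 1.0], max_step=0.5)
v = sol.y[0]

# Oscillation period from upward zero crossings of the fast variable.
crossings = np.flatnonzero((v[:-1] < 0) & (v[1:] >= 0))
period = np.diff(sol.t[crossings]).mean()
```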
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. Using an implemented example, it is illustrated how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
Zhang, Yiming; Jin, Quan; Wang, Shuting; Ren, Ren
2011-05-01
The mobile behavior of 1481 peptides in ion mobility spectrometry (IMS), which are generated by protease digestion of the Drosophila melanogaster proteome, is modeled and predicted based on two different types of characterization methods, i.e., a sequence-based approach and a structure-based approach. In this procedure, the sequence-based approach considers both the amino acid composition of a peptide and the local environment profile of each amino acid in the peptide; the structure-based approach is performed with the CODESSA protocol, which regards a peptide as a common organic compound and generates more than 200 statistically significant variables to characterize the whole structure profile of a peptide molecule. Subsequently, nonlinear support vector machine (SVM) and Gaussian process (GP) regression as well as linear partial least squares (PLS) regression are employed to correlate the structural parameters of the characterizations with the IMS drift times of these peptides. The obtained quantitative structure-spectrum relationship (QSSR) models are evaluated rigorously and investigated systematically via both one-deep and two-deep cross-validations as well as the rigorous Monte Carlo cross-validation (MCCV). We also give a comprehensive comparison of the resulting statistics arising from the different combinations of variable types and modeling methods, and find that the sequence-based approach gives QSSR models with better fitting ability and predictive power, but worse interpretability, than the structure-based approach. In addition, because the sequence-based approach does not require preparing energy-minimized peptide structures before modeling, it is considerably more efficient than the structure-based approach. Copyright © 2011 Elsevier Ltd. All rights reserved.
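A compact sketch of the regression-plus-Monte-Carlo-cross-validation workflow with scikit-learn (synthetic descriptors stand in for the sequence- or CODESSA-derived variables, and a generic SVR replaces the paper's specific SVM/GP/PLS configurations):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import ShuffleSplit, cross_val_score

rng = np.random.default_rng(11)

# Stand-in peptide descriptors vs. IMS drift times (all synthetic).
X = rng.random((300, 20))
y = X @ rng.random(20) + rng.normal(0, 0.05, 300)

# Monte Carlo cross-validation: many repeated random train/test splits.
mccv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
scores = cross_val_score(SVR(C=10.0, gamma='scale'), X, y, cv=mccv, scoring='r2')
print(scores.mean(), scores.std())   # predictive power and its spread across splits
```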
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shurupov, A. V.; Zavalova, V. E., E-mail: zavalova@fites.ru; Kozlov, A. V.
The report presents the results of the development and field testing of a mobile test facility based on a helical magnetic cumulative generator (MCGTF). The system is designed for full-scale modeling of lightning currents to study the safety of power plants of any type, including nuclear power plants. Advanced technologies of high-energy physics for solving both engineering and applied problems underlie this pilot project. The energy from the magnetic cumulative generator (MCG) is transferred to a high-impedance load with high efficiency of more than 50% using pulse transformer coupling. Modeling of the dynamics of the MCG operating in a circuit with lumped parameters allows one to apply the law of inductance output during operation of the MCG, thus providing the required front of the current pulse in the load without using any switches. The results of field testing of the MCGTF are presented for both the ground loop and the model load. The ground loop generates a load resistance of 2–4 Ω. In the tests, the ohmic resistance of the model load is 10 Ω. It is shown that the current pulse parameters recorded in the resistive-inductive load are close to the calculated values.
Comparison of the Battery Life of Nonrechargeable Generators for Deep Brain Stimulation.
Helmers, Ann-Kristin; Lübbing, Isabel; Deuschl, Günther; Witt, Karsten; Synowitz, Michael; Mehdorn, Hubertus Maximilian; Falk, Daniela
2017-11-03
Nonrechargeable deep brain stimulation (DBS) generators must be replaced when the battery capacity is exhausted. Battery life depends on many factors and differs between generator models. A new nonrechargeable generator model replaced the previous model in 2008. Our clinical impression is that the earlier model had a longer battery life than the new one. We conducted this study to substantiate this. We determined the battery life of every DBS generator that had been implanted between 2005 and 2012 in our department for the treatment of Parkinson's disease, and compared the battery lives of both devices. We calculated the current used by estimating the total electrical energy delivered (TEED) based on the stimulation parameters in use one year after electrode implantation. One hundred ninety-two patients were included in the study; 105 with the old and 86 with the new model generators. The mean battery life in the older model was significantly longer (5.44 ± 0.20 years) than that in the new model (4.44 ± 0.17 years) (p = 0.023). The mean TEED without impedance was 219.9 ± 121.5 mW * Ω in the older model and 145.1 ± 72.7 mW * Ω in the new one, which indicated significantly lower stimulation parameters in the new model (p = 0.00038). The battery life of the new model was significantly shorter than that of the previous model. A lower battery capacity is the most likely reason, since current consumption was similar in both groups. © 2017 International Neuromodulation Society.
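The TEED comparison follows the commonly used formula TEED = V² × f × pulse width / impedance (per second of stimulation); the mW·Ω figures reported above are this quantity with the impedance division left out. A worked example with assumed stimulation settings, not patient data:

```python
# All stimulation settings below are assumed examples.
V = 3.0        # amplitude, volts
f = 130.0      # frequency, Hz
pw = 90e-6     # pulse width, seconds
Z = 1000.0     # electrode impedance, ohms

teed_no_z = V**2 * f * pw   # W*Ohm: 9 * 130 * 9e-5 = 0.10530, i.e. ~105 mW*Ohm,
                            # the same order as the cohort means of 145.1-219.9 mW*Ohm
teed = teed_no_z / Z        # watts actually delivered per second of stimulation
print(f"{teed_no_z * 1e3:.1f} mW*Ohm, {teed * 1e6:.1f} microwatts")
```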
Triangle Geometry Processing for Surface Modeling and Cartesian Grid Generation
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J. (Inventor); Melton, John E. (Inventor); Berger, Marsha J. (Inventor)
2002-01-01
Cartesian mesh generation is accomplished for component based geometries, by intersecting components subject to mesh generation to extract wetted surfaces with a geometry engine using adaptive precision arithmetic in a system which automatically breaks ties with respect to geometric degeneracies. During volume mesh generation, intersected surface triangulations are received to enable mesh generation with cell division of an initially coarse grid. The hexahedral cells are resolved, preserving the ability to directionally divide cells which are locally well aligned.
New insights into insect's silent flight. Part II: sound source and noise control
NASA Astrophysics Data System (ADS)
Xue, Qian; Geng, Biao; Zheng, Xudong; Liu, Geng; Dong, Haibo
2016-11-01
The flapping flight of aerial animals has excellent aerodynamic performance but meanwhile generates low noise. In this study, the unsteady flow and acoustic characteristics of the flapping wing are numerically investigated for three-dimensional (3D) models of Tibicen linnei cicada at free forward flight conditions. A single cicada wing is modelled as a membrane with prescribed motion reconstructed by Wan et al. (2015). The flow field and acoustic field around the flapping wing are solved with an immersed-boundary-method based incompressible flow solver and a linearized-perturbed-compressible-equations based acoustic solver. The 3D simulation allows examination of both the directivity and frequency composition of the produced sound in full space. The mechanism of sound generation of the flapping wing is analyzed through correlations between acoustic signals and flow features. Along with the flexible wing model, a rigid wing model is also simulated. The results from these two cases will be compared to investigate the effects of wing flexibility on sound generation. This study is supported by NSF CBET-1313217 and AFOSR FA9550-12-1-0071.
NASA Technical Reports Server (NTRS)
Kirkman, K. L.; Brown, C. E.; Goodman, A.
1973-01-01
The effectiveness of various candidate aircraft-wing devices for attenuation of trailing vortices generated by large aircraft is evaluated on the basis of results of experiments conducted with a 0.03-scale model of a Boeing 747 transport aircraft using a technique developed at the HYDRONAUTICS Ship Model Basin. Emphasis is on the effects produced by these devices in the far-field (up to 8 kilometers downstream of the full-scale generating aircraft) where the unaltered vortex-wakes could still be hazardous to small following aircraft. The evaluation is based primarily on quantitative measurements of the respective vortex velocity distributions made by means of hot-film probe traverses in a transverse plane at selected stations downstream. The effects of these altered wakes on the rolling moment induced on a small following aircraft are also studied using a modified lifting-surface theory with a synthesized Gates Learjet as a typical example. Lift and drag measurements concurrently obtained in the model tests are used to appraise the effects of each device investigated on the performance characteristics of the generating aircraft.
An improved simulation based biomechanical model to estimate static muscle loadings
NASA Technical Reports Server (NTRS)
Rajulu, Sudhakar L.; Marras, William S.; Woolford, Barbara
1991-01-01
The objectives of this study are to show that the characteristics of an intact muscle are different from those of an isolated muscle and to describe a simulation based model. This model, unlike the optimization based models, accounts for the redundancy in the musculoskeletal system in predicting the amount of forces generated within a muscle. The results of this study show that the loading of the primary muscle is increased by the presence of other muscle activities. Hence, the previous models based on optimization techniques may underestimate the severity of the muscle and joint loadings which occur during manual material handling tasks.
LIMEPY: Lowered Isothermal Model Explorer in PYthon
NASA Astrophysics Data System (ADS)
Gieles, Mark; Zocchi, Alice
2017-10-01
LIMEPY solves distribution function (DF) based lowered isothermal models. It solves Poisson's equation based on input parameters and offers fast solutions for isotropic/anisotropic, single/multi-mass models, normalized DF values, density and velocity moments, projected properties, and generates discrete samples.
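A minimal usage sketch, assuming the pip-installable `limepy` package interface; argument and attribute names should be checked against the package documentation:

```python
from limepy import limepy

# King-like lowered isothermal model: dimensionless central potential W0 = 7,
# truncation parameter g = 1, scaled to M = 1e5 Msun and half-mass radius 3 pc.
model = limepy(7.0, 1.0, M=1e5, rh=3.0)

print(model.rt)   # truncation (tidal) radius of the solved model
# model.r, model.rho, model.v2 hold the radial grid, density, and
# mean-square velocity profiles; sampling utilities generate discrete stars.
```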
Integrated PK-PD and agent-based modeling in oncology.
Wang, Zhihui; Butner, Joseph D; Cristini, Vittorio; Deisboeck, Thomas S
2015-04-01
Mathematical modeling has become a valuable tool that strives to complement conventional biomedical research modalities in order to predict experimental outcome, generate new medical hypotheses, and optimize clinical therapies. Two specific approaches, pharmacokinetic-pharmacodynamic (PK-PD) modeling, and agent-based modeling (ABM), have been widely applied in cancer research. While they have made important contributions on their own (e.g., PK-PD in examining chemotherapy drug efficacy and resistance, and ABM in describing and predicting tumor growth and metastasis), only a few groups have started to combine both approaches together in an effort to gain more insights into the details of drug dynamics and the resulting impact on tumor growth. In this review, we focus our discussion on some of the most recent modeling studies building on a combined PK-PD and ABM approach that have generated experimentally testable hypotheses. Some future directions are also discussed.
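A toy illustration of the coupling (all rates, potencies, and the dosing schedule are invented): a one-compartment PK model supplies the drug concentration that a simple agent-based tumor layer responds to at each hourly step:

```python
import numpy as np

rng = np.random.default_rng(7)

k_el, dose_times, dose = 0.1, {0, 48, 96}, 5.0   # elimination rate (1/h), dosing
div_p, kill_ec50 = 0.02, 2.0                     # hourly division prob., potency

C, cells, history = 0.0, 100, []
for t in range(168):                              # one week, hourly steps
    C += dose if t in dose_times else 0.0
    C *= np.exp(-k_el)                            # PK: first-order elimination
    kill_p = 0.05 * C / (kill_ec50 + C)           # PD: Emax-type kill probability
    born = rng.binomial(cells, div_p)             # ABM: stochastic division...
    dead = rng.binomial(cells, kill_p)            # ...and drug-induced death per cell
    cells = max(cells + born - dead, 0)
    history.append((t, C, cells))
```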
A model for simulating random atmospheres as a function of latitude, season, and time
NASA Technical Reports Server (NTRS)
Campbell, J. W.
1977-01-01
An empirical stochastic computer model was developed with the capability of generating random thermodynamic profiles of the atmosphere below an altitude of 99 km which are characteristic of any given season, latitude, and time of day. Samples of temperature, density, and pressure profiles generated by the model are statistically similar to measured profiles in a data base of over 6000 rocket and high-altitude atmospheric soundings; that is, means and standard deviations of modeled profiles and their vertical gradients are in close agreement with data. Model-generated samples can be used for Monte Carlo simulations of aircraft or spacecraft trajectories to predict or account for the effects on a vehicle's performance of atmospheric variability. Other potential uses for the model are in simulating pollutant dispersion patterns, variations in sound propagation, and other phenomena which are dependent on atmospheric properties, and in developing data-reduction software for satellite monitoring systems.
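The essence of such a generator is sampling around season- and latitude-dependent mean profiles with realistic vertical correlation. A minimal sketch; the mean profile, standard deviations, and AR(1) correlation below are crude assumptions, not the model's fitted statistics:

```python
import numpy as np

rng = np.random.default_rng(8)

alt = np.arange(0.0, 99.0, 1.0)                          # altitude grid, km
T_mean = np.where(alt < 11, 288.15 - 6.5 * alt, 216.65)  # crude mean temperature, K
T_std = 3.0 + 0.05 * alt                                 # assumed variability profile

# AR(1) perturbation: vertically correlated noise keeps gradients realistic,
# unlike independent draws at every altitude level.
phi = 0.9
eps = rng.normal(size=alt.size)
pert = np.empty_like(eps)
pert[0] = eps[0]
for i in range(1, alt.size):
    pert[i] = phi * pert[i - 1] + np.sqrt(1 - phi**2) * eps[i]

T_sample = T_mean + T_std * pert   # one Monte Carlo temperature profile
```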
Experience With Bayesian Image Based Surface Modeling
NASA Technical Reports Server (NTRS)
Stutz, John C.
2005-01-01
Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional stereo surface recovery techniques. But it offers the possibility of using any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information, to generate a single surface model that fuses all available information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.
Booth, James F; Naud, Catherine M; Willison, Jeff
2018-03-01
The representation of extratropical cyclones (ETCs) precipitation in general circulation models (GCMs) and a weather research and forecasting (WRF) model is analyzed. This work considers the link between ETC precipitation and dynamical strength and tests if parameterized convection affects this link for ETCs in the North Atlantic Basin. Lagrangian cyclone tracks of ETCs in ERA-Interim reanalysis (ERAI), the GISS and GFDL CMIP5 models, and WRF with two horizontal resolutions are utilized in a compositing analysis. The 20-km resolution WRF model generates stronger ETCs based on surface wind speed and cyclone precipitation. The GCMs and ERAI generate similar composite means and distributions for cyclone precipitation rates, but GCMs generate weaker cyclone surface winds than ERAI. The amount of cyclone precipitation generated by the convection scheme differs significantly across the datasets, with GISS generating the most, followed by ERAI and then GFDL. The models and reanalysis generate relatively more parameterized convective precipitation when the total cyclone-averaged precipitation is smaller. This is partially due to the contribution of parameterized convective precipitation occurring more often late in the ETC life cycle. For reanalysis and models, precipitation increases with both cyclone moisture and surface wind speed, and this is true if the contribution from the parameterized convection scheme is larger or not. This work shows that these different models generate similar total ETC precipitation despite large differences in the parameterized convection, and these differences do not cause unexpected behavior in ETC precipitation sensitivity to cyclone moisture or surface wind speed.
Simulating the Effects of Cross-Generational Cultural Transmission on Language Change
NASA Astrophysics Data System (ADS)
Gong, Tao; Shuai, Lan
Language evolves in a socio-cultural environment. Apart from biological evolution and individual learning, cultural transmission also casts important influence on many aspects of language evolution. In this paper, based on the lexicon-syntax coevolution model, we extend the acquisition framework in our previous work to examine the roles of three forms of cultural transmission spanning the offspring, parent, and grandparent generations in language change. These transmissions are: those between the parent and offspring generations (PO), those within the offspring generation (OO), and those between the grandparent and offspring generations (GO). The simulation results of the considered model and relevant analyses illustrate not only the necessity of PO and OO transmissions for language change, thus echoing our previous findings, but also the importance of GO transmission, a form of cross-generational cultural transmission, on preserving the mutual understandability of the communal language across generations of individuals.
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
An Automatic and Robust Algorithm of Reestablishment of Digital Dental Occlusion
Chang, Yu-Bing; Xia, James J.; Gateno, Jaime; Xiong, Zixiang; Zhou, Xiaobo; Wong, Stephen T. C.
2017-01-01
In the field of craniomaxillofacial (CMF) surgery, surgical planning can be performed on composite 3-D models that are generated by merging a computerized tomography scan with digital dental models. Digital dental models can be generated by scanning the surfaces of plaster dental models or dental impressions with a high-resolution laser scanner. During the planning process, one of the essential steps is to reestablish the dental occlusion. Unfortunately, this task is time-consuming and often inaccurate. This paper presents a new approach to automatically and efficiently reestablish dental occlusion. It includes two steps. The first step is to initially position the models based on dental curves and a point matching technique. The second step is to reposition the models to the final desired occlusion based on iterative surface-based minimum distance mapping with collision constraints. With linearization of the rotation matrix, the alignment is modeled as a quadratic programming problem. The simulation was completed on 12 sets of digital dental models. Two sets of dental models were partially edentulous, and another two sets had first premolar extractions for orthodontic treatment. Two validation methods were applied to the articulated models. The results show that using our method, the dental models can be successfully articulated with a small degree of deviation from the occlusion achieved with the gold-standard method. PMID:20529735
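The key step, linearizing the rotation so that each repositioning update becomes a quadratic problem, can be sketched as follows. Ordinary least squares stands in for the paper's constrained quadratic program, the collision constraints are omitted, and the point correspondences are synthetic:

```python
import numpy as np

def linearized_align(P, Q):
    """One Gauss-Newton step of rigid alignment: with a small-angle rotation
    R ~ I + [w]_x, minimizing ||P + w x P + t - Q||^2 is linear in (w, t)."""
    n = len(P)
    A = np.zeros((3 * n, 6))
    for i, p in enumerate(P):
        A[3*i:3*i+3, :3] = np.array([[0, p[2], -p[1]],
                                     [-p[2], 0, p[0]],
                                     [p[1], -p[0], 0]])   # maps w to w x p
        A[3*i:3*i+3, 3:] = np.eye(3)
    x, *_ = np.linalg.lstsq(A, (Q - P).ravel(), rcond=None)
    return x[:3], x[3:]                                    # rotation vector, translation

# Toy pairs standing in for closest-point matches on the two dental surfaces.
rng = np.random.default_rng(9)
P = rng.random((50, 3))
w_true, t_true = np.array([0.01, -0.02, 0.015]), np.array([0.1, 0.0, -0.05])
Q = P + np.cross(w_true, P) + t_true
w, t = linearized_align(P, Q)   # recovers w_true, t_true for this small motion
```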
Unifying Model-Based and Reactive Programming within a Model-Based Executive
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)
1999-01-01
Real-time, model-based, deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.
Phenomenological modeling of nonlinear holograms based on metallic geometric metasurfaces.
Ye, Weimin; Li, Xin; Liu, Juan; Zhang, Shuang
2016-10-31
Benefiting from efficient local phase and amplitude control at the subwavelength scale, metasurfaces offer a new platform for computer generated holography with high spatial resolution. Three-dimensional and highly efficient holograms have been realized by metasurfaces constituted by subwavelength meta-atoms with spatially varying geometries or orientations. Metasurfaces have been recently extended to the nonlinear optical regime to generate holographic images in harmonic generation waves. Thus far, there has been no vector field simulation of nonlinear metasurface holograms because of the tremendous computational challenge in numerically calculating the collective nonlinear responses of the large number of different subwavelength meta-atoms in a hologram. Here, we propose a general phenomenological method to model nonlinear metasurface holograms based on the assumption that every meta-atom could be described by a localized nonlinear polarizability tensor. Applied to geometric nonlinear metasurfaces, we numerically model the holographic images formed by the second-harmonic waves of different spins. We show that, in contrast to the metasurface holograms operating in the linear optical regime, the wavelength of incident fundamental light should be slightly detuned from the fundamental resonant wavelength to optimize the efficiency and quality of nonlinear holographic images. The proposed modeling provides a general method to simulate nonlinear optical devices based on metallic metasurfaces.
NASA Astrophysics Data System (ADS)
Ostiguy, Pierre-Claude; Quaegebeur, Nicolas; Masson, Patrice
2014-03-01
In this study, a correlation-based imaging technique called "Excitelet" is used to monitor an aerospace grade aluminum plate, representative of an aircraft component. The principle is based on ultrasonic guided wave generation and sensing using three piezoceramic (PZT) transducers, and measurement of reflections induced by potential defects. The method uses a propagation model to correlate measured signals with a bank of signals, and imaging is performed using a round-robin procedure (Full-Matrix Capture). The formulation compares two models for the complex transducer dynamics: one where the shear stress at the tip of the PZT is considered to vary as a function of the frequency generated, and one where the PZT is discretized in order to consider the shear distribution under the PZT. This method allows taking into account the transducer dynamics and finite dimensions, the multi-modal and dispersive characteristics of the material, and complex interactions between guided waves and damage. Experimental validation has been conducted on an aerospace grade aluminum joint instrumented with three circular PZTs of 10 mm diameter. A magnet, acting as a reflector, is used in order to simulate a local reflection in the structure. It is demonstrated that the defect can be accurately detected and localized. The two models proposed are compared to the classical pin-force model, using narrow and broad-band excitations. The results demonstrate the potential of the proposed imaging techniques for damage monitoring of aerospace structures considering improved models for guided wave generation and propagation.
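A stripped-down sketch of the correlation-imaging idea on a full-matrix capture: for each grid point, a propagation model predicts the echo each emitter-receiver pair would see from a reflector there, and the image value is the correlation between prediction and measurement. A single non-dispersive mode with one group velocity stands in for the dispersive multimodal signal bank, the transducer dynamics models are omitted, and all geometry and signals are synthetic:

```python
import numpy as np

fs, cg = 1e6, 5400.0                       # sampling rate (Hz), group velocity (m/s)
t = np.arange(0, 200e-6, 1 / fs)
burst = np.sin(2 * np.pi * 300e3 * t[:80]) * np.hanning(80)

pzt = np.array([[0.0, 0.0], [0.3, 0.0], [0.15, 0.25]])   # transducer positions (m)
defect = np.array([0.22, 0.12])                           # simulated reflector

def tof(a, b, p):                          # emitter a -> point p -> receiver b
    return (np.linalg.norm(p - a) + np.linalg.norm(p - b)) / cg

# Synthetic round-robin measurements: each pair records the defect echo.
signals = {}
for i, a in enumerate(pzt):
    for j, b in enumerate(pzt):
        s = np.zeros_like(t)
        k = int(tof(a, b, defect) * fs)
        s[k:k + 80] += burst
        signals[i, j] = s

# Image: correlate each measured signal with the model-predicted echo per pixel.
xs, ys = np.linspace(0, 0.3, 60), np.linspace(0, 0.25, 50)
img = np.zeros((ys.size, xs.size))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        p = np.array([x, y])
        for (i, j), s in signals.items():
            k = int(tof(pzt[i], pzt[j], p) * fs)
            if k + 80 <= s.size:
                img[iy, ix] += s[k:k + 80] @ burst   # correlation at predicted delay

peak = np.unravel_index(img.argmax(), img.shape)     # brightest pixel ~ defect location
```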
NASA Astrophysics Data System (ADS)
Arya, Sabha Raj; Patel, Ashish; Giri, Ashutosh
2018-06-01
This paper deals with a wind energy based power generation system using a Permanent Magnet Synchronous Generator (PMSG). It is controlled using an advanced enhanced phase-lock loop for power quality features, with a distribution static compensator used to eliminate harmonics, provide kVAR compensation, and balance loads. It also maintains rated potential at the point of common interface under linear and non-linear loads. In order to achieve better efficiency and reliable operation of a PMSG driven by a wind turbine, it is necessary to analyze the governing equations of the wind turbine and the PMSG under fixed and variable wind speed. For handling power quality problems, a power electronics based shunt connected custom power device is used in a three wire system. Simulations in the MATLAB/Simulink environment have been carried out to demonstrate the model and the control approach used for power quality enhancement. The performance results show the adequate performance of the PMSG based power generation system and control algorithm.
A two-stage flow-based intrusion detection model for next-generation networks.
Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin
2018-01-01
The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
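A compact sketch of the two stages using scikit-learn and the MiniSom package (synthetic flow features; `minisom` is a stand-in SOM implementation, and all hyperparameters are illustrative):

```python
import numpy as np
from sklearn.svm import OneClassSVM
from minisom import MiniSom   # pip install minisom

rng = np.random.default_rng(10)

# Stand-in flow-record features (duration, bytes, packets, ...), z-scored.
normal = rng.normal(0, 1, (2000, 6))
attacks = rng.normal(3, 1, (200, 6))

# Stage 1: a one-class SVM trained on normal flows flags outliers as malicious.
ocsvm = OneClassSVM(nu=0.05, kernel='rbf', gamma='scale').fit(normal)
flows = np.vstack([normal[:100], attacks])
malicious = flows[ocsvm.predict(flows) == -1]

# Stage 2: a self-organizing map groups the flagged flows into alert clusters.
som = MiniSom(4, 4, malicious.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(malicious, 500)
clusters = [som.winner(f) for f in malicious]   # (row, col) alert cell per flow
```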
Higley, Debra K.
2013-01-01
The Upper Devonian and Lower Mississippian Woodford Shale is an important petroleum source rock for Mississippian reservoirs in the Anadarko Basin Province of Oklahoma, Kansas, Texas, and Colorado, based on results from a 4D petroleum system model of the basin. The Woodford Shale underlies Mississippian strata over most of the Anadarko Basin portions of Oklahoma and northeastern Texas. The Kansas and Colorado portions of the province are almost entirely thermally immature for oil generation from the Woodford Shale or potential Mississippian source rocks, based mainly on measured vitrinite reflectance and modeled thermal maturation. Thermal maturities of the Woodford Shale range from mature for oil to overmature for gas generation at present-day depths of about 5,000 to 20,000 ft. Oil generation began at burial depths of about 6,000 to 6,500 ft. Modeled onset of Woodford Shale oil generation was about 330 million years ago (Ma); peak oil generation was from 300 to 220 Ma. Mississippian production, including horizontal wells of the informal Mississippi limestone, is concentrated within and north of the Sooner Trend area in the northeast Oklahoma portion of the basin. This large pod of oil and gas production is within the area modeled as thermally mature for oil generation from the Woodford Shale. The southern boundary of the trend approximates the 99% transformation ratio of the Woodford Shale, which marks the end of oil generation. Because most of the Sooner Trend area is thermally mature for oil generation from the Woodford Shale, the trend probably includes short- and longer-distance vertical and lateral migration. The Woodford Shale is absent in the Mocane-Laverne Field area of the eastern Oklahoma panhandle; because of this, associated oil migrated from the south into the field. If the Springer Formation or deeper Mississippian strata generated oil, then the southern field area is within the oil window for associated petroleum source rocks. Mississippian fields along the western boundary of the study area were supplied by oil that flowed northward from the Panhandle Field area and westward from the deep basin.
Multi-Level Building Reconstruction for Automatic Enhancement of High Resolution Dsms
NASA Astrophysics Data System (ADS)
Arefi, H.; Reinartz, P.
2012-07-01
In this article a multi-level approach is proposed for reconstruction-based improvement of high resolution Digital Surface Models (DSMs). The concept of Levels of Detail (LOD) defined by the CityGML standard has been considered as the basis for the abstraction levels of building roof structures. Here, LOD1 and LOD2, which correspond to prismatic and parametric roof shapes, are reconstructed. Besides proposing a new approach for automatic LOD1 and LOD2 generation from high resolution DSMs, the algorithm contains two generalization levels, namely horizontal and vertical. Both generalization levels are applied to the prismatic model of buildings. The horizontal generalization allows controlling the approximation level of building footprints, similar to the cartographic generalization concept used for urban maps. In vertical generalization, the prismatic model is formed using an individual building height and is extended to include all flat structures located at different height levels. The concept of LOD1 generation is based on approximation of the building footprints into rectangular or non-rectangular polygons. For a rectangular building containing one main orientation, a method based on the Minimum Bounding Rectangle (MBR) is employed. In contrast, a Combined Minimum Bounding Rectangle (CMBR) approach is proposed for regularization of non-rectilinear polygons, i.e. buildings without perpendicular edge directions. Both MBR- and CMBR-based approaches are iteratively applied to building segments to reduce the original building footprints to a minimum number of nodes with maximum similarity to the original shapes. A model driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed for LOD2 generation. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines. The 3D model is derived for each building part and finally, a complete parametric model is formed by merging all the 3D models of the individual parts and adjusting the nodes after the merging step. In order to provide an enhanced DSM, a surface model is provided for each building by interpolation of the internal points of the generated models. All interpolated models are situated on a Digital Terrain Model (DTM) of the corresponding area to shape the enhanced DSM. The proposed DSM enhancement approach has been tested on a dataset from the Munich central area. The original DSM is created using robust stereo matching of Worldview-2 stereo images. A quantitative assessment of the new DSM by comparing the heights of the ridges and eaves shows a standard deviation of better than 50 cm.
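The MBR approximation step is easy to prototype with shapely, which exposes a minimum rotated bounding rectangle directly; a synthetic L-shaped footprint shows why non-rectilinear buildings need the iterative per-segment CMBR treatment:

```python
from shapely.geometry import Polygon

# Synthetic L-shaped footprint with noisy vertices.
footprint = Polygon([(0, 0), (10.2, 0.3), (10.0, 6.0), (4.9, 6.2),
                     (5.0, 12.0), (-0.3, 11.8)])

mbr = footprint.minimum_rotated_rectangle   # shapely >= 1.6
coverage = footprint.area / mbr.area        # low coverage flags a non-rectangular shape
# An MBR/CMBR pipeline would split such a footprint into parts and
# re-approximate each part until the residual area error is acceptable.
```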
Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe
2013-01-01
Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
NASA Astrophysics Data System (ADS)
Akbardin, J.; Parikesit, D.; Riyanto, B.; TMulyono, A.
2018-05-01
Zones that produce land fishery commodities have limited distribution capability because of the available infrastructure conditions. High demand for fishery commodities has led to distribution over inefficient distances. Developing gravity theory with a constraint on the movement generation of the production zone can increase inter-zone interaction effectively and efficiently through shorter distribution distances. A multiple-variable regression analysis of transportation infrastructure conditions, based on service level and quantitative capacity, is used to estimate the 'mass' of movement generation. The resulting movement distribution (Tid) model has the equation Tid = 27.04 - 0.49 tid, based on a power-model barrier (deterrence) function with calibration value β = 0.0496. Developing the movement generation 'mass' boundary of the production zone shortens the distribution distance effectively; shorter distribution distances in turn increase inter-zone accessibility, allowing zones to interact according to the magnitude of their movement generation 'mass'.
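The distribution model described, a gravity model with power deterrence function f(c) = c^(-β) and the calibrated β = 0.0496, can be sketched as a doubly constrained gravity model balanced by Furness iteration; the zone totals and distance matrix below are invented for illustration:

```python
import numpy as np

def gravity_distribution(O, D, cost, beta=0.0496, iters=50):
    """Doubly constrained gravity model T_id = A_i O_i B_d D_d f(c_id)
    with power deterrence f(c) = c**(-beta); A, B found by Furness iteration."""
    f = cost ** (-beta)
    A = np.ones(len(O))
    for _ in range(iters):
        B = 1.0 / (f.T @ (A * O))   # destination balancing factors
        A = 1.0 / (f @ (B * D))     # origin balancing factors
    return np.outer(A * O, B * D) * f

# Illustrative production (origin) and consumption (destination) totals,
# with an inter-zone distance matrix in km (all numbers assumed).
O = np.array([120.0, 80.0, 60.0])
D = np.array([100.0, 90.0, 70.0])
cost = np.array([[5.0, 20.0, 35.0],
                 [18.0, 6.0, 22.0],
                 [40.0, 25.0, 8.0]])

T = gravity_distribution(O, D, cost)   # movement matrix; rows sum to O, columns to D
```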