Adaptive Modeling of Details for Physically-Based Sound Synthesis and Propagation
2015-03-21
the interface that ensures the consistency and validity of the solution given by the two methods. Transfer functions are used to model two-way... Keywords: applied sciences, adaptive modeling, physically-based, sound synthesis, propagation, virtual world
Simulation tools for particle-based reaction-diffusion dynamics in continuous space
2014-01-01
Particle-based reaction-diffusion algorithms facilitate the modeling of the diffusional motion of individual molecules and the reactions between them in cellular environments. A physically realistic model, depending on the system at hand and the questions asked, would require different levels of modeling detail such as particle diffusion, geometrical confinement, particle volume exclusion or particle-particle interaction potentials. Higher levels of detail usually correspond to an increased number of parameters and higher computational cost. Certain systems, however, require these investments to be modeled adequately. Here we present a review of the current field of particle-based reaction-diffusion software packages operating on continuous space. Four nested levels of modeling detail are identified that capture increasing amounts of detail. Their applicability to different biological questions is discussed, ranging from straightforward diffusion simulations to sophisticated and expensive models that bridge towards coarse-grained molecular dynamics. PMID:25737778
Citygml and the Streets of New York - a Proposal for Detailed Street Space Modelling
NASA Astrophysics Data System (ADS)
Beil, C.; Kolbe, T. H.
2017-10-01
Three-dimensional semantic city models are increasingly used for the analysis of large urban areas. Until now the focus has mostly been on buildings. Nonetheless, many applications could also benefit from detailed models of public street space for further analysis. However, there are only a few guidelines for representing roads within city models. Therefore, related standards dealing with street modelling are examined and discussed. Nearly all street representations are based on linear abstractions. However, there are many use cases that require or would benefit from a detailed geometrical and semantic representation of street space. A variety of potential applications for detailed street space models are presented. Subsequently, based on related standards as well as on user requirements, a concept for a CityGML-compliant representation of street space in multiple levels of detail is developed. In the course of this process, the CityGML Transportation model of the currently valid OGC standard CityGML 2.0 is examined to discover possibilities for further developments, and a number of improvements are presented. Finally, based on open data sources, the proposed concept is implemented within a semantic 3D city model of New York City, generating a detailed 3D street space model for the entire city. As a result, 11 thematic classes, such as roadbeds, sidewalks or traffic islands, are generated and enriched with a large number of thematic attributes.
Between simplicity and accuracy: Effect of adding modeling details on quarter vehicle model accuracy
Soong, Ming Foong; Ramli, Rahizar; Saifizul, Ahmad
2017-01-01
The quarter vehicle model is the simplest representation of a vehicle among lumped-mass vehicle models. It is widely used in vehicle and suspension analyses, particularly those related to ride dynamics. However, despite its common adoption, it is also commonly accepted, without quantification, that this model is less accurate than many higher-degree-of-freedom models because of its simplicity and limited degrees of freedom. This study investigates the trade-off between simplicity and accuracy within the context of the quarter vehicle model by determining the effect of adding various modeling details on model accuracy. In the study, road input detail, tire detail, suspension stiffness detail and suspension damping detail were factored in, and several enhanced models were compared to the base model to assess the significance of these details. The results clearly indicated that these details do affect the simulated vehicle response, but to varying extents. In particular, road input detail and suspension damping detail are the most significant and are worth adding to the quarter vehicle model, as their inclusion changed the response quite fundamentally. Overall, when it comes to lumped-mass vehicle modeling, it is reasonable to say that model accuracy depends not just on the number of degrees of freedom employed, but also on the contributions of various modeling details. PMID:28617819
NASA Astrophysics Data System (ADS)
Leskiw, Donald M.; Zhau, Junmei
2000-06-01
This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
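To make the idea concrete, the sketch below shows a standard Kalman measurement update used in the way the abstract describes: a detailed state estimate is reconciled with a value reported at a coarser level of detail through a measurement model H. The three-state model, matrices and numbers are illustrative assumptions, not taken from the paper.

import numpy as np

# Detailed state x = [position, velocity, acceleration]; a coarser level of
# detail reports position only, expressed by the measurement model H.
H = np.array([[1.0, 0.0, 0.0]])      # transformation: detailed -> coarse (assumed)
P = np.diag([1.0, 0.5, 0.25])        # covariance of the detailed state (model accuracy)
R = np.array([[0.1]])                # accuracy attributed to the coarse value

x = np.array([2.0, 1.0, 0.1])        # detailed state estimate
z = np.array([2.3])                  # value reported at the coarse level

S = H @ P @ H.T + R                  # innovation covariance
K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
x = x + K @ (z - H @ x)              # update: both abstractions made consistent
P = (np.eye(3) - K @ H) @ P
print(x, np.diag(P))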
Development and testing of a fast conceptual river water quality model.
Keupers, Ingrid; Willems, Patrick
2017-04-15
Modern, model-based river quality management relies strongly on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach demands too much computation time, especially when simulating the long periods needed for statistical analysis of the results, or when model sensitivity analysis, calibration and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as Plug Flow Reactors (PFR) and Continuously Stirred Tank Reactors (CSTR) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows for a much faster calculation time (a factor of 10^5) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs.
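As a minimal illustration of the reservoir idea, the sketch below (Python; all parameters are invented for illustration) chains a plug-flow reactor, modelled as a pure advective delay, with a continuously stirred tank reactor, modelled as first-order mixing, and adds a first-order decay as a stand-in for the water quality transformation processes.

import numpy as np

dt = 60.0                # time step [s]
tau_pfr = 1800.0         # residence time of the plug-flow section [s] (assumed)
tau_cstr = 3600.0        # mixing time of the stirred tank [s] (assumed)
k_decay = 1e-5           # first-order decay rate [1/s] (assumed)

inflow = np.concatenate([np.full(60, 5.0), np.full(540, 1.0)])  # upstream concentration
n_delay = int(tau_pfr / dt)
buf = [inflow[0]] * n_delay          # PFR represented as a delay line
c = inflow[0]                        # CSTR state
out = []
for c_in in inflow:
    buf.append(c_in)
    c_pfr = buf.pop(0)               # water leaving the plug-flow section
    c += dt * ((c_pfr - c) / tau_cstr - k_decay * c)   # CSTR balance, explicit Euler
    out.append(c)
print(out[-1])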
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the missing details and limited performance of colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and then a multi-sparse dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC) is proposed based on this framework. The algorithm achieves a natural colorized effect for a gray-scale image, consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of visual gray-scale images, but can also be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.
Min, Yul Ha; Park, Hyeoun-Ae; Chung, Eunja; Lee, Hyunsook
2013-12-01
The purpose of this paper is to describe the components of a next-generation electronic nursing records system ensuring full semantic interoperability and integrating evidence into the nursing records system. A next-generation electronic nursing records system based on detailed clinical models and clinical practice guidelines was developed at Seoul National University Bundang Hospital in 2013. This system has two components, a terminology server and a nursing documentation system. The terminology server manages nursing narratives generated from entity-attribute-value triplets of detailed clinical models using a natural language generation system. The nursing documentation system provides nurses with a set of nursing narratives arranged around the recommendations extracted from clinical practice guidelines. An electronic nursing records system based on detailed clinical models and clinical practice guidelines was successfully implemented in a hospital in Korea. The next-generation electronic nursing records system can support nursing practice and nursing documentation, which in turn will improve data quality.
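A toy sketch of the entity-attribute-value idea behind such a terminology server is given below (Python); the model name, attribute and narrative template are invented for illustration and do not come from the paper.

from dataclasses import dataclass

@dataclass
class Triplet:
    entity: str      # detailed clinical model, e.g. "Pain"
    attribute: str   # e.g. "intensity"
    value: str       # e.g. "7/10 on the numeric rating scale"

# A (hypothetical) template standing in for the natural language generation step.
TEMPLATES = {("Pain", "intensity"): "Patient reports pain intensity of {value}."}

def to_narrative(t: Triplet) -> str:
    return TEMPLATES[(t.entity, t.attribute)].format(value=t.value)

print(to_narrative(Triplet("Pain", "intensity", "7/10 on the numeric rating scale")))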
Integration of Evidence into a Detailed Clinical Model-based Electronic Nursing Record System
Park, Hyeoun-Ae; Jeon, Eunjoo; Chung, Eunja
2012-01-01
Objectives The purpose of this study was to test the feasibility of an electronic nursing record system for perinatal care that is based on detailed clinical models and clinical practice guidelines in perinatal care. Methods This study was carried out in five phases: 1) generating nursing statements using detailed clinical models; 2) identifying the relevant evidence; 3) linking nursing statements with the evidence; 4) developing a prototype electronic nursing record system based on detailed clinical models and clinical practice guidelines; and 5) evaluating the prototype system. Results We first generated 799 nursing statements describing nursing assessments, diagnoses, interventions, and outcomes using entities, attributes, and value sets of detailed clinical models for perinatal care which we developed in a previous study. We then extracted 506 recommendations from nine clinical practice guidelines and created sets of nursing statements to be used for nursing documentation by grouping nursing statements according to these recommendations. Finally, we developed and evaluated a prototype electronic nursing record system that can provide nurses with recommendations for nursing practice and sets of nursing statements based on the recommendations for guiding nursing documentation. Conclusions The prototype system was found to be sufficiently complete, relevant, useful, and applicable in terms of content, and easy to use and useful in terms of system user interface. This study has revealed the feasibility of developing such an ENR system. PMID:22844649
The Role of Empirical Evidence in Modeling Speech Segmentation
ERIC Educational Resources Information Center
Phillips, Lawrence
2015-01-01
Choosing specific implementational details is one of the most important aspects of creating and evaluating a model. In order to properly model cognitive processes, choices for these details must be made based on empirical research. Unfortunately, modelers are often forced to make decisions in the absence of relevant data. My work investigates the…
Clinical professional governance for detailed clinical models.
Goossen, William; Goossen-Baremans, Anneke
2013-01-01
This chapter describes the need for Detailed Clinical Models for contemporary electronic health systems, data exchange and data reuse. It starts with an explanation of the components related to Detailed Clinical Models, with a brief summary of knowledge representation, including terminologies representing clinically relevant "things" in the real world, and information models that abstract these in order to let computers process data about these things. Next, Detailed Clinical Models are defined and their purpose is described. The work builds on existing developments around the world and culminates in current work to create a technical specification at the level of the International Organization for Standardization. The core components of properly expressed Detailed Clinical Models are illustrated, including clinical knowledge and context, data element specification, code bindings to terminologies, and meta-information about authors and versioning, among others. Detailed Clinical Models to date are heavily based on user requirements and specify the conceptual and logical levels of modelling. They are not precise enough for specific implementations, which require an additional step. However, this allows Detailed Clinical Models to serve as specifications for many different kinds of implementations. Examples of Detailed Clinical Models are presented both in text and in Unified Modelling Language. Detailed Clinical Models can be positioned in health information architectures, where they serve at the most detailed granular level. The chapter ends with examples of projects that create and deploy Detailed Clinical Models. All have in common that they can often reuse materials from earlier projects, and that strict governance of these models is essential to use them safely in health care information and communication technology. Clinical validation is one point of such governance, and model testing another. The Plan-Do-Check-Act cycle can be applied for governance of Detailed Clinical Models. Finally, collections of clinical models do require a repository in which they can be stored, searched, and maintained. Governance of Detailed Clinical Models is required at local, national, and international levels.
PID-based error signal modeling
NASA Astrophysics Data System (ADS)
Yohannes, Tesfay
1997-10-01
This paper introduces PID-based error signal modeling. The modeling is based on the betterment process. The resulting iterative learning algorithm is introduced, and a detailed proof is provided for both linear and nonlinear systems.
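The betterment process is the classical iterative learning control idea: repeat a trial, then correct the next trial's input with PID-like terms of the current trial's tracking error. The sketch below (Python; the plant and gains are illustrative assumptions, not from the paper) shows one such update rule.

import numpy as np

def plant(u):
    # a simple stable first-order discrete plant (assumed for illustration)
    y = np.zeros_like(u)
    for t in range(len(u) - 1):
        y[t + 1] = 0.9 * y[t] + 0.1 * u[t]
    return y

T = 100
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, T))   # desired trajectory
u = np.zeros(T)
kp, ki, kd = 0.8, 0.05, 0.3                      # learning gains (assumed)
for trial in range(50):
    e = ref - plant(u)
    # betterment: next input = current input + P, I and D corrections of the error
    u = u + kp * e + ki * np.cumsum(e) + kd * np.gradient(e)
print("max tracking error:", np.max(np.abs(ref - plant(u))))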
Detailed Primitive-Based 3d Modeling of Architectural Elements
NASA Astrophysics Data System (ADS)
Remondino, F.; Lo Buglio, D.; Nony, N.; De Luca, L.
2012-07-01
The article describes a pipeline, based on image data, for the 3D reconstruction of building façades or architectural elements and the successive modeling using geometric primitives. The approach overcomes some existing problems in modeling architectural elements and delivers compact, reality-based, textured 3D models useful for metric applications. For the 3D reconstruction, an open-source pipeline developed within the TAPENADE project is employed. In the successive modeling steps, the user manually selects an area containing an architectural element (capital, column, bas-relief, window tympanum, etc.) and the procedure then fits geometric primitives and computes disparity and displacement maps in order to tie visual and geometric information together in a light but detailed 3D model. Examples are reported and commented on.
NASA Astrophysics Data System (ADS)
Chen, Chun-Nan; Luo, Win-Jet; Shyu, Feng-Lin; Chung, Hsien-Ching; Lin, Chiun-Yan; Wu, Jhao-Ying
2018-01-01
Using a non-equilibrium Green’s function framework in combination with the complex energy-band method, an atomistic full-quantum model for solving quantum transport problems for a zigzag-edge graphene nanoribbon (zGNR) structure is proposed. For transport calculations, the mathematical expressions from the theory for zGNR-based device structures are derived in detail. The transport properties of zGNR-based devices are calculated and studied in detail using the proposed method.
A statistical approach to develop a detailed soot growth model using PAH characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raj, Abhijeet; Celnik, Matthew; Shirley, Raphael
A detailed PAH growth model is developed, which is solved using a kinetic Monte Carlo algorithm. The model describes the structure and growth of planar PAH molecules, and is referred to as the kinetic Monte Carlo-aromatic site (KMC-ARS) model. A detailed PAH growth mechanism based on reactions at radical sites available in the literature, and additional reactions obtained from quantum chemistry calculations, are used to model the PAH growth processes. New rates for the reactions involved in the cyclodehydrogenation process for the formation of 6-member rings on PAHs are calculated in this work based on density functional theory simulations. The KMC-ARS model is validated by comparing experimentally observed ensembles of PAHs with the computed ensembles for a C₂H₂ and a C₆H₆ flame at different heights above the burner. The motivation for this model is the development of a detailed soot particle population balance model which describes the evolution of an ensemble of soot particles based on their PAH structure. However, at present incorporating such a detailed model into a population balance is computationally unfeasible. Therefore, a simpler model referred to as the site-counting model has been developed, which replaces the structural information of the PAH molecules by their functional groups augmented with statistical closure expressions. This closure is obtained from the KMC-ARS model, which is used to develop correlations and statistics in different flame environments which describe such PAH structural information. These correlations and statistics are implemented in the site-counting model, and results from the site-counting model and the KMC-ARS model are in good agreement. Additionally, the effect of steric hindrance in large PAH structures is investigated and correlations for sites unavailable for reaction are presented. (author)
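For readers unfamiliar with the method, the sketch below shows the bare kinetic Monte Carlo loop (Gillespie-style event selection) applied to a toy inventory of aromatic site types; the site classes, counts, rates and the structural update are invented for illustration and are not the KMC-ARS mechanism.

import numpy as np

rng = np.random.default_rng(0)
sites = {"free_edge": 20, "zigzag": 10, "armchair": 5}       # site counts (assumed)
rates = {"free_edge": 1.0, "zigzag": 0.4, "armchair": 2.5}   # per-site rates [1/s] (assumed)

t, t_end = 0.0, 1.0
while t < t_end:
    propensity = {s: sites[s] * rates[s] for s in sites}
    a0 = sum(propensity.values())
    t += rng.exponential(1.0 / a0)        # waiting time to the next reaction event
    r = rng.uniform(0.0, a0)              # pick which site class reacts
    for s, a in propensity.items():
        if r < a:
            # toy structural update: growth creates a free edge, otherwise consume a site
            sites[s] += 1 if s == "free_edge" else -1
            sites[s] = max(sites[s], 0)
            break
        r -= a
print(sites)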
Means, John A.; Simson, Crystal M.; Zhou, Shu; Rachford, Aaron A.; Rack, Jeffrey J.; Hines, Jennifer V.
2009-01-01
The T box transcription antitermination riboswitch is one of the main regulatory mechanisms utilized by Gram-positive bacteria to regulate genes that are involved in amino acid metabolism. The details of the antitermination event, including the role that Mg2+ plays, in this riboswitch have not been completely elucidated. In these studies, details of the antitermination event were investigated utilizing 2-aminopurine to monitor structural changes of a model antiterminator RNA when it was bound to model tRNA. Based on the results of these fluorescence studies, the model tRNA binds the model antiterminator RNA via an induced fit. This binding is enhanced by the presence of Mg2+, facilitating the complete base pairing of the model tRNA acceptor end with the complementary bases in the model antiterminator bulge. PMID:19755116
NASA Astrophysics Data System (ADS)
Gao, Jie; Jiang, Li-Li; Xu, Zhen-Yuan
2009-10-01
A new chaos game representation (CGR) of protein sequences, based on the detailed hydrophobic-hydrophilic (HP) model, was proposed by Yu et al (Physica A 337 (2004) 171). In the present paper, a CGR-walk model is proposed based on the new CGR coordinates for protein sequences from complete genomes. The new CGR coordinates based on the detailed HP model are converted into a time series, and a long-memory ARFIMA(p, d, q) model is introduced into the protein sequence analysis. The model is applied to simulating real CGR-walk sequence data of twelve protein sequences. Remarkably long-range correlations are uncovered in the data, and the results obtained from these models are reasonably consistent with those available from the ARFIMA(p, d, q) model.
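The chaos game representation itself is compact enough to sketch: each of the four hydrophobicity classes of the detailed HP model is assigned a corner of the unit square, and each residue moves the walker halfway toward its class corner; one coordinate of the walk then serves as the time series for the ARFIMA analysis. The class assignment below is an assumption for illustration.

CORNERS = {"nonpolar": (0.0, 0.0), "negative": (1.0, 0.0),
           "uncharged": (0.0, 1.0), "positive": (1.0, 1.0)}
CLASS = {aa: "nonpolar" for aa in "AVLIPFMWGC"}      # assumed grouping
CLASS.update({aa: "negative" for aa in "DE"})
CLASS.update({aa: "uncharged" for aa in "STYNQH"})
CLASS.update({aa: "positive" for aa in "KR"})

def cgr_walk(seq):
    x, y, points = 0.5, 0.5, []
    for aa in seq.upper():
        cx, cy = CORNERS[CLASS[aa]]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0   # move halfway to the class corner
        points.append((x, y))
    return points       # e.g. the x-coordinates form the CGR-walk time series

print(cgr_walk("MKTAYIAKQR")[:3])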
Hamm, V; Collon-Drouaillet, P; Fabriol, R
2008-02-19
The flooding of abandoned mines in the Lorraine Iron Basin (LIB) over the past 25 years has degraded the quality of the groundwater tapped for drinking water. High concentrations of dissolved sulphate have made the water unsuitable for human consumption. This issue has led to the development of numerical tools to support water-resource management in mining contexts. Here we examine two modelling approaches using different numerical tools that we tested on the Saizerais flooded iron-ore mine (Lorraine, France). A first approach considers the Saizerais Mine as a network of two chemical reactors (NCR). The second approach is based on a physically distributed pipe network model (PNM) built with EPANET 2 software; it considers the mine as a network of pipes defined by their geometric and chemical parameters. Each reactor in the NCR model includes a detailed chemical model built to simulate the evolution of water quality in the flooded mine. However, in order to obtain a robust PNM, we simplified the detailed chemical model into a specific sulphate dissolution-precipitation model that is included as a sulphate source/sink in both the NCR model and the pipe network model. Both the NCR model and the PNM, based on different numerical techniques, give good post-calibration agreement between the simulated and measured sulphate concentrations in the drinking-water well and overflow drift. The NCR model incorporating the detailed chemical model is useful when detailed chemical behaviour at the overflow is needed. The PNM incorporating the simplified sulphate dissolution-precipitation model provides better information on the physics controlling the effect of flow and low-flow zones and on the time of solid sulphate removal, whereas the NCR model will underestimate clean-up time due to its complete-mixing assumption. In conclusion, the detailed NCR model gives a first assessment of chemical processes at the overflow, and the PNM then provides more detailed information on flow and chemical behaviour (dissolved sulphate concentrations, remaining mass of solid sulphate) in the network. Nevertheless, both modelling methods require hydrological and chemical parameters (recharge flow rate, outflows, volume of mine voids, mass of solids, kinetic constants of the dissolution-precipitation reactions), which are commonly not available for a mine and therefore call for calibration data.
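A stripped-down version of the two-reactor (NCR) idea can be written in a few lines: two completely mixed reservoirs in series, each with a first-order sulphate dissolution source that shuts off once the solid stock is exhausted. All volumes, flows and rate constants below are invented for illustration.

import numpy as np

dt = 3600.0                          # time step [s]
V = np.array([5e5, 8e5])             # reactor volumes [m3] (assumed)
Q = 0.5                              # through-flow [m3/s] (assumed)
c = np.array([2.0, 2.0])             # dissolved sulphate [kg/m3]
solid = np.array([1e6, 5e5])         # remaining solid sulphate [kg] (assumed)
k = np.array([1e-8, 5e-9])           # dissolution rate constants [1/s] (assumed)

for step in range(24 * 365):         # one year, hourly steps
    diss = np.where(solid > 0.0, k * solid, 0.0)     # dissolution flux [kg/s]
    solid = np.maximum(solid - diss * dt, 0.0)
    c_in = np.array([0.0, c[0]])     # recharge feeds reactor 1, which feeds reactor 2
    c = c + dt * (Q * (c_in - c) + diss) / V         # mixed-reactor mass balance
print(c, solid)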
Sharma, Deepak K; Solbrig, Harold R; Tao, Cui; Weng, Chunhua; Chute, Christopher G; Jiang, Guoqian
2017-06-05
Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capturing and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of the study is to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. We leveraged a Semantic Web-based metadata repository enhanced with both ISO11179 metadata standard and Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated enriched data elements in the metadata repository are very useful in support of building detailed clinical models. Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate the detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.
A Comparison of Functional Models for Use in the Function-Failure Design Method
NASA Technical Reports Server (NTRS)
Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.
2006-01-01
When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur with products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs and redesigns. High-level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record these data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created and compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly, however, when applying the EFDM, high-level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.
Towards cleaner combustion engines through groundbreaking detailed chemical kinetic models
Battin-Leclerc, Frédérique; Blurock, Edward; Bounaceur, Roda; Fournet, René; Glaude, Pierre-Alexandre; Herbinet, Olivier; Sirjean, Baptiste; Warth, V.
2013-01-01
In the context of limiting the environmental impact of transportation, this paper reviews new directions being followed in the development of more predictive and more accurate detailed chemical kinetic models for the combustion of fuels. In the first part, the performance of current models, especially in terms of the prediction of pollutant formation, is evaluated. In the following parts, recent methods and ways to improve these models are described. Emphasis is given to the development of detailed models based on elementary reactions, to the production of the related thermochemical and kinetic parameters, and to the experimental techniques available to produce the data necessary to evaluate model predictions under well-defined conditions. PMID:21597604
ERIC Educational Resources Information Center
Xiang, Lin
2011-01-01
This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th grade students' model-based inquiry (MBI) by examining students' agent-based programmable modeling (ABPM) processes and the learning outcomes. The context of the present study was a biology unit on…
Extending rule-based methods to model molecular geometry and 3D model resolution.
Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia
2016-08-01
Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.
Details of insect wing design and deformation enhance aerodynamic function and flight efficiency.
Young, John; Walker, Simon M; Bomphrey, Richard J; Taylor, Graham K; Thomas, Adrian L R
2009-09-18
Insect wings are complex structures that deform dramatically in flight. We analyzed the aerodynamic consequences of wing deformation in locusts using a three-dimensional computational fluid dynamics simulation based on detailed wing kinematics. We validated the simulation against smoke visualizations and digital particle image velocimetry on real locusts. We then used the validated model to explore the effects of wing topography and deformation, first by removing camber while keeping the same time-varying twist distribution, and second by removing camber and spanwise twist. The full-fidelity model achieved greater power economy than the uncambered model, which performed better than the untwisted model, showing that the details of insect wing topography and deformation are important aerodynamically. Such details are likely to be important in engineering applications of flapping flight.
Devil is in the details: Using logic models to investigate program process.
Peyton, David J; Scicchitano, Michael
2017-12-01
Theory-based logic models are commonly developed as part of requirements for grant funding. As a tool for communicating complex social programs, they are an effective visual aid. However, after initial development, theory-based logic models are often abandoned and remain in their initial form despite changes in the program process. This paper examines the potential benefits of committing time and resources to revising the initial theory-driven logic model and developing detailed logic models that describe key activities, so as to accurately reflect the program and assist in effective program management. The authors use a funded special education teacher preparation program to exemplify the utility of drill-down logic models. The paper concludes with lessons learned from the iterative revision process and suggests how the process can lead to more flexible and calibrated program management.
Critical success factors for achieving superior m-health success.
Dwivedi, A; Wickramasinghe, N; Bali, R K; Naguib, R N G
2007-01-01
Recent healthcare trends clearly show significant investment by healthcare institutions in various types of wired and wireless technologies to facilitate and support superior healthcare delivery. This trend has been spurred by the shift in the concept and growing importance of the role of health information and the influence of fields such as bio-informatics, biomedical and genetic engineering. The demand is currently for integrated healthcare information systems; however, for such initiatives to be successful it is necessary to adopt a macro model and appropriate methodology with respect to wireless initiatives. The key contribution of this paper is the presentation of one such integrative model for mobile health (m-health), known as the Wi-INET Business Model, along with a detailed Adaptive Mapping to Realisation (AMR) methodology. The AMR methodology details how the Wi-INET Business Model can be implemented. Further validation of the concepts detailed in the Wi-INET Business Model and the AMR methodology is offered via a short vignette on a toolkit based on a leading UK-based healthcare information technology solution.
Version Control in Project-Based Learning
ERIC Educational Resources Information Center
Milentijevic, Ivan; Ciric, Vladimir; Vojinovic, Oliver
2008-01-01
This paper deals with the development of a generalized model for version control systems application as a support in a range of project-based learning methods. The model is given as UML sequence diagram and described in detail. The proposed model encompasses a wide range of different project-based learning approaches by assigning a supervisory…
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
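The core numerical step, sensitivity computation followed by a singular value decomposition whose dominant values count the locally active modes, can be sketched as below; the toy two-species kinetics and the mode-count threshold are illustrative assumptions, not the paper's algorithm in full.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    s, p = y
    v = 2.0 * s / (0.5 + s)          # Michaelis-Menten-type conversion s -> p (assumed)
    return [-v, v]

y0, t1, eps = np.array([1.0, 0.0]), 0.5, 1e-6
base = solve_ivp(rhs, (0.0, t1), y0).y[:, -1]
S = np.zeros((2, 2))                 # finite-difference sensitivities dy(t1)/dy0
for j in range(2):
    y0p = y0.copy()
    y0p[j] += eps
    S[:, j] = (solve_ivp(rhs, (0.0, t1), y0p).y[:, -1] - base) / eps

sv = np.linalg.svd(S, compute_uv=False)
n_active = int(np.sum(sv / sv[0] > 1e-2))   # dominant-mode count (threshold assumed)
print(sv, n_active)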
NASA Technical Reports Server (NTRS)
Huning, J. R.; Logan, T. L.; Smith, J. H.
1982-01-01
The potential of using digital satellite data to establish a cloud cover data base for the United States, one that would provide detailed information on the temporal and spatial variability of cloud development, is studied. Key elements include: (1) interfacing GOES data from the University of Wisconsin Meteorological Data Facility with the Jet Propulsion Laboratory's VICAR image processing system and IBIS geographic information system; (2) creation of a registered multitemporal GOES data base; (3) development of a simple normalization model to compensate for sun angle; (4) creation of a variable-size georeference grid that provides detailed cloud information in selected areas and summarized information in other areas; and (5) development of a cloud/shadow model which details the percentage of each grid cell that is cloud and shadow covered, and the percentage of cloud or shadow opacity. In addition, model calculations of insolation were compared with measured values at selected test sites, and preliminary requirements were developed for a large-scale data base of cloud cover statistics.
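Item (3), the sun-angle normalization, is typically a cosine correction: visible-band counts are divided by the cosine of the solar zenith angle so that scenes from different times of day become comparable. The sketch below uses the standard zenith-angle formula; the floor value is an assumption to avoid blow-up near the terminator, and the numbers are illustrative.

import numpy as np

def solar_zenith_cos(lat_deg, decl_deg, hour_angle_deg):
    lat, decl, h = np.deg2rad([lat_deg, decl_deg, hour_angle_deg])
    # cos(theta_z) = sin(phi) sin(delta) + cos(phi) cos(delta) cos(h)
    return np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(h)

def normalize(counts, lat_deg, decl_deg, hour_angle_deg, floor=0.1):
    mu = solar_zenith_cos(lat_deg, decl_deg, hour_angle_deg)
    return counts / max(mu, floor)   # floor (assumed) avoids division by ~0

print(normalize(120.0, 35.0, 10.0, 30.0))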
Detailed field test of yaw-based wake steering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleming, Paul; Churchfield, Matt; Scholbrock, Andrew
2016-10-03
This study describes a detailed field-test campaign to investigate yaw-based wake steering. In yaw-based wake steering, an upstream turbine intentionally misaligns its yaw with respect to the inflow to deflect its wake away from a downstream turbine, with the goal of increasing total power production. In the first phase, a nacelle-mounted scanning lidar was used to verify wake deflection of a misaligned turbine and calibrate wake deflection models. In the second phase, these models were used within a yaw controller to achieve a desired wake deflection. This paper details the experimental design and setup. Lastly, all data collected as part of this field experiment will be archived and made available to the public via the U.S. Department of Energy's Atmosphere to Electrons Data Archive and Portal.
ReaDDy - A Software for Particle-Based Reaction-Diffusion Dynamics in Crowded Cellular Environments
Schöneberg, Johannes; Noé, Frank
2013-01-01
We introduce the software package ReaDDy for simulation of detailed spatiotemporal mechanisms of dynamical processes in the cell, based on reaction-diffusion dynamics with particle resolution. In contrast to other particle-based reaction kinetics programs, ReaDDy supports particle interaction potentials. This permits effects such as space exclusion, molecular crowding and aggregation to be modeled. The biomolecules simulated can be represented as spheres, or as more complex geometries such as domain structures or polymer chains. ReaDDy bridges the gap between small-scale but highly detailed molecular dynamics or Brownian dynamics simulations and large-scale but less detailed reaction kinetics simulations. ReaDDy has a modular design that enables the exchange of the computing core by efficient platform-specific implementations or dynamical models that differ from Brownian dynamics. PMID:24040218
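The distinguishing feature named in the abstract, interaction potentials on top of particle diffusion, amounts to an overdamped Langevin step with a force term. The sketch below implements that step with a harmonic space-exclusion potential; it is a generic Brownian dynamics fragment with assumed parameters, not ReaDDy code.

import numpy as np

rng = np.random.default_rng(1)
n, dim = 50, 3
D, dt, kT = 1.0, 1e-3, 2.49          # diffusion const, time step, kT (assumed units)
k_rep, radius = 100.0, 1.0           # harmonic repulsion strength and particle radius

pos = rng.uniform(0.0, 20.0, size=(n, dim))
for step in range(1000):
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)
    overlap = np.clip(2.0 * radius - dist, 0.0, None)   # >0 only when particles overlap
    fmag = k_rep * overlap / dist                       # harmonic space-exclusion force
    force = (fmag[:, :, None] * diff).sum(axis=1)
    noise = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=pos.shape)
    pos += (D / kT) * force * dt + noise                # overdamped Langevin step
print(pos.mean(axis=0))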
Using artificial neural networks to model aluminium based sheet forming processes and tools details
NASA Astrophysics Data System (ADS)
Mekras, N.
2017-09-01
In this paper, a methodology and a software system are presented for the use of Artificial Neural Networks (ANNs) to model aluminium-based sheet forming processes. ANN models are created by training the networks on experimental, trial and historical data records of process inputs and outputs. ANN models are useful in cases where a process's mathematical model is not accurate enough, is not well defined or is missing, e.g. for complex product shapes, new material alloys, new process requirements, micro-scale products, etc. Usually, after the design and modeling of the forming tools (die, punch, etc.) and before mass production, a set of trials takes place on the shop floor to finalize process and tool details concerning e.g. minimum tool radii, die/punch clearance, press speed and process temperature, in relation to the material type, the sheet thickness and the quality achieved in the trials. Using data from the shop-floor trials and forming theory, ANN models can be trained and used to estimate these final process and tool details, hence supporting efficient set-up of processes and tools before mass production starts. The proposed ANN methodology and the respective software system are implemented within the EU H2020 project LoCoMaTech for the aluminium-based sheet forming process HFQ (solution Heat treatment, cold die Forming and Quenching).
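A minimal sketch of the training step is given below (Python with scikit-learn; the feature names and the synthetic data are assumptions for illustration, not the project's data): process inputs from trials are mapped to a quality outcome by a small feed-forward network.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns (assumed): sheet thickness, die radius, clearance, temperature, press speed
X = rng.uniform([1.0, 2.0, 0.05, 20.0, 1.0],
                [3.0, 10.0, 0.30, 500.0, 20.0], size=(200, 5))
y = 10.0 - 0.2 * (X[:, 1] - 6.0) ** 2 - 0.1 * X[:, 4] + rng.normal(0.0, 0.2, 200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=2000, random_state=0))
model.fit(X, y)                                        # train on trial records
print(model.predict([[2.0, 6.0, 0.10, 200.0, 5.0]]))   # estimate quality for a new set-up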
Potential of 3D City Models to assess flood vulnerability
NASA Astrophysics Data System (ADS)
Schröter, Kai; Bochow, Mathias; Schüttig, Martin; Nagel, Claus; Ross, Lutz; Kreibich, Heidi
2016-04-01
Vulnerability, as the product of exposure and susceptibility, is a key factor of the flood risk equation. Furthermore, the estimation of flood loss is very sensitive to the choice of the vulnerability model. Still, in contrast to elaborate hazard simulations, vulnerability is often considered in a simplified manner concerning the spatial resolution and geo-location of exposed objects as well as the susceptibility of these objects at risk. Usually, area specific potential flood loss is quantified on the level of aggregated land-use classes, and both hazard intensity and resistance characteristics of affected objects are represented in highly simplified terms. We investigate the potential of 3D City Models and spatial features derived from remote sensing data to improve the differentiation of vulnerability in flood risk assessment. 3D City Models are based on CityGML, an application scheme of the Geography Markup Language (GML), which represents the 3D geometry, 3D topology, semantics and appearance of objects on different levels of detail. As such, 3D City Models offer detailed spatial information which is useful to describe the exposure and to characterize the susceptibility of residential buildings at risk. This information is further consolidated with spatial features of the building stock derived from remote sensing data. Using this database a spatially detailed flood vulnerability model is developed by means of data-mining. Empirical flood damage data are used to derive and to validate flood susceptibility models for individual objects. We present first results from a prototype application in the city of Dresden, Germany. The vulnerability modeling based on 3D City Models and remote sensing data is compared i) to the generally accepted good engineering practice based on area specific loss potential and ii) to a highly detailed representation of flood vulnerability based on a building typology using urban structure types. Comparisons are drawn in terms of affected building area and estimated loss for a selection of inundation scenarios.
Boosting flood warning schemes with fast emulator of detailed hydrodynamic models
NASA Astrophysics Data System (ADS)
Bellos, V.; Carbajal, J. P.; Leitao, J. P.
2017-12-01
Floods are among the most destructive catastrophic events, and their frequency has increased over recent decades. To reduce flood impact and risks, flood warning schemes are installed in flood-prone areas. Frequently, these schemes are based on numerical models which quickly provide predictions of water levels and other relevant observables. However, the high complexity of flood wave propagation in the real world and the need for accurate predictions in urban environments or on floodplains hinder the use of detailed simulators. This is the difficulty: we need fast predictions that also meet the accuracy requirements. Most physics-based detailed simulators, although accurate, will not fulfill the speed demand, even if high-performance computing techniques are used (required simulation times are on the order of minutes to hours). As a consequence, most flood warning schemes are based on coarse ad-hoc approximations that cannot take advantage of a detailed hydrodynamic simulation. In this work, we present a methodology for developing a flood warning scheme using a Gaussian-process-based emulator of a detailed hydrodynamic model. The methodology consists of two main stages: 1) an offline stage to build the emulator; 2) an online stage using the emulator to predict and generate warnings. The offline stage consists of the following steps: a) definition of the critical sites of the area under study, and specification of the observables to predict at those sites, e.g. water depth, flow velocity, etc.; b) generation of a detailed simulation dataset to train the emulator; c) calibration of the required parameters (if measurements are available). The online stage is carried out using the emulator to predict the relevant observables quickly, while the detailed simulator is used in parallel to verify key predictions of the emulator. The speed gain given by the emulator also allows uncertainty in predictions to be quantified using ensemble methods. The above methodology is applied to a real-world scenario.
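The offline/online split maps directly onto code: train a Gaussian process on input-output pairs produced by the detailed simulator offline, then query it online in milliseconds, using its predictive standard deviation for a conservative warning rule. In the sketch below the detailed simulator is replaced by a synthetic function, and the inputs and warning threshold are assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X = rng.uniform([5.0, 0.5], [80.0, 6.0], size=(40, 2))   # (rain intensity, duration)
y = 0.02 * X[:, 0] * np.sqrt(X[:, 1]) + rng.normal(0.0, 0.01, 40)  # stand-in simulator

gp = GaussianProcessRegressor(ConstantKernel() * RBF([10.0, 1.0]),
                              normalize_y=True).fit(X, y)          # offline stage

depth, sd = gp.predict([[60.0, 3.0]], return_std=True)             # online stage
if depth[0] + 2.0 * sd[0] > 1.0:         # conservative rule with assumed 1 m threshold
    print("flood warning:", depth[0], "+/-", sd[0])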
Yellepeddi, Venkata; Rower, Joseph; Liu, Xiaoxi; Kumar, Shaun; Rashid, Jahidur; Sherwin, Catherine M T
2018-05-18
Physiologically based pharmacokinetic modeling and simulation is an important tool for predicting the pharmacokinetics, pharmacodynamics, and safety of drugs in pediatrics. Physiologically based pharmacokinetic modeling is applied in pediatric drug development for first-time-in-pediatric dose selection, simulation-based trial design, correlation with target organ toxicities, risk assessment by investigating possible drug-drug interactions, real-time assessment of pharmacokinetic-safety relationships, and assessment of non-systemic biodistribution targets. This review summarizes the details of a physiologically based pharmacokinetic modeling approach in pediatric drug research, emphasizing reports on pediatric physiologically based pharmacokinetic models of individual drugs. We also compare and contrast the strategies employed by various researchers in pediatric physiologically based pharmacokinetic modeling and provide a comprehensive overview of physiologically based pharmacokinetic modeling strategies and approaches in pediatrics. We discuss the impact of physiologically based pharmacokinetic models on regulatory reviews and product labels in the field of pediatric pharmacotherapy. Additionally, we examine in detail the current limitations and future directions of physiologically based pharmacokinetic modeling in pediatrics with regard to the ability to predict plasma concentrations and pharmacokinetic parameters. Despite the skepticism and concern in the pediatric community about the reliability of physiologically based pharmacokinetic models, there is substantial evidence that pediatric physiologically based pharmacokinetic models have been used successfully to predict differences in pharmacokinetics between adults and children for several drugs. It is obvious that the use of physiologically based pharmacokinetic modeling to support various stages of pediatric drug development is highly attractive and will rapidly increase, provided the robustness and reliability of these techniques are well established.
USDA-ARS's Scientific Manuscript database
Process-based modeling provides detailed spatial and temporal information of the soil environment in the shallow seedling recruitment zone across field topography where measurements of soil temperature and water may not sufficiently describe the zone. Hourly temperature and water profiles within the...
Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.
Brette, Romain; Gerstner, Wulfram
2005-11-01
We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model correctly predicts the timing of 96% of the spikes (±2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
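The model in question is compact enough to state in full: C dV/dt = -gL(V - EL) + gL*DeltaT*exp((V - VT)/DeltaT) - w + I and tau_w dw/dt = a(V - EL) - w, with the reset V -> Vr, w -> w + b after a spike. The sketch below integrates it with forward Euler; the parameter values are typical published regular-spiking values, used here for illustration.

import numpy as np

C, gL, EL = 281.0, 30.0, -70.6       # pF, nS, mV
VT, DeltaT = -50.4, 2.0              # mV
tau_w, a, b = 144.0, 4.0, 80.5       # ms, nS, pA
Vr = -70.6                           # reset potential [mV]
dt, T, I = 0.1, 500.0, 800.0         # ms, ms, injected current [pA]

V, w, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V > 0.0:                      # spike: reset membrane and increment adaptation
        spikes.append(step * dt)
        V, w = Vr, w + b
print(len(spikes), "spikes")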
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Jin, Shuangshuang; Chen, Yousu
This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-scale power system dynamic simulation. It was developed on the GridPACK™ high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models; (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies; (3) options for HPC-based linear and iterative solvers; (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers; and (5) easy integration of new dynamic models and related algorithms into the software package.
NASA Astrophysics Data System (ADS)
García-Barberena, Javier; Mutuberria, Amaia; Palacin, Luis G.; Sanz, Javier L.; Pereira, Daniel; Bernardos, Ana; Sanchez, Marcelino; Rocha, Alberto R.
2017-06-01
The National Renewable Energy Centre of Spain, CENER, and the Technology & Innovation area of ACS Cobra, as a result of their long-term expertise in the CSP field, have developed a high-quality, highly detailed optical and thermal simulation software for the accurate evaluation of molten salt solar towers. The main purpose of this software is to take a step forward in the state of the art of solar tower simulation programs. Generally, these programs deal with the most critical systems of such plants, i.e. the solar field and the receiver, on an independent basis, and therefore typically neglect relevant aspects of plant operation such as heliostat aiming strategies, solar flux shapes on the receiver, material physical and operational limitations, and transient processes such as preheating and secure cloud-passing operating modes. The modelling approach implemented in the developed program consists of effectively coupling detailed optical simulations of the heliostat field with detailed, fully transient thermal simulations of the molten-salt tube-based external receiver. The optical model is based on an accurate Monte Carlo ray-tracing method which solves the complete solar field by simulating each of the heliostats at once according to their specific layout in the field. On the thermal side, the tube-based cylindrical external receiver of a molten salt solar tower is modelled assuming one representative tube per panel, implementing the specific connection layout of the panels as well as the internal receiver pipes. Each tube is longitudinally discretized, and the transient energy and mass balances in the temperature-dependent molten salt and steel tube models are solved. For this, a one-dimensional radial heat transfer model is used. The thermal model is completed with a detailed control and operation strategy module, able to represent the appropriate operation of the plant. An integration framework has been developed, helping ACS Cobra to adequately handle the coupled optical-thermal simulations. According to current results, it can be concluded that the developed model is a powerful tool to improve the design and operation of future ACS Cobra molten salt solar towers, since historical data from its projects have been used for validation of the final tool.
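A toy version of the receiver discretization reads as follows: one representative tube split into longitudinal nodes, with an explicit energy balance that advects salt downstream and adds the absorbed flux. Geometry, material properties and the flux profile are all invented for illustration and are not the plant's values.

import numpy as np

n = 20                               # longitudinal nodes
dx, d_in = 0.5, 0.04                 # node length, inner diameter [m] (assumed)
mdot, cp, rho = 6.0, 1500.0, 1800.0  # salt flow [kg/s], cp [J/kg/K], density [kg/m3]
q_abs = 2e5 * np.sin(np.linspace(0.0, np.pi, n))   # absorbed flux [W/m2] (assumed shape)
A_node = np.pi * d_in * dx           # heated area per node [m2] (simplified)
m_node = rho * np.pi * (d_in / 2.0) ** 2 * dx      # salt mass per node [kg]

T = np.full(n, 290.0)                # salt temperature [degC], inlet at 290
dt = 0.05                            # [s]
for step in range(20000):
    T_up = np.concatenate(([290.0], T[:-1]))       # upstream node temperatures
    T += dt * (mdot * cp * (T_up - T) + q_abs * A_node) / (m_node * cp)
print(T[-1])                         # outlet temperature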
Enhanced LOD Concepts for Virtual 3d City Models
NASA Astrophysics Data System (ADS)
Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.
2013-09-01
Virtual 3D city models contain digital three-dimensional representations of city objects like buildings, streets or technical infrastructure. Because the size and complexity of these models continuously grow, a Level of Detail (LoD) concept is indispensable: one that effectively supports the partitioning of a complete model into alternative models of different complexity, and that provides metadata addressing the informational content, complexity and quality of each alternative model. After a short overview of various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates, first, between a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second, between the building interior and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of a UML model.
Integration of remote sensing based surface information into a three-dimensional microclimate model
NASA Astrophysics Data System (ADS)
Heldens, Wieke; Heiden, Uta; Esch, Thomas; Mueller, Andreas; Dech, Stefan
2017-03-01
Climate change urges cities to consider the urban climate as part of sustainable planning. Urban microclimate models can provide knowledge of the climate at building-block level, but they require very detailed information on the area of interest. Most microclimate studies therefore make use of assumptions and generalizations to describe the model area. Remote sensing data with area-wide coverage provide a means to derive many parameters at the detailed spatial and thematic scale required by urban climate models. This study shows how microclimate simulations for a series of real-world urban areas can be supported by remote sensing data. In an automated process, surface materials, albedo, LAI/LAD and object heights have been derived and integrated into the urban microclimate model ENVI-met. Multiple microclimate simulations have been carried out, both with the dynamic remote-sensing-based input data and with manual, static input data, to analyze the impact of the remote-sensing-based surface information and the suitability of the applied data and techniques. The integration of the remote-sensing-based input data into ENVI-met is greatly aided by an automated processing chain, which saves tedious manual editing and allows fast, area-wide generation of simulation areas. The analysis of the different modes shows the importance of high-quality height data, detailed surface material information and albedo.
Reduced modeling of signal transduction – a modular approach
Koschorreck, Markus; Conzelmann, Holger; Ebert, Sybille; Ederer, Michael; Gilles, Ernst Dieter
2007-01-01
Background: Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results: We introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches, further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows the model to be dissected into smaller modules, called layers, that can be modeled independently. Hallmarks of the approach are conservation relations within each layer and the connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly, without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show the performance and limitations of the method. For physiologically relevant parameter domains, the transient as well as the stationary errors caused by the reduction are negligible. Conclusion: The new layer-based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be formulated directly and are intuitively interpretable. Additionally, the method provides very good approximations, especially for macroscopic variables. It can be combined with existing reduction methods without any difficulties. PMID:17854494
Finite element based micro-mechanics modeling of textile composites
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Griffin, O. H., Jr.
1995-01-01
Textile composites have the advantage over laminated composites of significantly greater damage tolerance and resistance to delamination. Currently, a disadvantage of textile composites is the inability to examine the details of the internal response of these materials under load. Traditional approaches to the study of textile-based composite materials neglect many of the geometric details that affect the performance of the material. The present three-dimensional analysis, based on the representative volume element (RVE) of a plain weave, allows prediction of the internal details of displacement, strain, stress, and failure quantities. Through this analysis, the effect of geometric and material parameters on the aforementioned quantities is studied.
ERIC Educational Resources Information Center
Sperry, Len
2012-01-01
A paradigm shift is underway in the training of professional counselors. It involves a shift in orientation from an input-based or traditional model of training to an outcomes-based or competency-based model of training. This article provides a detailed description of both input-based and outcomes-based training and instructional methods. It…
Modelling Teaching Strategies.
ERIC Educational Resources Information Center
Major, Nigel
1995-01-01
Describes a modelling language for representing teaching strategies, based in the context of the COCA intelligent tutoring system. Examines work on meta-reasoning in knowledge-based systems and describes COCA's architecture, giving details of the language used for representing teaching knowledge. Discusses implications for future work. (AEF)
Simulation-based modeling of building complexes construction management
NASA Astrophysics Data System (ADS)
Shepelev, Aleksandr; Severova, Galina; Potashova, Irina
2018-03-01
The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.
Using Toulmin analysis to analyse an instructor's proof presentation in abstract algebra
NASA Astrophysics Data System (ADS)
Fukawa-Connelly, Timothy
2014-01-01
This paper provides a method for analysing undergraduate teaching of proof-based courses using Toulmin's model (1969) of argumentation. It presents a case study of one instructor's presentation of proofs. The analysis shows that the instructor presents different levels of detail in different proofs; thus, the students have an inconsistent set of written models for their work. Similarly, the analysis shows that the details the instructor says aloud differ from what she writes down. Although her verbal commentary provides additional detail and appears to have pedagogical value, for instance, by modelling thinking that supports proof writing, this value might be better realized if she were to change her teaching practices.
Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.
Sohn, Bong-Soo
2017-03-11
This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.
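The base/detail/blur pipeline described in the abstract can be summarized in a few lines. The sketch below is a hedged approximation, not the authors' implementation: the compression factor, detail weight, focal depth and blur strength are invented parameters, and the depth map is assumed to be given (the paper computes it from the input photographs).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bas_relief_depth(depth, gray, compress_to=0.1, detail_weight=0.05,
                     focus_depth=0.5, dof_sigma=3.0):
    # Base map: normalize the depth map, then compress its value range.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-9)
    base = d * compress_to
    # Detail map: high-frequency image content (pixels minus a smoothed copy)
    # stands in for the fine scene detail described in the paper.
    detail = gray - gaussian_filter(gray, sigma=2.0)
    relief = base + detail_weight * detail
    # Depth-of-field: blend in a blurred relief, more strongly the further a
    # pixel's depth lies from the chosen focal depth.
    blurred = gaussian_filter(relief, sigma=dof_sigma)
    alpha = np.clip(np.abs(d - focus_depth) * 2.0, 0.0, 1.0)
    return (1.0 - alpha) * relief + alpha * blurred

# Toy usage with synthetic inputs
rng = np.random.default_rng(0)
depth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
gray = rng.random((64, 64))
print(bas_relief_depth(depth, gray).shape)
```

The resulting height field would then be triangulated into the bas-relief surface mesh mentioned in the abstract.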
Evaluating a Control System Architecture Based on a Formally Derived AOCS Model
NASA Astrophysics Data System (ADS)
Ilic, Dubravka; Latvala, Timo; Varpaaniemi, Kimmo; Vaisanen, Pauli; Troubitsyna, Elena; Laibinis, Linas
2010-08-01
Attitude & Orbit Control System (AOCS) refers to a wider class of control systems used to determine and control the attitude of a spacecraft while in orbit, based on information obtained from various sensors. In this paper, we propose an approach to evaluating a typical (yet somewhat simplified) AOCS architecture using formal development based on the Event-B method. As a starting point, an Ada specification of the AOCS is translated into a formal specification and further refined to incorporate all the details of its original source code specification. This way we are able not only to evaluate the Ada specification by expressing and verifying specific system properties in our formal models, but also to determine how well the chosen modelling framework copes with the level of detail required for an actual implementation and code generation from the derived models.
Development of surrogate models for the prediction of the flow around an aircraft propeller
NASA Astrophysics Data System (ADS)
Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros
2018-05-01
In the present work, the derivation of two surrogate models (SMs) for modelling the flow around a propeller for small aircraft is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modelled in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence relative to the detailed model, while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time to converge.
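The first surrogate model lends itself to a short sketch: momentum source terms expressed as polynomials in radius, applied only over the blade span of the disk region. The coefficients and span limits below are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

def disk_source_terms(r, r_hub=0.1, r_tip=0.9,
                      thrust_coeffs=(0.0, 4.0e3, -4.0e3),
                      swirl_coeffs=(0.0, 1.5e3, -1.5e3)):
    """Axial (thrust) and tangential (swirl) momentum sources [N/m^3] as
    polynomials in nondimensional radius, zeroed outside the blade span.
    Coefficients are illustrative, not the paper's fitted values."""
    r = np.asarray(r, dtype=float)
    inside = (r >= r_hub) & (r <= r_tip)
    s_axial = np.polynomial.polynomial.polyval(r, thrust_coeffs) * inside
    s_swirl = np.polynomial.polynomial.polyval(r, swirl_coeffs) * inside
    return s_axial, s_swirl

r = np.linspace(0.0, 1.0, 11)
s_ax, s_sw = disk_source_terms(r)
for ri, sa, ss in zip(r, s_ax, s_sw):
    print(f"r={ri:.1f}  axial={sa:10.1f}  swirl={ss:10.1f}")
```

In a CFD solver these arrays would be added as volumetric sources in the momentum equations over the disk cells.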
A Framework for Model-Based Inquiry through Agent-Based Programming
ERIC Educational Resources Information Center
Xiang, Lin; Passmore, Cynthia
2015-01-01
There has been increased recognition in the past decades that model-based inquiry (MBI) is a promising approach for cultivating deep understandings by helping students unite phenomena and underlying mechanisms. Although multiple technology tools have been used to improve the effectiveness of MBI, there are not enough detailed examinations of how…
Automated map sharpening by maximization of detail and connectivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.
2018-05-18
An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
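A minimal sketch of the adjusted-surface-area idea follows, assuming scikit-image and SciPy are available: pick the contour level enclosing a fixed volume fraction, measure the iso-surface area with marching cubes, count connected regions, and combine the two. The exact weighting the authors use may differ from this stand-in.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

def adjusted_surface_area(vol_map, volume_fraction=0.2, weight=1.0):
    """Sketch of the 'adjusted surface area' idea: reward iso-surface area
    (detail), penalize the number of connected regions (connectivity).
    The combination rule here is illustrative, not the authors' exact metric."""
    # Contour level enclosing the requested fraction of the map volume
    level = np.quantile(vol_map, 1.0 - volume_fraction)
    verts, faces, _, _ = measure.marching_cubes(vol_map, level=level)
    area = measure.mesh_surface_area(verts, faces)
    _, n_regions = ndimage.label(vol_map >= level)
    return area - weight * n_regions

# Toy map: a blurred random volume standing in for a density map
rng = np.random.default_rng(1)
density = ndimage.gaussian_filter(rng.random((32, 32, 32)), sigma=2.0)
print(f"adjusted surface area: {adjusted_surface_area(density):.1f}")
```

Sharpening parameters would then be chosen by evaluating this metric over a grid of candidate sharpening factors and keeping the maximizer.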
A standard protocol for describing individual-based and agent-based models
Grimm, Volker; Berger, Uta; Bastiansen, Finn; Eliassen, Sigrunn; Ginot, Vincent; Giske, Jarl; Goss-Custard, John; Grand, Tamara; Heinz, Simone K.; Huse, Geir; Huth, Andreas; Jepsen, Jane U.; Jorgensen, Christian; Mooij, Wolf M.; Muller, Birgit; Pe'er, Guy; Piou, Cyril; Railsback, Steven F.; Robbins, Andrew M.; Robbins, Martha M.; Rossmanith, Eva; Ruger, Nadja; Strand, Espen; Souissi, Sami; Stillman, Richard A.; Vabo, Rune; Visser, Ute; DeAngelis, Donald L.
2006-01-01
Simulation models that describe autonomous individual organisms (individual based models, IBM) or agents (agent-based models, ABM) have become a widely used tool, not only in ecology, but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. This protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD as a first step for establishing a more detailed common format of the description of IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it becomes used by a sufficiently large proportion of modellers.
On the role of modeling choices in estimation of cerebral aneurysm wall tension.
Ramachandran, Manasi; Laakso, Aki; Harbaugh, Robert E; Raghavan, Madhavan L
2012-11-15
To assess various approaches to estimating pressure-induced wall tension in intracranial aneurysms (IAs) and their effect on the stratification of subjects in a study population, three-dimensional models of 26 IAs (9 ruptured and 17 unruptured) were segmented from Computed Tomography Angiography (CTA) images. Wall tension distributions in these patient-specific geometric models were estimated using various approaches, differing in the morphological detail utilized or the modeling choices made. For all subjects in the study population, the peak wall tension was estimated using all investigated approaches and compared to a reference approach: nonlinear finite element (FE) analysis using the Fung anisotropic model with regionally varying material fiber directions. Comparisons between approaches focused on assessing the similarity in stratification of IAs within the population based on peak wall tension. The tension-based stratification of IAs deviated to some extent from the reference approach as less geometric detail was incorporated. Interestingly, the size of the cerebral aneurysm as captured by a single size measure was the predominant determinant of peak wall tension-based stratification. Within FE approaches, simplifications to isotropy, material linearity and geometric linearity caused a gradual deviation from the reference estimates, but it was minimal and resulted in little to no impact on the stratification of IAs. Differences in modeling choices made without patient-specificity in the parameters of such models had little impact on tension-based IA stratification in this population. Increasing morphological detail did impact the estimated peak wall tension, but size was the predominant determinant. Copyright © 2012 Elsevier Ltd. All rights reserved.
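The finding that a single size measure dominates tension-based ranking can be illustrated with the textbook Laplace-law estimate for a thin-walled sphere, T = pr/2, under which tension scales linearly with radius at fixed pressure. This is a deliberate simplification for illustration only; it is not the Fung-anisotropic finite element reference approach used in the study.

```python
# Laplace-law membrane tension for a thin-walled spherical sac: T = p * r / 2.
# A textbook simplification used only to illustrate why a single size measure
# can dominate tension-based ranking; it is NOT the paper's FE method.
p = 13300.0                      # transmural pressure [Pa] (~100 mmHg)
radii_mm = [2.0, 3.5, 5.0, 7.5]  # hypothetical aneurysm size measures
for r_mm in radii_mm:
    tension = p * (r_mm * 1e-3) / 2.0   # [N/m]
    print(f"r = {r_mm:4.1f} mm  ->  wall tension ~ {tension:.3f} N/m")
```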
CHIMERA II - A real-time multiprocessing environment for sensor-based robot control
NASA Technical Reports Server (NTRS)
Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.
1989-01-01
A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, user interface, extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.
Wan Chan Tseung, H; Ma, J; Beltran, C
2015-06-01
Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on graphics processing units (GPUs). However, these MCs usually use simplified models for nonelastic proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and nonelastic proton-nucleus collisions. Using the CUDA framework, the authors implemented GPU kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) modeling of the intranuclear cascade stage of nonelastic interactions when they occur, (4) simulation of nuclear evaporation, and (5) statistical error estimates on the dose. To validate our MC, the authors performed (1) secondary particle yield calculations in proton collisions with therapeutically relevant nuclei, (2) dose calculations in homogeneous phantoms, (3) recalculations of complex head and neck treatment plans from a commercially available treatment planning system, and compared with Geant4.9.6p2/TOPAS. Yields, energy, and angular distributions of secondaries from nonelastic collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%-2 mm for treatment plan simulations is typically 98%. The net computational time on a NVIDIA GTX680 card, including all CPU-GPU data transfers, is ~20 s for 1 × 10^7 proton histories. Our GPU-based MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus. Dosimetric calculations are in very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam based treatment plans, and is being used as the dose calculation engine in a clinically applicable MC-based IMPT treatment planning system. The detailed nuclear modeling will allow us to perform very fast linear energy transfer and neutron dose estimates on the GPU.
Research on the Hotel Image Based on the Detail Service
NASA Astrophysics Data System (ADS)
Li, Ban; Shenghua, Zheng; He, Yi
Detail service management, initially developed as a marketing program to enhance customer loyalty, has now become an important part of customer relation strategy. This paper analyzes the critical factors of detail service and their influence on hotel image. We establish a theoretical model of the factors influencing hotel image and propose corresponding hypotheses, which we then test and verify using statistical methods. This paper provides a foundation for further study of detail service design and planning issues.
van der Wegen, M.; Jaffe, B.E.; Roelvink, J.A.
2011-01-01
This study investigates the possibility of hindcasting observed decadal-scale morphologic change in San Pablo Bay, a subembayment of the San Francisco Estuary, California, USA, by means of a 3-D numerical model (Delft3D). The hindcast period, 1856-1887, is characterized by upstream hydraulic mining that resulted in a high sediment input to the estuary. The model includes wind waves, salt water and fresh water interactions, and graded sediment transport, among others. Simplified initial conditions and hydrodynamic forcing were necessary because detailed historic descriptions were lacking. Model results show significant skill. The river discharge and sediment concentration have a strong positive influence on deposition volumes. Waves decrease deposition rates and have, together with tidal movement, the greatest effect on sediment distribution within San Pablo Bay. The applied process-based (or reductionist) modeling approach is valuable once reasonable values for model parameters and hydrodynamic forcing are obtained. Sensitivity analysis reveals the dominant forcing of the system and suggests that the model planform plays a dominant role in the morphodynamic development. A detailed physical explanation of the model outcomes is difficult because of the high nonlinearity of the processes. Process formulation refinement, a more detailed description of the forcing, or further model parameter variations may lead to an enhanced model performance, albeit to a limited extent. The approach potentially provides a sound basis for prediction of future developments. Parallel use of highly schematized box models and a process-based approach as described in the present work is probably the most valuable method to assess decadal morphodynamic development. Copyright © 2011 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Pillai, D.; Gerbig, C.; Kretschmer, R.; Beck, V.; Karstens, U.; Neininger, B.; Heimann, M.
2012-10-01
We present simulations of atmospheric CO2 concentrations provided by two modeling systems run at high spatial resolution: the Eulerian-based Weather Research and Forecasting (WRF) model and the Lagrangian-based Stochastic Time-Inverted Lagrangian Transport (STILT) model, both coupled to a diagnostic biospheric model, the Vegetation Photosynthesis and Respiration Model (VPRM). The consistency of the simulations is assessed with special attention to the details of horizontal and vertical transport and mixing of CO2 in the atmosphere. The dependence of the model mismatch (Eulerian vs. Lagrangian) on the models' spatial resolution is further investigated. A case study using airborne measurements, during which the two models showed large deviations from each other, is analyzed in detail as an extreme case. Using aircraft observations and pulse release simulations, we identified differences in the representation of details in the interaction between turbulent mixing and advection through wind shear as the main cause of discrepancies between WRF and STILT transport at spatial resolutions of 2 and 6 km. Based on observations and inter-model comparisons of atmospheric CO2 concentrations, we show that a refinement of the parameterization of turbulent velocity variance and Lagrangian time-scale in STILT is needed to achieve a better match between the Eulerian and the Lagrangian transport at such high spatial resolutions (e.g. 2 and 6 km). Nevertheless, the inter-model differences in simulated CO2 time series for a tall tower observatory at Ochsenkopf in Germany are about a factor of two smaller than the model-data mismatch and about a factor of three smaller than the mismatch between current global model simulations and the data.
Modelling of and Conjecturing on a Soccer Ball in a Korean Eighth Grade Mathematics Classroom
ERIC Educational Resources Information Center
Lee, Kyeong-Hwa
2011-01-01
The purpose of this article was to describe the task design and implementation of cultural artefacts in a mathematics lesson based on the integration of modelling and conjecturing perspectives. The conceived process of integrating a soccer ball into mathematics lessons via modelling- and conjecturing-based instruction was first detailed. Next, the…
Individual-based model formulation for cutthroat trout, Little Jones Creek, California
Steven F. Railsback; Bret C. Harvey
2001-01-01
This report contains the detailed formulation of an individual-based model (IBM) of cutthroat trout developed for three study sites on Little Jones Creek, Del Norte County, in northwestern California. The model was designed to support research on relations between habitat and fish population dynamics, the importance of small tributaries to trout populations, and the...
Evaluating crown fire rate of spread predictions from physics-based models
C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont
2015-01-01
Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...
Patil, M P; Sonolikar, R L
2008-10-01
This paper presents a detailed computational fluid dynamics (CFD) based approach for modeling the thermal destruction of hazardous wastes in a circulating fluidized bed (CFB) incinerator. The model is based on a Eulerian-Lagrangian approach in which the gas phase (continuous phase) is treated in a Eulerian reference frame, whereas the waste particulate (dispersed phase) is treated in a Lagrangian reference frame. The reaction chemistry has been modeled through a mixture fraction/PDF approach. The conservation equations for mass, momentum, energy, mixture fraction and other closure equations have been solved using the general purpose CFD code FLUENT 4.5. A finite volume method on a structured grid has been used for solution of the governing equations. The model provides detailed information on the hydrodynamics (gas velocity, particulate trajectories), gas composition (CO, CO2, O2) and temperature inside the riser. The model also allows different operating scenarios to be examined in an efficient manner.
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from Special Sensor Microwave/Imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms: it performs explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempts to match the observations to the predicted brightness temperatures.
Preliminary Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Prince, F. Andrew; Smart, Christian; Stephens, Kyle; Henrichs, Todd
2009-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. However, great care is required. Some space telescope cost models, such as those based only on mass, lack sufficient detail to support such analyses and may lead to inaccurate conclusions. Similarly, using ground-based telescope models which include the dome cost will also lead to inaccurate conclusions. This paper reviews current and historical models. Then, based on data from 22 different NASA space telescopes, this paper tests those models and presents preliminary analysis of single- and multi-variable space telescope cost models.
Detailed p- and s-wave velocity models along the LARSE II transect, Southern California
Murphy, J.M.; Fuis, G.S.; Ryberg, T.; Lutter, W.J.; Catchings, R.D.; Goldman, M.R.
2010-01-01
Structural details of the crust determined from P-wave velocity models can be improved with S-wave velocity models, and S-wave velocities are needed for model-based predictions of strong ground motion in southern California. We picked P- and S-wave travel times for refracted phases from explosive-source shots of the Los Angeles Region Seismic Experiment, Phase II (LARSE II); we developed refraction velocity models from these picks using two different inversion algorithms. For each inversion technique, we calculated ratios of P- to S-wave velocities (VP/VS) where there is coincident P- and S-wave ray coverage. We compare the two VP inverse velocity models to each other and to results from forward modeling, and we compare the VS inverse models. The VS and VP/VS models differ in structural details from the VP models. In particular, dipping, tabular zones of low VS, or high VP/VS, appear to define two fault zones in the central Transverse Ranges that could be parts of a positive flower structure to the San Andreas fault. These two zones are marginally resolved, but their presence in two independent models lends them some credibility. A plot of VS versus VP differs from recently published plots that are based on direct laboratory or down-hole sonic measurements. The difference in plots is most prominent in the range of VP = 3 to 5 km/s (or VS ~ 1.25 to 2.9 km/s), where our refraction VS is lower by a few tenths of a kilometer per second than VS based on direct measurements. Our new VS–VP curve may be useful for modeling the lower limit of VS from a VP model in calculating strong motions from scenario earthquakes.
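Computing VP/VS only where both models have ray coverage reduces to a masked element-wise ratio, as in the toy sketch below; the arrays and the NaN-for-no-coverage convention are hypothetical stand-ins for gridded velocity models.

```python
import numpy as np

# Hypothetical gridded velocity models [km/s], NaN where ray coverage is absent
vp = np.array([[5.8, 6.1, np.nan], [5.5, 6.4, 6.6]])
vs = np.array([[3.2, np.nan, 3.6], [3.1, 3.7, 3.8]])

covered = ~np.isnan(vp) & ~np.isnan(vs)     # coincident P and S coverage
vp_vs = np.full_like(vp, np.nan)
vp_vs[covered] = vp[covered] / vs[covered]
print(np.round(vp_vs, 2))
```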
NASA Astrophysics Data System (ADS)
El Akbar, R. Reza; Anshary, Muhammad Adi Khairul; Hariadi, Dennis
2018-02-01
Model MACP for HE ver.1 is a model that describes how to measure and monitor performance in higher education. Based on a review of the research related to this model, several components of the model warrant development in further research, so this study has four main objectives. The first is to differentiate the CSF (critical success factor) components of the previous model; the second is to explore the KPIs (key performance indicators) of the previous model; the third, building on the first two, is to design a new and more detailed model. The final objective is to design a prototype application for performance measurement in higher education, based on the new model. The methods used are exploratory research and application design using the prototype method. The results of this study are, first, a new and more detailed model for measurement and monitoring of performance in higher education, obtained by differentiating and exploring Model MACP for HE Ver.1; second, a dictionary of college performance measurement compiled by re-evaluating the existing indicators; and third, the design of a prototype application for performance measurement in higher education.
Physics of Accretion in X-Ray Binaries
NASA Technical Reports Server (NTRS)
Vrtilek, Saeqa D.
2004-01-01
This project consists of several related investigations directed to the study of mass transfer processes in X-ray binaries. Models developed over several years incorporating highly detailed physics will be tested on a balanced mix of existing data and planned observations with both ground- and space-based observatories. The extended time coverage of the observations and the existence of simultaneous X-ray, ultraviolet, and optical observations will be particularly beneficial for studying the accretion flows. These investigations, which take as detailed a look at the accretion process in X-ray binaries as is now possible, test current models to their limits, and force us to extend them. We now have the ability to do simultaneous ultraviolet/X-ray/optical spectroscopy with HST, Chandra, XMM, and ground-based observatories. The rich spectroscopy that these observations give us must be interpreted principally by reference to detailed models, the development of which is already well underway; tests of these essential interpretive tools are an important product of the proposed investigations.
The Physics of Accretion in X-Ray Binaries
NASA Technical Reports Server (NTRS)
Vrtilek, S.; Oliversen, Ronald (Technical Monitor)
2001-01-01
This project consists of several related investigations directed to the study of mass transfer processes in X-ray binaries. Models developed over several years incorporating highly detailed physics will be tested on a balanced mix of existing data and planned observations with both ground and space-based observatories. The extended time coverage of the observations and the existence of simultaneous X-ray, ultraviolet, and optical observations will be particularly beneficial for studying the accretion flows. These investigations, which take as detailed a look at the accretion process in X-ray binaries as is now possible, test current models to their limits, and force us to extend them. We now have the ability to do simultaneous ultraviolet/X-ray/optical spectroscopy with HST, Chandra, XMM, and ground-based observatories. The rich spectroscopy that these observations give us must be interpreted principally by reference to detailed models, the development of which is already well underway; tests of these essential interpretive tools are an important product of the proposed investigations.
Design of robotic cells based on relative handling modules with use of SolidWorks system
NASA Astrophysics Data System (ADS)
Gaponenko, E. V.; Anciferov, S. I.
2018-05-01
The article presents a schematic engineering solution for a robotic cell with six degrees of freedom for machining complex parts, consisting of a base with a tool installation module and a part machining module built as parallel-structure mechanisms. The output links of the part machining module and the tool installation module can each move along the X, Y and Z coordinate axes. A 3D model of the cell has been designed in the SolidWorks system; it will subsequently be used for engineering calculations, mathematical analysis and the preparation of all required documentation.
Improving Power System Modeling. A Tool to Link Capacity Expansion and Production Cost Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diakov, Victor; Cole, Wesley; Sullivan, Patrick
2015-11-01
Capacity expansion models (CEM) provide a high-level, long-term view of the prospects of the evolving power system. In simulating the possibilities of long-term capacity expansion, it is important to maintain the viability of power system operation on the short-term (daily, hourly and sub-hourly) scales. Production cost models (PCM) simulate routine power system operation on these shorter time scales using detailed load, transmission and generation fleet data, by minimizing production costs while following reliability requirements. When based on CEM 'predictions' about generating unit retirements and buildup, PCM provide a more detailed simulation of short-term system operation and, consequently, may confirm the validity of the capacity expansion predictions. Further, production cost model simulations of a system that is based on a capacity expansion model solution are 'evolutionarily' sound: the generator mix is the result of a logical sequence of unit retirement and buildup resulting from policy and incentives. The above has motivated us to bridge CEM with PCM by building a capacity-expansion-to-production-cost-model Linking Tool (CEPCoLT). The Linking Tool is built to map capacity expansion model prescriptions onto production cost model inputs. NREL's ReEDS and Energy Exemplar's PLEXOS are the capacity expansion and the production cost models, respectively. Via the Linking Tool, PLEXOS provides details of operation for the regionally defined ReEDS scenarios.
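In outline, the linking step takes the CEM's build and retirement prescriptions and applies them to the PCM's generator fleet before short-term simulation. The sketch below illustrates that translation with invented record layouts; the field names are hypothetical and are not ReEDS or PLEXOS schemas.

```python
# Hypothetical sketch of the linking step: apply a capacity expansion model's
# build/retire decisions to a production cost model's generator fleet table.
# Record layouts are invented for illustration, not ReEDS or PLEXOS formats.
fleet = {"coal_1": 600.0, "gas_cc_1": 400.0, "wind_1": 150.0}   # MW by unit

cem_decisions = [
    {"unit": "coal_1", "action": "retire", "year": 2030},
    {"unit": "pv_1", "action": "build", "year": 2030, "mw": 300.0},
]

def apply_decisions(fleet, decisions, year):
    fleet = dict(fleet)                      # leave the input table untouched
    for d in decisions:
        if d["year"] > year:
            continue                         # decision not yet in effect
        if d["action"] == "retire":
            fleet.pop(d["unit"], None)
        elif d["action"] == "build":
            fleet[d["unit"]] = fleet.get(d["unit"], 0.0) + d["mw"]
    return fleet

print(apply_decisions(fleet, cem_decisions, year=2030))
```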
Goal Structuring Notation in a Radiation Hardening Assurance Case for COTS-Based Spacecraft
NASA Technical Reports Server (NTRS)
Witulski, A.; Austin, R.; Evans, J.; Mahadevan, N.; Karsai, G.; Sierawski, B.; LaBel, K.; Reed, R.
2016-01-01
The attached presentation summarizes how mission assurance is supported by model-based representations of spacecraft systems that can define sub-system functionality and interfacing as well as reliability parameters, and that detail a new paradigm for assurance: a model-centric rather than document-centric process.
77 FR 31169 - Airworthiness Directives; Piper Aircraft, Inc. Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-25
... detailed search for all applicable airworthiness related documents that apply to any airplane that has an incorrectly marked data plate and take necessary corrective actions based on the search findings. We are... affected model airplanes. The NPRM also proposed to require a detailed search for all applicable...
Fatigue assessment of an existing steel bridge by finite element modelling and field measurements
NASA Astrophysics Data System (ADS)
Kwad, J.; Alencar, G.; Correia, J.; Jesus, A.; Calçada, R.; Kripakaran, P.
2017-05-01
The evaluation of the fatigue life of structural details in metallic bridges is a major challenge for bridge engineers. A reliable and cost-effective approach is essential to ensure appropriate maintenance and management of these structures. Typically, local stresses predicted by a finite element model of the bridge are employed to assess the fatigue life of fatigue-prone details. This paper illustrates an approach for fatigue assessment based on measured data for a connection in an old bascule steel bridge located in Exeter (UK). A finite element model is first developed from the design information. The finite element model of the bridge is calibrated using measured responses from an ambient vibration test. The stress time histories are calculated through dynamic analysis of the updated finite element model. Stress cycles are computed through the rainflow counting algorithm, and the fatigue-prone details are evaluated using the standard S-N curve approach and Miner's rule. Results show that the proposed approach can estimate the fatigue damage of a fatigue-prone detail in a structure using measured strain data.
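The cycle-counting and damage step can be sketched briefly, assuming the third-party rainflow Python package and a Eurocode-style S-N curve with slope 3; the stress history and detail category below are invented for illustration, not the Exeter bridge data.

```python
import rainflow  # third-party package (pip install rainflow), assumed available

# Hypothetical measured stress history at the fatigue-prone detail [MPa]
stress_history = [0, 40, 5, 55, 10, 70, 0, 45, 8, 60, 0]

detail_category = 71.0   # Eurocode-style detail category, stress range at N_ref [MPa]
slope_m = 3.0            # S-N curve slope
N_ref = 2.0e6            # reference number of cycles

damage = 0.0
for stress_range, count in rainflow.count_cycles(stress_history):
    if stress_range <= 0:
        continue
    N_allow = N_ref * (detail_category / stress_range) ** slope_m
    damage += count / N_allow      # Miner's linear damage sum

print(f"damage per record: {damage:.2e}  (failure when the cumulative sum reaches 1)")
```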
Aspen: A microsimulation model of the economy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu, N.; Pryor, R.J.; Quint, T.
1996-10-01
This report presents Aspen, a new agent-based microeconomic simulation model of the U.S. economy under development at Sandia National Laboratories. The model is notable because it allows a large number of individual economic agents to be modeled at a high level of detail and with a great degree of freedom. Some features of Aspen are (a) a sophisticated message-passing system that allows individual pairs of agents to communicate, (b) the use of genetic algorithms to simulate the learning of certain agents, and (c) a detailed financial sector that includes a banking system and a bond market. Results from runs of the model are also presented.
Population projections for AIDS using an actuarial model.
Wilkie, A D
1989-09-05
This paper gives details of a model for forecasting AIDS, developed for actuarial purposes, but used also for population projections. The model is only appropriate for homosexual transmission, but it is age-specific, and it allows variation in the transition intensities by age, duration in certain states and calendar year. The differential equations controlling transitions between states are defined, the method of numerical solution is outlined, and the parameters used in five different Bases of projection are given in detail. Numerical results for the population of England and Wales are shown.
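A minimal sketch of the kind of state-transition equations described above, with constant intensities; the actual model varies the intensities by age, duration in state and calendar year, which is omitted here, and all rate values are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-state sketch: susceptible -> infected -> AIDS, constant intensities.
lam = 0.05    # infection intensity [1/yr] (placeholder)
nu = 0.10     # progression intensity to AIDS [1/yr] (placeholder)
mu = 0.33     # mortality intensity from AIDS [1/yr] (placeholder)

def rhs(t, y):
    s, i, a = y
    return [-lam * s,               # susceptibles lost to infection
            lam * s - nu * i,       # infected gain and progression
            nu * i - mu * a]        # AIDS cases gain and mortality

sol = solve_ivp(rhs, (0.0, 20.0), [100000.0, 100.0, 0.0],
                t_eval=np.linspace(0.0, 20.0, 5))
for t, (s, i, a) in zip(sol.t, sol.y.T):
    print(f"year {t:4.1f}: susceptible {s:9.0f}  infected {i:8.0f}  AIDS {a:7.0f}")
```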
Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion
NASA Technical Reports Server (NTRS)
Ashe, Thomas L.; Otting, William D.
1993-01-01
The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance to significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
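The parametric character of such a model can be illustrated with a textbook recuperated closed Brayton cycle efficiency estimate under ideal-gas assumptions; the relations and default values below are a generic sketch, not the NASA LeRC CBC code.

```python
def cbc_efficiency(T_turbine_in, cycle_temp_ratio, pressure_ratio,
                   eta_turb=0.90, eta_comp=0.85, eps_recup=0.95, gamma=1.67):
    """Recuperated closed Brayton cycle efficiency from ideal-gas relations.
    A textbook-level parametric sketch, not the NASA LeRC CBC model."""
    T1 = T_turbine_in / cycle_temp_ratio          # compressor inlet [K]
    phi = pressure_ratio ** ((gamma - 1.0) / gamma)
    T2 = T1 * (1.0 + (phi - 1.0) / eta_comp)      # compressor exit
    T5 = T_turbine_in * (1.0 - eta_turb * (1.0 - 1.0 / phi))  # turbine exit
    T3 = T2 + eps_recup * (T5 - T2)               # recuperator exit, cold side
    w_net = (T_turbine_in - T5) - (T2 - T1)       # net work per unit cp
    q_in = T_turbine_in - T3                      # heat added per unit cp
    return w_net / q_in

# Example: monatomic working fluid, 1300 K turbine inlet (illustrative values)
print(f"cycle efficiency: {cbc_efficiency(1300.0, 3.25, 2.0):.3f}")
```

A parametric optimizer, as described in the abstract, would sweep inputs such as the cycle temperature ratio and pressure ratio through such relations while also estimating component masses and dimensions.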
NASA Astrophysics Data System (ADS)
Zachary, Wayne; Eggleston, Robert; Donmoyer, Jason; Schremmer, Serge
2003-09-01
Decision-making is strongly shaped and influenced by the work context in which decisions are embedded. This suggests that decision support needs to be anchored by a model (implicit or explicit) of the work process, in contrast to traditional approaches that anchor decision support either to context-free decision models (e.g., utility theory) or to detailed models of the external (e.g., battlespace) environment. An architecture for cognitively based, work-centered decision support called the Work-centered Informediary Layer (WIL) is presented. WIL separates decision support into three overall processes that build and dynamically maintain an explicit context model, use the context model to identify opportunities for decision support, and tailor generic decision-support strategies to the current context and offer them to the system user/decision-maker. The generic decision-support strategies include such things as activity/attention aiding, decision process structuring, work performance support (selective, contextual automation), explanation/elaboration, infosphere data retrieval, and what-if/action-projection and visualization. A WIL-based application is a work-centered decision support layer that provides active support without intent inferencing, and that is cognitively based without requiring classical cognitive task analyses. Example WIL applications are detailed and discussed.
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF suggests domain adaptation, i.e., changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with a higher structural similarity index.
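The first stage, clustering each partial image separately with a Gaussian mixture model, can be sketched with scikit-learn. The paper's modified Bayes factor is specific to the authors, so the sketch substitutes a plain cross log-likelihood comparison and labels it as such; the patch data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two hypothetical partial images: (x, y) coordinates plus intensity features,
# standing in for the paper's partial-image inputs.
def fake_patch(shift):
    xy = rng.random((500, 2))
    intensity = np.sin(6 * xy[:, :1] + shift) + 0.1 * rng.standard_normal((500, 1))
    return np.hstack([xy, intensity])

patch_a, patch_b = fake_patch(0.0), fake_patch(0.2)

# Cluster each partial image separately, as in the paper's first stage.
gmm_a = GaussianMixture(n_components=4, random_state=0).fit(patch_a)
gmm_b = GaussianMixture(n_components=4, random_state=0).fit(patch_b)

# Stand-in similarity: symmetric cross log-likelihood. The paper's modified
# Bayes factor (MBF) is a specific refinement of this kind of comparison.
sim_ab = gmm_a.score(patch_b) + gmm_b.score(patch_a)
print(f"cross log-likelihood similarity: {sim_ab:.2f}")
```

In the paper's scheme, a poorly explained (novel) feature would instead trigger domain adaptation, i.e. refitting with a different number of mixture components.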
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Banfi, F.; Brumana, R.; Oreni, D.; Previtali, M.; Roncoroni, F.
2015-08-01
This paper describes a procedure for the generation of a detailed HBIM which is then turned into a model for mobile apps based on augmented and virtual reality. Starting from laser point clouds, photogrammetric data and additional information, a geometric reconstruction with a high level of detail can be carried out by considering the basic requirements of BIM projects (parametric modelling, object relations, attributes). The work aims at demonstrating that a complex HBIM can be managed on portable devices to extract useful information not only for expert operators, but also for a wider user community interested in cultural tourism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cipolla, C.L.; Mayerhofer, M.
The paper details the acquisition of detailed core and pressure data and the subsequent reservoir modeling in the Ozona Gas Field, Crockett County, Texas. The Canyon formation is the focus of the study and consists of complex turbidite sands characterized by numerous lenticular gas-bearing members. The sands cannot be characterized using indirect measurements (logs) and no reliable porosity-permeability relationship could be developed. The reservoir simulation results illustrate the problems associated with interpreting typical pressure and production data in tight gas sands and detail procedures to identify incremental reserves. Reservoir layering was represented by five model layers, and layer permeabilities were estimated based on statistical distributions from core measurements.
Image-based models of cardiac structure in health and disease
Vadakkumpadan, Fijoy; Arevalo, Hermenegild; Prassl, Anton J.; Chen, Junjie; Kickinger, Ferdinand; Kohl, Peter; Plank, Gernot; Trayanova, Natalia
2010-01-01
Computational approaches to investigating the electromechanics of healthy and diseased hearts are becoming essential for the comprehensive understanding of cardiac function. In this article, we first present a brief review of existing image-based computational models of cardiac structure. We then provide a detailed explanation of a processing pipeline which we have recently developed for constructing realistic computational models of the heart from high resolution structural and diffusion tensor (DT) magnetic resonance (MR) images acquired ex vivo. The presentation of the pipeline incorporates a review of the methodologies that can be used to reconstruct models of cardiac structure. In this pipeline, the structural image is segmented to reconstruct the ventricles, normal myocardium, and infarct. A finite element mesh is generated from the segmented structural image, and fiber orientations are assigned to the elements based on DTMR data. The methods were applied to construct seven different models of healthy and diseased hearts. These models contain millions of elements, with spatial resolutions in the order of hundreds of microns, providing unprecedented detail in the representation of cardiac structure for simulation studies. PMID:20582162
1D-3D hybrid modeling-from multi-compartment models to full resolution models in space and time.
Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M; Queisser, Gillian
2014-01-01
Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated, methods-spanning simulation approaches could, depending on the network size, establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models, using, e.g., the NEURON simulator, which couple to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D and 3D simulations, we present a geometry-, membrane potential- and intracellular concentration-mapping framework with which graph-based morphologies, e.g., in the swc or hoc format, are mapped to full surface and volume representations of the neuron, and computational data from 1D simulations can be used as boundary conditions for full 3D simulations and vice versa. Thus, established models and data based on general purpose 1D simulators can be directly coupled to the emerging field of fully resolved, highly detailed 3D modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics.
Supporting Air and Space Expeditionary Forces: Analysis of Combat Support Basing Options
2004-01-01
A support vector machine based control application to the experimental three-tank system.
Iplikci, Serdar
2010-07-01
This paper presents a support vector machine (SVM) approach to generalized predictive control (GPC) of multiple-input multiple-output (MIMO) nonlinear systems. The higher generalization potential of SVMs, together with their avoidance of local minima, motivated us to employ SVM algorithms for modeling MIMO systems. Based on the SVM model, detailed and compact formulations for calculating predictions and gradient information, which are used in the computation of the optimal control action, are given in the paper. The proposed MIMO SVM-based GPC method has been verified on an experimental three-tank liquid level control system. Experimental results have shown that the proposed method can handle the control task successfully for different reference trajectories. Moreover, a detailed discussion on data gathering, model selection and the effects of the control parameters is given in this paper. © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
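The modeling half of the approach can be sketched with scikit-learn's SVR: fit a one-step-ahead model from plant data, then choose controls against a reference. The sketch below simplifies to a SISO toy plant and a brute-force one-step search, whereas the paper computes gradients of the SVM model inside a multi-step GPC cost for the MIMO three-tank system; the plant and all parameters are invented.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy first-order plant standing in for one tank: y+ = 0.9 y + 0.2 u + noise
def plant(y, u):
    return 0.9 * y + 0.2 * u + 0.01 * rng.standard_normal()

# Collect identification data and fit a one-step-ahead SVR model y+ = f(y, u)
ys, us = [0.0], []
for _ in range(300):
    u = rng.uniform(-1, 1)
    us.append(u)
    ys.append(plant(ys[-1], u))
X = np.column_stack([ys[:-1], us])
model = SVR(kernel="rbf", C=100.0, epsilon=0.001).fit(X, ys[1:])

# Crude one-step 'predictive control': pick the candidate input whose predicted
# output is closest to the reference (the paper instead uses gradients of the
# SVM model within a GPC cost over a longer horizon).
y, ref = 0.0, 0.5
for k in range(10):
    candidates = np.linspace(-1, 1, 41)
    preds = model.predict(np.column_stack([np.full_like(candidates, y), candidates]))
    u = candidates[np.argmin((preds - ref) ** 2)]
    y = plant(y, u)
    print(f"step {k}: u={u:+.2f}  y={y:.3f}")
```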
NASA Astrophysics Data System (ADS)
Addy, A. L.; Chow, W. L.; Korst, H. H.; White, R. A.
1983-05-01
Significant data and detailed results of a joint research effort investigating the fluid dynamic mechanisms and interactions within separated flows are presented. The results were obtained through analytical, experimental, and computational investigations of base flow related configurations. The research objectives focus on understanding the component mechanisms and interactions which establish and maintain separated flow regions. Flow models and theoretical analyses were developed to describe the base flowfield. The research approach has been to conduct extensive small-scale experiments on base flow configurations and to analyze these flows by component models and finite-difference techniques. The modeling of base flows of missiles (both powered and unpowered) for transonic and supersonic freestreams has been successful by component models. Research on plume effects and plume modeling indicated the need to match initial plume slope and plume surface curvature for valid wind tunnel simulation of an actual rocket plume. The assembly and development of a state-of-the-art laser Doppler velocimeter (LDV) system for experiments with two-dimensional small-scale models has been completed and detailed velocity and turbulence measurements are underway. The LDV experiments include the entire range of base flowfield mechanisms - shear layer development, recompression/reattachment, shock-induced separation, and plume-induced separation.
A biochemically semi-detailed model of auxin-mediated vein formation in plant leaves.
Roussel, Marc R; Slingerland, Martin J
2012-09-01
We present here a model intended to capture the biochemistry of vein formation in plant leaves. The model consists of three modules. Two of these modules, those describing auxin signaling and transport in plant cells, are biochemically detailed. We couple these modules to a simple model for PIN (auxin efflux carrier) protein localization based on an extracellular auxin sensor. We study the single-cell responses of this combined model in order to verify proper functioning of the modeled biochemical network. We then assemble a multicellular model from the single-cell building blocks. We find that the model can, under some conditions, generate files of polarized cells, but not true veins. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
The calculation of theoretical chromospheric models and the interpretation of the solar spectrum
NASA Technical Reports Server (NTRS)
Avrett, Eugene H.
1994-01-01
Since the early 1970s we have been developing the extensive computer programs needed to construct models of the solar atmosphere and to calculate detailed spectra for use in the interpretation of solar observations. This research involves two major related efforts: work by Avrett and Loeser on the Pandora computer program for non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed synthesis of the solar spectrum based on opacity data for over 58 million atomic and molecular lines. Our goals are to determine models of the various features observed on the sun (sunspots, different components of quiet and active regions, and flares) by means of physically realistic models, and to calculate detailed spectra at all wavelengths that match observations of those features. These two goals are interrelated: discrepancies between calculated and observed spectra are used to determine improvements in the structure of the models, and in the detailed physical processes used in both the model calculations and the spectrum calculations. The atmospheric models obtained in this way provide not only the depth variation of various atmospheric parameters, but also a description of the internal physical processes that are responsible for nonradiative heating, and for solar activity in general.
A Four-Stage Model for Planning Computer-Based Instruction.
ERIC Educational Resources Information Center
Morrison, Gary R.; Ross, Steven M.
1988-01-01
Describes a flexible planning process for developing computer-based instruction (CBI) in which the CBI design is implemented on paper between the lesson design and the program production. A four-stage model is explained, including (1) an initial flowchart, (2) storyboards, (3) a detailed flowchart, and (4) an evaluation. (16 references)…
1D-3D hybrid modeling—from multi-compartment models to full resolution models in space and time
Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M.; Queisser, Gillian
2014-01-01
Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general-purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in computational neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated, methods-spanning simulation approaches could, depending on the network size, establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D-models (using, e.g., the NEURON simulator) and couples them to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D- and 3D-simulations, we present a geometry-, membrane potential- and intracellular concentration-mapping framework, with which graph-based morphologies, e.g., in the swc- or hoc-format, are mapped to full surface and volume representations of the neuron, and computational data from 1D-simulations can be used as boundary conditions for full 3D-simulations and vice versa. Thus, established models and data based on general-purpose 1D-simulators can be directly coupled to the emerging field of fully resolved, highly detailed 3D-modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics. PMID:25120463
Logic-Based Models for the Analysis of Cell Signaling Networks
2010-01-01
Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. PMID:20225868
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castro, Ricardo
The report describes in detail the achievements of the project addressing the performance of nanomaterials in radioactive environments. The project addresses the fundamentals of the role of interface features in defect dynamics during irradiation and presents models to predict behavior based on thermodynamic properties. Papers and products, including the training of students in this strategic area, are also presented in detail.
Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation
NASA Astrophysics Data System (ADS)
Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong
2017-05-01
Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A feature of the proposed approach is that it does not require the inversion operation that usually hampers nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.
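For readers unfamiliar with the structure named above, a minimal Python sketch of a Hammerstein-Wiener model follows: a static input nonlinearity feeding a linear dynamic block whose output passes through a static output nonlinearity. The nonlinearities and first-order dynamics here are hypothetical placeholders, not the paper's engine model or control law.

```python
import numpy as np

def f_input(u):
    # hypothetical static input nonlinearity (saturation-like)
    return np.tanh(u)

def g_output(x):
    # hypothetical static output nonlinearity
    return x + 0.1 * x**3

def simulate_hw(u_seq, a=0.9, b=0.1):
    # Hammerstein-Wiener chain: u -> f(.) -> linear dynamics -> g(.) -> y
    # linear block: x[k+1] = a*x[k] + b*f(u[k]); output: y[k] = g(x[k])
    x, y = 0.0, []
    for u in u_seq:
        y.append(g_output(x))
        x = a * x + b * f_input(u)
    return np.array(y)

u = np.ones(50)          # step input
y = simulate_hw(u)
print(y[-1])             # settles near the nonlinear steady-state response
```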
Calculation methods study on hot spot stress of new girder structure detail
NASA Astrophysics Data System (ADS)
Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing
2017-10-01
To study modeling and calculation methods for the hot spot stress of a new girder structure detail, several finite element models of this welded detail were established with the finite element software ANSYS, based on the surface extrapolation variant of the hot spot stress method. The influence of element type, mesh density, local modeling method for the weld toe, and extrapolation method on the calculated hot spot stress at the weld toe was analyzed. The results show that the difference in normal stress between models, in both the thickness direction and the surface direction, grows as the distance from the weld toe shrinks. When the distance from the toe is greater than 0.5t, the normal stress of solid models, shell models with welds, and shell models without welds tends to be consistent along the surface direction. It is therefore recommended that the extrapolation points be selected outside 0.5t for this new girder welded detail. According to the calculation and analysis results, shell models have good grid stability, and the extrapolated hot spot stress of solid models is smaller than that of shell models, so it is suggested that formula 2 and the solid45 element be used in the hot spot stress extrapolation calculation for this welded detail. For each finite element model under the different shell modeling methods, the results calculated by formula 2 are smaller than those of the other two methods, and the results of shell models with welds are the largest. Under the same local mesh density, the extrapolated hot spot stress decreases gradually as the number of element layers through the thickness of the main plate increases, with a variation range within 7.5%.
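The abstract does not reproduce "formula 2", so as an illustration only, here is a generic two-point linear surface extrapolation of the kind used in hot spot stress methods (e.g., IIW-style recommendations), with the read-out points placed outside 0.5t as the authors recommend; the stress values below are hypothetical.

```python
def hot_spot_stress(s1, s2, x1, x2):
    """Linearly extrapolate FE surface stresses s1 at distance x1 and s2 at x2
    (x in multiples of plate thickness t, measured from the weld toe) back to x=0."""
    slope = (s2 - s1) / (x2 - x1)
    return s1 - slope * x1

# e.g., surface stresses read out at 0.5t and 1.5t from the toe (values hypothetical):
print(hot_spot_stress(s1=120.0, s2=100.0, x1=0.5, x2=1.5))  # -> 130.0 MPa
```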
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Jim Bouchard
Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for:
• Development of time-dependent fire heat release rate profiles (required as input to CFAST),
• Calculation of fire severity factors based on CFAST detailed fire modeling, and
• Calculation of fire non-suppression probabilities.
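As a rough illustration of the severity-factor/non-suppression machinery described above (not the facility-specific analysis), the following sketch combines hypothetical LHS-style samples of damage and actuation times with an exponential manual non-suppression model of the kind outlined in NUREG/CR-6850; the suppression rate is a placeholder value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical LHS-style samples (minutes): time to component damage and
# automatic suppression actuation time, as if taken from CFAST runs.
t_damage = rng.normal(20.0, 5.0, 1000).clip(min=1.0)
t_actuate = rng.normal(8.0, 2.0, 1000).clip(min=0.5)

LAMBDA = 0.1  # manual suppression rate, 1/min (placeholder value)

# Sketch: if automatic suppression actuates before damage, credit it fully;
# otherwise apply an exponential manual non-suppression model, PNS = exp(-lambda*t).
p_ns = np.where(t_actuate < t_damage, 0.0, np.exp(-LAMBDA * t_damage))
print("mean P_NS over scenarios:", p_ns.mean())
```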
Insights into DNA-mediated interparticle interactions from a coarse-grained model
NASA Astrophysics Data System (ADS)
Ding, Yajun; Mittal, Jeetain
2014-11-01
DNA-functionalized particles have great potential for the design of complex self-assembled materials. The major hurdle in realizing crystal structures from DNA-functionalized particles is expected to be kinetic barriers that trap the system in metastable amorphous states. Therefore, it is vital to explore the molecular details of particle assembly processes in order to understand the underlying mechanisms. Molecular simulations based on coarse-grained models can provide a convenient route to explore these details. Most of the currently available coarse-grained models of DNA-functionalized particles ignore key chemical and structural details of DNA behavior. These models therefore are limited in scope for studying experimental phenomena. In this paper, we present a new coarse-grained model of DNA-functionalized particles which incorporates some of the desired features of DNA behavior. The coarse-grained DNA model used here provides explicit DNA representation (at the nucleotide level) and complementary interactions between Watson-Crick base pairs, which lead to the formation of single-stranded hairpin and double-stranded DNA. Aggregation between multiple complementary strands is also prevented in our model. We study interactions between two DNA-functionalized particles as a function of DNA grafting density, lengths of the hybridizing and non-hybridizing parts of DNA, and temperature. The calculated free energies as a function of pair distance between particles qualitatively resemble experimental measurements of DNA-mediated pair interactions.
Modeling and Analysis of Realistic Fire Scenarios in Spacecraft
NASA Technical Reports Server (NTRS)
Brooker, J. E.; Dietrich, D. L.; Gokoglu, S. A.; Urban, D. L.; Ruff, G. A.
2015-01-01
An accidental fire inside a spacecraft is an unlikely, but very real emergency situation that can easily have dire consequences. While much has been learned over the past 25+ years of dedicated research on flame behavior in microgravity, a quantitative understanding of the initiation, spread, detection and extinguishment of a realistic fire aboard a spacecraft is lacking. Virtually all combustion experiments in microgravity have been small-scale, by necessity (hardware limitations in ground-based facilities and safety concerns in space-based facilities). Large-scale, realistic fire experiments are unlikely for the foreseeable future (unlike in terrestrial situations). Therefore, NASA will have to rely on scale modeling, extrapolation of small-scale experiments and detailed numerical modeling to provide the data necessary for vehicle and safety system design. This paper presents the results of parallel efforts to better model the initiation, spread, detection and extinguishment of fires aboard spacecraft. The first is a detailed numerical model using the freely available Fire Dynamics Simulator (FDS). FDS is a CFD code that numerically solves a large eddy simulation form of the Navier-Stokes equations. FDS provides a detailed treatment of the smoke and energy transport from a fire. The simulations provide a wealth of information, but are computationally intensive and not suitable for parametric studies where the detailed treatment of the mass and energy transport are unnecessary. The second path extends a model previously documented at ICES meetings that attempted to predict maximum survivable fires aboard spacecraft. This one-dimensional model simplifies the heat and mass transfer, as well as toxic species production, from a fire. These simplifications result in a code that is faster and more suitable for parametric studies (having already been used to help in the hatch design of the Multi-Purpose Crew Vehicle, MPCV).
NASA Astrophysics Data System (ADS)
Pesci, Arianna; Fabris, Massimo; Conforti, Dario; Loddo, Fabiana; Baldi, Paolo; Anzidei, Marco
2007-05-01
This work deals with the integration of different surveying methodologies for the definition of very accurate Digital Terrain Models (DTMs) and/or Digital Surface Models (DSMs): in particular, aerial digital photogrammetry and terrestrial laser scanning were used to survey the Vesuvio volcano, allowing total coverage of the internal cone and surroundings (the whole surveyed area was about 3 km × 3 km). The possibility of reaching a very high precision, especially with the laser scanner data set, allowed a detailed description of the morphology of the volcano. Comparison of models obtained in repeated surveys yields a detailed map of residuals, providing a data set that can be used for detailed studies of the morphological evolution. Moreover, the reflectivity information, highly correlated with material properties, allows for the measurement and quantification of some morphological variations in areas where structural discontinuities and displacements are present.
A multi-objective programming model for assessment the GHG emissions in MSW management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavrotas, George, E-mail: mavrotas@chemeng.ntua.gr; Skoulaxinou, Sotiria; Gakis, Nikos
2013-09-15
Highlights:
• The multi-objective multi-period optimization model.
• The solution approach for the generation of the Pareto front with mathematical programming.
• The very detailed description of the model (decision variables, parameters, equations).
• The use of the IPCC 2006 guidelines for landfill emissions (first order decay model) in the mathematical programming formulation.
Abstract: In this study a multi-objective mathematical programming model is developed for taking GHG emissions into account in Municipal Solid Waste (MSW) management. Mathematical programming models are often used for structural, design and operational optimization of various systems (energy, supply chain, processes, etc.). Over the last twenty years they have been used all the more often in Municipal Solid Waste (MSW) management in order to provide optimal solutions, with cost being the usual driver of the optimization. In our work we consider GHG emissions as an additional criterion, aiming at a multi-objective approach. The Pareto front (cost vs. GHG emissions) of the system is generated using an appropriate multi-objective method. This information is essential to the decision maker, who can explore the trade-offs in the Pareto curve and select the most preferred among the Pareto optimal solutions. In the present work a detailed multi-objective, multi-period mathematical programming model is developed in order to describe the waste management problem. Apart from the bi-objective approach, the major innovations of the model are (1) the detailed modeling considering 34 materials and 42 technologies, (2) the detailed calculation of the energy content of the various streams based on the detailed material balances, and (3) the incorporation of the IPCC guidelines for the CH4 generated in landfills (first order decay model). The equations of the model are described in full detail. Finally, the whole approach is illustrated with a case study referring to the application of the model in a Greek region.
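As an illustration of the first order decay (FOD) element mentioned in the highlights, a minimal sketch of an IPCC-2006-style FOD calculation follows; the decay constant, CH4 fraction, and deposit series are hypothetical, and the paper's full multi-material formulation is not reproduced.

```python
import math

def fod_ch4(deposits, k=0.05, F=0.5):
    """IPCC 2006 first-order-decay sketch: yearly CH4 generated (Gg) from a
    series of yearly decomposable-DOC deposits (Gg DDOCm). k: decay rate
    constant, F: CH4 fraction in landfill gas. Illustrative parameter values."""
    ddoc_accum = 0.0
    ch4 = []
    for dep in deposits:
        decomposed = ddoc_accum * (1.0 - math.exp(-k))   # DDOCm decomposed this year
        ddoc_accum = dep + ddoc_accum * math.exp(-k)     # remaining stock + new deposit
        ch4.append(decomposed * F * 16.0 / 12.0)         # C mass -> CH4 mass
    return ch4

print(fod_ch4([10.0] * 5))   # constant deposits over five years
```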
Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.
2011-01-01
Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.
Inhibitor-based validation of a homology model of the active-site of tripeptidyl peptidase II.
De Winter, Hans; Breslin, Henry; Miskowski, Tamara; Kavash, Robert; Somers, Marijke
2005-04-01
A homology model of the active site region of tripeptidyl peptidase II (TPP II) was constructed based on the crystal structures of four subtilisin-like templates. The resulting model was subsequently validated by judging expectations of the model versus observed activities for a broad set of prepared TPP II inhibitors. The structure-activity relationships observed for the prepared TPP II inhibitors correlated nicely with the structural details of the TPP II active site model, supporting the validity of this model and its usefulness for structure-based drug design and pharmacophore searching experiments.
The role of a detailed aqueous phase source release model in the LANL area G performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vold, E.L.; Shuman, R.; Hollis, D.K.
1995-12-31
A preliminary draft of the Performance Assessment for the Los Alamos National Laboratory (LANL) low-level radioactive waste disposal facility at Area G is currently being completed as required by Department of Energy orders. A detailed review of the inventory data base records and the existing models for source release led to the development of a new modeling capability to describe the liquid phase transport from the waste package volumes. Nuclide quantities are sorted into four waste package release categories for modeling: rapid release, soil, concrete/sludge, and corrosion. Geochemistry for the waste packages was evaluated in terms of the equilibrium coefficients, Kds, and elemental solubility limits, Csl, interpolated from the literature. Percolation calculations for the base case closure cover show a highly skewed distribution with an average of 4 mm/yr percolation from the disposal unit bottom. The waste release model is based on a compartment representation of the package efflux, and depends on package size, percolation rate or Darcy flux, retardation coefficient, and moisture content.
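A minimal sketch of how the quantities named above (Darcy flux, Kd-based retardation, moisture content, package size) can combine in a compartment-type release rate; the functional form and parameter values are illustrative assumptions, not the Area G model itself.

```python
def retardation(kd_mL_g, bulk_density_g_cm3=1.5, moisture=0.1):
    """Linear-equilibrium retardation factor, R = 1 + (rho_b / theta) * Kd."""
    return 1.0 + (bulk_density_g_cm3 / moisture) * kd_mL_g

def compartment_release_rate(inventory_Ci, darcy_mm_yr, thickness_m,
                             kd_mL_g, moisture=0.1):
    """First-order leach rate from a well-mixed waste-package compartment
    (sketch): fraction removed per year ~ q / (theta * d * R)."""
    R = retardation(kd_mL_g, moisture=moisture)
    q_m_yr = darcy_mm_yr / 1000.0
    rate_per_yr = q_m_yr / (moisture * thickness_m * R)
    return inventory_Ci * rate_per_yr    # Ci/yr leaving the compartment

# 4 mm/yr percolation (base-case average quoted above); Kd is hypothetical:
print(compartment_release_rate(1.0, darcy_mm_yr=4.0, thickness_m=2.0, kd_mL_g=10.0))
```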
Technoeconomic Modeling of Battery Energy Storage in SAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiOrio, Nicholas; Dobos, Aron; Janzou, Steven
Detailed, comprehensive lead-acid and lithium-ion battery models have been integrated with photovoltaic models to allow the System Advisor Model (SAM) to predict the performance and economic benefit of behind-the-meter storage. In a system with storage, excess PV energy can be saved until later in the day when PV production has fallen, or until times of peak demand when it is more valuable. Complex dispatch strategies can be developed to leverage storage to reduce energy consumption or power demand based on the utility rate structure. This document describes the details of the battery performance and economic models in SAM.
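As a sketch of the behind-the-meter idea described above, a greedy dispatch rule in Python: charge from excess PV, discharge at high-price hours. This is an illustrative assumption, not SAM's actual battery dispatch algorithm; all parameter values are hypothetical.

```python
def dispatch(pv, load, price, capacity=10.0, power=5.0, eff=0.95):
    """Greedy behind-the-meter dispatch sketch (kW, kWh, hourly steps):
    charge from excess PV, discharge when load exceeds PV and price is high.
    Returns the net grid draw per hour."""
    soc, grid = 0.0, []
    peak = sorted(price)[int(0.75 * len(price))]   # top-quartile price threshold
    for p, l, c in zip(pv, load, price):
        net = l - p
        if net < 0:                                # excess PV -> charge
            charge = min(-net, power, (capacity - soc) / eff)
            soc += charge * eff
            net += charge
        elif c >= peak and soc > 0:                # peak price -> discharge
            discharge = min(net, power, soc * eff)
            soc -= discharge / eff
            net -= discharge
        grid.append(net)
    return grid

print(dispatch([0, 5, 8, 2, 0], [3, 3, 3, 4, 6], [0.1, 0.1, 0.1, 0.3, 0.3]))
```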
The standard data model approach to patient record transfer.
Canfield, K.; Silva, M.; Petrucci, K.
1994-01-01
This paper develops an approach to electronic data exchange of patient records from Ambulatory Encounter Systems (AESs). This approach assumes that the AES is based upon a standard data model. The data modeling standard used here is IDEF1X for Entity/Relationship (E/R) modeling. Each site that uses a relational database implementation of this standard data model (or a subset of it) can exchange very detailed patient data with other such sites using industry standard tools and without excessive programming effort. This design is detailed below for a demonstration project between the research-oriented geriatric clinic at the Baltimore Veterans Affairs Medical Center (BVAMC) and the Laboratory for Healthcare Informatics (LHI) at the University of Maryland. PMID:7949973
Modeling and visualizing borehole information on virtual globes using KML
NASA Astrophysics Data System (ADS)
Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing
2014-01-01
Advances in virtual globes and Keyhole Markup Language (KML) are providing Earth scientists with universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts, and tube models representing strata. Subsequently, a level-of-detail-based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, was developed to automatically convert borehole data into KML documents. A case study using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable to visualizing, integrating and disseminating borehole information on the Internet. The method we have developed has potential use in the societal service of geological information.
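To make the KML-generation step concrete, here is a minimal sketch that writes borehole collar locations as KML point placemarks (the lowest LOD described above) using Python's standard library; the record format is a hypothetical stand-in for the paper's standard borehole database, and this is not the Borehole2KML implementation.

```python
import xml.etree.ElementTree as ET

def boreholes_to_kml(boreholes, path="boreholes.kml"):
    """Write borehole collar locations as KML point placemarks.
    boreholes: iterable of (name, lon, lat, depth_m) tuples."""
    ns = "http://www.opengis.net/kml/2.2"
    kml = ET.Element("{%s}kml" % ns)
    doc = ET.SubElement(kml, "{%s}Document" % ns)
    for name, lon, lat, depth_m in boreholes:
        pm = ET.SubElement(doc, "{%s}Placemark" % ns)
        ET.SubElement(pm, "{%s}name" % ns).text = name
        ET.SubElement(pm, "{%s}description" % ns).text = f"depth: {depth_m} m"
        pt = ET.SubElement(pm, "{%s}Point" % ns)
        ET.SubElement(pt, "{%s}coordinates" % ns).text = f"{lon},{lat},0"
    ET.ElementTree(kml).write(path, xml_declaration=True, encoding="UTF-8")

boreholes_to_kml([("BH-001", 121.47, 31.23, 35.5)])  # hypothetical Shanghai borehole
```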
SHAWNEE LIME/LIMESTONE SCRUBBING COMPUTERIZED DESIGN/COST-ESTIMATE MODEL USERS MANUAL
The manual gives a general description of the Shawnee lime/limestone scrubbing computerized design/cost-estimate model and detailed procedures for using it. It describes all inputs and outputs, along with available options. The model, based on Shawnee Test Facility scrubbing data...
Wang, Yan-Cang; Yang, Gui-Jun; Zhu, Jin-Shan; Gu, Xiao-He; Xu, Peng; Liao, Qin-Hong
2014-07-01
To improve the estimation accuracy of the soil organic matter content of north fluvo-aquic soil, wavelet transform technology was introduced. The soil samples were collected from Tongzhou district and Shunyi district in Beijing, and the data source is soil hyperspectral data obtained under laboratory conditions. First, the discrete wavelet transform efficiently decomposes the hyperspectral reflectance into approximation coefficients and detail coefficients. Then, the correlation between the approximation coefficients, detail coefficients and organic matter content was analyzed, and the bands sensitive to organic matter were screened. Finally, models were established to estimate the soil organic content using partial least squares regression (PLSR). Results show that the NIR bands contribute more than the visible bands to the organic matter content estimation models; the ability of the approximation coefficients to estimate organic matter content is better than that of the detail coefficients; the estimation precision of the detail coefficients for soil organic matter content decreases as the spectral resolution becomes lower; compared with the three commonly used transforms of soil spectral reflectance, the wavelet transform can improve the ability of soil spectra to estimate organic content; and the accuracy of the best models established from the approximation or detail coefficients is high, with the coefficient of determination (R2) and root mean square error (RMSE) of the best approximation-coefficient model being 0.722 and 0.221, respectively, and the R2 and RMSE of the best detail-coefficient model being 0.670 and 0.255, respectively.
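A minimal sketch of the pipeline described above, assuming the PyWavelets and scikit-learn packages: one-level DWT of each spectrum, correlation-based screening of coefficients, then PLSR. The data here are random stand-ins, and the screening rule is a simplification of the paper's band selection.

```python
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
spectra = rng.random((60, 512))   # stand-in for lab soil reflectance spectra
som = rng.random(60) * 3          # stand-in for organic matter content (%)

# One-level DWT per spectrum: approximation (low-freq) and detail (high-freq)
cA, cD = pywt.dwt(spectra, "db4", axis=1)

def top_corr(X, y, n=30):
    # Screen "sensitive bands": keep the n coefficients most correlated with SOM
    r = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return X[:, np.argsort(r)[-n:]]

X = top_corr(cA, som)
pls = PLSRegression(n_components=5).fit(X, som)
pred = pls.predict(X).ravel()
print("RMSE:", np.sqrt(np.mean((pred - som) ** 2)))
```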
NASA Astrophysics Data System (ADS)
Pillai, D.; Gerbig, C.; Kretschmer, R.; Beck, V.; Karstens, U.; Neininger, B.; Heimann, M.
2012-01-01
We present simulations of atmospheric CO2 concentrations provided by two modeling systems, run at high spatial resolution: the Eulerian-based Weather Research Forecasting (WRF) model and the Lagrangian-based Stochastic Time-Inverted Lagrangian Transport (STILT) model, both of which are coupled to a diagnostic biospheric model, the Vegetation Photosynthesis and Respiration Model (VPRM). The consistency of the simulations is assessed with special attention paid to the details of horizontal as well as vertical transport and mixing of CO2 in the atmosphere. The dependence of the model mismatch (Eulerian vs. Lagrangian) on the models' spatial resolution is further investigated. A case study using airborne measurements, during which both models showed large deviations from each other, is analyzed in detail as an extreme case. Using aircraft observations and pulse release simulations, we identified differences in the representation of details of the interaction between turbulent mixing and advection through wind shear as the main cause of discrepancies between WRF and STILT transport at spatial resolutions of 2 and 6 km. Based on observations and inter-model comparisons of atmospheric CO2 concentrations, we show that a refinement of the parameterization of turbulent velocity variance and Lagrangian time-scale in STILT is needed to achieve a better match between the Eulerian and the Lagrangian transport at such high spatial resolutions (e.g. 2 and 6 km). Nevertheless, the inter-model differences in simulated CO2 time series for the tall tower observatory at Ochsenkopf in Germany are about a factor of two smaller than the model-data mismatch and about a factor of three smaller than the mismatch between current global model simulations and the data. This suggests that it is reasonable to use STILT as an adjoint model of WRF atmospheric transport.
Space-based Scintillation Nowcasting with the Communications/Navigation Outage Forecast System
NASA Astrophysics Data System (ADS)
Groves, K.; Starks, M.; Beach, T.; Basu, S.
2008-12-01
The Air Force Research Laboratory's Communication/Navigation Outage Forecast System (C/NOFS) fuses ground- and space-based data in a near real-time physics-based model aimed at forecasting and nowcasting equatorial scintillations and their impacts on satellite communications and navigation. A key component of the system is the C/NOFS satellite that was launched into a low-inclination (13°) elliptical orbit (400 km × 850 km) in April 2008. The satellite contains six sensors to measure space environment parameters including electron density and temperature, ion density and drift, electric and magnetic fields and neutral wind, as well as a tri-band radio beacon transmitting at 150 MHz, 400 MHz and 1067 MHz. Scintillation nowcasts are derived from measuring the one-dimensional in situ electron density fluctuations and subsequently modeling the propagation environment for satellite-to-ground radio links. The modeling process requires a number of simplifying assumptions regarding the three-dimensional structure of the ionosphere and the results are readily validated by comparisons with ground-based measurements of the satellite's tri-band beacon signals. In mid-September 2008 a campaign to perform detailed analyses of space-based scintillation nowcasts with numerous ground observations was conducted in the vicinity of Kwajalein Atoll, Marshall Islands. To maximize the collection of ground-truth data, the ALTAIR radar was employed to obtain detailed information on the spatial structure of the ionosphere during the campaign and to aid the improvement of space-based nowcasting algorithms. A comparison of these results will be presented; it appears that detailed information on the electron density structure is a limiting factor in modeling the scintillation environment from in situ observations.
Model documentation report: Commercial Sector Demand Module of the National Energy Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-01-01
This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components. The NEMS Commercial Sector Demand Module is a simulation tool based upon economic and engineering relationships that models commercial sector energy demands at the nine Census Division level of detail for eleven distinct categories of commercial buildings. Commercial equipment selections are performed for the major fuels of electricity, natural gas, and distillate fuel, for the major services of space heating, space cooling, water heating, ventilation, cooking, refrigeration, and lighting. The algorithm also models demand for the minor fuels of residual oil, liquefied petroleum gas, steam coal, motor gasoline, and kerosene, the renewable fuel sources of wood and municipal solid waste, and the minor services of office equipment. Section 2 of this report discusses the purpose of the model, detailing its objectives, primary input and output quantities, and the relationship of the Commercial Module to the other modules of the NEMS system. Section 3 of the report describes the rationale behind the model design, providing insights into further assumptions utilized in the model development process to this point. Section 3 also reviews alternative commercial sector modeling methodologies drawn from existing literature, providing a comparison to the chosen approach. Section 4 details the model structure, using graphics and text to illustrate model flows and key computations.
A Detailed Picture of the (93) Minerva Triple System
NASA Astrophysics Data System (ADS)
Marchis, F.; Descamps, P.; Dalba, P.; Enriquez, J. E.; Durech, J.; Emery, J. P.; Berthier, J.; Vachier, F.; Merlbourne, J.; Stockton, A. N.; Fassnacht, C. D.; Dupuy, T. J.
2011-10-01
We developed an orbital model of the satellites of (93) Minerva based on Keck II AO observations recorded in 2009 and a mutual event between one moon and the primary detected in March 2010. Using new lightcurves we found an approximate ellipsoid shape model for the primary. With a reanalysis of the IRAS data, we derived a preliminary bulk density of 1.5±0.2 g/cc. We will present a detailed analysis of the system, including a 3D shape model of the (93) Minerva primary derived by combining our AO observations, lightcurves, and stellar occultations.
Wall Paint Exposure Assessment Model (WPEM)
WPEM uses mathematical models developed from small chamber data to estimate the emissions of chemicals from oil-based (alkyd) and latex wall paint which is then combined with detailed use, workload and occupancy data to estimate user exposure.
On the Connection Between One-and Two-Equation Models of Turbulence
NASA Technical Reports Server (NTRS)
Menter, F. R.; Rai, Man Mohan (Technical Monitor)
1994-01-01
A formalism will be presented that allows the transformation of two-equation eddy viscosity turbulence models into one-equation models. The transformation is based on an assumption that is widely accepted over a large range of boundary layer flows and that has been shown to actually improve predictions when incorporated into two-equation models of turbulence. Based on that assumption, a new one-equation turbulence model will be derived. The new model will be tested in great detail against a previously introduced one-equation model and against its parent two-equation model.
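The abstract does not state the assumption explicitly, but the widely accepted boundary-layer relation underlying this kind of two- to one-equation reduction in Menter's published derivation is Bradshaw's assumption; a sketch in LaTeX (with a_1 an empirical constant):

```latex
% Bradshaw's assumption: turbulent shear stress proportional to turbulent
% kinetic energy in the boundary layer.
\begin{align}
  \tau &= \rho\, a_1 k, \qquad a_1 \approx 0.31, \\
  \tau &= \mu_t\,\frac{\partial u}{\partial y}
  \;\;\Rightarrow\;\;
  \nu_t = \frac{a_1 k}{\partial u / \partial y},
\end{align}
% so the two transport equations (e.g., for k and its companion variable)
% can be collapsed into a single transport equation for the eddy viscosity.
```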
Model Based Mission Assurance: Emerging Opportunities for Robotic Systems
NASA Technical Reports Server (NTRS)
Evans, John W.; DiVenti, Tony
2016-01-01
The emergence of Model Based Systems Engineering (MBSE) in a Model Based Engineering framework has created new opportunities to improve effectiveness and efficiency across the assurance functions. The MBSE environment supports not only system architecture development, but also Systems Safety, Reliability and Risk Analysis concurrently in the same framework. Linking to detailed design will further improve assurance capabilities to support failure avoidance and mitigation in flight systems. This is also leading to new assurance functions, including model assurance and management of uncertainty in the modeling environment. Further, assurance cases, structured hierarchical arguments or models, are emerging as a basis for supporting a comprehensive viewpoint in which to support Model Based Mission Assurance (MBMA).
A Constructivist-Based Model for the Teaching of Dissolution of Gas in a Liquid
ERIC Educational Resources Information Center
Calik, Muammer; Ayas, Alipasa; Coll, Richard K.
2006-01-01
In this article we present details of a four-step constructivist-based teaching strategy, which helps students understand the dissolution of a gas in a liquid. The model derived from Ayas (1995) involves elicitation of pre-existing ideas, focusing on the target concept, challenging students' ideas, and applying newly constructed ideas to similar…
Issues, concerns, and initial implementation results for space based telerobotic control
NASA Technical Reports Server (NTRS)
Lawrence, D. A.; Chapel, J. D.; Depkovich, T. M.
1987-01-01
Telerobotic control for space based assembly and servicing tasks presents many problems in system design. Traditional force reflection teleoperation schemes are not well suited to this application, and the approaches to compliance control via computer algorithms have yet to see significant testing and comparison. These observations are discussed in detail, as well as the concerns they raise for imminent design and testing of space robotic systems. As an example of the detailed technical work yet to be done before such systems can be specified, a particular approach to providing manipulator compliance is examined experimentally and through modeling and analysis. This yields some initial insight into the limitations and design trade-offs for this class of manipulator control schemes. Implications of this investigation for space based telerobots are discussed in detail.
Quantifying the Global Nitrous Oxide Emissions Using a Trait-based Biogeochemistry Model
NASA Astrophysics Data System (ADS)
Zhuang, Q.; Yu, T.
2017-12-01
Nitrogen is an essential element of the global biogeochemical cycle. It is a key nutrient for organisms, and N compounds including nitrous oxide significantly influence the global climate. The activities of bacteria and archaea are responsible for nitrification and denitrification in a wide variety of environments, so microbes play an important role in the nitrogen cycle in soils. To date, most existing process-based models have treated nitrification and denitrification as chemical reactions driven by soil physical variables such as soil temperature and moisture; in general, the effect of microbes on N cycling has not been modeled in sufficient detail. Soil organic carbon also affects the N cycle because it supplies energy to microbes. In this study, a trait-based biogeochemistry model quantifying N2O emissions from terrestrial ecosystems is developed based on an extant process-based model, TEM (Terrestrial Ecosystem Model). Specifically, the improvements to TEM include: (1) incorporating the N fixation process to account for the inflow of N from the atmosphere to the biosphere; (2) implementing the effects of microbial dynamics on the nitrification process; and (3) fully considering the effects of carbon cycling on nitrogen cycling, following the principles of carbon-nitrogen stoichiometry in soils, plants, and microbes. The difference between simulations with and without the consideration of bacterial activity lies between 5% and 25%, depending on climate conditions and vegetation types. The trait-based module allows a more detailed estimation of global N2O emissions.
1991-09-01
constant data into the gaining base’s computer records. Among the data elements to be loaded, the 1XT434 image contains the level detail effective date...the mission support effective date, and the PBR override (19:19-203). In conjunction with the 1XT434, the Mission Change Parameter Image (Constant...the gaining base (19:19-208). The level detail effective date establishes the date the MCDDFR and MCDDR "are considered by the requirements computation
Physiologically Based Pharmacokinetic Model for Long-Circulating Inorganic Nanoparticles.
Liang, Xiaowen; Wang, Haolu; Grice, Jeffrey E; Li, Li; Liu, Xin; Xu, Zhi Ping; Roberts, Michael S
2016-02-10
A physiologically based pharmacokinetic model was developed to accurately characterize and predict the in vivo fate of long-circulating inorganic nanoparticles (NPs). The model is built on direct visualization of NP disposition details at the organ and cellular level. It was validated with multiple data sets, indicating robust inter-route and interspecies predictive capability. We suggest that the biodistribution of long-circulating inorganic NPs is determined by the uptake and release of NPs by phagocytic cells in target organs.
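A minimal flow-limited PBPK sketch in Python (SciPy assumed): blood exchanging with a liver compartment plus saturable phagocytic (MPS) uptake, which is the qualitative mechanism invoked above for long-circulating NPs. All parameter values are hypothetical, and this is not the published model.

```python
import numpy as np
from scipy.integrate import odeint

# Hypothetical parameters: liver blood flow (mL/min), blood and liver volumes
# (mL), tissue:blood partition coefficient, and saturable-uptake constants.
Q_li, V_bl, V_li, P_li = 90.0, 3.0, 1.8, 0.3
k_max, K_50 = 0.05, 10.0    # 1/min, ug/mL

def rhs(y, t):
    c_bl, c_li, a_mps = y
    uptake = k_max * c_li / (K_50 + c_li) * V_li           # ug/min into MPS cells
    dc_bl = Q_li * (c_li / P_li - c_bl) / V_bl             # venous return - arterial
    dc_li = (Q_li * (c_bl - c_li / P_li) - uptake) / V_li  # perfusion - sequestration
    return [dc_bl, dc_li, uptake]                          # a_mps is cumulative uptake

t = np.linspace(0, 240, 241)                   # 4 h after an IV bolus
y = odeint(rhs, [100.0 / V_bl, 0.0, 0.0], t)   # 100 ug dose injected into blood
print("fraction sequestered in MPS at 4 h:", y[-1, 2] / 100.0)
```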
Angular Random Walk Estimation of a Time-Domain Switching Micromachined Gyroscope
2016-10-19
[Table-of-contents residue: 2. Parametric system identification based on time-domain switching; 3. Finite element modeling of resonator.] This section details basic finite element modeling of the resonator used with the TDSMG. While it... Based on finite element simulations of the employed resonator, it is found that the effect of thermomechanical noise is on par with 10 ps of timing
Temporal Subtraction of Digital Breast Tomosynthesis Images for Improved Mass Detection
2009-11-01
imaging using two distinct methods [7-15]: mathematically based models defined by geometric primitives and voxelized models derived from real human...trees to complete them. We also plan to add further detail by defining the Cooper's ligaments using geometrical NURBS surfaces. Realistic...generated model for the coronary arterial tree based on multislice CT and morphometric data," Medical Imaging 2006: Physics of Medical Imaging 6142
Renton, Michael
2011-01-01
Background and aims: Simulations that integrate sub-models of important biological processes can be used to ask questions about optimal management strategies in agricultural and ecological systems. Building sub-models with more detail and aiming for greater accuracy and realism may seem attractive, but is likely to be more expensive and time-consuming and result in more complicated models that lack transparency. This paper illustrates a general integrated approach for constructing models of agricultural and ecological systems that is based on the principle of starting simple and then directly testing for the need to add additional detail and complexity.
Methodology: The approach is demonstrated using LUSO (Land Use Sequence Optimizer), an agricultural system analysis framework based on simulation and optimization. A simple sensitivity analysis and functional perturbation analysis is used to test to what extent LUSO's crop–weed competition sub-model affects the answers to a number of questions at the scale of the whole farming system regarding optimal land-use sequencing strategies and resulting profitability.
Principal results: The need for accuracy in the crop–weed competition sub-model within LUSO depended to a small extent on the parameter being varied, but more importantly and interestingly on the type of question being addressed with the model. Only a small part of the crop–weed competition model actually affects the answers to these questions.
Conclusions: This study illustrates an example application of the proposed integrated approach for constructing models of agricultural and ecological systems based on testing whether complexity needs to be added to address particular questions of interest. We conclude that this example clearly demonstrates the potential value of the general approach. Advantages of this approach include minimizing costs and resources required for model construction, keeping models transparent and easy to analyse, and ensuring the model is well suited to address the question of interest. PMID:22476477
Haufe, Stefan; Huang, Yu; Parra, Lucas C
2015-08-01
In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, six-tissue-type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.
Biomechanical Modeling of the Human Head
2017-10-03
…between model predictions and experimental data. This report details model calibration for all materials identified in models of a human head and… [List-of-figures residue: stress-strain data for the pia mater and dura mater (human subject), experimental data originally presented in [28]…; hyperelastic model fits to experimental data from [59]; comparison of…]
Three Dimensional Modeling via Photographs for Documentation of a Village Bath
NASA Astrophysics Data System (ADS)
Balta, H. B.; Hamamcioglu-Turan, M.; Ocali, O.
2013-07-01
The aim of this study is to support the conceptual discussions of architectural restoration with three-dimensional modeling of monuments based on photogrammetric survey. In this study, a 16th century village bath in Ulamış, Seferihisar, İzmir is modeled for documentation. Ulamış is one of the historical villages in which the Turkish population first settled in the Seferihisar-Urla region. The methodology was tested on an antique monument: a bath with a cubical form. Within the limits of this study, only the exterior of the bath was modeled. The presentation scale for the bath was determined as 1/50, considering the needs of designing structural and architectural interventions within the scope of a restoration project. The three-dimensional model produced is a realistic document presenting the present situation of the ruin. Traditional plan, elevation and perspective drawings may be produced from the model, in addition to realistic textured renderings and wireframe representations. The model developed in this study provides an opportunity for presenting photorealistic details of historical morphologies in scale. Compared to conventional drawings, renders based on 3D models make it easier to convey architectural details such as color, material and texture. From these documents, relatively more detailed restitution hypotheses can be developed and intervention decisions can be taken. Finally, the principles derived from the case study can be used for 3D documentation of historical structures with irregular surfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Dongsu; Cox, Sam J.; Cho, Heejin
With increased use of variable refrigerant flow (VRF) systems in the U.S. building sector, interest in the capability and rationality of various building energy modeling tools to simulate VRF systems is rising. This paper presents detailed procedures for model calibration of a VRF system with a dedicated outdoor air system (DOAS) by comparison against detailed measured data from an occupancy-emulated small office building. The building energy model is first developed based on as-built drawings and the available building and system characteristics. The whole-building energy modeling tool used for the study is U.S. DOE's EnergyPlus version 8.1. The initial model is then calibrated with the hourly measured data from the target building and VRF-DOAS system. In the detailed calibration procedure of the VRF-DOAS, the original EnergyPlus source code is modified to enable the modeling of the specific VRF-DOAS installed in the building. After proper calibration during cooling and heating seasons, the VRF-DOAS model can reasonably predict the performance of the actual VRF-DOAS system based on the criteria from ASHRAE Guideline 14-2014. The calibration results show an hourly CV-RMSE of 15.7% and an NMBE of 3.8%, by which the model is deemed calibrated. As a result, the whole-building energy usage after calibration of the VRF-DOAS model is 1.9% (78.8 kWh) lower than the measurements during the comparison period.
Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko
2012-11-01
To compare the performance of model-based iterative reconstruction (MBIR) with that of standard filtered back projection (FBP) for measuring vascular wall attenuation, we subjected 9 vascular models (actual wall attenuation, 89 HU) with wall thicknesses of 0.5, 1.0, or 1.5 mm, filled with contrast material of 275, 396, or 542 HU, to scanning with 64-detector computed tomography (CT); we then reconstructed images using MBIR and FBP (Bone and Detail kernels) and measured wall attenuation at the center of the wall for each model. We performed attenuation measurements for each model and additional supportive measurements using a differentiation curve. We analyzed statistics using analyses of variance with repeated measures. Using the Bone kernel, the standard deviation of the measurement exceeded 30 HU under most conditions. In measurements at the wall center, the attenuation values obtained using MBIR were comparable to or significantly closer to the actual wall attenuation than those acquired using the Detail kernel. Using differentiation curves, we could measure attenuation for models with walls of 1.0- or 1.5-mm thickness using MBIR but only those of 1.5-mm thickness using the Detail kernel. We detected no significant differences among the attenuation values of the vascular walls of either thickness (MBIR, P=0.1606) or among the 3 densities of intravascular contrast material (MBIR, P=0.8185; Detail kernel, P=0.0802). Compared with FBP, MBIR reduces both reconstruction blur and image noise simultaneously, facilitates recognition of vascular wall boundaries, and can improve accuracy in measuring wall attenuation. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Communications network design and costing model users manual
NASA Technical Reports Server (NTRS)
Logan, K. P.; Somes, S. S.; Clark, C. A.
1983-01-01
The information and procedures needed to exercise the communications network design and costing model for performing network analysis are presented. Specific procedures are included for executing the model on the NASA Lewis Research Center IBM 3033 computer. The concepts, functions, and data bases relating to the model are described. Model parameters and their format specifications for running the model are detailed.
Layer-Based Approach for Image Pair Fusion.
Son, Chang-Hwan; Zhang, Xiao-Ping
2016-04-20
Recently, image pairs, such as noisy and blurred images or infrared and noisy images, have been considered as a solution to provide high-quality photographs under low lighting conditions. In this paper, a new method for decomposing the image pairs into two layers, i.e., the base layer and the detail layer, is proposed for image pair fusion. In the case of infrared and noisy images, simple naive fusion leads to unsatisfactory results due to the discrepancies in brightness and image structures between the image pair. To address this problem, a local contrast-preserving conversion method is first proposed to create a new base layer of the infrared image, which can have visual appearance similar to another base layer such as the denoised noisy image. Then, a new way of designing three types of detail layers from the given noisy and infrared images is presented. To estimate the noise-free and unknown detail layer from the three designed detail layers, the optimization framework is modeled with residual-based sparsity and patch redundancy priors. To better suppress the noise, an iterative approach that updates the detail layer of the noisy image is adopted via a feedback loop. This proposed layer-based method can also be applied to fuse another noisy and blurred image pair. The experimental results show that the proposed method is effective for solving the image pair fusion problem.
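A minimal sketch of the base/detail decomposition idea, assuming NumPy/SciPy: a Gaussian low-pass provides the base layer, the residual the detail layer, and the fused image mixes the denoised base with the structure-rich infrared detail. The paper's contrast-preserving conversion and residual-sparsity optimization are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_pair(noisy, infrared, sigma=3.0, w_detail=0.6):
    """Base/detail fusion sketch: low-pass = base layer, residual = detail layer.
    Keep the (crudely denoised) base of the noisy image; mix detail layers,
    weighting the clean infrared structure more heavily."""
    base_n = gaussian_filter(noisy, sigma)       # smoothed base of noisy image
    base_i = gaussian_filter(infrared, sigma)
    detail_i = infrared - base_i                 # clean structure from IR
    detail_n = noisy - base_n                    # noisy detail (down-weighted)
    return base_n + w_detail * detail_i + (1 - w_detail) * detail_n

rng = np.random.default_rng(2)
scene = rng.random((64, 64))
fused = fuse_pair(scene + 0.1 * rng.standard_normal((64, 64)), scene)
print(fused.shape)
```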
User modeling techniques for enhanced usability of OPSMODEL operations simulation software
NASA Technical Reports Server (NTRS)
Davis, William T.
1991-01-01
The PC-based OPSMODEL operations software for modeling and simulation of space station crew activities supports engineering and cost analyses and operations planning. Using top-down modeling, the level of detail required in the data base can be limited to that commensurate with the results required of any particular analysis. To perform a simulation, a resource environment consisting of locations, crew definition, equipment, and consumables is first defined. Activities to be simulated are then defined as operations and scheduled as desired. These operations are defined within a 1000-level priority structure. A simulation in OPSMODEL, then, consists of the following: user-defined, user-scheduled operations executing within an environment of user-defined resource and priority constraints. Techniques for prioritizing operations to realistically model a representative daily scenario of on-orbit space station crew activities are discussed. The large number of priority levels allows priorities to be assigned commensurate with the detail necessary for a given simulation. Several techniques for realistic modeling of day-to-day work carryover are also addressed.
Modelling decremental ramps using 2- and 3-parameter "critical power" models.
Morton, R Hugh; Billat, Veronique
2013-01-01
The "Critical Power" (CP) model of human bioenergetics provides a valuable way to identify both limits of tolerance to exercise and mechanisms that underpin that tolerance. It applies principally to cycling-based exercise, but with suitable adjustments for analogous units it can be applied to other exercise modalities; in particular to incremental ramp exercise. It has not yet been applied to decremental ramps which put heavy early demand on the anaerobic energy supply system. This paper details cycling-based bioenergetics of decremental ramps using 2- and 3-parameter CP models. It derives equations that, for an individual of known CP model parameters, define those combinations of starting intensity and decremental gradient which will or will not lead to exhaustion before ramping to zero; and equations that predict time to exhaustion on those decremental ramps that will. These are further detailed with suitably chosen numerical and graphical illustrations. These equations can be used for parameter estimation from collected data, or to make predictions when parameters are known.
Representing Micro-Macro Linkages by Actor-Based Dynamic Network Models
Snijders, Tom A.B.; Steglich, Christian E.G.
2014-01-01
Stochastic actor-based models for network dynamics have the primary aim of statistical inference about processes of network change, but may be regarded as a kind of agent-based models. Similar to many other agent-based models, they are based on local rules for actor behavior. Different from many other agent-based models, by including elements of generalized linear statistical models they aim to be realistic detailed representations of network dynamics in empirical data sets. Statistical parallels to micro-macro considerations can be found in the estimation of parameters determining local actor behavior from empirical data, and the assessment of goodness of fit from the correspondence with network-level descriptives. This article studies several network-level consequences of dynamic actor-based models applied to represent cross-sectional network data. Two examples illustrate how network-level characteristics can be obtained as emergent features implied by micro-specifications of actor-based models. PMID:25960578
A maintenance and operations cost model for DSN
NASA Technical Reports Server (NTRS)
Burt, R. W.; Kirkbride, H. L.
1977-01-01
A cost model for the DSN is developed which is useful in analyzing the 10-year Life Cycle Cost of the Bent Pipe Project. The philosophy behind the development and the use made of a computer data base are detailed; the applicability of this model to other projects is discussed.
Learning Molecular Behaviour May Improve Student Explanatory Models of the Greenhouse Effect
ERIC Educational Resources Information Center
Harris, Sara E.; Gold, Anne U.
2018-01-01
We assessed undergraduates' representations of the greenhouse effect, based on student-generated concept sketches, before and after a 30-min constructivist lesson. Principal component analysis of features in student sketches revealed seven distinct and coherent explanatory models including a new "Molecular Details" model. After the…
Spacecraft Thermal and Optical Modeling Impacts on Estimation of the GRAIL Lunar Gravity Field
NASA Technical Reports Server (NTRS)
Fahnestock, Eugene G.; Park, Ryan S.; Yuan, Dah-Ning; Konopliv, Alex S.
2012-01-01
We summarize work performed involving thermo-optical modeling of the two Gravity Recovery And Interior Laboratory (GRAIL) spacecraft. We derived several reconciled spacecraft thermo-optical models having varying detail. We used the simplest in calculating SRP acceleration, and used the most detailed to calculate acceleration due to thermal re-radiation. For the latter, we used both the output of pre-launch finite-element-based thermal simulations and downlinked temperature sensor telemetry. The estimation process to recover the lunar gravity field utilizes both a nominal thermal re-radiation acceleration history and an a priori error model derived from that plus an off-nominal history, which bounds parameter uncertainties as informed by sensitivity studies.
Automatic Generation of Building Models with Levels of Detail 1-3
NASA Astrophysics Data System (ADS)
Nguatem, W.; Drauschke, M.; Mayer, H.
2016-06-01
We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start by orienting unsorted image sets (Mayer et al., 2012), compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.
Di Mascolo, Maria; Gouin, Alexia
2013-03-01
The work presented here aims to improve the performance of sterilization services in hospitals. We carried out a survey in a large number of health establishments in the Rhône-Alpes region in France. Based on the results of this survey and a detailed study of a specific service, we have built a generic model. The generic nature of the model relies on a common structure with a high level of detail. This model can be used to improve the performance of a specific sterilization service and/or to dimension its resources. It can also serve for quantitative comparison of performance indicators across sterilization services.
Physical models of collective cell motility: from cell to tissue
NASA Astrophysics Data System (ADS)
Camley, B. A.; Rappel, W.-J.
2017-03-01
In this article, we review physics-based models of collective cell motility. We discuss a range of techniques at different scales, ranging from models that represent cells as simple self-propelled particles to phase field models that can represent a cell’s shape and dynamics in great detail. We also extensively review the ways in which cells within a tissue choose their direction, the statistics of cell motion, and some simple examples of how cell-cell signaling can interact with collective cell motility. This review also covers in more detail selected recent works on collective cell motion of small numbers of cells on micropatterns, in wound healing, and the chemotaxis of clusters of cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huff, Kathryn D.
Component level and system level abstraction of detailed computational geologic repository models have resulted in four rapid computational models of hydrologic radionuclide transport at varying levels of detail. Those models are described, as is their implementation in Cyder, a software library of interchangeable radionuclide transport models appropriate for representing natural and engineered barrier components of generic geology repository concepts. A proof of principle demonstration was also conducted in which these models were used to represent the natural and engineered barrier components of a repository concept in a reducing, homogenous, generic geology. This base case demonstrates integration of the Cyder open source library with the Cyclus computational fuel cycle systems analysis platform to facilitate calculation of repository performance metrics with respect to fuel cycle choices.
Models and signal processing for an implanted ethanol bio-sensor.
Han, Jae-Joon; Doerschuk, Peter C; Gelfand, Saul B; O'Connor, Sean J
2008-02-01
The understanding of drinking patterns leading to alcoholism has been hindered by an inability to unobtrusively measure ethanol consumption over periods of weeks to months in the community environment. An implantable ethanol sensor is under development using microelectromechanical systems technology. For safety and user acceptability issues, the sensor will be implanted subcutaneously and, therefore, measure peripheral-tissue ethanol concentration. Determining ethanol consumption and kinetics in other compartments from the time course of peripheral-tissue ethanol concentration requires sophisticated signal processing based on detailed descriptions of the relevant physiology. A statistical signal processing system based on detailed models of the physiology and using extended Kalman filtering and dynamic programming tools is described which can estimate the time series of ethanol concentration in blood, liver, and peripheral tissue and the time series of ethanol consumption based on peripheral-tissue ethanol concentration measurements.
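To make the estimator concrete, a minimal sketch of an extended Kalman filter for a hypothetical two-compartment (blood/peripheral-tissue) ethanol model follows; the kinetics, parameter values and noise levels are illustrative assumptions, not the authors' published physiology.

```python
# Hedged sketch: EKF for an assumed two-compartment ethanol model; the
# sensor measures peripheral tissue only. All parameters are illustrative.
import numpy as np

Vmax, Km, k_ex = 0.8, 0.4, 0.6        # assumed elimination and exchange rates
dt = 1.0 / 60.0                        # time step (hours)
Q = np.eye(2) * 1e-5                   # process noise covariance
R = np.array([[1e-3]])                 # sensor noise covariance
H = np.array([[0.0, 1.0]])             # measurement picks tissue concentration

def f(x, intake):
    """Euler step of blood (cb) and tissue (ct) concentration dynamics."""
    cb, ct = x
    dcb = -Vmax * cb / (Km + cb) - k_ex * (cb - ct) + intake
    dct = k_ex * (cb - ct)
    return x + dt * np.array([dcb, dct])

def F_jac(x):
    """Jacobian of f; the Michaelis-Menten term makes it state dependent."""
    cb, _ = x
    d_elim = -Vmax * Km / (Km + cb) ** 2
    A = np.array([[d_elim - k_ex, k_ex], [k_ex, -k_ex]])
    return np.eye(2) + dt * A

def ekf_step(x, P, y, intake=0.0):
    x_pred = f(x, intake)                          # predict
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                       # update
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([0.0, 0.0]), np.eye(2) * 0.01
x, P = ekf_step(x, P, y=np.array([0.02]), intake=0.1)
```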
Nutrient Dynamics In Flooded Wetlands. I: Model Development
Wetlands are rich ecosystems recognized for ameliorating floods, improving water quality and providing other ecosystem benefits. In this part of a two-paper sequel, we present a relatively detailed process-based model for nitrogen and phosphorus retention, cycling and removal in...
Analytic Modeling of Insurgencies
2014-08-01
Keywords: Counterinsurgency, Situational Awareness, Civilians, Lanchester. 1. Introduction: Combat modeling is one of the oldest areas of operations research, dating... Army. The ground-breaking work of Lanchester in 1916 [1] marks the beginning of formal models of conflicts, where mathematical formulas and, later... Warfare model [3], which is a Lanchester-based mathematical model (see more details about this model later on), and McCormick's Magic Diamond model [4
NASA Astrophysics Data System (ADS)
Pawłowicz, Joanna A.
2017-10-01
The TLS (Terrestrial Laser Scanning) method may replace traditional building survey methods, e.g. those requiring the use of measuring tapes or range finders. This technology allows digital data to be collected in the form of a point cloud, which can be used to create a 3D model of a building. It also collects data with very high precision, which makes it possible to reproduce all architectural features of a building. This data is applied in reverse engineering to create a 3D model of an object existing in physical space. This study presents the results of research carried out using a point cloud to recreate the architectural features of a historical building with the application of reverse engineering. The research was conducted on a two-storey residential building with a basement and an attic. A veranda with a complicated wooden structure protrudes from the building's façade. The measurements were taken at medium and at the highest resolution using a Leica ScanStation C10 laser scanner. The data obtained was processed using specialist software, which allowed for the application of reverse engineering, especially for reproducing the sculpted details of the veranda. Following digitization, all redundant data was removed from the point cloud and the cloud was subjected to modelling. For testing purposes, a selected part of the veranda was modelled by means of two methods: surface matching and Triangulated Irregular Network (TIN). Both modelling methods were applied to the data collected at medium and at the highest resolution. Creating a model based on data obtained at medium resolution, whether by surface matching or by the TIN method, does not allow for a precise recreation of architectural details. The study presents selected sculpted elements recreated from the highest-resolution data with a superimposed TIN, juxtaposed against a digital image. The resulting model is very precise. Creating good models requires highly accurate field data. It is important to properly choose the distance between the measuring station and the measured object so that the angles of incidence (horizontal and vertical) of the laser beam are as close to right angles as possible. The model created from medium-resolution data offers very poor quality of detail, i.e. only the bigger, basic elements of each detail are clearly visible, while the smaller ones are blurred. This is why, in order to obtain data sufficient to reproduce architectural details, laser scanning should be performed at the highest resolution. Moreover, modelling by means of the surface matching method should be avoided in favour of the TIN method: besides providing a realistic-looking visualization, the TIN method has one more important advantage, namely that it is 4 times faster than the surface matching method.
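Since the TIN method figures centrally here, a minimal sketch of building a 2.5D TIN from point cloud data may be useful. It uses a Delaunay triangulation of the plan view (scipy) and is only an illustration of the concept, not the specialist software used in the study, which must also handle truly 3D surfaces such as the veranda.

```python
# Hedged sketch: 2.5D TIN from a point cloud via Delaunay triangulation
# of the XY plane; placeholder data stands in for scanner output.
import numpy as np
from scipy.spatial import Delaunay

points = np.random.rand(1000, 3)       # placeholder XYZ points from a scan
tri = Delaunay(points[:, :2])          # triangulate in plan (XY) view
triangles = points[tri.simplices]      # (n_tri, 3, 3) vertex coordinates

# Per-facet normals, e.g. for shading or inspecting surface detail
v1 = triangles[:, 1] - triangles[:, 0]
v2 = triangles[:, 2] - triangles[:, 0]
normals = np.cross(v1, v2)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
```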
ERIC Educational Resources Information Center
Gandhi, Allison Gruner; Murphy-Graham, Erin; Petrosino, Anthony; Chrismer, Sara Schwartz; Weiss, Carol H.
2007-01-01
In an effort to promote evidence-based practice, government officials, researchers, and program developers have developed lists of model programs in the prevention field. This article reviews the evidence used by seven best-practice lists to select five model prevention programs. The authors' examination of this research raises questions about the…
Direct Push supported geotechnical and hydrogeological characterisation of an active sinkhole area
NASA Astrophysics Data System (ADS)
Tippelt, Thomas; Vienken, Thomas; Kirsch, Reinhard; Dietrich, Peter; Werban, Ulrike
2017-04-01
Sinkholes represent a natural geologic hazard in areas where soluble layers are present in the subsurface. A detailed knowledge of the composition of the subsurface and its hydrogeological and geotechnical properties is essential for the understanding of sinkhole formation and propagation. This serves as a basis for risk evaluation and the development of an early warning system. However, site models often depend on data from drillings and surface geophysical surveys that in many cases cannot sufficiently resolve the spatial distribution of the relevant hydrogeological and geotechnical parameters. Therefore, an active sinkhole area in Münsterdorf, Northern Germany, was investigated in detail using Direct Push technology, a minimally invasive sounding method. The obtained vertical high-resolution profiles of geotechnical and hydrogeological characteristics, in combination with Direct Push-based sampling and surface geophysical measurements, lead to a strong improvement of the geologic site model. The conceptual site model regarding sinkhole formation and propagation will then be tested against the gathered data and, if necessary, adapted accordingly.
Face aging effect simulation model based on multilayer representation and shearlet transform
NASA Astrophysics Data System (ADS)
Li, Yuancheng; Li, Yan
2017-09-01
In order to extract detailed facial features, we build a face aging effect simulation model based on multilayer representation and the shearlet transform. The face is divided into three layers: the global layer of the face, the local features layer, and the texture layer, and an aging model is established separately for each. First, the training samples are classified according to different age groups, and we use an active appearance model (AAM) at the global level to obtain facial features. The regression equations of shape and texture with age are obtained by fitting support vector machine regression based on the radial basis function. We use the AAM to simulate the aging of facial organs. Then, for the texture detail layer, we acquire the significant high-frequency characteristic components of the face by using the multiscale shearlet transform. Finally, we obtain the simulated aging images of the human face by a fusion algorithm. Experiments are carried out on the FG-NET dataset, and the results show that the simulated face images differ little from the original images and achieve a good face aging simulation effect.
Balakrishnan, Karthik; Goico, Brian; Arjmand, Ellis M
2015-04-01
(1) To describe the application of a detailed cost-accounting method (time-driven activity-based costing) to operating room personnel costs, avoiding the proxy use of hospital and provider charges. (2) To model potential cost efficiencies using different staffing models with the case study of outpatient adenotonsillectomy. Prospective cost analysis case study. Tertiary pediatric hospital. All otolaryngology providers and otolaryngology operating room staff at our institution. Time-driven activity-based costing demonstrated precise per-case and per-minute calculation of personnel costs. We identified several areas of unused personnel capacity in a basic staffing model. Per-case personnel costs decreased by 23.2% by allowing a surgeon to run 2 operating rooms, despite doubling all other staff. Further cost reductions up to a total of 26.4% were predicted with additional staffing rearrangements. Time-driven activity-based costing allows detailed understanding of not only personnel costs but also how personnel time is used. This in turn allows testing of alternative staffing models to decrease unused personnel capacity and increase efficiency.
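The arithmetic behind time-driven activity-based costing is a capacity cost rate (cost per available minute) multiplied by the minutes each resource spends on the case. A hedged sketch with entirely made-up rates and times, showing why letting one surgeon cover two rooms lowers the per-case cost:

```python
# Hedged sketch of time-driven activity-based costing for an OR case.
# Roles, rates ($/min of available capacity) and minutes are illustrative.
staff = {
    "surgeon":     (8.00, 45),
    "anesthetist": (6.00, 50),
    "or_nurse":    (1.20, 60),
    "scrub_tech":  (0.90, 60),
}

def case_cost(staff, rooms_per_surgeon=1):
    """Per-case personnel cost; surgeon time is shared across parallel rooms."""
    total = 0.0
    for role, (rate, minutes) in staff.items():
        share = rooms_per_surgeon if role == "surgeon" else 1
        total += rate * minutes / share
    return total

print(case_cost(staff))                        # single-room staffing
print(case_cost(staff, rooms_per_surgeon=2))   # surgeon runs two rooms
```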
Communication: Introducing prescribed biases in out-of-equilibrium Markov models
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.
2018-03-01
Markov models are often used in modeling complex out-of-equilibrium chemical and biochemical systems. However, many times their predictions do not agree with experiments. We need a systematic framework to update existing Markov models to make them consistent with constraints that are derived from experiments. Here, we present a framework based on the principle of maximum relative path entropy (minimum Kullback-Leibler divergence) to update Markov models using stationary state and dynamical trajectory-based constraints. We illustrate the framework using a biochemical model network of growth factor-based signaling. We also show how to find the closest detailed balanced Markov model to a given Markov model. Further applications and generalizations are discussed.
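For a single trajectory-average constraint on a discrete-time chain, the minimum-KL (maximum relative path entropy) posterior has a closed form built from the dominant eigenpair of an exponentially tilted transition matrix. A hedged numpy sketch follows; the notation and example values are assumptions, not taken from the paper, and the multiplier gamma would in practice be solved so that the constraint is met.

```python
# Hedged sketch: minimum-KL update of a Markov chain under one
# trajectory-average constraint on a per-transition observable s_ij.
# Posterior: q_ij = p_ij exp(g s_ij) u_j / (lam u_i), with (lam, u) the
# Perron eigenpair of the tilted matrix W_ij = p_ij exp(g s_ij).
import numpy as np

def tilt_markov(P, S, gamma):
    W = P * np.exp(gamma * S)                 # element-wise tilt
    vals, vecs = np.linalg.eig(W)
    k = np.argmax(vals.real)                  # Perron (largest) eigenvalue
    u = np.abs(vecs[:, k].real)               # positive right eigenvector
    Q = W * u[None, :] / (vals[k].real * u[:, None])
    return Q / Q.sum(axis=1, keepdims=True)   # guard against round-off

P = np.array([[0.9, 0.1], [0.2, 0.8]])        # prior chain (illustrative)
S = np.array([[0.0, 1.0], [1.0, 0.0]])        # observable: count of switches
Q = tilt_markov(P, S, gamma=0.5)              # gamma tuned to the constraint
```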
Global Existence Analysis of Cross-Diffusion Population Systems for Multiple Species
NASA Astrophysics Data System (ADS)
Chen, Xiuqing; Daus, Esther S.; Jüngel, Ansgar
2018-02-01
The existence of global-in-time weak solutions to reaction-cross-diffusion systems for an arbitrary number of competing population species is proved. The equations can be derived from an on-lattice random-walk model with general transition rates. In the case of linear transition rates, it extends the two-species population model of Shigesada, Kawasaki, and Teramoto. The equations are considered in a bounded domain with homogeneous Neumann boundary conditions. The existence proof is based on a refined entropy method and a new approximation scheme. Global existence follows under a detailed balance or weak cross-diffusion condition. The detailed balance condition is related to the symmetry of the mobility matrix, which mirrors Onsager's principle in thermodynamics. Under detailed balance (and without reaction) the entropy is nonincreasing in time, but counter-examples show that the entropy may increase initially if detailed balance does not hold.
Mergers of Non-spinning Black-hole Binaries: Gravitational Radiation Characteristics
NASA Technical Reports Server (NTRS)
Baker, John G.; Boggs, William D.; Centrella, Joan; Kelly, Bernard J.; McWilliams, Sean T.; vanMeter, James R.
2008-01-01
We present a detailed descriptive analysis of the gravitational radiation from black-hole binary mergers of non-spinning black holes, based on numerical simulations of systems varying from equal-mass to a 6:1 mass ratio. Our primary goal is to present relatively complete information about the waveforms, including all the leading multipolar components, to interested researchers. In our analysis, we pursue the simplest physical description of the dominant features in the radiation, providing an interpretation of the waveforms in terms of an implicit rotating source. This interpretation applies uniformly to the full wavetrain, from inspiral through ringdown. We emphasize strong relationships among the l = m modes that persist through the full wavetrain. Exploring the structure of the waveforms in more detail, we conduct detailed analytic fitting of the late-time frequency evolution, identifying a key quantitative feature shared by the l = m modes among all mass ratios. We identify relationships, with a simple interpretation in terms of the implicit rotating source, among the evolution of frequency and amplitude, which hold for the late-time radiation. These detailed relationships provide sufficient information about the late-time radiation to yield a predictive model for the late-time waveforms, an alternative to the common practice of modeling by a sum of quasinormal mode overtones. We demonstrate an application of this in a new effective-one-body-based analytic waveform model.
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promotes good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other, while allowing access to the benefits each offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
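Stripped of the subspace machinery, the paired-model idea can be caricatured in a few lines: a handful of complex-model runs estimate the simple model's predictive bias, which is then removed from the cheap prediction. This schematic is only the crudest reading of the approach; all names are hypothetical.

```python
# Hedged schematic of paired simple/complex model usage; the paper's
# actual methodology is subspace-based, this shows only the bias idea.
import numpy as np

def paired_predictor(simple_model, complex_model, param_samples):
    """Estimate the simple model's bias from a few complex-model runs,
    then return a bias-corrected (still cheap) predictor."""
    bias = np.mean([complex_model(p) - simple_model(p)
                    for p in param_samples])
    return lambda p: simple_model(p) + bias
```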
Lashin, Sergey A; Suslov, Valentin V; Matushkin, Yuri G
2010-06-01
We propose an original program, "Evolutionary constructor", that is capable of computationally efficient modeling of both population-genetic and ecological problems, combining these directions in one model of the required level of detail. We also present results of comparative modeling of stability, adaptability and biodiversity dynamics in populations of unicellular haploid organisms which form symbiotic ecosystems. The advantages and disadvantages of two evolutionary strategies of biota formation, one based on a few generalist taxa and one based on high biodiversity, are discussed.
Design and Implementation of 3D Model Data Management System Based on SQL
NASA Astrophysics Data System (ADS)
Li, Shitao; Zhang, Shixin; Zhang, Zhanling; Li, Shiming; Jia, Kun; Hu, Zhongxu; Ping, Liang; Hu, Youming; Li, Yanlei
CAD/CAM technology plays an increasingly important role in the machinery manufacturing industry. As an important means of production, the three-dimensional models accumulated over many years of design work are valuable. Thus the management of these three-dimensional models is of great significance. This paper gives a detailed explanation of a method to design three-dimensional model databases based on SQL and to implement functions such as insertion, modification, inquiry and preview.
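A minimal sketch of such a model store follows, using SQLite as a stand-in for the SQL database and with table fields guessed from the abstract rather than taken from the paper:

```python
# Hedged sketch: a minimal 3D-model metadata store with insert and query.
import sqlite3

con = sqlite3.connect("models.db")
con.execute("""CREATE TABLE IF NOT EXISTS model (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    designer TEXT,
    file_path TEXT NOT NULL,     -- path to the CAD file on disk
    preview_path TEXT            -- pre-rendered thumbnail for preview
)""")
con.execute("INSERT INTO model (name, designer, file_path) VALUES (?, ?, ?)",
            ("gear_housing", "zhang", "/cad/gear_housing.step"))
con.commit()
for row in con.execute("SELECT id, name, file_path FROM model WHERE name LIKE ?",
                       ("gear%",)):
    print(row)
```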
Three dimensional modeling of cirrus during the 1991 FIRE IFO 2: Detailed process study
NASA Technical Reports Server (NTRS)
Jensen, Eric J.; Toon, Owen B.; Westphal, Douglas L.
1993-01-01
A three-dimensional model of cirrus cloud formation and evolution, including microphysical, dynamical, and radiative processes, was used to simulate cirrus observed in the FIRE Phase 2 Cirrus field program (13 Nov. - 7 Dec. 1991). Sulfate aerosols, solution drops, ice crystals, and water vapor are all treated as interactive elements in the model. Ice crystal size distributions are fully resolved based on calculations of homogeneous freezing of solution drops, growth by water vapor deposition, evaporation, aggregation, and vertical transport. Visible and infrared radiative fluxes, and radiative heating rates are calculated using the two-stream algorithm described by Toon et al. Wind velocities, diffusion coefficients, and temperatures were taken from the MAPS analyses and the MM4 mesoscale model simulations. Within the model, moisture is transported and converted to liquid or vapor by the microphysical processes. The simulated cloud bulk and microphysical properties are shown in detail for the Nov. 26 and Dec. 5 case studies. Comparisons with lidar, radar, and in situ data are used to determine how well the simulations reproduced the observed cirrus. The roles played by various processes in the model are described in detail. The potential modes of nucleation are evaluated, and the importance of small-scale variations in temperature and humidity are discussed. The importance of competing ice crystal growth mechanisms (water vapor deposition and aggregation) are evaluated based on model simulations. Finally, the importance of ice crystal shape for crystal growth and vertical transport of ice are discussed.
NASA Astrophysics Data System (ADS)
Frankl, Amaury; Stal, Cornelis; Abraha, Amanuel; De Wulf, Alain; Poesen, Jean
2014-05-01
Taking climate change scenarios into account, rainfall patterns are likely to change over the coming decades in eastern Africa. In brief, large parts of eastern Africa are expected to experience wetting, including seasonality changes. Gullies are threshold phenomena that accomplish most of their geomorphic change during short periods of strong rainfall. Understanding the links between geomorphic change and rainfall characteristics in detail is thus crucial to ensure the sustainability of future land management. In this study, we present image-based 3D modelling as a low-cost, flexible and rapid method to quantify gully morphology from terrestrial photographs. The methodology was tested on two gully heads in Northern Ethiopia. Ground photographs (n = 88-235) were taken during days with cloud cover. The photographs were processed in PhotoScan software using a semi-automated Structure from Motion-Multi View Stereo (SfM-MVS) workflow. As a result, full 3D models were created, accurate at the cm level. These models make it possible to quantify gully morphology in detail, including information on undercut walls and soil pipe inlets. Such information is crucial for understanding the hydrogeomorphic processes involved. Producing accurate 3D models after each rainfall event makes it possible to model the interrelations between rainfall, land management, runoff and erosion. Expected outcomes are the production of detailed vulnerability maps that allow soil and water conservation measures to be designed in a cost-effective way. Keywords: 3D model, Ethiopia, Image-based 3D modelling, Gully, PhotoScan, Rainfall.
Numerical Simulations of Single Flow Element in a Nuclear Thermal Thrust Chamber
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed and global thermo-fluid environments of a single flow element in a hypothetical solid-core nuclear thermal thrust chamber assembly. Several numerical and multi-physics thermo-fluid models, such as chemical reactions, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver. The numerical simulations of a single flow element provide a detailed thermo-fluid environment for thermal stress estimation and insight into the possible occurrence of mid-section corrosion. In addition, detailed conjugate heat transfer simulations were employed to develop the porosity models for efficient pressure drop and thermal load calculations.
Modeling Progressive Failure of Bonded Joints Using a Single Joint Finite Element
NASA Technical Reports Server (NTRS)
Stapleton, Scott E.; Waas, Anthony M.; Bednarcyk, Brett A.
2010-01-01
Enhanced finite elements are elements with an embedded analytical solution which can capture detailed local fields, enabling more efficient, mesh-independent finite element analysis. In the present study, an enhanced finite element is applied to generate a general framework capable of modeling an array of joint types. The joint field equations are derived using the principle of minimum potential energy, and the resulting solutions for the displacement fields are used to generate shape functions and a stiffness matrix for a single joint finite element. This single finite element thus captures the detailed stress and strain fields within the bonded joint, but it can function within a broader structural finite element model. The costs associated with a fine mesh of the joint can thus be avoided while still obtaining a detailed solution for the joint. Additionally, the capability to model non-linear adhesive constitutive behavior has been included within the method, and progressive failure of the adhesive can be modeled by using a strain-based failure criterion and resizing the joint as the adhesive fails. Results of the model compare favorably with experimental and finite element results.
NASA Astrophysics Data System (ADS)
Hamedianfar, Alireza; Shafri, Helmi Zulhaidi Mohd
2016-04-01
This paper integrates decision tree-based data mining (DM) and object-based image analysis (OBIA) to provide a transferable model for the detailed characterization of urban land-cover classes using WorldView-2 (WV-2) satellite images. Many articles on DM-based OBIA have been published in recent years for different applications. However, less attention has been paid to the generation of a transferable model for characterizing detailed urban land cover features. Three subsets of WV-2 images were used in this paper to generate transferable OBIA rule-sets. Many features were explored by using a DM algorithm, which created the classification rules as a decision tree (DT) structure from the first study area. The developed DT algorithm was applied to object-based classifications in the first study area. After this process, we validated the capability and transferability of the classification rules on the second and third subsets. Detailed ground truth samples were collected to assess the classification results. The first, second, and third study areas achieved 88%, 85%, and 85% overall accuracies, respectively. Results from the investigation indicate that DM is an efficient method to provide optimal and transferable classification rules for OBIA, which accelerates the rule-set creation stage in the OBIA classification domain.
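The rule-generation step can be sketched in a few lines: train a decision tree on per-object features and export it as readable rules that can be ported to another scene. The feature names and data below are placeholders, not the study's actual feature set:

```python
# Hedged sketch: decision-tree rule induction on image-object features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

X = np.random.rand(500, 3)              # e.g. mean NDVI, brightness, ratio
y = np.random.randint(0, 4, 500)        # e.g. roof/road/tree/grass labels
clf = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20).fit(X, y)

# The exported tree is the transferable rule-set to apply to new subsets
print(export_text(clf, feature_names=["ndvi", "brightness", "band_ratio"]))
```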
Development of an "Alert Framework" Based on the Practices in the Medical Front.
Sakata, Takuya; Araki, Kenji; Yamazaki, Tomoyoshi; Kawano, Koichi; Maeda, Minoru; Kushima, Muneo; Araki, Sanae
2018-05-09
At the University of Miyazaki Hospital (UMH), we have accumulated and semantically structured a vast amount of medical information since the activation of the electronic health record system approximately 10 years ago. With this medical information, we have decided to develop an alert system for aiding in medical treatment. The purpose of this investigation is not only to integrate an alert framework into the electronic health record system, but also to formulate a modeling method for this knowledge. A trial alert framework was developed for staff in various occupational categories at the UMH. Drawing on findings from subsequent interviews, a more detailed and upgraded alert framework was constructed, resulting in the final model. Based on our current findings, an alert framework was developed with four major items. Analysis of the medical practices from the trial model indicates that there are four major risk patterns that trigger the alert. Furthermore, the current alert framework contains detailed definitions which are easily substituted into the database, leading to easy implementation in the electronic health records.
Structural analysis consultation using artificial intelligence
NASA Technical Reports Server (NTRS)
Melosh, R. J.; Marcal, P. V.; Berke, L.
1978-01-01
The primary goal of consultation is definition of the best strategy to deal with a structural engineering analysis objective. The knowledge base to meet this need is designed to identify the type of numerical analysis, the needed modeling detail, and the specific analysis data required. Decisions are constructed on the basis of the data in the knowledge base (material behavior, relations between geometry and structural behavior, measures of the importance of time and temperature changes) and user-supplied specifics: the characteristics of the spectrum of analysis types, the relation between accuracy and model detail, and the structure, its mechanical loadings, and its temperature states. Existing software demonstrated the feasibility of the approach, encompassing the 36 analysis classes spanning nonlinear, temperature-affected, incremental analyses which track the behavior of structural systems.
Multigrid Method for Modeling Multi-Dimensional Combustion with Detailed Chemistry
NASA Technical Reports Server (NTRS)
Zheng, Xiaoqing; Liu, Chaoqun; Liao, Changming; Liu, Zhining; McCormick, Steve
1996-01-01
A highly accurate and efficient numerical method is developed for modeling 3-D reacting flows with detailed chemistry. A contravariant velocity-based governing system is developed for general curvilinear coordinates to maintain simplicity of the continuity equation and compactness of the discretization stencil. A fully-implicit backward Euler technique and a third-order monotone upwind-biased scheme on a staggered grid are used for the respective temporal and spatial terms. An efficient semi-coarsening multigrid method based on line-distributive relaxation is used as the flow solver. The species equations are solved in a fully coupled way and the chemical reaction source terms are treated implicitly. Example results are shown for a 3-D gas turbine combustor with strong swirling inflows.
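The multigrid idea itself can be illustrated compactly. The sketch below is a textbook 1D Poisson V-cycle with Gauss-Seidel smoothing, not the paper's semi-coarsening, line-distributive relaxation scheme for reacting flow; it only shows the smooth-restrict-correct-prolong pattern.

```python
# Hedged sketch: one V-cycle of 1D multigrid for -u'' = f on [0, 1],
# Dirichlet boundaries; grid sizes are 2^k + 1 points.
import numpy as np

def relax(u, f, h, sweeps=3):
    for _ in range(sweeps):                   # Gauss-Seidel smoothing
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    u = relax(u, f, h)
    if len(u) <= 3:
        return u
    r2 = residual(u, f, h)[::2].copy()        # restrict residual (injection)
    e2 = v_cycle(np.zeros_like(r2), r2, 2 * h)
    e = np.zeros_like(u)                      # prolong correction (linear)
    e[::2] = e2
    e[1::2] = 0.5 * (e2[:-1] + e2[1:])
    return relax(u + e, f, h)

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)            # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
```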
ERIC Educational Resources Information Center
Finch, Holmes
2010-01-01
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context is one that has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
This report provides detailed comparisons and sensitivity analyses of three candidate models, MESOPLUME, MESOPUFF, and MESOGRID. This was not a validation study; there was no suitable regional air quality data base for the Four Corners area. Rather, the models have been evaluated...
Calculation of single chain cellulose elasticity using fully atomistic modeling
Xiawa Wu; Robert J. Moon; Ashlie Martini
2011-01-01
Cellulose nanocrystals, a potential base material for green nanocomposites, are ordered bundles of cellulose chains. The properties of these chains have been studied for many years using atomic-scale modeling. However, model predictions are difficult to interpret because of the significant dependence of predicted properties on model details. The goal of this study is...
NASA Astrophysics Data System (ADS)
Jaiswal, D.; Long, S.; Parton, W. J.; Hartman, M.
2012-12-01
A coupled modeling system of crop growth model (BioCro) and biogeochemical model (DayCent) has been developed to assess the two-way interactions between plant growth and biogeochemistry. Crop growth in BioCro is simulated using a detailed mechanistic biochemical and biophysical multi-layer canopy model and partitioning of dry biomass into different plant organs according to phenological stages. Using hourly weather records, the model partitions light between dynamically changing sunlit and shaded portions of the canopy and computes carbon and water exchange with the atmosphere and through the canopy for each hour of the day, each day of the year. The model has been parameterized for the bioenergy crops sugarcane, Miscanthus and switchgrass, and validation has shown it to predict growth cycles and partitioning of biomass to a high degree of accuracy. As such it provides an ideal input for a soil biogeochemical model. DayCent is an established model for predicting long-term changes in soil C & N and soil-atmosphere exchanges of greenhouse gases. At present, DayCent uses a relatively simple productivity model. In this project BioCro has replaced this simple model to provide DayCent with a productivity and growth model equal in detail to its biogeochemistry. Dynamic coupling of these two models to produce CroCent allows for differential C:N ratios of litter fall (based on rates of senescence of different plant organs) and calibration of the model for realistic plant productivity in a mechanistic way. A process-based approach to modeling plant growth is needed for bioenergy crops because research on these crops (especially second generation feedstocks) has started only recently, and detailed agronomic information for growth, yield and management is too limited for effective empirical models. The coupled model provides a means to test and improve the model against high-resolution data, such as that obtained by eddy covariance, and to explore the yield implications of different crop and soil management.
NASA Astrophysics Data System (ADS)
Peckham, S. D.; Kelbert, A.; Rudan, S.; Stoica, M.
2016-12-01
Standardized metadata for models is the key to reliable and greatly simplified coupling in model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System). This model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. While having this kind of standardized metadata for each model in a repository opens up a wide range of exciting possibilities, it is difficult to collect this information and a carefully conceived "data model" or schema is needed to store it. Automated harvesting and scraping methods can provide some useful information, but they often result in metadata that is inaccurate or incomplete, and this is not sufficient to enable the desired capabilities. In order to address this problem, we have developed a browser-based tool called the MCM Tool (Model Component Metadata) which runs on notebooks, tablets and smart phones. This tool was partially inspired by the TurboTax software, which greatly simplifies the necessary task of preparing tax documents. It allows a model developer or advanced user to provide a standardized, deep description of a computational geoscience model, including hydrologic models. Under the hood, the tool uses a new ontology for models built on the CSDMS Standard Names, expressed as a collection of RDF files (Resource Description Framework). This ontology is based on core concepts such as variables, objects, quantities, operations, processes and assumptions. The purpose of this talk is to present details of the new ontology and then to demonstrate the MCM Tool for several hydrologic models.
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic models are regularly applied in genetic regulatory network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches, including stochastic master equations and probabilistic Boolean networks, have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that the stochastic master equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, the probabilistic Boolean network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system, including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on the Zassenhaus formula to represent the exponential of a sum of matrices as a product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to the commonly used stochastic simulation algorithm for equivalent accuracy.
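The Zassenhaus factorization referred to, truncated at leading order, reads exp(t(A+B)) ~ exp(tA) exp(tB) exp(-t^2 [A,B]/2). A hedged numerical check with illustrative random matrices (not the paper's tensor-structured operators):

```python
# Hedged sketch: leading-order Zassenhaus product approximation.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A, B, t = rng.normal(size=(4, 4)), rng.normal(size=(4, 4)), 0.1

exact = expm(t * (A + B))
comm = A @ B - B @ A
approx = expm(t * A) @ expm(t * B) @ expm(-0.5 * t ** 2 * comm)

print(np.linalg.norm(exact - approx))                     # error is O(t^3)
print(np.linalg.norm(exact - expm(t * A) @ expm(t * B)))  # plain split: O(t^2)
```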
NASA Astrophysics Data System (ADS)
Santagati, C.; Inzerillo, L.; Di Paola, F.
2013-07-01
3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collection to rapidly build detailed 3D models. The simultaneous applications of different algorithms (MVS), the different techniques of image matching, feature extracting and mesh optimization are inside an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. Those last ones offer the opportunity to exploit the power of cloud computing in order to carry out a semi-automatic data processing, thus allowing the user to fulfill other tasks on its computer; whereas desktop systems employ too much processing time and hard heavy approaches. Computer vision researchers have explored many applications to verify the visual accuracy of 3D model but the approaches to verify metric accuracy are few and no one is on Autodesk 123D Catch applied on Architectural Heritage Documentation. Our approach to this challenging problem is to compare the 3Dmodels by Autodesk 123D Catch and 3D models by terrestrial LIDAR considering different object size, from the detail (capitals, moldings, bases) to large scale buildings for practitioner purpose.
Holistic versus monomeric strategies for hydrological modelling of human-modified hydrosystems
NASA Astrophysics Data System (ADS)
Nalbantis, I.; Efstratiadis, A.; Rozos, E.; Kopsiafti, M.; Koutsoyiannis, D.
2011-03-01
The modelling of human-modified basins that are inadequately measured constitutes a challenge for hydrological science. Often, models for such systems are detailed and hydraulics-based for only one part of the system while for other parts oversimplified models or rough assumptions are used. This is typically a bottom-up approach, which seeks to exploit knowledge of hydrological processes at the micro-scale at some components of the system. Also, it is a monomeric approach in two ways: first, essential interactions among system components may be poorly represented or even omitted; second, differences in the level of detail of process representation can lead to uncontrolled errors. Additionally, the calibration procedure merely accounts for the reproduction of the observed responses using typical fitting criteria. The paper aims to raise some critical issues regarding the entire modelling approach for such hydrosystems. For this, two alternative modelling strategies are examined that reflect two modelling approaches or philosophies: a dominant bottom-up approach, which is also monomeric and, very often, based on output information, and a top-down and holistic approach based on generalized information. Critical options are examined, which codify the differences between the two strategies: the representation of surface, groundwater and water management processes, the schematization and parameterization concepts and the parameter estimation methodology. The first strategy is based on stand-alone models for surface and groundwater processes and for water management, which are employed sequentially. For each model, a different (detailed or coarse) parameterization is used, which is dictated by the hydrosystem schematization. The second strategy involves model integration for all processes, parsimonious parameterization and hybrid manual-automatic parameter optimization based on multiple objectives. A test case is examined in a hydrosystem in Greece with high complexities, such as extended surface-groundwater interactions, ill-defined boundaries, sinks to the sea and anthropogenic intervention with unmeasured abstractions both from surface water and aquifers. Criteria for comparison are the physical consistency of parameters, the reproduction of runoff hydrographs at multiple sites within the studied basin, the likelihood of uncontrolled model outputs, the required amount of computational effort and the performance within a stochastic simulation setting. Our work allows for investigating the deterioration of model performance in cases where no balanced attention is paid to all components of human-modified hydrosystems and the related information. Also, sources of errors are identified and their combined effect is evaluated.
Architecture with GIDEON, A Program for Design in Structural DNA Nanotechnology
Birac, Jeffrey J.; Sherman, William B.; Kopatsch, Jens; Constantinou, Pamela E.; Seeman, Nadrian C.
2012-01-01
We present geometry-based design strategies for DNA nanostructures. The strategies have been implemented with GIDEON, a Graphical Integrated Development Environment for OligoNucleotides. GIDEON has a highly flexible graphical user interface that facilitates the development of simple yet precise models, and the evaluation of strains therein. Models are built from undistorted B-DNA double-helical domains. Simple point and click manipulations of the model allow the minimization of strain in the phosphate-backbone linkages between these domains and the identification of any steric clashes that might occur as a result. Detailed analysis of 3D triangles yields clear predictions of the strains associated with triangles of different sizes. We have carried out experiments that confirm that 3D triangles form well only when their geometrical strain is less than a 4% deviation from the estimated relaxed structure. Thus geometry-based techniques alone, without energetic considerations, can be used to explain general trends in DNA structure formation. We have used GIDEON to build detailed models of double crossover and triple crossover molecules, evaluating the non-planarity associated with base tilt and junction mis-alignments. Computer modeling using a graphical user interface overcomes the limited precision of physical models for larger systems, and the limited interaction rate associated with earlier, command-line driven software. PMID:16630733
The virtual dissecting room: Creating highly detailed anatomy models for educational purposes.
Zilverschoon, Marijn; Vincken, Koen L; Bleys, Ronald L A W
2017-01-01
Virtual 3D models are powerful tools for teaching anatomy. At the present day there are many different digital anatomy models; most of these commercial applications are based on a 3D model of a human body reconstructed from images at 1 mm intervals. The use of even smaller intervals may result in more detail and a more realistic appearance of 3D anatomy models. The aim of this study was to create a realistic and highly detailed 3D model of the hand and wrist based on small-interval cross-sectional images, suitable for undergraduate and postgraduate teaching purposes, with the possibility to perform a virtual dissection in an educational application. In 115 transverse cross-sections from a human hand and wrist, segmentation was done by manually delineating 90 different structures. Using Amira, the segments were imported and a surface/polygon model was created, followed by smoothing of the surfaces in Mudbox. In 3D Coat, the smoothed polygon models were automatically retopologized into quadrilaterals and a UV map was added. In Mudbox, the textures of the 90 structures were depicted in a realistic way using photos of real tissue; afterwards, height, gloss and specular maps were created to add a greater level of detail and realistic lighting to every structure. Unity was used to build a new software program supporting all the extra map features together with the preferred user interface. A 3D hand model has been created containing 100 structures (90 at the start and 10 extra structures added along the way). The model can be used interactively by changing the transparency and manipulating single or grouped structures, thereby simulating a virtual dissection. This model can be used for a variety of teaching purposes, ranging from undergraduate medical students to residents of hand surgery. Studying hand and wrist anatomy using this model is cost-effective and not hampered by limited access to real dissecting facilities.
Bachis, Giulia; Maruéjouls, Thibaud; Tik, Sovanna; Amerlinck, Youri; Melcer, Henryk; Nopens, Ingmar; Lessard, Paul; Vanrolleghem, Peter A
2015-01-01
Characterization and modelling of primary settlers have been largely neglected to date. However, whole-plant and resource recovery modelling requires primary settler model development, as current models lack detail in describing the dynamics and the diversity of the removal process for different particulate fractions. This paper focuses on the improved modelling and experimental characterization of primary settlers. First, a new modelling concept based on the particle settling velocity distribution is proposed, which is then applied to the development of an improved primary settler model as well as to its characterization under addition of chemicals (chemically enhanced primary treatment, CEPT). This model is compared to two existing simple primary settler models (Otterpohl and Freund; Lessard and Beck), proving better than the first and statistically comparable to the second, but with easier calibration thanks to the ease with which wastewater characteristics can be translated into model parameters. Second, the changes induced by primary settling in the activated sludge model (ASM)-based chemical oxygen demand fractionation between inlet and outlet are investigated, showing that typical wastewater fractions are modified by primary treatment. As they clearly impact the downstream processes, both model improvements demonstrate the need for more detailed primary settler models in view of whole-plant modelling.
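The settling-velocity-distribution concept has a compact idealized form: in an ideal settler with surface overflow rate Q/A, particles faster than Q/A are fully captured and slower ones in proportion v/(Q/A). This textbook sketch shows only the underlying idea, not the paper's dynamic, calibrated model:

```python
# Hedged sketch: ideal-settler removal from a settling velocity distribution.
import numpy as np

def removal_fraction(v_settling, Q, A):
    """v_settling: sampled particle settling velocities (m/h);
    Q (m^3/h) / A (m^2): surface overflow rate (m/h)."""
    v_crit = Q / A
    return np.mean(np.minimum(v_settling / v_crit, 1.0))

v = np.random.lognormal(mean=0.0, sigma=1.0, size=10_000)  # illustrative PSVD
print(removal_fraction(v, Q=100.0, A=60.0))
```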
Hudjetz, Silvana; Lennartz, Gottfried; Krämer, Klara; Roß-Nickoll, Martina; Gergs, André; Preuss, Thomas G.
2014-01-01
The degradation of natural and semi-natural landscapes has become a matter of global concern. In Germany, semi-natural grasslands belong to the most species-rich habitat types but have suffered heavily from changes in land use. After abandonment, the course of succession at a specific site is often difficult to predict because many processes interact. In order to support decision making when managing semi-natural grasslands in the Eifel National Park, we built the WoodS-Model (Woodland Succession Model). A multimodeling approach was used to integrate vegetation dynamics in both the herbaceous and the shrub/tree layer. The cover of grasses and herbs was simulated in a compartment model, whereas bushes and trees were modelled in an individual-based manner. Both models worked and interacted in a spatially explicit, raster-based landscape. We present here the model description, parameterization and testing. We show highly detailed projections of the succession of a semi-natural grassland, including the influence of initial vegetation composition, neighborhood interactions and ungulate browsing. We carefully weighed the individual processes against each other and their relevance for landscape development under different scenarios, while explicitly considering specific site conditions. Model evaluation revealed that the model is able to emulate successional patterns as observed in the field and yields plausible results for different population densities of red deer. Important neighborhood interactions, such as seed dispersal, the protection of seedlings from browsing ungulates by thorny bushes, and the inhibition of wood encroachment by the herbaceous layer, have been successfully reproduced. Therefore, not only a detailed model but also a detailed initialization turned out to be important for spatially explicit projections of a given site. The advantage of the WoodS-Model is that it integrates these many mutually interacting processes of succession. PMID:25494057
This presentation reviews the status and progress in forecasting particulate matter distributions. The shortcomings in representation of particulate matter formation in current atmospheric chemistry/transport models are presented based on analyses and detailed comparisons with me...
A Distributed Platform for Global-Scale Agent-Based Models of Disease Transmission
Parker, Jon; Epstein, Joshua M.
2013-01-01
The Global-Scale Agent Model (GSAM) is presented. The GSAM is a high-performance distributed platform for agent-based epidemic modeling capable of simulating a disease outbreak in a population of several billion agents. It is unprecedented in its scale, its speed, and its use of Java. Solutions to multiple challenges inherent in distributing massive agent-based models are presented. Communication, synchronization, and memory usage are among the topics covered in detail. The memory usage discussion is Java specific. However, the communication and synchronization discussions apply broadly. We provide benchmarks illustrating the GSAM’s speed and scalability. PMID:24465120
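The core loop of any such agent-based epidemic model is simple, even though the GSAM's contribution is distributing it across machines for billions of agents. A toy serial SIR step follows (illustrative rates, random mixing; not the GSAM's distributed Java implementation):

```python
# Hedged toy sketch of an agent-based SIR step; the GSAM itself is a
# distributed Java platform, so this only illustrates the modeling idea.
import random

def step(agents, contacts_per_agent=10, p_transmit=0.05, p_recover=0.1):
    n = len(agents)
    infected = [i for i, s in enumerate(agents) if s == "I"]
    for i in infected:
        for _ in range(contacts_per_agent):        # random-mixing contacts
            j = random.randrange(n)
            if agents[j] == "S" and random.random() < p_transmit:
                agents[j] = "I"
        if random.random() < p_recover:
            agents[i] = "R"

agents = ["I"] * 10 + ["S"] * 9990
for day in range(100):
    step(agents)
print(agents.count("R"), "recovered after 100 days")
```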
Quantifying Astronaut Tasks: Robotic Technology and Future Space Suit Design
NASA Technical Reports Server (NTRS)
Newman, Dava
2003-01-01
The primary aim of this research effort was to advance the current understanding of astronauts' capabilities and limitations in space-suited EVA by developing models of the constitutive and compatibility relations of a space suit, based on experimental data gained from human test subjects as well as a 12 degree-of-freedom human-sized robot, and utilizing these fundamental relations to estimate a human factors performance metric for space-suited EVA work. The three specific objectives are to: 1) Compile a detailed database of torques required to bend the joints of a space suit, using realistic, multi-joint human motions. 2) Develop a mathematical model of the constitutive relations between space suit joint torques and joint angular positions, based on experimental data, and compare other investigators' physics-based models to experimental data. 3) Estimate the work envelope of a space-suited astronaut, using the constitutive and compatibility relations of the space suit. The body of work that makes up this report includes experimentation, empirical and physics-based modeling, and model applications. A detailed space suit joint torque-angle database was compiled with a novel experimental approach that used space-suited human test subjects to generate realistic, multi-joint motions and an instrumented robot to measure the torques required to accomplish these motions in a space suit. Based on the experimental data, a mathematical model is developed to predict joint torque from the joint angle history. Two physics-based models of pressurized fabric cylinder bending are compared to experimental data, yielding design insights. The mathematical model is applied to EVA operations in an inverse kinematic analysis coupled to the space suit model to calculate the volume in which space-suited astronauts can work with their hands, demonstrating that operational human factors metrics can be predicted from fundamental space suit information.
A Physics-Based Approach for Power Integrity in Multi-Layered PCBs
NASA Astrophysics Data System (ADS)
Zhao, Biyao
Developing a power distribution network (PDN) for ASICs and ICs that achieves the low-voltage-ripple specifications of current digital designs is challenging for high-speed, low-voltage ICs. Present methods are typically guided by best engineering practices for achieving low impedance looking into the PDN from the IC. A pre-layout design methodology for power integrity in multi-layered PCB PDN geometry is proposed in this thesis. The PCB PDN geometry is segmented into four parts, and every part is modelled using different methods based on the geometry details of that part. Physics-based circuit models are built for every part, and the four parts are re-assembled into one model. The influence of geometry details is clearly revealed in this methodology. Based on the physics-based circuit model, the procedure for using the pre-layout design methodology as a guideline during PDN design is illustrated. Some commonly used geometries are used to build a design space, and design curves reflecting the geometry details are provided as a lookup library for engineering use. The pre-layout methodology is based on the resonant cavity model of parallel planes for the cavity structures, on parallel-plane PEEC (PPP) for the irregularly shaped plane inductance, and on PEEC for the decoupling capacitor connections above the topmost or bottommost power-return planes. The PCB PDN is analyzed based on the input impedance looking into the PCB from the IC. The pre-layout design methodology can be used to obtain the best possible PCB PDN design. Given the switching current profile, a target impedance can be selected to evaluate the PDN performance, and the frequency-domain PDN input impedance can be used to obtain the voltage ripple in the time domain, giving intuitive insight into the impact of the geometry on the voltage ripple.
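A minimal sketch of the usual pre-layout starting point follows: deriving a target impedance from the voltage-ripple budget and checking where a single decoupling capacitor stays below it. The target-impedance formula is the common engineering rule of thumb; the capacitor parasitics are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch: target impedance from the ripple budget, compared against
# the |Z(f)| of one decoupling capacitor (ESR/ESL values are assumptions).
import numpy as np

vdd, ripple, i_transient = 1.0, 0.03, 2.0        # V, 3 % budget, A
z_target = vdd * ripple / i_transient            # classic target-impedance rule
print("Z_target = %.1f mOhm" % (1e3 * z_target))

f = np.logspace(4, 9, 500)                       # 10 kHz .. 1 GHz
w = 2 * np.pi * f
C, esr, esl = 10e-6, 5e-3, 1e-9                  # 10 uF cap, 5 mOhm, 1 nH
z_cap = np.abs(esr + 1j * w * esl + 1 / (1j * w * C))
below = z_cap < z_target
print("cap stays below target up to %.1f MHz"
      % (f[below].max() / 1e6 if below.any() else 0.0))
```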
Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both the detailed thermo-fluid environment and the global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemically reacting, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. Numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and the possible occurrence of mid-section corrosion of the solid core. In addition, the results of the detailed simulation were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The tuned porosity model enables an efficient simulation of the entire NTTCA system and an evaluation of its performance during the design cycle.
Geometric Accuracy Analysis of Worlddem in Relation to AW3D30, Srtm and Aster GDEM2
NASA Astrophysics Data System (ADS)
Bayburt, S.; Kurtak, A. B.; Büyüksalih, G.; Jacobsen, K.
2017-05-01
In a project area close to Istanbul, the quality of the WorldDEM, AW3D30, SRTM DSM and ASTER GDEM2 height models has been analyzed in relation to a reference aerial LiDAR DEM and to each other. The random and the systematic height errors have been separated. The absolute offsets of all height models in X, Y and Z are within expectations. The shifts were accounted for in advance to allow a satisfactory estimation of the random error component. All height models are influenced by tilts of different sizes. In addition, systematic deformations can be seen that do not influence the standard deviation much. The delivery of WorldDEM includes a height error map, which is based on the interferometric phase errors and on the number and location of coverages from different orbits. A dependency of the height accuracy on the height error map information and the number of coverages can be seen, but it is smaller than expected. WorldDEM is more accurate than the other investigated height models, and with its 10 m point spacing it includes more morphologic detail, visible in contour lines. The morphologic detail is close to that of the LiDAR digital surface model (DSM). As usual, a dependency of the accuracy on the terrain slope can be seen. In forest areas, the canopy definition of InSAR X- and C-band height models, as well as of height models based on optical satellite images, is not the same as the height definition by LiDAR. In addition, the interferometric phase uncertainty over forest areas is larger. Both effects lead to lower height accuracy in forest areas, which is also visible in the height error map.
Numerical Solutions of the Complete Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Robinson, David F.; Hassan, H. A.
1997-01-01
This report details the development of a new two-equation turbulence closure model based on the exact turbulent kinetic energy k and the variance of vorticity, zeta. The model, which is applicable to three dimensional flowfields, employs one set of model constants and does not use damping or wall functions, or geometric factors.
A Comprehensive Multi-Level Model for Campus-Based Leadership Education
ERIC Educational Resources Information Center
Rosch, David; Spencer, Gayle L.; Hoag, Beth L.
2017-01-01
Within this application brief, we propose a comprehensive model for mapping the shape and optimizing the effectiveness of leadership education in campus-wide university settings. The four-level model is highlighted by inclusion of a philosophy statement detailing the values and purpose of leadership education on campus, a set of skills and…
Attrition Cost Model Instruction Manual
ERIC Educational Resources Information Center
Yanagiura, Takeshi
2012-01-01
This instruction manual explains in detail how to use the Attrition Cost Model program, which estimates the cost of student attrition for a state's higher education system. Programmed with SAS, this model allows users to instantly calculate the cost of attrition and the cumulative attrition rate that is based on the most recent retention and…
A Generalized Approach to Defining Item Discrimination for DCMs
ERIC Educational Resources Information Center
Henson, Robert; DiBello, Lou; Stout, Bill
2018-01-01
Diagnostic classification models (DCMs, also known as cognitive diagnosis models) hold the promise of providing detailed classroom information about the skills a student has or has not mastered. Specifically, DCMs are special cases of constrained latent class models where classes are defined based on mastery/nonmastery of a set of attributes (or…
Testing for detailed balance in a financial market
NASA Astrophysics Data System (ADS)
Fiebig, H. R.; Musgrove, D. P.
2015-06-01
We test a historical price-time series from a financial market (the NASDAQ 100 index) for a statistical property known as detailed balance. The presence of detailed balance would imply that the market can be modeled by a stochastic process based on a Markov chain, thus leading to equilibrium. In economic terms, a positive outcome of the test would support the efficient market hypothesis, a cornerstone of neo-classical economic theory. In contrast to the usage in prevalent economic theory, the term equilibrium here is tied to the returns rather than to the price-time series. The test is based on an action functional S constructed from the elements of the detailed balance condition and the historical data set; S is then analyzed by means of simulated annealing. Checks are performed to verify the validity of the analysis method. We discuss the outcome of this analysis.
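The sketch below illustrates the property being tested, though not the authors' method: it discretizes a return series into a few states, estimates a transition matrix, and measures how far the empirical probability flows are from satisfying detailed balance. The paper's actual test builds an action functional S and analyzes it by simulated annealing.

```python
# Hedged sketch of a detailed-balance check on a discretized return series.
# Detailed balance requires pi_i * P_ij == pi_j * P_ji for all pairs (i, j).
import numpy as np

rng = np.random.default_rng(7)
returns = rng.standard_t(df=4, size=20000)        # stand-in for index returns
edges = np.quantile(returns, np.linspace(0, 1, 6)[1:-1])
states = np.digitize(returns, edges)              # 5 coarse return states

n = 5
counts = np.zeros((n, n))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)    # transition matrix
pi = counts.sum(axis=1) / counts.sum()            # empirical occupation

flow = pi[:, None] * P                            # probability flow i -> j
asymmetry = np.abs(flow - flow.T).sum() / flow.sum()
print("normalized detailed-balance violation: %.3f" % asymmetry)
```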
NASA Astrophysics Data System (ADS)
Cui, Yan; Liao, Xiaoping
2012-05-01
In this work, the modeling and design of a capacitive microwave power sensor employing a MEMS plate with clamped-clamped and free-free edges are presented. A novel analytical model of the sensor is established in detail. Using the presented mode-shape functions, the natural frequency is solved by the Rayleigh-Ritz method, and based on the introduced generalized coordinate, the displacement of the plate under microwave power irradiation is solved. Furthermore, the sensitivity to the power is derived. Detailed considerations of the design and simulation of the microwave characteristics of the sensor are also presented. A linearly graded ground plane in the coplanar waveguide is employed to avoid step discontinuity. The fabrication process, which is fully compatible with GaAs MMIC technology, is also described in detail. Measurement of the proposed sensor indicates a sensitivity of 7.2 fF W-1 and superior return and insertion losses (S11 and S21), less than -22.16 dB and -0.25 dB, respectively, up to 12 GHz, suggesting that it is suitable for microwave power detection in the X-band frequency range.
Aeroservoelastic and Flight Dynamics Analysis Using Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Arena, Andrew S., Jr.
1999-01-01
This document is based in large part on the Master's thesis of Cole Stephens. It encompasses a variety of technical and practical issues involved in using the STARS codes for aeroservoelastic analysis of vehicles, covering in great detail the technical issues and step-by-step procedures involved in the simulation of a system where aerodynamics, structures and controls are tightly coupled. Comparisons are made to a benchmark experimental program conducted at NASA Langley. One significant advantage of the methodology detailed here is that, as a result of the technique used to accelerate the CFD-based simulation, a systems model is produced which is very useful for developing the control law strategy and for subsequent high-speed simulations.
NASA Astrophysics Data System (ADS)
Hsu, L.; Lehnert, K. A.; Walker, J. D.; Chan, C.; Ash, J.; Johansson, A. K.; Rivera, T. A.
2011-12-01
Sample-based measurements in geochemistry are highly diverse, due to the large variety of sample types, measured properties, and idiosyncratic analytical procedures. In order to ensure the utility of sample-based data for re-use in research or education, they must be associated with a high quality and quantity of descriptive, discipline-specific metadata. Without an adequate level of documentation, it is not possible to reproduce scientific results or have confidence in using the data for new research inquiries. The required detail in data documentation makes it challenging to aggregate large sets of data from different investigators and disciplines. One solution to this challenge is to build data systems with several tiers of intricacy, where the less detailed tiers are geared toward discovery and interoperability, and the more detailed tiers have higher value for data analysis. The Geoinformatics for Geochemistry (GfG) group, which is part of the Integrated Earth Data Applications facility (http://www.iedadata.org), has taken this approach to provide services for the discovery, access, and analysis of sample-based geochemical data for a diverse user community, ranging from the highly informed geochemist to non-domain scientists and undergraduate students. GfG builds and maintains three tiers in the sample-based data systems, from a simple data catalog (Geochemical Resource Library), to a substantially richer data model for the EarthChem Portal (EarthChem XML), and finally to detailed discipline-specific data models for petrologic (PetDB), sedimentary (SedDB), hydrothermal spring (VentDB), and geochronological (GeoChron) samples. The data catalog, the lowest level in the hierarchy, contains the sample data values plus metadata only about the dataset itself (Dublin Core metadata such as dataset title and author), and therefore can accommodate the widest diversity of data holdings. The second level includes measured data values from the sample, basic information about the analytical method, and metadata about the samples such as geospatial information and sample type. The third and highest level includes detailed data quality documentation and more specific information about the scientific context of the sample. The three tiers are linked to allow users to quickly navigate to their desired level of metadata detail. Links are based on the use of unique identifiers: (a) the DOI at the granularity of datasets, and (b) the International Geo Sample Number (IGSN) at the granularity of samples. Current developments in the GfG sample-based systems include a new registry architecture for the IGSN to advance international implementation, growth and modification of EarthChemXML to include geochemical data for new sample types such as soils and liquids, and the construction of a hydrothermal vent data system. This flexible, tiered model provides a solution for offering varying levels of detail in order to aggregate a large quantity of data and serve the largest user group, from disciplinary novices to experts.
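A minimal sketch of the tiered idea in code, assuming hypothetical field names (the actual EarthChemXML and discipline-specific schemas are much richer): each tier adds metadata detail, and the tiers are linked through DOI and IGSN identifiers.

```python
# Hedged sketch of the three-tier linkage; field names are illustrative
# assumptions, not the actual EarthChemXML or PetDB/SedDB schemas.
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:                 # tier 1: discovery only (Dublin Core-like)
    doi: str
    title: str
    author: str

@dataclass
class PortalRecord:                 # tier 2: values + basic method/sample info
    igsn: str
    dataset_doi: str
    analyte: str
    value: float
    method: str
    lat: float
    lon: float

@dataclass
class DisciplineRecord:             # tier 3: full QC and scientific context
    igsn: str
    standards_used: list = field(default_factory=list)
    uncertainty_2sigma: float = 0.0
    petrologic_context: str = ""

entry = CatalogEntry("10.1594/IEDA/100001", "Basalt glasses, EPR", "Smith")
rec = PortalRecord("SSH000001", entry.doi, "SiO2", 50.3, "EPMA", -9.5, -104.2)
print("record links to dataset:", rec.dataset_doi == entry.doi)
```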
ERIC Educational Resources Information Center
Mantri, Archana
2014-01-01
The intent of the study presented in this paper is to show that the model of problem-based learning (PBL) can be made scalable by designing curriculum around a set of open-ended problems (OEPs). The detailed statistical analysis of the data collected to measure the effects of traditional and PBL instructions for three courses in Electronics and…
Analysis of 3d Building Models Accuracy Based on the Airborne Laser Scanning Point Clouds
NASA Astrophysics Data System (ADS)
Ostrowski, W.; Pilarska, M.; Charyton, J.; Bakuła, K.
2018-05-01
Creating 3D building models on a large scale is becoming more popular and finds many applications. Nowadays, the broad term "3D building models" can be applied to several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks all buildings in a dataset against ALS point clouds, testing both accuracy and level of detail. Analysis of statistical parameters of normal heights between the reference point cloud and the tested planes, combined with segmentation of the point cloud, provides a tool that can indicate which buildings, and which roof planes, do not fulfill the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
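The core per-plane test can be illustrated with a short sketch: fit a plane to the ALS points assigned to one roof plane and report the RMS of the normal distances. This is a generic plane-fit residual check on synthetic stand-in data, not the authors' full segmentation pipeline.

```python
# Hedged sketch: RMS of point-to-plane distances for one roof plane,
# using an SVD plane fit; the point cloud here is synthetic.
import numpy as np

def plane_rms(points):
    """Fit a plane by SVD and return RMS of point-to-plane distances."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                          # smallest singular vector
    d = (points - centroid) @ normal         # signed normal distances
    return np.sqrt((d ** 2).mean())

rng = np.random.default_rng(0)
pts = rng.random((200, 3))
pts[:, 2] = 0.3 * pts[:, 0] + rng.normal(0, 0.02, 200)   # noisy sloped plane
print("RMS distance: %.3f m" % plane_rms(pts))
```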
A stochastic two-scale model for pressure-driven flow between rough surfaces
Larsson, Roland; Lundström, Staffan; Wall, Peter; Almqvist, Andreas
2016-01-01
Seal surface topography typically consists of global-scale geometric features as well as local-scale roughness details, and homogenization-based approaches are therefore readily applied. These provide for resolving the global scale (large domain) with a relatively coarse mesh, while resolving the local scale (small domain) in high detail. As the total flow decreases, however, the flow pattern becomes tortuous, and this requires a larger local-scale domain to obtain a converged solution. Therefore, a classical homogenization-based approach might not be feasible for the simulation of very small flows. In order to study small flows, a model allowing feasibly sized local domains at very small flow rates is developed. This was made possible by coupling the two scales with a stochastic element. Results from numerical experiments show that the present model is in better agreement with the direct deterministic one than the conventional homogenization type of model, both quantitatively in terms of flow rate and qualitatively in reflecting the flow pattern. PMID:27436975
Uncertainty in temperature-based determination of time of death
NASA Astrophysics Data System (ADS)
Weiser, Martin; Erdmann, Bodo; Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Mall, Gita; Zachow, Stefan
2018-03-01
Temperature-based estimation of time of death (ToD) can be performed either with the help of simple phenomenological models of corpse cooling or with detailed mechanistic (thermodynamic) heat transfer models. The latter are much more complex, but allow a higher accuracy of ToD estimation as in principle all relevant cooling mechanisms can be taken into account. The potentially higher accuracy depends on the accuracy of tissue and environmental parameters as well as on the geometric resolution. We investigate the impact of parameter variations and geometry representation on the estimated ToD. For this, numerical simulation of analytic heat transport models is performed on a highly detailed 3D corpse model, that has been segmented and geometrically reconstructed from a computed tomography (CT) data set, differentiating various organs and tissue types. From that and prior information available on thermal parameters and their variability, we identify the most crucial parameters to measure or estimate, and obtain an a priori uncertainty quantification for the ToD.
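For contrast with the mechanistic model studied in the paper, the simplest phenomenological approach can be written in a few lines: invert single-exponential Newtonian cooling for the post-mortem interval. The constants below are illustrative assumptions, not forensic reference values.

```python
# Hedged sketch of the *phenomenological* end of the spectrum: inverting
# Newtonian cooling T(t) = Ta + (T0 - Ta) * exp(-k t) for t. The paper's
# mechanistic 3D heat-transfer model is far more detailed.
import math

def tod_hours(t_rectal, t_ambient, t_0=37.2, k=0.07):
    """Hours since death under single-exponential cooling (toy constants)."""
    return -math.log((t_rectal - t_ambient) / (t_0 - t_ambient)) / k

print("estimated post-mortem interval: %.1f h"
      % tod_hours(t_rectal=30.0, t_ambient=18.0))
```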
NASA Astrophysics Data System (ADS)
Jha, Pradeep Kumar
Capturing the effects of detailed chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-ω turbulence model is used to model the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here using the GRI-Mech 3.0 mechanism, from which the FPI tables are built. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial differential equations using a second-order accurate, fully coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two- and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretization. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame, a laminar co-flow diffusion flame, and a Sydney bluff-body turbulent reacting flow. Comparisons are made between the predicted results of the present FPI scheme and the Steady Laminar Flamelet Model (SLFM) approach for diffusion flames. The effects of grid resolution on the predicted overall flame solutions are also assessed. Other non-reacting flows have also been considered to further validate other aspects of the numerical scheme. The present scheme predicts results that are in good agreement with published experimental results and significantly reduces the computational cost involved in modelling turbulent diffusion flames, both in terms of storage and processing time.
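The presumed-PDF step admits a compact illustration: average a tabulated quantity phi(Z) over a beta distribution in mixture fraction Z whose shape parameters follow from the resolved mean and variance. The toy flamelet table below is an assumption; a real FPI table is built from GRI-Mech 3.0 solutions.

```python
# Hedged sketch of the presumed beta-PDF averaging step; the "flamelet
# table" here is a made-up Gaussian temperature profile, not real chemistry.
import numpy as np
from scipy.stats import beta
from scipy.integrate import trapezoid

def favre_mean(phi_table, z_table, z_mean, z_var):
    """Integrate phi(Z) against a beta PDF with matching mean/variance."""
    gamma = z_mean * (1 - z_mean) / z_var - 1.0     # shape parameter
    a, b = z_mean * gamma, (1 - z_mean) * gamma
    pdf = beta.pdf(z_table, a, b)
    return trapezoid(phi_table * pdf, z_table) / trapezoid(pdf, z_table)

z = np.linspace(1e-6, 1 - 1e-6, 400)
temp_of_z = 300 + 1700 * np.exp(-((z - 0.055) / 0.08) ** 2)  # toy table
print("mean T: %.0f K" % favre_mean(temp_of_z, z, z_mean=0.07, z_var=0.004))
```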
Detail view of the sculpted pediment on the south facade ...
Detail view of the sculpted pediment on the south facade entitled Recorder of the Archives; the artist was James Earle Fraser. The Great Danes in the corner were based on sketches by Fraser's assistant Bruce Moore, and the dogs behind the Great Danes are modeled after Fraser's own dogs. - National Archives, Constitution Avenue, between Seventh & Ninth Streets Northwest, Washington, District of Columbia, DC
Zhang, Y; Joines, W T; Jirtle, R L; Samulski, T V
1993-08-01
The magnitude of the E-field patterns generated by an annular array prototype device has been calculated and measured. Two models were used to describe the radiating sources: a simple linear dipole and a stripline antenna model. The stripline model includes the detailed geometry of the actual antennas used in the prototype and an estimate of the antenna current based on microstrip transmission line theory. This more detailed model yields better agreement with the measured field patterns, reducing the rms discrepancy by a factor of about 6 (from approximately 23% to 4%) in the central region of interest, where the SEM is within 25% of the maximum. We conclude that accurate modeling of source current distributions is important for determining SEM distributions associated with such heating devices.
Implementing secure laptop-based testing in an undergraduate nursing program: a case study.
Tao, Jinyuan; Lorentz, B Chris; Hawes, Stacey; Rugless, Fely; Preston, Janice
2012-07-01
This article presents the implementation of secure laptop-based testing in an undergraduate nursing program. Details on how to design, develop, implement, and secure tests are discussed. The laptop-based testing model is also compared with the computer-laboratory-based testing model. Five elements of the laptop-based testing model are illustrated: (1) it simulates the national board examination, (2) security is achievable, (3) it is convenient for both instructors and students, (4) it provides students with hands-on practice, and (5) continuous technical support is key.
Performance-Based Funding of Higher Education: A Detailed Look at Best Practices in 6 States
ERIC Educational Resources Information Center
Miao, Kysie
2012-01-01
Performance-based funding is a system based on allocating a portion of a state's higher education budget according to specific performance measures such as course completion, credit attainment, and degree completion, instead of allocating funding based entirely on enrollment. It is a model that provides a fuller picture of how successfully…
An Efficient Analysis Methodology for Fluted-Core Composite Structures
NASA Technical Reports Server (NTRS)
Oremont, Leonard; Schultz, Marc R.
2012-01-01
The primary loading condition in launch-vehicle barrel sections is axial compression, and it is therefore important to understand the compression behavior of any structures, structural concepts, and materials considered in launch-vehicle designs. This understanding will necessarily come from a combination of test and analysis. However, certain potentially beneficial structures and structural concepts do not lend themselves to commonly used simplified analysis methods, and innovative analysis methodologies must therefore be developed if these structures and structural concepts are to be considered. This paper discusses such an analysis technique for the fluted-core sandwich composite structural concept. The presented technique is based on commercially available finite-element codes and uses shell elements to capture behavior that would normally require solid elements to represent the detailed mechanical response of the structure. The shell thicknesses and offsets in this analysis technique are parameterized, and the parameters are adjusted through a heuristic procedure until the model matches the mechanical behavior of a more detailed shell-and-solid model. Additionally, the detailed shell-and-solid model can be strategically placed in a larger, global shell-only model to capture important local behavior. Comparisons between shell-only models, experiments, and more detailed shell-and-solid models show excellent agreement. The discussed analysis methodology, though only discussed in the context of fluted-core composites, is widely applicable to other concepts.
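The heuristic matching procedure can be illustrated abstractly: treat the shell-model parameters as unknowns and adjust them until the cheap model reproduces a detailed reference response. In the sketch below both "models" are stand-in functions; the paper adjusts actual finite-element models.

```python
# Hedged sketch of the calibration idea: least-squares adjustment of a
# cheap model's parameters to match a detailed reference (both stand-ins).
import numpy as np
from scipy.optimize import least_squares

loads = np.linspace(1.0, 10.0, 20)

def detailed_response():
    k_true, offset_true = 1.8, 0.35          # "detailed model" behavior
    return loads / k_true + offset_true

def shell_response(params):
    k, offset = params
    return loads / k + offset                # parameterized cheap model

ref = detailed_response()
fit = least_squares(lambda p: shell_response(p) - ref, x0=[1.0, 0.0])
print("calibrated parameters:", fit.x)
```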
Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Rehman, Naveed Ur; Siddiqui, Mubashir Ali
2017-03-01
In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
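A minimal sketch of the sampling-and-fitting pipeline follows, with a stand-in for the thermodynamic model and made-up parameter bounds: Latin hypercube samples of the inputs, simulated outputs, and a least-squares response surface with its coefficient of determination.

```python
# Hedged sketch: LHS sampling + response-surface fit; the "simulate"
# function and bounds are stand-ins, not the paper's SCTEG model.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=1)
unit = sampler.random(n=200)
X = qmc.scale(unit, [600.0, 0.5], [1200.0, 2.0])   # flux, load ratio (toy)

def simulate(x):
    flux, load = x
    return 1e-4 * flux * load / (1 + load) ** 2    # stand-in physics

y = np.array([simulate(x) for x in X])
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - ((y - A @ coef) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("R^2 = %.4f" % r2)
```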
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.
2011-01-01
The Fission Power System (FPS) project is developing a Technology Demonstration Unit (TDU) to verify the performance and functionality of a subscale version of the FPS reference concept in a relevant environment, and to verify component and system models. As hardware is developed for the TDU, component and system models must be refined to include the details of specific component designs. This paper describes the development of a Sage-based pseudo-steady-state Stirling convertor model and its implementation into a system-level model of the TDU.
Multi-scale coarse-graining for the study of assembly pathways in DNA-brick self-assembly.
Fonseca, Pedro; Romano, Flavio; Schreck, John S; Ouldridge, Thomas E; Doye, Jonathan P K; Louis, Ard A
2018-04-07
Inspired by recent successes using single-stranded DNA tiles to produce complex structures, we develop a two-step coarse-graining approach that uses detailed thermodynamic calculations with oxDNA, a nucleotide-based model of DNA, to parametrize a coarser kinetic model that can reach the time and length scales needed to study the assembly mechanisms of these structures. We test the model by performing a detailed study of the assembly pathways for a two-dimensional target structure made up of 334 unique strands each of which are 42 nucleotides long. Without adjustable parameters, the model reproduces a critical temperature for the formation of the assembly that is close to the temperature at which assembly first occurs in experiments. Furthermore, the model allows us to investigate in detail the nucleation barriers and the distribution of critical nucleus shapes for the assembly of a single target structure. The assembly intermediates are compact and highly connected (although not maximally so), and classical nucleation theory provides a good fit to the height and shape of the nucleation barrier at temperatures close to where assembly first occurs.
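The classical-nucleation-theory fit mentioned above has a simple two-dimensional form: a bulk free-energy gain proportional to cluster size n, opposed by an edge penalty proportional to sqrt(n). The sketch below locates the critical nucleus for illustrative coefficients; the paper's values come from the oxDNA-parametrized kinetic model.

```python
# Hedged sketch of a 2D classical-nucleation-theory barrier; dmu and edge
# are illustrative assumptions in units of kT, not fitted values.
import numpy as np

def delta_G(n, dmu=0.4, edge=2.0):
    """2D CNT form: -n*dmu (bulk gain) + edge*sqrt(n) (edge penalty)."""
    return -n * dmu + edge * np.sqrt(n)

n = np.arange(1, 200)
G = delta_G(n)
print("critical nucleus: %d strands, barrier %.1f kT" % (n[G.argmax()], G.max()))
```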
Efficient Use of Video for 3d Modelling of Cultural Heritage Objects
NASA Astrophysics Data System (ADS)
Alsadik, B.; Gerke, M.; Vosselman, G.
2015-03-01
Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still-image shooting in IBM techniques, because the latter requires thorough planning and proficiency. However, three main problems arise when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake in a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient for decreasing the processing time and for creating a reliable textured 3D model compared with models produced by still imaging. Two experiments, modelling a building and a monument, are conducted using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Depending on the object complexity and the video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
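One common way to thin a video into usable frames is sketched below under assumed thresholds: sample every N-th frame and keep it only if its variance-of-Laplacian sharpness passes a cutoff. The paper's selection method additionally accounts for object coverage, and the input filename here is hypothetical.

```python
# Hedged sketch: keyframe selection by a variance-of-Laplacian blur metric.
# Thresholds and the input video are illustrative assumptions.
import cv2

def select_frames(path, every_n=15, sharpness_min=100.0):
    cap, keep, idx = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Higher Laplacian variance means a sharper frame.
            if cv2.Laplacian(gray, cv2.CV_64F).var() >= sharpness_min:
                keep.append(idx)
        idx += 1
    cap.release()
    return keep

# print(select_frames("monument.mp4"))   # hypothetical input video
```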
Statistical inference for capture-recapture experiments
Pollock, Kenneth H.; Nichols, James D.; Brownie, Cavell; Hines, James E.
1990-01-01
This monograph presents a detailed, practical exposition on the design, analysis, and interpretation of capture-recapture studies. The Lincoln-Petersen model (Chapter 2) and the closed population models (Chapter 3) are presented only briefly because these models have been covered in detail elsewhere. The Jolly-Seber open population model, which is central to the monograph, is covered in detail in Chapter 4. In Chapter 5 we consider the "enumeration" or "calendar of captures" approach, which is widely used by mammalogists and other vertebrate ecologists. We strongly recommend that it be abandoned in favor of analyses based on the Jolly-Seber model. We consider 2 restricted versions of the Jolly-Seber model. We believe the first of these, which allows losses (mortality or emigration) but not additions (births or immigration), is likely to be useful in practice. Another series of restrictive models requires the assumptions of a constant survival rate or a constant survival rate and a constant capture rate for the duration of the study. Detailed examples are given that illustrate the usefulness of these restrictions. There often can be a substantial gain in precision over Jolly-Seber estimates. In Chapter 5 we also consider 2 generalizations of the Jolly-Seber model. The temporary trap response model allows newly marked animals to have different survival and capture rates for 1 period. The other generalization is the cohort Jolly-Seber model. Ideally all animals would be marked as young, and age effects considered by using the Jolly-Seber model on each cohort separately. In Chapter 6 we present a detailed description of an age-dependent Jolly-Seber model, which can be used when 2 or more identifiable age classes are marked. In Chapter 7 we present a detailed description of the "robust" design. Under this design each primary period contains several secondary sampling periods. We propose an estimation procedure based on closed and open population models that allows for heterogeneity and trap response of capture rates (hence the name robust design). We begin by considering just 1 age class and then extend to 2 age classes. When there are 2 age classes it is possible to distinguish immigrants and births. In Chapter 8 we give a detailed discussion of the design of capture-recapture studies. First, capture-recapture is compared to other possible sampling procedures. Next, the design of capture-recapture studies to minimize assumption violations is considered. Finally, we consider the precision of parameter estimates and present figures on proportional standard errors for a variety of initial parameter values to aid the biologist about to plan a study. A new program, JOLLY, has been written to accompany the material on the Jolly-Seber model (Chapter 4) and its extensions (Chapter 5). Another new program, JOLLYAGE, has been written for a special case of the age-dependent model (Chapter 6) where there are only 2 age classes. In Chapter 9 a brief description of the different versions of the 2 programs is given. Chapter 10 gives a brief description of some alternative approaches that were not considered in this monograph. We believe that an excellent overall view of capture-recapture models may be obtained by reading the monograph by White et al. (1982) emphasizing closed models and then reading this monograph where we concentrate on open models. The important recent monograph by Burnham et al. (1987) could then be read if there were interest in the comparison of different populations.
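The simplest estimator in this hierarchy, the Lincoln-Petersen abundance estimate (in its bias-corrected Chapman form), fits in a few lines; the counts below are invented for illustration.

```python
# Hedged sketch: Chapman's bias-corrected Lincoln-Petersen estimator.
def chapman(n1, n2, m2):
    """n1 marked in sample 1, n2 caught in sample 2, m2 recaptures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

print("N_hat = %.0f" % chapman(n1=120, n2=150, m2=18))
```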
Using the model statement to elicit information and cues to deceit in interpreter-based interviews.
Vrij, Aldert; Leal, Sharon; Mann, Samantha; Dalton, Gary; Jo, Eunkyung; Shaboltas, Alla; Khaleeva, Maria; Granskaya, Juliana; Houston, Kate
2017-06-01
We examined how the presence of an interpreter during an interview affects eliciting information and cues to deceit, while using a method that encourages interviewees to provide more detail (model statement, MS). A total of 199 Hispanic, Korean and Russian participants were interviewed either in their own native language without an interpreter, or through an interpreter. Interviewees either lied or told the truth about a trip they made during the last twelve months. Half of the participants listened to a MS at the beginning of the interview. The dependent variables were 'detail', 'complications', 'common knowledge details', 'self-handicapping strategies' and 'ratio of complications'. In the MS-absent condition, the interviews resulted in less detail when an interpreter was present than when an interpreter was absent. In the MS-present condition, the interviews resulted in a similar amount of detail in the interpreter-present and interpreter-absent conditions. Truthful statements included more complications and fewer common knowledge details and self-handicapping strategies than deceptive statements, and the ratio of complications was higher for truth tellers than for liars. The MS strengthened these results, whereas an interpreter had no effect on them.
School-Based Instructional Rounds: Improving Teaching and Learning across Classrooms
ERIC Educational Resources Information Center
Teitel, Lee
2013-01-01
In "School-Based Instructional Rounds," Teitel offers detailed case studies of five different models of school-based rounds and investigates critical learning from each. Instructional rounds--one of the most innovative and powerful approaches to improving teaching and learning--has been taken up by districts across the country and around…
Toward micro-scale spatial modeling of gentrification
NASA Astrophysics Data System (ADS)
O'Sullivan, David
A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.
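A heavily simplified sketch of a rent-gap style transition rule follows, on a plain square grid rather than the paper's irregular proximal-space automaton; the rents, thresholds and neighborhood rule are illustrative assumptions only.

```python
# Hedged sketch of a rent-gap transition rule on a regular grid (the paper
# uses an *irregular* cellular automaton); all values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
potential = rng.uniform(50, 100, (40, 40))       # potential ground rent
capitalized = rng.uniform(20, 90, (40, 40))      # rent under current use

def step(capitalized, threshold=25.0, rate=0.5):
    gap = potential - capitalized
    # Neighborhood influence via simple 4-neighbor averaging of the gap.
    nbr = (np.roll(gap, 1, 0) + np.roll(gap, -1, 0) +
           np.roll(gap, 1, 1) + np.roll(gap, -1, 1)) / 4
    invest = (gap + nbr) / 2 > threshold         # redevelopment decision
    return np.where(invest, capitalized + rate * gap, capitalized)

for _ in range(30):
    capitalized = step(capitalized)
print("mean remaining rent gap: %.1f" % (potential - capitalized).mean())
```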
Identifying Model-Based Reconfiguration Goals through Functional Deficiencies
NASA Technical Reports Server (NTRS)
Benazera, Emmanuel; Trave-Massuyes, Louise
2004-01-01
Model-based diagnosis is now advanced to the point where autonomous systems can face uncertain and faulty situations with success. The next step toward more autonomy is to have the system recover itself after faults occur, a process known as model-based reconfiguration. Given a prediction of the nominal behavior of the system and the result of the diagnosis operation after faults occur, this paper details how to automatically determine the functional deficiencies of the system. These deficiencies are characterized in the case of uncertain state estimates. A methodology is then presented to determine the reconfiguration goals based on the deficiencies. Finally, a recovery process interleaves planning and model predictive control to restore the functionalities in prioritized order.
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable. We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.
Lin, Chi-Ying; Hsu, Bing-Cheng
2018-05-14
Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme.
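The image-based criterion can be illustrated with a short sketch: fit a Gaussian to the gray-level distribution of a (here synthetic) surface image and read off the fitted parameters as a uniformity measure. The data and interpretation thresholds are stand-ins, not the authors' calibration.

```python
# Hedged sketch: Gaussian fit to a gray-level histogram as a finish-quality
# criterion; the "image" is synthetic stand-in data.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(1)
pixels = np.clip(rng.normal(140, 12, 100_000), 0, 255)   # stand-in image
hist, edges = np.histogram(pixels, bins=64, range=(0, 255), density=True)
centers = (edges[:-1] + edges[1:]) / 2

params, _ = curve_fit(gaussian, centers, hist, p0=[hist.max(), 128, 20])
amp, mu, sigma = params
print("fitted mu=%.1f sigma=%.1f" % (mu, sigma))  # small sigma ~ uniform finish
```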
ERIC Educational Resources Information Center
Kenny, John; Fluck, Andrew; Jetson, Tim
2012-01-01
This paper presents a detailed case study of the development and implementation of a quantifiable academic workload model in the education faculty of an Australian university. Flowing from the enterprise bargaining process, the Academic Staff Agreement required the implementation of a workload allocation model for academics that was quantifiable…
Clarification and Purpose of the Race-Based Traumatic Stress Injury Model
ERIC Educational Resources Information Center
Carter, Robert T.
2007-01-01
The author responds to the four reactions authored by Thompson-Miller and Feagin, Griffin, Speight, and Bryant-Davis in this issue. In responding, he clarifies the purpose of the model presented in his major contribution and adds information about the model's legal and forensic applications that he did not touch on with any detail. He also…
Improving the FLORIS wind plant model for compatibility with gradient-based optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Jared J.; Gebraad, Pieter MO; Ning, Andrew
The FLORIS (FLOw Redirection and Induction in Steady-state) model, a parametric wind turbine wake model that predicts steady-state wake characteristics based on wind turbine position and yaw angle, was developed for optimization of control settings and turbine locations. This article provides details on changes made to the FLORIS model to make it more suitable for gradient-based optimization. Changes were made to remove discontinuities and to add curvature to regions of non-physical zero gradient. Exact gradients for the FLORIS model were obtained using algorithmic differentiation. A set of three case studies demonstrates that using exact gradients with gradient-based optimization reduces the number of function calls by several orders of magnitude. The case studies also show that adding curvature improves convergence behavior, allowing gradient-based optimization algorithms used with the FLORIS model to more reliably find better solutions to wind farm optimization problems.
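Why "adding curvature" matters can be shown in a few lines: a hard wake cutoff has an exactly zero gradient outside the wake, so a gradient-based optimizer gets no signal to move a turbine toward or away from the wake edge. The logistic blend below is an illustrative fix, not the actual FLORIS modification.

```python
# Hedged sketch: a hard wake-deficit cutoff versus a smooth blend, and the
# gradient each presents just outside the wake (all values illustrative).
import numpy as np

def deficit_hard(r, r_wake=1.0, d0=0.3):
    return d0 if abs(r) < r_wake else 0.0              # flat outside the wake

def deficit_smooth(r, r_wake=1.0, d0=0.3, s=8.0):
    return d0 / (1.0 + np.exp(s * (abs(r) - r_wake)))  # logistic blend

r, h = 1.2, 1e-6                                       # just outside the wake
g_hard = (deficit_hard(r + h) - deficit_hard(r - h)) / (2 * h)
g_smooth = (deficit_smooth(r + h) - deficit_smooth(r - h)) / (2 * h)
print("gradient hard: %.2e, smooth: %.2e" % (g_hard, g_smooth))
```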
Micro-foundations for macroeconomics: New set-up based on statistical physics
NASA Astrophysics Data System (ADS)
Yoshikawa, Hiroshi
2016-12-01
Modern macroeconomics is built on "micro foundations": the optimization of micro agents such as consumers and firms is explicitly analyzed in the model. Toward this goal, the standard model presumes a "representative" consumer/firm and analyzes its behavior in detail. However, the macroeconomy consists of some 10^7 consumers and 10^6 firms. For the purpose of analyzing such a macro system, it is meaningless to pursue micro behavior in detail. In this respect, there is no essential difference between economics and physics. The method of statistical physics can be usefully applied to the macroeconomy, and it provides Keynesian economics with correct micro-foundations.
NASA Astrophysics Data System (ADS)
Zahid, F.; Paulsson, M.; Polizzi, E.; Ghosh, A. W.; Siddiqui, L.; Datta, S.
2005-08-01
We present a transport model for molecular conduction involving an extended Hückel theoretical treatment of the molecular chemistry combined with a nonequilibrium Green's function treatment of quantum transport. The self-consistent potential is approximated by the CNDO (complete neglect of differential overlap) method, and the electrostatic effects of the metallic leads (bias and image charges) are included through a three-dimensional finite element method. This allows us to capture spatial details of the electrostatic potential profile, including effects of charging, screening, and complicated electrode configurations, employing only a single adjustable parameter to locate the Fermi energy. As this model is based on semiempirical methods, it is computationally inexpensive and flexible compared to ab initio models, yet at the same time it is able to capture salient qualitative features as well as several relevant quantitative details of transport. We apply our model to investigate recent experimental data on alkane dithiol molecules obtained in a nanopore setup. We also present a comparison study of single-molecule transistors and identify electronic properties that control their performance.
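The transport kernel such models share is the NEGF/Landauer transmission T(E) = Tr[Γ_L G Γ_R G†]. The sketch below evaluates it for a toy three-site tight-binding "molecule" with wide-band leads; the Hamiltonian and broadening are illustrative assumptions, not the paper's extended Hückel/CNDO treatment.

```python
# Hedged sketch: Landauer transmission for a toy 3-site chain with
# wide-band leads attached to the end sites (parameters illustrative).
import numpy as np

H = (np.diag([-0.5, 0.2, -0.5])
     + np.diag([-1.0, -1.0], 1) + np.diag([-1.0, -1.0], -1))
gamma = 0.1                                   # lead broadening (wide-band)

def transmission(E):
    sigma_L = np.zeros((3, 3), complex); sigma_L[0, 0] = -0.5j * gamma
    sigma_R = np.zeros((3, 3), complex); sigma_R[2, 2] = -0.5j * gamma
    G = np.linalg.inv(E * np.eye(3) - H - sigma_L - sigma_R)  # retarded GF
    Gam_L = 1j * (sigma_L - sigma_L.T.conj())
    Gam_R = 1j * (sigma_R - sigma_R.T.conj())
    return np.trace(Gam_L @ G @ Gam_R @ G.T.conj()).real

for E in (-1.0, 0.0, 1.0):
    print("T(%.1f eV) = %.3f" % (E, transmission(E)))
```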
Harvey, Adam C; Vrij, Aldert; Leal, Sharon; Lafferty, Marcus; Nahari, Galit
2017-03-01
The Verifiability Approach (VA) is a verbal lie detection tool that has shown promise when applied to insurance claims settings. This study examined the effectiveness of incorporating a Model Statement comprised of checkable information into the VA protocol for enhancing the verbal differences between liars and truth tellers. The study experimentally manipulated supplementing (or withholding) the VA with a Model Statement. It was hypothesised that such a manipulation would (i) encourage truth tellers to provide more verifiable details than liars and (ii) encourage liars to report more unverifiable details than truth tellers (compared to the no-Model-Statement control). As a result, it was hypothesised that (iii) the Model Statement would improve the classificatory accuracy of the VA. Participants provided 40 genuine and 40 fabricated insurance claim statements; half of the liars and truth tellers were given a Model Statement as part of the VA procedure, and half were not. All three hypotheses were supported. In terms of accuracy, the Model Statement increased classificatory rates of the VA considerably, from 65.0% to 90.0%. Providing interviewees with a Model Statement prime consisting of checkable detail appears to be a useful refinement of the VA procedure.
Bittig, Arne T; Uhrmacher, Adelinde M
2017-01-01
Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations, via the spatial Stochastic Simulation Algorithm, to tracking the Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and makes it easy to adapt the spatial resolution of models.
An infrastructure for accurate characterization of single-event transients in digital circuits.
Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael
2013-11-01
We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure.
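The standard double-exponential SET injection current referred to above has a closed form whose time integral recovers the collected charge. The sketch below uses typical illustrative time constants, not the parameters calibrated against the 3D device simulations.

```python
# Hedged sketch: the standard double-exponential SET current source used in
# Spice-level fault injection; tau values are typical assumptions only.
import numpy as np

def set_current(t, q_coll=50e-15, tau_r=5e-12, tau_f=150e-12):
    """I(t) in amps for collected charge q_coll (coulombs), t >= 0."""
    norm = q_coll / (tau_f - tau_r)
    return norm * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

t = np.linspace(0, 1e-9, 2001)
i = set_current(t)
print("peak current: %.2f mA at t = %.1f ps"
      % (1e3 * i.max(), 1e12 * t[i.argmax()]))
# The integral of I(t) over [0, inf) equals q_coll, as required.
```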
NASA Astrophysics Data System (ADS)
Rasztovits, S.; Dorninger, P.
2013-07-01
Terrestrial Laser Scanning (TLS) is an established method to reconstruct the geometric surface of given objects. Current systems allow for fast and efficient determination of 3D models with high accuracy and richness of detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, emerging free software services as well as open-source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web services operate completely automatically. An indisputable advantage of image-based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models are lower than those of laser scanning data. Within this contribution, we investigate the results of automated web services for image-based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and using image series from different digital cameras. Two different web services, namely Arc3D and Autodesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economic comparison of the generation of the models is given, considering interactive and processing time costs.
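Difference models of the kind used here can be computed with a nearest-neighbor query from each web-service model point to the TLS reference cloud, as in this sketch with synthetic stand-in clouds.

```python
# Hedged sketch: cloud-to-cloud difference model via nearest-neighbor
# distances; both point clouds here are synthetic stand-ins.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
reference = rng.random((5000, 3))                 # stand-in TLS cloud
candidate = reference + rng.normal(0, 0.004, reference.shape)

tree = cKDTree(reference)
dists, _ = tree.query(candidate)                  # per-point distances
print("mean %.1f mm, 95%% %.1f mm"
      % (1e3 * dists.mean(), 1e3 * np.quantile(dists, 0.95)))
```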
ATLAS - A new Lagrangian transport and mixing model with detailed stratospheric chemistry
NASA Astrophysics Data System (ADS)
Wohltmann, I.; Rex, M.; Lehmann, R.
2009-04-01
We present a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing called ATLAS. Lagrangian models have some crucial advantages over Eulerian grid-box-based models, like no numerical diffusion, no limitation of the model time step by the CFL criterion, conservation of mixing ratios by design, and easy parallelization of code. The transport module is based on a trajectory code developed at the Alfred Wegener Institute. The horizontal and vertical resolution, the vertical coordinate system (pressure, potential temperature, hybrid coordinate) and the time step of the model are flexible, so that the model can be used both for process studies and long-term runs over several decades. Mixing of the Lagrangian air parcels is parameterized based on the local shear and strain of the flow with a method similar to that used in the CLaMS model, but with some modifications like a triangulation that introduces no vertical layers. The stratospheric chemistry module was developed at the Institute and includes 49 species and 170 reactions and a detailed treatment of heterogeneous chemistry on polar stratospheric clouds. We present an overview of the model architecture, the transport and mixing concept and some validation results. Comparisons of model results with tracer data from flights of the ER2 aircraft in the stratospheric polar vortex in 1999/2000, which are able to resolve fine tracer filaments, show that excellent agreement with observed tracer structures can be achieved with a suitable mixing parameterization.
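As an illustration of the Lagrangian transport idea (not the actual ATLAS trajectory code), advecting an air parcel through a given wind field with a fourth-order Runge-Kutta step might look like the sketch below; the analytic solid-body-rotation wind field is a stand-in for winds interpolated from meteorological analyses.

import numpy as np

def wind(x, t):
    # Illustrative 2D solid-body-rotation wind field [m/s]; a real CTM
    # would interpolate winds from meteorological analyses here.
    return np.array([-x[1], x[0]]) * 1e-5

def rk4_step(x, t, dt):
    # One fourth-order Runge-Kutta advection step for a single air parcel.
    k1 = wind(x, t)
    k2 = wind(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = wind(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = wind(x + dt * k3, t + dt)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

parcel = np.array([1.0e6, 0.0])      # initial position [m]
for step in range(24):               # 24 one-hour time steps
    parcel = rk4_step(parcel, step * 3600.0, 3600.0)
print(parcel)

Because each parcel is independent until the mixing step, a loop like this parallelizes trivially, which is one of the Lagrangian advantages noted above.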
Integration of Dynamic Models in Range Operations
NASA Technical Reports Server (NTRS)
Bardina, Jorge; Thirumalainambi, Rajkumar
2004-01-01
This work addresses the various model interactions in real time to make an efficient internet-based decision-making tool for Shuttle launch. The decision-making tool depends on the launch commit criteria coupled with physical models. Dynamic interaction between a wide variety of simulation applications and techniques, embedded algorithms, and data visualizations is needed to exploit the full potential of modeling and simulation. This paper also discusses in detail web-based 3-D graphics and its applications to range safety. The advantages of this dynamic model integration are secure accessibility and distribution of real-time information to other NASA centers.
SU-C-303-03: Dosimetric Model of the Beagle Needed for Pre-Clinical Testing of Radiopharmaceuticals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, M; Sands, M; Bolch, W
2015-06-15
Purpose: Large animal models, most popularly beagles, have been crucial surrogates to humans in determining radiation safety levels of radiopharmaceuticals. This study aims to develop a detailed beagle phantom to accurately approximate organ absorbed doses for preclinical therapy nuclear medicine studies. Methods: A 3D NURBS model was created based on a whole-body CT of an adult beagle. Bones were harvested and CT imaged to offer macroscopic skeletal detail. Samples of trabecular spongiosa were cored and imaged to offer microscopic skeletal detail for bone trabeculae and marrow volume fractions. Results: Organ masses in the model are typical of an adult beagle. Trends in volume fractions for skeletal dosimetry are fundamentally similar to those found in existing models of other canine species. Conclusion: This work warrants its use in further investigations of radiation transport calculation for electron and photon dosimetry. This model accurately represents the anatomy of a beagle, and can be directly translated into a usable geometry for a voxel-based Monte Carlo radiation transport program such as MCNP6. Work supported by a grant from the Hyundai Hope on Wheels Foundation for Pediatric Cancer Research.
75 FR 72611 - Assessments, Large Bank Pricing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-24
... the worst risk ranking and are included in the statistical analysis. Appendix 1 to the NPR describes the statistical analysis in detail. \\12\\ The percentage approximated by factors is based on the statistical model for that particular year. Actual weights assigned to each scorecard measure are largely based...
ERIC Educational Resources Information Center
Okawa, Yayoi; Nakamura, Shigemi; Kudo, Minako; Ueda, Satoshi
2009-01-01
The purpose of this study is to confirm the working hypothesis on two major models of functioning decline and two corresponding models of rehabilitation program in an older population through detailed interviews with the persons who have functioning declines and on-the-spot observations of key activities on home visits. A total of 542…
Mathematical Modeling Projects: Success for All Students
ERIC Educational Resources Information Center
Shelton, Therese
2018-01-01
Mathematical modeling allows flexibility for a project-based experience. We share details of our regular capstone course, successful for virtually 100% of our math majors for almost two decades. Our research-like approach in this course accommodates a variety of student backgrounds and interests, and has produced some award-winning student…
"No Soy de Aqui ni Soy de Alla": Transgenerational Cultural Identity Formation
ERIC Educational Resources Information Center
Cardona, Jose Ruben Parra; Busby, Dean M.; Wampler, Richard S.
2004-01-01
The transgenerational cultural identity model offers a detailed understanding of the immigration experience by challenging agendas of assimilation and by expanding on existing theories of cultural identity. Based on this model, immigration is a complex phenomenon influenced by many variables such as sociopsychological dimensions, family,…
Campus network security model study
NASA Astrophysics Data System (ADS)
Zhang, Yong-ku; Song, Li-ren
2011-12-01
Campus network security is of growing importance. Designing an effective defense against hacker attacks, viruses, and data theft, together with an internal defense system, is the focus of the study in this paper. The paper compares firewall and IDS approaches, integrates them into the design of a campus network security model, and details the specific implementation principles.
Beaudouin, Rémy; Micallef, Sandrine; Brochot, Céline
2010-06-01
Physiologically based pharmacokinetic (PBPK) models have proven to be successful in integrating and evaluating the influence of age- or gender-dependent changes with respect to the pharmacokinetics of xenobiotics throughout entire lifetimes. Nevertheless, for an effective application of toxicokinetic modelling to chemical risk assessment, a PBPK model has to be detailed enough to include all the multiple tissues that could be targeted by the various xenobiotics present in the environment. For this reason, we developed a PBPK model based on a detailed compartmentalization of the human body and parameterized with new relationships describing the time evolution of physiological and anatomical parameters. To take into account the impact of human variability on the predicted toxicokinetics, we defined probability distributions for key parameters related to the xenobiotics absorption, distribution, metabolism and excretion. The model predictability was evaluated by a direct comparison between computational predictions and experimental data for the internal concentrations of two chemicals (1,3-butadiene and 2,3,7,8-tetrachlorodibenzo-p-dioxin). A good agreement between predictions and observed data was achieved for different scenarios of exposure (e.g., acute or chronic exposure and different populations). Our results support that the general stochastic PBPK model can be a valuable computational support in the area of chemical risk analysis. (c)2010 Elsevier Inc. All rights reserved.
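To make the PBPK approach concrete, here is a drastically simplified, flow-limited two-tissue sketch, not the detailed whole-body model described above; the compartment layout and every parameter value are hypothetical.

import numpy as np
from scipy.integrate import solve_ivp

Q_liver, Q_rest = 90.0, 260.0              # blood flows [L/h] (hypothetical)
V_blood, V_liver, V_rest = 5.0, 1.8, 60.0  # compartment volumes [L]
P_liver, P_rest = 2.0, 1.0                 # tissue:blood partition coefficients
CL_int = 30.0                              # hepatic intrinsic clearance [L/h]

def pbpk(t, y):
    c_b, c_l, c_r = y  # blood, liver, rest-of-body concentrations
    dc_b = (Q_liver * (c_l / P_liver - c_b) + Q_rest * (c_r / P_rest - c_b)) / V_blood
    dc_l = (Q_liver * (c_b - c_l / P_liver) - CL_int * c_l / P_liver) / V_liver
    dc_r = Q_rest * (c_b - c_r / P_rest) / V_rest
    return [dc_b, dc_l, dc_r]

sol = solve_ivp(pbpk, (0.0, 24.0), [10.0, 0.0, 0.0])  # 24 h after an i.v. dose
print(sol.y[:, -1])

Population variability of the kind described above would then be modeled by drawing parameters such as CL_int from probability distributions and re-running the simulation in a Monte Carlo loop.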
A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1998-01-01
Simulation of high speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simplistic models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure; all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a beginning step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer is able to effectively use multidimensional CFD results in control system design and analysis.
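The core of such a CFD-based linear modeling step can be sketched as a finite-difference linearization about a steady-state operating point; the toy residual function below stands in for the multidimensional CFD solution, and the names f, x0 and u0 are illustrative.

import numpy as np

def residual(x, u):
    # Toy nonlinear dynamics dx/dt = f(x, u); in the paper's setting this
    # information would come from the steady-state CFD solution.
    return np.array([-x[0]**2 + u[0], x[0] - 2.0 * x[1]])

def linearize(f, x0, u0, eps=1e-6):
    # Central-difference A and B matrices of the small-perturbation model
    # d(dx)/dt = A dx + B du about the operating point (x0, u0).
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

x0, u0 = np.array([1.0, 0.5]), np.array([1.0])  # steady point: f(x0, u0) = 0
A, B = linearize(residual, x0, u0)
print(A, B, sep="\n")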
Lessons learned in detailed clinical modeling at Intermountain Healthcare
Oniki, Thomas A; Coyle, Joseph F; Parker, Craig G; Huff, Stanley M
2014-01-01
Background and objective: Intermountain Healthcare has a long history of using coded terminology and detailed clinical models (DCMs) to govern storage of clinical data to facilitate decision support and semantic interoperability. The latest iteration of DCMs at Intermountain is called the clinical element model (CEM). We describe the lessons learned from our CEM efforts with regard to subjective decisions a modeler frequently needs to make in creating a CEM. We present insights and guidelines, but also describe situations in which use cases conflict with the guidelines. We propose strategies that can help reconcile the conflicts. The hope is that these lessons will be helpful to others who are developing and maintaining DCMs in order to promote sharing and interoperability. Methods: We have used the Clinical Element Modeling Language (CEML) to author approximately 5000 CEMs. Results: Based on our experience, we have formulated guidelines to lead our modelers through the subjective decisions they need to make when authoring models. Reported here are guidelines regarding precoordination/postcoordination, dividing content between the model and the terminology, modeling logical attributes, and creating iso-semantic models. We place our lessons in context, exploring the potential benefits of an implementation layer, an iso-semantic modeling framework, and ontologic technologies. Conclusions: We assert that detailed clinical models can advance interoperability and sharing, and that our guidelines, an implementation layer, and an iso-semantic framework will support our progress toward that goal. PMID:24993546
Extended Graph-Based Models for Enhanced Similarity Search in Cavbase.
Krotzky, Timo; Fober, Thomas; Hüllermeier, Eyke; Klebe, Gerhard
2014-01-01
To calculate similarities between molecular structures, measures based on the maximum common subgraph are frequently applied. For the comparison of protein binding sites, these measures are not fully appropriate since graphs representing binding sites on a detailed atomic level tend to get very large. In combination with an NP-hard problem, a large graph leads to a computationally demanding task. Therefore, for the comparison of binding sites, a less detailed coarse graph model is used building upon so-called pseudocenters. Consistently, a loss of structural data is caused since many atoms are discarded and no information about the shape of the binding site is considered. This is usually resolved by performing subsequent calculations based on additional information. These steps are usually quite expensive, making the whole approach very slow. The main drawback of a graph-based model solely based on pseudocenters, however, is the loss of information about the shape of the protein surface. In this study, we propose a novel and efficient modeling formalism that does not increase the size of the graph model compared to the original approach, but leads to graphs containing considerably more information assigned to the nodes. More specifically, additional descriptors considering surface characteristics are extracted from the local surface and attributed to the pseudocenters stored in Cavbase. These properties are evaluated as additional node labels, which lead to a gain of information and allow for much faster but still very accurate comparisons between different structures.
Ajelli, Marco; Gonçalves, Bruno; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José J; Merler, Stefano; Vespignani, Alessandro
2010-06-29
In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact patterns of the two approaches. The age breakdown analysis shows that similar attack rates are obtained for the younger age classes. The good agreement between the two modeling approaches is very important for defining the tradeoff between data availability and the information provided by the models. The results we present define the possibility of hybrid models combining the agent-based and the metapopulation approaches according to the available data and computational resources.
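A structured metapopulation simulation in the spirit described here, though far simpler than either GLEaM or the agent-based model, can be sketched as a two-patch stochastic SIR process; the population sizes, coupling matrix and R0 = beta/gamma are illustrative.

import numpy as np

rng = np.random.default_rng(0)
beta, gamma, dt = 0.3, 0.1, 0.1               # R0 = beta/gamma = 3 (illustrative)
N = np.array([500_000, 200_000])              # two patches
S, I, R = N - np.array([10, 0]), np.array([10, 0]), np.zeros(2, int)
mix = np.array([[0.95, 0.05], [0.05, 0.95]])  # inter-patch coupling (illustrative)

for step in range(2000):
    lam = beta * mix @ (I / N)                      # force of infection per patch
    new_inf = rng.binomial(S, 1 - np.exp(-lam * dt))
    new_rec = rng.binomial(I, 1 - np.exp(-gamma * dt))
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print("final attack rates:", R / N)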
MXA: a customizable HDF5-based data format for multi-dimensional data sets
NASA Astrophysics Data System (ADS)
Jackson, M.; Simmons, J. P.; De Graef, M.
2010-09-01
A new digital file format is proposed for the long-term archival storage of experimental data sets generated by serial sectioning instruments. The format is known as the multi-dimensional eXtensible Archive (MXA) format and is based on the public domain Hierarchical Data Format (HDF5). The MXA data model and its description by means of an eXtensible Markup Language (XML) file with an associated Document Type Definition (DTD) are described in detail. The public domain MXA package is available through a dedicated web site (mxa.web.cmu.edu), along with implementation details and example data files.
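The general pattern of archiving multi-dimensional data in HDF5 with an embedded XML description can be sketched with h5py as follows; the group layout and attribute name are illustrative, not the actual MXA schema.

import numpy as np
import h5py

xml_meta = "<model><dimension name='slice' size='3'/></model>"  # illustrative

with h5py.File("sections.h5", "w") as f:
    f.attrs["data_model_xml"] = xml_meta          # embed the XML description
    grp = f.create_group("experiment/slices")
    for k in range(3):                            # one image per serial section
        img = np.random.rand(256, 256)
        grp.create_dataset(f"slice_{k:03d}", data=img, compression="gzip")

with h5py.File("sections.h5", "r") as f:
    print(f.attrs["data_model_xml"])
    print(list(f["experiment/slices"]))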
Non-minimally coupled condensate cosmologies: a phase space analysis
NASA Astrophysics Data System (ADS)
Carloni, Sante; Vignolo, Stefano; Cianci, Roberto
2014-09-01
We present an analysis of the phase space of cosmological models based on a non-minimal coupling between the geometry and a fermionic condensate. We observe that the strong constraint coming from the Dirac equations allows a detailed design of the cosmology of these models, and at the same time guarantees an evolution towards a state indistinguishable from general relativistic cosmological models. In this light, we show in detail how the use of some specific potentials can naturally reproduce a phase of accelerated expansion. In particular, we find for the first time that an exponential potential is able to induce two de Sitter phases separated by a power law expansion, which could be an interesting model for the unification of an inflationary phase and a dark energy era.
Performance analysis and dynamic modeling of a single-spool turbojet engine
NASA Astrophysics Data System (ADS)
Andrei, Irina-Carmen; Toader, Adrian; Stroe, Gabriela; Frunzulica, Florin
2017-01-01
The purposes of modeling and simulation of a turbojet engine are steady-state analysis and transient analysis. The steady-state analysis, which consists in the investigation of the operating equilibrium regimes and is based on appropriate modeling of the engine's operation at design and off-design regimes, yields the performance analysis, concluded by the engine's operational maps (i.e. the altitude map, velocity map and speed map) and the engine's universal map. The mathematical model that allows the calculation of design and off-design performances for a single-spool turbojet is detailed. An in-house code was developed, and its calibration was done with the J85 turbojet engine as the test case. The dynamic model of the turbojet engine is obtained from the energy balance equations for the compressor, combustor and turbine, as the engine's main parts. The transient analysis, which is based on appropriate modeling of the engine and its main parts, expresses the dynamic behavior of the turbojet engine and further provides details regarding the engine's control. The aim of the dynamic analysis is to determine a control program for the turbojet, based on the results of the performance analysis. In the case of a single-spool turbojet engine with fixed nozzle geometry, the thrust is controlled by one parameter, the fuel flow rate. The design and management of the aircraft engine controls are based on the results of the transient analysis. The construction of the design model is complex, since it is based on both steady-state and transient analysis, further allowing flight path cycle analysis and optimization. This paper presents numerical simulations for a single-spool turbojet engine (J85 as test case), with appropriate modeling for steady-state and dynamic analysis.
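A minimal transient sketch in the spirit of the energy-balance approach follows: the rotor accelerates according to the imbalance between turbine and compressor power, with fuel flow as the single control parameter. The component power laws and all numbers are invented for illustration and are not J85 data.

J = 0.2                      # rotor moment of inertia [kg m^2] (illustrative)

def p_turbine(omega, wf):    # turbine power delivered [W], rises with fuel flow
    return 4.0e6 * wf * (1.0 - 0.3 * omega / 1500.0)

def p_compressor(omega):     # compressor power absorbed [W]
    return 0.9 * omega**3 / 1500.0

omega, dt, wf = 900.0, 0.01, 0.5           # speed [rad/s], step [s], fuel flow
for k in range(500):
    if k == 100:
        wf = 0.7                           # throttle step at t = 1 s
    domega = (p_turbine(omega, wf) - p_compressor(omega)) / (J * omega)
    omega += dt * domega                   # J*omega*domega/dt = P_t - P_c
print("final rotor speed [rad/s]:", round(omega, 1))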
The uses and limitations of the square‐root‐impedance method for computing site amplification
Boore, David
2013-01-01
The square‐root‐impedance (SRI) method is a fast way of computing approximate site amplification that does not depend on the details of velocity models. The SRI method underestimates the peak response of models with large impedance contrasts near their base, but the amplifications for those models are often close to or equal to the root mean square of the theoretical full resonant (FR) response of the higher modes. On the other hand, for velocity models made up of gradients, with no significant impedance changes across small ranges of depth, the SRI method systematically underestimates the theoretical FR response over a wide frequency range. For commonly used gradient models for generic rock sites, the SRI method underestimates the FR response by about 20%–30%. Notwithstanding the persistent underestimation of amplifications from theoretical FR calculations, however, amplifications from the SRI method may often provide more useful estimates than the FR method, because the SRI amplifications are not sensitive to details of the models and will not exhibit the many peaks and valleys characteristic of theoretical full resonant amplifications (jaggedness sometimes not seen in amplifications based on averages of site response from multiple recordings at a given site). The lack of sensitivity to details of the velocity models also makes the SRI method useful in comparing the response of various velocity models, in spite of any systematic underestimation of the response. The quarter‐wavelength average velocity, which is fundamental to the SRI method, is useful by itself in site characterization, and as such, is the fundamental parameter used to characterize the site response in a number of recent ground‐motion prediction equations.
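A sketch of the quarter-wavelength (SRI) computation for a layered velocity model follows; the layering, densities and source-region properties are invented for illustration, and handling of frequencies whose quarter-wavelength depth exceeds the model is omitted for brevity.

import numpy as np

# Illustrative layered model: thickness [m], S-velocity [m/s], density [kg/m^3]
h   = np.array([20.0, 80.0, 400.0, 2000.0])
vs  = np.array([300.0, 600.0, 1500.0, 3000.0])
rho = np.array([1800.0, 2000.0, 2300.0, 2600.0])
rho_src, vs_src = 2700.0, 3500.0      # source-region properties (illustrative)

def sri_amplification(f):
    # Amplification = sqrt(source impedance / average impedance), averaged
    # down to the depth where the vertical travel time equals 1/(4f).
    t_target = 1.0 / (4.0 * f)
    t = z = zrho = 0.0
    for hi, vi, ri in zip(h, vs, rho):
        frac = min(1.0, (t_target - t) / (hi / vi))
        z += frac * hi
        zrho += frac * hi * ri
        t += frac * hi / vi
        if frac < 1.0:
            break
    v_avg = z / t_target              # travel-time-based average velocity
    rho_avg = zrho / z                # depth-averaged density
    return np.sqrt((rho_src * vs_src) / (rho_avg * v_avg))

for f in (0.5, 1.0, 5.0):
    print(f"{f} Hz: amplification {sri_amplification(f):.2f}")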
Forecasting of wet snow avalanche activity: Proof of concept and operational implementation
NASA Astrophysics Data System (ADS)
Gobiet, Andreas; Jöbstl, Lisa; Rieder, Hannes; Bellaire, Sascha; Mitterer, Christoph
2017-04-01
State-of-the-art tools for the operational assessment of avalanche danger include field observations, recordings from automatic weather stations, meteorological analyses and forecasts, and recently also indices derived from snowpack models. In particular, an index for identifying the onset of wet-snow avalanche cycles (LWCindex) has been demonstrated to be useful. However, its value for operational avalanche forecasting is currently limited, since detailed, physically based snowpack models are usually driven by meteorological data from automatic weather stations only and therefore have no prognostic ability. Since avalanche risk management heavily relies on timely information and early warnings, many avalanche services in Europe nowadays issue forecasts for the following days instead of the traditional assessment of the current avalanche danger. In this context, the prognostic operation of detailed snowpack models has recently been the objective of extensive research. In this study a new, observationally constrained setup for forecasting the onset of wet-snow avalanche cycles with the detailed snow cover model SNOWPACK is presented and evaluated. Based on data from weather stations and different numerical weather prediction models, we demonstrate that forecasts of the LWCindex as an indicator for wet-snow avalanche cycles can be useful for operational warning services, but are so far not reliable enough to be used as a single warning tool without considering other factors. Therefore, further development currently focuses on improving the forecasts by applying ensemble techniques and suitable post-processing approaches to the output of numerical weather prediction models. In parallel, the prognostic meteo-snow model chain has been used operationally by two regional avalanche warning services in Austria since winter 2016/2017, for the first time. Experiences from the first operational season and first results from current model developments will be reported.
NASA Astrophysics Data System (ADS)
Partovi, T.; Fraundorfer, F.; Azimi, S.; Marmanis, D.; Reinartz, P.
2017-05-01
3D building reconstruction from satellite remote sensing image data is still an active research topic and very valuable for 3D city modelling. The roof model is the most important component for reconstructing the Level of Detail 2 (LoD2) representation of a building in 3D modelling. While the general solution for roof modelling relies on detailed cues (such as lines, corners and planes) extracted from a Digital Surface Model (DSM), the correct detection of the roof type and its modelling can fail due to the low quality of DSMs generated by dense stereo matching. To reduce the dependence of roof modelling on DSMs, pansharpened satellite images are additionally used as a rich source of information. In this paper, two strategies are employed for roof type classification. In the first one, building roof types are classified with a state-of-the-art supervised pre-trained convolutional neural network (CNN) framework. In the second strategy, deep features are extracted from deep layers of different pre-trained CNN models, and an SVM with an RBF kernel is then employed to classify the building roof type. Based on the roof complexity of the scene, a roof library including seven types of roofs is defined. A new semi-automatic method is proposed to generate training and test patches for each roof type in the library. Using a pre-trained CNN model not only decreases the computation time for training significantly but also increases the classification accuracy.
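The second strategy, deep features fed to an RBF-kernel SVM, can be sketched as follows; the random feature matrix is a placeholder for activations taken from a deep layer of a pre-trained CNN (one row per roof patch), and the seven classes mirror the roof library.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_patches, n_features, n_roof_types = 700, 2048, 7
X = rng.normal(size=(n_patches, n_features))       # placeholder deep features
y = rng.integers(0, n_roof_types, size=n_patches)  # placeholder roof labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))  # near chance on random features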
HESS Opinions: The complementary merits of competing modelling philosophies in hydrology
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Clark, Martyn P.
2017-08-01
In hydrology, two somewhat competing philosophies form the basis of most process-based models. At one endpoint of this continuum are detailed, high-resolution descriptions of small-scale processes that are numerically integrated to larger scales (e.g. catchments). At the other endpoint of the continuum are spatially lumped representations of the system that express the hydrological response via, in the extreme case, a single linear transfer function. Many other models, developed starting from these two contrasting endpoints, plot along this continuum with different degrees of spatial resolutions and process complexities. A better understanding of the respective basis as well as the respective shortcomings of different modelling philosophies has the potential to improve our models. In this paper we analyse several frequently communicated beliefs and assumptions to identify, discuss and emphasize the functional similarity of the seemingly competing modelling philosophies. We argue that deficiencies in model applications largely do not depend on the modelling philosophy, although some models may be more suitable for specific applications than others and vice versa, but rather on the way a model is implemented. Based on the premises that any model can be implemented at any desired degree of detail and that any type of model remains to some degree conceptual, we argue that a convergence of modelling strategies may hold some value for advancing the development of hydrological models.
Cognitive Modeling of Video Game Player User Experience
NASA Technical Reports Server (NTRS)
Bohil, Corey J.; Biocca, Frank A.
2010-01-01
This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occurred throughout game play. This is a stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for and how game researchers could benefit by adopting these methods. We also provide details of a single model - based on decision field theory - that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real world scenarios. We examine possibilities for applying this model to game-play data.
Modeling epidemics on adaptively evolving networks: A data-mining perspective.
Kattis, Assimakis A; Holiday, Alexander; Stoica, Ana-Andreea; Kevrekidis, Ioannis G
2016-01-01
The exploration of epidemic dynamics on dynamically evolving ("adaptive") networks poses nontrivial challenges to the modeler, such as the determination of a small number of informative statistics of the detailed network state (that is, a few "good observables") that usefully summarize the overall (macroscopic, systems-level) behavior. Obtaining reduced, small, accurate models in terms of these few statistical observables--that is, trying to coarse-grain the full network epidemic model into a small but useful macroscopic one--is even more daunting. Here we describe a data-based approach to solving the first challenge: the detection of a few informative collective observables of the detailed epidemic dynamics. This is accomplished through Diffusion Maps (DMAPS), a recently developed data-mining technique. We illustrate the approach through simulations of a simple mathematical model of epidemics on a network: a model known to exhibit complex temporal dynamics. We discuss potential extensions of the approach, as well as possible shortcomings.
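A minimal Diffusion Maps sketch: build a Gaussian kernel over the observations, row-normalize it into a Markov matrix, and use the leading non-trivial eigenvectors as collective coordinates. The random data stand in for summary statistics of network-epidemic snapshots, and eps is an illustrative kernel bandwidth.

import numpy as np

def diffusion_maps(X, eps, n_coords=2):
    # Pairwise squared distances -> Gaussian kernel -> row-stochastic matrix
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d2 / eps)
    P /= P.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)       # eigenvalues may come back complex;
    order = np.argsort(-vals.real)      # for this matrix they are near-real
    idx = order[1:n_coords + 1]         # skip the trivial constant eigenvector
    return vals.real[idx] * vecs.real[:, idx]

X = np.random.default_rng(1).normal(size=(200, 5))   # toy observations
coords = diffusion_maps(X, eps=5.0)
print(coords.shape)   # (200, 2): two candidate collective observables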
Nev, Olga A; van den Berg, Hugo A
2017-01-01
Variable-Internal-Stores models of microbial metabolism and growth have proven to be invaluable in accounting for changes in cellular composition as microbial cells adapt to varying conditions of nutrient availability. Here, such a model is extended with explicit allocation of molecular building blocks among various types of catalytic machinery. Such an extension allows a reconstruction of the regulatory rules employed by the cell as it adapts its physiology to changing environmental conditions. Moreover, the extension proposed here creates a link between classic models of microbial growth and analyses based on detailed transcriptomics and proteomics data sets. We ascertain the compatibility between the extended Variable-Internal-Stores model and the classic models, demonstrate its behaviour by means of simulations, and provide a detailed treatment of the uniqueness and the stability of its equilibrium point as a function of the availabilities of the various nutrients.
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
NASA Astrophysics Data System (ADS)
Leitão, João P.; Moy de Vitry, Matthew; Scheidegger, Andreas; Rieckermann, Jörg
2016-04-01
Precise and detailed digital elevation models (DEMs) are essential to accurately predict overland flow in urban areas. Unfortunately, traditional sources of DEM, such as airplane light detection and ranging (lidar) DEMs and point and contour maps, remain a bottleneck for detailed and reliable overland flow models, because the resulting DEMs are too coarse to provide DEMs of sufficient detail to inform urban overland flows. Interestingly, technological developments of unmanned aerial vehicles (UAVs) suggest that they have matured enough to be a competitive alternative to satellites or airplanes. However, this has not been tested so far. In this study we therefore evaluated whether DEMs generated from UAV imagery are suitable for urban drainage overland flow modelling. Specifically, 14 UAV flights were conducted to assess the influence of four different flight parameters on the quality of generated DEMs: (i) flight altitude, (ii) image overlapping, (iii) camera pitch, and (iv) weather conditions. In addition, we compared the best-quality UAV DEM to a conventional lidar-based DEM. To evaluate both the quality of the UAV DEMs and the comparison to lidar-based DEMs, we performed regression analysis on several qualitative and quantitative metrics, such as elevation accuracy, quality of object representation (e.g. buildings, walls and trees) in the DEM, which were specifically tailored to assess overland flow modelling performance, using the flight parameters as explanatory variables. Our results suggested that, first, as expected, flight altitude influenced the DEM quality most, where lower flights produce better DEMs; in a similar fashion, overcast weather conditions are preferable, but weather conditions and other factors influence DEM quality much less. Second, we found that for urban overland flow modelling, the UAV DEMs performed competitively in comparison to a traditional lidar-based DEM. An important advantage of using UAVs to generate DEMs in urban areas is their flexibility that enables more frequent, local, and affordable elevation data updates, allowing, for example, to capture different tree foliage conditions.
FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard C. J. Somerville
2009-02-27
Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements
NASA Astrophysics Data System (ADS)
Casu, P.; Pisu, C.
2013-02-01
This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to consider the versatility and ease of use of such documentation tools in order to study architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of some Gothic portals. We combined the contact survey and the photographic survey oriented to photo-modelling. The software used is 123D Catch by Autodesk, an Image Based Modelling (IBM) system available for free. It is a web-based application that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: at first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extrapolate simple sections, in order to make a comparison between the moldings, highlighting similarities and differences. Working on different sites at different scales of detail allowed us to test the procedure under different conditions of exposure, sunshine, accessibility, surface degradation and material type, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or larger or smaller elements.
The Impact of Carbon Dioxide on Climate.
ERIC Educational Resources Information Center
MacDonald, Gordon J.
1979-01-01
Examines the relationship between climatic change and carbon dioxide from the historical perspective; details the contributions of carbon-based fuels to increasing carbon dioxide concentrations; and using global circulation models, discusses the future impact of the heavy reliance of our society on carbon-based fuels on climatic change. (BT)
Sulfuric acid as a catalyst for ring-opening of biobased bis-epoxides
USDA-ARS?s Scientific Manuscript database
Vegetable oils can be relatively easily transformed into bio-based epoxides. Because of this, the acid-catalyzed epoxide ring-opening has been explored for the preparation of bio-based lubricants and polymers. Detailed model studies are carried out only with mono-epoxide made from methyl oleate,...
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
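The AHP step mentioned above reduces, at its core, to deriving priority weights as the principal eigenvector of a pairwise-comparison matrix; the comparison values below are invented for illustration.

import numpy as np

# Illustrative pairwise comparisons of three criteria (cost-risk,
# performance, schedule) on Saaty's 1-9 scale.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)                        # principal eigenvalue
w = np.abs(vecs[:, k].real)
w /= w.sum()                                    # normalized priority weights
ci = (vals.real[k] - len(A)) / (len(A) - 1)     # consistency index
print("weights:", w.round(3), " CI:", round(ci, 4))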
ERIC Educational Resources Information Center
Hood, Michelle; Creed, Peter A.; Neumann, David L.
2012-01-01
We tested a model of the relationship between attitudes toward statistics and achievement based on Eccles' Expectancy Value Model (1983). Participants (n = 149; 83% female) were second-year Australian university students in a psychology statistics course (mean age = 23.36 years, SD = 7.94 years). We obtained demographic details, past performance,…
CCl4 is a common environmental contaminant in water and superfund sites, and a model liver toxicant. One application of PBPK models used in risk assessment is simulation of internal dose for the metric involved with toxicity, particularly for different routes of exposure. Time-co...
A simple physical model for forest fire spread
E. Koo; P. Pagni; J. Woycheese; S. Stephens; D. Weise; J. Huff
2005-01-01
Based on energy conservation and detailed heat transfer mechanisms, a simple physical model for fire spread is presented for the limit of one-dimensional steady-state contiguous spread of a line fire in a thermally-thin uniform porous fuel bed. The solution for the fire spread rate is found as an eigenvalue from this model with appropriate boundary conditions through a...
Properties of interfaces and transport across them.
Cabezas, H
2000-01-01
Much of the biological activity in cell cytoplasm occurs in compartments some of which may be formed, as suggested in this book, by phase separation, and many of the functions of such compartments depend on the transport or exchange of molecules across interfaces. Thus a fundamentally based discussion of the properties of phases, interfaces, and diffusive transport across interfaces has been given to further elucidate these phenomena. An operational criterion for the width of interfaces is given in terms of molecular and physical arguments, and the properties of molecules inside phases and interfaces are discussed in terms of molecular arguments. In general, the properties of the interface become important when the molecules diffusing across are smaller than the width of the interface. Equilibrium partitioning, Donnan phenomena, and electrochemical potentials at interfaces are also discussed in detail. The mathematical expressions for modeling transport across interfaces are discussed in detail. These describe a practical and detailed model for transport across interfaces. For molecules smaller than the width of the interface, this includes a detailed model for diffusion inside the interface. Last, the question of the time scale for phase formation and equilibration in biological systems is discussed.
NASA Technical Reports Server (NTRS)
Marsh, J. G.; Vincent, S.
1973-01-01
The GEOS-C spacecraft is scheduled to carry onboard a radar altimeter for the purpose of measuring the geoid undulations in oceanic areas. An independently derived geoid map will provide a valuable complement to these experiments. A detailed gravimetric geoid is presented for the Atlantic and northeast Pacific Ocean areas based upon a combination of the Goddard Space Flight Center GEM-6 earth model and surface 1 deg x 1 deg gravity data. As part of this work a number of satellite-derived gravity models were evaluated to establish the model which best represented the long-wavelength features of the geoid in the above-mentioned area. Comparisons of the detailed geoid with the astrogeodetic data provided by the National Ocean Survey and dynamically derived tracking station heights indicate that the accuracy of this combined geoid is on the order of 2 meters or better where data was dense and 5 to 7 meters where data was less dense.
Modelling of induced electric fields based on incompletely known magnetic fields
NASA Astrophysics Data System (ADS)
Laakso, Ilkka; De Santis, Valerio; Cruciani, Silvano; Campi, Tommaso; Feliziani, Mauro
2017-08-01
Determining the induced electric fields in the human body is a fundamental problem in bioelectromagnetics that is important for both evaluation of safety of electromagnetic fields and medical applications. However, existing techniques for numerical modelling of induced electric fields require detailed information about the sources of the magnetic field, which may be unknown or difficult to model in realistic scenarios. Here, we show how induced electric fields can accurately be determined in the case where the magnetic fields are known only approximately, e.g. based on field measurements. The robustness of our approach is shown in numerical simulations for both idealized and realistic scenarios featuring a personalized MRI-based head model. The approach allows for modelling of the induced electric fields in biological bodies directly based on real-world magnetic field measurements.
Development of an Aeroelastic Code Based on an Euler/Navier-Stokes Aerodynamic Solver
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Stefko, George L.; Janus, Mark J.
1996-01-01
This paper describes the development of an aeroelastic code (TURBO-AE) based on an Euler/Navier-Stokes unsteady aerodynamic analysis. A brief review of the relevant research in the area of propulsion aeroelasticity is presented. The paper briefly describes the original Euler/Navier-Stokes code (TURBO) and then details the development of the aeroelastic extensions. The aeroelastic formulation is described. The modeling of the dynamics of the blade using a modal approach is detailed, along with the grid deformation approach used to model the elastic deformation of the blade. The work-per-cycle approach used to evaluate aeroelastic stability is described. Representative results used to verify the code are presented. The paper concludes with an evaluation of the development thus far, and some plans for further development and validation of the TURBO-AE code.
ProbOnto: ontology and knowledge base of probability distributions.
Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala
2016-09-01
Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. http://probonto.org mjswat@ebi.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
High pressure common rail injection system modeling and control.
Wang, H P; Zheng, D; Tian, Y
2016-07-01
In this paper, modeling and common-rail pressure control of a high pressure common rail injection system (HPCRIS) are presented. The proposed mathematical model of the high pressure common rail injection system, which contains three sub-models: a high pressure pump sub-model, a common rail sub-model and an injector sub-model, is a relatively complicated nonlinear system. The mathematical model is validated with the software Matlab and a virtual detailed simulation environment. For the considered HPCRIS, an effective model-free controller, called the Extended State Observer-based intelligent Proportional Integral (ESO-based iPI) controller, is designed. The proposed method is composed mainly of the ESO observer and a time-delay-estimation-based iPI controller. Finally, to demonstrate the performance of the proposed controller, the ESO-based iPI controller is compared with a conventional PID controller and ADRC. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Anatomical and spiral wave reentry in a simplified model for atrial electrophysiology.
Richter, Yvonne; Lind, Pedro G; Seemann, Gunnar; Maass, Philipp
2017-04-21
For modeling the propagation of action potentials in the human atria, various models have been developed in the past, which take into account in detail the influence of the numerous ionic currents flowing through the cell membrane. Aiming at a simplified description, the Bueno-Orovio-Cherry-Fenton (BOCF) model for electric wave propagation in the ventricle has been adapted recently to atrial physiology. Here, we study this adapted BOCF (aBOCF) model with respect to its capability to accurately generate spatio-temporal excitation patterns found in anatomical and spiral wave reentry. To this end, we compare results of the aBOCF model with the more detailed one proposed by Courtemanche, Ramirez and Nattel (CRN model). We find that characteristic features of the reentrant excitation patterns seen in the CRN model are well captured by the aBOCF model. This opens the possibility to study origins of atrial fibrillation based on a simplified but still reliable description. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kapalova, N.; Haumen, A.
2018-05-01
This paper addresses the structure and properties of a cryptographic information protection algorithm model based on NPNs and constructed on an SP-network. The main task of the research is to increase the cryptographic strength of the algorithm. In the paper, the transformation resulting in the improvement of the cryptographic strength of the algorithm is described in detail. The proposed model is based on an SP-network. The reason for using the SP-network in this model is the conversion properties used in such networks. In the encryption process, transformations based on S-boxes and P-boxes are used. It is known that these transformations can withstand cryptanalysis. In addition, the proposed model uses transformations that satisfy the requirements of the "avalanche effect". As a result of this work, a computer program that implements an encryption algorithm model based on the SP-network has been developed.
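To make the SP-network structure concrete, here is a toy one-round sketch on a 16-bit block, a 4-bit S-box layer followed by a bit permutation, with a crude avalanche check. The S-box and permutation are textbook (PRESENT-style) examples, not the NPN-based transformations of the proposed model, and a single round shows only partial avalanche.

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]              # example 4-bit S-box
PERM = [(4 * i) % 15 if i != 15 else 15 for i in range(16)]  # bit transposition

def round_fn(block, key):
    block ^= key                           # key mixing
    s = 0
    for i in range(4):                     # substitution layer (4 S-boxes)
        s |= SBOX[(block >> (4 * i)) & 0xF] << (4 * i)
    p = 0
    for i in range(16):                    # permutation layer
        p |= ((s >> i) & 1) << PERM[i]
    return p

x, key = 0x1234, 0xBEEF
y1, y2 = round_fn(x, key), round_fn(x ^ 1, key)   # flip one input bit
print(f"{y1:016b}\n{y2:016b}\nbits changed: {bin(y1 ^ y2).count('1')}")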
Wang, Yi-Shan; Potts, Jonathan R
2017-03-07
Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
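The gap between a movement model and its PDE approximation is visible even in the simplest case. Below, a biased lattice random walk is simulated and compared with the drift-diffusion (Patlak-type) limit u_t = -v u_x + D u_xx; the lattice parameters are illustrative, and the small variance discrepancy reflects the correction terms that a leading-order PDE drops.

import numpy as np

rng = np.random.default_rng(2)
n_walkers, n_steps, dt, dx, p_right = 100_000, 500, 1.0, 1.0, 0.55

steps = rng.choice([dx, -dx], size=(n_walkers, n_steps),
                   p=[p_right, 1 - p_right])   # biased 1D lattice walk
X = steps.sum(axis=1)

v = (2 * p_right - 1) * dx / dt    # drift speed
D = dx**2 / (2 * dt)               # leading-order diffusion coefficient
T = n_steps * dt
print("simulated  mean/var:", X.mean(), X.var())
print("PDE theory mean/var:", v * T, 2 * D * T)
# Exact lattice variance is n_steps * (dx**2 - (v*dt)**2); the gap to 2*D*T
# illustrates the correction terms dropped by leading-order approximations.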
Extra-Tropical Cyclones at Climate Scales: Comparing Models to Observations
NASA Astrophysics Data System (ADS)
Tselioudis, G.; Bauer, M.; Rossow, W.
2009-04-01
Climate is often defined as the accumulation of weather, and weather is not the concern of climate models. Justification for this latter sentiment has long been hidden behind coarse model resolutions and blunt validation tools based on climatological maps. The spatial-temporal resolutions of today's climate models and observations are converging onto meteorological scales, however, which means that with the correct tools we can test the largely unproven assumption that climate model weather is correct enough that its accumulation results in a robust climate simulation. Towards this effort we introduce a new tool for extracting detailed cyclone statistics from observations and climate model output. These include the usual cyclone characteristics (centers, tracks), but also adaptive cyclone-centric composites. We have created a novel dataset, the MAP Climatology of Mid-latitude Storminess (MCMS), which provides a detailed 6 hourly assessment of the areas under the influence of mid-latitude cyclones, using a search algorithm that delimits the boundaries of each system from the outer-most closed SLP contour. Using this we then extract composites of cloud, radiation, and precipitation properties from sources such as ISCCP and GPCP to create a large comparative dataset for climate model validation. A demonstration of the potential usefulness of these tools in process-based climate model evaluation studies will be shown.
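The first step of such cyclone-statistics extraction, finding candidate centers as local minima of a sea-level-pressure field, can be sketched on a synthetic field as below; the neighborhood size and depth threshold are illustrative, and the MCMS step of delimiting each system by its outer-most closed contour is omitted.

import numpy as np
from scipy.ndimage import minimum_filter

# Synthetic sea-level-pressure field [hPa] with two lows
y, x = np.mgrid[0:100, 0:140]
slp = (1012.0
       - 25.0 * np.exp(-((x - 40) ** 2 + (y - 50) ** 2) / 200.0)
       - 18.0 * np.exp(-((x - 100) ** 2 + (y - 30) ** 2) / 300.0))

# Candidate centres: neighbourhood minima deeper than a crude threshold
local_min = slp == minimum_filter(slp, size=15)
for j, i in zip(*np.where(local_min & (slp < 1000.0))):
    print(f"centre at (x={i}, y={j}), p = {slp[j, i]:.1f} hPa")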
Economic communication model set
NASA Astrophysics Data System (ADS)
Zvereva, Olga M.; Berg, Dmitry B.
2017-06-01
This paper details findings from research targeted at investigating economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model, being based on the general concept, has its own peculiarities in algorithm and input data set, since it was engineered to solve a specific problem. Several data sets of different origins were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and the real set was constructed on the basis of statistical data. During the simulation experiments, the communication process was observed in its dynamics, and system macroparameters were estimated. This research confirmed that combining an agent-based and a mathematical model can produce a synergetic effect.
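The theoretical data sets mentioned above rest on the static Leontief equilibrium x = Ax + d; a minimal sketch of computing equilibrium gross outputs from an invented technical-coefficient matrix follows.

import numpy as np

# Illustrative 3-sector technical-coefficient matrix A and final demand d
A = np.array([[0.10, 0.30, 0.15],
              [0.20, 0.05, 0.25],
              [0.15, 0.10, 0.05]])
d = np.array([100.0, 150.0, 80.0])

x = np.linalg.solve(np.eye(3) - A, d)   # x = A x + d  =>  (I - A) x = d
print("gross outputs:", x.round(2))
print("balance check:", np.allclose(A @ x + d, x))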
NEVER forget: negative emotional valence enhances recapitulation.
Bowen, Holly J; Kark, Sarah M; Kensinger, Elizabeth A
2018-06-01
A hallmark feature of episodic memory is that of "mental time travel," whereby an individual feels they have returned to a prior moment in time. Cognitive and behavioral neuroscience methods have revealed a neurobiological counterpart: Successful retrieval often is associated with reactivation of a prior brain state. We review the emerging literature on memory reactivation and recapitulation, and we describe evidence for the effects of emotion on these processes. Based on this review, we propose a new model: Negative Emotional Valence Enhances Recapitulation (NEVER). This model diverges from existing models of emotional memory in three key ways. First, it underscores the effects of emotion during retrieval. Second, it stresses the importance of sensory processing to emotional memory. Third, it emphasizes how emotional valence - whether an event is negative or positive - affects the way that information is remembered. The model specifically proposes that, as compared to positive events, negative events both trigger increased encoding of sensory detail and elicit a closer resemblance between the sensory encoding signature and the sensory retrieval signature. The model also proposes that negative valence enhances the reactivation and storage of sensory details over offline periods, leading to a greater divergence between the sensory recapitulation of negative and positive memories over time. Importantly, the model proposes that these valence-based differences occur even when events are equated for arousal, thus rendering an exclusively arousal-based theory of emotional memory insufficient. We conclude by discussing implications of the model and suggesting directions for future research to test the tenets of the model.
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
1991-01-01
The mathematical models of space very long baseline interferometry (VLBI) observables suitable for least squares covariance analysis were derived, and estimability problems inherent in the space VLBI system were explored, including a detailed rank defect analysis and sensitivity analysis. An important aim is to carry out a comparative analysis of the mathematical models of the ground-based VLBI and space VLBI observables in order to describe the background in detail. Computer programs were developed in order to check the relations, assess errors, and analyze sensitivity. In order to investigate the estimability of different geodetic and geodynamic parameters from the space VLBI observables, the mathematical models for the time delay and time delay rate observables of space VLBI were analytically derived, along with the partial derivatives with respect to the parameters. Rank defect analysis was carried out both by analytical and numerical testing of linear dependencies between the columns of the normal matrix thus formed. Definite conclusions were formed about the rank defects in the system.
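Numerical rank-defect testing of the kind described can be sketched with a singular-value decomposition of the design matrix; the toy matrix below has one deliberately dependent column, mimicking a parameter combination that is not estimable from the observables.

import numpy as np

rng = np.random.default_rng(3)
J = rng.normal(size=(40, 4))           # toy matrix of partial derivatives
J[:, 3] = 2.0 * J[:, 0] - J[:, 1]      # column 3 depends on columns 0 and 1

s = np.linalg.svd(J, compute_uv=False)
tol = s.max() * max(J.shape) * np.finfo(float).eps
rank = int((s > tol).sum())
print("singular values:", s.round(6))
print("rank defect:", J.shape[1] - rank)   # -> 1 dependent combination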
Failure detection and correction for turbofan engines
NASA Technical Reports Server (NTRS)
Corley, R. C.; Spang, H. A., III
1977-01-01
In this paper, a failure detection and correction strategy for turbofan engines is discussed. This strategy allows continuing control of the engines in the event of a sensor failure. An extended Kalman filter is used to provide the best estimate of the state of the engine based on currently available sensor outputs. Should a sensor failure occur, the control is based on the best estimate rather than the sensor output. The extended Kalman filter consists of essentially two parts: a nonlinear model of the engine and update logic which causes the model to track the actual engine. Details on the model and update logic are presented. To allow implementation, approximations are made to the feedback gain matrix which result in a single feedback matrix suitable for use over the entire flight envelope. The effect of these approximations on stability and response is discussed. Results from a detailed nonlinear simulation indicate that good control can be maintained even under multiple failures.
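A minimal sketch of the detection-and-correction idea, assuming a linear measurement model in place of the paper's nonlinear engine model; the 3-sigma innovation test and all matrices are illustrative, not taken from the paper:

import numpy as np

def filter_step(x, P, z, A, C, Q, R, fail_sigma=3.0):
    # Predict: propagate the state estimate through the (here linear) model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Innovation: disagreement between sensor outputs and model prediction.
    y = z - C @ x_pred
    S = C @ P_pred @ C.T + R
    failed = np.abs(y) > fail_sigma * np.sqrt(np.diag(S))
    # Update with healthy sensors only; failed channels are effectively
    # replaced by the model-based estimate, as in the strategy above.
    ok = ~failed
    C_ok = C[ok]
    S_ok = C_ok @ P_pred @ C_ok.T + R[np.ix_(ok, ok)]
    K = P_pred @ C_ok.T @ np.linalg.inv(S_ok)
    x_new = x_pred + K @ y[ok]
    P_new = (np.eye(len(x)) - K @ C_ok) @ P_pred
    return x_new, P_new, failed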
Memory-efficient RNA energy landscape exploration
Mann, Martin; Kucharík, Marcel; Flamm, Christoph; Wolfinger, Michael T.
2014-01-01
Motivation: Energy landscapes provide a valuable means for studying the folding dynamics of short RNA molecules in detail by modeling all possible structures and their transitions. Higher abstraction levels based on a macro-state decomposition of the landscape enable the study of larger systems; however, they are still restricted by huge memory requirements of exact approaches. Results: We present a highly parallelizable local enumeration scheme that enables the computation of exact macro-state transition models with highly reduced memory requirements. The approach is evaluated on RNA secondary structure landscapes using a gradient basin definition for macro-states. Furthermore, we demonstrate the need for exact transition models by comparing two barrier-based approaches, and perform a detailed investigation of gradient basins in RNA energy landscapes. Availability and implementation: Source code is part of the C++ Energy Landscape Library available at http://www.bioinf.uni-freiburg.de/Software/. Contact: mmann@informatik.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24833804
Developing Capture Mechanisms and High-Fidelity Dynamic Models for the MXER Tether System
NASA Technical Reports Server (NTRS)
Canfield, Steven L.
2007-01-01
A team consisting of collaborators from Tennessee Technological University (TTU), Marshall Space Flight Center, BD Systems, and the University of Delaware (herein called the TTU team) conducted specific research and development activities in MXER tether systems during the base period of May 15, 2004 through September 30, 2006 under contract number NNM04AB13C. The team addressed two primary topics related to the MXER tether system: 1) development of validated high-fidelity dynamic models of an elastic rotating tether, and 2) development of feasible mechanisms to enable reliable rendezvous and capture. This contractor report describes in detail the activities performed during the base period of this cycle-2 MXER tether activity and summarizes the results of this funded activity. The primary deliverables of this project were the quad trap, a robust capture mechanism proposed, developed, tested, and demonstrated with a high degree of feasibility, and the detailed development of a validated high-fidelity elastic tether dynamic model provided through multiple formulations.
Morphologic Quality of DSMs Based on Optical and Radar Space Imagery
NASA Astrophysics Data System (ADS)
Sefercik, U. G.; Bayik, C.; Karakis, S.; Jacobsen, K.
2011-09-01
Digital Surface Models (DSMs) represent the visible surface of the earth by the height value Z corresponding to each X-, Y-location. The quality of a DSM can be described by its accuracy and its morphologic detail. Both depend upon the input information used, the technique used and the roughness of the terrain. The influence of topographic detail on DSM quality is shown for the test fields Istanbul and Zonguldak. Zonguldak has a rough mountainous character with heights from sea level up to 1640 m, while Istanbul is dominated by rolling hills going up to an elevation of 435 m. DSMs from SPOT-5, the SRTM C-band height models and ASTER GDEM have been investigated. The DSMs have been verified against height models from large-scale aerial photos, which are more accurate and include morphologic details. It was necessary to determine and account for shifts between the height models caused by datum problems and the orientation of the height models. The DSM quality is analyzed depending upon the terrain inclination. The DSM quality differs for the two test fields. The morphologic quality depends upon the point spacing of the analyzed DSMs and the terrain characteristics.
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
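The assimilation step can be sketched as a bootstrap particle filter, the simplest Sequential Monte Carlo scheme; transition (one step of the graph-based agent-oriented simulation) and likelihood (the sensor model) are hypothetical placeholders:

import numpy as np

def smc_step(particles, transition, likelihood, obs, rng):
    # Each particle is a hypothesized occupancy state. Propagate through
    # the simulation model, weight by agreement with the sensor data,
    # then resample in proportion to the weights.
    particles = np.array([transition(p, rng) for p in particles])
    w = np.array([likelihood(obs, p) for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

The resampled ensemble approximates the posterior distribution of building occupancy given all sensor data up to the current time.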
Markov models of genome segmentation
NASA Astrophysics Data System (ADS)
Thakur, Vivek; Azad, Rajeev K.; Ramaswamy, Ram
2007-01-01
We introduce Markov models for segmentation of symbolic sequences, extending a segmentation procedure based on the Jensen-Shannon divergence that has been introduced earlier. Higher-order Markov models are more sensitive to the details of local patterns and in application to genome analysis, this makes it possible to segment a sequence at positions that are biologically meaningful. We show the advantage of higher-order Markov-model-based segmentation procedures in detecting compositional inhomogeneity in chimeric DNA sequences constructed from genomes of diverse species, and in application to the E. coli K12 genome, boundaries of genomic islands, cryptic prophages, and horizontally acquired regions are accurately identified.
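For orientation, the order-0 version of the segmentation step picks the split maximizing the Jensen-Shannon divergence, i.e. the entropy of the whole sequence minus the length-weighted entropies of the two segments; higher-order Markov variants replace symbol frequencies with conditional k-mer frequencies. A small O(n^2) reference sketch:

import numpy as np
from collections import Counter

def entropy(seq):
    n = len(seq)
    return -sum(c / n * np.log2(c / n) for c in Counter(seq).values())

def best_split(seq):
    # Scan all split points; return the position with maximal divergence.
    n, h = len(seq), entropy(seq)
    best_d, best_i = 0.0, None
    for i in range(1, n):
        d = h - (i / n) * entropy(seq[:i]) - ((n - i) / n) * entropy(seq[i:])
        if d > best_d:
            best_d, best_i = d, i
    return best_i, best_d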
USDA-ARS?s Scientific Manuscript database
The Agricultural Policy/Environmental eXtender (APEX) is a watershed-scale water quality model that includes detailed representation of agricultural management but currently does not have microbial fate and transport simulation capabilities. The objective of this work was to develop a process-based ...
3-D and quasi-2-D discrete element modeling of grain commingling in a bucket elevator boot system
USDA-ARS?s Scientific Manuscript database
Unwanted grain commingling impedes new quality-based grain handling systems and has proven to be an expensive and time consuming issue to study experimentally. Experimentally validated models may reduce the time and expense of studying grain commingling while providing additional insight into detail...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grove, John W.
2016-08-16
The xRage code supports a variety of hydrodynamic equation of state (EOS) models. In practice these are generally accessed in the executing code via a pressure-temperature based table look up. This document will describe the various models supported by these codes and provide details on the algorithms used to evaluate the equation of state.
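In its simplest form, a pressure-temperature table look-up is a bilinear interpolation; a generic sketch, not the xRage algorithm itself (which the document above describes in detail):

import numpy as np

def eos_lookup(p, T, p_grid, T_grid, table):
    # table[i, j] holds the tabulated quantity (e.g. density or specific
    # energy) at pressure p_grid[i] and temperature T_grid[j].
    i = int(np.clip(np.searchsorted(p_grid, p) - 1, 0, len(p_grid) - 2))
    j = int(np.clip(np.searchsorted(T_grid, T) - 1, 0, len(T_grid) - 2))
    fp = (p - p_grid[i]) / (p_grid[i + 1] - p_grid[i])
    ft = (T - T_grid[j]) / (T_grid[j + 1] - T_grid[j])
    return ((1 - fp) * (1 - ft) * table[i, j]
            + fp * (1 - ft) * table[i + 1, j]
            + (1 - fp) * ft * table[i, j + 1]
            + fp * ft * table[i + 1, j + 1])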
Supplemental Tables to the Annual Energy Outlook
2017-01-01
The Annual Energy Outlook (AEO) Supplemental tables were generated for the reference case of the AEO using the National Energy Modeling System, a computer-based model which produces annual projections of energy markets. Most of the tables were not published in the AEO, but contain regional and other more detailed projections underlying the AEO projections.
ERIC Educational Resources Information Center
Ayalon, Aram
2011-01-01
The book describes two similar and successful models of youth mentoring used by two acclaimed urban high schools that have consistently achieved exceptional graduation rates. Providing a detailed description of their methods--based upon extensive observation, and interviews with teachers, students, administrators, and parents--this book makes a…
Growth in Mathematical Understanding: How Can We Characterise It and How Can We Represent It?
ERIC Educational Resources Information Center
Pirie, Susan; Kieren, Thomas
1994-01-01
Proposes a model for the growth of mathematical understanding based on the consideration of understanding as a whole, dynamic, leveled but nonlinear process. Illustrates the model using the concept of fractions. How to map the growth of understanding is explained in detail. (Contains 26 references.) (MKR)
NASA Technical Reports Server (NTRS)
Bennett, David P.
1988-01-01
Cosmic strings are linear topological defects which are predicted by some grand unified theories to form during a spontaneous symmetry breaking phase transition in the early universe. They are the basis for the only theories of galaxy formation based on fundamental physics aside from quantum fluctuations from inflation. In contrast to inflation, they can also be observed directly through gravitational lensing and their characteristic microwave background anisotropy. It was recently discovered that the details of cosmic string evolution are very different from the so-called standard model that was assumed in most of the string-induced galaxy formation calculations. Therefore, the details of galaxy formation in the cosmic string models are currently very uncertain.
NASA Astrophysics Data System (ADS)
Evans, M. E.; Merow, C.; Record, S.; Menlove, J.; Gray, A.; Cundiff, J.; McMahon, S.; Enquist, B. J.
2013-12-01
Current attempts to forecast how species' distributions will change in response to climate change suffer from a fundamental trade-off between modeling many species superficially and few species in detail (correlative vs. mechanistic models). The goals of this talk are two-fold: first, we present a Bayesian multilevel modeling framework, dynamic range modeling (DRM), for building process-based forecasts of many species' distributions at a time, designed to address the trade-off between detail and number of distribution forecasts. In contrast to 'species distribution modeling' or 'niche modeling', which uses only species' occurrence data and environmental data, DRMs draw upon demographic data, abundance data, trait data, occurrence data, and GIS layers of climate in a single framework to account for two processes known to influence range dynamics - demography and dispersal. The vision is to use extensive databases on plant demography, distributions, and traits - in the Botanical Information and Ecology Network, the Forest Inventory and Analysis database (FIA), and the International Tree Ring Data Bank - to develop DRMs for North American trees. Second, we present preliminary results from building the core submodel of a DRM - an integral projection model (IPM) - for a sample of dominant tree species in western North America. IPMs are used to infer demographic niches - i.e., the set of environmental conditions under which population growth rate is positive - and project population dynamics through time. Based on >550,000 data points derived from FIA for nine tree species in western North America, we show IPM-based models of their current and future distributions, and discuss how IPMs can be used to forecast future forest productivity, mortality patterns, and inform efforts at assisted migration.
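The IPM core can be sketched as a midpoint-rule discretization of the kernel K(z', z) = s(z) g(z'|z) + f(z) c(z'), whose dominant eigenvalue is the population growth rate; the demographic functions below are hypothetical stand-ins for the FIA-fitted submodels:

import numpy as np

def ipm_lambda(survival, growth_pdf, fecundity, offspring_pdf, z_min, z_max, n=200):
    # z: size mesh; P: growth/survival kernel; F: fecundity kernel.
    h = (z_max - z_min) / n
    z = z_min + h * (np.arange(n) + 0.5)
    P = h * growth_pdf(z[:, None], z[None, :]) * survival(z)[None, :]
    F = h * offspring_pdf(z)[:, None] * fecundity(z)[None, :]
    # Dominant eigenvalue of the discretized kernel: lambda > 1 means
    # positive population growth, i.e. inside the demographic niche.
    return np.max(np.abs(np.linalg.eigvals(P + F)))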
Comprehensive European dietary exposure model (CEDEM) for food additives.
Tennant, David R
2016-05-01
European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.
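Deterministic additive exposure screens of this general family multiply consumption by a use level and normalize by body weight; the sketch below shows that generic form with hypothetical inputs and units, not the published CEDEM algorithm:

def additive_exposure(consumption_g_per_day, use_level_mg_per_kg, body_weight_kg=60.0):
    # Exposure (mg/kg bw/day) summed over food categories:
    # consumption (g/day) x use level (mg additive per kg food) / body weight.
    total_mg = sum(consumption_g_per_day[food] / 1000.0 * level
                   for food, level in use_level_mg_per_kg.items())
    return total_mg / body_weight_kg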
The effects of geometric uncertainties on computational modelling of knee biomechanics
NASA Astrophysics Data System (ADS)
Meng, Qingen; Fisher, John; Wilcox, Ruth
2017-08-01
The geometry of the articular components of the knee is an important factor in predicting joint mechanics in computational models. There are a number of uncertainties in the definition of the geometry of the cartilage and meniscus, and evaluating the effects of these uncertainties is fundamental to understanding the level of reliability of the models. In this study, the sensitivity of knee mechanics to geometric uncertainties was investigated by comparing polynomial-based and image-based knee models and varying the size of the meniscus. The results suggested that the geometric uncertainties in cartilage and meniscus resulting from the resolution of MRI and the accuracy of segmentation had considerable effects on the predicted knee mechanics. Moreover, even if the mathematical geometric descriptors can be very close to the image-based articular surfaces, the detailed contact pressure distribution produced by the mathematical geometric descriptors was not the same as that of the image-based model. However, the trends predicted by the models based on mathematical geometric descriptors were similar to those of the image-based models.
Mathematical models to characterize early epidemic growth: A Review
Chowell, Gerardo; Sattenspiel, Lisa; Bansal, Shweta; Viboud, Cécile
2016-01-01
There is a long tradition of using mathematical models to generate insights into the transmission dynamics of infectious diseases and assess the potential impact of different intervention strategies. The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing reliable models that capture the baseline transmission characteristics of specific pathogens and social contexts. More refined models are needed however, in particular to account for variation in the early growth dynamics of real epidemics and to gain a better understanding of the mechanisms at play. Here, we review recent progress on modeling and characterizing early epidemic growth patterns from infectious disease outbreak data, and survey the types of mathematical formulations that are most useful for capturing a diversity of early epidemic growth profiles, ranging from sub-exponential to exponential growth dynamics. Specifically, we review mathematical models that incorporate spatial details or realistic population mixing structures, including meta-population models, individual-based network models, and simple SIR-type models that incorporate the effects of reactive behavior changes or inhomogeneous mixing. In this process, we also analyze simulation data stemming from detailed large-scale agent-based models previously designed and calibrated to study how realistic social networks and disease transmission characteristics shape early epidemic growth patterns, general transmission dynamics, and control of international disease emergencies such as the 2009 A/H1N1 influenza pandemic and the 2014-15 Ebola epidemic in West Africa. PMID:27451336
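The sub-exponential-to-exponential spectrum discussed in this review is often captured with the generalized-growth model dC/dt = r C(t)^p, where p = 1 recovers exponential growth and 0 < p < 1 gives sub-exponential growth. A minimal simulation sketch with illustrative parameter values:

import numpy as np
from scipy.integrate import solve_ivp

def generalized_growth(r=0.5, p=0.8, c0=1.0, days=30):
    # Integrate dC/dt = r * C**p for cumulative case counts C(t).
    sol = solve_ivp(lambda t, c: r * c**p, (0, days), [c0], dense_output=True)
    t = np.linspace(0, days, days + 1)
    return t, sol.sol(t)[0]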
Prediction of Erectile Function Following Treatment for Prostate Cancer
Alemozaffar, Mehrdad; Regan, Meredith M.; Cooperberg, Matthew R.; Wei, John T.; Michalski, Jeff M.; Sandler, Howard M.; Hembroff, Larry; Sadetsky, Natalia; Saigal, Christopher S.; Litwin, Mark S.; Klein, Eric; Kibel, Adam S.; Hamstra, Daniel A.; Pisters, Louis L.; Kuban, Deborah A.; Kaplan, Irving D.; Wood, David P.; Ciezki, Jay; Dunn, Rodney L.; Carroll, Peter R.; Sanda, Martin G.
2013-01-01
Context Sexual function is the health-related quality of life (HRQOL) domain most commonly impaired after prostate cancer treatment; however, validated tools to enable personalized prediction of erectile dysfunction after prostate cancer treatment are lacking. Objective To predict long-term erectile function following prostate cancer treatment based on individual patient and treatment characteristics. Design Pretreatment patient characteristics, sexual HRQOL, and treatment details measured in a longitudinal academic multicenter cohort (Prostate Cancer Outcomes and Satisfaction With Treatment Quality Assessment; enrolled from 2003 through 2006), were used to develop models predicting erectile function 2 years after treatment. A community-based cohort (community-based Cancer of the Prostate Strategic Urologic Research Endeavor [CaPSURE]; enrolled 1995 through 2007) externally validated model performance. Patients in US academic and community-based practices whose HRQOL was measured pretreatment (N = 1201) underwent follow-up after prostatectomy, external radiotherapy, or brachytherapy for prostate cancer. Sexual outcomes among men completing 2 years’ follow-up (n = 1027) were used to develop models predicting erectile function that were externally validated among 1913 patients in a community-based cohort. Main Outcome Measures Patient-reported functional erections suitable for intercourse 2 years following prostate cancer treatment. Results Two years after prostate cancer treatment, 368 (37% [95% CI, 34%–40%]) of all patients and 335 (48% [95% CI, 45%–52%]) of those with functional erections prior to treatment reported functional erections; 531 (53% [95% CI, 50%–56%]) of patients without penile prostheses reported use of medications or other devices for erectile dysfunction. Pretreatment sexual HRQOL score, age, serum prostate-specific antigen level, race/ethnicity, body mass index, and intended treatment details were associated with functional erections 2 years after treatment. Multivariable logistic regression models predicting erectile function estimated 2-year function probabilities from as low as 10% or less to as high as 70% or greater depending on the individual’s pretreatment patient characteristics and treatment details. The models performed well in predicting erections in external validation among CaPSURE cohort patients (areas under the receiver operating characteristic curve, 0.77 [95% CI, 0.74–0.80] for prostatectomy; 0.87 [95% CI, 0.80–0.94] for external radiotherapy; and 0.90 [95% CI, 0.85–0.95] for brachytherapy). Conclusion Stratification by pretreatment patient characteristics and treatment details enables prediction of erectile function 2 years after prostatectomy, external radiotherapy, or brachytherapy for prostate cancer. PMID:21934053
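Schematically, the workflow is a multivariable logistic regression fit on the development cohort and externally validated by the area under the ROC curve on the community-based cohort; the sketch below uses generic feature matrices standing in for the listed pretreatment characteristics, not the published model's variables or coefficients:

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_and_validate(X_dev, y_dev, X_ext, y_ext):
    # Fit on the development cohort, then report the external-validation AUC.
    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
    return model, auc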
Mechanics of blood supply to the heart: wave reflection effects in a right coronary artery.
Zamir, M
1998-01-01
Mechanics of blood flow in the coronary circulation have in the past been based largely on models in which the detailed architecture of the coronary network is not included because of lack of data: properties of individual vessels do not appear individually in the model but are represented collectively by the elements of a single electric circuit. Recent data from the human heart make it possible, for the first time, to examine the dynamics of flow in the coronary network based on detailed, measured vascular architecture. In particular, admittance values along the full course of the right coronary artery are computed based on actual lengths and diameters of the many thousands of branches which make up the distribution system of this vessel. The results indicate that effects of wave reflections on this flow are far more significant than those generally suspected to occur in coronary blood flow and that they are actually the reverse of the well known wave reflection effects in the aorta. PMID:9523440
Color Sparse Representations for Image Processing: Review, Models, and Prospects.
Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I
2015-11-01
Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is made here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework that is based on the degrees of freedom of the linear filtering/transformation of the color channels. Moreover, this framework shows that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model is introduced, using unconstrained filters. In this model, spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size but by the color filters, which gives an efficient color representation.
Suter, Paula; Hennessey, Beth; Florez, Donna; Newton Suter, W
2011-01-01
Individuals with chronic obstructive pulmonary disease (COPD) face significant challenges due to frequent distressing dyspnea and deficits related to activities of daily living. Individuals with COPD are often hospitalized frequently for disease exacerbations, negatively impacting quality of life and healthcare expenditure burden. The home-based chronic care model (HBCCM) was designed to address the needs of patients with chronic diseases. This model facilitates the re-design of chronic care delivery within the home health sector by ensuring patient-centered evidence-based care. The HBCCM's foundation is Dr. Edward Wagner's chronic care model, with four additional areas of focus: high-touch delivery, theory-based self-management, specialist oversight and the use of technology. This article will describe this model in detail and outline how its use for patients with COPD can bring value to stakeholders across the health care continuum.
Low-Dimensional Models for Physiological Systems: Nonlinear Coupling of Gas and Liquid Flows
NASA Astrophysics Data System (ADS)
Staples, A. E.; Oran, E. S.; Boris, J. P.; Kailasanath, K.
2006-11-01
Current computational models of biological organisms focus on the details of a specific component of the organism. For example, very detailed models of the human heart, an aorta, a vein, or part of the respiratory or digestive system, are considered either independently from the rest of the body, or as interacting simply with other systems and components in the body. In actual biological organisms, these components and systems are strongly coupled and interact in complex, nonlinear ways leading to complicated global behavior. Here we describe a low-order computational model of two physiological systems, based loosely on a circulatory and respiratory system. Each system is represented as a one-dimensional fluid system with an interconnected series of mass sources, pumps, valves, and other network components, as appropriate, representing different physical organs and system components. Preliminary results from a first version of this model system are presented.
NASA Technical Reports Server (NTRS)
Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.
1992-01-01
A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.
NASA Astrophysics Data System (ADS)
Künne, A.; Fink, M.; Kipka, H.; Krause, P.; Flügel, W.-A.
2012-06-01
In this paper, a method is presented to estimate excess nitrogen on large scales while considering single-field processes. The approach was implemented by using the physically based model J2000-S to simulate the nitrogen balance as well as the hydrological dynamics within meso-scale test catchments. The model input data, the parameterization, the results and a detailed system understanding were used to generate regression tree models with GUIDE (Loh, 2002). For each landscape type in the federal state of Thuringia, a regression tree was calibrated and validated using the model data and results for excess nitrogen from the test catchments. Hydrological parameters such as precipitation and evapotranspiration were also used to predict excess nitrogen with the regression tree model; hence they had to be calculated and regionalized for the state of Thuringia as well. Here the model J2000g was used to simulate the water balance on the macro scale. With the regression trees, the excess nitrogen was regionalized for each landscape type of Thuringia. The approach allows calculating the potential nitrogen input into the streams of the drainage area. The results show that the applied methodology was able to transfer the detailed model results of the meso-scale catchments to the entire state of Thuringia with low computing time and without losing the detailed knowledge from the nitrogen transport modeling. This was validated against modeling results from Fink (2004) in a catchment lying in the regionalization area; the regionalized and the modeled excess nitrogen show a correspondence of 94%. The study was conducted within the framework of a project in collaboration with the Thuringian Environmental Ministry, whose overall aim was to assess the effect of agro-environmental measures regarding load reduction in the water bodies of Thuringia to fulfill the requirements of the European Water Framework Directive (Bäse et al., 2007; Fink, 2006; Fink et al., 2007).
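GUIDE is a standalone program; as a stand-in, a CART-style tree from scikit-learn illustrates the regionalization idea of learning excess nitrogen from hydrological predictors on the test catchments and applying the tree statewide. A sketch under those assumptions:

from sklearn.tree import DecisionTreeRegressor

# X: predictors from the test catchments (e.g. precipitation,
# evapotranspiration, land use); y: excess nitrogen simulated by J2000-S.
def train_regionalization_tree(X, y, max_depth=6):
    return DecisionTreeRegressor(max_depth=max_depth).fit(X, y)

# tree = train_regionalization_tree(X_catchments, y_catchments)
# excess_n_statewide = tree.predict(X_thuringia)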
Parameterization of the Van Hove dynamic self-scattering law Ss(Q,omega)
NASA Astrophysics Data System (ADS)
Zetterstrom, P.
In this paper we present, for the first time, a model of the Van Hove dynamic scattering law SME(Q, omega) based on the maximum entropy principle. The model is intended for use in the calculation of inelastic corrections to neutron diffraction data. The model is constrained by the first and second frequency moments and by detailed balance, but can be expanded to an arbitrary number of frequency moments. The second moment can be varied through an effective temperature to account for the kinetic energy of the atoms. The results are compared with a diffusion model of the scattering law. Finally some calculations of the inelastic self-scattering for a time-of-flight diffractometer are presented. From these we show that the inelastic self-scattering is very sensitive to the details of the dynamic scattering law.
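For orientation, the constraints named above have standard textbook forms: detailed balance, the recoil (first-moment) sum rule, and, in the classical limit, a second moment in which an effective temperature accounts for the kinetic energy of the atoms. These are quoted here as the conventional relations rather than taken from the paper:

S_s(Q,-\omega) = e^{-\hbar\omega/k_B T}\, S_s(Q,\omega), \qquad
\int_{-\infty}^{\infty} \omega\, S_s(Q,\omega)\, \mathrm{d}\omega = \frac{\hbar Q^2}{2M}, \qquad
\int_{-\infty}^{\infty} \omega^2\, S_s(Q,\omega)\, \mathrm{d}\omega \approx \frac{Q^2 k_B T_{\mathrm{eff}}}{M}.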
[Study on the automatic parameters identification of water pipe network model].
Jia, Hai-Feng; Zhao, Qi-Feng
2010-01-01
Based on an analysis of problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as the key bottleneck for a model's application in a water supply enterprise. A methodology for automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The kernel algorithms are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte-Carlo Sampling) is used for automatic identification of parameters; the detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters was developed. Finally, taking a typical water pipe network as a case, a case study on automatic parameter identification was conducted and satisfactory results were achieved.
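A minimal sketch of the MCS identification with an RSA-style behavioural screen; simulate here stands for one hydraulic run of the network model compared against SCADA observations, and is a hypothetical placeholder:

import numpy as np

def mcs_identify(simulate, observed, bounds, n=10000, keep=0.05, rng=None):
    # Draw parameter sets uniformly within bounds, score each against the
    # SCADA observations, keep the best-scoring ('behavioural') fraction.
    rng = rng or np.random.default_rng()
    lo, hi = np.array(bounds, dtype=float).T
    samples = rng.uniform(lo, hi, size=(n, len(lo)))
    errors = np.array([np.mean((simulate(s) - observed) ** 2) for s in samples])
    behavioural = samples[np.argsort(errors)[: int(keep * n)]]
    # RSA reads sensitivity off the behavioural set: parameters whose
    # retained distribution departs from uniform are the sensitive ones.
    return behavioural.mean(axis=0), behavioural.std(axis=0)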
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Spreiter, J. R.
1983-01-01
A computational model for the determination of the detailed plasma and magnetic field properties of the global interaction of the solar wind with nonmagnetic terrestrial planetary obstacles is described. The theoretical method is based on an established single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of supersonic, super-Alfvenic solar wind flow past terrestrial ionospheres.
ERIC Educational Resources Information Center
Deane, Robert T.; And Others
The development of econometric models and a data base to predict the responsiveness of arts institutions to changes in the economy is reported. The study focused on models for museums, theaters (profit and non-profit), symphony, ballet, opera, and dance. The report details four objectives of the project: to identify useful databases and studies on…
NASA Astrophysics Data System (ADS)
Yan, Xuewei; Xu, Qingyan; Liu, Baicheng
2017-12-01
Dendritic structures are the predominant microstructural constituents of nickel-based superalloys, and an understanding of dendrite growth is required in order to obtain the desirable microstructure and improve the performance of castings. For this reason, a numerical simulation method and in-situ observation by high temperature confocal laser scanning microscopy (HT-CLSM) were used to investigate dendrite growth during the solidification process. A combined cellular automaton-finite difference (CA-FD) model allowing for the prediction of dendrite growth of binary alloys was developed. The cell-capture algorithm was modified, and a deterministic cellular automaton (DCA) model was proposed to describe neighborhood tracking. The detailed dendrite morphology, especially the distribution of hundreds of dendrites at a large scale and three-dimensional (3-D) polycrystalline growth, was successfully simulated based on this model. The dendritic morphologies of samples before and after HT-CLSM were both observed by optical microscope (OM) and scanning electron microscope (SEM). The experimental observations presented reasonable agreement with the simulation results. It was also found that primary and secondary dendrite arm spacing and the segregation pattern were significantly influenced by dendrite growth. Furthermore, the directional solidification (DS) dendritic evolution behavior and detailed morphology were also simulated based on the proposed model, and these simulation results also agree well with experimental results.
Computed 3D visualisation of an extinct cephalopod using computer tomographs.
Lukeneder, Alexander
2012-08-01
The first 3D visualisation of a heteromorph cephalopod species from the Southern Alps (Dolomites, northern Italy) is presented. Computed tomography, palaeontological data and 3D reconstructions were combined in the production of a movie showing a life reconstruction of the extinct organism. This detailed reconstruction accords with current knowledge of the shape, mode of life and habitat of this animal. The results are based on the most complete shell known thus far of the genus Dissimilites. Combined object-based analyses from computed tomography and various 3D computer programmes help to understand morphological details as well as their ontogenetic changes in fossil material. In this study, an additional goal was to show changes in locomotion during different ontogenetic phases of such fossil, marine shell-bearing animals (ammonoids). Hence, the presented models and tools can serve as starting points for discussions on the morphology and locomotion of extinct cephalopods in general, and of the genus Dissimilites in particular. The heteromorph ammonoid genus Dissimilites is interpreted here as an active swimmer of the Tethyan Ocean. This study portrays non-destructive methods of 3D visualisation applied to palaeontological material, starting with computed tomography and resulting in animated, high-quality video clips. The 3D geometrical models and animation presented here, which are based on palaeontological material, demonstrate the wide range of applications and analytical techniques, and also outline possible limitations of 3D models in earth sciences and palaeontology. The realistic 3D models and motion pictures can easily be shared amongst palaeontologists. Data, images and short clips can be discussed online and, if necessary, adapted in morphological details and motion style to better represent the cephalopod animal.
Liu, Yun-Feng; Fan, Ying-Ying; Dong, Hui-Yue; Zhang, Jian-Xing
2017-12-01
Methods used in biomechanical modeling for finite element method (FEM) analysis need to deliver accurate results. There are currently two solutions used in FEM modeling of human bone from computerized tomography (CT) images: one is based on a triangular mesh, and the other, more popular in practice, is based on a parametric surface model. The outlines and modeling procedures of the two solutions are compared and analyzed. Using a mandibular bone as an example, several key modeling steps are then discussed in detail, and the FEM calculation was conducted. Numerical results based on the models derived from the two methods, including stress, strain, and displacement, are compared and evaluated with respect to accuracy and validity. Moreover, a comprehensive comparison of the two solutions is listed. The parametric-surface-based method benefits more from the powerful design tools in computer-aided design (CAD) software, but the triangular-mesh-based method is more robust and efficient.
A Field Guide to Extra-Tropical Cyclones: Comparing Models to Observations
NASA Astrophysics Data System (ADS)
Bauer, M.
2008-12-01
Climate, it is said, is the accumulation of weather. And weather is not the concern of climate models. Justification for this latter sentiment has long hidden behind coarse model resolutions and blunt validation tools based on climatological maps and the like. The spatial-temporal resolutions of today's models and observations are converging onto meteorological scales, however, which means that with the correct tools we can test the largely unproven assumption that climate model weather is correct enough, or at least lacks perverting biases, such that its accumulation does in fact result in a robust climate prediction. Towards this effort we introduce a new tool for extracting detailed cyclone statistics from climate model output. These include the usual cyclone distribution statistics (maps, histograms), but also adaptive cyclone-centric composites. We have also created a complementary dataset, The MAP Climatology of Mid-latitude Storminess (MCMS), which provides a detailed 6-hourly assessment of the areas under the influence of mid-latitude cyclones based on Reanalysis products. Using this we then extract complementary composites from sources such as ISCCP and GPCP to create a large comparative dataset for climate model validation. The potential usefulness of these tools will be demonstrated. dime.giss.nasa.gov/mcms/mcms.html
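The first step of extracting such cyclone statistics is typically locating sea-level-pressure minima at each model time step; a toy sketch (the actual MCMS tracker is far more elaborate, with thresholds and tracking logic beyond this):

import numpy as np
from scipy.ndimage import minimum_filter

def cyclone_centers(slp, threshold=1000.0, size=5):
    # Candidate centres: grid points that are local minima of sea-level
    # pressure (hPa) in a size-by-size neighbourhood and below a cutoff.
    is_min = slp == minimum_filter(slp, size=size)
    return np.argwhere(is_min & (slp < threshold))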
Software risk estimation and management techniques at JPL
NASA Technical Reports Server (NTRS)
Hihn, J.; Lum, K.
2002-01-01
In this talk we will discuss how uncertainty has been incorporated into the JPL software model through probabilistic-based estimates, and how risk is addressed: how cost risk is currently being explored via a variety of approaches, from traditional risk lists, to detailed WBS-based risk estimates, to the Defect Detection and Prevention (DDP) tool.
USDA-ARS?s Scientific Manuscript database
The high spatial resolution of QuickBird satellite images makes it possible to show spatial variability at fine details. However, the effect of topography-induced illumination variations become more evident, even in moderately sloped areas. Based on a high resolution (1 m) digital elevation model ge...
Development and Evaluation of Computer-Based Laboratory Practical Learning Tool
ERIC Educational Resources Information Center
Gandole, Y. B.
2006-01-01
Effective evaluation of educational software is a key issue for successful introduction of advanced tools in the curriculum. This paper details the development and evaluation of a tool for computer-assisted learning of science laboratory courses. The process was based on the generic instructional system design model. Various categories of educational…
Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dali; Yuan, Fengming; Hernandez, Benjamin
Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.
A detail enhancement and dynamic range adjustment algorithm for high dynamic range images
NASA Astrophysics Data System (ADS)
Xu, Bo; Wang, Huachuang; Liang, Mingtao; Yu, Cong; Hu, Jinlong; Cheng, Hua
2014-08-01
Although high dynamic range (HDR) images contain large amounts of information, they have weak texture and low contrast. What's more, these images are difficult to reproduce on low dynamic range displaying mediums. If much more information is to be acquired when these images are displayed on PCs, some specific transforms are needed, such as compressing the dynamic range, enhancing the portions with little difference in original contrast, and highlighting texture details while preserving the parts with large contrast. To this end, a multi-scale guided filter enhancement algorithm, derived from the single-scale guided filter based on the analysis of a non-physical model, is proposed in this paper. Firstly, this algorithm decomposes the original HDR images into a base image and detail images of different scales, and then it adaptively selects a transform function which acts on the enhanced detail images and original images. Comparing the treatment effects on HDR images and low dynamic range (LDR) images of different scene features shows that this algorithm, while maintaining the hierarchy and texture details of images, not only improves the contrast and enhances the details of images, but also adjusts the dynamic range well. Thus, it is well suited for human observation or analytical processing by machines.
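A compact sketch of the general scheme: a self-guided filter (in the He et al. formulation) yields successively smoother base layers, the residual detail layers are boosted, and the final base is range-compressed in the log domain. The radii, gains, and compression factor are illustrative, not the paper's parameters:

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, r=8, eps=1e-3):
    # Self-guided edge-preserving smoothing of a grayscale float image.
    k = 2 * r + 1
    mean_I = uniform_filter(I, size=k)
    var_I = np.maximum(uniform_filter(I * I, size=k) - mean_I ** 2, 0.0)
    a = var_I / (var_I + eps)
    b = mean_I - a * mean_I
    return uniform_filter(a, size=k) * I + uniform_filter(b, size=k)

def enhance(hdr, radii=(4, 16, 64), gains=(2.0, 1.5, 1.2), compress=0.6):
    # hdr: non-negative radiance map. Peel off detail layers at several
    # scales, boost them, and recombine with a compressed base layer.
    current, details = np.log1p(hdr), []
    for r in radii:
        smooth = guided_filter(current, r)
        details.append(current - smooth)
        current = smooth
    out = compress * current + sum(g * d for g, d in zip(gains, details))
    return np.expm1(out)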
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rainer, Leo I.; Hoeschele, Marc A.; Apte, Michael G.
This report addresses the results of detailed monitoring completed under Program Element 6 of Lawrence Berkeley National Laboratory's High Performance Commercial Building Systems (HPCBS) PIER program. The purpose of the Energy Simulations and Projected State-Wide Energy Savings project is to develop reasonable energy performance and cost models for high performance relocatable classrooms (RCs) across California climates. A key objective of the energy monitoring was to validate DOE2 simulations for comparison to initial DOE2 performance projections. The validated DOE2 model was then used to develop statewide savings projections by modeling base case and high performance RC operation in the 16 California climate zones. The primary objective of this phase of work was to utilize detailed field monitoring data to modify DOE2 inputs and generate performance projections based on a validated simulation model. Additional objectives include the following: (1) Obtain comparative performance data on base case and high performance HVAC systems to determine how they are operated, how they perform, and how the occupants respond to the advanced systems. This was accomplished by installing both HVAC systems side-by-side (i.e., one per module of a standard two-module, 24 ft by 40 ft RC) on the study RCs and switching HVAC operating modes on a weekly basis. (2) Develop projected statewide energy and demand impacts based on the validated DOE2 model. (3) Develop cost effectiveness projections for the high performance HVAC system in the 16 California climate zones.
Turbulent Mixing of Primary and Secondary Flow Streams in a Rocket-Based Combined Cycle Engine
NASA Technical Reports Server (NTRS)
Cramer, J. M.; Greene, M. U.; Pal, S.; Santoro, R. J.; Turner, Jim (Technical Monitor)
2002-01-01
This viewgraph presentation gives an overview of the turbulent mixing of primary and secondary flow streams in a rocket-based combined cycle (RBCC) engine. A significant RBCC ejector mode database has been generated, detailing single and twin thruster configurations and global and local measurements. On-going analysis and correlation efforts include Marshall Space Flight Center computational fluid dynamics modeling and turbulent shear layer analysis. Potential follow-on activities include detailed measurements of air flow static pressure and velocity profiles, investigations into other thruster spacing configurations, performing a fundamental shear layer mixing study, and demonstrating single-shot Raman measurements.
Watermarking on 3D mesh based on spherical wavelet transform.
Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng
2004-03-01
In this paper we propose a robust watermarking algorithm for 3D meshes. The algorithm is based on the spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales using the spherical wavelet transform; the watermark is then embedded into the different levels of detail. The embedding process includes: global sphere parameterization, spherical uniform sampling, the spherical wavelet forward transform, embedding of the watermark, the spherical wavelet inverse transform, and finally resampling of the watermarked mesh to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.
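The embed-into-details idea — add a scaled watermark to detail coefficients, then invert the transform — can be shown with a planar 1D Haar analogue; the mesh-specific spherical machinery (parameterization, uniform sampling, resampling) is omitted here:

import numpy as np

def haar_decompose(x):
    # One level of the Haar wavelet transform (len(x) assumed even).
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation (coarse shape)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail
    return a, d

def haar_reconstruct(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def embed(signal, bits, alpha=0.01):
    # Additive embedding of {0,1} bits, mapped to {-1,+1}, into the
    # detail band; alpha trades robustness against distortion.
    a, d = haar_decompose(signal)
    d = d + alpha * (2 * np.asarray(bits[: len(d)]) - 1)
    return haar_reconstruct(a, d)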
A Critical Look at Entropy-Based Gene-Gene Interaction Measures.
Lee, Woojoo; Sjölander, Arvid; Pawitan, Yudi
2016-07-01
Several entropy-based measures for detecting gene-gene interaction have been proposed recently. It has been argued that entropy-based measures are preferred because entropy can better capture the nonlinear relationships between genotypes and traits, so they can be useful for detecting gene-gene interactions in complex diseases. These suggested measures look reasonable at an intuitive level, but so far there has been no detailed characterization of the interactions captured by them. Here we study analytically the properties of some entropy-based measures for detecting gene-gene interactions in detail. The relationship between interactions captured by the entropy-based measures and those of logistic regression models is clarified. In general we find that the entropy-based measures can suffer from a lack of specificity in terms of target parameters, i.e., they can detect uninteresting signals as interactions. Numerical studies are carried out to confirm the theoretical findings. © 2016 WILEY PERIODICALS, INC.
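One frequently analyzed entropy-based quantity is the interaction information between two loci A, B and a trait Y, computable from a joint contingency table; a sketch (sign conventions for this quantity vary across the literature):

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def interaction_information(counts):
    # counts[a, b, y]: joint counts of genotypes at loci A and B and trait Y.
    p = counts / counts.sum()
    pa, pb, py = p.sum((1, 2)), p.sum((0, 2)), p.sum((0, 1))
    pab, pay, pby = p.sum(2), p.sum(1), p.sum(0)
    # I(A;B;Y) = H(A)+H(B)+H(Y) - H(A,B)-H(A,Y)-H(B,Y) + H(A,B,Y)
    return (entropy(pa) + entropy(pb) + entropy(py)
            - entropy(pab.ravel()) - entropy(pay.ravel()) - entropy(pby.ravel())
            + entropy(p.ravel()))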
Aerodynamic design and analysis of small horizontal axis wind turbine blades
NASA Astrophysics Data System (ADS)
Tang, Xinzi
This work investigates the aerodynamic design and analysis of small horizontal axis wind turbine blades via the blade element momentum (BEM) based approach and the computational fluid dynamics (CFD) based approach. From this research, it is possible to draw a series of detailed guidelines on small wind turbine blade design and analysis. The research also provides a platform for further comprehensive study using these two approaches. The wake induction corrections and stall corrections of the BEM method were examined through a case study of the NREL/NASA Phase VI wind turbine. A hybrid stall correction model was proposed to analyse wind turbine power performance. The proposed model shows improvement in power prediction for the validation case, compared with the existing stall correction models. The effects of the key rotor parameters of a small wind turbine as well as the blade chord and twist angle distributions on power performance were investigated through two typical wind turbines, i.e. a fixed-pitch variable-speed (FPVS) wind turbine and a fixed-pitch fixed-speed (FPFS) wind turbine. An engineering blade design and analysis code was developed in MATLAB to accommodate aerodynamic design and analysis of the blades. The linearisation of radial profiles of blade chord and twist angle for the FPFS wind turbine blade design was discussed. Results show that the proposed linearisation approach leads to reduced manufacturing cost and higher annual energy production (AEP), with minimal effects on low wind speed performance. Comparative studies of mesh and turbulence models in 2D and 3D CFD modelling were conducted. The CFD-predicted lift and drag coefficients of the airfoil S809 were compared with wind tunnel test data, and the 3D CFD modelling method for the NREL/NASA Phase VI wind turbine was validated against measurements. Airfoil aerodynamic characterisation and wind turbine power performance as well as 3D flow details were studied. The detailed flow characteristics from the CFD modelling are quantitatively comparable to the measurements, such as blade surface pressure distribution and integrated forces and moments. It is confirmed that the CFD approach is able to provide a more detailed qualitative and quantitative analysis for wind turbine airfoils and rotors.
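The BEM core is a fixed-point iteration for the axial and tangential induction factors of each blade element; a minimal sketch without tip-loss or the proposed stall corrections, with the geometry and the airfoil polars (cl_func, cd_func) as placeholders, not the thesis's MATLAB code:

import numpy as np

def bem_element(r, R, B, chord, twist, tsr, cl_func, cd_func, tol=1e-6):
    a, ap = 0.3, 0.0                         # axial / tangential induction
    sigma = B * chord / (2 * np.pi * r)      # local solidity
    lam_r = tsr * r / R                      # local speed ratio
    for _ in range(200):
        phi = np.arctan2(1 - a, lam_r * (1 + ap))  # inflow angle
        alpha = phi - twist                        # angle of attack
        cl, cd = cl_func(alpha), cd_func(alpha)
        cn = cl * np.cos(phi) + cd * np.sin(phi)   # normal coefficient
        ct = cl * np.sin(phi) - cd * np.cos(phi)   # tangential coefficient
        a_new = 1.0 / (4 * np.sin(phi) ** 2 / (sigma * cn) + 1)
        ap_new = 1.0 / (4 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            break
        a, ap = a_new, ap_new
    return a, ap, phi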
Spectral Quasi-Equilibrium Manifold for Chemical Kinetics.
Kooshkbaghi, Mahdi; Frouzakis, Christos E; Boulouchos, Konstantinos; Karlin, Iliya V
2016-05-26
The Spectral Quasi-Equilibrium Manifold (SQEM) method is a model reduction technique for chemical kinetics based on entropy maximization under constraints built from the slowest eigenvectors at equilibrium. The method is revisited here, discussed and validated through the Michaelis-Menten kinetic scheme, and the quality of the reduction is related to the temporal evolution and the gap between eigenvalues. SQEM is then applied to detailed reaction mechanisms for the homogeneous combustion of hydrogen, syngas, and methane mixtures with air in adiabatic constant pressure reactors. The system states computed using SQEM are compared with those obtained by direct integration of the detailed mechanism, and good agreement between the reduced and the detailed descriptions is demonstrated. The SQEM reduced model of hydrogen/air combustion is also compared with a similar technique, Rate-Controlled Constrained-Equilibrium (RCCE). For the same number of representative variables, SQEM is found to provide a more accurate description.
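Generically, such quasi-equilibrium reductions solve a constrained entropy maximization: the slow variables (in SQEM, projections onto the slowest eigenvectors at equilibrium) are pinned by linear constraints. The sketch below uses a simplified ideal-mixing entropy and a generic constraint matrix, not the paper's thermodynamic formulation:

import numpy as np
from scipy.optimize import minimize

def quasi_equilibrium(A, b, n_species):
    # Minimize sum(x log x) (i.e. maximize entropy) subject to A x = b.
    def neg_entropy(x):
        x = np.clip(x, 1e-12, None)
        return np.sum(x * np.log(x))
    cons = {"type": "eq", "fun": lambda x: A @ x - b}
    x0 = np.full(n_species, 1.0 / n_species)
    res = minimize(neg_entropy, x0, method="SLSQP",
                   bounds=[(0.0, None)] * n_species, constraints=cons)
    return res.x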
Integrating high dimensional bi-directional parsing models for gene mention tagging.
Hsu, Chun-Nan; Chang, Yu-Ming; Kuo, Cheng-Ju; Lin, Yu-Shi; Huang, Han-Shen; Chung, I-Fang
2008-07-01
Tagging gene and gene product mentions in scientific text is an important initial step of literature mining. In this article, we describe in detail our gene mention tagger that participated in the BioCreative 2 challenge and analyze what contributes to its good performance. Our tagger is based on the conditional random fields model (CRF), the most prevalent method for the gene mention tagging task in BioCreative 2. Our tagger is interesting because it accomplished the highest F-scores among CRF-based methods and second overall. Moreover, we obtained our results by mostly applying open source packages, making it easy to duplicate our results. We first describe in detail how we developed our CRF-based tagger. We designed a very high dimensional feature set that includes most of the information that may be relevant. We trained bi-directional CRF models with the same set of features, one applying forward parsing and the other backward, and integrated the two models based on the output scores and dictionary filtering. One of the most prominent factors that contributes to the good performance of our tagger is the integration of an additional backward parsing model. However, from the definition of CRF, it appears that a CRF model is symmetric and bi-directional parsing models should produce the same results. We show that due to different feature settings, a CRF model can be asymmetric, and the feature setting for our tagger in BioCreative 2 not only produces different results but also gives backward parsing models a slight but constant advantage over the forward parsing model. To fully explore the potential of integrating bi-directional parsing models, we applied different asymmetric feature settings to generate many bi-directional parsing models and integrated them based on the output scores. Experimental results show that this integrated model can achieve an even higher F-score solely based on the training corpus for gene mention tagging. Data sets, programs and an on-line service of our gene mention tagger can be accessed at http://aiia.iis.sinica.edu.tw/biocreative2.htm.
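Schematically, the integration step merges the two directional models' candidate mentions by score and then applies dictionary filtering; the data structures below are illustrative, not the tagger's actual interfaces:

def integrate(forward, backward, blacklist=frozenset(), threshold=1.0):
    # forward / backward map candidate mentions (span, text) to scores.
    # Mentions found by both models accumulate both scores, so agreement
    # between the directions is rewarded before thresholding.
    merged = dict(forward)
    for mention, score in backward.items():
        merged[mention] = merged.get(mention, 0.0) + score
    return {m: s for m, s in merged.items()
            if s >= threshold and m[1] not in blacklist}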
NASA Astrophysics Data System (ADS)
Kang, Daiwen
In this research, the sources, distributions, transport, ozone formation potential, and biogenic emissions of VOCs are investigated focusing on three Southeast United States National Parks: Shenandoah National Park, Big Meadows site (SHEN), Great Smoky Mountains National Park at Cove Mountain (GRSM) and Mammoth Cave National Park (MACA). A detailed modeling analysis is conducted using the Multiscale Air Quality SImulation Platform (MAQSIP), focusing on nonmethane hydrocarbons and ozone episodes characterized by high O3 surface concentrations. In the observation-based analysis, source classification techniques based on correlation coefficients, chemical reactivity, and certain ratios were developed and applied to the data set. Anthropogenic VOCs from automobile exhaust dominate at Mammoth Cave National Park and at Cove Mountain, Great Smoky Mountains National Park, while at Big Meadows, Shenandoah National Park, the source composition is complex and changed from 1995 to 1996. The dependence of isoprene concentrations on ambient temperature is investigated, and similar regression relationships are obtained for all three monitoring locations. Propylene-equivalent concentrations are calculated to account for differences in reaction rates between OH and individual hydrocarbons, and thereby to estimate their relative contributions to ozone formation. Isoprene fluxes were also estimated for all these rural areas. Model predictions (base scenario) tend to give daily maximum O3 concentrations 10 to 30% lower than observations. Model-predicted concentrations of lumped paraffin compounds are of the same order of magnitude as the observed values, while the observed concentrations of other species (isoprene, ethene, surrogate olefin, surrogate toluene, and surrogate xylene) are usually an order of magnitude higher than the predictions. Nine emissions perturbation scenarios, including the base scenario, are designed and utilized in the model simulations. Model predictions are compared with the observed values at the three locations for the same time period. Detailed sensitivity and process analyses in terms of ozone and VOC budgets, and the relative importance of various VOC species, are provided. (Abstract shortened by UMI.)
Zhang, Liming; Yu, Dongsheng; Shi, Xuezheng; Xu, Shengxiang; Xing, Shihe; Zhao, Yongcong
2014-01-01
Soil organic carbon (SOC) models are often applied to regions with high heterogeneity but limited spatially differentiated soil information and simulation unit resolution. This study, carried out in the Tai-Lake region of China, quantified the uncertainty arising from application of the DeNitrification-DeComposition (DNDC) biogeochemical model in an area with heterogeneous soil properties using different simulation units. Three soil attribute databases of different resolution, a polygonal capture of mapping units at 1:50,000 (P5), a county-based database at 1:50,000 (C5) and a county-based database at 1:14,000,000 (C14), were used as inputs for regional DNDC simulation. The P5 and C5 databases were combined with the 1:50,000 digital soil map, which is the most detailed soil database for the Tai-Lake region. The C14 database was combined with the 1:14,000,000 digital soil map, a coarse database often used for modeling at the national or regional scale in China. The soil polygons of the P5 database and the county boundaries of the C5 and C14 databases were used as basic simulation units. Results project that from 1982 to 2000, total SOC change in the top layer (0–30 cm) of the 2.3 M ha of paddy soil in the Tai-Lake region was +1.48 Tg C, −3.99 Tg C and −15.38 Tg C based on the P5, C5 and C14 databases, respectively. Taking the total SOC change modeled with P5 inputs as the baseline, which reflects the advantage of using a detailed, polygon-based soil dataset, the relative deviations of C5 and C14 were 368% and 1126%, respectively. The comparison illustrates that DNDC simulation is strongly influenced by the choice of fundamental geographic resolution as well as input soil attribute detail. The results also indicate that improving the framework of DNDC is essential in creating accurate models of the soil carbon cycle. PMID:24523922
A Review of the Ginzburg-Syrovatskii's Galactic Cosmic-Ray Propagation Model and its Leaky-Box Limit
NASA Technical Reports Server (NTRS)
Barghouty, A. F.
2012-01-01
Phenomenological models of galactic cosmic-ray propagation are based on a diffusion equation known as the Ginzburg-Syrovatskii equation, or variants (or limits) of this equation. Its one-dimensional limit in a homogeneous volume, known as the leaky-box limit or model, is sketched here. The justification, utility, limitations, and a typical numerical implementation of the leaky-box model are examined in some detail.
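For reference, the standard schematic form of the transport equation and its leaky-box limit reads as follows; the notation follows the common propagation literature and may differ in detail from the report's.

```latex
% Schematic Ginzburg--Syrovatskii transport equation for species i
\frac{\partial N_i}{\partial t}
  = \nabla \cdot \left( D \,\nabla N_i \right)
  - \frac{\partial}{\partial E}\!\left[ b(E)\, N_i \right]
  + Q_i
  - \frac{N_i}{\tau_i^{\mathrm{int}}}
  + \sum_{j>i} \frac{N_j}{\tau_{j\to i}}

% Leaky-box limit: homogeneous volume, escape at a single timescale
\frac{\mathrm{d} N_i}{\mathrm{d} t}
  = Q_i - \frac{N_i}{\tau_{\mathrm{esc}}}
  - \frac{N_i}{\tau_i^{\mathrm{int}}}
  + \sum_{j>i} \frac{N_j}{\tau_{j\to i}}
```

Here $D$ is the diffusion coefficient, $b(E)$ the energy-loss rate, $Q_i$ the source term, $\tau_i^{\mathrm{int}}$ the interaction (fragmentation) timescale, and $\tau_{j\to i}$ the timescale for production of species $i$ from heavier species $j$.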
Simulation of Blast Loading on an Ultrastructurally-based Computational Model of the Ocular Lens
2016-12-01
organelles. Additionally, the cell membranes demonstrated the classic ball-and-socket loops. For the SEM images, they were placed in two fixatives and mounted...considered (fibrous network and matrix), both components are modelled using a hyper-elastic framework, and the resulting constitutive model is embedded in a...within the framework of hyper-elasticity). Full details on the linearization procedures that were adopted in these previous models or the convergence
Modelling strategies to predict the multi-scale effects of rural land management change
NASA Astrophysics Data System (ADS)
Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.
2011-12-01
Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. Although based in part on speculative relationships, significant predictive power was derived from this approach. Finally, using a formal Bayesian procedure, these different sources of information were combined with local flow data in a catchment-scale conceptual model application, i.e. using small-scale physical properties, regionalised signatures of flow and available flow measurements.
NASA Astrophysics Data System (ADS)
Niederheiser, R.; Rutzinger, M.; Bremer, M.; Wichmann, V.
2018-04-01
The investigation of changes in spatial patterns of vegetation and the identification of potential micro-refugia requires detailed topographic and terrain information. However, mapping alpine topography at very detailed scales is challenging due to the limited accessibility of sites. Close-range sensing by photogrammetric dense matching approaches based on terrestrial images captured with hand-held cameras offers a light-weight and low-cost solution to retrieve high-resolution measurements even in steep terrain and at locations that are difficult to access. We propose a novel approach for rapid capturing of terrestrial images and a highly automated processing chain for retrieving detailed dense point clouds for topographic modelling. For this study, we modelled 249 plot locations. For the analysis of vegetation distribution and location properties, topographic parameters such as slope, aspect, and potential solar irradiation were derived by applying a multi-scale approach utilizing voxel grids and spherical neighbourhoods. The result is a micro-topography archive of 249 alpine locations that includes topographic parameters at multiple scales, ready for biogeomorphological analysis. Compared with regional elevation models at larger scales and traditional 2D gridding approaches to creating elevation models, we employ analyses in a fully 3D environment that yield much more detailed insights into the interrelations between topographic parameters such as potential solar irradiation, surface area, aspect and roughness.
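As an illustration of deriving plot-scale topographic parameters from such point clouds, the sketch below fits a plane to one neighbourhood of points and extracts slope and aspect; the actual workflow in the paper operates on voxel grids and spherical neighbourhoods at multiple scales.

```python
import numpy as np

def slope_aspect(points):
    """Fit a plane z = a*x + b*y + c to the points (N x 3 array) of one
    neighbourhood and derive slope and aspect in degrees. A minimal
    stand-in for the paper's multi-scale voxel/spherical analysis."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    normal = np.array([-a, -b, 1.0])
    normal /= np.linalg.norm(normal)
    slope = np.degrees(np.arccos(normal[2]))           # tilt from horizontal
    aspect = np.degrees(np.arctan2(-a, -b)) % 360.0    # downslope azimuth, 0 = +y
    return slope, aspect

# Toy usage: a synthetic 30-degree, north-facing patch with noise
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(100, 2))
z = -np.tan(np.radians(30.0)) * xy[:, 1] + rng.normal(0, 0.01, 100)
print(slope_aspect(np.c_[xy, z]))   # close to (30.0, 0.0)
```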
Modeling the atmospheric chemistry of TICs
NASA Astrophysics Data System (ADS)
Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John
2009-05-01
An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (keff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirically derived keff functions that describe the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
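A minimal sketch of how such a reduced-form keff might be used, assuming a hypothetical functional dependence on solar flux, temperature and cloud cover; the form and the coefficients are placeholders, not the fitted functions from the study.

```python
import numpy as np

def k_eff(solar_flux, temp_K, cloud_frac,
          a0=1.0e-5, a1=2.0e-4, ea_over_r=3500.0):
    """Hypothetical reduced-form effective degradation rate (1/s).
    The photolytic term scales with cloud-attenuated solar flux; the
    thermal/OH term uses an Arrhenius-like factor. Coefficients are
    placeholders, not fitted values."""
    photolytic = a1 * solar_flux * (1.0 - 0.7 * cloud_frac)
    thermal = a0 * np.exp(-ea_over_r * (1.0 / temp_K - 1.0 / 298.0))
    return photolytic + thermal

# First-order loss applied to a dispersing concentration over one time step,
# the way a transport model such as SCIPUFF would consume keff
k = k_eff(solar_flux=0.8, temp_K=295.0, cloud_frac=0.3)
c = 1.0 * np.exp(-k * 60.0)     # concentration after 60 s
print(k, c)
```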
Terminal Area Productivity Airport Wind Analysis and Chicago O'Hare Model Description
NASA Technical Reports Server (NTRS)
Hemm, Robert; Shapiro, Gerald
1998-01-01
This paper describes two results from a continuing effort to provide accurate cost-benefit analyses of the NASA Terminal Area Productivity (TAP) program technologies. Previous tasks have developed airport capacity and delay models and completed preliminary cost benefit estimates for TAP technologies at 10 U.S. airports. This task covers two improvements to the capacity and delay models. The first improvement is the completion of a detailed model set for the Chicago O'Hare (ORD) airport. Previous analyses used a more general model to estimate the benefits for ORD. This paper contains a description of the model details with results corresponding to current conditions. The second improvement is the development of specific wind speed and direction criteria for use in the delay models to predict when the Aircraft Vortex Spacing System (AVOSS) will allow use of reduced landing separations. This paper includes a description of the criteria and an estimate of AVOSS utility for 10 airports based on analysis of 35 years of weather data.
Absorbable energy monitoring scheme: new design protocol to test vehicle structural crashworthiness.
Ofochebe, Sunday M; Enibe, Samuel O; Ozoegwu, Chigbogu G
2016-05-01
In vehicle crashworthiness design optimization, detailed system evaluations capable of producing reliable results are typically achieved through high-order numerical computational (HNC) models such as the dynamic finite element model, the mesh-free model, etc. However, the application of these models, especially during optimization studies, is challenged by their inherently high demand on computational resources, the conditional stability of the solution process, and lack of knowledge of a viable parameter range for detailed optimization studies. The absorbable energy monitoring scheme (AEMS) presented in this paper suggests a new design protocol that attempts to overcome such problems in the evaluation of vehicle structures for crashworthiness. The implementation of the AEMS involves studying the crash performance of vehicle components at various absorbable energy ratios based on a 2DOF lumped-mass-spring (LMS) vehicle impact model. This allows for prompt prediction of useful parameter values in a given design problem. The application of the classical one-dimensional LMS model in vehicle crash analysis is further improved in the present work by developing a critical load matching criterion, which allows for quantitative interpretation of the results of the abstract model in a typical vehicle crash design. The adequacy of the proposed AEMS for preliminary vehicle crashworthiness design is demonstrated in this paper; however, its extension to full-scale design-optimization problems involving a full vehicle model with greater structural detail requires more theoretical development.
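The 2DOF LMS impact model at the heart of the AEMS can be sketched as follows: two lumped masses behind springs strike a rigid barrier, and the peak spring deflections give the absorbable-energy split. Real applications would use nonlinear crush characteristics; all parameter values here are illustrative.

```python
import numpy as np

def lms_crash(m1, m2, k1, k2, v0, dt=1e-5, t_end=0.15):
    """Semi-implicit Euler integration of a 2DOF lumped-mass-spring barrier
    impact: front structure m1 behind crush spring k1, compartment m2
    coupled to m1 through spring k2. Linear springs for simplicity.
    Returns peak deflections and the front spring's absorbed-energy share."""
    x1 = x2 = 0.0
    v1 = v2 = v0
    d1_max = d2_max = 0.0
    for _ in range(int(t_end / dt)):
        f1 = -k1 * x1 + k2 * (x2 - x1)      # barrier spring + coupling spring
        f2 = -k2 * (x2 - x1)
        v1 += f1 / m1 * dt
        v2 += f2 / m2 * dt
        x1 += v1 * dt
        x2 += v2 * dt
        d1_max = max(d1_max, x1)
        d2_max = max(d2_max, x2 - x1)
    e1 = 0.5 * k1 * d1_max**2               # energy absorbed ahead of m1
    e2 = 0.5 * k2 * d2_max**2               # energy absorbed between m1 and m2
    return d1_max, d2_max, e1 / (e1 + e2)

# Illustrative 56 km/h (15.6 m/s) barrier impact
print(lms_crash(m1=300.0, m2=1200.0, k1=4.0e5, k2=8.0e5, v0=15.6))
```

Scanning such runs over different absorbable-energy ratios is the kind of cheap parameter sweep the AEMS uses before committing to expensive HNC evaluations.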
Manpower/cost estimation model: Automated planetary projects
NASA Technical Reports Server (NTRS)
Kitchen, L. D.
1975-01-01
A manpower/cost estimation model is developed based on a detailed financial analysis of over 30 million raw data points, which are then compacted by more than three orders of magnitude to the level at which the model is applicable. The major parameter of expenditure is manpower (specifically direct labor hours) for all spacecraft subsystem and technical support categories. The resultant model is able to provide a mean absolute error of less than fifteen percent for the eight programs comprising the model data base. The model includes cost-saving inheritance factors, broken down into four levels, for estimating follow-on type programs where hardware and design inheritance are evident or expected.
Model-based learning and the contribution of the orbitofrontal cortex to the model-free world
McDannald, Michael A.; Takahashi, Yuji K.; Lopatina, Nina; Pietras, Brad W.; Jones, Josh L.; Schoenbaum, Geoffrey
2012-01-01
Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. PMID:22487030
Kirkilionis, Markus; Janus, Ulrich; Sbano, Luca
2011-09-01
We model in detail a simple synthetic genetic clock that was engineered in Atkinson et al. (Cell 113(5):597-607, 2003) using Escherichia coli as a host organism. Based on this engineered clock its theoretical description uses the modelling framework presented in Kirkilionis et al. (Theory Biosci. doi: 10.1007/s12064-011-0125-0 , 2011, this volume). The main goal of this accompanying article was to illustrate that parts of the modelling process can be algorithmically automatised once the model framework we called 'average dynamics' is accepted (Sbano and Kirkilionis, WMI Preprint 7/2007, 2008c; Kirkilionis and Sbano, Adv Complex Syst 13(3):293-326, 2010). The advantage of the 'average dynamics' framework is that system components (especially in genetics) can be easier represented in the model. In particular, if once discovered and characterised, specific molecular players together with their function can be incorporated. This means that, for example, the 'gene' concept becomes more clear, for example, in the way the genetic component would react under different regulatory conditions. Using the framework it has become a realistic aim to link mathematical modelling to novel tools of bioinformatics in the future, at least if the number of regulatory units can be estimated. This should hold in any case in synthetic environments due to the fact that the different synthetic genetic components are simply known (Elowitz and Leibler, Nature 403(6767):335-338, 2000; Gardner et al., Nature 403(6767):339-342, 2000; Hasty et al., Nature 420(6912):224-230, 2002). The paper illustrates therefore as a necessary first step how a detailed modelling of molecular interactions with known molecular components leads to a dynamic mathematical model that can be compared to experimental results on various levels or scales. The different genetic modules or components are represented in different detail by model variants. We explain how the framework can be used for investigating other more complex genetic systems in terms of regulation and feedback.
Modelling population distribution using remote sensing imagery and location-based data
NASA Astrophysics Data System (ADS)
Song, J.; Prishchepov, A. V.
2017-12-01
Detailed spatial distribution of population density is essential for city studies such as urban planning, environmental pollution and city emergency management, and even for estimating pressure on the environment and human exposure and risks to health. However, most studies have used census data, as detailed dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional zone level. Firstly, urban functional zones within a city were mapped using high-resolution remote sensing images and POIs. The workflow of functional zone extraction includes five parts: (1) urban land use classification; (2) segmenting images in built-up areas; (3) identification of functional segments by POIs; (4) identification of functional blocks by functional segmentation and weight coefficients; (5) assessing accuracy by validation points (Fig. 1). Secondly, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between light digital number (DN) and population density at sampling points. The two methods were employed to predict the population distribution over the research area. The R² of the GWR models was on the order of 0.7, and these models typically captured significant variation over the region compared with the traditional OLS model (Fig. 2). Validation with sampling points of population density demonstrated that the result predicted by the GWR model correlated well with light values (Fig. 3). Results showed: (1) population density is not linearly correlated with light brightness using a global model; (2) VIIRS night-time light data can estimate population density when integrating functional zones at the city level; (3) GWR is a robust model for mapping population distribution; the adjusted R² of the corresponding GWR models was higher than that of the optimal OLS models, confirming that GWR models demonstrate better prediction accuracy. This method therefore provides detailed population density information for microscale citizen studies.
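The core of GWR, solving a weighted least-squares problem at each location with weights from a distance kernel, can be sketched in a few lines. This is a generic illustration on synthetic data, not the study's implementation; dedicated packages such as mgwr would be used in practice.

```python
import numpy as np

def gwr_predict(X, y, coords, x0, coord0, bandwidth):
    """Estimate local coefficients at coord0 by Gaussian-kernel weighted
    least squares (the core of GWR), then predict for covariates x0."""
    d = np.linalg.norm(coords - coord0, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    XtW = X.T * w                      # equivalent to X.T @ diag(w)
    beta = np.linalg.solve(XtW @ X, XtW @ y)   # local coefficients
    return x0 @ beta

# Synthetic demo: density ~ night-light DN with a spatially drifting slope
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
dn = rng.uniform(0, 60, size=200)
density = (1.0 + 0.2 * coords[:, 0]) * dn + rng.normal(0, 5, 200)
X = np.c_[np.ones(200), dn]
print(gwr_predict(X, density, coords, np.array([1.0, 30.0]),
                  np.array([2.0, 5.0]), bandwidth=2.0))
```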
Enhancing the LVRT Capability of PMSG-Based Wind Turbines Based on R-SFCL
NASA Astrophysics Data System (ADS)
Xu, Lin; Lin, Ruixing; Ding, Lijie; Huang, Chunjun
2018-03-01
A novel low voltage ride-through (LVRT) scheme for PMSG-based wind turbines based on the Resistor Superconducting Fault Current Limiter (R-SFCL) is proposed in this paper. The LVRT scheme is mainly formed by an R-SFCL connected in series between the transformer and the Grid Side Converter (GSC), and the basic modelling is discussed in detail. The proposed LVRT scheme is implemented to interact with a PMSG model in PSCAD/EMTDC under a three-phase short-circuit fault condition, which shows that the proposed scheme based on the R-SFCL can improve the transient performance and LVRT capability and thereby consolidate the grid connection of wind turbines.
NASA Astrophysics Data System (ADS)
Niaz, Mansoor; Aguilera, Damarys; Maza, Arelys; Liendo, Gustavo
2002-07-01
Most general chemistry courses and textbooks emphasize experimental details and lack a history and philosophy of science perspective. The objective of this study is to facilitate freshman general chemistry students' understanding of atomic structure based on the work of Thomson, Rutherford, and Bohr. It is hypothesized that classroom discussions based on arguments/counterarguments about the heuristic principles on which these scientists based their atomic models can facilitate students' conceptual understanding. This study is based on 160 freshman students enrolled in six sections of General Chemistry I (three sections formed the experimental group). All three models (Thomson, Rutherford, and Bohr) were presented to the experimental and control group students in the traditional manner, as found in most textbooks. After this, the three sections of the experimental group participated in the discussion of six items with alternative responses. Students were first asked to select a response, then participate in classroom discussions leading to arguments for or against the selected response, and finally select a new response. Three weeks after having discussed the six items, both the experimental and control groups took a monthly exam (based on the three models) and, after another 3 weeks, a semester exam. Results obtained show that, given the opportunity to argue and discuss, students' understanding can go beyond the simple regurgitation of experimental details. Performance of the experimental group showed contradictions, resistances, and progressive conceptual change, with considerable and consistent improvement on the last item. It is concluded that if we want our students to understand scientific progress and practice, then it is important that we include the experimental details not as a rhetoric of conclusions (Schwab, 1962, The teaching of science as enquiry, Cambridge, MA, Harvard University Press; Schwab, 1974, Conflicting conceptions of curriculum, Berkeley, CA, McCutchan) but as heuristic principles (Lakatos, 1970, Criticism and the growth of knowledge, Cambridge, UK, Cambridge University Press, pp. 91-195), which were based on arguments, controversies, and interpretations of the scientists.
Dallmann, André; Ince, Ibrahim; Meyer, Michaela; Willmann, Stefan; Eissing, Thomas; Hempel, Georg
2017-11-01
In recent years, several repositories of the anatomical and physiological parameters required for physiologically based pharmacokinetic modeling in pregnant women have been published. While providing a good basis, some important aspects can be further detailed. For example, they did not account for the variability associated with parameters, or they lacked key parameters necessary for developing more detailed mechanistic pregnancy physiologically based pharmacokinetic models, such as the composition of pregnancy-specific tissues. The aim of this meta-analysis was to provide an updated and extended database of anatomical and physiological parameters in healthy pregnant women that also accounts for changes in the variability of a parameter throughout gestation and for the composition of pregnancy-specific tissues. A systematic literature search was carried out to collect study data on pregnancy-related changes of anatomical and physiological parameters. For each parameter, a set of mathematical functions was fitted to the data and to the standard deviation observed among the data. The best performing functions were selected based on numerical and visual diagnostics as well as on physiological plausibility. The literature search yielded 473 studies, 302 of which met the criteria to be further analyzed and compiled in a database. In total, the database encompassed 7729 data points. Although the availability of quantitative data for some parameters remained limited, mathematical functions could be generated for many important parameters. Gaps were filled based on qualitative knowledge and physiologically plausible assumptions. The presented results facilitate the integration of pregnancy-dependent changes in anatomy and physiology into mechanistic population physiologically based pharmacokinetic models. Such models can ultimately provide a valuable tool to investigate pharmacokinetics during pregnancy in silico and support informed decision making regarding optimal dosing regimens in this vulnerable special population.
ERIC Educational Resources Information Center
Andreae, John H.; Cleary, John G.
1976-01-01
The new mechanism, PUSS, enables experience of any complex environment to be accumulated in a predictive model. PURR-PUSS is a teachable robot system based on the new mechanism. Cumulative learning is demonstrated by a detailed example. (Author)
Quantification of transendothelial migration using three-dimensional confocal microscopy.
Cain, Robert J; d'Água, Bárbara Borda; Ridley, Anne J
2011-01-01
Migration of cells across endothelial barriers, termed transendothelial migration (TEM), is an important cellular process that underpins the pathology of many disease states including chronic inflammation and cancer metastasis. While this process can be modeled in vitro using cultured cells, many model systems are unable to provide detailed visual information of cell morphologies and distribution of proteins such as junctional markers, as well as quantitative data on the rate of TEM. Improvements in imaging techniques have made microscopy-based assays an invaluable tool for studying this type of detailed cell movement in physiological processes. In this chapter, we describe a confocal microscopy-based method that can be used to assess TEM of both leukocytes and cancer cells across endothelial barriers in response to a chemotactic gradient, as well as providing information on their migration into a subendothelial extracellular matrix, designed to mimic that found in vivo.
REVIEWS OF TOPICAL PROBLEMS: The nature of neutrino mass and the phenomenon of neutrino oscillations
NASA Astrophysics Data System (ADS)
Gershtein, Semen S.; Kuznetsov, E. P.; Ryabov, Vladimir A.
1997-08-01
Various aspects of the neutrino mass problem are discussed in the light of existing model predictions and extensive experimental data. Generation mechanisms are considered and possible gauge-theory neutrino mass hierarchies, in particular the most popular 'flipped see-saw' models, are discussed. Based on the currently available astrophysical data on the integral density of matter in the Universe and on the spectral anisotropy of the relic cosmic radiation, the cosmological implications of a non-zero neutrino mass are described in detail. Results from various mass-measuring methods are presented. Considerable attention is given to heavy neutrino oscillations. Oscillation mechanisms both in vacuum and in matter are considered in detail. Experiments on oscillations at low and high energies and new-generation long-baseline facilities are described. The present state of research into oscillations of solar and atmospheric neutrinos is reviewed.
A Model-Based Prognostics Approach Applied to Pneumatic Valves
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Goebel, Kai
2011-01-01
Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
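A generic bootstrap particle-filter step of the kind used in such model-based prognostics is sketched below; the physics-based valve model would supply the transition function `f` and measurement function `h`, and all noise levels here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, f, h, q_std, r_std):
    """One predict/update cycle of a bootstrap particle filter over a scalar
    (state + damage-parameter) vector; f is the transition function and h
    the measurement function supplied by the physics-based model."""
    particles = f(particles) + rng.normal(0.0, q_std, particles.shape)
    lik = np.exp(-0.5 * ((z - h(particles)) / r_std) ** 2)   # Gaussian likelihood
    weights = weights * lik
    weights /= weights.sum()
    n_eff = 1.0 / np.sum(weights ** 2)          # effective sample size
    if n_eff < 0.5 * len(weights):              # resample when degenerate
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Toy usage: a damage variable drifting upward, observed with noise
pts = rng.normal(1.0, 0.1, 500)
wts = np.full(500, 1.0 / 500)
pts, wts = particle_filter_step(pts, wts, z=1.3, f=lambda x: 1.02 * x,
                                h=lambda x: x, q_std=0.02, r_std=0.1)
print(np.sum(pts * wts))    # posterior mean of the damage state
```

Propagating the resampled particles forward until the model's failure threshold is crossed then yields a distribution over remaining useful life, which is how prediction uncertainty is managed in this framework.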
Sohl, Terry L.; Dornbierer, Jordan; Wika, Steve; Sayler, Kristi L.; Quenzer, Robert
2017-01-01
Land use and land cover (LULC) change occurs at a local level within contiguous ownership and management units (parcels), yet LULC models primarily use pixel-based spatial frameworks. The few parcel-based models being used overwhelmingly focus on small geographic areas, limiting the ability to assess LULC change impacts at regional to national scales. We developed a modified version of the Forecasting Scenarios of land use change model to project parcel-based agricultural change across a large region in the United States Great Plains. A scenario representing an agricultural biofuel scenario was modeled from 2012 to 2030, using real parcel boundaries based on contiguous ownership and land management units. The resulting LULC projection provides a vastly improved representation of landscape pattern over existing pixel-based models, while simultaneously providing an unprecedented combination of thematic detail and broad geographic extent. The conceptual approach is practical and scalable, with potential use for national-scale projections.
Huang, Zheng; Chen, Zhi
2013-10-01
This study describes in detail how to construct a three-dimensional (3D) finite element model of a maxillary first premolar tooth based on a micro-CT data acquisition technique, MIMICS software and ANSYS software. The tooth was scanned by micro-CT, from which 1295 slices were obtained and 648 slices were then selected for modeling. The 3D surface mesh models of enamel and dentin were created by MIMICS (STL file). The solid mesh model was constructed by ANSYS. After the material properties and boundary conditions were set, a loading analysis was performed to demonstrate the applicability of the resulting model. The first and third principal stresses were then evaluated. The results showed that the numbers of nodes and elements of the finite element model were 56,618 and 311,801, respectively. The geometric form of the model was highly consistent with that of the true tooth, with a deviation of -0.28% between them. The loading analysis revealed the typical stress patterns in the contour map. The maximum compressive stress existed at the contact points and the maximum tensile stress existed in the deep fissure between the two cusps. It is concluded that, by using micro-CT and highly integrated software, constructing a high-quality 3D finite element model will not be difficult for clinical researchers.
Wallas' Four-Stage Model of the Creative Process: More than Meets the Eye?
ERIC Educational Resources Information Center
Sadler-Smith, Eugene
2015-01-01
Based on a detailed reading of Graham Wallas' "Art of Thought" (1926) it is argued that his four-stage model of the creative process (Preparation, Incubation, Illumination, Verification), in spite of holding sway as a conceptual anchor for many creativity researchers, does not reflect accurately Wallas' full account of the creative…
An assembly process model based on object-oriented hierarchical time Petri Nets
NASA Astrophysics Data System (ADS)
Wang, Jiapeng; Liu, Shaoli; Liu, Jianhua; Du, Zenghui
2017-04-01
In order to improve the versatility, accuracy and integrity of assembly process models of complex products, an assembly process model based on object-oriented hierarchical time Petri Nets is presented. A complete assembly process information model including assembly resources, assembly inspection, time, structure and flexible parts is established, and this model describes the static and dynamic data involved in the assembly process. Through the analysis of three-dimensional assembly process information, the assembly information is hierarchically divided from the whole, through the local, down to the details, and subnet models of object-oriented Petri Nets are established at the different levels. The communication problem between Petri subnets is solved by using a message database, which effectively reduces the complexity of system modeling. Finally, the modeling process is presented, and a five-layer Petri Nets model is established based on the hoisting process of the engine compartment of a wheeled armored vehicle.
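A minimal place/transition Petri net, the untimed, unlayered core of the model described above, can be encoded with incidence matrices; timing, hierarchy, object attributes and the message database of the full model are omitted in this sketch.

```python
import numpy as np

class PetriNet:
    """Minimal place/transition net. pre and post are |T| x |P| matrices of
    token demands and yields; m is the current marking (tokens per place)."""
    def __init__(self, pre, post, marking):
        self.pre, self.post = np.asarray(pre), np.asarray(post)
        self.m = np.asarray(marking)

    def enabled(self):
        return [t for t in range(len(self.pre)) if np.all(self.m >= self.pre[t])]

    def fire(self, t):
        assert t in self.enabled(), "transition not enabled"
        self.m = self.m - self.pre[t] + self.post[t]

# Two-step assembly fragment: fetch a part, then mount it
net = PetriNet(pre=[[1, 0, 0], [0, 1, 0]],
               post=[[0, 1, 0], [0, 0, 1]],
               marking=[1, 0, 0])
net.fire(0)
net.fire(1)
print(net.m)   # [0 0 1]: part mounted
```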
Reliability of equivalent sphere model in blood-forming organ dose estimation
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.; Nealy, John E.
1990-01-01
The radiation dose equivalents to blood-forming organs (BFO's) of astronauts at the Martian surface due to major solar flare events are calculated using the detailed body geometry of Langley and Billings. The solar flare spectra of the February 1956, November 1960, and August 1972 events are employed instead of the idealized Webber form. The detailed geometry results are compared with those based on the 5-cm sphere model, which was often used in the past to approximate BFO dose or dose equivalent. Larger discrepancies are found for the latter two events, possibly due to the lower numbers of highly penetrating protons. It is concluded that the 5-cm sphere model is not suitable for quantitative use in connection with future NASA deep-space, long-duration mission shield design studies.
Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors.
Qu, Chen; Bi, Du-Yan; Sui, Ping; Chao, Ai-Nong; Wang, Yun-Fei
2017-09-22
The CMOS (Complementary Metal-Oxide-Semiconductor) sensor is a type of solid-state image sensor widely used in object tracking, object recognition, intelligent navigation, and related fields. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing reduced image contrast, color distortion, and other problems. In view of this, we propose a novel dehazing approach based on a locally consistent Markov random field (MRF) framework. The neighboring clique in a traditional MRF is extended to a non-neighboring clique defined on locally consistent blocks, based on the two clues that both the atmospheric light and the transmission map satisfy the character of local consistency. In this framework, our model can strengthen the restriction on the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, thus effectively addressing inadequate detail recovery and alleviating color distortion. Moreover, the locally consistent MRF framework can recover details while maintaining better dehazing results, which effectively improves the quality of images captured by CMOS image sensors. Experimental results verified that the proposed method has the combined advantages of detail recovery and color preservation.
NASA Technical Reports Server (NTRS)
Bieberbach, George, Jr.; Fuelberg, Henry E.; Thompson, Anne M.; Schmitt, Alf; Hannan, John R.; Gregory, G. L.; Kondo, Yutaka; Knabb, Richard D.; Sachse, G. W.; Talbot, R. W.
1999-01-01
Chemical data from flight 8 of NASA's Subsonic Assessment (SASS) Ozone and Nitrogen Oxide Experiment (SONEX) exhibited signatures consistent with aircraft emissions, stratospheric air, and surface-based pollution. These signatures are examined in detail, focussing on the broad aircraft emission signatures that are several hundred kilometers in length. A mesoscale meteorological model provides high resolution wind data that are used to calculate backward trajectories arriving at locations along the flight track. These trajectories are compared to aircraft locations in the North Atlantic Flight Corridor over a 27-33 hour period. Time series of flight level NO and the number of trajectory/aircraft encounters within the NAFC show excellent agreement. Trajectories arriving within the stratospheric and surface-based pollution regions are found to experience very few aircraft encounters. Conversely, there are many trajectory/aircraft encounters within the two chemical signatures corresponding to aircraft emissions. Even many detailed fluctuations of NO within the two aircraft signature regions correspond to similar fluctuations in aircraft encountered during the previous 27-33 hours. Results indicate that high resolution meteorological modeling, when coupled with detailed aircraft location data, is useful for understanding chemical signatures from aircraft emissions at scales of several hundred kilometers.
3D Surface Temperature Measurement of Plant Canopies Using Photogrammetry Techniques From A UAV.
NASA Astrophysics Data System (ADS)
Irvine, M.; Lagouarde, J. P.
2017-12-01
Surface temperature of plant canopies and within canopies results from the coupling of radiative and energy exchange processes which govern the fluxes at the soil-plant-atmosphere interface. As a key parameter, surface temperature permits the estimation of canopy exchanges using process-based modeling methods. However, detailed 3D surface temperature measurements, or even profile surface temperature measurements, are rarely made as they have inherent difficulties. Such measurements would greatly improve multi-level canopy models such as NOAH (Chen and Dudhia 2001) or MuSICA (Ogée and Brunet 2002, Ogée et al 2003), where key surface temperature estimations are at present not tested. Additionally, at larger scales, canopy structure greatly influences satellite-based surface temperature measurements, as the structure impacts observations which are intrinsically made at varying satellite viewing angles and solar heights. In order to account for these differences, accurate modeling is again required, for example through the above-mentioned multi-layer models or with several-source-type models such as SCOPE (Van der Tol 2009), in order to standardize observations. As before, in order to validate these models, detailed field observations are required. With the need for detailed surface temperature observations in mind, we have planned a series of experiments over non-dense plant canopies to investigate the use of photogrammetry techniques. Photogrammetry is normally used at visible wavelengths to produce 3D images using point cloud reconstruction of aerial images (for example Dandois and Ellis, 2010, 2013 over a forest). From these point cloud models it should be possible to establish 3D plant surface temperature images when using thermal infrared array sensors. To do this, our experiments are based on the use of a thermal infrared camera carried on a UAV. We adapt standard photogrammetry to account for limits imposed by thermal imagery, especially the low image resolution compared with standard RGB sensors. At session B081, we intend to present first results of our thermal photogrammetric experiments with 3D surface temperature plots in order to discuss and adapt our methods to the modelling community's needs.
ERIC Educational Resources Information Center
Brevard Community Coll., Cocoa, FL.
A 310 Special Demonstration Project was conducted in Florida to create a model of competency-based adult education (CBAE) based on the programs currently in existence. This manual, which was produced through the project, presents an overview of CBAE and explains in detail how to operate a CBAE program. The manual is organized in 11 sections. The…
Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.
Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir
2015-09-01
With recent advancements in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired with a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in a sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross-section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed better performance than conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifacts. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
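A simplified stand-in for the TV-L1 cost and its gradient-descent minimization is sketched below; for brevity the sparse representation is taken as the identity (L1 directly on the image) and the TV term is smoothed, whereas the paper uses a sparsifying transform and a numerically efficient nonlinear solver.

```python
import numpy as np

def tv_l1_invert(A, b, lam_l1=1e-3, lam_tv=1e-3, step=0.5, n_iter=300,
                 eps=1e-8):
    """(Sub)gradient descent on 0.5*||Ax - b||^2 + lam_l1*||x||_1
    + lam_tv*TV(x) for a flattened n x n image x; TV is smoothed by eps."""
    n = int(np.sqrt(A.shape[1]))
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                      # data-fidelity gradient
        g += lam_l1 * np.sign(x)                   # L1 subgradient
        img = x.reshape(n, n)
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps)
        div = (np.diff(gx / mag, axis=1, prepend=0.0) +
               np.diff(gy / mag, axis=0, prepend=0.0))
        g -= lam_tv * div.ravel()                  # smoothed TV gradient
        x -= step * g
    return x.reshape(n, n)

# Toy demo: denoise a blocky 8x8 image observed through the identity operator
n = 8
truth = np.zeros((n, n))
truth[2:6, 2:6] = 1.0
b = truth.ravel() + np.random.default_rng(0).normal(0.0, 0.1, n * n)
rec = tv_l1_invert(np.eye(n * n), b)
print(np.abs(rec - truth).mean())
```

In the optoacoustic setting A would be the acoustic forward model relating the initial pressure image to the detected projection data.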
Applying Model Based Systems Engineering to NASA's Space Communications Networks
NASA Technical Reports Server (NTRS)
Bhasin, Kul; Barnes, Patrick; Reinert, Jessica; Golden, Bert
2013-01-01
System engineering practices for complex systems and networks now require that requirements, architecture, and concept of operations product development teams simultaneously harmonize their activities to provide timely, useful and cost-effective products. When dealing with complex systems of systems, traditional systems engineering methodology quickly falls short of achieving project objectives. This approach is encumbered by the use of a number of disparate hardware and software tools, spreadsheets and documents to grasp the concept of the network design and operation. In the case of NASA's space communication networks, since the networks are geographically distributed, and so are their subject matter experts, the team is challenged to create a common language and tools to produce its products. Using Model Based Systems Engineering methods and tools allows for a unified representation of the system in a model that captures relationships at a high level of detail. To date, the Program System Engineering (PSE) team has been able to model each network from its top-level operational activities and system functions down to the atomic level through relational modeling decomposition. These models allow for a better understanding of the relationships between NASA's stakeholders, internal organizations, and impacts to all related entities due to integration and sustainment of existing systems. Understanding the existing systems is essential to an accurate and detailed study of the integration options being considered. In this paper, we identify the challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes. We describe the initial approaches undertaken and the evolution toward model based system engineering applied to produce Space Communication and Navigation (SCaN) PSE products. We demonstrate the practice of Model Based System Engineering applied to integrating space communication networks and summarize its results and impact. We highlight the insights gained by applying Model Based System Engineering and provide recommendations for its applications and improvements.
NASA Technical Reports Server (NTRS)
Colborn, B. L.; Armstrong, T. W.
1992-01-01
A computer model of the three-dimensional geometry and material distributions for the LDEF spacecraft, experiment trays, and, for selected trays, the components of experiments within a tray was developed for use in ionizing radiation assessments. The model is being applied to provide 3-D shielding distributions around radiation dosimeters to aid in data interpretation, particularly in assessing the directional properties of the radiation exposure. Also, the model has been interfaced with radiation transport codes for 3-D dosimetry response predictions and for calculations related to determining the accuracy of trapped proton and cosmic ray environment models. The methodology used in developing the 3-D LDEF model is described, together with the level of detail incorporated. Currently, the trays modeled in detail are F2, F8, H3, and H12. Applications of the model discussed here include the 3-D shielding distributions around various dosimeters, the influence of shielding on dosimetry responses, and comparisons of dose predictions based on the present 3-D model versus those from the 1-D geometry model approximations used in initial estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayaraman, Buvaneswari; Finlayson, Elizabeth U.; Sohn, Michael D.
We compare computational fluid dynamics (CFD) predictions using a steady-state Reynolds Averaged Navier-Stokes (RANS) model with experimental data on airflow and pollutant dispersion under mixed-convection conditions in a 7 x 9 x 11 m high experimental facility. The Rayleigh number, based on height, was O(10^11) and the atrium was mechanically ventilated. We released tracer gas in the atrium and measured the spatial distribution of concentrations; we then modeled the experiment using four different levels of modeling detail. The four computational models differ in the choice of temperature boundary conditions and the choice of turbulence model. Predictions from a low-Reynolds-number k-ε model with detailed boundary conditions agreed well with the data using three different model-measurement comparison metrics. Results from the same model with a single temperature prescribed for each wall also agreed well with the data. Predictions of a standard k-ε model were about the same as those of an isothermal model; neither performed well. Implications of the results for practical applications are discussed.
Orion Flight Test 1 Architecture: Observed Benefits of a Model Based Engineering Approach
NASA Technical Reports Server (NTRS)
Simpson, Kimberly A.; Sindiy, Oleg V.; McVittie, Thomas I.
2012-01-01
This paper details how a NASA-led team is using a model-based systems engineering approach to capture, analyze and communicate the end-to-end information system architecture supporting the first unmanned orbital flight of the Orion Multi-Purpose Crew Exploration Vehicle. Along with a brief overview of the approach and its products, the paper focuses on the observed program-level benefits, challenges, and lessons learned, all of which may be applied to improve systems engineering tasks for characteristically similar challenges.
A dc model for power switching transistors suitable for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Wilson, P. M.; George, R. T., Jr.; Owen, H. A., Jr.; Wilson, T. G.
1979-01-01
The proposed dc model for bipolar junction power switching transistors is based on measurements which may be made with standard laboratory equipment. Those nonlinearities which are of importance to power electronics design are emphasized. Measurement procedures are discussed in detail. A model formulation adapted for use with a computer program is presented, and a comparison between actual and computer-generated results is made.
The Chandra X-ray Observatory PSF Library
NASA Astrophysics Data System (ADS)
Karovska, M.; Beikman, S. J.; Elvis, M. S.; Flanagan, J. M.; Gaetz, T.; Glotfelty, K. J.; Jerius, D.; McDowell, J. C.; Rots, A. H.
Pre-flight and on-orbit calibration of the Chandra X-Ray Observatory provided a unique base for developing detailed models of the optics and detectors. Using these models we have produced a set of simulations of the Chandra point spread function (PSF) which is available to the users via PSF library files. We describe here how the PSF models are generated and the design and content of the Chandra PSF library files.
On the biophysics and kinetics of toehold-mediated DNA strand displacement
Srinivas, Niranjan; Ouldridge, Thomas E.; Šulc, Petr; Schaeffer, Joseph M.; Yurke, Bernard; Louis, Ard A.; Doye, Jonathan P. K.; Winfree, Erik
2013-01-01
Dynamic DNA nanotechnology often uses toehold-mediated strand displacement for controlling reaction kinetics. Although the dependence of strand displacement kinetics on toehold length has been experimentally characterized and phenomenologically modeled, detailed biophysical understanding has remained elusive. Here, we study strand displacement at multiple levels of detail, using an intuitive model of a random walk on a 1D energy landscape, a secondary structure kinetics model with single base-pair steps and a coarse-grained molecular model that incorporates 3D geometric and steric effects. Further, we experimentally investigate the thermodynamics of three-way branch migration. Two factors explain the dependence of strand displacement kinetics on toehold length: (i) the physical process by which a single step of branch migration occurs is significantly slower than the fraying of a single base pair and (ii) initiating branch migration incurs a thermodynamic penalty, not captured by state-of-the-art nearest neighbor models of DNA, due to the additional overhang it engenders at the junction. Our findings are consistent with previously measured or inferred rates for hybridization, fraying and branch migration, and they provide a biophysical explanation of strand displacement kinetics. Our work paves the way for accurate modeling of strand displacement cascades, which would facilitate the simulation and construction of more complex molecular systems. PMID:24019238
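The "random walk on a 1D energy landscape" picture can be made concrete with a small Gillespie-style simulation: branch-migration positions form the states, and a free-energy penalty on the first step models the initiation cost at the junction. Rates and the penalty value below are illustrative, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_displacement_time(n_bm=20, k_uni=1.0e5, dG_init=2.0, n_runs=1000):
    """Mean first-passage time (s) of a random walk over branch-migration
    positions 0..n_bm, with a free-energy penalty dG_init (in units of kT)
    for leaving position 0 -- the initiation penalty at the junction.
    Metropolis-type rates with attempt frequency k_uni."""
    G = np.zeros(n_bm + 1)
    G[1:] = dG_init                  # plateau after the penalised first step
    times = []
    for _ in range(n_runs):
        pos, t = 0, 0.0
        while pos < n_bm:
            moves, rates = [], []
            for new in (pos - 1, pos + 1):
                if 0 <= new <= n_bm:
                    moves.append(new)
                    rates.append(k_uni * min(1.0, np.exp(-(G[new] - G[pos]))))
            total = sum(rates)
            t += rng.exponential(1.0 / total)        # Gillespie waiting time
            pos = moves[rng.choice(len(moves), p=np.array(rates) / total)]
        times.append(t)
    return float(np.mean(times))

# A larger initiation penalty slows displacement disproportionately
print(mean_displacement_time(dG_init=0.0), mean_displacement_time(dG_init=2.0))
```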
Model of Ni-63 battery with realistic PIN structure
NASA Astrophysics Data System (ADS)
Munson, Charles E.; Arif, Muhammad; Streque, Jeremy; Belahsene, Sofiane; Martinez, Anthony; Ramdane, Abderrahim; El Gmili, Youssef; Salvestrini, Jean-Paul; Voss, Paul L.; Ougazzaden, Abdallah
2015-09-01
GaN, with its wide bandgap of 3.4 eV, has emerged as an efficient material for designing high-efficiency betavoltaic batteries. An important part of designing efficient betavoltaic batteries involves a good understanding of the full process, from the behavior of the nuclear material and the creation of electron-hole pairs all the way through the collection of photo-generated carriers. This paper presents a detailed model based on Monte Carlo and Silvaco for a GaN-based betavoltaic battery device, modeled after Ni-63 as an energy source. The accuracy of the model is verified by comparing it with experimental values obtained for a GaN-based p-i-n structure under scanning electron microscope illumination.
NASA Astrophysics Data System (ADS)
Kumar, Rohit; Puri, Rajeev K.
2018-03-01
Employing the quantum molecular dynamics (QMD) approach for nucleus-nucleus collisions, we test the predictive power of the energy-based clusterization algorithm, i.e., the simulated annealing clusterization algorithm (SACA), to describe the experimental data on charge distributions and various event-by-event correlations among fragments. The calculations are constrained to the Fermi-energy domain and/or mildly excited nuclear matter. Our detailed study spans different system masses and system-mass asymmetries of the colliding partners, and shows the importance of the energy-based clusterization algorithm for understanding multifragmentation. The present calculations are also compared with other available calculations, which use one-body models, statistical models, and/or hybrid models.
Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems
NASA Astrophysics Data System (ADS)
McCrink, Matthew Henry
This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation based on a range of sensors, and vehicle operational environments is presented. The propulsion and navigation system models are used to evaluate flight-testing methods for evaluating fixed-wing sUAS performance. A brief airframe analysis is presented to provide a foundation for assessing the efficacy of the flight-test methods. The flight-testing presented in this work is focused on validating the aircraft drag polar, zero-lift drag coefficient, and span efficiency factor. Three methods are detailed and evaluated for estimating these design parameters. Specific focus is placed on the influence of propulsion and navigation system uncertainty on the resulting performance data. Performance estimates are used in conjunction with the propulsion model to estimate the impact sensor and measurement uncertainty on the endurance and range of a fixed-wing sUAS. Endurance and range results for a simplistic power available model are compared to the Reynolds-dependent model presented in this work. Additional parameter sensitivity analysis related to state estimation uncertainties encountered in flight-testing are presented. Results from these analyses indicate that the sub-system models introduced in this work are of first-order importance, on the order of 5-10% change in range and endurance, in assessing the performance of a fixed-wing sUAS.
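The way propulsion uncertainty feeds endurance and range estimates can be illustrated with a back-of-envelope electric-aircraft calculation. The constant-power model below is exactly the kind of simplification the dissertation refines with its Reynolds-dependent BEM propeller model, and all numbers are placeholders.

```python
def endurance_range(capacity_Ah, voltage_V, usable_frac, p_elec_W,
                    airspeed_ms):
    """Back-of-envelope endurance (h) and still-air range (km) for an
    electric fixed-wing sUAS from battery energy and a constant measured
    electrical power draw. Ignores the Reynolds-dependent propulsion
    efficiency effects the dissertation quantifies at roughly 5-10%."""
    energy_Wh = capacity_Ah * voltage_V * usable_frac
    t_h = energy_Wh / p_elec_W
    r_km = t_h * airspeed_ms * 3.6
    return t_h, r_km

print(endurance_range(capacity_Ah=5.0, voltage_V=14.8, usable_frac=0.8,
                      p_elec_W=120.0, airspeed_ms=18.0))
```

Perturbing `p_elec_W` by the measured sensor uncertainty directly shows how errors in the power-available estimate propagate into the 5-10% range and endurance shifts reported above.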
Influences on physicians' adoption of electronic detailing (e-detailing).
Alkhateeb, Fadi M; Doucette, William R
2009-01-01
E-detailing is the use of digital technology for detailing: the internet, video conferencing, and interactive voice response. There are two types of e-detailing: interactive (virtual) and video. Currently, little is known about what factors influence physicians' adoption of e-detailing. The objectives of this study were to test a model of physicians' adoption of e-detailing and to describe physicians using e-detailing. A mail survey was sent to a random sample of 2000 physicians practicing in Iowa. Binomial logistic regression was used to test the model of influences on physician adoption of e-detailing. On the basis of Rogers' model of adoption, the independent variables included relative advantage, compatibility, complexity, peer influence, attitudes, years in practice, presence of restrictive access to traditional detailing, type of specialty, academic affiliation, type of practice setting and control variables. A total of 671 responses were received, giving a response rate of 34.7%. A total of 141 physicians (21.0%) reported using e-detailing. The overall adoption model for using either type of e-detailing was found to be significant. Relative advantage, peer influence, attitudes, type of specialty, presence of restrictive access and years in practice had significant influences on physician adoption of e-detailing. The model of adoption of innovation is useful to explain physicians' adoption of e-detailing.
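A binomial logistic regression of the kind described can be fit in a few lines; the file name and column names below are hypothetical stand-ins for the survey constructs, not the authors' data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: 'adopted' is 1 if the physician reported using
# either type of e-detailing, other columns are the survey constructs.
df = pd.read_csv("edetailing_survey.csv")
model = smf.logit(
    "adopted ~ relative_advantage + compatibility + complexity"
    " + peer_influence + attitude + years_in_practice"
    " + restricted_access + C(specialty) + academic_affiliation",
    data=df,
).fit()
print(model.summary())   # odds ratios via np.exp(model.params)
```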
DOE Office of Scientific and Technical Information (OSTI.GOV)
This is an Aspen Plus process model for in situ and ex situ upgrading of fast pyrolysis vapors for the conversion of biomass to hydrocarbon fuels. It is based on conceptual designs that allow projections of future commercial implementations of the technologies based on a combination of research and existing commercial technologies. The process model was developed from the ground up at NREL. Results from the model are documented in a detailed design report NREL/TP-5100-62455 (available at http://www.nrel.gov/docs/fy15osti/62455.pdf).
Latent Class Analysis of Incomplete Data via an Entropy-Based Criterion
Larose, Chantal; Harel, Ofer; Kordas, Katarzyna; Dey, Dipak K.
2016-01-01
Latent class analysis is used to group categorical data into classes via a probability model. Model selection criteria then judge how well the model fits the data. When addressing incomplete data, the current methodology restricts the imputation to a single, pre-specified number of classes. We seek to develop an entropy-based model selection criterion that does not restrict the imputation to one number of clusters. Simulations show the new criterion performing well against the current standards of AIC and BIC, while a family studies application demonstrates how the criterion provides more detailed and useful results than AIC and BIC. PMID:27695391
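One generic entropy measure for a fitted latent class model, computed from posterior membership probabilities, is sketched below; the authors' criterion extends such a measure to incomplete data, so this is only the standard building block, not their exact formula.

```python
import numpy as np

def class_entropy(post):
    """post: (n_obs, n_classes) posterior membership probabilities from a
    fitted latent class model. Returns the total Shannon entropy of the
    classification and a normalised [0, 1] version (0 = classes perfectly
    separated, 1 = complete uncertainty)."""
    p = np.clip(post, 1e-12, 1.0)
    en = -np.sum(p * np.log(p))
    n, k = post.shape
    return en, en / (n * np.log(k))
```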
Wilson, David C; Kanjogera, Jennifer Bangirana; Soós, Reka; Briciu, Cosmin; Smith, Stephen R; Whiteman, Andrew D; Spies, Sandra; Oelz, Barbara
2017-08-01
This article presents the evidence base for 'operator models' - that is, how to deliver a sustainable service through the interaction of the 'client', 'revenue collector' and 'operator' functions - for municipal solid waste management in emerging and developing countries. The companion article addresses a selection of locally appropriate operator models. The evidence shows that no 'standard' operator model is effective in all developing countries and circumstances. Each city uses a mix of different operator models; 134 cases showed on average 2.5 models per city, each applying to different elements of municipal solid waste management - that is, street sweeping, primary collection, secondary collection, transfer, recycling, resource recovery and disposal - or a combination. Operator models were analysed in detail for 28 case studies; the article summarises evidence across all elements and in more detail for waste collection. Operators fall into three main groups: the public sector, the formal private sector, and micro-service providers including micro-, community-based and informal enterprises. Micro-service providers emerge as a common group; they are effective in expanding primary collection service coverage into poor or peri-urban neighbourhoods and in delivering recycling. Both public and private sector operators can deliver effective services in the appropriate situation; what matters more is a strong client organisation responsible for municipal solid waste management within the municipality, with stable political and financial backing and the capacity to manage service delivery. Revenue collection is also integral to operator models: generally the municipality pays the operator from direct charges and/or indirect taxes, rather than the operator collecting fees directly from the service user.
Detailed assessment of global transport-energy models’ structures and projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, Sonia; Mishra, Gouri Shankar; Fulton, Lew
This paper focuses on comparing the frameworks and projections from four major global transportation models with considerable transportation technology and behavioral detail. We analyze and compare the modeling frameworks, underlying data, assumptions, intermediate parameters, and projections to identify the sources of divergence or consistency, as well as key knowledge gaps. We find that there are significant differences in the base-year data and key parameters for future projections, especially for developing countries. These include passenger and freight activity, mode shares, vehicle ownership rates, and even energy consumption by mode, particularly for shipping, aviation and trucking. This may be due in part to a lack of previous efforts to do such consistency-checking and "bench-marking." We find that the four models differ in terms of the relative roles of various mitigation strategies to achieve a 2°C / 450 ppm CO2e target: the economics-based integrated assessment models favor the use of low carbon fuels as the primary mitigation option followed by efficiency improvements, whereas transport-only and expert-based models favor efficiency improvements of vehicles followed by mode shifts. We offer recommendations for future modeling improvements focusing on (1) reducing data gaps; (2) translating the findings from this study into relevant policy implications such as feasibility of current policy goals, additional policy targets needed, regional vs. global reductions, etc.; (3) modeling strata of demographic groups to improve understanding of vehicle ownership levels, travel behavior, and urban vs. rural considerations; and (4) conducting coordinated efforts in aligning input assumptions and historical data, policy analysis, and modeling insights.
Argumentation in Science Education: A Model-based Framework
NASA Astrophysics Data System (ADS)
Böttcher, Florian; Meisert, Anke
2011-02-01
The goal of this article is threefold: First, the theoretical background for a model-based framework of argumentation to describe and evaluate argumentative processes in science education is presented. Based on the general model-based perspective in cognitive science and the philosophy of science, it is proposed to understand arguments as reasons for the appropriateness of a theoretical model which explains a certain phenomenon. Argumentation is considered to be the process of the critical evaluation of such a model, if necessary in relation to alternative models. Second, some methodological details are exemplified for the use of a model-based analysis in the concrete classroom context. Third, the application of the approach in comparison with other analytical models is presented to demonstrate the explicatory power and depth of the model-based perspective. In particular, Toulmin's framework for the structural analysis of arguments is contrasted with the approach presented here, and it is demonstrated how common methodological and theoretical problems in the context of Toulmin's framework can be overcome through a model-based perspective. Additionally, a second, more complex argumentative sequence is analysed according to the proposed analytical scheme to give a broader impression of its potential in practical use.
An experimental study of nonlinear dynamic system identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1990-01-01
A technique for robust identification of nonlinear dynamic systems is developed and illustrated using both simulations and analog experiments. The technique is based on the Minimum Model Error optimal estimation approach. A detailed literature review is included, in which fundamental differences between the current approach and previous work are described. The most significant feature of the current work is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions about the nonlinearities. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
Measurable realistic image-based 3D mapping
NASA Astrophysics Data System (ADS)
Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.
2011-12-01
Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides the virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is their limited coverage of detail, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. Image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function not only makes 3D maps more interactive for users but also creates an immersive viewing experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in terms of photos; topographic and terrain attributes, such as shapes and heights, are omitted. This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measurable realistic image-based (MRI) system can produce. The major contribution here is the implementation of measurable images on 3D maps to obtain various measurements from real scenes.
NASA Astrophysics Data System (ADS)
Schmaltz, Elmar; Steger, Stefan; Bogaard, Thom; Van Beek, Rens; Glade, Thomas
2017-04-01
Hydromechanic slope stability models are often used to assess the landslide susceptibility of hillslopes. Some of these models are able to account for vegetation-related effects when assessing slope stability. However, spatial information on the required vegetation parameters (especially of woodland), which are defined by land cover type, tree species and stand density, is mostly underrepresented compared to hydropedological and geomechanical parameters. The aim of this study is to assess how LiDAR-derived biomass information can help to distinguish distinct stand-immanent properties (e.g. stand density and diversity) and further improve the performance of hydromechanic slope stability models. We used spatial vegetation data produced from sophisticated algorithms that are able to separate single trees within a stand based on LiDAR point clouds and thus allow an extraordinarily detailed determination of the aboveground biomass. Further, this information is used to estimate the species- and stand-related distribution of the subsurface biomass, using an innovative approach to approximate root system architecture and development. The hydrological tree-soil interactions and their impact on the geotechnical stability of the soil mantle are then reproduced in the dynamic and spatially distributed slope stability model STARWARS/PROBSTAB. This study highlights first advances in the approximation of the biomechanical reinforcement potential of tree root systems in tree stands. Based on our findings, we address the advantages and limitations of highly detailed biomass information in hydromechanic modelling and physically based slope failure prediction.
Extinction Debt of Protected Areas in Developing Landscapes
To conserve biological diversity, protected-area networks must be based not only upon current species distributions but also the landscape's long-term capacity to support populations. We used spatially-explicit population models requiring detailed habitat and demographic data to ...
The need for conducting forensic analysis of decommissioned bridges.
DOT National Transportation Integrated Search
2014-01-01
A limiting factor in current bridge management programs is a lack of detailed knowledge of bridge deterioration mechanisms and processes. The current state of the art is to predict future condition using statistical forecasting models based upon ...
A parsimonious dynamic model for river water quality assessment.
Mannina, Giorgio; Viviani, Gaspare
2010-01-01
Water quality modelling is of crucial importance for the assessment of physical, chemical, and biological changes in water bodies. Mathematical approaches to water modelling have become more prevalent over recent years. Different model types, ranging from detailed physical models to simplified conceptual models, are available. A practical middle ground between detailed and simplified models is the parsimonious model, which represents the simplest approach that fits the application. The appropriate modelling approach depends on the research goal as well as on the data available for correct model application. When data are inadequate, it is mandatory to focus on a simple river water quality model rather than a detailed one. This study presents a parsimonious river water quality model to evaluate the propagation of pollutants in natural rivers. The model is made up of two sub-models: a quantity sub-model and a quality sub-model. The model employs a river schematisation that considers different stretches according to the geometric characteristics and the gradient of the river bed. Each stretch is represented with a conceptual model of a series of linear channels and reservoirs: the channels determine the delay in the pollution wave and the reservoirs cause its dispersion. To assess the river water quality, the model employs four state variables: DO, BOD, NH(4), and NO. The model was applied to the Savena River (Italy), which is the focus of a European-financed project in which quantity and quality data were gathered. A sensitivity analysis of the model output to the model inputs and parameters was carried out based on the Generalised Likelihood Uncertainty Estimation methodology. The results demonstrate the suitability of such a model as a tool for river water quality management.
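A minimal sketch of the conceptual routing scheme described above follows: a linear channel (pure translation, delaying the pollution wave) feeding a cascade of identical linear reservoirs (dispersing it). The parameter names and the explicit Euler step are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def route_stretch(inflow, dt, lag_steps, k, n_reservoirs=3):
    """Route a concentration/discharge series through one river stretch:
    a linear channel shifts the wave by lag_steps, then each linear
    reservoir (storage S = k*Q, so dQ/dt = (I - Q)/k) spreads it out."""
    q = np.roll(np.asarray(inflow, dtype=float), lag_steps)
    q[:lag_steps] = 0.0                   # channel: pure delay, no mixing
    for _ in range(n_reservoirs):         # each reservoir disperses the wave
        out = np.zeros_like(q)
        for t in range(1, len(q)):
            out[t] = out[t - 1] + dt / k * (q[t - 1] - out[t - 1])  # explicit Euler
        q = out
    return q
```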
An Ontology-Based Framework for Bridging Learning Design and Learning Content
ERIC Educational Resources Information Center
Knight, Colin; Gasevic, Dragan; Richards, Griff
2006-01-01
The paper describes an ontology-based framework for bridging learning design and learning object content. In present solutions, researchers have proposed conceptual models and developed tools for both of those subjects, but without detailed discussions of how they can be used together. In this paper we advocate the use of ontologies to explicitly…
ERIC Educational Resources Information Center
Bailey, Cheryl P.
2009-01-01
This new biochemistry laboratory course moves through a progression of experiments that generates a platform for guided inquiry-based experiments. RNase One gene is isolated from prokaryotic genomic DNA, expressed as a tagged protein, affinity purified, and tested for activity and substrate specificity. Student pairs present detailed explanations…
A New Generation of Los Alamos Opacity Tables
Colgan, James Patrick; Kilcrease, David Parker; Magee, Jr., Norman H.; ...
2016-01-26
We present a new, publicly available set of Los Alamos OPLIB opacity tables for the elements hydrogen through zinc. Our tables are computed using the Los Alamos ATOMIC opacity and plasma modeling code, and make use of atomic structure calculations that use fine-structure detail for all the elements considered. Our equation-of-state (EOS) model, known as ChemEOS, is based on the minimization of free energy in a chemical picture and appears to be a reasonable and robust approach to determining atomic state populations over a wide range of temperatures and densities. In this paper we discuss in detail the calculations that we have performed for the 30 elements considered, and present some comparisons of our monochromatic opacities with measurements and other opacity codes. We also use our new opacity tables in solar modeling calculations and compare and contrast such modeling with previous work.
NASA Technical Reports Server (NTRS)
Bose, Deepak
2012-01-01
The design of entry vehicles requires predictions of the aerothermal environment during the hypersonic phase of their flight trajectories. These predictions are made using computational fluid dynamics (CFD) codes that often rely on physics and chemistry models of nonequilibrium processes. The primary processes of interest are gas-phase chemistry, internal energy relaxation, electronic excitation, nonequilibrium emission and absorption of radiation, and gas-surface interaction leading to surface recession and catalytic recombination. NASA's Hypersonics Project is advancing the state of the art in the modeling of nonequilibrium phenomena by making detailed spectroscopic measurements in shock tubes and arcjets, using ab initio quantum mechanical techniques to develop fundamental chemistry and spectroscopic databases, making fundamental measurements of finite-rate gas-surface interactions, and implementing detailed mechanisms in state-of-the-art CFD codes. The development of new models is based on validation against relevant experiments. We will present the latest developments and a roadmap for the technical areas mentioned above.
NASA Astrophysics Data System (ADS)
Hefni, Baligh El; Bourdil, Charles
2017-06-01
Molten salt technology is nowadays the most cost-effective technology for electricity generation in solar power plants. The molten salt tower receiver is based on a field of individually sun-tracking mirrors (heliostats) that reflect the incident sunshine to a receiver at the top of a centrally located tower. The objective of this study is to assess the impact of several transients arising from different scenarios (failure or normal operation mode) on the receiver's dynamic behavior. A detailed dynamic model of the Solar Two molten salt central receiver has been developed. The component model is meant to be used for receiver modeling with the ThermoSysPro library, developed by EDF. The paper also gives the results of the dynamic simulation of the selected scenarios on the Solar Two receiver.
Freni, G; La Loggia, G; Notaro, V
2010-01-01
Due to the increased occurrence of flooding events in urban areas, many procedures for flood damage quantification have been defined in recent decades. The lack of large databases is in most cases overcome by combining the output of urban drainage models and damage curves linking flooding to expected damage. The application of advanced hydraulic models as diagnostic, design and decision-making support tools has become standard practice in hydraulic research and application. Flood damage functions are usually evaluated by a priori estimation of potential damage (based on the value of exposed goods) or by interpolating real damage data (recorded during historical flooding events). Hydraulic models have undergone continuous advancement, pushed forward by increasing computer capacity. The details of the flood propagation process on the surface and of the interconnections between underground and surface drainage systems have been studied extensively in recent years, resulting in progressively more reliable models. The same level of advancement has not been reached with regard to damage curves, for which improvements are highly dependent on data availability; this remains the main bottleneck in expected flood damage estimation. Such functions are usually affected by significant uncertainty, intrinsically related to the collected data and to the simplified structure of the adopted functional relationships. The present paper aims to evaluate this uncertainty by comparing the intrinsic uncertainty connected to the construction of the depth-damage function with the hydraulic model uncertainty. In this way, the paper seeks to evaluate the role of the hydraulic model's level of detail in the wider context of flood damage estimation. The paper demonstrates that the use of detailed hydraulic models might not be justified because of their higher computational cost and the significant uncertainty in damage estimation curves. This is mainly because a large part of the total uncertainty depends on the depth-damage curves; improving the estimation of these curves may provide better results in terms of uncertainty reduction than the adoption of detailed hydraulic models.
Individual-based modelling and control of bovine brucellosis
NASA Astrophysics Data System (ADS)
Nepomuceno, Erivelton G.; Barbosa, Alípio M.; Silva, Marcos X.; Perc, Matjaž
2018-05-01
We present a theoretical approach to control bovine brucellosis. We have used individual-based modelling, which is a network-type alternative to compartmental models. Our model thus considers heterogeneous populations, and spatial aspects such as migration among herds and control actions described as pulse interventions are also easily implemented. We show that individual-based modelling reproduces the mean field behaviour of an equivalent compartmental model. Details of this process, as well as flowcharts, are provided to facilitate the reproduction of the presented results. We further investigate three numerical examples using real parameters of herds in the São Paulo state of Brazil, in scenarios which explore eradication, continuous and pulsed vaccination and meta-population effects. The obtained results are in good agreement with the expected behaviour of this disease, which ultimately showcases the effectiveness of our theory.
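The claim that an individual-based model reproduces the mean-field behaviour of a compartmental model can be checked with a toy version. The brucellosis model itself has more compartments, herd structure and pulsed interventions, so the following susceptible-infected sketch (with assumed parameters) is only schematic.

```python
import numpy as np

def ibm_si(n=500, beta=0.2, i0=5, steps=200, seed=1):
    """Toy individual-based susceptible-infected epidemic on a
    homogeneously mixing population; averaged over many runs it should
    approach the compartmental mean field dI/dt = beta * S * I / N."""
    rng = np.random.default_rng(seed)
    infected = np.zeros(n, dtype=bool)
    infected[:i0] = True
    history = []
    for _ in range(steps):
        # each susceptible becomes infected this step with prob beta * I/N
        new = (~infected) & (rng.random(n) < beta * infected.mean())
        infected |= new
        history.append(infected.sum())
    return np.array(history)
```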
Understanding bistability in yeast glycolysis using general properties of metabolic pathways.
Planqué, Robert; Bruggeman, Frank J; Teusink, Bas; Hulshof, Josephus
2014-09-01
Glycolysis is the central pathway in energy metabolism in the majority of organisms. In a recent paper, van Heerden et al. showed experimentally and computationally that glycolysis can exist in two states, a global steady state and a so-called imbalanced state. In the imbalanced state, intermediary metabolites accumulate at low levels of ATP and inorganic phosphate. It was shown that Baker's yeast uses a peculiar regulatory mechanism--via trehalose metabolism--to ensure that most yeast cells reach the steady state and not the imbalanced state. Here we explore the apparent bistable behaviour in a core model of glycolysis that is based on a well-established detailed model, and study in great detail the bifurcation behaviour of solutions, without using any numerical information on parameter values. We uncover a rich suite of solutions, including so-called imbalanced states, bistability, and oscillatory behaviour. The techniques employed are generic, directly suitable for a wide class of biochemical pathways, and could lead to better analytical treatments of more detailed models. Copyright © 2014 Elsevier Inc. All rights reserved.
Model-based pH monitor for sensor assessment.
van Schagen, Kim; Rietveld, Luuk; Veersma, Alex; Babuska, Robert
2009-01-01
Owing to the nature of the treatment processes, monitoring the processes based on individual online measurements is difficult or even impossible. However, the measurements (online and laboratory) can be combined with a priori process knowledge, using mathematical models, to objectively monitor the treatment processes and measurement devices. The pH measurement is commonly used at different stages of a drinking water treatment plant, although it is an unreliable instrument requiring significant maintenance. It is shown that, using a grey-box model, it is possible to assess the measurement devices effectively, even if detailed information about the specific processes is unknown.
NASA Astrophysics Data System (ADS)
Shi, Chengkun; Sun, Hanxu; Jia, Qingxuan; Zhao, Kailiang
2009-05-01
To realize the omni-directional movement and operating tasks of a spherical space robot system, this paper describes an innovative prototype and analyzes the dynamic characteristics of a spherical rolling robot with a telescopic manipulator. Based on the Newton-Euler equations, the kinematic and dynamic equations of the spherical robot's motion are derived in detail. Motion simulations of the robot in different environments are then developed with ADAMS. The simulation results validate the mathematical model of the system, and the dynamic model establishes a theoretical basis for subsequent work.
Model-Based GUI Testing Using Uppaal at Novo Nordisk
NASA Astrophysics Data System (ADS)
Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne
This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML state machine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
Large Terrain Modeling and Visualization for Planets
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher
2011-01-01
Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulation is the ability to handle large multi-resolution terrain data sets within models as well as for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage for dealing with these large data sets. The visualization technique uses a real-time GPU-based continuous level-of-detail method that delivers performance of multiple frames per second even for planetary-scale terrain models.
Modeling of Adaptive Optics-Based Free-Space Communications Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilks, S C; Morris, J R; Brase, J M
2002-08-06
We introduce a wave-optics-based simulation code, written for air-optic laser communication links, that includes a detailed model of an adaptive optics compensation system. We present the results obtained with this model, in which the phase of a communications laser beam is corrected after it propagates through a turbulent atmosphere. The phase of the received laser beam is measured using a Shack-Hartmann wavefront sensor, and the correction method utilizes a MEMS mirror. Strehl improvement and the amount of power coupled into the receiving fiber for both 1 km horizontal and 28 km slant paths are presented.
Hydrological and hydraulic models for determination of flood-prone and flood inundation areas
NASA Astrophysics Data System (ADS)
Aksoy, Hafzullah; Sadan Ozgur Kirca, Veysel; Burgan, Halil Ibrahim; Kellecioglu, Dorukhan
2016-05-01
Geographic Information Systems (GIS) are widely used in most studies on water resources, and can ease the workload considerably when the topography and geomorphology of the study area must be considered. Detailed data should be used in this kind of study. Because of either the complexity of the models or the requirement for highly detailed data, model outputs can be obtained quickly only with good optimization. The first aim of this study is to determine flood-prone areas in a watershed by using a hydrological model considering two wetness indexes: the topographical wetness index and the SAGA (System for Automated Geoscientific Analyses) wetness index. The wetness indexes were obtained in the Quantum GIS (QGIS) software by using the digital elevation model of the study area, and flood-prone areas are determined by considering the wetness index maps of the watershed. In the second stage of this study, a hydraulic model, HEC-RAS, was executed to determine flood inundation areas under flood events of different return periods. The river network cross-sections required for this study were derived from highly detailed digital elevation models in QGIS, and river hydraulic parameters were used in the hydraulic model. The modelling technology used in this study consists entirely of freely available open-source software. Based on case studies performed on watersheds in Turkey, it is concluded that the results of such studies can be used to take precautionary measures against loss of life and property due to floods in urban areas.
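The topographical wetness index used above has a standard closed form, TWI = ln(a / tan β), with a the specific catchment area and β the local slope; the SAGA wetness index differs in using a modified catchment-area calculation. The grid conventions in the sketch below are assumptions.

```python
import numpy as np

def topographic_wetness_index(flow_acc, cell_size, slope_rad):
    """TWI = ln(a / tan(beta)): flow_acc is a flow-accumulation grid in
    cells (e.g. from a D8 algorithm), cell_size converts it to a specific
    catchment area per unit contour width, and slope_rad is the local
    slope in radians. Small floors avoid division by zero on flat cells."""
    a = (flow_acc + 1.0) * cell_size
    return np.log(a / np.maximum(np.tan(slope_rad), 1e-6))
```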
Neutronic design studies of a conceptual DCLL fusion reactor for a DEMO and a commercial power plant
NASA Astrophysics Data System (ADS)
Palermo, I.; Veredas, G.; Gómez-Ros, J. M.; Sanz, J.; Ibarra, A.
2016-01-01
Neutronic analyses or, more widely, nuclear analyses have been performed for the development of a dual-coolant He/LiPb (DCLL) conceptual design reactor. A detailed three-dimensional (3D) model has been examined and optimized. The design is based on the plasma parameters and functional materials of the power plant conceptual studies (PPCS) model C. The initial radial-build for the detailed model has been determined according to the dimensions established in a previous work on an equivalent simplified homogenized reactor model. For optimization purposes, the initial specifications established over the simplified model have been refined on the detailed 3D design, modifying material and dimension of breeding blanket, shield and vacuum vessel in order to fulfil the priority requirements of a fusion reactor in terms of the fundamental neutronic responses. Tritium breeding ratio, energy multiplication factor, radiation limits in the TF coils, helium production and displacements per atom (dpa) have been calculated in order to demonstrate the functionality and viability of the reactor design in guaranteeing tritium self-sufficiency, power efficiency, plasma confinement, and re-weldability and structural integrity of the components. The paper describes the neutronic design improvements of the DCLL reactor, obtaining results for both DEMO and power plant operational scenarios.
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2016-01-25
Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances, however at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, however further validity testing around a range of therapeutic footwear types is required. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kuik, Friderike; Lauer, Axel; Churkina, Galina; Denier van der Gon, Hugo A. C.; Fenner, Daniel; Mar, Kathleen A.; Butler, Tim M.
2016-12-01
Air pollution is the number one environmental cause of premature deaths in Europe. Despite extensive regulations, air pollution remains a challenge, especially in urban areas. For studying summertime air quality in the Berlin-Brandenburg region of Germany, the Weather Research and Forecasting Model with Chemistry (WRF-Chem) is set up and evaluated against meteorological and air quality observations from monitoring stations as well as from a field campaign conducted in 2014. The objective is to assess which resolution and level of detail in the input data is needed for simulating urban background air pollutant concentrations and their spatial distribution in the Berlin-Brandenburg area. The model setup includes three nested domains with horizontal resolutions of 15, 3 and 1 km and anthropogenic emissions from the TNO-MACC III inventory. We use RADM2 chemistry and the MADE/SORGAM aerosol scheme. Three sensitivity simulations are conducted updating input parameters to the single-layer urban canopy model based on structural data for Berlin, specifying land use classes on a sub-grid scale (mosaic option) and downscaling the original emissions to a resolution of ca. 1 km × 1 km for Berlin based on proxy data including traffic density and population density. The results show that the model simulates meteorology well, though urban 2 m temperature and urban wind speeds are biased high and nighttime mixing layer height is biased low in the base run with the settings described above. We show that the simulation of urban meteorology can be improved when specifying the input parameters to the urban model, and to a lesser extent when using the mosaic option. On average, ozone is simulated reasonably well, but maximum daily 8 h mean concentrations are underestimated, which is consistent with the results from previous modelling studies using the RADM2 chemical mechanism. Particulate matter is underestimated, which is partly due to an underestimation of secondary organic aerosols. NOx (NO + NO2) concentrations are simulated reasonably well on average, but nighttime concentrations are overestimated due to the model's underestimation of the mixing layer height, and urban daytime concentrations are underestimated. The daytime underestimation is improved when using downscaled, and thus locally higher emissions, suggesting that part of this bias is due to deficiencies in the emission input data and their resolution. The results further demonstrate that a horizontal resolution of 3 km improves the results and spatial representativeness of the model compared to a horizontal resolution of 15 km. With the input data (land use classes, emissions) at the level of detail of the base run of this study, we find that a horizontal resolution of 1 km does not improve the results compared to a resolution of 3 km. However, our results suggest that a 1 km horizontal model resolution could enable a detailed simulation of local pollution patterns in the Berlin-Brandenburg region if the urban land use classes, together with the respective input parameters to the urban canopy model, are specified with a higher level of detail and if urban emissions of higher spatial resolution are used.
Mathematical models of behavior of individual animals.
Tsibulsky, Vladimir L; Norman, Andrew B
2007-01-01
This review is focused on mathematical modeling of behaviors of a whole organism with special emphasis on models with a clearly scientific approach to the problem that helps to understand the mechanisms underlying behavior. The aim is to provide an overview of old and contemporary mathematical models without complex mathematical details. Only deterministic and stochastic, but not statistical models are reviewed. All mathematical models of behavior can be divided into two main classes. First, models that are based on the principle of teleological determinism assume that subjects choose the behavior that will lead them to a better payoff in the future. Examples are game theories and operant behavior models both of which are based on the matching law. The second class of models are based on the principle of causal determinism, which assume that subjects do not choose from a set of possibilities but rather are compelled to perform a predetermined behavior in response to specific stimuli. Examples are perception and discrimination models, drug effects models and individual-based population models. A brief overview of the utility of each mathematical model is provided for each section.
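For reference, the matching law invoked above has a standard textbook form (stated here as general background, not quoted from this review), with B_i the response rates, R_i the reinforcement rates, b a bias term, and s a sensitivity exponent:

```latex
% Strict matching (Herrnstein):
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
% Generalised matching, with bias b and sensitivity s:
\frac{B_1}{B_2} = b\left(\frac{R_1}{R_2}\right)^{s}
```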
Mori, Kentaro; Yamamoto, Takuji; Oyama, Kazutaka; Ueno, Hideaki; Nakao, Yasuaki; Honma, Keiichirou
2008-12-01
Experience with dissection of the cavernous sinus and the temporal bone is essential for training in skull base surgery, but the opportunities for cadaver dissection are very limited. A modification of a commercially available prototype three-dimensional (3D) skull base model, made by a selective laser sintering method and incorporating surface details and inner bony structures such as the inner ear structures and air cells, is proposed to include artificial dura mater, cranial nerves, venous sinuses, and the internal carotid artery for such surgical training. The transpetrosal approach and epidural cavernous sinus surgery (Dolenc's technique) were performed on this modified model using a high speed drill or ultrasonic bone curette under an operating microscope. The model could be dissected in almost the same way as a real cadaver. The modified 3D skull base model provides a good educational tool for training in skull base surgery.
Update: Advancement of Contact Dynamics Modeling for Human Spaceflight Simulation Applications
NASA Technical Reports Server (NTRS)
Brain, Thomas A.; Kovel, Erik B.; MacLean, John R.; Quiocho, Leslie J.
2017-01-01
Pong is a new software tool developed at the NASA Johnson Space Center that advances interference-based geometric contact dynamics based on 3D graphics models. The Pong software consists of three parts: a set of scripts to extract geometric data from 3D graphics models, a contact dynamics engine that provides collision detection and force calculations based on the extracted geometric data, and a set of scripts for visualizing the dynamics response with the 3D graphics models. The contact dynamics engine can be linked with an external multibody dynamics engine to provide an integrated multibody contact dynamics simulation. This paper provides a detailed overview of Pong including the overall approach and modeling capabilities, which encompasses force generation from contact primitives and friction to computational performance. Two specific Pong-based examples of International Space Station applications are discussed, and the related verification and validation using this new tool are also addressed.
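Interference-based contact response of the kind described can be illustrated with a penalty sketch for the simplest primitive pair. Pong's actual detection operates on triangle geometry extracted from 3D graphics models, so the sphere-plane setup and all constants below are assumptions for illustration only.

```python
import numpy as np

def sphere_plane_contact(p, v, radius, k=5e4, c=50.0, mu=0.3):
    """Penalty-style contact force for a sphere (position p, velocity v)
    against the z = 0 plane: spring-damper normal force plus a
    Coulomb-style friction force opposing the tangential velocity."""
    pen = radius - p[2]                      # penetration depth
    if pen <= 0.0:
        return np.zeros(3)                   # no interference, no force
    n = np.array([0.0, 0.0, 1.0])
    fn = max(k * pen - c * v[2], 0.0)        # never adhesive
    vt = v - v[2] * n                        # tangential velocity
    vt_norm = np.linalg.norm(vt)
    ft = -mu * fn * vt / vt_norm if vt_norm > 1e-9 else np.zeros(3)
    return fn * n + ft
```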
Modelling the nonlinear behaviour of an underplatform damper test rig for turbine applications
NASA Astrophysics Data System (ADS)
Pesaresi, L.; Salles, L.; Jones, A.; Green, J. S.; Schwingshackl, C. W.
2017-02-01
Underplatform dampers (UPD) are commonly used in aircraft engines to mitigate the risk of high-cycle fatigue failure of turbine blades. The energy dissipated at the friction contact interface of the damper reduces the vibration amplitude significantly, and the coupling of the blades can also lead to significant shifts of the resonance frequencies of the bladed disk. The highly nonlinear behaviour of bladed discs constrained by UPDs requires an advanced modelling approach to ensure that the correct damper geometry is selected during the design of the turbine, and that no unexpected resonance frequencies and amplitudes will occur in operation. Approaches based on an explicit model of the damper in combination with multi-harmonic balance solvers have emerged as a promising way to predict the nonlinear behaviour of UPDs correctly, however rigorous experimental validations are required before approaches of this type can be used with confidence. In this study, a nonlinear analysis based on an updated explicit damper model having different levels of detail is performed, and the results are evaluated against a newly-developed UPD test rig. Detailed linear finite element models are used as input for the nonlinear analysis, allowing the inclusion of damper flexibility and inertia effects. The nonlinear friction interface between the blades and the damper is described with a dense grid of 3D friction contact elements which allow accurate capturing of the underlying nonlinear mechanism that drives the global nonlinear behaviour. The introduced explicit damper model showed a great dependence on the correct contact pressure distribution: the use of an accurate, measurement-based distribution better matched the nonlinear dynamic behaviour of the test rig. Good agreement with the measured frequency response data could only be reached when the zero-harmonic (constant) term was included in the multi-harmonic expansion of the nonlinear problem, highlighting its importance when the contact interface experiences large normal load variation. The resulting numerical damper kinematics, with strong translational and rotational motion, and the global blade frequency response were fully validated experimentally, showing the accuracy of the suggested highly detailed explicit UPD modelling approach.
The effects of geometric uncertainties on computational modelling of knee biomechanics
Fisher, John; Wilcox, Ruth
2017-01-01
The geometry of the articular components of the knee is an important factor in predicting joint mechanics in computational models. There are a number of uncertainties in the definition of the geometry of cartilage and meniscus, and evaluating the effects of these uncertainties is fundamental to understanding the level of reliability of the models. In this study, the sensitivity of knee mechanics to geometric uncertainties was investigated by comparing polynomial-based and image-based knee models and varying the size of the meniscus. The results suggested that the geometric uncertainties in cartilage and meniscus resulting from the resolution of MRI and the accuracy of segmentation had considerable effects on the predicted knee mechanics. Moreover, even if the mathematical geometric descriptors can be very close to the image-based articular surfaces, the detailed contact pressure distribution produced by the mathematical geometric descriptors was not the same as that of the image-based model. However, the trends predicted by the models based on mathematical geometric descriptors were similar to those of the image-based models. PMID:28879008
NASA Astrophysics Data System (ADS)
Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.
2015-08-01
Outdoor large-scale cultural sites are highly sensitive to environmental, natural and human-made factors, implying an imminent need for spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatio-temporally dependent aggregation of 3D digital models, incorporating a predictive assessment procedure that indicates which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating the spatial probability that regions will need further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made, that is, surfaces can be localized within the objects where a high-accuracy reconstruction process needs to be activated at forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open-source 3DCityDB, incorporating a PostgreSQL geo-database, is used to manage and manipulate 3D data and their semantics.
The ABC's of Online Course Design According to Addie Model
ERIC Educational Resources Information Center
Durak, Gürhan; Ataizi, Murat
2016-01-01
The purpose of this study was to design the course of Programming Languages-I online, which is given on face-to-face basis at undergraduate level. It is seen in literature that there is no detailed research on the preparation of a fully-online course directly based on an instructional design model. In this respect, depending on the ADDIE design…
Simulation of Electric Propulsion Thrusters (Preprint)
2011-02-07
activity concerns the plumes produced by electric thrusters. Detailed information on the plumes is required for safe integration of the thruster...ground-based laboratory facilities. Device modelling also plays an important role in plume simulations by providing accurate boundary conditions at...methods used to model the flow of gas and plasma through electric propulsion devices. Discussion of the numerical analysis of other aspects of
Simulation of Electric Propulsion Thrusters
2011-01-01
and operational lifetime. The second area of modelling activity concerns the plumes produced by electric thrusters. Detailed information on the plumes ...to reproduce the in-orbit space environment using ground-based laboratory facilities. Device modelling also plays an important role in plume ...of the numerical analysis of other aspects of thruster design, such as thermal and structural processes, is omitted here. There are two fundamental
Gaussian approximation potential modeling of lithium intercalation in carbon nanostructures
NASA Astrophysics Data System (ADS)
Fujikake, So; Deringer, Volker L.; Lee, Tae Hoon; Krynski, Marcin; Elliott, Stephen R.; Csányi, Gábor
2018-06-01
We demonstrate how machine-learning-based interatomic potentials can be used to model guest atoms in host structures. Specifically, we generate Gaussian approximation potential (GAP) models for the interaction of lithium atoms with graphene, graphite, and disordered carbon nanostructures, based on reference density functional theory data. Rather than treating the full Li-C system, we demonstrate how the energy and force differences arising from Li intercalation can be modeled and then added to a (pre-existing and unmodified) GAP model of pure elemental carbon. Furthermore, we show the benefit of using an explicit pair potential fit to capture "effective" Li-Li interactions and to improve the performance of the GAP model. This provides proof of concept for modeling guest atoms in host frameworks with machine-learning-based potentials and, in the longer run, is promising for carrying out detailed atomistic studies of battery materials.
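The energy-difference strategy can be mimicked in miniature: fit a regressor to ΔE = E(Li+C) − E(C) so that an unmodified host potential is reused. Everything below (the synthetic data, the placeholder descriptors, and ridge regression standing in for a GAP) is a toy sketch, not the paper's actual machinery.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # stand-in for SOAP-like Li-environment descriptors
dE = X @ rng.normal(size=10) + 0.01 * rng.normal(size=200)  # synthetic DFT differences
diff_model = Ridge(alpha=1e-3).fit(X, dE)  # plays the role of the Li "difference" GAP

def total_energy(e_host, li_descriptors):
    """E_total = E(host, carbon-only potential) + learned intercalation term.
    An explicit Li-Li pair term would be added on top in the paper's scheme."""
    return e_host + diff_model.predict(li_descriptors).sum()
```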
NASA Astrophysics Data System (ADS)
Bergamasco, A.; De Nat, L.; Flindt, M. R.; Amos, C. L.
2003-11-01
Phytobenthic communities can play an active role in modifying the environmental characteristics of the ecosystem in which they live, thereby mediating the human impact on coastal-zone habitats. Complicated feedbacks couple the establishment of phytobenthic communities with water quality and physical parameters in estuaries. Direct and indirect interactions between physical and biological attributes need to be considered in order to improve the management of these ecosystems and to guarantee a sustainable use of coastal resources. Within the project F-ECTS ("Feedbacks of Estuarine Circulation and Transport of Sediments on phytobenthos"), this issue was approached through a three-step strategy: (i) monitoring: detailed fieldwork activities focusing on the measurement and evaluation of the main processes involving hydrodynamics, sediments, nutrients, light and phytobenthic biomass; (ii) modeling: joint modeling of suspended particulate matter erosion/transport/deposition and the biological mediation of the hydrodynamics; and (iii) GIS: development of GIS-based practical tools able to manage and exploit measured and modeled data on the basis of scientific investigation guidelines and procedures. The overall strategy is described by illustrating the results of field measurements, providing details of the model implementation and demonstrating the GIS-based tools.
Detailed simulation of a Lobster-eye telescope.
Putkunz, Corey T; Peele, Andrew G
2009-08-03
The concept of an x-ray telescope based on the optics of the eye of certain types of crustacea has been in currency for nearly thirty years. However, it is only in the last decade that the technology to make the telescope and the opportunity to mount it on a suitable space platform have combined to allow the idea to become a reality. Accordingly, we have undertaken a detailed simulation study, updating previous simplified models, to properly characterise the performance of the instrument in orbit. The study reveals details of how the particular characteristics of the lobster-eye optics affect the sensitivity of the instrument and allow us to implement new ideas in data extraction methods.
Modelling the permafrost extent on the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Zhao, L.; Zou, D.; Sheng, Y.; Chen, J.; Wu, T.; Wu, J.; Pang, Q.; Wang, W.
2016-12-01
The Tibetan Plateau (TP) possesses the largest area of permafrost terrain in the mid- and low-latitude regions of the world. Permafrost plays a significant role in climatic, hydrological, and ecological systems, and has great influence on landform formation, slopes and engineering construction. A detailed database of the distribution and characteristics of permafrost is crucial for engineering planning, water resource management, ecosystem protection, climate modeling, and carbon cycle research. Although some permafrost distribution maps compiled in previous studies proved very useful, there is large uncertainty in the mapped permafrost distribution due to the limited data sources, ambiguous criteria, little validation, and the deficiency of high-quality spatial datasets. In this paper, a new permafrost map was generated mostly based on freezing and thawing indices from modified MODIS land surface temperatures (LSTs), and validated with various ground-based datasets. Soil thermal properties of five soil types across the TP were estimated according to an empirical equation and in situ observed soil properties (water content and bulk density) obtained during field surveys. Based on these datasets, the Temperature at the Top Of Permafrost (TTOP) model was applied to simulate permafrost distribution over the TP. The results show that permafrost, seasonally frozen ground, and unfrozen ground cover areas of 106.4 × 10⁴ km², 145.6 × 10⁴ km², and 2.9 × 10⁴ km², respectively. Ground-based observations of permafrost distribution across five investigated regions (IRs) and three highway transects (crossing the entire permafrost region from north to south) were used to validate the model. The validation shows that the kappa coefficient varies from 0.38 to 0.78 (average 0.57) for the five IRs and from 0.62 to 0.74 (average 0.68) along the three transects. The TTOP modeling identifies thawing regions more accurately than the two earlier maps, compiled in 1996 and 2006, and represents the detailed permafrost distribution better than other methods. Overall, the results provide a much more detailed map of permafrost distribution, which could serve as a promising basic dataset for further permafrost research on the Tibetan Plateau.
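The TTOP model named above is commonly written as TTOP = (r_k · n_t · DDT − n_f · DDF) / P. The sketch below uses that widely cited form (conditional refinements of the original Smith-Riseborough formulation are omitted), so treat it as an assumption rather than the authors' exact implementation.

```python
def ttop(ddt, ddf, nt, nf, rk, period=365.0):
    """Temperature at the Top Of Permafrost [degC] from thawing/freezing
    degree-day sums DDT/DDF (here derivable from MODIS LST), surface
    transfer factors nt/nf, and rk = thawed/frozen soil thermal
    conductivity ratio. TTOP < 0 indicates permafrost presence."""
    return (rk * nt * ddt - nf * ddf) / period
```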
Multilayered nonuniform sampling for three-dimensional scene representation
NASA Astrophysics Data System (ADS)
Lin, Huei-Yung; Xiao, Yu-Hua; Chen, Bo-Ren
2015-09-01
The representation of a three-dimensional (3-D) scene is essential in multiview imaging technologies. We present a unified geometry and texture representation based on global resampling of the scene. A layered data map representation with a distance-dependent nonuniform sampling strategy is proposed. It is capable of increasing the details of the 3-D structure locally and is compact in size. The 3-D point cloud obtained from the multilayered data map is used for view rendering. For any given viewpoint, image synthesis with different levels of detail is carried out using the quadtree-based nonuniformly sampled 3-D data points. Experimental results are presented using the 3-D models of reconstructed real objects.
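A distance-dependent nonuniform sampling of this kind can be sketched with a small recursive quadtree: cells subdivide until their size is small relative to their distance from the viewpoint, giving dense samples nearby and sparse samples far away. The threshold values and the 2D simplification below are assumptions.

```python
import numpy as np

def sample_cells(cx, cy, size, viewpoint, min_size=1.0, thresh=2.0, out=None):
    """Recursively subdivide a square cell centred at (cx, cy) until its
    size is small relative to its distance from the viewpoint, emitting
    one sample per leaf cell (distance-dependent level of detail)."""
    if out is None:
        out = []
    d = np.hypot(cx - viewpoint[0], cy - viewpoint[1])
    if size <= min_size or size < d / thresh:
        out.append((cx, cy, size))           # leaf: one sample for this cell
        return out
    h = size / 2.0                           # children have half the side
    for dx in (-h / 2, h / 2):
        for dy in (-h / 2, h / 2):
            sample_cells(cx + dx, cy + dy, h, viewpoint, min_size, thresh, out)
    return out
```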
NASA Astrophysics Data System (ADS)
Sereda, T. G.; Kostarev, S. N.
2018-03-01
A theoretical basis for linking the material flows of a machine-building enterprise with an automated decision-making system is developed. The machine-building manufacturing process is represented as a space-time system. The flow conservation equation is based on the calculation of production volume. The basis of resource variables includes equipment capacities and operators. Perturbations such as defects and failures are investigated in the space-time basis. An equation for the flow of parts along the manufacturing route is derived. The resulting analytical expression describes the state of the part flow, taking into account the operation of the equipment and personnel injuries.
Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction
Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald
2015-01-01
In this paper we simulate high resolution cloth consisting of up to 2 million triangles which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per time step basis. Distributed memory parallelism is used for both time evolution and collisions and we specifically address Gauss-Seidel ordering of repulsion/collision response. This algorithm is demonstrated by several high-resolution and high-fidelity simulations. PMID:19147895
Performance model for grid-connected photovoltaic inverters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyson, William Earl; Galbraith, Gary M.; King, David L.
2007-09-01
This document provides an empirically based performance model for grid-connected photovoltaic inverters used for system performance (energy) modeling and for continuous monitoring of inverter performance during system operation. The versatility and accuracy of the model were validated for a variety of both residential and commercial size inverters. Default parameters for the model can be obtained from manufacturers' specification sheets, and the accuracy of the model can be further refined using either well-instrumented field measurements in operational systems or detailed measurements from a recognized testing laboratory. An initial database of inverter performance parameters was developed based on measurements conducted at Sandia National Laboratories and at laboratories supporting the solar programs of the California Energy Commission.
Multiple Damage Progression Paths in Model-Based Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Goebel, Kai Frank
2011-01-01
Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in its own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.
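A minimal sketch of the core idea, casting damage estimation as joint state-parameter estimation with a particle filter: a scalar damage state is tracked together with an unknown wear-rate parameter. The dynamics, noise levels, and variable names are invented for illustration and do not correspond to the paper's pump model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
true_rate = 0.03

# particles: damage state x and unknown wear-rate parameter theta
x = np.zeros(N)
theta = rng.uniform(0.0, 0.1, N)

for k in range(100):
    # truth and a noisy measurement of the damage state
    z = true_rate * (k + 1) + rng.normal(0, 0.05)
    # propagate: x' = x + theta, with artificial process/parameter noise
    x = x + theta + rng.normal(0, 0.01, N)
    theta = theta + rng.normal(0, 0.001, N)  # random-walk parameter evolution
    # weight by measurement likelihood, then multinomial resampling
    w = np.exp(-0.5 * ((z - x) / 0.05) ** 2)
    w /= w.sum()
    idx = rng.choice(N, N, p=w)
    x, theta = x[idx], theta[idx]

print("estimated wear rate:", theta.mean(), "true:", true_rate)
```

The random-walk noise on theta above is a generic stand-in for the paper's variance control mechanism, which bounds parameter uncertainty rather than letting it drift freely.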
ERIC Educational Resources Information Center
Hirsch, Jorge E.; Scalapino, Douglas J.
1983-01-01
Discusses ways computers are being used in condensed-matter physics by experimenters and theorists. Experimenters use them to control experiments and to gather and analyze data. Theorists use them for detailed predictions based on realistic models and for studies on systems not realizable in practice. (JN)
Evaluation of the flame propagation within an SI engine using flame imaging and LES
NASA Astrophysics Data System (ADS)
He, Chao; Kuenne, Guido; Yildar, Esra; van Oijen, Jeroen; di Mare, Francesca; Sadiki, Amsini; Ding, Carl-Philipp; Baum, Elias; Peterson, Brian; Böhm, Benjamin; Janicka, Johannes
2017-11-01
This work shows experiments and simulations of the fired operation of a spark ignition engine with port-fuelled injection. The test rig considered is an optically accessible single cylinder engine specifically designed at TU Darmstadt for the detailed investigation of in-cylinder processes and model validation. The engine was operated under lean conditions using iso-octane as a substitute for gasoline. Experiments have been conducted to provide a sound database of the combustion process. A planar flame imaging technique has been applied within the swirl- and tumble-planes to provide statistical information on the combustion process to complement a pressure-based comparison between simulation and experiments. This data is then analysed and used to assess the large eddy simulation performed within this work. For the simulation, the engine code KIVA has been extended by the dynamically thickened flame model combined with chemistry reduction by means of pressure dependent tabulation. Sixty cycles have been simulated to perform a statistical evaluation. Based on a detailed comparison with the experimental data, a systematic study has been conducted to obtain insight into the most crucial modelling uncertainties.
3-DIMENSIONAL Geometric Survey and Structural Modelling of the Dome of Pisa Cathedral
NASA Astrophysics Data System (ADS)
Aita, D.; Barsotti, R.; Bennati, S.; Caroti, G.; Piemonte, A.
2017-02-01
This paper aims to illustrate the preliminary results of a research project on the dome of Pisa Cathedral (Italy). The final objective of the present research is to achieve a deep understanding of the structural behaviour of the dome, through a detailed knowledge of its geometry and constituent materials, and by taking into account historical and architectural aspects as well. A reliable survey of the dome is the essential starting point for any further investigation and adequate structural modelling. Examination of the status quo on the surveys of the Cathedral dome shows that a detailed survey suitable for structural analysis is in fact lacking. For this reason, high-density and high-precision surveys have been planned, by considering that a different survey output is needed, according both to the type of structural model chosen and the purposes to be achieved. Thus, both range-based (laser scanning) and image-based (3D Photogrammetry) survey methodologies have been used. This contribution introduces the first results concerning the shape of the dome derived from surveys. Furthermore, a comparison is made between such survey outputs and those available in the literature.
Watcharapong Tachajapong; Jesse Lozano; Shankar Mahalingam; Xiangyang Zhou; David R. Weise
2008-01-01
Crown fire initiation is studied by using simple experiments and detailed physical modeling based on Large Eddy Simulation (LES). Experiments conducted thus far reveal that crown fuel ignition via surface fire occurs when the crown base is within the continuous flame region and does not occur when the crown base is located in the hot plume gas region of the surface...
NASA Astrophysics Data System (ADS)
Carette, Yannick; Vanhove, Hans; Duflou, Joost
2018-05-01
Single Point Incremental Forming is a flexible process that is well-suited for small batch production and rapid prototyping of complex sheet metal parts. The distributed nature of the deformation process and the unsupported sheet imply that controlling the final accuracy of the workpiece is challenging. To improve the process limits and the accuracy of SPIF, the use of multiple forming passes has been proposed and discussed by a number of authors. Most methods use multiple intermediate models, where the previous one is strictly smaller than the next one, while gradually increasing the workpieces' wall angles. Another method that can be used is the manufacture of a smoothed-out "base geometry" in the first pass, after which more detailed features can be added in subsequent passes. In both methods, the selection of these intermediate shapes is freely decided by the user. However, their practical implementation in the production of complex freeform parts is not straightforward. The original CAD model can be manually adjusted or completely new CAD models can be created. This paper discusses an automatic method that is able to extract the base geometry from a full STL-based CAD model in an analytical way. Harmonic decomposition is used to express the final geometry as the sum of individual surface harmonics. It is then possible to filter these harmonic contributions to obtain a new CAD model with a desired level of geometric detail. This paper explains the technique and its implementation, as well as its use in the automatic generation of multi-step geometries.
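For geometries expressible as a height field, the harmonic-filtering idea can be approximated with a 2D Fourier low-pass: keep the low-order harmonics as the "base geometry" and treat the discarded high-frequency content as the detail added in later passes. The paper applies surface harmonics to general STL models; this height-map version is only an analogy, with all values invented.

```python
import numpy as np

def base_geometry(height, keep_fraction=0.1):
    """Low-pass filter a height map: retain only the lowest
    spatial-frequency harmonics as the smoothed 'base geometry'."""
    H = np.fft.fft2(height)
    ny, nx = height.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    mask = np.sqrt(fx**2 + fy**2) <= keep_fraction * 0.5  # 0.5 = Nyquist
    return np.real(np.fft.ifft2(H * mask))

# toy part: a broad form plus fine surface features
y, x = np.mgrid[0:128, 0:128] / 128.0
part = 5.0 * np.sin(2 * np.pi * x) + 0.3 * np.sin(2 * np.pi * 20 * x * y)
base = base_geometry(part, keep_fraction=0.1)
detail = part - base  # features to add in subsequent forming passes
print("max detail height removed:", abs(detail).max())
```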
An information model to support user-centered design of medical devices.
Hagedorn, Thomas J; Krishnamurty, Sundar; Grosse, Ian R
2016-08-01
The process of engineering design requires the product development team to balance the needs and limitations of many stakeholders, including those of the user, regulatory organizations, and the designing institution. This is particularly true in medical device design, where additional consideration must be given to a much more complex user base that can only be accessed on a limited basis. Given this inherent challenge, few projects exist that simultaneously consider design domain concepts, such as aspects of a detailed design, together with a detailed view of various stakeholders, their capabilities, and the user needs. In this paper, we present a novel information model approach that combines a detailed model of design elements with a model of the design itself, customer requirements, and the capabilities of the customers themselves. The information model is used to facilitate knowledge capture and automated reasoning across domains with a minimal set of rules by adopting a terminology that treats customer and design specific factors identically, thus enabling straightforward assessments. A unique aspect of this approach is that it systematically provides an integrated perspective on the key usability information that drives design decisions towards more universal or effective outcomes, together with the very design information impacted by that usability information. This can lead to cost-efficient optimal designs based on a direct inclusion of the needs of customers alongside business, marketing, and engineering requirements. Two case studies are presented to show the method's potential as a more effective knowledge management tool with built-in automated inferences that provide design insight, as well as its overall effectiveness as a platform to develop and execute medical device design from a holistic perspective. Copyright © 2016 Elsevier Inc. All rights reserved.
2012-01-01
Background: Academic detailing is an interactive, convenient, and user-friendly approach to delivering non-commercial education to healthcare clinicians. While evidence suggests academic detailing is associated with improvements in prescribing behavior, uncertainty exists about generalizability and scalability in diverse settings. Our study evaluates different models of delivering academic detailing in a rural family medicine setting. Methods: We conducted a pilot project to assess the feasibility, effectiveness, and satisfaction with academic detailing delivered face-to-face as compared to a modified approach using distance-learning technology. The recipients were four family medicine clinics within the Oregon Rural Practice-based Research Network (ORPRN). Two clinics were allocated to receive face-to-face detailing and two received outreach through video conferencing or asynchronous web-based outreach. Surveys at midpoint and completion were used to assess effectiveness and satisfaction. Results: Each clinic received four outreach visits over an eight month period. Topics included treatment-resistant depression, management of atypical antipsychotics, drugs for insomnia, and benzodiazepine tapering. Overall, 90% of participating clinicians were satisfied with the program. Respondents who received in person detailing reported a higher likelihood of changing their behavior compared to respondents in the distance detailing group for five of seven content areas. While 90%-100% of respondents indicated they would continue to participate if the program were continued, the likelihood of participation declined if only distance approaches were offered. Conclusions: We found strong support and satisfaction for the program among participating clinicians. Participants favored in-person approaches to distance interactions. Future efforts will be directed at quantitative methods for evaluating the economic and clinical effectiveness of detailing in rural family practice settings. PMID:23276303
D. M. Jimenez; B. W. Butler; J. Reardon
2003-01-01
Current methods for predicting fire-induced plant mortality in shrubs and trees are largely empirical. These methods are not readily linked to duff burning, soil heating, and surface fire behavior models. In response to the need for a physics-based model of this process, a detailed model for predicting the temperature distribution through a tree stem as a function of...
Modeling of a ring rosen-type piezoelectric transformer by Hamilton's principle.
Nadal, Clément; Pigache, Francois; Erhart, Jiří
2015-04-01
This paper deals with the analytical modeling of a ring Rosen-type piezoelectric transformer. The developed model is based on a Hamiltonian approach, enabling the main parameters and the performance for the first radial vibratory modes to be obtained. The methodology is detailed, and the final results, both the input admittance and the electric potential distribution on the surface of the secondary part, are compared with numerical and experimental ones for discussion and validation.
Information Processing and Collective Behavior in a Model Neuronal System
2014-03-28
for an AFOSR project headed by Steve Reppert on Monarch Butterfly navigation. We visited the Reppert lab at the UMASS Medical School and have had many... developed a detailed mathematical model of the mammalian circadian clock. Our model can accurately predict diverse experimental data including the... (i.e. P1 affects P2 which affects P3 ...). The output of the system is calculated (measurements), and the interactions are forgotten. Based on
Multi Sensor Data Integration for AN Accurate 3d Model Generation
NASA Astrophysics Data System (ADS)
Chhatkuli, S.; Satoh, T.; Tachibana, K.
2015-05-01
The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more details and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and non-complex terrain. However, 3D models automatically generated from aerial imagery generally lack accuracy for roads under bridges, details under tree canopy, isolated trees, etc. Moreover, they often suffer from undulated road surfaces, non-conforming building shapes, and loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each data set's weaknesses and helps create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, or the final 3D model, was generally noise-free and without unnecessary details.
NASA Technical Reports Server (NTRS)
Noor, A. K.; Malik, M.
2000-01-01
A study is made of the effects of variation in the lamination and geometric parameters, and boundary conditions of multi-layered composite panels on the accuracy of the detailed response characteristics obtained by five different modeling approaches. The modeling approaches considered include four two-dimensional models, each with five parameters to characterize the deformation in the thickness direction, and a predictor-corrector approach with twelve displacement parameters. The two-dimensional models are first-order shear deformation theory, third-order theory, a theory based on trigonometric variation of the transverse shear stresses through the thickness, and a discrete layer theory. The combination of the following four key elements distinguishes the present study from previous studies reported in the literature: (1) the standard of comparison is taken to be the solutions obtained by using three-dimensional continuum models for each of the individual layers; (2) both mechanical and thermal loadings are considered; (3) boundary conditions other than simply supported edges are considered; and (4) quantities compared include detailed through-the-thickness distributions of transverse shear and transverse normal stresses. Based on the numerical studies conducted, the predictor-corrector approach appears to be the most effective technique for obtaining accurate transverse stresses, and for thermal loading, none of the two-dimensional models is adequate for calculating transverse normal stresses, even when used in conjunction with three-dimensional equilibrium equations.
Aymerich, I; Rieger, L; Sobhani, R; Rosso, D; Corominas, Ll
2015-09-15
The objective of this paper is to demonstrate the importance of incorporating more realistic energy cost models (based on current energy tariff structures) into existing water resource recovery facilities (WRRFs) process models when evaluating technologies and cost-saving control strategies. In this paper, we first introduce a systematic framework to model energy usage at WRRFs and a generalized structure to describe energy tariffs including the most common billing terms. Secondly, this paper introduces a detailed energy cost model based on a Spanish energy tariff structure coupled with a WRRF process model to evaluate several control strategies and provide insights into the selection of the contracted power structure. The results for a 1-year evaluation on a 115,000 population-equivalent WRRF showed monthly cost differences ranging from 7 to 30% when comparing the detailed energy cost model to an average energy price. The evaluation of different aeration control strategies also showed that using average energy prices and neglecting energy tariff structures may lead to biased conclusions when selecting operating strategies or comparing technologies or equipment. The proposed framework demonstrated that for cost minimization, control strategies should be paired with a specific optimal contracted power. Hence, the design of operational and control strategies must take into account the local energy tariff. Copyright © 2015 Elsevier Ltd. All rights reserved.
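The gap between an average-price model and a tariff-aware one is easy to demonstrate. The sketch below compares a flat price against a simple two-period time-of-use tariff plus a contracted-power charge; all prices and the load profile are invented, and the Spanish tariff used in the paper has more billing terms.

```python
import numpy as np

hours = np.arange(24)
# invented daily aeration load profile (kW), peaking midday
load_kw = 80 + 40 * np.sin(np.pi * (hours - 6) / 12).clip(0)

flat_price = 0.12                                              # EUR/kWh average
tou_price = np.where((hours >= 8) & (hours < 22), 0.18, 0.07)  # peak/off-peak
contracted_kw, power_fee = 150.0, 0.10                         # EUR/kW/day

cost_flat = (load_kw * flat_price).sum()
cost_tou = (load_kw * tou_price).sum() + contracted_kw * power_fee
print(f"flat: {cost_flat:.2f} EUR/day  tariff-aware: {cost_tou:.2f} EUR/day")
```

Shifting the same load into off-peak hours changes the tariff-aware figure but not the flat one, which is exactly the bias the paper warns about when comparing control strategies.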
Chakraborty, Arindom
2016-12-01
A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possibly missing information on covariates add complications to the joint model. In such circumstances, influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and the time-to-event data, respectively. Here, we propose an influence function-based robust estimation method. A Monte Carlo expectation-maximization algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.
Traditional Payment Models in Radiology: Historical Context for Ongoing Reform.
Silva, Ezequiel; McGinty, Geraldine B; Hughes, Danny R; Duszak, Richard
2016-10-01
The passage of the Medicare Access and CHIP Reauthorization Act (MACRA) replaces the sustainable growth rate with a payment system based on quality and alternative payment model participation. The general structure of payment under MACRA is included in the statute, but the rules and regulations defining its implementation are yet to be formalized. It is imperative that the radiology profession inform policymakers on their role in health care under MACRA. This will require a detailed understanding of prior legislative and nonlegislative actions that helped shape MACRA. To that end, the authors provide a detailed historical context for payment reform, focusing on the payment quality initiatives and alternative payment model demonstrations that helped provide the foundation of future MACRA-driven payment reform. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
[Perceptual sharpness metric for visible and infrared color fusion images].
Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan
2012-12-01
For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. Firstly, the contrast sensitivity function (CSF) of the human visual system is used to reduce insensitive frequency components under given viewing conditions. Secondly, a perceptual contrast model, which takes the human luminance masking effect into account, is proposed based on a local band-limited contrast model. Finally, the perceptual contrast is calculated in the regions of interest (containing image details and edges) in the fusion image to evaluate image perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides predictions that are more closely matched to human perceptual evaluations than five existing sharpness (blur) metrics for color images. The proposed metric can evaluate the perceptual sharpness of color fusion images effectively.
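A bare-bones version of such a metric might weight the image spectrum by a CSF approximation and then take a band-limited contrast of the result. The sketch below uses a generic Mannos-Sakrison-type CSF and a crude frequency mapping; it is a stand-in for, not a reproduction of, the paper's calibrated model.

```python
import numpy as np

def csf(f):
    """Mannos-Sakrison-type CSF approximation; f in cycles/degree."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def perceptual_sharpness(img, max_cpd=30.0):
    """Score = mean local contrast of the CSF-weighted image (a generic
    stand-in; the paper restricts this to detail/edge regions)."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    f_cpd = np.sqrt(fx**2 + fy**2) * 2.0 * max_cpd  # map [0, 0.5] -> cpd
    weighted = np.real(np.fft.ifft2(np.fft.fft2(img) * csf(f_cpd)))
    gy, gx = np.gradient(weighted)  # band-limited contrast proxy
    return np.hypot(gx, gy).mean() / (np.abs(weighted).mean() + 1e-9)

img = np.random.rand(128, 128)  # stand-in fusion image luminance
print(perceptual_sharpness(img))
```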
High-fidelity simulation capability for virtual testing of seismic and acoustic sensors
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.
2005-05-01
This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
Flight crew aiding for recovery from subsystem failures
NASA Technical Reports Server (NTRS)
Hudlicka, E.; Corker, K.; Schudy, R.; Baron, Sheldon
1990-01-01
Some of the conceptual issues associated with pilot aiding systems are discussed and an implementation of one component of such an aiding system is described. It is essential that the format and content of the information the aiding system presents to the crew be compatible with the crew's mental models of the task. It is proposed that in order to cooperate effectively, both the aiding system and the flight crew should have consistent information processing models, especially at the point of interface. A general information processing strategy, developed by Rasmussen, was selected to serve as the bridge between the human and aiding system's information processes. The development and implementation of a model-based situation assessment and response generation system for commercial transport aircraft are described. The current implementation is a prototype which concentrates on engine and control surface failure situations and consequent flight emergencies. The aiding system, termed Recovery Recommendation System (RECORS), uses a causal model of the relevant subset of the flight domain to simulate the effects of these failures and to generate appropriate responses, given the current aircraft state and the constraints of the current flight phase. Since detailed information about the aircraft state may not always be available, the model represents the domain at varying levels of abstraction and uses the less detailed abstraction levels to make inferences when exact information is not available. The structure of this model is described in detail.
ERIC Educational Resources Information Center
von Davier, Matthias; González B., Jorge; von Davier, Alina A.
2013-01-01
Local equating (LE) is based on Lord's criterion of equity. It defines a family of true transformations that aim at the ideal of equitable equating. van der Linden (this issue) offers a detailed discussion of common issues in observed-score equating relative to this local approach. By assuming an underlying item response theory model, one of…
NASA Astrophysics Data System (ADS)
Christian, Wolfgang; Belloni, Mario
2013-04-01
We have recently developed a Graphs and Tracks model based on an earlier program by David Trowbridge, as shown in Fig. 1. Our model can show position, velocity, acceleration, and energy graphs and can be used for motion-to-graphs exercises. Users set the heights of the track segments, and the model displays the motion of the ball on the track together with position, velocity, and acceleration graphs. This ready-to-run model is available in the ComPADRE OSP Collection at www.compadre.org/osp/items/detail.cfm?ID=12023.
Research in DRM architecture based on watermarking and PKI
NASA Astrophysics Data System (ADS)
Liu, Ligang; Chen, Xiaosu; Xiao, Dao-ju; Yi, Miao
2005-02-01
The virtues and disadvantages of present digital copyright protection systems are analyzed, and a security protocol model for digital copyright protection is designed that balances the usage validity, integrity, and transmission security of digital media with trade fairness. A detailed formal description of the protocol model is given, and the relationships among the entities involved in digital copyright protection are analyzed. Analysis of the security and capability of the protocol model shows that it is good in terms of security and practicability.
Data-base development for water-quality modeling of the Patuxent River basin, Maryland
Fisher, G.T.; Summers, R.M.
1987-01-01
Procedures and rationale used to develop a data base and data management system for the Patuxent Watershed Nonpoint Source Water Quality Monitoring and Modeling Program of the Maryland Department of the Environment and the U.S. Geological Survey are described. A detailed data base and data management system has been developed to facilitate modeling of the watershed for water quality planning purposes; statistical analysis; plotting of meteorologic, hydrologic, and water quality data; and geographic data analysis. The system is Maryland's prototype for development of a basinwide water quality management program. A key step in the program is to build a calibrated and verified water quality model of the basin using the Hydrological Simulation Program--FORTRAN (HSPF) hydrologic model, which has been used extensively in large-scale basin modeling. The compilation of the substantial existing data base for preliminary calibration of the basin model is described, including meteorologic, hydrologic, and water quality data from federal and state data bases and a geographic information system containing digital land use and soils data. The data base development is significant in its application of an integrated, uniform approach to data base management and modeling.
NASA Technical Reports Server (NTRS)
Ungar, Eugene K.; Richards, W. Lance
2015-01-01
The aircraft-based Stratospheric Observatory for Infrared Astronomy (SOFIA) is a platform for multiple infrared astronomical observation experiments. These experiments carry sensors cooled to liquid helium temperatures. The liquid helium supply is contained in large (i.e., 10 liters or more) vacuum-insulated dewars. Should the dewar vacuum insulation fail, the inrushing air will condense and freeze on the dewar wall, resulting in a large heat flux on the dewar's contents. The heat flux results in a rise in pressure and the actuation of the dewar pressure relief system. A previous NASA Engineering and Safety Center (NESC) assessment provided recommendations for the wall heat flux that would be expected from a loss of vacuum and detailed an appropriate method to use in calculating the maximum pressure that would occur in a loss of vacuum event. This method involved building a detailed supercritical helium compressible flow thermal/fluid model of the vent stack and exercising the model over the appropriate range of parameters. The experimenters designing science instruments for SOFIA are not experts in compressible supercritical flows and do not generally have access to the thermal/fluid modeling packages that are required to build detailed models of the vent stacks. Therefore, the SOFIA Program engaged the NESC to develop a simplified methodology to estimate the maximum pressure in a liquid helium dewar after the loss of vacuum insulation. The method would allow the university-based science instrument development teams to conservatively determine the cryostat's vent neck sizing during preliminary design of new SOFIA Science Instruments. This report details the development of the simplified method, the method itself, and the limits of its applicability. The simplified methodology provides an estimate of the dewar pressure after a loss of vacuum insulation that can be used for the initial design of the liquid helium dewar vent stacks. However, since it is not an exact tool, final verification of the dewar pressure vessel design requires a complete, detailed real fluid compressible flow model of the vent stack. The wall heat flux resulting from a loss of vacuum insulation increases the dewar pressure, which actuates the pressure relief mechanism and results in high-speed flow through the dewar vent stack. At high pressures, the flow can be choked at the vent stack inlet, at the exit, or at an intermediate transition or restriction. During previous SOFIA analyses, it was observed that there was generally a readily identifiable section of the vent stack that would limit the flow – e.g., a small diameter entrance or an orifice. It was also found that when the supercritical helium was approximated as an ideal gas at the dewar condition, the calculated mass flow rate based on choking at the limiting entrance or transition was less than the mass flow rate calculated using the detailed real fluid model. Using this lower mass flow rate would yield a conservative prediction of the dewar's wall heat flux capability. The simplified method of the current work was developed by building on this observation.
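The ideal-gas choked-flow bound that anchors the simplified method is a standard compressible-flow result. A sketch with helium properties; the relief pressure, temperature, and throat size below are placeholders, not values from the SOFIA analysis.

```python
import math

def choked_mass_flow(p0, T0, area, gamma=1.667, R=2077.0, Cd=1.0):
    """Ideal-gas choked mass flow (kg/s) through a restriction of the
    given area (m^2), for stagnation pressure p0 (Pa) and temperature
    T0 (K). gamma and R default to helium; Cd is a discharge coefficient."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return Cd * area * p0 * math.sqrt(gamma / (R * T0)) * term

# placeholder dewar state: 2 bar relief pressure, 10 K, 10 mm vent throat
mdot = choked_mass_flow(p0=2.0e5, T0=10.0, area=math.pi * 0.005**2)
print(f"choked-flow bound: {mdot * 1e3:.1f} g/s")
```

As the abstract notes, the ideal-gas figure underestimates the real-fluid flow rate, which is what makes it a conservative bound on the dewar's venting capability.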
Fluorescence-based proxies for lignin in freshwater dissolved organic matter
Hernes, Peter J.; Bergamaschi, Brian A.; Eckard, Robert S.; Spencer, Robert G.M.
2009-01-01
Lignin phenols have proven to be powerful biomarkers in environmental studies; however, the complexity of lignin analysis limits the number of samples and thus spatial and temporal resolution in any given study. In contrast, spectrophotometric characterization of dissolved organic matter (DOM) is rapid, noninvasive, relatively inexpensive, requires small sample volumes, and can even be measured in situ to capture fine-scale temporal and spatial detail of DOM cycling. Here we present a series of cross-validated Partial Least Squares models that use fluorescence properties of DOM to explain up to 91% of lignin compositional and concentration variability in samples collected seasonally over 2 years in the Sacramento River/San Joaquin River Delta in California, United States. These models were subsequently used to predict lignin composition and concentration from fluorescence measurements collected during a diurnal study in the San Joaquin River. While modeled lignin composition remained largely unchanged over the diurnal cycle, changes in modeled lignin concentrations were much greater than expected and indicate that the sensitivity of fluorescence-based proxies for lignin may prove invaluable as a tool for selecting the most informative samples for detailed lignin characterization. With adequate calibration, similar models could be used to significantly expand our ability to study sources and processing of DOM in complex surface water systems.
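Operationally, such proxies are cross-validated PLS regressions from fluorescence features to lignin measurements. A minimal sketch on synthetic data, assuming scikit-learn is available; the feature set, sample size, and component count are illustrative only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 40))          # stand-in fluorescence features
beta = rng.normal(size=40)
y = X @ beta * 0.1 + rng.normal(scale=0.5, size=60)  # stand-in lignin conc.

pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", r2.mean())

pls.fit(X, y)
y_new = pls.predict(X[:3])             # predict lignin for new samples
print(y_new.shape)
```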
Prediction of reaction knockouts to maximize succinate production by Actinobacillus succinogenes.
Nag, Ambarish; St John, Peter C; Crowley, Michael F; Bomble, Yannick J
2018-01-01
Succinate is a precursor of multiple commodity chemicals and bio-based succinate production is an active area of industrial bioengineering research. One of the most important microbial strains for bio-based production of succinate is the capnophilic gram-negative bacterium Actinobacillus succinogenes, which naturally produces succinate by a mixed-acid fermentative pathway. To engineer A. succinogenes to improve succinate yields during mixed acid fermentation, it is important to have a detailed understanding of the metabolic flux distribution in A. succinogenes when grown in suitable media. To this end, we have developed a detailed stoichiometric model of the A. succinogenes central metabolism that includes the biosynthetic pathways for the main components of biomass, namely glycogen, amino acids, DNA, RNA, lipids and UDP-N-Acetyl-α-D-glucosamine. We have validated our model by comparing model predictions generated via flux balance analysis with experimental results on mixed acid fermentation. Moreover, we have used the model to predict single and double reaction knockouts to maximize succinate production while maintaining growth viability. According to our model, succinate production can be maximized by knocking out either of the reactions catalyzed by the PTA (phosphate acetyltransferase) and ACK (acetyl kinase) enzymes, whereas the double knockouts of PEPCK (phosphoenolpyruvate carboxykinase) and PTA or PEPCK and ACK enzymes are the most effective in increasing succinate production.
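The knockout predictions rest on flux balance analysis, which is a linear program: maximize a target flux subject to steady-state mass balance (S v = 0) and flux bounds, re-solving with each candidate reaction pinned to zero. The toy network below is invented to show the mechanics; the paper's stoichiometric model is far larger and would normally be handled with a dedicated tool such as COBRApy.

```python
import numpy as np
from scipy.optimize import linprog

# toy stoichiometric matrix, metabolites x reactions (invented network):
# R0: -> A,  R1: A -> B,  R2: A -> C,  R3: B export,  R4: C export
S = np.array([[ 1, -1, -1,  0,  0],
              [ 0,  1,  0, -1,  0],
              [ 0,  0,  1,  0, -1]])
bounds = [(0, 10)] * 5
target = 4  # the "succinate export" flux we want to maximize

def max_flux(knockout=None):
    b = list(bounds)
    if knockout is not None:
        b[knockout] = (0, 0)           # force the knocked-out flux to zero
    c = np.zeros(5); c[target] = -1.0  # linprog minimizes, so negate
    res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=b, method="highs")
    return -res.fun if res.success else 0.0

print("wild type:", max_flux())
for r in range(4):
    print(f"knockout of reaction {r}:", max_flux(r))
```

In this toy network, knocking out the competing branch (R1) leaves the target flux intact, loosely mirroring how removing the acetate-producing PTA/ACK reactions redirects flux toward succinate.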
NASA Technical Reports Server (NTRS)
Au, Andrew Y.; Brown, Richard D.; Welker, Jean E.
1991-01-01
Satellite-based altimetric data taken by GEOS-3, SEASAT, and GEOSAT over the Aral Sea, the Black Sea, and the Caspian Sea are analyzed, and a least squares collocation technique is used to predict the geoid undulations on a 0.25 x 0.25 deg grid and to transform these geoid undulations to free-air gravity anomalies. Rapp's 180x180 geopotential model is used as the reference surface for the collocation procedure. The result of the geoid-to-gravity transformation is, however, sensitive to the information content of the reference geopotential model used. For example, considerable detailed surface gravity data were incorporated into the reference model over the Black Sea, resulting in a reference model with significant information content at short wavelengths. Thus, estimation of short wavelength gravity anomalies from gridded geoid heights is generally reliable over regions such as the Black Sea, using the conventional collocation technique with local empirical covariance functions. Over regions such as the Caspian Sea, where detailed surface data are generally not incorporated into the reference model, unconventional techniques are needed to obtain reliable gravity anomalies. Based on the predicted gravity anomalies over these inland seas, speculative tectonic structures are identified and geophysical processes are inferred.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henning, Brian; Lu, Xiaochuan; Murayama, Hitoshi
Here, we present a practical three-step procedure of using the Standard Model effective field theory (SM EFT) to connect ultraviolet (UV) models of new physics with weak scale precision observables. With this procedure, one can interpret precision measurements as constraints on a given UV model. We give a detailed explanation for calculating the effective action up to one-loop order in a manifestly gauge covariant fashion. This covariant derivative expansion method dramatically simplifies the process of matching a UV model with the SM EFT, and also makes available a universal formalism that is easy to use for a variety of UV models. A few general aspects of RG running effects and choosing operator bases are discussed. Finally, we provide mapping results between the bosonic sector of the SM EFT and a complete set of precision electroweak and Higgs observables to which present and near future experiments are sensitive. Many results and tools which should prove useful to those wishing to use the SM EFT are detailed in several appendices.
NASA Astrophysics Data System (ADS)
Sanchez, Beatriz; Santiago, Jose Luis; Martilli, Alberto; Martin, Fernando; Borge, Rafael; Quaassdorff, Christina; de la Paz, David
2017-08-01
Air quality management requires more detailed studies of air pollution at the urban and local scale over long periods of time. This work focuses on obtaining the spatial distribution of NOx concentration averaged over several days in a heavily trafficked urban area in Madrid (Spain) using a computational fluid dynamics (CFD) model. A methodology based on a weighted average of CFD simulations is applied, computing the time evolution of NOx dispersion as a sequence of steady-state scenarios that take into account the actual atmospheric conditions. The emission inputs are estimated from a traffic emission model, and the meteorological information used is derived from a mesoscale model. Finally, the computed concentration map correlates well with 72 passive samplers deployed in the research area. This work reveals the potential of using urban mesoscale simulations together with detailed traffic emissions to provide accurate maps of pollutant concentration at the microscale using CFD simulations.
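The weighted-average methodology reduces to averaging a library of steady-state concentration maps by how often each scenario class occurs, scaled by its emissions. A schematic sketch; the class structure, array names, and numbers are invented.

```python
import numpy as np

# library of steady-state CFD maps, one per scenario class
# (e.g., wind-sector/stability bin), per unit emission
n_classes, ny, nx = 8, 50, 50
rng = np.random.default_rng(2)
cfd_maps = rng.gamma(2.0, 10.0, size=(n_classes, ny, nx))

# frequency of each class over the period (from the mesoscale model)
freq = rng.random(n_classes); freq /= freq.sum()
# mean emission factor per class (from the traffic emission model)
emis = rng.uniform(0.5, 1.5, n_classes)

avg_nox = np.tensordot(freq * emis, cfd_maps, axes=1)  # time-averaged map
print(avg_nox.shape, avg_nox.mean())
```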
Delusions and prediction error: clarifying the roles of behavioural and brain responses
Corlett, Philip Robert; Fletcher, Paul Charles
2015-01-01
Griffiths and colleagues provided a clear and thoughtful review of the prediction error model of delusion formation [Cognitive Neuropsychiatry, 2014 April 4 (Epub ahead of print)]. As well as reviewing the central ideas and concluding that the existing evidence base is broadly supportive of the model, they provide a detailed critique of some of the experiments that we have performed to study it. Though they conclude that the shortcomings that they identify in these experiments do not fundamentally challenge the prediction error model, we nevertheless respond to these criticisms. We begin by providing a more detailed outline of the model itself as there are certain important aspects of it that were not covered in their review. We then respond to their specific criticisms of the empirical evidence. We defend the neuroimaging contrasts that we used to explore this model of psychosis arguing that, while any single contrast entails some ambiguity, our assumptions have been justified by our extensive background work before and since. PMID:25559871
EXAMINING TATOOINE: ATMOSPHERIC MODELS OF NEPTUNE-LIKE CIRCUMBINARY PLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, E. M.; Rauscher, E.
2016-08-01
Circumbinary planets experience a time-varying irradiation pattern as they orbit their two host stars. In this work, we present the first detailed study of the atmospheric effects of this irradiation pattern on known and hypothetical gaseous circumbinary planets. Using both a one-dimensional energy balance model (EBM) and a three-dimensional general circulation model (GCM), we look at the temperature differences between circumbinary planets and their equivalent single-star cases in order to determine the nature of the atmospheres of these planets. We find that for circumbinary planets on stable orbits around their host stars, temperature differences are on average no more than 1.0% in the most extreme cases. Based on detailed modeling with the GCM, we find that these temperature differences are not large enough to excite circulation differences between the two cases. We conclude that gaseous circumbinary planets can be treated as their equivalent single-star case in future atmospheric modeling efforts.
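The time-varying forcing itself is straightforward: the instantaneous flux on the planet is the sum of the two stars' inverse-square contributions as all three bodies move. The sketch below assumes circular, coplanar orbits, uses made-up luminosities, and crudely approximates the barycentric offsets with a luminosity ratio; the paper's EBM/GCM forcing is more careful.

```python
import numpy as np

AU, LSUN = 1.496e11, 3.828e26

def flux_on_planet(t, L1=0.8 * LSUN, L2=0.3 * LSUN, a_bin=0.2 * AU,
                   P_bin=20.0, a_pl=1.0 * AU, P_pl=365.0):
    """Total stellar flux (W/m^2) on a circumbinary planet; all bodies
    on circular coplanar orbits, t in days. Values are placeholders."""
    th_b, th_p = 2 * np.pi * t / P_bin, 2 * np.pi * t / P_pl
    ratio = L2 / (L1 + L2)  # crude stand-in for the binary mass ratio
    s1 = ratio * a_bin * np.array([np.cos(th_b), np.sin(th_b)])
    s2 = -(1 - ratio) * a_bin * np.array([np.cos(th_b), np.sin(th_b)])
    p = a_pl * np.array([np.cos(th_p), np.sin(th_p)])
    d1, d2 = np.linalg.norm(p - s1), np.linalg.norm(p - s2)
    return L1 / (4 * np.pi * d1**2) + L2 / (4 * np.pi * d2**2)

f = np.array([flux_on_planet(t) for t in range(100)])
print(f"flux varies {f.min():.0f}-{f.max():.0f} W/m^2 over 100 days")
```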
Downscaling GLOF Hazards: An in-depth look at the Nepal Himalaya
NASA Astrophysics Data System (ADS)
Rounce, D.; McKinney, D. C.; Lala, J.
2016-12-01
The Nepal Himalaya house a large number of glacial lakes that pose a flood hazard to downstream communities and infrastructure. The modeling of the entire process chain of these glacial lake outburst floods (GLOFs) has been advancing rapidly in recent years. The most common cause of failure is mass movement entering the glacial lake, which triggers a tsunami-like wave that breaches the terminal moraine and causes the ensuing downstream flood. Unfortunately, modeling the avalanche, the breach of the moraine, and the downstream flood requires a large amount of site-specific information and can be very labor-intensive. Therefore, these detailed models need to be paired with large-scale hazard assessments that identify the glacial lakes that are the biggest threat and the triggering events that threaten these lakes. This study discusses the merger of a large-scale, remotely-based hazard assessment with more detailed GLOF models to show how GLOF hazard modeling can be downscaled in the Nepal Himalaya.
Klein, Michael T; Hou, Gang; Quann, Richard J; Wei, Wei; Liao, Kai H; Yang, Raymond S H; Campain, Julie A; Mazurek, Monica A; Broadbelt, Linda J
2002-01-01
A chemical engineering approach for the rigorous construction, solution, and optimization of detailed kinetic models for biological processes is described. This modeling capability addresses the required technical components of detailed kinetic modeling, namely, the modeling of reactant structure and composition, the building of the reaction network, the organization of model parameters, the solution of the kinetic model, and the optimization of the model. Even though this modeling approach has enjoyed successful application in the petroleum industry, its application to biomedical research has just begun. We propose to expand the horizons on classic pharmacokinetics and physiologically based pharmacokinetics (PBPK), where human or animal bodies were often described by a few compartments, by integrating PBPK with reaction network modeling described in this article. If one draws a parallel between an oil refinery, where the application of this modeling approach has been very successful, and a human body, the individual processing units in the oil refinery may be considered equivalent to the vital organs of the human body. Even though the cell or organ may be much more complicated, the complex biochemical reaction networks in each organ may be similarly modeled and linked in much the same way as the modeling of the entire oil refinery through linkage of the individual processing units. The integrated chemical engineering software package described in this article, BioMOL, denotes the biological application of molecular-oriented lumping. BioMOL can build a detailed model in 1-1,000 CPU sec using standard desktop hardware. The models solve and optimize using standard and widely available hardware and software and can be presented in the context of a user-friendly interface. We believe this is an engineering tool with great promise in its application to complex biological reaction networks. PMID:12634134
Procedural 3d Modelling for Traditional Settlements. The Case Study of Central Zagori
NASA Astrophysics Data System (ADS)
Kitsakis, D.; Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.
2017-02-01
Over the last decades 3D modelling has been a fast growing field in Geographic Information Science, extensively applied in various domains including reconstruction and visualization of cultural heritage, especially monuments and traditional settlements. Technological advances in computer graphics allow for modelling of complex 3D objects with high precision and accuracy. Procedural modelling is an effective tool and a relatively novel method, based on the concept of algorithmic modelling. It is used to generate accurate 3D models and composite facade textures from sets of rules called Computer Generated Architecture grammars (CGA grammars), which define the objects' detailed geometry, rather than altering or editing the model manually. In this paper, procedural modelling tools have been exploited to generate the 3D model of a traditional settlement in the region of Central Zagori in Greece. The detailed geometries of the 3D models derived from the application of shape grammars on selected footprints, and the process resulted in a final 3D model, optimally describing the built environment of Central Zagori, in three Levels of Detail (LoD). The final 3D scene was exported and published as a 3D web scene which can be viewed with the CityEngine 3D viewer, giving a walkthrough of the whole model, as in virtual reality or game environments. This research work addresses issues regarding texture precision, LoD for 3D objects and interactive visualization within one 3D scene, as well as the effectiveness of large scale modelling, along with the benefits and drawbacks that derive from procedural modelling techniques in the field of cultural heritage and more specifically in 3D modelling of traditional settlements.
Ftmp-Based Simulation of Twin Nucleation and Substructure Evolution Under Hypervelocity Impact
NASA Astrophysics Data System (ADS)
Okuda, Tatsuya; Imiya, Kazuhiro; Hasebe, Tadashi
2013-01-01
The deformation twinning model based on the Field Theory of Multiscale Plasticity (FTMP) represents the twin degrees of freedom with the incompatibility tensor, which is incorporated into the hardening law of the FTMP-based crystalline plasticity framework. The model is further implemented into a finite element code. In the present study, the model is applied to a single-slip-oriented FCC single crystal sample, and preliminary simulations are conducted under static conditions to confirm the model's basic capabilities. The simulation results exhibit nucleation and growth of twinned regions, accompanied by a serrated stress response and overall softening. Simulations under hypervelocity impact conditions are also conducted to investigate the model's ability to describe the complex induced substructures composed of both twins and dislocations. The simulated nucleation of twins is examined in detail using duality diagrams in terms of the flow-evolutionary hypothesis.
NASA Astrophysics Data System (ADS)
Fuchs, Erica R. H.; Bruce, E. J.; Ram, R. J.; Kirchain, Randolph E.
2006-08-01
The monolithic integration of components holds promise to increase network functionality and reduce packaging expense. Integration also drives down yield due to manufacturing complexity and the compounding of failures across devices. Consensus is lacking on the economically preferred extent of integration. Previous studies on the cost feasibility of integration have used high-level estimation methods. This study instead focuses on accurate-to-industry detail, basing a process-based cost model of device manufacture on data collected from 20 firms across the optoelectronics supply chain. The model presented allows for the definition of process organization, including testing, as well as processing conditions, operational characteristics, and level of automation at each step. This study focuses on the cost implications of integration of a 1550-nm DFB laser with an electroabsorptive modulator on an InP platform. Results show the monolithically integrated design to be more cost competitive over discrete component options regardless of production scale. Dominant cost drivers are packaging, testing, and assembly. Leveraging the technical detail underlying model projections, component alignment, bonding, and metal-organic chemical vapor deposition (MOCVD) are identified as processes where technical improvements are most critical to lowering costs. Such results should encourage exploration of the cost advantages of further integration and focus cost-driven technology development.
Tsoukias, Nikolaos M; Goldman, Daniel; Vadapalli, Arjun; Pittman, Roland N; Popel, Aleksander S
2007-10-21
A detailed computational model is developed to simulate oxygen transport from a three-dimensional (3D) microvascular network to the surrounding tissue in the presence of hemoglobin-based oxygen carriers. The model accounts for nonlinear O(2) consumption, myoglobin-facilitated diffusion and nonlinear oxyhemoglobin dissociation in the RBCs and plasma. It also includes a detailed description of intravascular resistance to O(2) transport and is capable of incorporating realistic 3D microvascular network geometries. Simulations in this study were performed using a computer-generated microvascular architecture that mimics morphometric parameters for the hamster cheek pouch retractor muscle. Theoretical results are presented next to corresponding experimental data. Phosphorescence quenching microscopy provided PO(2) measurements at the arteriolar and venular ends of capillaries in the hamster retractor muscle before and after isovolemic hemodilution with three different hemodilutents: a non-oxygen-carrying plasma expander and two hemoglobin solutions with different oxygen affinities. Sample results in a microvascular network show an enhancement of diffusive shunting between arterioles, venules and capillaries and a decrease in hemoglobin's effectiveness for tissue oxygenation when its affinity for O(2) is decreased. Model simulations suggest that microvascular network anatomy can affect the optimal hemoglobin affinity for reducing tissue hypoxia. O(2) transport simulations in realistic representations of microvascular networks should provide a theoretical framework for choosing optimal parameter values in the development of hemoglobin-based blood substitutes.
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
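Stripped of the SCM specifics, the fusion rule is a per-pixel weighted average in which each input's weight comes from an activity measure of its neighborhood. The sketch below substitutes local variance for the SCM firing-time features, so it illustrates only the weighting scheme, not the published method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse(img_a, img_b, win=7):
    """Weighted-average fusion: per-pixel weights from local activity
    (local variance here, as a stand-in for SCM firing-time features)."""
    def activity(im):
        mu = uniform_filter(im, win)
        return uniform_filter(im**2, win) - mu**2  # local variance
    wa, wb = activity(img_a), activity(img_b)
    w = np.clip(wa / (wa + wb + 1e-9), 0.0, 1.0)
    return w * img_a + (1 - w) * img_b

a = np.random.rand(64, 64)  # stand-ins for e.g. CT / MRI slices
b = np.random.rand(64, 64)
print(fuse(a, b).shape)
```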
Directly e-mailing authors of newly published papers encourages community curation
Bunt, Stephanie M.; Grumbling, Gary B.; Field, Helen I.; Marygold, Steven J.; Brown, Nicholas H.; Millburn, Gillian H.
2012-01-01
Much of the data within Model Organism Databases (MODs) comes from manual curation of the primary research literature. Given limited funding and an increasing density of published material, a significant challenge facing all MODs is how to efficiently and effectively prioritize the most relevant research papers for detailed curation. Here, we report recent improvements to the triaging process used by FlyBase. We describe an automated method to directly e-mail corresponding authors of new papers, requesting that they list the genes studied and indicate (‘flag’) the types of data described in the paper using an online tool. Based on the author-assigned flags, papers are then prioritized for detailed curation and channelled to appropriate curator teams for full data extraction. The overall response rate has been 44% and the flagging of data types by authors is sufficiently accurate for effective prioritization of papers. In summary, we have established a sustainable community curation program, with the result that FlyBase curators now spend less time triaging and can devote more effort to the specialized task of detailed data extraction. Database URL: http://flybase.org/ PMID:22554788
Rajagopal, Vijay; Bass, Gregory; Ghosh, Shouryadipta; Hunt, Hilary; Walker, Cameron; Hanssen, Eric; Crampin, Edmund; Soeller, Christian
2018-04-18
With the advent of three-dimensional (3D) imaging technologies such as electron tomography, serial-block-face scanning electron microscopy and confocal microscopy, the scientific community has unprecedented access to large datasets at sub-micrometer resolution that characterize the architectural remodeling that accompanies changes in cardiomyocyte function in health and disease. However, these datasets have been under-utilized for investigating the role of cellular architecture remodeling in cardiomyocyte function. The purpose of this protocol is to outline how to create an accurate finite element model of a cardiomyocyte using high-resolution electron microscopy and confocal microscopy images. A detailed and accurate model of cellular architecture has significant potential to provide new insights into cardiomyocyte biology, beyond what experiments alone can provide. The power of this method lies in its ability to computationally fuse information from two disparate imaging modalities of cardiomyocyte ultrastructure to develop one unified and detailed model of the cardiomyocyte. This protocol outlines steps to integrate electron tomography and confocal microscopy images of adult male Wistar rat (a specific strain of albino rat) cardiomyocytes to develop a half-sarcomere finite element model of the cardiomyocyte. The procedure generates a 3D finite element model that contains an accurate, high-resolution depiction (on the order of ~35 nm) of the distribution of mitochondria, myofibrils and ryanodine receptor clusters that release the calcium necessary for cardiomyocyte contraction from the sarcoplasmic reticular network (SR) into the myofibril and cytosolic compartment. The model generated here as an illustration does not incorporate details of the transverse-tubule architecture or the sarcoplasmic reticular network and is therefore a minimal model of the cardiomyocyte. Nevertheless, the model can already be applied in simulation-based investigations into the role of cell structure in calcium signaling and mitochondrial bioenergetics, which is illustrated and discussed using two case studies that are presented following the detailed protocol.
Adaptive two-regime method: Application to front propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Martin, E-mail: martin.robinson@maths.ox.ac.uk; Erban, Radek, E-mail: erban@maths.ox.ac.uk; Flegg, Mark, E-mail: mark.flegg@monash.edu
2014-03-28
The Adaptive Two-Regime Method (ATRM) is developed for hybrid (multiscale) stochastic simulation of reaction-diffusion problems. It efficiently couples detailed Brownian dynamics simulations with coarser lattice-based models. The ATRM is a generalization of the previously developed Two-Regime Method [Flegg et al., J. R. Soc., Interface 9, 859 (2012)] to multiscale problems which require a dynamic selection of regions where detailed Brownian dynamics simulation is used. Typical applications include front propagation or spatio-temporal oscillations. In this paper, the ATRM is used for an in-depth study of front propagation in a stochastic reaction-diffusion system which has its mean-field model given in terms of the Fisher equation [R. Fisher, Ann. Eugen. 7, 355 (1937)]. It exhibits a travelling reaction front which is sensitive to stochastic fluctuations at the leading edge of the wavefront. Previous studies into stochastic effects on the Fisher wave propagation speed have focused on lattice-based models, but there has been limited progress using off-lattice (Brownian dynamics) models, which suffer due to their high computational cost, particularly at the high molecular numbers that are necessary to approach the Fisher mean-field model. By modelling only the wavefront itself with the off-lattice model, it is shown that the ATRM leads to the same Fisher wave results as purely off-lattice models, but at a fraction of the computational cost. The error analysis of the ATRM is also presented for a morphogen gradient model.
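For reference, the Fisher equation cited above, written for a scaled concentration u(x, t) with diffusion constant D and growth rate k, together with its classical minimal front speed, reads

```latex
\frac{\partial u}{\partial t} = D\,\frac{\partial^{2} u}{\partial x^{2}} + k\,u\,(1 - u),
\qquad
c_{\min} = 2\sqrt{Dk}.
```

The sensitivity of the leading edge to fluctuations mentioned in the abstract stems from the low copy numbers ahead of the front, which is precisely where the ATRM deploys the detailed Brownian dynamics regime.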
Seismic Wave Amplification in Las Vegas: Site Characterization Measurements and Response Models
NASA Astrophysics Data System (ADS)
Louie, J. N.; Anderson, J. G.; Luke, B.; Snelson, C.; Taylor, W.; Rodgers, A.; McCallen, D.; Tkalcic, H.; Wagoner, J.
2004-12-01
As part of a multidisciplinary effort to understand seismic wave amplification in Las Vegas Valley, we conducted geotechnical and seismic refraction field studies, geologic and lithologic interpretation, and geophysical model building. Frequency-dependent amplifications (site response) and peak ground motions strongly correlate with site conditions as characterized by the models. The models include basin depths and velocities, which also correlate against ground motions. Preliminary geologic models were constructed from detailed geologic and fault mapping, logs of over 500 wells penetrating greater than 200 m depth, gravity-inversion results from the USGS, and USDA soil maps. Valley-wide refraction studies we conducted in 2002 and 2003 were inverted for constraints on basin geometry, and deep basin and basement P velocities with some 3-d control to depths of 5 km. Surface-wave studies during 2002-2004 characterized more than 75 sites within the Valley for shear velocity to depths exceeding 100 m, including all the legacy sites where nuclear-blast ground motions were recorded. The SASW and refraction-microtremor surface-surveying techniques proved to provide complementary and mutually consistent Rayleigh dispersion-curve data at a dozen sites. Borehole geotechnical studies at a half-dozen sites confirmed the shear-velocity profiles that we derived from surface-wave studies. We then correlated all the geotechnical data against a detailed stratigraphic model, derived from drilling logs, to create a Valley-wide model for shallow site conditions. This well-log-based model predicts site measurements better than do models based solely on geologic or soil mapping.
Evaluation of TOPLATS on three Mediterranean catchments
NASA Astrophysics Data System (ADS)
Loizu, Javier; Álvarez-Mozos, Jesús; Casalí, Javier; Goñi, Mikel
2016-08-01
Physically based hydrological models are complex tools that provide a complete description of the different processes occurring on a catchment. The TOPMODEL-based Land-Atmosphere Transfer Scheme (TOPLATS) simulates water and energy balances at different time steps, in both lumped and distributed modes. In order to gain insight into the behavior of TOPLATS and its applicability in different conditions, a detailed evaluation needs to be carried out. This study aimed to develop a complete evaluation of TOPLATS including: (1) a detailed review of previous research works using this model; (2) a sensitivity analysis (SA) of the model with two contrasted methods (Morris and Sobol) of different complexity; (3) a 4-step calibration strategy based on a multi-start Powell optimization algorithm; and (4) an analysis of the influence of simulation time step (hourly vs. daily). The model was applied on three catchments of varying size (La Tejeria, Cidacos and Arga), located in Navarre (Northern Spain), and characterized by different levels of Mediterranean climate influence. Both Morris and Sobol methods showed very similar results that identified Brooks-Corey Pore Size distribution Index (B), Bubbling pressure (ψc) and Hydraulic conductivity decay (f) as the three overall most influential parameters in TOPLATS. After calibration and validation, adequate streamflow simulations were obtained in the two wettest catchments, but the driest (Cidacos) gave poor results in validation, due to the large climatic variability between the calibration and validation periods. To overcome this issue, an alternative random and discontinuous method of calibration/validation period selection was implemented, improving model results.
Problems in Catalytic Oxidation of Hydrocarbons and Detailed Simulation of Combustion Processes
NASA Astrophysics Data System (ADS)
Xin, Yuxuan
This dissertation research consists of two parts, with Part I on the kinetics of catalytic oxidation of hydrocarbons and Part II on aspects of the detailed simulation of combustion processes. In Part I, the catalytic oxidation of C1--C3 hydrocarbons, namely methane, ethane, propane and ethylene, was investigated for lean hydrocarbon-air mixtures over an unsupported Pd-based catalyst, from 600 to 800 K and under atmospheric pressure. In Chapter 2, the wire microcalorimetry experimental facility and the simulation configuration are described in detail. In Chapters 3 and 4, the oxidation rate of C1--C3 hydrocarbons is demonstrated to be determined by the dissociative adsorption of hydrocarbons. A detailed surface kinetics model is proposed, with the rate coefficient of hydrocarbon dissociative adsorption derived from the wire microcalorimetry data. In Part II, four fundamental studies were conducted through detailed combustion simulations. In Chapter 5, self-accelerating hydrogen-air flames are studied via two-dimensional direct numerical simulation (DNS). The increase in the global flame velocity is shown to be caused by the increase of flame surface area, and the fractal structure of the flame front is demonstrated by the box-counting method. In Chapter 6, skeletal reaction models for butane combustion are derived by using directed relation graph (DRG), DRG-aided sensitivity analysis (DRGASA), and uncertainty minimization by polynomial chaos expansion (MUM-PCE) methods. The model uncertainty is shown to depend on the completeness of the model. In Chapter 7, a systematic strategy is proposed to reduce the cost of the multicomponent diffusion model by accurately accounting for the species whose diffusivity is important to the global responses of the combustion systems, and approximating those of less importance by the mixture-averaged model. The reduced model is validated on an n-heptane mechanism with 88 species. In Chapter 8, the influence of Soret diffusion on n-heptane/air flames is investigated numerically. In the unstretched flames, Soret diffusion primarily affects the chemical kinetics embedded in the flame structure and the net effect is small; in the stretched flames, its impact arises mainly through the Soret diffusion of n-heptane and of the secondary fuel, H2, which modifies the flame temperature, with substantial effects.
Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio
2015-09-15
Purpose: With recent advancement in hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. Methods: In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV–L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. Results: In all cases, model-based TV–L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV–L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV–L1 inversion yielded sharper images and weaker streak artifact. Conclusions: The results herein show that TV–L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV–L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
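A generic form of the cost function described in the Methods (a data-fidelity term plus an L1 sparsity term and a total-variation term) is

```latex
\hat{x} \;=\; \arg\min_{x}\; \lVert A x - y \rVert_{2}^{2}
\;+\; \lambda_{1}\,\lVert \Psi x \rVert_{1}
\;+\; \lambda_{2}\,\mathrm{TV}(x),
```

where A is the acoustic forward model, y the measured projection data, Ψ a sparsifying transform, and λ1, λ2 regularization weights. The symbols here are generic placeholders; the paper's exact weighting is not reproduced in the abstract.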
The dynamics of acute inflammation
NASA Astrophysics Data System (ADS)
Kumar, Rukmini
The acute inflammatory response is the non-specific and immediate reaction of the body to pathogenic organisms, tissue trauma and unregulated cell growth. An imbalance in this response can lead to a condition commonly known as "shock" or "sepsis". This thesis is an attempt to elucidate the dynamics of the acute inflammatory response to infection and contribute to its systemic understanding through mathematical modeling and analysis. The models of immunity discussed use Ordinary Differential Equations (ODEs) to model the time variation of the concentrations of the various interacting species. Chapter 2 discusses three such models of increasing complexity. Sections 2.1 and 2.2 discuss smaller models that capture the core features of inflammation and offer general predictions concerning the design of the system. Phase-space and bifurcation analyses have been used to examine the behavior at various parameter regimes. Section 2.3 discusses a global physiological model that includes several equations modeling the concentration (or numbers) of cells, cytokines and other mediators. The conclusions drawn from the reduced and detailed models about the qualitative effects of the parameters are very similar, and these similarities have also been discussed. In Chapter 3, specific applications of the biologically detailed model are discussed in greater detail. These include a simulation of anthrax infection and an in silico simulation of a clinical trial. Such simulations are very useful to biologists and could prove to be invaluable tools in drug design. Finally, Chapter 4 discusses the general problem of extinction of populations modeled as continuous variables in ODEs. The average time to extinction and the extinction threshold are estimated based on analyzing the equivalent stochastic processes.
Development of a recursion RNG-based turbulence model
NASA Technical Reports Server (NTRS)
Zhou, YE; Vahala, George; Thangam, S.
1993-01-01
Reynolds stress closure models based on recursion renormalization group theory are developed for the prediction of turbulent separated flows. The proposed model uses a finite wavenumber truncation scheme to account for the spectral distribution of energy. In particular, the model incorporates effects of both local and nonlocal interactions. The nonlocal interactions are shown to yield a contribution identical to that from the epsilon-renormalization group (RNG), while the local interactions introduce higher order dispersive effects. A formal analysis of the model is presented and its ability to accurately predict separated flows is analyzed from a combined theoretical and computational standpoint. Turbulent flow past a backward facing step is chosen as a test case, and the results obtained from detailed computations demonstrate that the proposed recursion-RNG model with finite cut-off wavenumber can yield very good predictions for the backstep problem.
Gu, Yingxin; Wylie, Bruce K.
2015-01-01
The satellite-derived growing season time-integrated Normalized Difference Vegetation Index (GSN) has been used as a proxy for vegetation biomass productivity. The 250-m GSN data estimated from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have been used for terrestrial ecosystem modeling and monitoring. High temporal resolution combined with a wide range of wavelengths makes the MODIS land surface products robust and reliable. The long-term 30-m Landsat data provide spatially detailed information for characterizing human-scale processes and have been used for land cover and land change studies. The main goal of this study is to combine 250-m MODIS GSN and 30-m Landsat observations to generate a quality-improved high spatial resolution (30-m) GSN database. A rule-based piecewise regression GSN model based on MODIS and Landsat data was developed. Results show a strong correlation between predicted GSN and actual GSN (r = 0.97, average error = 0.026). The most important Landsat variables in the GSN model are the Normalized Difference Vegetation Indices (NDVIs) in May and August. The derived MODIS-Landsat-based 30-m GSN map provides biophysical information for moderate-scale ecological features. This multiple-sensor study retains the detailed seasonal dynamic information captured by MODIS and leverages the high-resolution information from Landsat, which will be useful for regional ecosystem studies.
NASA Astrophysics Data System (ADS)
Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng
2007-11-01
As one of the most important geo-spatial objects and military establishments, an airport is always a key target in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is very important and urgent for updating civil aviation and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is addressed. Corresponding key technologies, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and import of typical CAD models, are discussed in detail. Finally, based on these technologies, we developed a prototype system, and the results show that our method achieves good results.
NASA Astrophysics Data System (ADS)
Bergner, F.; Pareige, C.; Hernández-Mayoral, M.; Malerba, L.; Heintze, C.
2014-05-01
An attempt is made to quantify the contributions of different types of defect-solute clusters to the total irradiation-induced yield stress increase in neutron-irradiated (300 °C, 0.6 dpa), industrial-purity Fe-Cr model alloys (target Cr contents of 2.5, 5, 9 and 12 at.% Cr). Former work based on the application of transmission electron microscopy, atom probe tomography, and small-angle neutron scattering revealed the formation of dislocation loops, NiSiPCr-enriched clusters and α′-phase particles, which act as obstacles to dislocation glide. The values of the dimensionless obstacle strength are estimated in the framework of a three-feature dispersed-barrier hardening model. Special attention is paid to the effect of measuring errors, experimental details and model details on the estimates. The three families of obstacles and the hardening model are well capable of reproducing the observed yield stress increase as a function of Cr content, suggesting that the nanostructural features identified experimentally are the main, if not the only, causes of irradiation hardening in these model alloys.
Ditlev, Jonathon A.; Mayer, Bruce J.; Loew, Leslie M.
2013-01-01
Mathematical modeling has established its value for investigating the interplay of biochemical and mechanical mechanisms underlying actin-based motility. Because of the complex nature of actin dynamics and its regulation, many of these models are phenomenological or conceptual, providing a general understanding of the physics at play. But the wealth of carefully measured kinetic data on the interactions of many of the players in actin biochemistry cries out for the creation of more detailed and accurate models that could permit investigators to dissect interdependent roles of individual molecular components. Moreover, no human mind can assimilate all of the mechanisms underlying complex protein networks; so an additional benefit of a detailed kinetic model is that the numerous binding proteins, signaling mechanisms, and biochemical reactions can be computationally organized in a fully explicit, accessible, visualizable, and reusable structure. In this review, we will focus on how comprehensive and adaptable modeling allows investigators to explain experimental observations and develop testable hypotheses on the intracellular dynamics of the actin cytoskeleton. PMID:23442903
Smart wing wind tunnel model design
NASA Astrophysics Data System (ADS)
Martin, Christopher A.; Jasmin, Larry; Flanagan, John S.; Appa, Kari; Kudva, Jayanth N.
1997-05-01
To verify the predicted benefits of the smart wing concept, two 16% scale wind tunnel models, one conventional and the other incorporating smart wing design features, were designed, fabricated and tested. Meticulous design of the two models was essential to: (1) ensure the required factor of safety of four for operation in the NASA Langley TDT wind tunnel, (2) efficiently integrate the smart actuation systems, (3) quantify the performance improvements, and (4) facilitate eventual scale-up to operational aircraft. Significant challenges were encountered in designing the attachment of the shape memory alloy control surfaces to the wing box, integration of the SMA torque tube in the wing structure, and development of control mechanisms to protect the model and the tunnel in the event of failure of the smart systems. In this paper, the detailed design of the two models is presented. First, dynamic scaling of the models based on the geometry and structural details of the full-scale aircraft is presented. Next, results of the stress, divergence and flutter analyses are summarized. Finally, some of the challenges of integrating the smart actuators with the model are highlighted.
Users guide: The LaRC human-operator-simulator-based pilot model
NASA Technical Reports Server (NTRS)
Bogart, E. H.; Waller, M. C.
1985-01-01
A Human Operator Simulator (HOS) based pilot model has been developed for use at NASA LaRC for analysis of flight management problems. The model is currently configured to simulate piloted flight of an advanced transport airplane. The generic HOS operator and machine model was originally developed under U.S. Navy sponsorship by Analytics, Inc. and through a contract with LaRC was configured to represent a pilot flying a transport airplane. A version of the HOS program runs in batch mode on LaRC's (60-bit-word) central computer system. This document provides a guide for using the program and describes in some detail the assortment of files used during its operation.
An agent-based computational model of the spread of tuberculosis
NASA Astrophysics Data System (ADS)
de Espíndola, Aquino L.; Bauch, Chris T.; Troca Cabella, Brenno C.; Souto Martinez, Alexandre
2011-05-01
In this work we propose an alternative model of the spread of tuberculosis (TB) and the emergence of drug resistance due to treatment with antibiotics. We implement the simulations using an agent-based computational approach in which the spatial structure is taken into account. The spread of tuberculosis occurs according to probabilities defined by the interactions among individuals. The model was validated by reproducing results already known from the literature, in which different treatment regimes yield the emergence of drug resistance. The different patterns of TB spread can be visualized at any time of the system evolution. The implementation details as well as some results of this alternative approach are discussed.
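A minimal sketch of the kind of spatial, probabilistic update such an agent-based model rests on is shown below; the lattice encoding, neighborhood, and infection probability are hypothetical placeholders, not the paper's calibrated rules, and treatment and resistance dynamics are omitted.

```python
import numpy as np

BETA = 0.3  # hypothetical per-neighbor infection probability

def step(grid, rng):
    """One synchronous update: susceptible cells (0) become infected (1)
    with probability 1-(1-BETA)^k, k = number of infected 4-neighbors."""
    infected = (grid == 1).astype(int)
    k = (np.roll(infected, 1, 0) + np.roll(infected, -1, 0) +
         np.roll(infected, 1, 1) + np.roll(infected, -1, 1))
    p_inf = 1.0 - (1.0 - BETA) ** k
    new_inf = (grid == 0) & (rng.random(grid.shape) < p_inf)
    out = grid.copy()
    out[new_inf] = 1
    return out

rng = np.random.default_rng(1)
g = np.zeros((50, 50), dtype=int)
g[25, 25] = 1            # seed a single infected agent
for _ in range(20):
    g = step(g, rng)
print("infected agents after 20 steps:", int((g == 1).sum()))
```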
Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime
2017-01-01
Accurate surgical planning and prediction of craniomaxillofacial surgery outcome requires simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art of model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for patients depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analysis. In this study, we propose a new framework for patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving the accuracy of anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parameterization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed using volumetric parameterization. The user can control the resolution of the hexahedral mesh to best reflect clinicians' needs. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE meshes showed high surface matching accuracy, element quality, and internal structure matching accuracy. They can be directly and effectively used for clinical simulation of facial soft tissue change. PMID:29027022
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmons, N. A.; Myers, S. C.; Johannesson, G.
We develop a global-scale P wave velocity model (LLNL-G3Dv3) designed to accurately predict seismic travel times at regional and teleseismic distances simultaneously. The model provides a new image of Earth's interior, but the underlying practical purpose of the model is to provide enhanced seismic event location capabilities. The LLNL-G3Dv3 model is based on ∼2.8 million P and Pn arrivals that are re-processed using our global multiple-event locator called Bayesloc. We construct LLNL-G3Dv3 within a spherical tessellation based framework, allowing for explicit representation of undulating and discontinuous layers including the crust and transition zone layers. Using a multiscale inversion technique, regional trends as well as fine details are captured where the data allow. LLNL-G3Dv3 exhibits large-scale structures including cratons and superplumes as well as numerous complex details in the upper mantle including within the transition zone. Particularly, the model reveals new details of a vast network of subducted slabs trapped within the transition zone beneath much of Eurasia, including beneath the Tibetan Plateau. We demonstrate the impact of Bayesloc multiple-event location on the resulting tomographic images through comparison with images produced without the benefit of multiple-event constraints (single-event locations). We find that the multiple-event locations allow for better reconciliation of the large set of direct P phases recorded at 0–97° distance and yield a smoother and more continuous image relative to the single-event locations. Travel times predicted from a 3-D model are also found to be strongly influenced by the initial locations of the input data, even when an iterative inversion/relocation technique is employed.
Effects of linking a soil-water-balance model with a groundwater-flow model
Stanton, Jennifer S.; Ryter, Derek W.; Peterson, Steven M.
2013-01-01
A previously published regional groundwater-flow model in north-central Nebraska was sequentially linked with the recently developed soil-water-balance (SWB) model to analyze effects to groundwater-flow model parameters and calibration results. The linked models provided a more detailed spatial and temporal distribution of simulated recharge based on hydrologic processes, improvement of simulated groundwater-level changes and base flows at specific sites in agricultural areas, and a physically based assessment of the relative magnitude of recharge for grassland, nonirrigated cropland, and irrigated cropland areas. Root-mean-squared (RMS) differences between the simulated and estimated or measured target values for the previously published model and linked models were relatively similar and did not improve for all types of calibration targets. However, without any adjustment to the SWB-generated recharge, the RMS difference between simulated and estimated base-flow target values for the groundwater-flow model was slightly smaller than for the previously published model, possibly indicating that the volume of recharge simulated by the SWB code was closer to actual hydrogeologic conditions than the previously published model provided. Groundwater-level and base-flow hydrographs showed that temporal patterns of simulated groundwater levels and base flows were more accurate for the linked models than for the previously published model at several sites, particularly in agricultural areas.
NASA Technical Reports Server (NTRS)
Vincent, S.; Marsh, J. G.
1973-01-01
A global detailed gravimetric geoid has been computed by combining the Goddard Space Flight Center GEM-4 gravity model derived from satellite and surface gravity data and surface 1 deg-by-1 deg mean free air gravity anomaly data. The accuracy of the geoid is ±2 meters on continents, 5 to 7 meters in areas where surface gravity data are sparse, and 10 to 15 meters in areas where no surface gravity data are available. Comparisons have been made with the astrogeodetic data provided by Rice (United States), Bomford (Europe), and Mather (Australia). Comparisons have also been carried out with geoid heights derived from satellite solutions for geocentric station coordinates in North America, the Caribbean, Europe, and Australia.
Spreading dynamics of an e-commerce preferential information model on scale-free networks
NASA Astrophysics Data System (ADS)
Wan, Chen; Li, Tao; Guan, Zhi-Hong; Wang, Yuanmei; Liu, Xiongding
2017-02-01
In order to study the influence of the preferential degree and the heterogeneity of underlying networks on the spread of preferential e-commerce information, we propose a novel susceptible-infected-beneficial model based on scale-free networks. The spreading dynamics of the preferential information are analyzed in detail using the mean-field theory. We determine the basic reproductive number and equilibria. The theoretical analysis indicates that the basic reproductive number depends mainly on the preferential degree and the topology of the underlying networks. We prove the global stability of the information-elimination equilibrium. The permanence of preferential information and the global attractivity of the information-prevailing equilibrium are also studied in detail. Some numerical simulations are presented to verify the theoretical results.
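For orientation, a generic degree-based mean-field form of such network spreading models reads as follows; these are the standard heterogeneous mean-field ingredients (infection rate λ, transition rate δ, degree distribution P(k)), not the paper's exact susceptible-infected-beneficial equations:

```latex
\frac{dI_{k}}{dt} \;=\; \lambda\,k\,S_{k}\,\Theta(t)\;-\;\delta\,I_{k},
\qquad
\Theta(t) \;=\; \frac{1}{\langle k\rangle}\sum_{k'} k'\,P(k')\,I_{k'}(t).
```

On scale-free networks the degree heterogeneity enters the basic reproductive number through the ratio ⟨k²⟩/⟨k⟩, which is why the threshold depends so strongly on the topology of the underlying network.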
Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread
Miller, Joel C.; Volz, Erik M.
2012-01-01
We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242
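The standard mass action SIR model that anchors the bottom of this hierarchy takes only a few lines to integrate numerically; the sketch below uses illustrative parameter values, not values from the paper:

```python
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1  # hypothetical transmission and recovery rates

def sir(t, y):
    # Mass action SIR: s' = -beta*s*i, i' = beta*s*i - gamma*i, r' = gamma*i
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir, (0.0, 160.0), [0.999, 0.001, 0.0], max_step=0.5)
print("peak infected fraction:", sol.y[1].max())
```

In the paper's terms, this is (roughly speaking) the model one may substitute when contact durations are short and the population sufficiently homogeneous; the hierarchy makes those conditions precise.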
Simple Fall Criteria for MEMS Sensors: Data Analysis and Sensor Concept
Ibrahim, Alwathiqbellah; Younis, Mohammad I.
2014-01-01
This paper presents a new and simple fall detection concept based on detailed experimental data of human falls and activities of daily living (ADLs). Establishing appropriate fall algorithms compatible with MEMS sensors requires detailed data on falls and ADLs that indicate clearly the variations of the kinematics at the possible sensor node locations on the human body, such as the hip, head, and chest. Currently, there is a lack of data on the exact direction and magnitude of each acceleration component associated with these node locations. This is crucial for MEMS structures, which have inertia elements very close to the substrate and are capacitively biased, and hence are very sensitive to the direction of motion, whether toward or away from the substrate. This work presents detailed data of the acceleration components at various locations on the human body during various kinds of falls and ADLs. A two-degree-of-freedom model is used to help interpret the experimental data. An algorithm for fall detection based on MEMS switches is then established. A new sensing concept based on the algorithm is proposed. The concept is based on employing several inertia sensors, which are triggered simultaneously, as electrical switches connected in series, upon receiving a true fall signal. In the case of everyday life activities, some or no switches will be triggered, resulting in an open-circuit configuration and thereby preventing false positives. A lumped-parameter model of the device is presented, and preliminary simulation results illustrating the new device concept are given. PMID:25006997
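The series-switch voting logic lends itself to a compact sketch: a fall is declared only when every switch trips within the same window. The sensor locations and threshold values below are hypothetical placeholders, not the paper's calibrated values; real devices also trip on the direction of the acceleration, not just its magnitude.

```python
# Hypothetical trip thresholds (in g) per sensor node location.
THRESHOLDS_G = {"hip": 3.0, "chest": 2.5, "head": 2.0}

def series_circuit_closed(peak_accel_g: dict) -> bool:
    # Series connection closes (fall detected) only if all switches trip.
    return all(peak_accel_g[loc] >= thr for loc, thr in THRESHOLDS_G.items())

print(series_circuit_closed({"hip": 3.4, "chest": 2.9, "head": 2.2}))  # True: fall
print(series_circuit_closed({"hip": 3.4, "chest": 1.1, "head": 2.2}))  # False: ADL
```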
Development of a model of the coronary arterial tree for the 4D XCAT phantom
NASA Astrophysics Data System (ADS)
Fung, George S. K.; Segars, W. Paul; Gullberg, Grant T.; Tsui, Benjamin M. W.
2011-09-01
A detailed three-dimensional (3D) model of the coronary artery tree with cardiac motion has great potential for applications in a wide variety of medical imaging research areas. In this work, we first developed a computer-generated 3D model of the coronary arterial tree for the heart in the extended cardiac-torso (XCAT) phantom, thereby creating a realistic computer model of the human anatomy. The coronary arterial tree model was based on two datasets: (1) a gated cardiac dual-source computed tomography (CT) angiographic dataset obtained from a normal human subject and (2) statistical morphometric data of porcine hearts. The initial proximal segments of the vasculature and the anatomical details of the boundaries of the ventricles were defined by segmenting the CT data. An iterative rule-based generation method was developed and applied to extend the coronary arterial tree beyond the initial proximal segments. The algorithm was governed by three factors: (1) statistical morphometric measurements of the connectivity, lengths and diameters of the arterial segments; (2) avoidance forces from other vessel segments and the boundaries of the myocardium, and (3) optimality principles which minimize the drag force at the bifurcations of the generated tree. Using this algorithm, the 3D computational model of the largest six orders of the coronary arterial tree was generated, which spread across the myocardium of the left and right ventricles. The 3D coronary arterial tree model was then extended to 4D to simulate different cardiac phases by deforming the original 3D model according to the motion vector map of the 4D cardiac model of the XCAT phantom at the corresponding phases. As a result, a detailed and realistic 4D model of the coronary arterial tree was developed for the XCAT phantom by imposing constraints of anatomical and physiological characteristics of the coronary vasculature. This new 4D coronary artery tree model provides a unique simulation tool that can be used in the development and evaluation of instrumentation and methods for imaging normal and pathological hearts with myocardial perfusion defects.
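The "optimality principles which minimize the drag force at the bifurcations" are commonly formalized through Murray's law relating the parent radius r0 to the daughter radii r1, r2 at a bifurcation. The cubic form below is the standard statement; the paper's exact cost function is not given in the abstract:

```latex
r_{0}^{3} \;=\; r_{1}^{3} \;+\; r_{2}^{3}.
```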
Description of a Website Resource for Turbulence Modeling Verification and Validation
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Smith, Brian R.; Huang, George P.
2010-01-01
The activities of the Turbulence Model Benchmarking Working Group - which is a subcommittee of the American Institute of Aeronautics and Astronautics (AIAA) Fluid Dynamics Technical Committee - are described. The group's main purpose is to establish a web-based repository for Reynolds-averaged Navier-Stokes turbulence model documentation, including verification and validation cases. This turbulence modeling resource has been established based on feedback from a survey on what is needed to achieve consistency and repeatability in turbulence model implementation and usage, and to document and disseminate information on new turbulence models or improvements to existing models. The various components of the website are described in detail: description of turbulence models, turbulence model readiness rating system, verification cases, validation cases, validation databases, and turbulence manufactured solutions. An outline of future plans of the working group is also provided.
High resolution modelling of extreme precipitation events in urban areas
NASA Astrophysics Data System (ADS)
Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave
2015-04-01
Present-day society needs to adjust to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought, but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excessive storm water, causing urban flooding and high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events, new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight into the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high-resolution information of the sewer system and of the terrain into account [1, 2]. The combination of the high-resolution information and the subgrid-based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using extremely high-resolution (0.5 m x 0.5 m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with significant soil consolidation and the low-lying areas are prone to urban flooding. The simulation results are compared with measurements in the sewer network. References: [1] Stelling, G.S., 2012. Quadtree flood simulations with subgrid digital elevation models. Water Management 165 (WM1): 1329-1354. [2] Casulli, V. and Stelling, G.S., 2013. A semi-implicit numerical model for urban drainage systems. International Journal for Numerical Methods in Fluids 73: 600-614. DOI: 10.1002/fld.3817
The Simulation of Real-time Scalable Coherent Interface
NASA Technical Reports Server (NTRS)
Li, Qiang; Grant, Terry; Grover, Radhika S.
1997-01-01
Scalable Coherent Interface (SCI, IEEE/ANSI Std 1596-1992) (SCI1, SCI2) is a high performance interconnect for shared memory multiprocessor systems. In this project we investigate an SCI Real-Time Protocol (RTSCI1) using Directed Flow Control Symbols. We studied the issues of efficient generation of control symbols, and created a simulation model of the protocol on a ring-based SCI system. This report presents the results of the study. The project has been implemented using SES/Workbench. The details that follow encompass aspects of both SCI and Flow Control Protocols, as well as the effect of realistic client/server processing delay. The report is organized as follows. Section 2 provides a description of the simulation model. Section 3 describes the protocol implementation details. The next three sections of the report elaborate on the workload, results and conclusions. Appended to the report is a description of the tool, SES/Workbench, used in our simulation, and internal details of our implementation of the protocol.
X-ray phase-contrast tomography for high-spatial-resolution zebrafish muscle imaging
NASA Astrophysics Data System (ADS)
Vågberg, William; Larsson, Daniel H.; Li, Mei; Arner, Anders; Hertz, Hans M.
2015-11-01
Imaging of muscular structure with cellular or subcellular detail in whole-body animal models is of key importance for understanding muscular disease and assessing interventions. Classical histological methods for high-resolution imaging require excision, fixation and staining. Here we show that the three-dimensional muscular structure of unstained whole zebrafish can be imaged with sub-5 μm detail with X-ray phase-contrast tomography. Our method relies on a laboratory propagation-based phase-contrast system tailored for detection of low-contrast 4-6 μm subcellular myofibrils. The method is demonstrated on 20 days post fertilization zebrafish larvae, and comparative histology confirms that we resolve individual myofibrils in the whole-body animal. X-ray imaging of healthy zebrafish shows the expected structured muscle pattern, while a specimen with a dystrophin deficiency (sapje) displays an unstructured pattern, typical of Duchenne muscular dystrophy. The method opens the way to whole-body imaging with sub-cellular detail of other types of soft tissue as well, and in different animal models.
NASA Astrophysics Data System (ADS)
Rosner, Helge
2011-03-01
A microscopic understanding of the structure-properties relation in crystalline materials is a main goal of modern solid state chemistry and physics. Due to their peculiar magnetism, low-dimensional spin-1/2 systems are often highly sensitive to structural details. Seemingly unimportant structural details can be crucial for the magnetic ground state of a compound, especially in the case of competing interactions, frustration and near-degeneracy. Here we show, for selected complex Cu2+ systems, that a first-principles-based approach can reliably provide the correct magnetic model, especially in cases where the interpretation of experimental data meets serious difficulties or fails. We demonstrate that the magnetism of low-dimensional insulators crucially depends on the magnetically active orbitals, which are determined by details of the ligand field of the magnetic cation. Our theoretical results are in very good agreement with thermodynamic and spectroscopic data and provide deep microscopic insight into topical low-dimensional magnets.
Modeling science: Supporting a more authentic epistemology of science
NASA Astrophysics Data System (ADS)
Svoboda, Julia Marie
In this dissertation I argue that model-based inquiry has the potential to create experiences for students to consider how scientific knowledge is generated and evaluated - that is, for students to consider the epistemology of science. I also argue that such epistemically rich experiences can lead to shifts in students' conceptions of the nature of scientific knowledge. The context of this work is a yearlong biological modeling traineeship for undergraduate mathematics and biology majors called Collaborative Learning at the Interface of Mathematics and Biology (CLIMB). I used an ethnographically based approach to collect detailed field notes, video, documents and interviews with faculty and students in CLIMB. The resulting dataset provides a rich description of the CLIMB program as well as students' experiences in this program. Analysis of the CLIMB curriculum revealed that the degree to which students were treated as independent scholars and challenged with authentic problems influenced the productivity of their activity. A more detailed analysis of the nature of modeling tasks revealed that only when models were at the center of their activity did students have opportunities to consider epistemic themes relating to how knowledge is created and critiqued in science. Finally, a case study that followed a single student described how epistemically rich experiences with modeling have the potential to shift the ways in which students conceive of scientific knowledge and practice. It also provided evidence supporting the theory that students have complex, multidimensional epistemic ecologies rather than static views about science. As a whole, this dissertation provides a rich description of how model-based inquiry can support learning about the epistemology of science and suggests that scientific modeling should have a more central role in science education.
Mesoscale energy deposition footprint model for kiloelectronvolt cluster bombardment of solids.
Russo, Michael F; Garrison, Barbara J
2006-10-15
Molecular dynamics simulations have been performed to model 5-keV C60 and Au3 projectile bombardment of an amorphous water substrate. The goal is to obtain detailed insights into the dynamics of motion in order to develop a straightforward and less computationally demanding model of the process of ejection. The molecular dynamics results provide the basis for the mesoscale energy deposition footprint model. This model provides a method for predicting relative yields based on information from less than 1 ps of simulation time.
An optimization model for metabolic pathways.
Planes, F J; Beasley, J E
2009-10-15
Different mathematical methods have emerged in the post-genomic era to determine metabolic pathways. These methods can be divided into stoichiometric methods and path finding methods. In this paper we detail a novel optimization model, based upon integer linear programming, to determine metabolic pathways. Our model links reaction stoichiometry with path finding in a single approach. We test the ability of our model to determine 40 annotated Escherichia coli metabolic pathways. We show that our model is able to determine 36 of these 40 pathways in a computationally effective manner.
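As a toy illustration of the integer-linear-programming flavor of such a model, the sketch below selects a minimal set of reactions connecting a source to a target metabolite in a hypothetical four-metabolite network, using the PuLP library. It is a path-finding skeleton only; the paper's actual model couples full reaction stoichiometry with path finding in ways not reproduced here.

```python
from pulp import (LpProblem, LpVariable, LpMinimize, lpSum,
                  LpBinary, PULP_CBC_CMD)

# Hypothetical network: reaction -> (substrate, product)
reactions = {
    "r1": ("glc", "g6p"),
    "r2": ("g6p", "f6p"),
    "r3": ("f6p", "pyr"),
    "r4": ("glc", "pyr"),  # hypothetical shortcut reaction
}
source, target = "glc", "pyr"
mets = {m for pair in reactions.values() for m in pair}

prob = LpProblem("pathway", LpMinimize)
use = {r: LpVariable(r, cat=LpBinary) for r in reactions}
prob += lpSum(use.values())  # minimize the number of reactions used

for m in mets:
    inflow = lpSum(use[r] for r, (s, p) in reactions.items() if p == m)
    outflow = lpSum(use[r] for r, (s, p) in reactions.items() if s == m)
    if m == source:
        prob += outflow - inflow == 1   # one unit of flux leaves the source
    elif m == target:
        prob += inflow - outflow == 1   # one unit of flux reaches the target
    else:
        prob += inflow == outflow       # flux balance at intermediates

prob.solve(PULP_CBC_CMD(msg=False))
print([r for r in reactions if use[r].value() == 1])  # -> ['r4']
```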
RF model of the distribution system as a communication channel, phase 2. Volume 2: Task reports
NASA Technical Reports Server (NTRS)
Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.
1982-01-01
Based on the established feasibility of predicting, via a model, the propagation of Power Line Frequency on radial type distribution feeders, verification studies comparing model predictions against measurements were undertaken using more complicated feeder circuits and situations. Detailed accounts of the major tasks are presented. These include: (1) verification of model; (2) extension, implementation, and verification of perturbation theory; (3) parameter sensitivity; (4) transformer modeling; and (5) compensation of power distribution systems for enhancement of power line carrier communication reliability.
2016-02-11
INVESTIGATION OF BACK-OFF BASED INTERPOLATION BETWEEN RECURRENT NEURAL NETWORK AND N-GRAM LANGUAGE MODELS. X. Chen, X. Liu, M. J. F. Gales, and P. C... As the generalization patterns of RNNLMs and n-gram LMs are inherently different, RNNLMs are usually combined with n-gram LMs via a fixed... RNNLMs and n-gram LMs as the n-gram level changes. In order to fully exploit the detailed n-gram level complementary attributes between the two LMs, a
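Although the record above is truncated, the combination it refers to is, at its core, the linear interpolation of the two language models, with the back-off based variant allowing the weight to depend on the history h (for example, on the n-gram order to which h was backed off) rather than being a single fixed constant. Written generically, and not as the paper's exact parameterization:

```latex
P(w \mid h) \;=\; \lambda(h)\,P_{\mathrm{RNN}}(w \mid h)
\;+\; \bigl(1 - \lambda(h)\bigr)\,P_{\mathrm{NG}}(w \mid h).
```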
Multiphysics Thrust Chamber Modeling for Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Cheng, Gary; Chen, Yen-Sen
2006-01-01
The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation. A two-pronged approach is employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of heat transfer on thrust performance. Preliminary results on both aspects are presented.
NASA Technical Reports Server (NTRS)
Richardson, David
2018-01-01
Model-Based Systems Engineering (MBSE) is the formalized application of modeling to support system requirements, design, analysis, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases. This presentation will discuss the value proposition that MBSE has for Systems Engineering, and the associated culture change needed to adopt it.
NASA Astrophysics Data System (ADS)
Brown, S. S.; Edwards, P. M.; Patel, S.; Dube, W. P.; Williams, E. J.; Roberts, J. M.; McLaren, R.; Kercher, J. P.; Gilman, J. B.; Lerner, B. M.; Warneke, C.; Geiger, F.; De Gouw, J. A.; Tsai, C.; Stutz, J.; Young, C. J.; Washenfelder, R. A.; Parrish, D. D.
2012-12-01
Oil and gas development in mountain basins of the Western United States has led to frequent exceedances of National Ambient Air Quality Standards for ozone during the winter season. The Uintah Basin Winter Ozone Study took place during February and March 2012 in northeast Utah with the goal of providing detailed chemical and meteorological data to understand this phenomenon. Although the snow and cold-pool stagnation conditions that lead to winter ozone buildup were not encountered during the study period, the detailed measurements did provide a unique data set for understanding the chemistry of key air pollutants in a desert environment during winter. This presentation will examine both the photochemistry and the nighttime chemistry of nitrogen oxides, ozone and VOCs, with the goal of understanding the observed photochemistry and its relationship to nighttime chemistry through a set of box models. The photochemical box model is based on the Master Chemical Mechanism (MCM), a detailed model for VOC degradation and ozone production. The presentation will examine the sensitivity of ozone photochemistry to different parameters, including pollutant concentrations likely to be characteristic of cold-pool conditions, and the strength of radical sources derived from heterogeneous chemical reactions. The goal of the analysis will be to identify the factors most likely to be responsible for the higher ozone events that have been observed during colder years with less detailed chemical measurements.
Modeling hard clinical end-point data in economic analyses.
Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V
2013-11-01
The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states (<7). Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate to accurately reflect the trial data.
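A small building block behind the "constant event rate" approaches surveyed above is the conversion of an annual event rate into a per-cycle probability under a constant-hazard assumption; the sketch below shows this standard conversion with illustrative numbers (the reviewed models themselves are not reproduced here):

```python
import math

def rate_to_probability(annual_rate: float, cycle_years: float) -> float:
    # Standard constant-hazard conversion: p = 1 - exp(-r * t).
    return 1.0 - math.exp(-annual_rate * cycle_years)

print(rate_to_probability(0.02, 1.0))   # ~0.0198 per one-year cycle
print(rate_to_probability(0.02, 0.25))  # ~0.0050 per three-month cycle
```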
Simple model of inhibition of chain-branching combustion processes
NASA Astrophysics Data System (ADS)
Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.
2017-11-01
A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical of hydrocarbon combustion. The model combines a generalised chain-branching description of the combustion process with a one-stage reaction representing the thermal mode of flame propagation, extended by inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and shifts the reaction from the chain-branching mode to a thermal mode of flame propagation. As the inhibitor loading increases, the chain-branching reaction transitions to a straight-chain (non-branching) reaction. The inhibition part of the model comprises a block of three reactions describing the influence of the inhibitor, and heat losses are incorporated via Newtonian cooling. Flame extinction results from the decreased heat release of the inhibited reaction processes and the suppression of the radical overshoot, with a further decrease of the reaction rate due to falling temperature and mixture dilution. A comparison is presented between modelling results for laminar premixed methane/air flames inhibited by potassium bicarbonate (gas-phase, detailed kinetic model) and results obtained using the suggested simple model. The calculations with the detailed kinetic model demonstrate the following modes of the combustion process: (1) flame propagation governed by the chain-branching reaction (with radical overshoot; inhibitor addition decreases the overshoot toward the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to the thermal mode of flame propagation (non-branching chain reaction). The suggested simple kinetic model qualitatively reproduces these modes of flame propagation under inhibitor addition as observed with detailed kinetic models.
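The competition between branching, termination, and inhibition steps can be caricatured in a few lines. The isothermal sketch below, with assumed rate constants, only shows how an inhibitor that consumes radicals suppresses radical growth; it omits the heat release, Newtonian heat loss, and flame propagation that the actual model couples in.

```python
# Toy isothermal radical balance: branching vs. termination vs. inhibition.
from scipy.integrate import solve_ivp

k_branch, k_term, k_inhib = 50.0, 10.0, 2.0e3   # 1/s, 1/s, L/(mol*s), assumed

def rhs(t, y):
    radicals, inhibitor = y
    d_rad = (k_branch - k_term) * radicals - k_inhib * inhibitor * radicals
    d_inh = -k_inhib * inhibitor * radicals      # inhibitor is consumed
    return [d_rad, d_inh]

for inh0 in (0.0, 0.01, 0.05):                   # inhibitor loading, mol/L
    sol = solve_ivp(rhs, (0.0, 0.25), [1e-6, inh0])
    # Enough inhibitor turns net radical growth into decay.
    print(inh0, sol.y[0, -1])
```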
Study on Stress Development in the Phase Transition Layer of Thermal Barrier Coatings
Chai, Yijun; Lin, Chen; Wang, Xian; Li, Yueming
2016-01-01
Stress development is one of the significant factors leading to the failure of thermal barrier coating (TBC) systems. In this work, stress development in the two-phase mixed zone termed the phase transition layer (PTL), which grows between the thermally grown oxide (TGO) and the bond coat (BC), is investigated using two different homogenization models. A constitutive equation of the PTL based on the Reuss model is proposed to study the stresses in the PTL. The stresses computed with the proposed constitutive equation are compared in detail with those obtained with the Voigt-model-based equation. The stresses based on the Voigt model are slightly higher than those based on the Reuss model. Finally, a further study explores the influence of the phase transition proportions on the stress difference caused by the homogenization models. Results show that the stress difference becomes more evident as the thickness ratio of the PTL in the TGO increases. PMID:28773894
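In its simplest one-dimensional form, the contrast between the two homogenization schemes reduces to the mixing rules below. The moduli and phase fractions are assumed values for illustration, not the paper's TBC material data.

```python
# Voigt (uniform strain) vs. Reuss (uniform stress) estimates of the
# effective elastic modulus of a two-phase layer such as the PTL.
def voigt(E1, E2, f1):
    return f1 * E1 + (1.0 - f1) * E2

def reuss(E1, E2, f1):
    return 1.0 / (f1 / E1 + (1.0 - f1) / E2)

E_tgo, E_bc = 380.0, 200.0   # GPa; assumed alumina-like TGO and bond coat
for f in (0.25, 0.5, 0.75):  # oxide phase fraction in the mixed zone
    print(f, voigt(E_tgo, E_bc, f), reuss(E_tgo, E_bc, f))
# Voigt always gives the stiffer estimate, consistent with the higher
# Voigt-based stresses reported in the paper.
```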
Model reduction for agent-based social simulation: coarse-graining a civil violence model.
Zou, Yu; Fonoberov, Vladimir A; Fonoberova, Maria; Mezic, Igor; Kevrekidis, Ioannis G
2012-06-01
Agent-based modeling (ABM) constitutes a powerful computational tool for the exploration of phenomena involving emergent dynamic behavior in the social sciences. This paper demonstrates a computer-assisted approach that bridges the significant gap between the single-agent microscopic level and the macroscopic (coarse-grained population) level, where fundamental questions must be rationally answered and policies guiding the emergent dynamics devised. Our approach will be illustrated through an agent-based model of civil violence. This spatiotemporally varying ABM incorporates interactions between a heterogeneous population of citizens [active (insurgent), inactive, or jailed] and a population of police officers. Detailed simulations exhibit an equilibrium punctuated by periods of social upheavals. We show how to effectively reduce the agent-based dynamics to a stochastic model with only two coarse-grained degrees of freedom: the number of jailed citizens and the number of active ones. The coarse-grained model captures the ABM dynamics while drastically reducing the computation time (by a factor of approximately 20).
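In the same spirit, a two-degree-of-freedom stochastic surrogate can be written down directly. The sketch below evolves only the (active, jailed) counts with tau-leaping Poisson updates; the transition-rate expressions are placeholder assumptions, not the rates the authors extract from bursts of ABM simulation.

```python
# Two-variable stochastic surrogate of a civil-violence ABM (toy rates).
import numpy as np
rng = np.random.default_rng(0)

N = 1000                                     # total citizen population

def step(active, jailed, dt=0.05):
    quiet = max(N - active - jailed, 0)
    # Activation spreads with the number already active (contagion-like).
    activations = rng.poisson(0.02 * quiet * (1 + 10 * active / N) * dt)
    arrests = rng.poisson(0.5 * active * dt)
    releases = rng.poisson(0.05 * jailed * dt)
    active = max(active + activations - arrests, 0)
    jailed = max(jailed + arrests - releases, 0)
    return active, jailed

active, jailed = 10, 0
for _ in range(2000):
    active, jailed = step(active, jailed)
print(active, jailed)
```

Running such a surrogate is orders of magnitude cheaper than stepping every agent, which is the point of the coarse-graining exercise.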
LOD 1 VS. LOD 2 - Preliminary Investigations Into Differences in Mobile Rendering Performance
NASA Astrophysics Data System (ADS)
Ellul, C.; Altenbuchner, J.
2013-09-01
The increasing availability, size and detail of 3D City Model datasets has led to a challenge when rendering such data on mobile devices. Understanding the limitations to the usability of such models on these devices is particularly important given the broadening range of applications - such as pollution or noise modelling, tourism, planning and solar potential - for which these datasets and the resulting visualisations can be utilised. Much 3D City Model data is created by extrusion of 2D topographic datasets, resulting in what is known as Level of Detail (LoD) 1 buildings - with flat roofs. However, in the UK the National Mapping Agency (the Ordnance Survey, OS) is now releasing test datasets at Level of Detail (LoD) 2 - i.e. including roof structures. These datasets are designed to integrate with the LoD 1 datasets provided by the OS, and provide additional detail, in particular on larger buildings and in town centres. The availability of such integrated datasets at two different Levels of Detail permits investigation into the impact of the additional roof structures (and hence the display of a more realistic 3D City Model) on rendering performance on a mobile device. This paper describes preliminary work carried out to investigate this issue for the test area of the city of Sheffield (in northern England). The data is stored in a 3D spatial database as triangles and then extracted and served as a web-based data stream which is queried by an app developed on the mobile device (using the Android environment, Java and OpenGL for graphics). Initial tests have been carried out on two dataset sizes, for the city centre and a larger area, rendering the data onto a tablet to compare results. Results of 52 seconds for rendering LoD 1 data, and 72 seconds for LoD 1 mixed with LoD 2 data, show that the impact of LoD 2 is significant.
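The flat-roofed LoD 1 geometry mentioned above is simple enough to generate directly. The sketch below extrudes a 2D footprint into wall and roof triangles of the kind such a renderer would be fed; it assumes a simple counter-clockwise footprint, and the fan triangulation of the roof additionally assumes convexity.

```python
# LoD 1 extrusion sketch: footprint polygon + height -> triangle list.
def extrude_lod1(footprint, height):
    tris = []
    n = len(footprint)
    for i in range(n):                       # one quad (two tris) per wall
        (x0, y0), (x1, y1) = footprint[i], footprint[(i + 1) % n]
        a, b = (x0, y0, 0.0), (x1, y1, 0.0)
        c, d = (x1, y1, height), (x0, y0, height)
        tris += [(a, b, c), (a, c, d)]
    roof = [(x, y, height) for x, y in footprint]
    for i in range(1, n - 1):                # fan-triangulate the flat roof
        tris.append((roof[0], roof[i], roof[i + 1]))
    return tris

# A 10 m x 6 m rectangular building, 12 m tall: 8 wall + 2 roof triangles.
print(len(extrude_lod1([(0, 0), (10, 0), (10, 6), (0, 6)], 12.0)))
```

LoD 2 roof structures multiply this triangle count per building, which is the load the paper's timing comparison measures.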
A simplified dynamic model of the T700 turboshaft engine
NASA Technical Reports Server (NTRS)
Duyar, Ahmet; Gu, Zhen; Litt, Jonathan S.
1992-01-01
A simplified open-loop dynamic model of the T700 turboshaft engine, valid within the normal operating range of the engine, is developed. This model is obtained by linking linear state-space models obtained at different engine operating points. Each linear model is developed from a detailed nonlinear engine simulation using a multivariable system identification and realization method. The simplified model may be used with a model-based real-time diagnostic scheme for fault detection and diagnostics, as well as for open-loop engine dynamics studies and closed-loop control analysis utilizing a user-generated control law.
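One way to picture the linking of point models is the scheduling sketch below, which switches to the nearest of several identified (A, B) pairs as the operating point moves. The matrices and the nearest-neighbour selection rule are illustrative assumptions, not the identified T700 dynamics.

```python
# Scheduled piecewise-linear state-space model sketch.
import numpy as np

class ScheduledLinearModel:
    def __init__(self, points, models):
        self.points = np.asarray(points)     # operating-point parameter values
        self.models = models                 # list of (A, B) pairs

    def step(self, x, u, op, dt=0.01):
        i = int(np.argmin(np.abs(self.points - op)))   # pick nearest model
        A, B = self.models[i]
        return x + dt * (A @ x + B @ u)      # forward-Euler state update

A_lo = np.array([[-1.0, 0.2], [0.0, -0.5]])  # placeholder low-power dynamics
A_hi = np.array([[-2.0, 0.4], [0.0, -1.0]])  # placeholder high-power dynamics
B = np.array([[1.0], [0.5]])
sched = ScheduledLinearModel([0.3, 0.9], [(A_lo, B), (A_hi, B)])

x = np.zeros(2)
for _ in range(100):
    x = sched.step(x, np.array([1.0]), op=0.8)   # operate near the high point
print(x)
```

Smoother schemes interpolate between neighbouring models rather than switching, but the linking idea is the same.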
Novel model of an AlGaN/GaN high electron mobility transistor based on an artificial neural network
NASA Astrophysics Data System (ADS)
Cheng, Zhi-Qun; Hu, Sha; Liu, Jun; Zhang, Qi-Jun
2011-03-01
In this paper we present a novel approach to modeling an AlGaN/GaN high electron mobility transistor (HEMT) with an artificial neural network (ANN). The AlGaN/GaN HEMT device structure and its fabrication process are described. The circuit-based neuro-space mapping (neuro-SM) technique is studied in detail. The EEHEMT model is implemented according to the measurement results of the designed device and serves as a coarse model. An ANN built on this coarse model is proposed to model the AlGaN/GaN HEMT, and its optimization is performed. The simulation results from the model are compared with the measurement results, showing that the ANN model of the AlGaN/GaN HEMT is more accurate than the EEHEMT model. Project supported by the National Natural Science Foundation of China (Grant No. 60776052).
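The essence of the neuro-SM correction, a small network learning to map a coarse model's output onto measurements, fits in a toy training loop. In the sketch below the "coarse" and "measured" I-V curves are synthetic stand-ins, and the single-hidden-layer net trained by plain gradient descent is an assumption, not the paper's topology.

```python
# Toy ANN mapping a coarse device model's output toward "measured" data.
import numpy as np
rng = np.random.default_rng(1)

v = np.linspace(0, 10, 200)[:, None]            # drain-voltage sweep
coarse = np.tanh(0.5 * v)                       # coarse-model current
measured = 1.1 * np.tanh(0.55 * v) + 0.02       # synthetic "true" current

W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(coarse @ W1 + b1)               # hidden layer
    pred = h @ W2 + b2                          # corrected current
    err = pred - measured
    gW2 = h.T @ err / len(v); gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h ** 2)              # backprop through tanh
    gW1 = coarse.T @ gh / len(v); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
print(float(np.abs(err).mean()))                # residual after training
```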
Rajiv Prasad; David G. Tarboton; Glen E. Liston; Charles H. Luce; Mark S. Seyfried
2001-01-01
In this paper a physically based snow transport model (SnowTran-3D) was used to simulate snow drifting over a 30 m grid and was compared to detailed snow water equivalence (SWE) surveys on three dates within a small 0.25 km2 subwatershed, Upper Sheep Creek. Two precipitation scenarios and two vegetation scenarios were used to carry out four snow transport model runs in...
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
FORTRAN program RANDOM2 is presented in the form of a user's manual. RANDOM2 is based on fracture mechanics using a probabilistic fatigue crack growth model. It predicts the random lifetime of an engine component to reach a given crack size. Details of the theoretical background, input data instructions, and a sample problem illustrating the use of the program are included.
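The flavor of such a probabilistic crack-growth lifetime calculation can be conveyed with a Monte Carlo sketch. The Paris-law form, the randomized constant, and the center-crack stress-intensity expression below are generic textbook choices with illustrative values, not RANDOM2's actual formulation.

```python
# Monte Carlo fatigue lifetime sketch: Paris-law growth with a random
# material constant, counting cycles until a critical crack size.
import numpy as np
rng = np.random.default_rng(42)

def cycles_to_failure(a0=1e-3, a_crit=1e-2, dsigma=100.0, m=3.0):
    # Random Paris constant; da/dN = C * dK^m with dK in MPa*sqrt(m).
    C = rng.lognormal(mean=np.log(1e-11), sigma=0.3)
    a, n, dn = a0, 0, 1000
    while a < a_crit:
        dK = dsigma * np.sqrt(np.pi * a)   # center-crack stress intensity range
        a += C * dK ** m * dn              # grow the crack over dn cycles
        n += dn
    return n

lives = [cycles_to_failure() for _ in range(1000)]
print(np.percentile(lives, [5, 50, 95]))   # scatter in predicted lifetime
```

The percentiles of the simulated lifetimes play the role of the random lifetime prediction the program provides.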
Point-based and model-based geolocation analysis of airborne laser scanning data
NASA Astrophysics Data System (ADS)
Sefercik, Umut Gunes; Buyuksalih, Gurcan; Jacobsen, Karsten; Alkan, Mehmet
2017-01-01
Airborne laser scanning (ALS) is one of the most effective remote sensing technologies for providing precise three-dimensional (3-D) dense point clouds. A large ALS digital surface model (DSM) covering the whole Istanbul province was analyzed by comprehensive point-based and model-based statistical approaches. The point-based analysis was performed using checkpoints on flat areas. The model-based approaches were implemented in two steps: strip-to-strip comparison of the overlapping ALS DSMs individually in three subareas, and comparison of the merged ALS DSMs with terrestrial laser scanning (TLS) DSMs in four other subareas. In the model-based approach, the standard deviation of height and the normalized median absolute deviation were used as accuracy indicators, combined with their dependency on terrain inclination. The results demonstrate that terrain roughness has a strong impact on the vertical accuracy of ALS DSMs. The relative horizontal shifts, determined and partially corrected by merging the overlapping strips and by comparing the ALS and TLS data, were found not to be negligible. The analysis of the ALS DSM in relation to the TLS DSM allowed the characteristics of the DSM to be determined in detail.
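Both accuracy indicators named above are one-liners over the DSM height differences. In the sketch below the dh sample is synthetic, with a few gross errors injected to show why the robust NMAD is useful alongside the plain standard deviation.

```python
# Standard deviation of height vs. normalized median absolute deviation
# (NMAD) over DSM height differences dh.
import numpy as np

def std_height(dh):
    return np.std(dh, ddof=1)

def nmad(dh):
    # 1.4826 scales the MAD to match the std of a normal distribution;
    # the median makes it robust to DSM outliers.
    return 1.4826 * np.median(np.abs(dh - np.median(dh)))

dh = np.random.default_rng(3).normal(0.0, 0.15, 10000)  # metres, synthetic
dh[:100] += 5.0                                         # simulated gross errors
print(std_height(dh), nmad(dh))  # NMAD stays near 0.15 m; std is inflated
```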
The neuronal dynamics underlying cognitive flexibility in set shifting tasks.
Stemme, Anja; Deco, Gustavo; Busch, Astrid
2007-12-01
The ability to switch attention from one aspect of an object to another, in other words to switch the "attentional set" as investigated in tasks like the "Wisconsin Card Sorting Test", is commonly referred to as cognitive flexibility. In this work we present a biophysically detailed neurodynamical model which illustrates the neuronal basis of the processes underlying this cognitive flexibility. For this purpose we conducted behavioral experiments which allow the combined evaluation of different aspects of set shifting tasks: uninstructed set shifts as investigated in Wisconsin-like tasks, effects of stimulus congruency as investigated in Stroop-like tasks, and the contribution of working memory as investigated in "Delayed-Match-to-Sample" tasks. The work describes how general experimental findings can be used to design the architecture of a biophysically detailed though minimalistic model closely oriented on neurobiological findings, and how, in turn, the simulations support experimental investigations. The resulting model accounts for experimental and individual response times and error rates and enables the switch of attention as a system-inherent model feature: the switching process suggested by the model is based on the memorization of the visual stimuli and does not require any synaptic learning. The operation of the model thus demonstrates, with high plausibility, the neuronal dynamics underlying a key component of human behavior: the ability to adapt behavior according to context requirements, that is, cognitive flexibility.
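A heavily stripped-down caricature of such attention switching can be written as a two-population mean-field rate model. The sketch below uses assumed weights, gain, and a hand-switched input bias in place of the paper's biophysically detailed network, whose switching relies on stimulus memorization rather than an external bias.

```python
# Two competing populations with mutual inhibition; shifting the input
# bias moves the network from one "attentional set" attractor to the other.
import numpy as np

def f(x):
    # Sigmoidal population gain with threshold 0.5 (assumed toy values).
    return 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.1))

w_self, w_inh = 0.6, 1.0        # self-excitation and mutual inhibition
dt, tau = 0.001, 0.02
r = np.array([0.1, 0.1])        # firing rates of the two set-coding pools
bias = np.array([0.5, 0.0])     # input currently favouring dimension 1
for step_i in range(2000):
    if step_i == 1000:
        bias = np.array([0.0, 0.5])   # feedback shifts the relevant dimension
    inp = w_self * r - w_inh * r[::-1] + bias
    r = r + dt / tau * (-r + f(inp))
print(r)                        # pool 2 now dominates: the set has shifted
```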