Sample records for modelling approaches application

  1. Approaches for the Application of Physiologically Based ...

    EPA Pesticide Factsheets

    EPA released the final report, Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment, as announced in a September 22, 2006 Federal Register Notice. This final report addresses the application and evaluation of PBPK models for risk assessment purposes. These models represent an important class of dosimetry models that are useful for predicting internal dose at target organs for risk assessment applications.

  2. Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment (Final Report)

    EPA Science Inventory

    EPA released the final report, Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment, as announced in a September 22, 2006 Federal Register Notice. This final report addresses the application and evaluati...

  3. Application Perspective of 2D+SCALE Dimension

    NASA Astrophysics Data System (ADS)

    Karim, H.; Rahman, A. Abdul

    2016-09-01

    Different applications and users need different abstractions of spatial models, dimensionalities and dataset specifications, owing to variations in the required analysis and output. Various approaches, data models and data structures are now available to support most current application models in Geographic Information System (GIS). One current trend in the GIS multi-dimensional research community is the implementation of a scale dimension with spatial datasets to suit the needs of applications at various scales. In this paper, 2D spatial datasets that have been scaled up, with scale as the third dimension, are addressed as 2D+scale (or 3D-scale) datasets. Various data structures, data models, approaches, schemas and formats have been proposed as the best way to support a variety of applications and dimensionalities in 3D topology; however, only a few of them consider the element of scale as their targeted dimension. Where the scale dimension is concerned, the implementation approach can be either multi-scale or vario-scale (with any available data structure and format), depending on application requirements (topology, semantics and function). This paper discusses current and potential new applications that could be integrated on top of the 3D-scale dimension approach. The previous and current work on the scale dimension, the requirements to be preserved for any given application, implementation issues and potential future applications form the major discussion of this paper.

  4. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
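
    A compact way to see what a closed-form range-energy model buys is the classic Bragg-Kleeman rule, R = alpha * E^p. The sketch below is not the authors' six-parameter model; it is a minimal illustration of the same idea, using widely quoted proton-in-water constants (alpha ≈ 0.0022 cm per MeV^p, p ≈ 1.77) as assumptions.

```python
# Minimal sketch, assuming the Bragg-Kleeman rule R = ALPHA * E**P with
# commonly quoted proton-in-water constants; NOT the paper's 6-parameter model.
ALPHA = 0.0022  # cm * MeV**(-P), illustrative fitted constant (assumed)
P = 1.77        # dimensionless exponent (assumed)

def proton_range_cm(energy_mev):
    """Continuous-slowing-down range, R(E) = ALPHA * E**P."""
    return ALPHA * energy_mev ** P

def stopping_power_mev_per_cm(energy_mev):
    """S(E) = -dE/dx = E**(1 - P) / (ALPHA * P), from inverting dR/dE = 1/S."""
    return energy_mev ** (1.0 - P) / (ALPHA * P)

# Closed-form evaluation is why such fits beat numerical integration on speed.
print(f"R(150 MeV) = {proton_range_cm(150.0):.1f} cm in water")  # roughly 15.6 cm
```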

  5. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-07

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  6. Taking the mystery out of mathematical model applications to karst aquifers—A primer

    USGS Publications Warehouse

    Kuniansky, Eve L.

    2014-01-01

    Advances in mathematical model applications toward the understanding of the complex flow, characterization, and water-supply management issues for karst aquifers have occurred in recent years. Different types of mathematical models can be applied successfully if appropriate information is available and the problems are adequately identified. The mathematical approaches discussed in this paper are divided into three major categories: 1) distributed parameter models, 2) lumped parameter models, and 3) fitting models. The modeling approaches are described conceptually with examples (but without equations) to help non-mathematicians understand the applications.
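
    As a concrete instance of the second category, the sketch below implements a one-parameter lumped model (a single linear reservoir draining to a spring). The recharge series and storage constant are invented for illustration and are not taken from the primer.

```python
# Minimal sketch of a lumped-parameter karst model: one linear reservoir,
# dS/dt = R(t) - Q(t) with Q = S / k. Parameter values are illustrative.
import numpy as np

def linear_reservoir(recharge, k=20.0, dt=1.0, s0=100.0):
    """Simulate spring discharge Q (same units as recharge) over daily steps."""
    storage, flows = s0, []
    for r in recharge:
        q = storage / k              # outflow proportional to storage
        storage += (r - q) * dt      # water-balance update
        flows.append(q)
    return np.array(flows)

rng = np.random.default_rng(0)
recharge = rng.exponential(2.0, size=365)   # synthetic daily recharge (assumed)
q = linear_reservoir(recharge)
print(q[:5].round(2))
```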

  7. FINE SCALE AIR QUALITY MODELING USING DISPERSION AND CMAQ MODELING APPROACHES: AN EXAMPLE APPLICATION IN WILMINGTON, DE

    EPA Science Inventory

    Characterization of spatial variability of air pollutants in an urban setting at fine scales is critical for improved air toxics exposure assessments, for model evaluation studies and also for air quality regulatory applications. For this study, we investigate an approach that su...

  8. Reflection of a Year Long Model-Driven Business and UI Modeling Development Project

    NASA Astrophysics Data System (ADS)

    Sukaviriya, Noi; Mani, Senthil; Sinha, Vibha

    Model-driven software development enables users to specify an application at a high level, one that better matches the problem domain. It also promises better analysis and automation. Our work brings together two collaborating domains, business processes and human interactions, to build an application. Business modeling expresses business operations and flows, then creates the business flow implementation. Human interaction modeling expresses a UI design and its relationship with business data, logic and flow, and can generate a working UI. This double modeling approach automates the production of a working system with UI and business logic connected. This paper discusses the human aspects of this modeling approach after a year spent building a procurement outsourcing contract application with it; the result was deployed in December 2008. The paper discusses the happy endings, and some heartache, in multiple areas. We end with insights on how a model-driven approach could serve the humans in the process better.

  9. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  10. A modular approach to addressing model design, scale, and parameter estimation issues in distributed hydrological modelling

    USGS Publications Warehouse

    Leavesley, G.H.; Markstrom, S.L.; Restrepo, Pedro J.; Viger, R.J.

    2002-01-01

    A modular approach to model design and construction provides a flexible framework in which to focus the multidisciplinary research and operational efforts needed to facilitate the development, selection, and application of the most robust distributed modelling methods. A variety of modular approaches have been developed, but with little consideration for compatibility among systems and concepts. Several systems are proprietary, limiting any user interaction. The US Geological Survey modular modelling system (MMS) is a modular modelling framework that uses an open source software approach to enable all members of the scientific community to address collaboratively the many complex issues associated with the design, development, and application of distributed hydrological and environmental models. Implementation of a common modular concept is not a trivial task. However, it brings the resources of a larger community to bear on the problems of distributed modelling, provides a framework in which to compare alternative modelling approaches objectively, and provides a means of sharing the latest modelling advances. The concepts and components of the MMS are described and an example application of the MMS, in a decision-support system context, is presented to demonstrate current system capabilities. Copyright © 2002 John Wiley and Sons, Ltd.

  11. Evaluating the Power Consumption of Wireless Sensor Network Applications Using Models

    PubMed Central

    Dâmaso, Antônio; Freitas, Davi; Rosa, Nelson; Silva, Bruno; Maciel, Paulo

    2013-01-01

    Power consumption is the main concern in developing Wireless Sensor Network (WSN) applications. Consequently, several strategies have been proposed for investigating the power consumption of this kind of application. These strategies can help to predict the WSN lifetime, provide recommendations to application developers and may optimize the energy consumed by the WSN applications. While measurement is a known and precise strategy for power consumption evaluation, it is very costly, tedious and may be unfeasible considering the (usually) large number of WSN nodes. Furthermore, due to the inherent dynamism of WSNs, the instrumentation required by measurement techniques makes their use difficult in several different scenarios. In this context, this paper presents an approach for evaluating the power consumption of WSN applications by using simulation models along with a set of tools to automate the proposed approach. Starting from a programming language code, we automatically generate consumption models used to predict the power consumption of WSN applications. In order to evaluate the proposed approach, we compare the results obtained by using the generated models against ones obtained by measurement. PMID:23486217
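
    The paper's toolchain derives its consumption models automatically from application code; the sketch below only illustrates the underlying state-based energy model idea. The states, currents and duty cycle are illustrative assumptions (currents loosely follow a MICAz-class mote datasheet), not output of the authors' tools.

```python
# Minimal sketch of a state-based power model for one WSN node.
# Energy = sum over states of (current * voltage * time in state).
STATE_CURRENT_MA = {"sleep": 0.02, "cpu_active": 8.0, "radio_rx": 19.7, "radio_tx": 17.4}
VOLTAGE = 3.0  # volts (assumed)

def energy_mj(schedule):
    """schedule: list of (state, seconds); returns energy in millijoules."""
    return sum(STATE_CURRENT_MA[state] * VOLTAGE * seconds for state, seconds in schedule)

# One 10-second duty cycle: mostly asleep, brief sense-and-transmit burst.
cycle = [("sleep", 9.5), ("cpu_active", 0.4), ("radio_tx", 0.1)]
per_cycle = energy_mj(cycle)
cycles_per_day = 86400 / 10.0
print(f"{per_cycle:.2f} mJ per cycle, {per_cycle * cycles_per_day / 1000:.1f} J per day")
```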

  12. Evaluating the power consumption of wireless sensor network applications using models.

    PubMed

    Dâmaso, Antônio; Freitas, Davi; Rosa, Nelson; Silva, Bruno; Maciel, Paulo

    2013-03-13

    Power consumption is the main concern in developing Wireless Sensor Network (WSN) applications. Consequently, several strategies have been proposed for investigating the power consumption of this kind of application. These strategies can help to predict the WSN lifetime, provide recommendations to application developers and may optimize the energy consumed by the WSN applications. While measurement is a known and precise strategy for power consumption evaluation, it is very costly, tedious and may be unfeasible considering the (usually) large number of WSN nodes. Furthermore, due to the inherent dynamism of WSNs, the instrumentation required by measurement techniques makes their use difficult in several different scenarios. In this context, this paper presents an approach for evaluating the power consumption of WSN applications by using simulation models along with a set of tools to automate the proposed approach. Starting from a programming language code, we automatically generate consumption models used to predict the power consumption of WSN applications. In order to evaluate the proposed approach, we compare the results obtained by using the generated models against ones obtained by measurement.

  13. Hirabayashi, Satoshi; Kroll, Charles N.; Nowak, David J. 2011. Component-based development and sensitivity analyses of an air pollutant dry deposition model. Environmental Modelling & Software. 26(6): 804-816.

    Treesearch

    Satoshi Hirabayashi; Chuck Kroll; David Nowak

    2011-01-01

    The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...

  14. Variance Estimation for NAEP Data Using a Resampling-Based Approach: An Application of Cognitive Diagnostic Models. Research Report. ETS RR-10-26

    ERIC Educational Resources Information Center

    Hsieh, Chueh-an; Xu, Xueli; von Davier, Matthias

    2010-01-01

    This paper presents an application of a jackknifing approach to variance estimation of ability inferences for groups of students, using a multidimensional discrete model for item response data. The data utilized to demonstrate the approach come from the National Assessment of Educational Progress (NAEP). In contrast to the operational approach…
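
    A minimal sketch of the resampling idea follows, assuming a simple delete-one jackknife on a group mean; NAEP's operational procedure uses paired replicate weights and latent-ability draws, so this shows only the core mechanism.

```python
# Minimal sketch: delete-one jackknife variance of a group mean.
import numpy as np

def jackknife_variance_of_mean(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    loo_means = (x.sum() - x) / (n - 1)            # leave-one-out estimates
    return (n - 1) / n * np.sum((loo_means - loo_means.mean()) ** 2)

scores = np.array([252.0, 261.0, 247.0, 268.0, 255.0, 249.0, 263.0, 258.0])
# For the mean, the jackknife reproduces the textbook s**2 / n exactly,
# a handy sanity check before applying it to less tractable statistics.
print(jackknife_variance_of_mean(scores), scores.var(ddof=1) / scores.size)
```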

  15. A Model-Based Approach for Bridging Virtual and Physical Sensor Nodes in a Hybrid Simulation Framework

    PubMed Central

    Mozumdar, Mohammad; Song, Zhen Yu; Lavagno, Luciano; Sangiovanni-Vincentelli, Alberto L.

    2014-01-01

    The Model Based Design (MBD) approach is a popular trend to speed up application development of embedded systems, which uses high-level abstractions to capture functional requirements in an executable manner, and which automates implementation code generation. Wireless Sensor Networks (WSNs) are an emerging and very promising application area for embedded systems. However, there is a lack of tools in this area, which would allow an application developer to model a WSN application by using high-level abstractions, simulate it mapped to a multi-node scenario for functional analysis, and finally use the refined model to automatically generate code for different WSN platforms. Motivated by this idea, in this paper we present a hybrid simulation framework that not only follows the MBD approach for WSN application development, but also interconnects a simulated sub-network with a physical sub-network and then allows one to co-simulate them, which is also known as Hardware-In-the-Loop (HIL) simulation. PMID:24960083

  16. Formalism Challenges of the Cougaar Model Driven Architecture

    NASA Technical Reports Server (NTRS)

    Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.

    2004-01-01

    The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed today. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.

  17. Predictive QSAR modeling workflow, model applicability domains, and virtual screening.

    PubMed

    Tropsha, Alexander; Golbraikh, Alexander

    2007-01-01

    Quantitative Structure Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in the chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting brings it forward as a predictive, as opposed to evaluative, modeling approach.
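
    One widely used (and deliberately simple) applicability-domain check is a standardized distance to the training-set centroid in descriptor space. The sketch below assumes random stand-in descriptors and a z-cutoff of 3; it is not the authors' specific workflow.

```python
# Minimal sketch of an applicability-domain check: flag query compounds whose
# standardized distance to the training centroid exceeds a cutoff.
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 8))        # 200 training compounds, 8 descriptors (assumed)
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)

def in_domain(x_query, z_cutoff=3.0):
    """True if the query lies within z_cutoff standardized units of the centroid."""
    z = np.linalg.norm((x_query - mu) / sigma) / np.sqrt(len(mu))
    return z <= z_cutoff

print(in_domain(rng.normal(size=8)))          # typical compound: inside the domain
print(in_domain(rng.normal(size=8) + 10.0))   # outlier: prediction should be withheld
```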

  18. A Unified Approach to Modeling Multidisciplinary Interactions

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Bhatia, Kumar G.

    2000-01-01

    There are a number of existing methods to transfer information among various disciplines. For a multidisciplinary application with n disciplines, the traditional methods may be required to model (n² - n) interactions. This paper presents a unified three-dimensional approach that reduces the number of interactions from (n² - n) to 2n by using a computer-aided design model. The proposed modeling approach unifies the interactions among various disciplines. The approach is independent of specific discipline implementation, and a number of existing methods can be reformulated in the context of the proposed unified approach. This paper provides an overview of the proposed unified approach and reformulations for two existing methods. The unified approach is specially tailored for application environments where the geometry is created and managed through a computer-aided design system. Results are presented for a blended-wing body and a high-speed civil transport.
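
    The bookkeeping behind the claimed reduction is easy to verify: point-to-point coupling needs a directed mapping for every ordered pair of disciplines, while routing everything through the central CAD model needs only one mapping to and one from the hub per discipline.

```python
# Counting interactions: pairwise coupling vs. a central CAD-model hub.
def pairwise(n):
    return n * n - n     # every ordered pair of distinct disciplines

def via_hub(n):
    return 2 * n         # one mapping to and one from the hub, per discipline

for n in (3, 5, 10):
    print(f"n={n:2d}: pairwise={pairwise(n):3d}, via CAD hub={via_hub(n):3d}")
```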

  19. Modeling and applications in microbial food safety

    USDA-ARS?s Scientific Manuscript database

    Mathematical modeling is a scientific and systematic approach to studying and describing recurrent events or phenomena, with a successful track record of application spanning decades. When models are properly developed and validated, their applications may save costs and time. For the microbial food safety concerns, ...

  20. Application of QSAR and shape pharmacophore modeling approaches for targeted chemical library design.

    PubMed

    Ebalunode, Jerry O; Zheng, Weifan; Tropsha, Alexander

    2011-01-01

    Optimization of chemical library composition affords more efficient identification of hits from biological screening experiments. The optimization could be achieved through rational selection of reagents used in combinatorial library synthesis. However, with the rapid advent of parallel synthesis methods and the availability of millions of compounds synthesized by many vendors, it may be more efficient to design targeted libraries by means of virtual screening of commercial compound collections. This chapter reviews the application of advanced cheminformatics approaches such as quantitative structure-activity relationships (QSAR) and pharmacophore modeling (both ligand and structure based) for virtual screening. Both approaches rely on empirical SAR data to build models; thus, the emphasis is placed on achieving models of the highest rigor and external predictive power. We present several examples of successful applications of both approaches for virtual screening to illustrate their utility. We suggest that the expert use of both QSAR and pharmacophore models, either independently or in combination, enables users to achieve targeted libraries enriched with experimentally confirmed hit compounds.

  1. Social Learning among Organic Farmers and the Application of the Communities of Practice Framework

    ERIC Educational Resources Information Center

    Morgan, Selyf Lloyd

    2011-01-01

    The paper examines social learning processes among organic farmers and explores the application of the Community of Practice (CoP) model in this context. The analysis utilises an approach based on the CoP model and considers how, or whether, this approach may be useful for understanding social learning among farmers. The CoP model is applied…

  2. Centrifuge Rotor Models: A Comparison of the Euler-Lagrange and the Bond Graph Modeling Approach

    NASA Technical Reports Server (NTRS)

    Granda, Jose J.; Ramakrishnan, Jayant; Nguyen, Louis H.

    2006-01-01

    A viewgraph presentation on centrifuge rotor models with a comparison using Euler-Lagrange and bond graph methods is shown. The topics include: 1) Objectives; 2) Modeling Approach Comparisons; 3) Model Structures; and 4) Application.

  3. On Structural Equation Model Equivalence.

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    1999-01-01

    Presents a necessary and sufficient condition for the equivalence of structural-equation models that is applicable to models with parameter restrictions and models that may or may not fulfill assumptions of the rules. Illustrates the application of the approach for studying model equivalence. (SLD)

  4. Modeling Subgrid Scale Droplet Deposition in Multiphase-CFD

    NASA Astrophysics Data System (ADS)

    Agostinelli, Giulia; Baglietto, Emilio

    2017-11-01

    The development of first-principle-based constitutive equations for the Eulerian-Eulerian CFD modeling of annular flow is a major priority to extend the applicability of multiphase CFD (M-CFD) across all two-phase flow regimes. Two key mechanisms need to be incorporated in the M-CFD framework: the entrainment of droplets from the liquid film, and their deposition. Here we focus first on the aspect of deposition, leveraging a separate-effects approach. Current two-field methods in M-CFD do not include appropriate local closures to describe the deposition of droplets in annular flow conditions. While many integral correlations for deposition have been proposed for lumped-parameter applications, few attempts exist in the literature to extend their applicability to CFD simulations. The integral nature of the approach limits its applicability to fully developed flow conditions, without geometrical or flow variations, thereby negating the scope of CFD application. A new approach is proposed here that leverages local quantities to predict the subgrid-scale deposition rate. The methodology is first tested in a three-field CFD model.

  5. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
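
    A minimal sketch of the information-sharing pattern the paper describes follows, reduced to a publish/subscribe bus: a server-side producer publishes processed telemetry on named channels, and any number of peer consoles subscribe. Channel names and payloads are invented; the actual protocol's wire format is not represented.

```python
# Minimal sketch of a publish/subscribe telemetry bus (illustrative only).
from collections import defaultdict

class TelemetryBus:
    def __init__(self):
        self._subs = defaultdict(list)   # channel name -> list of callbacks

    def subscribe(self, channel, callback):
        self._subs[channel].append(callback)

    def publish(self, channel, sample):
        for callback in self._subs[channel]:
            callback(sample)             # fan out to every subscriber

bus = TelemetryBus()
bus.subscribe("cabin_pressure", lambda s: print("console A:", s))
bus.subscribe("cabin_pressure", lambda s: print("console B:", s))
bus.publish("cabin_pressure", {"t": 120.5, "psi": 14.2})
```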

  6. A distributed computing model for telemetry data processing

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  7. HABITAT MODELING APPROACHES FOR RESTORATION SITE SELECTION

    EPA Science Inventory

    Numerous modeling approaches have been used to develop predictive models of species-environment and species-habitat relationships. These models have been used in conservation biology and habitat or species management, but their application to restoration efforts has been minimal...

  8. Application and project portfolio valuation using enterprise architecture and business requirements modelling

    NASA Astrophysics Data System (ADS)

    Quartel, Dick; Steen, Maarten W. A.; Lankhorst, Marc M.

    2012-05-01

    This article describes an architecture-based approach to IT valuation. This approach offers organisations an instrument to valuate their application and project portfolios and to make well-balanced decisions about IT investments. The value of a software application is assessed in terms of its contribution to a selection of business goals. Based on such assessments, the value of different applications can be compared, and requirements for innovation, development, maintenance and phasing out can be identified. IT projects are proposed to realise the requirements. The value of each project is assessed in terms of the value it adds to one or more applications. This value can be obtained by relating the 'as-is' application portfolio to the 'to-be' portfolio that is being proposed by the project portfolio. In this way, projects can be ranked according to their added value, given a certain selection of business goals. The approach uses ArchiMate to model the relationship between software applications, business processes, services and products. In addition, two language extensions are used to model the relationship of these elements to business goals and requirements and to projects and project portfolios. The approach is illustrated using the portfolio method of Bedell and has been implemented in BiZZdesign Architect.
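
    A hedged sketch of the valuation arithmetic: an application's value is its weighted contribution to selected business goals, and a project's value is the uplift from the 'as-is' to the 'to-be' portfolio. The weights and scores below are invented; the article's Bedell-based method and ArchiMate models are not reproduced.

```python
# Minimal sketch of goal-weighted application and project valuation.
GOAL_WEIGHTS = {"cut_costs": 0.5, "improve_service": 0.3, "compliance": 0.2}

def app_value(contributions):
    """contributions: goal -> score in [0, 10] for one application."""
    return sum(GOAL_WEIGHTS[goal] * score for goal, score in contributions.items())

as_is = {"billing": {"cut_costs": 4, "improve_service": 3, "compliance": 6}}
to_be = {"billing": {"cut_costs": 7, "improve_service": 6, "compliance": 6}}

# Project value = total value of the 'to-be' portfolio minus the 'as-is' one.
project_value = sum(app_value(to_be[app]) - app_value(as_is[app]) for app in as_is)
print(f"project adds {project_value:.1f} value points")
```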

  9. Addressing Challenges in Urban Teaching, Learning and Math Using Model-Strategy-Application with Reasoning Approach in Lingustically and Culturally Diverse Classrooms

    ERIC Educational Resources Information Center

    Wu, Zhonghe; An, Shuhua

    2016-01-01

    This study examined the effects of using the Model-Strategy-Application with Reasoning Approach (MSAR) in teaching and learning mathematics in linguistically and culturally diverse elementary classrooms. Through learning mathematics via the MSAR, students from different language ability groups gained an understanding of mathematics from creating…

  10. Diffuse-Interface Methods in Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Anderson, D. M.; McFadden, G. B.; Wheeler, A. A.

    1997-01-01

    The authors review the development of diffuse-interface models of hydrodynamics and their application to a wide variety of interfacial phenomena. The authors discuss the issues involved in formulating diffuse-interface models for single-component and binary fluids. Recent applications and computations using these models are discussed in each case. Further, the authors address issues including sharp-interface analyses that relate these models to the classical free-boundary problem, related computational approaches to describe interfacial phenomena, and related approaches describing fully-miscible fluids.

  11. A Review of Diagnostic Techniques for ISHM Applications

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Biswas, Gautam; Aaseng, Gordon; Narasimhan, Sriam; Pattipati, Krishna

    2005-01-01

    System diagnosis is an integral part of any Integrated System Health Management application. Diagnostic applications make use of system information from the design phase, such as safety and mission assurance analysis, failure modes and effects analysis, hazards analysis, functional models, fault propagation models, and testability analysis. In modern process control and equipment monitoring systems, topological and analytic models of the nominal system, derived from design documents, are also employed for fault isolation and identification. Depending on the complexity of the monitored signals from the physical system, diagnostic applications may involve straightforward trending and feature extraction techniques to retrieve the parameters of importance from the sensor streams. They also may involve very complex analysis routines, such as signal processing, learning or classification methods, to derive the parameters of importance to diagnosis. The process that is used to diagnose anomalous conditions from monitored system signals varies widely across the different approaches to system diagnosis. Rule-based expert systems, case-based reasoning systems, model-based reasoning systems, learning systems, and probabilistic reasoning systems are examples of the many diverse approaches to diagnostic reasoning. Many engineering disciplines have specific approaches to modeling, monitoring and diagnosing anomalous conditions. Therefore, there is no "one-size-fits-all" approach to building diagnostic and health monitoring capabilities for a system. For instance, the conventional approaches to diagnosing failures in rotorcraft applications are very different from those used in communications systems. Further, online and offline automated diagnostic applications are integrated into an operations framework with flight crews, flight controllers and maintenance teams. While the emphasis of this paper is automation of health management functions, striking the correct balance between automated and human-performed tasks is a vital concern.


  12. Models and Frameworks: A Synergistic Association for Developing Component-Based Applications

    PubMed Central

    Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A.; Álvarez, Bárbara

    2014-01-01

    The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use both an existing framework and an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development. PMID:25147858
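
    A minimal sketch of the generation step follows, assuming a toy platform-independent component model and a string template; the paper's actual transformation targets object-oriented frameworks with far richer semantics.

```python
# Minimal sketch: emit framework-integration code from a component model.
from string import Template

COMPONENT_TMPL = Template(
    "class $name(Component):\n"
    '    """Auto-generated from the $name model element."""\n'
    "    def __init__(self):\n"
    "        super().__init__(ports=$ports)\n"
)

# Toy platform-independent model: component names and ports (assumed shape).
model = [
    {"name": "RangeSensor", "ports": ["distance_out"]},
    {"name": "MotionPlanner", "ports": ["distance_in", "cmd_out"]},
]

for element in model:
    print(COMPONENT_TMPL.substitute(name=element["name"], ports=element["ports"]))
```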

  13. Models and frameworks: a synergistic association for developing component-based applications.

    PubMed

    Alonso, Diego; Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A; Álvarez, Bárbara

    2014-01-01

    The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use both an existing framework and an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development.

  14. Application of a hybrid MPI/OpenMP approach for parallel groundwater model calibration using multi-core computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified to account for over 97% of the total computational time using GPROF. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects over 90% cache miss rates. With this loop rewritten, a speedup similar to the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network or multiple compute nodes on a cluster as slaves using parallel PEST to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
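
    A hedged sketch of the two-level split, with mpi4py standing in for the MPI layer and a process pool standing in for OpenMP threads: MPI ranks divide the Levenberg-Marquardt Jacobian columns (one perturbed forward run each), and each rank parallelizes its runs locally. The forward model is a toy; HGC5 itself is not involved. Run with, e.g., mpirun -n 4 python script.py.

```python
import numpy as np
from mpi4py import MPI                 # distributed level (stands in for MPI in HGC5)
from multiprocessing import Pool       # node-local level (stands in for OpenMP)

def forward_model(params):
    """Toy stand-in for one expensive forward solution."""
    x = np.linspace(0.0, 1.0, 50)
    return params[0] * np.exp(-params[1] * x) + params[2] * x + params[3]

def jac_column(args):
    params, j, eps = args
    p = params.copy()
    p[j] += eps
    return (forward_model(p) - forward_model(params)) / eps  # finite difference

comm = MPI.COMM_WORLD
params = np.array([2.0, 1.5, 0.7, 0.3])
my_cols = list(range(comm.Get_rank(), params.size, comm.Get_size()))  # round-robin split

with Pool() as pool:  # each rank fills its Jacobian columns in parallel locally
    cols = pool.map(jac_column, [(params, j, 1e-6) for j in my_cols])

gathered = comm.gather((my_cols, cols), root=0)
if comm.Get_rank() == 0:
    J = np.empty((50, params.size))
    for indices, values in gathered:
        for j, col in zip(indices, values):
            J[:, j] = col
    print("assembled Jacobian:", J.shape)
```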

  15. Combining Domain-driven Design and Mashups for Service Development

    NASA Astrophysics Data System (ADS)

    Iglesias, Carlos A.; Fernández-Villamor, José Ignacio; Del Pozo, David; Garulli, Luca; García, Boni

    This chapter presents the Romulus project approach to Service Development using Java-based web technologies. Romulus aims at improving productivity of service development by providing a tool-supported model to conceive Java-based web applications. This model follows a Domain Driven Design approach, which states that the primary focus of software projects should be the core domain and domain logic. Romulus proposes a tool-supported model, Roma Metaframework, that provides an abstraction layer on top of existing web frameworks and automates the application generation from the domain model. This metaframework follows an object centric approach, and complements Domain Driven Design by identifying the most common cross-cutting concerns (security, service, view, ...) of web applications. The metaframework uses annotations for enriching the domain model with these cross-cutting concerns, so-called aspects. In addition, the chapter presents the usage of mashup technology in the metaframework for service composition, using the web mashup editor MyCocktail. This approach is applied to a scenario of the Mobile Phone Service Portability case study for the development of a new service.

  16. Exploring a model-driven architecture (MDA) approach to health care information systems development.

    PubMed

    Raghupathi, Wullianallur; Umar, Amjad

    2008-05-01

    To explore the potential of the model-driven architecture (MDA) in health care information systems development, an MDA is conceptualized and developed for a health clinic system to track patient information, and a prototype is implemented using an advanced MDA tool. The UML provides the underlying modeling support in the form of the class diagram. The PIM-to-PSM transformation rules are applied to generate the prototype application from the model. The result of the research is a complete MDA methodology for developing health care information systems. Additional insights gained include the development of transformation rules and documentation of the challenges in the application of MDA to health care. Design guidelines for future MDA applications are described. The model has the potential for generalizability. The overall approach supports limited interoperability and portability. The research demonstrates the applicability of the MDA approach to health care information systems development. When properly implemented, it has the potential to overcome the challenges of platform (vendor) dependency, lack of open standards, interoperability, portability, scalability, and the high cost of implementation.

  17. Comparison of prognostic and diagnostic approaches to modeling evapotranspiration in the Nile river basin

    USDA-ARS?s Scientific Manuscript database

    Actual evapotranspiration (ET) can be estimated using both prognostic and diagnostic modeling approaches, providing independent yet complementary information for hydrologic applications. Both approaches have advantages and disadvantages. When provided with temporally continuous atmospheric forcing d...

  18. Open data models for smart health interconnected applications: the example of openEHR.

    PubMed

    Demski, Hans; Garde, Sebastian; Hildebrand, Claudia

    2016-10-22

    Smart Health is a concept that combines networking, intelligent data processing and the fusion of patient data with other parameters. Open data models can play an important role in creating a framework for providing interoperable data services that support the development of innovative Smart Health applications profiting from data fusion and sharing. This article describes a model-driven engineering approach based on standardized clinical information models and explores its application for the development of interoperable electronic health record systems. The following possible model-driven procedures were considered: provision of data schemes for data exchange, automated generation of artefacts for application development, and native platforms that directly execute the models. The applicability of the approach in practice was examined using the openEHR framework as an example. A comprehensive infrastructure for model-driven engineering of electronic health records is presented using the example of the openEHR framework. It is shown that data schema definitions to be used in common practice software development processes can be derived from domain models. The capabilities for automatic creation of implementation artefacts (e.g., data entry forms) are demonstrated. Complementary programming libraries and frameworks that foster the use of open data models are introduced. Several compatible health data platforms are listed. They provide standards-based interfaces for interconnecting with further applications. Open data models help build a framework for interoperable data services that support the development of innovative Smart Health applications. Related tools for model-driven application development foster semantic interoperability and interconnected innovative applications.

  19. An investigation of modelling and design for software service applications.

    PubMed

    Anjum, Maria; Budgen, David

    2017-01-01

    Software services offer the opportunity to use a component-based approach for the design of applications. However, this needs a deeper understanding of how to develop service-based applications in a systematic manner, and of the set of properties that need to be included in the 'design model'. We have used a realistic application to explore systematically how service-based designs can be created and described. We first identified the key properties of an SOA (service oriented architecture) and then undertook a single-case case study to explore its use in the development of a design for a large-scale application in energy engineering, modelling this with existing notations wherever possible. We evaluated the resulting design model using two walkthroughs with both domain and application experts. We were able to successfully develop a design model around the ten properties identified, and to describe it by adapting existing design notations. A component-based approach to designing such systems does appear to be feasible. However, it needs the assistance of a more integrated set of notations for describing the resulting design model.

  20. Modeling of polymer networks for application to solid propellant formulating

    NASA Technical Reports Server (NTRS)

    Marsh, H. E.

    1979-01-01

    Methods are presented for predicting the network structural characteristics formed by the curing of pourable elastomers, as well as the logic applied in developing the mathematical models. A universal approach to modeling was developed and verified by comparison with other methods in application to a complex system. Several applications of network models to practical problems are described.

  1. Bayesian structural equation modeling: a more flexible representation of substantive theory.

    PubMed

    Muthén, Bengt; Asparouhov, Tihomir

    2012-09-01

    This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed Bayesian approach is particularly beneficial in applications where parameters are added to a conventional model such that a nonidentified model is obtained if maximum-likelihood estimation is applied. This approach is useful for measurement aspects of latent variable modeling, such as with confirmatory factor analysis, and the measurement part of structural equation modeling. Two application areas are studied, cross-loadings and residual correlations in confirmatory factor analysis. An example using a full structural equation model is also presented, showing an efficient way to find model misspecification. The approach encompasses 3 elements: model testing using posterior predictive checking, model estimation, and model modification. Monte Carlo simulations and real data are analyzed using Mplus. The real-data analyses use data from Holzinger and Swineford's (1939) classic mental abilities study, Big Five personality factor data from a British survey, and science achievement data from the National Educational Longitudinal Study of 1988.
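
    The key device, replacing an exact zero with an approximate zero, can be seen in a one-parameter conjugate-normal sketch: a N(0, tau²) prior shrinks a sample cross-loading toward zero, strongly when tau² is tiny and hardly at all when the prior is diffuse. The numbers below are illustrative, not from the article's analyses.

```python
# Minimal numeric sketch of a small-variance ("approximate zero") prior.
def posterior_mean(est, se, tau2):
    """Conjugate normal update for one cross-loading with a N(0, tau2) prior."""
    prior_prec, data_prec = 1.0 / tau2, 1.0 / se ** 2
    return data_prec * est / (prior_prec + data_prec)

est, se = 0.20, 0.05                       # sample cross-loading and its SE (assumed)
for tau2 in (0.0001, 0.01, 1.0):           # near-exact zero -> diffuse prior
    print(f"tau2={tau2:<7} -> posterior mean {posterior_mean(est, se, tau2):.3f}")
```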

  2. Defining a Technical Basis for Comparing and Contrasting Emerging Dynamic Discovery Protocols

    DTIC Science & Technology

    2001-05-02

    UPnP, SLP, Bluetooth , and HAVi • Projected specific UML models for Jini, UPnP, and SLP • Completed a Rapide Model of Jini structure, function, and...narrow application focus but targeting a different application domain. (e.g., HAVi, Salutation Consortium, and Bluetooth Service Discovery) • Sun has...Our General Approach? 1/31/2002 7 Particulars of Our Approach Define a Generic UML Model that Encompasses Jini, UPnP, SLP, HAVi, and Bluetooth

  3. Application of Boosting Regression Trees to Preliminary Cost Estimation in Building Construction Projects

    PubMed Central

    2015-01-01

    Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong boundaries in terms of its generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimations, but has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimations at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, its performance was compared with that of a neural network (NN) model, which has been proven to have a high performance in cost estimation domains. The BRT model has shown results similar to those of NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information such as the importance plot and structure model, which can support estimators in comprehending the decision making process. Consequently, the boosting approach has potential applicability in preliminary cost estimations in a building construction project. PMID:26339227
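
    A minimal sketch of the BRT setup follows, using scikit-learn's gradient boosting on synthetic data standing in for the 234 cost records; the features and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: boosted regression trees for early-stage cost estimation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
X = rng.uniform(size=(234, 4))     # e.g. floor area, storeys, grade, duration (assumed)
y = 120 * X[:, 0] + 30 * X[:, 1] ** 2 + 10 * rng.normal(size=234)   # synthetic cost

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X[:200], y[:200])
print("holdout R^2:", round(model.score(X[200:], y[200:]), 3))
# Feature importances are the basis of the "importance plot" the paper mentions.
print("feature importances:", model.feature_importances_.round(2))
```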

  4. Application of Boosting Regression Trees to Preliminary Cost Estimation in Building Construction Projects.

    PubMed

    Shin, Yoonseok

    2015-01-01

    Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong boundaries in terms of its generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimations, but has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimations at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, its performance was compared with that of a neural network (NN) model, which has been proven to have a high performance in cost estimation domains. The BRT model has shown results similar to those of NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information such as the importance plot and structure model, which can support estimators in comprehending the decision making process. Consequently, the boosting approach has potential applicability in preliminary cost estimations in a building construction project.

  5. Approaches for the Application of Physiologically Based ...

    EPA Pesticide Factsheets

    This draft report of Approaches for the Application of Physiologically Based Pharmacokinetic (PBPK) Models and Supporting Data in Risk Assessment addresses the application and evaluation of PBPK models for risk assessment purposes. These models represent an important class of dosimetry models that are useful for predicting internal dose at target organs for risk assessment applications. Topics covered include:the types of data required use of PBPK models in risk assessment,evaluation of PBPK models for use in risk assessment, andthe application of these models to address uncertainties resulting from extrapolations (e.g. interspecies extrapolation) often used in risk assessment.In addition, appendices are provided that includea compilation of chemical partition coefficients and rate constants,algorithms for estimating chemical-specific parameters, anda list of publications relating to PBPK modeling. This report is primarily meant to serve as a learning tool for EPA scientists and risk assessors who may be less familiar with the field. In addition, this report can be informative to PBPK modelers within and outside the Agency, as it provides an assessment of the types of data and models that the EPA requires for consideration of a model for use in risk assessment.

  6. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    PubMed

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: the multilevel modeling (hierarchical linear model) and the structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how these two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.
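
    A hedged sketch of the multilevel route follows, assuming simulated indistinguishable dyads and the mixed-effects API in statsmodels; the article's marital-conflict data and its SEM variant are not reproduced.

```python
# Minimal sketch: actor-partner interdependence model as a multilevel model,
# with a random intercept per dyad.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_dyads = 150
x = rng.normal(size=(n_dyads, 2))                 # predictor for members 1 and 2
dyad_re = rng.normal(scale=0.5, size=n_dyads)     # shared dyad-level effect
actor, partner = 0.4, 0.25                        # true effects (assumed)
y = actor * x + partner * x[:, ::-1] + dyad_re[:, None] + rng.normal(scale=0.7, size=x.shape)

df = pd.DataFrame({
    "dyad": np.repeat(np.arange(n_dyads), 2),
    "actor_x": x.ravel(),                         # own predictor
    "partner_x": x[:, ::-1].ravel(),              # partner's predictor
    "y": y.ravel(),
})
fit = smf.mixedlm("y ~ actor_x + partner_x", df, groups=df["dyad"]).fit()
print(fit.params[["actor_x", "partner_x"]].round(2))   # recovers ~0.4 and ~0.25
```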

  7. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  8. A Bayesian approach to modeling diffraction profiles and application to ferroelectric materials

    DOE PAGES

    Iamsasri, Thanakorn; Guerrier, Jonathon; Esteves, Giovanni; ...

    2017-02-01

    A new statistical approach for modeling diffraction profiles is introduced, using Bayesian inference and a Markov chain Monte Carlo (MCMC) algorithm. This method is demonstrated by modeling the degenerate reflections during application of an electric field to two different ferroelectric materials: thin-film lead zirconate titanate (PZT) of composition PbZr0.3Ti0.7O3 and a bulk commercial PZT polycrystalline ferroelectric. The new method offers a unique uncertainty quantification of the model parameters that can be readily propagated into newly calculated parameters.
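
    A minimal sketch of the inference idea follows, assuming a single Gaussian peak, a flat background, Poisson counting noise, and a random-walk Metropolis sampler; the paper treats degenerate reflections and propagates full posterior uncertainties.

```python
# Minimal sketch: Metropolis MCMC fit of one diffraction peak (toy problem).
import numpy as np

rng = np.random.default_rng(3)
two_theta = np.linspace(30.0, 32.0, 120)

def peak(p):                                    # p = (amplitude, centre, width)
    return p[0] * np.exp(-0.5 * ((two_theta - p[1]) / p[2]) ** 2)

truth = np.array([100.0, 31.0, 0.15])
counts = rng.poisson(peak(truth) + 5.0)         # noisy pattern, flat background of 5

def log_post(p):
    if p[0] <= 0 or p[2] <= 0:
        return -np.inf                          # flat priors on the positive domain
    lam = peak(p) + 5.0
    return np.sum(counts * np.log(lam) - lam)   # Poisson log-likelihood

p, lp = np.array([80.0, 30.9, 0.2]), -np.inf
chain = []
for _ in range(20000):
    cand = p + rng.normal(scale=[1.0, 0.002, 0.002])   # random-walk proposal
    lc = log_post(cand)
    if np.log(rng.uniform()) < lc - lp:
        p, lp = cand, lc
    chain.append(p)
print(np.mean(chain[5000:], axis=0).round(3))   # posterior means, roughly the truth
```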

  9. An Open Source Simulation Model for Soil and Sediment Bioturbation

    PubMed Central

    Schiffers, Katja; Teal, Lorna Rachel; Travis, Justin Mark John; Solan, Martin

    2011-01-01

    Bioturbation is one of the most widespread forms of ecological engineering and has significant implications for the structure and functioning of ecosystems, yet our understanding of the processes involved in biotic mixing remains incomplete. One reason is that, despite their value and utility, most mathematical models currently applied to bioturbation data tend to neglect aspects of the natural complexity of bioturbation in favour of mathematical simplicity. At the same time, the abstract nature of these approaches limits the application of such models to a limited range of users. Here, we contend that a movement towards process-based modelling can improve both the representation of the mechanistic basis of bioturbation and the intuitiveness of modelling approaches. In support of this initiative, we present an open source modelling framework that explicitly simulates particle displacement and a worked example to facilitate application and further development. The framework combines the advantages of rule-based lattice models with the application of parameterisable probability density functions to generate mixing on the lattice. Model parameters can be fitted by experimental data and describe particle displacement at the spatial and temporal scales at which bioturbation data is routinely collected. By using the same model structure across species, but generating species-specific parameters, a generic understanding of species-specific bioturbation behaviour can be achieved. An application to a case study and comparison with a commonly used model attest the predictive power of the approach. PMID:22162997
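
    A minimal sketch of the framework's central move follows, assuming a 1-D depth lattice and a normal displacement kernel; in the real framework the displacement PDF is parameterised per species from tracer data, and the lattice is richer.

```python
# Minimal sketch: tracer particles displaced by draws from a parameterisable PDF.
import numpy as np

rng = np.random.default_rng(11)
n_particles, n_steps = 5000, 30          # tracers, daily time steps (assumed)
depth = np.zeros(n_particles)            # all tracers start at the sediment surface

p_move, sigma_cm = 0.2, 0.8              # daily move probability, step scale (assumed)
for _ in range(n_steps):
    moving = rng.uniform(size=n_particles) < p_move
    step = rng.normal(scale=sigma_cm, size=n_particles)
    depth[moving] = np.clip(depth[moving] + step[moving], 0.0, None)

print(f"mean depth {depth.mean():.2f} cm, max depth {depth.max():.2f} cm")
```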

  10. An open source simulation model for soil and sediment bioturbation.

    PubMed

    Schiffers, Katja; Teal, Lorna Rachel; Travis, Justin Mark John; Solan, Martin

    2011-01-01

    Bioturbation is one of the most widespread forms of ecological engineering and has significant implications for the structure and functioning of ecosystems, yet our understanding of the processes involved in biotic mixing remains incomplete. One reason is that, despite their value and utility, most mathematical models currently applied to bioturbation data tend to neglect aspects of the natural complexity of bioturbation in favour of mathematical simplicity. At the same time, the abstract nature of these approaches restricts their application to a narrow range of users. Here, we contend that a movement towards process-based modelling can improve both the representation of the mechanistic basis of bioturbation and the intuitiveness of modelling approaches. In support of this initiative, we present an open source modelling framework that explicitly simulates particle displacement and a worked example to facilitate application and further development. The framework combines the advantages of rule-based lattice models with the application of parameterisable probability density functions to generate mixing on the lattice. Model parameters can be fitted to experimental data and describe particle displacement at the spatial and temporal scales at which bioturbation data are routinely collected. By using the same model structure across species, but generating species-specific parameters, a generic understanding of species-specific bioturbation behaviour can be achieved. An application to a case study and comparison with a commonly used model attest to the predictive power of the approach.

  11. Artificial intelligence based models for stream-flow forecasting: 2000-2015

    NASA Astrophysics Data System (ADS)

    Yaseen, Zaher Mundher; El-shafie, Ahmed; Jaafar, Othman; Afan, Haitham Abdulmohsin; Sayl, Khamis Naba

    2015-11-01

    The use of Artificial Intelligence (AI) has increased since the middle of the 20th century, as seen in its application to a wide range of engineering and science problems. The last two decades, for example, have seen a dramatic increase in the development and application of various types of AI approaches for stream-flow forecasting. Generally speaking, AI has exhibited significant progress in forecasting and modeling non-linear hydrological applications and in capturing noise and complexity in the data. This paper explores the state-of-the-art application of AI in stream-flow forecasting, focusing on defining the data-driven methods of AI, the advantages of complementary models, as well as the literature and their possible future application in modeling and forecasting stream-flow. The review also identifies the major challenges and opportunities for prospective research, including a new scheme for modeling the inflow, a novel method for preprocessing time series frequency based on Fast Orthogonal Search (FOS) techniques, and Swarm Intelligence (SI) as an optimization approach.

  12. Bioprocess systems engineering: transferring traditional process engineering principles to industrial biotechnology.

    PubMed

    Koutinas, Michalis; Kiparissides, Alexandros; Pistikopoulos, Efstratios N; Mantalaris, Athanasios

    2012-01-01

    The complexity of the regulatory network and the interactions that occur in the intracellular environment of microorganisms highlight the importance in developing tractable mechanistic models of cellular functions and systematic approaches for modelling biological systems. To this end, the existing process systems engineering approaches can serve as a vehicle for understanding, integrating and designing biological systems and processes. Here, we review the application of a holistic approach for the development of mathematical models of biological systems, from the initial conception of the model to its final application in model-based control and optimisation. We also discuss the use of mechanistic models that account for gene regulation, in an attempt to advance the empirical expressions traditionally used to describe micro-organism growth kinetics, and we highlight current and future challenges in mathematical biology. The modelling research framework discussed herein could prove beneficial for the design of optimal bioprocesses, employing rational and feasible approaches towards the efficient production of chemicals and pharmaceuticals.

  13. Bioprocess systems engineering: transferring traditional process engineering principles to industrial biotechnology

    PubMed Central

    Koutinas, Michalis; Kiparissides, Alexandros; Pistikopoulos, Efstratios N.; Mantalaris, Athanasios

    2013-01-01

    The complexity of the regulatory network and the interactions that occur in the intracellular environment of microorganisms highlight the importance in developing tractable mechanistic models of cellular functions and systematic approaches for modelling biological systems. To this end, the existing process systems engineering approaches can serve as a vehicle for understanding, integrating and designing biological systems and processes. Here, we review the application of a holistic approach for the development of mathematical models of biological systems, from the initial conception of the model to its final application in model-based control and optimisation. We also discuss the use of mechanistic models that account for gene regulation, in an attempt to advance the empirical expressions traditionally used to describe micro-organism growth kinetics, and we highlight current and future challenges in mathematical biology. The modelling research framework discussed herein could prove beneficial for the design of optimal bioprocesses, employing rational and feasible approaches towards the efficient production of chemicals and pharmaceuticals. PMID:24688682

  14. BioMOL: a computer-assisted biological modeling tool for complex chemical mixtures and biological processes at the molecular level.

    PubMed Central

    Klein, Michael T; Hou, Gang; Quann, Richard J; Wei, Wei; Liao, Kai H; Yang, Raymond S H; Campain, Julie A; Mazurek, Monica A; Broadbelt, Linda J

    2002-01-01

    A chemical engineering approach for the rigorous construction, solution, and optimization of detailed kinetic models for biological processes is described. This modeling capability addresses the required technical components of detailed kinetic modeling, namely, the modeling of reactant structure and composition, the building of the reaction network, the organization of model parameters, the solution of the kinetic model, and the optimization of the model. Even though this modeling approach has enjoyed successful application in the petroleum industry, its application to biomedical research has just begun. We propose to expand the horizons of classic pharmacokinetics and physiologically based pharmacokinetics (PBPK), in which human or animal bodies are often described by a few compartments, by integrating PBPK with the reaction network modeling described in this article. If one draws a parallel between an oil refinery, where the application of this modeling approach has been very successful, and a human body, the individual processing units in the oil refinery may be considered equivalent to the vital organs of the human body. Even though the cell or organ may be much more complicated, the complex biochemical reaction networks in each organ may be similarly modeled and linked in much the same way as the modeling of the entire oil refinery through linkage of the individual processing units. The integrated chemical engineering software package described in this article, BioMOL, denotes the biological application of molecular-oriented lumping. BioMOL can build a detailed model in 1-1,000 CPU sec using standard desktop hardware. The models are solved and optimized using standard, widely available hardware and software, and can be presented in the context of a user-friendly interface. We believe this is an engineering tool with great promise in its application to complex biological reaction networks. PMID:12634134

  15. Applications of a thermal-based two-source energy balance model using Priestley-Taylor approach for surface temperature partitioning (TSEB_PTT) under advective conditions

    USDA-ARS?s Scientific Manuscript database

    Operational application of the two-source energy balance model (TSEB), which can estimate evapotranspiration (ET) and its components, evaporation (E) and transpiration (T), of the land surface in different climates, is very useful for many applications in hydrology and agriculture. The TSEB model uses an ...

  16. Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration.

    PubMed

    Chen, Yunjin; Pock, Thomas

    2017-06-01

    Image restoration is a long-standing problem in low-level computer vision with many interesting applications. We describe a flexible learning framework based on the concept of nonlinear reaction diffusion models for various image restoration problems. By embodying recent improvements in nonlinear diffusion models, we propose a dynamic nonlinear reaction diffusion model with time-dependent parameters (i.e., linear filters and influence functions). In contrast to previous nonlinear diffusion models, all the parameters, including the filters and the influence functions, are simultaneously learned from training data through a loss-based approach. We call this approach TNRD: Trainable Nonlinear Reaction Diffusion. The TNRD approach is applicable to a variety of image restoration tasks by incorporating an appropriate reaction force. We demonstrate its capabilities with three representative applications: Gaussian image denoising, single-image super-resolution, and JPEG deblocking. Experiments show that our trained nonlinear diffusion models largely benefit from the training of the parameters and finally lead to the best reported performance on common test datasets for the tested applications. Our trained models preserve the structural simplicity of diffusion models and take only a small number of diffusion steps, and are thus highly efficient. Moreover, they are also well suited for parallel computation on GPUs, which makes the inference procedure extremely fast.
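    As a rough illustration, one TNRD-style diffusion step has the form u_{t+1} = u_t - (sum_i K_i^T phi_i(K_i u_t) + lambda (u_t - f)). The sketch below uses fixed, hand-picked gradient filters and a tanh stand-in for the influence functions; in the paper all of these, per step, are learned from training data.

      import numpy as np
      from scipy.ndimage import convolve

      def tnrd_step(u, f, filters, lam=0.1):
          update = lam * (u - f)                       # reaction (data-fidelity) force
          for k in filters:
              response = convolve(u, k, mode="reflect")
              phi = np.tanh(response)                  # stand-in influence function
              update += convolve(phi, k[::-1, ::-1], mode="reflect")  # apply K^T
          return u - update

      rng = np.random.default_rng(0)
      f = rng.random((64, 64))                         # noisy observation
      filters = [np.array([[0, 0, 0], [-1, 1, 0], [0, 0, 0]], float),   # horizontal gradient
                 np.array([[0, -1, 0], [0, 1, 0], [0, 0, 0]], float)]   # vertical gradient
      u = f.copy()
      for _ in range(5):                               # a small number of diffusion steps
          u = tnrd_step(u, f, filters)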

  17. Population modeling for pesticide risk assessment of threatened species-A case study of a terrestrial plant, Boltonia decurrens.

    PubMed

    Schmolke, Amelie; Brain, Richard; Thorbek, Pernille; Perkins, Daniel; Forbes, Valery

    2017-02-01

    Although population models are recognized as necessary tools in the ecological risk assessment of pesticides, particularly for species listed under the Endangered Species Act, their application in this context is currently limited to very few cases. The authors developed a detailed, individual-based population model for a threatened plant species, the decurrent false aster (Boltonia decurrens), for application in pesticide risk assessment. Floods and competition with other plant species are known factors that drive the species' population dynamics and were included in the model approach. The authors use the model to compare the population-level effects of 5 toxicity surrogates applied to B. decurrens under varying environmental conditions. The model results suggest that the environmental conditions under which herbicide applications occur may have a higher impact on populations than organism-level sensitivities to an herbicide within a realistic range. Indirect effects may be as important as the direct effects of herbicide applications by shifting competition strength if competing species have different sensitivities to the herbicide. The model approach provides a case study for population-level risk assessments of listed species. Population-level effects of herbicides can be assessed in a realistic and species-specific context, and uncertainties can be addressed explicitly. The authors discuss how their approach can inform the future development and application of modeling for population-level risk assessments of listed species, and ecological risk assessment in general. Environ Toxicol Chem 2017;36:480-491. © 2016 SETAC.

  18. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

    ERIC Educational Resources Information Center

    Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

    2011-01-01

    Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

  19. Reusable Component Model Development Approach for Parallel and Distributed Simulation

    PubMed Central

    Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng

    2014-01-01

    Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diversiform interfaces, couple tightly, and bind closely with simulation platforms. As a result, they are difficult to reuse across different simulation platforms and applications. To address the problem, this paper first proposes a reusable component model framework. Based on this framework, our reusable model development approach is then elaborated, which contains two phases: (1) domain experts create simulation computational modules following three principles to achieve their independence; (2) the model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that a model developed using our approach has good reusability and is easy to use in different simulation platforms and applications. PMID:24729751

  20. An investigation of modelling and design for software service applications

    PubMed Central

    2017-01-01

    Software services offer the opportunity to use a component-based approach for the design of applications. However, this needs a deeper understanding of how to develop service-based applications in a systematic manner, and of the set of properties that need to be included in the ‘design model’. We have used a realistic application to explore systematically how service-based designs can be created and described. We first identified the key properties of an SOA (service oriented architecture) and then undertook a single-case case study to explore its use in the development of a design for a large-scale application in energy engineering, modelling this with existing notations wherever possible. We evaluated the resulting design model using two walkthroughs with both domain and application experts. We were able to successfully develop a design model around the ten properties identified, and to describe it by adapting existing design notations. A component-based approach to designing such systems does appear to be feasible. However, it needs the assistance of a more integrated set of notations for describing the resulting design model. PMID:28489905

  1. A discrete mechanics framework for real time virtual surgical simulations with application to virtual laparoscopic nephrectomy.

    PubMed

    Zhou, Xiangmin; Zhang, Nan; Sha, Desong; Shen, Yunhe; Tamma, Kumar K; Sweet, Robert

    2009-01-01

    The inability to render realistic soft-tissue behavior in real time has remained a barrier to face and content aspects of validity for many virtual reality surgical training systems. Biophysically based models are suitable not only for training purposes but also for patient-specific clinical applications, physiological modeling and surgical planning. Among the existing approaches for modeling soft tissue for virtual reality surgical simulation, the computer graphics-based approach lacks predictive capability; the mass-spring model (MSM) based approach lacks biophysically realistic soft-tissue dynamic behavior; and the finite element method (FEM) approaches fail to meet the real-time requirement. The present development stems from the first law of thermodynamics: for a spatially discrete dynamic system, it directly formulates the space-discrete but time-continuous governing equation with an embedded material constitutive relation, resulting in a discrete mechanics framework that possesses a unique balance between computational effort and physically realistic soft-tissue dynamic behavior. We describe the development of the discrete mechanics framework with focused attention towards a virtual laparoscopic nephrectomy application.

  2. On performance of parametric and distribution-free models for zero-inflated and over-dispersed count responses.

    PubMed

    Tang, Wan; Lu, Naiji; Chen, Tian; Wang, Wenjuan; Gunzler, Douglas David; Han, Yu; Tu, Xin M

    2015-10-30

    Zero-inflated Poisson (ZIP) and negative binomial (ZINB) models are widely used to model zero-inflated count responses. These models extend the Poisson and negative binomial (NB) to address excessive zeros in the count response. By adding a degenerate distribution centered at 0 and interpreting it as describing a non-risk group in the population, the ZIP (ZINB) models a two-component population mixture. As in applications of Poisson and NB, the key difference between ZIP and ZINB is the allowance for overdispersion by the ZINB in its NB component in modeling the count response for the at-risk group. Overdispersion arising in practice too often does not follow the NB, and applications of ZINB to such data yield invalid inference. If sources of overdispersion are known, other parametric models may be used to directly model the overdispersion. Such models too are subject to assumed distributions. Further, this approach may not be applicable if information about the sources of overdispersion is unavailable. In this paper, we propose a distribution-free alternative and compare its performance with these popular parametric models as well as a moment-based approach proposed by Yu et al. [Statistics in Medicine 2013; 32: 2390-2405]. Like the generalized estimating equations, the proposed approach requires no elaborate distribution assumptions. Compared with the approach of Yu et al., it is more robust to overdispersed zero-inflated responses. We illustrate our approach with both simulated and real study data. Copyright © 2015 John Wiley & Sons, Ltd.
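    A minimal sketch of the parametric baseline in Python, using simulated two-component data and the statsmodels implementation of the zero-inflated Poisson; the data-generating values are illustrative assumptions.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      rng = np.random.default_rng(0)
      n = 2000
      x = rng.normal(size=n)
      X = sm.add_constant(x)

      # Two-component mixture: a non-risk group contributing structural zeros
      # and an at-risk group with Poisson counts.
      at_risk = rng.uniform(size=n) > 0.3
      y = np.where(at_risk, rng.poisson(np.exp(0.5 + 0.8 * x)), 0)

      poisson_fit = sm.Poisson(y, X).fit(disp=False)
      zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=False)
      print("Poisson AIC:", poisson_fit.aic, " ZIP AIC:", zip_fit.aic)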

  3. The Be-WetSpa-Pest modeling approach to simulate human and environmental exposure from pesticide application

    NASA Astrophysics Data System (ADS)

    Binder, Claudia; Garcia-Santos, Glenda; Andreoli, Romano; Diaz, Jaime; Feola, Giuseppe; Wittensoeldner, Moritz; Yang, Jing

    2016-04-01

    This study presents an integrative and spatially explicit modeling approach for analyzing human and environmental exposure from pesticide application of smallholders in the potato producing Andean region in Colombia. The modeling approach fulfills the following criteria: (i) it includes environmental and human compartments; (ii) it contains a behavioral decision-making model for estimating the effect of policies on pesticide flows to humans and the environment; (iii) it is spatially explicit; and (iv) it is modular and easily expandable to include additional modules, crops or technologies. The model was calibrated and validated for the Vereda La Hoya and was used to explore the effect of different policy measures in the region. The model has moderate data requirements and can be adapted relatively easy to other regions in developing countries with similar conditions.

  4. Simulation Assisted Risk Assessment: Blast Overpressure Modeling

    NASA Technical Reports Server (NTRS)

    Lawrence, Scott L.; Gee, Ken; Mathias, Donovan; Olsen, Michael

    2006-01-01

    A probabilistic risk assessment (PRA) approach has been developed and applied to the risk analysis of capsule abort during ascent. The PRA is used to assist in the identification of modeling and simulation applications that can significantly impact the understanding of crew risk during this potentially dangerous maneuver. The PRA approach is also being used to identify the appropriate level of fidelity for the modeling of those critical failure modes. The Apollo launch escape system (LES) was chosen as a test problem for application of this approach. Failure modes that have been modeled and/or simulated to date include explosive overpressure-based failure, explosive fragment-based failure, land landing failures (range limits exceeded either near launch or Mode III trajectories ending on the African continent), capsule-booster re-contact during separation, and failure due to plume-induced instability. These failure modes have been investigated using analysis tools in a variety of technical disciplines at various levels of fidelity. The current paper focuses on the development and application of a blast overpressure model for the prediction of structural failure due to overpressure, including the application of high-fidelity analysis to predict near-field and headwinds effects.

  5. Kuang's Semi-Classical Formalism for Calculating Electron Capture Cross Sections: A Space- Physics Application

    NASA Technical Reports Server (NTRS)

    Barghouty, A. F.

    2014-01-01

    Accurate estimates of electron-capture cross sections at energies relevant to the modeling of the transport, acceleration, and interaction of energetic neutral atoms (ENA) in space (approximately a few MeV per nucleon), and especially for multi-electron ions, must rely on a detailed, but computationally expensive, quantum-mechanical description of the collision process. Kuang's semi-classical approach is an elegant and efficient way to arrive at these estimates. Motivated by ENA modeling efforts for space applications, we shall briefly present this approach along with sample applications and report on current progress.

  6. A Component Approach to Collaborative Scientific Software Development: Tools and Techniques Utilized by the Quantum Chemistry Science Application Partnership

    DOE PAGES

    Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...

    2008-01-01

    Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of programming model is just one building block in a comprehensive approach to large-scale collaborative development which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges and highlighting the advantages which the CCA approach offers for collaborative development.

  7. Aircraft applications of fault detection and isolation techniques

    NASA Astrophysics Data System (ADS)

    Marcos Esteban, Andres

    In this thesis the problems of fault detection & isolation and fault tolerant systems are studied from the perspective of LTI frequency-domain, model-based techniques. Emphasis is placed on the applicability of these LTI techniques to nonlinear models, especially to aerospace systems. Two applications of H-infinity LTI fault diagnosis are given using an open-loop (no controller) design approach: one for the longitudinal motion of a Boeing 747-100/200 aircraft, the other for a turbofan jet engine. An algorithm formalizing a robust identification approach based on model validation ideas is also given and applied to the previous jet engine. A general linear fractional transformation formulation is given in terms of the Youla and Dual Youla parameterizations for the integrated (control and diagnosis filter) approach. This formulation provides better insight into the trade-off between the control and the diagnosis objectives. It also provides the basic groundwork towards the development of nested schemes for the integrated approach. These nested structures allow iterative improvements on the control/filter Youla parameters based on successive identification of the system uncertainty (as given by the Dual Youla parameter). The thesis concludes with an application of H-infinity LTI techniques to the integrated design for the longitudinal motion of the previous Boeing 747-100/200 model.

  8. Pharmaceutical interventions for mitigating an influenza pandemic: modeling the risks and health-economic impacts.

    PubMed

    Postma, Maarten J; Milne, George; Nelson, E Anthony S; Pyenson, Bruce; Basili, Marcello; Coker, Richard; Oxford, John; Garrison, Louis P

    2010-12-01

    Model-based analyses built on burden-of-disease and cost-effectiveness theory predict that pharmaceutical interventions may efficiently mitigate both the epidemiologic and economic impact of an influenza pandemic. Pharmaceutical interventions typically encompass the application of (pre)pandemic influenza vaccines, other vaccines (notably pneumococcal), antiviral treatments and other drug treatment (e.g., antibiotics to target potential complications of influenza). However, these models may be too limited to capture the full macro-economic impact of pandemic influenza. The aim of this article is to summarize current health-economic modeling approaches to recognize the strengths and weaknesses of these approaches, and to compare these with more recently proposed alternative methods. We conclude that it is useful, particularly for policy and planning purposes, to extend modeling concepts through the application of alternative approaches, including insurers' risk theories, human capital approaches and sectoral and full macro-economic modeling. This article builds on a roundtable meeting of the Pandemic Influenza Economic Impact Group that was held in Boston, MA, USA, in December 2008.

  9. An Estimating Equations Approach for the LISCOMP Model.

    ERIC Educational Resources Information Center

    Reboussin, Beth A.; Liang, Kung-Lee

    1998-01-01

    A quadratic estimating equations approach for the LISCOMP model is proposed that only requires specification of the first two moments. This method is compared with a three-stage generalized least squares approach through a numerical study and application to a study of life events and neurotic illness. (SLD)

  10. Improving software maintenance through measurement

    NASA Technical Reports Server (NTRS)

    Rombach, H. Dieter; Ulery, Bradford T.

    1989-01-01

    A practical approach to improving software maintenance through measurements is presented. This approach is based on general models for measurement and improvement. Both models, their integration, and practical guidelines for transferring them into industrial maintenance settings are presented. Several examples of applications of the approach to real-world maintenance environments are discussed.

  11. Humanistic Speech Education to Create Leadership Models.

    ERIC Educational Resources Information Center

    Oka, Beverley Jeanne

    A theoretical framework based primarily on the humanistic psychology of Abraham Maslow is used in developing a humanistic approach to speech education. The holistic view of human learning and behavior, inherent in this approach, is seen to be compatible with a model of effective leadership. Specific applications of this approach to speech…

  12. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    NASA Astrophysics Data System (ADS)

    Nault, Isaac Michael

    This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or meso-scopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elastoplastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale Crystal Plasticity model.

  13. Formulation of consumables management models. Development approach for the mission planning processor working model

    NASA Technical Reports Server (NTRS)

    Connelly, L. C.

    1977-01-01

    The mission planning processor is a user oriented tool for consumables management and is part of the total consumables subsystem management concept. The approach to be used in developing a working model of the mission planning processor is documented. The approach includes top-down design, structured programming techniques, and application of NASA approved software development standards. This development approach: (1) promotes cost effective software development, (2) enhances the quality and reliability of the working model, (3) encourages the sharing of the working model through a standard approach, and (4) promotes portability of the working model to other computer systems.

  14. Environment Modeling Using Runtime Values for JPF-Android

    NASA Technical Reports Server (NTRS)

    van der Merwe, Heila; Tkachuk, Oksana; Nel, Seal; van der Merwe, Brink; Visser, Willem

    2015-01-01

    Software applications are developed to be executed in a specific environment. This environment includes external native libraries to add functionality to the application and drivers to fire the application execution. For testing and verification, the environment of an application is simplified and abstracted using models or stubs. Empty stubs, returning default values, are simple to generate automatically, but they do not perform well when the application expects specific return values. Symbolic execution is used to find input parameters for drivers and return values for library stubs, but it struggles to detect the values of complex objects. In this work-in-progress paper, we explore an approach to generate drivers and stubs based on values collected during runtime instead of using default values. Entry points and methods that need to be modeled are instrumented to log their parameters and return values. The instrumented applications are then executed using a driver and instrumented libraries. The values collected during runtime are used to generate driver and stub values on-the-fly that improve coverage during verification by enabling the execution of code that previously crashed or was missed. We are implementing this approach to improve the environment model of JPF-Android, our model checking and analysis tool for Android applications.

  15. Multivariate Bayesian modeling of known and unknown causes of events--an application to biosurveillance.

    PubMed

    Shen, Yanna; Cooper, Gregory F

    2012-09-01

    This paper investigates Bayesian modeling of known and unknown causes of events in the context of disease-outbreak detection. We introduce a multivariate Bayesian approach that models multiple evidential features of every person in the population. This approach models and detects (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities and (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities. We report the results of simulation experiments which support that this modeling method can improve the detection of new disease outbreaks in a population. A contribution of this paper is that it introduces a multivariate Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has general applicability in domains where the space of known causes is incomplete. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
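    A minimal sketch of the core computation for a single person's evidential features. The cause list, priors, likelihoods, and baseline are illustrative assumptions: known diseases get informative priors, while the "unknown" cause gets a relatively non-informative (flat) likelihood.

      import numpy as np

      causes = ["influenza", "anthrax", "unknown"]
      prior = np.array([0.10, 0.001, 0.01])      # informative priors for known diseases
      likelihood = np.array([0.30, 0.05, 0.10])  # P(observed features | cause); flat for unknown
      baseline = 0.01                            # P(observed features | no outbreak)

      def posterior(prior, likelihood, baseline):
          joint = prior * likelihood
          none = (1 - prior.sum()) * baseline    # "no outbreak" alternative
          return joint / (joint.sum() + none)

      print(dict(zip(causes, posterior(prior, likelihood, baseline).round(4))))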

  16. A model-based reasoning approach to sensor placement for monitorability

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Doyle, Richard; Homemdemello, Luiz

    1992-01-01

    An approach is presented to evaluating sensor placements to maximize monitorability of the target system while minimizing the number of sensors. The approach uses a model of the monitored system to score potential sensor placements on the basis of four monitorability criteria. The scores can then be analyzed to produce a recommended sensor set. An example from our NASA application domain is used to illustrate our model-based approach to sensor placement.

  17. MODFLOW-LGR: Practical application to a large regional dataset

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Coulibaly, K. M.

    2011-12-01

    In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.

  18. Applications of a formal approach to decipher discrete genetic networks.

    PubMed

    Corblin, Fabien; Fanchon, Eric; Trilling, Laurent

    2010-07-20

    A growing demand for tools to assist the building and analysis of biological networks exists in systems biology. We argue that the use of a formal approach is relevant and applicable to address questions raised by biologists about such networks. The behaviour of these systems being complex, it is essential to exploit efficiently every bit of experimental information. In our approach, both the evolution rules and the partial knowledge about the structure and the behaviour of the network are formalized using a common constraint-based language. In this article our formal and declarative approach is applied to three biological applications. The software environment that we developed makes it possible to address each application specifically through a new class of biologically relevant queries. We show that we can describe easily and in a formal manner the partial knowledge about a genetic network. Moreover we show that this environment, based on a constraint algorithmic approach, offers a wide variety of functionalities, going beyond simple simulations, such as proof of consistency, model revision, prediction of properties, and search for minimal models relative to specified criteria. The formal approach proposed here deeply changes the way to proceed in the exploration of genetic and biochemical networks, first by avoiding the usual trial-and-error procedure, and second by placing the emphasis on sets of solutions, rather than a single solution arbitrarily chosen among many others. Lastly, the constraint approach promotes an integration of model and experimental data in a single framework.

  19. Application of ToxCast High-Throughput Screening and ...

    EPA Pesticide Factsheets

    Slide presentation at the SETAC annual meeting on High-Throughput Screening and Modeling Approaches to Identify Steroidogenesis Disruptors.

  20. Biomass transformation webs provide a unified approach to consumer–resource modelling

    PubMed Central

    Getz, Wayne M.

    2011-01-01

    An approach to modelling food web biomass flows among live and dead compartments within and among species is formulated using metaphysiological principles that characterise population growth in terms of basal metabolism, feeding, senescence and exploitation. This leads to a unified approach to modelling interactions among plants, herbivores, carnivores, scavengers, parasites and their resources. Also, dichotomising sessile miners from mobile gatherers of resources, with relevance to feeding and starvation time scales, suggests a new classification scheme involving 10 primary categories of consumer types. These types, in various combinations, rigorously distinguish scavenger from parasite, herbivory from phytophagy and detritivore from decomposer. Application of the approach to particular consumer–resource interactions is demonstrated, culminating in the construction of an anthrax-centred food web model, with parameters applicable to Etosha National Park, Namibia, where deaths of elephants and zebra from the bacterial pathogen, Bacillus anthracis, provide significant subsidies to jackals, vultures and other scavengers. PMID:21199247

  1. Evaluation Theory, Models, and Applications

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing…

  2. Approaches for Increasing Acceptance of Physiologically Based Pharmacokinetic Models in Public Health Risk Assessment

    EPA Science Inventory

    Physiologically based pharmacokinetic (PBPK) models have great potential for application in regulatory and non-regulatory public health risk assessment. The development and application of PBPK models in chemical toxicology has grown steadily since their emergence in the 1980s. Ho...

  3. Stochastic modelling of temperatures affecting the in situ performance of a solar-assisted heat pump: The multivariate approach and physical interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loveday, D.L.; Craggs, C.

    Box-Jenkins-based multivariate stochastic modeling is carried out using data recorded from a domestic heating system. The system comprises an air-source heat pump sited in the roof space of a house, solar assistance being provided by the conventional tile roof acting as a radiation absorber. Multivariate models are presented which illustrate the time-dependent relationships between three air temperatures - at external ambient, at entry to, and at exit from, the heat pump evaporator. Using a deterministic modeling approach, physical interpretations are placed on the results of the multivariate technique. It is concluded that the multivariate Box-Jenkins approach is a suitable technique for building thermal analysis. Application to multivariate model-based control is discussed, with particular reference to building energy management systems. It is further concluded that stochastic modeling of data drawn from a short monitoring period offers a means of retrofitting an advanced model-based control system in existing buildings, which could be used to optimize energy savings. An approach to system simulation is suggested.
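    A minimal sketch of a multivariate time-series fit in the same spirit, using a vector autoregression from statsmodels on synthetic versions of the three air temperatures; the coupling coefficients and noise levels are invented for illustration.

      import numpy as np
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(0)
      n = 500
      ambient = np.cumsum(rng.normal(0, 0.1, n)) + 10            # external ambient
      evap_in = 0.7 * ambient + 3 + rng.normal(0, 0.2, n)        # entry to evaporator
      evap_out = 0.9 * evap_in - 4 + rng.normal(0, 0.2, n)       # exit from evaporator
      data = np.column_stack([ambient, evap_in, evap_out])

      fit = VAR(data).fit(maxlags=4, ic="aic")   # lag order chosen by AIC
      print(fit.summary())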

  4. Improving homology modeling of G-protein coupled receptors through multiple-template derived conserved inter-residue interactions

    NASA Astrophysics Data System (ADS)

    Chaudhari, Rajan; Heim, Andrew J.; Li, Zhijun

    2015-05-01

    As evidenced by the three rounds of G-protein coupled receptor (GPCR) Dock competitions, improving homology modeling methods for helical transmembrane proteins, including the GPCRs, based on templates of low sequence identity remains a prominent challenge. Current approaches addressing this challenge adopt the philosophy of "modeling first, refinement next". In the present work, we developed an alternative modeling approach through the novel application of available multiple templates. First, conserved inter-residue interactions are derived from each additional template through conservation analysis of each template-target pairwise alignment. Then, these interactions are converted into distance restraints and incorporated in the homology modeling process. This approach was applied to modeling of the human β2 adrenergic receptor using bovine rhodopsin and the human protease-activated receptor 1 as templates, and improved model quality was demonstrated compared to homology models generated by standard single-template and multiple-template methods. This method of "refined restraints first, modeling next" provides a fast and complementary alternative to current modeling approaches. It allows rational identification and implementation of additional conserved distance restraints extracted from multiple templates and/or experimental data, and has the potential to be applicable to modeling of all helical transmembrane proteins.

  5. Multilayer shallow water models with locally variable number of layers and semi-implicit time discretization

    NASA Astrophysics Data System (ADS)

    Bonaventura, Luca; Fernández-Nieto, Enrique D.; Garres-Díaz, José; Narbona-Reina, Gladys

    2018-07-01

    We propose an extension of the discretization approaches for multilayer shallow water models, aimed at making them more flexible and efficient for realistic applications to coastal flows. A novel discretization approach is proposed, in which the number of vertical layers and their distribution are allowed to change in different regions of the computational domain. Furthermore, semi-implicit schemes are employed for the time discretization, leading to a significant efficiency improvement for subcritical regimes. We show that, in the typical regimes in which the application of multilayer shallow water models is justified, the resulting discretization does not introduce any major spurious features and again allows a substantial reduction of the computational cost in areas with complex bathymetry. As an example of the potential of the proposed technique, an application to a sediment transport problem is presented, showing a remarkable improvement with respect to standard discretization approaches.

  6. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules.

    PubMed

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-12-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.
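    A minimal sketch of the three families of error bars compared in the study, on random stand-in descriptor data: Gaussian Process predictive standard deviation, Random Forest ensemble spread, and Mahalanobis distance to the training set.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5))                            # stand-in molecular descriptors
      y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, 200)    # stand-in log-solubility
      X_new = rng.normal(size=(10, 5))

      gp = GaussianProcessRegressor().fit(X, y)
      mean, gp_std = gp.predict(X_new, return_std=True)        # Bayesian error bars

      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
      tree_preds = np.stack([t.predict(X_new) for t in rf.estimators_])
      rf_std = tree_preds.std(axis=0)                          # ensemble-based error bars

      cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
      diff = X_new - X.mean(axis=0)
      maha = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))  # distance to training data
      print(gp_std.round(2), rf_std.round(2), maha.round(2))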

  7. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules.

    PubMed

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.

  8. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-12-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.

  9. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.

  10. Hybrid Air Quality Modeling Approach for Use in the Near-Road Exposures to Urban Air Pollutant Study (NEXUS)

    EPA Science Inventory

    The paper presents a hybrid air quality modeling approach and its application in NEXUS in order to provide spatial and temporally varying exposure estimates and identification of the mobile source contribution to the total pollutant exposure. Model-based exposure metrics, associa...

  11. Coupling and Comparing a Spatially- and temporally-detailed Eutrophication Model with an Ecosystem Network Model: An Initial Application to Chesapeake Bay.

    EPA Science Inventory

    Coastal waters are modeled for a variety of purposes including eutrophication remediation and fisheries management. Combining these two approaches provides insights which are not available from either approach independently. Coupling is confounded, however, by differences in mode...

  12. Data Aggregation, Curation and Modeling Approaches to Deliver Prediction Models to Support Computational Toxicology at the EPA (ACS Fall meeting)

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program develops and utilizes QSAR modeling approaches across a broad range of applications. In terms of physical chemistry we have a particular interest in the prediction of basic physicochemical parameters ...

  13. Improving the performance of the mass transfer-based reference evapotranspiration estimation approaches through a coupled wavelet-random forest methodology

    NASA Astrophysics Data System (ADS)

    Shiri, Jalal

    2018-06-01

    Among different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been less studied. These approaches utilize temperature and wind speed records. On the other hand, the empirical equations proposed in this context generally produce weak simulations, except when a local calibration is used to improve their performance. This can be a crucial drawback for those equations when local data for the calibration procedure are scarce. So, the application of heuristic methods can be considered as a substitute for improving the performance accuracy of the mass transfer-based approaches. However, given that wind speed records usually have higher variation magnitudes than the other meteorological parameters, coupling a wavelet transform with the heuristic models becomes necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology is proposed for the first time to improve the performance accuracy of the mass transfer-based ETo estimation approaches, using cross-validation data management scenarios at both local and cross-station scales. The obtained results revealed that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improved the performance accuracy of the single RF models as well as of the empirical equations to a great extent.
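    A minimal sketch of the coupled idea, with synthetic data standing in for station records: the high-variance wind series is decomposed with a discrete wavelet transform, each level is reconstructed as a same-length subseries, and the subseries join temperature as random forest inputs. The wavelet choice and data-generating values are assumptions.

      import numpy as np
      import pywt
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      n = 1024
      temp = 15 + 10 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 1, n)
      wind = np.abs(rng.normal(2, 1.5, n))                     # high-variance wind record
      eto = 0.3 * temp + 0.8 * wind + rng.normal(0, 0.3, n)    # synthetic ETo target

      coeffs = pywt.wavedec(wind, "db4", level=3)              # multilevel DWT of wind
      subseries = []
      for i in range(len(coeffs)):
          kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
          subseries.append(pywt.waverec(kept, "db4")[:n])      # one subseries per level

      X = np.column_stack([temp] + subseries)
      model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, eto)
      print("R^2 on training data:", model.score(X, eto))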

  14. The Application of a Transdisciplinary Model for Early Intervention Services

    ERIC Educational Resources Information Center

    King, Gillian; Strachan, Deborah; Tucker, Michelle; Duwyn, Betty; Desserud, Sharon; Shillington, Monique

    2009-01-01

    This article reviews the literature on the transdisciplinary approach to early intervention services and identifies the essential elements of this approach. A practice model describing the implementation of the approach is then presented, based on the experiences of staff members in a home visiting program for infants that has been in existence…

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, J. Y.; Riley, W. J.

    We present a generic flux limiter to account for mass limitations from an arbitrary number of substrates in a biogeochemical reaction network. The flux limiter is based on the observation that substrate (e.g., nitrogen, phosphorus) limitation in biogeochemical models can be represented so as to ensure mass-conservative and non-negative numerical solutions to the governing ordinary differential equations. Application of the flux limiter includes two steps: (1) formulation of the biogeochemical processes with a matrix of stoichiometric coefficients and (2) application of Liebig's law of the minimum using the dynamic stoichiometric relationship of the reactants. This approach contrasts with the ad hoc down-regulation approaches that are implemented in many existing models (such as CLM4.5 and the ACME (Accelerated Climate Modeling for Energy) Land Model (ALM)) of carbon and nutrient interactions, which are error prone when adding new processes, even for experienced modelers. Through an example implementation with a CENTURY-like decomposition model that includes carbon, nitrogen, and phosphorus, we show that our approach (1) produced almost identical results to those from the ad hoc down-regulation approaches under non-limiting nutrient conditions, (2) properly resolved the negative solutions under substrate-limited conditions where the simple clipping approach failed, and (3) successfully avoided the potential conceptual ambiguities that are implied by those ad hoc down-regulation approaches. We expect our approach will make future biogeochemical models easier to improve and more robust.
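    A minimal numpy sketch of the two steps, with invented pool sizes and stoichiometry, using a single global scaling factor as a simplified stand-in for the paper's limiter: potential fluxes are defined by a stoichiometric matrix, and Liebig's law of the minimum scales them so no substrate pool is driven negative over the time step.

      import numpy as np

      substrates = np.array([10.0, 0.5, 2.0])      # e.g., C, N, P pools
      S = np.array([[-1.0, -0.2, -0.02],           # reaction 1 consumes C, N, P
                    [-2.0,  0.0, -0.10]])          # reaction 2 consumes C and P
      potential = np.array([4.0, 3.0])             # potential reaction rates
      dt = 1.0

      consumption = (S.T * potential).clip(max=0)  # per-substrate consumption terms
      demand = -consumption.sum(axis=1) * dt       # total demand on each substrate
      ratio = np.where(demand > 0, substrates / np.maximum(demand, 1e-30), np.inf)
      limiter = min(1.0, ratio.min())              # law of the minimum across substrates
      fluxes = potential * limiter
      substrates += (S.T @ fluxes) * dt            # mass-conservative, non-negative update
      print(fluxes, substrates)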

  16. Multi-Flight-Phase GPS Navigation Filter Applications to Terrestrial Vehicle Navigation and Positioning

    NASA Technical Reports Server (NTRS)

    Park, Young W.; Montez, Moises N.

    1994-01-01

    A candidate onboard space navigation filter demonstrated excellent performance (RMS semi-major-axis accuracy better than 8 meters) in performing orbit determination of a low-Earth orbit Explorer satellite using single-frequency real GPS data. This performance is significantly better than predicted by other simulation studies using dual-frequency GPS data. The study results revealed the significance of two new modeling approaches evaluated in the work. One approach introduces a single-frequency ionospheric correction through pseudo-range and phase-range averaging. The other approach demonstrates a precise axis-dependent characterization of dynamic sample space uncertainty to compute a more accurate Kalman filter gain. Additionally, this navigation filter demonstrates the flexibility to accommodate both perturbational dynamic and observational biases required for multi-flight-phase and inhomogeneous application environments. This paper reviews the potential application of these methods and the filter structure to terrestrial vehicle navigation and positioning applications. Both the single-frequency ionospheric correction method and the axis-dependent state noise modeling approach offer valuable contributions in cost and accuracy improvements for terrestrial GPS receivers. With a modular design approach to either 'plug in' or 'unplug' various force models, this multi-flight-phase navigation filter design structure also provides a versatile GPS navigation software engine for both atmospheric and exo-atmospheric navigation or positioning use, thereby streamlining the flight-phase or application-dependent software requirements. Thus, a standardized GPS navigation software engine that can reduce the development and maintenance cost of commercial GPS receivers is now possible.

  17. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs and is computationally efficient, although this comes at the price of some loss of estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
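    A toy version of the two-stage idea for a scalar random-coefficient decay model dx/dt = -a*x (the paper's mixed-effects MPLE machinery is considerably richer); the smoothing-spline settings and names are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def two_stage_estimate(subjects):
    """Two-stage estimation for the toy random-coefficient ODE
    dx/dt = -a * x, with one random decay rate a_i per subject.

    Stage 1: smooth each noisy trajectory and differentiate the
             smoother, so the ODE never has to be solved numerically.
    Stage 2: per-subject least squares for a_i, then pool the a_i into
             a population mean and variance (the mixed-effects flavor).
    """
    a_hat = []
    for t, y in subjects:                            # (times, observations)
        spline = UnivariateSpline(t, y, k=4, s=len(t))   # stage 1
        x, dx = spline(t), spline.derivative()(t)
        a_hat.append(-np.sum(dx * x) / np.sum(x * x))    # stage 2
    a_hat = np.asarray(a_hat)
    return a_hat.mean(), a_hat.var(ddof=1)
```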

  18. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted-average methods with equal or varying weights and a maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance advantage becomes more pronounced. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
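    The magnitude-dependent weighting plus law-of-total-probability step can be caricatured as follows; this toy sketch assumes Gaussian member likelihoods, uses the ensemble mean to pick the flow-magnitude bin at forecast time, and requires every bin to contain training samples. It is not the paper's e-Bay code:

```python
import numpy as np

def dynamic_average(sim, sim_train, obs_train, n_bins=5):
    """Toy magnitude-dependent Bayesian averaging of ensemble members.

    sim                  : (n_members, n_times) candidate simulations.
    sim_train, obs_train : training-period simulations/observations.

    Member weights are posterior-like scores computed separately in each
    flow-magnitude bin (assumed non-empty), then combined through the
    law of total probability to give the expected discharge."""
    edges = np.quantile(obs_train, np.linspace(0.0, 1.0, n_bins + 1))
    train_bin = np.digitize(obs_train, edges[1:-1])
    weights = np.empty((n_bins, sim.shape[0]))
    for b in range(n_bins):
        err = sim_train[:, train_bin == b] - obs_train[train_bin == b]
        like = np.exp(-0.5 * np.mean(err ** 2, axis=1) / obs_train.var())
        weights[b] = like / like.sum()
    # At forecast time the bin is picked from the ensemble mean (a proxy,
    # since the observation is unknown there).
    fc_bin = np.digitize(sim.mean(axis=0), edges[1:-1])
    return np.einsum('mt,tm->t', sim, weights[fc_bin])
```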

  19. Collaborative development of predictive toxicology applications

    PubMed Central

    2010-01-01

    OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals. The OpenTox Framework includes APIs and services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, and reporting which may be combined into multiple applications satisfying a variety of different user needs. OpenTox applications are based on a set of distributed, interoperable OpenTox API-compliant REST web services. The OpenTox approach to ontology allows for efficient mapping of complementary data coming from different datasets into a unifying structure having a shared terminology and representation. Two initial OpenTox applications are presented as an illustration of the potential impact of OpenTox for high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints: ToxPredict which predicts and reports on toxicities for endpoints for an input chemical structure, and ToxCreate which builds and validates a predictive toxicity model based on an input toxicology dataset. Because of the extensible nature of the standardised Framework design, barriers of interoperability between applications and content are removed, as the user may combine data, models and validation from multiple sources in a dependable and time-effective way. PMID:20807436

  20. Collaborative development of predictive toxicology applications.

    PubMed

    Hardy, Barry; Douglas, Nicki; Helma, Christoph; Rautenberg, Micha; Jeliazkova, Nina; Jeliazkov, Vedrin; Nikolova, Ivelina; Benigni, Romualdo; Tcheremenskaia, Olga; Kramer, Stefan; Girschick, Tobias; Buchwald, Fabian; Wicker, Joerg; Karwath, Andreas; Gütlein, Martin; Maunz, Andreas; Sarimveis, Haralambos; Melagraki, Georgia; Afantitis, Antreas; Sopasakis, Pantelis; Gallagher, David; Poroikov, Vladimir; Filimonov, Dmitry; Zakharov, Alexey; Lagunin, Alexey; Gloriozova, Tatyana; Novikov, Sergey; Skvortsova, Natalia; Druzhilovsky, Dmitry; Chawla, Sunil; Ghosh, Indira; Ray, Surajit; Patel, Hitesh; Escher, Sylvia

    2010-08-31

    OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals. The OpenTox Framework includes APIs and services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, and reporting which may be combined into multiple applications satisfying a variety of different user needs. OpenTox applications are based on a set of distributed, interoperable OpenTox API-compliant REST web services. The OpenTox approach to ontology allows for efficient mapping of complementary data coming from different datasets into a unifying structure having a shared terminology and representation. Two initial OpenTox applications are presented as an illustration of the potential impact of OpenTox for high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints: ToxPredict which predicts and reports on toxicities for endpoints for an input chemical structure, and ToxCreate which builds and validates a predictive toxicity model based on an input toxicology dataset. Because of the extensible nature of the standardised Framework design, barriers of interoperability between applications and content are removed, as the user may combine data, models and validation from multiple sources in a dependable and time-effective way.
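    Because OpenTox applications are composed from REST web services, a client interaction reduces to plain HTTP; the sketch below follows the general pattern of the published OpenTox API (uri-list responses, model resources applied to dataset URIs), but the base URL, compound id, and exact parameters should be treated as illustrative:

```python
import requests

BASE = "https://opentox.example.org"   # hypothetical service root

# Discover available model resources (responses are plain URI lists).
models = requests.get(f"{BASE}/model", headers={"Accept": "text/uri-list"})
model_uri = models.text.strip().splitlines()[0]   # assumes at least one model

# Apply the model to a dataset/compound by POSTing its URI; the service
# answers with the URI of a task or of the resulting prediction dataset.
result = requests.post(model_uri,
                       data={"dataset_uri": f"{BASE}/compound/42"},
                       headers={"Accept": "text/uri-list"})
print("prediction resource:", result.text.strip())
```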

  1. Computational neuroanatomy: ontology-based representation of neural components and connectivity.

    PubMed

    Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron

    2009-02-05

    A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.

  2. Web-based applications for building, managing and analysing kinetic models of biological systems.

    PubMed

    Lee, Dong-Yup; Saha, Rajib; Yusufi, Faraaz Noor Khan; Park, Wonjun; Karimi, Iftekhar A

    2009-01-01

    Mathematical modelling and computational analysis play an essential role in improving our capability to elucidate the functions and characteristics of complex biological systems such as metabolic, regulatory and cell signalling pathways. The modelling and concomitant simulation render it possible to predict the cellular behaviour of systems under various genetically and/or environmentally perturbed conditions. This motivates systems biologists/bioengineers/bioinformaticians to develop new tools and applications, allowing non-experts to easily conduct such modelling and analysis. However, among a multitude of systems biology tools developed to date, only a handful of projects have adopted a web-based approach to kinetic modelling. In this report, we evaluate the capabilities and characteristics of current web-based tools in systems biology and identify desirable features, limitations and bottlenecks for further improvements in terms of usability and functionality. A short discussion on software architecture issues involved in web-based applications and the approaches taken by existing tools is included for those interested in developing their own simulation applications.

  3. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    NASA Astrophysics Data System (ADS)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

    Reduced Order Modeling (ROM) can be used as surrogates to prohibitively expensive simulations to model flow behavior for long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial intelligence based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short-Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches will be elucidated and further developments discussed.
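    The core of the approach, projecting snapshots onto POD modes and training an LSTM to advance the modal coefficients, fits in a short PyTorch sketch; the snapshot file, mode count, window length, and training settings are illustrative assumptions:

```python
import numpy as np
import torch
import torch.nn as nn

# POD: snapshot matrix (n_space, n_time) -> r modal time coefficients.
snapshots = np.load("flow_snapshots.npy")        # assumed training data file
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r, k = 8, 10                                     # modes kept, history window
coeffs = (np.diag(s[:r]) @ Vt[:r]).T             # (n_time, r) coefficients

# Supervised pairs: k past coefficient vectors -> coefficients at t+1.
X = np.stack([coeffs[i:i + k] for i in range(len(coeffs) - k)])
Y = coeffs[k:]
X_t = torch.tensor(X, dtype=torch.float32)
Y_t = torch.tensor(Y, dtype=torch.float32)

class CoeffLSTM(nn.Module):
    def __init__(self, n_modes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_modes, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_modes)
    def forward(self, x):                        # x: (batch, k, n_modes)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])             # predict the next step

model = CoeffLSTM(r)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                             # illustrative short training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_t), Y_t)
    loss.backward()
    opt.step()
```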

  4. Development and application of a 3-D geometry/mass model for LDEF satellite ionizing radiation assessments

    NASA Technical Reports Server (NTRS)

    Colborn, B. L.; Armstong, T. W.

    1993-01-01

    A three-dimensional geometry and mass model of the Long Duration Exposure Facility (LDEF) spacecraft and experiment trays was developed for use in predictions and data interpretation related to ionizing radiation measurements. The modeling approach, level of detail incorporated, example models for specific experiments and radiation dosimeters, and example applications of the model are described.

  5. A Decentralized Adaptive Approach to Fault Tolerant Flight Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Nikulin, Vladimir; Heimes, Felix; Shormin, Victor

    2000-01-01

    This paper briefly reports some results of our study on the application of a decentralized adaptive control approach to a 6 DOF nonlinear aircraft model. The simulation results showed the potential of using this approach to achieve fault tolerant control. Based on this observation and some analysis, the paper proposes a multiple channel adaptive control scheme that makes use of the functionally redundant actuating and sensing capabilities in the model, and explains how to implement the scheme to tolerate actuator and sensor failures. The conditions under which the scheme is applicable are stated in the paper.

  6. Interactive Visual Analytics Approach for Exploration of Geochemical Model Simulations with Different Parameter Sets

    NASA Astrophysics Data System (ADS)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2015-04-01

    Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. To address this problem, we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses visualization methods of hierarchical horizontal axis, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data together with automatic sensitivity analysis. This guides the users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach with two examples from gas storage applications. For the first example our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered to be at local thermodynamic equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. For the second example our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction is thermodynamically favorable under a broad range of conditions, including low temperatures and the absence of microbial catalysts. Our approach has potential for use in other applications that involve exploration of relationships in geochemical simulation model data.

  7. Flamelet Model Application for Non-Premixed Turbulent Combustion

    NASA Technical Reports Server (NTRS)

    Secundov, A.; Bezgin, L.; Buriko, Yu.; Guskov, O.; Kopchenov, V.; Laskin, I.; Lomkov, K.; Tshepin, S.; Volkov, D.; Zaitsev, S.

    1996-01-01

    This Final Report contains the results of a study performed at the Scientific Research Center 'ECOLEN' (Moscow, Russia). The study concerns the development and verification of an inexpensive approach for modeling supersonic turbulent diffusion flames based on a flamelet treatment of the chemistry/turbulence interaction (FL approach). The research included: development of the approach and CFD tests of the flamelet model for supersonic jet flames; development of a simplified procedure for solving the flamelet equations based on a partial-equilibrium chemistry assumption; and a study of the flame ignition/extinction predictions provided by the flamelet model. The investigation demonstrated that the FL approach satisfactorily describes the main features of supersonic H2/air jet flames. The model also demonstrated a strong capability to reduce the computational expense of CFD modeling of supersonic flames while accounting for detailed oxidation chemistry. However, this study also identified some disadvantages and restrictions of the existing version of the approach: (1) inaccurate predictions of the passive scalar statistics by our turbulence model for one of the test cases considered; and (2) applicability of the available version of the flamelet model only to flames without a large ignition-delay distance. Based on these results, we formulated and submitted to the National Aeronautics and Space Administration a Project Proposal for follow-on research directed toward further improvement of the FL approach.
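    In flamelet methods the chemistry/turbulence interaction is commonly closed with a presumed beta PDF of the mixture fraction; the sketch below shows that standard averaging step (whether ECOLEN's code used exactly this form is not stated in the abstract, and the Z grid is assumed to avoid the singular endpoints Z = 0 and Z = 1):

```python
import numpy as np
from scipy.stats import beta
from scipy.integrate import trapezoid

def presumed_pdf_mean(Z, phi_of_Z, Z_mean, Z_var):
    """Mean of a flamelet profile phi(Z) under a presumed beta PDF of
    the mixture fraction Z with the given local mean and variance."""
    g = Z_mean * (1.0 - Z_mean) / Z_var - 1.0   # so that a + b = g
    a, b = Z_mean * g, (1.0 - Z_mean) * g
    w = beta.pdf(Z, a, b)                       # PDF weights on the Z grid
    return trapezoid(phi_of_Z * w, Z) / trapezoid(w, Z)
```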

  8. PREDICTIVE MODELING OF LIGHT-INDUCED MORTALITY OF ENTEROCOCCI FAECALIS IN RECREATIONAL WATERS

    EPA Science Inventory

    One approach to predictive modeling of biological contamination of recreational waters involves the application of process-based approaches that consider microbial sources, hydrodynamic transport, and microbial fate. This presentation focuses on one important fate process, light-...

  9. Development of a fourth generation predictive capability maturity model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel

    2013-09-01

    The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been for physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are 1) the communication of computational simulation capability, accurately and transparently, and 2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal to provide more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.

  10. Technical Note: A generic law-of-the-minimum flux limiter for simulating substrate limitation in biogeochemical models

    DOE PAGES

    Tang, J. Y.; Riley, W. J.

    2016-02-05

    We present a generic flux limiter to account for mass limitations from an arbitrary number of substrates in a biogeochemical reaction network. The flux limiter is based on the observation that substrate (e.g., nitrogen, phosphorus) limitation in biogeochemical models can be represented so as to ensure mass-conservative and non-negative numerical solutions to the governing ordinary differential equations. Application of the flux limiter includes two steps: (1) formulation of the biogeochemical processes with a matrix of stoichiometric coefficients and (2) application of Liebig's law of the minimum using the dynamic stoichiometric relationship of the reactants. This approach contrasts with the ad hoc down-regulation approaches that are implemented in many existing models (such as CLM4.5 and the ACME (Accelerated Climate Modeling for Energy) Land Model (ALM)) of carbon and nutrient interactions, which are error prone when adding new processes, even for experienced modelers. Through an example implementation with a CENTURY-like decomposition model that includes carbon, nitrogen, and phosphorus, we show that our approach (1) produced almost identical results to those from the ad hoc down-regulation approaches under non-limiting nutrient conditions, (2) properly resolved the negative solutions under substrate-limited conditions where the simple clipping approach failed, and (3) successfully avoided the potential conceptual ambiguities that are implied by those ad hoc down-regulation approaches. We expect our approach will make future biogeochemical models easier to improve and more robust.

  11. Experiment Design for Nonparametric Models Based On Minimizing Bayes Risk: Application to Voriconazole

    PubMed Central

    Bayard, David S.; Neely, Michael

    2016-01-01

    An experimental design approach is presented for individualized therapy in the special case where the prior information is specified by a nonparametric (NP) population model. Here, a nonparametric model refers to a discrete probability model characterized by a finite set of support points and their associated weights. An important question arises as to how to best design experiments for this type of model. Many experimental design methods are based on Fisher Information or other approaches originally developed for parametric models. While such approaches have been used with some success across various applications, it is interesting to note that they largely fail to address the fundamentally discrete nature of the nonparametric model. Specifically, the problem of identifying an individual from a nonparametric prior is more naturally treated as a problem of classification, i.e., to find a support point that best matches the patient’s behavior. This paper studies the discrete nature of the NP experiment design problem from a classification point of view. Several new insights are provided including the use of Bayes Risk as an information measure, and new alternative methods for experiment design. One particular method, denoted as MMopt (Multiple-Model Optimal), will be examined in detail and shown to require minimal computation while having distinct advantages compared to existing approaches. Several simulated examples, including a case study involving oral voriconazole in children, are given to demonstrate the usefulness of MMopt in pharmacokinetics applications. PMID:27909942
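    The classification view can be made concrete with a pairwise Bhattacharyya-type bound on misclassification risk, which is the flavor of response-overlap measure that a multiple-model design criterion minimizes; the one-compartment responses, prior weights, and noise level below are illustrative, not the voriconazole model:

```python
import itertools
import numpy as np

def bayes_risk_bound(times, responses, weights, sigma):
    """Pairwise Bhattacharyya-type upper bound on the probability of
    misclassifying which support point generated the data, assuming
    additive Gaussian measurement noise of scale sigma."""
    risk = 0.0
    for i, j in itertools.combinations(range(len(responses)), 2):
        d2 = np.sum((responses[i](times) - responses[j](times)) ** 2)
        risk += np.sqrt(weights[i] * weights[j]) * np.exp(-d2 / (8.0 * sigma**2))
    return risk

# Toy one-compartment responses for three support points (illustrative
# parameters) and their prior weights.
responses = [lambda t, ke=ke: 10.0 * np.exp(-ke * t) for ke in (0.1, 0.3, 0.9)]
weights = (0.3, 0.4, 0.3)

# Exhaustive search for the best two-sample-time design on a grid.
grid = np.linspace(0.5, 12.0, 24)
best = min(itertools.combinations(grid, 2),
           key=lambda ts: bayes_risk_bound(np.asarray(ts), responses,
                                           weights, 1.0))
print("best two sampling times (h):", best)
```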

  12. Experiment design for nonparametric models based on minimizing Bayes Risk: application to voriconazole.

    PubMed

    Bayard, David S; Neely, Michael

    2017-04-01

    An experimental design approach is presented for individualized therapy in the special case where the prior information is specified by a nonparametric (NP) population model. Here, a NP model refers to a discrete probability model characterized by a finite set of support points and their associated weights. An important question arises as to how to best design experiments for this type of model. Many experimental design methods are based on Fisher information or other approaches originally developed for parametric models. While such approaches have been used with some success across various applications, it is interesting to note that they largely fail to address the fundamentally discrete nature of the NP model. Specifically, the problem of identifying an individual from a NP prior is more naturally treated as a problem of classification, i.e., to find a support point that best matches the patient's behavior. This paper studies the discrete nature of the NP experiment design problem from a classification point of view. Several new insights are provided including the use of Bayes Risk as an information measure, and new alternative methods for experiment design. One particular method, denoted as MMopt (multiple-model optimal), will be examined in detail and shown to require minimal computation while having distinct advantages compared to existing approaches. Several simulated examples, including a case study involving oral voriconazole in children, are given to demonstrate the usefulness of MMopt in pharmacokinetics applications.

  13. Verification of road databases using multiple road models

    NASA Astrophysics Data System (ADS)

    Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in form of two probability distributions, the first one for the state of a database object (correct or incorrect), and a second one for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer Theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with higher completeness. Additional experiments reveal that based on the proposed method a highly reliable semi-automatic approach for road database verification can be designed.
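    The mapping of each module's two distributions onto the frame {correct, incorrect, unknown} and their fusion by Dempster's rule can be sketched directly (the numbers and names are illustrative):

```python
def module_masses(p_correct, p_applicable):
    """Map a module's two distributions onto the frame
    {C: correct, I: incorrect, U: unknown}: a module whose road model
    does not apply commits its belief to 'unknown'."""
    return {"C": p_applicable * p_correct,
            "I": p_applicable * (1.0 - p_correct),
            "U": 1.0 - p_applicable}

def combine(m1, m2):
    """Dempster's rule of combination on {C, I, U} with U = {C, I}."""
    k = 1.0 - (m1["C"] * m2["I"] + m1["I"] * m2["C"])   # 1 - conflict
    return {"C": (m1["C"] * m2["C"] + m1["C"] * m2["U"] + m1["U"] * m2["C"]) / k,
            "I": (m1["I"] * m2["I"] + m1["I"] * m2["U"] + m1["U"] * m2["I"]) / k,
            "U": (m1["U"] * m2["U"]) / k}

# Two modules assessing the same database road object:
fused = combine(module_masses(0.9, 0.8), module_masses(0.4, 0.3))
state = max(fused, key=fused.get)        # optimal state of the road object
```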

  14. Domain and Specification Models for Software Engineering

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui

    1992-01-01

    This paper discusses our approach to representing application domain knowledge for specific software engineering tasks. Application domain knowledge is embodied in a domain model. Domain models are used to assist in the creation of specification models. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model. One aspect of the system-hierarchical organization is described in detail.

  15. Modelling of Operative Report Documents for Data Integration into an openEHR-Based Enterprise Data Warehouse.

    PubMed

    Haarbrandt, Birger; Wilschko, Andreas; Marschollek, Michael

    2016-01-01

    In order to integrate operative report documents from two operating room management systems into a data warehouse, we investigated the application of the two-level modelling approach of openEHR to create a shared data model. Based on the systems' analyses, a template consisting of 13 archetypes has been developed. Of these 13 archetypes, 3 have been obtained from the international archetype repository of the openEHR foundation. The remaining 10 archetypes have been newly created. The template was evaluated by an application system expert and through conducting a first test mapping of real-world data from one of the systems. The evaluation showed that by using the two-level modelling approach of openEHR, we succeeded in representing an integrated and shared information model for operative report documents. More research is needed to learn about the limitations of this approach in other data integration scenarios.

  16. Assessment of Current Process Modeling Approaches to Determine Their Limitations, Applicability and Developments Needed for Long-Fiber Thermoplastic Injection Molded Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Holbery, Jim; Smith, Mark T.

    2006-11-30

    This report describes the status of the current process modeling approaches to predict the behavior and flow of fiber-filled thermoplastics under injection molding conditions. Previously, models have been developed to simulate the injection molding of short-fiber thermoplastics, and an as-formed composite part or component can then be predicted that contains a microstructure resulting from the constituents’ material properties and characteristics as well as the processing parameters. Our objective is to assess these models in order to determine their capabilities and limitations, and the developments needed for long-fiber injection-molded thermoplastics (LFTs). First, the concentration regimes are summarized to facilitate the understanding of different types of fiber-fiber interaction that can occur for a given fiber volume fraction. After the formulation of the fiber suspension flow problem and the simplification leading to the Hele-Shaw approach, the interaction mechanisms are discussed. Next, the establishment of the rheological constitutive equation is presented that reflects the coupled flow/orientation nature. The decoupled flow/orientation approach is also discussed which constitutes a good simplification for many applications involving flows in thin cavities. Finally, before outlining the necessary developments for LFTs, some applications of the current orientation model and the so-called modified Folgar-Tucker model are illustrated through the fiber orientation predictions for selected LFT samples.

  17. Families with Noncompliant Children: Applications of the Systemic Model.

    ERIC Educational Resources Information Center

    Neilans, Thomas H.; And Others

    This paper describes the application of a systems approach model to assessing families with a labeled noncompliant child. The first section describes and comments on the applied methodology for the model. The second section describes the classification of 61 families containing a child labeled by the family as noncompliant. An analysis of data…

  18. Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)

    NASA Astrophysics Data System (ADS)

    Kasibhatla, P.

    2004-12-01

    In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
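    A minimal random-walk Metropolis sampler for a linear tracer inversion y = Hx + e shows how an arbitrary (non-Gaussian) prior enters with one extra function argument, which is precisely the flexibility argued for above; the step size and iteration count are illustrative:

```python
import numpy as np

def mcmc_source_inversion(H, y, sigma_obs, prior_logpdf,
                          n_iter=50_000, step=0.1, seed=0):
    """Random-walk Metropolis sampler for the linear tracer model
    y = H x + e with Gaussian observation error of scale sigma_obs.

    Any prior_logpdf(x) can be supplied (heavy-tailed, non-negative,
    hierarchical, ...), which is exactly the generality that the
    closed-form Gaussian solution lacks."""
    rng = np.random.default_rng(seed)
    x = np.zeros(H.shape[1])

    def log_post(x):
        r = y - H @ x
        return -0.5 * np.dot(r, r) / sigma_obs**2 + prior_logpdf(x)

    lp, samples = log_post(x), []
    for _ in range(n_iter):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.asarray(samples)   # posterior moments come from these draws
```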

  19. Reuse: A knowledge-based approach

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui

    1992-01-01

    This paper describes our research in automating the reuse process through the use of application domain models. Application domain models are explicit formal representations of the application knowledge necessary to understand, specify, and generate application programs. Furthermore, they provide a unified repository for the operational structure, rules, policies, and constraints of a specific application area. In our approach, domain models are expressed in terms of a transaction-based meta-modeling language. This paper has described in detail the creation and maintenance of hierarchical structures. These structures are created through a process that includes reverse engineering of data models with supplementary enhancement from application experts. Source code is also reverse engineered but is not a major source of domain model instantiation at this time. In the second phase of the software synthesis process, program specifications are interactively synthesized from an instantiated domain model. These specifications are currently integrated into a manual programming process but will eventually be used to derive executable code with mechanically assisted transformations. This research is performed within the context of programming-in-the-large types of systems. Although our goals are ambitious, we are implementing the synthesis system in an incremental manner through which we can realize tangible results. The client/server architecture is capable of supporting 16 simultaneous X/Motif users and tens of thousands of attributes and classes. Domain models have been partially synthesized from five different application areas. As additional domain models are synthesized and additional knowledge is gathered, we will inevitably add to and modify our representation. However, our current experience indicates that it will scale and expand to meet our modeling needs.

  20. APPLICATION OF THE SURFACE COMPLEXATION CONCEPT TO COMPLEX MINERAL ASSEMBLAGES

    EPA Science Inventory

    Two types of modeling approaches are illustrated for describing inorganic contaminant adsorption in aqueous environments: (a) the component additivity approach and (b) the generalized composite approach. Each approach is applied to simulate Zn2+ adsorption by a well-characterize...

  1. Computational aspects in mechanical modeling of the articular cartilage tissue.

    PubMed

    Mohammadi, Hadi; Mequanint, Kibret; Herzog, Walter

    2013-04-01

    This review focuses on the modeling of articular cartilage (at the tissue level), chondrocyte mechanobiology (at the cell level) and a combination of both in a multiscale computation scheme. The primary objective is to evaluate the advantages and disadvantages of conventional models implemented to study the mechanics of the articular cartilage tissue and chondrocytes. From monophasic material models as the simplest form to more complicated multiscale theories, these approaches have been frequently used to model articular cartilage and have contributed significantly to modeling joint mechanics, addressing and resolving numerous issues regarding cartilage mechanics and function. It should be noted that attentiveness is important when using different modeling approaches, as the choice of the model limits the applications available. In this review, we discuss the conventional models applicable to some of the mechanical aspects of articular cartilage such as lubrication, swelling pressure and chondrocyte mechanics and address some of the issues associated with the current modeling approaches. We then suggest future pathways for a more realistic modeling strategy as applied for the simulation of the mechanics of the cartilage tissue using multiscale and parallelized finite element method.

  2. AIR QUALITY MODELING OF HAZARDOUS POLLUTANTS: CURRENT STATUS AND FUTURE DIRECTIONS

    EPA Science Inventory

    The paper presents a review of current air toxics modeling applications and discusses possible advanced approaches. Many applications require the ability to predict hot spots from industrial sources or large roadways that are needed for community health and Environmental Justice...

  3. A Mechanistic Design Approach for Graphite Nanoplatelet (GNP) Reinforced Asphalt Mixtures for Low-Temperature Applications

    DOT National Transportation Integrated Search

    2018-01-01

    This report explores the application of a discrete computational model for predicting the fracture behavior of asphalt mixtures at low temperatures based on the results of simple laboratory experiments. In this discrete element model, coarse aggregat...

  4. An effective medium approach to modelling the pressure-dependent electrical properties of porous rocks

    NASA Astrophysics Data System (ADS)

    Han, Tongcheng

    2018-07-01

    Understanding the electrical properties of rocks under varying pressure is important for a variety of geophysical applications. This study proposes an approach to modelling the pressure-dependent electrical properties of porous rocks based on an effective medium model. The so-named Textural model uses the aspect ratios and pressure-dependent volume fractions of the pores and the aspect ratio and electrical conductivity of the matrix grains. The pores were represented by randomly oriented stiff and compliant spheroidal shapes with constant aspect ratios, and their pressure-dependent volume fractions were inverted from the measured variation of total porosity with differential pressure using a dual porosity model. The unknown constant stiff and compliant pore aspect ratios and the aspect ratio and electrical conductivity of the matrix grains were inverted by best fitting the modelled electrical formation factor to the measured data. Application of the approach to three sandstone samples covering a broad porosity range showed that the pressure-dependent electrical properties can be satisfactorily modelled by the proposed approach. The results demonstrate that the dual porosity concept is sufficient to explain the electrical properties of porous rocks under pressure through the effective medium model scheme.
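    The inversion of stiff and compliant pore fractions from a porosity-pressure curve can be illustrated with one common dual-porosity parameterization (a stiff part closing roughly linearly with pressure plus a compliant part closing exponentially); the functional form, parameter values, and synthetic data below are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_porosity(P, phi_s0, c_s, phi_c0, P_char):
    """Total porosity vs differential pressure P: a linearly closing
    stiff part plus an exponentially closing compliant part."""
    return phi_s0 - c_s * P + phi_c0 * np.exp(-P / P_char)

rng = np.random.default_rng(1)
P = np.linspace(5.0, 50.0, 10)                       # MPa
phi_obs = dual_porosity(P, 0.20, 1e-4, 0.005, 10.0)  # synthetic "data"
phi_obs += 1e-4 * rng.standard_normal(P.size)        # measurement noise
popt, _ = curve_fit(dual_porosity, P, phi_obs, p0=(0.2, 1e-4, 0.01, 15.0))
```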

  5. Towards more accurate and reliable predictions for nuclear applications

    NASA Astrophysics Data System (ADS)

    Goriely, Stephane; Hilaire, Stephane; Dubray, Noel; Lemaître, Jean-François

    2017-09-01

    The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most of the nuclear data evaluation and prediction are still performed on the basis of phenomenological nuclear models. For the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. Nowadays mean-field models can be tuned at the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and therefore can replace the phenomenological inputs in the evaluation of nuclear data. The latest achievements to determine nuclear masses within the non-relativistic HFB approach, including the related uncertainties in the model predictions, are discussed. Similarly, recent efforts to determine fission observables within the mean-field approach are described and compared with more traditional existing models.

  6. Statistical processing of large image sequences.

    PubMed

    Khellah, F; Fieguth, P; Murray, M J; Allen, M

    2005-01-01

    The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.

  7. Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach.

    DTIC Science & Technology

    1998-05-01

    Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach, by Biing T. Guan, George Z. Gertner, and Alan B... The study models training site vegetation coverage based on past coverage; a literature survey was conducted to identify artificial neural network analysis techniques applicable for...

  8. Configurational coupled cluster approach with applications to magnetic model systems

    NASA Astrophysics Data System (ADS)

    Wu, Siyuan; Nooijen, Marcel

    2018-05-01

    A general exponential, coupled cluster like, approach is discussed to extract an effective Hamiltonian in configurational space, as a sum of 1-body, 2-body up to n-body operators. The simplest two-body approach is illustrated by calculations on simple magnetic model systems. A key feature of the approach is that equations up to a certain rank do not depend on higher body cluster operators.

  9. Towards a Viscous Wall Model for Immersed Boundary Methods

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Immersed boundary methods are frequently employed for simulating flows at low Reynolds numbers or for applications where viscous boundary layer effects can be neglected. The primary shortcoming of Cartesian mesh immersed boundary methods is the inability to efficiently resolve thin turbulent boundary layers in high-Reynolds-number flow applications. The inefficiency in resolving the thin boundary layer is associated with the use of constant-aspect-ratio Cartesian grid cells. Conventional CFD approaches can efficiently resolve the large wall-normal gradients by utilizing large-aspect-ratio cells near the wall. This paper presents different approaches for immersed boundary methods to account for the viscous boundary layer interaction with the flow-field away from the walls. Different wall modeling approaches proposed in previous research studies are addressed and compared to a new integral boundary layer based approach. In contrast to common wall-modeling approaches that usually only utilize local flow information, the integral boundary layer based approach keeps the streamwise history of the boundary layer. This allows the method to remain effective at much larger y+ values than local wall modeling approaches. After a theoretical discussion of the different approaches, the method is applied to increasingly more challenging flow fields including fully attached, separated, and shock-induced separated (laminar and turbulent) flows.
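    An integral wall model of the kind described carries the boundary layer's streamwise history in an ODE such as the von Karman momentum-integral equation; the explicit march below, with a Ludwieg-Tillmann skin-friction closure and a fixed shape factor, is a simplified sketch rather than the authors' formulation:

```python
import numpy as np

def march_momentum_integral(x, Ue, nu, theta0, H=1.4):
    """March the von Karman momentum-integral equation along a wall,

        d(theta)/dx = Cf/2 - (H + 2) * (theta / Ue) * dUe/dx,

    using the Ludwieg-Tillmann skin-friction closure with a fixed shape
    factor H. Returns momentum thickness theta and Cf at each station,
    i.e. the streamwise history a purely local wall model discards."""
    theta = np.empty_like(x)
    cf = np.empty_like(x)
    theta[0] = theta0
    dUe = np.gradient(Ue, x)                       # edge-velocity gradient
    for n in range(len(x)):
        re_theta = max(Ue[n] * theta[n] / nu, 1.0)
        cf[n] = 0.246 * 10.0 ** (-0.678 * H) * re_theta ** (-0.268)
        if n + 1 < len(x):                         # explicit Euler step
            rhs = 0.5 * cf[n] - (H + 2.0) * theta[n] / Ue[n] * dUe[n]
            theta[n + 1] = theta[n] + (x[n + 1] - x[n]) * rhs
    return theta, cf
```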

  10. Thoracic respiratory motion estimation from MRI using a statistical model and a 2-D image navigator.

    PubMed

    King, A P; Buerger, C; Tsoumpas, C; Marsden, P K; Schaeffter, T

    2012-01-01

    Respiratory motion models have potential application for estimating and correcting the effects of motion in a wide range of applications, for example in PET-MR imaging. Given that motion cycles caused by breathing are only approximately repeatable, an important quality of such models is their ability to capture and estimate the intra- and inter-cycle variability of the motion. In this paper we propose and describe a technique for free-form nonrigid respiratory motion correction in the thorax. Our model is based on a principal component analysis of the motion states encountered during different breathing patterns, and is formed from motion estimates made from dynamic 3-D MRI data. We apply our model using a data-driven technique based on a 2-D MRI image navigator. Unlike most previously reported work in the literature, our approach is able to capture both intra- and inter-cycle motion variability. In addition, the 2-D image navigator can be used to estimate how applicable the current motion model is, and hence report when more imaging data is required to update the model. We also use the motion model to decide on the best positioning for the image navigator. We validate our approach using MRI data acquired from 10 volunteers and demonstrate improvements of up to 40.5% over other reported motion modelling approaches, which corresponds to 61% of the overall respiratory motion present. Finally we demonstrate one potential application of our technique: MRI-based motion correction of real-time PET data for simultaneous PET-MRI acquisition.
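    The statistical backbone, PCA of training motion fields plus a data-driven mapping from the 2-D navigator to the PCA coefficients, can be sketched as follows; the linear regression here is a simplified stand-in for the paper's navigator-driven estimation, and the file names are assumptions:

```python
import numpy as np

# motion_fields: (n_dynamics, n_dofs) flattened 3-D displacement fields
# navigator:     (n_dynamics, n_nav)  features from the 2-D image navigator
motion_fields = np.load("motion_fields.npy")     # assumed training data
navigator = np.load("navigator_feats.npy")

mean = motion_fields.mean(axis=0)
U, s, Vt = np.linalg.svd(motion_fields - mean, full_matrices=False)
n_pc = 3
scores = U[:, :n_pc] * s[:n_pc]                  # PCA motion coefficients

# Linear map from navigator features (plus intercept) to coefficients.
design = np.c_[navigator, np.ones(len(navigator))]
A, *_ = np.linalg.lstsq(design, scores, rcond=None)

def estimate_motion(nav_feat):
    """Estimate a full 3-D motion field from one 2-D navigator frame."""
    coeff = np.append(nav_feat, 1.0) @ A
    return mean + coeff @ Vt[:n_pc]
```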

  11. A reduced order, test verified component mode synthesis approach for system modeling applications

    NASA Astrophysics Data System (ADS)

    Butland, Adam; Avitabile, Peter

    2010-05-01

    Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed interface normal modes and those based on a combination of free interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed interface normal modes is the inability to easily obtain the required information from testing; the result of this limitation is that constraint mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed interface normal modes. The CMS approach is then used with this test verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique with both numerical and simulated experimental components to describe the system and validate the proposed approach. Actual test data is then used in the proposed approach. Due to typical measurement data contaminants that are always included in any test, the measured data is further processed to remove contaminants and is then used in the proposed approach. The final case using improved data with the reduced order, test verified components is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. Use of the technique, along with its strengths and weaknesses, is discussed.
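    For reference, the constraint-mode/fixed-interface reduction at the heart of the Craig-Bampton CMS step looks like this in dense linear algebra (a sketch assuming symmetric K and M with a positive-definite interior mass partition):

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Craig-Bampton reduction of (K, M), keeping boundary DOFs physical.

    boundary: indices of interface DOFs; the rest are interior.
    Returns T = [constraint modes | fixed-interface modes] and the
    reduced matrices T' K T and T' M T."""
    n = K.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]
    # Static constraint modes: interior response to unit boundary motion.
    psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes of the interior partition.
    _, phi = eigh(Kii, Mii)
    phi = phi[:, :n_modes]
    T = np.zeros((n, len(b) + n_modes))
    T[np.ix_(b, range(len(b)))] = np.eye(len(b))
    T[np.ix_(i, range(len(b)))] = psi
    T[np.ix_(i, range(len(b), len(b) + n_modes))] = phi
    return T, T.T @ K @ T, T.T @ M @ T
```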

  12. Network rewiring dynamics with convergence towards a star network

    PubMed Central

    Dick, G.; Parry, M.

    2016-01-01

    Network rewiring as a method for producing a range of structures was first introduced in 1998 by Watts & Strogatz (Nature 393, 440–442. (doi:10.1038/30918)). This approach allowed a transition from regular through small-world to a random network. The subsequent interest in scale-free networks motivated a number of methods for developing rewiring approaches that converged to scale-free networks. This paper presents a rewiring algorithm (RtoS) for undirected, non-degenerate, fixed size networks that transitions from regular, through small-world and scale-free to star-like networks. Applications of the approach to models for the spread of infectious disease and fixation time for a simple genetics model are used to demonstrate the efficacy and application of the approach. PMID:27843396

  13. Network rewiring dynamics with convergence towards a star network.

    PubMed

    Whigham, P A; Dick, G; Parry, M

    2016-10-01

    Network rewiring as a method for producing a range of structures was first introduced in 1998 by Watts & Strogatz (Nature 393, 440–442. (doi:10.1038/30918)). This approach allowed a transition from regular through small-world to a random network. The subsequent interest in scale-free networks motivated a number of methods for developing rewiring approaches that converged to scale-free networks. This paper presents a rewiring algorithm (RtoS) for undirected, non-degenerate, fixed size networks that transitions from regular, through small-world and scale-free to star-like networks. Applications of the approach to models for the spread of infectious disease and fixation time for a simple genetics model are used to demonstrate the efficacy and application of the approach.

  14. Applicability of land use models for the Houston area test site

    NASA Technical Reports Server (NTRS)

    Petersburg, R. K.; Bradford, L. H.

    1973-01-01

    Descriptions of land use models are presented which were considered for their applicability to the Houston Area Test Site. These models are representative both of the prevailing theories of land use dynamics and of basic approaches to simulation. The models considered are: a model of metropolis, land use simulation model, empiric land use forecasting model, a probabilistic model for residential growth, and the regional environmental management allocation process. Sources of environmental/resource information are listed.

  15. A toolbox and a record for scientific model development

    NASA Technical Reports Server (NTRS)

    Ellman, Thomas

    1994-01-01

    Scientific computation can benefit from software tools that facilitate construction of computational models, control the application of models, and aid in revising models to handle new situations. Existing environments for scientific programming provide only limited means of handling these tasks. This paper describes a two-pronged approach for handling these tasks: (1) designing a 'Model Development Toolbox' that includes a basic set of model constructing operations; and (2) designing a 'Model Development Record' that is automatically generated during model construction. The record is subsequently exploited by tools that control the application of scientific models and revise models to handle new situations. Our two-pronged approach is motivated by our belief that the model development toolbox and record should be highly interdependent. In particular, a suitable model development record can be constructed only when models are developed using a well defined set of operations. We expect this research to facilitate rapid development of new scientific computational models, to help ensure appropriate use of such models and to facilitate sharing of such models among working computational scientists. We are testing this approach by extending SIGMA, an existing knowledge-based scientific software design tool.

  16. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses: Criticality (k_eff) Predictions

    DOE PAGES

    Scaglione, John M.; Mueller, Don E.; Wagner, John C.

    2014-12-01

    One of the most important remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation: in particular, the availability and use of applicable measured data to support validation, especially for fission products (FPs). Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of a clear technical basis or approach for use of the data. This paper describes a validation approach for commercial spent nuclear fuel (SNF) criticality safety (k_eff) evaluations based on best-available data and methods and applies the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The criticality validation approach utilizes not only available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion program to support validation of the principal actinides, but also calculated sensitivities, nuclear data uncertainties, and limited available FP LCE data to predict and verify individual biases for relevant minor actinides and FPs. The results demonstrate that (a) sufficient critical experiment data exist to adequately validate k_eff calculations via conventional validation approaches for the primary actinides, (b) sensitivity-based critical experiment selection is more appropriate for generating accurate application model bias and uncertainty, and (c) calculated sensitivities and nuclear data uncertainties can be used for generating conservative estimates of bias for minor actinides and FPs. Results based on SCALE 6.1 and the ENDF/B-VII.0 cross-section libraries indicate that a conservative estimate of the bias for the minor actinides and FPs is 1.5% of their worth within the application model. Finally, this paper provides a detailed description of the approach and its technical bases, describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models, and provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data.

  17. A new symmetrical quasi-classical model for electronically non-adiabatic processes: Application to the case of weak non-adiabatic coupling

    DOE PAGES

    Cotton, Stephen J.; Miller, William H.

    2016-10-14

    Previous work has shown how a symmetrical quasi-classical (SQC) windowing procedure can be used to quantize the initial and final electronic degrees of freedom in the Meyer-Miller (MM) classical vibronic (i.e., nuclear + electronic) Hamiltonian, and that the approach provides a very good description of electronically non-adiabatic processes within a standard classical molecular dynamics framework for a number of benchmark problems. This study explores application of the SQC/MM approach to the case of very weak non-adiabatic coupling between the electronic states, showing (as anticipated) how the standard SQC/MM approach used to date fails in this limit, and then devises a new SQC windowing scheme to deal with it. Finally, application of this new SQC model to a variety of realistic benchmark systems shows that the new model not only treats the weak coupling case extremely well, but also describes the “normal” regime (of electronic transition probabilities ≳ 0.1) even more accurately than the previous “standard” model.
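
    To make the windowing idea concrete, the sketch below histograms trajectories by symmetrical windows centred on action values 0 and 1. The window width and the toy action values are assumptions for illustration; the paper's revised scheme for weak coupling is not reproduced here.

    ```python
    import numpy as np

    def sqc_assign_state(actions, gamma=0.366):
        """Assign a trajectory to an electronic state by symmetrical windowing.

        actions : array of final classical action variables n_k, one per state.
        gamma   : window width (0.366 is a commonly quoted value; treated here
                  as an assumption). Returns the state index if exactly one n_k
                  falls in the window centred on 1 while the others fall in the
                  window centred on 0; otherwise None (trajectory discarded).
        """
        occupied = np.abs(actions - 1.0) <= gamma / 2.0   # near action 1
        empty = np.abs(actions) <= gamma / 2.0            # near action 0
        if occupied.sum() == 1 and (occupied | empty).all():
            return int(np.argmax(occupied))
        return None

    # Toy usage: three trajectories for a two-state problem.
    finals = [np.array([0.95, 0.08]), np.array([0.10, 1.12]), np.array([0.5, 0.5])]
    counts = np.zeros(2)
    for f in finals:
        s = sqc_assign_state(f)
        if s is not None:
            counts[s] += 1
    probs = counts / counts.sum()   # windowed transition probabilities
    print(probs)
    ```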

  19. An information-based approach to change-point analysis with applications to biophysics and cell biology.

    PubMed

    Wiggins, Paul A

    2015-07-21

    This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for the maximum-likelihood estimators of model parameters and the estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion, a new information criterion to our knowledge. The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
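
    A minimal sketch of information-criterion-based change-point detection for the Gaussian case is given below. It uses a generic BIC-style penalty as a stand-in; the paper's frequentist information criterion would replace that term.

    ```python
    import numpy as np

    def gauss_nll(x):
        """Negative log-likelihood of data under a Gaussian MLE fit (closed form)."""
        n = len(x)
        var = max(x.var(), 1e-12)
        return 0.5 * n * (np.log(2 * np.pi * var) + 1.0)

    def best_change_point(x, penalty):
        """Return the best single change-point index, or None if the
        information-criterion improvement does not exceed the penalty."""
        n = len(x)
        full = gauss_nll(x)
        best_i, best_gain = None, 0.0
        for i in range(2, n - 2):   # keep at least 2 points per segment
            gain = full - (gauss_nll(x[:i]) + gauss_nll(x[i:]))
            if gain > best_gain:
                best_i, best_gain = i, gain
        return best_i if best_gain > penalty else None

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])
    # Generic BIC-style penalty; the paper's frequentist information
    # criterion would replace this term.
    penalty = 2.0 * np.log(len(x))
    print(best_change_point(x, penalty))   # expected near index 200
    ```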

  20. Learning Probabilistic Logic Models from Probabilistic Examples

    PubMed Central

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2009-01-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) - to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible-worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples. PMID:19888348

  2. An "age"-structured model of hematopoietic stem cell organization with application to chronic myeloid leukemia.

    PubMed

    Roeder, Ingo; Herberg, Maria; Horn, Matthias

    2009-04-01

    Previously, we have modeled hematopoietic stem cell organization by a stochastic, single cell-based approach. Applications to different experimental systems demonstrated that this model consistently explains a broad variety of in vivo and in vitro data. A major advantage of the agent-based model (ABM) is the representation of heterogeneity within the hematopoietic stem cell population. However, this advantage comes at the price of time-consuming simulations if the systems become large. One example in this respect is the modeling of disease and treatment dynamics in patients with chronic myeloid leukemia (CML), where the realistic number of individual cells to be considered exceeds 10^6. To overcome this deficiency, without losing the representation of the inherent heterogeneity of the stem cell population, we here propose to approximate the ABM by a system of partial differential equations (PDEs). The major benefit of such an approach is its independence from the size of the system. Although this mean field approach includes a number of simplifying assumptions compared to the ABM, it retains the key structure of the model including the "age"-structure of stem cells. We show that the PDE model qualitatively and quantitatively reproduces the results of the agent-based approach.
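
    The following sketch illustrates the mean-field idea with a generic age-structured transport equation, du/dt + du/da = -mu*u, solved by first-order upwind differencing. The grid, loss rate, and boundary influx are assumed values; this is not the CML model of the paper.

    ```python
    import numpy as np

    # Minimal sketch: transport of an "age"-structured density u(a, t),
    #   du/dt + du/da = -mu * u,
    # discretised with a first-order upwind scheme. This is a generic
    # age-structured equation, not the specific CML model of the paper.
    na, da, dt, mu = 200, 0.05, 0.02, 0.1   # grid and loss rate (assumed values)
    u = np.exp(-np.linspace(0, 10, na))     # initial age distribution
    influx = 1.0                            # boundary condition at age 0 (assumed)

    for _ in range(500):
        u_new = np.empty_like(u)
        u_new[0] = influx
        # upwind in age: cells inherit from the younger-age neighbour
        u_new[1:] = u[1:] - dt / da * (u[1:] - u[:-1]) - dt * mu * u[1:]
        u = u_new

    print(u[:5])
    ```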

  3. Rare events modeling with support vector machine: Application to forecasting large-amplitude geomagnetic substorms and extreme events in financial markets.

    NASA Astrophysics Data System (ADS)

    Gavrishchaka, V. V.; Ganguli, S. B.

    2001-12-01

    Reliable forecasting of rare events in a complex dynamical system is a challenging problem that is important for many practical applications. Due to the nature of rare events, the data set available for construction of a statistical and/or machine learning model is often very limited and incomplete. Therefore many widely used approaches, including such robust algorithms as neural networks, can easily become inadequate for rare events prediction. Moreover, in many practical cases models with high-dimensional inputs are required. This limits applications of the existing rare event modeling techniques (e.g., extreme value theory) that focus on univariate cases; these approaches are not easily extended to multivariate cases. A support vector machine (SVM) is a machine learning system that can provide optimal generalization using very limited and incomplete training data sets and can efficiently handle high-dimensional data. These features may allow SVM to be used to model rare events in some applications. We have applied an SVM-based system to the problem of large-amplitude substorm prediction and extreme event forecasting in stock and currency exchange markets. Encouraging preliminary results will be presented and other possible applications of the system will be discussed.
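
    One common way to adapt an SVM to rare events is to reweight the misclassification penalty by class frequency. The sketch below does this on a synthetic data set with roughly 2% positives; the actual substorm and market features are not public and are not reproduced here.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Synthetic stand-in for a rare-event data set: about 2% positives in a
    # 10-dimensional input space (the substorm/market features are not public).
    rng = np.random.default_rng(1)
    n = 3000
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, n) > 2.4).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    # class_weight="balanced" reweights the margin penalty so that the rare
    # class is not ignored -- one common SVM adaptation for rare events.
    clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))
    ```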

  4. The Population Consequences of Disturbance Model Application to North Atlantic Right Whales (Eubalaena glacialis)

    DTIC Science & Technology

    2014-09-30

    …from individuals to the population by way of changes in either behavior or physiology, and the revised approach is called PCOD (Population…) …include modeling fecundity, and exploring the feasibility of incorporating acoustic disturbance and prey variability into the PCOD model… …the applicability of the model to assessing the effects of acoustics on the population. We have refined and applied the PCOD model developed for…

  5. Addressing HIV in the School Setting: Application of a School Change Model

    ERIC Educational Resources Information Center

    Walsh, Audra St. John; Chenneville, Tiffany

    2013-01-01

    This paper describes best practices for responding to youth with human immunodeficiency virus (HIV) in the school setting through the application of a school change model designed by the World Health Organization. This model applies a whole school approach and includes four levels that span the continuum from universal prevention to direct…

  6. Application of System Operational Effectiveness Methodology to Space Launch Vehicle Development and Operations

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Kelley, Gary W.

    2012-01-01

    The Department of Defense (DoD) defined System Operational Effectiveness (SOE) model provides an exceptional framework for an affordable approach to the development and operation of space launch vehicles and their supporting infrastructure. The SOE model provides a focal point from which to direct and measure technical effectiveness and process efficiencies of space launch vehicles. The application of the SOE model to a space launch vehicle's development and operation effort leads to very specific approaches and measures that require consideration during the design phase. This paper provides a mapping of the SOE model to the development of space launch vehicles for human exploration by addressing the SOE model key points of measurement including System Performance, System Availability, Technical Effectiveness, Process Efficiency, System Effectiveness, Life Cycle Cost, and Affordable Operational Effectiveness. In addition, the application of the SOE model to the launch vehicle development process is defined providing the unique aspects of space launch vehicle production and operations in lieu of the traditional broader SOE context that examines large quantities of fielded systems. The tailoring and application of the SOE model to space launch vehicles provides some key insights into the operational design drivers, capability phasing, and operational support systems.

  7. Application of LogitBoost Classifier for Traceability Using SNP Chip Data

    PubMed Central

    Kang, Hyunsung; Cho, Seoae; Kim, Heebal; Seo, Kang-Seok

    2015-01-01

    Consumer attention to food safety has increased rapidly due to animal-related diseases; it is therefore important to identify the places of origin (POO) of animal products for safety purposes. However, only a few studies have addressed this issue with machine learning-based approaches. In the present study, classification analyses were performed using a customized SNP chip for POO prediction. To accomplish this, 4,122 pigs originating from 104 farms were genotyped using the SNP chip. Several factors were considered to establish the best prediction model based on these data. We also assessed the applicability of the suggested model using a kinship coefficient-filtering approach. Our results showed that the LogitBoost-based prediction model outperformed other classifiers in terms of classification performance under most conditions. Specifically, greater accuracy was observed when a higher kinship-based cutoff was employed. These results demonstrate the applicability of a machine learning-based approach using SNP chip data for practical traceability. PMID:26436917
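
    For orientation, the sketch below implements a minimal binary LogitBoost with regression stumps on toy genotype-like data. It illustrates the algorithm family only; the authors' SNP-chip pipeline, features, and tuning are not reproduced.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    class SimpleLogitBoost:
        """Minimal binary LogitBoost with regression stumps (a sketch of the
        algorithm family used in the paper, not the authors' exact pipeline)."""

        def __init__(self, n_rounds=50):
            self.n_rounds = n_rounds
            self.stumps = []

        def fit(self, X, y):           # y in {0, 1}
            F = np.zeros(len(y))
            for _ in range(self.n_rounds):
                p = 1.0 / (1.0 + np.exp(-2.0 * F))
                w = np.clip(p * (1 - p), 1e-6, None)          # Newton weights
                z = np.clip((y - p) / w, -4.0, 4.0)           # working response
                stump = DecisionTreeRegressor(max_depth=1)
                stump.fit(X, z, sample_weight=w)
                self.stumps.append(stump)
                F += 0.5 * stump.predict(X)
            return self

        def predict(self, X):
            F = sum(0.5 * s.predict(X) for s in self.stumps)
            return (F > 0).astype(int)

    # Toy genotype-like data: 0/1/2 allele counts, label depends on two loci.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(500, 20)).astype(float)
    y = ((X[:, 3] + X[:, 7]) >= 3).astype(int)
    print(SimpleLogitBoost().fit(X, y).predict(X[:10]))
    ```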

  9. THz spectroscopy: An emerging technology for pharmaceutical development and pharmaceutical Process Analytical Technology (PAT) applications

    NASA Astrophysics Data System (ADS)

    Wu, Huiquan; Khan, Mansoor

    2012-08-01

    As an emerging technology, THz spectroscopy has gained increasing attention in the pharmaceutical area during the last decade. This attention is due to the facts that (1) it provides a promising alternative approach for in-depth understanding of both intermolecular interactions among pharmaceutical molecules and pharmaceutical product quality attributes; (2) it provides a promising alternative approach for enhanced process understanding of certain pharmaceutical manufacturing processes; and (3) it aligns with the FDA pharmaceutical quality initiatives, most notably the Process Analytical Technology (PAT) initiative. In this work, the current status and the progress made so far in using THz spectroscopy for pharmaceutical development and pharmaceutical PAT applications are reviewed. To demonstrate the utility of a first-principles modeling approach for addressing the model validation challenge and reducing unnecessary model validation "burden", thereby facilitating THz pharmaceutical PAT applications, two scientific case studies based on published THz spectroscopy measurement results are presented and discussed. Furthermore, other technical challenges and opportunities associated with adopting THz spectroscopy as a pharmaceutical PAT tool are highlighted.

  10. Computational neuroanatomy: ontology-based representation of neural components and connectivity

    PubMed Central

    Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron

    2009-01-01

    Background A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. Results We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Conclusion Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future. PMID:19208191

  11. An anisotropic numerical model for thermal hydraulic analyses: application to liquid metal flow in fuel assemblies

    NASA Astrophysics Data System (ADS)

    Vitillo, F.; Vitale Di Maio, D.; Galati, C.; Caruso, G.

    2015-11-01

    A CFD analysis has been carried out to study the thermal-hydraulic behavior of a liquid metal coolant in a fuel assembly with a triangular lattice. In order to obtain fast and accurate results, the isotropic two-equation RANS approach is often used in nuclear engineering applications. A different approach is provided by Non-Linear Eddy Viscosity Models (NLEVM), which try to take anisotropic effects into account through a nonlinear formulation of the Reynolds stress tensor. This approach is very promising, as it exhibits very good numerical behavior and potentially describes the fluid flow better than classical isotropic models. An Anisotropic Shear Stress Transport (ASST) model, implemented in a commercial software package, has been applied in previous studies, showing very reliable results for a large variety of flows and applications. In this paper, the ASST model has been used to analyze the fluid flow inside the fuel assembly of the ALFRED lead-cooled fast reactor. Then, a comparison between the results of wall-resolved conjugate heat transfer computations and the results of a decoupled analysis using a suitable thermal wall function, previously implemented in the solver, is presented.

  12. Business model framework applications in health care: A systematic review.

    PubMed

    Fredriksson, Jens Jacob; Mazzocato, Pamela; Muhammed, Rafiq; Savage, Carl

    2017-11-01

    It has proven to be a challenge for health care organizations to achieve the Triple Aim. In the business literature, business model frameworks have been used to understand how organizations are aligned to achieve their goals. We conducted a systematic literature review with an explanatory synthesis approach to understand how business model frameworks have been applied in health care. We found a large increase in applications of business model frameworks during the last decade. E-health was the most common context of application. We identified six applications of business model frameworks: business model description, financial assessment, classification based on pre-defined typologies, business model analysis, development, and evaluation. Our synthesis suggests that the choice of business model framework and constituent elements should be informed by the intent and context of application. We see a need for harmonization in the choice of elements in order to increase generalizability, simplify application, and help organizations realize the Triple Aim.

  13. A Cognitive Perspective in the Treatment of Incarcerated Clients.

    ERIC Educational Resources Information Center

    Walsh, Thomas C.

    1990-01-01

    Proposes a cognitive therapy model as a workable approach in treating incarcerated clients. Reviews principal components and techniques of cognitive theory. Uses case vignettes to illustrate application of this approach. Delineates key features of cognitive model which relate to treatment of incarcerated population. (Author/ABL)

  14. Model-Driven Theme/UML

    NASA Astrophysics Data System (ADS)

    Carton, Andrew; Driver, Cormac; Jackson, Andrew; Clarke, Siobhán

    Theme/UML is an existing approach to aspect-oriented modelling that supports the modularisation and composition of concerns, including crosscutting ones, in design. To date, its lack of integration with model-driven engineering (MDE) techniques has limited its benefits across the development lifecycle. Here, we describe our work on facilitating the use of Theme/UML as part of an MDE process. We have developed a transformation tool that adopts model-driven architecture (MDA) standards. It defines a concern composition mechanism, implemented as a model transformation, to support the enhanced modularisation features of Theme/UML. We evaluate our approach by applying it to the development of mobile, context-aware applications, an application area characterised by many non-functional requirements that manifest themselves as crosscutting concerns.

  15. Unified constitutive models for high-temperature structural applications

    NASA Technical Reports Server (NTRS)

    Lindholm, U. S.; Chan, K. S.; Bodner, S. R.; Weber, R. M.; Walker, K. P.

    1988-01-01

    Unified constitutive models are characterized by the use of a single inelastic strain rate term for treating all aspects of inelastic deformation, including plasticity, creep, and stress relaxation under monotonic or cyclic loading. The structure of this class of constitutive theory pertinent for high temperature structural applications is first outlined and discussed. The effectiveness of the unified approach for representing high temperature deformation of Ni-base alloys is then evaluated by extensive comparison of experimental data and predictions of the Bodner-Partom and the Walker models. The use of the unified approach for hot section structural component analyses is demonstrated by applying the Walker model in finite element analyses of a benchmark notch problem and a turbine blade problem.

  16. Fitting population models from field data

    USGS Publications Warehouse

    Emlen, J.M.; Freeman, D.C.; Kirchhoff, M.D.; Alados, C.L.; Escos, J.; Duda, J.J.

    2003-01-01

    The application of population and community ecology to solving real-world problems requires population and community dynamics models that reflect the myriad patterns of interaction among organisms and between the biotic and physical environments. Appropriate models are not hard to construct, but the experimental manipulations needed to evaluate their defining coefficients are often both time consuming and costly, and sometimes environmentally destructive, as well. In this paper we present an empirical approach for finding the coefficients of broadly inclusive models without the need for environmental manipulation, demonstrate the approach with both an animal and a plant example, and suggest possible applications. Software has been developed, and is available from the senior author, with a manual describing both field and analytic procedures.

  17. Expert judgement and uncertainty quantification for climate change

    NASA Astrophysics Data System (ADS)

    Oppenheimer, Michael; Little, Christopher M.; Cooke, Roger M.

    2016-05-01

    Expert judgement is an unavoidable element of the process-based numerical models used for climate change projections, and the statistical approaches used to characterize uncertainty across model ensembles. Here, we highlight the need for formalized approaches to unifying numerical modelling with expert judgement in order to facilitate characterization of uncertainty in a reproducible, consistent and transparent fashion. As an example, we use probabilistic inversion, a well-established technique used in many other applications outside of climate change, to fuse two recent analyses of twenty-first century Antarctic ice loss. Probabilistic inversion is but one of many possible approaches to formalizing the role of expert judgement, and the Antarctic ice sheet is only one possible climate-related application. We recommend indicators or signposts that characterize successful science-based uncertainty quantification.

  18. A method for development of efficient 3D models for neutronic calculations of ASTRA critical facility using experimental information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balanin, A. L.; Boyarinov, V. F.; Glushkov, E. S.

    The application of experimental information on measured axial distributions of fission reaction rates to the development of 3D numerical models of the ASTRA critical facility, taking into account the azimuthal asymmetry of the assembly simulating an HTGR with an annular core, is substantiated. Owing to the presence of the bottom reflector and the absence of the top reflector, 2D models based on experimentally determined buckling cannot be used to calculate critical assemblies of the ASTRA facility; an alternative approach based on the extrapolated assembly height is therefore proposed. This approach is exemplified by the numerical analysis of experiments on the measurement of the efficiency of mockups of control and protection system (CPS) rods.

  19. A square root ensemble Kalman filter application to a motor-imagery brain-computer interface.

    PubMed

    Kamrunnahar, M; Schiff, S J

    2011-01-01

    We investigated the application of a non-linear sigma-point Kalman filter (SPKF) to a motor imagery brain-computer interface (BCI). A square root central difference Kalman filter (SR-CDKF) was used as an approach for brain state estimation in motor imagery task performance, using scalp electroencephalography (EEG) signals. Healthy human subjects imagined left vs. right hand movements and tongue vs. bilateral toe movements while scalp EEG signals were recorded. Offline data analysis was conducted for training the model as well as for decoding the imagined movements. Preliminary results indicate the feasibility of this approach, with a decoding accuracy of 78%-90% for the hand movements and 70%-90% for the tongue-toes movements. Ongoing research includes online BCI applications of this approach as well as combined state and parameter estimation using this algorithm with different system dynamic models.
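
    The SR-CDKF itself involves sigma-point propagation of square-root covariance factors; the sketch below instead shows the basic predict/update state-estimation cycle with a plain scalar Kalman filter, as a simplified stand-in. All model matrices and noise levels are assumptions.

    ```python
    import numpy as np

    # Plain linear Kalman filter loop -- a simplified stand-in for the
    # square-root central difference Kalman filter used in the paper,
    # showing the predict/update cycle on a scalar latent "brain state".
    A, H = 0.99, 1.0          # state transition and observation models (assumed)
    Q, R = 0.01, 0.5          # process and measurement noise variances (assumed)
    x_hat, P = 0.0, 1.0       # initial state estimate and covariance

    rng = np.random.default_rng(0)
    true_x, estimates = 0.0, []
    for _ in range(100):
        true_x = A * true_x + rng.normal(0, np.sqrt(Q))
        z = H * true_x + rng.normal(0, np.sqrt(R))   # noisy EEG-like feature
        # predict
        x_hat, P = A * x_hat, A * P * A + Q
        # update
        K = P * H / (H * P * H + R)                  # Kalman gain
        x_hat, P = x_hat + K * (z - H * x_hat), (1 - K * H) * P
        estimates.append(x_hat)

    print(estimates[-5:])
    ```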

  20. Model-based metabolism design: constraints for kinetic and stoichiometric models

    PubMed Central

    Stalidzans, Egils; Seiman, Andrus; Peebo, Karl; Komasilovs, Vitalijs; Pentjuss, Agris

    2018-01-01

    The implementation of model-based designs in metabolic engineering and synthetic biology may fail. One of the reasons for this failure is that only a part of the real-world complexity is included in models. Still, some knowledge can be simplified and taken into account in the form of optimization constraints to improve the feasibility of model-based designs of metabolic pathways in organisms. Some constraints (mass balance, energy balance, and the steady-state assumption) serve as a basis for many modelling approaches. Others (the total enzyme activity constraint and the homeostatic constraint) were proposed decades ago but are frequently ignored in design development. Several new approaches of cellular analysis have made possible the application of constraints such as cell size, surface, and resource balance. Constraints for kinetic and stoichiometric models are grouped according to their applicability preconditions into (1) general constraints, (2) organism-level constraints, and (3) experiment-level constraints. General constraints are universal and applicable to any system. Organism-level constraints are applicable to biological systems and are usually organism-specific, but they can be applied without information about experimental conditions. To apply experiment-level constraints, peculiarities of the organism and of the experimental set-up have to be taken into account to calculate the values of the constraints. The limitations of applicability of particular constraints for kinetic and stoichiometric models are addressed. PMID:29472367
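
    The mass-balance and steady-state constraints mentioned above can be written as the linear system S v = 0 with flux bounds. Below is a minimal flux-balance-style sketch on an invented toy network; it only illustrates how such constraints enter an optimization.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Minimal sketch of the steady-state mass-balance constraint S v = 0 for a
    # toy 3-metabolite, 4-reaction chain, maximising flux through reaction 3
    # (a flux-balance-style formulation; the network is invented for illustration).
    S = np.array([
        [ 1, -1,  0,  0],   # metabolite A: produced by r0, consumed by r1
        [ 0,  1, -1,  0],   # metabolite B
        [ 0,  0,  1, -1],   # metabolite C
    ])
    bounds = [(0, 10)] * 4          # organism-level flux capacity constraints
    c = np.array([0, 0, 0, -1.0])   # linprog minimises, so negate the objective

    res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
    print(res.x)   # expected: all fluxes at the 10-unit bound
    ```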

  1. An assessment and application of turbulence models for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.; Viegas, J. R.; Huang, P. G.; Rubesin, M. W.

    1990-01-01

    The current approach to the accurate computation of complex high-speed flows is to solve the Reynolds-averaged Navier-Stokes equations using finite difference methods. An integral part of this approach consists of the development and application of mathematical turbulence models, which are necessary for predicting the aerothermodynamic loads on the vehicle and the performance of the propulsion plant. Computations of several high-speed turbulent flows using various turbulence models are described, and the models are evaluated by comparing computations with the results of experimental measurements. The cases investigated include flows over insulated and cooled flat plates with Mach numbers ranging from 2 to 8 and wall temperature ratios ranging from 0.2 to 1.0. The turbulence models investigated include zero-equation, two-equation, and Reynolds-stress transport models.

  2. A national EHR strategy preparedness characterisation model and its application in the South-East European region.

    PubMed

    Orfanidis, Leonidas; Bamidis, Panagiotis; Eaglestone, Barry

    2006-01-01

    This paper is concerned with modelling national approaches towards the development of electronic health record systems (NEHRS). A model framework is produced stepwise that allows the preparedness and readiness of a country to develop an NEHRS to be characterised. Secondary data from published reports are used to create the model. Such sources were identified as mostly originating from a sample of five developed countries. Factors arising from these sources are identified, coded, and scaled so as to allow a quantitative application of the model. Instantiation of the model for the five developed countries is contrasted with a set of countries from South East Europe (SEE). The likely importance and validity of this modelling approach is discussed, using the Delphi method.

  3. The emerging role of cloud computing in molecular modelling.

    PubMed

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach

    EPA Science Inventory

    Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...

  5. Modeling winter hydrological processes under differing climatic conditions: Modifying WEPP

    NASA Astrophysics Data System (ADS)

    Dun, Shuhui

    Water erosion is a serious and continuing environmental problem worldwide. In cold regions, soil freeze and thaw have great impacts on infiltration and erosion. Rain or snowmelt on a thawing soil can cause severe water erosion. Of equal importance are snow accumulation and snowmelt, which can be the predominant hydrological processes in areas of mid- to high latitudes and in forested watersheds. Modelers must properly simulate winter processes to adequately represent the overall hydrological outcome and sediment and chemical transport in these areas. Such winter hydrology modeling is presently lacking in water erosion models. Most of these models are based on the functional Universal Soil Loss Equation (USLE) or its revised forms, e.g., the Revised USLE (RUSLE). In RUSLE, a seasonally variable soil erodibility factor (K) is used to account for the effects of frozen and thawing soil. Yet the use of this factor requires observational data for calibration, and such a simplified approach cannot represent the complicated transient freeze-thaw processes and their impacts on surface runoff and erosion. The Water Erosion Prediction Project (WEPP) watershed model, physically based erosion prediction software developed by the USDA-ARS, has seen numerous applications within and outside the US. WEPP simulates winter processes, including snow accumulation, snowmelt, and soil freeze-thaw, using an approach based on mass and energy conservation. However, previous studies showed the inadequacy of the winter routines in the WEPP model. Therefore, the objectives of this study were: (1) to adapt a modeling approach for winter hydrology based on mass and energy conservation, and to implement this approach in a physically oriented hydrological model, such as WEPP; and (2) to assess this modeling approach through case applications under different geographic conditions. A new winter routine was developed and its performance was evaluated by incorporating it into WEPP (v2008.9) and then applying WEPP to four study sites at different spatial scales under different climatic conditions, including experimental plots in Pullman, WA and Morris, MN, two agricultural drainages in Pendleton, OR, and a forest watershed in Mica Creek, ID. The model applications showed promising results, indicating the adequacy of the mass- and energy-balance-based approach for winter hydrology simulation.
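
    As a toy illustration of the mass-balance bookkeeping involved, the sketch below accumulates and melts a snowpack with a degree-day melt factor. The factor and the weather sequence are invented; WEPP's actual routine solves a fuller coupled mass and energy balance.

    ```python
    # Toy mass-balance bookkeeping for a snowpack, in the spirit of the
    # mass- and energy-conservation approach described above. The degree-day
    # melt factor and weather values are invented; WEPP's actual routine
    # solves a fuller energy balance.
    melt_factor = 2.5    # mm of melt per degree-day above 0 C (assumed)
    swe = 0.0            # snow water equivalent, mm

    daily = [(-5.0, 4.0), (-2.0, 6.0), (1.0, 0.0), (3.0, 0.0), (6.0, 2.0)]
    for temp_c, precip_mm in daily:       # (mean air temp, precipitation)
        if temp_c <= 0.0:
            swe += precip_mm              # precipitation accumulates as snow
            melt = 0.0
        else:
            melt = min(swe, melt_factor * temp_c)
            swe -= melt
        print(f"T={temp_c:5.1f} C  melt={melt:4.1f} mm  SWE={swe:5.1f} mm")
    ```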

  6. Modelling, simulation and applications of longitudinal train dynamics

    NASA Astrophysics Data System (ADS)

    Cole, Colin; Spiryagin, Maksym; Wu, Qing; Sun, Yan Quan

    2017-10-01

    Significant developments in longitudinal train simulation are first presented, together with an overview of approaches to train models and to modelling vehicle force inputs. The most important modelling task, that of the wagon connection, consisting of energy absorption devices such as draft gears and buffers, draw gear stiffness, coupler slack, and structural stiffness, is then presented. Detailed attention is given to the modelling approaches for friction-wedge-damped and polymer draft gears. A significant issue in longitudinal train dynamics is the modelling and calculation of the input forces, the co-dimensional problem. The need to push traction performance higher has led to research into, and improvement of, the accuracy of traction modelling, which is discussed. A co-simulation method that combines longitudinal train simulation, locomotive traction control, and locomotive vehicle dynamics is presented. The modelling of other forces (braking, propulsion resistance, curve drag, and grade forces) is also discussed. As extensions to conventional longitudinal train dynamics, lateral forces and coupler impacts are examined with regard to their interaction with wagon lateral and vertical dynamics. Various applications of longitudinal train dynamics are then presented. As an alternative to the traditional single-wagon-mass approach to longitudinal train dynamics, an example incorporating fully detailed wagon dynamics is presented for a crash analysis problem. Further applications to starting traction, air braking, distributed power, energy analysis, and tippler operation are also presented.

  7. DEVELOPMENT AND APPLICATION OF POPULATION MODELS TO SUPPORT EPA'S ECOLOGICAL RISK ASSESSMENT PROCESSES FOR PESTICIDES

    EPA Science Inventory

    As part of a broader exploratory effort to develop ecological risk assessment approaches to estimate potential chemical effects on non-target populations, we describe an approach for developing simple population models to estimate the extent to which acute effects on individual...

  8. Application of Complex Adaptive Systems in Portfolio Management

    ERIC Educational Resources Information Center

    Su, Zheyuan

    2017-01-01

    Simulation-based methods are becoming a promising research tool in financial markets. A general Complex Adaptive System can be tailored to different application scenarios. Based on the current research, we built two models that would benefit portfolio management by utilizing Complex Adaptive Systems (CAS) in Agent-based Modeling (ABM) approach.…

  9. Performance modeling codes for the QuakeSim problem solving environment

    NASA Technical Reports Server (NTRS)

    Parker, J. W.; Donnellan, A.; Lyzenga, G.; Rundle, J.; Tullis, T.

    2003-01-01

    The QuakeSim Problem Solving Environment uses a web-services approach to unify and deploy diverse remote data sources and processing services within a browser environment. Here we focus on the high-performance crustal modeling applications that will be included in this set of remote but interoperable applications.

  10. Artificial Neural Networks: A New Approach to Predicting Application Behavior.

    ERIC Educational Resources Information Center

    Gonzalez, Julie M. Byers; DesJardins, Stephen L.

    2002-01-01

    Applied the technique of artificial neural networks to predict which students were likely to apply to one research university. Compared the results to the traditional analysis tool, logistic regression modeling. Found that the addition of artificial intelligence models was a useful new tool for predicting student application behavior. (EV)

  11. Design of experiments applications in bioprocessing: concepts and approach.

    PubMed

    Kumar, Vijesh; Bhalla, Akriti; Rathore, Anurag S

    2014-01-01

    Most biotechnology unit operations are complex in nature, with numerous process variables, feed material attributes, and raw material attributes that can have a significant impact on the performance of the process. A design of experiments (DOE)-based approach offers a solution to this conundrum and allows for an efficient estimation of the main effects and the interactions with a minimal number of experiments. Numerous publications illustrate the application of DOE to the development of different bioprocessing unit operations. However, a systematic approach for evaluating the different DOE designs and for choosing the optimal design for a given application has not been published yet. Through this work we have compared the I-optimal and D-optimal designs to the commonly used central composite and Box-Behnken designs for bioprocess applications. A systematic methodology is proposed for construction of the model and for precise prediction of the responses for three case studies involving some of the commonly used unit operations in downstream processing. The use of the Akaike information criterion for model selection has been examined and found to be suitable for the applications under consideration. © 2013 American Institute of Chemical Engineers.
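
    The Akaike information criterion used here for model selection can be illustrated on a toy response surface. The sketch below compares polynomial candidate models with the least-squares form AIC = n ln(RSS/n) + 2k; the data and models are invented stand-ins for the bioprocess case studies.

    ```python
    import numpy as np

    def aic(y, y_hat, k):
        """Least-squares AIC: n*ln(RSS/n) + 2k, with k fitted parameters."""
        n = len(y)
        rss = np.sum((y - y_hat) ** 2)
        return n * np.log(rss / n) + 2 * k

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 40)
    y = 1.0 + 2.0 * x + 1.5 * x**2 + rng.normal(0, 0.2, 40)   # true: quadratic

    for degree in (1, 2, 3):
        coeffs = np.polyfit(x, y, degree)
        y_hat = np.polyval(coeffs, x)
        print(degree, round(aic(y, y_hat, degree + 1), 1))
    # The quadratic model should attain the lowest AIC on this toy response.
    ```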

  12. Development of Integrated Modular Avionics Application Based on Simulink and XtratuM

    NASA Astrophysics Data System (ADS)

    Fons-Albert, Borja; Usach-Molina, Hector; Vila-Carbo, Joan; Crespo-Lorente, Alfons

    2013-08-01

    This paper presents an integral approach for designing avionics applications that meets the requirements for software development and execution in this application domain. Software design follows the model-based design process and is performed in Simulink. This approach allows easy and quick testbench development and helps satisfy DO-178B requirements through the use of proper tools. The software execution platform is based on XtratuM, a minimal bare-metal hypervisor designed in our research group. XtratuM provides support for IMA-SP (Integrated Modular Avionics for Space) architectures. This approach allows the code generated from a Simulink model to be executed on top of Lithos as a XtratuM partition. Lithos is an ARINC-653-compliant RTOS for XtratuM. The paper concentrates on how to smoothly port Simulink designs to XtratuM, solving problems such as application partitioning, automatic code generation, real-time tasking, and interfacing, among others. This process is illustrated with an autopilot design test using a flight simulator.

  13. Stochastic modeling of consumer preferences for health care institutions.

    PubMed

    Malhotra, N K

    1983-01-01

    This paper proposes a stochastic procedure for modeling consumer preferences via LOGIT analysis. First, a simple, non-technical exposition of the use of a stochastic approach in health care marketing is presented. Second, a study illustrating the application of the LOGIT model in assessing consumer preferences for hospitals is given. The paper concludes with several implications of the proposed approach.
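
    A minimal sketch of the LOGIT machinery is given below: synthetic consumers choose among three hypothetical institutions according to a multinomial logit model, which is then re-fitted from the simulated choices. Attribute names and coefficients are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy stand-in for the hospital-preference study: three "institutions"
    # chosen as a function of two consumer attributes (distance, perceived
    # quality). The data are synthetic; only the LOGIT machinery is real.
    rng = np.random.default_rng(0)
    n = 600
    X = np.column_stack([rng.uniform(0, 10, n),      # distance
                         rng.uniform(1, 5, n)])      # perceived quality
    utility = np.column_stack([-0.3 * X[:, 0] + 0.8 * X[:, 1],
                               -0.1 * X[:, 0] + 0.2 * X[:, 1],
                               np.zeros(n)])
    # Gumbel-max trick: argmax of utility + Gumbel noise yields logit choices.
    choice = np.argmax(utility + rng.gumbel(size=(n, 3)), axis=1)

    model = LogisticRegression(max_iter=1000).fit(X, choice)
    print(model.predict_proba([[2.0, 4.5]]))   # choice probabilities
    ```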

  14. The Application of a Multiphase Triangulation Approach to Mixed Methods: The Research of an Aspiring School Principal Development Program

    ERIC Educational Resources Information Center

    Youngs, Howard; Piggot-Irvine, Eileen

    2012-01-01

    Mixed methods research has emerged as a credible alternative to unitary research approaches. The authors show how a combination of a triangulation convergence model with a triangulation multilevel model was used to research an aspiring school principal development pilot program. The multilevel model is used to show the national and regional levels…

  15. Contingent approach to Internet-based supply network integration

    NASA Astrophysics Data System (ADS)

    Ho, Jessica; Boughton, Nick; Kehoe, Dennis; Michaelides, Zenon

    2001-10-01

    The Internet is playing an increasingly important role in enhancing the operations of supply networks as many organizations begin to recognize the benefits of Internet-enabled supply arrangements. However, the developments and applications to date do not extend significantly beyond the dyadic model, whereas the real advantages are to be made with the external and network models that support a coordinated and collaborative approach. The DOMAIN research group at the University of Liverpool is currently defining new Internet-enabled approaches to enable greater collaboration across supply chains. Different e-business models and tools focus on different applications. Using inappropriate e-business models, tools, or techniques will bring negative results instead of benefits to all the tiers in the supply network. Thus there are a number of issues to be considered before addressing Internet-based supply network integration, in particular an understanding of supply chain management, the emergent business models, and the effects of deploying e-business in the supply network or in a particular tier. It is important to take a contingent approach to selecting the right e-business model to meet the specific supply chain requirements. This paper addresses these issues and provides a case study on indirect materials supply networks.

  16. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.
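
    The sketch below shows the genetic-algorithm core of such a mapper in miniature: chromosomes assign tasks to heterogeneous nodes and fitness is the makespan. Task costs and node speeds are invented, and the hierarchical, distributed scheduler tree of the paper is not reproduced.

    ```python
    import numpy as np

    # Tiny genetic-algorithm sketch for mapping tasks to heterogeneous nodes:
    # a chromosome assigns each task to a node; lower makespan is fitter.
    rng = np.random.default_rng(0)
    n_tasks, n_nodes, pop_size = 30, 4, 40
    work = rng.uniform(1, 10, n_tasks)        # task costs (assumed)
    speed = np.array([1.0, 1.5, 2.0, 0.5])    # node speeds (assumed)

    def makespan(assign):
        loads = np.zeros(n_nodes)
        np.add.at(loads, assign, work)        # total work per node
        return (loads / speed).max()

    pop = rng.integers(0, n_nodes, size=(pop_size, n_tasks))
    for _ in range(200):
        fitness = np.array([makespan(ind) for ind in pop])
        pop = pop[np.argsort(fitness)]        # sort: best mappings first
        elite = pop[: pop_size // 2]
        # crossover: splice two elite parents; then mutate one random gene
        children = elite.copy()
        cut = rng.integers(1, n_tasks, size=len(children))
        partners = rng.permutation(len(elite))
        for i, c in enumerate(cut):
            children[i, c:] = elite[partners[i], c:]
            children[i, rng.integers(n_tasks)] = rng.integers(n_nodes)
        pop = np.vstack([elite, children])

    print(round(min(makespan(ind) for ind in pop), 2))
    ```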

  17. Multiscale modeling of ductile failure in metallic alloys

    NASA Astrophysics Data System (ADS)

    Pardoen, Thomas; Scheyvaerts, Florence; Simar, Aude; Tekoğlu, Cihan; Onck, Patrick R.

    2010-04-01

    Micromechanical models for ductile failure were developed in the 1970s and 1980s essentially to address cracking in structural applications and to complement the fracture mechanics approach. Later, this approach became attractive to physical metallurgists interested in the prediction of failure during forming operations and as a guide for the design of more ductile and/or higher-toughness microstructures. Nowadays, a realistic treatment of damage evolution in complex metallic microstructures is becoming feasible when sufficiently sophisticated constitutive laws are used within a multilevel modelling strategy. The current understanding and the state-of-the-art models for the nucleation, growth, and coalescence of voids are reviewed, with a focus on the underlying physics. Consideration is given to the introduction of the different length scales associated with the microstructure and the damage process. Two applications of the methodology are then described to illustrate the potential of the current models. The first application concerns the competition between intergranular and transgranular ductile fracture in aluminum alloys involving soft precipitate-free zones along the grain boundaries. The second application concerns the modeling of ductile failure in friction stir welded joints, a problem which also involves soft and hard zones, albeit at a larger scale.

  18. Evaluation of Structural Robustness against Column Loss: Methodology and Application to RC Frame Buildings.

    PubMed

    Bao, Yihai; Main, Joseph A; Noh, Sam-Young

    2017-08-01

    A computational methodology is presented for evaluating structural robustness against column loss. The methodology is illustrated through application to reinforced concrete (RC) frame buildings, using a reduced-order modeling approach for three-dimensional RC framing systems that includes the floor slabs. Comparisons with high-fidelity finite-element model results are presented to verify the approach. Pushdown analyses of prototype buildings under column loss scenarios are performed using the reduced-order modeling approach, and an energy-based procedure is employed to account for the dynamic effects associated with sudden column loss. Results obtained using the energy-based approach are found to be in good agreement with results from direct dynamic analysis of sudden column loss. A metric for structural robustness is proposed, calculated by normalizing the ultimate capacities of the structural system under sudden column loss by the applicable service-level gravity loading and by evaluating the minimum value of this normalized ultimate capacity over all column removal scenarios. The procedure is applied to two prototype 10-story RC buildings, one employing intermediate moment frames (IMFs) and the other employing special moment frames (SMFs). The SMF building, with its more stringent seismic design and detailing, is found to have greater robustness.
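
    The proposed robustness metric reduces to simple arithmetic once the scenario capacities are known. The worked example below uses invented capacities and service load, purely to show the normalization and the minimum over removal scenarios.

    ```python
    # Worked toy example of the proposed robustness metric: normalise each
    # scenario's ultimate capacity under sudden column loss by the
    # service-level gravity load, then take the minimum over all removal
    # scenarios. The numbers below are invented, not results from the paper.
    service_load = 250.0   # applicable service-level gravity loading (assumed)

    ultimate_capacity = {   # sudden-column-loss capacities per scenario (assumed)
        "corner column":   310.0,
        "edge column":     390.0,
        "interior column": 450.0,
    }

    normalised = {k: v / service_load for k, v in ultimate_capacity.items()}
    robustness = min(normalised.values())
    print(normalised)
    print(f"robustness index = {robustness:.2f}")   # governed by the corner column
    ```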

  19. The Effects of Sand Sediment Volume Heterogeneities on Sound Propagation and Scattering

    DTIC Science & Technology

    2011-09-01

    …previously developed at APL-UW for the study of high-frequency acoustics. These models include perturbation models applied to scattering from the…shell shapes (Figure 1). The acoustic modeling to this point has utilized Ivakin's unified approach to volume and roughness scattering [3]… …"sediments: A modeling approach and application to a shelly sand-mud environment," in the Proceedings of the European Conference on Underwater Acoustics…

  20. Knowledge sifters in MDA technologies

    NASA Astrophysics Data System (ADS)

    Kravchenko, Yuri; Kursitys, Ilona; Bova, Victoria

    2018-05-01

    The article considers a new approach to the efficient management of information processes on the basis of object models. With the help of special design tools, a generic, application-independent model is created, and the program is then implemented in a specific development environment. At the same time, the development process is based entirely on a model that must contain all the information necessary for programming. The presence of a detailed model enables the automatic creation of the typical parts of the application whose development is amenable to automation.

  1. A Generalized Hybrid Multiscale Modeling Approach for Flow and Reactive Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Yang, X.; Meng, X.; Tang, Y. H.; Guo, Z.; Karniadakis, G. E.

    2017-12-01

    Using emerging understanding of biological and environmental processes at fundamental scales to advance predictions of larger system behavior requires the development of multiscale approaches, and there is strong interest in coupling models at different scales together in a hybrid multiscale simulation framework. A limited number of hybrid multiscale simulation methods have been developed for subsurface applications, mostly using application-specific approaches for model coupling. The proposed generalized hybrid multiscale approach is designed with minimal intrusiveness to the pre-selected at-scale simulators and provides a set of lightweight C++ scripts to manage a complex multiscale workflow using a concurrent coupling approach. The workflow includes the at-scale simulators (using the lattice-Boltzmann method, LBM, at the pore and Darcy scales, respectively), scripts for boundary treatment (coupling and kriging), and a multiscale universal interface (MUI) for data exchange. The current study aims to apply the generalized hybrid multiscale modeling approach to couple pore- and Darcy-scale models for flow and mixing-controlled reaction with precipitation/dissolution in heterogeneous porous media. The model domain is heterogeneously packed, so the mixing front geometry is more complex and not known a priori. To address these challenges, the generalized hybrid multiscale modeling approach is further developed to (1) adaptively define the locations of pore-scale subdomains, (2) provide a suite of physical boundary coupling schemes, and (3) consider the dynamic change of the pore structures due to mineral precipitation/dissolution. The results are validated and evaluated by comparison with single-scale simulations in terms of velocities, reactive concentrations, and computing cost.

  2. Real-time interactive virtual tour on the World Wide Web (WWW)

    NASA Astrophysics Data System (ADS)

    Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi

    2003-12-01

    Web-based virtual tours have become a desirable and in-demand application, yet a challenging one due to the nature of a web application's running environment, with limited bandwidth and no guarantee of high computational power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process and high bandwidth and computational power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos, and the virtual scenes can be generated directly from photos, skipping the modeling process. But these image-based approaches may require special cameras or effort to take panoramic views, and they provide only a fixed-point look-around and zooming in and out rather than a 'walk around', which is a very important feature for providing an immersive experience to virtual tourists. The web-based virtual tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space with several snapshots from conventional photos.

  3. Robust biological parametric mapping: an improved technique for multimodal brain image analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.

    2011-03-01

    Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
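
    The effect of replacing ordinary least squares with a robust M-estimator can be seen in a few lines. The sketch below contrasts the two on synthetic data with gross outliers, using the Huber norm as one standard choice; it illustrates robust regression generally, not the authors' software package.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Ordinary least squares vs. a robust M-estimator (Huber) on data with a
    # few gross outliers -- a minimal illustration of why the robust variant
    # of the general linear model resists artifactual fits. Synthetic data.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 80)
    y = 1.0 + 0.5 * x + rng.normal(0, 0.3, 80)
    y[:5] += 15.0                      # mis-registration-like outliers

    X = sm.add_constant(x)
    ols = sm.OLS(y, X).fit()
    rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
    print("OLS slope:", round(ols.params[1], 3))
    print("Robust slope:", round(rlm.params[1], 3))   # typically nearer 0.5
    ```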

  4. Fuzzy logic modeling of high performance rechargeable batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, P.; Fennie, C. Jr.; Reisner, D.E.

    1998-07-01

    Accurate battery state-of-charge (SOC) measurements are critical in many portable electronic device applications. Yet conventional techniques for battery SOC estimation are limited in their accuracy, reliability, and flexibility. In this paper the authors present a powerful new approach to estimate battery SOC using a fuzzy logic-based methodology. This approach provides a universally applicable, accurate method for battery SOC estimation either integrated within, or as an external monitor to, an electronic device. The methodology is demonstrated in modeling impedance measurements on Ni-MH cells and discharge voltage curves of Li-ion cells.
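
    A minimal flavour of the fuzzy-logic methodology is sketched below: a single impedance feature is mapped to state of charge through triangular membership functions and weighted-average defuzzification. The breakpoints and rule outputs are invented, not the authors' calibrated model.

    ```python
    import numpy as np

    # Minimal fuzzy-logic sketch: map a measured cell impedance feature to a
    # state-of-charge estimate with triangular membership functions and a
    # weighted-average (centroid-style) defuzzification. Breakpoints and rule
    # outputs are invented for illustration.
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def soc_estimate(impedance_mohm):
        # rule antecedents: impedance is LOW / MEDIUM / HIGH
        mu = np.array([
            tri(impedance_mohm, 10, 15, 25),   # LOW impedance  -> SOC high
            tri(impedance_mohm, 15, 25, 40),   # MEDIUM         -> SOC medium
            tri(impedance_mohm, 25, 40, 60),   # HIGH impedance -> SOC low
        ])
        soc_out = np.array([90.0, 50.0, 15.0])   # rule consequents, % SOC
        return float(np.dot(mu, soc_out) / mu.sum())

    print(soc_estimate(18.0))   # a cell with lowish impedance reads as high SOC
    ```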

  5. Development and comparison of Bayesian modularization method in uncertainty assessment of hydrological models

    NASA Astrophysics Data System (ADS)

    Li, L.; Xu, C.-Y.; Engeland, K.

    2012-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used methods for uncertainty assessment of hydrological models; it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats the uncertainty in the extreme flows of hydrological models' simulations well. This study proposes a Bayesian modularization approach to uncertainty assessment of conceptual hydrological models that takes the extreme flows into account. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian approach: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of the entire flow range and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian methods. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
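
    For readers unfamiliar with the sampler named above, a generic random-walk Metropolis-Hastings sketch follows; the Gaussian toy posterior stands in for a WASMOD likelihood, which is not reproduced here.

    ```python
    # Generic random-walk Metropolis-Hastings sampler of the kind used to draw
    # hydrological-model parameters from a posterior distribution.
    import numpy as np

    def log_post(theta):
        return -0.5 * np.sum((theta - 1.0) ** 2)   # toy stand-in for log p(theta | data)

    rng = np.random.default_rng(42)
    theta = np.zeros(2)
    samples = []
    for _ in range(5000):
        prop = theta + rng.normal(scale=0.5, size=theta.shape)   # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                                         # accept
        samples.append(theta)                                    # reject keeps old theta
    print(np.mean(samples[1000:], axis=0))   # posterior mean after burn-in
    ```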

  6. QSPR models for half-wave reduction potential of steroids: a comparative study between feature selection and feature extraction from subsets of or entire set of descriptors.

    PubMed

    Hemmateenejad, Bahram; Yazdani, Mahdieh

    2009-02-16

    Steroids are widely distributed in nature and are found in plants, animals, and fungi in abundance. A data set consisting of a diverse set of steroids has been used to develop quantitative structure-electrochemistry relationship (QSER) models for their half-wave reduction potential. Modeling was established by means of multiple linear regression (MLR) and principal component regression (PCR) analyses. In the MLR analysis, the QSPR models were constructed by first grouping descriptors and then stepwise selection of variables from each group (MLR1), and by stepwise selection of predictor variables from the pool of all calculated descriptors (MLR2). A similar procedure was used in the PCR analysis, so that the principal components (or features) were extracted from the different groups of descriptors (PCR1) and from the entire set of descriptors (PCR2). The resulting models were evaluated using cross-validation, chance correlation, application to the prediction of the reduction potential of test samples, and assessment of the applicability domain. Both MLR approaches gave accurate results; however, the QSPR model found by MLR1 was statistically more significant. The PCR1 approach produced a model as accurate as the MLR approaches, whereas less accurate results were obtained with the PCR2 approach. Overall, the correlation coefficients of cross-validation and prediction of the QSPR models resulting from the MLR1, MLR2 and PCR1 approaches were higher than 90%, which shows the high ability of the models to predict the reduction potential of the studied steroids.
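
    A principal component regression sketch mirroring the PCR2 route (components extracted from the entire descriptor pool); the descriptor matrix and reduction potentials below are random placeholders, not the steroid data set.

    ```python
    # PCR sketch: compress a wide descriptor matrix with PCA, then regress.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 300))                          # 60 steroids x 300 descriptors
    y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)    # synthetic E1/2 values

    pcr = make_pipeline(PCA(n_components=5), LinearRegression())
    q2 = cross_val_score(pcr, X, y, cv=5, scoring="r2")     # cross-validated fit quality
    print(f"cross-validated R^2: {q2.mean():.2f}")
    ```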

  7. Dynamic Computation of Change Operations in Version Management of Business Process Models

    NASA Astrophysics Data System (ADS)

    Küster, Jochen Malte; Gerth, Christian; Engels, Gregor

    Version management of business process models requires that changes can be resolved by applying change operations. In order to give a user maximal freedom concerning the application order of change operations, position parameters of change operations must be computed dynamically during change resolution. In such an approach, change operations with computed position parameters must be applicable to the model, and dependencies and conflicts of change operations must be taken into account, because otherwise invalid models can be constructed. In this paper, we study the concept of partially specified change operations whose parameters are computed dynamically. We provide a formalization for partially specified change operations using graph transformation and provide a concept for their applicability. Based on this, we study potential dependencies and conflicts of change operations and show how these can be taken into account within change resolution. Using our approach, a user can resolve changes of business process models without being unnecessarily restricted to a certain order.

  8. Comparative systems biology between human and animal models based on next-generation sequencing methods.

    PubMed

    Zhao, Yu-Qi; Li, Gong-Hua; Huang, Jing-Fei

    2013-04-01

    Animal models provide myriad benefits to both experimental and clinical research. Unfortunately, in many situations, they fall short of expected results or provide contradictory results. In part, this can be the result of traditional molecular biological approaches that are relatively inefficient in elucidating underlying molecular mechanisms. To improve the efficacy of animal models, a technological breakthrough is required. The growing availability and application of high-throughput methods make systematic comparisons between human and animal models easier to perform. In the present study, we introduce the concept of comparative systems biology, which we define as "comparisons of biological systems in different states or species used to achieve an integrated understanding of life forms with all their characteristic complexity of interactions at multiple levels". Furthermore, we discuss the applications of RNA-seq and ChIP-seq technologies to comparative systems biology between human and animal models and assess the potential applications of this approach in future studies.

  9. Noise exposure-response relationships established from repeated binary observations: Modeling approaches and applications.

    PubMed

    Schäffer, Beat; Pieren, Reto; Mendolia, Franco; Basner, Mathias; Brink, Mark

    2017-05-01

    Noise exposure-response relationships are used to estimate the effects of noise on individuals or a population. Such relationships may be derived from independent or repeated binary observations, and modeled by different statistical methods. Depending on the method by which they were established, their application in population risk assessment or estimation of individual responses may yield different results, i.e., predict "weaker" or "stronger" effects. As far as the present body of literature on noise effect studies is concerned, however, the underlying statistical methodology to establish exposure-response relationships has not always been paid sufficient attention. This paper gives an overview on two statistical approaches (subject-specific and population-averaged logistic regression analysis) to establish noise exposure-response relationships from repeated binary observations, and their appropriate applications. The considerations are illustrated with data from three noise effect studies, estimating also the magnitude of differences in results when applying exposure-response relationships derived from the two statistical approaches. Depending on the underlying data set and the probability range of the binary variable it covers, the two approaches yield similar to very different results. The adequate choice of a specific statistical approach and its application in subsequent studies, both depending on the research question, are therefore crucial.
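
    The population-averaged flavor described above corresponds to GEE logistic regression; the following is a sketch on simulated repeated binary responses, with all parameter values invented. The subject-specific analogue would instead fit a random intercept per person (a mixed-effects logistic model).

    ```python
    # Population-averaged exposure-response sketch: GEE logistic regression on
    # repeated binary responses (e.g., annoyance/awakening). Data are simulated.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n_subj, n_rep = 100, 20
    subj = np.repeat(np.arange(n_subj), n_rep)
    level = rng.uniform(40, 90, size=n_subj * n_rep)                 # exposure, dB
    b_subj = np.repeat(rng.normal(scale=1.0, size=n_subj), n_rep)    # person effects
    p = 1.0 / (1.0 + np.exp(-(-12.0 + 0.18 * level + b_subj)))
    y = rng.binomial(1, p)

    X = sm.add_constant(level)
    gee = sm.GEE(y, X, groups=subj, family=sm.families.Binomial(),
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(gee.params)   # population-averaged intercept and slope (per dB)
    ```

    Note how the fitted population-averaged slope is attenuated relative to the subject-specific value used to simulate the data; this attenuation is exactly the difference between the two approaches that the paper quantifies.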

  10. Models of evaluating efficiency and risks on integration of cloud-base IT-services of the machine-building enterprise: a system approach

    NASA Astrophysics Data System (ADS)

    Razumnikov, S.; Kurmanbay, A.

    2016-04-01

    The present paper suggests a system approach to evaluation of the effectiveness and risks resulted from the integration of cloud-based services in a machine-building enterprise. This approach makes it possible to estimate a set of enterprise IT applications and choose the applications to be migrated to the cloud with regard to specific business requirements, a technological strategy and willingness to risk.

  11. Automatic testing and assessment of neuroanatomy using a digital brain atlas: method and development of computer- and mobile-based applications.

    PubMed

    Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar

    2009-10-01

    Preparation of tests and assessment of students by the instructor are time-consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test the location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic both with respect to test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints.
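
    The core mechanism, random question generation and automatic grading over a labeled structure set, can be sketched in a few lines; the label dictionary below is a hypothetical stand-in for the atlas's Terminologia Anatomica linkage, not the authors' software.

    ```python
    # Toy sketch of atlas-derived question generation and automatic assessment.
    import random

    atlas_labels = {101: "caudate nucleus", 102: "putamen",
                    103: "thalamus", 104: "hippocampus"}

    def make_test(scope, n_questions, seed=0):
        """Randomly sample labeled structures within the instructor-chosen scope."""
        rng = random.Random(seed)
        return [(i, f"Name the highlighted structure (id {i}).")
                for i in rng.sample(scope, n_questions)]

    def grade(test, answers):
        """Automatic assessment: fraction of correctly named structures."""
        return sum(answers.get(i) == atlas_labels[i] for i, _ in test) / len(test)

    test = make_test(list(atlas_labels), n_questions=2)
    print(grade(test, {101: "caudate nucleus", 103: "thalamus"}))
    ```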

  12. Applicability of DFT model in reactive distillation

    NASA Astrophysics Data System (ADS)

    Staszak, Maciej

    2017-11-01

    The applicability of density functional theory (DFT) to reactive distillation is discussed. A brief description of modeling techniques for distillation and rectification with chemical reaction is provided as background for the description of quantum-method usage. The equilibrium and nonequilibrium distillation models are described for that purpose. The DFT quantum theory is concisely described. The usage of DFT in the modeling of reactive distillation is described in two parts. One fundamental and very important component of distillation modeling is the description of vapor-liquid equilibrium, for which the DFT quantum approach can be used. The representative DFT models, namely the COSMO-RS (Conductor-like Screening Model for Real Solvents), COSMOSPACE (COSMO Surface Pair Activity Coefficient) and COSMO-SAC (SAC - segment activity coefficient) approaches, are described. The second part treats the way in which a chemical reaction is described by means of the quantum DFT method. The intrinsic reaction coordinate (IRC) method, which is used to find the minimum-energy path of the substrates-to-products transition, is described; DFT is one of the methods which can be used for that purpose. Literature examples are provided which show that the IRC method is applicable to the description of chemical reaction kinetics.

  13. Action Centered Contextual Bandits.

    PubMed

    Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan

    2017-12-01

    Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
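
    A heavily simplified linear contextual bandit sketch (LinUCB-style) illustrating the split the paper argues for: only the treatment effect is modeled linearly, while the baseline reward may be arbitrary. In the real algorithm the baseline is differenced out via action centering with randomized actions; here it is assumed known purely to keep the sketch short, and all constants are illustrative.

    ```python
    # Linear-UCB-style bandit where only the treatment effect is linear in context.
    import numpy as np

    rng = np.random.default_rng(3)
    d, T, alpha = 4, 2000, 1.0
    A = np.eye(d)               # ridge Gram matrix for the treatment-effect model
    b = np.zeros(d)

    for t in range(T):
        x = rng.normal(size=d)                      # context (e.g., mHealth sensor state)
        theta = np.linalg.solve(A, b)               # current treatment-effect estimate
        ucb = x @ theta + alpha * np.sqrt(x @ np.linalg.solve(A, x))
        act = ucb > 0.0                             # intervene if optimistic effect positive
        baseline = np.sin(x[0])                     # complex, unmodeled baseline reward
        reward = baseline + (x @ np.array([0.5, -0.2, 0.0, 0.1]) if act else 0.0)
        if act:                                     # update only the treatment-effect model
            A += np.outer(x, x)
            b += (reward - baseline) * x            # in practice: removed by action centering
    print(np.linalg.solve(A, b))                    # learned treatment-effect weights
    ```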

  14. Design-based modeling of magnetically actuated soft diaphragm materials

    NASA Astrophysics Data System (ADS)

    Jayaneththi, V. R.; Aw, K. C.; McDaid, A. J.

    2018-04-01

    Magnetic polymer composites (MPC) have shown promise for emerging biomedical applications such as lab-on-a-chip and implantable drug delivery. These soft material actuators are capable of fast response, large deformation and wireless actuation. Existing MPC modeling approaches are computationally expensive and unsuitable for rapid design prototyping and real-time control applications. This paper proposes a macro-scale 1-DOF model capable of predicting force and displacement of an MPC diaphragm actuator. Model validation confirmed both blocked force and displacement can be accurately predicted in a variety of working conditions i.e. different magnetic field strengths, static/dynamic fields, and gap distances. The contribution of this work includes a comprehensive experimental investigation of a macro-scale diaphragm actuator; the derivation and validation of a new phenomenological model to describe MPC actuation; and insights into the proposed model’s design-based functionality i.e. scalability and generalizability in terms of magnetic filler concentration and diaphragm diameter. Due to the lumped element modeling approach, the proposed model can also be adapted to alternative actuator configurations, and thus presents a useful tool for design, control and simulation of novel MPC applications.
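
    The lumped 1-DOF idea can be sketched as a mass-spring-damper driven by a field-dependent magnetic force; the parameter values and force law below are placeholders for illustration, not the fitted phenomenological model from the paper.

    ```python
    # Lumped 1-DOF sketch of a magnetically actuated diaphragm.
    import numpy as np
    from scipy.integrate import solve_ivp

    m, c, k = 1e-4, 0.05, 50.0          # effective mass [kg], damping, stiffness
    def f_mag(t):
        return 1e-2 * np.sin(2 * np.pi * 5 * t)    # applied magnetic force [N]

    def rhs(t, y):
        x, v = y                                    # displacement and velocity
        return [v, (f_mag(t) - c * v - k * x) / m]

    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], max_step=1e-3)
    print(f"peak displacement: {np.max(np.abs(sol.y[0])) * 1e3:.2f} mm")
    ```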

  15. A methodology for computing uncertainty bounds of multivariable systems based on sector stability theory concepts

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1992-01-01

    The application of a sector-based stability theory approach to the formulation of useful uncertainty descriptions for linear, time-invariant, multivariable systems is explored. A review of basic sector properties and sector-based approach are presented first. The sector-based approach is then applied to several general forms of parameter uncertainty to investigate its advantages and limitations. The results indicate that the sector uncertainty bound can be used effectively to evaluate the impact of parameter uncertainties on the frequency response of the design model. Inherent conservatism is a potential limitation of the sector-based approach, especially for highly dependent uncertain parameters. In addition, the representation of the system dynamics can affect the amount of conservatism reflected in the sector bound. Careful application of the model can help to reduce this conservatism, however, and the solution approach has some degrees of freedom that may be further exploited to reduce the conservatism.

  16. Ecological risk assessment conceptual model formulation for nonindigenous species.

    PubMed

    Landis, Wayne G

    2004-08-01

    This article addresses the application of ecological risk assessment at the regional scale to the prediction of impacts due to invasive or nonindigenous species (NIS). The first section describes risk assessment and the decision-making process, and introduces regional risk assessment. A general conceptual model for the risk assessment of NIS is then presented based upon the regional risk assessment approach. Two diverse examples of the application of this approach are presented. The first example is based upon the dynamics of plasmids introduced into bacteria populations. The second example is the application of the risk assessment approach to the invasion of a coastal marine site at Cherry Point, Washington, USA by the European green crab. The lessons learned from the two examples demonstrate that assessment of the risks of invasion of NIS will have to incorporate not only the characteristics of the invasive species, but also the other stresses and impacts affecting the region of interest.

  17. A square root ensemble Kalman filter application to a motor-imagery brain-computer interface

    PubMed Central

    Kamrunnahar, M.; Schiff, S. J.

    2017-01-01

    We investigated a non-linear sigma-point Kalman filter (SPKF) application to a motor imagery brain computer interface (BCI). A square root central difference Kalman filter (SR-CDKF) was used as an approach for brain state estimation in motor imagery task performance, using scalp electroencephalography (EEG) signals. Healthy human subjects imagined left vs. right hand movements and tongue vs. bilateral toe movements while scalp EEG signals were recorded. Offline data analysis was conducted for training the model as well as for decoding the imagined movements. Preliminary results indicate the feasibility of this approach, with a decoding accuracy of 78%–90% for the hand movements and 70%–90% for the tongue-toes movements. Ongoing research includes online BCI applications of this approach as well as combined state and parameter estimation using this algorithm with different system dynamic models. PMID:22255799
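
    For orientation, a bare-bones sigma-point prediction step is sketched below using the classic unscented-transform weights; the SR-CDKF used in the study differs in its central-difference weights and square-root covariance propagation, and the toy two-state model is invented, not the BCI decoding model.

    ```python
    # Generic sigma-point (unscented-transform) predict step for a nonlinear model.
    import numpy as np

    def ut_predict(x, P, f, kappa=1.0):
        n = len(x)
        S = np.linalg.cholesky((n + kappa) * P)
        pts = np.hstack([x[:, None], x[:, None] + S, x[:, None] - S])  # 2n+1 sigma points
        w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
        w[0] = kappa / (n + kappa)
        Y = np.column_stack([f(pts[:, i]) for i in range(2 * n + 1)])  # propagate points
        x_pred = Y @ w                                                 # weighted mean
        D = Y - x_pred[:, None]
        return x_pred, D @ np.diag(w) @ D.T                            # weighted covariance

    f = lambda s: np.array([s[0] + 0.1 * s[1], 0.9 * s[1] + 0.05 * np.sin(s[0])])
    print(ut_predict(np.array([0.0, 1.0]), np.eye(2), f))
    ```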

  18. Efficient Approaches for Propagating Hydrologic Forcing Uncertainty: High-Resolution Applications Over the Western United States

    NASA Astrophysics Data System (ADS)

    Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.

    2017-12-01

    NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.

  19. Case study: Optimizing fault model input parameters using bio-inspired algorithms

    NASA Astrophysics Data System (ADS)

    Plucar, Jan; Grunt, Onřej; Zelinka, Ivan

    2017-07-01

    We present a case study that demonstrates a bio-inspired approach to finding optimal parameters for a GSM fault model. This model is constructed using a Petri net approach; it represents a dynamic model of the GSM network environment in the suburban areas of the city of Ostrava (Czech Republic). We were faced with the task of finding optimal parameters for an application that requires a high volume of data transfer between the application itself and secure servers located in a datacenter. In order to find the optimal set of parameters we employ bio-inspired algorithms such as Differential Evolution (DE) and the Self Organizing Migrating Algorithm (SOMA). In this paper we present the use of these algorithms, compare their results, and assess their performance in fault-probability mitigation.
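
    Differential evolution itself is readily available off the shelf; the sketch below uses SciPy with a toy fault-probability surrogate standing in for the Petri-net model, and the three parameter names are invented for illustration.

    ```python
    # Bio-inspired parameter search with differential evolution (SciPy).
    import numpy as np
    from scipy.optimize import differential_evolution

    def fault_probability(params):
        timeout, retries, batch = params
        # toy surrogate: a made-up trade-off between timeout, retries and batch size
        return (np.tanh(1.0 / timeout) + 0.1 * retries ** 0.5 + 0.01 * batch) / 3.0

    bounds = [(0.1, 10.0), (1.0, 8.0), (1.0, 64.0)]
    res = differential_evolution(fault_probability, bounds, seed=1, tol=1e-8)
    print(res.x, res.fun)   # parameter set minimizing the surrogate fault probability
    ```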

  20. Qualitative model-based diagnosis using possibility theory

    NASA Technical Reports Server (NTRS)

    Joslyn, Cliff

    1994-01-01

    The potential for the use of possibility theory in the qualitative model-based diagnosis of spacecraft systems is described. The first sections of the paper briefly introduce the Model-Based Diagnostic (MBD) approach to spacecraft fault diagnosis; Qualitative Modeling (QM) methodologies; and the concepts of possibilistic modeling in the context of Generalized Information Theory (GIT). Then the necessary conditions for the applicability of possibilistic methods to qualitative MBD, and a number of potential directions for such an application, are described.

  1. Theoretical models for application in school health education research.

    PubMed

    Parcel, G S

    1984-01-01

    Theoretical models that may be useful to research studies in school health education are reviewed. Selected, well-defined theories include social learning theory, problem-behavior theory, the theory of reasoned action, communications theory, coping theory, social competence, and social and family theories. Also reviewed are multiple-theory models, including models of health-related behavior, the PRECEDE Framework, social-psychological approaches and the Activated Health Education Model. Two major reviews of teaching models are also discussed. The paper concludes with a brief outline of the general applications of theory to the field of school health education, including applications to basic research, development and design of interventions, program evaluation, and program utilization.

  2. Methodology and application of combined watershed and ground-water models in Kansas

    USGS Publications Warehouse

    Sophocleous, M.; Perkins, S.P.

    2000-01-01

    Increased irrigation in Kansas and other regions during the last several decades has caused serious water depletion, making the development of comprehensive strategies and tools to resolve such problems increasingly important. This paper makes the case for an intermediate complexity, quasi-distributed, comprehensive, large-watershed model, which falls between the fully distributed, physically based hydrological modeling system of the type of the SHE model and the lumped, conceptual rainfall-runoff modeling system of the type of the Stanford watershed model. This is achieved by integrating the quasi-distributed watershed model SWAT with the fully-distributed ground-water model MODFLOW. The advantage of this approach is the appreciably smaller input data requirements and the use of readily available data (compared to the fully distributed, physically based models), the statistical handling of watershed heterogeneities by employing the hydrologic-response-unit concept, and the significantly increased flexibility in handling stream-aquifer interactions, distributed well withdrawals, and multiple land uses. The mechanics of integrating the component watershed and ground-water models are outlined, and three real-world management applications of the integrated model from Kansas are briefly presented. Three different aspects of the integrated model are emphasized: (1) management applications of a Decision Support System for the integrated model (Rattlesnake Creek subbasin); (2) alternative conceptual models of spatial heterogeneity related to the presence or absence of an underlying aquifer with shallow or deep water table (Lower Republican River basin); and (3) the general nature of the integrated model linkage by employing a watershed simulator other than SWAT (Wet Walnut Creek basin). These applications demonstrate the practicality and versatility of this relatively simple and conceptually clear approach, making public acceptance of the integrated watershed modeling system much easier. This approach also enhances model calibration and thus the reliability of model results. (C) 2000 Elsevier Science B.V.

  3. A Bayesian Approach for Nonlinear Structural Equation Models with Dichotomous Variables Using Logit and Probit Links

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Cai, Jing-Heng

    2010-01-01

    Analysis of ordered binary and unordered binary data has received considerable attention in social and psychological research. This article introduces a Bayesian approach, which has several nice features in practical applications, for analyzing nonlinear structural equation models with dichotomous data. We demonstrate how to use the software…

  4. Forestry sector analysis for developing countries: issues and methods.

    Treesearch

    R.W. Haynes

    1993-01-01

    A satellite meeting of the 10th Forestry World Congress focused on the methods used for forest sector analysis and their applications in both developed and developing countries. The results of that meeting are summarized, and a general approach for forest sector modeling is proposed. The approach includes models derived from the existing...

  5. Evaluation of the MIKE SHE model for application in the Loess Plateau, China

    Treesearch

    Zhiqiang Zhang; Shenping Wang; Ge Sun; Steven G. McNulty; Huayong Zhang; Jianlao Li; Manliang Zhang; Eduard Klaghofer; Peter Strauss

    2008-01-01

    Quantifying the hydrologic responses to land use / land cover change and climate variability is essential for integrated sustainable watershed management in water limited regions such as the Loess Plateau in Northwestern China where an adaptive watershed management approach is being implemented. Traditional empirical modeling approach to quantifying the accumulated...

  6. Holland in Iceland Revisited: An Emic Approach to Evaluating U.S. Vocational Interest Models

    ERIC Educational Resources Information Center

    Einarsdottir, Sif; Rounds, James; Su, Rong

    2010-01-01

    An emic approach was used to test the structural validity and applicability of Holland's (1997) RIASEC (Realistic, Investigative, Artistic, Social, Enterprising, Conventional) model in Iceland. Archival data from the development of the Icelandic Interest Inventory (Einarsdottir & Rounds, 2007) were used in the present investigation. The data…

  7. A Test of a Linear Programming Model as an Optimal Solution to the Problem of Combining Methods of Reading Instruction

    ERIC Educational Resources Information Center

    Mills, James W.; And Others

    1973-01-01

    The study reported here tested an application of the Linear Programming Model at the Reading Clinic of Drew University. Results, while not conclusive, indicate that this approach yields greater gains in speed scores than a traditional approach for this population. (Author)

  8. Multiscale high-order/low-order (HOLO) algorithms and applications

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Chen, G.; Knoll, D. A.; Newman, C.; Park, H.; Taitano, W.; Willert, J. A.; Womeldorff, G.

    2017-02-01

    We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.

  9. An Alternative Approach to Strengthening the Connection of Dissemination and Improvement in Education.

    ERIC Educational Resources Information Center

    Teresa, Joseph G.

    Converting research findings into practical applications is an important concept to education but one that has been overlooked by the educational research community until recently. While a model for turning concepts into practical applications has been developed and field tested for human service practitioners, the model has not been directly…

  10. In Silico Models of Aerosol Delivery to the Respiratory Tract – Development and Applications

    PubMed Central

    Longest, P. Worth; Holbrook, Landon T.

    2011-01-01

    This review discusses the application of computational models to simulate the transport and deposition of inhaled pharmaceutical aerosols from the site of particle or droplet formation to deposition within the respiratory tract. Traditional one-dimensional (1-D) whole-lung models are discussed briefly followed by a more in-depth review of three-dimensional (3-D) computational fluid dynamics (CFD) simulations. The review of CFD models is organized into sections covering transport and deposition within the inhaler device, the extrathoracic (oral and nasal) region, conducting airways, and alveolar space. For each section, a general review of significant contributions and advancements in the area of simulating pharmaceutical aerosols is provided followed by a more in-depth application or case study that highlights the challenges, utility, and benefits of in silico models. Specific applications presented include the optimization of an existing spray inhaler, development of charge-targeted delivery, specification of conditions for optimal nasal delivery, analysis of a new condensational delivery approach, and an evaluation of targeted delivery using magnetic aerosols. The review concludes with recommendations on the need for more refined model validations, use of a concurrent experimental and CFD approach for developing aerosol delivery systems, and development of a stochastic individual path (SIP) model of aerosol transport and deposition throughout the respiratory tract. PMID:21640772

  11. Survey of DoD Profit Policy and Further Analysis of the Estimation Theory

    DTIC Science & Technology

    1999-12-01

    [Fragment recovered from the report's table of contents: Capital Asset Pricing Model; Application of the CAPM to the Weighted Guidelines Policy; Working Capital Employed; Facilities Capital; Effectiveness of Policy; Capital Asset Pricing Model of DoD Profit.] ...and Rogerson's approach to the weighted guidelines policy using a capital asset pricing model approach. Both models are examined in the

  12. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    PubMed

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models to full-scale systems essentially requires a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far, mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selections of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach to performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modelling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) are critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It is also assessed in what way these approaches can be further developed with a view to improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.

  13. Studying the flow dynamics of a karst aquifer system with an equivalent porous medium model.

    PubMed

    Abusaada, Muath; Sauter, Martin

    2013-01-01

    The modeling of groundwater flow in karst aquifers is a challenge due to the extreme heterogeneity of their hydraulic parameters and the duality in their discharge behavior, that is, the rapid response of highly conductive karst conduits and the delayed drainage of the low-permeability fractured matrix after recharge events. There are a number of different modeling approaches for the simulation of karst groundwater dynamics, applicable to different aquifer and modeling problem types, ranging from single continuum models to double continuum models to discrete and hybrid models. This study presents the application of an equivalent porous medium approach (EPM, a single continuum model) to construct a steady-state numerical flow model for an important karst aquifer, the Western Mountain Aquifer Basin (WMAB), shared by Israel and the West Bank, using MODFLOW2000. The WMAB was chosen because it is a well-constrained catchment with well-defined recharge and discharge components, and it therefore allows control of the modeling approach, a very rare opportunity in karst aquifer modeling. The model demonstrates the applicability of equivalent porous medium models for the simulation of karst systems, despite their large contrasts in hydraulic conductivity. As long as the simulated saturated volume is large enough to average out the local influence of karst conduits, and as long as transport velocities are not an issue, EPM models excellently simulate the observed head distribution. The model serves as a starting basis that will be used as a reference for developing a long-term dynamic model for the WMAB, starting from the pre-development period (i.e., the 1940s) up to date. © 2012, The Author(s). GroundWater © 2012, National Ground Water Association.

  14. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecasting of climate/weather and its impact on other environmental variables such as the hydrologic response to climate/weather. When using an SVM, the choice of kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., the radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of a radial basis kernel and a polynomial kernel for the forecast of monthly flowrate at two gaging stations using the SVM approach. The results indicate a significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
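
    A linear combination of kernels remains a valid kernel, so it can be passed directly to a standard SVM implementation; in the sketch below the mixing weight, gamma, degree and the simulated flow data are all illustrative, not the study's settings or gaging-station series.

    ```python
    # Hybrid-kernel SVM sketch: weighted sum of RBF and polynomial kernels.
    import numpy as np
    from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
    from sklearn.svm import SVR

    w = 0.7   # weight on the RBF part; (1 - w) goes to the polynomial part
    def hybrid_kernel(X, Y):
        return w * rbf_kernel(X, Y, gamma=0.5) + (1 - w) * polynomial_kernel(X, Y, degree=2)

    rng = np.random.default_rng(11)
    X = rng.uniform(0, 1, size=(120, 3))        # e.g., lagged flows / climate predictors
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=120)

    model = SVR(kernel=hybrid_kernel, C=10.0).fit(X[:100], y[:100])
    print(model.score(X[100:], y[100:]))        # R^2 on the held-out tail
    ```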

  15. Application of surface complexation models to anion adsorption by natural materials

    USDA-ARS?s Scientific Manuscript database

    Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...

  16. QSAR modeling of GPCR ligands: methodologies and examples of applications.

    PubMed

    Tropsha, A; Wang, S X

    2006-01-01

    GPCR ligands represent not only one of the major classes of current drugs but also the major continuing source of novel potent pharmaceutical agents. Because 3D structures of GPCRs as determined by experimental techniques are still unavailable, ligand-based drug discovery methods remain the major computational molecular modeling approaches to the analysis of growing data sets of tested GPCR ligands. This paper presents an overview of modern Quantitative Structure Activity Relationship (QSAR) modeling. We discuss the critical issue of model validation and the strategy for applying successfully validated QSAR models to virtual screening of available chemical databases. We present several examples of applications of validated QSAR modeling approaches to GPCR ligands. We conclude with comments on exciting developments in the QSAR modeling of GPCR ligands that focus on the study of emerging data sets of compounds with dual or even multiple activities against two or more GPCRs.

  17. Fractional Order Modeling of Atmospheric Turbulence - A More Accurate Modeling Methodology for Aero Vehicles

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2014-01-01

    The presentation covers a recently developed methodology to model atmospheric turbulence as a disturbance for aero-vehicle gust loads and for controls development, such as flutter and inlet shock position control. The approach models atmospheric turbulence in its natural fractional-order form, which provides more accuracy than traditional methods like the Dryden model, especially for high-speed vehicles. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation and the methodology utilized to develop the fractional-order atmospheric turbulence modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.

  18. Worklist handling in workflow-enabled radiological application systems

    NASA Astrophysics Data System (ADS)

    Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim; von Berg, Jens

    2000-05-01

    For the next generation of integrated information systems for health care applications, more emphasis has to be put on systems which, by design, support the reduction of cost, the increase in efficiency and the improvement of the quality of services. A substantial contribution to this will be the modeling, optimization, automation and enactment of processes in health care institutions. One of the perceived key success factors for the system integration of processes will be the application of workflow management, with workflow management systems as key technology components. In this paper we address workflow management in radiology. We focus on an important aspect of workflow management, the generation and handling of worklists, which provide workflow participants automatically with work items that reflect tasks to be performed. The display of worklists and the functions associated with work items are the visible part, for the end users, of an information system using a workflow management approach. Appropriate worklist design and implementation will influence the user friendliness of a system and will largely influence work efficiency. Technically, in current imaging department information system environments (modality-PACS-RIS installations), a data-driven approach has been taken: worklists -- if present at all -- are generated from filtered views on application databases. In a future workflow-based approach, worklists will be generated by autonomous workflow services based on explicit process models and organizational models. This process-oriented approach will provide us with an integral view of entire health care processes or sub-processes. The paper describes the basic mechanisms of this approach and summarizes its benefits.

  19. Final Report, “Exploiting Global View for Resilience”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chien, Andrew

    2017-03-29

    Final technical report for the "Exploiting Global View for Resilience" project. The GVR project aims to create a new approach to portable, resilient applications. The GVR approach builds on a global view data model, adding versioning (multi-version), user control of timing and rate (multi-stream), and flexible cross-layer error signalling and recovery. With a versioned array as a portable abstraction, GVR enables application programmers to exploit deep scientific and application code insights to manage resilience (and its overhead) in a flexible, portable fashion.
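
    A toy sketch of the "versioned array" abstraction in isolation: snapshots are taken at application-chosen points and rolled back to after an error is detected. This only illustrates the idea; the real GVR library is a distributed C/C++ runtime, not reproduced here.

    ```python
    # Toy versioned-array abstraction in the spirit of GVR.
    import numpy as np

    class VersionedArray:
        def __init__(self, shape):
            self.data = np.zeros(shape)
            self.versions = []

        def snapshot(self):
            """Application controls timing/rate of versioning (multi-version idea)."""
            self.versions.append(self.data.copy())
            return len(self.versions) - 1

        def restore(self, version):
            """Recovery: roll state back to a known-good version."""
            self.data = self.versions[version].copy()

    arr = VersionedArray((4,))
    arr.data[:] = 1.0
    v = arr.snapshot()
    arr.data[:] = np.nan       # simulate a detected corruption
    arr.restore(v)
    print(arr.data)            # back to the known-good state
    ```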

  20. Reservoir Modeling by Data Integration via Intermediate Spaces and Artificial Intelligence Tools in MPS Simulation Frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmadi, Rouhollah, E-mail: rouhollahahmadi@yahoo.com; Khamehchi, Ehsan

    Conditioning stochastic simulations is very important in many geostatistical applications that call for the introduction of nonlinear and multiple-point data in reservoir modeling. Here, a new methodology is proposed for the incorporation of different data types into multiple-point statistics (MPS) simulation frameworks. Unlike previous techniques that call for an approximate forward model (filter) for integration of secondary data into geologically constructed models, the proposed approach develops an intermediate space onto which all the primary and secondary data are easily mapped. Definition of the intermediate space, as may be achieved via application of artificial intelligence tools like neural networks and fuzzy inference systems, eliminates the need for using filters as in previous techniques. The applicability of the proposed approach in conditioning MPS simulations to static and geologic data is verified by modeling a real example of discrete fracture networks using conventional well-log data. The training patterns are well reproduced in the realizations, while the model is also consistent with the map of secondary data.

  1. Bayesian hierarchical functional data analysis via contaminated informative priors.

    PubMed

    Scarpa, Bruno; Dunson, David B

    2009-09-01

    A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.

  2. Nonlinear model predictive control of a wave energy converter based on differential flatness parameterisation

    NASA Astrophysics Data System (ADS)

    Li, Guang

    2017-01-01

    This paper presents a fast constrained optimization approach tailored for nonlinear model predictive control of wave energy converters (WECs). The advantage of this approach lies in its exploitation of the differential flatness of the WEC model. This can reduce the dimension of the resulting nonlinear programming problem (NLP) derived from the continuous constrained optimal control of the WEC using a pseudospectral method. The alleviation of the computational burden using this approach helps to promote an economic implementation of the nonlinear model predictive control strategy for WEC control problems. The method is applicable to nonlinear WEC models, nonconvex objective functions and nonlinear constraints, which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of this approach.

  3. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection.

    PubMed

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-08-28

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to account for the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications.
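
    The SR effect itself is easy to reproduce numerically. The Euler-Maruyama sketch below uses the overdamped (one-dimensional) Duffing form for brevity, dx = (a x - b x^3 + A sin(w t)) dt + sqrt(2D) dW, with illustrative parameters rather than the paper's two-dimensional GPASR settings.

    ```python
    # Euler-Maruyama simulation of noise-enhanced response in a bistable system.
    import numpy as np

    a, b, A, w, D = 1.0, 1.0, 0.3, 0.05, 0.25
    dt, n = 0.01, 200_000
    rng = np.random.default_rng(5)

    x = np.empty(n); x[0] = -1.0       # start in one potential well
    t = np.arange(n) * dt
    for i in range(n - 1):
        drift = a * x[i] - b * x[i] ** 3 + A * np.sin(w * t[i])
        x[i + 1] = x[i] + drift * dt + np.sqrt(2 * D * dt) * rng.normal()

    # crude SR indicator: spectral amplitude of x at the drive frequency
    amp = np.abs(np.sum(x * np.exp(-1j * w * t)) * dt)
    print(f"response amplitude at drive frequency: {amp:.1f}")
    ```

    Sweeping the noise intensity D and watching this amplitude peak at an intermediate value is the classic SR signature that the parameter-adjustment rules exploit.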

  4. Survey of methods for soil moisture determination

    NASA Technical Reports Server (NTRS)

    Schmugge, T. J.; Jackson, T. J.; Mckim, H. L.

    1979-01-01

    Existing and proposed methods for soil moisture determination are discussed. These include: (1) in situ investigations including gravimetric, nuclear, and electromagnetic techniques; (2) remote sensing approaches that use the reflected solar, thermal infrared, and microwave portions of the electromagnetic spectrum; and (3) soil physics models that track the behavior of water in the soil in response to meteorological inputs (precipitation) and demands (evapotranspiration). The capacities of these approaches to satisfy various user needs for soil moisture information vary from application to application, but a conceptual scheme for merging these approaches into integrated systems to provide soil moisture information is proposed that has the potential for meeting various application requirements.

  5. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As the development of structural identifiability techniques for mixed-effects models has received very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was not previously possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Stirling engine - Approach for long-term durability assessment

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Bartolotta, Paul A.; Halford, Gary R.; Freed, Alan D.

    1992-01-01

    The approach employed by NASA Lewis for the long-term durability assessment of the Stirling engine hot-section components is summarized. The approach consists of: preliminary structural assessment; development of a viscoplastic constitutive model to accurately determine material behavior under high-temperature thermomechanical loads; an experimental program to characterize material constants for the viscoplastic constitutive model; finite-element thermal analysis and structural analysis using a viscoplastic constitutive model to obtain stress/strain/temperature at the critical location of the hot-section components for life assessment; and development of a life prediction model applicable for long-term durability assessment at high temperatures. The approach should aid in the provision of long-term structural durability and reliability of Stirling engines.

  7. Model-based analyses: Promises, pitfalls, and example applications to the study of cognitive control

    PubMed Central

    Mars, Rogier B.; Shea, Nicholas J.; Kolling, Nils; Rushworth, Matthew F. S.

    2011-01-01

    We discuss a recent approach to investigating cognitive control, which has the potential to deal with some of the challenges inherent in this endeavour. In a model-based approach, the researcher defines a formal, computational model that performs the task at hand and whose performance matches that of a research participant. The internal variables in such a model might then be taken as proxies for latent variables computed in the brain. We discuss the potential advantages of such an approach for the study of the neural underpinnings of cognitive control and its pitfalls, and we make explicit the assumptions underlying the interpretation of data obtained using this approach. PMID:20437297

  8. Development and application of air quality models at the US ...

    EPA Pesticide Factsheets

    Overview of the development and application of air quality models at the U.S. EPA, particularly focused on the development and application of the Community Multiscale Air Quality (CMAQ) model developed within the Computation Exposure Division (CED) of the National Exposure Research Laboratory (NERL). This presentation will provide a simple overview of air quality model development and application geared toward a non-technical student audience. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.

  9. A dynamical systems model for nuclear power plant risk

    NASA Astrophysics Data System (ADS)

    Hess, Stephen Michael

    The recent transition to an open access generation marketplace has forced nuclear plant operators to become much more cost conscious and focused on plant performance. Coincidentally, the regulatory perspective also is in a state of transition from a command and control framework to one that is risk-informed and performance-based. Due to these structural changes in the economics and regulatory system associated with commercial nuclear power plant operation, there is an increased need for plant management to explicitly manage nuclear safety risk. Application of probabilistic risk assessment techniques to model plant hardware has provided a significant contribution to understanding the potential initiating events and equipment failures that can lead to core damage accidents. Application of the lessons learned from these analyses has supported improved plant operation and safety over the previous decade. However, this analytical approach has not been nearly as successful in addressing the impact of plant processes and management effectiveness on the risks of plant operation. Thus, the research described in this dissertation presents a different approach to address this issue. Here we propose a dynamical model that describes the interaction of important plant processes among themselves and their overall impact on nuclear safety risk. We first provide a review of the techniques that are applied in a conventional probabilistic risk assessment of commercially operating nuclear power plants and summarize the typical results obtained. The limitations of the conventional approach and the status of research previously performed to address these limitations also are presented. Next, we present the case for the application of an alternative approach using dynamical systems theory. This includes a discussion of previous applications of dynamical models to study other important socio-economic issues. Next, we review the analytical techniques that are applicable to analysis of these models. Details of the development of the mathematical risk model are presented. This includes discussion of the processes included in the model and the identification of significant interprocess interactions. This is followed by analysis of the model that demonstrates that its dynamical evolution displays characteristics that have been observed at commercially operating plants. The model is analyzed using the previously described techniques from dynamical systems theory. From this analysis, several significant insights are obtained with respect to the effective control of nuclear safety risk. Finally, we present conclusions and recommendations for further research.

  10. A general U-block model-based design procedure for nonlinear polynomial control systems

    NASA Astrophysics Data System (ADS)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model first appeared (not rigorously defined) in another journal paper by the first author, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper presents the next milestone: using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems for smooth nonlinear plants/processes described by polynomial models. To analyse feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in their own ad hoc applications. Formally, this is the first paper to present U-model-oriented control system design in a formal way and to study the associated properties and theorems; previous publications have, in the main, been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, moving from the intuitive/heuristic stage to rigorous/formal/comprehensive studies.

  11. Soft Tissue Structure Modelling for Use in Orthopaedic Applications and Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Audenaert, E. A.; Mahieu, P.; van Hoof, T.; Pattyn, C.

    2009-12-01

    We present our methodology for the three-dimensional anatomical and geometrical description of soft tissues, relevant for orthopaedic surgical applications and musculoskeletal biomechanics. The technique involves the segmentation and geometrical description of muscles and neurovascular structures from high-resolution computed tomography scanning for the reconstruction of generic anatomical models. These models can be used for the quantitative interpretation of anatomical and biomechanical aspects of different soft tissue structures. This approach should allow the use of these data in other application fields, such as musculoskeletal modelling, simulations for radiation therapy, and databases for use in minimally invasive, navigated and robotic surgery.

  12. Modeling languages for biochemical network simulation: reaction vs equation based approaches.

    PubMed

    Wiechert, Wolfgang; Noack, Stephan; Elsheikh, Atya

    2010-01-01

    Biochemical network modeling and simulation is an essential task in any systems biology project. The systems biology markup language (SBML) was established as a standardized model exchange language for mechanistic models. A specific strength of SBML is that numerous tools for formulating, processing, simulating and analysing models are freely available. Interestingly, in the field of multidisciplinary simulation, the problem of model exchange between different simulation tools arose much earlier. Several general modeling languages, such as Modelica, were developed in the 1990s. Modelica enables an equation-based, modular specification of arbitrary hierarchical differential algebraic equation models. Moreover, libraries for special application domains can be rapidly developed. This contribution compares the reaction-based approach of SBML with the equation-based approach of Modelica and explains the specific strengths of both tools. Several biological examples illustrating essential SBML and Modelica concepts are given. The chosen criteria for tool comparison are flexibility of constraint specification, different modeling flavors, and hierarchical, modular and multidisciplinary modeling. Additionally, support for spatially distributed systems, event handling and network analysis features is discussed. As a major result, it is shown that the choice of modeling tool has a strong impact on the expressivity of the specified models and also depends strongly on the requirements of the application context.
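
    To make the reaction-based vs. equation-based distinction concrete, the following Python sketch (not SBML or Modelica, and with a made-up two-reaction chain) shows how a reaction-level specification induces the equation-level ODE system that an equation-based tool would state directly.

```python
# Minimal sketch: a reaction-based specification and the equation-based
# ODE view it induces. Species, rates, and stoichiometry are invented.
import numpy as np
from scipy.integrate import solve_ivp

# Reaction-based: each reaction lists stoichiometry and a rate law.
# Hypothetical chain S -> P -> Q with mass-action kinetics.
reactions = [
    {"stoich": {"S": -1, "P": +1}, "rate": lambda c: 0.8 * c["S"]},
    {"stoich": {"P": -1, "Q": +1}, "rate": lambda c: 0.3 * c["P"]},
]
species = ["S", "P", "Q"]

def rhs(t, y):
    # Equation-based view: dy/dt assembled from the reaction list.
    c = dict(zip(species, y))
    dydt = np.zeros(len(species))
    for rxn in reactions:
        v = rxn["rate"](c)
        for sp, nu in rxn["stoich"].items():
            dydt[species.index(sp)] += nu * v
    return dydt

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0])
print(sol.y[:, -1])  # concentrations at t = 20
```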

  13. Controlling aliased dynamics in motion systems? An identification for sampled-data control approach

    NASA Astrophysics Data System (ADS)

    Oomen, Tom

    2014-07-01

    Sampled-data control systems occasionally exhibit aliased resonance phenomena within the control bandwidth. The aim of this paper is to investigate these aliased dynamics, with application to a high-performance industrial nano-positioning machine. This necessitates a full sampled-data control design approach, since aliased dynamics endanger both the at-sample performance and the intersample behaviour. The proposed framework comprises both system identification and sampled-data control. In particular, the sampled-data control objective necessitates models that encompass the intersample behaviour, i.e., ideally continuous-time models. Application of the proposed approach to an industrial wafer stage system provides thorough insight and new control design guidelines for controlling aliased dynamics.

  14. Multiscale approach for the construction of equilibrated all-atom models of a poly(ethylene glycol)-based hydrogel

    PubMed Central

    Li, Xianfeng; Murthy, N. Sanjeeva; Becker, Matthew L.; Latour, Robert A.

    2016-01-01

    A multiscale modeling approach is presented for the efficient construction of an equilibrated all-atom model of a cross-linked poly(ethylene glycol) (PEG)-based hydrogel using the all-atom polymer consistent force field (PCFF). The final equilibrated all-atom model was built with a systematic simulation toolset consisting of three consecutive parts: (1) building a global cross-linked PEG-chain network at experimentally determined cross-link density using an on-lattice Monte Carlo method based on the bond fluctuation model, (2) recovering the local molecular structure of the network by transitioning from the lattice model to an off-lattice coarse-grained (CG) model parameterized from PCFF, followed by equilibration using high performance molecular dynamics methods, and (3) recovering the atomistic structure of the network by reverse mapping from the equilibrated CG structure, hydrating the structure with explicitly represented water, followed by final equilibration using PCFF parameterization. The developed three-stage modeling approach has application to a wide range of other complex macromolecular hydrogel systems, including the integration of peptide, protein, and/or drug molecules as side-chains within the hydrogel network for the incorporation of bioactivity for tissue engineering, regenerative medicine, and drug delivery applications. PMID:27013229

  15. Automated Design Space Exploration with Aspen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spafford, Kyle L.; Vetter, Jeffrey S.

    Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.
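
    As a rough illustration of casting design space exploration as a nonlinear program, the sketch below uses scipy.optimize on a hypothetical runtime model with a memory constraint; the cost model, parameters, and numbers are invented for illustration and are not Aspen's.

```python
# Hedged sketch of design-space exploration as a nonlinear program,
# in the spirit of the model -> NLP -> solver workflow. All numbers
# and the cost/constraint models below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def runtime(x):
    n_nodes, tile = x
    flops = 1e12                            # hypothetical total work
    per_node = 5e10 * (1 + 0.02 * tile)     # effective flop rate per node
    comm = 1e-3 * n_nodes + 0.5 / tile      # hypothetical overhead term
    return flops / (per_node * n_nodes) + comm

# Constraint: per-node memory footprint must fit (hypothetical model;
# nonnegative value means feasible).
mem_ok = lambda x: 64.0 - (512.0 / x[0]) * (1 + 0.1 * x[1])

res = minimize(runtime, x0=[16, 4],
               bounds=[(1, 1024), (1, 64)],
               constraints=[{"type": "ineq", "fun": mem_ok}])
print(res.x, res.fun)  # best (nodes, tile) and predicted runtime
```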

  16. Automated Design Space Exploration with Aspen

    DOE PAGES

    Spafford, Kyle L.; Vetter, Jeffrey S.

    2015-01-01

    Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.

  17. Reservoir studies with geostatistics to forecast performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, R.W.; Behrens, R.A.; Emanuel, A.S.

    1991-05-01

    In this paper, example geostatistics and streamtube applications are presented for a waterflood and a CO2 flood in two low-permeability sandstone reservoirs. The hybrid approach of combining fine vertical resolution in cross-sectional models with streamtubes resulted in models that showed water channeling and provided realistic performance estimates. The results indicate that the combination of detailed geostatistical cross sections and fine-grid streamtube models offers a systematic approach to realistic performance forecasts.

  18. War-gaming application for future space systems acquisition part 1: program and technical baseline war-gaming modeling and simulation approaches

    NASA Astrophysics Data System (ADS)

    Nguyen, Tien M.; Guillen, Andy T.

    2017-05-01

    This paper describes static Bayesian game models with "Pure" and "Mixed" games for the development of an optimum Program and Technical Baseline (PTB) solution for affordable acquisition of future space systems. The paper discusses System Engineering (SE) frameworks and analytical and simulation modeling approaches for developing the optimum PTB solutions from both the government and contractor perspectives.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaunak, S.K.; Soni, B.K.

    With research interests shifting away from primarily military or industrial applications to more environmental applications, ocean modelling has become an increasingly popular and exciting area of research. This paper presents a CIPS (Computational Field Simulation) system customized for the solution of oceanographic problems. The system deals primarily with the generation of simple yet efficient grids for coastal areas. The two primary grid approaches are both structured in methodology. The first is a standard approach, used in such popular grid generation software as GENIE++, EAGLEVIEW, and TIGER, in which the user defines boundaries via points, lines, or curves, varies the distribution of points along these boundaries, and then creates the interior grid. The second approach allows the user to interactively select points on the screen to form the boundary curves and then create the interior grid from these spline curves. The program has been designed with the needs of the ocean modeller in mind, so that results can be obtained in a timely yet elegant manner. The modeller performs four basic steps in using the program. First, he selects a region of interest from a popular database. Then, he creates a grid for that region. Next, he sets up boundary and input conditions and runs a circulation model. Finally, the modeller visualizes the output.

  20. The potential application of the blackboard model of problem solving to multidisciplinary design

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.

    1989-01-01

    Problems associated with the sequential approach to multidisciplinary design are discussed. A blackboard model is suggested as a potential tool for implementing the multilevel decomposition approach to overcome these problems. The blackboard model serves as a global database for the solution, with each discipline acting as a knowledge source for updating the solution. With this approach, it is possible for engineers to improve coordination, communication, and cooperation in the conceptual design process, allowing them to achieve a design that is closer to the interdisciplinary optimum.

  1. Approaches in highly parameterized inversion: bgaPEST, a Bayesian geostatistical approach implementation with PEST: documentation and instructions

    USGS Publications Warehouse

    Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.

    2013-01-01

    The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory, in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used: a geostatistical autocorrelation function enforces structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed-parameter application.
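
    The core linear-Gaussian computation behind such a geostatistical inversion can be sketched in a few lines; the sensitivities, noise level, and correlation length below are invented, and bgaPEST itself additionally estimates structural parameters and interfaces with PEST.

```python
# Hedged sketch of a Bayesian geostatistical inversion step: a linear
# forward model plus an exponential autocovariance prior that enforces
# smooth structure on a distributed parameter field. Illustrative only.
import numpy as np

n, m = 30, 100                       # observations, distributed parameters
x = np.linspace(0.0, 1.0, m)
rng = np.random.default_rng(0)

G = rng.normal(size=(n, m)) / np.sqrt(m)     # hypothetical sensitivities
true = np.sin(2 * np.pi * x)                 # hypothetical "true" field
d = G @ true + 0.01 * rng.normal(size=n)     # noisy observations

# Exponential covariance: smoothness prior with correlation length 0.1.
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)
R = 0.01**2 * np.eye(n)                      # observation error covariance

# Posterior mean for a zero-mean Gaussian prior:
#   m_hat = Q G^T (G Q G^T + R)^{-1} d
m_hat = Q @ G.T @ np.linalg.solve(G @ Q @ G.T + R, d)
print(np.sqrt(np.mean((m_hat - true) ** 2)))  # recovery error
```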

  2. Systemic Analysis Approaches for Air Transportation

    NASA Technical Reports Server (NTRS)

    Conway, Sheila

    2005-01-01

    Air transportation system designers have had only limited success using traditional operations research and parametric modeling approaches in their analyses of innovations. They need a systemic methodology for modeling safety-critical infrastructure that is comprehensive, objective, and sufficiently concrete, yet simple enough to be used with reasonable investment. The methodology must also be amenable to quantitative analysis so that issues of system safety and stability can be rigorously addressed. However, air transportation has proven itself an extensive, complex system whose behavior is difficult to describe, much less predict. A wide range of system analysis techniques is available, but some are more appropriate for certain applications than others. Specifically in the area of complex system analysis, the literature suggests that both agent-based models and network analysis techniques may be useful. This paper discusses the theoretical basis for each approach in these applications and explores their historic and potential further use for air transportation analysis.

  3. Jointly modeling longitudinal proportional data and survival times with an application to the quality of life data in a breast cancer trial.

    PubMed

    Song, Hui; Peng, Yingwei; Tu, Dongsheng

    2017-04-01

    Motivated by the joint analysis of longitudinal quality-of-life data and recurrence-free survival times from a cancer clinical trial, we present in this paper two approaches to jointly model longitudinal proportional measurements, which are confined to a finite interval, and survival data. Both approaches assume a proportional hazards model for the survival times. For the longitudinal component, the first approach applies the classical linear mixed model to logit-transformed responses, while the second directly models the responses using a simplex distribution. A semiparametric method based on a penalized joint likelihood generated by the Laplace approximation is derived to fit the joint model defined by the second approach. The proposed procedures are evaluated in a simulation study and applied to the analysis of the breast cancer data that motivated this research.

  4. A screening-level modeling approach to estimate nitrogen loading and standard exceedance risk, with application to the Tippecanoe River watershed, Indiana

    EPA Science Inventory

    This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess numerical nutrient standard exceedance risk of surface waters leading to potential classification as impaired for designated use. It can also be used to explor...

  5. Teaching Higher Order Thinking in the Introductory MIS Course: A Model-Directed Approach

    ERIC Educational Resources Information Center

    Wang, Shouhong; Wang, Hai

    2011-01-01

    One vision of education evolution is to change the modes of thinking of students. Critical thinking, design thinking, and system thinking are higher order thinking paradigms that are specifically pertinent to business education. A model-directed approach to teaching and learning higher order thinking is proposed. An example of application of the…

  6. Application of a number-conserving boson expansion theory to Ginocchio's SO(8) model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.h.; Pedrocchi, V.G.; Tamura, T.

    1986-05-01

    A boson expansion theory based on a number-conserving quasiparticle approach is applied to Ginocchio's SO(8) fermion model. Energy spectra and E2 transition rates calculated by using this new boson mapping are presented and compared against the exact fermion values. A comparison with other boson approaches is also given.

  7. Information Retrieval and Graph Analysis Approaches for Book Recommendation.

    PubMed

    Benkoussas, Chahinez; Bellot, Patrice

    2015-01-01

    A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models, probabilistic ones such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach applied to a related-document network comprising social links: we call the network constructed from documents and the social information provided by each of them a Directed Graph of Documents (DGD). Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrates that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments.
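
    A minimal sketch of the reranking idea, interpolating retrieval-model scores with PageRank over a small made-up directed graph of documents (the graph, scores, and interpolation weight are illustrative, not those of the paper):

```python
# Hedged sketch: combine retrieval scores with PageRank authority
# computed on a directed graph of documents. Data are invented.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("bookA", "bookB"), ("bookB", "bookC"),
                  ("bookC", "bookA"), ("bookD", "bookA")])

retrieval_score = {"bookA": 2.1, "bookB": 1.7, "bookC": 0.9, "bookD": 0.4}
pr = nx.pagerank(G, alpha=0.85)

# Interpolate retrieval-model scores with graph authority (weight assumed).
lam = 0.7
combined = {d: lam * retrieval_score.get(d, 0.0) + (1 - lam) * pr[d]
            for d in G.nodes}
for doc in sorted(combined, key=combined.get, reverse=True):
    print(doc, round(combined[doc], 3))
```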

  8. Information Retrieval and Graph Analysis Approaches for Book Recommendation

    PubMed Central

    Benkoussas, Chahinez; Bellot, Patrice

    2015-01-01

    A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models, probabilistic ones such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach applied to a related-document network comprising social links: we call the network constructed from documents and the social information provided by each of them a Directed Graph of Documents (DGD). Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrates that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments. PMID:26504899

  9. Unified control/structure design and modeling research

    NASA Technical Reports Server (NTRS)

    Mingori, D. L.; Gibson, J. S.; Blelloch, P. A.; Adamian, A.

    1986-01-01

    To demonstrate the applicability of control theory for distributed systems to large flexible space structures, research was focused on a model of a space antenna consisting of a rigid hub, flexible ribs, and a mesh reflecting surface. The space antenna model used is discussed along with the finite element approximation of the distributed model. The basic control problem is to design an optimal or near-optimal compensator to suppress the linear vibrations and rigid-body displacements of the structure. The application of infinite-dimensional Linear Quadratic Gaussian (LQG) control theory to flexible structures is discussed. Two basic approaches for robustness enhancement were investigated: loop transfer recovery and sensitivity optimization. A third approach, synthesized from elements of these two, is currently under development. The control-driven finite element approximation of flexible structures is discussed, and three sets of finite element basis vectors for computing functional control gains are compared. The possibility of constructing a finite element scheme that approximates the infinite-dimensional Hamiltonian system directly, instead of indirectly, is discussed.

  10. Constraint-Based Local Search for Constrained Optimum Paths Problems

    NASA Astrophysics Data System (ADS)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have traditionally been approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility that are at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees for finding high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  11. Incremental checking of Master Data Management model based on contextual graphs

    NASA Astrophysics Data System (ADS)

    Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan

    2015-10-01

    The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, applied to the Master Data Management (MDM) field, where models are represented by XML Schema. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise model errors and to optimise the process of model checking. To this end, the notion of validation contexts is introduced, allowing the verification of data model views. Description logics specify the constraints that the models have to satisfy. An experimentation of the approach is presented through an application developed in the ArgoUML IDE.

  12. Stage-by-Stage and Parallel Flow Path Compressor Modeling for a Variable Cycle Engine

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Cheng, Larry

    2015-01-01

    This paper covers the development of stage-by-stage and parallel flow path compressor modeling approaches for a Variable Cycle Engine. The stage-by-stage compressor modeling approach is an extension of a technique for lumped volume dynamics and performance characteristic modeling. It was developed to improve the accuracy of axial compressor dynamics over lumped volume dynamics modeling. The stage-by-stage compressor model presented here is formulated into a parallel flow path model that includes both axial and rotational dynamics. This is done to enable the study of compressor and propulsion system dynamic performance under flow distortion conditions. The approaches utilized here are generic and should be applicable for the modeling of any axial flow compressor design.

  13. An ontology-based semantic configuration approach to constructing Data as a Service for enterprises

    NASA Astrophysics Data System (ADS)

    Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi

    2016-03-01

    To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution that facilitates data-centric application design in highly complex and large-scale situations, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method for implementing data-centric applications.

  14. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  15. A fuzzy logic approach to modeling a vehicle crash test

    NASA Astrophysics Data System (ADS)

    Pawlus, Witold; Karimi, Hamid Reza; Robbersmyr, Kjell G.

    2013-03-01

    This paper presents an application of a fuzzy approach to vehicle crash modeling. A typical vehicle-to-pole collision is described, and the kinematics of a car involved in this type of crash event are thoroughly characterized. The basics of fuzzy set theory and modeling principles based on the fuzzy logic approach are presented. In particular, exceptional attention is paid to explaining the methodology for creating a fuzzy model of a vehicle collision. Furthermore, the simulation results are presented and compared to the original vehicle's kinematics. We conclude by identifying the factors that influence the accuracy of the fuzzy model's output and showing how they can be adjusted to improve the model's fidelity.
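
    A toy one-input Mamdani-style fuzzy model conveys the mechanics the abstract describes (membership functions, rules, defuzzification); the crush-depth and deceleration variables, membership shapes, and rules below are invented for illustration.

```python
# Hedged sketch of a one-input Mamdani-style fuzzy model: triangular
# membership functions, two rules, centroid defuzzification.
# Variables (crush depth -> deceleration) are illustrative only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

y = np.linspace(0.0, 60.0, 601)          # output universe: decel in g

def infer(crush_m):
    # Rule 1: small crush -> low deceleration; Rule 2: large -> high.
    w_small = tri(crush_m, 0.0, 0.1, 0.5)
    w_large = tri(crush_m, 0.2, 0.7, 1.0)
    agg = np.maximum(np.minimum(w_small, tri(y, 0, 10, 25)),
                     np.minimum(w_large, tri(y, 20, 45, 60)))
    return float((agg * y).sum() / agg.sum())   # discrete centroid

print(infer(0.15), infer(0.6))   # decel estimates for two crush depths
```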

  16. Practical Results from the Application of Model Checking and Test Generation from UML/SysML Models of On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Faria, J. M.; Mahomad, S.; Silva, N.

    2009-05-01

    The deployment of complex safety-critical applications requires rigorous techniques and powerful tools for both the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software, and arguably, turning to them can bring significant benefits to such processes, along with new challenges. This paper presents the results of a research project in which we extended current V&V methodologies to apply to UML/SysML models, aiming to answer validation-related demands. Two quite different but complementary approaches were investigated: (i) model checking and (ii) the extraction of robustness test cases from the same models. These two approaches do not overlap, and when combined they provide a wider-reaching model/design validation capability than either alone, thus offering improved safety assurance. Results are very encouraging, even though they either fell short of the desired outcome, as for model checking, or are not yet fully mature, as for robustness test-case extraction. For model checking, we verified that the automatic model validation process can become fully operational, and even expand in scope, once tool vendors (inevitably) help improve the XMI standard interoperability situation. For the robustness test-case extraction methodology, the early approach produced interesting results but needs further systematisation and consolidation to produce results more predictably and to reduce reliance on expert heuristics. Finally, further improvements and innovation research projects were immediately apparent for both approaches: circumventing current limitations in XMI interoperability on the one hand, and bringing test-case specification onto the same graphical level as the models themselves and then automating the generation of executable test cases from standard UML notation on the other.

  17. Structured Hypermedia Application Development Model (SHADM): A structured Model for Technical Documentation Application Design

    DTIC Science & Technology

    1991-12-01

    …effective (19:15). Figure 2 details a flowchart of the basic steps in prototyping. The basic concept behind prototyping is to quickly produce a working… One approach to overcoming this is to structure the document relative to the experience level of the user (14:49). A "novice" or beginner would…

  18. Single walled boron nitride nanotube-based biosensor: an atomistic finite element modelling approach.

    PubMed

    Panchal, Mitesh B; Upadhyay, Sanjay H

    2014-09-01

    The unprecedented dynamic characteristics of nanoelectromechanical systems make them suitable for nanoscale mass sensing applications. Owing to their superior biocompatibility, boron nitride nanotubes (BNNTs) are increasingly used for such applications. In this study, the feasibility of a single-walled BNNT (SWBNNT)-based biosensor is explored. A molecular structural mechanics-based finite element (FE) modelling approach is used to analyse the dynamic behaviour of SWBNNT-based biosensors, and the application of SWBNNT-based sensing of zeptogram-level masses is reported. The effects of nanotube size, in terms of length, and of different chiral atomic structures of the SWBNNT are analysed for their influence on sensitivity. The vibrational behaviour of the SWBNNT is analysed for higher-order modes of vibration to identify the intermediate landing position of a biological object at the zeptogram scale. The molecular structural mechanics-based FE modelling approach proves very effective at incorporating different chiralities of the atomic structures, and different boundary conditions can be simulated to analyse the dynamic behaviour of the SWBNNT-based mass sensor. The study demonstrates the potential of the SWBNNT as a nanobiosensor capable of zeptogram-level mass sensing.

  19. Evaluation of Structural Robustness against Column Loss: Methodology and Application to RC Frame Buildings

    PubMed Central

    Bao, Yihai; Main, Joseph A.; Noh, Sam-Young

    2017-01-01

    A computational methodology is presented for evaluating structural robustness against column loss. The methodology is illustrated through application to reinforced concrete (RC) frame buildings, using a reduced-order modeling approach for three-dimensional RC framing systems that includes the floor slabs. Comparisons with high-fidelity finite-element model results are presented to verify the approach. Pushdown analyses of prototype buildings under column loss scenarios are performed using the reduced-order modeling approach, and an energy-based procedure is employed to account for the dynamic effects associated with sudden column loss. Results obtained using the energy-based approach are found to be in good agreement with results from direct dynamic analysis of sudden column loss. A metric for structural robustness is proposed, calculated by normalizing the ultimate capacities of the structural system under sudden column loss by the applicable service-level gravity loading and by evaluating the minimum value of this normalized ultimate capacity over all column removal scenarios. The procedure is applied to two prototype 10-story RC buildings, one employing intermediate moment frames (IMFs) and the other employing special moment frames (SMFs). The SMF building, with its more stringent seismic design and detailing, is found to have greater robustness. PMID:28890599

  20. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy

    PubMed Central

    Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.

    2016-01-01

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models. PMID:27876821

  1. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy.

    PubMed

    Knijnenburg, Theo A; Klau, Gunnar W; Iorio, Francesco; Garnett, Mathew J; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F A

    2016-11-23

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present 'Logic Optimization for Binary Input to Continuous Output' (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models.
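
    A brute-force toy version of the idea can be sketched in a few lines: score small AND/OR formulas of binary features by how well a two-level step model explains the continuous response. LOBICO itself solves this as a formal optimization problem; the data here are synthetic.

```python
# Hedged toy version of the LOBICO idea: exhaustively score small
# logic formulas over binary features against a continuous response.
import itertools
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 5)).astype(bool)   # binary "mutations"
drug = -1.5 * (X[:, 0] & X[:, 2]) + 0.1 * rng.normal(size=200)

def score(mask):
    # Two-level step model: mean response inside/outside the formula.
    lo, hi = drug[mask].mean(), drug[~mask].mean()
    pred = np.where(mask, lo, hi)
    return np.mean((drug - pred) ** 2)

candidates = {f"x{i}": X[:, i] for i in range(5)}
for i, j in itertools.combinations(range(5), 2):
    candidates[f"x{i} AND x{j}"] = X[:, i] & X[:, j]
    candidates[f"x{i} OR x{j}"] = X[:, i] | X[:, j]

best = min(candidates, key=lambda k: score(candidates[k]))
print(best, score(candidates[best]))   # expect "x0 AND x2" to win
```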

  2. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-05-20

    The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained here, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  3. A physics based method for combining multiple anatomy models with application to medical simulation.

    PubMed

    Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David

    2009-01-01

    We present a physics-based approach to the construction of anatomy models by combining components from different sources: different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated that mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can either replace or modify an existing component, or insert a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation by showing the processes of non-rigid surface alignment, component replacement, and component insertion.
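
    The mass-spring machinery underlying such an approach can be sketched compactly: point masses connected by springs relax under an external force that pulls one node toward a new configuration. The geometry, stiffnesses, and target below are invented; the paper estimates external forces from Gradient Vector Flow and Distance Transform maps rather than from a fixed target.

```python
# Hedged sketch of mass-spring relaxation: internal spring forces keep
# shape/spacing valid while an external force deforms one node.
import numpy as np

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # node positions
springs = [(0, 1), (1, 2), (2, 0)]
rest = {s: np.linalg.norm(pts[s[0]] - pts[s[1]]) for s in springs}
target = np.array([0.8, 1.4])          # new position desired for node 2

k, k_ext, damping, dt = 20.0, 5.0, 0.9, 0.01
vel = np.zeros_like(pts)
for _ in range(2000):
    force = np.zeros_like(pts)
    for (i, j) in springs:             # internal spring forces
        d = pts[j] - pts[i]
        L = np.linalg.norm(d)
        f = k * (L - rest[(i, j)]) * d / L
        force[i] += f
        force[j] -= f
    force[2] += k_ext * (target - pts[2])   # external deformation force
    vel = damping * (vel + dt * force)      # unit masses assumed
    pts += dt * vel
print(pts.round(3))   # node 2 near target, others adjusted consistently
```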

  4. Dynamic Emulation Modelling (DEMo) of large physically-based environmental models

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2012-12-01

    In environmental modelling, large, spatially distributed, physically based models are widely adopted to describe the dynamics of physical, social and economic processes. Such accurate process characterization comes, however, at a price: the computational requirements of these models are considerably high and prevent their use in any problem requiring hundreds or thousands of model runs to be solved satisfactorily. Typical examples include optimal planning and management, data assimilation, inverse modelling and sensitivity analysis. An effective approach to overcoming this limitation is to perform a top-down reduction of the physically based model by identifying a simplified, computationally efficient emulator, constructed from and then used in place of the original model in highly resource-demanding tasks. The underlying idea is that not all the process details in the original model are equally important and relevant to the dynamics of the outputs of interest for the type of problem considered. Emulation modelling has been successfully applied in many environmental applications; however, most of the literature considers non-dynamic emulators (e.g. metamodels, response surfaces and surrogate models), where the original dynamical model is reduced to a static map between the input and the output of interest. In this study we focus on Dynamic Emulation Modelling (DEMo), a methodological approach that preserves the dynamic nature of the original physically based model, with consequent advantages in a wide variety of problem areas. In particular, we propose a new data-driven DEMo approach that combines the advantages of data-driven modelling in representing complex, non-linear relationships with the state-space representation typical of process-based models, which is both particularly effective in some applications (e.g. optimal management and data assimilation) and facilitates the ex-post physical interpretation of the emulator structure, thus enhancing the credibility of the model to stakeholders and decision-makers. Numerical results from the application of the approach to the reduction of 3D coupled hydrodynamic-ecological models in several real-world case studies, including Marina Reservoir (Singapore) and Googong Reservoir (Australia), are illustrated.
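
    As a minimal illustration of dynamic emulation, the sketch below identifies a first-order ARX emulator from input/output data of a stand-in "expensive" simulator and then reuses it on a new forcing scenario; the model structure and coefficients are invented and much simpler than the 3D models the study reduces.

```python
# Hedged sketch of dynamic emulation: fit a low-order ARX emulator
# y[t] = a*y[t-1] + b*u[t] to data from a (here fake) expensive model.
import numpy as np

rng = np.random.default_rng(2)
u = rng.normal(size=500)                      # forcing input
y = np.zeros(500)
for t in range(1, 500):                       # stand-in "expensive" model
    y[t] = 0.9 * y[t - 1] + 0.4 * u[t] + 0.01 * rng.normal()

# Least-squares identification of the emulator parameters (a, b).
A = np.column_stack([y[:-1], u[1:]])
a, b = np.linalg.lstsq(A, y[1:], rcond=None)[0]

# Emulate a new scenario without re-running the expensive model.
u_new = np.ones(50)
y_hat = np.zeros(50)
for t in range(1, 50):
    y_hat[t] = a * y_hat[t - 1] + b * u_new[t]
print(round(a, 3), round(b, 3), round(y_hat[-1], 3))
```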

  5. Evaluating model accuracy for model-based reasoning

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Roden, Joseph

    1992-01-01

    Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system are used to derive statistical measures that evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.

  6. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    NASA Astrophysics Data System (ADS)

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have also been studied, with the advantage of discovering relevant features in nonlinear relations among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, an ensemble of decision trees using a multiple-predictor approach, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for the Chungju dam in South Korea were used for modeling and forecasting. To evaluate the performance of the models, one-step-ahead and multi-step-ahead forecasting were applied, and the root mean squared error and mean absolute error of the two models were compared.
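
    A compact sketch of such a comparison, on a synthetic monthly series rather than the Chungju dam record, with illustrative model orders and lag choices:

```python
# Hedged sketch: seasonal ARIMA vs. random forest with lagged inputs.
# Series, orders, and lag count are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
t = np.arange(360)
y = 10 + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.5, size=360)
train, test = y[:-24], y[-24:]

# Stochastic approach: SARIMA(1,0,0)x(1,0,0,12).
sarima = SARIMAX(train, order=(1, 0, 0),
                 seasonal_order=(1, 0, 0, 12)).fit(disp=False)
pred_s = sarima.forecast(24)

# Machine learning approach: random forest on 12 lagged values,
# iterated one step at a time for multi-step forecasting.
lags = 12
Xtr = np.array([train[i - lags:i] for i in range(lags, len(train))])
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(Xtr, train[lags:])
hist, pred_rf = list(train[-lags:]), []
for _ in range(24):
    nxt = rf.predict(np.array(hist[-lags:])[None, :])[0]
    pred_rf.append(nxt)
    hist.append(nxt)

for name, p in [("SARIMA", pred_s), ("RF", np.array(pred_rf))]:
    print(name, "RMSE:", round(float(np.sqrt(np.mean((p - test) ** 2))), 3))
```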

  7. A Bayesian Nonparametric Approach to Test Equating

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2009-01-01

    A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…

  8. Multifidelity-CMA: a multifidelity approach for efficient personalisation of 3D cardiac electromechanical models.

    PubMed

    Molléro, Roch; Pennec, Xavier; Delingette, Hervé; Garny, Alan; Ayache, Nicholas; Sermesant, Maxime

    2018-02-01

    Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, simulating a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (the personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified "0D" version of this model, which makes it possible to obtain reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method for the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model, with scripts to perform parameter estimation, will be released to the community.

  9. A new Bayesian recursive technique for parameter estimation

    NASA Astrophysics Data System (ADS)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper to two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture, and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology; they require the estimation of three parameters. SAC-SMA is a very well-known model that estimates runoff; it has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with that of the previously used BARE algorithm.
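
    The bound-narrowing idea can be illustrated with a toy linear-regression calibration: sample within the current bounds, keep the fittest parameter sets, and shrink the bounds around them before resampling. The target model and settings are invented, and LOBARE's updating is Bayesian rather than this simple elitist scheme.

```python
# Hedged toy version of iterative bound narrowing for calibration.
import numpy as np

rng = np.random.default_rng(4)
true_theta = np.array([2.0, -0.5])
x = np.linspace(0, 1, 50)
obs = true_theta[0] * x + true_theta[1] + 0.05 * rng.normal(size=50)

def loss(theta):
    return np.mean((theta[0] * x + theta[1] - obs) ** 2)

lo, hi = np.array([-10.0, -10.0]), np.array([10.0, 10.0])
for it in range(8):
    samples = rng.uniform(lo, hi, size=(200, 2))
    fitness = np.array([loss(s) for s in samples])
    elite = samples[np.argsort(fitness)[:20]]       # best 10%
    lo, hi = elite.min(axis=0), elite.max(axis=0)   # narrowed bounds
print(lo, hi)   # bounds now tightly enclose the best parameter set
```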

  10. Towards generalised reference condition models for environmental assessment: a case study on rivers in Atlantic Canada.

    PubMed

    Armanini, D G; Monk, W A; Carter, L; Cote, D; Baird, D J

    2013-08-01

    Evaluation of the ecological status of river sites in Canada is supported by building models using the reference condition approach. However, geography, data scarcity and inter-operability constraints have frustrated attempts to monitor national-scale status and trends. This issue is particularly true in Atlantic Canada, where no ecological assessment system is currently available. Here, we present a reference condition model based on the River Invertebrate Prediction and Classification System approach with regional-scale applicability. To achieve this, we used biological monitoring data collected from wadeable streams across Atlantic Canada together with freely available, nationally consistent geographic information system (GIS) environmental data layers. For the first time, we demonstrated that it is possible to use data generated from different studies, even when collected using different sampling methods, to generate a robust predictive model. This model was successfully generated and tested using GIS-based rather than local habitat variables and showed improved performance when compared to a null model. In addition, ecological quality ratio data derived from the model responded to observed stressors in a test dataset. Implications for future large-scale implementation of river biomonitoring using a standardised approach with global application are presented.

  11. Using "big data" to optimally model hydrology and water quality across expansive regions

    USGS Publications Warehouse

    Roehl, E.A.; Cook, J.B.; Conrads, P.A.

    2009-01-01

    This paper describes a new divide-and-conquer approach that leverages big environmental data, utilizing all available categorical and time-series data without subjectivity, to empirically model hydrologic and water-quality behaviors across expansive regions. The approach decomposes large, intractable problems into smaller ones that are optimally solved; decomposes complex signals into behavioral components that are easier to model with "sub-models"; and employs a sequence of numerically optimizing algorithms that include time-series clustering, nonlinear multivariate sensitivity analysis, predictive modeling using multi-layer perceptron artificial neural networks, and classification for selecting the best sub-models to make predictions at new sites. This approach has many advantages over traditional modeling approaches, including being faster and less expensive, more comprehensive in its use of available data, and more accurate in representing a system's physical processes. This paper describes the application of the approach to model groundwater levels in Florida, stream temperatures across western Oregon and Wisconsin, and water depths in the Florida Everglades. © 2009 ASCE.
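
    A skeletal version of the divide-and-conquer pattern, with synthetic data standing in for the monitoring records: cluster sites by their time-series signatures, train one neural-network sub-model per cluster, and route new sites to the nearest cluster's sub-model.

```python
# Hedged sketch: time-series clustering + per-cluster MLP sub-models.
# Data, cluster count, and network size are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n_sites, n_time = 60, 40
series = rng.normal(size=(n_sites, n_time)).cumsum(axis=1)  # fake levels
drivers = rng.normal(size=(n_sites, 3))                     # fake inputs
target = series[:, -1]                                      # fake response

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(series)
submodels = {}
for c in range(3):
    idx = km.labels_ == c
    submodels[c] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                random_state=0).fit(drivers[idx], target[idx])

# Predict at a "new" site: route by nearest cluster centroid.
new_series, new_drivers = series[0], drivers[0:1]
c = int(km.predict(new_series[None, :])[0])
print(submodels[c].predict(new_drivers))
```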

  12. BUMPER: the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction

    NASA Astrophysics Data System (ADS)

    Holden, Phil; Birks, John; Brooks, Steve; Bush, Mark; Hwang, Grace; Matthews-Bird, Frazer; Valencia, Bryan; van Woesik, Robert

    2017-04-01

    We describe the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. The principal motivation for a Bayesian approach is that the palaeoenvironment is treated probabilistically, and can be updated as additional data become available. Bayesian approaches therefore provide a reconstruction-specific quantification of the uncertainty in the data and in the model parameters. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring 2 seconds to build a 100-taxon model from a 100-site training-set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training-sets under ideal assumptions. We then use these to demonstrate both the general applicability of the model and the sensitivity of reconstructions to the characteristics of the training-set, considering assemblage richness, taxon tolerances, and the number of training sites. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. In all of these applications an identically configured model is used, the only change being the input files that provide the training-set environment and taxon-count data.

  13. The Park School Systems Approach to Piagetian Education.

    ERIC Educational Resources Information Center

    Park, Rose R.

    While three models of the application of Piaget's theory to education have been identified, the Park School (Norwalk, Connecticut) adds a fourth. This method involves a systems approach that extends beyond curricula and derives teaching techniques and administrative practices from Piaget's view. The approach uses logical games and…

  14. A model of cloud application assignments in software-defined storages

    NASA Astrophysics Data System (ADS)

    Bolodurina, Irina P.; Parfenov, Denis I.; Polezhaev, Petr N.; Shukhman, Alexander E.

    2017-01-01

    The aim of this study is to analyze the structure and interaction mechanisms of typical cloud applications and to suggest approaches to optimizing their placement in storage systems. In this paper, we describe a generalized model of cloud applications comprising three basic layers: a model of the application, a model of the service, and a model of the resource. The distinctive feature of the suggested model is that it analyzes cloud resources both from the user's point of view and from the point of view of the software-defined infrastructure of the virtual data center (DC). The innovative character of this model lies in describing, at the same time, the placement of application data and the state of the virtual environment, taking the network topology into account. The model of software-defined storage has been developed as a submodel within the resource model. This model allows implementing an algorithm for controlling cloud application assignments in software-defined storages. Experiments showed that this algorithm decreases cloud application response time and increases throughput in processing user requests. The use of software-defined data storages allows a decrease in the number of physical storage devices, which demonstrates the efficiency of our algorithm.

  15. On the application of multilevel modeling in environmental and ecological studies

    USGS Publications Warehouse

    Qian, Song S.; Cuffney, Thomas F.; Alameddine, Ibrahim; McMahon, Gerard; Reckhow, Kenneth H.

    2010-01-01

    This paper illustrates the advantages of a multilevel/hierarchical approach for predictive modeling, including flexibility of model formulation, explicitly accounting for hierarchical structure in the data, and the ability to predict the outcome of new cases. As a generalization of the classical approach, the multilevel modeling approach explicitly models the hierarchical structure in the data by considering both the within- and between-group variances leading to a partial pooling of data across all levels in the hierarchy. The modeling framework provides means for incorporating variables at different spatiotemporal scales. The examples used in this paper illustrate the iterative process of model fitting and evaluation, a process that can lead to improved understanding of the system being studied.
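
    A minimal sketch of partial pooling with random intercepts, using statsmodels on synthetic grouped data (the basin/site structure and effect sizes are invented):

```python
# Hedged sketch of a multilevel (mixed-effects) model: group-level
# random intercepts shrink group estimates toward the overall mean.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
groups = np.repeat(np.arange(10), 20)               # 10 basins, 20 sites
group_eff = rng.normal(scale=1.0, size=10)[groups]  # between-group variance
x = rng.normal(size=200)
y = 2.0 + 0.5 * x + group_eff + rng.normal(scale=0.5, size=200)

df = pd.DataFrame({"y": y, "x": x, "basin": groups})
fit = smf.mixedlm("y ~ x", df, groups=df["basin"]).fit()
print(fit.summary())   # fixed slope plus between-/within-group variances
```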

  16. Extrinsic local regression on manifold-valued data

    PubMed Central

    Lin, Lizhen; St Thomas, Brian; Zhu, Hongtu; Dunson, David B.

    2017-01-01

    We propose an extrinsic regression framework for modeling data with manifold-valued responses and Euclidean predictors. Regression with manifold responses has wide applications in shape analysis, neuroscience, medical imaging and many other areas. Our approach embeds the manifold on which the responses lie into a higher-dimensional Euclidean space, obtains a local regression estimate in that space, and then projects this estimate back onto the image of the manifold. Outside the regression setting, both intrinsic and extrinsic approaches have been proposed for modeling i.i.d. manifold-valued data. However, to our knowledge our work is the first to take an extrinsic approach to the regression problem. The proposed extrinsic regression framework is general, computationally efficient and theoretically appealing. Asymptotic distributions and convergence rates of the extrinsic regression estimates are derived, and a large class of examples is considered, indicating the wide applicability of our approach. PMID:29225385
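    A toy version of the embed, regress, project recipe, using the unit circle as the manifold and Nadaraya-Watson kernel smoothing as the local regression step; the bandwidth and data are illustrative, not from the paper.

    ```python
    import numpy as np

    def extrinsic_local_regression(x_train, y_angles, x0, bandwidth=0.1):
        """Local regression for responses on the unit circle (a toy manifold)."""
        Y = np.column_stack([np.cos(y_angles), np.sin(y_angles)])  # embed into R^2
        w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)       # Gaussian kernel weights
        est = (w[:, None] * Y).sum(axis=0) / w.sum()               # regression in R^2
        est /= np.linalg.norm(est)                                 # project back onto the circle
        return np.arctan2(est[1], est[0])

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 200)
    theta = 2 * np.pi * x + rng.normal(0, 0.2, 200)   # circle-valued responses
    print(extrinsic_local_regression(x, theta, x0=0.25))  # approximately pi/2
    ```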

  17. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    PubMed

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within this envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous-time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  18. Robust inference in the negative binomial regression model with an application to falls data.

    PubMed

    Aeberhard, William H; Cantoni, Eva; Heritier, Stephane

    2014-12-01

    A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimation methods are well known to be sensitive to model misspecification, which in such intervention studies can take the form of patients falling much more often than expected under the NB regression model. In this article, we extend two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function to the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be, as long as the bounding functions are chosen and tuned appropriately, and provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease, illustrating the diagnostic use of such robust procedures and their value for reliable inference. © 2014, The International Biometric Society.
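    A minimal sketch of the first approach under stated simplifications: Pearson residuals are bounded with a clipped Huber-type psi function (the paper favors redescending functions), and the Fisher-consistency correction term of the full estimating equations is omitted, so a small bias remains; the data and parameter values are invented.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    def huber_psi(r, c=1.345):
        """Bounded psi function applied to Pearson residuals."""
        return np.clip(r, -c, c)

    def robust_nb_score(beta, X, y, theta=2.0, c=1.345):
        """Robustified NB score equations (consistency correction term omitted)."""
        mu = np.exp(X @ beta)
        V = mu + mu**2 / theta                 # NB2 variance function
        r = (y - mu) / np.sqrt(V)              # Pearson residuals
        return X.T @ (huber_psi(r, c) * mu / np.sqrt(V))

    rng = np.random.default_rng(2)
    X = np.column_stack([np.ones(300), rng.normal(size=300)])
    mu_true = np.exp(X @ np.array([0.5, 0.8]))
    y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu_true))  # NB with theta = 2
    y[:5] = 60                                 # a few "excess fallers" as outliers
    print(fsolve(robust_nb_score, x0=np.zeros(2), args=(X, y)))  # near (0.5, 0.8)
    ```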

  19. MATTS - A Step Towards Model-Based Testing

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.

    2016-08-01

    In this paper we describe a model-based approach to testing on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's? To solve these problems, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time- and resource-consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can also be partially automated. The basic idea behind the presented study was to start from a formal model (e.g., state machines) and generate abstract test cases, which are then converted into concrete executable test cases (input and expected-output pairs). The generated concrete test cases were applied to on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
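    The generation step can be sketched in a few lines: enumerate paths through a formal state-machine model and emit input sequences paired with expected state sequences. The mode-manager states and events below are invented for illustration; they are not the MATTS model.

    ```python
    from collections import deque

    # A toy on-board mode manager (invented states/events, not the MATTS model).
    TRANSITIONS = {
        ("SAFE", "arm"): "ARMED",
        ("ARMED", "go"): "OPERATIONAL",
        ("ARMED", "abort"): "SAFE",
        ("OPERATIONAL", "fault"): "SAFE",
    }

    def generate_test_cases(initial="SAFE", max_depth=3):
        """Breadth-first enumeration of abstract test cases from the state machine:
        each case pairs an input sequence with the expected sequence of states."""
        cases, queue = [], deque([(initial, [], [initial])])
        while queue:
            state, inputs, expected = queue.popleft()
            if inputs:
                cases.append((inputs, expected))
            if len(inputs) == max_depth:
                continue
            for (src, event), dst in TRANSITIONS.items():
                if src == state:
                    queue.append((dst, inputs + [event], expected + [dst]))
        return cases

    for inputs, expected in generate_test_cases():
        print(inputs, "->", expected)
    ```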

  20. A Perspective on Computational Human Performance Models as Design Tools

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  1. HEUS-RS applications study, volume 2

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The final report of a High Energy Upper Stage Restartable Solid (HEUS-RS) Applications Study is presented. The material deals with launch program cost comparisons associated with meeting NASA mission model requirements with several different launch vehicle approaches.

  2. Building model analysis applications with the Joint Universal Parameter IdenTification and Evaluation of Reliability (JUPITER) API

    USGS Publications Warehouse

    Banta, E.R.; Hill, M.C.; Poeter, E.; Doherty, J.E.; Babendreier, J.

    2008-01-01

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input and output conventions allow application users to access various applications and the analysis methods they embody with a minimum of time and effort. Process models simulate, for example, physical, chemical, and (or) biological systems of interest using phenomenological, theoretical, or heuristic approaches. The types of model analyses supported by the JUPITER API include, but are not limited to, sensitivity analysis, data needs assessment, calibration, uncertainty analysis, model discrimination, and optimization. The advantages provided by the JUPITER API for users and programmers allow for rapid programming and testing of new ideas. Application-specific coding can be in languages other than the Fortran-90 of the API. This article briefly describes the capabilities and utility of the JUPITER API, lists existing applications, and uses UCODE_2005 as an example.

  3. Data normalization in biosurveillance: an information-theoretic approach.

    PubMed

    Peter, William; Najmi, Amir H; Burkom, Howard

    2007-10-11

    An approach to identifying public health threats by characterizing syndromic surveillance data in terms of its surprisability is discussed. Surprisability in our model is measured by assigning a probability distribution to a time series, and then calculating its entropy, leading to a straightforward designation of an alert. Initial application of our method is to investigate the applicability of using suitably-normalized syndromic counts (i.e., proportions) to improve early event detection.
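    A minimal sketch of the alerting idea, assuming Shannon entropy of the normalized counts and a fixed margin below a historical baseline as the alert rule; the paper's surprisability measure and thresholds are more involved than this.

    ```python
    import numpy as np

    def surprisability(counts):
        """Shannon entropy of normalized syndromic counts (proportions)."""
        p = np.asarray(counts, dtype=float)
        p = p / p.sum()
        p = p[p > 0]                      # convention: 0 * log 0 = 0
        return -np.sum(p * np.log(p))

    def alert(counts, baseline_entropy, margin=0.15):
        """Alert when entropy drops well below baseline, i.e. when counts
        concentrate unexpectedly in a few syndrome categories."""
        return surprisability(counts) < baseline_entropy - margin

    usual = [120, 95, 110, 100]           # counts across syndrome categories
    outbreak = [120, 95, 400, 100]        # one category spikes
    base = surprisability(usual)
    print(alert(usual, base), alert(outbreak, base))  # False True
    ```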

  4. Integrating public risk perception into formal natural hazard risk assessment

    NASA Astrophysics Data System (ADS)

    Plattner, Th.; Plapp, T.; Hebel, B.

    2006-06-01

    An urgent need to take perception into account in risk assessment has been pointed out in the relevant literature; its impact on individuals' risk-related behaviour is obvious. This study represents an effort to move beyond the broadly discussed question of whether risk perception is quantifiable or not by proposing a simple but applicable methodology. A novel approach is elaborated to obtain a more accurate and comprehensive quantification of risk than present formal risk evaluation practice provides. Consideration of the relevant factors enables an explicit quantification of individual risk perception and evaluation. The model approach integrates the effective individual risk reff and a weighted mean of relevant perception affecting factors (PAF). The relevant PAF cover voluntariness of risk-taking, individual reducibility of risk, knowledge and experience, endangerment, subjective damage rating and subjective recurrence frequency perception. The approach assigns an individual weight to each PAF to represent its impact magnitude. The quantification of these weights is target-group-dependent (e.g. experts, laypersons) and may be carried out with psychometric methods. The novel approach is subjected to a plausibility check using data from an expert workshop. A first model application is conducted with data from an empirical risk perception study in Western Germany to derive the PAF and weight quantifications, as well as to confirm and evaluate the model's applicability and flexibility. The main fields of application will be the formal, technical quantification of risk perception by individuals, e.g. for risk communication purposes, to illustrate the differing perspectives of experts and non-experts. For decision-making processes this model will have to be applied with caution, since it is by definition not designed to quantify risk acceptance or risk evaluation. The approach may well explain how risk perception differs, but not why it differs. The formal model generates only "snap shots" and considers neither the socio-cultural nor the historical context of risk perception, since it is a highly individualistic and non-contextual approach.
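    The abstract does not state the functional form that combines reff with the weighted PAF mean, so the sketch below assumes a simple multiplicative combination; the weights and scores are likewise invented for illustration.

    ```python
    # Assumed target-group-dependent weights for the six PAF (illustrative only).
    PAF_WEIGHTS = {
        "voluntariness": 0.10,
        "reducibility": 0.15,
        "knowledge_experience": 0.20,
        "endangerment": 0.25,
        "subjective_damage": 0.20,
        "subjective_frequency": 0.10,
    }

    def perceived_risk(r_eff, paf_scores):
        """Scale the effective risk r_eff by the weighted mean of PAF scores
        (assumed 0..2 scale, 1 = neutral perception)."""
        weighted = sum(PAF_WEIGHTS[k] * v for k, v in paf_scores.items())
        total_w = sum(PAF_WEIGHTS[k] for k in paf_scores)
        return r_eff * (weighted / total_w)

    layperson = {"voluntariness": 1.8, "reducibility": 1.5, "knowledge_experience": 0.6,
                 "endangerment": 1.4, "subjective_damage": 1.6, "subjective_frequency": 1.2}
    print(perceived_risk(r_eff=1e-4, paf_scores=layperson))
    ```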

  5. A basis for solid modeling of gear teeth with application in design and manufacture

    NASA Technical Reports Server (NTRS)

    Huston, Ronald L.; Mavriplis, Dimitrios; Oswald, Fred B.; Liu, Yung Sheng

    1992-01-01

    A new approach to modeling gear tooth surfaces is discussed. A computer graphics solid modeling procedure is used to simulate the tooth fabrication process. This procedure is based on the principles of differential geometry that pertain to envelopes of curves and surfaces. The procedure is illustrated with the modeling of spur, helical, bevel, spiral bevel, and hypoid gear teeth. Applications in design and manufacturing are discussed. Extensions to nonstandard tooth forms, to cams, and to rolling element bearings are proposed.

  6. A Basis for Solid Modeling of Gear Teeth with Application in Design and Manufacture

    NASA Technical Reports Server (NTRS)

    Huston, Ronald L.; Mavriplis, Dimitrios; Oswald, Fred B.; Liu, Yung Sheng

    1994-01-01

    This paper discusses a new approach to modeling gear tooth surfaces. A computer graphics solid modeling procedure is used to simulate the tooth fabrication processes. This procedure is based on the principles of differential geometry that pertain to envelopes of curves and surfaces. The procedure is illustrated with the modeling of spur, helical, bevel, spiral bevel and hypoid gear teeth. Applications in design and manufacturing are discussed. Extensions to nonstandard tooth forms, to cams, and to rolling element bearings are proposed.

  7. Simulation of saltwater intrusion in a poorly karstified coastal aquifer in Lebanon (Eastern Mediterranean)

    NASA Astrophysics Data System (ADS)

    Khadra, Wisam M.; Stuyfzand, Pieter J.

    2018-03-01

    To date, there has been no agreement on the best way to simulate saltwater intrusion (SWI) in karst aquifers. An equivalent porous medium (EPM) is usually assumed without justification of its applicability. In this paper, SWI in a poorly karstified aquifer in Lebanon is simulated in various ways and compared to measurements. Time series analysis of rainfall and aquifer response is recommended to decide whether quickflow through conduits can be safely ignored. This aids in justifying the selection of the exemplified EPM model. To examine the improvement of SWI representation when discrete features (DFs) are embedded in the model domain, the results of a coupled discrete-continuum (CDC) approach (a hybrid EPM-DF approach) are compared to the EPM model. The two approaches yielded reasonable patterns of hydraulic head and groundwater salinity, which seem trustworthy enough for management purposes. The CDC model also reproduced some local anomalous chloride patterns, being more adaptable with respect to the measurements. It improved the overall accuracy of salinity predictions at wells and better represented the fresh-brackish water interface. Therefore, the CDC approach can be beneficial in modeling SWI in poorly karstified aquifers, and should be compared with the results of the EPM method to decide whether the differences in the outcome at local scale warrant its (more complicated) application. The simulation utilized the SEAWAT code since it is density dependent and public domain, and it enjoys widespread application. Including DFs necessitated manual handling because the selected code has no built-in option for such features.

  8. A System of Systems Approach to Integrating Global Sea Level Change Application Programs

    NASA Astrophysics Data System (ADS)

    Bambachus, M. J.; Foster, R. S.; Powell, C.; Cole, M.

    2005-12-01

    The global sea level change application community has numerous disparate models used to make predictions over various regional and temporal scales. These models have typically been focused on limited sets of data and optimized for specific areas or questions of interest. Increasingly, decision makers at the national, international, and local/regional levels require access to these application data models and want to be able to integrate large disparate data sets, with new ubiquitous sensor data, and use these data across models from multiple sources. These requirements will force the global sea level change application community to take a new system-of-systems approach to its programs. We present a new technical architecture approach to the global sea level change program that provides external access to the vast stores of global sea level change data, provides a collaboration forum for the discussion and visualization of data, and provides a simulation environment to evaluate decisions. This architectural approach will provide the tools to support multi-disciplinary decision making. A conceptual system-of-systems approach is needed to address questions around the multiple approaches to tracking and predicting sea level change. A system-of-systems approach would include (1) a forum of data providers, modelers, and users, (2) a service-oriented architecture including interoperable web services with a backbone of Grid computing capability, and (3) discovery and access functionality for the information developed through this structure. Each of these three areas would be clearly designed to maximize communication, data use for decision making, and flexibility and extensibility for the evolution of technology and requirements. In contemplating a system-of-systems approach, it is important to highlight common understanding and coordination as foundational to success across the multiple systems. The workflow of science in different applications is often conceptually similar but different in the details. These differences can discourage the potential for collaboration. Resources that are not inherently shared (or do not spring from a common authority) must be explicitly coordinated to avoid disrupting the collaborative research workflow. This includes tools which make the interaction of systems (and users with systems, and administrators of systems) more conceptual and higher-level than is typically done today. Such tools all appear under the heading of Grid, within a larger idea of metacomputing. We present an approach for successful collaboration and shared use of distributed research resources. The real advances in research throughput that are occurring through the use of large computers are occurring less as a function of progress in a given discrete algorithm and much more as a function of model and data coupling. Complexity normally reduces the ability of the human mind to understand and work with this kind of coupling. Intuitive Grid-based computational resources simultaneously reduce the effect of this complexity on the scientist/decision maker and increase the ability to rationalize complexity. Research progress can even be achieved before full understanding of the complexity has been reached, by modeling, experimenting, and providing more data to think about. Analytic engines provided via the Grid can help digest these data and make them tractable through visualization and exploration tools.
We present a rationale for increasing research throughput by leveraging more complex model and data interaction.

  9. Adaptive Long-Term Monitoring at Environmental Restoration Sites (ER-0629)

    DTIC Science & Technology

    2009-05-01

    [Fragmented extraction from the report front matter; the recoverable content is a list of figure captions: Figure 2-1, General Flowchart of Software Application; Figure 2-2, Overview of the Genetic Algorithm Approach; Figure 2-3, Example of a... The fragments also indicate that the software's application is illustrated by the general flowchart of Figure 2-1, and that per-monitoring-event modeling (e.g., contaminant mass based on interpolation) is provided by the Model Builder component.]

  10. A General Approach for Specifying Informative Prior Distributions for PBPK Model Parameters

    EPA Science Inventory

    Characterization of uncertainty in model predictions is receiving more interest as more models are being used in applications that are critical to human health. For models in which parameters reflect biological characteristics, it is often possible to provide estimates of paramet...

  11. Self-discharge analysis and characterization of supercapacitors for environmentally powered wireless sensor network applications

    NASA Astrophysics Data System (ADS)

    Yang, Hengzhao; Zhang, Ying

    2011-10-01

    A new approach is presented to characterize the variable leakage resistance, a parameter in the variable leakage resistance model we developed to model supercapacitors used in environmentally powered wireless sensor network applications. Based on an analysis of the supercapacitor terminal behavior during the self-discharge, the variable leakage resistance is modeled as a function of the supercapacitor terminal voltage instead of the self-discharge time, which is more practical for an environmentally powered wireless sensor node. The new characterization approach is implemented and validated using MATLAB Simulink with a 10 F supercapacitor as an example. In addition, effects of initial voltages and temperatures on the supercapacitor self-discharge rate and the variable leakage resistance value are explored.
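    A sketch of the modeling idea: integrate C dV/dt = -V / R_leak(V), where the leakage resistance depends on terminal voltage rather than on elapsed time. The functional form and constants of R_leak below are assumptions for illustration, not the parameters identified in the paper.

    ```python
    import numpy as np

    def r_leak(v):
        """Assumed voltage-dependent leakage resistance (ohms): stronger leakage
        (lower resistance) at higher terminal voltage."""
        return 2.0e4 * np.exp(-v) + 5.0e3

    def simulate_self_discharge(v0=2.5, C=10.0, hours=24.0, dt=1.0):
        """Forward-Euler integration of C dV/dt = -V / R_leak(V)."""
        v, steps = v0, int(hours * 3600 / dt)
        trace = [v]
        for _ in range(steps):
            v += dt * (-v / (r_leak(v) * C))
            trace.append(v)
        return trace

    trace = simulate_self_discharge()
    print(f"terminal voltage after 24 h: {trace[-1]:.3f} V")
    ```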

  12. On neural networks in identification and control of dynamic systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Hyland, David C.

    1993-01-01

    This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on understanding how neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.

  13. Generalized Parameter-Adjusted Stochastic Resonance of Duffing Oscillator and Its Application to Weak-Signal Detection

    PubMed Central

    Lai, Zhi-Hui; Leng, Yong-Gang

    2015-01-01

    A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering application. PMID:26343671
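    For concreteness, the bistable Duffing system driven by a weak periodic signal plus noise can be integrated with the Euler-Maruyama method as below; the parameter values are illustrative, not ones selected by the GPASR judgmental function.

    ```python
    import numpy as np

    def duffing_sr(a=1.0, b=1.0, k=0.5, A=0.3, f=0.01, D=0.5,
                   dt=0.01, n_steps=200_000, seed=0):
        """Euler-Maruyama integration of the bistable Duffing oscillator
        x'' + k x' - a x + b x^3 = A cos(2 pi f t) + sqrt(2 D) xi(t),
        written as a two-dimensional first-order system."""
        rng = np.random.default_rng(seed)
        x, y = 1.0, 0.0                   # start in one potential well
        xs = np.empty(n_steps)
        for i in range(n_steps):
            t = i * dt
            x += y * dt
            y += (-k * y + a * x - b * x**3 + A * np.cos(2 * np.pi * f * t)) * dt \
                 + np.sqrt(2 * D * dt) * rng.standard_normal()
            xs[i] = x
        return xs

    xs = duffing_sr()
    # With well-tuned parameters, inter-well hopping synchronizes with the
    # weak drive and the spectral line at f rises above the noise floor.
    spectrum = np.abs(np.fft.rfft(xs - xs.mean())) ** 2
    freqs = np.fft.rfftfreq(len(xs), d=0.01)
    print(spectrum[np.argmin(np.abs(freqs - 0.01))])
    ```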

  14. Efficient Testing Combining Design of Experiment and Learn-to-Fly Strategies

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Brandon, Jay M.

    2017-01-01

    Rapid modeling and efficient testing methods are important in a number of aerospace applications. In this study, efficient testing strategies were evaluated in a wind tunnel test environment and combined to suggest a promising approach for both ground-based and flight-based experiments. Benefits of using Design of Experiment techniques, well established in scientific, military, and manufacturing applications, are evaluated in combination with newly developing methods for global nonlinear modeling. The nonlinear modeling methods, referred to as Learn-to-Fly methods, utilize fuzzy logic and multivariate orthogonal function techniques that have been successfully demonstrated in flight test. The blended approach presented has a focus on experiment design and identifies a sequential testing process with clearly defined completion metrics that produces increased testing efficiency.

  15. Surface Modeling, Grid Generation, and Related Issues in Computational Fluid Dynamic (CFD) Solutions

    NASA Technical Reports Server (NTRS)

    Choo, Yung K. (Compiler)

    1995-01-01

    The NASA Steering Committee for Surface Modeling and Grid Generation (SMAGG) sponsored a workshop on surface modeling, grid generation, and related issues in Computational Fluid Dynamics (CFD) solutions at Lewis Research Center, Cleveland, Ohio, May 9-11, 1995. The workshop provided a forum to identify industry needs, strengths, and weaknesses of the five grid technologies (patched structured, overset structured, Cartesian, unstructured, and hybrid), and to exchange thoughts about where each technology will be in 2 to 5 years. The workshop also provided opportunities for engineers and scientists to present new methods, approaches, and applications in SMAGG for CFD. This Conference Publication (CP) consists of papers on industry overview, NASA overview, five grid technologies, new methods/approaches/applications, and software systems.

  16. Pesticide fate at regional scale: Development of an integrated model approach and application

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Hardelauf, H.; Harms, R.; Vanderborght, J.; Vereecken, H.

    As a result of agricultural practice, many soils and aquifers are contaminated with pesticides. In order to quantify the side-effects of these anthropogenic impacts on groundwater quality at the regional scale, a process-based, integrated model approach was developed. The Richards'-equation-based numerical model TRACE calculates three-dimensional saturated/unsaturated water flow. For the modeling of regional-scale pesticide transport we linked TRACE with the plant module SUCROS and with 3DLEWASTE, a hybrid Lagrangian/Eulerian approach to solving the convection/dispersion equation. We used measurements, standard methods like pedotransfer functions, or parameters from the literature to derive the input for the process model. A first-step application of TRACE/3DLEWASTE to the 20 km² test area ‘Zwischenscholle’ for the period 1983-1993 reveals the behaviour of the pesticide isoproturon. The selected test area is characterised by intense agricultural use and shallow groundwater, resulting in a high vulnerability of the groundwater to pesticide contamination. The model results stress the importance of the unsaturated zone for the occurrence of pesticides in groundwater. Notable isoproturon concentrations in groundwater are predicted for locations with thin, layered and permeable soils. For four selected locations we used measured piezometric heads to validate predicted groundwater levels. In general, the model results are consistent and reasonable. The developed integrated model approach is thus seen as a promising tool for quantifying the impact of agricultural practice on groundwater quality.

  17. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  18. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  19. Multiscale high-order/low-order (HOLO) algorithms and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis; Chen, Guangye; Knoll, Dana Alan

    Here, we review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.

  20. Multiscale high-order/low-order (HOLO) algorithms and applications

    DOE PAGES

    Chacon, Luis; Chen, Guangye; Knoll, Dana Alan; ...

    2016-11-11

    Here, we review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.

  1. Human factors systems approach to healthcare quality and patient safety

    PubMed Central

    Carayon, Pascale; Wetterneck, Tosha B.; Rivera-Rodriguez, A. Joy; Hundt, Ann Schoofs; Hoonakker, Peter; Holden, Richard; Gurses, Ayse P.

    2013-01-01

    Human factors systems approaches are critical for improving healthcare quality and patient safety. The SEIPS (Systems Engineering Initiative for Patient Safety) model of work system and patient safety is a human factors systems approach that has been successfully applied in healthcare research and practice. Several research and practical applications of the SEIPS model are described. Important implications of the SEIPS model for healthcare system and process redesign are highlighted. Principles for redesigning healthcare systems using the SEIPS model are described. Balancing the work system and encouraging the active and adaptive role of workers are key principles for improving healthcare quality and patient safety. PMID:23845724

  2. Zero-state Markov switching count-data models: an empirical assessment.

    PubMed

    Malyshkina, Nataliya V; Mannering, Fred L

    2010-01-01

    In this study, a two-state Markov switching count-data model is proposed as an alternative to zero-inflated models to account for the preponderance of zeros sometimes observed in transportation count data, such as the number of accidents occurring on a roadway segment over some period of time. For this accident-frequency case, zero-inflated models assume the existence of two states: one of the states is a zero-accident count state, which has accident probabilities that are so low that they cannot be statistically distinguished from zero, and the other state is a normal-count state, in which counts can be non-negative integers that are generated by some counting process, for example, a Poisson or negative binomial. While zero-inflated models have come under some criticism with regard to accident-frequency applications - one fact is undeniable - in many applications they provide a statistically superior fit to the data. The Markov switching approach we propose seeks to overcome some of the criticism associated with the zero-accident state of the zero-inflated model by allowing individual roadway segments to switch between zero and normal-count states over time. An important advantage of this Markov switching approach is that it allows for the direct statistical estimation of the specific roadway-segment state (i.e., zero-accident or normal-count state) whereas traditional zero-inflated models do not. To demonstrate the applicability of this approach, a two-state Markov switching negative binomial model (estimated with Bayesian inference) and standard zero-inflated negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. It is shown that the Markov switching model is a viable alternative and results in a superior statistical fit relative to the zero-inflated models.

  3. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected by the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.

  4. Optimal bioprocess design through a gene regulatory network - growth kinetic hybrid model: Towards Replacing Monod kinetics.

    PubMed

    Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios

    2018-05-02

    Currently, design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation or with the use of empirical, unstructured growth kinetics models. Although elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through application of a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional kinetics expression patterns of the promoters. The GRN model informed the formulation of the growth kinetics model, replacing the empirical, unstructured Monod kinetics. The GRN-GK framework's predictive capability and potential as a systematic optimal bioprocess design tool were demonstrated by effectively predicting bioprocess performance in agreement with experimental values, whereas four commonly used models deviated significantly from the experimental values. Significantly, a fed-batch biodegradation process was designed and optimised through model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhanced pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach for model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.

  5. Mobile Applications in Cell Biology Present New Approaches for Cell Modelling

    ERIC Educational Resources Information Center

    de Oliveira, Mayara Lustosa; Galembeck, Eduardo

    2016-01-01

    Cell biology apps were surveyed in order to identify whether there are new approaches for modelling cells allowed by the new technologies implemented in tablets and smartphones. A total of 97 apps were identified in 3 stores surveyed (Apple, Google Play and Amazon), they are presented as: education 48.4%, games 26.8% and medicine 15.4%. The apps…

  6. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
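    The prediction step of the global model can be illustrated as follows: a 3x3 homography (eight free parameters, with the last entry fixed to 1) maps block pixel coordinates to reference-frame positions. The matrix values below are invented for illustration; they are not parameters from the paper's coding scheme.

    ```python
    import numpy as np

    def warp_block(H, coords):
        """Apply a 3x3 homography to (N, 2) pixel coordinates: homogeneous
        multiplication followed by the perspective divide."""
        pts = np.column_stack([coords, np.ones(len(coords))])
        mapped = pts @ H.T
        return mapped[:, :2] / mapped[:, 2:3]

    # Mild perspective plus translation (illustrative eight-parameter homography).
    H = np.array([[1.02, 0.01, 4.0],
                  [0.00, 0.98, -2.5],
                  [1e-4, 0.00, 1.0]])

    ys, xs = np.mgrid[0:8, 0:8]                       # an 8x8 prediction block
    block = np.column_stack([xs.ravel(), ys.ravel()])
    print(warp_block(H, block)[:4])                   # predicted reference positions
    ```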

  7. Historical HIV incidence modelling in regional subgroups: use of flexible discrete models with penalized splines based on prior curves.

    PubMed

    Greenland, S

    1996-03-15

    This paper presents an approach to back-projection (back-calculation) of human immunodeficiency virus (HIV) person-year infection rates in regional subgroups, based on combining a log-linear model for subgroup differences with a penalized spline model for trends. The penalized spline approach allows flexible trend estimation but requires far fewer parameters than fully non-parametric smoothers, thus saving parameters that can be used in estimating subgroup effects. Use of a reasonable prior curve to construct the penalty function minimizes the degree of smoothing needed beyond model specification. The approach is illustrated in an application to acquired immunodeficiency syndrome (AIDS) surveillance data from Los Angeles County.
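    A simplified Gaussian analogue of the penalty idea, assuming least squares in place of the paper's person-year likelihood: the fit is smoothed by penalizing the roughness of its departure from a prior curve, so smoothing is applied mainly where the data disagree with the prior.

    ```python
    import numpy as np

    def penalized_fit(y, prior, lam=50.0):
        """Minimize ||y - b||^2 + lam * ||D2 (b - prior)||^2, where D2 is the
        second-difference operator; solved via the normal equations."""
        n = len(y)
        D2 = np.diff(np.eye(n), n=2, axis=0)
        P = lam * D2.T @ D2
        return np.linalg.solve(np.eye(n) + P, y + P @ prior)

    t = np.linspace(0, 1, 60)
    prior = 30 * np.exp(-0.5 * ((t - 0.5) / 0.15) ** 2)   # assumed epidemic-like prior curve
    rng = np.random.default_rng(3)
    y = prior * rng.uniform(0.6, 1.4, size=t.size)        # noisy "observed" rates
    print(penalized_fit(y, prior)[:5])
    ```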

  8. Linking Goal-Oriented Requirements and Model-Driven Development

    NASA Astrophysics Data System (ADS)

    Pastor, Oscar; Giachetti, Giovanni

    In the context of Goal-Oriented Requirements Engineering (GORE) there are interesting modeling approaches for the analysis of complex scenarios, oriented toward obtaining and representing the relevant requirements for the development of software products. However, the way to use these GORE models in an automated Model-Driven Development (MDD) process is not clear, and, in general, the translation of these models into final software products is still performed manually. Therefore, in this chapter, we show an approach to automatically link GORE models and MDD processes, which was elaborated from the experience obtained in linking the i* framework with an industrially applied MDD approach. The proposed linking approach is formulated by means of a generic process based on current modeling standards and technologies, in order to facilitate its application to different MDD and GORE approaches. Special attention is paid to how this process generates appropriate model transformation mechanisms to automatically obtain MDD conceptual models from GORE models, and how it can be used to specify validation mechanisms to ensure correct model transformations.

  9. Nanoparticle surface characterization and clustering through concentration-dependent surface adsorption modeling.

    PubMed

    Chen, Ran; Zhang, Yuntao; Sahneh, Faryad Darabi; Scoglio, Caterina M; Wohlleben, Wendel; Haase, Andrea; Monteiro-Riviere, Nancy A; Riviere, Jim E

    2014-09-23

    Quantitative characterization of nanoparticle interactions with their surrounding environment is vital for safe nanotechnological development and standardization. A recent quantitative measure, the biological surface adsorption index (BSAI), has demonstrated promising applications in nanomaterial surface characterization and biological/environmental prediction. This paper further advances the approach beyond the application of five descriptors in the original BSAI to address the concentration dependence of the descriptors, enabling better prediction of the adsorption profile and more accurate categorization of nanomaterials based on their surface properties. Statistical analysis on the obtained adsorption data was performed based on three different models: the original BSAI, a concentration-dependent polynomial model, and an infinite dilution model. These advancements in BSAI modeling showed a promising development in the application of quantitative predictive modeling in biological applications, nanomedicine, and environmental safety assessment of nanomaterials.

  10. Locomotion Dynamics for Bio-inspired Robots with Soft Appendages: Application to Flapping Flight and Passive Swimming

    NASA Astrophysics Data System (ADS)

    Boyer, Frédéric; Porez, Mathieu; Morsli, Ferhat; Morel, Yannick

    2017-08-01

    In animal locomotion, whether in fish or in flying insects, the use of flexible terminal organs or appendages greatly improves the performance of locomotion (thrust and lift). In this article, we propose a general unified framework for modeling and simulating the (bio-inspired) locomotion of robots using soft organs. The proposed approach is based on the model of Mobile Multibody Systems (MMS). The distributed flexibilities are modeled according to two major approaches: the Floating Frame Approach (FFA) and the Geometrically Exact Approach (GEA). Encompassing these two approaches in the Newton-Euler modeling formalism of robotics, this article proposes a unique modeling framework suited to the fast numerical integration of the dynamics of an MMS in both the FFA and the GEA. This general framework is applied to two illustrative examples drawn from bio-inspired locomotion: passive swimming in a von Karman vortex street, and hovering flight with flexible flapping wings.

  11. Non-animal models of epithelial barriers (skin, intestine and lung) in research, industrial applications and regulatory toxicology.

    PubMed

    Gordon, Sarah; Daneshian, Mardas; Bouwstra, Joke; Caloni, Francesca; Constant, Samuel; Davies, Donna E; Dandekar, Gudrun; Guzman, Carlos A; Fabian, Eric; Haltner, Eleonore; Hartung, Thomas; Hasiwa, Nina; Hayden, Patrick; Kandarova, Helena; Khare, Sangeeta; Krug, Harald F; Kneuer, Carsten; Leist, Marcel; Lian, Guoping; Marx, Uwe; Metzger, Marco; Ott, Katharina; Prieto, Pilar; Roberts, Michael S; Roggen, Erwin L; Tralau, Tewes; van den Braak, Claudia; Walles, Heike; Lehr, Claus-Michael

    2015-01-01

    Models of the outer epithelia of the human body - namely the skin, the intestine and the lung - have found valid applications in both research and industrial settings as attractive alternatives to animal testing. A variety of approaches to model these barriers are currently employed in such fields, ranging from the utilization of ex vivo tissue to reconstructed in vitro models, and further to chip-based technologies, synthetic membrane systems and, of increasing current interest, in silico modeling approaches. An international group of experts in the field of epithelial barriers was convened from academia, industry and regulatory bodies to present both the current state of the art of non-animal models of the skin, intestinal and pulmonary barriers in their various fields of application, and to discuss research-based, industry-driven and regulatory-relevant future directions for both the development of new models and the refinement of existing test methods. Issues of model relevance and preference, validation and standardization, acceptance, and the need for simplicity versus complexity were focal themes of the discussions. The outcomes of workshop presentations and discussions, in relation to both current status and future directions in the utilization and development of epithelial barrier models, are presented by the attending experts in the current report.

  12. From global circulation to flood loss: Coupling models across the scales

    NASA Astrophysics Data System (ADS)

    Felder, Guido; Gomez-Navarro, Juan Jose; Bozhinova, Denica; Zischg, Andreas; Raible, Christoph C.; Ole, Roessler; Martius, Olivia; Weingartner, Rolf

    2017-04-01

    The prediction and prevention of flood losses require an extensive understanding of the underlying meteorological, hydrological, hydraulic and damage processes. Coupled models help to improve the understanding of these underlying processes and therefore contribute to the understanding of flood risk. Using such a modelling approach to determine potentially flood-affected areas and damages requires a complex coupling between several models operating at different spatial and temporal scales. Although the isolated parts of the single modelling components are well established and commonly used in the literature, a full coupling including a mesoscale meteorological model driven by a global circulation model, a hydrologic model, a hydrodynamic model and a flood impact and loss model has not been reported so far. In the present study, we tackle the application of such a coupled model chain in terms of computational resources, scale effects, and model performance. From a technical point of view, the results show the general applicability of such a coupled model, as well as good model performance. From a practical point of view, such an approach enables the prediction of flood-induced damages, although some future challenges have been identified.

  13. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    NASA Astrophysics Data System (ADS)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem with viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed, and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, coupling procedures, model evaluation, and performance comparisons of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  14. A Comparison Study of Rule Space Method and Neural Network Model for Classifying Individuals and an Application.

    ERIC Educational Resources Information Center

    Hayashi, Atsuhiro

    Both the Rule Space Method (RSM) and the Neural Network Model (NNM) are techniques of statistical pattern recognition and classification approaches developed for applications from different fields. RSM was developed in the domain of educational statistics. It started from the use of an incidence matrix Q that characterizes the underlying cognitive…

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernández Cristóbal, Jose Ma, E-mail: jmariaffc@gmail.com

    Under the generic designation of unimodular theory, two theoretical models of gravity are considered: unimodular gravity and the TDiff theory. Our approach is primarily pedagogical. We aim to describe these models both from a geometric and a field-theoretical point of view. In addition, we explore connections with the cosmological-constant problem and outline some applications. We do not discuss the application of this theory to the quantization of gravity.

  16. Application of the Financial Industry Business Ontology (FIBO) for development of a financial organization ontology

    NASA Astrophysics Data System (ADS)

    Petrova, G. G.; Tuzovsky, A. F.; Aksenova, N. V.

    2017-01-01

    The article considers an approach to the formalized description and meaning harmonization of financial terms by means of semantic modeling. Ontologies for the semantic models are described with the help of special languages developed for the Semantic Web. Results of applying FIBO to different tasks in the Russian financial sector are given.

  17. Topology-based description of the NCA cathode configurational space and an approach of its effective reduction

    NASA Astrophysics Data System (ADS)

    Zolotarev, Pavel; Eremin, Roman

    2018-04-01

    Modification of existing solid electrolyte and cathode materials is a topic of interest for theoreticians and experimentalists. In particular, it requires elucidation of the influence of dopants on the characteristics of the materials under study. Because of the high complexity of the configurational space of doped/deintercalated systems, application of computer modeling approaches is hindered, despite significant advances in computational facilities in recent decades. In this study, we propose a scheme which allows reducing the set of structures of a modeled configurational space for subsequent study by means of time-consuming quantum chemistry methods. Application of the proposed approach is exemplified through the study of the configurational space of an approximant of the commercial LiNi0.8Co0.15Al0.05O2 (NCA) cathode material.

  18. SMA Hybrid Composites for Dynamic Response Abatement Applications

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    2000-01-01

    A recently developed constitutive model and a finite element formulation for predicting the thermomechanical response of Shape Memory Alloy (SMA) hybrid composite (SMAHC) structures are briefly described. Attention is focused on constrained recovery behavior in this study, but the constitutive formulation is also capable of modeling restrained or free recovery. Numerical results are shown for glass/epoxy panel specimens with embedded Nitinol actuators subjected to thermal and acoustic loads. Control of thermal buckling, random response, sonic fatigue, and transmission loss is demonstrated and compared to conventional approaches, including the addition of conventional composite layers and a constrained-layer damping treatment. Embedded SMA actuators are shown to be significantly more effective in dynamic response abatement applications than the conventional approaches and are attractive for combination with other passive and/or active approaches.

  19. Application of a data base management system to a finite element model

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.

    1980-01-01

    In today's software market, much effort is being expended on the development of data base management systems (DBMS). Most commercially available DBMS were designed for business use. However, the need for such systems within the engineering and scientific communities is becoming apparent. A potential DBMS application that appears attractive is the handling of data for finite element engineering models. The application of a commercially available, business-oriented DBMS to a structural engineering finite element model is explored. The model, the DBMS, an approach to using the DBMS, and the advantages and disadvantages are described. Plans for research on a scientific and engineering DBMS are discussed.

  20. Hierarchical animal movement models for population-level inference

    USGS Publications Warehouse

    Hooten, Mevin B.; Buderman, Frances E.; Brost, Brian M.; Hanks, Ephraim M.; Ivans, Jacob S.

    2016-01-01

    New methods for modeling animal movement based on telemetry data are developed regularly. With advances in telemetry capabilities, animal movement models are becoming increasingly sophisticated. Despite a need for population-level inference, animal movement models are still predominantly developed for individual-level inference. Most efforts to upscale the inference to the population level are either post hoc or complicated enough that only the developer can implement the model. Hierarchical Bayesian models provide an ideal platform for the development of population-level animal movement models but can be challenging to fit due to computational limitations or extensive tuning required. We propose a two-stage procedure for fitting hierarchical animal movement models to telemetry data. The two-stage approach is statistically rigorous and allows one to fit individual-level movement models separately, then resample them using a secondary MCMC algorithm. The primary advantages of the two-stage approach are that the first stage is easily parallelizable and the second stage is completely unsupervised, allowing for an automated fitting procedure in many cases. We demonstrate the two-stage procedure with two applications of animal movement models. The first application involves a spatial point process approach to modeling telemetry data, and the second involves a more complicated continuous-time discrete-space animal movement model. We fit these models to simulated data and real telemetry data arising from a population of monitored Canada lynx in Colorado, USA.
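
    As an illustrative aside, the two-stage logic above can be sketched in a few lines of Python. The sketch below substitutes a toy normal model for the authors' movement models and holds the population variance fixed; the stage-2 sampler reuses stored stage-1 draws as proposals, accepted with a Metropolis ratio of the hierarchical prior densities. All names and values are assumptions for illustration, not the paper's implementation.

      import numpy as np

      rng = np.random.default_rng(0)

      # Stage 1: fit each individual separately (easily parallelized).
      # Toy model: individual i's step lengths ~ Normal(theta_i, 1); with a
      # flat prior, the stage-1 posterior of theta_i is Normal(mean_i, 1/n_i).
      data = [rng.normal(mu, 1.0, size=50) for mu in (1.0, 1.5, 2.2, 0.8)]
      stage1 = [rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=2000) for y in data]

      # Stage 2: unsupervised secondary MCMC under the hierarchical prior
      # theta_i ~ Normal(mu, tau2); tau2 is held fixed here for brevity.
      n_iter, J, tau2 = 5000, len(stage1), 1.0
      theta = np.array([s[0] for s in stage1])
      mu, mu_out = theta.mean(), np.empty(n_iter)
      for t in range(n_iter):
          for i in range(J):
              prop = rng.choice(stage1[i])       # propose a stored stage-1 draw
              log_r = ((theta[i] - mu) ** 2 - (prop - mu) ** 2) / (2 * tau2)
              if np.log(rng.uniform()) < log_r:  # ratio of hierarchical priors
                  theta[i] = prop
          mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))  # Gibbs step for mu
          mu_out[t] = mu

      print("population-level mean:", mu_out[n_iter // 2:].mean())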

  1. A model for prediction of STOVL ejector dynamics

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1989-01-01

    A semi-empirical control-volume approach to ejector modeling for transient performance prediction is presented. This new approach is motivated by the need for a predictive real-time ejector sub-system simulation for Short Take-Off Vertical Landing (STOVL) integrated flight and propulsion controls design applications. Emphasis is placed on discussion of the approximate characterization of the mixing process central to thrust-augmenting ejector operation. The proposed ejector model suggests transient flow predictions are possible with a model based on steady-flow data. A practical test case is presented to illustrate model calibration.

  2. Using Ecosystem Experiments to Improve Vegetation Models

    DOE PAGES

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; ...

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. Identifying and evaluating the main assumptions that caused differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  3. A preliminary comparison of hydrodynamic approaches for flood inundation modeling of urban areas in Jakarta Ciliwung river basin

    NASA Astrophysics Data System (ADS)

    Rojali, Aditia; Budiaji, Abdul Somat; Pribadi, Yudhistira Satya; Fatria, Dita; Hadi, Tri Wahyu

    2017-07-01

    This paper addresses numerical modeling approaches for flood inundation in urban areas. A decisive strategy for choosing between a 1D, a 2D, or even a hybrid 1D-2D model is essential for optimizing flood inundation analyses. Finding a cost-effective yet robust and accurate model has been our priority and motivation in the absence of available High Performance Computing facilities. The application of 1D, 1D/2D and full 2D modeling approaches to a river flood study in the Jakarta Ciliwung river basin is presented, together with a comparison of the approaches benchmarked for the inundation study. This study demonstrates the successful use of 1D/2D and 2D systems to model the Jakarta Ciliwung river basin in terms of both inundation results and computational cost. The findings provide an interesting comparison between modeling approaches (HEC-RAS 1D, 1D-2D, and 2D, and ANUGA) when benchmarked against the Manggarai water level measurements.

  4. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  5. Transient high frequency signal estimation: A model-based processing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, F.L.

    1985-03-22

    By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections were removed by direct application of a Wiener-type estimation algorithm after the appropriate input was synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated, and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and at the lag where the incident signal and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple applications of the processing procedure were required when we applied the reflection removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
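
    The autocorrelation-ratio step described above is easy to demonstrate. Below is a minimal Python sketch under assumed values (a single reflection of known lag, a Gaussian incident pulse): for one reflection of weight w, the autocorrelation ratio at the reflection lag is roughly w/(1 + w^2), which is solved for w and used to subtract the scaled, delayed copy. This is a simplified direct subtraction, not the Wiener-type estimator used in the report.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic nondispersive measurement: incident pulse plus one scaled,
      # delayed reflection (delay and weight are illustrative values).
      n, delay, weight = 1024, 100, 0.6
      incident = np.exp(-0.5 * ((np.arange(n) - 80) / 10.0) ** 2)
      measured = incident.copy()
      measured[delay:] += weight * incident[:-delay]
      measured += rng.normal(0, 0.01, n)

      # Reflection weight from the autocorrelation ratio at lag 0 and at the
      # lag where the incident signal and first reflection coincide.
      acf = np.correlate(measured, measured, mode="full")[n - 1:]
      rho = acf[delay] / acf[0]                          # ~ w / (1 + w^2)
      w_hat = (1 - np.sqrt(1 - 4 * rho**2)) / (2 * rho)

      # Remove the reflection (to first order) by delayed subtraction.
      est = measured.copy()
      est[delay:] -= w_hat * est[:-delay]
      print(f"true weight {weight}, estimated {w_hat:.3f}")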

  6. Informed spectral analysis: audio signal parameter estimation using side information

    NASA Astrophysics Data System (ADS)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach, which consists in directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by estimation from the signal, and it may require a larger bitrate while sacrificing compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration, where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling, a well-known model with practical applications and for which theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.
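
    For readers unfamiliar with the baseline, the sketch below shows the plain (non-informed) estimation stage for a single sinusoid: parameters are read off an FFT peak, and their precision is limited by the noise, which is exactly the gap the informed side information is meant to close. The sampling rate, signal values and window choice are illustrative assumptions; the watermark embedding itself is not shown.

      import numpy as np

      rng = np.random.default_rng(2)

      # Noisy single sinusoid; estimate frequency and amplitude from the FFT.
      fs, n = 8000.0, 4000      # 2 Hz bin spacing, so 440 Hz is bin-centred
      t = np.arange(n) / fs
      a0, f0 = 0.7, 440.0
      x = a0 * np.sin(2 * np.pi * f0 * t + 0.3) + rng.normal(0, 0.2, n)

      win = np.hanning(n)
      spec = np.fft.rfft(x * win)
      k = np.argmax(np.abs(spec))
      f_hat = k * fs / n                       # frequency of the peak bin
      a_hat = 2 * np.abs(spec[k]) / win.sum()  # amplitude, window gain corrected
      print(f"f_hat = {f_hat:.1f} Hz, a_hat = {a_hat:.3f}")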

  7. The time series approach to short term load forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagan, M.T.; Behr, S.M.

    The application of time series analysis methods to load forecasting is reviewed. It is shown that Box and Jenkins time series models, in particular, are well suited to this application. The logical and organized procedures for model development using the autocorrelation function make these models particularly attractive. One of the drawbacks of these models is the inability to accurately represent the nonlinear relationship between load and temperature. A simple procedure for overcoming this difficulty is introduced, and several Box and Jenkins models are compared with a forecasting procedure currently used by a utility company.
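
    A hedged sketch of the kind of remedy the abstract alludes to: a Box-Jenkins ARMA model augmented with temperature regressors, here including a squared term to capture the nonlinear (roughly U-shaped) load-temperature relationship. The data are synthetic and the model order is an assumption; the utility's actual procedure is not reproduced.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(3)

      # Synthetic hourly series: load responds nonlinearly to temperature,
      # with AR(1) noise on top.
      n = 500
      temp = 15 + 10 * np.sin(np.arange(n) * 2 * np.pi / 24) + rng.normal(0, 2, n)
      noise = np.zeros(n)
      for t in range(1, n):
          noise[t] = 0.8 * noise[t - 1] + rng.normal(0, 1)
      load = 100 + 0.05 * (temp - 18) ** 2 + noise

      # ARMAX with linear and squared temperature terms as exogenous inputs.
      exog = np.column_stack([temp, (temp - 18) ** 2])
      fit = ARIMA(load, exog=exog, order=(1, 0, 1)).fit()
      print(fit.summary().tables[1])

      # Next-day forecast; reusing the last day's temperatures is a toy
      # stand-in for a real temperature forecast.
      forecast = fit.forecast(steps=24, exog=exog[-24:])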

  8. Model application niche analysis: An approach for assessing the transferability and generalizability of ecological models

    EPA Science Inventory

    A 30-year review of predictive models used in regulatory decision-making revealed that transferring models to contexts other than those for which they were developed was one of the biggest vulnerabilities to their legal defensibility. The use and transfer of models by ecolo...

  9. Hybrid modeling for quality by design and PAT-benefits and challenges of applications in biopharmaceutical industry.

    PubMed

    von Stosch, Moritz; Davy, Steven; Francois, Kjell; Galvanauskas, Vytautas; Hamelink, Jan-Martijn; Luebbert, Andreas; Mayer, Martin; Oliveira, Rui; O'Kennedy, Ronan; Rice, Paul; Glassey, Jarka

    2014-06-01

    This report highlights the drivers, challenges, and enablers of hybrid modeling applications in the biopharmaceutical industry. It is a summary of an expert panel discussion of European academics and industrialists with relevant scientific and engineering backgrounds. Hybrid modeling is viewed in its broader sense, namely as the integration of different knowledge sources, in the form of parametric and nonparametric models, into a hybrid semi-parametric model, for instance the integration of fundamental and data-driven models. A brief description of the current state of the art and industrial uptake of the methodology is provided. The report concludes with a number of recommendations to facilitate further developments and a wider industrial application of this modeling approach. These recommendations are limited to further exploiting the benefits of this methodology within process analytical technology (PAT) applications in the biopharmaceutical industry. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. OntoCR: A CEN/ISO-13606 clinical repository based on ontologies.

    PubMed

    Lozano-Rubí, Raimundo; Muñoz Carrero, Adolfo; Serrano Balazote, Pablo; Pastor, Xavier

    2016-04-01

    To design a new semantically interoperable clinical repository, based on ontologies, conforming to the CEN/ISO 13606 standard. The approach followed is to extend OntoCRF, a framework for the development of clinical repositories based on ontologies. The meta-model of OntoCRF has been extended by incorporating an OWL model integrating the CEN/ISO 13606, ISO 21090 and SNOMED CT structures. This approach has demonstrated a complete evaluation cycle involving the creation of the meta-model in OWL format, the creation of a simple test application, and the communication of standardized extracts to another organization. Using a CEN/ISO 13606 based system, an indefinite number of archetypes can be merged (and reused) to build new applications. Our approach, based on the use of ontologies, keeps data storage independent of content specification. With this approach, relational technology can be used for storage while maintaining extensibility. The present work demonstrates that it is possible to build a native CEN/ISO 13606 repository for the storage of clinical data. We have demonstrated semantic interoperability of clinical information using CEN/ISO 13606 extracts. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. The Applicability of the Generalized Method of Cells for Analyzing Discontinuously Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Pahr, D. H.; Arnold, S. M.

    2001-01-01

    The paper begins with a short overview of recent work in the field of discontinuously reinforced composites, focusing on the different parameters which influence the material behavior of such composites, as well as the various analysis approaches undertaken. Based on this overview, it became evident that, in order to investigate the enumerated effects in an efficient and comprehensive manner, an alternative to the computationally intensive finite-element-based micromechanics approach is required. Therefore, an investigation is conducted to demonstrate the utility of the generalized method of cells (GMC), a semi-analytical micromechanics-based approach, for simulating the elastic and elastoplastic material behavior of aligned short fiber composites. The results are compared with (1) simulations using other micromechanics-based mean field models and finite element (FE) unit cell models found in the literature, given elastic material behavior, as well as (2) finite element unit cell and a new semi-analytical elastoplastic shear lag model in the inelastic range. GMC is shown to have a definite window of applicability when simulating discontinuously reinforced composite material behavior.

  12. The Applicability of the Generalized Method of Cells for Analyzing Discontinuously Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Pahr, D. H.; Arnold, S. M.

    2001-01-01

    The paper begins with a short overview of recent work in the field of discontinuously reinforced composites, focusing on the different parameters which influence the material behavior of such composites, as well as the various analysis approaches undertaken. Based on this overview, it became evident that, in order to investigate the enumerated effects in an efficient and comprehensive manner, an alternative to the computationally intensive finite-element-based micromechanics approach is required. Therefore, an investigation is conducted to demonstrate the utility of the generalized method of cells (GMC), a semi-analytical micromechanics-based approach, for simulating the elastic and elastoplastic material behavior of aligned short fiber composites. The results are compared with simulations using other micromechanics-based mean field models and finite element (FE) unit cell models found in the literature, given elastic material behavior, as well as finite element unit cell and a new semi-analytical elastoplastic shear lag model in the inelastic range. GMC is shown to have a definite window of applicability when simulating discontinuously reinforced composite material behavior.

  13. A Progressive Damage Model for unidirectional Fibre Reinforced Composites with Application to Impact and Penetration Simulation

    NASA Astrophysics Data System (ADS)

    Kerschbaum, M.; Hopmann, C.

    2016-06-01

    The computationally efficient simulation of the progressive damage behaviour of continuous fibre reinforced plastics is still a challenging task with currently available computer aided engineering methods. This paper presents an original approach to an energy-based continuum damage model which accounts for stress/strain nonlinearities, transverse and shear stress interaction phenomena, quasi-plastic shear strain components, strain rate effects, regularised damage evolution and consideration of load reversal effects. The physically based modelling approach enables experimental determination of all parameters on the ply level, avoiding expensive inverse analysis procedures. The modelling strategy, implementation and verification of this model using commercially available explicit finite element software are detailed. The model is then applied to simulate the impact and penetration of carbon fibre reinforced cross-ply specimens at varying impact speeds. The simulation results show that the presented approach provides a good representation of the force/displacement curves and especially good agreement with the experimentally observed fracture patterns. In addition, the mesh dependency of the results was assessed for one impact case, showing very little change in the simulation results, which emphasises the general applicability of the presented method.

  14. A framework for the selection and ensemble development of flood vulnerability models

    NASA Astrophysics Data System (ADS)

    Figueiredo, Rui; Schröter, Kai; Kreibich, Heidi; Martina, Mario

    2017-04-01

    Effective understanding and management of flood risk requires comprehensive risk assessment studies that consider not only the hazard component, but also the impacts that the phenomena may have on the built environment, economy and society. This integrated approach has gained importance over recent decades, and with it so has the scientific attention given to flood vulnerability models describing the relationships between flood intensity metrics and damage to physical assets, also known as flood loss models. Despite considerable progress in this field, many challenges persist. Flood damage mechanisms are complex and depend on multiple variables, which can have different degrees of importance depending on the application setting. In addition, data required for the development and validation of such models tend to be scarce, particularly in data poor regions. These issues are reflected in the large amount of flood vulnerability models that are available in the literature today, as well as in their high heterogeneity: they are built with different modelling approaches, in different geographic contexts, utilizing different explanatory variables, and with varying levels of complexity. Notwithstanding recent developments in this area, uncertainty remains high, and large disparities exist among models. For these reasons, identifying which model or models, given their properties, are appropriate for a given context is not straightforward. In the present study, we propose a framework that guides the structured selection of flood vulnerability models and enables ranking them according to their suitability for a certain application, based on expert judgement. The approach takes advantage of current state of the art and most up-to-date knowledge on flood vulnerability processes. Given the heterogeneity and uncertainty currently present in flood vulnerability models, we propose the use of a model ensemble. With this in mind, the proposed approach is based on a weighting scheme within a logic-tree framework that enables the generation of such ensembles in a logically consistent manner. We test and discuss the results by applying the framework to the case study of the 2002 floods along the Mulde River in Germany. Applications of individual models and model ensembles are compared and discussed.

  15. Modeling Social Influence via Combined Centralized and Distributed Planning Control

    NASA Technical Reports Server (NTRS)

    Vaccaro, James; Guest, Clark

    2010-01-01

    Real world events are driven by a mixture of both centralized and distributed control of individual agents based on their situational context and internal make up. For example, some people have partial allegiances to multiple, contradictory authorities, as well as to their own goals and principles. This can create a cognitive dissonance that can be exploited by an appropriately directed psychological influence operation (PSYOP). An Autonomous Dynamic Planning and Execution (ADP&E) approach is proposed for modeling both the unperturbed context as well as its reaction to various PSYOP interventions. As an illustrative example, the unrest surrounding the Iranian elections in the summer of 2009 is described in terms applicable to an ADP&E modeling approach. Aspects of the ADP&E modeling process are discussed to illustrate its application and advantages for this example.

  16. A stratification approach using logit-based models for confounder adjustment in the study of continuous outcomes.

    PubMed

    Tan, Chuen Seng; Støer, Nathalie C; Chen, Ying; Andersson, Marielle; Ning, Yilin; Wee, Hwee-Lin; Khoo, Eric Yin Hao; Tai, E-Shyong; Kao, Shih Ling; Reilly, Marie

    2017-01-01

    The control of confounding is an area of extensive epidemiological research, especially in the field of causal inference for observational studies. Matched cohort and case-control study designs are commonly implemented to control for confounding effects without specifying the functional form of the relationship between the outcome and confounders. This paper extends the commonly used regression models in matched designs for binary and survival outcomes (i.e. conditional logistic and stratified Cox proportional hazards) to studies of continuous outcomes through a novel interpretation and application of logit-based regression models from the econometrics and marketing research literature. We compare the performance of the maximum likelihood estimators using simulated data and propose a heuristic argument for obtaining the residuals for model diagnostics. We illustrate our proposed approach with two real data applications. Our simulation studies demonstrate that our stratification approach is robust to model misspecification and that the distribution of the estimated residuals provides a useful diagnostic when the strata are of moderate size. In our applications to real data, we demonstrate that parity and menopausal status are associated with percent mammographic density, and that the mean level and variability of inpatient blood glucose readings vary between medical and surgical wards within a national tertiary hospital. Our work highlights how the same class of regression models, available in most statistical software, can be used to adjust for confounding in the study of binary, time-to-event and continuous outcomes.

  17. INVERSE MODEL ESTIMATION AND EVALUATION OF SEASONAL NH3 EMISSIONS

    EPA Science Inventory

    The presentation topic is inverse modeling for the estimation and evaluation of emissions. The case study presented is the need for seasonal estimates of NH3 emissions for air quality modeling. The inverse modeling application approach is first described, and then the NH

  18. Mixture Modeling: Applications in Educational Psychology

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…
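
    Latent profile analysis is commonly operationalized as a Gaussian mixture model on continuous indicators, so a minimal sketch can be written with scikit-learn (an assumption of convenience; the article itself is software-agnostic). The number of profiles is chosen here by BIC on toy data.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)

      # Toy "latent profile" data: two subpopulations, two continuous indicators.
      X = np.vstack([
          rng.normal([0.0, 0.0], 0.5, size=(150, 2)),
          rng.normal([2.0, 1.5], 0.5, size=(100, 2)),
      ])

      # Fit candidate models and pick the number of profiles by BIC.
      for k in (1, 2, 3, 4):
          gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
          print(k, round(gm.bic(X), 1))

      best = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(X)
      profiles = best.predict(X)    # most likely profile per case
      post = best.predict_proba(X)  # classification uncertainty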

  19. Uncertainty and variability in computational and mathematical models of cardiac physiology.

    PubMed

    Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H

    2016-12-01

    Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for predictive model outputs. We propose that the future of the Cardiac Physiome should include a probabilistic approach to quantify the relationship of variability and uncertainty of model inputs and outputs. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranosian, Antranik Antonio; Schembri, Philip Edward; Luscher, Darby Jon

    The Los Alamos National Laboratory's Weapon Systems Engineering division's Advanced Engineering Analysis group employs material constitutive models of composites for use in simulations of components and assemblies of interest. Experimental characterization, modeling, and prediction of the macro-scale (i.e., continuum) behaviors of these composite materials are generally difficult because the materials exhibit nonlinear behaviors on the meso- (e.g., micro-) and macro-scales. Furthermore, it can be difficult to measure and model the mechanical responses of the individual constituents and constituent interactions in the composites of interest. Current efforts to model such composite materials rely on semi-empirical models in which meso-scale properties are inferred from continuum-level testing and modeling. The proposed approach involves removing the difficulties of interrogating and characterizing micro-scale behaviors by scaling up the problem to work with macro-scale composites, with the intention of developing testing and modeling capabilities that will be applicable to the meso-scale. This approach assumes that the physical mechanisms governing the responses of the composites on the meso-scale are reproducible on the macro-scale. Working on the macro-scale simplifies the quantification of composite constituents and constituent interactions so that efforts can be focused on developing material models and the testing techniques needed for calibration and validation. Other benefits of working with macro-scale composites include the ability to engineer and manufacture (potentially using additive manufacturing techniques) composites that will support the application of advanced measurement techniques such as digital volume correlation and three-dimensional computed tomography imaging, which would aid in observing and quantifying the complex behaviors exhibited in the macro-scale composites of interest. Ultimately, the goal of this new approach is to develop a meso-scale composite modeling framework, applicable to many composite materials, and the corresponding macro-scale testing and test-data interrogation techniques to support model calibration.

  1. Integrating machine learning techniques into robust data enrichment approach and its application to gene expression data.

    PubMed

    Erdoğdu, Utku; Tan, Mehmet; Alhajj, Reda; Polat, Faruk; Rokne, Jon; Demetrick, Douglas

    2013-01-01

    The availability of enough samples for effective analysis and knowledge discovery has been a challenge in the research community, especially in the area of gene expression data analysis. Thus, the approaches being developed for data analysis have mostly suffered from a lack of sufficient data to train and test the constructed models. We argue that the process of sample generation can be successfully automated by employing sophisticated machine learning techniques. An automated sample generation framework could successfully complement actual sample generation from real cases. This argument is validated in this paper by describing a framework that integrates multiple models (perspectives) for sample generation. We illustrate its applicability for producing new gene expression data samples, a highly demanding area that has not received attention. The three perspectives employed in the process are based on models that are not closely related. This independence eliminates the bias of the produced approach covering only certain characteristics of the domain and leading to samples skewed in one direction. The first model is based on the Probabilistic Boolean Network (PBN) representation of the gene regulatory network underlying the given gene expression data. The second model integrates a Hierarchical Markov Model (HIMM), and the third model employs a genetic algorithm. Each model learns as many characteristics of the domain being analysed as possible and tries to incorporate the learned characteristics when generating new samples. In other words, the models base their analysis on domain knowledge implicitly present in the data itself. The developed framework has been extensively tested by checking how well the new samples complement the original samples. The produced results are very promising in showing the effectiveness, usefulness and applicability of the proposed multi-model framework.
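
    Of the three perspectives, the genetic-algorithm one is the simplest to sketch. The toy Python below breeds new expression-like samples from real ones by selection, uniform crossover and mutation; the per-gene z-score fitness is an invented plausibility criterion for illustration only, where the actual framework would score candidates with its learned PBN and HIMM models.

      import numpy as np

      rng = np.random.default_rng(5)

      real = rng.normal(0, 1, size=(40, 20))  # 40 samples x 20 "genes" (toy data)
      mu, sd = real.mean(0), real.std(0)

      def fitness(sample):
          # Toy plausibility score: closeness to the real per-gene distribution.
          z = (sample - mu) / sd
          return -np.sum(z ** 2)

      pop = real[rng.integers(0, 40, size=30)].copy()  # seed population
      for gen in range(200):
          scores = np.array([fitness(s) for s in pop])
          parents = pop[np.argsort(scores)[-10:]]      # selection: keep the fittest
          children = []
          for _ in range(len(pop)):
              a, b = parents[rng.integers(0, 10, 2)]
              child = np.where(rng.random(20) < 0.5, a, b)  # uniform crossover
              child = child + rng.normal(0, 0.05, 20)       # mutation
              children.append(child)
          pop = np.array(children)

      new_samples = pop  # candidate synthetic samples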

  2. The mechanism and design of sequencing batch reactor systems for nutrient removal--the state of the art.

    PubMed

    Artan, N; Wilderer, P; Orhon, D; Morgenroth, E; Ozgür, N

    2001-01-01

    The Sequencing Batch Reactor (SBR) process for carbon and nutrient removal is subject to extensive research, and it is finding a wider application in full-scale installations. Despite the growing popularity, however, a widely accepted approach to process analysis and modeling, a unified design basis, and even a common terminology are still lacking; this situation is now regarded as the major obstacle hindering broader practical application of the SBR. In this paper a rational dimensioning approach is proposed for nutrient removal SBRs based on scientific information on process stoichiometry and modelling, also emphasizing practical constraints in design and operation.

  3. Validating the simulation of large-scale parallel applications using statistical characteristics

    DOE PAGES

    Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...

    2016-03-01

    Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Our experimental results show that the proposed evaluation approach offers a significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria for progress toward automating the simulation tuning process.
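
    The contrast between the coarse and fine-grained criteria can be made concrete with a toy example. Below, two simulated traces match the real run's total time almost exactly, but a two-sample Kolmogorov-Smirnov distance on per-message latencies (one possible fine-grained statistical characteristic, chosen here for illustration and not necessarily the paper's metric) separates the faithful simulation from the unfaithful one.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)

      # Toy traces: per-message latencies from a real run and two simulators.
      real_lat = rng.gamma(2.0, 5.0, size=10_000)
      sim_good = rng.gamma(2.0, 5.0, size=10_000)            # right distribution
      sim_bad = rng.normal(real_lat.mean(), real_lat.std(),  # right moments only
                           size=10_000)

      for name, sim in [("good", sim_good), ("bad", sim_bad)]:
          total_err = abs(sim.sum() - real_lat.sum()) / real_lat.sum()
          ks = stats.ks_2samp(real_lat, sim).statistic
          print(f"{name}: total-time error {total_err:.1%}, KS distance {ks:.3f}")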

  4. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.

  5. Sequence-Based Prioritization of Nonsynonymous Single-Nucleotide Polymorphisms for the Study of Disease Mutations

    PubMed Central

    Jiang, Rui; Yang, Hua; Zhou, Linqi; Kuo, C.-C. Jay; Sun, Fengzhu; Chen, Ting

    2007-01-01

    The increasing demand for the identification of genetic variation responsible for common diseases has translated into a need for sophisticated methods for effectively prioritizing mutations occurring in disease-associated genetic regions. In this article, we prioritize candidate nonsynonymous single-nucleotide polymorphisms (nsSNPs) through a bioinformatics approach that takes advantage of a set of improved numeric features derived from protein-sequence information and a new statistical learning model called “multiple selection rule voting” (MSRV). The sequence-based features maximize the scope of applications of our approach, and the MSRV model can capture subtle characteristics of individual mutations. Systematic validation demonstrates that this approach is capable of prioritizing causal mutations for both simple monogenic diseases and complex polygenic diseases. Further studies of familial Alzheimer disease and diabetes show that the approach can rank mutations underlying these polygenic diseases near the top of the candidate list. Application of this approach to unclassified mutations suggests that 10 suspicious mutations are likely to cause disease, with strong support for this in the literature. PMID:17668383

  6. Hedonic approaches based on spatial econometrics and spatial statistics: application to evaluation of project benefits

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Morito; Seya, Hajime

    2009-12-01

    This study discusses the theoretical foundation of the application of spatial hedonic approaches (hedonic approaches employing spatial econometrics and/or spatial statistics) to benefits evaluation. The study highlights the limitations of the spatial econometrics approach, since it uses a spatial weight matrix that is not employed by the spatial statistics approach. Further, the study presents empirical analyses applying the Spatial Autoregressive Error Model (SAEM), which is based on the spatial econometrics approach, and the Spatial Process Model (SPM), which is based on the spatial statistics approach. SPMs are fitted under both isotropy and anisotropy and applied to different mesh sizes. The empirical analysis reveals that the estimated benefits are quite different, especially between the isotropic and anisotropic SPM and between the isotropic SPM and the SAEM; the estimated benefits are similar for the SAEM and the anisotropic SPM. The study demonstrates that the mesh size does not affect the estimated amount of benefits. Finally, the study provides a confidence interval for the estimated benefits and raises an issue with regard to benefit evaluation.
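
    The methodological contrast the abstract draws can be shown in a few lines: the econometric ingredient is a discrete spatial weight matrix, while the statistical ingredient is a continuous covariance function. The k, sill and range values below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(7)
      coords = rng.uniform(0, 10, size=(100, 2))   # property locations (toy)
      d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

      # Spatial econometrics ingredient: a row-standardized k-nearest-neighbour
      # weight matrix W, as consumed by a spatial autoregressive error model.
      k = 5
      W = np.zeros((100, 100))
      for i in range(100):
          nbrs = np.argsort(d[i])[1:k + 1]         # skip self (distance 0)
          W[i, nbrs] = 1.0 / k

      # Spatial statistics ingredient: an isotropic exponential covariance,
      # as used by a spatial process model; anisotropy would rescale the
      # distances directionally before applying the covariance function.
      sill, range_par = 1.0, 2.0
      C = sill * np.exp(-d / range_par)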

  7. Automated Construction of Node Software Using Attributes in a Ubiquitous Sensor Network Environment

    PubMed Central

    Lee, Woojin; Kim, Juil; Kang, JangMook

    2010-01-01

    In sensor networks, nodes must often operate in a demanding environment facing restrictions such as restricted computing resources, unreliable wireless communication and power shortages. Such factors make the development of ubiquitous sensor network (USN) applications challenging. To help developers construct a large amount of node software for sensor network applications easily and rapidly, this paper proposes an approach to the automated construction of node software for USN applications using attributes. In the proposed technique, application construction proceeds by first developing a model for the sensor network and then designing node software by setting the values of the predefined attributes. After that, the sensor network model and the design of node software are verified. The final source code of the node software is automatically generated from the sensor network model. We illustrate the efficiency of the proposed technique with a gas/light monitoring application through a case study of a Gas and Light Monitoring System based on the Nano-Qplus operating system. We evaluate the technique using a quantitative metric: the memory size of the execution code for node software. Using the proposed approach, developers are able to easily construct sensor network applications and rapidly generate a large amount of node software at a time in a ubiquitous sensor network environment. PMID:22163678

  8. Automated construction of node software using attributes in a ubiquitous sensor network environment.

    PubMed

    Lee, Woojin; Kim, Juil; Kang, JangMook

    2010-01-01

    In sensor networks, nodes must often operate in a demanding environment facing restrictions such as restricted computing resources, unreliable wireless communication and power shortages. Such factors make the development of ubiquitous sensor network (USN) applications challenging. To help developers construct a large amount of node software for sensor network applications easily and rapidly, this paper proposes an approach to the automated construction of node software for USN applications using attributes. In the proposed technique, application construction proceeds by first developing a model for the sensor network and then designing node software by setting the values of the predefined attributes. After that, the sensor network model and the design of node software are verified. The final source code of the node software is automatically generated from the sensor network model. We illustrate the efficiency of the proposed technique with a gas/light monitoring application through a case study of a Gas and Light Monitoring System based on the Nano-Qplus operating system. We evaluate the technique using a quantitative metric: the memory size of the execution code for node software. Using the proposed approach, developers are able to easily construct sensor network applications and rapidly generate a large amount of node software at a time in a ubiquitous sensor network environment.

  9. Human systems immunology: hypothesis-based modeling and unbiased data-driven approaches.

    PubMed

    Arazi, Arnon; Pendergraft, William F; Ribeiro, Ruy M; Perelson, Alan S; Hacohen, Nir

    2013-10-31

    Systems immunology is an emerging paradigm that aims at a more systematic and quantitative understanding of the immune system. Two major approaches have been utilized to date in this field: unbiased data-driven modeling to comprehensively identify molecular and cellular components of a system and their interactions; and hypothesis-based quantitative modeling to understand the operating principles of a system by extracting a minimal set of variables and rules underlying them. In this review, we describe applications of the two approaches to the study of viral infections and autoimmune diseases in humans, and discuss possible ways by which these two approaches can synergize when applied to human immunology. Copyright © 2012 Elsevier Ltd. All rights reserved.
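
    As a concrete example of the hypothesis-based side, the target-cell-limited model of acute viral infection reduces an infection to three variables and four parameters. The sketch below integrates it with SciPy; parameter values are illustrative, not taken from the review.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Target cells T, infected cells I, free virus V (illustrative parameters).
      beta, delta, p, c = 1e-6, 2.0, 100.0, 5.0

      def rhs(t, y):
          T, I, V = y
          return [-beta * T * V,
                  beta * T * V - delta * I,
                  p * I - c * V]

      sol = solve_ivp(rhs, (0, 15), [1e7, 0.0, 10.0], max_step=0.01)
      print("peak viral load:", sol.y[2].max())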

  10. Advances in theory and their application within the field of zeolite chemistry.

    PubMed

    Van Speybroeck, Veronique; Hemelsoet, Karen; Joos, Lennart; Waroquier, Michel; Bell, Robert G; Catlow, C Richard A

    2015-10-21

    Zeolites are versatile and fascinating materials which are vital for a wide range of industries due to their unique structural and chemical properties, which are the basis of applications in gas separation, ion exchange and catalysis. Given their economic impact, there is a powerful incentive for the smart design of new materials with enhanced functionalities to obtain the best material for a given application. Over the last decades, theoretical modeling has matured to the point that model-guided design is within reach. Major hurdles have been overcome to reach this point, and almost all contemporary methods in computational materials chemistry are actively used in the field of modeling zeolite chemistry and applications. Integration of complementary modeling approaches is necessary to obtain reliable predictions and rationalizations from theory. A close synergy between experimentalists and theoreticians has led to a deep understanding of the complexity of the systems at hand, but has also allowed the identification of shortcomings in current theoretical approaches. Inspired by the importance of zeolite characterization, which can now be performed at the single-atom and single-molecule level experimentally, computational spectroscopy has grown in importance in the last decade. In this review, most of the currently available modeling tools are introduced and illustrated on the most challenging problems in zeolite science. Directions for future model developments are given.

  11. Fast All-Sky Radiation Model for Solar Applications (FARMS): A Brief Overview of Mechanisms, Performance, and Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Yu; Sengupta, Manajit

    Solar radiation can be computed using radiative transfer models, such as the Rapid Radiative Transfer Model (RRTM) and its general circulation model applications, and used for various energy applications. Due to the complexity of computing radiation fields in aerosol-laden and cloudy atmospheres, simulating solar radiation can be extremely time-consuming, but many approximations (e.g., the two-stream approach and the delta-M truncation scheme) can be utilized. To provide a new fast option for computing solar radiation, we developed the Fast All-sky Radiation Model for Solar applications (FARMS) by parameterizing the diffuse horizontal irradiance and direct normal irradiance simulated for cloudy conditions by RRTM runs using a 16-stream discrete ordinates radiative transfer method. The solar irradiance at the surface was simulated by combining the cloud irradiance parameterizations with a fast clear-sky model, REST2. To assess the accuracy and efficiency of the newly developed fast model, we analyzed FARMS runs using cloud optical and microphysical properties retrieved from GOES data from 2009-2012. The global horizontal irradiance for cloudy conditions was simulated using FARMS and RRTM for global circulation modeling with a two-stream approximation and compared to measurements taken at the U.S. Department of Energy's Atmospheric Radiation Measurement Climate Research Facility Southern Great Plains site. Our results indicate that the accuracy of FARMS is comparable to or better than the two-stream approach; however, FARMS is approximately 400 times more efficient because it does not explicitly solve the radiative transfer equation for each individual cloud condition. Radiative transfer model runs are computationally expensive, but this model is promising for broad applications in solar resource assessment and forecasting. It is currently being used in the National Solar Radiation Database, which is publicly available from the National Renewable Energy Laboratory at http://nsrdb.nrel.gov.

  12. Multilevel Modeling and Policy Development: Guidelines and Applications to Medical Travel.

    PubMed

    Garcia-Garzon, Eduardo; Zhukovsky, Peter; Haller, Elisa; Plakolm, Sara; Fink, David; Petrova, Dafina; Mahalingam, Vaishali; Menezes, Igor G; Ruggeri, Kai

    2016-01-01

    Medical travel has expanded rapidly in recent years, resulting in new markets and increased access to medical care. Whereas several studies have investigated the motives of individuals seeking healthcare abroad, the conventional analytical approach is limited by substantial caveats. Classical techniques as found in the literature cannot provide sufficient insight due to the nested nature of the data generated. The application of adequate analytical techniques, specifically multilevel modeling, is scarce to non-existent in the context of medical travel. This study introduces guidelines for the application of multilevel techniques in public health research by presenting an application of multilevel modeling to the analysis of the decision-making patterns of potential medical travelers. Benefits and potential limitations are discussed.
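
    A minimal sketch of the recommended technique, assuming a random-intercept model fitted with statsmodels on invented nested data (respondents within countries); ignoring the nesting here would understate the standard error of the individual-level effect.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(8)

      # Toy nested data: 60 respondents in each of 12 countries rate their
      # willingness to travel abroad for treatment.
      n_country, n_per = 12, 60
      country = np.repeat(np.arange(n_country), n_per)
      u = rng.normal(0, 1.0, n_country)[country]   # country random intercepts
      cost = rng.normal(0, 1, n_country * n_per)
      willing = 3.0 - 0.5 * cost + u + rng.normal(0, 1, n_country * n_per)
      df = pd.DataFrame({"willing": willing, "cost": cost, "country": country})

      # Random-intercept multilevel model.
      fit = smf.mixedlm("willing ~ cost", df, groups=df["country"]).fit()
      print(fit.summary())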

  13. Multilevel Modeling and Policy Development: Guidelines and Applications to Medical Travel

    PubMed Central

    Garcia-Garzon, Eduardo; Zhukovsky, Peter; Haller, Elisa; Plakolm, Sara; Fink, David; Petrova, Dafina; Mahalingam, Vaishali; Menezes, Igor G.; Ruggeri, Kai

    2016-01-01

    Medical travel has expanded rapidly in recent years, resulting in new markets and increased access to medical care. Whereas several studies have investigated the motives of individuals seeking healthcare abroad, the conventional analytical approach is limited by substantial caveats. Classical techniques as found in the literature cannot provide sufficient insight due to the nested nature of the data generated. The application of adequate analytical techniques, specifically multilevel modeling, is scarce to non-existent in the context of medical travel. This study introduces guidelines for the application of multilevel techniques in public health research by presenting an application of multilevel modeling to the analysis of the decision-making patterns of potential medical travelers. Benefits and potential limitations are discussed. PMID:27252672

  14. Application of Artificial Intelligence for Bridge Deterioration Model.

    PubMed

    Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun

    2015-01-01

    The deterministic bridge deterioration model updating problem is well established in bridge management, but the traditional methods and approaches for this problem require manual intervention. This paper presents an artificial-intelligence-based approach for self-updating the parameters of a bridge deterioration model. When new information and data are collected, a posterior distribution is constructed, according to Bayes' theorem, to describe the integrated result of the historical information and the newly gained information; this posterior is then used to update the model parameters. The AI-based approach is applied to the case of updating the parameters of a bridge deterioration model using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results showed that it is an accurate, effective, and satisfactory approach to the problem of parameter updating without manual intervention.
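
    The Bayesian updating step can be illustrated with the simplest conjugate case. The sketch below assumes the deterioration rate is normal with known observation noise, which is a generic stand-in, since the abstract does not specify the paper's likelihood or parameterization.

      import numpy as np

      # Prior on the annual deterioration rate r from historical data, plus
      # new inspection-derived observations of r (all values are toy numbers).
      m0, s0 = 0.8, 0.3        # prior mean and sd (condition points / year)
      sigma = 0.2              # assumed inspection noise sd
      new_obs = np.array([1.05, 0.95, 1.10])

      # Conjugate normal-normal update: posterior precision is the sum of
      # the prior precision and the data precision.
      n = len(new_obs)
      prec = 1 / s0**2 + n / sigma**2
      m_post = (m0 / s0**2 + new_obs.sum() / sigma**2) / prec
      s_post = (1 / prec) ** 0.5
      print(f"posterior rate: {m_post:.3f} +/- {s_post:.3f}")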

  15. Application of Artificial Intelligence for Bridge Deterioration Model

    PubMed Central

    Chen, Zhang; Wu, Yangyang; Sun, Lijun

    2015-01-01

    The deterministic bridge deterioration model updating problem is well established in bridge management, but the traditional methods and approaches for this problem require manual intervention. This paper presents an artificial-intelligence-based approach for self-updating the parameters of a bridge deterioration model. When new information and data are collected, a posterior distribution is constructed, according to Bayes' theorem, to describe the integrated result of the historical information and the newly gained information; this posterior is then used to update the model parameters. The AI-based approach is applied to the case of updating the parameters of a bridge deterioration model using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results showed that it is an accurate, effective, and satisfactory approach to the problem of parameter updating without manual intervention. PMID:26601121

  16. Application of various FLD modelling approaches

    NASA Astrophysics Data System (ADS)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches for predicting the forming limit diagram (FLD) in sheet metal forming under a linear strain path, using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well-known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse necking approach (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.

  17. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…
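
    The two classifiers are straightforward to compare on a held-out set; the scikit-learn sketch below uses synthetic data as a stand-in for curriculum-outcome records, and the network size and kernel are arbitrary illustrative choices.

      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC

      # Synthetic stand-in for curriculum-outcome data.
      X, y = make_classification(n_samples=600, n_features=10, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      # Feed-forward ANN trained by back-propagation (gradient adjustment of
      # connection weights) versus an RBF-kernel SVM.
      ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(Xtr, ytr)
      svm = SVC(kernel="rbf").fit(Xtr, ytr)
      print("ANN accuracy:", ann.score(Xte, yte))
      print("SVM accuracy:", svm.score(Xte, yte))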

  18. Application of SIGGS to Project PRIME: A General Systems Approach to Evaluation of Mainstreaming.

    ERIC Educational Resources Information Center

    Frick, Ted

    The use of the systems approach in educational inquiry is not new, and the models of input/output, input/process/product, and cybernetic systems have been widely used. The general systems model is an extension of all these, adding the dimension of environmental influence on the system as well as system influence on the environment. However, if the…

  19. The Rangeland Hydrology and Erosion Model: A dynamic approach for predicting soil loss on rangelands

    USDA-ARS?s Scientific Manuscript database

    In this study we present the improved Rangeland Hydrology and Erosion Model (RHEM V2.3), a process-based erosion prediction tool specific for rangeland application. The article provides the mathematical formulation of the model and parameter estimation equations. Model performance is assessed agains...

  20. Exploring component-based approaches in forest landscape modeling

    Treesearch

    H. S. He; D. R. Larsen; D. J. Mladenoff

    2002-01-01

    Forest management issues are increasingly required to be addressed in a spatial context, which has led to the development of spatially explicit forest landscape models. The numerous processes, complex spatial interactions, and diverse applications in spatial modeling make the development of forest landscape models difficult for any single research group. New...

  1. APPLICATION OF A NEW LAND-SURFACE, DRY DEPOSITION, AND PBL MODEL IN THE MODELS-3 COMMUNITY MULTI-SCALE AIR QUALITY (CMAQ) MODEL SYSTEM

    EPA Science Inventory

    Like most air quality modeling systems, CMAQ divides the treatment of meteorological and chemical/transport processes into separate models run sequentially. A potential drawback to this approach is that it creates the illusion that these processes are minimally interdependent an...

  2. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  3. Intelligence Fusion Modeling. A Proposed Approach.

    DTIC Science & Technology

    1983-09-16

    based techniques developed by artificial intelligence researchers. This paper describes the application of these techniques in the modeling of an... intelligence requirements, although the methods presented are applicable. We treat PIR/IR as given. Movement... items from the PIR/IR/HVT decomposition are received from the CMDS. Formatted tactical intelligence reports are received from sensors of like types.

  4. Model selection and assessment for multi­-species occupancy models

    USGS Publications Warehouse

    Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.

    2016-01-01

    While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.

  5. Nonparametric model validations for hidden Markov models with applications in financial econometrics

    PubMed Central

    Zhao, Zhibiao

    2011-01-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601

  6. Application of physics engines in virtual worlds

    NASA Astrophysics Data System (ADS)

    Norman, Mark; Taylor, Tim

    2002-03-01

    Dynamic virtual worlds can potentially provide a much richer and more enjoyable experience than static ones. To realize such worlds, three approaches are commonly used. The first of these, still widely applied, involves importing traditional animations from a modeling system such as 3D Studio Max; it is therefore limited to predefined animation scripts or combinations/blends thereof. The second approach involves the integration of some special-purpose simulation code, such as car dynamics, and is thus generally limited to one (class of) application(s). The third approach involves the use of general-purpose physics engines, which promise to enable a range of compelling dynamic virtual worlds and to considerably speed up development. By far the largest market today for real-time simulation is computer games, with revenues exceeding those of the movie industry. Traditionally, the simulation is produced by game developers in-house for specific titles. However, off-the-shelf middleware physics engines are now available for use in games and related domains. In this paper, we report on our experiences of using middleware physics engines to create a virtual world as an interactive experience, and an advanced scenario where artificial life techniques generate controllers for physically modeled characters.

  7. Regionalization of land use impact models for life cycle assessment: Recommendations for their use on the global scale and their applicability to Brazil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavan, Ana Laura Raymundo, E-mail: laurarpavan@gmail.com; Ometto, Aldo Roberto; Department of Production Engineering, São Carlos School of Engineering, University of São Paulo, Av. Trabalhador São-Carlense 400, São Carlos 13566-590, SP

    Life Cycle Assessment (LCA) is the main technique for evaluating the environmental impacts of product life cycles. A major challenge in the field of LCA is spatial and temporal differentiation in Life Cycle Impact Assessment (LCIA) methods, especially for impacts resulting from land occupation and land transformation. Land use characterization modeling has advanced considerably over the last two decades and many approaches have recently included crucial aspects such as geographic differentiation. Nevertheless, characterization models have so far not been systematically reviewed and evaluated to determine their applicability to South America. Given that Brazil is the largest country in South America, this paper analyzes the main international characterization models currently available in the literature, with a view to recommending regionalized models applicable on a global scale for land use life cycle impact assessments, and discusses their feasibility for regionalized assessment in Brazil. The analytical methodology involves classification based on the following criteria: midpoint/endpoint approach, scope of application, area of data collection, biogeographical differentiation, definition of recovery time and reference situation; followed by an evaluation of thirteen scientific robustness and environmental relevance subcriteria. In terms of scope of application, 25% of the models were developed for the European context and 50% have a global scope. There is no consensus in the literature about the definition of parameters such as biogeographical differentiation and reference situation; our review indicates that 35% of the models use ecoregion division while 40% use the concept of potential natural vegetation. Four characterization models show high scores in terms of scientific robustness and environmental relevance. These models are recommended for application in land use life cycle impact assessments, and also serve as references for the development or adaptation of regional methodological procedures for Brazil. - Highlights: • A discussion is made on performing regionalized impact assessments using spatial differentiation in LCA. • A review is made of 20 characterization models for land use impacts in Life Cycle Impact Assessment. • Four characterization models are recommended according to different land use impact pathways for application in Brazil.

  8. Application of a Systems Engineering Approach to Support Space Reactor Development

    NASA Astrophysics Data System (ADS)

    Wold, Scott

    2005-02-01

    In 1992, approximately 25 Russian and 12 U.S. engineers and technicians were involved in the transport, assembly, inspection, and testing of over 90 tons of Russian equipment associated with the Thermionic System Evaluation Test (TSET) Facility. The entire Russian Baikal Test Stand, consisting of a 5.79 m tall vacuum chamber and related support equipment, was reassembled and tested at the TSET facility in less than four months. In November 1992, the first non-nuclear operational test of a complete thermionic power reactor system in the U.S. was accomplished three months ahead of schedule and under budget. A major factor in this accomplishment was the application of a disciplined top-down systems engineering approach and application of a spiral development model to achieve the desired objectives of the TOPAZ International Program (TIP). Systems Engineering is a structured discipline that helps programs and projects conceive, develop, integrate, test and deliver products and services that meet customer requirements within cost and schedule. This paper discusses the impact of Systems Engineering and a spiral development model on the success of the TOPAZ International Program and how the application of a similar approach could help ensure the success of future space reactor development projects.

  9. A Machine Learning Approach to Estimate Riverbank Geotechnical Parameters from Sediment Particle Size Data

    NASA Astrophysics Data System (ADS)

    Iwashita, Fabio; Brooks, Andrew; Spencer, John; Borombovits, Daniel; Curwen, Graeme; Olley, Jon

    2015-04-01

    Assessing bank stability using geotechnical models traditionally involves the laborious collection of data on the bank and floodplain stratigraphy, as well as in-situ geotechnical data for each sedimentary unit within a river bank. The application of geotechnical bank stability models is therefore limited to sites where extensive field data have been collected, and their ability to predict bank erosion at the reach scale is limited without a very extensive and expensive field data collection program. Some challenges in the construction and application of riverbank erosion and hydraulic numerical models are their one-dimensionality, steady-state requirements, lack of calibration data, and nonuniqueness. Numerical models can also be too rigid to detect unexpected features such as the onset of trends, non-linear relations, or patterns restricted to sub-samples of a data set. These shortcomings create the need for an alternative modelling approach capable of using available data. The Self-Organizing Maps (SOM) approach is well suited to the analysis of noisy, sparse, nonlinear, multidimensional, and scale-dependent data. It is a type of unsupervised artificial neural network with hybrid competitive-cooperative learning. In this work we present a method that uses a database of geotechnical data collected at over 100 sites throughout Queensland, Australia, to develop a modelling approach that enables geotechnical parameters (soil effective cohesion, friction angle, soil erodibility and critical stress) to be derived from sediment particle size distribution (PSD) data. The model framework and predicted values were evaluated using two methods: splitting the dataset into training and validation sets, and a bootstrap approach. The basis of bootstrap cross-validation is a leave-one-out strategy: one data value is left out of the training set while a new SOM is created to estimate that missing value from the remaining data. As a new SOM is created up to 30 times for each value under scrutiny, this forms the basis for a stochastic framework from which residuals are used to evaluate error statistics and model bias. The proposed method is suitable for estimating soil geotechnical properties, revealing and quantifying relationships between geotechnical variables and particle size distribution that are not properly captured by linear multivariate statistical approaches.
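    As a rough illustration of the idea (not the authors' implementation), the sketch below trains a small SOM on synthetic records of PSD fractions plus one geotechnical property, then estimates that property for a new sample by matching only on the PSD components. The grid size, decay schedules and data are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative records: columns = [sand %, silt %, clay %, cohesion (kPa)];
# real inputs would be measured PSD fractions and geotechnical values.
psd = rng.dirichlet(np.ones(3), size=150) * 100.0
cohesion = 2.0 + 0.05 * psd[:, 2] + rng.normal(scale=0.3, size=150)
data = np.column_stack([psd, cohesion])

# Train an 8x8 SOM (competitive-cooperative learning on the full vectors).
grid = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2)
weights = rng.normal(size=(64, 4)) * data.std(0) + data.mean(0)
for t in range(2000):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best matching unit
    dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
    sigma = 3.0 * np.exp(-t / 1000.0)                   # shrinking neighbourhood
    lrate = 0.5 * np.exp(-t / 1000.0)
    weights += lrate * np.exp(-dist2 / (2 * sigma**2))[:, None] * (x - weights)

# Estimation: match a new sample on its PSD only, read off the cohesion
# stored in the best matching unit (the leave-one-out bootstrap repeats
# this with the sample under scrutiny excluded from training).
x_new = np.array([40.0, 35.0, 25.0])
bmu = np.argmin(((weights[:, :3] - x_new) ** 2).sum(axis=1))
cohesion_estimate = weights[bmu, 3]
```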

  10. Progress with modeling activity landscapes in drug discovery.

    PubMed

    Vogt, Martin

    2018-04-19

    Activity landscapes (ALs) are representations and models of compound data sets annotated with a target-specific activity. In contrast to quantitative structure-activity relationship (QSAR) models, ALs aim at characterizing structure-activity relationships (SARs) on a large-scale level encompassing all active compounds for specific targets. The popularity of AL modeling has grown substantially with the public availability of large activity-annotated compound data sets. AL modeling crucially depends on molecular representations and similarity metrics used to assess structural similarity. Areas covered: The concepts of AL modeling are introduced and its basis in quantitatively assessing molecular similarity is discussed. The different types of AL modeling approaches are introduced. AL designs can broadly be divided into three categories: compound-pair based, dimensionality reduction, and network approaches. Recent developments for each of these categories are discussed focusing on the application of mathematical, statistical, and machine learning tools for AL modeling. AL modeling using chemical space networks is covered in more detail. Expert opinion: AL modeling has remained a largely descriptive approach for the analysis of SARs. Beyond mere visualization, the application of analytical tools from statistics, machine learning and network theory has aided in the sophistication of AL designs and provides a step forward in transforming ALs from descriptive to predictive tools. To this end, optimizing representations that encode activity relevant features of molecules might prove to be a crucial step.

  11. Reverse engineering physical models employing a sensor integration between 3D stereo detection and contact digitization

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Lin, Grier C. I.

    1997-12-01

    A vision-driven automatic digitization process for free-form surface reconstruction has been developed for reverse engineering physical models, using a coordinate measuring machine (CMM) equipped with a touch-triggered probe and a CCD camera. The process integrates 3D stereo detection, data filtering, Delaunay triangulation, and adaptive surface digitization into a single surface reconstruction process. With this approach, surface reconstruction can be implemented automatically and accurately. Least-squares B-spline surface models with controlled digitization accuracy can be generated for further application in product design and manufacturing processes. One industrial application indicates that this approach is feasible and that the processing time required in the reverse engineering process can be reduced by more than 85%.

  12. Application of model predictive control for optimal operation of wind turbines

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Cao, Pei; Tang, J.

    2017-04-01

    For large-scale wind turbines, reducing maintenance cost is a major challenge. Model predictive control (MPC) is a promising approach for handling multiple conflicting objectives through a weighted-sum formulation. In this research, model predictive control is applied to a wind turbine to find an optimal balance between multiple objectives, such as energy capture, loads on turbine components, and pitch actuator usage. The actuator constraints are integrated into the objective function at the control design stage. The analysis is carried out in both the partial load region and the full load region, and the performance is compared with that of a baseline gain-scheduled PID controller. The application of this strategy achieves an enhanced balance of component loads, average power, and actuator usage in the partial load region.
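    A minimal receding-horizon sketch of the weighted-sum idea, assuming a toy first-order plant rather than the paper's turbine model: the stage cost trades speed-tracking error against actuator usage, and actuator limits enter the optimization as bounds.

```python
import numpy as np
from scipy.optimize import minimize

# Toy first-order model: x = rotor-speed deviation, u = pitch command.
a, b = 0.9, 0.5
N = 10                        # prediction horizon
w_track, w_act = 1.0, 0.1     # weighted-sum trade-off between objectives

def horizon_cost(u, x0):
    x, J = x0, 0.0
    for k in range(N):
        x = a * x + b * u[k]                       # predicted dynamics
        J += w_track * x**2 + w_act * u[k]**2      # tracking vs actuator usage
    return J

bounds = [(-0.3, 0.3)] * N    # actuator constraints at the design stage
x = 1.0
for t in range(20):           # receding-horizon loop
    res = minimize(horizon_cost, np.zeros(N), args=(x,), bounds=bounds)
    u0 = res.x[0]             # apply only the first optimized move
    x = a * x + b * u0
```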

  13. Modeling and design for electromagnetic surface wave devices

    NASA Astrophysics Data System (ADS)

    La Spada, Luigi; Haq, Sajad; Hao, Yang

    2017-09-01

    A great deal of interest has reemerged recently in the study of surface waves. The possibility of controlling and manipulating electromagnetic wave propagation at will opens many new research areas and leads to many novel engineering applications. In this paper, we present a comprehensive modeling and design approach for surface wave cloaks, based on graded-refractive-index materials and the theory of transformation optics. It can also be applied to any other form of surface wave manipulation, in terms of amplitude and phase. We present a general method to illustrate how this can be achieved, from modeling to the final design. The proposed approach is shown to be versatile and allows ease of manufacturing, thereby demonstrating great potential for practical applications.

  14. Computational design and multiscale modeling of a nanoactuator using DNA actuation.

    PubMed

    Hamdi, Mustapha

    2009-12-02

    Developments in the field of nanobiodevices coupling nanostructures and biological components are of great interest in medical nanorobotics. As the fundamentals of bio/non-bio interaction processes are still poorly understood in the design of these devices, design tools and multiscale dynamics modeling approaches are necessary at the fabrication pre-project stage. This paper proposes a new concept of optimized carbon nanotube based servomotor design for drug delivery and biomolecular transport applications. The design of an encapsulated DNA-multi-walled carbon nanotube actuator is prototyped using multiscale modeling. The system is parametrized by using a quantum level approach and characterized by using a molecular dynamics simulation. Based on the analysis of the simulation results, a servo nanoactuator using ionic current feedback is simulated and analyzed for application as a drug delivery carrier.

  15. Soft computing methods for geoidal height transformation

    NASA Astrophysics Data System (ADS)

    Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.

    2009-07-01

    Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.

  16. Prediction of Size Effects in Notched Laminates Using Continuum Damage Mechanics

    NASA Technical Reports Server (NTRS)

    Camanho, D. P.; Maimi, P.; Davila, C. G.

    2007-01-01

    This paper examines the use of a continuum damage model to predict strength and size effects in notched carbon-epoxy laminates. The effects of size and the development of a fracture process zone before final failure are identified in an experimental program. The continuum damage model is described and the resulting predictions of size effects are compared with alternative approaches: the point stress and the inherent flaw models, the Linear-Elastic Fracture Mechanics approach, and the strength of materials approach. The results indicate that the continuum damage model is the most accurate technique to predict size effects in composites. Furthermore, the continuum damage model does not require any calibration and it is applicable to general geometries and boundary conditions.

  17. Using Openstreetmap Data to Generate Building Models with Their Inner Structures for 3d Maps

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Zipf, A.

    2017-09-01

    With the development of Web 2.0, more and more data related to indoor environments has been collected within the volunteered geographic information (VGI) framework, which creates a need for construction of indoor environments from VGI. In this study, we focus on generating 3D building models from OpenStreetMap (OSM) data, and provide an approach to support construction and visualization of indoor environments on 3D maps. In this paper, we present an algorithm which can extract building information from OSM data, and can construct building structures as well as inner building components (e.g., doors, rooms, and windows). A web application is built to support the processing and visualization of the building models on a 3D map. We test our approach with an indoor dataset collected from the field. The results show the feasibility of our approach and its potentials to provide support for a wide range of applications, such as indoor and outdoor navigation, urban planning, and incident management.

  18. Multiphase mean curvature flows with high mobility contrasts: A phase-field approach, with applications to nanowires

    NASA Astrophysics Data System (ADS)

    Bretin, Elie; Danescu, Alexandre; Penuelas, José; Masnou, Simon

    2018-07-01

    The structure of many multiphase systems is governed by an energy that penalizes the area of interfaces between phases, weighted by surface tension coefficients. However, interface evolution laws also depend on interface mobility coefficients. Having in mind applications where highly contrasted or even degenerate mobilities are involved, for which classical phase field models are inapplicable, we propose a new effective phase field approach to approximate multiphase mean curvature flows with mobilities. The key aspect of our model is to incorporate the mobilities not in the phase field energy (as is conventionally done) but in the metric which determines the gradient flow. We show the consistency of such an approach by a formal analysis of the sharp interface limit. We also propose an efficient numerical scheme which allows us to illustrate the advantages of the model on various examples, such as the wetting of droplets on solid surfaces or the simulation of nanowire growth generated by the so-called vapor-liquid-solid method.
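    Schematically, for a two-phase Allen-Cahn/Modica-Mortola setting (the paper treats the general multiphase case), the distinction can be written as follows; this is a reconstruction of the standard formulation, not the paper's exact notation.

```latex
% Phase-field (Modica--Mortola) approximation of the interface energy:
E_\varepsilon(u) = \int_\Omega \frac{\varepsilon}{2}\,|\nabla u|^{2}
                 + \frac{1}{\varepsilon}\,W(u)\,\mathrm{d}x .

% Instead of inserting the mobility \mu into E_\varepsilon, take the gradient
% flow of E_\varepsilon in the mobility-weighted metric
% \langle v, w \rangle_\mu = \int_\Omega \mu^{-1}\, v\, w \,\mathrm{d}x ,
% which yields the mobility-scaled Allen--Cahn equation:
\partial_t u = \mu \left( \varepsilon \Delta u
             - \frac{1}{\varepsilon}\, W'(u) \right).
```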

  19. Wake modeling in complex terrain using a hybrid Eulerian-Lagrangian Split Solver

    NASA Astrophysics Data System (ADS)

    Fuchs, Franz G.; Rasheed, Adil; Tabib, Mandar; Fonn, Eivind

    2016-09-01

    Wake vortices (WVs) generated by aircraft are a source of risk to following aircraft. The probability of WV-related accidents increases in the vicinity of airport runways due to the shorter time of recovery after a WV encounter. Hence, solutions that can reduce the risk of WV encounters are needed to ensure increased flight safety. In this work we propose an approach to model such wake vortices in real time using a hybrid Eulerian-Lagrangian method. We derive an appropriate mathematical model and show a comparison of the different types of solvers. We conclude with a real-life application of the methodology by simulating how wake vortices left behind by an aircraft at Værnes airport in Norway are transported and decay under the influence of a background wind and turbulence field. Although the work demonstrates the application in an aviation context, the same approach can be used in a wind energy context.

  20. Thermal Damage Analysis in Biological Tissues Under Optical Irradiation: Application to the Skin

    NASA Astrophysics Data System (ADS)

    Fanjul-Vélez, Félix; Ortega-Quijano, Noé; Solana-Quirós, José Ramón; Arce-Diego, José Luis

    2009-07-01

    The use of optical sources in medical practice is increasing. In this study, different approaches based on thermo-optical principles that allow us to predict thermal damage in irradiated tissues are analyzed. Optical propagation is studied by means of the radiation transport theory (RTT) equation, solved via a Monte Carlo analysis. The data obtained are included in a bio-heat equation, solved via a numerical finite difference approach. Optothermal properties are taken into account for the model to be accurate and reliable. The thermal distribution is calculated as a function of optical source parameters, mainly optical irradiance, wavelength and exposure time. Two thermal damage models, the cumulative equivalent minutes (CEM) 43 °C approach and the Arrhenius analysis, are used. The former is appropriate when dealing with dosimetry considerations at constant temperature; the latter is adequate for predicting thermal damage with arbitrary temperature time dependence. Both models are applied and compared for the particular application of skin thermotherapy irradiation.
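    Both damage measures have compact closed forms, so a short sketch is possible. The coefficients below (the usual R breakpoint values for CEM43 and Henriques-type Arrhenius constants for skin) are illustrative literature values, not parameters taken from this paper.

```python
import numpy as np

def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 degC for a sampled temperature history."""
    temps_c = np.asarray(temps_c, dtype=float)
    R = np.where(temps_c >= 43.0, 0.5, 0.25)   # commonly used breakpoint values
    return float(np.sum(dt_min * R ** (43.0 - temps_c)))

def arrhenius_damage(temps_c, dt_s, A=3.1e98, Ea=6.28e5):
    """Arrhenius damage integral Omega; A [1/s] and Ea [J/mol] default to
    Henriques-type skin values from the literature (tissue-dependent)."""
    Rgas = 8.314
    T = np.asarray(temps_c, dtype=float) + 273.15
    return float(np.sum(dt_s * A * np.exp(-Ea / (Rgas * T))))

# Hypothetical 60 s exposure sampled every second, ramping 37 -> 50 degC.
history = np.linspace(37.0, 50.0, 60)
dose = cem43(history, dt_min=1.0 / 60.0)       # minutes-equivalent at 43 degC
omega = arrhenius_damage(history, dt_s=1.0)    # Omega = 1 ~ damage threshold
```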

  1. Protein-ligand docking using FFT based sampling: D3R case study.

    PubMed

    Padhorny, Dzmitry; Hall, David R; Mirzaei, Hanieh; Mamonov, Artem B; Moghadasi, Mohammad; Alekseenko, Andrey; Beglov, Dmitri; Kozakov, Dima

    2018-01-01

    Fast Fourier transform (FFT) based approaches have been successful in application to modeling of relatively rigid protein-protein complexes. Recently, we have been able to adapt the FFT methodology to treatment of flexible protein-peptide interactions. Here, we report our latest attempt to expand the capabilities of the FFT approach to treatment of flexible protein-ligand interactions in application to the D3R PL-2016-1 challenge. Based on the D3R assessment, our FFT approach in conjunction with Monte Carlo minimization off-grid refinement was among the top performing methods in the challenge. The potential advantage of our method is its ability to globally sample the protein-ligand interaction landscape, which will be explored in further applications.
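    The core trick of FFT-based sampling is that an exhaustive translational scan reduces to one correlation computable in O(n log n). Below is a toy single-channel version on occupancy grids; real scoring functions use several correlation channels and also sample rotations, and the grids here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy single-channel occupancy grids; real scoring uses several correlation
# channels (shape, electrostatics, ...) and also samples ligand rotations.
receptor = rng.random((32, 32, 32))
ligand = np.zeros((32, 32, 32))
ligand[14:18, 14:18, 14:18] = 1.0

# All translations scored at once via the correlation theorem:
# score(t) = sum_x receptor(x + t) * ligand(x), computed in O(n log n).
scores = np.real(np.fft.ifftn(np.fft.fftn(receptor) *
                              np.conj(np.fft.fftn(ligand))))
best_shift = np.unravel_index(np.argmax(scores), scores.shape)
```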

  2. Polarizable Force Fields and Polarizable Continuum Model: A Fluctuating Charges/PCM Approach. 1. Theory and Implementation.

    PubMed

    Lipparini, Filippo; Barone, Vincenzo

    2011-11-08

    We present a combined fluctuating charges/polarizable continuum model approach to describe molecules in solution. Both static and dynamic approaches are discussed: analytical first and second derivatives are shown, as well as an extended Lagrangian for molecular dynamics simulations. In particular, we use the polarizable continuum model to provide nonperiodic boundary conditions for molecular dynamics simulations of aqueous solutions. The extended Lagrangian method is discussed extensively, with specific reference to the fluctuating charge model, from a numerical point of view by means of several examples, and a rationalization of the behavior found is presented. Several prototypical applications are shown, especially regarding the solvation of ions and polar molecules in water.

  3. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten

    2016-06-08

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.

  4. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    NASA Astrophysics Data System (ADS)

    Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang

    2016-06-01

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
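    Both global sensitivity measures named in the third part are available in open tooling; SALib, for instance, provides Morris elementary effects as well as Sobol variance decomposition. A sketch of the Sobol part follows, assuming SALib is installed; the parameter names, bounds and the stand-in metamodel are hypothetical.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical stand-in for the fitted metamodel of the laser-drilling
# simulations; the parameter names and bounds are illustrative.
problem = {
    "num_vars": 3,
    "names": ["pulse_energy", "focus_position", "feed_rate"],
    "bounds": [[0.1, 1.0], [-0.5, 0.5], [0.2, 2.0]],
}

def metamodel(x):
    e, f, v = x
    return np.sin(np.pi * e) + 0.5 * f**2 + 0.2 * e * v   # toy response

X = saltelli.sample(problem, 1024)        # Saltelli sampling (N a power of 2)
Y = np.apply_along_axis(metamodel, 1, X)
Si = sobol.analyze(problem, Y)            # Si["S1"]: main, Si["ST"]: total effects
```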

  5. Accelerating materials discovery through the development of polymer databases

    NASA Astrophysics Data System (ADS)

    Audus, Debra

    In our line of business we create chemical solutions for a wide range of applications, such as home and personal care, printing and packaging, automotive and structural coatings, and structural plastics and foams. In this environment, stable and highly automated workflows suitable for handling complex systems are a must. By satisfying these prerequisites, efficiency in the development of new materials can be significantly improved by combining modeling and experimental approaches. This is in line with recent Materials Genome Initiative efforts sponsored by the US administration. From our experience, we know that valuable contributions to product development are possible today by combining existing modeling techniques in an intelligent fashion, provided modeling and experiment work closely together. In my presentation I intend to review approaches to build and parameterize soft matter systems. As an example of our standard workflow, I will show a few applications, including the design of a stabilizer molecule for dispersing polymer particles and the simulation of polystyrene dispersions.

  6. ‘Survival’: a simulation toolkit introducing a modular approach for radiobiological evaluations in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Manganaro, L.; Russo, G.; Bourhaleb, F.; Fausti, F.; Giordanengo, S.; Monaco, V.; Sacchi, R.; Vignati, A.; Cirio, R.; Attili, A.

    2018-04-01

    One major rationale for the application of heavy ion beams in tumour therapy is their increased relative biological effectiveness (RBE). The complex dependencies of the RBE on dose, biological endpoint, position in the field, etc. require the use of biophysical models in treatment planning and clinical analysis. This study aims to introduce a new software package, named ‘Survival’, to facilitate the radiobiological computations needed in ion therapy. The simulation toolkit was written in C++ and developed with a modular architecture in order to easily incorporate different radiobiological models. The following models were successfully implemented: the local effect model (LEM, versions I, II and III) and variants of the microdosimetric-kinetic model (MKM). Different numerical evaluation approaches were also implemented: Monte Carlo (MC) numerical methods and a set of faster analytical approximations. Among the possible applications, the toolkit was used to reproduce the RBE versus LET for different ions (protons, He, C, O, Ne) and different cell lines (CHO, HSG). Intercomparisons between different models (LEM and MKM) and computational approaches (MC and fast approximations) were performed. The developed software could represent an important tool for the evaluation of the biological effectiveness of charged particles in ion beam therapy, in particular when coupled with treatment simulations. Its modular architecture facilitates benchmarking and inter-comparison between different models and evaluation approaches. The code is open source (GPL2 license) and available at https://github.com/batuff/Survival.
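    For context, both model families (LEM and MKM) ultimately express their predictions through linear-quadratic survival parameters, from which an iso-effect RBE follows. Schematically, the two standard relations are:

```latex
% Linear--quadratic cell survival after a dose D:
S(D) = \exp\!\left(-\alpha D - \beta D^{2}\right).

% RBE at a given iso-effect (survival level S), as a dose ratio between the
% reference radiation and the ion beam:
\mathrm{RBE}(S) = \frac{D_{\mathrm{ref}}(S)}{D_{\mathrm{ion}}(S)}.
```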

  7. Socrates Meets the 21st Century

    ERIC Educational Resources Information Center

    Lege, Jerry

    2005-01-01

    An inquiry-based approach called the "modelling discussion" is introduced for structuring beginning modelling activity, for teaching new mathematics by examining its applications in contextual situations, and as a general classroom management technique when students are engaged in mathematical modelling. An example which illustrates the style and…

  8. Practical examples of modeling choices and their consequences for risk assessment

    EPA Science Inventory

    Although benchmark dose (BMD) modeling has become the preferred approach to identifying a point of departure (POD) over the No Observed Adverse Effect Level, there remain challenges to its application in human health risk assessment. BMD modeling, as currently implemented by the...

  9. 2D hybrid analysis: Approach for building three-dimensional atomic model by electron microscopy image matching.

    PubMed

    Matsumoto, Atsushi; Miyazaki, Naoyuki; Takagi, Junichi; Iwasaki, Kenji

    2017-03-23

    In this study, we develop an approach termed "2D hybrid analysis" for building atomic models by image matching from electron microscopy (EM) images of biological molecules. The key advantage is that it is applicable to flexible molecules, which are difficult to analyze by 3D EM approaches. In the proposed approach, a large number of atomic models with different conformations are first built by computer simulation. Simulated EM images are then generated from each atomic model and compared with the experimental EM image. Two kinds of models are used as simulated EM images: the negative stain model and the simple projection model. Although the former is more realistic, the latter is adopted to perform faster computations. The use of the negative stain model enables decomposition of the averaged EM images into multiple projection images, each of which originates from a different conformation or orientation. We apply this approach to EM images of integrin to obtain the distribution of conformations, from which the pathway of the conformational change of the protein is deduced.

  10. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating the failure risk of spaceflight systems, in order to assess flight readiness and identify risk control measures, is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of the statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  11. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression for estimating adjusted likelihood ratios that allow for interdependence between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, a statistical extension of the methods, and an application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other two because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependence of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
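    As a rough sketch of the shared idea (not a reimplementation of any of the three published methods), the following fits a logistic regression to two deliberately interdependent binary tests; the exponentiated coefficients act as likelihood ratios adjusted for test dependence, and the fitted model gives the posttest probability directly. Data and prevalences are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated cohort: disease d and two deliberately interdependent binary tests.
n = 2000
d = rng.binomial(1, 0.3, n)
t1 = rng.binomial(1, np.where(d == 1, 0.80, 0.20))
t2 = rng.binomial(1, np.where((d == 1) & (t1 == 1), 0.85, 0.30))

X = sm.add_constant(np.column_stack([t1, t2]).astype(float))
fit = sm.Logit(d, X).fit(disp=0)

# exp(beta_i) acts as a likelihood ratio for test i adjusted for the other
# test; naive (independence) LRs would overstate the combined evidence here
# because t1 and t2 are correlated.
adjusted_lrs = np.exp(fit.params[1:])
posttest_p = fit.predict(np.array([[1.0, 1.0, 1.0]]))[0]  # both tests positive
```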

  12. Linear spline multilevel models for summarising childhood growth trajectories: A guide to their application using examples from five birth cohorts.

    PubMed

    Howe, Laura D; Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S; Barros, Aluísio Jd; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A

    2016-10-01

    Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. © The Author(s) 2013.
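    The paper provides example Stata syntax; an analogous sketch in Python using statsmodels' MixedLM is given below, with simulated measurements, a single hypothetical knot at 12 months, and random child-level intercepts and pre-knot slopes.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Simulated heights for 100 children at irregular ages (months).
rows = []
for child in range(100):
    ages = np.sort(rng.uniform(0, 36, size=rng.integers(4, 9)))
    slope = 2.0 + rng.normal(scale=0.2)            # child-specific early slope
    for a in ages:
        h = 50.0 + slope * min(a, 12.0) + 1.0 * max(a - 12.0, 0.0) \
            + rng.normal(scale=0.8)
        rows.append((child, a, h))
df = pd.DataFrame(rows, columns=["child", "age", "height"])

# Linear spline basis with one illustrative knot at 12 months.
knot = 12.0
df["s1"] = np.minimum(df["age"], knot)             # slope before the knot
df["s2"] = np.maximum(df["age"] - knot, 0.0)       # extra slope after the knot

# Multilevel (mixed) model: fixed spline effects plus a random intercept
# and random pre-knot slope for each child.
exog = sm.add_constant(df[["s1", "s2"]])
exog_re = sm.add_constant(df[["s1"]])
fit = sm.MixedLM(df["height"], exog, groups=df["child"], exog_re=exog_re).fit()
# fit.random_effects[i] holds child i's deviations: individual growth summaries.
```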

  13. Linear spline multilevel models for summarising childhood growth trajectories: A guide to their application using examples from five birth cohorts

    PubMed Central

    Tilling, Kate; Matijasevich, Alicia; Petherick, Emily S; Santos, Ana Cristina; Fairley, Lesley; Wright, John; Santos, Iná S.; Barros, Aluísio JD; Martin, Richard M; Kramer, Michael S; Bogdanovich, Natalia; Matush, Lidia; Barros, Henrique; Lawlor, Debbie A

    2013-01-01

    Childhood growth is of interest in medical research concerned with determinants and consequences of variation from healthy growth and development. Linear spline multilevel modelling is a useful approach for deriving individual summary measures of growth, which overcomes several data issues (co-linearity of repeat measures, the requirement for all individuals to be measured at the same ages and bias due to missing data). Here, we outline the application of this methodology to model individual trajectories of length/height and weight, drawing on examples from five cohorts from different generations and different geographical regions with varying levels of economic development. We describe the unique features of the data within each cohort that have implications for the application of linear spline multilevel models, for example, differences in the density and inter-individual variation in measurement occasions, and multiple sources of measurement with varying measurement error. After providing example Stata syntax and a suggested workflow for the implementation of linear spline multilevel models, we conclude with a discussion of the advantages and disadvantages of the linear spline approach compared with other growth modelling methods such as fractional polynomials, more complex spline functions and other non-linear models. PMID:24108269

  14. An event-version-based spatio-temporal modeling approach and its application in the cadastral management

    NASA Astrophysics Data System (ADS)

    Li, Yangdong; Han, Zhen; Liao, Zhongping

    2009-10-01

    Cadastral information is characterized by spatiality, temporality, legality, accuracy and continuality, and cadastral management demands that cadastral data be accurate, integrated and updated in a timely manner. An effective GIS management system is therefore a natural way to manage cadastral data, which are characterized by spatiality and temporality. Because no sound spatio-temporal data model has been adopted, however, the spatio-temporal characteristics of cadastral data are not well expressed in existing cadastral management systems. An event-version-based spatio-temporal modeling approach is first proposed from the perspective of events and versions. With its help, an event-version-based spatio-temporal cadastral data model is then built to represent spatio-temporal cadastral data. Finally, this model is used in the design and implementation of a spatio-temporal cadastral management system. The application of the system shows that the event-version-based spatio-temporal data model is well suited to the representation and organization of cadastral data.
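    To make the event-version idea concrete, here is a minimal data-structure sketch (illustrative names only, not the paper's schema): events close superseded parcel versions and open their successors, so both the lineage and the cadastre's state at any past time remain queryable.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

# Illustrative names only, not the paper's schema.

@dataclass
class ParcelVersion:
    parcel_id: str
    version: int
    geometry: object                      # a polygon in a real system
    owner: str
    valid_from: datetime
    valid_to: Optional[datetime] = None   # None = currently valid

@dataclass
class Event:
    kind: str                             # "split", "merge", "transfer", ...
    when: datetime
    closes: List[ParcelVersion]
    opens: List[ParcelVersion]

def apply_event(history: List[ParcelVersion], log: List[Event], ev: Event) -> None:
    for v in ev.closes:                   # close the superseded versions
        v.valid_to = ev.when
    history.extend(ev.opens)              # register the new versions
    log.append(ev)                        # keep the causal (event) record

def versions_at(history: List[ParcelVersion], t: datetime) -> List[ParcelVersion]:
    """Reconstruct the cadastre as it stood at time t."""
    return [v for v in history
            if v.valid_from <= t and (v.valid_to is None or t < v.valid_to)]
```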

  15. On the road to metallic nanoparticles by rational design: bridging the gap between atomic-level theoretical modeling and reality by total scattering experiments

    NASA Astrophysics Data System (ADS)

    Prasai, Binay; Wilson, A. R.; Wiley, B. J.; Ren, Y.; Petkov, Valeri

    2015-10-01

    The extent to which current theoretical modeling alone can reveal real-world metallic nanoparticles (NPs) at the atomic level was scrutinized and demonstrated to be insufficient, and how it can be improved by using a pragmatic approach involving straightforward experiments is shown. In particular, 4 to 6 nm in size silica supported Au100-xPdx (x = 30, 46 and 58) explored for catalytic applications is characterized structurally by total scattering experiments including high-energy synchrotron X-ray diffraction (XRD) coupled to atomic pair distribution function (PDF) analysis. Atomic-level models for the NPs are built by molecular dynamics simulations based on the archetypal for current theoretical modeling Sutton-Chen (SC) method. Models are matched against independent experimental data and are demonstrated to be inaccurate unless their theoretical foundation, i.e. the SC method, is supplemented with basic yet crucial information on the length and strength of metal-to-metal bonds and, when necessary, structural disorder in the actual NPs studied. An atomic PDF-based approach for accessing such information and implementing it in theoretical modeling is put forward. For completeness, the approach is concisely demonstrated on 15 nm in size water-dispersed Au particles explored for bio-medical applications and 16 nm in size hexane-dispersed Fe48Pd52 particles explored for magnetic applications as well. It is argued that when "tuned up" against experiments relevant to metals and alloys confined to nanoscale dimensions, such as total scattering coupled to atomic PDF analysis, rather than by mere intuition and/or against data for the respective solids, atomic-level theoretical modeling can provide a sound understanding of the synthesis-structure-property relationships in real-world metallic NPs. Ultimately this can help advance nanoscience and technology a step closer to producing metallic NPs by rational design.

  16. Differential renormalization-group generators for static and dynamic critical phenomena

    NASA Astrophysics Data System (ADS)

    Chang, T. S.; Vvedensky, D. D.; Nicoll, J. F.

    1992-09-01

    The derivation of differential renormalization-group (DRG) equations for applications to static and dynamic critical phenomena is reviewed. The DRG approach provides a self-contained closed-form representation of the Wilson renormalization group (RG) and should be viewed as complementary to the Callan-Symanzik equations used in field-theoretic approaches to the RG. The various forms of DRG equations are derived to illustrate the general mathematical structure of each approach and to point out the advantages and disadvantages for performing practical calculations. Otherwise, the review focuses upon the one-particle-irreducible DRG equations derived by Nicoll and Chang and by Chang, Nicoll, and Young; no attempt is made to provide a general treatise of critical phenomena. A few specific examples are included to illustrate the utility of the DRG approach: the large-n limit of the classical n-vector model (the spherical model), multi- or higher-order critical phenomena, and critical dynamics far from equilibrium. The large-n limit of the n-vector model is used to introduce the application of DRG equations to a well-known example, with exact solution obtained for the nonlinear trajectories, generating functions for nonlinear scaling fields, and the equation of state. Trajectory integrals and nonlinear scaling fields within the framework of ɛ-expansions are then discussed for tricritical crossover, and briefly for certain aspects of multi- or higher-order critical points, including the derivation of the Helmholtz free energy and the equation of state. The discussion then turns to critical dynamics with a development of the path integral formulation for general dynamic processes. This is followed by an application to a model far-from-equilibrium system that undergoes a phase transformation analogous to a second-order critical point, the Schlögl model for a chemical instability.

  17. Region of validity of the finite-temperature Thomas-Fermi model with respect to quantum and exchange corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyachkov, Sergey, E-mail: serj.dyachkov@gmail.com; Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region 141700; Levashov, Pavel, E-mail: pasha@ihed.ras.ru

    We determine the region of applicability of the finite-temperature Thomas-Fermi model and its thermal part with respect to quantum and exchange corrections. Very high computational accuracy has been achieved by using a special approach to the solution of the boundary problem and to the numerical integration. We show that the thermal part of the model can be applied at lower temperatures than the full model. We also offer simple approximations of the boundaries of validity for practical applications.

  18. Application of linear regression analysis in accuracy assessment of rolling force calculations

    NASA Astrophysics Data System (ADS)

    Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.

    1998-10-01

    Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool for separating systematic and random prediction errors from those related to measurement. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application; however, the outlined approach can be used to assess the performance of any computational model.
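    A minimal sketch of the assessment itself, with simulated predicted and measured rolling forces: regressing measurements on predictions separates systematic bias (slope and intercept away from 1 and 0) from random scatter. The data and the specific statistics chosen here are illustrative, not the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Illustrative paired data: rolling forces predicted by the model vs measured.
predicted = rng.uniform(8.0, 20.0, 60)                             # MN
measured = 0.97 * predicted + 0.4 + rng.normal(scale=0.3, size=60)

# Regress measured on predicted: an adequate model gives slope ~ 1 and
# intercept ~ 0; deviations expose systematic error, while the residual
# scatter reflects random error plus measurement noise.
res = stats.linregress(predicted, measured)
residual_sd = np.std(measured - (res.slope * predicted + res.intercept), ddof=2)
r_squared = res.rvalue ** 2       # one simple index of predictive ability
```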

  19. A human operator simulator model of the NASA Terminal Configured Vehicle (TCV)

    NASA Technical Reports Server (NTRS)

    Glenn, F. A., III; Doane, S. M.

    1981-01-01

    A generic operator model called HOS was used to simulate the behavior and performance of a pilot flying a transport airplane during instrument approach and landing operations in order to demonstrate the applicability of the model to problems associated with interfacing a crew with a flight system. The model which was installed and operated on NASA Langley's central computing system is described. Preliminary results of its application to an investigation of an innovative display system under development in Langley's terminal configured vehicle program are considered.

  20. Improving stability of prediction models based on correlated omics data by using network approaches.

    PubMed

    Tissier, Renaud; Houwing-Duistermaat, Jeanine; Rodríguez-Girondo, Mar

    2018-01-01

    Building prediction models based on complex omics datasets such as transcriptomics, proteomics and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are 1) network construction, 2) clustering to empirically derive modules or pathways, and 3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by applying the methodology to two problems: prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell line pharmacogenomics dataset.
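    A compressed sketch of the three-step pipeline on simulated data, substituting a simple module-mean summary and a ridge penalty for the paper's group-based selection and group-specific penalization; all names and sizes are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(6)

# Simulated omics block: 120 samples, 60 correlated features, one outcome.
n, p = 120, 60
latent = rng.normal(size=(n, 6))
X = latent[:, rng.integers(0, 6, p)] + 0.5 * rng.normal(size=(n, p))
y = latent[:, 0] - 0.5 * latent[:, 1] + rng.normal(scale=0.5, size=n)

# Step 1: correlation network -> distance between features.
dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))

# Step 2: hierarchical clustering to derive empirical modules.
Z = linkage(dist[np.triu_indices(p, 1)], method="average")
modules = fcluster(Z, t=6, criterion="maxclust")

# Step 3: summarise each module (here by its mean profile) and fit a
# penalized regression on the summaries instead of the raw features.
M = np.column_stack([X[:, modules == k].mean(axis=1)
                     for k in np.unique(modules)])
fit = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(M, y)
```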

  1. A Poisson approach to the validation of failure time surrogate endpoints in individual patient data meta-analyses.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan

    2017-01-01

    Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).

  2. Collective learning modeling based on the kinetic theory of active particles

    NASA Astrophysics Data System (ADS)

    Burini, D.; De Lillo, S.; Gibelli, L.

    2016-03-01

    This paper proposes a systems approach to the theory of perception and learning in populations composed of many living entities. Starting from a phenomenological description of these processes, a mathematical structure is derived which is deemed to incorporate their complexity features. The modeling is based on a generalization of kinetic theory methods where interactions are described by theoretical tools of game theory. As an application, the proposed approach is used to model the learning processes that take place in a classroom.

  3. Macroscopic and mesoscopic approach to the alkali-silica reaction in concrete

    NASA Astrophysics Data System (ADS)

    Grymin, Witold; Koniorczyk, Marcin; Pesavento, Francesco; Gawin, Dariusz

    2018-01-01

    A model of the alkali-silica reaction (ASR), which takes into account couplings between thermal, hygral, mechanical and chemical phenomena in concrete, is discussed. The ASR may be considered at the macroscopic or mesoscopic scale. The main features of each approach are summarized and the development of the model for both scales is briefly described. The application of the model to experimental results at both scales is presented. Although good agreement with experiment is obtained for both approaches, considering the model at the mesoscopic scale makes it possible to model different mortar mixes, prepared with the same aggregate but with different grain sizes, using the same set of parameters. It also enables prediction of reaction development for different alkali sources, such as de-icing salts or alkali leaching.

  4. Aerostructural interaction in a collaborative MDO environment

    NASA Astrophysics Data System (ADS)

    Ciampa, Pier Davide; Nagel, Björn

    2014-10-01

    The work presents an approach for aircraft design and optimization, developed to account for fluid-structure interactions in MDO applications. The approach makes use of a collaborative distributed design environment, and focuses on the influence of multiple physics based aerostructural models, on the overall aircraft synthesis and optimization. The approach is tested for the design of large transportation aircraft.

  5. Adaptive Role Playing Games: An Immersive Approach for Problem Based Learning

    ERIC Educational Resources Information Center

    Sancho, Pilar; Moreno-Ger, Pablo; Fuentes-Fernandez, Ruben; Fernandez-Manjon, Baltasar

    2009-01-01

    In this paper we present a general framework, called NUCLEO, for the application of socio-constructive educational approaches in higher education. The underlying pedagogical approach relies on an adaptation model in order to improve group dynamics, as this has been identified as one of the key features in the success of collaborative learning…

  6. Continuity-based model interfacing for plant-wide simulation: a general approach.

    PubMed

    Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A

    2006-08-01

    In plant-wide simulation studies of wastewater treatment facilities, existing models of different origins often need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework for constructing model interfaces for models of wastewater systems, taking into account conservation principles. In this contribution, the CBIM approach is applied to study the effect of sludge digestion reject water treatment with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark simulation model no. 2 (BSM2) is used to simulate the behaviour of the complete WWTP including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach was further refined, and particular issues that arise when coupling models in which pH is considered a state variable are pointed out.

  7. Application of a New Hybrid RANS/LES Modeling Paradigm to Compressible Flow

    NASA Astrophysics Data System (ADS)

    Oliver, Todd; Pederson, Clark; Haering, Sigfried; Moser, Robert

    2017-11-01

    It is well-known that traditional hybrid RANS/LES modeling approaches suffer from a number of deficiencies. These deficiencies often stem from overly simplistic blending strategies based on scalar measures of turbulence length scale and grid resolution and from the use of isotropic subgrid models in LES regions. A recently developed hybrid modeling approach has shown promise in overcoming these deficiencies in incompressible flows [Haering, 2015]. In the approach, RANS/LES blending is accomplished using a hybridization parameter that is governed by an additional model transport equation and is driven to achieve equilibrium between the resolved and unresolved turbulence for the given grid. Further, the model uses a tensor eddy viscosity that is formulated to represent the effects of anisotropic grid resolution on subgrid quantities. In this work, this modeling approach is extended to compressible flows and implemented in the compressible flow solver SU2 (http://su2.stanford.edu/). We discuss both modeling and implementation challenges and show preliminary results for compressible flow test cases with smooth wall separation.

  8. New Mechanistic Models of Long Term Evolution of Microstructure and Mechanical Properties of Nickel Based Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kruzic, Jamie J.; Evans, T. Matthew; Greaney, P. Alex

    The report describes the development of a discrete element method (DEM) based modeling approach to quantitatively predict deformation and failure of typical nickel based superalloys. A series of experimental data, including microstructure and mechanical property characterization at 600°C, was collected for a relatively simple, model solid solution Ni-20Cr alloy (Nimonic 75) to determine inputs for the model and provide data for model validation. Nimonic 75 was considered ideal for this study because it is a certified tensile and creep reference material. A series of new DEM modeling approaches were developed to capture the complexity of metal deformation, including cubic elastic anisotropy and plastic deformation both with and without strain hardening. Our model approaches were implemented into a commercially available DEM code, PFC3D, that is commonly used by engineers. It is envisioned that once further developed, this new DEM modeling approach can be adapted to a wide range of engineering applications.

  9. A hybrid model for computing nonthermal ion distributions in a long mean-free-path plasma

    NASA Astrophysics Data System (ADS)

    Tang, Xianzhu; McDevitt, Chris; Guo, Zehua; Berk, Herb

    2014-10-01

    Non-thermal ions, especially suprathermal ones, are known to make a dominant contribution to a number of important physical quantities, such as the fusion reactivity in controlled fusion, the ion heat flux, and, in the case of a tokamak, the ion bootstrap current. Evaluating the deviation from a local Maxwellian distribution of these non-thermal ions can be a challenging task in the context of a global plasma fluid model that evolves the plasma density, flow, and temperature. Here we describe a hybrid model for coupling such a constrained kinetic calculation to global plasma fluid models. The key ingredient is a non-perturbative treatment of the tail ions where the ion Knudsen number approaches or surpasses order unity. This can be sharply contrasted with the standard Chapman-Enskog approach, which relies on a perturbative treatment that is frequently invalidated. The accuracy of our coupling scheme is controlled by the precise criteria for matching the non-perturbative kinetic model to perturbative solutions in both configuration space and velocity space. Although our specific application examples will be drawn from laboratory controlled fusion experiments, the general approach is applicable to space and astrophysical plasmas as well. Work supported by DOE.

  10. Learning and Information Approaches for Inference in Dynamic Data-Driven Geophysical Applications

    NASA Astrophysics Data System (ADS)

    Ravela, S.

    2015-12-01

    Many geophysical inference problems are characterized by non-linear processes, high-dimensional models and complex uncertainties. A dynamic coupling between models, estimation, and sampling is typically sought to efficiently characterize and reduce uncertainty. This process is, however, fraught with several difficulties; key among them are the ability to deal with model errors and the efficacy of uncertainty quantification and data assimilation. In this presentation, we present three key ideas from learning and intelligent systems theory and apply them to two geophysical applications. The first idea is the use of Ensemble Learning to compensate for model error, the second is to develop tractable Information Theoretic Learning to deal with non-Gaussianity in inference, and the third is a Manifold Resampling technique for effective uncertainty quantification. First, we apply these methods to the development of a cooperative autonomous observing system using sUAS for studying coherent structures. Second, we apply them to the problem of quantifying risk from hurricanes and storm surges in a changing climate. Results indicate that learning approaches can enable new effectiveness in cases where standard approaches to model reduction, uncertainty quantification and data assimilation fail.

  11. NASA's Earth Observations Commercialization Applications Program: A model for government promotion of commercial space opportunities

    NASA Technical Reports Server (NTRS)

    Macauley, Molly K.

    1995-01-01

    The role of government in promoting space commerce is a topic of discussion in every spacefaring nation. This article describes a new approach to government intervention which, based on its five-year track record, appears to have met with success. The approach, developed in NASA's Earth Observations Commercialization Application Program (EOCAP), offers several lessons for effective government sponsorship of commercial space development in general and of commercial remote sensing in particular.

  12. Proposed Reliability/Cost Model

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  13. Integrating multi-scale data to create a virtual physiological mouse heart.

    PubMed

    Land, Sander; Niederer, Steven A; Louch, William E; Sejersted, Ole M; Smith, Nicolas P

    2013-04-06

    While the virtual physiological human (VPH) project has made great advances in human modelling, many of the tools and insights developed as part of this initiative are also applicable for facilitating mechanistic understanding of the physiology of a range of other species. This process, in turn, has the potential to provide human relevant insights via a different scientific path. Specifically, the increasing use of mice in experimental research, not yet fully complemented by a similar increase in computational modelling, is currently missing an important opportunity for using and interpreting this growing body of experimental data to improve our understanding of cardiac function. This overview describes our work to address this issue by creating a virtual physiological mouse model of the heart. We describe the similarities between human- and mouse-focused modelling, including the reuse of VPH tools, and the development of methods for investigating parameter sensitivity that are applicable across species. We show how previous results using this approach have already provided important biological insights, and how these can also be used to advance VPH heart models. Finally, we show an example application of this approach to test competing multi-scale hypotheses by investigating variations in length-dependent properties of cardiac muscle.

  14. Integrating multi-scale data to create a virtual physiological mouse heart

    PubMed Central

    Land, Sander; Niederer, Steven A.; Louch, William E.; Sejersted, Ole M.; Smith, Nicolas P.

    2013-01-01

    While the virtual physiological human (VPH) project has made great advances in human modelling, many of the tools and insights developed as part of this initiative are also applicable for facilitating mechanistic understanding of the physiology of a range of other species. This process, in turn, has the potential to provide human relevant insights via a different scientific path. Specifically, the increasing use of mice in experimental research, not yet fully complemented by a similar increase in computational modelling, is currently missing an important opportunity for using and interpreting this growing body of experimental data to improve our understanding of cardiac function. This overview describes our work to address this issue by creating a virtual physiological mouse model of the heart. We describe the similarities between human- and mouse-focused modelling, including the reuse of VPH tools, and the development of methods for investigating parameter sensitivity that are applicable across species. We show how previous results using this approach have already provided important biological insights, and how these can also be used to advance VPH heart models. Finally, we show an example application of this approach to test competing multi-scale hypotheses by investigating variations in length-dependent properties of cardiac muscle. PMID:24427525

  15. A practical technique for quantifying the performance of acoustic emission systems on plate-like structures.

    PubMed

    Scholey, J J; Wilcox, P D; Wisnom, M R; Friswell, M I

    2009-06-01

    A model for quantifying the performance of acoustic emission (AE) systems on plate-like structures is presented. Employing a linear transfer function approach, the model is applicable to both isotropic and anisotropic materials. The model requires several inputs including source waveforms, phase velocity and attenuation. It is recognised that these variables may not be readily available; thus, efficient measurement techniques are presented for obtaining phase velocity and attenuation in a form that can be exploited directly in the model. Inspired by previously documented methods, the application of these techniques is examined and some important implications for propagation characterisation in plates are discussed. Example measurements are made on isotropic and anisotropic plates and, where possible, comparisons with numerical solutions are made. By inputting experimentally obtained data into the model, quantitative system metrics are examined for different threshold values and sensor locations. By producing plots describing areas of hit success and source location error, the ability to measure the performance of different AE system configurations is demonstrated. This quantitative approach will help to place AE testing on a more solid foundation, underpinning its use in industrial AE applications.

  16. Improved methods in neural network-based adaptive output feedback control, with applications to flight control

    NASA Astrophysics Data System (ADS)

    Kim, Nakwan

    Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.

  17. Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment

    NASA Astrophysics Data System (ADS)

    Zeigler, Bernard P.; Lee, J. S.

    1998-08-01

    In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user- friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
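
    A minimal sketch of the quantization idea described above, assuming a simple nearest-level quantizer without the hysteresis used in full quantized-DEVS integrators; the function name and quantum size are illustrative.

        # A sender transmits a state update only when the signal crosses a
        # quantum level, not at every time step.
        import math

        def quantized_updates(samples, quantum=0.5):
            """Yield (index, quantized_value) at quantum level crossings only."""
            last_level = None
            for i, x in enumerate(samples):
                level = round(x / quantum)        # nearest quantum level
                if level != last_level:           # level crossing -> update
                    last_level = level
                    yield i, level * quantum

        signal = [math.sin(0.05 * t) for t in range(200)]
        updates = list(quantized_updates(signal))
        print(f"{len(updates)} updates instead of {len(signal)} messages")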

  18. Reclaiming the best of the biopsychosocial model of mental health care and 'recovery' for older people through a 'person-centred' approach.

    PubMed

    McKay, Roderick; McDonald, Regina; Lie, David; McGowan, Helen

    2012-12-01

    The 'biopsychosocial', 'person-centred care' (PCC) and 'recovery' models of care can be seen as distinct and competing paradigms. This paper proposes an integration of these valuable perspectives and offers suggestions for effective implementation in health services for the elderly. An overview of PCC and recovery models, and their application for older people with mental health problems, is provided. Their overlap and contrast with the familiar 'biopsychosocial' model of mental health care is considered, together with obstacles to implementation. Utilisation of PCC and recovery concepts allows clinicians to avoid narrow application of the biopsychosocial approach and encourages them to focus on the person's right to autonomy, their values and life goals. Service reform and development are required to embed these concepts into core clinical processes so as to improve outcomes and the quality of life for older people with mental health problems.

  19. A Software Tool for Integrated Optical Design Analysis

    NASA Technical Reports Server (NTRS)

    Moore, Jim; Troy, Ed; DePlachett, Charles; Montgomery, Edward (Technical Monitor)

    2001-01-01

    Design of large precision optical systems requires multi-disciplinary analysis, modeling, and design. Thermal, structural and optical characteristics of the hardware must be accurately understood in order to design a system capable of accomplishing the performance requirements. The interactions between each of the disciplines become stronger as systems are designed lighter in weight for space applications. This coupling dictates a concurrent engineering design approach. In the past, integrated modeling tools have been developed that attempt to integrate all of the complex analysis within the framework of a single model. This often results in modeling simplifications and requires engineering specialists to learn new applications. The software described in this presentation addresses the concurrent engineering task using a different approach. The software tool, Integrated Optical Design Analysis (IODA), uses data fusion technology to enable a cross-discipline team of engineering experts to concurrently design an optical system using their standard validated engineering design tools.

  20. Accuracy assessment for a multi-parameter optical calliper in on line automotive applications

    NASA Astrophysics Data System (ADS)

    D'Emilia, G.; Di Gasbarro, D.; Gaspari, A.; Natale, E.

    2017-08-01

    In this work, a methodological approach based on the evaluation of measurement uncertainty is applied to an experimental test case from the automotive sector. The uncertainty model for different measurement procedures of a high-accuracy optical gauge is discussed in order to identify the best measuring performance of the system for on-line applications and as measurement requirements become more stringent. In particular, with reference to the industrial production and control strategies of high-performing turbochargers, two uncertainty models to be used with the optical calliper are proposed, discussed and compared. The models are based on an integrated approach between measurement methods and production best practices to emphasize their mutual coherence. The paper shows the possible advantages deriving from measurement uncertainty modelling in keeping the uncertainty propagation under control for all the indirect measurements used in statistical production control, on which further improvements can be based.

  1. Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sawyer, Darren Charles

    1994-01-01

    The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To effectively demonstrate the wide range of dependability problems that can be analyzed through simulation, the analysis of three case studies is presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault injection experiments are explained, showing the usefulness of this method in dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload or application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, one not possible to model accurately with analytical approaches.

  2. Modeling structural change in spatial system dynamics: A Daisyworld example.

    PubMed

    Neuwirth, C; Peck, A; Simonović, S P

    2015-03-01

    System dynamics (SD) is an effective approach for helping reveal the temporal behavior of complex systems. Although there have been recent developments in expanding SD to include systems' spatial dependencies, most applications have been restricted to the simulation of diffusion processes; this is especially true for models on structural change (e.g. LULC modeling). To address this shortcoming, a Python program is proposed to tightly couple SD software to a Geographic Information System (GIS). The approach provides the required capacities for handling bidirectional and synchronized interactions of operations between SD and GIS. In order to illustrate the concept and the techniques proposed for simulating structural changes, a fictitious environment called Daisyworld has been recreated in a spatial system dynamics (SSD) environment. The comparison of spatial and non-spatial simulations emphasizes the importance of considering spatio-temporal feedbacks. Finally, practical applications of structural change models in agriculture and disaster management are proposed.
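
    A toy sketch of the spatial system dynamics idea, assuming a numpy grid in place of a GIS raster: an SD-style stock-and-flow rule updates daisy cover in each cell while a diffusion step couples neighbouring cells. All equations and parameters are invented for illustration; this is not the authors' Daisyworld model.

        # Spatial SD sketch: local SD rule (temperature-driven daisy growth)
        # coupled with a spatial process (temperature diffusion).
        import numpy as np

        n, steps = 50, 200
        daisies = np.full((n, n), 0.1)        # daisy cover fraction per cell
        temp = np.full((n, n), 30.0)          # local temperature

        for _ in range(steps):
            # SD component: stock-and-flow update of daisy cover.
            growth = np.clip(1.0 - 0.005 * (temp - 22.5) ** 2, 0.0, None)
            daisies = np.clip(daisies + daisies * (1 - daisies) * growth
                              - 0.05 * daisies, 0.0, 1.0)
            # Spatial component: daisies cool their cell via higher albedo;
            # diffusion couples neighbours (the GIS-side operation).
            temp += 0.5 * (28.0 - 10.0 * daisies - temp)
            temp += 0.2 * (np.roll(temp, 1, 0) + np.roll(temp, -1, 0)
                           + np.roll(temp, 1, 1) + np.roll(temp, -1, 1)
                           - 4 * temp)

        print("mean cover:", daisies.mean().round(3),
              "mean temp:", temp.mean().round(2))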

  3. A Formal Approach to Requirements-Based Programming

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.

  4. A Numerical-Analytical Approach to Modeling the Axial Rotation of the Earth

    NASA Astrophysics Data System (ADS)

    Markov, Yu. G.; Perepelkin, V. V.; Rykhlova, L. V.; Filippova, A. S.

    2018-04-01

    A model for the non-uniform axial rotation of the Earth is studied using a celestial-mechanical approach and numerical simulations. The application of an approximate model containing a small number of parameters to predict variations of the axial rotation velocity of the Earth over short time intervals is justified. This approximate model is obtained by averaging variable parameters that are subject to small variations due to non-stationarity of the perturbing factors. The model is verified and compared with predictions over a long time interval published by the International Earth Rotation and Reference Systems Service (IERS).

  5. BPMN as a Communication Language for the Process- and Event-Oriented Perspectives in Fact-Oriented Conceptual Models

    NASA Astrophysics Data System (ADS)

    Bollen, Peter

    In this paper we will show how the OMG specification of BPMN (Business Process Modeling Notation) can be used to model the process- and event-oriented perspectives of an application subject area. We will illustrate how the fact-oriented conceptual models for the information-, process- and event perspectives can be used in a 'bottom-up' approach for creating a BPMN model in combination with other approaches, e.g. the use of a textual description. We will use the common doctor's office example as a running example in this article.

  6. Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan

    2012-05-15

    Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2× over the version that used default optimizations but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.

  7. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    PubMed

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
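
    The following sketch conveys the local-modelling idea on a toy series: the input space is partitioned by axis-orthogonal splits and an independent kernel model is fitted in each region. A decision tree stands in for the paper's HBT partitioning and KernelRidge for the LSSVM local models; all settings are illustrative.

        # Local modelling sketch: axis-orthogonal partition + one kernel
        # model per region (stand-ins for HBT and LSSVM respectively).
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(1)
        X = rng.uniform(-3, 3, size=(500, 1))
        y = np.sin(X[:, 0] ** 2) + 0.1 * rng.normal(size=500)

        tree = DecisionTreeRegressor(max_leaf_nodes=8).fit(X, y)
        leaf = tree.apply(X)                  # region index for each sample

        local = {l: KernelRidge(kernel="rbf", gamma=2.0)
                    .fit(X[leaf == l], y[leaf == l])
                 for l in np.unique(leaf)}

        def predict(Xq):
            return np.array([local[l].predict(x.reshape(1, -1))[0]
                             for l, x in zip(tree.apply(Xq), Xq)])

        print("train MSE:", np.mean((predict(X) - y) ** 2).round(4))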

  8. A spatial haplotype copying model with applications to genotype imputation.

    PubMed

    Yang, Wen-Yun; Hormozdiari, Farhad; Eskin, Eleazar; Pasaniuc, Bogdan

    2015-05-01

    Ever since its introduction, the haplotype copy model has proven to be one of the most successful approaches for modeling genetic variation in human populations, with applications ranging from ancestry inference to genotype phasing and imputation. Motivated by coalescent theory, this approach assumes that any chromosome (haplotype) can be modeled as a mosaic of segments copied from a set of chromosomes sampled from the same population. At the core of the model is the assumption that any chromosome from the sample is equally likely to contribute a priori to the copying process. Motivated by recent works that model genetic variation in a geographic continuum, we propose a new spatial-aware haplotype copy model that jointly models geography and the haplotype copying process. We extend hidden Markov models of haplotype diversity such that at any given location, haplotypes that are closest in the genetic-geographic continuum map are a priori more likely to contribute to the copying process than distant ones. Through simulations starting from the 1000 Genomes data, we show that our model achieves superior accuracy in genotype imputation over the standard spatial-unaware haplotype copy model. In addition, we show the utility of our model in selecting a small personalized reference panel for imputation that leads to both improved accuracy as well as to a lower computational runtime than the standard approach. Finally, we show our proposed model can be used to localize individuals on the genetic-geographical map on the basis of their genotype data.
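
    A toy sketch of the spatial prior at the core of the model: a Li-Stephens-style forward pass in which, after a switch, reference haplotype j is chosen with probability decaying in its geographic distance rather than uniformly. The panel, distances and rates are simulated and illustrative.

        # Haplotype-copying HMM with a spatially aware prior: nearby
        # reference haplotypes are a priori more likely copy sources.
        import numpy as np

        rng = np.random.default_rng(2)
        N, L = 20, 100                         # reference haplotypes, sites
        H = rng.integers(0, 2, size=(N, L))    # reference panel
        target = H[3].copy()                   # target copies haplotype 3
        d = rng.uniform(0, 5, size=N)          # geographic distances
        d[3] = 0.1

        prior = np.exp(-d)                     # spatial copying prior
        prior /= prior.sum()
        r, e = 0.01, 0.01                      # switch and mismatch rates

        f = prior * np.where(H[:, 0] == target[0], 1 - e, e)
        for s in range(1, L):
            f = (1 - r) * f + r * f.sum() * prior     # spatial transition
            f *= np.where(H[:, s] == target[s], 1 - e, e)
            f /= f.sum()                               # rescale for stability

        print("posterior mass on true source:", round(f[3], 3))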

  9. Multiagent intelligent systems

    NASA Astrophysics Data System (ADS)

    Krause, Lee S.; Dean, Christopher; Lehman, Lynn A.

    2003-09-01

    This paper will discuss a simulation approach based upon a family of agent-based models. As the demands placed upon simulation technology by such applications as Effects Based Operations (EBO), evaluations of indicators and warnings surrounding homeland defense and commercial demands such financial risk management current single thread based simulations will continue to show serious deficiencies. The types of "what if" analysis required to support these types of applications, demand rapidly re-configurable approaches capable of aggregating large models incorporating multiple viewpoints. The use of agent technology promises to provide a broad spectrum of models incorporating differing viewpoints through a synthesis of a collection of models. Each model would provide estimates to the overall scenario based upon their particular measure or aspect. An agent framework, denoted as the "family" would provide a common ontology in support of differing aspects of the scenario. This approach permits the future of modeling to change from viewing the problem as a single thread simulation, to take into account multiple viewpoints from different models. Even as models are updated or replaced the agent approach permits rapid inclusion in new or modified simulations. In this approach a variety of low and high-resolution information and its synthesis requires a family of models. Each agent "publishes" its support for a given measure and each model provides their own estimates on the scenario based upon their particular measure or aspect. If more than one agent provides the same measure (e.g. cognitive) then the results from these agents are combined to form an aggregate measure response. The objective would be to inform and help calibrate a qualitative model, rather than merely to present highly aggregated statistical information. As each result is processed, the next action can then be determined. This is done by a top-level decision system that communicates to the family at the ontology level without any specific understanding of the processes (or model) behind each agent. The increasingly complex demands upon simulation for the necessity to incorporate the breadth and depth of influencing factors makes a family of agent based models a promising solution. This paper will discuss that solution with syntax and semantics necessary to support the approach.

  10. A mathematical model for expected time to extinction of pathogenic bacteria through antibiotic

    NASA Astrophysics Data System (ADS)

    Ghosh, M. K.; Nandi, S.; Roy, P. K.

    2016-04-01

    Application of antibiotics in the human system to prevent bacterial diseases like gastritis, ulcers, meningitis, pneumonia and gonorrhea is indispensable. Antibiotics have saved innumerable lives and continue to be a strong support for therapeutic action against pathogenic bacteria. In the human system, bacterial diseases occur when pathogenic bacteria get into the body and begin to reproduce and crowd out healthy bacteria. In this process, immature bacteria release an enzyme which is essential for bacterial cell-wall biosynthesis. After complete formation of the cell wall, immature bacteria are converted to mature, virulent bacteria, which are harmful during bacterial infections. The use of antibiotics as drugs inhibits bacterial cell-wall formation: after application of antibiotics within the body, the released bacterial enzyme binds with the antibiotic molecule instead of its functional site during cell-wall synthesis, in a competitive-inhibition manner. As a consequence, bacterial cell-wall formation, and thus the maturation of pathogenic bacteria, is halted, and the disease is cured with lysis of the bacterial cells. With this idea, a mathematical model has been developed in the present research investigation to describe the inhibition of bacterial cell-wall biosynthesis by antibiotics in the light of enzyme kinetics. This approach helps to estimate the expected time to extinction of the pathogenic bacteria. Our mathematical approach based on the enzyme kinetic model for finding the expected time to extinction contributes favorable results for understanding the disease dynamics. Analytical and numerical results based on simulated findings validate our mathematical model.
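
    A small sketch of the competitive-inhibition idea behind the model: the antibiotic (inhibitor I) competes with substrate for the cell-wall enzyme, slowing the maturation of immature bacteria B into virulent bacteria V via the standard rate law v = Vmax*S/(Km*(1 + I/Ki) + S). The ODE system and all parameter values are invented for illustration, not taken from the paper.

        # Competitive inhibition of maturation: higher inhibitor level I
        # lowers the enzymatic rate and hence the virulent population.
        from scipy.integrate import solve_ivp

        Vmax, Km, Ki = 1.0, 0.5, 0.05

        def rhs(t, y, inhibitor):
            B, V, S = y                        # immature, virulent, substrate
            v = Vmax * S / (Km * (1 + inhibitor / Ki) + S)
            return [0.3 * B - v * B,           # immature: growth - maturation
                    v * B - 0.1 * V,           # virulent: maturation - death
                    -0.01 * v * B]             # substrate consumption

        for I in (0.0, 1.0):
            sol = solve_ivp(rhs, (0, 50), [1.0, 0.0, 10.0], args=(I,))
            print(f"I={I}: virulent population at t=50 -> {sol.y[1, -1]:.2f}")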

  11. The Application of an Army Prospective Payment Model Structured on the Standards Set Forth by the CHAMPUS Maximum Allowable Charges and the Center for Medicare and Medicaid Services: An Academic Approach

    DTIC Science & Technology

    2005-04-29

    Final report, July 2004 to July 2005, on the application of an Army prospective payment model structured on the standards set forth by the CHAMPUS Maximum Allowable Charges and the Center for Medicare and Medicaid Services. (Only report-form front matter and acknowledgments were recovered for this record; no abstract is available.)

  12. Applications of the BIOPHYS Algorithm for Physically-Based Retrieval of Biophysical, Structural and Forest Disturbance Information

    NASA Technical Reports Server (NTRS)

    Peddle, Derek R.; Huemmrich, K. Fred; Hall, Forrest G.; Masek, Jeffrey G.; Soenen, Scott A.; Jackson, Chris D.

    2011-01-01

    Canopy reflectance model inversion using look-up table approaches provides powerful and flexible options for deriving improved forest biophysical structural information (BSI) compared with traditional statistical empirical methods. The BIOPHYS algorithm is an improved, physically-based inversion approach for deriving BSI for independent use and validation and for monitoring, inventory and quantifying forest disturbance as well as input to ecosystem, climate and carbon models. Based on the multiple-forward mode (MFM) inversion approach, BIOPHYS results were summarized from different studies (Minnesota/NASA COVER; Virginia/LEDAPS; Saskatchewan/BOREAS), sensors (airborne MMR; Landsat; MODIS) and models (GeoSail; GOMS). Application outputs included forest density, height, crown dimension, branch and green leaf area, canopy cover, disturbance estimates based on multi-temporal chronosequences, and structural change following recovery from forest fires over the last century. Good correspondence with validation field data was obtained. Integrated analyses of multiple solar and view angle imagery further improved retrievals compared with single-pass data. Quantifying ecosystem dynamics such as the area and percent of forest disturbance, early regrowth and succession provides essential inputs to process-driven models of carbon flux. BIOPHYS is well suited for large-area, multi-temporal applications involving multiple image sets and mosaics for assessing vegetation disturbance and quantifying biophysical structural dynamics and change. It is also suitable for integration with forest inventory, monitoring, updating, and other programs.

  13. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating the failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  14. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson–Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Kausik, E-mail: kausik.chatterjee@aggiemail.usu.edu; Center for Atmospheric and Space Sciences, Utah State University, Logan, UT 84322; Roadcap, John R., E-mail: john.roadcap@us.af.mil

    The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson–Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.

  15. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson-Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    NASA Astrophysics Data System (ADS)

    Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra

    2014-11-01

    The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson-Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.

  16. Applications of system dynamics modelling to support health policy.

    PubMed

    Atkinson, Jo-An M; Wells, Robert; Page, Andrew; Dominello, Amanda; Haines, Mary; Wilson, Andrew

    2015-07-09

    The value of systems science modelling methods in the health sector is increasingly being recognised. Of particular promise is the potential of these methods to improve operational aspects of healthcare capacity and delivery, analyse policy options for health system reform and guide investments to address complex public health problems. Because it lends itself to a participatory approach, system dynamics modelling has been a particularly appealing method that aims to align stakeholder understanding of the underlying causes of a problem and achieve consensus for action. The aim of this review is to determine the effectiveness of system dynamics modelling for health policy, and explore the range and nature of its application. A systematic search was conducted to identify articles published up to April 2015 from the PubMed, Web of Knowledge, Embase, ScienceDirect and Google Scholar databases. The grey literature was also searched. Papers eligible for inclusion were those that described applications of system dynamics modelling to support health policy at any level of government. Six papers were identified, comprising eight case studies of the application of system dynamics modelling to support health policy. No analytic studies were found that examined the effectiveness of this type of modelling. Only three examples engaged multidisciplinary stakeholders in collective model building. Stakeholder participation in model building reportedly facilitated development of a common 'mental map' of the health problem, resulting in consensus about optimal policy strategy and garnering support for collaborative action. The paucity of relevant papers indicates that, although the volume of descriptive literature advocating the value of system dynamics modelling is considerable, its practical application to inform health policy making is yet to be routinely applied and rigorously evaluated. Advances in software are allowing the participatory model building approach to be extended to more sophisticated multimethod modelling that provides policy makers with more powerful tools to support the design of targeted, effective and equitable policy responses for complex health problems. Building capacity and investing in communication to promote these modelling methods, as well as documenting and evaluating their applications, will be vital to supporting uptake by policy makers.

  17. Designing in the Social Context: Using the Social Contextual Model of Health Behavior Change to Develop a Tobacco Control Intervention for Teachers in India

    ERIC Educational Resources Information Center

    Nagler, Eve M.; Pednekar, Mangesh S.; Viswanath, Kasisomayajula; Sinha, Dhirendra N.; Aghi, Mira B.; Pischke, Claudia R.; Ebbeling, Cara B.; Lando, Harry A.; Gupta, Prakash C.; Sorensen, Glorian C.

    2013-01-01

    This article provides a theory-based, step-by-step approach to intervention development and illustrates its application in India to design an intervention to promote tobacco-use cessation among school personnel in Bihar. We employed a five-step approach to develop the intervention using the Social Contextual Model of Health Behavior Change (SCM)…

  18. Application of the Double-Tangent Construction of Coexisting Phases to Any Type of Phase Equilibrium for Binary Systems Modeled with the Gamma-Phi Approach

    ERIC Educational Resources Information Center

    Jaubert, Jean-Noël; Privat, Romain

    2014-01-01

    The double-tangent construction of coexisting phases is an elegant approach to visualize all the multiphase binary systems that satisfy the equality of chemical potentials and to select the stable state. In this paper, we show how to perform the double-tangent construction of coexisting phases for binary systems modeled with the gamma-phi…

  19. 4D Flexible Atom-Pairs: An efficient probabilistic conformational space comparison for ligand-based virtual screening

    PubMed Central

    2011-01-01

    Background: The performance of 3D-based virtual screening similarity functions is affected by the applied conformations of compounds. Therefore, the results of 3D approaches are often less robust than 2D approaches. The application of 3D methods on multiple-conformer data sets normally reduces this weakness, but entails a significant computational overhead. Therefore, we developed a special conformational space encoding by means of Gaussian mixture models and a similarity function that operates on these models. The application of a model-based encoding allows an efficient comparison of the conformational space of compounds. Results: Comparisons of our 4D flexible atom-pair approach with over 15 state-of-the-art 2D- and 3D-based virtual screening similarity functions on the 40 data sets of the Directory of Useful Decoys show a robust performance of our approach. Even 3D-based approaches that operate on multiple conformers yield inferior results. The 4D flexible atom-pair method achieves an averaged AUC value of 0.78 on the filtered Directory of Useful Decoys data sets. The best 2D- and 3D-based approaches of this study yield AUC values of 0.74 and 0.72, respectively. As a result, the 4D flexible atom-pair approach achieves an average rank of 1.25 with respect to 15 other state-of-the-art similarity functions and four different evaluation metrics. Conclusions: Our 4D method yields a robust performance on 40 pharmaceutically relevant targets. The conformational space encoding enables an efficient comparison of the conformational space. Therefore, the weakness of the 3D-based approaches on single conformations is circumvented. With over 100,000 similarity calculations on a single desktop CPU, the utilization of the 4D flexible atom-pair in real-world applications is feasible. PMID:21733172
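
    As a rough illustration of model-based conformational-space comparison, the sketch below scores two one-dimensional Gaussian mixtures by their overlap integral, which has a closed form for Gaussian components; real descriptor spaces are higher-dimensional, and the mixtures here are invented, not the paper's encoding.

        # Similarity of two Gaussian-mixture encodings via the closed-form
        # overlap integral of Gaussian pairs, normalized to [0, 1].
        import numpy as np
        from scipy.stats import norm

        def mixture_overlap(w1, mu1, s1, w2, mu2, s2):
            """Integral of the product of two 1-D Gaussian mixtures."""
            return sum(wi * wj * norm.pdf(mi - mj, 0.0, np.hypot(si, sj))
                       for wi, mi, si in zip(w1, mu1, s1)
                       for wj, mj, sj in zip(w2, mu2, s2))

        a = ([0.6, 0.4], [1.0, 3.0], [0.5, 0.8])   # weights, means, std devs
        b = ([0.5, 0.5], [1.2, 2.8], [0.5, 0.7])
        sim = mixture_overlap(*a, *b)
        sim /= np.sqrt(mixture_overlap(*a, *a) * mixture_overlap(*b, *b))
        print("normalized overlap similarity:", round(sim, 3))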

  20. Time Series Analysis for Forecasting Hospital Census: Application to the Neonatal Intensive Care Unit

    PubMed Central

    Hoover, Stephen; Jackson, Eric V.; Paul, David; Locke, Robert

    2016-01-01

    Background: Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, using the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Objective: Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate the accuracy of the models compared with the fixed average census approach. Methods: We used five years of retrospective daily NICU census data for model development (January 2008 – December 2012, N=1827 observations) and one year of data for validation (January – December 2013, N=365 observations). Best-fitting ARIMA and linear regression models were applied to various 7-day prediction periods and compared using error statistics. Results: The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0); seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2) with period 7 and ARIMA(2,1,4)x(1,1,2) with period 14; and a seasonal linear regression model. The proposed forecasting models resulted on average in a 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Conclusions: Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, can support short- and long-term census forecasting, and can inform staff resource planning. PMID:27437040

  1. Time Series Analysis for Forecasting Hospital Census: Application to the Neonatal Intensive Care Unit.

    PubMed

    Capan, Muge; Hoover, Stephen; Jackson, Eric V; Paul, David; Locke, Robert

    2016-01-01

    Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, using the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate the accuracy of the models compared with the fixed average census approach. We used five years of retrospective daily NICU census data for model development (January 2008 - December 2012, N=1827 observations) and one year of data for validation (January - December 2013, N=365 observations). Best-fitting ARIMA and linear regression models were applied to various 7-day prediction periods and compared using error statistics. The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0); seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2) with period 7 and ARIMA(2,1,4)x(1,1,2) with period 14; and a seasonal linear regression model. The proposed forecasting models resulted on average in a 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, can support short- and long-term census forecasting, and can inform staff resource planning.
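
    A brief sketch of the model class reported above, assuming simulated census-like data (not the study's NICU data): one of the reported seasonal models, ARIMA(1,0,0)x(1,1,2) with period 7, is fitted with statsmodels and used for a 7-day forecast.

        # Fit a seasonal ARIMA(1,0,0)x(1,1,2), period 7, and forecast 7 days.
        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(3)
        t = np.arange(5 * 365)                 # five years of daily data
        census = (40 + 0.005 * t               # slight linear trend
                  + 3 * np.sin(2 * np.pi * t / 7)   # weekly pattern
                  + rng.normal(0, 2, t.size))

        model = SARIMAX(census, order=(1, 0, 0), seasonal_order=(1, 1, 2, 7))
        fit = model.fit(disp=False)
        print("7-day forecast:", fit.forecast(steps=7).round(1))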

  2. A MANUAL OF INSTRUCTIONAL PROBLEMS FOR THE U.S.G.S. MODFLOW MODEL

    EPA Science Inventory

    A recent report by the United States Environmental Protection Agency Groundwater Modeling Policy Study Group (van der Heijde and Park, 1986) offered several approaches to training Agency staff in the application of groundwater modeling. They identified the problem that current t...

  3. Dispersion modelling approaches for near road applications involving noise barriers

    EPA Science Inventory

    The talk will present comparisons with two datasets of the barrier algorithms implemented in two different dispersion models: US EPA’s R-LINE (a research dispersion modelling tool under development by the US EPA’s Office of Research and Development) and CERC’s A...

  4. PREDICTIVE UNCERTAINTY IN HYDROLOGIC AND WATER QUALITY MODELING: APPROACHES, APPLICATION TO ENVIRONMENTAL MANAGEMENT, AND FUTURE CHALLENGES (PRESENTATION)

    EPA Science Inventory

    Extant process-based hydrologic and water quality models are indispensable to water resources planning and environmental management. However, models are only approximations of real systems and often calibrated with incomplete and uncertain data. Reliable estimates, or perhaps f...

  5. PREDICTIVE UNCERTAINTY IN HYDROLOGIC AND WATER QUALITY MODELING: APPROACHES, APPLICATION TO ENVIRONMENTAL MANAGEMENT, AND FUTURE CHALLENGES

    EPA Science Inventory

    Extant process-based hydrologic and water quality models are indispensable to water resources planning and environmental management. However, models are only approximations of real systems and often calibrated with incomplete and uncertain data. Reliable estimates, or perhaps f...

  6. User's guide for the IEBT application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartoletti, T

    INFOSEC Experience-Based Training (IEBT) is a simulation and modeling approach to education in the arena of information security issues and its application to system-specific operations. The IEBT philosophy is that "Experience is the Best Teacher". This approach to computer-based training aims to bridge the gap between unappealing "read the text, answer the questions" types of training (largely a test of short-term memory), and the far more costly, time-consuming and inconvenient "real hardware" laboratory experience. Simulation and modeling supports this bridge by allowing the critical or salient features to be exercised while avoiding those aspects of a real world experience unrelated to the training goal.

  7. Structural partitioning of complex structures in the medium-frequency range. An application to an automotive vehicle

    NASA Astrophysics Data System (ADS)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2011-02-01

    In a recent work [Journal of Sound and Vibration 323 (2009) 849-863] the authors presented an energy-density field approach for the vibroacoustic analysis of complex structures in the low and medium frequency ranges. In this approach, a local vibroacoustic energy model, as well as a simplification of this model, was constructed. In this paper, firstly, an extension of the previous theory is performed in order to include the case of general input forces and, secondly, a structural partitioning methodology is presented along with a set of tools used for the construction of a partitioning. Finally, an application is presented for an automotive vehicle.

  8. AGM: A DSL for mobile cloud computing based on directed graph

    NASA Astrophysics Data System (ADS)

    Tanković, Nikola; Grbac, Tihana Galinac

    2016-06-01

    This paper summarizes a novel approach for consuming a domain specific language (DSL) by transforming it to a directed graph representation persisted by a graph database. Using such a specialized database enables advanced navigation through the stored model, exposing only relevant subsets of meta-data to the different services and components involved. We applied this approach in a mobile cloud computing system and used it to model several mobile applications in the retail, supply chain management, and merchandising domains. These applications are distributed in a Software-as-a-Service (SaaS) fashion and used by thousands of customers in Croatia. We report on lessons learned and propose further research on this topic.

  9. Agent-Based Modeling in Public Health: Current Applications and Future Directions.

    PubMed

    Tracy, Melissa; Cerdá, Magdalena; Keyes, Katherine M

    2018-04-01

    Agent-based modeling is a computational approach in which agents with a specified set of characteristics interact with each other and with their environment according to predefined rules. We review key areas in public health where agent-based modeling has been adopted, including both communicable and noncommunicable disease, health behaviors, and social epidemiology. We also describe the main strengths and limitations of this approach for questions with public health relevance. Finally, we describe both methodologic and substantive future directions that we believe will enhance the value of agent-based modeling for public health. In particular, advances in model validation, comparisons with other causal modeling procedures, and the expansion of the models to consider comorbidity and joint influences more systematically will improve the utility of this approach to inform public health research, practice, and policy.
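    For readers unfamiliar with the paradigm, here is a toy sketch of the kind of model being reviewed: agents interact at random each day and transmit infection according to predefined rules. All parameters and the contact scheme are invented for illustration.

```python
# Toy agent-based SIR sketch: agents meet random contacts daily and
# transmit infection by simple predefined rules. Illustrative only.
import random

random.seed(1)
N, P_TRANSMIT, DAYS_INFECTIOUS, CONTACTS = 500, 0.05, 7, 10

state = ['S'] * N              # each agent is 'S', 'I', or 'R'
days_left = [0] * N
for seed_case in random.sample(range(N), 5):
    state[seed_case], days_left[seed_case] = 'I', DAYS_INFECTIOUS

for day in range(1, 121):
    infectious = [a for a in range(N) if state[a] == 'I']
    for agent in infectious:
        # Each infectious agent meets a few random others per day.
        for other in random.sample(range(N), CONTACTS):
            if state[other] == 'S' and random.random() < P_TRANSMIT:
                state[other], days_left[other] = 'I', DAYS_INFECTIOUS
        days_left[agent] -= 1
        if days_left[agent] == 0:
            state[agent] = 'R'
    if day % 30 == 0:
        print(day, state.count('S'), state.count('I'), state.count('R'))
```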

  10. Query Language for Location-Based Services: A Model Checking Approach

    NASA Astrophysics Data System (ADS)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach is unique among existing research because it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.
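    A minimal sketch of the underlying idea: a symbolic location model organized as a tree, where a "somewhere below" query is answered by walking the model. The class and space names are invented, and the paper's actual hybrid-logic machinery is far richer than this.

```python
# Minimal symbolic location model as a tree; a 'somewhere below'
# modality is evaluated by walking the model. Names are illustrative.
class Space:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)
        self.occupants = []

    def descendants(self):
        # Yield this space and every space nested inside it.
        yield self
        for child in self.children:
            yield from child.descendants()

def locate(root, occupant):
    """Return all spaces whose subtree contains the occupant."""
    return [s.name for s in root.descendants()
            if any(occupant in d.occupants for d in s.descendants())]

lab = Space("lab")
office = Space("office")
floor1 = Space("floor-1", [lab, office])
building = Space("building", [floor1])
lab.occupants.append("alice")

print(locate(building, "alice"))   # ['building', 'floor-1', 'lab']
```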

  11. Identifying when tagged fishes have been consumed by piscivorous predators: application of multivariate mixture models to movement parameters of telemetered fishes

    USGS Publications Warehouse

    Romine, Jason G.; Perry, Russell W.; Johnston, Samuel V.; Fitzer, Christopher W.; Pagliughi, Stephen W.; Blake, Aaron R.

    2013-01-01

    Mixture models proved valuable as a means to differentiate between salmonid smolts and predators that consumed salmonid smolts. However, successful application of this method requires that telemetered fishes and their predators exhibit measurable differences in movement behavior. Our approach is flexible, allows inclusion of multiple track statistics and improves upon rule-based manual classification methods.
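    A hedged sketch of the general technique: a two-component Gaussian mixture over per-track movement statistics separates two behavioral groups. The data and feature names below are simulated stand-ins, not the study's telemetry variables.

```python
# Two-component Gaussian mixture over simulated track statistics,
# illustrating how smolt-like and predator-like movement might separate.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Columns (invented): mean speed (m/s), tortuosity index.
smolts = rng.normal([0.6, 0.2], [0.1, 0.05], size=(200, 2))   # fast, directed
predators = rng.normal([0.2, 0.6], [0.08, 0.1], size=(60, 2)) # slow, tortuous
tracks = np.vstack([smolts, predators])

gmm = GaussianMixture(n_components=2, random_state=0).fit(tracks)
labels = gmm.predict(tracks)
print(np.bincount(labels))   # component sizes, roughly 200 and 60
```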

  12. Prediction of biochar yield from cattle manure pyrolysis via least squares support vector machine intelligent approach.

    PubMed

    Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia

    2016-02-01

    To conveniently predict the biochar yield from cattle manure pyrolysis, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set of 33 experimental observations was used, obtained using a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is convenient and effective for predicting the biochar yield. In particular, the novel LS-SVM model has more satisfactory predictive performance and better robustness than the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a good reference for modeling the cattle manure pyrolysis process and other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
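    The following sketch shows the standard LS-SVM regression formulation (the Suykens-style dual, solved as a single linear system) with an RBF kernel; the 33 "measurements" are synthetic stand-ins for the pyrolysis data, and all hyperparameters are assumed.

```python
# Compact LS-SVM regression sketch (RBF kernel): solve the dual system
# [[0, 1^T], [1, K + I/gamma]] @ [b, alpha] = [0, y]. Data are synthetic.
import numpy as np

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(3)
X = rng.uniform(300, 700, size=(33, 1))          # e.g. pyrolysis temperature
y = 60 - 0.05 * X[:, 0] + rng.normal(0, 1, 33)   # synthetic biochar yield

gamma, sigma, n = 10.0, 100.0, len(y)
K = rbf(X, X, sigma)
A = np.zeros((n + 1, n + 1))
A[0, 1:], A[1:, 0] = 1.0, 1.0
A[1:, 1:] = K + np.eye(n) / gamma
b_alpha = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = b_alpha[0], b_alpha[1:]

X_new = np.array([[500.0]])
print(rbf(X_new, X, sigma) @ alpha + b)          # predicted yield at 500
```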

  13. Functional Connectivity Mapping in the Animal Model: Principles and Applications of Resting-State fMRI

    PubMed Central

    Gorges, Martin; Roselli, Francesco; Müller, Hans-Peter; Ludolph, Albert C.; Rasche, Volker; Kassubek, Jan

    2017-01-01

    “Resting-state” fMRI has substantially contributed to the understanding of human and non-human functional brain organization by the analysis of correlated patterns in spontaneous activity within dedicated brain systems. Spontaneous neural activity is indirectly measured from the blood oxygenation level-dependent signal as acquired by echo planar imaging, while subjects are quietly “resting” in the scanner. Animal models including disease or knockout models allow a broad spectrum of experimental manipulations not applicable in humans. The non-invasive fMRI approach provides a promising tool for cross-species comparative investigations. This review focuses on the principles of “resting-state” functional connectivity analysis and its applications to living animals. The translational aspect from in vivo animal models toward clinical applications in humans is emphasized. We introduce the fMRI-based investigation of the non-human brain’s hemodynamics, the methodological issues in the data postprocessing, and the functional data interpretation from different abstraction levels. The longer-term goal of integrating fMRI connectivity data with structural connectomes obtained with tracing and optical imaging approaches is presented; such integration will allow the interrogation of fMRI data in terms of directional flow of information and may identify the structural underpinnings of observed functional connectivity patterns. PMID:28539914

  14. A coupled melt-freeze temperature index approach in a one-layer model to predict bulk volumetric liquid water content dynamics in snow

    NASA Astrophysics Data System (ADS)

    Avanzi, Francesco; Yamaguchi, Satoru; Hirashima, Hiroyuki; De Michele, Carlo

    2016-04-01

    Liquid water in snow governs runoff dynamics and wet snow avalanche release. Moreover, it affects snow viscosity and snow albedo. As a result, measuring and modeling liquid water dynamics in snow have important implications for many scientific applications. However, measurements are usually challenging, while modeling is difficult due to an overlap of mechanical, thermal and hydraulic processes. Here, we evaluate the use of a simple one-layer one-dimensional model to predict hourly time-series of bulk volumetric liquid water content in seasonal snow. The model considers both a simple temperature-index approach (melt only) and a coupled melt-freeze temperature-index approach that is able to reconstruct melt-freeze dynamics. Performance of this approach is evaluated at three sites in Japan. These sites (Nagaoka, Shinjo and Sapporo) present multi-year time-series of snow and meteorological data, vertical profiles of snow physical properties and snowmelt lysimeter data. These data-sets are an interesting opportunity to test this application in different climatic conditions, as sites span a wide latitudinal range and are subjected to different snow conditions during the season. When melt-freeze dynamics are included in the model, results show that median absolute differences between observations and predictions of bulk volumetric liquid water content are consistently lower than 1 vol%. Moreover, the model is able to predict an observed dry condition of the snowpack in 80% of observed cases at a non-calibration site, where parameters from calibration sites are transferred. Overall, the analyses show that a coupled melt-freeze temperature-index approach may be a valid solution to predict average wetness conditions of a snow cover at the local scale.
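    A minimal one-layer sketch of the coupled melt-freeze temperature-index idea: degree-hours above a threshold add liquid water, degree-hours below refreeze it. The coefficients are assumed for illustration, not the paper's calibrated values.

```python
# One-layer melt-freeze temperature-index sketch tracking bulk volumetric
# liquid water content. All coefficients are illustrative assumptions.
MELT_FACTOR = 0.15     # mm water per degree-hour above threshold (assumed)
FREEZE_FACTOR = 0.05   # mm refrozen per degree-hour below threshold (assumed)
T_THRESHOLD = 0.0      # deg C
SNOW_DEPTH_MM = 500.0  # constant bulk depth for this toy example

def bulk_lwc(hourly_temps, lwc0=0.0):
    """Return hourly bulk volumetric liquid water content (vol %)."""
    lwc, out = lwc0, []
    for t in hourly_temps:
        if t > T_THRESHOLD:
            lwc += MELT_FACTOR * (t - T_THRESHOLD) / SNOW_DEPTH_MM * 100
        else:
            lwc -= FREEZE_FACTOR * (T_THRESHOLD - t) / SNOW_DEPTH_MM * 100
        lwc = max(lwc, 0.0)   # liquid water content cannot be negative
        out.append(lwc)
    return out

temps = [-2, -1, 1, 3, 4, 2, 0, -1, -3]   # a toy diurnal cycle
print([round(v, 3) for v in bulk_lwc(temps)])
```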

  15. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider that the parameters are deterministic variables. But in practice, the transmission parameters present large variability and it is not possible to determine them exactly, and it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we apply the approach to an obesity epidemic model. PMID:22927889
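    A non-intrusive polynomial chaos sketch in the same spirit: a toy SIR model's random transmission rate is expanded in probabilists' Hermite polynomials, and output moments are read off the coefficients. The epidemic model and parameter values are invented; the paper's intrusive (auxiliary-system) formulation differs in detail.

```python
# Non-intrusive polynomial chaos sketch: expand a random-parameter SIR
# output in Hermite polynomials and recover mean/variance from coefficients.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

def sir_final_infected(beta, gamma=0.1, days=160):
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(days):                 # simple daily Euler steps
        new_inf = beta * s * i
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
    return r

rng = np.random.default_rng(4)
xi = rng.standard_normal(400)             # standard-normal germ
beta = 0.3 + 0.05 * xi                    # random transmission parameter
y = np.array([sir_final_infected(b) for b in beta])

degree = 4
Phi = hermevander(xi, degree)             # He_0..He_4 at the samples
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

mean = coef[0]                            # E[He_n He_m] = n! * delta_nm
var = sum(coef[n] ** 2 * factorial(n) for n in range(1, degree + 1))
print(round(mean, 4), round(var, 6))
```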

  16. Nonlinear Pressurization and Modal Analysis Procedure for Dynamic Modeling of Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Smalley, Kurt B.; Tinker, Michael L.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    An introduction and set of guidelines for finite element dynamic modeling of nonrigidized inflatable structures is provided. A two-step approach is presented, involving 1) nonlinear static pressurization of the structure and updating of the stiffness matrix and 2) linear normal modes analysis using the updated stiffness. Advantages of this approach are that it provides physical realism in modeling of pressure stiffening, and it maintains the analytical convenience of a standard linear eigensolution once the stiffness has been modified. Demonstration of the approach is accomplished through the creation and test verification of an inflated cylinder model using a large commercial finite element code. Good frequency and mode shape comparisons are obtained with test data and previous modeling efforts, verifying the accuracy of the technique. Problems encountered in the application of the approach, as well as their solutions, are discussed in detail.

  17. Modeling fuels and fire effects in 3D: Model description and applications

    Treesearch

    Francois Pimont; Russell Parsons; Eric Rigolot; Francois de Coligny; Jean-Luc Dupuy; Philippe Dreyfus; Rodman R. Linn

    2016-01-01

    Scientists and managers critically need ways to assess how fuel treatments alter fire behavior, yet few tools currently exist for this purpose. We present a spatially explicit fuel-modeling system, FuelManager, which models fuels, vegetation growth, fire behavior (using a physics-based model, FIRETEC), and fire effects. FuelManager's flexible approach facilitates...

  18. To Aggregate or Not and Potentially Better Questions for Clustered Data: The Need for Hierarchical Linear Modeling in CTE Research

    ERIC Educational Resources Information Center

    Nimon, Kim

    2012-01-01

    Using state achievement data that are openly accessible, this paper demonstrates the application of hierarchical linear modeling within the context of career technical education research. Three prominent approaches to analyzing clustered data (i.e., modeling aggregated data, modeling disaggregated data, modeling hierarchical data) are discussed…
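    As a concrete illustration of the hierarchical modeling option, the sketch below fits a random-intercept mixed model with statsmodels to simulated students-nested-in-schools data; the variable names are hypothetical, not from the paper's state achievement data.

```python
# Random-intercept hierarchical linear model with statsmodels:
# simulated students (rows) nested in schools (clusters).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
schools = np.repeat(np.arange(20), 30)            # 20 schools x 30 students
school_effect = rng.normal(0, 2, 20)[schools]     # cluster-level variation
cte = rng.integers(0, 2, len(schools))            # student-level predictor
score = 70 + 3 * cte + school_effect + rng.normal(0, 5, len(schools))

df = pd.DataFrame({"score": score, "cte": cte, "school": schools})
fit = smf.mixedlm("score ~ cte", df, groups=df["school"]).fit()
print(fit.summary())
```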

  19. Benefit Indicators for Flood Regulation Services of Wetlands: A Modeling Approach

    EPA Science Inventory

    This report describes a method for developing indicators of the benefits of flood regulation services of wetlands and presents a companion case study. We demonstrate our approach through an application to the Woonasquatucket River watershed in northern Rhode Island. This work is ...

  20. Optimizing Environmental Monitoring Networks with Direction-Dependent Distance Thresholds.

    ERIC Educational Resources Information Center

    Hudak, Paul F.

    1993-01-01

    In the direction-dependent approach to location modeling developed herein, the distance within which a point of demand can find service from a facility depends on direction of measurement. The utility of the approach is illustrated through an application to groundwater remediation. (Author/MDH)

  1. A Systems Biology Approach to Toxicology Research with Small Fish Models

    EPA Science Inventory

    Increasing use of mechanistically-based molecular and biochemical endpoints and in vitro assays is being advocated as a more efficient and cost-effective approach for generating chemical hazard data. However, development of effective assays and application of the resulting data i...

  2. The Integration of Environmental Education in Science Materials by Using "MOTORIC" Learning Model

    ERIC Educational Resources Information Center

    Sukarjita, I. Wayan; Ardi, Muhammad; Rachman, Abdul; Supu, Amiruddin; Dirawan, Gufran Darma

    2015-01-01

    Research on the integration of Environmental Education into science subject matter through application of the "MOTORIC" learning model has been carried out at a junior high school in Kupang, Nusa Tenggara Timur, Indonesia. The "MOTORIC" learning model is an Environmental Education (EE) learning model that combines three learning approaches, i.e.…

  3. Two Decades of WRF/CMAQ simulations over the continental United States: New approaches for performing dynamic model evaluation and determining confidence limits for ozone exceedances

    EPA Science Inventory

    Confidence in the application of models for forecasting and regulatory assessments is furthered by conducting four types of model evaluation: operational, dynamic, diagnostic, and probabilistic. Operational model evaluation alone does not reveal the confidence limits that can be ...

  4. Erosion and Sediment Transport Modelling in Shallow Waters: A Review on Approaches, Models and Applications.

    PubMed

    Hajigholizadeh, Mohammad; Melesse, Assefa M; Fuentes, Hector R

    2018-03-14

    The erosion and sediment transport processes in shallow waters, which are discussed in this paper, begin when water droplets hit the soil surface. The transport mechanism caused by the consequent rainfall-runoff process determines the amount of generated sediment that can be transferred downslope. Many significant studies and models have been performed to investigate these processes; they differ in terms of the factors they account for, approaches, inputs and outputs, model structure, and the manner in which these processes are represented. This paper attempts to review the related literature concerning sediment transport modelling in shallow waters. A classification based on the representational processes of the soil erosion and sediment transport models (empirical, conceptual, physical and hybrid) is adopted, and the commonly-used models and their characteristics are listed. This review is expected to be of interest to researchers and soil and water conservation managers who are working on erosion and sediment transport phenomena in shallow waters. The paper format should be helpful for practitioners to identify and generally characterize the types of available models, their strengths and their basic scope of applicability.

  5. Modeling urban building energy use: A review of modeling approaches and procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wenliang; Zhou, Yuyu; Cetin, Kristen

    With rapid urbanization and economic development, the world has been experiencing an unprecedented increase in energy consumption and greenhouse gas (GHG) emissions. While reducing energy consumption and GHG emissions is a common interest shared by major developed and developing countries, actions to enable these global reductions are generally implemented at the city scale. This is because baseline information from individual cities plays an important role in identifying economical options for improving building energy efficiency and reducing GHG emissions. Numerous approaches have been proposed for modeling urban building energy use in the past decades. This paper aims to provide an up-to-date review of the broad categories of energy models for urban buildings and describes the basic workflow of physics-based, bottom-up models and their applications in simulating urban-scale building energy use. Because there are significant differences across models with varied potential for application, strengths and weaknesses of the reviewed models are also presented. This is followed by a discussion of challenging issues associated with model preparation and calibration.

  6. Modeling urban building energy use: A review of modeling approaches and procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wenliang; Zhou, Yuyu; Cetin, Kristen

    With rapid urbanization and economic development, the world has been experiencing an unprecedented increase in energy consumption and greenhouse gas (GHG) emissions. While reducing energy consumption and GHG emissions is a common interest shared by major developed and developing countries, actions to enable these global reductions are generally implemented at the city scale. This is because baseline information from individual cities plays an important role in identifying economical options for improving building energy efficiency and reducing GHG emissions. Numerous approaches have been proposed for modeling urban building energy use in the past decades. Our paper aims to provide an up-to-date review of the broad categories of energy models for urban buildings and describes the basic workflow of physics-based, bottom-up models and their applications in simulating urban-scale building energy use. Because there are significant differences across models with varied potential for application, strengths and weaknesses of the reviewed models are also presented. We then follow this with a discussion of challenging issues associated with model preparation and calibration.

  7. Erosion and Sediment Transport Modelling in Shallow Waters: A Review on Approaches, Models and Applications

    PubMed Central

    Fuentes, Hector R.

    2018-01-01

    The erosion and sediment transport processes in shallow waters, which are discussed in this paper, begin when water droplets hit the soil surface. The transport mechanism caused by the consequent rainfall-runoff process determines the amount of generated sediment that can be transferred downslope. Many significant studies and models have been performed to investigate these processes; they differ in terms of the factors they account for, approaches, inputs and outputs, model structure, and the manner in which these processes are represented. This paper attempts to review the related literature concerning sediment transport modelling in shallow waters. A classification based on the representational processes of the soil erosion and sediment transport models (empirical, conceptual, physical and hybrid) is adopted, and the commonly-used models and their characteristics are listed. This review is expected to be of interest to researchers and soil and water conservation managers who are working on erosion and sediment transport phenomena in shallow waters. The paper format should be helpful for practitioners to identify and generally characterize the types of available models, their strengths and their basic scope of applicability. PMID:29538335

  8. Modeling urban building energy use: A review of modeling approaches and procedures

    DOE PAGES

    Li, Wenliang; Zhou, Yuyu; Cetin, Kristen; ...

    2017-11-13

    With rapid urbanization and economic development, the world has been experiencing an unprecedented increase in energy consumption and greenhouse gas (GHG) emissions. While reducing energy consumption and GHG emissions is a common interest shared by major developed and developing countries, actions to enable these global reductions are generally implemented at the city scale. This is because baseline information from individual cities plays an important role in identifying economical options for improving building energy efficiency and reducing GHG emissions. Numerous approaches have been proposed for modeling urban building energy use in the past decades. Our paper aims to provide an up-to-date review of the broad categories of energy models for urban buildings and describes the basic workflow of physics-based, bottom-up models and their applications in simulating urban-scale building energy use. Because there are significant differences across models with varied potential for application, strengths and weaknesses of the reviewed models are also presented. We then follow this with a discussion of challenging issues associated with model preparation and calibration.

  9. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
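    The bias the authors describe is easy to reproduce in simulation: the sketch below generates an AR(1) process observed with white measurement noise and shows how the naive lag-1 estimate is attenuated toward zero. All parameter values are illustrative.

```python
# Simulate an AR(1) process observed with white measurement noise and
# show the attenuation bias of the naive lag-1 autocorrelation estimate.
import numpy as np

rng = np.random.default_rng(6)
phi, n = 0.6, 5000
x = np.zeros(n)
for t in range(1, n):                 # latent AR(1) process
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(0, 1.0, n)         # add white measurement noise

def lag1_autocorr(z):
    z = z - z.mean()
    return (z[:-1] * z[1:]).sum() / (z * z).sum()

print("true phi:", phi)
print("naive estimate from noisy series:", round(lag1_autocorr(y), 3))
# The estimate shrinks by roughly var(x) / (var(x) + var(noise)).
```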

  10. Boundary methods for mode estimation

    NASA Astrophysics Data System (ADS)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
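    A sketch of the MOG baseline described above: Gaussian mixtures of increasing order are fitted and the order minimizing AIC is selected. The data are simulated with three true modes; this illustrates the baseline, not the BM technique itself.

```python
# Mixture-of-Gaussians model order selection by AIC (the paper's baseline).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
data = np.vstack([rng.normal(-3, 1, (150, 1)),   # three true modes
                  rng.normal(0, 1, (150, 1)),
                  rng.normal(4, 1, (150, 1))])

aics = {k: GaussianMixture(n_components=k, random_state=0).fit(data).aic(data)
        for k in range(1, 7)}
print(min(aics, key=aics.get), aics)   # expect 3 components to win
```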

  11. A multiscale modelling methodology applicable for regulatory purposes taking into account effects of complex terrain and buildings on pollutant dispersion: a case study for an inner Alpine basin.

    PubMed

    Oettl, D

    2015-11-01

    Dispersion modelling in complex terrain has always been challenging for modellers. Although a large number of publications are dedicated to that field, candidate methods and models for usage in regulatory applications are scarce. This is all the more true when the combined effect of topography and obstacles on pollutant dispersion has to be taken into account. In Austria, largely situated in Alpine regions, such complex situations are quite frequent. This work deals with an approach which is in principle capable of considering both buildings and topography in simulations by combining state-of-the-art wind field models at the micro- (<1 km) and mesoscale γ (2-20 km) with a Lagrangian particle model. In order to make such complex numerical models applicable for regulatory purposes, meteorological input data for the models need to be readily derived from routine observations. Here, use was made of the traditional way to bin meteorological data based on wind direction, speed, and stability class, formerly used mainly in conjunction with Gaussian-type models. It is demonstrated that this approach leads to reasonable agreement (fractional bias < 0.1) between observed and modelled annual average concentrations in an Alpine basin with frequent low-wind-speed conditions, temperature inversions, and quite complex flow patterns, while keeping simulation times feasible for applications in licensing procedures. However, due to the simplifications in the derivation of meteorological input data as well as several ad hoc assumptions regarding the boundary conditions of the mesoscale wind field model, the methodology is not suited for computing detailed time and space variations of pollutant concentrations.

  12. Application of surface complexation models to anion adsorption by natural materials.

    PubMed

    Goldberg, Sabine

    2014-10-01

    Various chemical models of ion adsorption are presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model, are described in the present study. Characteristics common to all the surface complexation models are equilibrium constant expressions, mass and charge balances, and surface activity coefficient electrostatic potential terms. Methods for determining parameter values for surface site density, capacitances, and surface complexation constants also are discussed. Spectroscopic experimental methods of establishing ion adsorption mechanisms include vibrational spectroscopy, nuclear magnetic resonance spectroscopy, electron spin resonance spectroscopy, X-ray absorption spectroscopy, and X-ray reflectivity. Experimental determinations of point of zero charge shifts and ionic strength dependence of adsorption results and molecular modeling calculations also can be used to deduce adsorption mechanisms. Applications of the surface complexation models to heterogeneous natural materials, such as soils, using the component additivity and the generalized composite approaches are described. Emphasis is on the generalized composite approach for predicting anion adsorption by soils. Continuing research is needed to develop consistent and realistic protocols for describing ion adsorption reactions on soil minerals and soils. The availability of standardized model parameter databases for use in chemical speciation-transport models is critical. Published 2014 Wiley Periodicals Inc. on behalf of SETAC. This article is a US Government work and as such, is in the public domain in the United States of America.

  13. Supporting user-defined granularities in a spatiotemporal conceptual model

    USGS Publications Warehouse

    Khatri, V.; Ram, S.; Snodgrass, R.T.; O'Brien, G. M.

    2002-01-01

    Granularities are integral to spatial and temporal data. A large number of applications require storage of facts along with their temporal and spatial context, which needs to be expressed in terms of appropriate granularities. For many real-world applications, a single granularity in the database is insufficient. In order to support any type of spatial or temporal reasoning, the semantics related to granularities needs to be embedded in the database. Specifying granularities related to facts is an important part of conceptual database design because under-specifying the granularity can restrict an application, affect the relative ordering of events and impact the topological relationships. Closely related to granularities is indeterminacy, i.e., an occurrence time or location associated with a fact that is not known exactly. In this paper, we present an ontology for spatial granularities that is a natural analog of temporal granularities. We propose an upward-compatible, annotation-based spatiotemporal conceptual model that can comprehensively capture the semantics related to spatial and temporal granularities, and indeterminacy without requiring new spatiotemporal constructs. We specify the formal semantics of this spatiotemporal conceptual model via translation to a conventional conceptual model. To underscore the practical focus of our approach, we describe an on-going case study. We apply our approach to a hydrogeologic application at the United States Geological Survey and demonstrate that our proposed granularity-based spatiotemporal conceptual model is straightforward to use and is comprehensive.

  14. Application of the wavelet transform for speech processing

    NASA Technical Reports Server (NTRS)

    Maes, Stephane

    1994-01-01

    Speaker identification and word spotting will shortly play a key role in space applications. An approach based on the wavelet transform is presented that, in the context of the 'modulation model,' enables extraction of speech features which are used as input for the classification process.

  15. Capturing nonlocal interaction effects in the Hubbard model: Optimal mappings and limits of applicability

    NASA Astrophysics Data System (ADS)

    van Loon, E. G. C. P.; Schüler, M.; Katsnelson, M. I.; Wehling, T. O.

    2016-10-01

    We investigate the Peierls-Feynman-Bogoliubov variational principle to map Hubbard models with nonlocal interactions to effective models with only local interactions. We study the renormalization of the local interaction induced by nearest-neighbor interaction and assess the quality of the effective Hubbard models in reproducing observables of the corresponding extended Hubbard models. We compare the renormalization of the local interactions as obtained from numerically exact determinant quantum Monte Carlo to approximate but more generally applicable calculations using dual boson, dynamical mean field theory, and the random phase approximation. These more approximate approaches are crucial for any application with real materials in mind. Furthermore, we use the dual boson method to calculate observables of the extended Hubbard models directly and benchmark these against determinant quantum Monte Carlo simulations of the effective Hubbard model.

  16. Reviewing model application to support animal health decision making.

    PubMed

    Singer, Alexander; Salman, Mo; Thulke, Hans-Hermann

    2011-04-01

    Animal health is of societal importance as it affects human welfare, and anthropogenic interests shape decision making to assure animal health. Scientific advice to support decision making is manifold. Modelling, as one piece of the scientific toolbox, is appreciated for its ability to describe and structure data, to give insight into complex processes and to predict future outcomes. In this paper we study the application of scientific modelling to support practical animal health decisions. We reviewed the 35 animal health related scientific opinions adopted by the Animal Health and Animal Welfare Panel of the European Food Safety Authority (EFSA). Thirteen of these documents were based on the application of models. The review took two viewpoints, the decision maker's need and the modeller's approach. In the reviewed material three types of modelling questions were addressed by four specific model types. The correspondence between tasks and models underpinned the importance of the modelling question in triggering the modelling approach. End point quantifications were the dominating request from decision makers, implying that prediction of risk is a major need. However, due to knowledge gaps the corresponding modelling studies often shied away from providing exact numbers. Instead, comparative scenario analyses were performed, furthering the understanding of the decision problem and the effects of alternative management options. In conclusion, the most adequate scientific support for decision making - including available modelling capacity - might be expected if the required advice is clearly stated. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Fiia: A Model-Based Approach to Engineering Collaborative Augmented Reality

    NASA Astrophysics Data System (ADS)

    Wolfe, Christopher; Smith, J. David; Phillips, W. Greg; Graham, T. C. Nicholas

    Augmented reality systems often involve collaboration among groups of people. While there are numerous toolkits that aid the development of such augmented reality groupware systems (e.g., ARToolkit and Groupkit), there remains an enormous gap between the specification of an AR groupware application and its implementation. In this chapter, we present Fiia, a toolkit which simplifies the development of collaborative AR applications. Developers specify the structure of their applications using the Fiia modeling language, which abstracts details of networking and provides high-level support for specifying adapters between the physical and virtual world. The Fiia.Net runtime system then maps this conceptual model to a runtime implementation. We illustrate Fiia via Raptor, an augmented reality application used to help small groups collaboratively prototype video games.

  18. Simulation as a preoperative planning approach in advanced heart failure patients. A retrospective clinical analysis.

    PubMed

    Capoccia, Massimo; Marconi, Silvia; Singh, Sanjeet Avtaar; Pisanelli, Domenico M; De Lazzari, Claudio

    2018-05-02

    Modelling and simulation may become clinically applicable tools for detailed evaluation of the cardiovascular system and clinical decision-making to guide therapeutic intervention. Models based on the pressure-volume relationship and a zero-dimensional representation of the cardiovascular system may be a suitable choice given their simplicity and versatility. This approach has great potential for application in heart failure, where the impact of left ventricular assist devices has played a significant role as a bridge to transplant and, more recently, as a long-term solution for ineligible candidates. We sought to investigate the value of simulation in the context of three heart failure patients with a view to predicting or guiding further management. CARDIOSIM© was the software used for this purpose. The study was based on retrospective analysis of haemodynamic data previously discussed at a multidisciplinary meeting. The outcome of the simulations addressed the value of a more quantitative approach in the clinical decision process. Although previous experience, co-morbidities and the risk of potentially fatal complications play a role in clinical decision-making, patient-specific modelling may become a daily approach for selection and optimisation of device-based treatment for heart failure patients.

  19. Multiple criteria decision analysis for health technology assessment.

    PubMed

    Thokala, Praveen; Duenas, Alejandra

    2012-12-01

    Multicriteria decision analysis (MCDA) has been suggested by some researchers as a method to capture benefits beyond quality-adjusted life-years in a transparent and consistent manner. The objectives of this article were to analyze the possible application of MCDA approaches in health technology assessment and to describe their relative advantages and disadvantages. This article begins with an introduction to the most common types of MCDA models and a critical review of state-of-the-art methods for incorporating multiple criteria in health technology assessment. An overview of MCDA is provided and is compared against the current UK National Institute for Health and Clinical Excellence health technology appraisal process. A generic MCDA modeling approach is described, and the different MCDA modeling approaches are applied to a hypothetical case study. A comparison of the different MCDA approaches is provided, and the generic issues that need consideration before the application of MCDA in health technology assessment are examined. There are general practical issues that might arise from using an MCDA approach, and it is suggested that appropriate care be taken to ensure the success of MCDA techniques in the appraisal process. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
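    As a minimal illustration of the simplest MCDA model type, the sketch below applies a weighted-sum model to three hypothetical technologies scored against four criteria; all weights and scores are invented for illustration.

```python
# Weighted-sum MCDA sketch: overall value = criterion scores @ weights.
import numpy as np

criteria = ["efficacy", "safety", "cost_saving", "equity"]
weights = np.array([0.4, 0.3, 0.2, 0.1])   # elicited from decision makers
# Rows: candidate technologies; columns: normalized 0-1 criterion scores.
scores = np.array([[0.8, 0.6, 0.3, 0.5],   # technology A
                   [0.6, 0.9, 0.5, 0.4],   # technology B
                   [0.5, 0.7, 0.9, 0.6]])  # technology C

print("criteria:", criteria)
for name, val in zip("ABC", scores @ weights):
    print(f"technology {name}: {val:.2f}")
```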

  20. The bag-of-frames approach to audio pattern recognition: a sufficient model for urban soundscapes but not for polyphonic music.

    PubMed

    Aucouturier, Jean-Julien; Defreville, Boris; Pachet, François

    2007-08-01

    The "bag-of-frames" approach (BOF) to audio pattern recognition represents signals as the long-term statistical distribution of their local spectral features. This approach has proved nearly optimal for simulating the auditory perception of natural and human environments (or soundscapes), and is also the most predominent paradigm to extract high-level descriptions from music signals. However, recent studies show that, contrary to its application to soundscape signals, BOF only provides limited performance when applied to polyphonic music signals. This paper proposes to explicitly examine the difference between urban soundscapes and polyphonic music with respect to their modeling with the BOF approach. First, the application of the same measure of acoustic similarity on both soundscape and music data sets confirms that the BOF approach can model soundscapes to near-perfect precision, and exhibits none of the limitations observed in the music data set. Second, the modification of this measure by two custom homogeneity transforms reveals critical differences in the temporal and statistical structure of the typical frame distribution of each type of signal. Such differences may explain the uneven performance of BOF algorithms on soundscapes and music signals, and suggest that their human perception rely on cognitive processes of a different nature.

  1. Unsupervised chunking based on graph propagation from bilingual corpus.

    PubMed

    Zhu, Ling; Wong, Derek F; Chao, Lidia S

    2014-01-01

    This paper presents a novel approach to unsupervised shallow parsing, with the model trained on the unannotated Chinese side of a parallel Chinese-English corpus. In this approach, no annotation of the Chinese side is used. The exploitation of graph-based label propagation for bilingual knowledge transfer, along with the use of the projected labels as features in the unsupervised model, contributes to better performance. Experimental comparisons with state-of-the-art algorithms show that the proposed approach achieves markedly higher accuracy in terms of F-score.
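    A generic graph-based label propagation sketch (pure NumPy), illustrating only the propagation mechanism: seed labels spread over a similarity graph until scores stabilize. The tiny graph is illustrative, not bilingual corpus data.

```python
# Generic label propagation: Y <- alpha * P @ Y + (1 - alpha) * Y0,
# where P is the row-normalized similarity graph and Y0 holds seed labels.
import numpy as np

W = np.array([[0, 1, 1, 0, 0],        # symmetric similarity graph
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
P = W / W.sum(axis=1, keepdims=True)  # row-normalized transitions

Y0 = np.zeros((5, 2))                 # two label classes
Y0[0] = [1, 0]                        # node 0 seeded with class 0
Y0[4] = [0, 1]                        # node 4 seeded with class 1

Y, alpha = Y0.copy(), 0.8
for _ in range(100):
    Y = alpha * P @ Y + (1 - alpha) * Y0   # propagate, softly clamp seeds
print(Y.argmax(axis=1))               # propagated class per node
```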

  2. A Two-Step Approach for Analysis of Nonignorable Missing Outcomes in Longitudinal Regression: an Application to Upstate KIDS Study.

    PubMed

    Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari

    2017-09-01

    Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine if the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, the computational complexity and lack of software packages has limited its practical applications. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with a much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be conducted as sensitivity analyses in longitudinal studies to examine violations to the ignorable missingness assumption and the implications relative to health outcomes. © 2017 John Wiley & Sons Ltd.

  3. Evaluation of Service Level Agreement Approaches for Portfolio Management in the Financial Industry

    NASA Astrophysics Data System (ADS)

    Pontz, Tobias; Grauer, Manfred; Kuebert, Roland; Tenschert, Axel; Koller, Bastian

    The idea of service-oriented Grid computing seems to have the potential for a fundamental paradigm change and a new architectural alignment in the design of IT infrastructures. There is a wide range of technical approaches from scientific communities describing basic infrastructures and middlewares for integrating Grid resources, so that Grid applications are by now technically realizable. Hence, Grid computing needs viable business models and enhanced infrastructures to move from academic application to commercial application. For commercial usage of these developments, service level agreements are needed. The approaches developed so far are primarily of academic interest and have mostly not been put into practice. Based on a business use case from the financial industry, five service level agreement approaches are evaluated in this paper. Based on the evaluation, a management architecture has been designed and implemented as a prototype.

  4. Generic Business Model Types for Enterprise Mashup Intermediaries

    NASA Astrophysics Data System (ADS)

    Hoyer, Volker; Stanoevska-Slabeva, Katarina

    The huge demand for situational and ad-hoc applications desired by the mass of business end users has led to a new kind of Web application, well known as Enterprise Mashups. Users with no or limited programming skills are empowered to leverage existing Mashup components in a collaborative manner, combining and reusing company-internal and external resources within minutes into new value-added applications. Thereby, Enterprise Mashup environments act as intermediaries to match the supply of providers and the demand of consumers. Following the design science approach, we propose an interaction phase model artefact based on market transaction phases to structure the required intermediary features. By means of five case studies, we demonstrate the application of the designed model and identify three generic business model types for Enterprise Mashup intermediaries (directory, broker, and marketplace). So far, intermediaries following a real marketplace business model do not exist in the context of Enterprise Mashups, and this emerging paradigm requires further research.

  5. A general system for automatic biomedical image segmentation using intensity neighborhoods.

    PubMed

    Chen, Cheng; Ozolek, John A; Wang, Wei; Rohde, Gustavo K

    2011-01-01

    Image segmentation is important, with applications to several problems in biology and medicine. While extensively researched, current segmentation methods generally perform adequately in the applications for which they were designed, but often require extensive modifications or calibrations before being used in a different application. We describe an approach that, with few modifications, can be used in a variety of image segmentation problems. The approach is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. We describe methods for modeling rotations and variations in scale, as well as subset selection for training the classifiers. We show that the performance of our approach in tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than several algorithms specifically designed for each of these applications.
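    A sketch of the intensity-neighborhood idea: each pixel's feature vector is its raw 3x3 patch, fed to a supervised classifier. The image is synthetic, and the classifier choice (a random forest) is an assumption for illustration, not necessarily the paper's.

```python
# Pixel classification from intensity neighborhoods: 3x3 patches as
# features, supervised labels as targets. Image and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
img = rng.normal(0, 0.1, (40, 40))
img[10:30, 10:30] += 1.0              # a bright "tissue" region
truth = np.zeros((40, 40), int)
truth[10:30, 10:30] = 1

def patches(image):
    feats = []
    for r in range(1, image.shape[0] - 1):
        for c in range(1, image.shape[1] - 1):
            feats.append(image[r-1:r+2, c-1:c+2].ravel())
    return np.array(feats)

X = patches(img)                      # one 9-dim row per interior pixel
y = truth[1:-1, 1:-1].ravel()
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```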

  6. Integrated Workforce Modeling System

    NASA Technical Reports Server (NTRS)

    Moynihan, Gary P.

    2000-01-01

    There are several computer-based systems, currently in various phases of development at KSC, which encompass some component, aspect, or function of workforce modeling. These systems may offer redundant capabilities and/or incompatible interfaces. A systems approach to workforce modeling is necessary in order to identify and better address user requirements. This research has consisted of two primary tasks. Task 1 provided an assessment of existing and proposed KSC workforce modeling systems for their functionality and applicability to the workforce planning function. Task 2 resulted in the development of a proof-of-concept design for a systems approach to workforce modeling. The model incorporates critical aspects of workforce planning, including hires, attrition, and employee development.

  7. A priori discretization quality metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita

    2016-04-01

    In distributed hydrologic modelling, a watershed is treated as a set of small homogeneous units that address the spatial heterogeneity of the watershed being simulated. The ability of models to reproduce observed spatial patterns firstly depends on the spatial discretization, which is the process of defining homogeneous units in the form of grid cells, subwatersheds, or hydrologic response units, etc. It is common for hydrologic modelling studies to simply adopt a nominal or default discretization strategy without formally assessing alternative discretization levels. This approach lacks formal justification and is thus problematic. More formalized discretization strategies are either a priori or a posteriori with respect to building and running a hydrologic simulation model. A posteriori approaches tend to be ad hoc and compare model calibration and/or validation performance under various watershed discretizations. The construction and calibration of multiple versions of a distributed model can become a seriously limiting computational burden. Current a priori approaches are more formalized and compare overall heterogeneity statistics of dominant variables between candidate discretization schemes and input data or reference zones. While a priori approaches are efficient and do not require running a hydrologic model, they do not fully investigate the internal spatial pattern changes of variables of interest. Furthermore, the existing a priori approaches focus on landscape and soil data and do not assess impacts of discretization on stream channel definition, even though its significance has been noted by numerous studies. The primary goals of this study are to (1) introduce new a priori discretization quality metrics considering the spatial pattern changes of model input data, and (2) introduce a two-step discretization decision-making approach to compress extreme errors and meet user-specified discretization expectations through non-uniform discretization threshold modification. For the first time, the metrics quantify the routing-relevant information loss due to discretization, according to the relationship between in-channel routing length and flow velocity. Moreover, they identify and count the spatial pattern changes of dominant hydrological variables by overlaying candidate discretization schemes on the input data and accumulating variable changes in an area-weighted way. The metrics are straightforward and applicable to any semi-distributed or fully distributed hydrological model whose grid scale is greater than the input data resolution. The discretization metrics and decision-making approach are applied to the Grand River watershed located in southwestern Ontario, Canada, where discretization decisions are required for a semi-distributed modelling application. Results show that discretization-induced information loss increases monotonically as the discretization gets coarser. With regard to routing information loss in subbasin discretization, multiple points of interest, rather than just the watershed outlet, should be considered. Moreover, subbasin and HRU discretization decisions should not be considered independently, since the subbasin input significantly influences the complexity of the HRU discretization result. Finally, results show that the common and convenient approach of making uniform discretization decisions across the watershed domain performs worse than a metric-informed non-uniform discretization approach, since the latter is able to conserve more watershed heterogeneity under the same model complexity (number of computational units).
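    A hedged sketch of one a priori metric of this flavor (my illustration, not the paper's exact formula): overlay a coarse discretization on a categorical input raster and accumulate, area-weighted, the cells whose class differs from their unit's majority class. The raster and unit size are invented.

```python
# Area-weighted heterogeneity-loss sketch: count fine-resolution cells
# whose class disagrees with the majority class of their coarse unit.
import numpy as np

rng = np.random.default_rng(9)
landcover = rng.integers(0, 3, size=(12, 12))   # fine-resolution classes
UNIT = 4                                        # coarse units of 4x4 cells

mismatch_area = 0
for r in range(0, 12, UNIT):
    for c in range(0, 12, UNIT):
        block = landcover[r:r+UNIT, c:c+UNIT]
        majority = np.bincount(block.ravel()).argmax()
        mismatch_area += (block != majority).sum()

print("fraction of area losing its class:", mismatch_area / landcover.size)
```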

  8. Programming PHREEQC calculations with C++ and Python a comparative study

    USGS Publications Warehouse

    Charlton, Scott R.; Parkhurst, David L.; Muller, Mike

    2011-01-01

    The new IPhreeqc module provides an application programming interface (API) to facilitate coupling of other codes with the U.S. Geological Survey geochemical model PHREEQC. Traditionally, loose coupling of PHREEQC with other applications required methods to create PHREEQC input files, start external PHREEQC processes, and process PHREEQC output files. IPhreeqc eliminates most of this effort by providing direct access to PHREEQC capabilities through a component object model (COM), a library, or a dynamically linked library (DLL). Input and calculations can be specified through internally programmed strings, and all data exchange between an application and the module can occur in computer memory. This study compares simulations programmed in C++ and Python that are tightly coupled with IPhreeqc modules to the traditional simulations that are loosely coupled to PHREEQC. The study compares performance, quantifies effort, and evaluates lines of code and the complexity of the design. The comparisons show that IPhreeqc offers a more powerful and simpler approach for incorporating PHREEQC calculations into transport models and other applications that need to perform PHREEQC calculations. The IPhreeqc module facilitates the design of coupled applications and significantly reduces run times. Even a moderate knowledge of one of the supported programming languages allows more efficient use of PHREEQC than the traditional loosely coupled approach.
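    A minimal tightly coupled call sketch using the third-party phreeqpy wrapper around IPhreeqc; the import path, method names, and database file name follow phreeqpy's documented interface but are assumptions that should be verified against a local installation.

```python
# Sketch of an in-memory IPhreeqc call via the phreeqpy wrapper
# (import path and method names assumed from phreeqpy's docs).
from phreeqpy.iphreeqc.phreeqc_dll import IPhreeqc

phreeqc = IPhreeqc()
phreeqc.load_database("phreeqc.dat")   # assumed to be on the search path

input_string = """
SOLUTION 1
    temp  25
    pH    7.0
    Ca    1.0
    Cl    2.0
SELECTED_OUTPUT
    -pH true
END
"""
# All data exchange happens in memory; no input/output files are written.
phreeqc.run_string(input_string)
print(phreeqc.get_selected_output_array())
```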

  9. Magnetic microspheres and tissue model studies for therapeutic applications

    NASA Technical Reports Server (NTRS)

    Ramachandran, Narayanan; Mazuruk, Konstantin

    2004-01-01

    The use of magnetic fluids and magnetic particles in combinatorial hyperthermia therapy for cancer treatment is reviewed. The investigation approach adopted for producing thermoregulating particles and tissue model studies for studying particle retention and heating characteristics is discussed.

  10. Application of dynamic traffic assignment to advanced managed lane modeling.

    DOT National Transportation Integrated Search

    2013-11-01

    In this study, a demand estimation framework is developed for assessing the managed lane (ML) strategies by utilizing dynamic traffic assignment (DTA) modeling, instead of the traditional approaches that are based on the static traffic assignment...

  11. Role of Imaging Spectrometer Data for Model-based Cross-calibration of Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis John

    2014-01-01

    Site characterization benefits from imaging spectrometry to determine the spectral bi-directional reflectance of a well-understood surface. Topics include cross-calibration approaches, uncertainties, the role of imaging spectrometry, model-based site characterization, and application to product validation.

  12. Interactive Sound Propagation using Precomputation and Statistical Approximations

    NASA Astrophysics Data System (ADS)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance, and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at different ends of a space of interactive sound propagation techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.

  13. Local respiratory motion correction for PET/CT imaging: Application to lung cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamare, F., E-mail: frederic.lamare@chu-bordeaux.fr; Fernandez, P.; Fayad, H.

    Purpose: Despite multiple methodologies already proposed to correct respiratory motion in the whole PET imaging field of view (FOV), such approaches have not found wide acceptance in clinical routine. An alternative can be the local respiratory motion correction (LRMC) of data corresponding to a given volume of interest (VOI: organ or tumor). Advantages of LRMC include the use of a simple motion model, faster execution times, and organ-specific motion correction. The purpose of this study was to evaluate the performance of LRMC using various motion models for oncology (lung lesion) applications. Methods: Both simulated (NURBS-based 4D cardiac-torso phantom) and clinical studies (six patients) were used in the evaluation of the proposed LRMC approach. PET data were acquired in list mode and synchronized with respiration. The implemented approach first defines a VOI on the reconstructed motion-average image. Gated PET images of the VOI are subsequently reconstructed using only lines of response passing through the selected VOI and are used in combination with a center-of-gravity or an affine/elastic registration algorithm to derive the transformation maps corresponding to the respiration effects. These maps are finally integrated into the reconstruction process to produce a motion-free image over the lesion regions. Results: Although the center-of-gravity and affine algorithms achieved similar performance for individual lesion motion correction, the elastic model, applied either locally or to the whole FOV, led to an overall superior performance. The spatial tumor location was altered by 89% and 81% for the elastic model applied locally or to the whole FOV, respectively (compared to 44% and 39% for the center-of-gravity and affine models, respectively). This resulted in similar associated overall tumor volume changes of 84% and 80%, respectively (compared to 75% and 71% for the center-of-gravity and affine models, respectively). The application of the nonrigid deformation model in LRMC led to over an order of magnitude gain in computational efficiency of the correction relative to the application of the deformable model to the whole FOV. Conclusions: The results of this study support the use of LRMC as a flexible and efficient correction approach for respiratory motion effects for single lesions in the thoracic area.
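
    The center-of-gravity motion model described in the Methods lends itself to a compact sketch: estimate each gate's displacement from the intensity centroid inside the VOI and undo it before combining gates. The NumPy/SciPy example below is a hypothetical illustration of that step only; the actual method integrates the transformations into image reconstruction.

        # Toy center-of-gravity correction over respiratory-gated volumes.
        import numpy as np
        from scipy.ndimage import center_of_mass, shift

        rng = np.random.default_rng(1)
        # Stand-in for gated PET volumes of a VOI: (gate, z, y, x).
        gates = rng.random((8, 16, 32, 32))

        reference = center_of_mass(gates[0])
        corrected = []
        for volume in gates:
            # Displacement of the intensity centroid relative to gate 0.
            displacement = np.subtract(center_of_mass(volume), reference)
            # Undo the estimated motion before averaging the gates.
            corrected.append(shift(volume, -displacement, order=1))

        motion_free = np.mean(corrected, axis=0)
        print(motion_free.shape)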

  14. Methodology of decreasing software complexity using ontology

    NASA Astrophysics Data System (ADS)

    Dąbrowska-Kubik, Katarzyna

    2015-09-01

    In this paper a model of a web application's source code, based on the OSD ontology (Ontology for Software Development), is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of the source code by using a range of maintenance techniques, such as creating documentation and eliminating dead code, cloned code, or previously known bugs [1][2]. This approach should make it possible to save on the software maintenance costs of web applications.

  15. Multilayer Markov Random Field models for change detection in optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Benedek, Csaba; Shadaydeh, Maha; Kato, Zoltan; Szirányi, Tamás; Zerubia, Josiane

    2015-09-01

    In this paper, we give a comparative study of three multilayer Markov Random Field (MRF) based solutions proposed for change detection in optical remote sensing images, called the Multicue MRF, the Conditional Mixed Markov model, and the Fusion MRF. Our purposes are twofold. On one hand, we highlight the significance of this model family and set its members against various state-of-the-art approaches through a thematic analysis and quantitative tests. We discuss the advantages and drawbacks of class-comparison versus direct approaches, the usage of training data, the various targeted application fields, and different ways of generating Ground Truth, while informing the reader of the roles in which multilayer MRFs can be applied efficiently. On the other hand, we emphasize the differences between the three focused models at various levels, considering model structure, feature extraction, layer interpretation, change concept definition, parameter tuning, and performance. We provide qualitative and quantitative comparison results, primarily using a publicly available change detection database which contains aerial image pairs and Ground Truth change masks. We conclude that the discussed models are competitive against alternative state-of-the-art solutions when used as pre-processing filters in multitemporal optical image analysis. In addition, together they cover a large range of applications, considering the different usage options of the three approaches.
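
    Although the surveyed models are considerably richer (multiple layers, learned features, inter-layer terms), the MRF backbone they share can be illustrated with a binary Potts model over a change map, minimized here by a few greedy ICM sweeps. All data, costs, and the smoothness weight below are illustrative assumptions.

        # Toy pairwise-MRF change detection: per-pixel data costs plus a
        # Potts smoothness prior, optimized with iterated conditional modes.
        import numpy as np

        def icm_change_map(d_change, d_nochange, beta=1.0, sweeps=5):
            """Binary change mask minimizing unary costs + Potts penalty."""
            labels = (d_change < d_nochange).astype(int)  # initial guess
            h, w = labels.shape
            for _ in range(sweeps):
                for i in range(h):
                    for j in range(w):
                        nbrs = [labels[a, b]
                                for a, b in ((i - 1, j), (i + 1, j),
                                             (i, j - 1), (i, j + 1))
                                if 0 <= a < h and 0 <= b < w]
                        # Local energy of labeling this pixel 0 vs 1.
                        e0 = d_nochange[i, j] + beta * sum(n != 0 for n in nbrs)
                        e1 = d_change[i, j] + beta * sum(n != 1 for n in nbrs)
                        labels[i, j] = int(e1 < e0)
            return labels

        rng = np.random.default_rng(2)
        evidence = rng.random((32, 32))  # stand-in per-pixel change evidence
        mask = icm_change_map(1.0 - evidence, evidence)  # low cost = likely
        print(mask.sum(), "pixels flagged as changed")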

  16. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg

    2015-07-01

    In urban rainfall-runoff modeling, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations, and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, for periods ranging from a few hours up to more than one week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inference; these properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces its best results on the short forecast horizons that are typical of online applications.
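
    The external bias description can be sketched by decomposing observations into deterministic model output, an autocorrelated bias process, and white measurement noise. In the toy example below, an AR(1) process stands in for the bias; the runoff model and all parameter values are invented for illustration.

        # Schematic EBD decomposition: observed = model + bias + noise.
        import numpy as np

        rng = np.random.default_rng(3)
        t = np.arange(200)

        # Hypothetical deterministic runoff-model output.
        model_output = 5.0 + 2.0 * np.sin(2 * np.pi * t / 50)

        # AR(1) bias: a systematic, autocorrelated deviation.
        phi, sigma_b = 0.95, 0.3
        bias = np.zeros_like(model_output)
        for k in range(1, len(t)):
            bias[k] = phi * bias[k - 1] + rng.normal(0.0, sigma_b)

        observed = model_output + bias + rng.normal(0.0, 0.1, size=len(t))

        # Residuals against the raw model show the structure a bias
        # description captures and an i.i.d. error model would miss.
        residuals = observed - model_output
        print("lag-1 residual autocorrelation:",
              np.corrcoef(residuals[:-1], residuals[1:])[0, 1])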

  17. Community-Based Participatory Evaluation: The Healthy Start Approach

    PubMed Central

    Braithwaite, Ronald L.; McKenzie, Robetta D.; Pruitt, Vikki; Holden, Kisha B.; Aaron, Katrina; Hollimon, Chavone

    2013-01-01

    The use of community-based participatory research has gained momentum as a viable approach to academic and community engagement for research over the past 20 years. This article discusses an approach for extending the process with an emphasis on evaluation of a community partnership-driven initiative, and thus advances the concept of conducting community-based participatory evaluation (CBPE) through a model used by the Healthy Start project of the Augusta Partnership for Children, Inc., in Augusta, Georgia. Application of the CBPE approach underscores the importance of bilateral engagement between consumers and academic evaluators. The CBPE model shows promise as a reliable and credible evaluation approach for community-level assessment of health promotion programs. PMID:22461687

  18. Community-based participatory evaluation: the healthy start approach.

    PubMed

    Braithwaite, Ronald L; McKenzie, Robetta D; Pruitt, Vikki; Holden, Kisha B; Aaron, Katrina; Hollimon, Chavone

    2013-03-01

    The use of community-based participatory research has gained momentum as a viable approach to academic and community engagement for research over the past 20 years. This article discusses an approach for extending the process with an emphasis on evaluation of a community partnership-driven initiative, and thus advances the concept of conducting community-based participatory evaluation (CBPE) through a model used by the Healthy Start project of the Augusta Partnership for Children, Inc., in Augusta, Georgia. Application of the CBPE approach underscores the importance of bilateral engagement between consumers and academic evaluators. The CBPE model shows promise as a reliable and credible evaluation approach for community-level assessment of health promotion programs.

  19. Using complex networks for text classification: Discriminating informative and imaginative documents

    NASA Astrophysics Data System (ADS)

    de Arruda, Henrique F.; Costa, Luciano da F.; Amancio, Diego R.

    2016-01-01

    Statistical methods have been widely employed in recent years to grasp many language properties. The application of such techniques has allowed an improvement of several linguistic applications, such as machine translation and document classification. In the latter, many approaches have emphasised the semantic content of texts, as is the case with bag-of-words language models. These approaches have certainly yielded reasonable performance. However, some potential features, such as the structural organization of texts, have been used in only a few studies. In this context, we probe how features derived from textual structure analysis can be effectively employed in a classification task. More specifically, we performed a supervised classification aiming at discriminating informative from imaginative documents. Using a networked model that describes the local topological/dynamical properties of function words, we achieved an accuracy rate of up to 95%, which is much higher than that of similar networked approaches. A systematic analysis of feature relevance revealed that symmetry and accessibility measurements are among the most prominent network measurements. Our results suggest that these measurements could be used in related language applications, as they play a complementary role in characterising texts.
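
    A minimal version of such a networked text model links words that appear next to each other and extracts topological measurements as classification features. The networkx sketch below computes a few illustrative global measurements; the paper's model targets local properties of function words and different measurements, so this conveys only the flavor of the approach.

        # Word-adjacency network features for text classification (toy).
        import networkx as nx

        def text_to_network(tokens):
            """Link each word to the word that follows it in the text."""
            g = nx.Graph()
            for w1, w2 in zip(tokens, tokens[1:]):
                g.add_edge(w1, w2)
            return g

        def network_features(g):
            """A few global topological measurements usable as features."""
            degrees = [d for _, d in g.degree()]
            return {
                "mean_degree": sum(degrees) / len(degrees),
                "clustering": nx.average_clustering(g),
                "assortativity": nx.degree_assortativity_coefficient(g),
            }

        tokens = "the cat sat on the mat and the dog sat on the cat".split()
        print(network_features(text_to_network(tokens)))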

  20. Classification of HTTP Attacks: A Study on the ECML/PKDD 2007 Discovery Challenge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallagher, Brian; Eliassi-Rad, Tina

    2009-07-08

    As the world becomes more reliant on Web applications for commercial, financial, and medical transactions, cyber attacks on the World Wide Web are increasing in frequency and severity. Web applications provide an attractive alternative to traditional desktop applications due to their accessibility and ease of deployment. However, the accessibility of Web applications also makes them extremely vulnerable to attack. This inherent vulnerability is intensified by the distributed nature of Web applications and the complexity of configuring application servers. These factors have led to a proliferation of Web-based attacks, in which attackers surreptitiously inject code into HTTP requests, allowing them to execute arbitrary commands on remote systems and perform malicious activities such as reading, altering, or destroying sensitive data. One approach for dealing with HTTP-based attacks is to identify malicious code in incoming HTTP requests and eliminate bad requests before they are processed. Using machine learning techniques, we can build a classifier to automatically label requests as “Valid” or “Attack.” For this study, we develop a simple but effective HTTP attack classifier based on the vector space model used commonly for Information Retrieval. Our classifier not only separates attacks from valid requests, but can also identify specific attack types (e.g., “SQL Injection” or “Path Traversal”). We demonstrate the effectiveness of our approach through experiments on the ECML/PKDD 2007 Discovery Challenge data set. Specifically, we show that our approach achieves higher precision and recall than previous methods. In addition, our approach has a number of desirable characteristics, including robustness to missing contextual information, interpretability of models, and scalability.
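
    In the same spirit, a vector-space-model request classifier can be assembled in a few lines with scikit-learn. The tiny data set, the character n-gram tokenization, and the linear SVM below are illustrative choices, not the representation or classifier developed in the study.

        # Toy vector-space classifier over raw HTTP request strings.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        requests = [
            "GET /index.html HTTP/1.1",
            "GET /search?q=shoes HTTP/1.1",
            "GET /item?id=1' OR '1'='1 HTTP/1.1",
            "GET /../../etc/passwd HTTP/1.1",
        ]
        labels = ["Valid", "Valid", "SQL Injection", "Path Traversal"]

        # Character n-grams tolerate the unusual tokens in attack payloads.
        clf = make_pipeline(
            TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
            LinearSVC(),
        )
        clf.fit(requests, labels)

        print(clf.predict(["GET /view?id=2' OR '1'='1 HTTP/1.1"]))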
