Science.gov

Sample records for automated modelling interface

  1. Spud 1.0: generalising and automating the user interfaces of scientific computer models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Gorman, G. J.; Maddison, J. R.; Wilson, C. R.; Kramer, S. C.; Shipton, J.; Collins, G. S.; Cotter, C. J.; Piggott, M. D.

    2009-03-01

    The interfaces by which users specify the scenarios to be simulated by scientific computer models are frequently primitive, under-documented and ad hoc text files, which make the model in question difficult and error-prone to use and significantly increase its development cost. In this paper, we present a model-independent system, Spud, which formalises the specification of model input formats in terms of formal grammars. This is combined with an automated graphical user interface, which guides users to create valid model inputs based on the grammar provided, and a generic options-reading module, libspud, which minimises the development cost of adding model options. Together, these provide a user-friendly, well-documented, self-validating user interface which is applicable to a wide range of scientific models and which minimises the developer effort required to maintain and extend the model interface.
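The generic options-reading pattern the abstract describes can be sketched in a few lines. The following is a hypothetical illustration, not the actual libspud API: an options file is parsed once, and model code then requests values by path instead of parsing ad hoc text itself. The `OptionsReader` class, `get_option` method, and the XML layout are all invented for this sketch.

```python
import xml.etree.ElementTree as ET

# A toy options file in the spirit of a grammar-validated model input.
OPTIONS_XML = """
<options>
  <timestepping>
    <timestep>0.5</timestep>
    <finish_time>10.0</finish_time>
  </timestepping>
</options>
"""

class OptionsReader:
    """Minimal stand-in for a generic options-reading module: model code
    asks for values by path rather than hand-parsing input files."""
    def __init__(self, xml_text):
        self.root = ET.fromstring(xml_text)

    def get_option(self, path):
        node = self.root.find(path)
        if node is None:
            # Self-validating behaviour: missing options fail loudly.
            raise KeyError("option not set: " + path)
        return float(node.text)

reader = OptionsReader(OPTIONS_XML)
dt = reader.get_option("timestepping/timestep")
print(dt)  # 0.5
```

The payoff the abstract claims is that adding a new model option touches only the grammar and the option path, not any bespoke parsing code.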

  2. Spud 1.0: generalising and automating the user interfaces of scientific computer models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Gorman, G. J.; Maddison, J. R.; Wilson, C. R.; Kramer, S. C.; Shipton, J.; Collins, G. S.; Cotter, C. J.; Piggott, M. D.

    2008-07-01

    The interfaces by which users specify the scenarios to be simulated by scientific computer models are frequently primitive, under-documented and ad hoc text files, which make the model in question difficult and error-prone to use and significantly increase its development cost. In this paper, we present a model-independent system, Spud, which formalises the specification of model input formats in terms of formal grammars. This is combined with an automated graphical user interface, which guides users to create valid model inputs based on the grammar provided, and a generic options-reading module which minimises the development cost of adding model options. Together, these provide a user-friendly, well-documented, self-validating user interface which is applicable to a wide range of scientific models and which minimises the developer effort required to maintain and extend the model interface.

  3. Spud and FLML: generalising and automating the user interfaces of scientific computer models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Maddison, J. R.; Gorman, G. J.; Wilson, C. R.; Kramer, S. C.; Shipton, J.; Collins, G. S.; Cotter, C. J.; Piggott, M. D.

    2009-04-01

    The interfaces by which users specify the scenarios to be simulated by scientific computer models are frequently primitive, under-documented and ad hoc text files, which make the model in question difficult and error-prone to use and significantly increase its development cost. We present a model-independent system, Spud [1], which formalises the specification of model input formats in terms of formal grammars. This is combined with an automatically generated graphical user interface, which guides users to create valid model inputs based on the grammar provided, and a generic options-reading module which minimises the development cost of adding model options. We further present FLML, the Fluidity Markup Language. FLML applies Spud to the Imperial College Ocean Model (ICOM), resulting in a graphically driven system which radically improves the usability of ICOM. As well as being a step forward for ICOM, FLML illustrates how the Spud system can be applied to an existing complex ocean model, highlighting the potential of Spud as a user interface for other codes in the ocean modelling community. [1] Ham, D. A. et al., Spud 1.0: generalising and automating the user interfaces of scientific computer models, Geosci. Model Dev. Discuss., 1, 125-146, 2008.

  4. Automated Fluid Interface System (AFIS)

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Automated remote fluid servicing will be necessary for future space missions, as future satellites will be designed for on-orbit consumable replenishment. In order to develop an on-orbit remote servicing capability, a standard interface between a tanker and the receiving satellite is needed. The objective of the Automated Fluid Interface System (AFIS) program is to design, fabricate, and functionally demonstrate compliance with all design requirements for an automated fluid interface system. A description and documentation of the Fairchild AFIS design is provided.

  5. Testing of the Automated Fluid Interface System

    NASA Technical Reports Server (NTRS)

    Johnston, A. S.; Tyler, Tony R.

    1998-01-01

    The Automated Fluid Interface System (AFIS) is an advanced development prototype satellite servicer. The device was designed to transfer consumables from one spacecraft to another. An engineering model was built and underwent development testing at Marshall Space Flight Center. While the current AFIS is not suitable for spaceflight, testing and evaluation of the AFIS provided significant experience which would be beneficial in building a flight unit.

  6. Development and testing of the Automated Fluid Interface System

    NASA Technical Reports Server (NTRS)

    Milton, Martha E.; Tyler, Tony R.

    1993-01-01

    The Automated Fluid Interface System (AFIS) is an advanced development program aimed at becoming the standard interface for satellite servicing for years to come. The AFIS will be capable of transferring propellants, fluids, gases, power, and cryogens from a tanker to an orbiting satellite. The AFIS program currently under consideration is a joint venture between the NASA/Marshall Space Flight Center and Moog, Inc. An engineering model has been built and is undergoing development testing to investigate the mechanism's abilities.

  7. Development and testing of the Automated Fluid Interface System

    NASA Astrophysics Data System (ADS)

    Milton, Martha E.; Tyler, Tony R.

    1993-05-01

    The Automated Fluid Interface System (AFIS) is an advanced development program aimed at becoming the standard interface for satellite servicing for years to come. The AFIS will be capable of transferring propellants, fluids, gases, power, and cryogens from a tanker to an orbiting satellite. The AFIS program currently under consideration is a joint venture between the NASA/Marshall Space Flight Center and Moog, Inc. An engineering model has been built and is undergoing development testing to investigate the mechanism's abilities.

  8. Automated identification and indexing of dislocations in crystal interfaces

    DOE PAGES

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

    Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.
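The discrete Burgers circuit idea can be illustrated with a toy calculation. The sketch below assumes a simple analytic screw-dislocation displacement field rather than the paper's bond-mapping to a coherent-interface template; it accumulates displacement increments around a closed loop, and the closure failure recovers the Burgers vector only when the loop encloses the dislocation core.

```python
import math

b = 2.5  # Burgers vector magnitude of the toy dislocation

def circuit(cx, cy, r, n=64):
    """Closed counter-clockwise loop of n points on a circle."""
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

def burgers_closure(points):
    """Discrete Burgers circuit: sum the elastic displacement increments
    of a screw dislocation (u_z = b*theta/2pi, core at the origin) along
    the loop. The closure failure equals b iff the loop encloses the core."""
    total = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        dtheta = math.atan2(y2, x2) - math.atan2(y1, x1)
        # Unwrap the atan2 branch cut: each step is a small rotation.
        if dtheta > math.pi:
            dtheta -= 2 * math.pi
        elif dtheta < -math.pi:
            dtheta += 2 * math.pi
        total += b * dtheta / (2 * math.pi)
    return total

enclosing = burgers_closure(circuit(0.0, 0.0, 1.0))  # loop around the core
empty = burgers_closure(circuit(3.0, 0.0, 0.5))      # loop in perfect crystal
print(enclosing, empty)
```

The enclosing loop yields the full Burgers vector (2.5 here) while the empty loop sums to zero, which is the lattice-independent signature the algorithm exploits.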

  9. Automated identification and indexing of dislocations in crystal interfaces

    SciTech Connect

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

    Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.

  10. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  11. Automation Interfaces of the Orion GNC Executive Architecture

    NASA Technical Reports Server (NTRS)

    Hart, Jeremy

    2009-01-01

    This viewgraph presentation describes Orion mission's automation Guidance, Navigation and Control (GNC) architecture and interfaces. The contents include: 1) Orion Background; 2) Shuttle/Orion Automation Comparison; 3) Orion Mission Sequencing; 4) Orion Mission Sequencing Display Concept; and 5) Status and Forward Plans.

  12. Towards automation of user interface design

    NASA Technical Reports Server (NTRS)

    Gastner, Rainer; Kraetzschmar, Gerhard K.; Lutz, Ernst

    1992-01-01

    This paper suggests an approach to automatic software design in the domain of graphical user interfaces. Existing user interface management systems (UIMSs) still have drawbacks: they basically offer only quantitative layout specifications via direct manipulation. Our approach suggests a convenient way to obtain a default graphical user interface which may be customized and redesigned easily in further prototyping cycles.

  13. Space station automation and robotics study. Operator-systems interface

    NASA Technical Reports Server (NTRS)

    1984-01-01

    This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator-system interface (OSI), establish the technologies needed to meet these requirements, and forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.

  14. Aircraft automation: the problem of the pilot interface.

    PubMed

    Bergeron, H P; Hinton, D A

    1985-02-01

    Aircraft operations, particularly in the IFR environment, are rapidly becoming very complex. Studies have shown that this complexity can frequently lead to accidents and incidents. Results of studies performed at NASA and elsewhere are presented to show that one of the major themes evident both in the accidents and incidents and in the research performed to solve the problems associated with them is human error. Examples of various incidents and blunders, recorded in several studies, illustrate and emphasize the hypothesis: "The more automated and complex systems become, the more prone they are to human error. The problem can be eliminated or reduced only if good human-factors principles are incorporated in the implementation of the systems, to guarantee a good man/machine interface." Aircraft systems technology (e.g., electronics, avionics, automation), however, is evolving and developing at a very high rate. Examples of research are presented showing where this emerging technology has been employed to reduce the complexity and enhance the safety and utility of aircraft operations.

  15. Geographic information system/watershed model interface

    USGS Publications Warehouse

    Fisher, Gary T.

    1989-01-01

    Geographic information systems allow for the interactive analysis of spatial data related to water-resources investigations. A conceptual design for an interface between a geographic information system and a watershed model includes functions for the estimation of model parameter values. Design criteria include ease of use, minimal equipment requirements, a generic data-base management system, and use of a macro language. An application is demonstrated for a 90.1-square-kilometer subbasin of the Patuxent River near Unity, Maryland, that performs automated derivation of watershed parameters for hydrologic modeling.

  16. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, that is, with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be succinct. The report discusses the underlying concepts and the formal methods for this approach. Two examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.

  17. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Shafto, Michael; Meyer, George; Clancy, Daniel (Technical Monitor)

    2001-01-01

    This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, i.e., with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be as succinct as possible. The report discusses the underlying concepts and the formal methods for this approach. Several examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.

  18. Designing effective human-automation-plant interfaces: a control-theoretic perspective.

    PubMed

    Jamieson, Greg A; Vicente, Kim J

    2005-01-01

    In this article, we propose the application of a control-theoretic framework to human-automation interaction. The framework consists of a set of conceptual distinctions that should be respected in automation research and design. We demonstrate how existing automation interface designs in some nuclear plants fail to recognize these distinctions. We further show the value of the approach by applying it to modes of automation. The design guidelines that have been proposed in the automation literature are evaluated from the perspective of the framework. This comparison shows that the framework reveals insights that are frequently overlooked in this literature. A new set of design guidelines is introduced that builds upon the contributions of previous research and draws complementary insights from the control-theoretic framework. The result is a coherent and systematic approach to the design of human-automation-plant interfaces that will yield more concrete design criteria and a broader set of design tools. Applications of this research include improving the effectiveness of human-automation interaction design and the relevance of human-automation interaction research.

  19. Cooperative control - The interface challenge for men and automated machines

    NASA Technical Reports Server (NTRS)

    Hankins, W. W., III; Orlando, N. E.

    1984-01-01

    The research issues associated with the increasing autonomy and independence of machines and their evolving relationships to human beings are explored. The research, conducted by Langley Research Center (LaRC), will produce a new social work order in which the complementary attributes of robots and human beings, which include robots' greater strength and precision and humans' greater physical and intellectual dexterity, are necessary for systems of cooperation. Attention is given to the tools for performing the research, including the Intelligent Systems Research Laboratory (ISRL) and industrial manipulators, as well as to the research approaches taken by the Automation Technology Branch (ATB) of LaRC to achieve high automation levels. The ATB is focusing on artificial intelligence research through DAISIE, a system which tends to organize its environment into hierarchical controller/planner abstractions.

  20. Alloy Interface Interdiffusion Modeled

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Garces, Jorge E.; Abel, Phillip B.

    2003-01-01

    With renewed interest in developing nuclear-powered deep space probes, attention will return to improving the metallurgical processing of potential nuclear fuels so that they remain dimensionally stable over the years required for a successful mission. Previous work on fuel alloys at the NASA Glenn Research Center was primarily empirical, with virtually no continuing research. Even when empirical studies are exacting, they often fail to provide enough insight to guide future research efforts. In addition, from a fundamental theoretical standpoint, the actinide metals (which include materials used for nuclear fuels) pose a severe challenge to modern electronic-structure theory. Recent advances in quantum approximate atomistic modeling, coupled with first-principles derivation of needed input parameters, can help researchers develop new alloys for nuclear propulsion.

  1. Automation and Accountability in Decision Support System Interface Design

    ERIC Educational Resources Information Center

    Cummings, Mary L.

    2006-01-01

    When the human element is introduced into decision support system design, entirely new layers of social and ethical issues emerge but are not always recognized as such. This paper discusses those ethical and social impact issues specific to decision support systems and highlights areas that interface designers should consider during design with an…

  2. Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.

    2000-01-01

    The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi-stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.

  3. Model-Based Design of Air Traffic Controller-Automation Interaction

    NASA Technical Reports Server (NTRS)

    Romahn, Stephan; Callantine, Todd J.; Palmer, Everett A.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    A model of controller and automation activities was used to design the controller-automation interactions necessary to implement a new terminal area air traffic management concept. The model was then used to design a controller interface that provides the requisite information and functionality. Using data from a preliminary study, the Crew Activity Tracking System (CATS) was used to help validate the model as a computational tool for describing controller performance.

  4. Task-focused modeling in automated agriculture

    NASA Astrophysics Data System (ADS)

    Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack

    1993-01-01

    Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.

  5. Automated two-dimensional interface for capillary gas chromatography

    DOEpatents

    Strunk, Michael R.; Bechtold, William E.

    1996-02-20

    A multidimensional gas chromatograph (GC) system having wide bore capillary and narrow bore capillary GC columns in series and having a novel system interface. Heart cuts from a high flow rate sample, separated by a wide bore GC column, are collected and directed to a narrow bore GC column with carrier gas injected at a lower flow compatible with a mass spectrometer. A bimodal six-way valve is connected with the wide bore GC column outlet and a bimodal four-way valve is connected with the narrow bore GC column inlet. A trapping and retaining circuit with a cold trap is connected with the six-way valve and a transfer circuit interconnects the two valves. The six-way valve is manipulated between first and second mode positions to collect analyte, and the four-way valve is manipulated between third and fourth mode positions to allow carrier gas to sweep analyte from a deactivated cold trap, through the transfer circuit, and then to the narrow bore GC capillary column for separation and subsequent analysis by a mass spectrometer. Rotary valves have substantially the same bore width as their associated columns to minimize flow irregularities and resulting sample peak deterioration. The rotary valves are heated separately from the GC columns to avoid temperature lag and resulting sample deterioration.

  6. Automated two-dimensional interface for capillary gas chromatography

    DOEpatents

    Strunk, M.R.; Bechtold, W.E.

    1996-02-20

    A multidimensional gas chromatograph (GC) system is disclosed which has wide bore capillary and narrow bore capillary GC columns in series and has a novel system interface. Heart cuts from a high flow rate sample, separated by a wide bore GC column, are collected and directed to a narrow bore GC column with carrier gas injected at a lower flow compatible with a mass spectrometer. A bimodal six-way valve is connected with the wide bore GC column outlet and a bimodal four-way valve is connected with the narrow bore GC column inlet. A trapping and retaining circuit with a cold trap is connected with the six-way valve and a transfer circuit interconnects the two valves. The six-way valve is manipulated between first and second mode positions to collect analyte, and the four-way valve is manipulated between third and fourth mode positions to allow carrier gas to sweep analyte from a deactivated cold trap, through the transfer circuit, and then to the narrow bore GC capillary column for separation and subsequent analysis by a mass spectrometer. Rotary valves have substantially the same bore width as their associated columns to minimize flow irregularities and resulting sample peak deterioration. The rotary valves are heated separately from the GC columns to avoid temperature lag and resulting sample deterioration. 3 figs.

  7. RCrane: semi-automated RNA model building

    PubMed Central

    Keating, Kevin S.; Pyle, Anna Marie

    2012-01-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems. PMID:22868764

  8. Automating risk analysis of software design models.

    PubMed

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688
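The identification-tree/mitigation-tree pairing can be caricatured as predicates over data-flow-diagram elements. Everything below (the element attributes, the three threat patterns, the advised fixes) is invented for illustration; AutSEC's actual tree structures are richer and also weigh specification requirements and cost.

```python
# Hypothetical data-flow diagram: each element carries the attributes
# a threat pattern might inspect.
dfd = [
    {"id": "login_form", "type": "process", "validates_input": False},
    {"id": "user_db", "type": "data_store", "encrypted": False},
    {"id": "net_flow", "type": "data_flow", "crosses_trust_boundary": True},
]

# "Identification trees" collapsed to one predicate per threat.
identification_trees = [
    ("SQL injection",
     lambda e: e["type"] == "process" and not e.get("validates_input", True)),
    ("Data disclosure",
     lambda e: e["type"] == "data_store" and not e.get("encrypted", True)),
    ("Tampering in transit",
     lambda e: e["type"] == "data_flow" and e.get("crosses_trust_boundary", False)),
]

# "Mitigation trees" collapsed to one advised fix per threat.
mitigations = {
    "SQL injection": "parameterise queries",
    "Data disclosure": "encrypt data at rest",
    "Tampering in transit": "use TLS on the flow",
}

threats = [(e["id"], name)
           for e in dfd
           for name, match in identification_trees
           if match(e)]
for elem, name in threats:
    print(elem, "->", name, "|", mitigations[name])
```

The point of automating this matching is that a developer with no security background gets a threat list and advised mitigations directly from the design model.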

  9. Automating Risk Analysis of Software Design Models

    PubMed Central

    Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P.

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688

  10. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.
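The Bayesian Belief Network mechanics can be illustrated with a two-parent toy network evaluated by enumeration. The variables and probabilities below are hypothetical stand-ins for the SME-elicited causal factors; the example shows how inserting a technology (here, one that reduces pilot complacency) lowers the marginal probability of an automation error.

```python
# Toy network: Complacency (C) and AutomationFailure (F) -> AutomationError (E).
# All numbers are invented for illustration.
P_E_GIVEN = {  # P(E=1 | C, F)
    (1, 1): 0.90, (1, 0): 0.40,
    (0, 1): 0.60, (0, 0): 0.05,
}

def p_error(p_c, p_f):
    """Marginal P(E=1) by enumerating the parents' joint distribution."""
    total = 0.0
    for c in (0, 1):
        for f in (0, 1):
            pc = p_c if c else 1.0 - p_c
            pf = p_f if f else 1.0 - p_f
            total += pc * pf * P_E_GIVEN[(c, f)]
    return total

baseline = p_error(0.30, 0.10)
# A hypothetical AvSP technology that halves the complacency rate:
with_tech = p_error(0.15, 0.10)
print(baseline, with_tech)
```

In the full FLAP model the same computation, scaled up to many latent and active factors, is what quantifies the relative risk reduction attributed to each inserted technology.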

  11. Design Through Manufacturing: The Solid Model - Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2003-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts which reflect every detail of the finished product. Ideally, these models should fulfill two very important functions: (1) they must provide numerical control information for automated manufacturing of precision parts, and (2) they must enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in space missions. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. The research performed during the fellowship period investigated the transition process from the solid CAD model to the FEA stress analysis model with the final goal of creating an automatic interface between the two. During the period of the fellowship a detailed multi-year program for the development of such an interface was created. The ultimate goal of this program will be the development of a fully parameterized automatic ProE/FEA translator for parts and assemblies, with the incorporation of data base management into the solution, and ultimately including computational fluid dynamics and thermal modeling in the interface.

  12. Automation model of sewerage rehabilitation planning.

    PubMed

    Yang, M D; Su, T C

    2006-01-01

    The major steps of sewerage rehabilitation include inspection of the sewerage, assessment of structural conditions, computation of structural condition grades, and determination of rehabilitation methods and materials. Conventionally, sewerage rehabilitation planning relies on experts with professional backgrounds, a process that is tedious and time-consuming. This paper proposes an automation model for planning optimal sewerage rehabilitation strategies by integrating image processing, clustering technology, optimization, and visualization. Firstly, image processing techniques, such as wavelet transformation and co-occurrence feature extraction, were employed to extract various characteristics of structural failures from CCTV inspection images. Secondly, a classification neural network was established to automatically interpret the structural conditions by comparing the extracted features with typical failures in a databank. Then, to achieve optimal rehabilitation efficiency, a genetic algorithm was used to determine appropriate rehabilitation methods and substitution materials for the pipe sections at risk of malfunction or even collapse. Finally, the results from the automation model can be visualized in a geographic information system in which essential information on the sewer system and the sewerage rehabilitation plans is graphically displayed. For demonstration, the automation model of optimal sewerage rehabilitation planning was applied to a sewer system in east Taichung, Chinese Taiwan.
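
    The genetic-algorithm step above, selecting a rehabilitation method per pipe section, can be sketched as follows. The methods, costs, and grade thresholds are hypothetical placeholders, not the paper's calibrated values; the fitness simply trades off cost against a heavy penalty for under-treating a badly graded section.

```python
import random

random.seed(7)  # deterministic demo run

# Hypothetical rehabilitation options: (method, unit cost, worst grade it can fix)
METHODS = [("patch", 10, 2), ("lining", 40, 4), ("replacement", 100, 5)]

def fitness(plan, grades):
    """Negative total cost, with a heavy penalty for under-treating a section."""
    cost = 0
    for choice, grade in zip(plan, grades):
        _, unit_cost, capacity = METHODS[choice]
        cost += unit_cost
        if grade > capacity:      # chosen method too weak for this failure grade
            cost += 1000
    return -cost                  # the GA maximizes fitness

def evolve(grades, pop_size=30, generations=60):
    n = len(grades)
    pop = [[random.randrange(len(METHODS)) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, grades), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:             # point mutation
                child[random.randrange(n)] = random.randrange(len(METHODS))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, grades))

# Condition grades (1 = sound ... 5 = collapse risk) for four pipe sections
plan = evolve([1, 3, 5, 2])
```

    The elitist survivor selection guarantees the best plan found so far is never lost between generations.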

  14. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Most space mission operations systems use a timeline-based representation for operations modeling. Most model a core set of state and resource types, and most provide similar capabilities on top of this modeling to enable (semi-)automated schedule generation. In this paper we explore the commonality of representation and services for these timelines. These commonalities offer the potential to be harmonized to enable interoperability and re-use.

  15. Atomistic modeling of dislocation-interface interactions

    SciTech Connect

    Wang, Jian; Valone, Steven M; Beyerlein, Irene J; Misra, Amit; Germann, T. C.

    2011-01-31

    Using atomic scale models and interface defect theory, we first classify interface structures into a few types with respect to geometrical factors, then study the interfacial shear response, and further simulate dislocation-interface interactions using molecular dynamics. The results show that the atomic-scale structural characteristics of both heterophase and homophase interfaces play a crucial role in (i) their mechanical responses and (ii) the ability of incoming lattice dislocations to transmit across them.

  16. Automated Expert Modeling and Student Evaluation

    SciTech Connect

    2012-09-12

    AEMASE searches a database of recorded events for combinations of events that are of interest. It compares matching combinations to a statistical model to determine similarity to previous events of interest and alerts the user as new matching examples are found. AEMASE is currently used by weapons tactics instructors to find situations of interest in recorded tactical training scenarios. AEMASE builds on a sub-component, the Relational Blackboard (RBB), which is being released as open-source software, by adding interactive expert model construction (automated knowledge capture) and re-evaluation of scenario data.
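
    The core idea, fitting a statistical model to flagged examples and alerting on similar new events, can be sketched minimally. The event features and the per-feature mean/spread "model" here are hypothetical illustrations, not AEMASE's actual representation.

```python
from statistics import mean, stdev

# Hypothetical event features extracted from recorded training scenarios
examples_of_interest = [
    {"range_km": 12.0, "closing_speed": 300.0},
    {"range_km": 10.0, "closing_speed": 320.0},
    {"range_km": 11.0, "closing_speed": 310.0},
]

def fit_model(examples):
    """Very simple 'statistical model': per-feature mean and sample spread."""
    keys = examples[0].keys()
    return {k: (mean(e[k] for e in examples), stdev(e[k] for e in examples))
            for k in keys}

def is_similar(model, event, z_max=3.0):
    """Alert when every feature lies within z_max standard deviations."""
    return all(abs(event[k] - m) <= z_max * s for k, (m, s) in model.items())

model = fit_model(examples_of_interest)
```

    As new events stream in, `is_similar` plays the role of the alerting check against previously captured examples of interest.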

  18. Automated system for measuring the surface dilational modulus of liquid–air interfaces

    NASA Astrophysics Data System (ADS)

    Stadler, Dominik; Hofmann, Matthias J.; Motschmann, Hubert; Shamonin, Mikhail

    2016-06-01

    The surface dilational modulus is a crucial parameter for describing the rheological properties of aqueous surfactant solutions. These properties are important for many technological processes. The present paper describes a fully automated instrument based on the oscillating bubble technique. It works in the frequency range from 1 Hz to 500 Hz, where surfactant exchange dynamics governs the relaxation process. The originality of the instrument design lies in the consistent combination of modern measurement technologies with advanced imaging and signal processing algorithms. Key steps on the way to reliable and precise measurements are the excitation of a harmonic oscillation of the bubble, phase-sensitive evaluation of the pressure response, adjustment and maintenance of a half-sphere bubble geometry to compensate for thermal drifts, contour tracing of the bubble's video images, removal of noise and artefacts within the image to improve the reliability of the measurement, and, in particular, a complex trigger scheme for the measurement of the oscillation amplitude, which may vary with frequency as a result of resonances. The corresponding automation and programming tasks are described in detail. Various programming strategies, such as the use of MATLAB® software and native C++ code, are discussed. An advance in the measurement technique is demonstrated by a fully automated measurement. The instrument has the potential to mature into a standard technique in the fields of colloid and interface chemistry and significantly extends the frequency range relative to established competing techniques and state-of-the-art devices based on the same measurement principle.
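
    The phase-sensitive evaluation of the pressure response mentioned above is, in essence, lock-in detection: projecting the sampled signal onto sine/cosine references at the excitation frequency. A minimal sketch with a synthetic signal (the numbers are illustrative, not the instrument's):

```python
import math

def lock_in(samples, f, fs):
    """Phase-sensitive detection: project the sampled signal onto cosine
    and sine references at the excitation frequency f (sample rate fs)."""
    n = len(samples)
    x = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    y = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    amplitude = 2 * math.hypot(x, y) / n
    phase = math.atan2(-y, x)          # phase of A*cos(2*pi*f*t + phase)
    return amplitude, phase

# Synthetic pressure response: 50 Hz, amplitude 0.8, phase 0.3 rad,
# sampled at 10 kHz for exactly one second (an integer number of cycles).
fs, f = 10_000, 50
samples = [0.8 * math.cos(2 * math.pi * f * i / fs + 0.3) for i in range(fs)]
amp, ph = lock_in(samples, f, fs)
```

    Integrating over an integer number of cycles makes the sine/cosine projections exactly orthogonal to the off-frequency content, which is why amplitude and phase are recovered so cleanly.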

  20. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Numerous automated and semi-automated planning & scheduling systems have been developed for space applications. Most of these systems are model-based in that they encode the domain knowledge necessary to predict spacecraft state and resources based on initial conditions and a proposed activity plan. The spacecraft state and resources are often modeled as a series of timelines, with a timeline or set of timelines representing each state or resource that is key to the operations of the spacecraft. In this paper, we first describe a basic timeline representation that can represent a set of state, resource, timing, and transition constraints. We then describe a number of planning and scheduling systems designed for space applications (and in many cases deployed for use on ongoing missions) and describe how they do and do not map onto this timeline model.
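
    A generalized timeline of the kind described above can be sketched as an ordered set of valued tokens over a state or resource variable, with a capacity constraint checked at token boundaries. The variable names and the power example are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Timeline:
    """A generalized timeline: (start, end, value) tokens on one state or
    resource variable, with a capacity limit for resource timelines."""
    name: str
    capacity: float = float("inf")
    tokens: list = field(default_factory=list)   # (start, end, value)

    def add(self, start, end, value):
        self.tokens.append((start, end, value))

    def usage_at(self, t):
        """Sum of all token values active at time t."""
        return sum(v for s, e, v in self.tokens if s <= t < e)

    def violations(self, horizon):
        """Times (sampled at token boundaries) where capacity is exceeded."""
        points = sorted({s for s, _, _ in self.tokens} |
                        {e for _, e, _ in self.tokens})
        return [t for t in points if t < horizon and self.usage_at(t) > self.capacity]

power = Timeline("power_w", capacity=100)
power.add(0, 10, 60)     # instrument A
power.add(5, 15, 50)     # instrument B overlaps A over t = 5..10
```

    A scheduler built on such timelines would move or delete tokens until `violations` is empty; checking only token boundaries suffices because usage is piecewise constant.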

  1. Automation life-cycle cost model

    NASA Technical Reports Server (NTRS)

    Gathmann, Thomas P.; Reeves, Arlinda J.; Cline, Rick; Henrion, Max; Ruokangas, Corinne

    1992-01-01

    The problem domain being addressed by this contractual effort can be summarized by the following list: Automation and Robotics (A&R) technologies appear to be viable alternatives to current, manual operations; life-cycle cost models are typically judged with suspicion due to implicit assumptions and little associated documentation; and uncertainty is a reality for increasingly complex problems, yet few models explicitly account for its effect on the solution space. The objectives for this effort range from the near-term (1-2 years) to the far-term (3-5 years). In the near-term, the envisioned capabilities of the modeling tool are annotated. In addition, a framework is defined and developed in the Decision Modelling System (DEMOS) environment. Our approach is summarized as follows: assess desirable capabilities (structured into near- and far-term); identify useful existing models/data; identify parameters for utility analysis; define the tool framework; encode a scenario thread for model validation; and provide a transition path for tool development. This report contains all relevant technical progress made on this contractual effort.
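
    Explicitly accounting for uncertainty in a life-cycle cost model, as argued above, is commonly done by Monte Carlo propagation of input distributions. A minimal sketch with hypothetical triangular distributions (not the effort's actual cost data):

```python
import random

def lifecycle_cost(acquisition, annual_ops, years, discount=0.07):
    """Deterministic discounted life-cycle cost."""
    return acquisition + sum(annual_ops / (1 + discount) ** y
                             for y in range(1, years + 1))

def monte_carlo_lcc(n=10_000, seed=42):
    """Propagate uncertainty in acquisition and operations costs
    (hypothetical triangular distributions) through the cost model."""
    rng = random.Random(seed)
    draws = [
        lifecycle_cost(
            rng.triangular(8e6, 15e6, 10e6),     # acquisition: low, high, mode
            rng.triangular(0.5e6, 1.5e6, 0.8e6), # annual operations
            years=10,
        )
        for _ in range(n)
    ]
    draws.sort()
    return draws[n // 2], draws[int(0.95 * n)]   # median and 95th percentile

median, p95 = monte_carlo_lcc()
```

    Reporting a median and an upper percentile, rather than a single point estimate, makes the implicit assumptions visible and addresses the "judged with suspicion" problem the abstract describes.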

  2. A Diffuse Interface Model with Immiscibility Preservation

    PubMed Central

    Tiwari, Arpit; Freund, Jonathan B.; Pantano, Carlos

    2013-01-01

    A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent to a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion leading to qualitatively incorrect results. PMID:24058207

  3. Bayesian Safety Risk Modeling of Human-Flightdeck Automation Interaction

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2015-01-01

    The use of automated systems in airliners has increased fuel efficiency, added capabilities, and enhanced safety and reliability, as well as improved passenger comfort, since their introduction in the late 1980s. However, the originally expected automation benefits, including reductions in flight crew workload, human error, and training requirements, were not fully achieved. Instead, automation introduced new failure modes, redistributed (and sometimes increased) workload, brought in new cognitive and attention demands, and increased training requirements. Modern airliners have numerous flight modes, providing more flexibility (and inherently more complexity) to the flight crew. However, the price to pay for the increased flexibility is the need for increased mode awareness, as well as the need to supervise, understand, and predict automated system behavior. Also, over-reliance on automation is linked to manual flight skill degradation and complacency in commercial pilots. As a result, recent accidents involving human errors are often caused by the interactions between humans and the automated systems (e.g., the breakdown in man-machine coordination), deteriorated manual flying skills, and/or loss of situational awareness due to heavy dependence on automated systems. This paper describes the development of the increased complexity and reliance on automation baseline model, named FLAP for FLightdeck Automation Problems. The model development process starts with a comprehensive literature review followed by the construction of a framework comprised of high-level causal factors leading to an automation-related flight anomaly. The framework was then converted into a Bayesian Belief Network (BBN) using the Hugin Software v7.8. The effects of automation on flight crew are incorporated into the model, including flight skill degradation, increased cognitive demand and training requirements along with their interactions. Besides flight crew deficiencies, automation system
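
    The BBN mechanics, marginalizing an anomaly probability over causal-factor nodes and measuring the risk reduction when a technology mitigates one factor, can be sketched minimally. All probabilities below are hypothetical placeholders, not FLAP's SME-elicited conditional probability tables:

```python
# P(anomaly | complacency, skill degradation): hypothetical CPT
P_ANOMALY = {
    (True, True): 0.30,
    (True, False): 0.12,
    (False, True): 0.10,
    (False, False): 0.02,
}

def p_anomaly(p_comp=0.2, p_skill=0.3):
    """Marginalize over the two (assumed independent) causal factors."""
    total = 0.0
    for comp in (True, False):
        for skill in (True, False):
            weight = (p_comp if comp else 1 - p_comp) * \
                     (p_skill if skill else 1 - p_skill)
            total += weight * P_ANOMALY[(comp, skill)]
    return total

baseline = p_anomaly()
# A technology that eliminates skill degradation (e.g. manual-flying training):
mitigated = p_anomaly(p_skill=0.0)
reduction = 1 - mitigated / baseline   # relative risk reduction
```

    Real tools such as Hugin perform this marginalization over far larger networks, but the insert-a-technology-and-compare workflow is the same.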

  4. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. With an implemented example, we illustrate how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
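
    The compile-by-transformation-sequence idea can be sketched as a pipeline of functions, each producing a more specialized model. The base model contents and transformation names below are hypothetical, not the paper's actual Reaction Wheel Assembly models:

```python
# Hypothetical general base model of a simple electromechanical device
base_model = {
    "components": ["motor", "bearing", "tachometer"],
    "behavior": {"motor": "torque = k * current", "bearing": "friction model"},
    "tolerances": {"motor": 0.01, "bearing": 0.05, "tachometer": 0.02},
}

def drop_tolerances(model):
    """Abstraction step: troubleshooting does not need tolerance data."""
    m = dict(model)
    m.pop("tolerances", None)
    return m

def keep_components(relevant):
    """Specialization step: restrict the model to task-relevant components."""
    def transform(model):
        m = dict(model)
        m["components"] = [c for c in m["components"] if c in relevant]
        return m
    return transform

def compile_model(model, transformations):
    for t in transformations:   # each step yields a more specialized model
        model = t(model)
    return model

troubleshooting_model = compile_model(
    base_model, [drop_tolerances, keep_components({"motor", "bearing"})]
)
```

    Because the task-specific model is regenerated from the base model on demand, only the base model and the transformation sequence need to be maintained for consistency.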

  5. Systems Engineering Interfaces: A Model Based Approach

    NASA Technical Reports Server (NTRS)

    Fosse, Elyse; Delp, Christopher

    2013-01-01

    Currently, Ops Rev has developed and maintains a framework that includes interface-specific language, patterns, and Viewpoints, and implements the framework to design MOS 2.0 and its 5 Mission Services; the implementation de-couples interfaces from instances of interaction. In the future, a Mission MOSE will implement the approach and use the model-based artifacts for reviews, and the framework will extend further into the ground data layers and provide a unified methodology.

  6. Microcanonical model for interface formation

    SciTech Connect

    Rucklidge, A.; Zaleski, S.

    1988-04-01

    We describe a new cellular automaton model which allows us to simulate separation of phases. The model is an extension of existing cellular automata for the Ising model, such as Q2R. It conserves particle number and presents the qualitative features of spinodal decomposition. The dynamics is deterministic and does not require random number generators. The spins exchange energy with small local reservoirs or demons. The rate of relaxation to equilibrium is investigated, and the results are compared to the Lifshitz-Slyozov theory.
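
    The demon mechanism described above can be sketched for a 1-D Ising ring: a spin flip is accepted only when the local demon reservoir can pay the energy change, so total energy is conserved and, apart from the initial configuration, no random numbers are needed. This is a minimal illustration of the microcanonical (Creutz-style) update, not the paper's 2-D phase-separation automaton:

```python
import random

def creutz_step(spins, demons, J=1):
    """One deterministic microcanonical sweep over a 1-D Ising ring:
    a flip is accepted only if the site's demon can absorb the energy
    change, so total energy (spins + demons) is exactly conserved."""
    n = len(spins)
    for i in range(n):
        left, right = spins[(i - 1) % n], spins[(i + 1) % n]
        dE = 2 * J * spins[i] * (left + right)   # energy cost of flipping spin i
        if demons[i] - dE >= 0:                  # demon energy must stay >= 0
            demons[i] -= dE
            spins[i] = -spins[i]

def total_energy(spins, demons, J=1):
    n = len(spins)
    bond = -J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))
    return bond + sum(demons)

random.seed(1)                                   # randomness for the initial state only
spins = [random.choice([-1, 1]) for _ in range(64)]
demons = [4] * 64                                # small local energy reservoirs
e0 = total_energy(spins, demons)
for _ in range(100):
    creutz_step(spins, demons)
```

    After any number of sweeps the total energy is unchanged to the last bit, since all quantities are integers, which mirrors the particle/energy conservation the abstract emphasizes.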

  7. Modeling Europa's Ice-Ocean Interface

    NASA Astrophysics Data System (ADS)

    Elsenousy, A.; Vance, S.; Bills, B. G.

    2014-12-01

    This work focuses on modeling the ice-ocean interface of Jupiter's moon Europa, mainly from the standpoint of the relationship between heat and salt transfer, with emphasis on the basal ice growth rate and its implications for Europa's tidal response. Modeling the heat and salt flux at Europa's ice/ocean interface is necessary to understand the dynamics of Europa's ocean and its interaction with the upper ice shell, as well as the history of active turbulence in this area. To achieve this goal, we adapted the McPhee et al. (2008) parameterizations of Earth's ice/ocean interface to Europa's ocean dynamics. We varied one parameter at a time to test its influence on both "h", the basal ice growth rate, and "R", the strength of the double diffusion tendency. The double diffusion tendency "R" was calculated as the ratio of the interface heat exchange coefficient αh to the interface salt exchange coefficient αs. Our preliminary results showed a strong double diffusion tendency, R ~200, at Europa's ice-ocean interface for plausible changes in the heat flux due to the onset or elimination of hydrothermal activity, suggesting supercooling and a strong tendency to form frazil ice.
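
    The ratio R described above is a one-line computation; the coefficient values below are illustrative orders of magnitude only, not the study's fitted numbers:

```python
def double_diffusion_ratio(alpha_h, alpha_s):
    """R = alpha_h / alpha_s: ratio of the interface heat exchange
    coefficient to the salt exchange coefficient. R >> 1 indicates a
    strong double-diffusion tendency (heat equilibrates much faster
    than salt, favoring supercooling and frazil ice formation)."""
    return alpha_h / alpha_s

# Illustrative values chosen so that heat exchange is ~200x faster than
# salt exchange, matching the R ~ 200 regime the abstract reports.
R = double_diffusion_ratio(alpha_h=0.011, alpha_s=5.5e-5)
```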

  8. Formally verifying human-automation interaction as part of a system model: limitations and tradeoffs.

    PubMed

    Bolton, Matthew L; Bass, Ellen J

    2010-03-25

    Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human-automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human-automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two phased process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation; and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE.

  9. Asymptotic curved interface models in piezoelectric composites

    NASA Astrophysics Data System (ADS)

    Serpilli, Michele

    2016-10-01

    We study the electromechanical behavior of a thin interphase, constituted by a piezoelectric anisotropic shell-like thin layer, embedded between two generic three-dimensional piezoelectric bodies by means of the asymptotic analysis in a general curvilinear framework. After defining a small real dimensionless parameter ε, which will tend to zero, we characterize two different limit models and their associated limit problems, the so-called weak and strong piezoelectric curved interface models, respectively. Moreover, we identify the non-classical electromechanical transmission conditions at the interface between the two three-dimensional bodies.

  10. Interfacing a robotic station with a gas chromatograph for the full automation of the determination of organochlorine pesticides in vegetables

    SciTech Connect

    Torres, P.; Luque de Castro, M.D.

    1996-12-31

    A fully automated method for the determination of organochlorine pesticides in vegetables is proposed. The overall system acts as an "analytical black box": a robotic station performs the preliminary operations, from weighing to capping, and places the leached analytes in the autosampler of an automated gas chromatograph with electron capture detection. The method has been applied to the determination of lindane, heptachlor, captan, chlordane, and methoxychlor in tea, marjoram, cinnamon, pennyroyal, and mint, with good results in most cases. A gas chromatograph has been interfaced to a robotic station for the determination of pesticides in vegetables. 15 refs., 4 figs., 2 tabs.

  11. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE PAGES

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat; Cheng, Danling; Broadwater, Robert P.; Scirbona, Charlie; Cocks, George; Hamilton, Stephanie; Wang, Xiaoyu

    2016-02-26

    A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.

  12. An interface tracking model for droplet electrocoalescence.

    SciTech Connect

    Erickson, Lindsay Crowl

    2013-09-01

    This report describes an Early Career Laboratory Directed Research and Development (LDRD) project to develop an interface tracking model for droplet electrocoalescence. Many fluid-based technologies rely on electrical fields to control the motion of droplets, e.g. microfluidic devices for high-speed droplet sorting, solution separation for chemical detectors, and purification of biodiesel fuel. Precise control over droplets is crucial to these applications. However, electric fields can induce complex and unpredictable fluid dynamics. Recent experiments (Ristenpart et al. 2009) have demonstrated that oppositely charged droplets bounce rather than coalesce in the presence of strong electric fields. A transient aqueous bridge forms between approaching drops prior to pinch-off. This observation applies to many types of fluids, but neither theory nor experiments have been able to offer a satisfactory explanation. Analytic hydrodynamic approximations for interfaces become invalid near coalescence, and therefore detailed numerical simulations are necessary. This is a computationally challenging problem that involves tracking a moving interface and solving complex multi-physics and multi-scale dynamics, which are beyond the capabilities of most state-of-the-art simulations. An interface-tracking model for electro-coalescence can provide a new perspective to a variety of applications in which interfacial physics are coupled with electrodynamics, including electro-osmosis, fabrication of microelectronics, fuel atomization, oil dehydration, nuclear waste reprocessing and solution separation for chemical detectors. We present a conformal decomposition finite element (CDFEM) interface-tracking method for the electrohydrodynamics of two-phase flow to demonstrate electro-coalescence. CDFEM is a sharp interface method that decomposes elements along fluid-fluid boundaries and uses a level set function to represent the interface.

  13. Modeling, Instrumentation, Automation, and Optimization of Water Resource Recovery Facilities.

    PubMed

    Sweeney, Michael W; Kabouris, John C

    2016-10-01

    A review of the literature published in 2015 on topics relating to water resource recovery facilities (WRRF) in the areas of modeling, automation, measurement and sensors, and optimization of wastewater treatment (or water resource reclamation) is presented. PMID:27620091

  14. Automating a human factors evaluation of graphical user interfaces for NASA applications: An update on CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.

    1993-01-01

    Capturing human factors knowledge about the design of graphical user interfaces (GUI's) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.
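
    The conversion of quantitative RGB primaries into qualitative color representations mentioned above can be illustrated with a nearest-reference-color lookup. This is a hypothetical sketch of the idea, not CHIMES' actual method or palette:

```python
# Hypothetical reference palette mapping names to RGB primaries
REFERENCE = {
    "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "white": (255, 255, 255), "black": (0, 0, 0),
}

def qualitative_color(rgb):
    """Return the reference color name closest in RGB space
    (squared Euclidean distance over the three primaries)."""
    return min(
        REFERENCE,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(rgb, REFERENCE[name])),
    )
```

    With a qualitative name in hand, guideline checks such as "avoid red text on a blue background" become simple symbolic rules rather than numeric comparisons.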

  15. Automation Marketplace 2010: New Models, Core Systems

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2010-01-01

    In a year when a difficult economy presented fewer opportunities for immediate gains, the major industry players have defined their business strategies with fundamentally different concepts of library automation. This is no longer an industry where companies compete on the basis of the best or the most features in similar products but one where…

  16. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

    Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment if not designed for it. An example includes implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and the model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model and to provide state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface could be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80 - 90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming
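
    The general model interface idea, every model wrapped behind the same calls so the framework driver is identical for all models, can be sketched as follows. The class names and the toy bucket model are hypothetical (LIS wrappers are FORTRAN 90 subroutines, and this is only the shape of the contract):

```python
class ModelWrapper:
    """The general interface every wrapped model must satisfy."""
    def setup(self, parameters):
        raise NotImplementedError

    def run(self, forcing, state):
        """Advance one step: take forcing inputs and state,
        return (new_state, outputs)."""
        raise NotImplementedError

class BucketModel(ModelWrapper):
    """Toy soil-moisture bucket standing in for a real land-surface model."""
    def setup(self, parameters):
        self.capacity = parameters["capacity_mm"]

    def run(self, forcing, state):
        water = min(self.capacity,
                    state["soil_mm"] + forcing["precip_mm"] - forcing["evap_mm"])
        new_state = {"soil_mm": max(0.0, water)}
        return new_state, {"runoff_flag": water >= self.capacity}

def drive(model, parameters, forcing_series, state):
    """Framework-side driver: the same loop works for any wrapped model."""
    model.setup(parameters)
    for forcing in forcing_series:
        state, outputs = model.run(forcing, state)
    return state

state = drive(
    BucketModel(), {"capacity_mm": 100.0},
    [{"precip_mm": 30.0, "evap_mm": 5.0}] * 3, {"soil_mm": 20.0},
)
```

    Because `drive` never mentions a specific model, adding a new model reduces to writing one wrapper, which is exactly what makes template-based automatic generation of the wrappers feasible.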

  17. Modeling strategic behavior in human-automation interaction - Why an 'aid' can (and should) go unused

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex

    1993-01-01

    Task-offload aids (e.g., an autopilot, an 'intelligent' assistant) can be selectively engaged by the human operator to dynamically delegate tasks to automation. Introducing such aids eliminates some task demands but creates new ones associated with programming, engaging, and disengaging the aiding device via an interface. The burdens associated with managing automation can sometimes outweigh the potential benefits automation brings to system performance. Aid design parameters and features of the overall multitask context combine to determine whether or not a task-offload aid will effectively support the operator. A modeling and sensitivity analysis approach is presented that identifies effective strategies for human-automation interaction as a function of three task-context parameters and three aid design parameters. The analysis and modeling approaches provide resources for predicting how a well-adapted operator will use a given task-offload aid, and for specifying aid design features that ensure that automation will provide effective operator support in a multitask environment.
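
    The central trade-off in the abstract can be illustrated with a toy workload model (the linear form and all numbers below are invented for illustration; they are not Kirlik's actual model): an aid is worth engaging only when the task demands it removes exceed the demands its management interface adds.

```python
def net_benefit(offloaded_demand, programming_cost, engage_cost, disengage_cost):
    """Toy model: net workload change from delegating a task to an aid.
    Positive values favor engaging the aid; negative values mean a
    well-adapted operator should leave the 'aid' unused."""
    management_cost = programming_cost + engage_cost + disengage_cost
    return offloaded_demand - management_cost

def should_engage(offloaded_demand, programming_cost, engage_cost, disengage_cost):
    return net_benefit(offloaded_demand, programming_cost,
                       engage_cost, disengage_cost) > 0

# The same aid can be worth using or not, depending on interface overhead:
print(should_engage(5.0, 1.0, 0.5, 0.5))  # low management overhead -> True
print(should_engage(5.0, 4.0, 1.5, 1.0))  # high management overhead -> False
```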

  18. Modeling strategic behavior in human-automation interaction: why an "aid" can (and should) go unused.

    PubMed

    Kirlik, A

    1993-06-01

    Task-offload aids (e.g., an autopilot, an "intelligent" assistant) can be selectively engaged by the human operator to dynamically delegate tasks to automation. Introducing such aids eliminates some task demands but creates new ones associated with programming, engaging, and disengaging the aiding device via an interface. The burdens associated with managing automation can sometimes outweigh the potential benefits automation brings to system performance. Aid design parameters and features of the overall multitask context combine to determine whether or not a task-offload aid will effectively support the operator. A modeling and sensitivity analysis approach is presented that identifies effective strategies for human-automation interaction as a function of three task-context parameters and three aid design parameters. The analysis and modeling approaches provide resources for predicting how a well-adapted operator will use a given task-offload aid, and for specifying aid design features that ensure that automation will provide effective operator support in a multitask environment.

  19. Parmodel: a web server for automated comparative modeling of proteins.

    PubMed

    Uchôa, Hugo Brandão; Jorge, Guilherme Eberhart; Freitas Da Silveira, Nelson José; Camera, João Carlos; Canduri, Fernanda; De Azevedo, Walter Filgueira

    2004-12-24

    Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users perform modeling, assessment, visualization, and optimization of protein models, and to help crystallographers evaluate experimentally solved structures. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module is Parmodel Modeling, which allows several models of the same protein to be built in reduced time by distributing the modeling processes over a Beowulf cluster. Parmodel automates and integrates the main software packages used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at .

  20. Modeling and deadlock avoidance of automated manufacturing systems with multiple automated guided vehicles.

    PubMed

    Wu, Naiqi; Zhou, MengChu

    2005-12-01

    An automated manufacturing system (AMS) is computer-controlled and contains a number of versatile machines (or workstations), buffers, and an automated material handling system (MHS). An effective and flexible way to implement the MHS is an automated guided vehicle (AGV) system. The deadlock issue is very important in AMS operation and has been extensively studied. Deadlock problems have traditionally been treated separately for parts in production and parts in transportation, and many techniques have been developed for each problem. However, such treatment does not take advantage of the flexibility offered by multiple AGVs. In general, it is intractable to obtain a maximally permissive control policy for either problem. Instead, this paper investigates the two problems in an integrated way. First, we model the AGV system and the part-processing processes with resource-oriented Petri nets. The two models are then integrated using macro transitions. Based on the combined model, a novel control policy for deadlock avoidance is proposed. It is shown to be maximally permissive, with computational complexity O(n^2), where n is the number of machines in the AMS, if the complexity of controlling part transportation by AGVs is not considered. Thus, the complexity of deadlock avoidance for the whole system is bounded by the complexity of controlling the AGV system. An illustrative example shows the policy's application and power.
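
    The general idea behind deadlock avoidance, granting a resource request only while some completion order still exists, can be illustrated with the classic banker's-algorithm safety check below. This is a generic textbook sketch with invented numbers, not the resource-oriented Petri-net policy of the paper.

```python
def is_safe(available, allocation, need):
    """Banker's-algorithm safety check: return True if every process can
    still run to completion in some order given currently available
    resources.  A classic illustration of deadlock *avoidance*."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # process i can finish and release its allocation
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)

# Two resource types (e.g. machines, AGVs); three parts in process.
alloc = [[1, 0], [0, 1], [1, 1]]
need  = [[1, 1], [1, 0], [0, 0]]
print(is_safe([1, 0], alloc, need))  # a completion order exists -> True
```

A controller using this test would admit a new part (or grant an AGV move) only if the resulting state remains safe.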

  1. Modeling and deadlock avoidance of automated manufacturing systems with multiple automated guided vehicles.

    PubMed

    Wu, Naiqi; Zhou, MengChu

    2005-12-01

    An automated manufacturing system (AMS) is computer-controlled and contains a number of versatile machines (or workstations), buffers, and an automated material handling system (MHS). An effective and flexible way to implement the MHS is an automated guided vehicle (AGV) system. The deadlock issue is very important in AMS operation and has been extensively studied. Deadlock problems have traditionally been treated separately for parts in production and parts in transportation, and many techniques have been developed for each problem. However, such treatment does not take advantage of the flexibility offered by multiple AGVs. In general, it is intractable to obtain a maximally permissive control policy for either problem. Instead, this paper investigates the two problems in an integrated way. First, we model the AGV system and the part-processing processes with resource-oriented Petri nets. The two models are then integrated using macro transitions. Based on the combined model, a novel control policy for deadlock avoidance is proposed. It is shown to be maximally permissive, with computational complexity O(n^2), where n is the number of machines in the AMS, if the complexity of controlling part transportation by AGVs is not considered. Thus, the complexity of deadlock avoidance for the whole system is bounded by the complexity of controlling the AGV system. An illustrative example shows the policy's application and power. PMID:16366245

  2. A catalog of automated analysis methods for enterprise models.

    PubMed

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

    Enterprise models are created for documenting and communicating the structure and state of the business and information technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis typically relies on human skills, and because of the size and complexity of the models the process can be complicated, making omissions and miscalculations very likely. This situation has fostered research into automated analysis methods that support analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.
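
    One concrete flavor of automated analysis method, the kind such a catalog collects, is a structural query over the model. The sketch below flags elements that participate in no relationship (likely documentation gaps); the element names, relation types, and the edge-list representation are all invented for illustration.

```python
# A toy 'enterprise model' as an edge list: (source, relation, target).
model = [
    ("OrderPortal", "uses", "OrderService"),
    ("OrderService", "realizedBy", "OrderApp"),
    ("OrderApp", "deployedOn", "AppServer"),
]

def unconnected_elements(model, elements):
    """Automated analysis method: find elements that appear in no
    relationship at all, i.e. likely gaps in the documented model."""
    connected = {s for s, _, _ in model} | {t for _, _, t in model}
    return sorted(e for e in elements if e not in connected)

all_elements = ["OrderPortal", "OrderService", "OrderApp",
                "AppServer", "LegacyCRM"]
print(unconnected_elements(model, all_elements))  # -> ['LegacyCRM']
```

An analysis catalog would pair many such queries with the metamodel concepts they require, so an analyst can tell which methods apply to a given model.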

  3. Automated particulate sampler field test model operations guide

    SciTech Connect

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents that provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for radionuclide particulate monitoring under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, and fully automatic operation, and most importantly by the sensitivity requirements of Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve both as a reference document for the APS and as a detailed guide to operating the sampler. It provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  4. Modeling of metal-ferroelectric-insulator-semiconductor structure considering the effects of interface traps

    NASA Astrophysics Data System (ADS)

    Sun, Jing; Shi, Xiao Rong; Zheng, Xue Jun; Tian, Li; Zhu, Zhe

    2015-06-01

    An improved model that accounts for interface trap effects is developed by combining a quantum mechanical model, dipole switching theory, and the silicon physics of the metal-oxide-semiconductor structure to describe the electrical properties of the metal-ferroelectric-insulator-semiconductor (MFIS) structure. Using the model, the effects of interface traps on the surface potential (ϕSi) of the semiconductor, the low frequency (LF) capacitance-voltage (C-V) characteristics, and the memory window of the MFIS structure are simulated. The results show that the ϕSi-V and LF C-V curves shift toward the positive-voltage direction and the memory window degrades as the density of interface trap states increases. This paper is expected to provide some guidance for the design and performance improvement of MFIS structure devices. In addition, the improved model can be integrated into electronic design automation (EDA) software for circuit simulation.
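
    The reported shift of the curves with trap density follows the textbook trend ΔV ≈ q·N_it/C_ins for charged interface traps. The sketch below uses only that simple MOS flat-band approximation with invented numbers; it is not the paper's full MFIS model, which also couples in ferroelectric switching.

```python
Q_E = 1.602e-19  # elementary charge (C)

def trap_voltage_shift(n_it_cm2, c_ins_f_cm2):
    """Textbook estimate of the gate-voltage shift caused by charged
    interface traps: delta_V = q * N_it / C_ins (areal quantities).
    Illustrates the trend only -- not the paper's MFIS model."""
    return Q_E * n_it_cm2 / c_ins_f_cm2

c_ins = 3.45e-7  # hypothetical insulator capacitance, F/cm^2
for n_it in (1e11, 1e12, 1e13):  # trap densities, cm^-2
    dv = trap_voltage_shift(n_it, c_ins)
    print(f"N_it = {n_it:.0e} cm^-2  ->  shift = {dv * 1e3:.1f} mV")
```

As in the paper's simulations, higher trap densities produce larger shifts, eroding the usable memory window.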

  5. Automated data acquisition technology development:Automated modeling and control development

    NASA Technical Reports Server (NTRS)

    Romine, Peter L.

    1995-01-01

    This report documents the completion of, and improvements made to, the software developed for automated data acquisition and automated modeling and control development on Texas Micro rack-mounted PCs. This research was initiated because the Metal Processing Branch of NASA Marshall Space Flight Center identified a need for a mobile data acquisition and data analysis system customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument strictly for data acquisition and data analysis. In addition to the data acquisition functions described in this report, the WMS also supports many functions associated with process control. The hardware and software requirements for an automated acquisition system for welding process parameters, welding equipment checkout, and welding process modeling were determined in 1992. From these recommendations, NASA purchased the necessary hardware and software. The new welding acquisition system is designed to collect welding parameter data and perform analysis to determine the voltage-versus-current arc-length relationship for VPPA welding. Once the results of this analysis are obtained, they can be used to develop a RAIL function to control welding startup and shutdown without torch crashing.

  6. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  7. Modeling Neutral Hydrogen in the Heliospheric Interface

    NASA Astrophysics Data System (ADS)

    Heerikhuisen, Jacob; Pogorelov, Nikolai; Brand, Pontus

    2010-03-01

    Observational data of neutral atoms provides us with a 1 AU picture of the neutral atom flux in the heliosphere. The large mean free paths of neutrals allow us to infer properties of their distant source, as well as the properties of the intermediary medium. Energetic neutral hydrogen, for example, travels on almost straight trajectories, so that the particles observed coming from a particular direction were created from energetic protons along that line of sight. Similarly, low energy interstellar atoms are attenuated and deflected as they enter the heliosphere, and this deflection tells us something about the structure of the heliospheric interface. Of course, to infer quantitative features of the global heliosphere from neutral atom observations at 1 AU, we need accurate models that capture the 3D structure of the heliosphere. In this paper we present our MHD-plasma/kinetic-neutral model of the heliospheric interface that uses a Lorentzian distribution function to approximate a suprathermal tail on the solar wind proton distribution due to pick-up ions. We investigate the effect the κ (kappa) parameter of the Lorentzian function has on the overall solution and the flux of energetic neutral atoms (ENAs). ENA fluxes are also compared to ``pre-IBEX'' spacecraft data.
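
    The Lorentzian (kappa) distribution adds a power-law suprathermal tail to an otherwise Maxwellian-like core. The sketch below uses one common 1-D form with an assumed exponent convention (conventions vary in the literature) and a numerical normalization, to show how a smaller κ strengthens the tail, the property exploited to mimic pick-up ions.

```python
import numpy as np

def kappa_pdf(v, kappa, theta=1.0):
    """One common form of the kappa (Lorentzian) distribution,
    f(v) proportional to [1 + v^2/(kappa*theta^2)]**-(kappa+1),
    normalized numerically on the sampled grid.  Illustrative only;
    exponent conventions differ between papers."""
    f = (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))
    return f / (f.sum() * (v[1] - v[0]))  # crude grid normalization

v = np.linspace(-10.0, 10.0, 2001)
for k in (1.6, 4.0, 50.0):  # large kappa recovers a Maxwellian-like shape
    tail = kappa_pdf(v, k)[np.abs(v) > 5.0].sum() * (v[1] - v[0])
    print(f"kappa = {k:5.1f}  tail weight (|v|>5) = {tail:.5f}")
```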

  8. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    NASA Astrophysics Data System (ADS)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC), models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time-efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images, each covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce the time demands of applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
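
    The basic endmember rule, a cold pixel from well-vegetated cool candidates and a hot pixel from bare warm candidates, can be sketched with NumPy percentiles. The 5% thresholds and the synthetic NDVI/LST data are invented for illustration; the paper's approach layers machine-learning tools and search on top of rules of this flavor.

```python
import numpy as np

def select_endmembers(ndvi, lst):
    """Toy endmember picker: the 'cold' pixel is the coolest among the
    greenest 5% of pixels; the 'hot' pixel is the warmest among the
    barest 5%.  Percentile thresholds are assumptions for the sketch."""
    ndvi, lst = np.ravel(ndvi), np.ravel(lst)
    green = ndvi >= np.percentile(ndvi, 95)
    bare = ndvi <= np.percentile(ndvi, 5)
    cold_idx = np.flatnonzero(green)[np.argmin(lst[green])]
    hot_idx = np.flatnonzero(bare)[np.argmax(lst[bare])]
    return cold_idx, hot_idx

# Synthetic scene: barer pixels run hotter, plus noise.
rng = np.random.default_rng(0)
ndvi = rng.uniform(0.05, 0.9, size=10_000)
lst = 320.0 - 25.0 * ndvi + rng.normal(0.0, 1.0, size=10_000)
cold, hot = select_endmembers(ndvi, lst)
print(ndvi[cold], lst[cold])  # high NDVI, low temperature
print(ndvi[hot], lst[hot])    # low NDVI, high temperature
```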

  9. Radiation budget measurement/model interface

    NASA Technical Reports Server (NTRS)

    Vonderhaar, T. H.; Ciesielski, P.; Randel, D.; Stevens, D.

    1983-01-01

    This final report includes research results from the period February, 1981 through November, 1982. Two new results combine to form the final portion of this work. They are the work by Hanna (1982) and Stevens to successfully test and demonstrate a low-order spectral climate model and the work by Ciesielski et al. (1983) to combine and test the new radiation budget results from NIMBUS-7 with earlier satellite measurements. Together, the two related activities set the stage for future research on radiation budget measurement/model interfacing. Such combination of results will lead to new applications of satellite data to climate problems. The objectives of this research under the present contract are therefore satisfied. Additional research reported herein includes the compilation and documentation of the radiation budget data set at Colorado State University and the definition of climate-related experiments suggested after lengthy analysis of the satellite radiation budget experiments.

  10. Variational Implicit Solvation with Solute Molecular Mechanics: From Diffuse-Interface to Sharp-Interface Models

    PubMed Central

    Li, Bo; Zhao, Yanxiang

    2013-01-01

    Central in a variational implicit-solvent description of biomolecular solvation is an effective free-energy functional of the solute atomic positions and the solute-solvent interface (i.e., the dielectric boundary). The free-energy functional couples together the solute molecular mechanical interaction energy, the solute-solvent interfacial energy, the solute-solvent van der Waals interaction energy, and the electrostatic energy. In recent years, the sharp-interface version of the variational implicit-solvent model has been developed and used for numerical computations of molecular solvation. In this work, we propose a diffuse-interface version of the variational implicit-solvent model with solute molecular mechanics. We also analyze both the sharp-interface and diffuse-interface models. We prove the existence of free-energy minimizers and obtain their bounds. We also prove the convergence of the diffuse-interface model to the sharp-interface model in the sense of Γ-convergence. We further discuss properties of sharp-interface free-energy minimizers, the boundary conditions and the coupling of the Poisson–Boltzmann equation in the diffuse-interface model, and the convergence of forces from diffuse-interface to sharp-interface descriptions. Our analysis relies on the previous works on the problem of minimizing surface areas and on our observations on the coupling between solute molecular mechanical interactions with the continuum solvent. Our studies rigorously justify the self-consistency of the proposed diffuse-interface variational models of implicit solvation. PMID:24058213
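
    In the sharp-interface setting, a free-energy functional with the four couplings listed above has the schematic form below. This is a generic sketch of the coupled terms assembled from the abstract's description, not the paper's exact expression:

```latex
G[\mathbf{x},\Gamma] \,=\, E_{\mathrm{mm}}(\mathbf{x})
  \,+\, \gamma_0\,\mathrm{Area}(\Gamma)
  \,+\, \rho_{\mathrm{w}} \int_{\Omega_{\mathrm{w}}}
        U_{\mathrm{vdW}}(\mathbf{x};\mathbf{y})\,\mathrm{d}\mathbf{y}
  \,+\, G_{\mathrm{elec}}[\mathbf{x},\Gamma]
```

Here x collects the solute atomic positions, Γ is the dielectric boundary separating the solute region from the solvent region Ω_w with density ρ_w, and the four terms are, in order, the molecular mechanical, interfacial, van der Waals, and electrostatic contributions. The diffuse-interface version replaces the sharp boundary Γ with a smooth phase field.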

  11. Computational design of patterned interfaces using reduced order models

    PubMed Central

    Vattré, A. J.; Abdolrahim, N.; Kolluri, K.; Demkowicz, M. J.

    2014-01-01

    Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance. PMID:25169868

  12. Automated Environment Generation for Software Model Checking

    NASA Technical Reports Server (NTRS)

    Tkachuk, Oksana; Dwyer, Matthew B.; Pasareanu, Corina S.

    2003-01-01

    A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open since their behavior is dependent on patterns of invocation of system components and values defined outside the system but referenced within the system. Whether reasoning about the behavior of whole programs or about program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments of Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera Environment Generator (BEG) which we describe along with our experience using BEG to reason about properties of several non-trivial concurrent Java programs.

  13. Automated adaptive inference of phenomenological dynamical models

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508
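
    The core idea, letting the data decide how much model complexity is warranted, can be reduced to a toy: fit models of increasing complexity and score them with a penalty for extra parameters. The sketch below uses polynomials and a BIC-style score as a stand-in for the paper's coarse-grained network dynamics; the penalty form and the small floor constant are assumptions of the sketch.

```python
import numpy as np

def fit_adaptive(x, y, max_degree=5):
    """Pick polynomial complexity by a BIC-style score so the model
    adapts to the data: enough terms to fit, no more."""
    n = len(x)
    best = None
    for deg in range(max_degree + 1):
        coeffs = np.polyfit(x, y, deg)
        rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
        # n*log(mean squared residual) + (#params)*log(n); small floor
        # avoids log(0) on exact fits.
        bic = n * np.log(rss / n + 1e-12) + (deg + 1) * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, deg, coeffs)
    return best[1], best[2]

x = np.linspace(0.0, 1.0, 20)
deg, coeffs = fit_adaptive(x, 3.0 * x - 2.0)
print(deg)  # the linear model wins: higher degrees fit no better
```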

  14. Models for Automated Tube Performance Calculations

    SciTech Connect

    C. Brunkhorst

    2002-12-12

    High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance.
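
    The modeling method the abstract describes, turning digitized operating-curve data into a callable current-for-voltages function, can be sketched with simple interpolation. The curve data below are invented for illustration, not taken from any real tube data sheet.

```python
import numpy as np

# Hypothetical digitized operating curves for a triode: for each grid
# voltage (V), plate current (A) sampled against plate voltage (V).
PLATE_V = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])
CURVES = {
    -100.0: np.array([0.0, 0.3, 0.9, 1.5, 2.0]),
    -50.0:  np.array([0.0, 0.8, 1.6, 2.3, 2.9]),
}

def plate_current(v_plate, v_grid):
    """Estimate plate current by interpolating along each digitized
    curve, then linearly between the two nearest grid-voltage curves --
    the kind of lookup an iterative operating-point solver would call
    repeatedly in place of reading values off a graph."""
    grids = sorted(CURVES)
    currents = [np.interp(v_plate, PLATE_V, CURVES[g]) for g in grids]
    return float(np.interp(v_grid, grids, currents))

print(plate_current(750.0, -75.0))  # about 0.9 A, midway between curves
```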

  15. Modeling Neutral Hydrogen in the Heliospheric Interface

    NASA Astrophysics Data System (ADS)

    Heerikhuisen, J.; Pogorelov, N.

    2009-05-01

    Observational data of neutral atoms provides us with a 1 AU picture of the neutral atom flux in the heliosphere. The large mean free paths of neutrals allow us to infer properties of their distant source, as well as the properties of the intermediary medium. Energetic neutral hydrogen, for example, travels on almost straight trajectories, so that the particles observed coming from a particular direction were created from energetic protons along that line of sight. Similarly, low energy interstellar atoms are attenuated and deflected as they enter the heliosphere, and this deflection tells us something about the structure of the heliospheric interface. Of course, to infer quantitative features of the global heliosphere from neutral atom observations at 1 AU, we need accurate models that capture the 3D structure of the heliosphere. We will present an advanced MHD-neutral model of the heliosphere which is 3D, employs kinetic neutral Hydrogen, and incorporates a suprathermal tail on the solar wind proton distribution to approximate pick-up ions. We will demonstrate that with the help of such a model, we can test various hypotheses regarding the heliospheric boundary via forward modeling and comparison with data.

  16. Automated sample plan selection for OPC modeling

    NASA Astrophysics Data System (ADS)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving how well the collected data represent the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not best represent the product. Framing pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of quality equivalent to the traditional plan of record (POR) set, but in less time.
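
    One simple way to pose pattern selection as an optimization is greedy max-coverage: repeatedly add the candidate pattern that covers the most not-yet-covered regions of feature space. This is a single-objective stand-in for the paper's multi-objective formulation, and the pattern names and feature bins below are invented for the example.

```python
def greedy_sample_plan(patterns, budget):
    """Greedy max-coverage selection of metrology patterns.
    `patterns` maps a pattern name to the set of feature bins it covers
    (e.g. hypothetical pitch/linewidth clusters)."""
    covered, plan = set(), []
    for _ in range(budget):
        best = max(patterns, key=lambda p: len(patterns[p] - covered))
        if not patterns[best] - covered:
            break  # nothing new to gain; stop early
        plan.append(best)
        covered |= patterns[best]
    return plan, covered

patterns = {
    "dense_lines": {"pitch_64", "pitch_90"},
    "iso_line": {"pitch_iso"},
    "line_end": {"pitch_90", "tip2tip"},
    "contact": {"hole_60"},
}
plan, covered = greedy_sample_plan(patterns, budget=3)
print(plan, len(covered))
```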

  17. Qgui: A high-throughput interface for automated setup and analysis of free energy calculations and empirical valence bond simulations in biological systems.

    PubMed

    Isaksen, Geir Villy; Andberg, Tor Arne Heim; Åqvist, Johan; Brandsdal, Bjørn Olav

    2015-07-01

    Structural information and activity data have increased rapidly for many protein targets during the last decades. In this paper, we present a high-throughput interface (Qgui) for automated free energy and empirical valence bond (EVB) calculations that use molecular dynamics (MD) simulations for conformational sampling. Applications to ligand binding using both the linear interaction energy (LIE) method and the free energy perturbation (FEP) technique are given using the estrogen receptor (ERα) as a model system. Examples of free energy profiles obtained using the EVB method for the rate-limiting step of the enzymatic reaction catalyzed by trypsin are also shown. In addition, we present the calculation of high-precision Arrhenius plots with Qgui, obtained by running a large number of EVB simulations, to determine the thermodynamic activation enthalpy and entropy.
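
    The LIE estimate mentioned above is itself just a weighted combination of MD ensemble averages. The sketch below shows the standard linear form with a commonly used α and an assumed β (β in LIE depends on the ligand's chemistry); the energy values are invented, and this is not Qgui's implementation.

```python
def lie_binding_free_energy(vdw_bound, vdw_free, el_bound, el_free,
                            alpha=0.18, beta=0.5, gamma=0.0):
    """Linear interaction energy estimate (energies in kcal/mol):
        dG = alpha * d<V_vdW> + beta * d<V_el> + gamma
    where <.> are MD ensemble averages of ligand-surroundings energies
    in the bound and free (solvated) states."""
    return (alpha * (vdw_bound - vdw_free)
            + beta * (el_bound - el_free)
            + gamma)

# Invented ensemble averages for a hypothetical ligand:
dg = lie_binding_free_energy(vdw_bound=-35.0, vdw_free=-25.0,
                             el_bound=-48.0, el_free=-40.0)
print(f"estimated binding free energy: {dg:.2f} kcal/mol")
```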

  18. A power line data communication interface using spread spectrum technology in home automation

    SciTech Connect

    Shwehdi, M.H.; Khan, A.Z.

    1996-07-01

    Building automation technology is rapidly developing towards more reliable communication systems and devices that control electronic equipment. Controlling such equipment leads to efficient energy management and savings on the monthly electricity bill. Power line communication (PLC) has been one of the dreams of the electronics industry for decades, especially for building automation. It is the purpose of this paper to demonstrate communication methods among electronic control devices through an AC power line carrier within buildings for more efficient energy control. The paper outlines methods of communication over a power line, namely the X-10 and CE bus, and introduces spread spectrum technology, which increases speed to 100--150 times that of the X-10 system. The power line carrier has tremendous applications in the field of building automation. The paper presents an attempt to realize the so-called smart house concept, in which all home electronic devices, from a coffee maker to a water heater, microwave, and household robots, can be operated over an intelligent network whenever one wishes. The designed system may be applied very profitably to help in energy management for both customer and utility.

  19. Automated refinement and inference of analytical models for metabolic networks

    PubMed Central

    Schmidt, Michael D; Vallabhajosyula, Ravishankar R; Jenkins, Jerry W; Hood, Jonathan E; Soni, Abhishek S; Wikswo, John P; Lipson, Hod

    2013-01-01

    The reverse engineering of metabolic networks from experimental data is traditionally a labor-intensive task requiring a priori systems knowledge. Using a proven model as a test system, we demonstrate an automated method to simplify this process by modifying an existing or related model – suggesting nonlinear terms and structural modifications – or even constructing a new model that agrees with the system’s time-series observations. In certain cases, this method can identify the full dynamical model from scratch without prior knowledge or structural assumptions. The algorithm selects between multiple candidate models by designing experiments to make their predictions disagree. We performed computational experiments to analyze a nonlinear seven-dimensional model of yeast glycolytic oscillations. This approach corrected mistakes reliably in both approximated and overspecified models. The method performed well to high levels of noise for most states, could identify the correct model de novo, and make better predictions than ordinary parametric regression and neural network models. We identified an invariant quantity in the model, which accurately derived kinetics and the numerical sensitivity coefficients of the system. Finally, we compared the system to dynamic flux estimation and discussed the scaling and application of this methodology to automated experiment design and control in biological systems in real-time. PMID:21832805
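
    The experiment-design step, choosing the measurement that makes rival candidate models disagree most, reduces to a simple maximization over candidate inputs. The toy below uses two invented one-variable rate laws in place of real metabolic models.

```python
def most_discriminating_input(models, candidate_inputs):
    """Pick the experiment (input) where the candidate models'
    predictions disagree the most, measured by the spread of their
    outputs -- the core of disagreement-driven experiment design."""
    def spread(x):
        preds = [m(x) for m in models]
        return max(preds) - min(preds)
    return max(candidate_inputs, key=spread)

# Two rival toy 'kinetic models' with invented constants:
mass_action = lambda s: 0.9 * s
michaelis_menten = lambda s: 2.0 * s / (1.0 + s)

x_star = most_discriminating_input([mass_action, michaelis_menten],
                                   candidate_inputs=[0.1, 1.0, 10.0])
print(x_star)  # -> 10.0: the models diverge most at high substrate
```

Running the chosen experiment then eliminates whichever candidate's prediction fails, which is how the algorithm selects among models.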

  20. Automated Adaptor Generation for Behavioral Mismatching Services Based on Pushdown Model Checking

    NASA Astrophysics Data System (ADS)

    Lin, Hsin-Hung; Aoki, Toshiaki; Katayama, Takuya

    In this paper, we introduce an approach to service adaptation for behavior-mismatching services using pushdown model checking. The approach uses pushdown systems as the model of adaptors, making it possible to capture non-regular behavior in service interactions. The use of pushdown model checking also integrates adaptation and verification, guaranteeing that an adaptor generated by our approach not only resolves behavior mismatches but also satisfies the usual verification properties if they are specified. Unlike conventional approaches, we do not rely on specifications of adaptor contracts, but take only the information in the behavior interfaces of services and perform fully automated adaptor generation. Three requirements relating to behavior mismatches, unbounded messages, and branchings are retrieved from the behavior interfaces and used to build LTL properties for pushdown model checking. Properties for unbounded messages, i.e., messages sent and received arbitrarily many times, are especially addressed, since such messages characterize non-regular behavior in service composition. The paper also shows experimental results from a prototype tool and provides directions for building BPEL adaptors from the behavior interface of a generated adaptor. The results show that our approach solves behavior mismatches and successfully captures non-regular behavior in service composition at the scale of real service applications.

  1. Rapid Automated Aircraft Simulation Model Updating from Flight Data

    NASA Technical Reports Server (NTRS)

    Brian, Geoff; Morelli, Eugene A.

    2011-01-01

    Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.

  2. Mathematical models of magnetite desliming for automated quality control systems

    NASA Astrophysics Data System (ADS)

    Olevska, Yu.; Mishchenko, V.; Olevskyi, V.

    2016-10-01

    The aim of the study is to provide multifactor mathematical models suitable for use in automatic control systems for the desliming process. For this purpose we described the motion of a two-phase medium with regard to the shape of the desliming machine and the technological parameters of the enrichment process. We created a method for deriving the dependence of enrichment quality on the technological and design parameters. To automate the process, we constructed mathematical models that justify intensive technological modes and optimal design parameters for the desliming machine.

  3. Diffuse interface field approach to modeling arbitrarily-shaped particles at fluid-fluid interfaces

    SciTech Connect

    Paul C. Millett; Yu. U. Wang

    2011-01-01

    We present a novel mesoscale simulation approach to modeling the evolution of solid particles segregated at fluid-fluid interfaces. The approach involves a diffuse-interface field description of each fluid phase in addition to the set of solid particles. The unique strength of the model is its generality to include particles of arbitrary shapes and orientations, as well as the ability to incorporate electrostatic particle interactions and external forces via a previous work [Millett PC, Wang YU, Acta Mater 2009;57:3101]. In this work, we verify that the model produces the correct capillary forces and contact angles by comparing with a well-defined analytical solution. In addition, simulation results of rotations of various-shaped particles at fluid-fluid interfaces, external force-induced capillary attraction/repulsion between particles, and spinodal decomposition arrest due to colloidal particle jamming at the interfaces are presented.

  4. A new interface element with progressive damage and osseointegration for modeling of interfaces in hip resurfacing.

    PubMed

    Caouette, Christiane; Bureau, Martin N; Lavigne, Martin; Vendittoli, Pascal-André; Nuño, Natalia

    2013-03-01

    Finite element models of orthopedic implants such as hip resurfacing femoral components usually rely on contact elements to model the load-bearing interfaces that connect bone, cement and implant. However, contact elements cannot simulate progressive degradation of bone-cement interfaces or osseointegration. A new interface element is developed to alleviate these shortcomings. This element is capable of simulating the nonlinear progression of bone-cement interface debonding or bone-implant interface osseointegration, based on mechanical stimuli in normal and tangential directions. The new element is applied to a hip resurfacing femoral component with a stem made of a novel biomimetic composite material. Three load cases are applied sequentially to simulate the 6-month period required for osseointegration of the stem. The effect of interdigitation depth of the bone-cement interface is found to be negligible, with only minor variations of micromotions. Numerical results show that the biomimetic stem progressively osseointegrates (alpha averages 0.7 on the stem surface, with spot-welds) and that bone-stem micromotions decrease below 10 microm. Osseointegration also changes the load path within the femoral bone: a decrease of 300 microepsilon was observed in the femoral head, and the inferomedial part of the femoral neck showed a slight increase of 165 microepsilon. There was also increased stress in the implant stem (from 7 to 11 MPa after osseointegration), indicating that part of the load is supported through the stem. The use of the new osseointegratable interface element has shown the osseointegration potential of the biomimetic stem. Its ability to model partially osseointegrated interfaces based on the mechanical conditions at the interface means that the new element could be used to study load transfer and osseointegration patterns on other models of uncemented hip resurfacing femoral components.

  5. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  6. Automated modelling of spatially-distributed glacier ice thickness and volume

    NASA Astrophysics Data System (ADS)

    James, William H. M.; Carrivick, Jonathan L.

    2016-07-01

    Ice thickness distribution and volume are both key parameters for glaciological and hydrological applications. This study presents VOLTA (Volume and Topography Automation), a Python script tool for ArcGIS™ that requires just a digital elevation model (DEM) and glacier outline(s) to model distributed ice thickness, volume and bed topography. Ice thickness is initially estimated at points along an automatically generated centreline network based on the perfect-plasticity rheology assumption, taking into account a valley side drag component of the force balance equation. Distributed ice thickness is subsequently interpolated using a glaciologically correct algorithm. For five glaciers with independent field-measured bed topography, VOLTA-modelled volumes ranged from a 26.5% underestimate to a 16.6% overestimate of those derived from field observations. The greatest differences occurred where an asymmetric valley cross-section shape was present or where significant valley infill had occurred. Compared with other methods of modelling ice thickness and volume, the key advantages of VOLTA are: a fully automated approach with a user-friendly graphical user interface (GUI), GIS-consistent geometry, fully automated centreline generation, inclusion of a side drag component in the force balance equation, estimation of basal shear stress for each individual glacier, fully distributed ice thickness output and the ability to process multiple glaciers rapidly. VOLTA is capable of regional-scale ice volume assessment, a key parameter for exploring glacier response to climate change. VOLTA also permits subtraction of modelled ice thickness from the input surface elevation to produce an ice-free DEM, a key input for reconstruction of former glaciers. VOLTA could assist with prediction of future glacier geometry changes and hence in projection of future meltwater fluxes.
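The perfect-plasticity centreline estimate with a valley side-drag (shape) factor, as described above, can be sketched as follows. The basal shear stress and shape factor values are illustrative textbook defaults, not VOLTA's per-glacier estimates:

```python
import math

def ice_thickness_m(basal_shear_stress_pa, surface_slope_rad,
                    shape_factor=0.8, ice_density=917.0, g=9.81):
    """Perfect-plasticity centreline thickness h = tau_b / (f * rho * g * sin(a)).
    The shape factor f < 1 accounts for valley side drag
    (f = 1 would correspond to an infinitely wide glacier)."""
    return basal_shear_stress_pa / (
        shape_factor * ice_density * g * math.sin(surface_slope_rad))

# Illustrative values: 100 kPa basal shear stress, 5 degree surface slope.
h = ice_thickness_m(100e3, math.radians(5.0))
```

As expected from the formula, a shallower surface slope yields a thicker ice estimate for the same basal shear stress.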

  7. COMPASS: A Framework for Automated Performance Modeling and Prediction

    SciTech Connect

    Lee, Seyong; Meredith, Jeremy S; Vetter, Jeffrey S

    2015-01-01

    Flexible, accurate performance predictions offer numerous benefits such as gaining insight into and optimizing applications and architectures. However, the development and evaluation of such performance predictions has been a major research challenge, due to the architectural complexities. To address this challenge, we have designed and implemented a prototype system, named COMPASS, for automated performance model generation and prediction. COMPASS generates a structured performance model from the target application's source code using automated static analysis, and then, it evaluates this model using various performance prediction techniques. As we demonstrate on several applications, the results of these predictions can be used for a variety of purposes, such as design space exploration, identifying performance tradeoffs for applications, and understanding sensitivities of important parameters. COMPASS can generate these predictions across several types of applications from traditional, sequential CPU applications to GPU-based, heterogeneous, parallel applications. Our empirical evaluation demonstrates a maximum overhead of 4%, flexibility to generate models for 9 applications, speed, ease of creation, and very low relative errors across a diverse set of architectures.

  8. Multibody dynamics model building using graphical interfaces

    NASA Technical Reports Server (NTRS)

    Macala, Glenn A.

    1989-01-01

    In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general-purpose dynamics simulation programs which generate the equations of motion either explicitly or implicitly via computer codes. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustration during program setup. Recent progress in a more natural method of data input for dynamics programs, the graphical interface, is described.

  9. The Application of the Cumulative Logistic Regression Model to Automated Essay Scoring

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Sinharay, Sandip

    2010-01-01

    Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. Comparison of the performances of the linear regression model and the cumulative logit model was performed on a…

  10. Modeling and Control of the Automated Radiator Inspection Device

    NASA Technical Reports Server (NTRS)

    Dawson, Darren

    1991-01-01

    Many of the operations performed at the Kennedy Space Center (KSC) are dangerous and repetitive tasks which make them ideal candidates for robotic applications. For one specific application, KSC is currently in the process of designing and constructing a robot called the Automated Radiator Inspection Device (ARID), to inspect the radiator panels on the orbiter. The following aspects of the ARID project are discussed: modeling of the ARID; design of control algorithms; and nonlinear based simulation of the ARID. Recommendations to assist KSC personnel in the successful completion of the ARID project are given.

  11. Developing a Graphical User Interface to Automate the Estimation and Prediction of Risk Values for Flood Protective Structures using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Hasan, M.; Helal, A.; Gabr, M.

    2014-12-01

    In this project, we focus on providing a computer-automated platform for better assessment of the potential failures and retrofit measures of flood-protecting earth structures, e.g., dams and levees. Such structures play an important role during extreme flooding events as well as during normal operating conditions. Furthermore, they are part of other civil infrastructure such as water storage and hydropower generation. Hence, there is a clear need for accurate evaluation of stability and functionality levels during their service lifetime so that rehabilitation and maintenance costs are effectively guided. Among condition assessment approaches based on the factor of safety, the limit states (LS) approach utilizes numerical modeling to quantify the probability of potential failures. The parameters for LS numerical modeling include i) geometry and side slopes of the embankment, ii) loading conditions in terms of the rate of rise and duration of high water levels in the reservoir, and iii) cycles of rising and falling water levels simulating the effect of consecutive storms throughout the service life of the structure. Sample data regarding the correlations of these parameters are available from previous research studies. We have unified these criteria and extended the risk assessment in terms of loss of life through the implementation of a graphical user interface that automates parameter input, divides the data into training and testing sets, and then feeds them into an Artificial Neural Network (ANN) tool through MATLAB programming. The ANN modeling allows us to predict risk values of flood protective structures based on user input quickly and easily. In the future, we expect to fine-tune the software by adding extensive data on variations of the parameters.
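The training/testing split step the interface performs before handing data to the ANN can be sketched as a generic shuffle-and-split (this is a stand-in, not the authors' MATLAB implementation):

```python
import random

def split_train_test(samples, train_fraction=0.7, seed=42):
    """Shuffle the samples and partition them into training and
    testing sets, as the GUI does before feeding the ANN tool."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, test = split_train_test(range(100))
```

A fixed seed makes the partition reproducible across runs, which matters when comparing retrained models.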

  12. Automated Physico-Chemical Cell Model Development through Information Theory

    SciTech Connect

    Peter J. Ortoleva

    2005-11-29

    The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is the optimization of environmental remediation and energy-producing biotechnical processes. The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple-module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

  13. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer-aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in the formats required by various model-based reasoning tools.
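The frame-based structure derived from CAD label and connectivity data can be illustrated with a toy builder. The component names, types, and frame fields here are hypothetical, chosen only to show the shape of the idea:

```python
def build_frames(components, connections):
    """Assemble one frame per component, recording its type and the
    components it connects to, from CAD-style label/connectivity data."""
    frames = {name: {"type": ctype, "connected_to": []}
              for name, ctype in components.items()}
    for a, b in connections:           # connectivity is symmetric
        frames[a]["connected_to"].append(b)
        frames[b]["connected_to"].append(a)
    return frames

frames = build_frames({"V1": "valve", "P1": "pump", "T1": "tank"},
                      [("V1", "P1"), ("P1", "T1")])
```

A model-based reasoner could then traverse `connected_to` links to propagate constraints between components.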

  14. ALC: automated reduction of rule-based models

    PubMed Central

    Koschorreck, Markus; Gilles, Ernst Dieter

    2008-01-01

    Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705

  15. A continuum model of colloid-stabilized interfaces

    NASA Astrophysics Data System (ADS)

    Aland, Sebastian; Lowengrub, John; Voigt, Axel

    2011-06-01

    Colloids that are partially wetted by two immiscible fluids can become confined to fluid-fluid interfaces. At sufficiently high volume fractions, the colloids may jam and the interface may crystallize. Examples include bicontinuous interfacially jammed emulsion gels (bijels), which were proposed in this study by Stratford et al. [Science 309, 2198 (2005)] as a hypothetical new class of soft materials in which interpenetrating, continuous domains of two immiscible viscous fluids are maintained in a rigid state by a jammed layer of colloidal particles at their interface. We develop a continuum model for such a system that is capable of simulating the long-time evolution. A Navier-Stokes-Cahn-Hilliard model for the macroscopic two-phase flow system is combined with a surface phase-field-crystal model for the microscopic colloidal system along the interface. The presence of colloids introduces elastic forces at the interface between the two immiscible fluid phases. An adaptive finite element method is used to solve the model numerically. Using a variety of flow configurations in two dimensions, we demonstrate that as colloids jam on the interface and the interface crystallizes, the elastic force may be strong enough to make the interface sufficiently rigid to resist external forces, such as an applied shear flow, as well as surface tension induced coarsening in bicontinuous structures.

  16. A continuum model of colloid-stabilized interfaces

    NASA Astrophysics Data System (ADS)

    Lowengrub, John

    2012-02-01

    Colloids that are partially wetted by two immiscible fluids can become confined to fluid-fluid interfaces. At sufficiently high volume fractions, the colloids may jam and the interface may crystallize. Examples include bicontinuous interfacially jammed emulsion gels ("bijels"), which were proposed in Stratford et al. (Science (2005) 309:2198) as a hypothetical new class of soft materials in which interpenetrating, continuous domains of two immiscible viscous fluids are maintained in a rigid state by a jammed layer of colloidal particles at their interface. We develop a continuum model for such a system that is capable of simulating the long-time evolution. A Navier-Stokes-Cahn-Hilliard model for the macroscopic two-phase flow system is combined with a surface phase-field-crystal model for the microscopic colloidal system along the interface. The presence of colloids introduces elastic forces at the interface between the two immiscible fluid phases. An adaptive finite element method is used to solve the model numerically. Using a variety of flow configurations, we demonstrate that as colloids jam on the interface and the interface crystallizes, the elastic force may be strong enough to make the interface sufficiently rigid to resist external forces, such as an applied shear flow, as well as surface tension induced coarsening in bicontinuous structures.

  17. Ray tracing in discontinuous velocity model with implicit Interface

    NASA Astrophysics Data System (ADS)

    Zhang, Jianxing; Yang, Qin; Meng, Xianhai; Li, Jigang

    2016-07-01

    Ray tracing in the velocity model containing complex discontinuities is still facing many challenges. The main difficulty arises from the detection of the spatial relationship between the rays and the interfaces that are usually described in non-linear parametric forms. We propose a novel model representation method that can facilitate the implementation of classical shooting-ray methods. In the representation scheme, each interface is expressed as the zero contour of a signed distance field. A multi-copy strategy is adopted to describe the volumetric properties within blocks. The implicit description of the interface makes it easier to detect the ray-interface intersection. The direct calculation of the intersection point is converted into the problem of judging the signs of a ray segment's endpoints. More importantly, the normal to the interface at the intersection point can be easily acquired according to the signed distance field of the interface. The multiple storage of the velocity property in the proximity of the interface can provide accurate and unambiguous velocity information of the intersection point. Thus, the departing ray path can be determined easily and robustly. In addition, the new representation method can describe velocity models containing very complex geological structures, such as faults, salt domes, intrusions, and pinches, without any simplification. The examples on synthetic and real models validate the robustness and accuracy of the ray tracing based on the proposed model representation scheme.
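The ray-interface test described above, "judging the signs of a ray segment's endpoints" against a signed distance field, reduces to a sign check followed by bisection, with the interface normal taken from the field's gradient. A minimal 2-D sketch, using a flat interface as a stand-in for a general signed distance field:

```python
import math

def segment_interface_point(phi, a, b, iters=60):
    """If the signed distances phi(a), phi(b) differ in sign, bisect the
    segment to locate the interface crossing; otherwise return None."""
    fa, fb = phi(a), phi(b)
    if fa * fb > 0:
        return None                      # both endpoints on the same side
    lo, hi = 0.0, 1.0
    for _ in range(iters):               # bisection on parameter t in [0, 1]
        mid = 0.5 * (lo + hi)
        p = (a[0] + mid * (b[0] - a[0]), a[1] + mid * (b[1] - a[1]))
        if fa * phi(p) <= 0:
            hi = mid
        else:
            lo = mid
    t = 0.5 * (lo + hi)
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def interface_normal(phi, p, h=1e-6):
    """Unit normal from the numerical gradient of the signed distance field."""
    gx = (phi((p[0] + h, p[1])) - phi((p[0] - h, p[1]))) / (2 * h)
    gy = (phi((p[0], p[1] + h)) - phi((p[0], p[1] - h))) / (2 * h)
    n = math.hypot(gx, gy)
    return (gx / n, gy / n)

flat = lambda p: p[1]                    # signed distance to the line y = 0
hit = segment_interface_point(flat, (0.0, -1.0), (2.0, 1.0))
```

The same sign check is what makes the implicit representation robust: no parametric surface-intersection equations need to be solved.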

  18. A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface.

    PubMed

    Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin

    2016-01-01

    Independent component analysis (ICA), a promising spatial filtering method, can separate motor-related independent components (MRICs) from multichannel electroencephalogram (EEG) signals. However, unpredictable burst interferences may significantly degrade the performance of an ICA-based brain-computer interface (BCI) system. In this study, we proposed a new algorithm frame to address this issue by combining a single-trial-based ICA filter with a zero-training classifier. We developed a two-round data selection method to automatically identify badly corrupted EEG trials in the training set. The "high quality" training trials were utilized to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate the artifact data segments within a single trial and investigated which types of artifacts can influence the performance of ICA-based MIBCIs. Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with those obtained by the frequently used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimizing strategy could effectively improve the stability, practicality and classification performance of an ICA-based MIBCI. The study revealed that rational use of the ICA method may be crucial in building a practical ICA-based MIBCI system. PMID:27631789

  20. Control of a Wheelchair in an Indoor Environment Based on a Brain-Computer Interface and Automated Navigation.

    PubMed

    Zhang, Rui; Li, Yuanqing; Yan, Yongyong; Zhang, Hao; Wu, Shaoyu; Yu, Tianyou; Gu, Zhenghui

    2016-01-01

    The concept of controlling a wheelchair using brain signals is promising. However, the continuous control of a wheelchair based on unstable and noisy electroencephalogram signals is unreliable and generates a significant mental burden for the user. A feasible solution is to integrate a brain-computer interface (BCI) with automated navigation techniques. This paper presents a brain-controlled intelligent wheelchair with the capability of automatic navigation. Using an autonomous navigation system, candidate destinations and waypoints are automatically generated based on the existing environment. The user selects a destination using a motor imagery (MI)-based or P300-based BCI. According to the determined destination, the navigation system plans a short and safe path and navigates the wheelchair to the destination. During the movement of the wheelchair, the user can issue a stop command with the BCI. Using our system, the mental burden of the user can be substantially alleviated. Furthermore, our system can adapt to changes in the environment. Two experiments based on MI and P300 were conducted to demonstrate the effectiveness of our system. PMID:26054072
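The navigation step, planning a short path from the wheelchair's position to the BCI-selected destination through automatically generated waypoints, can be illustrated with a breadth-first search over a hypothetical waypoint graph; the actual system's planner and environment map are of course richer:

```python
from collections import deque

def shortest_path(adjacency, start, goal):
    """Breadth-first search returning a fewest-hops waypoint path,
    or None if the goal is unreachable."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:                 # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adjacency.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

# Hypothetical indoor waypoint graph: rooms/waypoints and their links.
rooms = {"door": ["hall"], "hall": ["door", "desk", "kitchen"],
         "desk": ["hall"], "kitchen": ["hall"]}
route = shortest_path(rooms, "door", "desk")
```

In the real system, edge costs would also encode obstacle clearance so the planned path is safe as well as short.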

  2. An Interactive Tool For Semi-automated Statistical Prediction Using Earth Observations and Models

    NASA Astrophysics Data System (ADS)

    Zaitchik, B. F.; Berhane, F.; Tadesse, T.

    2015-12-01

    We developed a semi-automated statistical prediction tool applicable to concurrent analysis or seasonal prediction of any time series variable in any geographic location. The tool was developed using Shiny, JavaScript, HTML and CSS. A user can extract a predictand by drawing a polygon over a region of interest on the provided user interface (global map). The user can select the Climatic Research Unit (CRU) precipitation or Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) as predictand. They can also upload their own predictand time series. Predictors can be extracted from sea surface temperature, sea level pressure, winds at different pressure levels, air temperature at various pressure levels, and geopotential height at different pressure levels. By default, reanalysis fields are applied as predictors, but the user can also upload their own predictors, including a wide range of compatible satellite-derived datasets. The package generates correlations of the variables selected with the predictand. The user also has the option to generate composites of the variables based on the predictand. Next, the user can extract predictors by drawing polygons over the regions that show strong correlations (composites). Then, the user can select some or all of the statistical prediction models provided. Provided models include Linear Regression models (GLM, SGLM), Tree-based models (bagging, random forest, boosting), Artificial Neural Network, and other non-linear models such as Generalized Additive Model (GAM) and Multivariate Adaptive Regression Splines (MARS). Finally, the user can download the analysis steps they used, such as the region they selected, the time period they specified, the predictand and predictors they chose and preprocessing options they used, and the model results in PDF or HTML format. Key words: Semi-automated prediction, Shiny, R, GLM, ANN, RF, GAM, MARS
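The correlation step the tool performs, screening each grid cell's time series against the chosen predictand, is just a per-cell Pearson correlation; a minimal stand-alone sketch (cell labels and series are illustrative):

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def correlation_map(predictor_cells, predictand):
    """Correlate each grid cell's time series with the predictand,
    as the tool does when the user screens candidate predictors."""
    return {cell: pearson(series, predictand)
            for cell, series in predictor_cells.items()}

predictand = [1.0, 2.0, 3.0, 4.0, 5.0]
cells = {"A": [2.0, 4.0, 6.0, 8.0, 10.0],   # perfectly in phase
         "B": [5.0, 4.0, 3.0, 2.0, 1.0],    # perfectly out of phase
         "C": [1.0, 3.0, 2.0, 5.0, 4.0]}    # partially correlated
rmap = correlation_map(cells, predictand)
```

Cells with strong |r| are the ones a user would enclose in a polygon to extract as predictors for the regression models.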

  3. An Automated 3d Indoor Topological Navigation Network Modelling

    NASA Astrophysics Data System (ADS)

    Jamali, A.; Rahman, A. A.; Boguslawski, P.; Gold, C. M.

    2015-10-01

    Indoor navigation is important for various applications such as disaster management and safety analysis. In the last decade, the indoor environment has been a focus of wide research, including techniques for acquiring indoor data (e.g. terrestrial laser scanning), 3D indoor modelling and 3D indoor navigation models. In this paper, an automated 3D topological indoor navigation network generated from inaccurate 3D building models is proposed. Normally, deriving a 3D indoor navigation network requires accurate 3D models with no errors (e.g. gaps, intersections), and two cells (e.g. rooms, corridors) must touch each other for their connection to be built. The presented 3D modelling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. To reduce the time and cost of the indoor building data acquisition process, a Trimble LaserAce 1000 was used as the surveying instrument. The modelling results were validated against an accurate geometry of the indoor building environment acquired using a Trimble M3 total station.

  4. A new seismically constrained subduction interface model for Central America

    NASA Astrophysics Data System (ADS)

    Kyriakopoulos, C.; Newman, A. V.; Thomas, A. M.; Moore-Driskell, M.; Farmer, G. T.

    2015-08-01

    We provide a detailed, seismically defined three-dimensional model for the subducting plate interface along the Middle America Trench between northern Nicaragua and southern Costa Rica. The model uses data from a weighted catalog of about 30,000 earthquake hypocenters compiled from nine catalogs to constrain the interface through a process we term the "maximum seismicity method." The method determines the average position of the largest cluster of microseismicity beneath an a priori functional surface above the interface. This technique is applied to all seismicity above 40 km depth, the approximate intersection of the hanging wall Mohorovičić discontinuity, where seismicity likely lies along the plate interface. Below this depth, an envelope above 90% of seismicity approximates the slab surface. Because of station proximity to the interface, this model provides highest precision along the interface beneath the Nicoya Peninsula of Costa Rica, an area where marked geometric changes coincide with crustal transitions and topography observed seaward of the trench. The new interface is useful for a number of geophysical studies that aim to understand subduction zone earthquake behavior and geodynamic and tectonic development of convergent plate boundaries.
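The "maximum seismicity method", taking the average position of the largest cluster of microseismicity beneath a point, can be caricatured in one dimension by binning hypocenter depths and averaging the densest bin. The bin width and depth values below are illustrative, not the paper's parameters:

```python
from collections import Counter

def max_seismicity_depth(depths_km, bin_km=2.0):
    """Average depth of the densest depth bin of hypocenters below a
    point: a one-dimensional caricature of the maximum seismicity method."""
    bins = Counter(int(d // bin_km) for d in depths_km)
    densest_bin, _ = bins.most_common(1)[0]
    cluster = [d for d in depths_km if int(d // bin_km) == densest_bin]
    return sum(cluster) / len(cluster)

# A tight cluster near 10-11 km plus scattered deeper events:
depth = max_seismicity_depth([10.1, 10.5, 11.0, 11.4, 25.0, 33.0])
```

The scattered deeper events do not pull the estimate down, which is the point of locating the interface at the seismicity maximum rather than the mean.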

  5. Molecular modeling of cracks at interfaces in nanoceramic composites

    NASA Astrophysics Data System (ADS)

    Pavia, F.; Curtin, W. A.

    2013-10-01

    Toughness in Ceramic Matrix Composites (CMCs) is achieved if crack deflection can occur at the fiber/matrix interface, preventing crack penetration into the fiber and enabling energy-dissipating fiber pullout. To investigate toughening in nanoscale CMCs, direct atomistic models are used to study how matrix cracks behave as a function of the degree of interfacial bonding/sliding, as controlled by the density of C interstitial atoms, at the interface between carbon nanotubes (CNTs) and a diamond matrix. Under all interface conditions studied, incident matrix cracks do not penetrate into the nanotube. Under increased loading, weaker interfaces fail in shear while stronger interfaces do not fail; instead, the CNT fails once the stress on it reaches its tensile strength. An analytic shear lag model captures all of the micromechanical details as a function of loading and material parameters. Interface deflection versus fiber penetration is found to depend on the relative bond strengths of the interface and the CNT, with CNT failure occurring well below the prediction of the toughness-based continuum He-Hutchinson model. The shear lag model, in contrast, predicts the CNT failure point and shows that the nanoscale embrittlement transition occurs at an interface shear strength scaling as τs ~ εσ rather than the τs ~ σ typically prevailing for micron-scale composites, where ε and σ are the CNT failure strain and stress, respectively. Interface bonding also lowers the effective fracture strength of SWCNTs, due to the formation of defects, but plays no role in DWCNTs with interwall coupling, which are weaker than SWCNTs but less prone to damage in the outer wall.
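    The scaling argument can be illustrated with a toy shear-lag calculation. The constant-shear transfer rule and every numeric value below are textbook-style assumptions for illustration, not parameters from the paper; the prefactor c in the transition criterion is unknown.

```python
# Toy shear-lag sketch. Classical constant-shear load transfer builds fiber
# stress linearly along an embedded fiber of radius r, sigma(x) = 2*tau_s*x/r,
# giving a transfer length over which the fiber reaches its failure stress.
# All numbers are hypothetical placeholders, not values from the paper.

def transfer_length(sigma_f, tau_s, r):
    """Embedded length needed for interface shear tau_s to build stress sigma_f."""
    return sigma_f * r / (2.0 * tau_s)

def failure_mode(sigma_f, eps_f, tau_s, c=1.0):
    """Nanoscale criterion quoted in the abstract: the embrittlement transition
    sits at tau_s ~ c * eps_f * sigma_f (c is an unknown prefactor)."""
    return "CNT fracture" if tau_s > c * eps_f * sigma_f else "interface sliding"

sigma_f, eps_f, r = 100.0, 0.15, 1.0   # GPa, strain, nm (hypothetical)
for tau_s in (5.0, 30.0):              # weak vs strong interface, GPa
    print(tau_s, transfer_length(sigma_f, tau_s, r), failure_mode(sigma_f, eps_f, tau_s))
```

The point of the εσ scaling is visible here: because CNT failure strains are large, the transition shear strength sits far higher than a σ-only criterion calibrated on micron-scale fibers would suggest.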

  6. Analysis of wax esters in edible oils by automated on-line coupling liquid chromatography-gas chromatography using the through oven transfer adsorption desorption (TOTAD) interface.

    PubMed

    Aragón, Alvaro; Cortés, José M; Toledano, Rosa M; Villén, Jesús; Vázquez, Ana

    2011-07-29

    An automated method for the direct analysis of wax esters in edible oils is presented. The proposed method uses the TOTAD (through oven transfer adsorption desorption) interface for the on-line coupling of normal phase liquid chromatography and gas chromatography. In this fully automated system, the oil, spiked with a C32 wax ester internal standard and diluted with heptane, is injected directly with no sample pre-treatment step other than filtration. The proposed method allows analysis of different wax esters, and is simpler and faster than the European Union Official Method, which is tedious and time-consuming. The obtained results closely match the certified values obtained from the median of the analytical results of the inter-laboratory certification study. Relative standard deviations of the concentrations are less than 5%. The method is appropriate for routine analysis as it is totally automated.

  7. Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.

  8. User interface for ground-water modeling: ArcView extension

    USGS Publications Warehouse

    Tsou, M.-S.; Whittemore, D.O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software and provides a robust, user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  9. Empirical rheological model for rough or grooved bonded interfaces.

    PubMed

    Belloncle, Valentina Vlasie; Rousseau, Martine

    2007-12-01

    In the industrial sector, it is common to use metal/adhesive/metal structural bonds. The cohesion of such structures can be improved by preliminary chemical treatments (degreasing with solvents, alkaline, or acid pickling), electrochemical treatments (anodising), or mechanical treatments (abrasion, sandblasting, grooving) of the metallic plates. All these pretreatments create some asperities, ranging from roughnesses to grooves. On the other hand, in damage solid mechanics and in non-destructive testing, rheological models are used to measure the strength of bonded interfaces. However, these models do not take into account the interlocking of the adhesive in the porosities. Here, an empirical rheological model taking into account the interlocking effects is developed. This model depends on a characteristic parameter representing the average porosity along the interface, which considerably simplifies the corresponding stress and displacement jump conditions. The paper deals with the influence of this interface model on the ultrasonic guided modes of the structure. PMID:17659313

  10. Ab initio diffuse-interface model for lithiated electrode interface evolution

    NASA Astrophysics Data System (ADS)

    Stournara, Maria E.; Kumar, Ravi; Qi, Yue; Sheldon, Brian W.

    2016-07-01

    The study of chemical segregation at interfaces, and in particular the ability to predict the thickness of segregated layers via analytical expressions or computational modeling, is a fundamentally challenging topic in the design of novel heterostructured materials. This issue is particularly relevant for the phase-field (PF) methodology, which has become a prominent tool for describing phase transitions. These models rely on phenomenological parameters that pertain to the interfacial energy and thickness, quantities that cannot be experimentally measured. Instead of back-calculating these parameters from experimental data, here we combine a set of analytical expressions based on the Cahn-Hilliard approach with ab initio calculations to compute the gradient energy parameter κ and the thickness λ of the segregated Li layer at the LixSi-Cu interface. With this bottom-up approach we calculate the thickness λ of the Li diffuse interface to be on the order of a few nm, in agreement with prior experimental secondary ion mass spectrometry observations. Our analysis indicates that Li segregation is primarily driven by solution thermodynamics, while the strain contribution in this system is relatively small. This combined scheme provides an essential first step in the systematic evaluation of the thermodynamic parameters of the PF methodology, and we believe that it can serve as a framework for the development of quantitative interface models in the field of Li-ion batteries.
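    The κ–λ relationship the authors exploit can be sketched with the standard Cahn-Hilliard order-of-magnitude estimates. These closed forms are the textbook versions, not necessarily the exact expressions used in the paper, and the numerical values are hypothetical rather than taken from the LixSi-Cu system.

```python
import math

# Standard Cahn-Hilliard relations for a double-well free energy with barrier
# height dG (J/m^3) and gradient-energy coefficient kappa (J/m):
#   interface width   lambda ~ sqrt(kappa / dG)
#   interfacial energy gamma ~ sqrt(kappa * dG)

def interface_width(kappa, dG):
    return math.sqrt(kappa / dG)

def interface_energy(kappa, dG):
    return math.sqrt(kappa * dG)

kappa = 4.0e-10   # J/m, hypothetical gradient-energy coefficient
dG = 1.0e8        # J/m^3, hypothetical barrier height
print(f"lambda ~ {interface_width(kappa, dG) * 1e9:.1f} nm")  # -> lambda ~ 2.0 nm
```

With barrier heights of this typical magnitude, a κ computed ab initio directly fixes a nanometre-scale diffuse-interface width, which is the bottom-up step the abstract describes.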

  11. Ab initio diffuse-interface model for lithiated electrode interface evolution.

    PubMed

    Stournara, Maria E; Kumar, Ravi; Qi, Yue; Sheldon, Brian W

    2016-07-01

    The study of chemical segregation at interfaces, and in particular the ability to predict the thickness of segregated layers via analytical expressions or computational modeling, is a fundamentally challenging topic in the design of novel heterostructured materials. This issue is particularly relevant for the phase-field (PF) methodology, which has become a prominent tool for describing phase transitions. These models rely on phenomenological parameters that pertain to the interfacial energy and thickness, quantities that cannot be experimentally measured. Instead of back-calculating these parameters from experimental data, here we combine a set of analytical expressions based on the Cahn-Hilliard approach with ab initio calculations to compute the gradient energy parameter κ and the thickness λ of the segregated Li layer at the LixSi-Cu interface. With this bottom-up approach we calculate the thickness λ of the Li diffuse interface to be on the order of a few nm, in agreement with prior experimental secondary ion mass spectrometry observations. Our analysis indicates that Li segregation is primarily driven by solution thermodynamics, while the strain contribution in this system is relatively small. This combined scheme provides an essential first step in the systematic evaluation of the thermodynamic parameters of the PF methodology, and we believe that it can serve as a framework for the development of quantitative interface models in the field of Li-ion batteries. PMID:27575197

  12. Automated diagnosis of data-model conflicts using metadata.

    PubMed

    Chen, R O; Altman, R B

    1999-01-01

    The authors describe a methodology for helping computational biologists diagnose discrepancies they encounter between experimental data and the predictions of scientific models. The authors call these discrepancies data-model conflicts. They have built a prototype system to help scientists resolve these conflicts in a more systematic, evidence-based manner. In computational biology, data-model conflicts are the result of complex computations in which data and models are transformed and evaluated. Increasingly, the data, models, and tools employed in these computations come from diverse and distributed resources, contributing to a widening gap between the scientist and the original context in which these resources were produced. This contextual rift can contribute to the misuse of scientific data or tools and amplifies the problem of diagnosing data-model conflicts. The authors' hypothesis is that systematic collection of metadata about a computational process can help bridge the contextual rift and provide information for supporting automated diagnosis of these conflicts. The methodology involves three major steps. First, the authors decompose the data-model evaluation process into abstract functional components. Next, they use this process decomposition to enumerate the possible causes of the data-model conflict and direct the acquisition of diagnostically relevant metadata. Finally, they use evidence statically and dynamically generated from the metadata collected to identify the most likely causes of the given conflict. They describe how these methods are implemented in a knowledge-based system called GRENDEL and show how GRENDEL can be used to help diagnose conflicts between experimental data and computationally built structural models of the 30S ribosomal subunit. PMID:10495098

  13. Back to the Future: A Non-Automated Method of Constructing Transfer Models

    ERIC Educational Resources Information Center

    Feng, Mingyu; Beck, Joseph

    2009-01-01

    Representing domain knowledge is important for constructing educational software, and automated approaches have been proposed to construct and refine such models. In this paper, instead of applying automated and computationally intensive approaches, we simply start with existing hand-constructed transfer models at various levels of granularity and…

  14. Automated piecewise power-law modeling of biological systems.

    PubMed

    Machina, Anna; Ponosov, Arkady; Voit, Eberhard O

    2010-09-01

    Recent trends suggest that future biotechnology will increasingly rely on mathematical models of the biological systems under investigation. In particular, metabolic engineering will make wider use of metabolic pathway models in stoichiometric or fully kinetic format. A significant obstacle to the use of pathway models is the identification of suitable process descriptions and their parameters. We recently showed that, at least under favorable conditions, Dynamic Flux Estimation (DFE) permits the numerical characterization of fluxes from sets of metabolic time series data. However, DFE does not prescribe how to convert these numerical results into functional representations. In some cases, Michaelis-Menten rate laws or canonical formats are well suited, in which case the estimation of parameter values is easy. However, in other cases, appropriate functional forms are not evident, and exhaustive searches among all possible candidate models are not feasible. We show here how piecewise power-law functions of one or more variables offer an effective default solution for the almost unbiased representation of uni- and multivariate time series data. The results of an automated algorithm for their determination are piecewise power-law fits, whose accuracy is only limited by the available data. The individual power-law pieces may lead to discontinuities at break points or boundaries between sub-domains. In many practical applications, these boundary gaps do not cause problems. Potential smoothing techniques, based on differential inclusions and Filippov's theory, are discussed in Appendix A. PMID:20060428
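    The flavor of the fitting step can be sketched as follows. This toy version assumes the break points are already known and fits each power-law piece f(x) = a·x^b by linear regression in log-log space; the paper's automated algorithm additionally determines the break points, which is omitted here.

```python
import numpy as np

def fit_power_piece(x, y):
    """Fit y = a * x**b by least squares in log-log coordinates."""
    b, loga = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(loga), b

def piecewise_power_fit(x, y, breakpoints):
    """Fit one power law per sub-domain delimited by the given break points."""
    pieces = []
    edges = [x.min()] + list(breakpoints) + [x.max() + 1e-9]
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (x >= lo) & (x < hi)
        a, b = fit_power_piece(x[m], y[m])
        pieces.append({"domain": (lo, hi), "a": a, "b": b})
    return pieces

# synthetic data that is exactly power-law on each sub-domain and continuous
# at the break point x = 1 (2*1**0.5 == 2*1**1.5)
x = np.linspace(0.1, 10.0, 200)
y = np.where(x < 1.0, 2.0 * x**0.5, 2.0 * x**1.5)

pieces = piecewise_power_fit(x, y, [1.0])
for p in pieces:
    print(p)
```

The synthetic data are continuous at the break point, but as the abstract notes nothing forces that in general, hence the discussion of boundary gaps and Filippov-style smoothing.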

  15. Automated robust generation of compact 3D statistical shape models

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Tomazevic, Dejan; Pernus, Franjo

    2004-05-01

    Ascertaining the detailed shape and spatial arrangement of anatomical structures is important not only within diagnostic settings but also in the areas of planning, simulation, intraoperative navigation, and tracking of pathology. Robust, accurate and efficient automated segmentation of anatomical structures is difficult because of their complexity and inter-patient variability. Furthermore, the position of the patient during image acquisition, the imaging device and protocol, image resolution, and other factors induce additional variations in shape and appearance. Statistical shape models (SSMs) have proven quite successful in capturing structural variability. A possible approach to obtain a 3D SSM is to extract reference voxels by precisely segmenting the structure in one, reference image. The corresponding voxels in other images are determined by registering the reference image to each other image. The SSM obtained in this way describes statistically plausible shape variations over the given population as well as variations due to imperfect registration. In this paper, we present a completely automated method that significantly reduces shape variations induced by imperfect registration, thus allowing a more accurate description of variations. At each iteration, the derived SSM is used for coarse registration, which is further improved by describing finer variations of the structure. The method was tested on 64 lumbar spinal column CT scans, from which 23, 38, 45, 46 and 42 volumes of interest containing vertebra L1, L2, L3, L4 and L5, respectively, were extracted. Separate SSMs were generated for each vertebra. The results show that the method is capable of reducing the variations induced by registration errors.
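    The core SSM construction, PCA over corresponding landmark vectors, can be sketched in a few lines. The registration step that the paper iteratively refines is the hard part and is omitted here; the training shapes below are synthetic noisy squares, not vertebrae.

```python
import numpy as np

def build_ssm(shapes):
    """Toy statistical shape model: PCA over aligned landmark vectors.
    shapes: (n_samples, n_coords) array; each row is one training shape."""
    mean = shapes.mean(axis=0)
    U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    var = S**2 / (len(shapes) - 1)     # variance explained by each mode
    return mean, Vt, var               # mean shape, modes, mode variances

rng = np.random.default_rng(0)
base = np.array([0., 0., 1., 0., 1., 1., 0., 1.])     # unit-square landmarks
shapes = base + rng.normal(scale=0.01, size=(20, 8))  # 20 noisy instances

mean, modes, var = build_ssm(shapes)
# a statistically plausible new shape: mean plus a scaled principal mode
synth = mean + 2.0 * np.sqrt(var[0]) * modes[0]
```

Registration errors inflate the mode variances of such a model, which is exactly why the paper's iterative registration/SSM loop yields a more compact description of the true anatomical variation.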

  16. HLA-Modeler: Automated Homology Modeling of Human Leukocyte Antigens.

    PubMed

    Amari, Shinji; Kataoka, Ryoichi; Ikegami, Takashi; Hirayama, Noriaki

    2013-01-01

    The three-dimensional (3D) structures of human leukocyte antigen (HLA) molecules are indispensable for studies of their functions at the molecular level. We have developed a homology modeling system named HLA-modeler specialized for HLA molecules. A segment matching algorithm is employed for modeling, and the optimization of the model is carried out using the PFROSST force field with an implicit solvent model. In order to efficiently construct homology models, HLA-modeler uses a local database of the 3D structures of HLA molecules. The structure of the antigenic peptide-binding site is important for the function, and its 3D structure is highly conserved among alleles. HLA-modeler optimizes the use of this structural motif. Leave-one-out cross-validation using the crystal structures of class I and class II HLA molecules has demonstrated that the RMSDs of non-hydrogen atoms of the sites between homology models and crystal structures are less than 1.0 Å in most cases. The results indicate that the 3D structures of the antigenic peptide-binding sites can be reproduced by HLA-modeler at a level almost corresponding to that of the crystal structures.

  17. Solid phase extraction-liquid chromatography (SPE-LC) interface for automated peptide separation and identification by tandem mass spectrometry

    NASA Astrophysics Data System (ADS)

    Hørning, Ole Bjeld; Theodorsen, Søren; Vorm, Ole; Jensen, Ole Nørregaard

    2007-12-01

    Reversed-phase solid phase extraction (SPE) is a simple and widely used technique for desalting and concentration of peptide and protein samples prior to mass spectrometry analysis. Often, SPE sample preparation is done manually and the samples eluted, dried and reconstituted into 96-well titer plates for subsequent LC-MS/MS analysis. To reduce the number of sample handling stages and increase throughput, we developed a robotic system to interface off-line SPE to LC-ESI-MS/MS. Samples were manually loaded onto disposable SPE tips that subsequently were connected in-line with a capillary chromatography column. Peptides were recovered from the SPE column and separated on the RP-LC column using isocratic elution conditions and analysed by electrospray tandem mass spectrometry. Peptide mixtures eluted within approximately 5 min, with individual peptide peak resolution of ~7 s (FWHM), making the SPE-LC suited for analysis of medium complex samples (3-12 protein components). For optimum performance, the isocratic flow rate was reduced to 30 nL/min, producing nanoelectrospray like conditions which ensure high ionisation efficiency and sensitivity. Using a modified autosampler for mounting and disposing of the SPE tips, the SPE-LC-MS/MS system could analyse six samples per hour, and up to 192 SPE tips in one batch. The relatively high sample throughput, medium separation power and high sensitivity makes the automated SPE-LC-MS/MS setup attractive for proteomics experiments as demonstrated by the identification of the components of simple protein mixtures and of proteins recovered from 2DE gels.

  18. The Michigan Space Weather Modeling Framework (SWMF) Graphical User Interface

    NASA Astrophysics Data System (ADS)

    de Zeeuw, D.; Gombosi, T.; Toth, G.; Ridley, A.

    2007-05-01

    The Michigan Space Weather Modeling Framework (SWMF) is a powerful tool available to the community that has been used to model the space environment from the Sun to the Earth and beyond. As a research tool, however, it still requires user experience with parallel compute clusters and visualization tools. We have therefore developed a graphical user interface (GUI) that assists with configuring, compiling, and running the SWMF, as well as visualizing the model output. This is accomplished through a portable web interface. Live examples will be demonstrated and visualizations of several archived events will be shown.

  19. Analysis of models for curvature driven motion of interfaces

    NASA Astrophysics Data System (ADS)

    Swartz, Drew E.

    Interfacial energies frequently appear in models arising in materials science and engineering. To dissipate energy in these systems, the interfaces will often move by a curvature dependent velocity. The present work details the mathematical analysis of some models for curvature dependent motion of interfaces. In particular we focus on two types, thresholding schemes and phase field models. With regard to thresholding schemes, we give a new proof of the convergence of the Merriman-Bence-Osher thresholding algorithm to motion by mean curvature. This new proof does not rely on the scheme satisfying a comparison principle. The technique shows promise in proving the convergence of thresholding schemes for more general motions, such as fourth-order motions and motions of higher codimension interfaces. The application of the proof technique to these more general schemes is discussed, along with rigorous consistency estimates. With regard to phase-field models, we examine the L2-gradient flow of a second order gradient model for phase transitions, introduced by Fonseca and Mantegazza. In the case of radial symmetry we demonstrate that the diffuse interfacial dynamics converge to motion by mean curvature as the width of the interface decreases to zero. This is in accordance with the first-order Allen-Cahn model for phase transitions. But unlike the Allen-Cahn model, the gradient flow for the Fonseca-Mantegazza model is a fourth-order parabolic PDE. This creates new difficulties in its analysis.
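    The Merriman-Bence-Osher scheme itself is short enough to sketch: alternately diffuse the characteristic function of a set for a short time, then threshold at 1/2. A disc should then shrink, consistent with motion by mean curvature (dR/dt = -1/R). The grid size and time step below are arbitrary illustrative choices.

```python
import numpy as np

def heat_step(u, dt):
    """Solve the heat equation for time dt on the periodic unit square via FFT."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.exp(-(kx**2 + ky**2) * dt)))

def mbo_step(u, dt):
    """One MBO iteration: diffuse, then threshold at 1/2."""
    return (heat_step(u, dt) > 0.5).astype(float)

n = 128
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
disc = ((X - 0.5)**2 + (Y - 0.5)**2 < 0.3**2).astype(float)

u = disc.copy()
for _ in range(5):
    u = mbo_step(u, dt=1e-3)

# the disc area decreases, consistent with mean-curvature shrinkage
print(disc.sum(), u.sum())
```

The convergence question the thesis addresses is precisely why this diffuse-then-threshold iteration tracks mean-curvature flow as dt → 0; on a fixed grid the motion can pin if dt is taken too small relative to the mesh spacing.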

  20. A distributed data component for the open modeling interface

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As the volume of collected data continues to increase in the environmental sciences, so does the need for effective means for accessing those data. We have developed an Open Modeling Interface (OpenMI) data component that retrieves input data for model components from environmental information syste...

  1. Diffuse Interface Model for Microstructure Evolution

    NASA Astrophysics Data System (ADS)

    Nestler, Britta

    A phase-field model for a general class of multi-phase metallic alloys is proposed which describes both multi-phase solidification phenomena and polycrystalline grain structures. The model serves as a computational method to simulate the motion and kinetics of multiple phase boundaries and enables the visualization of the diffusion processes and of the phase transitions in multi-phase systems. Numerical simulations are presented which illustrate the capability of the phase-field model to recover a variety of complex experimental growth structures. In particular, the phase-field model can be used to simulate microstructure evolutions in eutectic, peritectic and monotectic alloys. In addition, polycrystalline grain structures with effects such as wetting, grain growth, symmetry properties of adjacent triple junctions in thin film samples and stability criteria at multiple junctions are described by phase-field simulations.

  2. A diffuse interface Lox/hydrogen transcritical flame model

    NASA Astrophysics Data System (ADS)

    Gaillard, Pierre; Giovangigli, Vincent; Matuszewski, Lionel

    2016-05-01

    We present a diffuse-interface all-pressure flame model that transitions smoothly between subcritical and supercritical conditions. The model involves a non-equilibrium liquid/gas diffuse interface of van der Waals/Korteweg type embedded into a non-ideal multicomponent reactive fluid. The multicomponent transport fluxes are evaluated in their thermodynamic form in order to avoid singularities at thermodynamic mechanical stability limits. The model also takes into account condensing liquid water in order to avoid thermodynamic chemical instabilities. The resulting equations are used to investigate the interface between cold dense and hot light oxygen as well as the structure of diffusion flames between cold dense oxygen and gaseous-like hydrogen at all pressures, either subcritical or supercritical.

  3. An interface model for dosage adjustment connects hematotoxicity to pharmacokinetics.

    PubMed

    Meille, C; Iliadis, A; Barbolosi, D; Frances, N; Freyer, G

    2008-12-01

    When modeling is required to describe pharmacokinetics and pharmacodynamics simultaneously, it is difficult to link time-concentration profiles and drug effects. For patients under chemotherapy, despite the large amount of blood-count monitoring data, there is a lack of exposure variables to describe hematotoxicity linked with circulating drug blood levels. We developed an interface model that transforms circulating pharmacokinetic concentrations into adequate exposures, which serve as inputs to the pharmacodynamic process. The model is materialized by a nonlinear differential equation involving three parameters. The relevance of the interface model for dosage adjustment is illustrated by numerous simulations. In particular, the interface model is incorporated into a complex system including pharmacokinetics and neutropenia induced by docetaxel and by cisplatin. Emphasis is placed on the sensitivity of neutropenia with respect to variations of the drug amount. This complex system including pharmacokinetic, interface, and pharmacodynamic hematotoxicity models is an interesting tool for analysis of hematotoxicity induced by anticancer agents. The model could be a new basis for further improvements aimed at incorporating new experimental features. PMID:19107581

  4. Automated optic disk boundary detection by modified active contour model.

    PubMed

    Xu, Juan; Chutatape, Opas; Chew, Paul

    2007-03-01

    This paper presents a novel deformable-model-based algorithm for fully automated detection of the optic disk boundary in fundus images. The proposed method improves and extends the original snake (deforming-only technique) in two aspects: clustering and smoothing update. The contour points are first self-separated into an edge-point group or an uncertain-point group by clustering after each deformation, and these contour points are then updated by different criteria based on their groups. The updating process combines both the local and global information of the contour to achieve a balance of contour stability and accuracy. The modifications make the proposed algorithm more accurate and robust to blood vessel occlusions, noise, ill-defined edges and fuzzy contour shapes. The comparative results show that the proposed method can estimate the disk boundaries of 100 test images closer to the ground truth, as measured by mean distance to closest point (MDCP) <3 pixels, with a better success rate than those obtained by the gradient vector flow snake (GVF-snake) and modified active shape models (ASM).

  5. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly

    developed by the USDA Agricultural Research Service, the U.S. Environmental Protection

    Agency, the University of Arizona, and the University of Wyoming to automate the

    parame...

  6. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execu...

  7. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execut...

  8. Automated macromolecular model building for X-ray crystallography using ARP/wARP version 7

    PubMed Central

    Langer, Gerrit G; Cohen, Serge X; Lamzin, Victor S; Perrakis, Anastassis

    2008-01-01

    ARP/wARP is a software suite to build macromolecular models in X-ray crystallography electron density maps. Structural genomics initiatives and the study of complex macromolecular assemblies and membrane proteins all rely on advanced methods for 3D structure determination. ARP/wARP meets these needs by providing the tools to obtain a macromolecular model automatically, with a reproducible computational procedure. ARP/wARP 7.0 tackles several tasks: iterative protein model building including a high-level decision-making control module; fast construction of the secondary structure of a protein; building flexible loops in alternate conformations; fully automated placement of ligands, including a choice of the best fitting ligand from a “cocktail”; and finding ordered water molecules. All protocols are easy to handle by a non-expert user through a graphical user interface or a command line. The time required is typically a few minutes although iterative model building may take a few hours. PMID:18600222

  9. Modelling biological invasions: Individual to population scales at interfaces.

    PubMed

    Belmonte-Beitia, J; Woolley, T E; Scott, J G; Maini, P K; Gaffney, E A

    2013-10-01

    Extracting the population level behaviour of biological systems from that of the individual is critical in understanding dynamics across multiple scales and thus has been the subject of numerous investigations. Here, the influence of spatial heterogeneity in such contexts is explored for interfaces with a separation of the length scales characterising the individual and the interface, a situation that can arise in applications involving cellular modelling. As an illustrative example, we consider cell movement between white and grey matter in the brain which may be relevant in considering the invasive dynamics of glioma. We show that while one can safely neglect intrinsic noise, at least when considering glioma cell invasion, profound differences in population behaviours emerge in the presence of interfaces with only subtle alterations in the dynamics at the individual level. Transport driven by local cell sensing generates predictions of cell accumulations along interfaces where cell motility changes. This behaviour is not predicted with the commonly used Fickian diffusion transport model, but can be extracted from preliminary observations of specific cell lines in recent, novel, cryo-imaging. Consequently, these findings suggest a need to consider the impact of individual behaviour, spatial heterogeneity and especially interfaces in experimental and modelling frameworks of cellular dynamics, for instance in the characterisation of glioma cell motility.

  10. NASA: Model development for human factors interfacing

    NASA Technical Reports Server (NTRS)

    Smith, L. L.

    1984-01-01

    The results of an intensive literature review in the general topics of human error analysis, stress and job performance, and accident and safety analysis revealed no usable techniques or approaches for analyzing human error in ground or space operations tasks. A task review model is described and proposed to be developed in order to reduce the degree of labor intensiveness in ground and space operations tasks. An extensive number of annotated references are provided.

  11. Modelling interfacial cracking with non-matching cohesive interface elements

    NASA Astrophysics Data System (ADS)

    Nguyen, Vinh Phu; Nguyen, Chi Thanh; Bordas, Stéphane; Heidarpour, Amin

    2016-07-01

    Interfacial cracking occurs in many engineering problems, such as delamination in composite laminates and matrix/interface debonding in fibre-reinforced composites. Computational modelling of these interfacial cracks usually employs compatible, or matching, cohesive interface elements. In this paper, incompatible (non-matching) cohesive interface elements are proposed for interfacial fracture mechanics problems. They allow non-matching finite element discretisations of the opposite crack faces, thus lifting the constraint of compatible discretisation of the domains sharing the interface. The formulation is based on a discontinuous Galerkin method and works with both initially elastic and initially rigid cohesive laws. The proposed formulation has two advantages over classical interface elements: (i) non-matching discretisations of the domains and (ii) no high dummy stiffness. Two- and three-dimensional quasi-static fracture simulations are conducted to demonstrate the method. Our method not only simplifies the meshing process but also reduces the computational cost, compared with standard interface elements, for problems involving materials with a large mismatch in stiffness.

  12. Modelling interfacial cracking with non-matching cohesive interface elements

    NASA Astrophysics Data System (ADS)

    Nguyen, Vinh Phu; Nguyen, Chi Thanh; Bordas, Stéphane; Heidarpour, Amin

    2016-11-01

    Interfacial cracking occurs in many engineering problems, such as delamination in composite laminates and matrix/interface debonding in fibre-reinforced composites. Computational modelling of these interfacial cracks usually employs compatible, or matching, cohesive interface elements. In this paper, incompatible (non-matching) cohesive interface elements are proposed for interfacial fracture mechanics problems. They allow non-matching finite element discretisations of the opposite crack faces, thus lifting the constraint of compatible discretisation of the domains sharing the interface. The formulation is based on a discontinuous Galerkin method and works with both initially elastic and initially rigid cohesive laws. The proposed formulation has two advantages over classical interface elements: (i) non-matching discretisations of the domains and (ii) no high dummy stiffness. Two- and three-dimensional quasi-static fracture simulations are conducted to demonstrate the method. Our method not only simplifies the meshing process but also reduces the computational cost, compared with standard interface elements, for problems involving materials with a large mismatch in stiffness.

  13. Aviation Safety: Modeling and Analyzing Complex Interactions between Humans and Automated Systems

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Brat, Guillaume; Clancey, William J.; Linde, Charlotte; Raimondi, Franco; Seah, Chin; Shafto, Michael

    2013-01-01

    The on-going transformation from the current US Air Traffic System (ATS) to the Next Generation Air Traffic System (NextGen) will force the introduction of new automated systems and will most likely cause automation to migrate from ground to air. This will yield new function allocations between humans and automation and therefore change the roles and responsibilities in the ATS. Yet, safety in NextGen is required to be at least as good as in the current system. We therefore need techniques to evaluate the safety of the interactions between humans and automation. We think that current human factors studies and simulation-based techniques will fall short given the complexity of the ATS, and that we need to augment simulations with more automated techniques, such as model checking, which offers exhaustive coverage of the non-deterministic behaviors in nominal and off-nominal scenarios. In this work, we present a verification approach based both on simulations and on model checking for evaluating the roles and responsibilities of humans and automation. Models are created using Brahms (a multi-agent framework), and we show that traditional Brahms simulations can be integrated with automated exploration techniques based on model checking, thus offering a complete exploration of the behavioral space of the scenario. Our formal analysis supports the notion of beliefs and probabilities to reason about human behavior. We demonstrate the technique on the Überlingen accident, since it exemplifies the authority problems that arise when receiving conflicting advice from human and automated systems.

  14. Individual Differences in Response to Automation: The Five Factor Model of Personality

    ERIC Educational Resources Information Center

    Szalma, James L.; Taylor, Grant S.

    2011-01-01

    This study examined the relationship of operator personality (Five Factor Model) and characteristics of the task and of adaptive automation (reliability and adaptiveness--whether the automation was well-matched to changes in task demand) to operator performance, workload, stress, and coping. This represents the first investigation of how the Five…

  15. Automated MRI Cerebellar Size Measurements Using Active Appearance Modeling

    PubMed Central

    Price, Mathew; Cardenas, Valerie A.; Fein, George

    2014-01-01

    Although the human cerebellum has been increasingly identified as an important hub that shows potential for helping in the diagnosis of a large spectrum of disorders, such as alcoholism, autism, and fetal alcohol spectrum disorder, the high costs associated with manual segmentation and the low availability of reliable automated cerebellar segmentation tools have resulted in a limited focus on cerebellar measurement in human neuroimaging studies. We present here the CATK (Cerebellar Analysis Toolkit), which is based on the Bayesian framework implemented in FMRIB’s FIRST. This approach involves training Active Appearance Models (AAMs) using hand-delineated examples. CATK can currently delineate the cerebellar hemispheres and three vermal groups (lobules I–V, VI–VII, and VIII–X). Linear registration with the low-resolution MNI152 template is used to provide initial alignment, and Point Distribution Models (PDMs) are parameterized using stellar sampling. The Bayesian approach models the relationship between shape and texture through computation of conditionals in the training set. Our method varies from the FIRST framework in that initial fitting is driven by 1D intensity profile matching, and the conditional likelihood function is subsequently used to refine fitting. The method was developed using T1-weighted images from 63 subjects that were imaged and manually labeled: 43 subjects were scanned once and were used for training models, and 20 subjects were imaged twice (with manual labeling applied to both runs) and used to assess reliability and validity. Intraclass correlation analysis shows that CATK is highly reliable (average test-retest ICCs of 0.96) and offers excellent agreement with the gold standard (average validity ICC of 0.87 against manual labels). Comparisons against an alternative atlas-based approach, SUIT (Spatially Unbiased Infratentorial Template), which registers images with a high-resolution template of the cerebellum, show that our AAM
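The test-retest reliability figures above are intraclass correlations. As a minimal stand-in (the abstract does not state which ICC variant CATK uses), a one-way random-effects ICC(1,1) for subjects scanned twice can be computed as follows; the volumes below are invented example numbers, not CATK data.

```python
import numpy as np

def icc_oneway(run1, run2):
    """One-way random-effects ICC(1,1) for n subjects measured twice.

    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), with k = 2 runs.
    """
    data = np.column_stack([run1, run2])             # n subjects x 2 runs
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)            # between-subject
    msw = ((data - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical cerebellar volumes for 5 subjects, two scan sessions.
v1 = np.array([142.0, 150.0, 131.0, 160.0, 148.0])
v2 = np.array([141.0, 152.0, 130.0, 159.0, 149.0])
reliability = icc_oneway(v1, v2)
```

Identical runs give ICC = 1; small run-to-run differences relative to the between-subject spread give values near 1, which is the sense of the reported 0.96.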

  16. Molecular Modeling of Water Interfaces: From Molecular Spectroscopy to Thermodynamics.

    PubMed

    Nagata, Yuki; Ohto, Tatsuhiko; Backus, Ellen H G; Bonn, Mischa

    2016-04-28

    Understanding aqueous interfaces at the molecular level is not only fundamentally important, but also highly relevant for a variety of disciplines. For instance, electrode-water interfaces are relevant for electrochemistry, as are mineral-water interfaces for geochemistry and air-water interfaces for environmental chemistry; water-lipid interfaces constitute the boundaries of the cell membrane, and are thus relevant for biochemistry. One of the major challenges in these fields is to link macroscopic properties such as interfacial reactivity, solubility, and permeability as well as macroscopic thermodynamic and spectroscopic observables to the structure, structural changes, and dynamics of molecules at these interfaces. Simulations, by themselves, or in conjunction with appropriate experiments, can provide such molecular-level insights into aqueous interfaces. In this contribution, we review the current state-of-the-art of three levels of molecular dynamics (MD) simulation: ab initio, force field, and coarse-grained. We discuss the advantages, the potential, and the limitations of each approach for studying aqueous interfaces, by assessing computations of the sum-frequency generation spectra and surface tension. The comparison of experimental and simulation data provides information on the challenges of future MD simulations, such as improving the force field models and the van der Waals corrections in ab initio MD simulations. Once good agreement between experimental observables and simulation can be established, the simulation can be used to provide insights into the processes at a level of detail that is generally inaccessible to experiments. As an example we discuss the mechanism of the evaporation of water. We finish by presenting an outlook outlining four future challenges for molecular dynamics simulations of aqueous interfacial systems. PMID:27010817

  17. Automated forward mechanical modeling of wrinkle ridges on Mars

    NASA Astrophysics Data System (ADS)

    Nahm, Amanda; Peterson, Samuel

    2016-04-01

    One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometries [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time consuming, requiring user inputs to change model parameters and to calculate the associated displacements for each model, which limits the number of models and the parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites. For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B
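The GUI-bypass idea, generating one input file per point in a pre-defined parameter grid, can be sketched as below. The parameter ranges here are invented for illustration; the abstract does not list the actual Suite 1 ranges that produced its 3770 files.

```python
import itertools

# Hypothetical parameter ranges for a batch forward-modeling sweep over
# fault displacement (D), dip angle, and upper/lower fault tip depths.
displacements = [0.1, 0.2, 0.3, 0.4, 0.5]   # D, km
dips = range(20, 51, 5)                      # dip angle, degrees
upper_tips = [0.5, 1.0, 1.5]                 # t, km
lower_tips = [3.0, 4.0, 5.0]                 # B, km

def make_input(D, dip, t, B):
    # One plain-text input record per model run (illustrative format,
    # not Coulomb's actual input syntax); fault length fixed at 30 km.
    return f"fault_length=30.0 D={D} dip={dip} t={t} B={B}"

# Every combination with the upper tip above the lower tip.
inputs = [make_input(D, dip, t, B)
          for D, dip, t, B in itertools.product(
              displacements, dips, upper_tips, lower_tips)
          if t < B]
```

Generating the files up front turns an interactive, per-model GUI workflow into a batch job whose size is just the product of the range lengths.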

  18. Critical interfaces and duality in the Ashkin-Teller model

    SciTech Connect

    Picco, Marco; Santachiara, Raoul

    2011-06-15

    We report on numerical measurements of different spin interfaces and Fortuin-Kasteleyn (FK) cluster boundaries in the Ashkin-Teller (AT) model. For a general point on the AT critical line, we find that the fractal dimension of a generic spin-cluster interface can take one of four different possible values. In particular, we found spin interfaces whose fractal dimension is d{sub f}=3/2 all along the critical line. Furthermore, the fractal dimension of the boundaries of FK clusters was found to satisfy, all along the AT critical line, a duality relation with the fractal dimension of their outer boundaries. This result provides clear numerical evidence that such a duality, which is well known in the case of the O(n) model, exists in an extended conformal field theory.
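Fractal dimensions of the kind measured here are typically estimated by box counting: cover the curve with boxes of shrinking size s and fit the slope of log N(s) against log s. The sketch below (the abstract does not specify the paper's estimator) recovers d_f = 1 for a smooth curve; a critical spin interface would instead give a non-trivial value such as 3/2.

```python
import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a 2D point set in the unit
    square by box counting: d_f is the slope of log N(s) vs log s."""
    counts = []
    for s in scales:
        # Occupied boxes on an s-by-s grid.
        boxes = {(int(x * s), int(y * s)) for x, y in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# A smooth curve (a straight segment) should give d_f close to 1.
t = np.linspace(0.0, 0.999, 20000)
line = np.column_stack([t, t])
d_f = box_counting_dimension(line)
```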

  19. Designers' models of the human-computer interface

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Breedin, Sarah D.

    1993-01-01

    Understanding design models of the human-computer interface (HCI) may produce two types of benefits. First, interface development often requires input from two different types of experts: human factors specialists and software developers. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet, they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers do not start with a blank slate; rather they begin with a general model of the object they are designing. The authors' approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: human factors specialists with HCI design experience, software developers with HCI design experience, and a baseline group of computer users with no experience in HCI design. The components of the user interface included both display components such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help. The judgments of the three groups were analyzed using hierarchical cluster analysis and Pathfinder. These methods indicated, respectively, how the groups categorized the concepts, and network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.
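Hierarchical cluster analysis of similarity judgments can be illustrated with a naive single-linkage implementation (the study's actual algorithm and data are not given in the abstract; the dissimilarity values below are invented to show the idea of display components grouping apart from interaction concepts).

```python
# Toy dissimilarity judgments (0 = identical, 1 = unrelated) among five
# hypothetical interface concepts.
concepts = ["window", "text", "graphics", "command language", "editing"]
d = [
    [0.0, 0.3, 0.4, 0.9, 0.8],
    [0.3, 0.0, 0.5, 0.8, 0.7],
    [0.4, 0.5, 0.0, 0.9, 0.9],
    [0.9, 0.8, 0.9, 0.0, 0.2],
    [0.8, 0.7, 0.9, 0.2, 0.0],
]

def single_linkage(dist, n_clusters):
    """Naive agglomerative clustering: repeatedly merge the two clusters
    whose closest members are nearest (single linkage)."""
    clusters = [{i} for i in range(len(dist))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                gap = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or gap < best[0]:
                    best = (gap, a, b)
        _, a, b = best
        clusters[a] |= clusters.pop(b)   # merge the closest pair
    return clusters

groups = single_linkage(d, 2)   # display concepts vs interaction concepts
```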

  20. A microscopic model of interface related to the Burgers equation

    SciTech Connect

    De Masi, A.; Ferrari, P.A.; Vares, M.E. )

    1989-05-01

    A microscopic model for a solid-on-solid type of interface under the influence of an external field is introduced. It is proven that in equilibrium the macroscopic profile satisfies a partial differential equation which is (up to a transformation) the stationary Burgers equation. The study is based on the structure of the invariant measures for a related asymmetric simple exclusion process.

  1. Automation of sample plan creation for process model calibration

    NASA Astrophysics Data System (ADS)

    Oberschmidt, James; Abdo, Amr; Desouky, Tamer; Al-Imam, Mohamed; Krasnoperova, Azalia; Viswanathan, Ramya

    2010-04-01

    The process of preparing a sample plan for optical and resist model calibration has always been tedious, not only because the plan must accurately represent full-chip designs with countless combinations of widths, spaces and environments, but also because of constraints imposed by metrology, which may limit the number of structures that can be measured. There are further limits on the types of these structures, mainly because measurement accuracy varies across different geometries; for instance, pitch measurements are normally more accurate than corner-rounding measurements, so only certain geometrical shapes are usually considered for a sample plan. In addition, the time factor is becoming crucial as we migrate from one technology node to another, owing to the increase in the number of development and production nodes, and the process becomes more complicated if process-window-aware models are to be developed in a reasonable time frame. There is therefore a need for reliable methods of choosing sample plans that also help reduce cycle time. In this context, an automated flow is proposed for sample plan creation. Once the illumination and film stack are defined, all errors in the input data are fixed and sites are centered. Then, bad sites are excluded. Afterwards, the clean data are reduced based on geometrical resemblance. An editable database of measurement-reliable and critical structures is also provided, and their percentage in the final sample plan, as well as the total number of 1D/2D samples, can be predefined. The flow has the advantage of eliminating manual selection or filtering techniques, it provides powerful tools for customizing the final plan, and the time needed to generate these plans is greatly reduced.
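The "reduce by geometrical resemblance" step can be sketched as a greedy filter: a site is dropped if a geometrically similar site of the same kind has already been kept. The site dimensions, tolerance, and field names below are invented for illustration; the paper's actual flow and criteria are richer than this.

```python
# Hypothetical measured sites: width/space in nm, 1D vs 2D geometry.
sites = [
    {"width": 50, "space": 100, "kind": "1D"},
    {"width": 52, "space": 101, "kind": "1D"},   # near-duplicate of the first
    {"width": 50, "space": 200, "kind": "1D"},
    {"width": 80, "space": 80,  "kind": "2D"},
    {"width": 81, "space": 79,  "kind": "2D"},   # near-duplicate
]

def reduce_by_resemblance(sites, tol=5):
    """Keep one representative per group of geometrically similar sites."""
    kept = []
    for s in sites:
        if any(s["kind"] == k["kind"]
               and abs(s["width"] - k["width"]) <= tol
               and abs(s["space"] - k["space"]) <= tol
               for k in kept):
            continue                # resembles an already-kept site: drop
        kept.append(s)
    return kept

plan = reduce_by_resemblance(sites)
```

On top of such a reduction, predefined quotas (e.g. the fraction of 2D samples) can be enforced by counting kinds in `plan` before finalizing it.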

  2. Atomic Models of Strong Solids Interfaces Viewed as Composite Structures

    NASA Astrophysics Data System (ADS)

    Staffell, I.; Shang, J. L.; Kendall, K.

    2014-02-01

    This paper looks back through the 1960s to the invention of carbon fibres and the theories of Strong Solids. In particular, it focuses on the fracture mechanics paradox of strong composites containing weak interfaces. From Griffith theory, it is clear that three parameters must be considered in producing a high-strength composite: minimising defects; maximising the elastic modulus; and raising the fracture energy along the crack path. The interface then introduces two further factors: elastic modulus mismatch causing crack stopping, and debonding along a brittle interface due to low interface fracture energy. Consequently, an understanding of the fracture energy of a composite interface is needed. Using an interface model based on atomic interaction forces, it is shown that a single layer of contaminant atoms between the matrix and the reinforcement can reduce the interface fracture energy by an order of magnitude, giving a large delamination effect. The paper also looks to a future in which cars will be made largely from composite materials. Radical improvements in automobile design are necessary because the number of cars worldwide is predicted to double. This paper predicts gains in fuel economy by suggesting a new theory of automobile fuel consumption using an adaptation of Coulomb's friction law. It is demonstrated both by experiment and by theoretical argument that the energy dissipated in standard vehicle tests depends only on weight. Consequently, moving from metal to fibre construction can give a factor-2 improvement in fuel economy, roughly the same as moving from a petrol combustion drive to hydrogen fuel cell propulsion. Using both options together can give a factor-4 improvement, as demonstrated by testing a composite car using the ECE15 protocol.

  3. Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.

    PubMed

    Engemann, Denis A; Gramfort, Alexandre

    2015-03-01

    Magnetoencephalography and electroencephalography (M/EEG) measure non-invasively the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain-computer interfaces (BCI) systems, in nearly all source localization methods for spatial whitening as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation all of these models can be tuned and compared based on Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals.

  4. Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.

    PubMed

    Engemann, Denis A; Gramfort, Alexandre

    2015-03-01

    Magnetoencephalography and electroencephalography (M/EEG) measure non-invasively the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain-computer interfaces (BCI) systems, in nearly all source localization methods for spatial whitening as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation all of these models can be tuned and compared based on Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals. PMID:25541187
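The core point, that the empirical covariance scores worse on held-out data than a regularized estimate when samples are few, can be reproduced on toy data. The sketch below uses only the simplest regularizer, shrinkage towards a scaled identity C(a) = (1 - a) * EC + a * (tr(EC)/p) * I, with the shrinkage amount picked by held-out Gaussian likelihood; the paper's full comparison also covers PPCA and factor analysis, and all dimensions and ranges here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_train, n_test = 40, 45, 400        # few training samples relative to p

# Toy ground-truth covariance and zero-mean Gaussian data.
A = rng.standard_normal((p, p))
true_cov = A @ A.T / p + np.eye(p)
L = np.linalg.cholesky(true_cov)
train = (L @ rng.standard_normal((p, n_train))).T
test = (L @ rng.standard_normal((p, n_test))).T

def gaussian_loglik(X, C):
    """Average zero-mean Gaussian log-likelihood of the rows of X under C."""
    _, logdet = np.linalg.slogdet(C)
    quad = np.einsum('ij,jk,ik->i', X, np.linalg.inv(C), X).mean()
    return -0.5 * (p * np.log(2 * np.pi) + logdet + quad)

ec = train.T @ train / n_train                   # empirical covariance
target = np.trace(ec) / p * np.eye(p)            # scaled-identity target
alphas = [0.0, 0.1, 0.2, 0.4, 0.6]               # 0.0 = plain EC
scores = [gaussian_loglik(test, (1 - a) * ec + a * target) for a in alphas]
best_alpha = alphas[int(np.argmax(scores))]      # CV-style model selection
```

With n_train barely above p, the EC is ill-conditioned and loses to every shrunk candidate, which is exactly why the automated, likelihood-based selection the paper proposes is preferable to hand-picked regularization.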

  5. Interfaces between phases in a lattice model of microemulsions

    NASA Astrophysics Data System (ADS)

    Dawson, K. A.

    1987-02-01

    A lattice model which has recently been developed to aid the study of microemulsions is briefly reviewed. The local-density mean-field equations are presented and the interfacial profiles and surface tensions are computed using a variational method. These density profiles describing the interface between oil rich and water rich phases, both of which are isotropic, are structured and nonmonotonic. Some comments about a perturbation expansion which confirms these conclusions are made. It is possible to compute the surface tension to high numerical accuracy using the variational procedure. This permits discussion of the question of wetting of the oil-water interface by a microemulsion phase. The interfacial tensions along the oil-water-microemulsion coexistence line are ultra-low. The oil-water interface is not wet by microemulsion throughout most of the bicontinuous regime.

  6. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have text fields, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.
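The kind of Markov reliability model such a tool generates can be illustrated with the smallest interesting case: a duplex processor with failure rate lam per component and no repair, states 0 (both up), 1 (one up), 2 (failed). The rate value and integration scheme below are illustrative, not from the paper.

```python
import numpy as np

lam = 1e-3   # per-component failure rate (hypothetical, per unit time)

# Generator matrix Q of the continuous-time Markov chain.
Q = np.array([
    [-2 * lam, 2 * lam, 0.0],   # both up -> one up (either fails)
    [0.0,      -lam,    lam],   # one up  -> failed
    [0.0,      0.0,     0.0],   # failed is absorbing
])

def reliability(t, steps=10000):
    """P(system not yet failed at time t), by Euler integration of
    the forward equation dp/dt = p Q from p(0) = (1, 0, 0)."""
    p = np.array([1.0, 0.0, 0.0])
    dt = t / steps
    for _ in range(steps):
        p = p + dt * (p @ Q)
    return float(p[0] + p[1])

r = reliability(1000.0)
```

For this chain the answer is known in closed form, R(t) = 2e^(-lam t) - e^(-2 lam t), which the numerical result matches; the point of a GUI-driven generator is producing Q automatically for much larger PMS structures.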

  7. A polarizable continuum model for molecules at spherical diffuse interfaces

    NASA Astrophysics Data System (ADS)

    Di Remigio, Roberto; Mozgawa, Krzysztof; Cao, Hui; Weijo, Ville; Frediani, Luca

    2016-03-01

    We present an extension of the Polarizable Continuum Model (PCM) to simulate solvent effects at diffuse interfaces with spherical symmetry, such as nanodroplets and micelles. We derive the form of the Green's function for a spatially varying dielectric permittivity with spherical symmetry and exploit the integral equation formalism of the PCM for general dielectric environments to recast the solvation problem into a continuum solvation framework. This allows the investigation of the solvation of ions and molecules in nonuniform dielectric environments, such as liquid droplets, micelles or membranes, while maintaining the computationally appealing characteristics of continuum solvation models. We describe in detail our implementation, both for the calculation of the Green's function and for its subsequent use in the PCM electrostatic problem. The model is then applied on a few test systems, mainly to analyze the effect of interface curvature on solvation energetics.

  8. A polarizable continuum model for molecules at spherical diffuse interfaces.

    PubMed

    Di Remigio, Roberto; Mozgawa, Krzysztof; Cao, Hui; Weijo, Ville; Frediani, Luca

    2016-03-28

    We present an extension of the Polarizable Continuum Model (PCM) to simulate solvent effects at diffuse interfaces with spherical symmetry, such as nanodroplets and micelles. We derive the form of the Green's function for a spatially varying dielectric permittivity with spherical symmetry and exploit the integral equation formalism of the PCM for general dielectric environments to recast the solvation problem into a continuum solvation framework. This allows the investigation of the solvation of ions and molecules in nonuniform dielectric environments, such as liquid droplets, micelles or membranes, while maintaining the computationally appealing characteristics of continuum solvation models. We describe in detail our implementation, both for the calculation of the Green's function and for its subsequent use in the PCM electrostatic problem. The model is then applied on a few test systems, mainly to analyze the effect of interface curvature on solvation energetics. PMID:27036423
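A toy version of the spherically symmetric, diffuse-permittivity setup (not the PCM Green's-function machinery itself) is a charge at the centre of a droplet whose permittivity relaxes smoothly to the outer medium. For radial symmetry the electrostatic energy of a central charge is proportional to the integral of dr / (eps(r) r^2), so this integral probes how the diffuse profile shifts solvation energetics relative to a sharp boundary. All parameter values are illustrative.

```python
import numpy as np

eps_in, eps_out = 78.4, 1.0   # water-like droplet, vacuum-like exterior
R, w, a = 20.0, 2.0, 1.4      # droplet radius, interface width, cavity radius

def eps_diffuse(r):
    """Permittivity relaxing smoothly from eps_in to eps_out across r = R."""
    return 0.5 * (eps_in + eps_out) - 0.5 * (eps_in - eps_out) * np.tanh((r - R) / w)

def energy_integral(eps_fn, rmax=2000.0, n=200_000):
    """Riemann-sum approximation of the integral of dr / (eps(r) r^2)
    from the cavity radius a outward; proportional to the field energy
    of a central point charge."""
    r, dr = np.linspace(a, rmax, n, retstep=True)
    return float(np.sum(1.0 / (eps_fn(r) * r ** 2)) * dr)

diffuse = energy_integral(eps_diffuse)
sharp = energy_integral(lambda r: np.where(r < R, eps_in, eps_out))
```

Varying R at fixed w is the one-dimensional analogue of the curvature study the paper reports: as the droplet shrinks, the diffuse and sharp results separate more strongly.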

  9. Computer modelling of nanoscale diffusion phenomena at epitaxial interfaces

    NASA Astrophysics Data System (ADS)

    Michailov, M.; Ranguelov, B.

    2014-05-01

    The present study outlines an important area in the application of computer modelling to interface phenomena. Being relevant to the fundamental physical problem of competing atomic interactions in systems with reduced dimensionality, these phenomena attract special academic attention. On the other hand, from a technological point of view, detailed knowledge of the fine atomic structure of surfaces and interfaces correlates with a large number of practical problems in materials science. Typical examples are formation of nanoscale surface patterns, two-dimensional superlattices, atomic intermixing at an epitaxial interface, atomic transport phenomena, structure and stability of quantum wires on surfaces. We discuss here a variety of diffusion mechanisms that control surface-confined atomic exchange, formation of alloyed atomic stripes and islands, relaxation of pure and alloyed atomic terraces, diffusion of clusters and their stability in an external field. The computational model refines important details of diffusion of adatoms and clusters accounting for the energy barriers at specific atomic sites: smooth domains, terraces, steps and kinks. The diffusion kinetics, integrity and decomposition of atomic islands in an external field are considered in detail and assigned to specific energy regions depending on the cluster stability in mass transport processes. The presented ensemble of diffusion scenarios opens a way for nanoscale surface design towards regular atomic interface patterns with exotic physical features.
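The site-specific energy barriers the abstract mentions (terraces, steps, kinks) enter kinetic simulations as Arrhenius hop rates. The barrier values below are hypothetical, material-dependent placeholders; only the ordering (terrace easiest, kink hardest) reflects the usual physical picture.

```python
import math

# Hypothetical hop barriers (eV) per site type; real values depend on
# the material and are not given in the abstract.
barriers = {"terrace": 0.45, "step_edge": 0.60, "kink": 0.75}
ATTEMPT = 1.0e13   # attempt frequency, 1/s (typical order of magnitude)
KB = 8.617e-5      # Boltzmann constant, eV/K

def hop_rate(site, T):
    """Arrhenius hop rate for an adatom at a given site type."""
    return ATTEMPT * math.exp(-barriers[site] / (KB * T))

rates_300K = {s: hop_rate(s, 300.0) for s in barriers}
```

Rate tables like this are the input to kinetic Monte Carlo: the large rate ratios between site types at moderate temperatures are what pin atoms at steps and kinks and thereby shape island relaxation and decomposition.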

  10. Developing A Laser Shockwave Model For Characterizing Diffusion Bonded Interfaces

    SciTech Connect

    James A. Smith; Jeffrey M. Lacy; Barry H. Rabin

    2014-07-01

    Presented in the Laser-ultrasonics session of the 41st Annual Review of Progress in Quantitative Nondestructive Evaluation (QNDE) Conference, July 20-25, 2014, Boise Centre, 850 West Front Street, Boise, Idaho 83702. James A. Smith, Jeffrey M. Lacy, Barry H. Rabin, Idaho National Laboratory, Idaho Falls, ID. ABSTRACT: The US National Nuclear Security Administration has a Global Threat Reduction Initiative (GTRI), which is charged with reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low-enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU. The new LEU fuel is based on a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to complete the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large-amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, the deposition of laser energy into the containment layer on the specimen's surface is intractably complex. The shock wave energy is inferred from the velocity on the back face and the depth of the impression left on the surface by the high-pressure plasma pulse created by the shock laser. To help quantify the stresses and strengths at the interface, a finite element model is being developed and validated by comparing numerical and experimental results for back-face velocities and front-face depressions. This paper will report on initial efforts to develop a finite element model for laser

  11. A visual interface for the SUPERFLEX hydrological modelling framework

    NASA Astrophysics Data System (ADS)

    Gao, H.; Fenicia, F.; Kavetski, D.; Savenije, H. H. G.

    2012-04-01

    The SUPERFLEX framework is a modular modelling system for conceptual hydrological modelling at the catchment scale. This work reports the development of a visual interface for the SUPERFLEX model, which aims to enhance communication between hydrologic experimentalists and modelers and, in particular, to further bridge the gap between soft field data and the modeler's knowledge. In collaboration with field experimentalists, modelers can visually and intuitively hypothesize different model architectures and combinations of reservoirs, select from a library of constitutive functions to describe the relationship between a reservoir's storage and discharge, specify the shape of lag functions and, finally, set parameter values. The software helps hydrologists take advantage of any existing insights into the study site, translate them into a conceptual hydrological model and implement it within a computationally robust algorithm. The tool also helps challenge and contrast competing paradigms such as "uniqueness of place" vs. "one model fits all". Using this interface, hydrologists can test different hypotheses and model representations, and build, step by step, a deeper understanding of the watershed of interest.
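
    The reservoir-based structure described above can be sketched as a minimal "bucket" model: a single storage whose discharge is a power-law function of its contents. The function and parameter names below are illustrative stand-ins, not part of the SUPERFLEX API.

```python
# Minimal sketch of a conceptual "bucket" model in the spirit of SUPERFLEX:
# one reservoir with a power-law storage-discharge law Q = k * S**alpha,
# stepped forward with explicit Euler. Names and values are illustrative.

def simulate_reservoir(precip, k=0.1, alpha=1.5, s0=10.0, dt=1.0):
    """Simulate dS/dt = P - Q with Q = k * S**alpha; returns (Q series, final S)."""
    storage, discharge = s0, []
    for p in precip:
        q = k * storage ** alpha           # constitutive storage-discharge law
        storage = max(storage + dt * (p - q), 0.0)  # water balance update
        discharge.append(q)
    return discharge, storage

q, s_final = simulate_reservoir([5.0, 0.0, 0.0, 2.0])
```

    Swapping in a different constitutive law or chaining several such reservoirs is what a modular framework like SUPERFLEX makes configurable.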

  12. Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head

    PubMed Central

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-01-01

    Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process of building such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria included segmentation accuracy, differences in the current flow distributions of the resulting HD-tDCS models, and the optimized current flow intensities on cortical targets. Main results The segmentation tool segments not just the brain but also provides accurate results for CSF, skull and other soft tissues, with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly

  13. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process of building such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria included segmentation accuracy, differences in the current flow distributions of the resulting HD-tDCS models, and the optimized current flow intensities on cortical targets. Main results. The segmentation tool segments not just the brain but also provides accurate results for CSF, skull and other soft tissues, with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. Fully

  14. Industrial Automation Mechanic Model Curriculum Project. Final Report.

    ERIC Educational Resources Information Center

    Toledo Public Schools, OH.

    This document describes a demonstration program that developed secondary level competency-based instructional materials for industrial automation mechanics. Program activities included task list compilation, instructional materials research, learning activity packet (LAP) development, construction of lab elements, system implementation,…

  15. A Multiple Agent Model of Human Performance in Automated Air Traffic Control and Flight Management Operations

    NASA Technical Reports Server (NTRS)

    Corker, Kevin; Pisanich, Gregory; Condon, Gregory W. (Technical Monitor)

    1995-01-01

    A predictive model of human operator performance (flight crew and air traffic control (ATC)) has been developed and applied in order to evaluate the impact of automation developments in flight management and air traffic control. The model is used to predict the performance of a two-person flight crew and the ATC operators generating and responding to clearances aided by the Center TRACON Automation System (CTAS). The purpose of the modeling is to support evaluation and design of automated aids for flight management and airspace management, and to predict required changes in both air and ground procedures in response to advancing automation in both domains. Additional information is contained in the original extended abstract.

  16. Symmetric model of compressible granular mixtures with permeable interfaces

    NASA Astrophysics Data System (ADS)

    Saurel, Richard; Le Martelot, Sébastien; Tosello, Robert; Lapébie, Emmanuel

    2014-12-01

    Compressible granular materials are involved in many applications, some of them related to energetic porous media. Gas permeation effects are important during their compaction stage, as well as during their eventual chemical decomposition. Also, many situations involve porous media separated from pure fluids by two-phase interfaces. It is thus important to develop theoretical and numerical formulations to deal with granular materials in the presence of both two-phase interfaces and gas permeation effects. A similar topic was addressed for fluid mixtures and interfaces with the Discrete Equations Method (DEM) [R. Abgrall and R. Saurel, "Discrete equations for physical and numerical compressible multiphase mixtures," J. Comput. Phys. 186(2), 361-396 (2003)], but it seemed impossible to extend this approach to granular media, as intergranular stress [K. K. Kuo, V. Yang, and B. B. Moore, "Intragranular stress, particle-wall friction and speed of sound in granular propellant beds," J. Ballist. 4(1), 697-730 (1980)] and the associated configuration energy [J. B. Bdzil, R. Menikoff, S. F. Son, A. K. Kapila, and D. S. Stewart, "Two-phase modeling of deflagration-to-detonation transition in granular materials: A critical examination of modeling issues," Phys. Fluids 11, 378 (1999)] were present with significant effects. An approach to deal with fluid-porous media interfaces was derived in Saurel et al. ["Modelling dynamic and irreversible powder compaction," J. Fluid Mech. 664, 348-396 (2010)], but its validity was restricted to weak velocity disequilibrium only. Thanks to a deeper analysis, the DEM is successfully extended to granular media modelling in the present paper. It results in an enhanced version of the Baer and Nunziato ["A two-phase mixture theory for the deflagration-to-detonation transition (DDT) in reactive granular materials," Int. J. Multiphase Flow 12(6), 861-889 (1986)] model, as symmetry of the formulation is now preserved. Several computational examples are

  17. Thermal Edge-Effects Model for Automated Tape Placement of Thermoplastic Composites

    NASA Technical Reports Server (NTRS)

    Costen, Robert C.

    2000-01-01

    Two-dimensional thermal models for automated tape placement (ATP) of thermoplastic composites neglect the diffusive heat transport that occurs between the newly placed tape and the cool substrate beside it. Such lateral transport can cool the tape edges prematurely and weaken the bond. The three-dimensional, steady state, thermal transport equation is solved by the Green's function method for a tape of finite width being placed on an infinitely wide substrate. The isotherm for the glass transition temperature on the weld interface is used to determine the distance inward from the tape edge that is prematurely cooled, called the cooling incursion Delta a. For the Langley ATP robot, Delta a = 0.4 mm for a unidirectional lay-up of PEEK/carbon fiber composite, and Delta a = 1.2 mm for an isotropic lay-up. A formula for Delta a is developed and applied to a wide range of operating conditions. A surprise finding is that Delta a need not decrease as the Peclet number Pe becomes very large, where Pe is the dimensionless ratio of inertial to diffusive heat transport. Conformable rollers that increase the consolidation length would also increase Delta a, unless other changes are made, such as proportionally increasing the material speed. To compensate for premature edge cooling, the thermal input could be extended past the tape edges by the amount Delta a. This method should help achieve uniform weld strength and crystallinity across the width of the tape.
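
    The Peclet number referred to above can be illustrated with a one-line computation, Pe = v L / α, the ratio of advected to diffused heat transport. The operating values below are placeholders for illustration, not the Langley ATP robot's actual parameters.

```python
# Illustrative Peclet number for a tape-placement head: Pe = v * L / alpha,
# with v the material speed, L a consolidation length scale and alpha the
# thermal diffusivity of the composite. All values below are placeholders.

def peclet(speed_m_s, length_m, diffusivity_m2_s):
    return speed_m_s * length_m / diffusivity_m2_s

pe = peclet(speed_m_s=0.05, length_m=0.01, diffusivity_m2_s=5e-7)
```

    Large Pe means heat is carried along with the tape much faster than it diffuses sideways, which is why the lateral edge cooling analyzed in the paper is a boundary effect rather than a bulk one.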

  18. Modelling molecule-surface interactions--an automated quantum-classical approach using a genetic algorithm.

    PubMed

    Herbers, Claudia R; Johnston, Karen; van der Vegt, Nico F A

    2011-06-14

    We present an automated and efficient method to develop force fields for molecule-surface interactions. A genetic algorithm (GA) is used to parameterise a classical force field so that the classical adsorption energy landscape of a molecule on a surface matches the corresponding landscape from density functional theory (DFT) calculations. The procedure performs a sophisticated search in the parameter phase space and converges very quickly. The method is capable of fitting a significant number of structures and corresponding adsorption energies. Water on a ZnO(0001) surface was chosen as a benchmark system, but the method is implemented in a flexible way and can be applied to any system of interest. In the present case, pairwise Lennard-Jones (LJ) and Coulomb potentials are used to describe the molecule-surface interactions. In the course of the fitting procedure, the LJ parameters are refined in order to reproduce the adsorption energy landscape. The classical model is capable of describing a wide range of energies, which is essential for a realistic description of a fluid-solid interface. PMID:21594260
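
    The fitting idea can be sketched with a toy genetic algorithm that searches Lennard-Jones parameters (epsilon, sigma) so that classical pair energies match a set of reference energies standing in for DFT data. This is a sketch of the general technique, with invented data and GA settings, not the authors' implementation.

```python
# Toy GA fit of Lennard-Jones parameters to synthetic "DFT" reference energies.
import random

def lj(r, eps, sigma):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

distances = [0.9, 1.0, 1.1, 1.3, 1.6]
reference = [lj(r, 1.0, 1.0) for r in distances]   # synthetic targets

def fitness(params):
    eps, sigma = params
    return -sum((lj(r, eps, sigma) - e) ** 2 for r, e in zip(distances, reference))

random.seed(0)
pop = [(random.uniform(0.1, 2.0), random.uniform(0.5, 1.5)) for _ in range(40)]
for _ in range(60):                                 # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                              # elitist selection
    pop = parents + [
        (max(0.01, p[0] + random.gauss(0, 0.05)),   # mutate epsilon
         max(0.1, p[1] + random.gauss(0, 0.05)))    # mutate sigma
        for p in random.choices(parents, k=30)
    ]
best = max(pop, key=fitness)
```

    A production fit would add crossover, per-atom-type parameters and many adsorption geometries, but the select-mutate-evaluate loop is the same.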

  19. Diffuse interface modeling of a radial vapor bubble collapse

    NASA Astrophysics Data System (ADS)

    Magaletti, Francesco; Marino, Luca; Massimo Casciola, Carlo

    2015-12-01

    A diffuse interface model is exploited to study in detail the dynamics of a cavitating vapor bubble, including phase change, transition to supercritical conditions, shock wave propagation and thermal conduction. The numerical experiments show that the actual dynamics are a sequence of collapses and rebounds, demonstrating the importance of nonequilibrium phase changes. In particular, the transition to supercritical conditions avoids full condensation and leads to shock wave emission after the collapse and to successive bubble rebounds.

  20. Bacterial Adhesion to Hexadecane (Model NAPL)-Water Interfaces

    NASA Astrophysics Data System (ADS)

    Ghoshal, S.; Zoueki, C. R.; Tufenkji, N.

    2009-05-01

    The rates of biodegradation of NAPLs have been shown to be influenced by the adhesion of hydrocarbon-degrading microorganisms as well as their proximity to the NAPL-water interface. Several studies provide evidence for bacterial adhesion or biofilm formation at alkane- or crude oil-water interfaces, but there is a significant knowledge gap in our understanding of the processes that influence the initial adhesion of bacteria onto NAPL-water interfaces. In this study, bacterial adhesion to hexadecane and to a series of NAPLs comprised of hexadecane amended with toluene and/or with asphaltenes and resins, the surface-active fractions of crude oils, was examined using a Microbial Adhesion to Hydrocarbons (MATH) assay. The microorganisms employed were Mycobacterium kubicae, Pseudomonas aeruginosa and Pseudomonas putida, which are hydrocarbon degraders or soil microorganisms. MATH assays as well as electrophoretic mobility measurements of the bacterial cells and the NAPL droplet surfaces in aqueous solutions were conducted at three solution pHs (4, 6 and 7). Asphaltenes and resins were shown to generally decrease microbial adhesion. Results of the MATH assay were not in qualitative agreement with theoretical predictions of bacteria-hydrocarbon interactions based on the extended Derjaguin-Landau-Verwey-Overbeek (XDLVO) model of the free energy of interaction between the cell and NAPL droplets. In this model the free energy of interaction between two colloidal particles is predicted based on electrical double layer, van der Waals and hydrophobic forces. It is likely that steric repulsion between bacteria and NAPL surfaces, caused by biopolymers on bacterial surfaces and asphaltenes and resins at the NAPL-water interface, contributed to the decreased adhesion compared to that predicted by the XDLVO model.
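
    The XDLVO decomposition described above can be sketched as a sum of three distance-dependent terms: electrical double layer, van der Waals, and Lewis acid-base (hydrophobic). The functional forms and constants below are simplified placeholders for illustration, not the parameterization used in the study.

```python
# Schematic XDLVO-style energy profile: total interaction energy as the sum
# of an exponentially screened EDL repulsion, an attractive van der Waals
# term, and an exponentially decaying acid-base (hydrophobic) term.
# All amplitudes and decay lengths below are invented for illustration.
import math

def edl(h, amp=2.0, kappa=1.0):          # screened electrostatic repulsion
    return amp * math.exp(-kappa * h)

def vdw(h, hamaker=0.5):                 # attractive van der Waals term
    return -hamaker / (6.0 * h)

def acid_base(h, ab0=-1.0, lam=0.6):     # hydrophobic (acid-base) attraction
    return ab0 * math.exp(-h / lam)

def xdlvo(h):
    return edl(h) + vdw(h) + acid_base(h)

# Energy profile over a range of separations h
profile = [xdlvo(0.5 + 0.1 * i) for i in range(30)]
```

    Scanning such a profile for an energy barrier near contact is the usual way the model is used to predict whether adhesion is favorable.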

  1. Automated Eukaryotic Gene Structure Annotation Using EVidenceModeler and the Program to Assemble Spliced Alignments

    SciTech Connect

    Haas, B J; Salzberg, S L; Zhu, W; Pertea, M; Allen, J E; Orvis, J; White, O; Buell, C R; Wortman, J R

    2007-12-10

    EVidenceModeler (EVM) is presented as an automated eukaryotic gene structure annotation tool that reports eukaryotic gene structures as a weighted consensus of all available evidence. EVM, when combined with the Program to Assemble Spliced Alignments (PASA), yields a comprehensive, configurable annotation system that predicts protein-coding genes and alternatively spliced isoforms. Our experiments on both rice and human genome sequences demonstrate that EVM produces automated gene structure annotation approaching the quality of manual curation.
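
    The weighted-consensus idea can be illustrated with a toy per-position vote: several evidence tracks (e.g. ab initio predictions, alignments) each carry a source weight, and positions whose weighted support passes a threshold are called coding. The tracks, weights and scoring rule are invented for illustration; this is not EVM's actual algorithm, which scores complete gene structures rather than single bases.

```python
# Toy weighted consensus over evidence tracks: each source votes for a set
# of coding positions with a source-specific weight.

def weighted_consensus(tracks, weights, length, threshold):
    """tracks: {source: set of coding positions}; returns called positions."""
    called = set()
    for pos in range(length):
        score = sum(w for src, w in weights.items()
                    if pos in tracks.get(src, set()))
        if score >= threshold:
            called.add(pos)
    return called

tracks = {
    "ab_initio": set(range(10, 40)),
    "protein_aln": set(range(15, 35)),
    "est_aln": set(range(12, 38)),
}
weights = {"ab_initio": 1.0, "protein_aln": 5.0, "est_aln": 3.0}
coding = weighted_consensus(tracks, weights, length=50, threshold=4.0)
```

    Positions supported only by the low-weight ab initio track fall below the threshold, while regions corroborated by alignments are retained.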

  2. ShowFlow: A practical interface for groundwater modeling

    SciTech Connect

    Tauxe, J.D.

    1990-12-01

    ShowFlow was created to provide a user-friendly, intuitive environment for researchers and students who use computer modeling software. What traditionally has been a workplace available only to those familiar with command-line based computer systems is now within reach of almost anyone interested in the subject of modeling. In the case of this edition of ShowFlow, the user can easily experiment with simulations using the steady-state Gaussian plume groundwater pollutant transport model SSGPLUME, though ShowFlow can be rewritten to provide a similar interface for any computer model. Included in this thesis is all the source code for both the ShowFlow application for Microsoft® Windows™ and the SSGPLUME model, a User's Guide, and a Developer's Guide for converting ShowFlow to run other model programs. 18 refs., 13 figs.

  3. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    NASA Astrophysics Data System (ADS)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
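
    The projection step of such a transformation can be sketched simply: from a global (collaborative) process listing message exchanges between roles, derive one enterprise's interface process by keeping only the interactions it participates in, relabelled as send/receive. The data layout and names are invented for illustration, not the authors' MDA transformation rules.

```python
# Sketch: project a global collaborative process (sender, receiver, message)
# onto one role's interface process, keeping only that role's interactions.

def interface_process(collaborative, role):
    view = []
    for sender, receiver, msg in collaborative:
        if sender == role:
            view.append(("send", msg, receiver))      # outgoing message
        elif receiver == role:
            view.append(("receive", msg, sender))     # incoming message
    return view

process = [
    ("Buyer", "Seller", "PurchaseOrder"),
    ("Seller", "Buyer", "OrderConfirmation"),
    ("Seller", "Carrier", "ShippingRequest"),
]
buyer_view = interface_process(process, "Buyer")
```

    Because each role's view is derived mechanically from the same global model, the resulting interface processes are consistent with each other by construction, which is the interoperability guarantee the method aims for.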

  4. Language Model Applications to Spelling with Brain-Computer Interfaces

    PubMed Central

    Mora-Cortes, Anderson; Manyakov, Nikolay V.; Chumerin, Nikolay; Van Hulle, Marc M.

    2014-01-01

    Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems in conjunction with other AAL technologies. PMID:24675760

  5. Language model applications to spelling with Brain-Computer Interfaces.

    PubMed

    Mora-Cortes, Anderson; Manyakov, Nikolay V; Chumerin, Nikolay; Van Hulle, Marc M

    2014-03-26

    Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems in conjunction with other AAL technologies.
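
    The word-completion idea used in BCI spellers can be sketched in a few lines: given a typed prefix, rank candidate completions by corpus frequency. The tiny "corpus" below is invented for illustration; real spellers use large language models and integrate the ranking with the classifier's letter probabilities.

```python
# Minimal prefix-based word completion ranked by corpus frequency.
from collections import Counter

corpus = "the brain computer interface lets the user spell the word brain".split()
freq = Counter(corpus)                       # unigram counts

def complete(prefix, k=3):
    """Return up to k completions of `prefix`, most frequent first."""
    cands = [w for w in freq if w.startswith(prefix)]
    return sorted(cands, key=lambda w: (-freq[w], w))[:k]

suggestions = complete("br")
```

    In a speller, offering such completions after a few selections reduces the number of (slow, error-prone) BCI selections needed per word.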

  6. Developing a laser shockwave model for characterizing diffusion bonded interfaces

    SciTech Connect

    Lacy, Jeffrey M. Smith, James A. Rabin, Barry H.

    2015-03-31

    The US National Nuclear Security Agency has a Global Threat Reduction Initiative (GTRI) with the goal of reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU in high-power research reactors. The new LEU fuel is a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to support the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, because the deposition of laser energy into the containment layer on a specimen's surface is intractably complex, the shock wave energy is inferred from the surface velocity measured on the backside of the fuel plate and the depth of the impression left on the surface by the high pressure plasma pulse created by the shock laser. To help quantify the stresses generated at the interfaces, a finite element method (FEM) model is being utilized. This paper will report on initial efforts to develop and validate the model by comparing numerical and experimental results for back surface velocities and front surface depressions in a single aluminum plate representative of the fuel cladding.

  7. A diffuse interface model of grain boundary faceting

    NASA Astrophysics Data System (ADS)

    Abdeljawad, Fadi; Medlin, Douglas; Zimmerman, Jonathan; Hattar, Khalid; Foiles, Stephen

    Incorporating anisotropy into thermodynamic treatments of interfaces dates back over a century. For a given orientation of two abutting grains in a pure metal, depressions in the grain boundary (GB) energy may exist as a function of GB inclination, defined by the plane normal. An initially flat GB may therefore facet, resulting in a hill-and-valley structure. Herein, we present a diffuse interface model of GB faceting that is capable of capturing anisotropic GB energies and mobilities, and of accounting for the excess energy due to facet junctions and their non-local interactions. The hallmark of our approach is the ability to independently examine the role of each of the interface properties on the faceting behavior. As a demonstration, we consider the Σ5 ⟨001⟩ tilt GB in iron, where faceting along the {310} and {210} planes was experimentally observed. Linear stability analysis and numerical examples highlight the role of junction energy and associated non-local interactions on the resulting facet length scales. On the whole, our modeling approach provides a general framework to examine the spatio-temporal evolution of highly anisotropic GBs in polycrystalline metals. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  8. Developing a laser shockwave model for characterizing diffusion bonded interfaces

    NASA Astrophysics Data System (ADS)

    Lacy, Jeffrey M.; Smith, James A.; Rabin, Barry H.

    2015-03-01

    The US National Nuclear Security Agency has a Global Threat Reduction Initiative (GTRI) with the goal of reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU in high-power research reactors. The new LEU fuel is a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to support the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, because the deposition of laser energy into the containment layer on a specimen's surface is intractably complex, the shock wave energy is inferred from the surface velocity measured on the backside of the fuel plate and the depth of the impression left on the surface by the high pressure plasma pulse created by the shock laser. To help quantify the stresses generated at the interfaces, a finite element method (FEM) model is being utilized. This paper will report on initial efforts to develop and validate the model by comparing numerical and experimental results for back surface velocities and front surface depressions in a single aluminum plate representative of the fuel cladding.

  9. An automated method for high-definition transcranial direct current stimulation modeling.

    PubMed

    Huang, Yu; Su, Yuzhuo; Rorden, Christopher; Dmochowski, Jacek; Datta, Abhishek; Parra, Lucas C

    2012-01-01

    Targeted transcranial stimulation with electric currents requires accurate models of the current flow from scalp electrodes to the human brain. Idiosyncratic anatomy of individual brains and heads leads to significant variability in such current flows across subjects, thus, necessitating accurate individualized head models. Here we report on an automated processing chain that computes current distributions in the head starting from a structural magnetic resonance image (MRI). The main purpose of automating this process is to reduce the substantial effort currently required for manual segmentation, electrode placement, and solving of finite element models. In doing so, several weeks of manual labor were reduced to no more than 4 hours of computation time and minimal user interaction, while current-flow results for the automated method deviated by less than 27.9% from the manual method. Key facilitating factors are the addition of three tissue types (skull, scalp and air) to a state-of-the-art automated segmentation process, morphological processing to correct small but important segmentation errors, and automated placement of small electrodes based on easily reproducible standard electrode configurations. We anticipate that such an automated processing will become an indispensable tool to individualize transcranial direct current stimulation (tDCS) therapy.
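
    A deviation figure like the one quoted above can be computed as a relative difference between two field maps, e.g. automated vs. manual pipeline. The exact metric used in the paper is not reproduced here; the normalized Euclidean difference below is an illustrative choice on invented data.

```python
# Relative deviation between two flattened field maps: ||a - b|| / ||b||.

def relative_deviation(auto_map, manual_map):
    num = sum((a - b) ** 2 for a, b in zip(auto_map, manual_map)) ** 0.5
    den = sum(b ** 2 for b in manual_map) ** 0.5
    return num / den

manual = [1.0, 2.0, 3.0, 4.0]    # e.g. field magnitudes from manual pipeline
auto = [1.1, 1.9, 3.2, 3.8]      # same voxels from the automated pipeline
dev = relative_deviation(auto, manual)
```

    Reporting such a single scalar makes pipelines comparable across subjects, at the cost of hiding where in the head the maps disagree.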

  10. A Model of Process-Based Automation: Cost and Quality Implications in the Medication Management Process

    ERIC Educational Resources Information Center

    Spaulding, Trent Joseph

    2011-01-01

    The objective of this research is to understand how a set of systems, as defined by the business process, creates value. The three studies contained in this work develop the model of process-based automation. The model states that complementarities among systems are specified by handoffs in the business process. The model also provides theory to…

  11. Finite element modeling of the contact interface between trans-tibial residual limb and prosthetic socket.

    PubMed

    Lee, Winson C C; Zhang, Ming; Jia, Xiaohong; Cheung, Jason T M

    2004-10-01

    The finite element method has been identified as a useful tool to understand the load transfer mechanics between a residual limb and its prosthetic socket. This paper proposes a new practical approach to modeling the contact interface that considers the friction/slip conditions and the pre-stresses applied to the limb within a rectified socket. The residual limb and socket were modeled as two separate structures and their interactions were simulated using automated contact methods. Some regions of the limb penetrated into the socket because of socket modification. In the first step of the simulation, the penetrated limb surface was moved onto the inner surface of the socket and the pre-stresses were predicted. In the subsequent loading step, the pre-stresses were retained and loads were applied at the knee joint to simulate the loading during the stance phase of gait. Comparisons were made between the model using the proposed approach and a model built on the simplifying assumption that the shapes of the limb and the socket were the same, which ignores pre-stress. It was found that, in the model with the simplifying assumption, peak normal and shear stresses over the regions where socket undercuts were made decreased, while stress values over other regions increased.

  12. Towards an Improved Pilot-Vehicle Interface for Highly Automated Aircraft: Evaluation of the Haptic Flight Control System

    NASA Technical Reports Server (NTRS)

    Schutte, Paul; Goodrich, Kenneth; Williams, Ralph

    2012-01-01

    The control automation and interaction paradigm (e.g., manual, autopilot, flight management system) used on virtually all large highly automated aircraft has long been an exemplar of breakdowns in human factors and human-centered design. An alternative paradigm is the Haptic Flight Control System (HFCS) that is part of NASA Langley Research Center's Naturalistic Flight Deck Concept. The HFCS uses only stick and throttle for easily and intuitively controlling the actual flight of the aircraft without losing any of the efficiency and operational benefits of the current paradigm. Initial prototypes of the HFCS are being evaluated and this paper describes one such evaluation. In this evaluation we examined claims regarding improved situation awareness, appropriate workload, graceful degradation, and improved pilot acceptance. Twenty-four instrument-rated pilots were instructed to plan and fly four different flights in a fictitious airspace using a moderate fidelity desktop simulation. Three different flight control paradigms were tested: Manual control, Full Automation control, and a simplified version of the HFCS. Dependent variables included both subjective (questionnaire) and objective (SAGAT) measures of situation awareness, workload (NASA-TLX), secondary task performance, time to recognize automation failures, and pilot preference (questionnaire). The results showed a statistically significant advantage for the HFCS in a number of measures. Results that were not statistically significant still favored the HFCS. The results suggest that the HFCS does offer an attractive and viable alternative to the tactical components of today's FMS/autopilot control system. The paper describes further studies that are planned to continue to evaluate the HFCS.

  13. Interface Management for a NASA Flight Project Using Model-Based Systems Engineering (MBSE)

    NASA Technical Reports Server (NTRS)

    Vipavetz, Kevin; Shull, Thomas A.; Infeld, Samatha; Price, Jim

    2016-01-01

    The goal of interface management is to identify, define, control, and verify interfaces; ensure compatibility; provide an efficient system development; be on time and within budget; while meeting stakeholder requirements. This paper will present a successful seven-step approach to interface management used in several NASA flight projects. The seven-step approach using Model Based Systems Engineering will be illustrated by interface examples from the Materials International Space Station Experiment-X (MISSE-X) project. The MISSE-X was being developed as an International Space Station (ISS) external platform for space environmental studies, designed to advance the technology readiness of materials and devices critical for future space exploration. Emphasis will be given to best practices covering key areas such as interface definition, writing good interface requirements, utilizing interface working groups, developing and controlling interface documents, handling interface agreements, the use of shadow documents, the importance of interface requirement ownership, interface verification, and product transition.

  14. A biological model for controlling interface growth and morphology.

    SciTech Connect

    Hoyt, Jeffrey John; Holm, Elizabeth Ann

    2004-01-01

Biological systems create proteins that perform tasks more efficiently and precisely than conventional chemicals. For example, many plants and animals produce proteins to control the freezing of water. Biological antifreeze proteins (AFPs) inhibit the solidification process, even below the freezing point. These molecules bond to specific sites at the ice/water interface and are theorized to suppress solidification chemically or geometrically. In this project, we investigated the theoretical and experimental data on AFPs and performed analyses to understand the unique physics of AFPs. The experimental literature was analyzed to determine chemical mechanisms and effects of protein bonding at ice surfaces, specifically thermodynamic freezing point depression, suppression of ice nucleation, decrease in dendrite growth kinetics, solute drag on the moving solid/liquid interface, and steric pinning of the ice interface. Steric pinning was found to be the most likely candidate to explain experimental results, including freezing point depression, growth morphologies, and thermal hysteresis. A new steric pinning model was developed and applied to AFPs, with excellent quantitative results. Understanding biological antifreeze mechanisms could enable important medical and engineering applications, but considerable future work will be necessary.

  15. Electrochemical Stability of Model Polymer Electrolyte/Electrode Interfaces

    NASA Astrophysics Data System (ADS)

    Hallinan, Daniel; Yang, Guang

    2015-03-01

Polymer electrolytes are promising materials for high energy density rechargeable batteries. However, typical polymer electrolytes are not electrochemically stable at the charging voltage of advanced positive electrode materials. Although not yet reported in the literature, decomposition is expected to adversely affect the performance and lifetime of polymer-electrolyte-based batteries. In an attempt to better understand polymer electrolyte oxidation and design stable polymer electrolyte/positive electrode interfaces, we are studying electron transfer across model interfaces comprising gold nanoparticles and organic protecting ligands assembled into monolayer films. Gold nanoparticles provide large interfacial surface area, yielding a measurable electrochemical signal. They are inert and hence non-reactive with most polymer electrolytes and lithium salts. The surface can be easily modified with ligands of different chemistry and molecular weight. In our study, poly(ethylene oxide) (PEO) will serve as the polymer electrolyte and lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) will be the lithium salt. The effect of ligand type and molecular weight on both optical and electrical properties of the gold nanoparticle film will be presented. Finally, the electrochemical stability of the electrode/electrolyte interface and its dependence on interfacial properties will be presented.

  16. A diffuse interface model of grain boundary faceting

    NASA Astrophysics Data System (ADS)

    Abdeljawad, F.; Medlin, D. L.; Zimmerman, J. A.; Hattar, K.; Foiles, S. M.

    2016-06-01

Interfaces, free or internal, greatly influence the physical properties and stability of materials microstructures. Of particular interest are the processes that occur due to anisotropic interfacial properties. In the case of grain boundaries (GBs) in metals, several experimental observations revealed that an initially flat GB may facet into hill-and-valley structures with well-defined planes and corners/edges connecting them. Herein, we present a diffuse interface model that is capable of accounting for strongly anisotropic GB properties and capturing the formation of hill-and-valley morphologies. The hallmark of our approach is the ability to independently examine the various factors affecting GB faceting and subsequent facet coarsening. More specifically, our formulation incorporates higher order expansions to account for the excess energy due to facet junctions and their non-local interactions. As a demonstration of the modeling capability, we consider the Σ5 <001> tilt GB in body-centered-cubic iron, where faceting along the {210} and {310} planes was experimentally observed. Atomistic calculations were utilized to determine the inclination-dependent GB energy, which was then used as an input in our model. Linear stability analysis and simulation results highlight the role of junction energy and associated non-local interactions on the resulting facet length scales. Broadly speaking, our modeling approach provides a general framework to examine the microstructural stability of polycrystalline systems with highly anisotropic GBs.

  17. A symbolic/subsymbolic interface protocol for cognitive modeling

    PubMed Central

    Simen, Patrick; Polk, Thad

    2009-01-01

    Researchers studying complex cognition have grown increasingly interested in mapping symbolic cognitive architectures onto subsymbolic brain models. Such a mapping seems essential for understanding cognition under all but the most extreme viewpoints (namely, that cognition consists exclusively of digitally implemented rules; or instead, involves no rules whatsoever). Making this mapping reduces to specifying an interface between symbolic and subsymbolic descriptions of brain activity. To that end, we propose parameterization techniques for building cognitive models as programmable, structured, recurrent neural networks. Feedback strength in these models determines whether their components implement classically subsymbolic neural network functions (e.g., pattern recognition), or instead, logical rules and digital memory. These techniques support the implementation of limited production systems. Though inherently sequential and symbolic, these neural production systems can exploit principles of parallel, analog processing from decision-making models in psychology and neuroscience to explain the effects of brain damage on problem solving behavior. PMID:20711520
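The feedback-strength claim above can be illustrated with a minimal sketch (a hypothetical toy, not the authors' parameterization): a single logistic unit whose self-connection weight determines whether it behaves as a stimulus-driven, decaying detector (the subsymbolic regime) or as a bistable latch that retains a briefly presented input like one bit of digital memory (the symbolic regime).

```python
import math

def simulate_unit(w_self, theta=4.0, pulse=6.0, pulse_steps=5, total_steps=30):
    """Iterate a single logistic unit x <- sigmoid(w_self*x - theta + input).

    A brief input pulse is applied for the first `pulse_steps` steps;
    the returned value is the activation long after the pulse ends.
    """
    x = 0.0
    for t in range(total_steps):
        drive = pulse if t < pulse_steps else 0.0
        x = 1.0 / (1.0 + math.exp(-(w_self * x - theta + drive)))
    return x

# Weak self-feedback: activation decays once the input is removed.
weak = simulate_unit(w_self=2.0)

# Strong self-feedback: the unit is bistable and latches the pulse.
strong = simulate_unit(w_self=8.0)
```

With `w_self` below the bistability threshold the unit relaxes back to its low fixed point after the pulse; above it, a second stable fixed point appears and the unit "remembers" the pulse indefinitely, which is the sense in which feedback strength selects between pattern-matching and rule-like digital behavior.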

  18. Modeling the Energy Use of a Connected and Automated Transportation System (Poster)

    SciTech Connect

    Gonder, J.; Brown, A.

    2014-07-01

    Early research points to large potential impacts of connected and automated vehicles (CAVs) on transportation energy use - dramatic savings, increased use, or anything in between. Due to a lack of suitable data and integrated modeling tools to explore these complex future systems, analyses to date have relied on simple combinations of isolated effects. This poster proposes a framework for modeling the potential energy implications from increasing penetration of CAV technologies and for assessing technology and policy options to steer them toward favorable energy outcomes. Current CAV modeling challenges include estimating behavior change, understanding potential vehicle-to-vehicle interactions, and assessing traffic flow and vehicle use under different automation scenarios. To bridge these gaps and develop a picture of potential future automated systems, NREL is integrating existing modeling capabilities with additional tools and data inputs to create a more fully integrated CAV assessment toolkit.

  19. Groundwater modeling and remedial optimization design using graphical user interfaces

    SciTech Connect

    Deschaine, L.M.

    1997-05-01

The ability to accurately predict the behavior of chemicals in groundwater systems under natural flow circumstances or remedial screening and design conditions is the cornerstone of the environmental industry. The ability to do this efficiently, and to effectively communicate the information to the client and regulators, is what differentiates effective consultants from ineffective consultants. Recent advances in groundwater modeling graphical user interfaces (GUIs) are doing for numerical modeling what Windows™ did for DOS™. A GUI facilitates both the modeling process and the information exchange. This Test Drive evaluates the performance of two GUIs--Groundwater Vistas and ModIME--on an actual groundwater model calibration and remedial design optimization project. In the early days of numerical modeling, data input consisted of large arrays of numbers that required intensive labor to input and troubleshoot. Model calibration was also manual, as was interpreting the reams of computer output for each of the tens or hundreds of simulations required to calibrate and perform optimal groundwater remedial design. During this period, the majority of the modeler's effort (and budget) was spent just getting the model running, as opposed to solving the environmental challenge at hand. GUIs take the majority of the grunt work out of the modeling process, thereby allowing the modeler to focus on designing optimal solutions.

  20. Individual differences in response to automation: the five factor model of personality.

    PubMed

    Szalma, James L; Taylor, Grant S

    2011-06-01

This study examined the relationship of operator personality (Five Factor Model) and characteristics of the task and of adaptive automation (reliability, and adaptiveness, i.e., whether the automation was well-matched to changes in task demand) to operator performance, workload, stress, and coping. This represents the first investigation of how the Five Factors relate to human response to automation. One hundred sixty-one college students experienced either 75% or 95% reliable automation with task loads of either two or four displays to be monitored. The task required threat detection in a simulated uninhabited ground vehicle (UGV) task. Task demand exerted the strongest influence on outcome variables. Automation characteristics did not directly impact workload or stress, but effects did emerge in the context of trait-task interactions that varied as a function of the dimension of workload and stress. The pattern of relationships of traits to dependent variables was generally moderated by at least one task factor. Neuroticism was related to poorer performance in some conditions, and all five traits were associated with at least one measure of workload and stress. Neuroticism generally predicted increased workload and stress, and the other traits predicted decreased levels of these states. However, in the case of the relation of Extraversion and Agreeableness to Worry, Frustration, and avoidant coping, the direction of effects varied across task conditions. The results support incorporation of individual differences into automation design by identifying the relevant person characteristics and using the information to determine what functions to automate and the form and level of automation.

  2. Spherical wave reflection in layered media with rough interfaces: Three-dimensional modeling.

    PubMed

    Pinson, Samuel; Cordioli, Julio; Guillon, Laurent

    2016-08-01

In the context of sediment characterization, layer interface roughnesses may be responsible for sound-speed profile measurement uncertainties. To study the roughness influence, a three-dimensional (3D) modeling of a layered seafloor with rough interfaces is necessary. Although roughness scattering has an extensive literature, 3D modeling of spherical wave reflection on rough interfaces is generally limited to a single interface (using the Kirchhoff-Helmholtz integral) or to computationally expensive techniques (the finite difference or finite element method). In this work, it is demonstrated that the wave reflection over a layered medium with irregular interfaces can be modeled as a sum of integrals over each interface. The main approximations of the method are the tangent-plane approximation, the Born approximation (multiple reflections between interfaces are neglected), and the flat-interface approximation for the transmitted waves into the sediment. The integration over layer interfaces results in a method with reasonable computation cost. PMID:27586741

  3. A Binary Programming Approach to Automated Test Assembly for Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Finkelman, Matthew D.; Kim, Wonsuk; Roussos, Louis; Verschoor, Angela

    2010-01-01

    Automated test assembly (ATA) has been an area of prolific psychometric research. Although ATA methodology is well developed for unidimensional models, its application alongside cognitive diagnosis models (CDMs) is a burgeoning topic. Two suggested procedures for combining ATA and CDMs are to maximize the cognitive diagnostic index and to use a…

  4. Data for Environmental Modeling (D4EM): Background and Applications of Data Automation

    EPA Science Inventory

    The Data for Environmental Modeling (D4EM) project demonstrates the development of a comprehensive set of open source software tools that overcome obstacles to accessing data needed by automating the process of populating model input data sets with environmental data available fr...

  5. General Models for Automated Essay Scoring: Exploring an Alternative to the Status Quo

    ERIC Educational Resources Information Center

    Kelly, P. Adam

    2005-01-01

    Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same…

  6. The electrical behavior of GaAs-insulator interfaces - A discrete energy interface state model

    NASA Technical Reports Server (NTRS)

    Kazior, T. E.; Lagowski, J.; Gatos, H. C.

    1983-01-01

    The relationship between the electrical behavior of GaAs Metal Insulator Semiconductor (MIS) structures and the high density discrete energy interface states (0.7 and 0.9 eV below the conduction band) was investigated utilizing photo- and thermal emission from the interface states in conjunction with capacitance measurements. It was found that all essential features of the anomalous behavior of GaAs MIS structures, such as the frequency dispersion and the C-V hysteresis, can be explained on the basis of nonequilibrium charging and discharging of the high density discrete energy interface states.

  7. Automated side-chain model building and sequence assignment by template matching

    SciTech Connect

    Terwilliger, Thomas C.

    2003-01-01

    A method for automated macromolecular side-chain model building and for aligning the sequence to the map is described. An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
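The sequence-alignment step can be caricatured in a few lines (a hypothetical toy, not the RESOLVE implementation): given per-position probabilities over residue types obtained from template matching, score every offset of the main-chain segment against the protein sequence and normalize to posterior probabilities under a uniform prior, keeping only high-confidence matches.

```python
def alignment_posteriors(prob_matrix, sequence, floor=0.005):
    """prob_matrix[i] maps one-letter residue codes to the estimated
    probability that this residue type occupies main-chain position i.
    Returns the posterior probability of each alignment offset,
    assuming a uniform prior over offsets."""
    n = len(prob_matrix)
    likelihood = {}
    for off in range(len(sequence) - n + 1):
        p = 1.0
        for i, probs in enumerate(prob_matrix):
            p *= probs.get(sequence[off + i], floor)
        likelihood[off] = p
    z = sum(likelihood.values())
    return {off: p / z for off, p in likelihood.items()}

# Toy 3-residue segment whose density strongly suggests Gly-Ala-Ser:
matrix = [{"G": 0.9}, {"A": 0.9}, {"S": 0.9}]
posteriors = alignment_posteriors(matrix, "AAGASA")
best = max(posteriors, key=posteriors.get)  # offset 2 matches "GAS"
```

A real implementation multiplies many more per-position terms and thresholds the posterior before committing a segment to a sequence register; the sketch only shows why a single strong per-position signal concentrates the posterior on one offset.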

  8. Growth/reflectance model interface for wheat and corresponding model

    NASA Technical Reports Server (NTRS)

    Suits, G. H.; Sieron, R.; Odenweller, J.

    1984-01-01

The use of modeling to discover new and useful crop condition indicators that might be available from the Thematic Mapper, and to connect these symptoms to their biological causes in the crop, is discussed. A crop growth model was used to predict the day-to-day growth features of the crop as it responds biologically to the various environmental factors. A reflectance model was used to predict the character of the interaction of daylight with the predicted growth features. An atmospheric path radiance was added to the reflected daylight to simulate the radiance appearing at the sensor. Finally, the digitized data sent to a ground station were calculated. The crop under investigation is wheat.

  9. The Interface Between Theory and Data in Structural Equation Models

    USGS Publications Warehouse

    Grace, James B.; Bollen, Kenneth A.

    2006-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite, for representing general concepts. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling general relationships of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially reduced form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influences of suites of variables are often of interest.

  10. A robust and flexible Geospatial Modeling Interface (GMI) for environmental model deployment and evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper provides an overview of the GMI (Geospatial Modeling Interface) simulation framework for environmental model deployment and assessment. GMI currently provides access to multiple environmental models including AgroEcoSystem-Watershed (AgES-W), Nitrate Leaching and Economic Analysis 2 (NLEA...

  11. Modeling and diagnosing interface mix in layered ICF implosions

    NASA Astrophysics Data System (ADS)

    Weber, C. R.; Berzak Hopkins, L. F.; Clark, D. S.; Haan, S. W.; Ho, D. D.; Meezan, N. B.; Milovich, J. L.; Robey, H. F.; Smalyuk, V. A.; Thomas, C. A.

    2015-11-01

Mixing at the fuel-ablator interface of an inertial confinement fusion (ICF) implosion can arise from an unfavorable in-flight Atwood number between the cryogenic DT fuel and the ablator. High-Z dopant is typically added to the ablator to control the Atwood number, but recent high-density carbon (HDC) capsules have been shot at the National Ignition Facility (NIF) without this added dopant. Highly resolved post-shot modeling of these implosions shows that there was significant mixing of ablator material into the dense DT fuel. This mix lowers the fuel density and results in less overall compression, helping to explain the measured ratio of downscattered-to-primary neutrons. Future experimental designs will seek to improve this issue through adding dopant and changing the x-ray spectra with a different hohlraum wall material. To test these changes, we are designing an experimental platform to look at the growth of this mixing layer. This technique uses side-on radiography to measure the spatial extent of an embedded high-Z tracer layer near the interface. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  12. Modeling the Electrical Contact Resistance at Steel-Carbon Interfaces

    NASA Astrophysics Data System (ADS)

    Brimmo, Ayoola T.; Hassan, Mohamed I.

    2016-01-01

In the aluminum smelting industry, the electrical contact resistance at the stub-carbon (steel-carbon) interface has repeatedly been reported to be of magnitudes large enough to warrant concern. Mitigating this via finite element modeling has been the focus of a number of investigations, with the pressure- and temperature-dependent contact resistance relation frequently cited as a factor that limits the accuracy of such models. In this study, pressure- and temperature-dependent relations are derived from the most extensively cited works that have experimentally characterized the electrical contact resistance at these contacts. These relations are applied in a validated thermo-electro-mechanical finite element model used to estimate the voltage drop across a steel-carbon laboratory setup. By comparing the models' estimates of the contact electrical resistance with experimental measurements, we deduce the applicability of the different relations over a range of temperatures. The ultimate goal of this study is to apply mathematical modeling to provide pressure- and temperature-dependent relations that best describe the steel-carbon electrical contact resistance and to identify the best-fit relation at specific thermodynamic conditions.

  13. SN_GUI: a graphical user interface for snowpack modeling

    NASA Astrophysics Data System (ADS)

    Spreitzhofer, G.; Fierz, C.; Lehning, M.

    2004-10-01

    SNOWPACK is a physical snow cover model. The model not only serves as a valuable research tool, but also runs operationally on a network of high Alpine automatic weather and snow measurement sites. In order to facilitate the operation of SNOWPACK and the interpretation of the results obtained by this model, a user-friendly graphical user interface for snowpack modeling, named SN_GUI, was created. This Java-based and thus platform-independent tool can be operated in two modes, one designed to fulfill the requirements of avalanche warning services (e.g. by providing information about critical layers within the snowpack that are closely related to the avalanche activity), and the other one offering a variety of additional options satisfying the needs of researchers. The user of SN_GUI is graphically guided through the entire process of creating snow cover simulations. The starting point is the efficient creation of input parameter files for SNOWPACK, followed by the launching of SNOWPACK with a variety of parameter settings. Finally, after the successful termination of the run, a number of interactive display options may be used to visualize the model output. Among these are vertical profiles and time profiles for many parameters. Besides other features, SN_GUI allows the use of various color, time and coordinate scales, and the comparison of measured and observed parameters.

  14. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGICAL MODELING TOOL FOR WATERSHED MANAGEMENT AND LANDSCAPE ASSESSMENT

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment (http://www.epa.gov/nerlesd1/land-sci/agwa/introduction.htm and www.tucson.ars.ag.gov/agwa) tool is a GIS interface jointly developed by the U.S. Environmental Protection Agency, USDA-Agricultural Research Service, and the University ...

  15. Automating Routine Tasks in AmI Systems by Using Models at Runtime

    NASA Astrophysics Data System (ADS)

    Serral, Estefanía; Valderas, Pedro; Pelechano, Vicente

    One of the most important challenges to be confronted in Ambient Intelligent (AmI) systems is to automate routine tasks on behalf of users. In this work, we confront this challenge presenting a novel approach based on models at runtime. This approach proposes a context-adaptive task model that allows routine tasks to be specified in an understandable way for users, facilitating their participation in the specification. These tasks are described according to context, which is specified in an ontology-based context model. Both the context model and the task model are also used at runtime. The approach provides a software infrastructure capable of automating the routine tasks as they were specified in these models by interpreting them at runtime.

  16. Automated Discovery and Modeling of Sequential Patterns Preceding Events of Interest

    NASA Technical Reports Server (NTRS)

    Rohloff, Kurt

    2010-01-01

The integration of emerging data manipulation technologies has enabled a paradigm shift in practitioners' abilities to understand and anticipate events of interest in complex systems. Example events of interest include outbreaks of socio-political violence in nation-states. Rather than relying on human-centric modeling efforts that are limited by the availability of subject-matter experts (SMEs), automated data processing technologies have enabled the development of innovative automated complex system modeling and predictive analysis technologies. We introduce one such emerging modeling technology - the sequential pattern methodology. We have applied the sequential pattern methodology to automatically identify patterns of observed behavior that precede outbreaks of socio-political violence such as riots, rebellions, and coups in nation-states. The sequential pattern methodology is a groundbreaking approach to automated complex system model discovery because it generates easily interpretable patterns based on direct observations of sampled factor data for a deeper understanding of societal behaviors that is tolerant of observation noise and missing data. The discovered patterns are simple to interpret and mimic humans' identification of observed trends in temporal data. Discovered patterns also provide an automated forecasting ability: we discuss an example of using discovered patterns coupled with a rich data environment to forecast various types of socio-political violence in nation-states.
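Stripped of the rich factor data, the core idea of mining precursor patterns can be sketched as a window-based count (a hypothetical illustration, not the paper's algorithm): for each short pattern of observations, measure how often an event of interest follows within a fixed window.

```python
from collections import Counter

def precursor_scores(observations, event_indices, length=2, window=1):
    """For each contiguous pattern of `length` observations, return the
    fraction of its occurrences that were followed by an event of
    interest within `window` time steps (a crude precursor score)."""
    events = set(event_indices)
    before, total = Counter(), Counter()
    for i in range(len(observations) - length + 1):
        pat = tuple(observations[i:i + length])
        total[pat] += 1
        end = i + length  # first time step after the pattern
        if any(e in events for e in range(end, end + window)):
            before[pat] += 1
    return {pat: before[pat] / total[pat] for pat in total}

# Toy stream: events occur at indices 3 and 7; the pattern ('a', 'b')
# always immediately precedes an event, while ('c', 'd') never does.
obs = ['x', 'a', 'b', 'E', 'y', 'a', 'b', 'E', 'z', 'c', 'd', 'y']
scores = precursor_scores(obs, event_indices=[3, 7])
```

Real sequential-pattern discovery must additionally tolerate noise and missing observations (e.g. by allowing gaps inside a pattern), but the ratio computed here is the same kind of interpretable, directly observable statistic the abstract describes.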

  17. Automated volumetric grid generation for finite element modeling of human hand joints

    SciTech Connect

    Hollerbach, K.; Underhill, K.; Rainsberger, R.

    1995-02-01

    We are developing techniques for finite element analysis of human joints. These techniques need to provide high quality results rapidly in order to be useful to a physician. The research presented here increases model quality and decreases user input time by automating the volumetric mesh generation step.

  18. Evaluation of automated cell disruptor methods for oomycetous and ascomycetous model organisms

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Two automated cell disruptor-based methods for RNA extraction; disruption of thawed cells submerged in TRIzol Reagent (method QP), and direct disruption of frozen cells on dry ice (method CP), were optimized for a model oomycete, Phytophthora capsici, and compared with grinding in a mortar and pestl...

  19. Modeling Multiple Human-Automation Distributed Systems using Network-form Games

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume

    2012-01-01

    The paper describes at a high-level the network-form game framework (based on Bayes net and game theory), which can be used to model and analyze safety issues in large, distributed, mixed human-automation systems such as NextGen.

  20. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  1. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

With the increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time becomes lower with an increasing number of processes (from two to five), this gain diminishes due to the accompanying increase in demand for message-passing procedures between the master and all slave processes. Our case study demonstrates that the P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, the P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
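The master/slave work-split parameter described above can be mimicked with a toy load-balance model (an illustration under stated assumptions, not the paper's measurements): the master keeps a fraction f of the serial work plus a per-slave message-passing overhead, the slaves split the rest evenly, and the best f is found by grid search.

```python
def parallel_time(t_serial, n_proc, f_master, comm_per_slave=0.0):
    """Toy model: the master does fraction f of the work plus
    message-passing overhead; the remaining work is split evenly
    among n_proc - 1 slaves. Runtime is the slower of the two."""
    master = f_master * t_serial + comm_per_slave * (n_proc - 1)
    slave = (1.0 - f_master) * t_serial / (n_proc - 1)
    return max(master, slave)

def best_split(t_serial, n_proc, comm_per_slave=0.0, grid=1000):
    """Grid-search the master's work fraction; returns (runtime, fraction)."""
    return min((parallel_time(t_serial, n_proc, k / grid, comm_per_slave), k / grid)
               for k in range(1, grid))

# With no communication cost, an even split (f = 1/n) is optimal:
t, f = best_split(100.0, n_proc=4)
# With message-passing overhead, the master should take less than an
# even share, echoing the need to tune the distribution percentage:
t2, f2 = best_split(100.0, n_proc=4, comm_per_slave=1.0)
```

The speedup is then simply `t_serial / t`; as `n_proc` grows, the overhead term grows with it, which is the same diminishing-returns effect the case study reports.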

  2. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  3. An automated construction of error models for uncertainty quantification and model calibration

    NASA Astrophysics Data System (ADS)

    Josset, L.; Lunati, I.

    2015-12-01

    To reduce the computational cost of stochastic predictions, it is common practice to rely on approximate flow solvers (or "proxies"), which provide an inexact but computationally inexpensive response [1,2]. Error models can be constructed to correct the proxy response: based on a learning set of realizations for which both exact and proxy simulations are performed, a transformation is sought to map proxy responses into exact responses. Once the error model is constructed, a prediction of the exact response is obtained at the cost of a proxy simulation for any new realization. Despite its effectiveness [2,3], the methodology relies on several user-defined parameters, which impact the accuracy of the predictions. To achieve a fully automated construction, we propose a novel methodology based on an iterative scheme: we first initialize the error model with a small training set of realizations; then, at each iteration, we add a new realization both to improve the model and to evaluate its performance. More specifically, at each iteration we use the responses predicted by the updated model to identify the realizations that need to be considered to compute the quantity of interest. Another user-defined parameter is the number of dimensions of the response spaces between which the mapping is sought. To identify the space dimensions that optimally balance mapping accuracy and risk of overfitting, we use Leave-One-Out Cross-Validation. Also, the definition of a stopping criterion is central to an automated construction. We use a stability measure based on bootstrap techniques to stop the iterative procedure when the error model has converged. The methodology is illustrated with two test cases in which an inverse problem has to be solved, and we assess the performance of the method. We show that an iterative scheme is crucial to increase the applicability of the approach. [1] Josset, L., and I. Lunati, Local and global error models for improving uncertainty quantification, Math
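The iterative enrichment of the learning set can be sketched as follows. The proxy/exact solvers and the linear error model below are simple stand-ins for illustration (the paper's realizations are flow simulations and its proxy-to-exact mapping is more elaborate):

```python
import random

def proxy(x):
    # Inexpensive, biased solver (assumed form for illustration).
    return 0.8 * x + 0.5

def exact(x):
    # Expensive reference solver (assumed form for illustration).
    return x

def fit_linear(pairs):
    """Least-squares fit of exact = a * proxy + b over the learning set."""
    n = len(pairs)
    sx = sum(p for p, _ in pairs); sy = sum(e for _, e in pairs)
    sxx = sum(p * p for p, _ in pairs); sxy = sum(p * e for p, e in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

random.seed(0)
realizations = [random.uniform(0, 10) for _ in range(20)]
training = realizations[:3]                  # small initial learning set
for _ in range(5):                           # iteratively enrich the set
    a, b = fit_linear([(proxy(x), exact(x)) for x in training])
    # Use the current model's predictions to pick the next realization
    # to simulate exactly (here: the largest predicted response).
    candidates = [x for x in realizations if x not in training]
    training.append(max(candidates, key=lambda x: a * proxy(x) + b))
a, b = fit_linear([(proxy(x), exact(x)) for x in training])
```

Because the toy proxy is an exact affine distortion, the learned correction recovers it; with real solvers the loop would instead run until a stability-based stopping criterion fires.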

  4. The Automated Geospatial Watershed Assessment Tool (AGWA): Developing Post-Fire Model Parameters Using Precipitation and Runoff Records from Gauged Watersheds

    NASA Astrophysics Data System (ADS)

    Sheppard, B. S.; Goodrich, D. C.; Guertin, D. P.; Burns, I. S.; Canfield, E.; Sidman, G.

    2014-12-01

    New tools and functionality have been incorporated into the Automated Geospatial Watershed Assessment Tool (AGWA) to assess the impacts of wildfire on runoff and erosion. AGWA (see: www.tucson.ars.ag.gov/agwa or http://www.epa.gov/esd/land-sci/agwa/) is a GIS interface jointly developed by the USDA-Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execution of a suite of hydrologic and erosion models (RHEM, WEPP, KINEROS2 and SWAT). Through an intuitive interface the user selects an outlet from which AGWA delineates and discretizes the watershed using a Digital Elevation Model (DEM). The watershed model elements are then intersected with terrain, soils, and land cover data layers to derive the requisite model input parameters. With the addition of a burn severity map AGWA can be used to model post-wildfire changes to a catchment. By applying the same design storm to burned and unburned conditions a rapid assessment of the watershed can be made and the areas most prone to flooding can be identified. Post-fire precipitation and runoff records from gauged forested watersheds are now being used to improve post-fire model input parameters. Rainfall and runoff pairs have been selected from these records in order to calibrate parameter values for surface roughness and saturated hydraulic conductivity used in the KINEROS2 model. Several objective functions will be tried in the calibration process, and results will be validated. Currently Department of Interior Burn Area Emergency Response (DOI BAER) teams are using the AGWA-KINEROS2 modeling interface to assess hydrologically imposed risk immediately following wildfire. These parameter refinements are being made to further improve the quality of these assessments.
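As an example of an objective function commonly used in rainfall-runoff calibration of this kind (the Nash-Sutcliffe efficiency; the abstract does not specify which objective functions the authors will try):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    model predicts no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Illustrative observed/simulated runoff values (arbitrary units).
obs = [0.0, 1.2, 3.4, 2.1, 0.6]
sim = [0.1, 1.0, 3.0, 2.4, 0.5]
nse = nash_sutcliffe(obs, sim)
```

A calibration loop would adjust roughness and hydraulic conductivity to maximize such a score over the selected rainfall-runoff pairs.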

  5. Description of waste pretreatment and interfacing systems dynamic simulation model

    SciTech Connect

    Garbrick, D.J.; Zimmerman, B.D.

    1995-05-01

    The Waste Pretreatment and Interfacing Systems Dynamic Simulation Model was created to investigate the required pretreatment facility processing rates for both high level and low level waste so that the vitrification of tank waste can be completed according to the milestones defined in the Tri-Party Agreement (TPA). In order to achieve this objective, the processes upstream and downstream of the pretreatment facilities must also be included. The simulation model starts with retrieval of tank waste and ends with vitrification for both low level and high level wastes. This report describes the results of three simulation cases: one based on suggested average facility processing rates, one with facility rates determined so that approximately 6 new DSTs are required, and one with facility rates determined so that approximately no new DSTs are required. It appears, based on the simulation results, that reasonable facility processing rates can be selected so that no new DSTs are required by the TWRS program. However, this conclusion must be viewed with respect to the modeling assumptions, described in detail in the report. Also included in the report, in an appendix, are results of two sensitivity cases: one with glass plant water recycle streams recycled versus not recycled, and one employing the TPA SST retrieval schedule versus a more uniform SST retrieval schedule. Both recycling and retrieval schedule appear to have a significant impact on overall tank usage.

  6. Nonparametric Bayesian Modeling for Automated Database Schema Matching

    SciTech Connect

    Ferragut, Erik M; Laska, Jason A

    2015-01-01

    The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
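The core idea, comparing the probability that a single model generated both fields against separate per-field models, can be sketched with Gaussian field models standing in for the paper's nonparametric Bayesian models (a deliberate simplification; the data below are hypothetical):

```python
import math

def gauss_loglik(data, mu, var):
    """Log-likelihood of data under a Gaussian with the given parameters."""
    return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
               for x in data)

def fit(data):
    """Maximum-likelihood Gaussian fit (variance floored for stability)."""
    mu = sum(data) / len(data)
    var = sum((x - mu) ** 2 for x in data) / len(data) + 1e-9
    return mu, var

def match_score(field_a, field_b):
    """Higher score: more evidence that one model generated both fields."""
    merged = field_a + field_b
    ll_joint = gauss_loglik(merged, *fit(merged))
    ll_split = gauss_loglik(field_a, *fit(field_a)) + gauss_loglik(field_b, *fit(field_b))
    return ll_joint - ll_split

ages_a = [34, 41, 29, 55, 38]
ages_b = [31, 44, 27, 52, 40]                    # same underlying attribute
salaries = [52000, 61000, 48000, 90000, 57000]   # a different attribute
s_same = match_score(ages_a, ages_b)
s_diff = match_score(ages_a, salaries)
```

Equivalent fields (the two age columns) score near zero, while incompatible fields score far lower, which is the signal a schema matcher would threshold on.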

  7. Reduced complexity structural modeling for automated airframe synthesis

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1987-01-01

    A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment, which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional-cantilever and joined wing configurations.
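The fully stressed design approach mentioned above iterates a simple resizing rule: scale each member's area by the ratio of its working stress to the allowable stress. A minimal sketch, assuming constant member loads (stress = load / area):

```python
def fully_stressed_resize(areas, stresses, allowable, n_iter=10):
    """Classic FSD iteration: resize each member so its working stress
    approaches the allowable stress. Loads are assumed fixed here."""
    loads = [a * s for a, s in zip(areas, stresses)]  # member loads, held constant
    for _ in range(n_iter):
        stresses = [p / a for p, a in zip(loads, areas)]
        areas = [a * s / allowable for a, s in zip(areas, stresses)]
    return areas, stresses

# Three illustrative members: one overstressed, two understressed.
areas, stresses = fully_stressed_resize(
    areas=[2.0, 1.0, 4.0], stresses=[300.0, 150.0, 50.0], allowable=200.0)
```

With fixed loads the iteration converges immediately to a design in which every member works at exactly the allowable stress; in a real wing model the loads redistribute each cycle, which is why repetitive reanalysis (and hence a cheap beam model) is needed.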

  8. Combining neural network models for automated diagnostic systems.

    PubMed

    Ubeyli, Elif Derya

    2006-12-01

    This paper illustrates the use of combined neural network (CNN) models to guide model selection for diagnosis of internal carotid arterial (ICA) disorders. The ICA Doppler signals were decomposed into time-frequency representations using the discrete wavelet transform, and statistical features were calculated to depict their distribution. The first-level networks were implemented for the diagnosis of ICA disorders using the statistical features as inputs. To improve diagnostic accuracy, the second-level network was trained using the outputs of the first-level networks as input data. The CNN models achieved accuracy rates which were higher than those of the stand-alone neural network models. PMID:17233161

  9. Graphical User Interface for Simulink Integrated Performance Analysis Model

    NASA Technical Reports Server (NTRS)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X Engine (built by Pratt & Whitney Rocketdyne) in the Upper Stage of the Ares I Crew Launch Vehicle will only start within a certain range of temperature and pressure for the Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, to save time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink Model, without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink Model, and retrieve the output from the Simulink Model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, from the output values of each test of the simulation so that they may be graphed and compared to other values.

  10. Interfacing click chemistry with automated oligonucleotide synthesis for the preparation of fluorescent DNA probes containing internal xanthene and cyanine dyes.

    PubMed

    Astakhova, I Kira; Wengel, Jesper

    2013-01-14

    Double-labeled oligonucleotide probes containing fluorophores interacting by energy-transfer mechanisms are essential for modern bioanalysis, molecular diagnostics, and in vivo imaging techniques. Although bright xanthene and cyanine dyes are gaining increased prominence within these fields, little attention has thus far been paid to probes containing these dyes internally attached, a fact which is mainly due to the quite challenging synthesis of such oligonucleotide probes. Herein, by using 2'-O-propargyl uridine phosphoramidite and a series of xanthenes and cyanine azide derivatives, we have for the first time performed solid-phase copper(I)-catalyzed azide-alkyne cycloaddition (CuAAC) click labeling during the automated phosphoramidite oligonucleotide synthesis followed by postsynthetic click reactions in solution. We demonstrate that our novel strategy is rapid and efficient for the preparation of novel oligonucleotide probes containing internally positioned xanthene and cyanine dye pairs and thus represents a significant step forward for the preparation of advanced fluorescent oligonucleotide probes. Furthermore, we demonstrate that the novel xanthene and cyanine labeled probes display unusual and very promising photophysical properties resulting from energy-transfer interactions between the fluorophores controlled by nucleic acid assembly. Potential benefits of using these novel fluorescent probes within, for example, molecular diagnostics and fluorescence microscopy include: Considerable Stokes shifts (40-110 nm), quenched fluorescence of single-stranded probes accompanied by up to 7.7-fold light-up effect of emission upon target DNA/RNA binding, remarkable sensitivity to single-nucleotide mismatches, generally high fluorescence brightness values (FB up to 26), and hence low limit of target detection values (LOD down to <5 nM).

  11. Proteomics for Validation of Automated Gene Model Predictions

    SciTech Connect

    Zhou, Kemin; Panisko, Ellen A.; Magnuson, Jon K.; Baker, Scott E.; Grigoriev, Igor V.

    2008-02-14

    High-throughput liquid chromatography mass spectrometry (LC-MS)-based proteomic analysis has emerged as a powerful tool for functional annotation of genome sequences. These analyses complement the bioinformatic and experimental tools used for deriving, verifying, and functionally annotating models of genes and their transcripts. Furthermore, proteomics extends verification and functional annotation to the level of the translation product of the gene model.

  12. A method for modeling contact dynamics for automated capture mechanisms

    NASA Technical Reports Server (NTRS)

    Williams, Philip J.

    1991-01-01

    Logicon Control Dynamics develops contact dynamics models for space-based docking and berthing vehicles. The models compute contact forces for the physical contact between mating capture mechanism surfaces. Realistic simulation requires the proportionality constants used to calculate contact forces to approximate the surface stiffness of the contacting bodies. For rigid metallic bodies this proportionality becomes quite large, so small penetrations of surface boundaries can produce large contact forces.
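The penalty formulation described, contact force proportional to surface penetration, can be sketched as follows (the stiffness value is illustrative only):

```python
def contact_force(penetration, stiffness=5.0e7, damping=0.0, velocity=0.0):
    """Penalty-style normal contact force in newtons: proportional to
    surface penetration (m), with stiffness (N/m) approximating the
    surface stiffness of the mating bodies. Values are illustrative."""
    if penetration <= 0.0:
        return 0.0  # surfaces not in contact
    return stiffness * penetration + damping * velocity

# For a stiff metallic surface, even a 0.1 mm penetration yields a large force.
f = contact_force(1.0e-4)
```

This is why large proportionality constants for rigid bodies force small integration steps: tiny boundary penetrations already produce kilonewton-scale forces.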

  13. Man power/cost estimation model: Automated planetary projects

    NASA Technical Reports Server (NTRS)

    Kitchen, L. D.

    1975-01-01

    A manpower/cost estimation model is developed which is based on a detailed level of financial analysis of over 30 million raw data points, which are then compacted by more than three orders of magnitude to the level at which the model is applicable. The major parameter of expenditure is manpower (specifically direct labor hours) for all spacecraft subsystem and technical support categories. The resultant model is able to provide a mean absolute error of less than fifteen percent for the eight programs comprising the model data base. The model includes cost saving inheritance factors, broken down into four levels, for estimating follow-on type programs where hardware and design inheritance are evident or expected.

  14. Driven Interfaces: From Flow to Creep Through Model Reduction

    NASA Astrophysics Data System (ADS)

    Agoritsas, Elisabeth; García-García, Reinaldo; Lecomte, Vivien; Truskinovsky, Lev; Vandembroucq, Damien

    2016-08-01

    The response of spatially extended systems to a force leading their steady state out of equilibrium is strongly affected by the presence of disorder. We focus on the mean velocity induced by a constant force applied on one-dimensional interfaces. In the absence of disorder, the velocity is linear in the force. In the presence of disorder, it is widely admitted, as well as experimentally and numerically verified, that the velocity presents a stretched exponential dependence in the force (the so-called `creep law'), which is out of reach of linear response, or more generically of direct perturbative expansions at small force. In dimension one, there is no exact analytical derivation of such a law, even from a theoretical physical point of view. We propose an effective model with two degrees of freedom, constructed from the full spatially extended model, that captures many aspects of the creep phenomenology. It provides a justification of the creep law form of the velocity-force characteristics, in a quasistatic approximation. It moreover allows us to capture the non-trivial effects of short-range correlations in the disorder, which govern the low-temperature asymptotics. It enables us to establish a phase diagram where the creep law manifests itself in the vicinity of the origin in the force-system-size-temperature coordinates. Conjointly, we characterise the crossover between the creep regime and a linear-response regime that arises due to finite system size.
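The stretched-exponential velocity-force characteristics referred to above is conventionally written in the standard form from the depinning literature (not quoted in this abstract), with $U_c$ a characteristic energy scale, $f_c$ a characteristic depinning force, and $\mu$ the creep exponent:

```latex
v(f) \;\simeq\; v_0 \exp\!\left[-\frac{U_c}{k_B T}\left(\frac{f_c}{f}\right)^{\mu}\right]
```

This form is non-analytic at $f = 0$, which is precisely why it is out of reach of direct perturbative expansions at small force, as the abstract notes.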

  15. Driven Interfaces: From Flow to Creep Through Model Reduction

    NASA Astrophysics Data System (ADS)

    Agoritsas, Elisabeth; García-García, Reinaldo; Lecomte, Vivien; Truskinovsky, Lev; Vandembroucq, Damien

    2016-09-01

    The response of spatially extended systems to a force leading their steady state out of equilibrium is strongly affected by the presence of disorder. We focus on the mean velocity induced by a constant force applied on one-dimensional interfaces. In the absence of disorder, the velocity is linear in the force. In the presence of disorder, it is widely admitted, as well as experimentally and numerically verified, that the velocity presents a stretched exponential dependence in the force (the so-called `creep law'), which is out of reach of linear response, or more generically of direct perturbative expansions at small force. In dimension one, there is no exact analytical derivation of such a law, even from a theoretical physical point of view. We propose an effective model with two degrees of freedom, constructed from the full spatially extended model, that captures many aspects of the creep phenomenology. It provides a justification of the creep law form of the velocity-force characteristics, in a quasistatic approximation. It moreover allows us to capture the non-trivial effects of short-range correlations in the disorder, which govern the low-temperature asymptotics. It enables us to establish a phase diagram where the creep law manifests itself in the vicinity of the origin in the force-system-size-temperature coordinates. Conjointly, we characterise the crossover between the creep regime and a linear-response regime that arises due to finite system size.

  16. Petri net-based modelling of human-automation conflicts in aviation.

    PubMed

    Pizziol, Sergio; Tessier, Catherine; Dehais, Frédéric

    2014-01-01

    Analyses of aviation safety reports reveal that human-machine conflicts induced by poor automation design are remarkable precursors of accidents. A review of different crew-automation conflicting scenarios shows that they have a common denominator: the autopilot behaviour interferes with the pilot's goal regarding the flight guidance via 'hidden' mode transitions. Considering both the human operator and the machine (i.e. the autopilot or the decision functions) as agents, we propose a Petri net model of those conflicting interactions, which allows them to be detected as deadlocks in the Petri net. In order to test our Petri net model, we designed an autoflight system that was formally analysed to detect conflicting situations. We identified three conflicting situations that were integrated in an experimental scenario in a flight simulator with 10 general aviation pilots. The results showed that the conflicts we had a priori identified as critical impacted the pilots' performance. Indeed, the first conflict remained unnoticed by eight participants and led to a potential collision with another aircraft. The second conflict was detected by all the participants, but three of them did not manage the situation correctly. The last conflict was also detected by all the participants but provoked a typical automation surprise situation, as only one declared that he had understood the autopilot behaviour. These behavioural results are discussed in terms of workload and number of fired 'hidden' transitions. Eventually, this study reveals that both formal and experimental approaches are complementary to identify and assess the criticality of human-automation conflicts. Practitioner Summary: We propose a Petri net model of human-automation conflicts. An experiment was conducted with general aviation pilots performing a scenario involving three conflicting situations to test the soundness of our formal approach. This study reveals that both formal and experimental approaches are complementary for identifying and assessing the criticality of human-automation conflicts.
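The deadlock-detection idea can be sketched with a small reachability search over a toy Petri net. The net below encodes a generic crossed-acquisition conflict between two agents and is a hypothetical illustration, not the authors' autoflight model:

```python
def enabled(marking, transitions):
    """Transitions whose every input place holds at least one token."""
    return [t for t, (pre, post) in transitions.items()
            if all(marking.get(p, 0) >= 1 for p in pre)]

def fire(marking, pre, post):
    """Consume one token per input place, produce one per output place."""
    m = dict(marking)
    for p in pre:
        m[p] -= 1
    for p in post:
        m[p] = m.get(p, 0) + 1
    return m

def find_deadlocks(initial, transitions):
    """Exhaustive reachability search; a deadlock is a reachable marking
    with no enabled transition."""
    seen, stack, deadlocks = set(), [initial], []
    while stack:
        m = stack.pop()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        ts = enabled(m, transitions)
        if not ts:
            deadlocks.append(m)
        for t in ts:
            pre, post = transitions[t]
            stack.append(fire(m, pre, post))
    return deadlocks

# Toy net: pilot and autopilot each grab one shared resource and then
# wait for the other's; the crossed acquisition is a deadlock.
transitions = {
    "pilot_lock_A": (["res_A"], ["pilot_has_A"]),
    "pilot_lock_B": (["pilot_has_A", "res_B"], ["pilot_done"]),
    "ap_lock_B":    (["res_B"], ["ap_has_B"]),
    "ap_lock_A":    (["ap_has_B", "res_A"], ["ap_done"]),
}
dead = find_deadlocks({"res_A": 1, "res_B": 1}, transitions)
```

The search also reports terminal "completed" markings; in a real analysis one would distinguish intended final states from genuine conflict deadlocks such as the marking where each agent holds the resource the other needs.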

  17. Automated protein model building combined with iterative structure refinement.

    PubMed

    Perrakis, A; Morris, R; Lamzin, V S

    1999-05-01

    In protein crystallography, much time and effort are often required to trace an initial model from an interpretable electron density map and to refine it until it best agrees with the crystallographic data. Here, we present a method to build and refine a protein model automatically and without user intervention, starting from diffraction data extending to resolution higher than 2.3 Å and reasonable estimates of crystallographic phases. The method is based on an iterative procedure that describes the electron density map as a set of unconnected atoms and then searches for protein-like patterns. Automatic pattern recognition (model building) combined with refinement allows a structural model to be obtained reliably within a few CPU hours. We demonstrate the power of the method with examples of a few recently solved structures.

  18. Modelling the inhomogeneous SiC Schottky interface

    NASA Astrophysics Data System (ADS)

    Gammon, P. M.; Pérez-Tomás, A.; Shah, V. A.; Vavasour, O.; Donchev, E.; Pang, J. S.; Myronov, M.; Fisher, C. A.; Jennings, M. R.; Leadley, D. R.; Mawby, P. A.

    2013-12-01

    For the first time, the I-V-T dataset of a Schottky diode has been accurately modelled, parameterised, and fully fit, incorporating the effects of interface inhomogeneity, patch pinch-off and resistance, and ideality factors that are both heavily temperature and voltage dependent. A Ni/SiC Schottky diode is characterised at 2 K intervals from 20 to 320 K, which, at room temperature, displays low ideality factors (n < 1.01) that suggest that these diodes may be homogeneous. However, at cryogenic temperatures, excessively high (n > 8), voltage dependent ideality factors and evidence of the so-called "thermionic field emission effect" within a T0-plot, suggest significant inhomogeneity. Two models are used, each derived from Tung's original interactive parallel conduction treatment of barrier height inhomogeneity that can reproduce these commonly seen effects in single temperature I-V traces. The first model incorporates patch pinch-off effects and produces accurate and reliable fits above around 150 K, and at current densities lower than 10-5 A cm-2. Outside this region, we show that resistive effects within a given patch are responsible for the excessive ideality factors, and a second simplified model incorporating these resistive effects as well as pinch-off accurately reproduces the entire temperature range. Analysis of these fitting parameters reduces confidence in those fits above 230 K, and questions are raised about the physical interpretation of the fitting parameters. Despite this, both methods used are shown to be useful tools for accurately reproducing I-V-T data over a large temperature range.
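For reference, the single-barrier thermionic-emission relation that inhomogeneity treatments such as Tung's generalise can be sketched as follows (a textbook formula, not the paper's two patch models; the Richardson constant is an assumed value for 4H-SiC):

```python
import math

Q = 1.602176634e-19   # elementary charge [C]
KB = 1.380649e-23     # Boltzmann constant [J/K]

def schottky_current_density(v, t, phi_b, n=1.0, a_star=1.46e6):
    """Thermionic-emission J-V relation in A/m^2.

    v: applied bias [V]; t: temperature [K]; phi_b: barrier height [eV];
    n: ideality factor; a_star: effective Richardson constant
    (assumed ~146 A cm^-2 K^-2 for 4H-SiC, given here in A m^-2 K^-2).
    """
    j_s = a_star * t**2 * math.exp(-Q * phi_b / (KB * t))
    return j_s * (math.exp(Q * v / (n * KB * t)) - 1.0)

j_fwd = schottky_current_density(0.5, 300.0, 1.0)
```

In an inhomogeneous diode the measured current is instead a weighted sum of such expressions over patches of differing barrier height, which is what produces the apparent temperature- and voltage-dependent ideality factors described above.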

  19. Automated mask creation from a 3D model using Faethm.

    SciTech Connect

    Schiek, Richard Louis; Schmidt, Rodney Cannon

    2007-11-01

    We have developed and implemented a method which given a three-dimensional object can infer from topology the two-dimensional masks needed to produce that object with surface micro-machining. The masks produced by this design tool can be generic, process independent masks, or if given process constraints, specific for a target process. This design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model.

  20. Modeling of the cell-electrode interface noise for microelectrode arrays.

    PubMed

    Guo, Jing; Yuan, Jie; Chan, Mansun

    2012-12-01

    Microelectrodes are widely used in the physiological recording of cell field potentials. As microelectrode signals are generally in the μV range, characteristics of the cell-electrode interface are important to the recording accuracy. Although the impedance of the microelectrode-solution interface has been well studied and modeled in the past, no effective model has been experimentally verified to estimate the noise of the cell-electrode interface. Also in existing interface models, spectral information is largely disregarded. In this work, we developed a model for estimating the noise of the cell-electrode interface from interface impedances. This model improves over existing noise models by including the cell membrane capacitor and frequency dependent impedances. With low-noise experiment setups, this model is verified by microelectrode array (MEA) experiments with mouse muscle myoblast cells. Experiments show that the noise estimated from this model has <10% error, which is much less than estimations from existing models. With this model, noise of the cell-electrode interface can be estimated by simply measuring interface impedances. This model also provides insights for microelectrode design to achieve good recording signal-to-noise ratio.
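A useful baseline for such estimates is the Johnson-Nyquist thermal noise of the resistive part of the interface impedance (a textbook formula, not the paper's full cell-electrode model):

```python
import math

KB = 1.380649e-23  # Boltzmann constant [J/K]

def thermal_noise_vrms(resistance, bandwidth, temperature=310.0):
    """RMS Johnson-Nyquist noise voltage: v_rms = sqrt(4 k_B T R df).

    resistance: real part of the interface impedance [ohm];
    bandwidth: recording bandwidth [Hz]; temperature defaults to ~body
    temperature (310 K).
    """
    return math.sqrt(4.0 * KB * temperature * resistance * bandwidth)

# A 1 MOhm interface over a 10 kHz recording bandwidth:
v = thermal_noise_vrms(1.0e6, 1.0e4)
```

The result lands in the tens of μV, the same order as the field potentials themselves, which is why accurate interface noise models matter for MEA design.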

  1. Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces

    SciTech Connect

    Lomov, I; Antoun, T; Vorobiev, O

    2009-12-16

    Accurate representation of discontinuities such as joints and faults is a key ingredient for high fidelity modeling of shock propagation in geologic media. The following study was done to improve treatment of discontinuities (joints) in the Eulerian hydrocode GEODYN (Lomov and Liu 2005). Lagrangian methods with conforming meshes and explicit inclusion of joints in the geologic model are well suited for such an analysis. Unfortunately, current meshing tools are unable to automatically generate adequate hexahedral meshes for large numbers of irregular polyhedra. Another concern is that joint stiffness in such explicit computations requires significantly reduced time steps, with negative implications for both the efficiency and quality of the numerical solution. An alternative approach is to use non-conforming meshes and embed joint information into regular computational elements. However, once slip displacement on the joints become comparable to the zone size, Lagrangian (even non-conforming) meshes could suffer from tangling and decreased time step problems. The use of non-conforming meshes in an Eulerian solver may alleviate these difficulties and provide a viable numerical approach for modeling the effects of faults on the dynamic response of geologic materials. We studied shock propagation in jointed/faulted media using a Lagrangian and two Eulerian approaches. To investigate the accuracy of this joint treatment the GEODYN calculations have been compared with results from the Lagrangian code GEODYN-L which uses an explicit treatment of joints via common plane contact. We explore two approaches to joint treatment in the code, one for joints with finite thickness and the other for tight joints. In all cases the sliding interfaces are tracked explicitly without homogenization or blending the joint and block response into an average response. In general, rock joints will introduce an increase in normal compliance in addition to a reduction in shear strength. 

  2. Automated biowaste sampling system urine subsystem operating model, part 1

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Mangialardi, J. K.; Rosen, F.

    1973-01-01

    The urine subsystem automatically provides for the collection, volume sensing, and sampling of urine from six subjects during space flight. Verification of the subsystem design was a primary objective of the current effort which was accomplished thru the detail design, fabrication, and verification testing of an operating model of the subsystem.

  3. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
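The two-step search can be sketched as a branch-and-bound over partial designs, with a cheap admissible bound pruning branches and the expensive model scoring only complete candidates. The part library and cost functions below are hypothetical stand-ins for the paper's circuit models:

```python
def hierarchical_search(parts, cheap_bound, exact_cost, budget):
    """Branch-and-bound design search.

    cheap_bound: fast lower bound on the cost of any completion of a
    partial design (the 'simple model'); exact_cost: expensive score of
    a complete design (the 'complex, nonlinear model').
    """
    best, best_cost = None, float("inf")
    stack = [()]  # partial designs as tuples of chosen parts
    while stack:
        design = stack.pop()
        if cheap_bound(design) >= best_cost:
            continue  # prune: even the optimistic bound cannot improve
        if len(design) == budget:
            c = exact_cost(design)  # fine-grained evaluation of survivors
            if c < best_cost:
                best, best_cost = design, c
            continue
        for p in parts:
            stack.append(design + (p,))
    return best, best_cost

parts = [1, 2, 3, 5]
cheap = lambda d: sum(d)                 # admissible: cost only grows with parts
exact = lambda d: sum(d) + 0.1 * max(d)  # adds a nonlinear correction term
best, cost = hierarchical_search(parts, cheap, exact, budget=2)
```

Because the cheap bound never exceeds the exact cost, pruning is safe: the optimum survives while most of the exponential design space is never evaluated with the expensive model.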

  4. A simplified cellular automaton model for city traffic

    SciTech Connect

    Simon, P.M.; Nagel, K.

    1997-12-31

    The authors systematically investigate the effect of blockage sites in a cellular automata model for traffic flow. Different scheduling schemes for the blockage sites are considered. None of them returns a linear relationship between the fraction of green time and the throughput. The authors use this information for a fast implementation of traffic in Dallas.
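A minimal deterministic variant of such a model, a ring road with one blockage site under a fixed green/red schedule, can be sketched as follows (rules simplified to speed 0/1; this is an illustration, not the authors' exact model):

```python
def step(road, t, blockage_site, green_period=2):
    """One synchronous update of a speed-0/1 traffic CA on a ring.

    A car advances into an empty cell; a car sitting on the blockage
    cell may only leave during the green phase of a fixed schedule.
    """
    n = len(road)
    new = [0] * n
    for i in range(n):
        if road[i]:
            nxt = (i + 1) % n
            blocked = (i == blockage_site) and (t % (2 * green_period) >= green_period)
            if road[nxt] == 0 and not blocked:
                new[nxt] = 1
            else:
                new[i] = 1
    return new

road = [1, 0, 1, 0, 0, 0, 0, 0]  # two cars on an 8-cell ring
flow = 0
for t in range(16):
    nxt = step(road, t, blockage_site=3)
    flow += int(road[3] == 1 and nxt[3] == 0)  # count cars leaving the blockage
    road = nxt
```

Measuring `flow` while varying the green fraction reproduces the kind of throughput-versus-green-time relationship the study investigates; as the abstract reports, that relationship is generally not linear.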

  5. A Voyage to Arcturus: A model for automated management of a WLCG Tier-2 facility

    NASA Astrophysics Data System (ADS)

    Roy, Gareth; Crooks, David; Mertens, Lena; Mitchell, Mark; Purdie, Stuart; Cadellin Skipsey, Samuel; Britton, David

    2014-06-01

    With the current trend towards "On Demand Computing" in big data environments it is crucial that the deployment of services and resources becomes increasingly automated. Deployment based on cloud platforms is available for large scale data centre environments, but these solutions can be too complex and heavyweight for smaller, resource-constrained WLCG Tier-2 sites. Along with a greater desire for bespoke monitoring and collection of Grid-related metrics, a more lightweight and modular approach is desired. In this paper we present a model for a lightweight automated framework which can be used to build WLCG grid sites, based on "off the shelf" software components. As part of the research into an automation framework the use of both IPMI and SNMP for physical device management will be included, as well as the use of SNMP as a monitoring/data sampling layer such that more comprehensive decision making can take place and potentially be automated. This could lead to reduced down times and better performance as services are recognised to be in a non-functional state by autonomous systems.

  6. Automated Volumetric Breast Density derived by Shape and Appearance Modeling.

    PubMed

    Malkov, Serghei; Kerlikowske, Karla; Shepherd, John

    2014-03-22

    The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by the single energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal Component Analysis (PCA) was applied to extract significant and uncorrelated components of %FGV. To build an appearance model, we transformed the breast images into the mean texture image by piecewise linear image transformation. Using PCA, the image pixel grey-scale values were converted into a reduced set of shape and texture features. Stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV from the shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume and actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient was 0.77, between the first shape PCA component and actual breast volume. The stepwise regression method with ten-fold cross-validation to predict %FGV from shape and appearance variables and other system outcome parameters generated a model with a correlation of r^2 = 0.8. In conclusion, a shape and appearance model demonstrated excellent feasibility to extract variables useful for automatic %FGV estimation. Further exploring and testing of this approach is warranted.
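The PCA step, reducing correlated pixel grey-scale features to a small set of uncorrelated scores, can be sketched with synthetic data (illustrative stand-ins, not real mammogram features):

```python
import numpy as np

def pca_scores(features, n_components=2):
    """Project feature vectors onto their leading principal components,
    yielding the kind of shape/appearance scores fed to the regression."""
    x = features - features.mean(axis=0)       # centre the data
    cov = np.cov(x, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return x @ vecs[:, order]

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))               # one latent factor
feats = np.hstack([base,
                   2.0 * base + 0.1 * rng.normal(size=(100, 1)),
                   0.05 * rng.normal(size=(100, 1))])
scores = pca_scores(feats, n_components=2)
```

Because the first two synthetic features share one latent factor, the first component captures nearly all the variance; in the study the leading scores similarly concentrate the shape/appearance variation used to predict %FGV.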

  7. Automated volumetric breast density derived by shape and appearance modeling

    NASA Astrophysics Data System (ADS)

    Malkov, Serghei; Kerlikowske, Karla; Shepherd, John

    2014-03-01

    The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by a single-energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal Component Analysis (PCA) was applied to extract significant and uncorrelated components of %FGV. To build an appearance model, we transformed the breast images into the mean texture image by piecewise linear image transformation. Using PCA, the image pixel grey-scale values were converted into a reduced set of shape and texture features. Stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV from the shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume, actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient was 0.77, between the first shape PCA component and actual breast volume. The stepwise regression method with ten-fold cross-validation to predict %FGV from shape and appearance variables and other system outcome parameters generated a model with a correlation of r² = 0.8. In conclusion, the shape and appearance model demonstrated excellent feasibility for extracting variables useful for automatic %FGV estimation. Further exploration and testing of this approach is warranted.

  9. Analytical and numerical modeling of non-collinear shear wave mixing at an imperfect interface.

    PubMed

    Zhang, Ziyin; Nagy, Peter B; Hassan, Waled

    2016-02-01

    Non-collinear shear wave mixing at an imperfect interface between two solids can be exploited for nonlinear ultrasonic assessment of bond quality. In this study we developed two analytical models for nonlinear imperfect interfaces. The first model uses a finite nonlinear interfacial stiffness representation of an imperfect interface of vanishing thickness, while the second model relies on a thin nonlinear interphase layer to represent an imperfect interface region. The second model is actually a derivative of the first model obtained by calculating the equivalent interfacial stiffness of a thin isotropic nonlinear interphase layer in the quasi-static approximation. The predictions of both analytical models were numerically verified by comparison to COMSOL finite element simulations. These models can accurately predict the additional nonlinearity caused by interface imperfections based on the strength of the reflected and transmitted mixed longitudinal waves produced by them under non-collinear shear wave interrogation. PMID:26482394

  10. An Improvement in Thermal Modelling of Automated Tape Placement Process

    SciTech Connect

    Barasinski, Anaies; Leygue, Adrien; Poitou, Arnaud; Soccard, Eric

    2011-01-17

    The thermoplastic tape placement process offers the possibility of manufacturing large laminated composite parts with a wide range of geometries (e.g. doubly curved). This process is based on the fusion bonding of a thermoplastic tape onto a substrate. It has received growing interest in recent years because of its out-of-autoclave capabilities. In order to control and optimize the quality of the manufactured part, we need to predict the temperature field throughout the processing of the laminate. In this work, we focus on a thermal model of this process which takes into account the imperfect bonding between the different layers of the substrate by introducing a thermal contact resistance into the model. This study draws on experimental results which show that the value of the thermal resistance evolves with the temperature and pressure applied to the material.

  11. An Improvement in Thermal Modelling of Automated Tape Placement Process

    NASA Astrophysics Data System (ADS)

    Barasinski, Anaïs; Leygue, Adrien; Soccard, Eric; Poitou, Arnaud

    2011-01-01

    The thermoplastic tape placement process offers the possibility of manufacturing large laminated composite parts with a wide range of geometries (e.g. doubly curved). This process is based on the fusion bonding of a thermoplastic tape onto a substrate. It has received growing interest in recent years because of its out-of-autoclave capabilities. In order to control and optimize the quality of the manufactured part, we need to predict the temperature field throughout the processing of the laminate. In this work, we focus on a thermal model of this process which takes into account the imperfect bonding between the different layers of the substrate by introducing a thermal contact resistance into the model. This study draws on experimental results which show that the value of the thermal resistance evolves with the temperature and pressure applied to the material.
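
To make the role of a thermal contact resistance concrete, the sketch below solves 1-D transient heat conduction through two bonded layers whose interface transmits a flux q = (T_left − T_right)/R. This is an illustrative toy, not the authors' model; the material values and discretization are invented.

```python
# Explicit finite-difference sketch: two layers coupled through an
# imperfect bond with thermal contact resistance R (invented values).
import numpy as np

k, rho, cp = 0.7, 1400.0, 1200.0        # conductivity, density, heat capacity
alpha = k / (rho * cp)
n, dx = 50, 1e-4                        # 50 nodes per layer, 0.1 mm spacing
dt = 0.4 * dx**2 / alpha                # stable explicit time step
R = 1e-3                                # contact resistance (K m^2 / W)

T1 = np.full(n, 400.0)                  # hot incoming tape
T2 = np.full(n, 300.0)                  # cooler substrate

for _ in range(2000):
    q = (T1[-1] - T2[0]) / R            # flux across the imperfect bond
    lap1, lap2 = np.zeros(n), np.zeros(n)
    lap1[1:-1] = T1[:-2] - 2*T1[1:-1] + T1[2:]
    lap2[1:-1] = T2[:-2] - 2*T2[1:-1] + T2[2:]
    T1 += alpha * dt / dx**2 * lap1
    T2 += alpha * dt / dx**2 * lap2
    # Interface nodes exchange heat only through the contact resistance.
    T1[-1] += dt / (rho * cp * dx) * (k * (T1[-2] - T1[-1]) / dx - q)
    T2[0]  += dt / (rho * cp * dx) * (q - k * (T2[0] - T2[1]) / dx)

print(f"interface temperature jump: {T1[-1] - T2[0]:.1f} K")
```

A perfect bond corresponds to R → 0 (no temperature jump); a temperature- and pressure-dependent resistance, as the abstract describes, would make R a function updated each step.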

  12. Automated target recognition using passive radar and coordinated flight models

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2003-09-01

    Rather than emitting pulses, passive radar systems rely on illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. These systems are particularly attractive since they allow receivers to operate without emitting energy, rendering them covert. Many existing passive radar systems estimate the locations and velocities of targets. This paper focuses on adding an automatic target recognition (ATR) component to such systems. Our approach to ATR compares the Radar Cross Section (RCS) of targets detected by a passive radar system to the simulated RCS of known targets. To make the comparison as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. The estimated positions become inputs for an algorithm that uses a coordinated flight model to compute probable aircraft orientation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of several potential target classes as they execute the estimated maneuvers. The RCS is then scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. The Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, so that the RCS can be further scaled. The Rician model compares the RCS of the illuminated aircraft with those of the potential targets. This comparison results in target identification.
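
The final classification step (comparing measured RCS against simulated profiles under a Rician model) can be sketched as a maximum-likelihood choice over candidate classes. The profiles, class names, and noise level below are invented; in the paper they would come from the FISC/AREPS/NEC2 simulation chain.

```python
# Hedged sketch: Rician maximum-likelihood target identification from an
# RCS amplitude sequence. All profile values are hypothetical.
import numpy as np
from scipy.stats import rice

rng = np.random.default_rng(1)
sigma = 0.5                                   # diffuse (noise) component
profiles = {                                  # hypothetical mean RCS per class
    "airliner":  np.array([5.0, 4.5, 4.8, 5.2]),
    "fighter":   np.array([1.2, 1.0, 1.4, 1.1]),
    "turboprop": np.array([2.5, 2.8, 2.4, 2.6]),
}

# Simulate a measurement of the true target through Rician fading.
truth = profiles["turboprop"]
observed = rice.rvs(truth / sigma, scale=sigma, random_state=rng)

def loglik(profile):
    # Rice(b, scale): b is the ratio of specular amplitude to sigma.
    return rice.logpdf(observed, profile / sigma, scale=sigma).sum()

best = max(profiles, key=lambda name: loglik(profiles[name]))
print("identified target:", best)
```

With well-separated mean profiles, even a handful of looks suffices; in practice the separation depends on how distinct the simulated RCS signatures are along the estimated maneuver.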

  13. Model-based metrics of human-automation function allocation in complex work environments

    NASA Astrophysics Data System (ADS)

    Kim, So Young

    Function allocation is the design decision that assigns work functions to all agents in a team, both human and automated. Efforts to guide function allocation systematically have been studied in many fields, such as engineering, human factors, team and organization design, management science, and cognitive systems engineering. Each field focuses on certain aspects of function allocation, but not all; thus, an independent discussion of each does not address all necessary issues with function allocation. Four distinctive perspectives emerged from a review of these fields: technology-centered, human-centered, team-oriented, and work-oriented. Each perspective focuses on different aspects of function allocation: capabilities and characteristics of agents (automation or human), team structure and processes, and work structure and the work environment. Together, these perspectives identify the following eight issues with function allocation: 1) workload, 2) incoherency in function allocations, 3) mismatches between responsibility and authority, 4) interruptive automation, 5) automation boundary conditions, 6) function allocation preventing human adaptation to context, 7) function allocation destabilizing the humans' work environment, and 8) mission performance. Addressing these issues systematically requires formal models and simulations that include all necessary aspects of human-automation function allocation: the work environment, the dynamics inherent to the work, the agents, and relationships among them. Also, addressing these issues requires not only a (static) model, but also a (dynamic) simulation that captures temporal aspects of work, such as the timing of actions and their impact on the agent's work. 
Therefore, with properly modeled work as described by the work environment, the dynamics inherent to the work, agents, and relationships among them, a modeling framework developed by this thesis, which includes static work models and dynamic simulation, can capture the

  14. A 2-D Interface Element for Coupled Analysis of Independently Modeled 3-D Finite Element Subdomains

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.

    1998-01-01

    Over the past few years, the development of interface technology has provided an analysis framework for embedding detailed finite element models within finite element models which are less refined. This development has enabled the use of cascading substructure domains without the constraint of coincident nodes along substructure boundaries. The approach used for the interface element is based on an alternate variational principle often used in deriving hybrid finite elements. The resulting system of equations exhibits a high degree of sparsity but gives rise to a non-positive-definite system which causes difficulties with many of the equation solvers in general-purpose finite element codes. Hence, the global system of equations is generally solved using a decomposition procedure with pivoting. The research reported to date for the interface element includes the one-dimensional line interface element and the two-dimensional surface interface element. Several large-scale simulations, including geometrically nonlinear problems, have been reported using the one-dimensional interface element technology; however, only limited applications are available for the surface interface element. In the applications reported to date, the geometries of the interfaced domains exactly match each other even though the spatial discretization within each domain may be different. As such, the spatial modeling of each domain, the interface elements and the assembled system is still laborious. The present research is focused on developing a rapid modeling procedure based on a parametric interface representation of independently defined subdomains which are also independently discretized.
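
The solver difficulty mentioned above can be seen in miniature: a hybrid/interface formulation produces a symmetric but indefinite "saddle point" system, which a Cholesky factorization rejects while a pivoted decomposition handles. The tiny system below is invented purely to illustrate that point.

```python
# Toy saddle-point system from a constraint-coupled pair of unknowns:
#   [K  C^T] [u]   [f]
#   [C   0 ] [l] = [0]
# K is SPD but the assembled block matrix is indefinite (invented values).
import numpy as np
from scipy.linalg import cholesky, lu_factor, lu_solve

K = np.array([[4.0, 1.0], [1.0, 3.0]])   # stiffness block
C = np.array([[1.0, -1.0]])              # interface constraint u1 = u2
A = np.block([[K, C.T], [C, np.zeros((1, 1))]])
b = np.array([1.0, 2.0, 0.0])

try:
    cholesky(A)                          # fails: A is not positive definite
except np.linalg.LinAlgError as e:
    print("Cholesky failed:", e)

x = lu_solve(lu_factor(A), b)            # pivoted LU succeeds
u, lam = x[:2], x[2]
print("displacements:", u, "interface multiplier:", lam)
```

The Lagrange multiplier plays the role of the interface traction; in a real code the same structure appears at much larger scale, which is why pivoting (or a symmetric-indefinite LDLᵀ factorization) is needed.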

  15. Drivers' communicative interactions: on-road observations and modelling for integration in future automation systems.

    PubMed

    Portouli, Evangelia; Nathanael, Dimitris; Marmaras, Nicolas

    2014-01-01

    Social interactions with other road users are an essential component of the driving activity and may prove critical in view of future automation systems; still, up to now, they have received only limited attention in the scientific literature. In this paper, it is argued that drivers base their anticipations about the traffic scene to a large extent on observations of the social behaviour of other 'animate human-vehicles'. It is further argued that in cases of uncertainty, drivers seek to establish mutual situational awareness through deliberate communicative interactions. A linguistic model is proposed for modelling these communicative interactions. Empirical evidence from on-road observations and analysis of concurrent running commentary by 25 experienced drivers supports the proposed model. It is suggested that the integration of a social-interactions layer based on illocutionary acts into future driving support and automation systems will improve their performance towards matching human drivers' expectations. Practitioner Summary: Interactions between drivers on the road may play a significant role in traffic coordination. On-road observations and running commentaries are presented as empirical evidence to support a model of such interactions; incorporation of drivers' interactions into future driving support and automation systems may improve their performance towards matching drivers' expectations.

  16. Automated EEG monitoring in defining a chronic epilepsy model.

    PubMed

    Mascott, C R; Gotman, J; Beaudet, A

    1994-01-01

    There has been a recent surge of interest in chronic animal models of epilepsy. Proper assessment of these models requires documentation of spontaneous seizures by EEG, observation, or both in each individual animal to confirm the presumed epileptic condition. We used the same automatic seizure detection system as that currently used for patients in our institution and many others. Electrodes were implanted in 43 rats before intraamygdalar administration of kainic acid (KA). Animals were monitored intermittently for 3 months. Nine of the rats were protected by anticonvulsants [pentobarbital (PB) and diazepam (DZP)] at the time of KA injection. Between 1 and 3 months after KA injection, spontaneous seizures were detected in 20 of the 34 unprotected animals (59%). Surprisingly, spontaneous seizures were also detected during the same period in 2 of the 9 protected animals that were intended to serve as nonepileptic controls. Although the absence of confirmed spontaneous seizures in the remaining animals cannot exclude their occurrence, it indicates that, if present, they are at least rare. On the other hand, definitive proof of epilepsy is invaluable in the attempt to interpret pathologic data from experimental brains.

  17. Piloted Simulation of a Model-Predictive Automated Recovery System

    NASA Technical Reports Server (NTRS)

    Liu, James (Yuan); Litt, Jonathan; Sowers, T. Shane; Owens, A. Karl; Guo, Ten-Huei

    2014-01-01

    This presentation describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.
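
The trigger logic described above (project the altitude lost by a go-around, intervene if it violates a floor) can be sketched as follows. The altitude-loss formula, parameter names, and thresholds here are made-up placeholders, not the paper's model.

```python
# Hypothetical sketch of a model-predictive go-around trigger.
def projected_altitude_loss(sink_rate_fps: float, engine_spool_s: float) -> float:
    """Crude placeholder model: keep sinking while the engines spool up,
    plus a round-out allowance that grows with sink rate."""
    return sink_rate_fps * engine_spool_s + 0.15 * sink_rate_fps ** 2 / 32.2

def should_trigger_go_around(altitude_ft: float, sink_rate_fps: float,
                             engine_spool_s: float = 4.0,
                             floor_ft: float = 50.0) -> bool:
    """Fire the automatic maneuver if the projected recovery would
    descend below the minimum altitude threshold."""
    loss = projected_altitude_loss(sink_rate_fps, engine_spool_s)
    return altitude_ft - loss < floor_ft

print(should_trigger_go_around(300.0, sink_rate_fps=10.0))   # stable approach -> False
print(should_trigger_go_around(300.0, sink_rate_fps=70.0))   # unsafe sink rate -> True
```

The deactivation criteria mentioned in the abstract would sit alongside this check, returning authority to the pilot once the aircraft is stabilized.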

  18. Introducing a new open source GIS user interface for the SWAT model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Soil and Water Assessment Tool (SWAT) model is a robust watershed modelling tool. It typically uses the ArcSWAT interface to create its inputs. ArcSWAT is public domain software which works in the licensed ArcGIS environment. The aim of this paper was to develop an open source user interface ...

  19. Fundamental processes of exciton scattering at organic solar-cell interfaces: One-dimensional model calculation

    NASA Astrophysics Data System (ADS)

    Masugata, Yoshimitsu; Iizuka, Hideyuki; Sato, Kosuke; Nakayama, Takashi

    2016-08-01

    Fundamental processes of exciton scattering at organic solar-cell interfaces were studied using a one-dimensional tight-binding model and by performing a time-evolution simulation of electron–hole pair wave packets. We found the fundamental features of exciton scattering: the scattering promotes not only the dissociation of excitons and the generation of interface-bound (charge-transferred) excitons but also the transmission and reflection of excitons depending on the electron and hole interface offsets. In particular, the dissociation increases in a certain region of an interface offset, while the transmission shows resonances with higher-energy bound-exciton and interface bound-exciton states. We also studied the effects of carrier-transfer and potential modulations at the interface and the scattering of charged excitons, and we found trap dissociations where one of the carriers is trapped around the interface after the dissociation.
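
A drastically simplified analogue of the simulation above is a single-particle wave packet on a 1-D tight-binding chain scattering off an interface band offset (the full problem is two-particle, with an electron-hole pair). The sketch below evolves such a packet and measures transmission and reflection; all parameters are invented.

```python
# Toy 1-D tight-binding wave-packet scattering at an interface offset.
import numpy as np

N, t_hop = 400, 1.0
offset = 0.5                                   # interface band offset
site_e = np.where(np.arange(N) < N // 2, 0.0, offset)

# Hamiltonian: on-site energies plus nearest-neighbor hopping.
H = np.diag(site_e) - t_hop * (np.eye(N, k=1) + np.eye(N, k=-1))

# Gaussian wave packet moving right, launched left of the interface.
x = np.arange(N)
k0, x0, w = 0.9, N // 4, 12.0
psi = np.exp(1j * k0 * x - (x - x0) ** 2 / (2 * w ** 2))
psi /= np.linalg.norm(psi)

# Exact time evolution in the eigenbasis of H.
evals, evecs = np.linalg.eigh(H)
c = evecs.conj().T @ psi
psi_t = evecs @ (np.exp(-1j * evals * 120.0) * c)   # evolve to t = 120

prob = np.abs(psi_t) ** 2
R, T = prob[: N // 2].sum(), prob[N // 2:].sum()
print(f"reflection = {R:.2f}, transmission = {T:.2f}")
```

For an exciton one would replace the single-particle state by a two-particle wave packet with an electron-hole attraction, which is what allows the dissociation and interface-bound channels discussed above to appear.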

  20. Models for identification of erroneous atom-to-atom mapping of reactions performed by automated algorithms.

    PubMed

    Muller, Christophe; Marcou, Gilles; Horvath, Dragos; Aires-de-Sousa, João; Varnek, Alexandre

    2012-12-21

    Machine learning (SVM and the JRip rule learner) methods have been used in conjunction with the Condensed Graph of Reaction (CGR) approach to identify errors in the atom-to-atom mapping of chemical reactions produced by an automated mapping tool by ChemAxon. The modeling has been performed on the first three enzymatic classes of metabolic reactions from the KEGG database. Each reaction has been converted into a CGR representing a pseudomolecule with conventional (single, double, aromatic, etc.) bonds and dynamic bonds characterizing chemical transformations. The ChemAxon tool was used to automatically detect the matching atom pairs in reagents and products. These automated mappings were analyzed by a human expert and classified as "correct" or "wrong". ISIDA fragment descriptors generated from the CGRs of both correct and wrong mappings were used as attributes in machine learning. The learned models have been validated by n-fold cross-validation on the training set, followed by a challenge to detect correct and wrong mappings within an external test set of reactions never used for learning. Results show that both SVM and JRip models detect most of the wrongly mapped reactions. We believe that this approach could be used to identify erroneous atom-to-atom mapping performed by any automated algorithm.
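
Schematically, the workflow above reduces to training a classifier on fragment-count descriptors labeled correct/wrong. The sketch below does this on synthetic data; the descriptors, labels, and the "inflated dynamic-bond fragments" signal are invented stand-ins for ISIDA fragment counts of CGRs labeled by an expert.

```python
# Hedged sketch: SVM flagging "wrong" mappings from fragment-like counts.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n, d = 300, 40
X = rng.poisson(2.0, size=(n, d)).astype(float)   # fragment-like counts
# Pretend wrong mappings inflate a handful of "dynamic bond" fragments.
y = (X[:, :5].sum(axis=1) > 12).astype(int)       # 1 = wrong mapping

clf = SVC(kernel="rbf", gamma="scale")
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

The external-test-set challenge in the paper corresponds to holding out reactions entirely, rather than relying on cross-validation folds alone.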

  1. Modelling of series of types of automated trenchless works tunneling

    NASA Astrophysics Data System (ADS)

    Gendarz, P.; Rzasinski, R.

    2016-08-01

    Microtunneling is the newest method for constructing underground installations. The presented method is the result of experience with, and methods applied in, previous trenchless underground techniques. It is considered reasonable to elaborate a series of types of construction of tunneling machines in order to develop this particular earthworks method. There are many design solutions for such machines, but the current goal is to develop a non-excavation robotized machine. Erosion machines for tunnels with main dimensions of 1600, 2000, 2500 and 3150 are designed with the use of computer-aided methods. The creation of the series of types of tunneling machine constructions was preceded by an analysis of the current state. The verification of the practical methodology for creating the systematic part series was based on the designed series of types of erosion machines. The following were developed: a method of construction similarity of the erosion machines, algorithmic methods for variant analyses of quantitative construction attributes in the I-DEAS advanced graphical program, and relational and program parameterization. The manufacturing process of the parts will then be created, which allows the technological process to be verified on CNC machines. The models of the designed machines will be modified and the constructions consulted with erosion machine users and manufacturers such as Tauber Rohrbau GmbH & Co. KG from Münster and OHL ZS a.s. from Brno. The companies' acceptance will result in practical verification by the JUMARPOL company.

  2. A semi-automated vascular access system for preclinical models

    NASA Astrophysics Data System (ADS)

    Berry-Pusey, B. N.; Chang, Y. C.; Prince, S. W.; Chu, K.; David, J.; Taschereau, R.; Silverman, R. W.; Williams, D.; Ladno, W.; Stout, D.; Tsao, T. C.; Chatziioannou, A.

    2013-08-01

    Murine models are used extensively in biological and translational research. For many of these studies it is necessary to access the vasculature for the injection of biologically active agents. Among the possible methods for accessing the mouse vasculature, tail vein injections are a routine but critical step for many experimental protocols. To perform successful tail vein injections, a high level of skill and experience is required, leaving most scientists ill-suited to perform this task. This can lead to high variability between injections, which can impact experimental results. To allow more scientists to perform tail vein injections and to decrease the variability between injections, a vascular access system (VAS) that semi-automatically inserts a needle into the tail vein of a mouse was developed. The VAS uses near-infrared light, image processing techniques, computer-controlled motors, and a pressure feedback system to insert the needle and to validate its proper placement within the vein. The VAS was tested by injecting a commonly used radiolabeled probe (FDG) into the tail veins of five mice. These mice were then imaged using micro-positron emission tomography to measure the percentage of the injected probe remaining in the tail. These studies showed that, on average, the VAS leaves 3.4% of the injected probe in the tail. With these preliminary results, the VAS demonstrates the potential for improving the accuracy of tail vein injections in mice.

  3. Automated Finite Element Modeling of Wing Structures for Shape Optimization

    NASA Technical Reports Server (NTRS)

    Harvey, Michael Stephen

    1993-01-01

    The displacement formulation of the finite element method is the most general and most widely used technique for structural analysis of airplane configurations. Modern structural synthesis techniques based on the finite element method have reached a certain maturity in recent years, and large airplane structures can now be optimized with respect to sizing-type design variables for many load cases, subject to a rich variety of constraints including stress, buckling, frequency, stiffness and aeroelastic constraints (Refs. 1-3). These structural synthesis capabilities use gradient-based nonlinear programming techniques to search for improved designs. For these techniques to be practical, a major improvement was required in the computational cost of finite element analyses (needed repeatedly in the optimization process). Thus, associated with the progress in structural optimization, a new perspective of structural analysis has emerged, namely, structural analysis specialized for design optimization applications, or what is known as "design oriented structural analysis" (Ref. 4). This discipline includes approximation concepts and methods for obtaining behavior sensitivity information (Ref. 1), all needed to make the optimization of large structural systems (modeled by thousands of degrees of freedom and thousands of design variables) practical and cost effective.

  4. User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework

    USGS Publications Warehouse

    Markstrom, Steven L.; Koczot, Kathryn M.

    2008-01-01

    The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.

  5. An Energy Approach to a Micromechanics Model Accounting for Nonlinear Interface Debonding.

    SciTech Connect

    Tan, H.; Huang, Y.; Geubelle, P. H.; Liu, C.; Breitenfeld, M. S.

    2005-01-01

    We developed a micromechanics model to study the effect of nonlinear interface debonding on the constitutive behavior of composite materials. While implementing this micromechanics model into a large simulation code for solid rockets, we were challenged by problems such as tension/shear coupling and the nonuniform distribution of the displacement jump at the particle/matrix interfaces. We therefore propose an energy approach to solve these problems. This energy approach calculates the potential energy of the representative volume element, including the contribution from the interface debonding. By minimizing the potential energy with respect to the variation of the interface displacement jump, the traction-balanced interface debonding can be found and the macroscopic constitutive relations established. This energy approach has the ability to treat different load conditions in a unified way, and the interface cohesive law can take any arbitrary form. In this paper, the energy approach is verified to give the same constitutive behaviors as reported before.
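
The core idea (minimize the potential energy over the interface displacement jump, recovering traction balance) can be shown on a 1-D caricature: a matrix spring in series with a cohesive interface under an imposed displacement. The exponential cohesive law and all constants below are invented for illustration.

```python
# Hedged 1-D sketch of the energy approach: minimize spring + cohesive
# energy over the jump d, then check traction balance at the minimizer.
import numpy as np
from scipy.optimize import minimize_scalar

k = 50.0                   # matrix (spring) stiffness
smax, dc = 10.0, 0.1       # cohesive strength and characteristic opening
U = 0.05                   # imposed total displacement

def traction(d):           # exponential cohesive law, peak smax at d = dc
    return smax * (d / dc) * np.exp(1.0 - d / dc)

def cohesive_energy(d):    # Phi(d) = integral of the traction
    return smax * np.e * dc * (1.0 - (1.0 + d / dc) * np.exp(-d / dc))

def potential(d):          # spring energy + interface energy
    return 0.5 * k * (U - d) ** 2 + cohesive_energy(d)

res = minimize_scalar(potential, bounds=(0.0, U), method="bounded")
d_star = res.x
print(f"jump = {d_star:.4f}, "
      f"spring traction = {k * (U - d_star):.3f}, "
      f"cohesive traction = {traction(d_star):.3f}")
```

The two printed tractions agree at the minimizer, which is the stationarity condition the abstract refers to; in the full model the scalar jump becomes a field over the particle/matrix interface.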

  6. Automated calibration of a stream solute transport model: Implications for interpretation of biogeochemical parameters

    USGS Publications Warehouse

    Scott, D.T.; Gooseff, M.N.; Bencala, K.E.; Runkel, R.L.

    2003-01-01

    The hydrologic processes of advection, dispersion, and transient storage are the primary physical mechanisms affecting solute transport in streams. The estimation of parameters for a conservative solute transport model is an essential step in characterizing transient storage and other physical features that cannot be directly measured, and is often a preliminary step in the study of reactive solutes. Our study used inverse modeling to estimate parameters of the transient storage model OTIS (One-dimensional Transport with Inflow and Storage). Observations from a tracer injection experiment performed on Uvas Creek, California, USA, are used to illustrate the application of automated solute transport model calibration to conservative and nonconservative stream solute transport. A computer code for universal inverse modeling (UCODE) is used for the calibrations. Results of this procedure are compared with a previous study that used a trial-and-error parameter estimation approach. The results demonstrated: 1) the importance of proper estimation of discharge and lateral inflow within the stream system; 2) that although the fit to the observations is not much better when transient storage is invoked, a more randomly distributed set of residuals resulted (suggesting non-systematic error), indicating that transient storage is occurring; 3) that inclusion of transient storage for a reactive solute (Sr2+) provided a better fit to the observations, highlighting the importance of robust model parameterization; and 4) that applying an automated inverse-modeling calibration approach resulted in a comprehensive understanding of the model results and the limitations of the input data.
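
The forward model being calibrated is the transient-storage pair of equations: in-stream advection-dispersion exchanging with a storage zone, dC/dt = −(Q/A) dC/dx + D d²C/dx² + α(Cs − C) and dCs/dt = α(A/As)(C − Cs). A minimal explicit discretization, with invented parameter values (not the Uvas Creek calibration), looks like this:

```python
# Explicit sketch of the transient-storage (OTIS-style) equations.
# All parameter values are invented for demonstration.
import numpy as np

nx, dx, dt = 200, 1.0, 0.05
Q_over_A, D, alpha, A_over_As = 0.5, 0.1, 0.02, 2.0

C = np.zeros(nx)                               # in-stream concentration
Cs = np.zeros(nx)                              # storage-zone concentration
for step in range(4000):
    C[0] = 1.0 if step * dt < 60.0 else 0.0    # upstream tracer pulse
    adv = -Q_over_A * (C[1:-1] - C[:-2]) / dx  # upwind advection
    dif = D * (C[2:] - 2*C[1:-1] + C[:-2]) / dx**2
    exch = alpha * (Cs[1:-1] - C[1:-1])        # exchange with storage
    C[1:-1] += dt * (adv + dif + exch)
    Cs += dt * alpha * A_over_As * (C - Cs)

print(f"peak in-stream concentration: {C.max():.2f}")
print(f"storage-zone mass fraction: {Cs.sum() / (C.sum() + Cs.sum()):.2f}")
```

Calibration (the subject of the abstract) wraps a model like this in an optimizer that adjusts D, α, and the area ratio until the simulated breakthrough curve matches the tracer observations.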

  7. Cluster resolution: a metric for automated, objective and optimized feature selection in chemometric modeling.

    PubMed

    Sinkov, Nikolai A; Harynuk, James J

    2011-01-30

    A novel metric termed cluster resolution is presented. This metric compares the separation of clusters of data points while simultaneously considering the shapes of the clusters and their relative orientations. Using cluster resolution in conjunction with an objective variable-ranking metric allows for fully automated feature selection for the construction of chemometric models. The metric is based upon considering the maximum size of confidence ellipses around clusters of points representing different classes of objects that can be constructed without any overlap of the ellipses. For demonstration purposes we utilized PCA to classify samples of gasoline based upon their octane rating. The entire GC-MS chromatogram of each sample, comprising over 2 × 10⁶ variables, was considered. As an example, automated ranking by ANOVA was applied, followed by a forward selection approach to choose variables for inclusion. This approach can be applied generally to feature selection for a variety of applications and represents a significant step towards the development of fully automated, objective construction of chemometric models.
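
The defining computation (the largest confidence level at which the two clusters' ellipses do not yet overlap) can be sketched for two 2-D Gaussian clusters. The data, the boundary-sampling overlap test, and the bisection search below are invented simplifications of the published metric.

```python
# Rough sketch of "cluster resolution": bisect on the confidence level of
# covariance ellipses until they just touch. Synthetic 2-D clusters.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
a = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 0.5]], size=100)
b = rng.multivariate_normal([6, 1], [[0.8, -0.2], [-0.2, 1.2]], size=100)

def ellipse_boundary(pts, level, n=360):
    mu, cov = pts.mean(axis=0), np.cov(pts.T)
    L = np.linalg.cholesky(cov * chi2.ppf(level, df=2))
    th = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return mu + (L @ np.vstack([np.cos(th), np.sin(th)])).T

def inside(pts, x, level):
    mu, cov = pts.mean(axis=0), np.cov(pts.T)
    d = x - mu
    m2 = np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)  # Mahalanobis^2
    return m2 <= chi2.ppf(level, df=2)

def overlap(level):
    return inside(b, ellipse_boundary(a, level), level).any() or \
           inside(a, ellipse_boundary(b, level), level).any()

lo, hi = 0.5, 0.999999
for _ in range(40):                      # bisect on the confidence level
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if not overlap(mid) else (lo, mid)

print(f"cluster resolution = {lo:.4f}")
```

Feature selection then keeps a candidate variable only if adding it increases this resolution in the PCA score space.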

  8. AUTOMATED ACTIN FILAMENT SEGMENTATION, TRACKING AND TIP ELONGATION MEASUREMENTS BASED ON OPEN ACTIVE CONTOUR MODELS.

    PubMed

    Li, Hongsheng; Shen, Tian; Smith, Matthew B; Fujiwara, Ikuko; Vavylonis, Dimitrios; Huang, Xiaolei

    2009-06-28

    This paper presents an automated method for actin filament segmentation and tracking for measuring tip elongation rates in Total Internal Reflection Fluorescence Microscopy (TIRFM) images. The main contributions of the paper are: (i) we use a novel open active contour model for filament segmentation and tracking, which is fast and robust against noise; (ii) different strategies are proposed to solve the filament intersection problem, which is shown to be the main difficulty in filament tracking; and (iii) this fully automated method avoids the need for human interaction and thus reduces the time required for the entire elongation measurement process on an image sequence. Application to experimental results demonstrated the robustness and effectiveness of this method.

  9. IDEF3 and IDEF4 automation system requirements document and system environment models

    NASA Technical Reports Server (NTRS)

    Blinn, Thomas M.

    1989-01-01

The requirements specification is provided for the IDEF3 and IDEF4 tools that provide automated support for IDEF3 and IDEF4 modeling. The IDEF3 method is a scenario-driven process flow description capture method intended to be used by domain experts to represent knowledge about how a particular system or process works. The IDEF3 method provides modes to represent both (1) the Process Flow Description, to capture the relationships between actions within the context of a specific scenario, and (2) Object State Transitions, to capture the allowable transitions of an object in the domain. The IDEF4 method captures (1) the Class Submodel, or object hierarchy, (2) the Method Submodel, or the procedures associated with each class of objects, and (3) the Dispatch Mapping, or the relationships between the objects and methods in the object-oriented design. The requirements specified describe the capabilities that a fully functional IDEF3 or IDEF4 automated tool should support.

  10. A COMSOL-GEMS interface for modeling coupled reactive-transport geochemical processes

    NASA Astrophysics Data System (ADS)

    Azad, Vahid Jafari; Li, Chang; Verba, Circe; Ideker, Jason H.; Isgor, O. Burkan

    2016-07-01

An interface was developed between the COMSOL Multiphysics™ finite element analysis software and the (geo)chemical modeling platform GEMS for the reactive-transport modeling of (geo)chemical processes in variably saturated porous media. The two standalone software packages are managed from the interface, which uses a non-iterative operator splitting technique to couple the transport (COMSOL) and reaction (GEMS) processes. The interface allows modeling media with complex chemistry (e.g. cement) using GEMS thermodynamic database formats. Benchmark comparisons show that the developed interface can accurately predict a variety of reactive-transport processes. The full functionality of the interface was demonstrated by modeling transport processes, governed by the extended Nernst-Planck equation, in Class H Portland cement samples in high-pressure and high-temperature autoclaves simulating systems that are used to store captured carbon dioxide (CO2) in geological reservoirs.
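A minimal sketch of the non-iterative operator splitting such an interface uses, with a toy 1D explicit diffusion step standing in for the transport (COMSOL) solve and first-order decay standing in for the chemistry (GEMS) solve; both stand-ins and all numbers are illustrative assumptions, not the actual solvers:

```python
import math

def transport_step(c, D, dx, dt):
    """Explicit finite-difference diffusion step with crude zero-flux boundaries."""
    new = c[:]
    for i in range(1, len(c) - 1):
        new[i] = c[i] + D * dt / dx ** 2 * (c[i - 1] - 2 * c[i] + c[i + 1])
    new[0], new[-1] = new[1], new[-2]  # copy neighbours at the walls
    return new

def reaction_step(c, k, dt):
    """First-order decay in every cell (stand-in for a chemistry-solver call)."""
    return [ci * math.exp(-k * dt) for ci in c]

def solve(c0, D, k, dx, dt, n_steps):
    """Non-iterative (sequential) operator splitting: one transport solve,
    then one reaction solve, per time step, with no inner coupling iteration."""
    c = c0[:]
    for _ in range(n_steps):
        c = transport_step(c, D, dx, dt)  # transport sub-step
        c = reaction_step(c, k, dt)       # reaction sub-step
    return c
```

The explicit diffusion step is stable for D·dt/dx² ≤ 0.5; the non-iterative coupling trades some splitting error per step for never having to iterate between the two solvers.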

  11. Patient-specific bone modeling and analysis: the role of integration and automation in clinical adoption.

    PubMed

    Zadpoor, Amir A; Weinans, Harrie

    2015-03-18

    Patient-specific analysis of bones is considered an important tool for diagnosis and treatment of skeletal diseases and for clinical research aimed at understanding the etiology of skeletal diseases and the effects of different types of treatment on their progress. In this article, we discuss how integration of several important components enables accurate and cost-effective patient-specific bone analysis, focusing primarily on patient-specific finite element (FE) modeling of bones. First, the different components are briefly reviewed. Then, two important aspects of patient-specific FE modeling, namely integration of modeling components and automation of modeling approaches, are discussed. We conclude with a section on validation of patient-specific modeling results, possible applications of patient-specific modeling procedures, current limitations of the modeling approaches, and possible areas for future research.

  12. An automation of design and modelling tasks in NX Siemens environment with original software - generator module

    NASA Astrophysics Data System (ADS)

    Zbiciak, M.; Grabowik, C.; Janik, W.

    2015-11-01

Nowadays the constructional design process is almost exclusively aided by CAD/CAE/CAM systems. It is estimated that nearly 80% of design activities have a routine nature, and these routine design tasks are highly susceptible to automation. Design automation is usually achieved with API tools which allow building original software to support various engineering activities. In this paper, original software developed to automate engineering tasks at the stage of designing a product's geometrical shape is presented. The software works exclusively in the NX Siemens CAD/CAM/CAE environment and was prepared in Microsoft Visual Studio with application of the .NET technology and the NX SNAP library. The software allows designing and modelling of spur and helical involute gears; moreover, it is possible to estimate relative manufacturing costs. With the Generator module it is possible to design and model both standard and non-standard gear wheels. The main advantage of a model generated in this way is its better representation of the involute curve in comparison with those drawn using the specialized standard tools of CAD systems. This stems from the fact that in CAD systems an involute curve is usually drawn through 3 points, corresponding to points located on the addendum circle, the reference diameter of the gear, and the base circle, respectively. In the Generator module the involute curve is drawn through 11 points located on and above the base circle, up to the addendum circle, so the 3D gear wheel models are highly accurate. Application of the Generator module makes the modelling process very rapid, reducing the gear wheel modelling time to several seconds. During the conducted research an analysis of the differences between the standard 3-point and the 11-point involutes was made; the results and conclusions drawn from this analysis are presented in detail.
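The 11-point involute sampling described above can be sketched as follows; the parametric equations of the involute of a circle are standard, while the even parameter spacing from the base circle out to the addendum circle is an assumption for illustration:

```python
import math

def involute_points(r_base, r_max, n=11):
    """Return n (x, y) points on the involute of a circle of radius r_base,
    from the base circle out to radius r_max.

    Involute: x = r_b (cos t + t sin t), y = r_b (sin t - t cos t),
    and the radius at parameter t is r = r_b * sqrt(1 + t^2).
    """
    t_max = math.sqrt((r_max / r_base) ** 2 - 1.0)
    pts = []
    for i in range(n):
        t = t_max * i / (n - 1)
        pts.append((r_base * (math.cos(t) + t * math.sin(t)),
                    r_base * (math.sin(t) - t * math.cos(t))))
    return pts
```

Sampling 11 points between the base and addendum circles, rather than 3, gives a spline fit that follows the true involute much more closely, which is the accuracy advantage the Generator module claims.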

  13. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration

    PubMed Central

    Forment, Javier; Gilabert, Francisco; Robles, Antonio; Conejero, Vicente; Nuez, Fernando; Blanca, Jose M

    2008-01-01

    Background Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at . 
This site also provides detailed instructions for

  14. A Contextual Model for Identity Management (IdM) Interfaces

    ERIC Educational Resources Information Center

    Fuller, Nathaniel J.

    2014-01-01

    The usability of Identity Management (IdM) systems is highly dependent upon design that simplifies the processes of identification, authentication, and authorization. Recent findings reveal two critical problems that degrade IdM usability: (1) unfeasible techniques for managing various digital identifiers, and (2) ambiguous security interfaces.…

  15. The JigCell model builder: a spreadsheet interface for creating biochemical reaction network models.

    PubMed

    Vass, Marc T; Shaffer, Clifford A; Ramakrishnan, Naren; Watson, Layne T; Tyson, John J

    2006-01-01

Converting a biochemical reaction network to a set of kinetic rate equations is tedious and error-prone. We describe known interface paradigms for inputting models of intracellular regulatory networks: graphical layout (diagrams), wizards, scripting languages, and direct entry of chemical equations. We present the JigCell Model Builder, which allows users to define models as a set of reaction equations using a spreadsheet (an example of direct entry of equations) and outputs model definitions in the Systems Biology Markup Language, Level 2. We present the results of two usability studies. The spreadsheet paradigm demonstrated its effectiveness in reducing the number of errors made by modelers when compared to hand conversion of a wiring diagram to differential equations. A comparison of representatives of the four interface paradigms for a simple model of the cell cycle was conducted, which measured the time, mouse clicks, and keystrokes needed to enter the model, and the number of screens needed to view its contents. All four paradigms had similar data entry times. The spreadsheet and scripting language approaches require significantly fewer screens to view the models than do the wizard or graphical layout approaches.
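The tedious conversion that the JigCell Model Builder automates, from a reaction list to rate-equation terms, can be sketched as follows; the reaction representation and mass-action rate law here are simplified assumptions, not JigCell's actual format:

```python
def rate_equations(reactions):
    """Build mass-action rate terms from a reaction list.

    reactions: list of (reactants, products, k) tuples.
    Returns {species: [(signed_k, reactant_tuple), ...]}, where each entry
    contributes signed_k * product(concentrations of reactant_tuple) to
    that species' time derivative.
    """
    terms = {}
    for reactants, products, k in reactions:
        for s in reactants:   # each reactant is consumed at rate k*[reactants]
            terms.setdefault(s, []).append((-k, tuple(reactants)))
        for s in products:    # each product is produced at the same rate
            terms.setdefault(s, []).append((+k, tuple(reactants)))
    return terms

# A + B -> C with rate constant 1.0, then C -> A with rate constant 0.5
eqs = rate_equations([(("A", "B"), ("C",), 1.0), (("C",), ("A",), 0.5)])
```

Even this toy version shows why hand conversion is error-prone: every species collects signed terms from every reaction it touches, and a single missed sign or term silently corrupts the model.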

  16. Swimming of a model ciliate near an air-liquid interface.

    PubMed

    Wang, S; Ardekani, A M

    2013-06-01

In this work, the role of the hydrodynamic forces on a swimming microorganism near an air-liquid interface is studied. Lubrication theory is utilized to analyze hydrodynamic effects within the narrow gap between a flat interface and a small swimmer. Using an archetypal low-Reynolds-number swimming model called the "squirmer," we find that the magnitude of the vertical swimming velocity is of order O(ε ln ε), where ε is the ratio of the gap width to the swimmer's body size. The reduced swimming velocity near an interface can explain experimental observations of the aggregation of microorganisms near a liquid interface. PMID:23848775

  17. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    PubMed Central

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

Purpose: Significant dosimetric benefits have previously been demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the measured distances to the corresponding model values. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery, and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  18. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

Purpose: Significant dosimetric benefits have previously been demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the measured distances to the corresponding model values. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery, and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
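The safety-buffer estimation described in the Methods could, for example, be sketched by fitting a distribution to the measured clearance discrepancies and reading off a tail quantile for a target collision probability; the normality assumption and the sample values below are illustrative, not taken from the study:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def buffer_distance(discrepancies, collision_prob):
    """Buffer b such that P(discrepancy > b) == collision_prob under a
    fitted normal model, found by bisection on the tail probability."""
    n = len(discrepancies)
    mu = sum(discrepancies) / n
    sigma = math.sqrt(sum((d - mu) ** 2 for d in discrepancies) / (n - 1))
    lo, hi = mu, mu + 10.0 * sigma   # tail(lo) = 0.5, tail(hi) ~ 0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 1.0 - normal_cdf(mid, mu, sigma) > collision_prob:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Smaller target collision probabilities (0.001% vs 0.1%) push the buffer further into the tail, which matches the study's use of probability-graded, site-specific buffers.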

  19. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

The process of designing crew stations for large-scale, complex automated systems is made difficult by the flexibility of roles that the crew can assume and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must operate equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  20. A New Tool for Inundation Modeling: Community Modeling Interface for Tsunamis (ComMIT)

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Moore, C. W.; Greenslade, D. J. M.; Pattiaratchi, C.; Badal, R.; Synolakis, C. E.; Kânoğlu, U.

    2011-11-01

    Almost 5 years after the 26 December 2004 Indian Ocean tragedy, the 10 August 2009 Andaman tsunami demonstrated that accurate forecasting is possible using the tsunami community modeling tool Community Model Interface for Tsunamis (ComMIT). ComMIT is designed for ease of use, and allows dissemination of results to the community while addressing concerns associated with proprietary issues of bathymetry and topography. It uses initial conditions from a precomputed propagation database, has an easy-to-interpret graphical interface, and requires only portable hardware. ComMIT was initially developed for Indian Ocean countries with support from the United Nations Educational, Scientific, and Cultural Organization (UNESCO), the United States Agency for International Development (USAID), and the National Oceanic and Atmospheric Administration (NOAA). To date, more than 60 scientists from 17 countries in the Indian Ocean have been trained and are using it in operational inundation mapping.

  1. Modeling Auditory-Haptic Interface Cues from an Analog Multi-line Telephone

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Anderson, Mark R.; Bittner, Rachael M.

    2012-01-01

The Western Electric Company produced a multi-line telephone during the 1940s-1970s using a six-button interface design that provided robust tactile, haptic and auditory cues regarding the "state" of the communication system. This multi-line telephone was used as a model for a trade study comparison of two interfaces: a touchscreen interface (iPad) versus a pressure-sensitive strain gauge button interface (Phidget USB interface controllers). The experiment and its results are detailed in the authors' AES 133rd convention paper "Multimodal Information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays". This Engineering Brief describes how the interface logic, visual indications, and auditory cues of the original telephone were synthesized using MAX/MSP, including the logic for line selection, line hold, and priority line activation.

  2. Hydro-mechanical regimes of deforming subduction interface: modeling versus observations

    NASA Astrophysics Data System (ADS)

    Zheng, L.; Gerya, T.; May, D.

    2015-12-01

Multiple lines of evidence, including seismic observations, magnetotelluric imaging, and heat flow modeling, indicate that fluid flows exist in the subduction interface. Fluid percolation should strongly modify rock deformation through fluid-induced weakening within the subduction interface. Hence, we study the fluid-rock interaction along the subduction interface using a visco-plastic hydro-mechanical model in which rock deformation and fluid percolation are self-consistently coupled. Based on a series of 2D numerical experiments, we found two typical hydro-mechanical regimes of the deforming subduction interface: (1) coupled and (2) decoupled. In the coupled regime, the tectonic movement of the subduction interface is divided into blocks; newly generated faults are distributed uniformly, forming a fault band, and fluid activity concentrates inside the faults. In the decoupled regime, the upper layer of the subduction interface stops moving while the lower layer continues moving along with the subducting slab; a primary fault is generated at the centre of the subduction interface, namely the decoupled interface. Available observations suggest that both coupled and decoupled regimes can be observed in nature at different scales. A systematic parameter study suggests that the transition between the coupled and decoupled subduction interface regimes is controlled mainly by the magnitude of the yield strength of the subducted rocks, which depends on their cohesion and friction coefficient.

  3. Statistical modelling of networked human-automation performance using working memory capacity.

    PubMed

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models. PMID:24308716
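Of the three modelling approaches compared, the linear-regression baseline is simple enough to sketch in a few lines; the working-memory-capacity scores and performance values below are synthetic, not the study's data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

wm = [2.0, 3.0, 4.0, 5.0, 6.0]         # hypothetical WM capacity scores
perf = [0.55, 0.62, 0.71, 0.78, 0.84]  # hypothetical task accuracy
a, b = fit_line(wm, perf)              # predict accuracy from WM capacity
```

A positive slope b captures the study's core observation that higher WM capacity predicts better supervisory performance; the Gaussian-process and Bayesian-network models extend this to extrapolation and inference under uncertainty.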

  4. A Psycholinguistic Model for Simultaneous Translation, and Proficiency Assessment by Automated Acoustic Analysis of Discourse.

    NASA Astrophysics Data System (ADS)

    Yaghi, Hussein M.

    Two separate but related issues are addressed: how simultaneous translation (ST) works on a cognitive level and how such translation can be objectively assessed. Both of these issues are discussed in the light of qualitative and quantitative analyses of a large corpus of recordings of ST and shadowing. The proposed ST model utilises knowledge derived from a discourse analysis of the data, many accepted facts in the psychology tradition, and evidence from controlled experiments that are carried out here. This model has three advantages: (i) it is based on analyses of extended spontaneous speech rather than word-, syllable-, or clause -bound stimuli; (ii) it draws equally on linguistic and psychological knowledge; and (iii) it adopts a non-traditional view of language called 'the linguistic construction of reality'. The discourse-based knowledge is also used to develop three computerised systems for the assessment of simultaneous translation: one is a semi-automated system that treats the content of the translation; and two are fully automated, one of which is based on the time structure of the acoustic signals whilst the other is based on their cross-correlation. For each system, several parameters of performance are identified, and they are correlated with assessments rendered by the traditional, subjective, qualitative method. Using signal processing techniques, the acoustic analysis of discourse leads to the conclusion that quality in simultaneous translation can be assessed quantitatively with varying degrees of automation. It identifies as measures of performance (i) three content-based standards; (ii) four time management parameters that reflect the influence of the source on the target language time structure; and (iii) two types of acoustical signal coherence. Proficiency in ST is shown to be directly related to coherence and speech rate but inversely related to omission and delay. High proficiency is associated with a high degree of simultaneity and

  5. Reconciling lattice and continuum models for polymers at interfaces

    NASA Astrophysics Data System (ADS)

    Fleer, G. J.; Skvortsov, A. M.

    2012-04-01

It is well known that lattice and continuum descriptions for polymers at interfaces are, in principle, equivalent. In order to compare the two models quantitatively, one needs a relation between the inverse extrapolation length c as used in continuum theories and the lattice adsorption parameter Δχs (defined with respect to the critical point). So far, this has been done only for ideal chains with zero segment volume in extremely dilute solutions. The relation Δχs(c) is obtained by matching the boundary conditions in the two models. For depletion (positive c and Δχs) the result is very simple: Δχs = ln(1 + c/5). For adsorption (negative c and Δχs) the ideal-chain treatment leads to an unrealistic divergence for strong adsorption: c decreases without bound and the train volume fraction exceeds unity. This is due to the fact that for ideal chains the volume filling cannot be accounted for. We extend the treatment to real chains with finite segment volume at finite concentrations, for both good and theta solvents. For depletion the volume filling is not important and the ideal-chain result Δχs = ln(1 + c/5) is generally valid also for non-ideal chains, at any concentration, chain length, or solvency. Depletion profiles can be accurately described in terms of two length scales: ρ = tanh²[(z + p)/δ], where the depletion thickness (distal length) δ is a known function of chain length and polymer concentration, and the proximal length p is a known function of c (or Δχs) and δ. For strong repulsion p = 1/c (then the proximal length equals the extrapolation length); for weaker repulsion p depends also on chain length and polymer concentration (then p is smaller than 1/c). In very dilute solutions we find quantitative agreement with previous analytical results for ideal chains, for any chain length, down to oligomers. In more concentrated solutions there is excellent agreement with numerical self-consistent depletion profiles, for both weak and strong
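The two closed-form relations quoted in this abstract, Δχs = ln(1 + c/5) and ρ = tanh²[(z + p)/δ] with p = 1/c in the strong-repulsion limit, can be evaluated directly; the parameter values below are chosen only for illustration:

```python
import math

def delta_chi_s(c):
    """Lattice adsorption parameter for depletion: dchi_s = ln(1 + c/5)."""
    return math.log(1.0 + c / 5.0)

def depletion_profile(z, c, delta):
    """Depletion profile rho(z) = tanh^2((z + p)/delta), with the
    strong-repulsion proximal length p = 1/c."""
    p = 1.0 / c
    return math.tanh((z + p) / delta) ** 2

c, delta = 2.0, 5.0  # illustrative inverse extrapolation length and depletion thickness
```

The profile rises monotonically from near zero at the wall toward the bulk value of 1 on the length scale δ, as the two-length-scale description requires.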

  6. Intelligent sensor-model automated control of PMR-15 autoclave processing

    NASA Astrophysics Data System (ADS)

    Hart, S.; Kranbuehl, D.; Loos, A.; Hinds, B.; Koury, J.

    An intelligent sensor model system has been built and used for automated control of the PMR-15 cure process in the autoclave. The system uses frequency-dependent FM sensing (FDEMS), the Loos processing model, and the Air Force QPAL intelligent software shell. The Loos model is used to predict and optimize the cure process including the time-temperature dependence of the extent of reaction, flow, and part consolidation. The FDEMS sensing system in turn monitors, in situ, the removal of solvent, changes in the viscosity, reaction advancement and cure completion in the mold continuously throughout the processing cycle. The sensor information is compared with the optimum processing conditions from the model. The QPAL composite cure control system allows comparison of the sensor monitoring with the model predictions to be broken down into a series of discrete steps and provides a language for making decisions on what to do next regarding time-temperature and pressure.

  7. The enhanced Software Life Cycle Support Environment (ProSLCSE): Automation for enterprise and process modeling

    NASA Technical Reports Server (NTRS)

    Milligan, James R.; Dutton, James E.

    1993-01-01

    In this paper, we have introduced a comprehensive method for enterprise modeling that addresses the three important aspects of how an organization goes about its business. FirstEP includes infrastructure modeling, information modeling, and process modeling notations that are intended to be easy to learn and use. The notations stress the use of straightforward visual languages that are intuitive, syntactically simple, and semantically rich. ProSLCSE will be developed with automated tools and services to facilitate enterprise modeling and process enactment. In the spirit of FirstEP, ProSLCSE tools will also be seductively easy to use. Achieving fully managed, optimized software development and support processes will be long and arduous for most software organizations, and many serious problems will have to be solved along the way. ProSLCSE will provide the ability to document, communicate, and modify existing processes, which is the necessary first step.

  8. Intelligent sensor-model automated control of PMR-15 autoclave processing

    NASA Technical Reports Server (NTRS)

    Hart, S.; Kranbuehl, D.; Loos, A.; Hinds, B.; Koury, J.

    1992-01-01

    An intelligent sensor model system has been built and used for automated control of the PMR-15 cure process in the autoclave. The system uses frequency-dependent FM sensing (FDEMS), the Loos processing model, and the Air Force QPAL intelligent software shell. The Loos model is used to predict and optimize the cure process including the time-temperature dependence of the extent of reaction, flow, and part consolidation. The FDEMS sensing system in turn monitors, in situ, the removal of solvent, changes in the viscosity, reaction advancement and cure completion in the mold continuously throughout the processing cycle. The sensor information is compared with the optimum processing conditions from the model. The QPAL composite cure control system allows comparison of the sensor monitoring with the model predictions to be broken down into a series of discrete steps and provides a language for making decisions on what to do next regarding time-temperature and pressure.

  9. Simulation of evaporation of a sessile drop using a diffuse interface model

    NASA Astrophysics Data System (ADS)

    Sefiane, Khellil; Ding, Hang; Sahu, Kirti; Matar, Omar

    2008-11-01

We consider here the evaporation dynamics of a Newtonian liquid sessile drop using an improved diffuse interface model. The governing equations for the drop and the surrounding vapour are both solved, separated by the order parameter (i.e. volume fraction), based on the previous work of Ding et al. JCP 2007. The diffuse interface model has been shown to be successful in modelling moving contact line problems (Jacqmin 2000; Ding and Spelt 2007, 2008). Here, a pinned contact line of the drop is assumed. The evaporative mass flux at the liquid-vapour interface is constitutively a function of the local temperature and is treated as a source term in the interface evolution equation, i.e. the Cahn-Hilliard equation. The model is validated by comparing its predictions with data available in the literature. The evaporative dynamics are illustrated in terms of drop snapshots, and a quantitative comparison with the results of a free surface model is made.

  10. Mathematical analysis of a sharp-diffuse interfaces model for seawater intrusion

    NASA Astrophysics Data System (ADS)

    Choquet, C.; Diédhiou, M. M.; Rosier, C.

    2015-10-01

    We consider a new model mixing sharp and diffuse interface approaches for seawater intrusion phenomena in free aquifers. More precisely, a phase field model is introduced in the boundary conditions on the virtual sharp interfaces. We thus include in the model the existence of diffuse transition zones but we preserve the simplified structure allowing front tracking. The three-dimensional problem then reduces to a two-dimensional model involving a strongly coupled system of partial differential equations of parabolic type describing the evolution of the depths of the two free surfaces, that is the interface between salt- and freshwater and the water table. We prove the existence of a weak solution for the model completed with initial and boundary conditions. We also prove that the depths of the two interfaces satisfy a coupled maximum principle.

  11. Coherent description of transport across the water interface: From nanodroplets to climate models

    NASA Astrophysics Data System (ADS)

    Wilhelmsen, Øivind; Trinh, Thuat T.; Lervik, Anders; Badam, Vijay Kumar; Kjelstrup, Signe; Bedeaux, Dick

    2016-03-01

    Transport of mass and energy across the vapor-liquid interface of water is of central importance in a variety of contexts such as climate models, weather forecasts, and power plants. We provide a complete description of the transport properties of the vapor-liquid interface of water within the framework of nonequilibrium thermodynamics. Transport across the planar interface is then described by three interface transfer coefficients, while nine more coefficients extend the description to curved interfaces. We obtain all coefficients in the range 260-560 K by taking advantage of water evaporation experiments at low temperatures, nonequilibrium molecular dynamics with the TIP4P/2005 rigid-water-molecule model at high temperatures, and square gradient theory to represent the whole range. Square gradient theory is used to link the region where experiments are possible (low vapor pressures) to the region where nonequilibrium molecular dynamics can be done (high vapor pressures). This enables a description of transport across the planar water interface, interfaces of bubbles and droplets, and interfaces of water structures with complex geometries. The results are likely to improve the description of evaporation and condensation of water at widely different scales; they open a route to improving the understanding of nanodroplets on a small scale and the precision of climate models on a large scale.
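
    In the nonequilibrium-thermodynamics framework referred to above, transport across a planar interface is commonly written as linear force-flux relations; the form below is assumed from the general framework (notation: $J_q'$ the measurable heat flux, $J$ the mass flux), not quoted from the paper:

```latex
% Linear force-flux relations across a planar liquid-vapour interface
\Delta\!\left(\frac{1}{T}\right) = R_{qq}\,J_q' + R_{q\mu}\,J ,
\qquad
-\frac{\Delta\mu_T}{T} = R_{\mu q}\,J_q' + R_{\mu\mu}\,J
```

    With Onsager symmetry, $R_{q\mu} = R_{\mu q}$, this leaves the three independent interface transfer coefficients quoted above.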

  12. Approximation of skewed interfaces with tensor-based model reduction procedures: Application to the reduced basis hierarchical model reduction approach

    NASA Astrophysics Data System (ADS)

    Ohlberger, Mario; Smetana, Kathrin

    2016-09-01

    In this article we introduce a procedure that allows one to recover the potentially very good approximation properties of tensor-based model reduction procedures for the solution of partial differential equations in the presence of interfaces or strong gradients in the solution that are skewed with respect to the coordinate axes. The two key ideas are, first, locating the interface either by solving a lower-dimensional partial differential equation or by using data functions and, second, removing the interface from the solution by choosing the determined interface as the lifting function of the Dirichlet boundary conditions. We demonstrate in numerical experiments for linear elliptic equations and the reduced basis hierarchical model reduction approach that the proposed procedure locates the interface well and yields significantly improved convergence behavior, even in the case where we only consider an approximation of the interface.

  13. The development of an automated ward independent delirium risk prediction model.

    PubMed

    de Wit, Hugo A J M; Winkens, Bjorn; Mestres Gonzalvo, Carlota; Hurkens, Kim P G M; Mulder, Wubbo J; Janknegt, Rob; Verhey, Frans R; van der Kuy, Paul-Hugo M; Schols, Jos M G A

    2016-08-01

    Background A delirium is common in hospital settings, resulting in increased mortality and costs. Prevention of a delirium is clearly preferred over treatment. A delirium risk prediction model can be helpful to identify patients at risk of a delirium, allowing the start of preventive treatment. Current risk prediction models rely on manual calculation of the individual patient risk. Objective The aim of this study was to develop an automated ward-independent delirium risk prediction model, and to show that such a model can be constructed exclusively from electronically available risk factors and thereby implemented into a clinical decision support system (CDSS) to optimally support the physician in initiating preventive treatment. Setting A Dutch teaching hospital. Methods A retrospective cohort study in which patients of 60 years or older were selected when admitted to the hospital, with no delirium diagnosis when presenting or during the first day of admission. We used logistic regression analysis to develop a delirium prediction model from the electronically available predictive variables. Main outcome measure A delirium risk prediction model. Results A delirium risk prediction model was developed using the predictive variables that were significant in the univariable regression analyses. The area under the receiver operating characteristic curve of the "medication model" was 0.76 after internal validation. Conclusions CDSSs can be used to automatically predict the risk of a delirium in individual hospitalised patients by exclusively using electronically available predictive variables. To increase the use and improve the quality of predictive models, clinical risk factors should be documented ready for automated use. PMID:27177868
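
    A model of the kind described (logistic regression over electronically available predictors) reduces at prediction time to a weighted sum passed through a sigmoid; the predictors and coefficients below are hypothetical placeholders, not the study's fitted values:

```python
import math

# Hypothetical predictors and coefficients (illustrative only; these are
# not the fitted values from the study)
COEFFS = {"age_over_75": 1.2, "polypharmacy": 0.8,
          "infection": 0.9, "prior_delirium": 1.5}
INTERCEPT = -3.0

def delirium_risk(patient):
    # p = 1 / (1 + exp(-(b0 + sum b_i * x_i))), with x_i in {0, 1}
    z = INTERCEPT + sum(COEFFS[k] * patient.get(k, 0) for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

low_risk = delirium_risk({})
high_risk = delirium_risk({"age_over_75": 1, "polypharmacy": 1,
                           "infection": 1, "prior_delirium": 1})
```

    A CDSS built this way only needs the binary predictor flags from the electronic record to produce a per-patient risk automatically.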

  14. Effects of modeling errors on trajectory predictions in air traffic control automation

    NASA Technical Reports Server (NTRS)

    Jackson, Michael R. C.; Zhao, Yiyuan; Slattery, Rhonda

    1996-01-01

    Air traffic control automation synthesizes aircraft trajectories for the generation of advisories. Trajectory computation employs models of aircraft performance and weather conditions. In contrast, actual trajectories are flown by real aircraft under actual conditions. Since synthetic trajectories are used in landing scheduling and conflict probing, it is very important to understand the differences between computed trajectories and actual trajectories. This paper examines the effects of aircraft modeling errors on the accuracy of trajectory predictions in air traffic control automation. Three-dimensional point-mass aircraft equations of motion are assumed to be able to generate actual aircraft flight paths. Modeling errors are described as uncertain parameters or uncertain input functions. Pilot or autopilot feedback actions are expressed as equality constraints to satisfy control objectives. A typical trajectory is defined by a series of flight segments, with different control objectives for each flight segment and conditions that define segment transitions. A constrained linearization approach is used to analyze trajectory differences caused by various modeling errors, by developing a linear time-varying system that describes the trajectory errors, with expressions to transfer the trajectory errors across moving segment transitions. A numerical example is presented for a complete commercial aircraft descent trajectory consisting of several flight segments.

  15. Interfacing air pathway models with other media models for impact assessment

    SciTech Connect

    Drake, R.L.

    1980-10-01

    The assessment of the impacts/effects of a coal conversion industry on human health, ecological systems, property and aesthetics requires knowledge about effluent and fugitive emissions, dispersion of pollutants in abiotic media, chemical and physical transformations of pollutants during transport, and pollutant fate passing through biotic pathways. Some of the environmental impacts that result from coal conversion facility effluents are subtle, acute, subacute or chronic effects in humans and other ecosystem members, acute or chronic damage of materials and property, odors, impaired atmospheric visibility, and impacts on local, regional and global weather and climate. This great variety of impacts and effects places great demands on the abiotic and biotic numerical simulators (modelers) in terms of time and space scales, transformation rates, and system structure. This paper primarily addresses the demands placed on the atmospheric analyst. The paper considers the important air pathway processes, the interfacing of air pathway models with other media models, and the classes of air pathway models currently available. In addition, a strong plea is made for interaction and communication between all modeling groups to promote efficient construction of intermedia models that truly interface across pathway boundaries.

  16. Ab-initio molecular modeling of interfaces in tantalum-carbon system

    SciTech Connect

    Balani, Kantesh; Mungole, Tarang; Bakshi, Srinivasa Rao; Agarwal, Arvind

    2012-03-15

    Processing of ultrahigh temperature TaC ceramic material with sintering additives of B{sub 4}C and reinforcement of carbon nanotubes (CNTs) gives rise to possible formation of several interfaces (Ta{sub 2}C-TaC, TaC-CNT, Ta{sub 2}C-CNT, TaB{sub 2}-TaC, and TaB{sub 2}-CNT) that could influence the resultant properties. Current work focuses on interfaces developed during spark plasma sintering of TaC-system and performing ab initio molecular modeling of the interfaces generated during processing of TaC-B{sub 4}C and TaC-CNT composites. The energy of the various interfaces has been evaluated and compared with TaC-Ta{sub 2}C interface. The iso-surface electronic contours are extracted from the calculations eliciting the enhanced stability of TaC-CNT interface by 72.2%. CNTs form stable interfaces with Ta{sub 2}C and TaB{sub 2} phases with a reduction in the energy by 35.8% and 40.4%, respectively. The computed Ta-C-B interfaces are also compared with experimentally observed interfaces in high resolution TEM images.

  17. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.

  18. Automated Translation and Thermal Zoning of Digital Building Models for Energy Analysis

    SciTech Connect

    Jones, Nathaniel L.; McCrone, Colin J.; Walter, Bruce J.; Pratt, Kevin B.; Greenberg, Donald P.

    2013-08-26

    Building energy simulation is valuable during the early stages of design, when decisions can have the greatest impact on energy performance. However, preparing digital design models for building energy simulation typically requires tedious manual alteration. This paper describes a series of five automated steps to translate geometric data from an unzoned CAD model into a multi-zone building energy model. First, CAD input is interpreted as geometric surfaces with materials. Second, surface pairs defining walls of various thicknesses are identified. Third, normal directions of unpaired surfaces are determined. Fourth, space boundaries are defined. Fifth, optionally, settings from previous simulations are applied, and spaces are aggregated into a smaller number of thermal zones. Building energy models created quickly using this method can offer guidance throughout the design process.
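
    Step two above (identifying surface pairs that define walls) can be sketched as a search for near-parallel, opposite-facing planar surfaces separated by less than a maximum wall thickness; the surface representation and thresholds below are illustrative assumptions, not the paper's algorithm:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def pair_wall_surfaces(surfaces, max_thickness=0.5, angle_tol=1e-3):
    # Toy wall detector: pair planar surfaces that face each other
    # (anti-parallel unit normals) and are closer than max_thickness.
    # Each surface is (point_on_plane, unit_normal).
    pairs, used = [], set()
    for i, (pi, ni) in enumerate(surfaces):
        for j in range(i + 1, len(surfaces)):
            if i in used or j in used:
                continue
            pj, nj = surfaces[j]
            if dot(ni, nj) > -1.0 + angle_tol:  # not anti-parallel
                continue
            gap = abs(dot(sub(pj, pi), ni))     # separation along the normal
            if gap <= max_thickness:
                pairs.append((i, j))
                used.update((i, j))
    return pairs

# two faces of a 0.3-thick wall plus an unrelated horizontal surface
surfaces = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),
            ((0.3, 0.0, 0.0), (-1.0, 0.0, 0.0)),
            ((5.0, 0.0, 0.0), (0.0, 0.0, 1.0))]
walls = pair_wall_surfaces(surfaces)
```

    Surfaces left unpaired by this step are the ones whose normal directions must be determined in step three.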

  19. Automated determination of fibrillar structures by simultaneous model building and fiber diffraction refinement.

    PubMed

    Potrzebowski, Wojciech; André, Ingemar

    2015-07-01

    For highly oriented fibrillar molecules, three-dimensional structures can often be determined from X-ray fiber diffraction data. However, because of limited information content, structure determination and validation can be challenging. We demonstrate that automated structure determination of protein fibers can be achieved by guiding the building of macromolecular models with fiber diffraction data. We illustrate the power of our approach by determining the structures of six bacteriophage viruses de novo using fiber diffraction data alone and together with solid-state NMR data. Furthermore, we demonstrate the feasibility of molecular replacement from monomeric and fibrillar templates by solving the structure of a plant virus using homology modeling and protein-protein docking. The generated models explain the experimental data to the same degree as deposited reference structures but with improved structural quality. We also developed a cross-validation method for model selection. The results highlight the power of fiber diffraction data as structural constraints.

  20. A Study on Automated Context-aware Access Control Model Using Ontology

    NASA Astrophysics Data System (ADS)

    Jang, Bokman; Jang, Hyokyung; Choi, Euiin

    Applications in a context-aware computing environment are connected over wireless networks to a variety of devices. Consequently, reckless access to information resources can compromise the system, so managing access authority is an important issue for both the information resources and the system itself, and requires establishing a suitable security policy. Existing security models, however, grant access to resources simply through a user ID and password, and take no account of the user's context information. In this paper, we propose a model of automated context-aware access control using an ontology, which can control access to resources more efficiently through inference and judgment over context information, collected from the user and the user's environment and represented in an ontology model.
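
    The core access decision of such a context-aware model can be sketched as rules whose required context attributes must all hold before access is granted; the roles, resources, and attributes below are hypothetical, and the ontology-based inference the paper proposes is not shown:

```python
# Hypothetical context-aware access rules (illustrative only)
RULES = [
    {"role": "doctor", "resource": "patient_record",
     "require": {"location": "ward", "network": "hospital_wifi"}},
    {"role": "nurse", "resource": "patient_record",
     "require": {"location": "ward", "network": "hospital_wifi",
                 "shift": "on_duty"}},
]

def access_allowed(role, resource, context):
    # Grant access only if some rule for (role, resource) has all of its
    # required context attributes satisfied by the current context.
    for rule in RULES:
        if rule["role"] == role and rule["resource"] == resource:
            if all(context.get(k) == v for k, v in rule["require"].items()):
                return True
    return False

allowed = access_allowed("doctor", "patient_record",
                         {"location": "ward", "network": "hospital_wifi"})
```

    Unlike an ID-and-password check, the same credentials yield different decisions depending on where and how the user connects.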

  2. A phase-field point-particle model for particle-laden interfaces

    NASA Astrophysics Data System (ADS)

    Gu, Chuan; Botto, Lorenzo

    2014-11-01

    The irreversible attachment of solid particles to fluid interfaces is exploited in a variety of applications, such as froth flotation and Pickering emulsions. Critical in these applications is to predict particle transport in and near the interface, and the two-way coupling between the particles and the interface. While it is now possible to carry out particle-resolved simulations of these systems, simulating relatively large systems with many particles remains challenging. We present validation studies and preliminary results for a hybrid Eulerian-Lagrangian simulation method, in which the dynamics of the interface is fully resolved by a phase-field approach, while the particles are treated in the "point-particle" approximation. With this method, which represents a compromise between the competing needs of resolving particle-scale and interface-scale phenomena, we are able to simulate the adsorption of a large number of particles in the interface of drops, and particle-interface interactions during the spinodal coarsening of a multiphase system. While this method models the adsorption phenomenon efficiently and with reasonable accuracy, subtle issues remain in the modelling of hydrodynamic and capillary forces for particles in contact with the interface.

  3. A coupled damage-plasticity model for the cyclic behavior of shear-loaded interfaces

    NASA Astrophysics Data System (ADS)

    Carrara, P.; De Lorenzis, L.

    2015-12-01

    The present work proposes a novel thermodynamically consistent model for the behavior of interfaces under shear (i.e. mode-II) cyclic loading conditions. The interface behavior is defined coupling damage and plasticity. The admissible states' domain is formulated restricting the tangential interface stress to non-negative values, which makes the model suitable e.g. for interfaces with thin adherends. Linear softening is assumed so as to reproduce, under monotonic conditions, a bilinear mode-II interface law. Two damage variables govern respectively the loss of strength and of stiffness of the interface. The proposed model needs the evaluation of only four independent parameters, i.e. three defining the monotonic mode-II interface law, and one ruling the fatigue behavior. This limited number of parameters and their clear physical meaning facilitate experimental calibration. Model predictions are compared with experimental results on fiber reinforced polymer sheets externally bonded to concrete involving different load histories, and an excellent agreement is obtained.

  4. Generating Phenotypical Erroneous Human Behavior to Evaluate Human-automation Interaction Using Model Checking

    PubMed Central

    Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.

    2012-01-01

    Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the statespace and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914

  5. A conceptual model of the automated credibility assessment of the volunteered geographic information

    NASA Astrophysics Data System (ADS)

    Idris, N. H.; Jackson, M. J.; Ishak, M. H. I.

    2014-02-01

    The use of Volunteered Geographic Information (VGI) in collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information has been recognized as complementing the maintenance process of authoritative mapping data sources and as contributing to the development of Digital Earth. The main barrier to the use of these data in supporting this bottom-up approach is the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assessing these data is to rely on an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches to automatically assess the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced applications. The conceptual model proposes two main components to be assessed: metadata and data. The metadata component comprises indicators of the hosting websites and the sources of data/information. The data component comprises indicators to assess absolute and relative data positioning, attribute, thematic, temporal and geometric correctness and consistency. This paper suggests approaches to assess these components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess correctness and consistency in the data component, we suggest a matching validation approach using current emerging technologies from Linked Data infrastructures and third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality issues of data contributed by web citizen providers.
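
    Once the individual indicators are scored, one simple way to combine them into a single credibility value is a weighted aggregate; the indicator names mirror the two components above, but the weights are hypothetical and not part of the conceptual model itself:

```python
# Hypothetical weights over metadata indicators (hosting, source) and
# data indicators (positioning, attribute, temporal, geometric)
WEIGHTS = {"hosting": 0.15, "source": 0.15, "positioning": 0.20,
           "attribute": 0.20, "temporal": 0.15, "geometric": 0.15}

def credibility(scores):
    # each indicator score lies in [0, 1]; missing indicators score 0
    return sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)

perfect = credibility({k: 1.0 for k in WEIGHTS})
partial = credibility({"hosting": 1.0, "positioning": 0.5})
```

    A contribution with complete, consistent indicators scores near 1, while one assessable only from its hosting site scores much lower.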

  6. Design Through Manufacturing: The Solid Model-Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2002-01-01

    State-of-the-art computer-aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts reflecting every detail of the finished product. Ideally, in the aerospace industry, these models should fulfill two very important functions: (1) provide numerical control information for automated manufacturing of precision parts, and (2) enable analysts to easily evaluate the stress levels (using finite element analysis, FEA) for all structurally significant parts used in aircraft and space vehicles. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. Presently, the process of preparing CAD models for FEA consumes a great deal of the analyst's time.

  7. Modeling interface roughness scattering in a layered seabed for normal-incident chirp sonar signals.

    PubMed

    Tang, Dajun; Hefner, Brian T

    2012-04-01

    Downward-looking sonar, such as the chirp sonar, is widely used as a sediment survey tool in shallow-water environments. Inversion of geo-acoustic parameters from such sonar data has so far preceded the availability of adequate forward models. An exact numerical model is developed to initiate the simulation of the acoustic field produced by such a sonar in the presence of multiple rough interfaces. The sediment layers are assumed to be fluid layers with non-intersecting rough interfaces.

  8. Ab initio modelling of UN grain boundary interfaces

    NASA Astrophysics Data System (ADS)

    Kotomin, E. A.; Zhukovkii, Yu F.; Bocharov, D.; Gryaznov, D.

    2012-08-01

    Uranium mononitride (UN) is considered a promising candidate material for Generation-IV nuclear reactor fuels. Unfortunately, oxygen in air affects UN fuel performance and stability. It is therefore necessary to understand the mechanism of oxygen adsorption and further UN oxidation in the bulk and at the surface. Recently, we performed a detailed study of oxygen interaction with the UN surface using density functional theory (DFT) calculations. We were able to identify an atomistic mechanism of UN surface oxidation consisting of several important steps, starting with oxygen molecule dissociation and finishing with oxygen atom incorporation into vacancies on the surface. In reality, however, most processes occur at interfaces and on UN grain boundaries. In this study, we present the results of the first DFT calculations on O behaviour inside UN grain boundaries, performed using the GGA exchange-correlation functional PW91 as implemented in the VASP computer code. We consider a simple interface, the (310)[001](36.8°) tilt grain boundary. The N vacancy formation energies and the energies of O incorporation into pre-existing vacancies in the grain boundaries, as well as O solution energies, were compared with those obtained for the UN (001) and (110) surfaces.

  9. Streamflow forecasting using the modular modeling system and an object-user interface

    USGS Publications Warehouse

    Jeton, A.E.

    2001-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Bureau of Reclamation (BOR), developed a computer program to provide a general framework needed to couple disparate environmental resource models and to manage the necessary data. The Object-User Interface (OUI) is a map-based interface for models and modeling data. It provides a common interface to run hydrologic models and acquire, browse, organize, and select spatial and temporal data. One application is to assist river managers in utilizing streamflow forecasts generated with the Precipitation-Runoff Modeling System running in the Modular Modeling System (MMS), a distributed-parameter watershed model, and the National Weather Service Extended Streamflow Prediction (ESP) methodology.

  10. Automated decomposition algorithm for Raman spectra based on a Voigt line profile model.

    PubMed

    Chen, Yunliang; Dai, Liankui

    2016-05-20

    Raman spectra measured by spectrometers usually suffer from band overlap and random noise. In this paper, an automated decomposition algorithm based on a Voigt line profile model for Raman spectra is proposed to solve this problem. To decompose a measured Raman spectrum, a Voigt line profile model is introduced to parameterize the measured spectrum, and a Gaussian function is used as the instrumental broadening function. Hence, the issue of spectral decomposition is transformed into a multiparameter optimization problem of the Voigt line profile model parameters. The algorithm can eliminate instrumental broadening, obtain a recovered Raman spectrum, resolve overlapping bands, and suppress random noise simultaneously. Moreover, the recovered spectrum can be decomposed to a group of Lorentzian functions. Experimental results on simulated Raman spectra show that the performance of this algorithm is much better than a commonly used blind deconvolution method. The algorithm has also been tested on the industrial Raman spectra of ortho-xylene and proved to be effective.
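
    For illustration, the Voigt line profile is often approximated by the pseudo-Voigt form, a linear mix of a Lorentzian and a Gaussian of equal FWHM. The paper's Voigt profile is a true Gaussian-Lorentzian convolution, so the closed form below is an assumption made to keep the sketch dependency-free:

```python
import math

def pseudo_voigt(x, x0, fwhm, eta):
    # eta in [0, 1] is the Lorentzian fraction; both components share
    # the same full width at half maximum (fwhm)
    hwhm = fwhm / 2.0
    lorentz = hwhm**2 / ((x - x0)**2 + hwhm**2)
    gauss = math.exp(-math.log(2.0) * ((x - x0) / hwhm)**2)
    return eta * lorentz + (1.0 - eta) * gauss

peak = pseudo_voigt(0.0, 0.0, 1.0, 0.5)  # value at the band centre
half = pseudo_voigt(0.5, 0.0, 1.0, 0.5)  # value at the half width
```

    In a decomposition of the kind described, each overlapping band would contribute one such profile, and the parameters (x0, fwhm, eta, amplitude) of all bands would be fitted jointly to the measured spectrum.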

  12. Improved automated diagnosis of misfire in internal combustion engines based on simulation models

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Bond Randall, Robert

    2015-12-01

    In this paper, a new advance in the application of Artificial Neural Networks (ANNs) to the automated diagnosis of misfires in Internal Combustion engines (IC engines) is detailed. The automated diagnostic system comprises three stages: fault detection, fault localization and fault severity identification. In particular, in the severity identification stage, separate Multi-Layer Perceptron networks (MLPs) with saturating linear transfer functions were designed for individual speed conditions, so that they could achieve finer classification. In order to obtain sufficient data for the network training, numerical simulation was used to simulate different ranges of misfires in the engine. The simulation models need to be updated and evaluated using experimental data, so a series of experiments was first carried out on the engine test rig to capture the vibration signals for both the normal condition and a range of misfires. Two methods were used for the misfire diagnosis: one based on the torsional vibration signals of the crankshaft and the other on the angular acceleration signals (rotational motion) of the engine block. Following signal processing of the experimental and simulation signals, the best features were selected as the inputs to the ANN networks. The ANN systems were trained using only the simulated data and tested using real experimental cases, indicating that the simulation model can be considered valid over a wider range of faults. The final results have shown that the simulation-based diagnostic system can efficiently diagnose misfire, including its location and severity.
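
    The severity-identification networks described above are MLPs with saturating linear transfer functions; a toy forward pass is sketched below, assuming the [0, 1] saturation of the classic satlin function (the paper does not state the exact bounds) and made-up weights:

```python
def satlin(x):
    # saturating linear transfer function, clamped to [0, 1]
    # (bounds assumed from the classic satlin definition)
    return max(0.0, min(1.0, x))

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    # single-hidden-layer MLP forward pass with satlin activations
    hidden = [satlin(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return [satlin(sum(w * h for w, h in zip(row, hidden)) + b)
            for row, b in zip(w_out, b_out)]

# made-up weights: two vibration features in, one severity score out
severity = mlp_forward([0.5, 0.2],
                       [[1.0, 1.0], [2.0, -1.0]], [0.0, 0.0],
                       [[0.5, 0.5]], [0.0])
```

    In the paper's scheme, one such network per engine-speed condition maps the selected vibration features to a misfire severity class.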

  13. CaspR: a web server for automated molecular replacement using homology modelling

    PubMed Central

    Claude, Jean-Baptiste; Suhre, Karsten; Notredame, Cédric; Claverie, Jean-Michel; Abergel, Chantal

    2004-01-01

    Molecular replacement (MR) is the method of choice for X-ray crystallography structure determination when structural homologues are available in the Protein Data Bank (PDB). Although the success rate of MR decreases sharply when the sequence similarity between template and target proteins drops below 35% identical residues, it has been found that screening for MR solutions with a large number of different homology models may still produce a suitable solution where the original template failed. Here we present the web tool CaspR, implementing such a strategy in an automated manner. On input of experimental diffraction data, of the corresponding target sequence and of one or several potential templates, CaspR executes an optimized molecular replacement procedure using a combination of well-established stand-alone software tools. The protocol of model building and screening begins with the generation of multiple structure–sequence alignments produced with T-COFFEE, followed by homology model building using MODELLER, molecular replacement with AMoRe and model refinement based on CNS. As a result, CaspR provides a progress report in the form of hierarchically organized summary sheets that describe the different stages of the computation with an increasing level of detail. For the 10 highest-scoring potential solutions, pre-refined structures are made available for download in PDB format. Results already obtained with CaspR and reported on the web server suggest that such a strategy significantly increases the fraction of protein structures which may be solved by MR. Moreover, even in situations where standard MR yields a solution, pre-refined homology models produced by CaspR significantly reduce the time-consuming refinement process. We expect this automated procedure to have a significant impact on the throughput of large-scale structural genomics projects. CaspR is freely available at http://igs-server.cnrs-mrs.fr/Caspr/. PMID:15215460

  14. TOBAGO — a semi-automated approach for the generation of 3-D building models

    NASA Astrophysics Data System (ADS)

    Gruen, Armin

    3-D city models are in increasing demand for a great number of applications. Photogrammetry is a relevant technology that can provide an abundance of geometric, topologic and semantic information concerning these models. The pressure to generate a large amount of data with a high degree of accuracy and completeness poses a great challenge to photogrammetry. The development of automated and semi-automated methods for the generation of those data sets is therefore a key issue in photogrammetric research. We present in this article a strategy and methodology for the efficient generation of even fairly complex building models. Within this concept we request the operator to measure the house roofs from a stereomodel in the form of an unstructured point cloud. According to our experience this can be done very quickly: even a non-experienced operator can measure several hundred roofs or roof units per day. In a second step we fit generic building models fully automatically to these point clouds. The structure information is inherently included in these building models. In this way geometric, topologic and even semantic data can be handed over to a CAD system, in our case AutoCAD, for further visualization and manipulation. The structuring is achieved in three steps. In the first step a classifier is initiated which recognizes the class of houses a particular roof point cloud belongs to. This recognition step is primarily based on the analysis of the number of ridge points. In the second and third steps the concrete topological relations between roof points are investigated and generic building models are fitted to the point clouds. Based on the technique of constraint-based reasoning, two geometrical parsers solve this problem. We have tested the methodology under a variety of different conditions in several pilot projects. The results indicate the good performance of our approach. In addition we demonstrate how the results can be used for visualization (texture
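
    The first structuring step, picking a candidate house class from the ridge-point count, can be caricatured as a lookup. The class names and thresholds below are illustrative, not the actual TOBAGO taxonomy:

    ```python
    # Toy classifier keyed on the number of ridge points in the measured
    # roof point cloud; classes and thresholds are illustrative only.
    def classify_roof(num_ridge_points):
        if num_ridge_points == 0:
            return "flat"
        if num_ridge_points == 2:
            return "gable"        # one ridge line with two end points
        if num_ridge_points > 2:
            return "hip/complex"  # hand off to the geometrical parsers
        return "unknown"

    print(classify_roof(2))  # gable
    ```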

  15. Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter

    NASA Technical Reports Server (NTRS)

    Belknap, Shannon; Zhang, Michael

    2013-01-01

    The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to quickly and accurately perform 3D thermal analyses on damaged lower surface acreage tiles and the structures beneath the damaged locations on a Space Shuttle Orbiter. The 3D model builder created both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to quickly and accurately create a 3D model of a damaged lower surface tile on the orbiter. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which takes a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage, the correct tile type, tile thickness, structure thickness, and SIP thickness at the damage site, so that the model builder program can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.

  16. A comparison of molecular dynamics and diffuse interface model predictions of Lennard-Jones fluid evaporation

    SciTech Connect

    Barbante, Paolo; Frezzotti, Aldo; Gibelli, Livio

    2014-12-09

    The unsteady evaporation of a thin planar liquid film is studied by molecular dynamics simulations of a Lennard-Jones fluid. The obtained results are compared with the predictions of a diffuse interface model in which capillary Korteweg contributions are added to the hydrodynamic equations, in order to obtain a unified description of the liquid bulk, the liquid-vapor interface and the vapor region. Particular care has been taken in constructing a diffuse interface model matching the thermodynamic and transport properties of the Lennard-Jones fluid. The comparison of diffuse interface model and molecular dynamics results shows that, although good agreement is obtained in equilibrium conditions, remarkable deviations of the diffuse interface model predictions from the reference molecular dynamics results are observed in the simulation of liquid film evaporation. It is also observed that the molecular dynamics results are in good agreement with preliminary results obtained from a composite model which describes the liquid film by a standard hydrodynamic model and the vapor by the Boltzmann equation. The two mathematical models are connected by kinetic boundary conditions assuming a unit evaporation coefficient.

  17. An automated shell for management of parametric dispersion/deposition modeling

    SciTech Connect

    Paddock, R.A.; Absil, M.J.G.; Peerenboom, J.P.; Newsom, D.E.; North, M.J.; Coskey, R.J. Jr.

    1994-03-01

    In 1993, the US Army tasked Argonne National Laboratory to perform a study of chemical agent dispersion and deposition for the Chemical Stockpile Emergency Preparedness Program using an existing Army computer model. The study explored a wide range of situations in terms of six parameters: agent type, quantity released, liquid droplet size, release height, wind speed, and atmospheric stability. A number of discrete values of interest were chosen for each parameter resulting in a total of 18,144 possible different combinations of parameter values. Therefore, the need arose for a systematic method to assemble the large number of input streams for the model, filter out unrealistic combinations of parameter values, run the model, and extract the results of interest from the extensive model output. To meet these needs, we designed an automated shell for the computer model. The shell processed the inputs, ran the model, and reported the results of interest. By doing so, the shell compressed the time needed to perform the study and freed the researchers to focus on the evaluation and interpretation of the model predictions. The results of the study are still under review by the Army and other agencies; therefore, it would be premature to discuss the results in this paper. However, the design of the shell could be applied to other hazards for which multiple-parameter modeling is performed. This paper describes the design and operation of the shell as an example for other hazards and models.
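
    The enumeration-and-filter stage of such a shell is straightforward to sketch. The per-parameter value counts below are hypothetical, chosen only so that their product matches the 18,144 combinations mentioned above, and the realism filter is a placeholder:

    ```python
    from itertools import product

    # Hypothetical discrete value counts for the six study parameters; the
    # counts (4, 6, 6, 3, 6, 7) are illustrative choices whose product equals
    # the 18,144 combinations mentioned above. Values are just indices here.
    value_counts = {
        "agent": 4, "quantity": 6, "droplet_size": 6,
        "release_height": 3, "wind_speed": 6, "stability_class": 7,
    }
    values = [range(n) for n in value_counts.values()]

    def is_realistic(combo):
        """Placeholder filter, e.g. reject the most stable class at top wind."""
        *_, wind, stability = combo
        return not (wind == 5 and stability == 6)

    all_combos = list(product(*values))
    runs = [c for c in all_combos if is_realistic(c)]
    print(len(all_combos), len(runs))  # 18144 17712
    ```

    In the real shell, each surviving combination would be turned into a model input stream, run, and have its outputs of interest extracted.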

  18. Pilot interaction with cockpit automation 2: An experimental study of pilots' model and awareness of the Flight Management System

    NASA Technical Reports Server (NTRS)

    Sarter, Nadine B.; Woods, David D.

    1994-01-01

    Technological developments have made it possible to automate more and more functions on the commercial aviation flight deck and in other dynamic high-consequence domains. This increase in the degrees of freedom in design has shifted questions away from narrow technological feasibility. Many concerned groups, from designers and operators to regulators and researchers, have begun to ask questions about how we should use the possibilities afforded by technology skillfully to support and expand human performance. In this article, we report on an experimental study that addressed these questions by examining pilot interaction with the current generation of flight deck automation. Previous results on pilot-automation interaction derived from pilot surveys, incident reports, and training observations have produced a corpus of features and contexts in which human-machine coordination is likely to break down (e.g., automation surprises). We used these data to design a simulated flight scenario that contained a variety of probes designed to reveal pilots' mental model of one major component of flight deck automation: the Flight Management System (FMS). The events within the scenario were also designed to probe pilots' ability to apply their knowledge and understanding in specific flight contexts and to examine their ability to track the status and behavior of the automated system (mode awareness). Although pilots were able to 'make the system work' in standard situations, the results reveal a variety of latent problems in pilot-FMS interaction that can affect pilot performance in nonnormal time critical situations.

  19. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.

  20. A Dual-Scale Approach for Modeling Turbulent Two-Phase Interface Dynamics

    NASA Astrophysics Data System (ADS)

    Herrmann, Marcus

    2014-11-01

    Turbulent liquid/gas phase interface dynamics are at the core of many applications. For example, in atomizing flows, the properties of the resulting liquid spray are determined by the interplay of fluid and surface tension forces. The resulting dynamics typically span 4-6 orders of magnitude in length scales, making DNS exceedingly expensive. This motivates the need for modeling approaches based on spatial filtering or ensemble averaging. In this talk, a dual-scale modeling approach is presented to describe turbulent two-phase interface dynamics in a LES-type spatial filtering context. To close the unclosed terms related to the phase interface arising from filtering the Navier-Stokes equation, a resolved realization of the phase interface dynamics is explicitly filtered. This resolved realization is maintained on a high resolution over-set mesh using a Refined Local Surface Grid approach employing an un-split, geometric, bounded, and conservative Volume of Fluid method. The required model for the resolved realization of the interface advection velocity includes the effects of sub-filter surface tension, dissipation, and turbulent eddies. Results of the dual-scale model will be compared to recent DNS by McCaslin & Desjardins of an interface in homogeneous isotropic turbulence. Supported by NSF Grant CBET-1054272 and the 2014 CTR Summer Program.

  1. CHANNEL MORPHOLOGY TOOL (CMT): A GIS-BASED AUTOMATED EXTRACTION MODEL FOR CHANNEL GEOMETRY

    SciTech Connect

    JUDI, DAVID; KALYANAPU, ALFRED; MCPHERSON, TIMOTHY; BERSCHEID, ALAN

    2007-01-17

    This paper describes an automated Channel Morphology Tool (CMT) developed in an ArcGIS 9.1 environment. The CMT creates cross-sections along a stream centerline and uses a digital elevation model (DEM) to create station points with elevations along each of the cross-sections. The generated cross-sections may then be exported into a hydraulic model. Along with rapid cross-section generation, the CMT also eliminates any cross-section overlaps that might occur due to the sinuosity of the channels, using the Cross-section Overlap Correction Algorithm (COCoA). The CMT was tested by extracting cross-sections from a 5-m DEM for a 50-km channel length in Houston, Texas. The extracted cross-sections were compared directly with surveyed cross-sections in terms of the cross-section area. Results indicated that the CMT-generated cross-sections satisfactorily matched the surveyed data.
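
    The core geometric test behind an overlap-correction step like COCoA is deciding whether two cross-section lines intersect. A minimal 2D segment check (general position only, independent of any ArcGIS machinery) might look like:

    ```python
    # Minimal 2D segment-intersection test via orientation (CCW) checks.
    # Cross-sections on the inside of a sharp meander produce crossing
    # segments, which an overlap-correction step must trim or discard.
    def _ccw(a, b, c):
        """True if points a, b, c make a counter-clockwise turn."""
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

    def segments_intersect(p1, p2, p3, p4):
        """True if segment p1-p2 crosses segment p3-p4 (general position)."""
        return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4)
                and _ccw(p1, p2, p3) != _ccw(p1, p2, p4))

    print(segments_intersect((0, 0), (4, 4), (0, 4), (4, 0)))  # True
    print(segments_intersect((0, 0), (1, 1), (2, 2), (3, 3)))  # False
    ```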

  2. Modelling and simulations of multi-component lipid membranes and open membranes via diffuse interface approaches.

    PubMed

    Wang, Xiaoqiang; Du, Qiang

    2008-03-01

    Diffuse interface (phase field) models are developed for multi-component vesicle membranes with different lipid compositions and membranes with free boundary. These models are used to simulate the deformation of membranes under the elastic bending energy and the line tension energy with prescribed volume and surface area constraints. By comparing our numerical simulations with recent biological experiments, it is demonstrated that the diffuse interface models can effectively capture the rich phenomena associated with multi-component vesicle transformation and thus offer great functionality in their simulation and modelling.

  3. Automated Feature Based Tls Data Registration for 3d Building Modeling

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Kochi, N.; Kaneko, S.

    2012-07-01

    In this paper we present a novel method for the registration of point cloud data obtained using a terrestrial laser scanner (TLS). The final goal of our investigation is the automated reconstruction of CAD drawings and the 3D modeling of objects surveyed by TLS. Because objects are scanned from multiple positions, individual point clouds need to be registered to the same coordinate system. We propose in this paper an automated feature-based registration procedure. Our proposed method does not require the definition of initial values or the placement of targets and is robust against noise and background elements. A feature extraction procedure is performed for each point cloud as pre-processing. The registration of the point clouds from different viewpoints is then performed by utilizing the extracted features. The feature extraction method which we had developed previously (Kitamura, 2010) is used: planes and edges are extracted from the point cloud. By utilizing these features, the amount of information to process is reduced and the efficiency of the whole registration procedure is increased. In this paper, we describe the proposed algorithm and, in order to demonstrate its effectiveness, we show the results obtained by using real data.
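
    Once corresponding features (e.g., points on matched planes or edges) are established between two scans, the rigid transform is classically recovered with the SVD-based (Kabsch) solution. The sketch below assumes the correspondences are already given, which is the part the paper's plane/edge features provide; it is a standard technique, not the paper's specific algorithm:

    ```python
    import numpy as np

    # Standard SVD (Kabsch) solution for the rigid transform between two
    # sets of matched 3D points: recover R, t minimising ||R @ p_i + t - q_i||.
    def rigid_transform(P, Q):
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)            # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cq - R @ cp

    # Check on a synthetic 30-degree rotation about z plus a translation:
    theta = np.radians(30)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    P = np.random.default_rng(1).random((5, 3))
    Q = P @ R_true.T + np.array([1.0, 2.0, 0.5])
    R, t = rigid_transform(P, Q)
    print(np.allclose(R, R_true))  # True
    ```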

  4. Framework for non-coherent interface models at finite displacement jumps and finite strains

    NASA Astrophysics Data System (ADS)

    Ottosen, Niels Saabye; Ristinmaa, Matti; Mosler, Jörn

    2016-05-01

    This paper deals with a novel constitutive framework suitable for non-coherent interfaces, such as cracks, undergoing large deformations in a geometrically exact setting. For this type of interface, the displacement field shows a jump across the interface. Within the engineering community, so-called cohesive zone models are frequently applied in order to describe non-coherent interfaces. However, for existing models to comply with the restrictions imposed by (a) thermodynamical consistency (e.g., the second law of thermodynamics), (b) balance equations (in particular, balance of angular momentum) and (c) material frame indifference, these models are essentially fiber models, i.e. models where the traction vector is collinear with the displacement jump. This constrains the ability to model shear and, in addition, anisotropic effects are excluded. A novel, extended constitutive framework which is consistent with the above mentioned fundamental physical principles is elaborated in this paper. In addition to the classical tractions associated with a cohesive zone model, the main idea is to consider additional tractions related to membrane-like forces and out-of-plane shear forces acting within the interface. For zero displacement jump, i.e. coherent interfaces, this framework degenerates to existing formulations presented in the literature. For hyperelasticity, the Helmholtz energy of the proposed novel framework depends on the displacement jump as well as on the tangent vectors of the interface with respect to the current configuration - or equivalently - the Helmholtz energy depends on the displacement jump and the surface deformation gradient. It turns out that by defining the Helmholtz energy in terms of the invariants of these variables, all above-mentioned fundamental physical principles are automatically fulfilled. Extensions of the novel framework necessary for material degradation (damage) and plasticity are also covered.

  5. Automated optimization and construction of chemometric models based on highly variable raw chromatographic data.

    PubMed

    Sinkov, Nikolai A; Johnston, Brandon M; Sandercock, P Mark L; Harynuk, James J

    2011-07-01

    Direct chemometric interpretation of raw chromatographic data (as opposed to integrated peak tables) has been shown to be advantageous in many circumstances. However, this approach presents two significant challenges: data alignment and feature selection. In order to interpret the data, the time axes must be precisely aligned so that the signal from each analyte is recorded at the same coordinates in the data matrix for each and every analyzed sample. Several alignment approaches exist in the literature and they work well when the samples being aligned are reasonably similar. In cases where the background matrix for a series of samples to be modeled is highly variable, the performance of these approaches suffers. Considering the challenge of feature selection, when the raw data are used each signal at each time is viewed as an individual, independent variable; with the data rates of modern chromatographic systems, this generates hundreds of thousands of candidate variables, or tens of millions of candidate variables if multivariate detectors such as mass spectrometers are utilized. Consequently, an automated approach to identify and select appropriate variables for inclusion in a model is desirable. In this research we present an alignment approach that relies on a series of deuterated alkanes which act as retention anchors for an alignment signal, and couple this with an automated feature selection routine based on our novel cluster resolution metric for the construction of a chemometric model. The model system that we use to demonstrate these approaches is a series of simulated arson debris samples analyzed by passive headspace extraction, GC-MS, and interpreted using partial least squares discriminant analysis (PLS-DA).
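
    The anchor-based alignment idea reduces to piecewise-linear interpolation between the matched retention times of the deuterated alkanes; the anchor times below are synthetic, not data from the study:

    ```python
    import numpy as np

    # Deuterated alkanes give matched retention times in a reference run and
    # in each sample run; interpolating between anchors maps every sample
    # time onto the reference time axis. Anchor times here are synthetic.
    ref_anchors    = np.array([2.0, 5.0, 9.0, 14.0])   # minutes, reference run
    sample_anchors = np.array([2.1, 5.3, 9.2, 14.4])   # same alkanes, this run

    def to_reference_time(t_sample):
        """Map a sample retention time onto the reference axis."""
        return np.interp(t_sample, sample_anchors, ref_anchors)

    print(float(to_reference_time(7.25)))  # 7.0 (midway between anchors)
    ```

    After this mapping, the signal at each aligned time point can be treated as one candidate variable for the feature selection step.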

  6. A multilayered sharp interface model of coupled freshwater and saltwater flow in coastal systems: model development and application

    USGS Publications Warehouse

    Essaid, H.I.

    1990-01-01

    The model allows for regional simulation of coastal groundwater conditions, including the effects of saltwater dynamics on the freshwater system. Vertically integrated freshwater and saltwater flow equations incorporating the interface boundary condition are solved within each aquifer. Leakage through confining layers is calculated by Darcy's law, accounting for density differences across the layer. The locations of the interface tip and toe, within grid blocks, are tracked by linearly extrapolating the position of the interface. The model has been verified using available analytical solutions and experimental results and applied to the Soquel-Aptos basin, Santa Cruz County, California. -from Author

  7. Theoretical modelling of the semiconductor-electrolyte interface

    NASA Astrophysics Data System (ADS)

    Schelling, Patrick Kenneth

    We have developed tight-binding models of transition metal oxides. In contrast to many tight-binding models, these models include a description of electron-electron interactions. After parameterizing to bulk first-principles calculations, we demonstrated the transferability of the model by calculating the atomic and electronic structure of rutile surfaces, which compared well with experiment and first-principles calculations. We also studied the structure of twist grain boundaries in rutile. Molecular dynamics simulations using the model were also carried out to describe polaron localization. We have also demonstrated that tight-binding models can be constructed to describe metallic systems. The computational cost of tight-binding simulations was greatly reduced by incorporating O(N) electronic structure methods. We have also interpreted photoluminescence experiments on GaAs electrodes in contact with an electrolyte using drift-diffusion models. Electron transfer velocities were obtained by fitting to experimental results.

  8. Control of enterprise interfaces for supply chain enterprise modeling

    SciTech Connect

    Interrante, L.D.; Macfarlane, J.F.

    1995-04-01

    There is a current trend for manufacturing enterprises in a supply chain of a particular industry to join forces in an attempt to promote efficiencies and improve competitive position. Such alliances occur in the context of specific legal and business agreements such that each enterprise retains a majority of its business and manufacturing information as private and shares other information with its trading partners. Shared information may include enterprise demand projections, capacities, finished goods inventories, and aggregate production schedules. Evidence of the trend toward information sharing includes the recent emphases on vendor-managed inventories, quick response, and Electronic Data Interchange (EDI) standards. The increased competition brought on by the global marketplace is driving industries to consider the advantages of trading partner agreements. Aggregate-level forecasts, supply-chain production smoothing, and aggregate-level inventory policies can reduce holding costs, record-keeping overhead, and lead time in product development. The goal of this research is to orchestrate information exchange among trading partners to allow for aggregate-level analysis to enhance supply chain efficiency. The notion of Enterprise Interface Control (EIC) is introduced as a means of accomplishing this end.

  9. Model of bound interface dynamics for coupled magnetic domain walls

    NASA Astrophysics Data System (ADS)

    Politi, P.; Metaxas, P. J.; Jamet, J.-P.; Stamps, R. L.; Ferré, J.

    2011-08-01

    A domain wall in a ferromagnetic system will move under the action of an external magnetic field. Ultrathin Co layers sandwiched between Pt have been shown to be a suitable experimental realization of a weakly disordered 2D medium in which to study the dynamics of 1D interfaces (magnetic domain walls). The behavior of these systems is encapsulated in the velocity-field response v(H) of the domain walls. In a recent paper [P. J. Metaxas et al., Phys. Rev. Lett. 104, 237206 (2010)] we studied the effect of ferromagnetic coupling between two such ultrathin layers, each exhibiting different v(H) characteristics. The main result was the existence of bound states over finite-width field ranges, wherein walls in the two layers moved together at the same speed. Here we discuss in detail the theory of domain wall dynamics in coupled systems. In particular, we show that a bound creep state is expected for vanishing H and we give the analytical, parameter free expression for its velocity which agrees well with experimental results.

  10. Analytic Element Modeling of Steady Interface Flow in Multilayer Aquifers Using AnAqSim.

    PubMed

    Fitts, Charles R; Godwin, Joshua; Feiner, Kathleen; McLane, Charles; Mullendore, Seth

    2015-01-01

    This paper presents the analytic element modeling approach implemented in the software AnAqSim for simulating steady groundwater flow with a sharp fresh-salt interface in multilayer (three-dimensional) aquifer systems. Compared with numerical methods for variable-density interface modeling, this approach allows quick model construction and can yield useful guidance about the three-dimensional configuration of an interface even at a large scale. The approach employs subdomains and multiple layers as outlined by Fitts (2010) with the addition of discharge potentials for shallow interface flow (Strack 1989). The following simplifying assumptions are made: steady flow, a sharp interface between fresh- and salt water, static salt water, and no resistance to vertical flow and hydrostatic heads within each fresh water layer. A key component of this approach is a transition to a thin fixed minimum fresh water thickness mode when the fresh water thickness approaches zero. This allows the solution to converge and determine the steady interface position without a long transient simulation. The approach is checked against the widely used numerical codes SEAWAT and SWI/MODFLOW and a hypothetical application of the method to a coastal wellfield is presented.
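
    Under the listed assumptions (a sharp interface, static salt water, hydrostatic fresh water), the interface position is governed by the classical Ghyben-Herzberg relation, which shallow-interface discharge potentials build on. A minimal sketch with typical density values:

    ```python
    # Classical Ghyben-Herzberg relation implied by the static salt water
    # assumption: the sharp interface sits below sea level by about
    # rho_f / (rho_s - rho_f) (~40) times the freshwater head above sea level.
    RHO_F, RHO_S = 1000.0, 1025.0  # kg/m^3, typical fresh and sea water

    def interface_depth(head_above_sea_level_m):
        """Depth of the fresh/salt interface below sea level, in metres."""
        return RHO_F / (RHO_S - RHO_F) * head_above_sea_level_m

    print(interface_depth(0.5))  # 20.0
    ```

    This 40:1 ratio is why small drawdowns of the fresh water table near a coastal wellfield can raise the interface substantially.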

  11. Mathematical modeling of the interaction between an insoluble solid particle and a solidifying interface

    NASA Astrophysics Data System (ADS)

    Catalina, Adrian Vasile

    When a moving solidification front intercepts an insoluble particle, three distinct interaction phenomena can occur: instantaneous engulfment, continuous pushing of the particle, or particle pushing followed by engulfment. Various mathematical models, aiming to predict the critical solidification velocity for the pushing/engulfment transition, have been published in the literature. However, their predictions were not confirmed by the recent experimental measurements performed in microgravity conditions. The aim of this dissertation is to further continue the study of the particle/solidifying interface interaction through mathematical modeling. In this respect, two new analytical models were developed. In addition, a finite difference numerical approach is proposed. The first analytical model, the Equilibrium Breakdown Model, reveals the fact that the particle/solidifying interface interaction is not a steady state process, as assumed in the previously published models. Its simple formulation makes it attractive for practical purposes such as the manufacturing of composite materials. The second model, i.e., the Dynamic Model, is more complex and, for the first time, it is able to capture and explain interesting phenomena that escaped the steady state analyses of previously published models. It shows that steady state interaction is only a particular case that can occur only at sub-critical solidification velocity. In this work, both analytical models were successfully validated against experimental data produced under microgravity conditions. The numerical approach, based on an interface tracking procedure, consists of the development of two distinct models, i.e., a solidification model and a fluid flow model. These two models together can give a more comprehensive picture of the particle/interface interaction. 
The solidification model has the capability to accommodate changes of the solid/liquid interface temperature because of capillarity and solute redistribution. It

  12. Automated home cage assessment shows behavioral changes in a transgenic mouse model of spinocerebellar ataxia type 17.

    PubMed

    Portal, Esteban; Riess, Olaf; Nguyen, Huu Phuc

    2013-08-01

    Spinocerebellar Ataxia type 17 (SCA17) is an autosomal dominantly inherited, neurodegenerative disease characterized by ataxia, involuntary movements, and dementia. A novel SCA17 mouse model carrying a 71-polyglutamine repeat expansion in the TATA-binding protein (TBP) has shown an age-related motor deficit in a classic motor test, yet the concomitant weight increase might be a confounding factor for this measurement. In this study we used an automated home cage system to test several motor readouts for this same model to confirm the pathological behavior results and evaluate the benefits of automated home cage assessment in behavioral phenotyping. Our results confirm motor deficits in the Tbp/Q71 mice and present previously unrecognized behavioral characteristics obtained from the automated home cage, supporting its use for high-throughput screening and testing, e.g. of therapeutic compounds.

  13. Examining Uncertainty in Demand Response Baseline Models and Variability in Automated Response to Dynamic Pricing

    SciTech Connect

    Mathieu, Johanna L.; Callaway, Duncan S.; Kiliccote, Sila

    2011-08-15

    Controlling electric loads to deliver power system services presents a number of interesting challenges. For example, changes in electricity consumption of Commercial and Industrial (C&I) facilities are usually estimated using counterfactual baseline models, and model uncertainty makes it difficult to precisely quantify control responsiveness. Moreover, C&I facilities exhibit variability in their response. This paper seeks to understand baseline model error and demand-side variability in responses to open-loop control signals (i.e. dynamic prices). Using a regression-based baseline model, we define several Demand Response (DR) parameters, which characterize changes in electricity use on DR days, and then present a method for computing the error associated with DR parameter estimates. In addition to analyzing the magnitude of DR parameter error, we develop a metric to determine how much observed DR parameter variability is attributable to real event-to-event variability versus simply baseline model error. Using data from 38 C&I facilities that participated in an automated DR program in California, we find that DR parameter errors are large. For most facilities, observed DR parameter variability is likely explained by baseline model error, not real DR parameter variability; however, a number of facilities exhibit real DR parameter variability. In some cases, the aggregate population of C&I facilities exhibits real DR parameter variability, resulting in implications for the system operator with respect to both resource planning and system stability.
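
    A minimal sketch of the baseline idea, with synthetic data rather than the study's regression specification: fit load against outdoor temperature on non-event days, estimate the shed as baseline prediction minus observed DR-day load, and use the residual spread as a crude proxy for baseline model error:

    ```python
    import numpy as np

    # Synthetic illustration of a regression baseline for a C&I facility.
    # Coefficients and data are invented, not taken from the cited study.
    rng = np.random.default_rng(0)
    temp = rng.uniform(15, 35, 200)                   # deg C, non-DR training days
    load = 50 + 2.0 * temp + rng.normal(0, 3, 200)    # kW, synthetic ground truth

    X = np.column_stack([np.ones_like(temp), temp])
    coef, *_ = np.linalg.lstsq(X, load, rcond=None)   # OLS baseline model

    temp_dr, observed_dr = 30.0, 95.0                 # one DR event hour
    baseline = coef[0] + coef[1] * temp_dr            # counterfactual load
    shed = baseline - observed_dr                     # DR parameter estimate
    resid_sd = np.std(load - X @ coef)                # proxy for baseline error
    print(round(float(shed), 1), round(float(resid_sd), 1))
    ```

    When `resid_sd` is comparable to `shed`, observed event-to-event variability in the DR parameter may be explained by baseline error alone, which is the distinction the paper's metric formalizes.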

  14. Automated Generation of Fault Management Artifacts from a Simple System Model

    NASA Technical Reports Server (NTRS)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
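
    The artifact-generation idea reduces to a graph traversal: each component's failure modes become FMEA rows whose effects are the components reachable downstream. The dictionary model below is a hypothetical stand-in for querying the SysML representation described above:

    ```python
    # Toy system model: each component lists its failure modes and the
    # components it feeds. The names are illustrative, not SMAP elements.
    system = {
        "battery":   {"modes": ["cell short"],   "feeds": ["power_bus"]},
        "power_bus": {"modes": ["undervoltage"], "feeds": ["radar"]},
        "radar":     {"modes": ["no output"],    "feeds": []},
    }

    def downstream(component):
        """All components reachable from `component` via 'feeds' links."""
        seen, stack = [], list(system[component]["feeds"])
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.append(c)
                stack.extend(system[c]["feeds"])
        return seen

    # One FMEA row per (component, failure mode), with propagated effects.
    fmea = [(comp, mode, downstream(comp))
            for comp, data in system.items()
            for mode in data["modes"]]
    for row in fmea:
        print(row)
    ```

    The same traversal, run in reverse from an undesired top-level effect, is essentially how a fault tree could be generated from the model.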

  15. An Automated Application Framework to Model Disordered Materials Based on a High Throughput First Principles Approach

    NASA Astrophysics Data System (ADS)

    Oses, Corey; Yang, Kesong; Curtarolo, Stefano; Duke Univ Collaboration; UC San Diego Collaboration

    Predicting material properties of disordered systems remains a long-standing and formidable challenge in rational materials design. To address this issue, we introduce an automated software framework capable of modeling partial occupation within disordered materials using a high-throughput (HT) first principles approach. At the heart of the approach is the construction of supercells whose stoichiometry closely approximates that of the disordered material. All unique supercell permutations are enumerated, and the material properties of each are determined via HT electronic structure calculations. In accordance with a canonical ensemble of supercell states, the framework evaluates ensemble-average properties of the system as a function of temperature. As proof of concept, we examine the framework's calculated properties for a zinc chalcogenide (ZnS1-xSex), a wide-gap oxide semiconductor (MgxZn1-xO), and an iron alloy (Fe1-xCux) at various stoichiometries.
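    A minimal sketch of the enumerate-then-average workflow, assuming a toy four-site binary cell and invented configuration energies (the real framework computes each supercell's energy with electronic structure calculations):

```python
import itertools, math

K_B = 8.617e-5  # Boltzmann constant in eV/K

# Toy 4-site binary anion sublattice at 50/50 stoichiometry ('S' = sulfur,
# 'e' = selenium). Configuration energies below are invented for illustration.
configs = sorted(set(itertools.permutations("SSee")))
energies = {c: 0.01 * sum(i for i, a in enumerate(c) if a == "S") for c in configs}

def ensemble_average(T):
    """Boltzmann-weighted average energy (eV) over all configurations at T (K)."""
    weights = {c: math.exp(-energies[c] / (K_B * T)) for c in configs}
    Z = sum(weights.values())  # partition function over the canonical ensemble
    return sum(weights[c] * energies[c] for c in configs) / Z

print(len(configs))  # 6 unique decorations: 4! / (2! * 2!)
print(round(ensemble_average(300.0), 4))
```

As temperature rises the average drifts toward the unweighted mean, which is the qualitative behavior the ensemble-average step captures.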

  16. Distributed model predictive control with hierarchical architecture for communication: application in automated irrigation channels

    NASA Astrophysics Data System (ADS)

    Farhadi, Alireza; Khodabandehlou, Ali

    2016-08-01

    This paper is concerned with a distributed model predictive control (DMPC) method that is based on a distributed optimisation method with a two-level architecture for communication. Feasibility (constraint satisfaction by the approximated solution), convergence and optimality of this distributed optimisation method are mathematically proved. For an automated irrigation channel, the satisfactory performance of the proposed DMPC method in attenuating the undesired upstream transient error propagation and amplification phenomenon is illustrated and compared with the performance of another DMPC method that exploits a single-level architecture for communication. It is illustrated that the DMPC method with a two-level architecture for communication achieves better performance by managing communication overhead more effectively.

  17. An automated approach for segmentation of intravascular ultrasound images based on parametric active contour models.

    PubMed

    Vard, Alireza; Jamshidi, Kamal; Movahhedinia, Naser

    2012-06-01

    This paper presents a fully automated approach to detect the intima and media-adventitia borders in intravascular ultrasound images based on parametric active contour models. To detect the intima border, we compute a new image feature applying a combination of short-term autocorrelations calculated for the contour pixels. These feature values are employed to define an energy function of the active contour called normalized cumulative short-term autocorrelation. Exploiting this energy function, the intima border is separated accurately from the blood region contaminated by high speckle noise. To extract the media-adventitia boundary, we define a new form of energy function based on edge, texture and spring forces for the active contour. Utilizing this active contour, the media-adventitia border is identified correctly even in the presence of branch openings and calcifications. Experimental results indicate the accuracy of the proposed methods. In addition, statistical analysis demonstrates high conformity between manual tracing and the results obtained by the proposed approaches.

  18. Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments

    NASA Astrophysics Data System (ADS)

    Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin

    The developments in computer technology in the last decade have changed the ways computers are used. Emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments, and especially their user interfaces, a challenging and time-consuming task. We propose a model-based approach that allows adapting the user interface at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language defining the fundamentals and constraints of the user interface, a runtime architecture exploits this description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting applications even to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but rather adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as a running example to illustrate the approach.

  19. Size effects in martensitic microstructures: Finite-strain phase field model versus sharp-interface approach

    NASA Astrophysics Data System (ADS)

    Tůma, K.; Stupkiewicz, S.; Petryk, H.

    2016-10-01

    A finite-strain phase field model for martensitic phase transformation and twinning in shape memory alloys is developed and confronted with the corresponding sharp-interface approach extended to interfacial energy effects. The model is set in the energy framework so that the kinetic equations and conditions of mechanical equilibrium are fully defined by specifying the free energy and dissipation potentials. The free energy density involves the bulk and interfacial energy contributions, the latter describing the energy of diffuse interfaces in a manner typical for phase-field approaches. To ensure volume preservation during martensite reorientation at finite deformation within a diffuse interface, it is proposed to apply linear mixing of the logarithmic transformation strains. The physically different nature of phase interfaces and twin boundaries in the martensitic phase is reflected by introducing two order-parameters in a hierarchical manner, one as the reference volume fraction of austenite, and thus of the whole martensite, and the second as the volume fraction of one variant of martensite in the martensitic phase only. The microstructure evolution problem is given a variational formulation in terms of incremental fields of displacement and order parameters, with unilateral constraints on volume fractions explicitly enforced by applying the augmented Lagrangian method. As an application, size-dependent microstructures with diffuse interfaces are calculated for the cubic-to-orthorhombic transformation in a CuAlNi shape memory alloy and compared with the sharp-interface microstructures with interfacial energy effects.

  20. Fullerene film on metal surface: Diffusion of metal atoms and interface model

    SciTech Connect

    Li, Wen-jie; Li, Hai-Yang; Li, Hong-Nian; Wang, Peng; Wang, Xiao-Xiong; Wang, Jia-Ou; Wu, Rui; Qian, Hai-Jie; Ibrahim, Kurash

    2014-05-12

    We try to understand why fullerene films behave as n-type semiconductors in electronic devices and establish a model describing the energy level alignment at fullerene/metal interfaces. The C60/Ag(100) system was taken as a prototype and studied with photoemission measurements. The photoemission spectra revealed that Ag atoms of the substrate diffused far into the C60 film and donated electrons to the molecules, so the C60 film became an n-type semiconductor with the Ag atoms acting as dopants. The C60/Ag(100) interface should be understood as two sub-interfaces on either side of the molecular layer directly contacting the substrate. One sub-interface exhibits Fermi level alignment, and the other vacuum level alignment.

  1. Developing a User-process Model for Designing Menu-based Interfaces: An Exploratory Study.

    ERIC Educational Resources Information Center

    Ju, Boryung; Gluck, Myke

    2003-01-01

    The purpose of this study was to organize menu items based on a user-process model and implement a new version of current software for enhancing usability of interfaces. A user-process model was developed, drawn from actual users' understanding of their goals and strategies to solve their information needs by using Dervin's Sense-Making Theory…

  2. Rethinking Design Process: Using 3D Digital Models as an Interface in Collaborative Session

    ERIC Educational Resources Information Center

    Ding, Suining

    2008-01-01

    This paper describes a pilot study for an alternative design process by integrating a designer-user collaborative session with digital models. The collaborative session took place in a 3D AutoCAD class for a real world project. The 3D models served as an interface for designer-user collaboration during the design process. Students not only learned…

  3. Tape-Drop Transient Model for In-Situ Automated Tape Placement of Thermoplastic Composites

    NASA Technical Reports Server (NTRS)

    Costen, Robert C.; Marchello, Joseph M.

    1998-01-01

    Composite parts of nonuniform thickness can be fabricated by in-situ automated tape placement (ATP) if the tape can be started and stopped at interior points of the part instead of always at its edges. This technique is termed start/stop-on-the-part or, alternatively, tape-add/tape-drop. The resulting thermal transients need to be managed in order to achieve net shape and maintain uniform interlaminar weld strength and crystallinity. Starting-on-the-part has been treated previously. This paper continues the study with a thermal analysis of stopping-on-the-part. The thermal source is switched off when the trailing end of the tape enters the nip region of the laydown/consolidation head. The thermal transient is determined by a Fourier-Laplace transform solution of the two-dimensional, time-dependent thermal transport equation. This solution requires that the Peclet number Pe (the dimensionless ratio of inertial to diffusive heat transport) be independent of time and much greater than 1. Plotted isotherms show that the trailing tape-end cools more rapidly than the downstream portions of tape. This cooling can weaken the bond near the tape end; however, the length of the affected region is found to be less than 2 mm. To achieve net shape, the consolidation head must continue to move after cut-off until the temperature on the weld interface decreases to the glass transition temperature. The time and elapsed distance for this condition to occur are computed for the Langley ATP robot applying PEEK/carbon fiber composite tape and for two upgrades in robot performance. The elapsed distance after cut-off ranges from about 1 mm for the present robot to about 1 cm for the second upgrade.

  4. Atomistic Cohesive Zone Models for Interface Decohesion in Metals

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin I.; Saether, Erik; Glaessgen, Edward H.

    2009-01-01

    Using a statistical mechanics approach, a cohesive-zone law in the form of a traction-displacement constitutive relationship characterizing the load transfer across the plane of a growing edge crack is extracted from atomistic simulations for use within a continuum finite element model. The methodology for the atomistic derivation of a cohesive-zone law is presented. This procedure can be implemented to build cohesive-zone finite element models for simulating fracture in nanocrystalline or ultrafine grained materials.

  5. Modeling and matching of landmarks for automation of Mars Rover localization

    NASA Astrophysics Data System (ADS)

    Wang, Jue

    The Mars Exploration Rover (MER) mission, begun in January 2004, has been extremely successful. However, decision-making for many operation tasks of the current MER mission and the 1997 Mars Pathfinder mission is performed on Earth through a predominantly manual, time-consuming process. Unmanned planetary rover navigation is ideally expected to reduce rover idle time, diminish the need for entering safe-mode, and dynamically handle opportunistic science events without required communication to Earth. Successful automation of rover navigation and localization during extraterrestrial exploration requires that accurate position and attitude information can be received by a rover and that the rover has the support of simultaneous localization and mapping. An integrated approach with Bundle Adjustment (BA) and Visual Odometry (VO) can efficiently refine the rover position. However, during the MER mission, BA is done manually because of the difficulty of automating cross-site tie point selection. This dissertation proposes an automatic approach to select cross-site tie points from multiple rover sites based on the methods of landmark extraction, landmark modeling, and landmark matching. The first step in this approach is that important landmarks such as craters and rocks are defined. Methods of automatic feature extraction and landmark modeling are then introduced. Complex models with orientation angles and simple models without those angles are compared. The results have shown that simple models can provide reasonably good results. Next, the sensitivity of different modeling parameters is analyzed. Based on this analysis, cross-site rocks are matched through two complementary stages: rock distribution pattern matching and rock model matching. In addition, a preliminary experiment on orbital and ground landmark matching is also briefly introduced. Finally, the reliability of the cross-site tie points selection is validated by fault detection, which
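    The rock-matching stage can be illustrated with a nearest-neighbour gating sketch. The coordinates and gate distance below are hypothetical, and the dissertation's approach additionally matches parameterized rock models, not just positions.

```python
import math

# Hypothetical rock centre positions (x, y in metres) extracted at two sites,
# after a coarse site-to-site alignment from rover localization has been applied.
site_a = [(1.0, 2.0), (4.0, 1.5), (6.0, 5.0)]
site_b = [(1.2, 2.1), (4.1, 1.4), (9.0, 9.0)]  # third rock has no counterpart

def match_rocks(a, b, gate=0.5):
    """Pair each rock in a with its nearest rock in b, if within the gate distance."""
    pairs = []
    for i, (ax, ay) in enumerate(a):
        best = min(range(len(b)),
                   key=lambda j: math.hypot(ax - b[j][0], ay - b[j][1]))
        d = math.hypot(ax - b[best][0], ay - b[best][1])
        if d <= gate:
            pairs.append((i, best, round(d, 3)))
    return pairs

pairs = match_rocks(site_a, site_b)
print(pairs)  # rocks without a counterpart within the gate are left unmatched
```

Matched pairs would then serve as candidate cross-site tie points for bundle adjustment, subject to the fault detection step the abstract mentions.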

  6. Classification of Mouse Sperm Motility Patterns Using an Automated Multiclass Support Vector Machines Model

    PubMed Central

    Goodson, Summer G.; Zhang, Zhaojun; Tsuruta, James K.; Wang, Wei; O'Brien, Deborah A.

    2011-01-01

    Vigorous sperm motility, including the transition from progressive to hyperactivated motility that occurs in the female reproductive tract, is required for normal fertilization in mammals. We developed an automated, quantitative method that objectively classifies five distinct motility patterns of mouse sperm using Support Vector Machines (SVM), a common method in supervised machine learning. This multiclass SVM model is based on more than 2000 sperm tracks that were captured by computer-assisted sperm analysis (CASA) during in vitro capacitation and visually classified as progressive, intermediate, hyperactivated, slow, or weakly motile. Parameters associated with the classified tracks were incorporated into established SVM algorithms to generate a series of equations. These equations were integrated into a binary decision tree that sequentially sorts uncharacterized tracks into distinct categories. The first equation sorts CASA tracks into vigorous and nonvigorous categories. Additional equations classify vigorous tracks as progressive, intermediate, or hyperactivated and nonvigorous tracks as slow or weakly motile. Our CASAnova software uses these SVM equations to classify individual sperm motility patterns automatically. Comparisons of motility profiles from sperm incubated with and without bicarbonate confirmed the ability of the model to distinguish hyperactivated patterns of motility that develop during in vitro capacitation. The model accurately classifies motility profiles of sperm from a mutant mouse model with severe motility defects. Application of the model to sperm from multiple inbred strains reveals strain-dependent differences in sperm motility profiles. CASAnova provides a rapid and reproducible platform for quantitative comparisons of motility in large, heterogeneous populations of mouse sperm. PMID:21349820
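    The binary decision tree described above can be sketched as a cascade of two-way decisions. The two features (velocity, path linearity) and all thresholds below are invented stand-ins for the paper's trained SVM equations; only the sequential sorting structure is illustrated.

```python
# CASAnova-style cascade: binary decisions applied in sequence.
# Feature names and thresholds are hypothetical, not the trained SVM equations.
def classify_track(velocity, linearity):
    if velocity > 100.0:                    # stage 1: vigorous vs nonvigorous
        if linearity > 0.8:                 # stage 2a: split vigorous tracks
            return "progressive"
        return "hyperactivated" if velocity > 180.0 else "intermediate"
    return "slow" if velocity > 30.0 else "weakly motile"  # stage 2b

tracks = [(220.0, 0.4), (150.0, 0.9), (120.0, 0.5), (50.0, 0.3), (10.0, 0.2)]
labels = [classify_track(v, lin) for v, lin in tracks]
print(labels)
```

In the real model each branch is an SVM decision function rather than a single-feature threshold, but the tree routes tracks into the same five categories.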

  7. Automated clustering of ensembles of alternative models in protein structure databases.

    PubMed

    Domingues, Francisco S; Rahnenführer, Jörg; Lengauer, Thomas

    2004-06-01

    Experimentally determined protein structures have been classified in different public databases according to their structural and evolutionary relationships. Frequently, alternative structural models, determined using X-ray crystallography or NMR spectroscopy, are available for a protein. These models can present significant structural dissimilarity. Currently there is no classification available for these alternative structures. In order to classify them, we developed STRuster, an automated method for clustering ensembles of structural models according to their backbone structure. The method is based on the calculation of carbon alpha (Calpha) distance matrices. Two filters are applied in the calculation of the dissimilarity measure in order to identify both large and small (but significant) backbone conformational changes. The resulting dissimilarity value is used for hierarchical clustering and partitioning around medoids (PAM). Hierarchical clustering reflects the hierarchy of similarities between all pairs of models, while PAM groups the models into the 'optimal' number of clusters. The method has been applied to cluster the structures in each SCOP species level and can be easily applied to any other sets of conformers. The results are available at: http://bioinf.mpi-sb.mpg.de/projects/struster/. PMID:15319469
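    The core comparison, reducing each model to a Calpha distance matrix and measuring how the matrices differ, can be sketched with toy three-residue coordinates and a plain mean absolute difference in place of the paper's filtered dissimilarity measure.

```python
import math

# Toy three-residue "models"; real inputs would be Calpha coordinates from PDB/NMR.
models = {
    "m1": [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)],  # extended
    "m2": [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.5, 0.1, 0.0)],  # near-identical
    "m3": [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (3.8, 3.8, 0.0)],  # bent backbone
}

def dist_matrix(coords):
    """Pairwise Calpha-Calpha distance matrix for one model."""
    n = len(coords)
    return [[math.dist(coords[i], coords[j]) for j in range(n)] for i in range(n)]

def dissimilarity(a, b):
    """Mean absolute difference between two models' distance matrices."""
    da, db = dist_matrix(a), dist_matrix(b)
    n = len(da)
    return sum(abs(da[i][j] - db[i][j]) for i in range(n) for j in range(n)) / (n * n)

for name in ("m2", "m3"):
    print("m1 vs", name, "=", round(dissimilarity(models["m1"], models[name]), 3))
```

The resulting pairwise dissimilarities are what feed the hierarchical clustering and PAM steps described in the abstract.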

  8. Automated As-Built Model Generation of Subway Tunnels from Mobile LiDAR Data.

    PubMed

    Arastounia, Mostafa

    2016-01-01

    This study proposes fully-automated methods for as-built model generation of subway tunnels employing mobile Light Detection and Ranging (LiDAR) data. The employed dataset is acquired by a Velodyne HDL 32E and covers 155 m of a subway tunnel containing six million points. First, the tunnel's main axis and cross sections are extracted. Next, a preliminary model is created by fitting an ellipse to each extracted cross section. The model is refined by employing residual analysis and Baarda's data snooping method to eliminate outliers. The final model is then generated by applying least squares adjustment to outlier-free data. The obtained results indicate that the tunnel's main axis and 1551 cross sections at 0.1 m intervals are successfully extracted. Cross sections have an average semi-major axis of 7.8508 m with a standard deviation of 0.2 mm and semi-minor axis of 7.7509 m with a standard deviation of 0.1 mm. The average normal distance of points from the constructed model (average absolute error) is also 0.012 m. The developed algorithm is applicable to tunnels with any horizontal orientation and degree of curvature since it makes no assumptions, nor does it use any a priori knowledge regarding the tunnel's curvature and horizontal orientation. PMID:27649172
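    The fit-and-refine loop can be sketched with a circle in place of an ellipse: an algebraic least-squares fit, rejection of large radial residuals (a simplified stand-in for Baarda's data snooping), and a refit on the cleaned data. The radius, noise level, and outlier are synthetic.

```python
import math, random

random.seed(0)

# Synthetic tunnel cross-section: points on a circle of radius 7.8 m with
# small noise, plus one gross outlier (e.g. a fixture hit by the scanner).
pts = []
for k in range(60):
    t = 2.0 * math.pi * k / 60
    pts.append((7.8 * math.cos(t) + random.gauss(0, 0.01),
                7.8 * math.sin(t) + random.gauss(0, 0.01)))
pts[10] = (pts[10][0] + 0.8, pts[10][1])

def solve3(m, v):
    """Gauss-Jordan elimination for a 3x3 linear system."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        for r in range(3):
            if r != i:
                f = a[r][i] / a[i][i]
                a[r] = [x - f * y for x, y in zip(a[r], a[i])]
    return [a[i][3] / a[i][i] for i in range(3)]

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit via normal equations.
    Model: 2*a*x + 2*b*y + c = x^2 + y^2; centre (a, b), r = sqrt(c + a^2 + b^2)."""
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in pts:
        row, rhs = (2 * x, 2 * y, 1.0), x * x + y * y
        for i in range(3):
            atb[i] += row[i] * rhs
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    a, b, c = solve3(ata, atb)
    return a, b, math.sqrt(c + a * a + b * b)

# Fit, reject points with large radial residuals, then refit on clean data.
a, b, r = fit_circle(pts)
res = [abs(math.hypot(x - a, y - b) - r) for x, y in pts]
sigma = (sum(e * e for e in res) / len(res)) ** 0.5
clean = [p for p, e in zip(pts, res) if e < 3.0 * sigma]
a, b, r = fit_circle(clean)
print(round(r, 3), len(pts) - len(clean))
```

The paper fits ellipses per cross section and uses a formal data-snooping test rather than a 3-sigma cut, but the refine-after-outlier-removal structure is the same.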

  9. Automated As-Built Model Generation of Subway Tunnels from Mobile LiDAR Data

    PubMed Central

    Arastounia, Mostafa

    2016-01-01

    This study proposes fully-automated methods for as-built model generation of subway tunnels employing mobile Light Detection and Ranging (LiDAR) data. The employed dataset is acquired by a Velodyne HDL 32E and covers 155 m of a subway tunnel containing six million points. First, the tunnel’s main axis and cross sections are extracted. Next, a preliminary model is created by fitting an ellipse to each extracted cross section. The model is refined by employing residual analysis and Baarda’s data snooping method to eliminate outliers. The final model is then generated by applying least squares adjustment to outlier-free data. The obtained results indicate that the tunnel’s main axis and 1551 cross sections at 0.1 m intervals are successfully extracted. Cross sections have an average semi-major axis of 7.8508 m with a standard deviation of 0.2 mm and semi-minor axis of 7.7509 m with a standard deviation of 0.1 mm. The average normal distance of points from the constructed model (average absolute error) is also 0.012 m. The developed algorithm is applicable to tunnels with any horizontal orientation and degree of curvature since it makes no assumptions, nor does it use any a priori knowledge regarding the tunnel’s curvature and horizontal orientation. PMID:27649172

  10. Automated gating of flow cytometry data via robust model-based clustering.

    PubMed

    Lo, Kenneth; Brinkman, Ryan Remy; Gottardo, Raphael

    2008-04-01

    The capability of flow cytometry to offer rapid quantification of multidimensional characteristics for millions of cells has made this technology indispensable for health research, medical diagnosis, and treatment. However, the lack of statistical and bioinformatics tools to parallel recent high-throughput technological advancements has hindered this technology from reaching its full potential. We propose a flexible statistical model-based clustering approach for identifying cell populations in flow cytometry data based on t-mixture models with a Box-Cox transformation. This approach generalizes the popular Gaussian mixture models to account for outliers and allow for nonelliptical clusters. We describe an Expectation-Maximization (EM) algorithm to simultaneously handle parameter estimation and transformation selection. Using two publicly available datasets, we demonstrate that our proposed methodology provides enough flexibility and robustness to mimic manual gating results performed by an expert researcher. In addition, we present results from a simulation study, which show that this new clustering framework gives better results in terms of robustness to model misspecification and estimation of the number of clusters, compared to the popular mixture models. The proposed clustering methodology is well adapted to automated analysis of flow cytometry data. It tends to give more reproducible results, and helps reduce the significant subjectivity and human time cost encountered in manual gating analysis.
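    The EM machinery at the core of this approach can be sketched with a plain two-component, one-dimensional Gaussian mixture; the paper generalises this to t-mixtures with a Box-Cox transformation for robustness to outliers and nonelliptical clusters. Data and starting values below are synthetic.

```python
import math, random

random.seed(1)

# Toy 1-D "fluorescence" data from two cell populations.
data = [random.gauss(2.0, 0.5) for _ in range(200)] + \
       [random.gauss(6.0, 0.8) for _ in range(200)]

def em(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (simplified stand-in)."""
    mu, sd, w = [1.0, 7.0], [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi)) *
                 math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: update weights, means, and standard deviations.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            sd[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                  for r, x in zip(resp, data)) / nk)
    return mu, sd, w

mu, sd, w = em(data)
print([round(m, 2) for m in mu], [round(v, 2) for v in w])
```

Automated gating assigns each event to the component with the highest responsibility; the t-distribution and Box-Cox extensions change only the E- and M-step formulas.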

  11. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.

    2016-03-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
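    The kind of experiment described, propagating uncertain kinematic parameters through a structural model and mapping where the outcome is uncertain, can be sketched generically (this is not the pynoddy API): a 1-D layered section cut by a fault with uncertain throw, with per-cell information entropy over Monte Carlo draws.

```python
import math, random

random.seed(42)

# Toy kinematic model: a 1-D section of three stratigraphic units, with the
# right-hand side (cells >= 15) downthrown by an uncertain fault throw.
def model_section(throw, n_cells=30):
    """Unit index per cell: layers of thickness 10, shifted right of cell 15."""
    units = []
    for i in range(n_cells):
        depth = i + (throw if i >= 15 else 0)
        units.append(min(depth // 10, 2))
    return units

# Monte Carlo over the uncertain throw (uniform on 0..8 cells, an assumption).
draws = [model_section(random.randint(0, 8)) for _ in range(500)]
n = len(draws)

# Per-cell information entropy of the unit assignment across realisations.
entropy = []
for cell in range(30):
    counts = {}
    for d in draws:
        counts[d[cell]] = counts.get(d[cell], 0) + 1
    entropy.append(-sum(c / n * math.log(c / n) for c in counts.values()))

print(round(max(entropy), 3), entropy.index(max(entropy)))
```

Cells far from the fault are certain (entropy 0); entropy concentrates where the uncertain throw changes which unit a cell falls in, which is the "distribution of uncertain areas" idea in 1-D.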

  12. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.

    2015-11-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  13. Progress and challenges in the automated construction of Markov state models for full protein systems.

    PubMed

    Bowman, Gregory R; Beauchamp, Kyle A; Boxer, George; Pande, Vijay S

    2009-09-28

    Markov state models (MSMs) are a powerful tool for modeling both the thermodynamics and kinetics of molecular systems. In addition, they provide a rigorous means to combine information from multiple sources into a single model and to direct future simulations/experiments to minimize uncertainties in the model. However, constructing MSMs is challenging because doing so requires decomposing the extremely high dimensional and rugged free energy landscape of a molecular system into long-lived states, also called metastable states. Thus, their application has generally required significant chemical intuition and hand-tuning. To address this limitation we have developed a toolkit for automating the construction of MSMs called MSMBUILDER (available at https://simtk.org/home/msmbuilder). In this work we demonstrate the application of MSMBUILDER to the villin headpiece (HP-35 NleNle), one of the smallest and fastest folding proteins. We show that the resulting MSM captures both the thermodynamics and kinetics of the original molecular dynamics of the system. As a first step toward experimental validation of our methodology we show that our model provides accurate structure prediction and that the longest timescale events correspond to folding.
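    The basic estimation step behind any MSM, counting transitions between metastable states at a chosen lag time and row-normalising into transition probabilities, can be sketched as follows. The state sequence is invented; MSMBUILDER automates the hard parts (state decomposition, lumping, and uncertainty estimation).

```python
# Trajectory already discretised into metastable-state labels (hypothetical).
traj = [0, 0, 1, 1, 1, 2, 2, 1, 0, 0, 1, 2, 2, 2, 1, 1, 0]
n_states, lag = 3, 1

# Count observed i -> j transitions at the chosen lag time.
counts = [[0] * n_states for _ in range(n_states)]
for i, j in zip(traj, traj[lag:]):
    counts[i][j] += 1

# Row-normalise counts into a transition probability matrix.
T = [[c / sum(row) if sum(row) else 0.0 for c in row] for row in counts]
for row in T:
    print([round(p, 2) for p in row])
```

Thermodynamics then comes from the stationary distribution of T and kinetics from its eigenvalue spectrum, which is what lets an MSM capture both at once.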

  14. 3-D shear lag model for the analysis of interface damage in ceramic matrix composites

    SciTech Connect

    Dharani, L.R.; Ji, F.

    1995-12-31

    In this paper a micromechanics analytical model is presented for characterizing the behavior of a unidirectional brittle matrix composite containing initial matrix flaws, specifically, as they approach a fiber-matrix interface. It is contemplated that when a matrix crack impinges on the interface, it may go around the fiber, go through the fiber by breaking it, or debond the fiber/matrix interface. It has been experimentally observed that the crack front does not remain straight; rather, it bows once it impinges on a row of fibers. If a unit cell approach is used, the problem is clearly non-axisymmetric and three-dimensional. Since most of the previous analyses dealing with self-similar cracking and interface debonding have considered axisymmetric cracking or two-dimensional planar geometries, the development of an analytical micromechanics model using a 3-D (non-axisymmetric) formulation is needed. The model is based on the consistent shear lag constitutive relations and does account for the large stiffness of the ceramic matrix. Since the present consistent shear lag model is for Cartesian coordinates, we have first derived the consistent shear lag constitutive relations in cylindrical coordinates. The governing equations are obtained by minimizing the potential energy in which the three displacements are represented by means of finite exponential series. Since the full field stresses and displacements are known, the strain energy release rates for self-similar extension of the matrix crack (Gp) and the interface debonding (Gd) are calculated using the compliance method. The competition between various failure modes will be assessed based on the above strain energy release rates and the corresponding critical (toughness) values. The types of interfaces addressed include frictional, elastic, and gradient interfaces with varying properties (interphase). An extensive parametric study will be presented involving different constitutive properties and interface conditions.

  15. Smart Frameworks and Self-Describing Models: Model Metadata for Automated Coupling of Hydrologic Process Components (Invited)

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2013-12-01

    Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its time-stepping scheme. If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework
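    The standardized interface described above can be sketched as a class exposing control functions (initialize/update/finalize) alongside self-description functions, loosely in the spirit of the CSDMS Basic Model Interface. The bucket model and its variable names below are hypothetical.

```python
# Hypothetical one-variable "bucket" model wrapped in a standardized interface.
class BucketModel:
    # --- control functions: the caller drives the model ---
    def initialize(self, storage=0.0):
        self.t, self.dt, self.storage = 0.0, 1.0, storage

    def update(self, inflow=2.0):
        # Linear-reservoir step: gain inflow, lose 10% of storage per step.
        self.storage += self.dt * (inflow - 0.1 * self.storage)
        self.t += self.dt

    def finalize(self):
        self.storage = None

    # --- description functions: the caller queries the model ---
    def get_output_var_names(self):
        return ["water_storage"]

    def get_var_units(self, name):
        return {"water_storage": "m3"}[name]

    def get_time_step(self):
        return self.dt

m = BucketModel()
m.initialize()
for _ in range(5):
    m.update()
print(m.get_output_var_names(), m.get_var_units("water_storage"),
      round(m.storage, 3))
```

A framework holding only this interface can step the model, discover its variables and units, and mediate unit or grid mismatches against other components without touching the model's internals.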

  16. Atomistic modeling of the Au droplet-GaAs interface for size-selective nanowire growth

    NASA Astrophysics Data System (ADS)

    Sakong, Sung; Du, Yaojun A.; Kratzer, Peter

    2013-10-01

    Density functional theory calculations within both the local density approximation and the generalized gradient approximation are used to study Au-catalyzed growth under near-equilibrium conditions. We discuss both the chemical equilibrium of a GaAs nanowire with an As2 gas atmosphere and the mechanical equilibrium between the capillary forces at the nanowire tip. For the latter goal, the interface between the gold nanoparticle and the nanowire is modeled atomically within a slab approach, and the interface energies are evaluated from the total energies of the model systems. We discuss three growth regimes, one catalyzed by an (almost) pure Au particle, an intermediate alloy-catalyzed growth regime, and a Ga-catalyzed growth regime. Using the interface energies calculated from the atomic models, as well as the surface energies of the nanoparticle and the nanowire sidewalls, we determine the optimized geometry of the nanoparticle-capped nanowire by minimizing the free energy of a continuum model. Under typical experimental conditions of 10^-4 Pa As2 and 700 K, our results in the local density approximation are insensitive to the Ga concentration in the nanoparticle. In these growth conditions, the energetically most favored interface has an interface energy of around 45 meV/Å^2, and the correspondingly optimized droplet on top of a GaAs nanowire is somewhat larger than a hemisphere and forms a contact angle of around 130° for both pure Au and Au-Ga alloy nanoparticles.

  17. Atomistic Modeling of Corrosion Events at the Interface between a Metal and Its Environment

    DOE PAGES

    Taylor, Christopher D.

    2012-01-01

Atomistic simulation is a powerful tool for probing the structure and properties of materials and the nature of chemical reactions. Corrosion is a complex process that involves chemical reactions occurring at the interface between a material and its environment and is, therefore, highly suited to study by atomistic modeling techniques. In this paper, the complex nature of corrosion processes and mechanisms is briefly reviewed. Various atomistic methods for exploring corrosion mechanisms are then described, and recent applications in the literature surveyed. Several instances of the application of atomistic modeling to corrosion science are then reviewed in detail, including studies of the metal-water interface, the reaction of water on electrified metallic interfaces, the dissolution of metal atoms from metallic surfaces, and the role of competitive adsorption in controlling the chemical nature and structure of a metallic surface. Some perspectives are then given concerning the future of atomistic modeling in the field of corrosion science.

  18. Numerical Modelling of Subduction Plate Interface, Technical Advances for Outstanding Questions

    NASA Astrophysics Data System (ADS)

    Le Pourhiet, L.; Ruh, J.; Pranger, C. C.; Zheng, L.; van Dinther, Y.; May, D.; Gerya, T.; Burov, E. B.

    2015-12-01

The subduction zone interface is the site of the largest earthquakes on Earth. Compared to the size of a subduction zone itself, it constitutes a very thin zone (a few kilometers) whose effective rheological behaviour varies as a function of pressure, temperature, loading, the nature of the material locally embedded within the interface, and the amount of water, melts and CO2. Capturing the behaviour of this interface and its evolution in time is crucial, yet modelling it is not an easy task. In the last decade, thermo-mechanical models of subduction zones have flourished in the literature. They mostly focused on the long-term dynamics of subduction, e.g. flat subduction, slab detachment or exhumation, and were validated against the P-T-t paths of exhumed material as well as topography. The models that could reproduce the data all included a mechanically weak subduction channel made of extremely weak and non-cohesive material. While this subduction channel model is very convenient at large scale and might apply to some real subduction zones, it does not capture the abundant geological field evidence for the exhumation of very large slices of almost pristine oceanic crust along localised shear zones. Moreover, modelling of seismological and geodetic data using short-term tectonic modelling approaches also indicates that large localised patches rupture within the subduction interface, which is in accordance with geological data but not with large-scale long-term tectonic models. I will present how high-resolution models can produce slicing at the subduction interface and give clues on how the plate coupling and the effective location of the plate interface vary over a time scale of a few million years.
I will then discuss the implications of these new high-resolution long-term models of subduction zones for earthquake generation, and report progress in the development of self-consistent thermomechanical codes which can handle large strain, high resolution

  19. Diffuse photon remission along unique spiral paths on a cylindrical interface is modeled by photon remission along a straight line on a semi-infinite interface.

    PubMed

    Zhang, Anqi; Piao, Daqing; Yao, Gang; Bunting, Charles F; Jiang, Yuhao

    2011-03-01

We demonstrate that, for a long cylindrical applicator that interfaces concavely or convexly with a scattering-dominant medium, a unique set of spiral-shaped directions exists on the tissue-applicator interface, along which the diffuse photon remission is essentially modeled by the photon remission along a straight line on a semi-infinite interface. This interesting phenomenon, which is validated in the steady state in this work by finite-element and Monte Carlo methods, may be particularly useful for simplifying deeper-tissue sensing in endoscopic imaging geometry.
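The semi-infinite straight-line remission invoked here is commonly approximated, in steady-state diffusion theory, by a reflectance that falls off roughly as exp(-mu_eff*rho)/rho^2 with source-detector separation rho. A minimal sketch under that assumption (proportionality constants omitted; the optical parameters below are hypothetical, not from the paper):

```python
import math

def remission_semi_infinite(rho, mu_a, mu_s_prime):
    """Far-field steady-state diffuse remission vs. source-detector
    distance rho on a semi-infinite scattering medium (diffusion
    approximation; proportionality constant omitted)."""
    mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))  # effective attenuation
    return math.exp(-mu_eff * rho) / rho**2

# remission decays monotonically with distance along the line
r1 = remission_semi_infinite(1.0, mu_a=0.01, mu_s_prime=1.0)
r2 = remission_semi_infinite(2.0, mu_a=0.01, mu_s_prime=1.0)
```

The paper's result is that the same one-dimensional decay law applies along the spiral paths on the curved applicator surface.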

  20. New Model for Multimedia Interfaces to Online Public Access Catalogues.

    ERIC Educational Resources Information Center

    Pejtersen, Annelise Mark

    1992-01-01

    Describes the Book House, an interactive, multimedia online public access catalog (OPAC) developed in Denmark that uses icons, text, and animation. An alternative design model that addresses problems in OPACs is described; and database design, system navigation, use for fiction retrieval, and evaluation are discussed. (20 references) (MES)

  1. Automated method for modeling seven-helix transmembrane receptors from experimental data.

    PubMed Central

    Herzyk, P; Hubbard, R E

    1995-01-01

A rule-based automated method is presented for modeling the structures of the seven transmembrane helices of G-protein-coupled receptors. The structures are generated by using a simulated annealing Monte Carlo procedure that positions and orients rigid helices to satisfy structural restraints. The restraints are derived from analysis of experimental information from biophysical studies on native and mutant proteins, from analysis of the sequences of related proteins, and from theoretical considerations of protein structure. Calculations are presented for two systems. The method was validated through calculations using appropriate experimental information for bacteriorhodopsin, which produced a model structure with a root mean square (rms) deviation of 1.87 Å from the structure determined by electron microscopy. Calculations are also presented using experimental and theoretical information available for bovine rhodopsin to assign the helices to a projection density map and to produce a model of bovine rhodopsin that can be used as a template for modeling other G-protein-coupled receptors. PMID:8599649
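The positioning step can be illustrated with a much-simplified simulated annealing Monte Carlo sketch: rigid bodies (here, 2-D points standing in for helix axes) are perturbed at random, and moves are accepted by the Metropolis criterion so that the squared violation of distance restraints is minimised. The restraints and schedule parameters are hypothetical, not those of the paper:

```python
import math, random

random.seed(1)

# hypothetical inter-helix distance restraints: (i, j, target distance)
RESTRAINTS = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 16.0)]

def energy(pos):
    """Sum of squared restraint violations between helix positions."""
    e = 0.0
    for i, j, d0 in RESTRAINTS:
        e += (math.dist(pos[i], pos[j]) - d0) ** 2
    return e

def anneal(pos, t0=50.0, cooling=0.995, steps=4000):
    """Metropolis Monte Carlo with a geometric cooling schedule."""
    t, e = t0, energy(pos)
    for _ in range(steps):
        i = random.randrange(len(pos))
        old = pos[i]
        pos[i] = (old[0] + random.gauss(0, 0.5), old[1] + random.gauss(0, 0.5))
        e_new = energy(pos)
        if e_new < e or random.random() < math.exp((e - e_new) / t):
            e = e_new          # accept the move
        else:
            pos[i] = old       # reject, restore position
        t *= cooling
    return pos, e

positions = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
positions, final_e = anneal(positions)
```

In the real method each rigid body is a full helix with orientation degrees of freedom, and the restraint set encodes the biophysical, sequence and theoretical information described above.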

  2. The integration of a novice user interface into a professional modeling tool.

    PubMed Central

    Ramakrishnan, S.; Hmelo, C. E.; Day, R. S.; Shirey, W. E.; Huang, Q.

    1998-01-01

This paper describes a software tool, the Oncology Thinking Cap (OncoTCAP), and reports on our efforts to develop a novice user interface to simplify the task of describing biological models of cancer and its treatment. OncoTCAP includes a modeling tool for making relationships explicit and provides dynamic feedback about the interaction between cancer cell kinetics, treatments, and patient outcomes. OncoTCAP supports student learning by making normally invisible processes visible and by providing a representational tool that can be used to conduct thought experiments. We also describe our novice interface and report the results of initial usability testing. PMID:9929305

  3. Delft FEWS: an open interface that connects models and data streams for operational forecasting systems

    NASA Astrophysics Data System (ADS)

    de Rooij, Erik; Werner, Micha

    2010-05-01

Many of the operational forecasting systems that are in use today are centred around a single modelling suite. Over the years these systems and the required data streams have been tailored to provide a close-knit interaction with their underlying modelling components. However, as time progresses it becomes a challenge to integrate new technologies into these model-centric operational systems. Often the software used to develop these systems is out of date, or the original designers of these systems are no longer available. Additionally, changing the underlying models may require the complete system to be changed. This then becomes an extensive effort, not only from a software engineering point of view but also from a training point of view, due to the significant time and resources committed to re-training the forecasting teams that interact with the system on a daily basis. One approach to reducing the effort required in integrating new models and data is through an open interface architecture, and through the use of defined interfaces and standards in data exchange. This approach is taken by the Delft-FEWS operational forecasting shell, which has now been applied in some 40 operational forecasting centres across the world. The Delft-FEWS framework provides several interfaces that allow models and data in differing formats to be flexibly integrated with the system. The most common approach to the integration of models is through the Delft-FEWS Published Interface. This is an XML-based data exchange format that supports the exchange of time series data, as well as vector and gridded data formats. The Published Interface supports standardised data formats such as GRIB and the NetCDF-CF standard. A wide range of models has been integrated with the system through this approach, and these are used operationally across the forecasting centres using Delft-FEWS. Models can communicate directly with the interface of Delft-FEWS, or through a SOAP service.
This

  4. Semi-automated calibration method for modelling of mountain permafrost evolution in Switzerland

    NASA Astrophysics Data System (ADS)

    Marmy, A.; Rajczak, J.; Delaloye, R.; Hilbich, C.; Hoelzle, M.; Kotlarski, S.; Lambiel, C.; Noetzli, J.; Phillips, M.; Salzmann, N.; Staub, B.; Hauck, C.

    2015-09-01

Permafrost is a widespread phenomenon in the European Alps. Many important topics, such as the future evolution of permafrost related to climate change and the detection of permafrost at potential natural-hazard sites, are of major concern to our society. Numerical permafrost models are the only tools which facilitate the projection of the future evolution of permafrost. Due to the complexity of the processes involved and the heterogeneity of Alpine terrain, models must be carefully calibrated and results should be compared with observations at the site (borehole) scale. However, a large number of local point data are necessary to obtain a broad overview of the thermal evolution of mountain permafrost over a larger area, such as the Swiss Alps, and site-specific model calibration of each point would be time-consuming. To address this issue, this paper presents a semi-automated calibration method using Generalized Likelihood Uncertainty Estimation (GLUE) as implemented in a 1-D soil model (CoupModel) and applies it to six permafrost sites in the Swiss Alps prior to long-term permafrost evolution simulations. We show that this automated calibration method is able to accurately reproduce the main thermal characteristics, with some limitations at sites with unique conditions such as 3-D air or water circulation, which have to be calibrated manually. The calibration obtained was used for RCM-based long-term simulations under the A1B climate scenario, specifically downscaled at each borehole site. The projections show general permafrost degradation, with thawing at 10 m depth, partially even reaching 20 m depth, by the end of the century, but with different timing among the sites. The degradation is more rapid at bedrock sites, whereas ice-rich sites with a blocky surface cover showed a reduced sensitivity to climate change. The snow cover duration is expected to be reduced drastically (by 20 to 37 %), impacting the ground thermal regime. However
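The GLUE procedure itself is simple to sketch: sample parameters from a prior range, keep the runs whose error against borehole observations falls below a behavioural threshold, and weight the retained runs by likelihood. The toy ground-temperature model, prior range and threshold below are illustrative assumptions, not those of CoupModel:

```python
import random

random.seed(0)

def model(depths, k):
    """Toy ground-temperature model: temperature drops linearly with depth at rate k."""
    return [10.0 - k * z for z in depths]

depths = [0.0, 5.0, 10.0, 20.0]
observed = model(depths, k=0.4)  # synthetic 'borehole' observations

def glue_calibrate(n_samples=2000, threshold=0.5):
    """GLUE: Monte Carlo sampling of the parameter prior, rejection of
    non-behavioural runs above an RMSE threshold, likelihood weighting."""
    behavioural = []
    for _ in range(n_samples):
        k = random.uniform(0.0, 1.0)                 # prior range for k
        sim = model(depths, k)
        rmse = (sum((s - o) ** 2 for s, o in zip(sim, observed))
                / len(depths)) ** 0.5
        if rmse <= threshold:                        # behavioural run
            behavioural.append((k, 1.0 / (rmse + 1e-9)))  # inverse-RMSE weight
    total = sum(w for _, w in behavioural)
    k_est = sum(k * w for k, w in behavioural) / total
    return k_est, len(behavioural)

k_est, n_behavioural = glue_calibrate()
```

The behavioural ensemble, rather than a single best run, is what carries forward into the long-term projections, which is how GLUE expresses calibration uncertainty.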

  5. Automated generation of high-quality training data for appearance-based object models

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Voelker, Arno; Kieritz, Hilke; Hübner, Wolfgang; Arens, Michael

    2013-11-01

Methods for automated person detection and person tracking are essential core components in modern security and surveillance systems. Most state-of-the-art person detectors follow a statistical approach, where prototypical appearances of persons are learned from training samples with known class labels. Selecting appropriate learning samples has a significant impact on the quality of the generated person detectors. For example, training a classifier on a rigid body model using training samples with strong pose variations is in general not effective, irrespective of the classifier's capabilities. Generation of high-quality training data is, apart from performance issues, a very time-consuming process, comprising a significant amount of manual work. Furthermore, due to inevitable limitations of freely available training data, corresponding classifiers are not always transferable to a given sensor and are only applicable in a well-defined narrow variety of scenes and camera setups. Semi-supervised learning methods are a commonly used alternative to supervised training, in general requiring only a few labeled samples. However, as a drawback, semi-supervised methods always include a generative component, which is known to be difficult to learn. Therefore, automated processes for generating training data sets for supervised methods are needed. Such approaches could either help to better adjust classifiers to respective hardware, or serve as a complement to existing data sets. Towards this end, this paper provides some insights into the quality requirements of automatically generated training data for supervised learning methods. Assuming a static camera, labels are generated based on motion detection by background subtraction with respect to weak constraints on the enclosing bounding box of the motion blobs. Since this labeling method consists of standard components, we illustrate the effectiveness by adapting a person detector to cameras of a sensor network.
While varying
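The labeling step, motion detection by background subtraction followed by bounding boxes around connected motion blobs under a weak minimum-area constraint, can be sketched in pure Python as follows (frame representation and thresholds are hypothetical stand-ins for the paper's pipeline):

```python
def bounding_boxes(background, frame, diff_threshold=25, min_area=4):
    """Label moving regions by background subtraction: threshold the
    per-pixel difference, flood-fill connected motion blobs, and keep
    boxes above a minimum area (weak constraint on blob size)."""
    h, w = len(frame), len(frame[0])
    motion = [[abs(frame[y][x] - background[y][x]) > diff_threshold
               for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if motion[y][x] and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:  # 4-connected flood fill of one blob
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           motion[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

# static background and one frame with a 3x3 moving "person"
bg = [[0] * 8 for _ in range(8)]
fr = [row[:] for row in bg]
for y in range(2, 5):
    for x in range(3, 6):
        fr[y][x] = 200
labels = bounding_boxes(bg, fr)
```

Each returned box (x_min, y_min, x_max, y_max) becomes a positive training sample; the minimum-area check is the kind of weak constraint the abstract refers to.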

  6. Role of bulk and of interface contacts in the behavior of lattice model dimeric proteins.

    PubMed

    Tiana, G; Provasi, D; Broglia, R A

    2003-05-01

Some dimeric proteins first fold and then dimerize (three-state dimers) while others first dimerize and then fold (two-state dimers). Within the framework of a minimal lattice model, we can distinguish between sequences following one or the other mechanism on the basis of the distribution of the ground state energy between bulk and interface contacts. The topology of contacts is very different for the bulk than for the interface: while the bulk displays a rich network of interactions, the dimer interface is built up of a set of essentially independent contacts. Consequently, the two sets of interactions play very different roles, both in the folding and in the evolutionary history of the protein. Three-state dimers, where a large fraction of energy is concentrated in a few contacts buried in the bulk, and where the relative contact energy of interface contacts is considerably smaller than that associated with bulk contacts, fold according to a hierarchical pathway controlled by local elementary structures, as also happens in the folding of single-domain monomeric proteins. On the other hand, two-state dimers display a relative contact energy of interface contacts which is larger than the corresponding quantity associated with the bulk. In this case, the assembly of the interface stabilizes the system and leads the two chains to fold. The specific properties of three-state dimers acquired through evolution are expected to be more robust than those of two-state dimers, a fact that has consequences for proteins connected with viral diseases.

  7. Modeling the Assembly of Polymer-Grafted Nanoparticles at Oil-Water Interfaces.

    PubMed

    Yong, Xin

    2015-10-27

    Using dissipative particle dynamics (DPD), I model the interfacial adsorption and self-assembly of polymer-grafted nanoparticles at a planar oil-water interface. The amphiphilic core-shell nanoparticles irreversibly adsorb to the interface and create a monolayer covering the interface. The polymer chains of the adsorbed nanoparticles are significantly deformed by surface tension to conform to the interface. I quantitatively characterize the properties of the particle-laden interface and the structure of the monolayer in detail at different surface coverages. I observe that the monolayer of particles grafted with long polymer chains undergoes an intriguing liquid-crystalline-amorphous phase transition in which the relationship between the monolayer structure and the surface tension/pressure of the interface is elucidated. Moreover, my results indicate that the amorphous state at high surface coverage is induced by the anisotropic distribution of the randomly grafted chains on each particle core, which leads to noncircular in-plane morphology formed under excluded volume effects. These studies provide a fundamental understanding of the interfacial behavior of polymer-grafted nanoparticles for achieving complete control of the adsorption and subsequent self-assembly. PMID:26439456

  8. Operando X-ray Investigation of Electrode/Electrolyte Interfaces in Model Solid Oxide Fuel Cells

    PubMed Central

    2016-01-01

    We employed operando anomalous surface X-ray diffraction to investigate the buried interface between the cathode and the electrolyte of a model solid oxide fuel cell with atomic resolution. The cell was studied under different oxygen pressures at elevated temperatures and polarizations by external potential control. Making use of anomalous X-ray diffraction effects at the Y and Zr K-edges allowed us to resolve the interfacial structure and chemical composition of a (100)-oriented, 9.5 mol % yttria-stabilized zirconia (YSZ) single crystal electrolyte below a La0.6Sr0.4CoO3−δ (LSC) electrode. We observe yttrium segregation toward the YSZ/LSC electrolyte/electrode interface under reducing conditions. Under oxidizing conditions, the interface becomes Y depleted. The yttrium segregation is corroborated by an enhanced outward relaxation of the YSZ interfacial metal ion layer. At the same time, an increase in point defect concentration in the electrolyte at the interface was observed, as evidenced by reduced YSZ crystallographic site occupancies for the cations as well as the oxygen ions. Such changes in composition are expected to strongly influence the oxygen ion transport through this interface which plays an important role for the performance of solid oxide fuel cells. The structure of the interface is compared to the bare YSZ(100) surface structure near the microelectrode under identical conditions and to the structure of the YSZ(100) surface prepared under ultrahigh vacuum conditions. PMID:27346923

  9. Model studies of Rayleigh instabilities via microdesigned interfaces

    SciTech Connect

    Glaeser, Andreas M.

    2000-10-17

    The energetic and kinetic properties of surfaces play a critical role in defining the microstructural changes that occur during sintering and high-temperature use of ceramics. Characterization of surface diffusion in ceramics is particularly difficult, and significant variations in reported values of surface diffusivities arise even in well-studied systems. Effects of impurities, surface energy anisotropy, and the onset of surface attachment limited kinetics (SALK) are believed to contribute to this variability. An overview of the use of Rayleigh instabilities as a means of characterizing surface diffusivities is presented. The development of models of morphological evolution that account for effects of surface energy anisotropy is reviewed, and the potential interplay between impurities and surface energy anisotropy is addressed. The status of experimental studies of Rayleigh instabilities in sapphire utilizing lithographically introduced pore channels of controlled geometry and crystallography is summarized. Results of model studies indicate that impurities can significantly influence both the spatial and temporal characteristics of Rayleigh instabilities; this is attributed at least in part to impurity effects on the surface energy anisotropy. Related model experiments indicate that the onset of SALK may also contribute significantly to apparent variations in surface diffusion coefficients.

  10. Materials Testing and Automation

    NASA Astrophysics Data System (ADS)

    Cooper, Wayne D.; Zweigoron, Ronald B.

    1980-07-01

    The advent of automation in materials testing has been in large part responsible for recent radical changes in the materials testing field: Tests virtually impossible to perform without a computer have become more straightforward to conduct. In addition, standardized tests may be performed with enhanced efficiency and repeatability. A typical automated system is described in terms of its primary subsystems — an analog station, a digital computer, and a processor interface. The processor interface links the analog functions with the digital computer; it includes data acquisition, command function generation, and test control functions. Features of automated testing are described with emphasis on calculated variable control, control of a variable that is computed by the processor and cannot be read directly from a transducer. Three calculated variable tests are described: a yield surface probe test, a thermomechanical fatigue test, and a constant-stress-intensity range crack-growth test. Future developments are discussed.
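Calculated variable control can be illustrated with the constant-stress-intensity-range test mentioned above: the stress intensity K = Y * sigma * sqrt(pi * a) cannot be read from any transducer, so the processor computes the stress command from the measured crack length a. The geometry factor and target value below are hypothetical:

```python
import math

def load_for_constant_K(K_target, a, Y=1.12):
    """Calculated-variable control: invert K = Y * sigma * sqrt(pi * a)
    for the stress command that holds K at the target as the crack
    length a grows (Y = 1.12 is a hypothetical edge-crack geometry factor)."""
    return K_target / (Y * math.sqrt(math.pi * a))

# as the crack extends, the commanded stress must drop to hold K constant
K_target = 30.0  # MPa*sqrt(m), hypothetical target
stresses = [load_for_constant_K(K_target, a) for a in (0.010, 0.015, 0.020)]
```

In a real test stand the processor would re-evaluate this expression each control cycle from the live crack-length measurement and feed the result to the analog station as the load command.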

  11. Automated System Marketplace 1987: Maturity and Competition.

    ERIC Educational Resources Information Center

    Walton, Robert A.; Bridge, Frank R.

    1988-01-01

    This annual review of the library automation marketplace presents profiles of 15 major library automation firms and looks at emerging trends. Seventeen charts and tables provide data on market shares, number and size of installations, hardware availability, operating systems, and interfaces. A directory of 49 automation sources is included. (MES)

  12. Photometric model of diffuse surfaces described as a distribution of interfaced Lambertian facets.

    PubMed

    Simonot, Lionel

    2009-10-20

The Lambertian model for diffuse reflection is widely used for the sake of its simplicity. Nevertheless, this model is known to be inaccurate in describing many real-world objects, including those that present a matte surface. To overcome this difficulty, we propose a photometric model where the surfaces are described as a distribution of facets, each consisting of a flat interface on a Lambertian background. Compared to the Lambertian model, it includes two additional physical parameters: an interface roughness parameter and the ratio between the refractive indices of the background binder and of the upper medium. The Torrance-Sparrow model (a distribution of strictly specular facets) and the Oren-Nayar model (a distribution of strictly Lambertian facets) appear as special cases. PMID:19844317

  13. Semi-automated DIRSIG scene modeling from three-dimensional lidar and passive imagery

    NASA Astrophysics Data System (ADS)

    Lach, Stephen R.

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to long wave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (lidar) models have also been incorporated into the software, providing an extremely powerful tool for multi-sensor algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG's ability to generate scenes "on demand." To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects and background maps have to be created and attributed manually. To shorten the time required for this process, this research developed an approach to reduce the man-in-the-loop requirements for several aspects of synthetic scene construction. Through a fusion of 3D lidar data with passive imagery, we were able to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, many of the remaining tasks realized a shortened implementation time through this application of multi-modal imagery. Lidar data is exploited to identify ground and object features as well as to define initial tree location and building parameter estimates. These estimates are then refined by analyzing high-resolution frame array imagery using the concepts of projective geometry in lieu of the more common Euclidean approach found in most traditional photogrammetric references. Spectral imagery is also used to assign material characteristics to the modeled geometric objects. This is achieved through a modified atmospheric compensation applied to raw hyperspectral imagery. These techniques have been successfully applied to imagery collected over the RIT campus and the greater Rochester area. The data used

  14. Modelling and interpreting biologically crusted dryland soil sub-surface structure using automated micropenetrometry

    NASA Astrophysics Data System (ADS)

    Hoon, Stephen R.; Felde, Vincent J. M. N. L.; Drahorad, Sylvie L.; Felix-Henningsen, Peter

    2015-04-01

Soil penetrometers are used routinely to determine the shear strength of soils and deformable sediments, both at the surface and throughout a depth profile, in disciplines as diverse as soil science, agriculture, geoengineering and alpine avalanche-safety (e.g. Grunwald et al. 2001, Van Herwijnen et al. 2009). Generically, penetrometers comprise two principal components: an advancing probe, and a transducer; the latter to measure the pressure or force required to cause the probe to penetrate or advance through the soil or sediment. The force transducer employed to determine the pressure can range, for example, from a simple mechanical spring gauge to an automatically data-logged electronic transducer. Automated computer control of the penetrometer step size and probe advance rate enables precise measurements to be made down to a resolution of tens of microns (e.g. the automated electronic micropenetrometer (EMP) described by Drahorad 2012). Here we discuss the determination, modelling and interpretation of biologically crusted dryland soil sub-surface structures using automated micropenetrometry. We outline a model enabling the interpretation of depth-dependent penetration resistance (PR) profiles and their spatial differentials using the model equations σ(z) = σ_c0 + Σ_1^n [σ_n(z) + a_n·z + b_n·z^2] and dσ/dz = Σ_1^n [dσ_n(z)/dz + Fr_n(z)], where σ_c0 and σ_n are the plastic deformation stresses for the surface and the nth soil structure (e.g. soil crust, layer, horizon or void), respectively, and Fr_n(z)·dz is the frictional work done per unit volume by sliding the penetrometer rod an incremental distance, dz, through the nth layer. Both σ_n(z) and Fr_n(z) are related to soil structure. They determine the form of σ(z) measured by the EMP transducer. The model enables pores (regions of zero deformation stress) to be distinguished from changes in layer structure or probe friction. We have applied this method to both artificial calibration soils in the
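One reading of the layer-sum model above, in which each structure contributes its deformation-stress terms once the probe has entered it (friction terms omitted, layers and units hypothetical), can be sketched as:

```python
def penetration_resistance(z, layers, sigma_c0=0.05):
    """Evaluate sigma(z) = sigma_c0 + sum_n [sigma_n + a_n*z + b_n*z^2]
    over the structures already penetrated at depth z (hypothetical MPa/mm
    units). Each layer: (z_top, sigma_n, a_n, b_n); a pore is a region
    whose deformation-stress terms are all zero."""
    total = sigma_c0
    for z_top, sigma_n, a_n, b_n in layers:
        if z >= z_top:  # only structures the probe has entered contribute
            total += sigma_n + a_n * z + b_n * z ** 2
    return total

# crust on top of a softer horizon containing a pore (zero-stress region)
layers = [(0.0, 0.20, 0.01, 0.0),   # biological crust
          (2.0, 0.05, 0.002, 0.0),  # underlying soil horizon
          (5.0, 0.0, 0.0, 0.0)]     # pore: adds no resistance
profile = [penetration_resistance(z, layers) for z in (1.0, 3.0, 6.0)]
```

Because the pore adds nothing to σ(z), it appears in the profile only through the absence of a new deformation-stress step, which is exactly how the model separates pores from layer changes.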

  15. Laboratory measurements and theoretical modeling of seismoelectric interface response and coseismic wave fields

    SciTech Connect

    Schakel, M. D.; Slob, E. C.; Heller, H. K. J.; Smeulders, D. M. J.

    2011-04-01

A full-waveform seismoelectric numerical model incorporating the directivity pattern of a pressure source is developed. This model provides predictions of coseismic electric fields and the electromagnetic waves that originate from a fluid/porous-medium interface. An experimental setup in which coseismic electric fields and interface responses are measured is constructed. The seismoelectric origin of the signals is confirmed. The numerically predicted polarity reversal of the interfacial signal and seismoelectric effects due to multiple scattering are detected in the measurements. Both the simulated coseismic electric fields and the electromagnetic waves originating from interfaces agree with the measurements in terms of travel times, waveform, polarity, amplitude, and spatial amplitude decay, demonstrating that seismoelectric effects are comprehensively described by theory.

  16. Time integration for diffuse interface models for two-phase flow

    SciTech Connect

    Aland, Sebastian

    2014-04-01

    We propose a variant of the θ-scheme for diffuse interface models for two-phase flow, together with three new linearization techniques for the surface tension. These involve either additional stabilizing force terms, or a fully implicit coupling of the Navier–Stokes and Cahn–Hilliard equation. In the common case that the equations for interface and flow are coupled explicitly, we find a time step restriction which is very different to other two-phase flow models and in particular is independent of the grid size. We also show that the proposed stabilization techniques can lift this time step restriction. Even more pronounced is the performance of the proposed fully implicit scheme which is stable for arbitrarily large time steps. We demonstrate in a Taylor-flow application that this superior coupling between flow and interface equation can decrease the computation time by several orders of magnitude.
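For the scalar test equation y' = λy, the θ-scheme reads y_{n+1} = y_n + Δt[(1-θ)λy_n + θλy_{n+1}], which makes the stability contrast between explicit and implicit coupling easy to demonstrate. This scalar sketch is far simpler than the coupled Navier-Stokes/Cahn-Hilliard setting of the paper, but it shows why the fully implicit variant tolerates arbitrarily large time steps:

```python
def theta_step(y, dt, lam, theta):
    """One theta-scheme step for y' = lam*y:
    y_{n+1} = y_n + dt*((1-theta)*lam*y_n + theta*lam*y_{n+1}),
    with the implicit part solved in closed form."""
    return y * (1 + (1 - theta) * dt * lam) / (1 - theta * dt * lam)

# stiff decay: explicit Euler (theta=0) is unstable at this step size,
# while backward Euler (theta=1) and Crank-Nicolson (theta=1/2) remain stable
dt, lam = 0.5, -10.0
y_explicit = theta_step(1.0, dt, lam, theta=0.0)
y_implicit = theta_step(1.0, dt, lam, theta=1.0)
y_cn = theta_step(1.0, dt, lam, theta=0.5)
```

The paper's point is analogous at the system level: explicit coupling of interface and flow imposes a grid-independent time step restriction, and implicit coupling lifts it.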

  17. Cellular-automata models of solid-liquid interfaces.

    PubMed

    Cheng, Cho-Kun; Kier, Lemont B

    2007-11-01

A series of cellular-automata (CA) models have been created, simulating relationships between water (or aqueous solutions) and solid surfaces of differing hydropathic (i.e., hydrophilic or hydrophobic) nature. Both equilibrium- and dynamic-flow models were examined, employing simple breaking and joining rules to simulate the hydropathic interactions. The CA simulations show that water accumulates near hydrophilic surfaces and avoids hydrophobic surfaces, forming concave-up and concave-down menisci, respectively, under equilibrium conditions. In the dynamic-flow simulations, the flow rate of water was found to increase past a wall surface as the surface became less hydrophilic, reaching a maximum rate when the solid surface was of intermediate hydropathic state, and then declining with further increase in the hydrophobicity of the surface. Solution simulations show that non-polar solutes tend to achieve higher concentrations near hydrophobic wall surfaces, whereas other hydrophobic/hydrophilic combinations of solutes and surfaces do not show such accumulations. Physical interpretations of the results are presented, as are some possible biological consequences. PMID:18027370
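The breaking-and-joining spirit of such CA water models can be caricatured with a one-dimensional sketch in which a single wall-affinity parameter stands in for the rules at the surface; a hydrophilic wall (high affinity) holds water near it, a hydrophobic wall does not. Geometry and parameters are illustrative assumptions, not the models of the paper:

```python
import random

def simulate(wall_affinity, steps=20000, width=20, n=60, seed=7):
    """Minimal 1-D CA: water particles take random steps on a lattice;
    a move off the wall cell (x = 0) succeeds only with probability
    (1 - wall_affinity), mimicking a joining rule at the surface."""
    rng = random.Random(seed)
    xs = [rng.randrange(width) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        dx = rng.choice((-1, 1))
        x_new = min(max(xs[i] + dx, 0), width - 1)  # reflecting boundaries
        if xs[i] == 0 and dx == 1 and rng.random() < wall_affinity:
            continue  # joining rule: particle stays bound to the wall
        xs[i] = x_new
    return sum(1 for x in xs if x <= 2)  # particles near the wall

near_hydrophilic = simulate(wall_affinity=0.9)
near_hydrophobic = simulate(wall_affinity=0.0)
```

Even this caricature reproduces the qualitative finding quoted above: water accumulates near the hydrophilic wall and not near the hydrophobic one.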

  18. Automated Geometric Model Builder Using Range Image Sensor Data: Final Acquisition

    SciTech Connect

    Diegert, C.; Sackos, J.

    1999-02-01

    This report documents a data collection where we recorded redundant range image data from multiple views of a simple scene, and recorded accurate survey measurements of the same scene. Collecting these data was a focus of the research project Automated Geometric Model Builder Using Range Image Sensor Data (96-0384), supported by Sandia's Laboratory-Directed Research and Development (LDRD) Program during fiscal years 1996, 1997, and 1998. The data described here are available from the authors on CDROM, or electronically over the Internet. Included in this data distribution are Computer-Aided Design (CAD) models we constructed from the survey measurements. The CAD models are compatible with the SolidWorks 98 Plus system, the modern Computer-Aided Design software system that is central to Sandia's DeskTop Engineering Project (DTEP). Integration of our measurements (as built) with the constructive geometry process of the CAD system (as designed) delivers on a vision of the research project. This report on our final data collection will also serve as a final report on the project.

  19. DockTope: a Web-based tool for automated pMHC-I modelling

    PubMed Central

    Menegatti Rigo, Maurício; Amaral Antunes, Dinler; Vaz de Freitas, Martiela; Fabiano de Almeida Mendes, Marcus; Meira, Lindolfo; Sinigaglia, Marialva; Fioravanti Vieira, Gustavo

    2015-01-01

    The immune system is constantly challenged, being required to protect the organism against a wide variety of infectious pathogens and, at the same time, to avoid autoimmune disorders. One of the most important molecules involved in these events is the Major Histocompatibility Complex class I (MHC-I), responsible for binding and presenting small peptides from the intracellular environment to CD8+ T cells. The study of peptide:MHC-I (pMHC-I) molecules at a structural level is crucial to understand the molecular mechanisms underlying immunologic responses. Unfortunately, there are few pMHC-I structures in the Protein Data Bank (PDB) (especially considering the total number of complexes that could be formed combining different peptides), and pMHC-I modelling tools are scarce. Here, we present DockTope, a free and reliable web-based tool for pMHC-I modelling, based on crystal structures from the PDB. DockTope is fully automated and allows any researcher to construct a pMHC-I complex in an efficient way. We have reproduced a dataset of 135 non-redundant pMHC-I structures from the PDB (Cα RMSD below 1 Å). Modelling of pMHC-I complexes is remarkably important, contributing to the knowledge of important events such as cross-reactivity, autoimmunity, cancer therapy, transplantation and rational vaccine design. PMID:26674250

  20. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    SciTech Connect

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
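
    Of the four calibration methods evaluated, the simulated-annealing approach lends itself to a compact sketch. The code below is an illustrative toy, not BEopt/DOE-2.2 or the study's actual implementation: `simulate`, the single `ua` input, the degree-day-like driver, and all tuning constants are invented stand-ins for a building model and its synthetic monthly utility data.

```python
import math
import random

def calibrate(simulate, observed, bounds, seed=0, steps=3000):
    """Minimal simulated-annealing calibration: perturb one model input
    within bounds, accept moves that reduce the squared mismatch with
    the observed (synthetic) utility data, and accept worse moves with
    a Boltzmann probability that shrinks as the temperature decays."""
    rng = random.Random(seed)
    lo, hi = bounds

    def cost(v):
        return sum((s - o) ** 2 for s, o in zip(simulate(v), observed))

    x = (lo + hi) / 2
    best, best_c = x, cost(x)
    c, T = best_c, 1.0
    for _ in range(steps):
        cand = min(hi, max(lo, x + rng.gauss(0, 0.1 * (hi - lo))))
        cc = cost(cand)
        # Metropolis acceptance: always take improvements, sometimes worse moves
        if cc < c or rng.random() < math.exp((c - cc) / max(T, 1e-9)):
            x, c = cand, cc
            if c < best_c:
                best, best_c = x, c
        T *= 0.999                      # geometric cooling schedule
    return best

# Toy "building": monthly energy use proportional to an unknown UA value.
loads = [30, 28, 25, 18, 10, 5, 4, 5, 9, 17, 24, 29]   # degree-day-like driver
true_ua = 0.8
observed = [true_ua * d for d in loads]                 # synthetic billing data
ua_hat = calibrate(lambda ua: [ua * d for d in loads], observed, (0.1, 2.0))
```

    In the study's terms, `observed` plays the role of the synthetic utility bills generated from the "explicit" input values, and the recovered `ua_hat` is the calibrated input.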

  1. VoICE: A semi-automated pipeline for standardizing vocal analysis across models

    PubMed Central

    Burkett, Zachary D.; Day, Nancy F.; Peñagarikano, Olga; Geschwind, Daniel H.; White, Stephanie A.

    2015-01-01

    The study of vocal communication in animal models provides key insight to the neurogenetic basis for speech and communication disorders. Current methods for vocal analysis suffer from a lack of standardization, creating ambiguity in cross-laboratory and cross-species comparisons. Here, we present VoICE (Vocal Inventory Clustering Engine), an approach to grouping vocal elements by creating a high dimensionality dataset through scoring spectral similarity between all vocalizations within a recording session. This dataset is then subjected to hierarchical clustering, generating a dendrogram that is pruned into meaningful vocalization “types” by an automated algorithm. When applied to birdsong, a key model for vocal learning, VoICE captures the known deterioration in acoustic properties that follows deafening, including altered sequencing. In a mammalian neurodevelopmental model, we uncover a reduced vocal repertoire of mice lacking the autism susceptibility gene, Cntnap2. VoICE will be useful to the scientific community as it can standardize vocalization analyses across species and laboratories. PMID:26018425
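
    The core of the pipeline, pairwise similarity scoring followed by hierarchical clustering and dendrogram pruning into vocalization "types", can be sketched as follows. This is a single-linkage toy on a hand-made similarity matrix, not the published VoICE algorithm; the matrix values and the pruning threshold are invented for illustration.

```python
def cluster(similarity, threshold):
    """Toy single-linkage agglomerative clustering on a pairwise
    similarity matrix: repeatedly merge the most similar pair of
    clusters, and stop (prune the dendrogram) once the best remaining
    inter-cluster similarity falls below `threshold`."""
    clusters = [{i} for i in range(len(similarity))]
    while len(clusters) > 1:
        best, pair = -1.0, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: similarity of the closest member pair
                s = max(similarity[i][j]
                        for i in clusters[a] for j in clusters[b])
                if s > best:
                    best, pair = s, (a, b)
        if best < threshold:            # prune: no sufficiently similar pair
            break
        a, b = pair
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters

# Four vocal elements with two obvious "types": 0-1 similar, 2-3 similar.
sim = [[1.0, 0.9, 0.1, 0.2],
       [0.9, 1.0, 0.2, 0.1],
       [0.1, 0.2, 1.0, 0.8],
       [0.2, 0.1, 0.8, 1.0]]
types = cluster(sim, threshold=0.5)     # -> two clusters: {0,1} and {2,3}
```

    In the real pipeline the similarity matrix would come from spectral comparison of all vocalizations in a recording session, and the pruning step is automated rather than driven by a fixed threshold.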

  2. Towards Automated Bargaining in Electronic Markets: A Partially Two-Sided Competition Model

    NASA Astrophysics Data System (ADS)

    Gatti, Nicola; Lazaric, Alessandro; Restelli, Marcello

    This paper focuses on the prominent issue of automating bargaining agents within electronic markets. Models of bargaining in the literature deal with settings in which there are only two agents; no model satisfactorily captures settings with competition among multiple buyers and, analogously, among multiple sellers. In this paper, we extend the principal bargaining protocol, i.e. the alternating-offers protocol, to capture bargaining in markets. The model we propose is such that, in the presence of a unique buyer and a unique seller, the agents' equilibrium strategies are those of the original protocol. Moreover, we study the resulting game game-theoretically, providing the following results: in the presence of one-sided competition (multiple buyers and one seller, or vice versa), we provide the agents' equilibrium strategies for all values of the parameters; in the presence of two-sided competition (multiple buyers and multiple sellers), we provide an algorithm that produces the agents' equilibrium strategies for a large set of parameter values, and we experimentally evaluate its effectiveness.

  3. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    SciTech Connect

    Robertson, Joseph; Polly, Ben; Collis, Jon

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  4. Partially Automated Method for Localizing Standardized Acupuncture Points on the Heads of Digital Human Models

    PubMed Central

    Kim, Jungdae; Kang, Dae-In

    2015-01-01

    Modern imaging tools are essential for the precise positioning of acupuncture points on the human body, where this traditional therapeutic method is applied. For that reason, we suggest a more systematic positioning method that uses X-ray computed tomographic images to precisely position acupoints. Digital Korean human data were obtained to construct three-dimensional head-skin and skull surface models of six individuals. Depending on the method used to pinpoint the positions of the acupoints, every acupoint was classified into one of three types: anatomical points, proportional points, and morphological points. A computational algorithm and procedure were developed for partial automation of the positioning. The anatomical points were selected by using the structural characteristics of the skin surface and skull. The proportional points were calculated from the positions of the anatomical points. The morphological points were also calculated by using some control points related to the connections between the source and the target models. All the acupoints on the heads of the six individuals were displayed on three-dimensional computer graphical image models. This method may be helpful for developing more accurate experimental designs and for providing more quantitative volumetric methods for performing analyses in acupuncture-related research. PMID:26101534

  5. Diffuse interface models of locally inextensible vesicles in a viscous fluid.

    PubMed

    Aland, Sebastian; Egerer, Sabine; Lowengrub, John; Voigt, Axel

    2014-11-15

    We present a new diffuse interface model for the dynamics of inextensible vesicles in a viscous fluid with inertial forces. A new feature of this work is the implementation of the local inextensibility condition in the diffuse interface context. Local inextensibility is enforced by using a local Lagrange multiplier, which provides the necessary tension force at the interface. We introduce a new equation for the local Lagrange multiplier whose solution essentially provides a harmonic extension of the multiplier off the interface while maintaining the local inextensibility constraint near the interface. We also develop a local relaxation scheme that dynamically corrects local stretching/compression errors thereby preventing their accumulation. Asymptotic analysis is presented that shows that our new system converges to a relaxed version of the inextensible sharp interface model. This is also verified numerically. To solve the equations, we use an adaptive finite element method with implicit coupling between the Navier-Stokes and the diffuse interface inextensibility equations. Numerical simulations of a single vesicle in a shear flow at different Reynolds numbers demonstrate that errors in enforcing local inextensibility may accumulate and lead to large differences in the dynamics in the tumbling regime and smaller differences in the inclination angle of vesicles in the tank-treading regime. The local relaxation algorithm is shown to prevent the accumulation of stretching and compression errors very effectively. Simulations of two vesicles in an extensional flow show that local inextensibility plays an important role when vesicles are in close proximity by inhibiting fluid drainage in the near contact region. PMID:25246712

  6. Diffuse interface models of locally inextensible vesicles in a viscous fluid

    PubMed Central

    Aland, Sebastian; Egerer, Sabine; Lowengrub, John; Voigt, Axel

    2014-01-01

    We present a new diffuse interface model for the dynamics of inextensible vesicles in a viscous fluid with inertial forces. A new feature of this work is the implementation of the local inextensibility condition in the diffuse interface context. Local inextensibility is enforced by using a local Lagrange multiplier, which provides the necessary tension force at the interface. We introduce a new equation for the local Lagrange multiplier whose solution essentially provides a harmonic extension of the multiplier off the interface while maintaining the local inextensibility constraint near the interface. We also develop a local relaxation scheme that dynamically corrects local stretching/compression errors thereby preventing their accumulation. Asymptotic analysis is presented that shows that our new system converges to a relaxed version of the inextensible sharp interface model. This is also verified numerically. To solve the equations, we use an adaptive finite element method with implicit coupling between the Navier-Stokes and the diffuse interface inextensibility equations. Numerical simulations of a single vesicle in a shear flow at different Reynolds numbers demonstrate that errors in enforcing local inextensibility may accumulate and lead to large differences in the dynamics in the tumbling regime and smaller differences in the inclination angle of vesicles in the tank-treading regime. The local relaxation algorithm is shown to prevent the accumulation of stretching and compression errors very effectively. Simulations of two vesicles in an extensional flow show that local inextensibility plays an important role when vesicles are in close proximity by inhibiting fluid drainage in the near contact region. PMID:25246712

  7. World-wide distribution automation systems

    SciTech Connect

    Devaney, T.M.

    1994-12-31

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include distribution management systems; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; and computer modeling of the system.

  8. Interface modeling to predict well casing damage for big hill strategic petroleum reserve.

    SciTech Connect

    Ehgartner, Brian L.; Park, Byoung Yoon

    2012-02-01

    Oil leaks were found in well casings of Caverns 105 and 109 at the Big Hill Strategic Petroleum Reserve site. According to the field observations, two instances of casing damage occurred at the depth of the interface between the caprock and the top of salt. This damage could be caused by interface movement induced by cavern volume closure due to salt creep. A three-dimensional finite element model, which allows each cavern to be configured individually, was constructed to investigate shear and vertical displacements across each interface. The model contains interfaces between each lithology and a shear zone to examine the interface behavior in a realistic manner. The analysis results indicate that the casings of Caverns 105 and 109 failed, respectively, by shear stress that exceeded the shear strength due to the horizontal movement of the top of salt relative to the caprock, and by tensile stress due to the downward movement of the top of salt away from the caprock. The casings of Caverns 101, 110, 111 and 114, located at the far ends of the field, are predicted to fail by shear stress in the near future. The casings of the innermost Caverns 107 and 108 are predicted to fail by tensile stress in the near future.

  9. A Demonstration of Automated DNA Sequencing.

    ERIC Educational Resources Information Center

    Latourelle, Sandra; Seidel-Rogol, Bonnie

    1998-01-01

    Details a simulation that employs a paper-and-pencil model to demonstrate the principles behind automated DNA sequencing. Discusses the advantages of automated sequencing as well as the chemistry of automated DNA sequencing. (DDR)

  10. Damage evolution of bi-body model composed of weakly cemented soft rock and coal considering different interface effect.

    PubMed

    Zhao, Zenghui; Lv, Xianzhou; Wang, Weiming; Tan, Yunliang

    2016-01-01

    Considering the effect of structure on tunnel stability in the western mining regions of China, three typical numerical models were built, based on a strain-softening constitutive model and a linear elastic-perfectly plastic model for the soft rock and the interface: R-M, R-C(s)-M and R-C(w)-M. Calculation results revealed that the stress-strain relations and failure characteristics of the three models differ from one another. The combination model without an interface, or with a strong interface, presented continuous failure, while the weak interface exhibited a 'cut-off' effect. Conceptual bi-material and bi-body models were therefore established, and numerical tri-axial compression experiments were carried out for both. The relationships between stress evolution, failure zones and deformation-rate fluctuations, as well as the displacement of the interface, were analyzed in detail. Results show that two breakaway points in the deformation rate mark, respectively, the initiation and penetration of the main rupture; they are distinguishable because of the large fluctuations. The bi-material model shows generally continuous failure, while the bi-body model shows a 'V'-type shear zone in the weak body and failure in the strong body near the interface, due to the interface effect. As confining pressure increases, the 'cut-off' effect of the weak interface becomes less obvious. These conclusions lay the theoretical foundation for the further development of a constitutive model for soft rock-coal combination bodies. PMID:27066329

  11. Development and Implementation of an Extensible Interface-Based Spatiotemporal Geoprocessing and Modeling Toolbox

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Ames, D. P.

    2011-12-01

    This poster presents an object oriented and interface-based spatiotemporal data processing and modeling toolbox that can be extended by third parties to include complete suites of new tools through the implementation of simple interfaces. The resulting software implementation includes both a toolbox and workflow designer or "model builder" constructed using the underlying open source DotSpatial library and MapWindow desktop GIS. The unique contribution of this research and software development activity is in the creation and use of an extensibility architecture for both specific tools (through a so-called "ITool" interface) and batches of tools (through a so-called "IToolProvider" interface.) This concept is introduced to allow for seamless integration of geoprocessing tools from various sources (e.g. distinct libraries of spatiotemporal processing code) - including online sources - within a single user environment. In this way, the IToolProvider interface allows developers to wrap large existing collections of data analysis code without having to re-write it for interoperability. Additionally, developers do not need to design the user interfaces for loading, displaying or interacting with their specific tools, but rather can simply implement the provided interfaces and have their tools and tool collections appear in the toolbox alongside other tools. The demonstration software presented here is based on an implementation of the interfaces and sample tool libraries using the C# .NET programming language. This poster will include a summary of the interfaces as well as a demonstration of the system using the Whitebox Geospatial Analysis Tools (GAT) as an example case of a large number of existing tools that can be exposed to users through this new system. Vector analysis tools which are native in DotSpatial are linked to the Whitebox raster analysis tools in the model builder environment for ease of execution and consistent/repeatable use. We expect that this

  12. Sloan Digital Sky Survey photometric telescope automation and observing software

    SciTech Connect

    Eric H. Neilsen, Jr. et al.

    2002-10-16

    The photometric telescope (PT) provides observations necessary for the photometric calibration of the Sloan Digital Sky Survey (SDSS). Because the attention of the observing staff is occupied by the operation of the 2.5 meter telescope which takes the survey data proper, the PT must reliably take data with little supervision. In this paper we describe the PT's observing program, MOP, which automates most tasks necessary for observing. MOP's automated target selection is closely modeled on the actions a human observer might take, and is built upon a user interface that can be (and has been) used for manual operation. This results in an interface that makes it easy for an observer to track the activities of the automating procedures and intervene with minimum disturbance when necessary. MOP selects targets from the same list of standard star and calibration fields presented to the user, and chooses standard star fields covering ranges of airmass, color, and time necessary to monitor atmospheric extinction and produce a photometric solution. The software determines when additional standard star fields are unnecessary, and selects survey calibration fields according to availability and priority. Other automated features of MOP, such as maintaining the focus and keeping a night log, are also built around still functional manual interfaces, allowing the observer to be as active in observing as desired; MOP's automated features may be used as tools for manual observing, ignored entirely, or allowed to run the telescope with minimal supervision when taking routine data.
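
    The standard-field selection described above, choosing fields so that the night samples the range of airmass needed for an extinction solution, can be sketched as a simple bin-filling pass. The field names, airmass values, and bin edges below are invented for illustration and are not MOP's actual target lists or logic.

```python
def pick_standards(fields, airmass_bins):
    """Toy automated target selection: walk the available standard-star
    fields in order of increasing airmass and keep one per airmass bin,
    so the chosen set spans the extinction curve."""
    chosen, filled = [], set()
    for name, airmass in sorted(fields, key=lambda f: f[1]):
        for k, (lo, hi) in enumerate(airmass_bins):
            if k not in filled and lo <= airmass < hi:
                chosen.append(name)
                filled.add(k)
                break                   # each field fills at most one bin
    return chosen

# Hypothetical standard-star fields as (name, current airmass) pairs.
fields = [("BD+26", 1.05), ("PG0231", 1.30), ("SA95", 1.70), ("SA92", 1.15)]
bins = [(1.0, 1.2), (1.2, 1.5), (1.5, 2.0)]
picks = pick_standards(fields, bins)    # one field per airmass bin
```

    A real scheduler would also track color coverage and time of night, and would stop requesting standards once the bins needed for a photometric solution are filled, as MOP does.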

  13. A Monthly Water-Balance Model Driven By a Graphical User Interface

    USGS Publications Warehouse

    McCabe, Gregory J.; Markstrom, Steven L.

    2007-01-01

    This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.
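
    The underlying computation follows the classic Thornthwaite (1948) scheme: a temperature-based monthly potential evapotranspiration (PET) estimate fed into a running soil-moisture accounting. The sketch below is a simplified textbook version (no day-length correction, invented storage capacity and input data), not the USGS program itself.

```python
def thornthwaite_pet(temps_c):
    """Monthly PET (mm) from the Thornthwaite formulation: a heat index
    I accumulated over months with T > 0, an empirical exponent a, and
    PET = 16 * (10*T/I)**a. Day-length correction omitted for brevity."""
    I = sum((t / 5.0) ** 1.514 for t in temps_c if t > 0)
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return [16.0 * (10.0 * t / I) ** a if t > 0 else 0.0 for t in temps_c]

def water_balance(precip, pet, capacity=150.0):
    """Running monthly soil-moisture storage: add P - PET each month,
    clip storage to [0, capacity]; overflow becomes surplus (runoff)."""
    storage, surplus = capacity, []
    for p, e in zip(precip, pet):
        storage += p - e
        surplus.append(max(0.0, storage - capacity))
        storage = min(max(storage, 0.0), capacity)
    return surplus

# Illustrative mid-latitude monthly means (deg C) and precipitation (mm).
temps = [-2, 0, 5, 10, 16, 20, 23, 22, 17, 11, 5, 0]
precip = [60, 55, 70, 80, 90, 85, 70, 75, 80, 85, 75, 65]
pet = thornthwaite_pet(temps)
surplus = water_balance(precip, pet)
```

    The graphical user interface of the program described above drives essentially this kind of calculation for a specified location, exposing the components (PET, storage, surplus) month by month.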

  14. AgRISTARS: Yield model development/soil moisture. Interface control document

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The interactions and support functions required between the crop Yield Model Development (YMD) Project and Soil Moisture (SM) Project are defined. The requirements for YMD support of SM and vice-versa are outlined. Specific tasks in support of these interfaces are defined for development of support functions.

  15. Numerical simulations of the moving contact line problem using a diffuse-interface model

    NASA Astrophysics Data System (ADS)

    Afzaal, Muhammad; Sibley, David; Duncan, Andrew; Yatsyshin, Petr; Duran-Olivencia, Miguel A.; Nold, Andreas; Savva, Nikos; Schmuck, Markus; Kalliadasis, Serafim

    2015-11-01

    Moving contact lines are a ubiquitous phenomenon both in nature and in many modern technologies. One prevalent way of numerically tackling the problem is with diffuse-interface (phase-field) models, where the classical sharp-interface model of continuum mechanics is relaxed to one with a finite thickness fluid-fluid interface, capturing physics from mesoscopic lengthscales. The present work is devoted to the study of the contact line between two fluids confined by two parallel plates, i.e. a dynamically moving meniscus. Our approach is based on a coupled Navier-Stokes/Cahn-Hilliard model. This system of partial differential equations allows a tractable numerical solution to be computed, capturing diffusive and advective effects in a prototypical case study in a finite-element framework. Particular attention is paid to the static and dynamic contact angle of the meniscus advancing or receding between the plates. The results obtained from our approach are compared to the classical sharp-interface model to elicit the importance of considering diffusion and associated effects. We acknowledge financial support from European Research Council via Advanced Grant No. 247031.
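
    A standard form of the coupled Navier-Stokes/Cahn-Hilliard system referred to above is sketched below; the exact nondimensionalisation and boundary conditions used by the authors may differ.

```latex
% Incompressible Navier--Stokes with a capillary forcing term:
\rho\left(\partial_t \mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \eta\,\Delta\mathbf{u} + \mu\,\nabla\phi,
\qquad \nabla\cdot\mathbf{u} = 0.

% Cahn--Hilliard transport of the phase field \phi
% (\phi = \pm 1 in the two bulk fluids, with a diffuse interface of width ~\varepsilon):
\partial_t \phi + \mathbf{u}\cdot\nabla\phi = \nabla\cdot\left(M\,\nabla\mu\right),
\qquad \mu = f'(\phi) - \varepsilon^{2}\,\Delta\phi,
\quad f(\phi) = \tfrac{1}{4}\left(\phi^{2}-1\right)^{2}.
```

    Here M is the mobility and mu the chemical potential; as epsilon tends to zero the model formally recovers the sharp-interface limit against which the contact-angle results are compared.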

  16. Self-Observation Model Employing an Instinctive Interface for Classroom Active Learning

    ERIC Educational Resources Information Center

    Chen, Gwo-Dong; Nurkhamid; Wang, Chin-Yeh; Yang, Shu-Han; Chao, Po-Yao

    2014-01-01

    In a classroom, obtaining active, whole-focused, and engaging learning results from a design is often difficult. In this study, we propose a self-observation model that employs an instinctive interface for classroom active learning. Students can communicate with virtual avatars in the vertical screen and can react naturally according to the…

  17. Modeling the current distribution across the depth electrode-brain interface in deep brain stimulation.

    PubMed

    Yousif, Nada; Liu, Xuguang

    2007-09-01

    The mismatch between the extensive clinical use of deep brain stimulation (DBS), which is being used to treat an increasing number of neurological disorders, and the lack of understanding of the underlying mechanisms is confounded by the difficulty of measuring the spread of electric current in the brain in vivo. In this article we present a brief review of the recent computational models that simulate the electric current and field distribution in 3D space and, consequently, make estimations of the brain volume being modulated by therapeutic DBS. Such structural modeling work can be categorized into three main approaches: target-specific modeling, models of instrumentation and modeling the electrode-brain interface. Comments are made for each of these approaches with emphasis on our electrode-brain interface modeling, since the stimulating current must travel across the electrode-brain interface in order to reach the surrounding brain tissue and modulate the pathological neural activity. For future modeling work, a combined approach needs to be taken to reveal the underlying mechanisms, and both structural and dynamic models need to be clinically validated to make reliable predictions about the therapeutic effect of DBS in order to assist clinical practice.

  18. Cryo-EM Data Are Superior to Contact and Interface Information in Integrative Modeling.

    PubMed

    de Vries, Sjoerd J; Chauvot de Beauchêne, Isaure; Schindler, Christina E M; Zacharias, Martin

    2016-02-23

    Protein-protein interactions carry out a large variety of essential cellular processes. Cryo-electron microscopy (cryo-EM) is a powerful technique for the modeling of protein-protein interactions at a wide range of resolutions, and recent developments have caused a revolution in the field. At low resolution, cryo-EM maps can drive integrative modeling of the interaction, assembling existing structures into the map. Other experimental techniques can provide information on the interface or on the contacts between the monomers in the complex. This inevitably raises the question regarding which type of data is best suited to drive integrative modeling approaches. Systematic comparison of the prediction accuracy and specificity of the different integrative modeling paradigms is unavailable to date. Here, we compare EM-driven, interface-driven, and contact-driven integrative modeling paradigms. Models were generated for the protein docking benchmark using the ATTRACT docking engine and evaluated using the CAPRI two-star criterion. At 20 Å resolution, EM-driven modeling achieved a success rate of 100%, outperforming the other paradigms even with perfect interface and contact information. Therefore, even very low-resolution cryo-EM data are superior in predicting heterodimeric and heterotrimeric protein assemblies. Our study demonstrates that a force field is not necessary; cryo-EM data alone are sufficient to guide the monomers accurately into place. The resulting rigid models successfully identify regions of conformational change, opening up perspectives for targeted flexible remodeling.

  19. Facial pressure zones of an oronasal interface for noninvasive ventilation: a computer model analysis* **

    PubMed Central

    Barros, Luana Souto; Talaia, Pedro; Drummond, Marta; Natal-Jorge, Renato

    2014-01-01

    OBJECTIVE: To study the effects of an oronasal interface (OI) for noninvasive ventilation, using a three-dimensional (3D) computational model with the ability to simulate and evaluate the main pressure zones (PZs) of the OI on the human face. METHODS: We used a 3D digital model of the human face, based on a pre-established geometric model. The model simulated soft tissues, skull, and nasal cartilage. The geometric model was obtained by 3D laser scanning and post-processed for use in the model created, with the objective of separating the cushion from the frame. A computer simulation was performed to determine the pressure required in order to create the facial PZs. We obtained descriptive graphical images of the PZs and their intensity. RESULTS: For the graphical analyses of each face-OI model pair and their respective evaluations, we ran 21 simulations. The computer model identified several high-impact PZs in the nasal bridge and paranasal regions. The variation in soft tissue depth had a direct impact on the amount of pressure applied (438-724 cmH2O). CONCLUSIONS: The computer simulation results indicate that, in patients submitted to noninvasive ventilation with an OI, the probability of skin lesion is higher in the nasal bridge and paranasal regions. This methodology could increase the applicability of biomechanical research on noninvasive ventilation interfaces, providing the information needed in order to choose the interface that best minimizes the risk of skin lesion. PMID:25610506

  20. An ASM/ADM model interface for dynamic plant-wide simulation.

    PubMed

    Nopens, Ingmar; Batstone, Damien J; Copp, John B; Jeppsson, Ulf; Volcke, Eveline; Alex, Jens; Vanrolleghem, Peter A

    2009-04-01

    Mathematical modelling has proven to be very useful in process design, operation and optimisation. A recent trend in WWTP modelling is to include the different subunits in so-called plant-wide models rather than focusing on parts of the entire process. One example of a typical plant-wide model is the coupling of an upstream activated sludge plant (including primary settler, and secondary clarifier) to an anaerobic digester for sludge digestion. One of the key challenges when coupling these processes has been the definition of an interface between the well accepted activated sludge model (ASM1) and anaerobic digestion model (ADM1). Current characterisation and interface models have key limitations, the most critical of which is the over-use of X(c) (or lumped complex) variable as a main input to the ADM1. Over-use of X(c) does not allow for variation of degradability, carbon oxidation state or nitrogen content. In addition, achieving a target influent pH through the proper definition of the ionic system can be difficult. In this paper, we define an interface and characterisation model that maps degradable components directly to carbohydrates, proteins and lipids (and their soluble analogues), as well as organic acids, rather than using X(c). While this interface has been designed for use with the Benchmark Simulation Model No. 2 (BSM2), it is widely applicable to ADM1 input characterisation in general. We have demonstrated the model both hypothetically (BSM2), and practically on a full-scale anaerobic digester treating sewage sludge.
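
    The central idea, mapping degradable COD directly onto carbohydrate, protein and lipid pools instead of lumping it into X(c), can be sketched as a nitrogen-constrained split. The fractions and the protein nitrogen content below are illustrative placeholders, not the BSM2 interface's actual parameter values.

```python
def map_asm_to_adm(cod_degradable, n_content, n_protein=0.098):
    """Toy ASM -> ADM1 characterisation step: all organic nitrogen is
    assigned to protein (fixing the protein COD share), and the
    remaining degradable COD is split between carbohydrates and lipids.
    Units: COD in g/m3, nitrogen in g N/m3; n_protein is an assumed
    nitrogen content of protein in g N per g COD."""
    x_pr = min(cod_degradable, n_content / n_protein)  # protein COD fixed by N
    rest = cod_degradable - x_pr
    x_ch, x_li = 0.7 * rest, 0.3 * rest                # assumed 70/30 split
    return {"X_pr": x_pr, "X_ch": x_ch, "X_li": x_li}

out = map_asm_to_adm(cod_degradable=100.0, n_content=4.9)
total = sum(out.values())   # COD is conserved across the interface
```

    The key property the real interface must satisfy, and which this sketch preserves, is that COD and nitrogen balances close across the mapping, so that varying influent nitrogen content changes the protein fraction rather than being forced into a fixed-composition composite.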

  1. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    NASA Astrophysics Data System (ADS)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  2. Psychovegetative syndrome diagnosis: an automated psychophysiological investigation and mathematical modeling approach.

    PubMed

    Brelidze, Z; Samadashvili, Z; Khachapuridze, G; Kubaneishvili, E; Nozadze, Z; Benidze, I; Tsitskishvili, N

    1995-01-01

    1. INTRODUCTION. The main purpose of our work was to create an informational expert system for psychovegetative syndrome diagnosis by applying clinical data and estimating the functioning of the central and peripheral parts of the regulatory apparatus of the human organism, taking into consideration parallel and consecutive sensory, motor, associative and emotional drive systems, and internal body state. We used automated psychophysiological investigation and mathematical models. For this purpose the following principal tasks were undertaken: the creation of a database for quantifiable estimation of patient state; the definition and automation of psychophysiological investigation; mathematical modeling of vegetative functions using a non-invasive sample and its connection with real psychophysiological experiments; mathematical modeling of the organism's inner-medium homeostasis; and the creation of an informational expert system for psychovegetative syndrome diagnosis. 2. DATABASE FOR ESTIMATION OF PATIENT STATE. The medical records of the DB "PATIENT" contain data on patient psychic and somatoneurological status. 3. AUTOMATED PSYCHOPHYSIOLOGICAL INVESTIGATION. Psychophysiological investigation enables estimation of the functioning of several subsystems of the human organism and establishes interrelationships between them by means of electrophysiological data and performance parameters. The study of the psychophysiological provision of behavior enables us to obtain information about the adaptational mechanisms of the patient under certain environmental loads. By means of special mathematical provision, the mathematical elaboration of biosignals as performance parameters has been realized, along with the formation of the received parameters in the database, the estimation of separate parameters in view of their informativity, and the establishment of diagnostic patterns. 4. MATHEMATICAL MODELS FOR ESTIMATION OF INTERNAL BODY STATE. The proposed

  3. First principles modeling of interfaces of lithium (thio) phosphate solid electrolytes and lithium metal anodes

    NASA Astrophysics Data System (ADS)

    Holzwarth, N. A. W.; Lepley, N. D.; Al-Qawasmeh, A. N. M.; Kates, C. M.

    2014-03-01

    Computer modeling studies show that while lithium phosphate electrolytes form stable interfaces with lithium metal anodes, lithium thiophosphate electrolytes are typically structurally and chemically altered by the presence of lithium metal. On the other hand, experiments have shown that an electrochemical cell of Li/Li3PS4/Li can be cycled many times. One possible explanation of the apparent experimental stability of the Li/Li3PS4/Li system is that a stabilizing buffer layer is formed at the interface during the first few electrochemical cycles. In order to computationally explore this possibility, we examined the influence of ``thin film'' buffer layers of Li2S on the surface of the electrolyte. Using first principles techniques, stable electrolyte-buffer layer configurations were constructed and the resulting Li3PS4/Li2S and Li2S/Li interfaces were found to be structurally and chemically stable. Supported by NSF grant DMR-1105485.

  4. Nuclear Reactor/Hydrogen Process Interface Including the HyPEP Model

    SciTech Connect

    Steven R. Sherman

    2007-05-01

    The Nuclear Reactor/Hydrogen Plant interface is the intermediate heat transport loop that will connect a very high temperature gas-cooled nuclear reactor (VHTR) to a thermochemical, high-temperature electrolysis, or hybrid hydrogen production plant. A prototype plant called the Next Generation Nuclear Plant (NGNP) is planned for construction and operation at the Idaho National Laboratory in the 2018-2021 timeframe, and will involve a VHTR, a high-temperature interface, and a hydrogen production plant. The interface is responsible for transporting high-temperature thermal energy from the nuclear reactor to the hydrogen production plant while protecting the nuclear plant from operational disturbances at the hydrogen plant. Development of the interface is occurring under the DOE Nuclear Hydrogen Initiative (NHI) and involves the study, design, and development of high-temperature heat exchangers, heat transport systems, materials, safety, and integrated system models. Research and development work on the system interface began in 2004 and is expected to continue at least until the start of construction of an engineering-scale demonstration plant.

  5. Perturbative approach to the structure of a planar interface in the Landau-de Gennes model.

    PubMed

    Pełka, Robert; Saito, Kazuya

    2006-10-01

    The structure of nearly static planar interfaces is studied within the framework of the Landau-de Gennes model with the dynamics governed by the time-dependent Ginzburg-Landau equation. To account for the full elastic anisotropy the free energy expansion is extended to include a third order gradient term. The solutions corresponding to the in-plane or homeotropic director alignment at the interface are sought. For this purpose a consistent perturbative scheme is constructed which enables one to calculate successive corrections to the velocity and the order parameter of the interface. The implications of the solutions are discussed. The elastic anisotropy introduces asymmetry into the order parameter and free energy profiles, even for the high symmetry homeotropic configuration. The velocity of the interface with the homeotropic or in-plane alignment is enhanced or reduced, respectively. There is no reorientation of the optical axis in the boundary layer. For the class of nematogens with approximate splay-bend degeneracy the temperature dependence of the interface velocity is weakly affected by the remaining twist anisotropy. PMID:17155076

  6. Open boundary conditions for the Diffuse Interface Model in 1-D

    NASA Astrophysics Data System (ADS)

    Desmarais, J. L.; Kuerten, J. G. M.

    2014-04-01

    New techniques are developed for solving multi-phase flows in unbounded domains using the Diffuse Interface Model in 1-D. They extend two open boundary conditions originally designed for the Navier-Stokes equations. The non-dimensional formulation of the DIM generalizes the approach to any fluid. The equations support a steady state whose analytical approximation close to the critical point depends only on temperature. This feature enables the use of detectors at the boundaries switching between conventional boundary conditions in bulk phases and a multi-phase strategy in interfacial regions. Moreover, the latter takes advantage of the steady state approximation to minimize the interface-boundary interactions. The techniques are applied to fluids experiencing a phase transition and where the interface between the phases travels through one of the boundaries. When the interface crossing the boundary is fully developed, the technique greatly improves results relative to cases where conventional boundary conditions can be used. Limitations appear when the interface crossing the boundary is not a stable equilibrium between the two phases: the terms responsible for creating the true balance between the phases perturb the interior solution. Both boundary conditions present good numerical stability properties: the error remains bounded when the initial conditions or the far field values are perturbed. For the perfectly matched layer (PML) boundary condition, the influence of its main parameters on the global error is investigated to make a compromise between computational costs and maximum error. The approach can be extended to multiple spatial dimensions.

  7. Fully automated segmentation of oncological PET volumes using a combined multiscale and statistical model

    SciTech Connect

    Montgomery, David W. G.; Amira, Abbes; Zaidi, Habib

    2007-02-15

    The widespread application of positron emission tomography (PET) in clinical oncology has driven this imaging technology into a number of new research and clinical arenas. Increasing numbers of patient scans have led to an urgent need for efficient data handling and the development of new image analysis techniques to aid clinicians in the diagnosis of disease and planning of treatment. Automatic quantitative assessment of metabolic PET data is attractive and will certainly revolutionize the practice of functional imaging since it can lower variability across institutions and may enhance the consistency of image interpretation independent of reader experience. In this paper, a novel automated system for the segmentation of oncological PET data aiming at providing an accurate quantitative analysis tool is proposed. The initial step involves expectation maximization (EM)-based mixture modeling using a k-means clustering procedure, which varies voxel order for initialization. A multiscale Markov model is then used to refine this segmentation by modeling spatial correlations between neighboring image voxels. An experimental study using an anthropomorphic thorax phantom was conducted for quantitative evaluation of the performance of the proposed segmentation algorithm. The comparison of actual tumor volumes to the volumes calculated using different segmentation methodologies including standard k-means, spatial domain Markov Random Field Model (MRFM), and the new multiscale MRFM proposed in this paper showed that the latter dramatically reduces the relative error to less than 8% for small lesions (7 mm radii) and less than 3.5% for larger lesions (9 mm radii). The analysis of the resulting segmentations of clinical oncologic PET data seems to confirm that this methodology shows promise and can successfully segment patient lesions. For problematic images, this technique enables the identification of tumors situated very close to nearby high normal physiologic uptake. 
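The segmentation pipeline above initialises an EM mixture model with a k-means pass before the multiscale Markov refinement. As a much simpler stand-in for that first step, here is a self-contained 1-D sketch of k-means-initialised EM on synthetic two-component data; it keeps only the EM/k-means core and none of the 3-D PET or Markov machinery.

```python
import numpy as np

# Illustrative sketch: EM fitting of a two-component 1-D Gaussian mixture,
# initialised by a simple k-means pass, as in the first segmentation step.

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(6.0, 1.0, 500)])

# --- k-means initialisation (two clusters) ---
mu = np.array([x.min(), x.max()])
for _ in range(20):
    labels = np.abs(x[:, None] - mu[None, :]).argmin(axis=1)
    mu = np.array([x[labels == k].mean() for k in range(2)])

pi = np.array([np.mean(labels == k) for k in range(2)])       # mixture weights
sigma = np.array([x[labels == k].std() + 1e-6 for k in range(2)])

# --- EM iterations ---
for _ in range(50):
    # E-step: responsibilities of each component for each sample
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = pi * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update weights, means, standard deviations
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
```

The fitted means recover the two generating components; in a PET volume the same responsibilities would label voxels as tumor or background before spatial refinement.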

  8. Modeling Complex Cross-Systems Software Interfaces Using SysML

    NASA Technical Reports Server (NTRS)

    Mandutianu, Sanda; Morillo, Ron; Simpson, Kim; Liepack, Otfrid; Bonanne, Kevin

    2013-01-01

    The complex flight and ground systems for NASA human space exploration are designed, built, operated and managed as separate programs and projects. However, each system relies on one or more of the other systems in order to accomplish specific mission objectives, creating a complex, tightly coupled architecture. Thus, there is a fundamental need to understand how each system interacts with the other. To determine if a model-based system engineering approach could be utilized to assist with understanding the complex system interactions, the NASA Engineering and Safety Center (NESC) sponsored a task to develop an approach for performing cross-system behavior modeling. This paper presents the results of applying Model Based Systems Engineering (MBSE) principles using the System Modeling Language (SysML) to define cross-system behaviors and how they map to cross-system software interfaces documented in system-level Interface Control Documents (ICDs).

  9. Planning a port interface for an ocean incineration system: computer-model user's manual. Final report

    SciTech Connect

    Glucksman, M.A.; Marcus, H.S.

    1986-06-01

    The User's Manual is written to accompany the computer model developed in the report, Planning a Port Interface For An Ocean Incineration System. The model is based on SYMPHONY (TM), a Lotus Development Corp. product. Apart from the requirement for the software, the model needs an IBM PC-compatible personal computer with at least 576 kilobytes of RAM. The model assumes the viewpoint of a planner who has yet to choose a particular type of vessel and port technology. The model contains four types of information: physical parameters of system alternatives, government regulations, risks associated with different system alternatives, and relevant background information.

  10. Distribution automation applications of fiber optics

    NASA Technical Reports Server (NTRS)

    Kirkham, Harold; Johnston, A.; Friend, H.

    1989-01-01

    Motivations for interest and research in distribution automation are discussed. The communication requirements of distribution automation are examined and shown to exceed the capabilities of power line carrier, radio, and telephone systems. A fiber optic based communication system is described that is co-located with the distribution system and that could satisfy the data rate and reliability requirements. A cost comparison shows that it could be constructed at a cost that is similar to that of a power line carrier system. The requirements for fiber optic sensors for distribution automation are discussed. The design of a data link suitable for optically-powered electronic sensing is presented. Empirical results are given. A modeling technique that was used to understand the reflections of guided light from a variety of surfaces is described. An optical position-indicator design is discussed. Systems aspects of distribution automation are discussed, in particular, the lack of interface, communications, and data standards. The economics of distribution automation are examined.

  11. Analytical model for radiative transfer including the effects of a rough material interface.

    PubMed

    Giddings, Thomas E; Kellems, Anthony R

    2016-08-20

    The reflected and transmitted radiance due to a source located above a water surface is computed based on models for radiative transfer in continuous optical media separated by a discontinuous air-water interface with random surface roughness. The air-water interface is described as the superposition of random, unresolved roughness on a deterministic realization of a stochastic wave surface at resolved scales. Under the geometric optics assumption, the bidirectional reflection and transmission functions for the air-water interface are approximated by applying regular perturbation methods to Snell's law and including the effects of a random surface roughness component. Formal analytical solutions to the radiative transfer problem under the small-angle scattering approximation account for the effects of scattering and absorption as light propagates through the atmosphere and water and also capture the diffusive effects due to the interaction of light with the rough material interface that separates the two optical media. Results of the analytical models are validated against Monte Carlo simulations, and the approximation to the bidirectional reflection function is also compared to another well-known analytical model.
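The geometric-optics ingredient of the model above is Snell's law at the air-water interface, perturbed by a small random roughness tilt. This sketch shows only that ingredient: the transmitted angle and its first-order correction under a small normal tilt `eps`. Function names and the toy setup are illustrative; the paper carries this through full bidirectional reflection/transmission functions.

```python
import math

# Snell's law at an air-water interface, plus a first-order perturbation of
# the transmitted angle when the surface normal is tilted by a small
# roughness angle eps (illustrative only).

N_AIR, N_WATER = 1.0, 1.33

def transmitted_angle(theta_i, n1=N_AIR, n2=N_WATER):
    """Snell's law: n1 sin(theta_i) = n2 sin(theta_t)."""
    return math.asin(n1 * math.sin(theta_i) / n2)

def transmitted_angle_perturbed(theta_i, eps, n1=N_AIR, n2=N_WATER):
    """First-order expansion: tilting the normal by eps shifts the incidence
    angle by eps, so theta_t changes by eps * d(theta_t)/d(theta_i),
    where cos(theta_t) d(theta_t) = (n1/n2) cos(theta_i) d(theta_i)."""
    theta_t = transmitted_angle(theta_i, n1, n2)
    dtheta_t = (n1 * math.cos(theta_i)) / (n2 * math.cos(theta_t))
    return theta_t + eps * dtheta_t

theta_i = math.radians(30)
exact = transmitted_angle(theta_i + 0.01)            # tilt folded in exactly
approx = transmitted_angle_perturbed(theta_i, 0.01)  # first-order estimate
```

For a 0.01 rad tilt the first-order estimate agrees with the exact refracted angle to well below a milliradian, which is why regular perturbation in the roughness angle is a reasonable starting point.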

  13. Third-generation electrokinetically pumped sheath-flow nanospray interface with improved stability and sensitivity for automated capillary zone electrophoresis-mass spectrometry analysis of complex proteome digests.

    PubMed

    Sun, Liangliang; Zhu, Guijie; Zhang, Zhenbin; Mou, Si; Dovichi, Norman J

    2015-05-01

    We have reported a set of electrokinetically pumped sheath flow nanoelectrospray interfaces to couple capillary zone electrophoresis with mass spectrometry. A separation capillary is threaded through a cross into a glass emitter. A side arm provides fluidic contact with a sheath buffer reservoir that is connected to a power supply. The potential applied to the sheath buffer drives electro-osmosis in the emitter to pump the sheath fluid at nanoliter per minute rates. Our first-generation interface placed a flat-tipped capillary in the emitter. Sensitivity was inversely related to orifice size and to the distance from the capillary tip to the emitter orifice. A second-generation interface used a capillary with an etched tip that allowed the capillary exit to approach within a few hundred micrometers of the emitter orifice, resulting in a significant increase in sensitivity. In both the first- and second-generation interfaces, the emitter diameter was typically 8 μm; these narrow orifices were susceptible to plugging and tended to have limited lifetime. We now report a third-generation interface that employs a larger diameter emitter orifice with a very short distance between the capillary tip and the emitter orifice. This modified interface is much more robust and has a much longer lifetime than our previous designs, with no loss in sensitivity. We evaluated the third-generation interface for a 5000 min (127 runs, 3.5 days) repetitive analysis of bovine serum albumin digest using an uncoated capillary. We observed a 10% relative standard deviation in peak area, an average of 160,000 theoretical plates, and very low carry-over (much less than 1%). We employed a linear-polyacrylamide (LPA)-coated capillary for single-shot, bottom-up proteomic analysis of 300 ng of Xenopus laevis fertilized egg proteome digest and identified 1249 protein groups and 4038 peptides in a 110 min separation using an LTQ-Orbitrap Velos mass spectrometer; peak capacity was ∼330.

  14. Visualization: A Mind-Machine Interface for Discovery.

    PubMed

    Nielsen, Cydney B

    2016-02-01

    Computation is critical for enabling us to process data volumes and model data complexities that are unthinkable by manual means. However, we are far from automating the sense-making process. Human knowledge and reasoning are critical for discovery. Visualization offers a powerful interface between mind and machine that should be further exploited in future genome analysis tools. PMID:26739384

  15. An Agent-Based Interface to Terrestrial Ecological Forecasting

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Nemani, Ramakrishna; Pang, Wan-Lin; Votava, Petr; Etzioni, Oren

    2004-01-01

    This paper describes a flexible agent-based ecological forecasting system that combines multiple distributed data sources and models to provide near-real-time answers to questions about the state of the Earth system. We build on novel techniques in automated constraint-based planning and natural language interfaces to automatically generate data products based on descriptions of the desired data products.

  16. MaxMod: a hidden Markov model based novel interface to MODELLER for improved prediction of protein 3D models.

    PubMed

    Parida, Bikram K; Panda, Prasanna K; Misra, Namrata; Mishra, Barada K

    2015-02-01

    Modeling the three-dimensional (3D) structures of proteins assumes great significance because of its manifold applications in biomolecular research. Toward this goal, we present MaxMod, a graphical user interface (GUI) of the MODELLER program that combines profile hidden Markov model (profile HMM) method with Clustal Omega program to significantly improve the selection of homologous templates and target-template alignment for construction of accurate 3D protein models. MaxMod distinguishes itself from other existing GUIs of MODELLER software by implementing effortless modeling of proteins using templates that bear modified residues. Additionally, it provides various features such as loop optimization, express modeling (a feature where protein model can be generated directly from its sequence, without any further user intervention) and automatic update of PDB database, thus enhancing the user-friendly control of computational tasks. We find that HMM-based MaxMod performs better than other modeling packages in terms of execution time and model quality. MaxMod is freely available as a downloadable standalone tool for academic and non-commercial purpose at http://www.immt.res.in/maxmod/. PMID:25636267

  18. A Plugin to Interface openModeller from QGIS for Species' Potential Distribution Modelling

    NASA Astrophysics Data System (ADS)

    Becker, Daniel; Willmes, Christian; Bareth, Georg; Weniger, Gerd-Christian

    2016-06-01

    This contribution describes the development of a plugin for the geographic information system QGIS to interface the openModeller software package. The aim is to use openModeller to generate species' potential distribution models for various archaeological applications (site catchment analysis, for example). Since the usage of openModeller's command-line interface and configuration files can be a bit inconvenient, an extension of the QGIS user interface to handle these tasks, in combination with the management of the geographic data, was required. The implementation was realized in Python using PyQGIS and PyQT. The plugin, in combination with QGIS, handles the tasks of managing geographical data, data conversion, generation of configuration files required by openModeller and compilation of a project folder. The plugin proved to be very helpful with the task of compiling project datasets and configuration files for multiple instances of species occurrence datasets and the overall handling of openModeller. In addition, the plugin is easily extensible to take potential new requirements into account in the future.
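The core bookkeeping the plugin automates is compiling a project folder: occurrence data plus a configuration/request file for openModeller. The sketch below shows that idea only; the key names in the generated request file are illustrative placeholders, not openModeller's actual request-file syntax, and `compile_project` is an invented helper.

```python
import csv
import pathlib
import tempfile

# Sketch of compiling a project folder with an occurrence file and a
# request/configuration file. Key names are illustrative placeholders.

def compile_project(folder, species, occurrences, layers):
    folder = pathlib.Path(folder)
    folder.mkdir(parents=True, exist_ok=True)
    # Occurrence records as a simple CSV (species, lon, lat)
    occ = folder / "occurrences.csv"
    with occ.open("w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["species", "lon", "lat"])
        for lon, lat in occurrences:
            w.writerow([species, lon, lat])
    # Plain-text request file listing the species and environmental layers
    req = folder / "request.txt"
    req.write_text(
        "\n".join(
            ["Occurrences file = occurrences.csv", "Species = " + species]
            + ["Map = " + layer for layer in layers]
        )
        + "\n"
    )
    return occ, req

proj = pathlib.Path(tempfile.mkdtemp())
occ, req = compile_project(proj / "bear_model", "Ursus arctos",
                           [(6.1, 45.9), (7.3, 46.2)], ["bio1.tif"])
req_text = req.read_text()
```

Generating these files programmatically for many species/occurrence datasets is exactly the repetitive task the QGIS plugin takes over from the command line.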

  19. Model for the water-amorphous silica interface: the undissociated surface.

    PubMed

    Hassanali, Ali A; Singer, Sherwin J

    2007-09-27

    The physical and chemical properties of the amorphous silica-water interface are of crucial importance for a fundamental understanding of electrochemical and electrokinetic phenomena, and for various applications including chromatography, sensors, metal ion extraction, and the construction of micro- and nanoscale devices. A model for the undissociated amorphous silica-water interface reported here is a step toward a practical microscopic model of this important system. We have extended the popular BKS and SPC/E models for bulk silica and water to describe the hydrated, hydroxylated amorphous silica surface. The parameters of our model were determined using ab initio quantum chemical studies on small fragments. Our model will be useful in empirical potential studies, and as a starting point for ab initio molecular dynamics calculations. At this stage, we present a model for the undissociated surface. Our calculated value for the heat of immersion, 0.3 J·m⁻², falls within the range of reported experimental values of 0.2-0.8 J·m⁻². We also study the perturbation of water properties near the silica-water interface. The disordered surface is characterized by regions that are hydrophilic and hydrophobic, depending on the statistical variations in silanol group density.
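The BKS silica potential mentioned above has a Buckingham-plus-Coulomb pair form, V(r) = q_i q_j k_e / r + A exp(-b r) - C / r^6. This sketch evaluates that form with the commonly quoted BKS Si-O parameter set (eV and ångström units); treat the numbers as illustrative, and note the full interface model of the paper adds SPC/E water and surface silanol terms on top of this.

```python
import math

# Buckingham + Coulomb pair energy in the BKS form:
#   V(r) = q_i q_j k_e / r + A exp(-b r) - C / r^6
# Parameters: commonly quoted BKS Si-O set (illustrative), eV / Angstrom units.

KE = 14.3996                         # Coulomb constant e^2/(4*pi*eps0), eV*Angstrom
Q = {"Si": 2.4, "O": -1.2}           # BKS partial charges
BUCK = {("O", "Si"): (18003.7572, 4.87318, 133.5381)}  # A (eV), b (1/A), C (eV*A^6)

def bks_pair_energy(sp1, sp2, r):
    """Pair energy between species sp1 and sp2 at separation r (Angstrom)."""
    a, b, c = BUCK[tuple(sorted((sp1, sp2)))]
    return Q[sp1] * Q[sp2] * KE / r + a * math.exp(-b * r) - c / r**6

e_at_bond = bks_pair_energy("Si", "O", 1.6)   # near the Si-O bond length
```

Near the Si-O bond length the Coulomb attraction dominates and the pair energy is strongly negative, which is what holds the tetrahedral network together in empirical-potential simulations.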

  20. Modeling strategic use of human computer interfaces with novel hidden Markov models.

    PubMed

    Mariano, Laura J; Poore, Joshua C; Krum, David M; Schwartz, Jana L; Coskren, William D; Jones, Eric M

    2015-01-01

    Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM) for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game's functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic task pursuit.
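The BP-HMM analysis above decomposes activity logs into shared latent behavior states. As a far simpler stand-in, this sketch Viterbi-decodes a plain two-state HMM ("explore" vs "exploit") over a toy logged action sequence; the state names and all transition/emission numbers are invented for illustration and are not the paper's model.

```python
import numpy as np

# Toy two-state HMM decode of an activity log (illustrative stand-in for the
# nonparametric BP-HMM used in the paper). All probabilities are invented.

states = ["explore", "exploit"]
# actions: 0 = try a new interface element, 1 = repeat a known action
pi = np.log(np.array([0.6, 0.4]))                # initial state probabilities
A = np.log(np.array([[0.8, 0.2], [0.3, 0.7]]))   # transition matrix (from, to)
B = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))   # emission matrix (state, action)

def viterbi(obs):
    n, t = len(states), len(obs)
    delta = np.full((t, n), -np.inf)             # best log-prob ending in state
    back = np.zeros((t, n), dtype=int)           # backpointers
    delta[0] = pi + B[:, obs[0]]
    for i in range(1, t):
        scores = delta[i - 1][:, None] + A       # (from, to)
        back[i] = scores.argmax(axis=0)
        delta[i] = scores.max(axis=0) + B[:, obs[i]]
    path = [int(delta[-1].argmax())]
    for i in range(t - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return [states[s] for s in reversed(path)]

log = [0, 0, 0, 1, 1, 1]                         # novel actions, then repeats
decoded = viterbi(log)
```

On this toy log the decoder labels the early novel actions as "explore" and the later repeats as "exploit", which is the kind of behavioral segmentation the paper extracts, at scale and nonparametrically, from real software activity logs.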

  2. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
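Step (3) of the method above connects trace segments when the angle between their directions falls below a threshold (the paper recommends 50° to 70°). This 2-D toy sketch shows only that connection rule; the geometry and threshold handling are simplified, whereas the real method operates on feature points of a triangulated 3-D surface mesh.

```python
import math

# Toy sketch of the "trace segment connection" step: adjacent segments are
# joined when the angle between their directions is below a threshold.

def direction(seg):
    """Direction angle of a segment from its first to its last point."""
    (x0, y0), (x1, y1) = seg[0], seg[-1]
    return math.atan2(y1 - y0, x1 - x0)

def angle_between(a, b):
    """Undirected angle between two line directions, in [0, pi/2]."""
    d = abs(a - b) % math.pi
    return min(d, math.pi - d)

def connect(segments, angle_threshold_deg=60.0):
    thr = math.radians(angle_threshold_deg)
    merged = [list(segments[0])]
    for seg in segments[1:]:
        if angle_between(direction(merged[-1]), direction(seg)) < thr:
            merged[-1].extend(seg)       # connect into one longer trace
        else:
            merged.append(list(seg))     # start a new trace
    return merged

segs = [[(0, 0), (1, 0.1)],              # nearly horizontal
        [(1.2, 0.15), (2.0, 0.3)],       # continues the same trend
        [(2.1, 0.3), (2.1, 1.5)]]        # nearly vertical: a different trace
traces = connect(segs)
```

The two near-collinear segments merge into a single trace while the near-vertical one stays separate, mirroring how the angle threshold prevents unrelated discontinuity traces from being fused.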

  3. Multi-fractal analysis for vehicle distribution based on cellular automaton model

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Li, Shi-Gao

    2015-09-01

    It is well known that traffic flow presents multi-fractal characteristics at time scales. The aim of this study is to test its multi-fractality at spatial scales. The vehicular cellular automaton (CA) model is chosen as a tool to get vehicle positions on a single lane road. First, the multi-fractality of the vehicle distribution is checked, and multi-fractal spectrums are plotted. Second, analysis results show that the width of a multi-fractal spectrum expresses the ratio of the maximum to minimum densities, and the height difference between the left and right vertexes represents the relative size between the numbers of sections with the maximum and minimum densities. Finally, the effects of the random deceleration probability and the average density on homogeneity of vehicle distribution are analyzed. The results show that random deceleration increases the ratio of the maximum to minimum densities, and decreases the relative size between the numbers of sections with the maximum and minimum densities, when the global density is limited to a specific range. Therefore, the multi-fractal spectrum can be used to quantify the homogeneity of spatial distribution of traffic flow.
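Since the spectrum width above reflects the ratio of maximum to minimum section densities, that ratio can be computed directly from an occupancy array of the kind a CA model produces. The sketch below does this on a toy 0/1 occupancy array (1 = cell holds a vehicle); the proxy "width ~ log(rho_max / rho_min)" is only indicative and does not reproduce the paper's full multi-fractal spectrum computation.

```python
import numpy as np

# Toy occupancy array standing in for CA output: four stretches of road with
# different vehicle densities (invented for illustration).
rng = np.random.default_rng(1)
road = (rng.random(1000) < np.repeat([0.8, 0.2, 0.5, 0.1], 250)).astype(int)

def section_densities(occupancy, n_sections):
    """Mean occupancy (local density) of each equal-length road section."""
    return occupancy.reshape(n_sections, -1).mean(axis=1)

rho = section_densities(road, 20)
rho = rho[rho > 0]                       # guard against empty sections
ratio = rho.max() / rho.min()            # max/min density ratio
width_proxy = np.log(ratio)              # grows as the flow gets less homogeneous
```

A perfectly homogeneous road would give ratio 1 and a zero-width proxy; the more the densities spread, the wider the spectrum, which is the homogeneity measure the abstract describes.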

  4. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that are consistent with existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of
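    The model-fitting-as-optimization idea in the Background can be illustrated with a minimal global random search over a cost function. This is a generic sketch, not the paper's Bayesian statistical model-checking algorithm; the function names and interface are assumptions.

    ```python
    import random

    def fit_parameters(model, data, bounds, n_iter=2000, seed=0):
        """Minimal random-search fit: minimise a sum-of-squares cost
        between simulated and observed data.

        model(params) returns simulated observations comparable to
        `data`; `bounds` is a list of (lo, hi) ranges, one per
        parameter."""
        rng = random.Random(seed)

        def cost(p):
            return sum((s - d) ** 2 for s, d in zip(model(p), data))

        best = [rng.uniform(lo, hi) for lo, hi in bounds]
        best_cost = cost(best)
        for _ in range(n_iter):
            cand = [rng.uniform(lo, hi) for lo, hi in bounds]
            c = cost(cand)
            if c < best_cost:
                best, best_cost = cand, c
        return best, best_cost
    ```

    In practice this pure random search would be replaced by the combined local/global strategies (or the Bayesian techniques) the abstract mentions.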

  5. Modeling the Effect of Interface Wear on Fatigue Hysteresis Behavior of Carbon Fiber-Reinforced Ceramic-Matrix Composites

    NASA Astrophysics Data System (ADS)

    Longbiao, Li

    2015-12-01

    An analytical method has been developed to investigate the effect of interface wear on fatigue hysteresis behavior in carbon fiber-reinforced ceramic-matrix composites (CMCs). The damage mechanisms, i.e., matrix multicracking, fiber/matrix interface debonding and interface wear, fiber fracture, slip and pull-out, have been considered. The statistical matrix multicracking model and fracture mechanics interface debonding criterion were used to determine the matrix crack spacing and interface debonded length. Upon first loading to fatigue peak stress and subsequent cyclic loading, the fiber failure probabilities and fracture locations were determined by combining the interface wear model and fiber statistical failure model, based on the assumption that the loads carried by broken and intact fibers satisfy the global load sharing criterion. The effects of matrix properties, i.e., matrix cracking characteristic strength and matrix Weibull modulus, interface properties, i.e., interface shear stress and interface debonded energy, fiber properties, i.e., fiber Weibull modulus and fiber characteristic strength, and cycle number on fiber failure, hysteresis loops and interface slip have been investigated. The hysteresis loops under fatigue loading from the present analytical method were in good agreement with experimental data.
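    The global load sharing assumption combined with a Weibull fiber failure law can be sketched as a fixed-point problem: broken fibers shed load onto the intact ones, which raises their stress and failure probability. A simplified scalar sketch (the paper's full model also tracks interface wear and slip; the iteration scheme here is an assumption):

    ```python
    import math

    def intact_fiber_stress(sigma_applied, sigma_c, m, tol=1e-10):
        """Stress T on intact fibers under global load sharing (GLS).

        Intact fibers carry sigma_applied / (1 - P(T)), where
        P(T) = 1 - exp(-(T / sigma_c)**m) is a Weibull failure
        probability with characteristic strength sigma_c and Weibull
        modulus m.  Solved by fixed-point iteration; returns (T, P)."""
        T = sigma_applied
        for _ in range(1000):
            P = 1.0 - math.exp(-((T / sigma_c) ** m))
            T_new = sigma_applied / (1.0 - P)
            if abs(T_new - T) < tol:
                return T_new, P
            T = T_new
        raise RuntimeError("no convergence (possibly above composite strength)")
    ```

    Well below the composite strength the correction is tiny; near it, the iteration diverges, which mirrors the physical instability of GLS fiber failure.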

  6. A DIFFUSE-INTERFACE APPROACH FOR MODELING TRANSPORT, DIFFUSION AND ADSORPTION/DESORPTION OF MATERIAL QUANTITIES ON A DEFORMABLE INTERFACE

    PubMed Central

    Teigen, Knut Erik; Li, Xiangrong; Lowengrub, John; Wang, Fan; Voigt, Axel

    2010-01-01

    A method is presented to solve two-phase problems involving a material quantity on an interface. The interface can be advected, stretched, and change topology, and material can be adsorbed to or desorbed from it. The method is based on the use of a diffuse interface framework, which allows a simple implementation using standard finite-difference or finite-element techniques. Here, finite-difference methods on a block-structured adaptive grid are used, and the resulting equations are solved using a non-linear multigrid method. Interfacial flow with soluble surfactants is used as an example of the application of the method, and several test cases are presented demonstrating its accuracy and convergence. PMID:21373370
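    The core diffuse-interface idea can be illustrated in one dimension: the sharp interface is replaced by a smooth indicator varying over a thickness of order epsilon, and interfacial quantities (such as an adsorbed surfactant) are localised with its gradient. A generic sketch, not the paper's scheme; function names and the tanh profile choice are assumptions.

    ```python
    import math

    def phase_field(x, center=0.0, eps=0.1):
        """Diffuse-interface indicator: ~1 in one phase, ~0 in the
        other, varying smoothly over a thickness ~eps around `center`."""
        return 0.5 * (1.0 + math.tanh((x - center) / (2.0 * eps)))

    def interface_weight(x, center=0.0, eps=0.1, h=1e-6):
        """|d(phi)/dx| by central difference: a smoothed surface delta
        used to restrict bulk equations to the interfacial region."""
        return abs(phase_field(x + h, center, eps)
                   - phase_field(x - h, center, eps)) / (2.0 * h)
    ```

    Because all fields are smooth, standard finite-difference or finite-element discretisations apply directly, which is the implementation advantage the abstract highlights.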

  7. Bilinear modeling of EMG signals to extract user-independent features for multiuser myoelectric interface.

    PubMed

    Matsubara, Takamitsu; Morimoto, Jun

    2013-08-01

    In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose a bilinear model for EMG signals that is composed of two linear factors: 1) user dependent and 2) motion dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can extract the user-independent features from the novel user data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from subject forearms. Our method resulted in 73% accuracy, which was statistically significantly different from the accuracy of standard non-multiuser interfaces, as the result of a two-sample t-test at a significance level of 1%.
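    The two-factor decomposition can be sketched as a matrix factorisation Y = A X, with A the user-dependent mixing and X the motion-dependent (user-independent) features. The dimensions, names, and least-squares adaptation below are illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def adapt_user_factor(Y_new, X_motion):
        """Estimate a new user's dependent factor from a few
        calibration trials, assuming the bilinear form Y = A @ X with
        A (channels x factors) user dependent and X (factors x samples)
        shared across users.  Least-squares fit: A = argmin ||Y - A X||."""
        A_T, *_ = np.linalg.lstsq(X_motion.T, Y_new.T, rcond=None)
        return A_T.T

    def user_independent_features(Y, A_user):
        """Recover motion-dependent (user-independent) features from a
        known user's signals: X = pinv(A) @ Y."""
        return np.linalg.pinv(A_user) @ Y
    ```

    With the user factor estimated from a few labelled trials, novel-user signals can be mapped into the shared feature space where the motion classifier operates.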

  8. Integration Of Heat Transfer Coefficient In Glass Forming Modeling With Special Interface Element

    NASA Astrophysics Data System (ADS)

    Moreau, P.; César de Sá, J.; Grégoire, S.; Lochegnies, D.

    2007-05-01

    Numerical modeling of glass forming processes requires accurate knowledge of the heat exchange between the glass and the forming tools. A laboratory test is developed to determine the evolution of the heat transfer coefficient under different glass/mould contact conditions (contact pressure, temperature, lubrication…). In this paper, trials are performed to determine heat transfer coefficient evolutions under experimental conditions close to those of the industrial blow-and-blow process. In parallel with this work, a special interface element is implemented in a commercial Finite Element code in order to deal with heat transfer between glass and mould for non-matching meshes and evolving contact. This special interface element, implemented using user subroutines, makes it possible to introduce the measured heat transfer coefficient evolutions into the numerical models at the glass/mould interface as a function of the local temperatures, contact pressures, contact time and type of lubrication. The blow-and-blow forming simulation of a perfume bottle is finally performed to assess the performance of the special interface element.

  9. Integration Of Heat Transfer Coefficient In Glass Forming Modeling With Special Interface Element

    SciTech Connect

    Moreau, P.; Gregoire, S.; Lochegnies, D.; Cesar de Sa, J.

    2007-05-17

    Numerical modeling of glass forming processes requires accurate knowledge of the heat exchange between the glass and the forming tools. A laboratory test is developed to determine the evolution of the heat transfer coefficient under different glass/mould contact conditions (contact pressure, temperature, lubrication...). In this paper, trials are performed to determine heat transfer coefficient evolutions under experimental conditions close to those of the industrial blow-and-blow process. In parallel with this work, a special interface element is implemented in a commercial Finite Element code in order to deal with heat transfer between glass and mould for non-matching meshes and evolving contact. This special interface element, implemented using user subroutines, makes it possible to introduce the measured heat transfer coefficient evolutions into the numerical models at the glass/mould interface as a function of the local temperatures, contact pressures, contact time and type of lubrication. The blow-and-blow forming simulation of a perfume bottle is finally performed to assess the performance of the special interface element.

  10. 3-D FEM Modeling of fiber/matrix interface debonding in UD composites including surface effects

    NASA Astrophysics Data System (ADS)

    Pupurs, A.; Varna, J.

    2012-02-01

    Fiber/matrix interface debond growth is one of the main mechanisms of damage evolution in unidirectional (UD) polymer composites. Because the fiber strain to failure in polymer composites is smaller than that of the matrix, multiple fiber breaks occur at random positions when high mechanical stress is applied to the composite. The energy released by each fiber break is usually larger than that necessary to create the break; therefore, partial debonding of the fiber/matrix interface is typically observed. Thus the stiffness reduction of the UD composite has contributions from both the fiber breaks and the interface debonds. The aim of this paper is to analyze debond growth in carbon fiber/epoxy and glass fiber/epoxy UD composites using fracture mechanics principles, by calculation of the energy release rate GII. A 3-D FEM model is developed to calculate the energy release rate for fiber/matrix interface debonds at different locations in the composite, including the composite surface region where the stress state differs from that in the bulk composite. In the model, an individual partially debonded fiber is surrounded by a matrix region and embedded in a homogenized composite.

  11. 19 CFR 24.25 - Statement processing and Automated Clearinghouse.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 1 2013-04-01 2013-04-01 false Statement processing and Automated Clearinghouse... processing and Automated Clearinghouse. (a) Description. Statement processing is a voluntary automated program for participants in the Automated Broker Interface (ABI), allowing the grouping of...

  12. 19 CFR 24.25 - Statement processing and Automated Clearinghouse.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 1 2014-04-01 2014-04-01 false Statement processing and Automated Clearinghouse... processing and Automated Clearinghouse. (a) Description. Statement processing is a voluntary automated program for participants in the Automated Broker Interface (ABI), allowing the grouping of...

  13. 19 CFR 24.25 - Statement processing and Automated Clearinghouse.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 1 2012-04-01 2012-04-01 false Statement processing and Automated Clearinghouse... processing and Automated Clearinghouse. (a) Description. Statement processing is a voluntary automated program for participants in the Automated Broker Interface (ABI), allowing the grouping of...

  14. 19 CFR 24.25 - Statement processing and Automated Clearinghouse.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Statement processing and Automated Clearinghouse... processing and Automated Clearinghouse. (a) Description. Statement processing is a voluntary automated program for participants in the Automated Broker Interface (ABI), allowing the grouping of...

  15. Optics of an opal modeled with a stratified effective index and the effect of the interface

    NASA Astrophysics Data System (ADS)

    Maurin, Isabelle; Moufarej, Elias; Laliotis, Athanasios; Bloch, Daniel

    2015-08-01

    Reflection and transmission for an artificial opal are described through a model of a stratified medium based upon a one-dimensional variation of an effective index. The model is notably applicable to a Langmuir-Blodgett type disordered opal. Light scattering is accounted for by a phenomenological absorption. The interface region between the opal and the substrate (or the vacuum) induces a periodicity break in the photonic crystal arrangement, which exhibits a prominent influence on the reflection, notably away from the Bragg reflection peak. Experimental results are compared to our model. The model is extendable to inverse opals, stacked cylinders, or irradiation by evanescent waves.

  16. Interfacing MATLAB and Python Optimizers to Black-Box Environmental Simulation Models

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Leung, K.; Tolson, B.

    2009-12-01

    A common approach for utilizing environmental models in a management or policy-analysis context is to incorporate them into a simulation-optimization framework - where an underlying process-based environmental model is linked with an optimization search algorithm. The optimization search algorithm iteratively adjusts various model inputs (i.e. parameters or design variables) in order to minimize an application-specific objective function computed on the basis of model outputs (i.e. response variables). Numerous optimization algorithms have been applied to the simulation-optimization of environmental systems and this research investigated the use of optimization libraries and toolboxes that are readily available in MATLAB and Python - two popular high-level programming languages. Inspired by model-independent calibration codes (e.g. PEST and UCODE), a small piece of interface software (known as PIGEON) was developed. PIGEON allows users to interface Python and MATLAB optimizers with arbitrary black-box environmental models without writing any additional interface code. An initial set of benchmark tests (involving more than 20 MATLAB and Python optimization algorithms) were performed to validate the interface software - results highlight the need to carefully consider such issues as numerical precision in output files and enforcement (or not) of parameter limits. Additional benchmark testing considered the problem of fitting isotherm expressions to laboratory data - with an emphasis on dual-mode expressions combining non-linear isotherms with a linear partitioning component. With respect to the selected isotherm fitting problems, derivative-free search algorithms significantly outperformed gradient-based algorithms. Attempts to improve gradient-based performance, via parameter tuning and also via several alternative multi-start approaches, were largely unsuccessful.
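    The PEST/UCODE-style loop the abstract describes (substitute parameters into an input-file template, run the black-box model, parse the response back out) can be sketched as follows. File names, the template placeholder syntax, and the output key are illustrative assumptions, not PIGEON's actual conventions; `simulate` stands in for invoking the external model executable.

    ```python
    import tempfile
    from pathlib import Path

    def evaluate(params, template, simulate, workdir):
        """One objective-function evaluation against a black-box model:
        write the input file from a template, run the model, and parse
        an 'objective = <value>' line from the output file."""
        workdir = Path(workdir)
        text = template
        for name, value in params.items():
            text = text.replace("{" + name + "}", repr(value))
        (workdir / "model.in").write_text(text)
        simulate(workdir / "model.in", workdir / "model.out")  # black-box step
        for line in (workdir / "model.out").read_text().splitlines():
            if line.startswith("objective"):
                return float(line.split("=")[1])
        raise ValueError("objective not found in model output")

    def _demo():
        """Toy 'model' computing (x - 3)^2 through the file interface."""
        def simulate(inp, out):
            x = float(inp.read_text().split("=")[1])
            out.write_text(f"objective = {(x - 3.0) ** 2}\n")
        with tempfile.TemporaryDirectory() as d:
            return evaluate({"x": 5.0}, "x = {x}\n", simulate, d)
    ```

    The benchmark issues the abstract raises (numerical precision in output files, enforcement of parameter limits) live exactly at this file-writing/parsing boundary.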

  17. Phononic band structures and stability analysis using radial basis function method with consideration of different interface models

    NASA Astrophysics Data System (ADS)

    Yan, Zhi-zhong; Wei, Chun-qiu; Zheng, Hui; Zhang, Chuanzeng

    2016-05-01

    In this paper, a meshless radial basis function (RBF) collocation method is developed to calculate the phononic band structures taking account of different interface models. The present method is validated using the analytical results for the case of perfect interfaces. The stability is fully discussed based on the types of RBFs, the shape parameters and the node numbers. The advantages of the proposed RBF method over the finite element method (FEM) are also illustrated. In addition, the influences of the spring-interface model and the three-phase model on the wave band gaps are investigated by comparing with the perfect interfaces. For different interface models, the effects of various interface conditions, length ratios and density ratios on the band gap width are analyzed. The comparison results of the two models show that the weakly bonded interface has a significant effect on the properties of phononic crystals. Besides, the band structures of the spring-interface model have certain similarities and differences with those of the three-phase model.
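    The stability dependence on the shape parameter can be seen directly in the conditioning of the RBF collocation matrix. A 1-D illustration with the multiquadric basis (one common RBF choice; the paper weighs several, and its collocation matrices for the elastodynamic problem are more involved):

    ```python
    import numpy as np

    def rbf_matrix(nodes, shape_param):
        """Multiquadric RBF collocation matrix phi(r) = sqrt(r^2 + c^2)
        on a 1-D node set.  Its condition number grows rapidly as the
        shape parameter c increases (flatter basis functions), which is
        the accuracy/stability trade-off discussed in the abstract."""
        r = np.abs(nodes[:, None] - nodes[None, :])
        return np.sqrt(r ** 2 + shape_param ** 2)

    nodes = np.linspace(0.0, 1.0, 20)
    cond_small = np.linalg.cond(rbf_matrix(nodes, 0.1))
    cond_large = np.linalg.cond(rbf_matrix(nodes, 2.0))  # flatter -> worse conditioned
    ```

    Sweeping the shape parameter and node count in this way is the kind of stability study the abstract reports.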

  18. Evaluation of automated statistical shape model based knee kinematics from biplane fluoroscopy.

    PubMed

    Baka, Nora; Kaptein, Bart L; Giphart, J Erik; Staring, Marius; de Bruijne, Marleen; Lelieveldt, Boudewijn P F; Valstar, Edward

    2014-01-01

    State-of-the-art fluoroscopic knee kinematic analysis methods require the patient-specific bone shapes segmented from CT or MRI. Substituting the patient-specific bone shapes with personalizable models, such as statistical shape models (SSM), could eliminate the CT/MRI acquisitions, and thereby decrease costs and radiation dose (when eliminating CT). SSM-based kinematics, however, have not yet been evaluated on clinically relevant joint motion parameters. Therefore, in this work the applicability of SSMs for computing knee kinematics from biplane fluoroscopic sequences was explored. Kinematic precision with an edge-based automated bone tracking method using SSMs was evaluated on 6 cadaveric and 10 in-vivo fluoroscopic sequences. The SSMs of the femur and the tibia-fibula were created using 61 training datasets. Kinematic precision was determined for medial-lateral tibial shift, anterior-posterior tibial drawer, joint distraction-contraction, flexion, tibial rotation and adduction. The relationship between kinematic precision and bone shape accuracy was also investigated. The SSM-based kinematics resulted in sub-millimeter (0.48-0.81 mm) and approximately 1° (0.69-0.99°) median precision on the cadaveric knees compared to bone-marker-based kinematics. The precision on the in-vivo datasets was comparable to that of the cadaveric sequences when evaluated with a semi-automatic reference method. These results are promising, though further work is necessary to reach the accuracy of CT-based kinematics. We also demonstrated that a better shape reconstruction accuracy does not automatically imply a better kinematic precision. This result suggests that the ability of accurately fitting the edges in the fluoroscopic sequences has a larger role in determining the kinematic precision than that of the overall 3D shape accuracy.

  19. Evidence evaluation in fingerprint comparison and automated fingerprint identification systems--Modeling between finger variability.

    PubMed

    Egli Anthonioz, N M; Champod, C

    2014-02-01

    In the context of the investigation of the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study presents investigations into the variability of scores from an AFIS system when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios, which allows the evaluation of mark-to-print comparisons. In particular, this model, through its use of AFIS technology, benefits from the possibility of using a large amount of data, as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores issued from comparisons between impressions from the same source and showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR. We refer to it by the generic term of between-finger variability. The issues addressed in this paper in relation to between-finger variability are the required sample size, the influence of the finger number and general pattern, as well as that of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination as defined by the mark. Furthermore, strategies of obtaining between-finger variability when these elements cannot be conclusively seen on the mark (and its position with respect to other marks for finger number) have been presented. These results immediately allow case-by-case estimation of the between-finger variability in an operational setting.
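    The LR denominator the paper focuses on can be approximated, in its crudest form, from the empirical distribution of non-match AFIS scores. The estimator below is a deliberately simple sketch (an empirical tail probability with a +1 continuity correction so the estimate is never zero); the paper models the score distribution more carefully, and reports that around 10,000 scores from the right finger-number/general-pattern combination are needed.

    ```python
    def between_finger_lr_denominator(nonmatch_scores, observed_score):
        """Estimate P(score >= s | different source) from AFIS
        comparisons of the questioned mark against a database of
        non-matching sources (the between-finger variability)."""
        n = len(nonmatch_scores)
        exceed = sum(1 for s in nonmatch_scores if s >= observed_score)
        return (exceed + 1) / (n + 1)
    ```

    Dividing a numerator built from same-source, same-configuration comparisons by this quantity yields the likelihood ratio for the mark-to-print comparison.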

  1. An automated system to simulate the River discharge in Kyushu Island using the H08 model

    NASA Astrophysics Data System (ADS)

    Maji, A.; Jeon, J.; Seto, S.

    2015-12-01

    Kyushu Island is located in the southwestern part of Japan, and it is often affected by typhoons and a Baiu front. Severe water-related disasters have been recorded in Kyushu Island. On the other hand, because of the high population density and the needs of crop growth, water resources are an important issue for Kyushu Island. The simulation of river discharge is important for water resource management and early warning of water-related disasters. This study attempts to apply the H08 model to simulate river discharge in Kyushu Island. Geospatial meteorological and topographical data were obtained from the Japanese Ministry of Land, Infrastructure, Transport and Tourism (MLIT) and the Automated Meteorological Data Acquisition System (AMeDAS) of the Japan Meteorological Agency (JMA). The number of AMeDAS observation stations is limited and not quite satisfactory for the application of water resources models in Kyushu, so it is necessary to spatially interpolate the point data to produce a grid dataset. The meteorological grid dataset is produced by considering elevation dependence. Solar radiation is estimated from hourly sunshine duration by a conventional formula. We improved the accuracy of the interpolated data simply by considering elevation dependence and found that the bias is related to geographical location. The rain/snow classification is done by the H08 model and is validated by comparing estimated and observed snow rates; the estimates tend to be larger than the corresponding observed values. A system to automatically produce a daily meteorological grid dataset is being constructed. The geospatial river network data were produced by ArcGIS and utilized in the H08 model to simulate the river discharge. First, this research compares simulated and measured specific discharge, which is the ratio of discharge to watershed area; significant errors between simulated and measured data were seen in some rivers. Second, the outputs by the coupled model including crop growth

  2. An SPH model for multiphase flows with complex interfaces and large density differences

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Zong, Z.; Liu, M. B.; Zou, L.; Li, H. T.; Shu, C.

    2015-02-01

    In this paper, an improved SPH model for multiphase flows with complex interfaces and large density differences is developed. The multiphase SPH model is based on the assumption of pressure continuity over the interfaces and avoids directly using the information of neighboring particles' densities or masses in solving the governing equations. In order to improve computational accuracy and to obtain smooth pressure fields, a corrected density re-initialization is applied. A coupled dynamic solid boundary treatment (SBT) is implemented both to reduce numerical oscillations and to prevent unphysical particle penetration in the boundary area. The density correction and coupled dynamic SBT algorithms are modified to adapt to the density discontinuity on fluid interfaces in multiphase simulation. A cut-off value of the particle density is set to avoid negative pressure, which can lead to severe numerical difficulties and may even terminate the simulations. Three representative numerical examples, including a Rayleigh-Taylor instability test, a non-Boussinesq problem and a dam breaking simulation, are presented and compared with analytical results or experimental data. It is demonstrated that the present SPH model is capable of modeling complex multiphase flows with large interfacial deformations and density ratios.
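    A corrected density re-initialization of the kind mentioned above is commonly implemented as a Shepard-type filter; the 1-D sketch below uses a Gaussian kernel and is an illustrative stand-in, not the paper's full multiphase formulation.

    ```python
    import math

    def shepard_density(positions, masses, densities, h):
        """Corrected density re-initialisation for SPH particles:
        rho_i = sum_j m_j W_ij / sum_j (m_j / rho_j) W_ij,
        i.e. the standard SPH density summation renormalised by the
        kernel partition of unity, which smooths the pressure field
        near interfaces and free surfaces.  Gaussian kernel, 1-D."""
        def W(r):
            q = r / h
            return math.exp(-q * q) / (math.sqrt(math.pi) * h)

        out = []
        for xi in positions:
            num = den = 0.0
            for xj, mj, rhoj in zip(positions, masses, densities):
                w = W(xi - xj)
                num += mj * w
                den += (mj / rhoj) * w
            out.append(num / den)
        return out
    ```

    The filter reproduces a constant density field exactly, which is what makes it a safe periodic correction step.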

  3. Multiscale Modeling of Intergranular Fracture in Aluminum: Constitutive Relation For Interface Debonding

    NASA Technical Reports Server (NTRS)

    Yamakov, V.; Saether, E.; Glaessgen, E. H.

    2008-01-01

    Intergranular fracture is a dominant mode of failure in ultrafine grained materials. In the present study, the atomistic mechanisms of grain-boundary debonding during intergranular fracture in aluminum are modeled using a coupled molecular dynamics finite element simulation. Using a statistical mechanics approach, a cohesive-zone law in the form of a traction-displacement constitutive relationship, characterizing the load transfer across the plane of a growing edge crack, is extracted from atomistic simulations and then recast in a form suitable for inclusion within a continuum finite element model. The cohesive-zone law derived by the presented technique is free of finite size effects and is statistically representative for describing the interfacial debonding of a grain boundary (GB) interface examined at atomic length scales. By incorporating the cohesive-zone law in cohesive-zone finite elements, the debonding of a GB interface can be simulated in a coupled continuum-atomistic model, in which a crack starts in the continuum environment, smoothly penetrates the continuum-atomistic interface, and continues its propagation in the atomistic environment. This study is a step towards relating atomistically derived decohesion laws to macroscopic predictions of fracture and constructing multiscale models for nanocrystalline and ultrafine grained materials.
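    A traction-displacement constitutive relation of the kind extracted above is often represented in continuum FE codes by a bilinear cohesive law. The paper derives its law from atomistics; the standard bilinear shape below is only a common stand-in with assumed parameter names.

    ```python
    def cohesive_traction(delta, delta0, delta_f, t_max):
        """Bilinear traction-separation (cohesive-zone) law: traction
        rises linearly to a peak t_max at separation delta0, then
        softens linearly to zero at delta_f (complete decohesion)."""
        if delta <= 0.0:
            return 0.0
        if delta < delta0:
            return t_max * delta / delta0          # elastic loading
        if delta < delta_f:
            return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
        return 0.0                                  # fully debonded
    ```

    Evaluating such a law at each integration point of a cohesive-zone element is how the atomistically derived relation enters the coupled continuum-atomistic model.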

  4. Including nonequilibrium interface kinetics in a continuum model for melting nanoscaled particles

    PubMed Central

    Back, Julian M.; McCue, Scott W.; Moroney, Timothy J.

    2014-01-01

    The melting temperature of a nanoscaled particle is known to decrease as the curvature of the solid-melt interface increases. This relationship is most often modelled by a Gibbs–Thomson law, with the decrease in melting temperature proposed to be a product of the curvature of the solid-melt interface and the surface tension. Such a law must break down for sufficiently small particles, since the curvature becomes singular in the limit that the particle radius vanishes. Furthermore, the use of this law as a boundary condition for a Stefan-type continuum model is problematic because it leads to a physically unrealistic form of mathematical blow-up at a finite particle radius. By numerical simulation, we show that the inclusion of nonequilibrium interface kinetics in the Gibbs–Thomson law regularises the continuum model, so that the mathematical blow up is suppressed. As a result, the solution continues until complete melting, and the corresponding melting temperature remains finite for all time. The results of the adjusted model are consistent with experimental findings of abrupt melting of nanoscaled particles. This small-particle regime appears to be closely related to the problem of melting a superheated particle. PMID:25399918
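    The regularising effect of interface kinetics can be caricatured with a lumped radius equation in which the melting rate is set by the driving force (superheat plus a curvature term) times a finite kinetic mobility. All coefficients below are non-dimensional, illustrative stand-ins, not the paper's model; the point of the sketch is only that the radius reaches zero in finite time (complete melting) rather than the solution failing at a finite radius.

    ```python
    def melt_radius_history(r0, dt=1e-4, mu=1.0, superheat=1.0, gamma=0.05):
        """Integrate dR/dt = -mu * (superheat + gamma / R): melting
        driven by superheat and Gibbs-Thomson curvature, with a finite
        kinetic mobility mu.  Returns a list of (time, radius) pairs
        ending at complete melting."""
        r, t, history = r0, 0.0, [(0.0, r0)]
        while r > 0.0:
            r -= dt * mu * (superheat + gamma / r)
            t += dt
            history.append((t, max(r, 0.0)))
        return history
    ```

    The curvature term accelerates the final stage of melting, consistent with the abrupt melting of nanoscaled particles the abstract mentions.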

  5. The DaveMLTranslator: An Interface for DAVE-ML Aerodynamic Models

    NASA Technical Reports Server (NTRS)

    Hill, Melissa A.; Jackson, E. Bruce

    2007-01-01

    It can take weeks or months to incorporate a new aerodynamic model into a vehicle simulation and validate the performance of the model. The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) has been proposed as a means to reduce the time required to accomplish this task by defining a standard format for typical components of a flight dynamic model. The purpose of this paper is to describe an object-oriented C++ implementation of a class that interfaces a vehicle subsystem model specified in DAVE-ML and a vehicle simulation. Using the DaveMLTranslator class, aerodynamic or other subsystem models can be automatically imported and verified at run-time, significantly reducing the elapsed time between receipt of a DAVE-ML model and its integration into a simulation environment. The translator performs variable initializations, data table lookups, and mathematical calculations for the aerodynamic build-up, and executes any embedded static check-cases for verification. The implementation is efficient, enabling real-time execution. Simple interface code for the model inputs and outputs is the only requirement to integrate the DaveMLTranslator as a vehicle aerodynamic model. The translator makes use of existing table-lookup utilities from the Langley Standard Real-Time Simulation in C++ (LaSRS++). The design and operation of the translator class are described and comparisons with existing conventional C++ aerodynamic models of the same vehicle are given.
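    The import-and-interpolate flow such a translator automates can be sketched against a greatly simplified, DAVE-ML-inspired XML fragment. The real DAVE-ML schema (variableDef, griddedTable, MathML calculations, check-cases) is far richer; the element names and layout below are invented for illustration only.

    ```python
    import xml.etree.ElementTree as ET
    from bisect import bisect_left

    SIMPLE_MODEL = """
    <model>
      <table out="CL" in="alpha">
        <bp>0 5 10</bp>
        <val>0.0 0.55 1.05</val>
      </table>
    </model>
    """

    def evaluate_model(xml_text, inputs):
        """Evaluate 1-D breakpoint tables from a simplified XML model
        description with linear interpolation, keyed by input name."""
        root = ET.fromstring(xml_text)
        outputs = {}
        for tbl in root.iter("table"):
            x = inputs[tbl.get("in")]
            bps = [float(v) for v in tbl.findtext("bp").split()]
            vals = [float(v) for v in tbl.findtext("val").split()]
            i = min(max(bisect_left(bps, x), 1), len(bps) - 1)
            frac = (x - bps[i - 1]) / (bps[i] - bps[i - 1])
            outputs[tbl.get("out")] = vals[i - 1] + frac * (vals[i] - vals[i - 1])
        return outputs
    ```

    A translator class would wrap parsing of this kind behind a fixed simulation interface, plus the run-time verification against embedded check-cases that the abstract describes.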

  6. Automating ground-fixed target modeling with the smart target model generator

    NASA Astrophysics Data System (ADS)

    Verner, D.; Dukes, R.

    2007-04-01

    The Smart Target Model Generator (STMG) is an AFRL/MNAL-sponsored tool for generating 3D building models for use in various weapon effectiveness tools. These include tri-service approved tools such as Modular Effectiveness/Vulnerability Assessment (MEVA), the Building Analysis Module in the Joint Weaponeering System (JWS), PENCRV3D, and WinBlast. STMG also supports internal dispersion modeling of chemical contaminants and can generate infrared or other sensor images. Unlike most CAD models, STMG provides physics-based component properties such as strength, density, reinforcement, and material type. Interior components such as electrical and mechanical equipment, rooms, and ducts are also modeled. Buildings can be created manually with a graphical editor or generated automatically using rule bases that size and place the structural components according to structural engineering principles. In addition to its primary purpose of supporting conventional kinetic munitions, STMG can also be used to support sensor modeling and automatic target recognition.

  7. Cockpit automation

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1988-01-01

    The aims and methods of aircraft cockpit automation are reviewed from a human-factors perspective. Consideration is given to the mixed pilot reception of increased automation, government concern with the safety and reliability of highly automated aircraft, the formal definition of automation, and the ground-proximity warning system and accidents involving controlled flight into terrain. The factors motivating automation include technology availability; safety; economy, reliability, and maintenance; workload reduction and two-pilot certification; more accurate maneuvering and navigation; display flexibility; economy of cockpit space; and military requirements.

  8. Modeling of ultrasound transmission through a solid-liquid interface comprising a network of gas pockets

    SciTech Connect

    Paumel, K.; Baque, F.; Moysan, J.; Corneloup, G.; Chatain, D.

    2011-08-15

    Ultrasonic inspection of sodium-cooled fast reactors requires good acoustic coupling between the transducer and the liquid sodium. Ultrasonic transmission through a solid surface in contact with liquid sodium can be complex due to the presence of microscopic gas pockets entrapped by the surface roughness. Experiments are run using substrates with controlled roughness consisting of a network of holes, and a modeling approach is then developed. In this model, a gas-pocket stiffness at a partially contacting solid-liquid interface is defined. This stiffness is then used to calculate the transmission coefficient of ultrasound through the entire interface. The gas-pocket stiffness has a static component as well as an inertial one, which depends on the ultrasonic frequency and the radiative mass.
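
For context, a textbook quasi-static result (not necessarily the paper's exact formulation): if the partially contacting interface is modelled as a distributed spring of stiffness K per unit area between media of acoustic impedances Z_1 and Z_2, the magnitude of the displacement transmission coefficient is

```latex
|T(\omega)| = \frac{2 Z_1}{Z_1 + Z_2}
\left[ 1 + \left( \frac{\omega Z_1 Z_2}{K \left( Z_1 + Z_2 \right)} \right)^{2} \right]^{-1/2} ,
```

which recovers the welded-contact value 2Z_1/(Z_1+Z_2) as K tends to infinity and falls off at high frequency; the gas-pocket stiffness defined in the paper, with its static and inertial parts, plays the role of K.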

  9. Modeling of ultrasound transmission through a solid-liquid interface comprising a network of gas pockets

    NASA Astrophysics Data System (ADS)

    Paumel, K.; Moysan, J.; Chatain, D.; Corneloup, G.; Baqué, F.

    2011-08-01

    Ultrasonic inspection of sodium-cooled fast reactors requires good acoustic coupling between the transducer and the liquid sodium. Ultrasonic transmission through a solid surface in contact with liquid sodium can be complex due to the presence of microscopic gas pockets entrapped by the surface roughness. Experiments are run using substrates with controlled roughness consisting of a network of holes, and a modeling approach is then developed. In this model, a gas-pocket stiffness at a partially contacting solid-liquid interface is defined. This stiffness is then used to calculate the transmission coefficient of ultrasound through the entire interface. The gas-pocket stiffness has a static component as well as an inertial one, which depends on the ultrasonic frequency and the radiative mass.

  10. Computer modelling of the surface tension of the gas-liquid and liquid-liquid interface.

    PubMed

    Ghoufi, Aziz; Malfreyt, Patrice; Tildesley, Dominic J

    2016-03-01

    This review presents the state of the art in molecular simulations of interfacial systems and in the calculation of the surface tension from the underlying intermolecular potential. We provide a short account of different methodological factors (size effects, truncation procedures, long-range corrections and potential models) that can affect the results of the simulations. Accurate calculations of the surface tension as a function of temperature, pressure and composition are presented for the planar gas-liquid interface of a range of molecular fluids. In particular, we consider the challenging problems of reproducing the interfacial tension of salt solutions as a function of the salt molality; simulations of spherical interfaces, including the calculation of the sign and size of the Tolman length for a spherical droplet; the use of coarse-grained models in the calculation of the interfacial tension of liquid-liquid surfaces; and mesoscopic simulations of oil-water-surfactant interfacial systems.
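
For orientation, the most common mechanical (Kirkwood-Buff) route to the surface tension of a planar interface with normal z integrates the anisotropy of the pressure tensor:

```latex
\gamma = \int_{-\infty}^{+\infty} \left[ p_N(z) - p_T(z) \right] \mathrm{d}z ,
\qquad
p_N = p_{zz}, \quad p_T = \tfrac{1}{2} \left( p_{xx} + p_{yy} \right) ,
```

which is one of the routes whose sensitivity to truncation and long-range corrections the review examines.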

  11. Modeling Geometry and Progressive Failure of Material Interfaces in Plain Weave Composites

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Cheng, Ron-Bin

    2010-01-01

    A procedure combining a geometrically nonlinear, explicit-dynamics contact analysis, computer aided design techniques, and elasticity-based mesh adjustment is proposed to efficiently generate realistic finite element models for meso-mechanical analysis of progressive failure in textile composites. In the procedure, the geometry of fiber tows is obtained by imposing a fictitious expansion on the tows. Meshes resulting from the procedure are conformal with the computed tow-tow and tow-matrix interfaces but are incongruent at the interfaces. The mesh interfaces are treated as cohesive contact surfaces not only to resolve the incongruence but also to simulate progressive failure. The method is employed to simulate debonding at the material interfaces in a ceramic-matrix plain weave composite with matrix porosity and in a polymeric matrix plain weave composite without matrix porosity, both subject to uniaxial cyclic loading. The numerical results indicate progression of the interfacial damage during every loading and reverse loading event in a constant strain amplitude cyclic process. However, the composites show different patterns of damage advancement.

  12. Designing of Multi-Interface Diverging Experiments to Model Rayleigh-Taylor Growth in Supernovae

    NASA Astrophysics Data System (ADS)

    Grosskopf, Michael; Drake, R.; Kuranz, C.; Plewa, T.; Hearn, N.; Meakin, C.; Arnett, D.; Miles, A.; Robey, H.; Hansen, J.; Hsing, W.; Edwards, M.

    2008-05-01

    In previous experiments on the Omega Laser, researchers studying blast-wave-driven instabilities have observed the growth of Rayleigh-Taylor instabilities under conditions scaled to the He/H interface of SN1987A. Most of these experiments have been planar experiments, as the energy available proved unable to accelerate enough mass in a diverging geometry. With the advent of the NIF laser, which can deliver hundreds of kJ to an experiment, it is possible to produce 3D, blast-wave-driven, multiple-interface explosions and to study the mixing that develops. We report scaling simulations to model the interface dynamics of a multilayered, diverging Rayleigh-Taylor experiment for NIF using CALE, a hybrid adaptive Lagrangian-Eulerian code developed at LLNL. Specifically, we looked both qualitatively and quantitatively at the Rayleigh-Taylor growth and multi-interface interactions in mass-scaled, spherically divergent systems using different materials. The simulations will assist in the target design process and help choose diagnostics to maximize the information we receive in a particular shot. Simulations are critical for experimental planning, especially for experiments on large-scale facilities. *This research was sponsored by LLNL through contract LLNL B56128 and by the NNSA through DOE Research Grant DE-FG52-04NA00064.

  13. Interfacing Cultured Neurons to Microtransducers Arrays: A Review of the Neuro-Electronic Junction Models.

    PubMed

    Massobrio, Paolo; Massobrio, Giuseppe; Martinoia, Sergio

    2016-01-01

    Microtransducer arrays, both metal microelectrodes and silicon-based devices, are widely used as neural interfaces to measure, extracellularly, the electrophysiological activity of excitable cells. Since the pioneering works of the early 1970s, improvements in manufacturing methods, materials, and geometrical shape have been made. Nowadays, these devices are routinely used in different experimental conditions (both in vivo and in vitro) and for applications ranging from basic research in neuroscience to more biomedically oriented uses. However, the usefulness of these micro-devices depends strongly on the nature of the interface (coupling) between the cell membrane and the sensitive active surface of the microtransducer. Thus, many efforts have been directed at improving coupling conditions. In particular, two recent innovations have been proposed: the use of carbon nanotubes as an interface material, and the development of micro-structures that can be engulfed by the cell membrane. In this work, we review what simple circuital models can simulate of what happens at the interface between the sensitive active surface of the microtransducer and the membrane of in vitro neurons. We finally focus on these two novel technological solutions capable of improving the coupling between neuron and micro/nano transducer. PMID:27445657
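
As a generic example of the "simple circuital models" mentioned here, the point-contact picture common in this literature (stated as a textbook relation, not necessarily the authors' formulation) gives the voltage sensed at the transducer pad as approximately the seal resistance times the membrane current delivered into the cleft,

```latex
V_{\mathrm{pad}}(t) \;\approx\; R_{\mathrm{seal}} \left[ C_m \frac{\mathrm{d}V_m}{\mathrm{d}t} + I_{\mathrm{ion}}(t) \right] ,
```

so technologies that tighten the junction, such as carbon nanotube coatings or engulfed micro-structures, act largely by raising the seal resistance.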

  14. Mathematical modeling of planar and spherical vapor-liquid phase interfaces for multicomponent fluids

    NASA Astrophysics Data System (ADS)

    Celný, David; Vinš, Václav; Planková, Barbora; Hrubý, Jan

    2016-03-01

    Development of methods for accurate modeling of phase interfaces is important for understanding various natural processes and for applications in technology such as power production and carbon dioxide separation and storage. In particular, prediction of the course of non-equilibrium phase transition processes requires knowledge of the properties of the strongly curved phase interfaces of microscopic droplets. In our work, we focus on spherical vapor-liquid phase interfaces of binary mixtures. We developed a robust computational method to determine the density and concentration profiles. The fundamentals of our approach lie in the Cahn-Hilliard gradient theory, which allows the functional formulation to be transcribed into a system of ordinary Euler-Lagrange equations. This system is then split and modified into a shape suitable for iterative computation. For this task, we combine the Newton-Raphson and shooting methods, providing good convergence speed. For the thermodynamic properties, the PC-SAFT equation of state is used. We determine the density and concentration profiles for spherical phase interfaces at various saturation factors for the binary mixture of CO2 and C9H20. The computed concentration profiles allow us to determine the work of formation and other characteristics of the microscopic droplets.
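
The shooting idea can be illustrated on the simplest gradient-theory problem: a planar profile obeying rho'' = W'(rho), where the double-well W(rho) = rho^2 (1 - rho)^2 is an assumed stand-in for the paper's PC-SAFT-based grand-potential difference, and bisection replaces Newton-Raphson to keep the sketch short. The first integral (1/2) rho'^2 = W(rho) fixes the exact midpoint slope at sqrt(2 W(0.5)), which the search below should recover.

```python
def W(rho):                       # assumed double-well excess free energy
    return rho**2 * (1.0 - rho)**2

def dW(rho):                      # its derivative W'(rho)
    return 2.0 * rho * (1.0 - rho)**2 - 2.0 * rho**2 * (1.0 - rho)

def shoot(slope, z_max=20.0, dz=0.01):
    """Integrate rho'' = W'(rho) from rho(0)=0.5, rho'(0)=slope with RK4.
    Return +1 if the profile overshoots the bulk value 1,
    -1 if it turns back before reaching it."""
    rho, drho = 0.5, slope
    for _ in range(int(z_max / dz)):
        if rho >= 1.0:
            return +1
        if drho <= 0.0:
            return -1
        k1r, k1d = drho, dW(rho)
        k2r, k2d = drho + 0.5 * dz * k1d, dW(rho + 0.5 * dz * k1r)
        k3r, k3d = drho + 0.5 * dz * k2d, dW(rho + 0.5 * dz * k2r)
        k4r, k4d = drho + dz * k3d, dW(rho + dz * k3r)
        rho += dz * (k1r + 2 * k2r + 2 * k3r + k4r) / 6.0
        drho += dz * (k1d + 2 * k2d + 2 * k3d + k4d) / 6.0
    return -1

# Bisect on the unknown midpoint slope until the two behaviours meet.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if shoot(mid) > 0 else (mid, hi)
slope = 0.5 * (lo + hi)   # analytic value: sqrt(2 * W(0.5)) = sqrt(0.125)
```

The same shoot-and-correct structure carries over to the spherical, multicomponent case, where the residual is the mismatch in the far-field bulk densities.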

  15. Interfacing Cultured Neurons to Microtransducers Arrays: A Review of the Neuro-Electronic Junction Models

    PubMed Central

    Massobrio, Paolo; Massobrio, Giuseppe; Martinoia, Sergio

    2016-01-01

    Microtransducer arrays, both metal microelectrodes and silicon-based devices, are widely used as neural interfaces to measure, extracellularly, the electrophysiological activity of excitable cells. Starting from the pioneering works at the beginning of the 70's, improvements in manufacture methods, materials, and geometrical shape have been made. Nowadays, these devices are routinely used in different experimental conditions (both in vivo and in vitro), and for several applications ranging from basic research in neuroscience to more biomedical oriented applications. However, the use of these micro-devices deeply depends on the nature of the interface (coupling) between the cell membrane and the sensitive active surface of the microtransducer. Thus, many efforts have been oriented to improve coupling conditions. Particularly, in the latest years, two innovations related to the use of carbon nanotubes as interface material and to the development of micro-structures which can be engulfed by the cell membrane have been proposed. In this work, we review what can be simulated by using simple circuital models and what happens at the interface between the sensitive active surface of the microtransducer and the neuronal membrane of in vitro neurons. We finally focus our attention on these two novel technological solutions capable to improve the coupling between neuron and micro-nano transducer. PMID:27445657

  16. Formulation of consumables management models: Mission planning processor payload interface definition

    NASA Technical Reports Server (NTRS)

    Torian, J. G.

    1977-01-01

    Consumables models required for the mission planning and scheduling function are formulated. The relation of the models to prelaunch, onboard, ground-support, and postmission functions for the space transportation systems is established. An analytical model consisting of an orbiter planning processor with a consumables data base is developed. Also presented are a method of recognizing potential constraint violations in both the planning and flight operations functions, and a flight data file that stores and retrieves information over an extended period and interfaces with a flight operations processor for monitoring of the actual flights.

  17. Ergonomic Models of Anthropometry, Human Biomechanics and Operator-Equipment Interfaces

    NASA Technical Reports Server (NTRS)

    Kroemer, Karl H. E. (Editor); Snook, Stover H. (Editor); Meadows, Susan K. (Editor); Deutsch, Stanley (Editor)

    1988-01-01

    The Committee on Human Factors was established in October 1980 by the Commission on Behavioral and Social Sciences and Education of the National Research Council. The committee is sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Institute for the Behavioral and Social Sciences, the National Aeronautics and Space Administration, and the National Science Foundation. The workshop discussed the following: anthropometric models; biomechanical models; human-machine interface models; and research recommendations. A 17-page bibliography is included.

  18. A comprehensive flexoelectric model for droplet interface bilayers acting as sensors and energy harvesters

    NASA Astrophysics Data System (ADS)

    Kancharala, Ashok; Freeman, Eric; Philen, Michael

    2016-10-01

    Droplet interface bilayers have found applications in the development of biologically-inspired mechanosensors. In this research, a comprehensive flexoelectric framework has been developed to predict the mechanoelectric capabilities of the biological membrane under mechanical excitation for sensing and energy harvesting applications. The dynamic behavior of the droplets has been modeled using nonlinear finite element analysis, coupled with a flexoelectric model for predicting the resulting material polarization. This coupled model allows for the prediction of the mechanoelectrical response of the droplets under excitation. Using the developed framework, the potential for sensing and energy harvesting through lipid membranes is investigated.

  19. Object-Based Integration of Photogrammetric and LiDAR Data for Automated Generation of Complex Polyhedral Building Models

    PubMed Central

    Kim, Changjae; Habib, Ayman

    2009-01-01

    This research is concerned with a methodology for automated generation of polyhedral building models for complex structures, whose rooftops are bounded by straight lines. The process starts by utilizing LiDAR data for building hypothesis generation and derivation of individual planar patches constituting building rooftops. Initial boundaries of these patches are then refined through the integration of LiDAR and photogrammetric data and hierarchical processing of the planar patches. Building models for complex structures are finally produced using the refined boundaries. The performance of the developed methodology is evaluated through qualitative and quantitative analysis of the generated building models from real data. PMID:22346722

  20. Automated Bayesian model development for frequency detection in biological time series

    PubMed Central

    2011-01-01

    sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure. PMID:21702910
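
As general background on the approach (quoted from the Bayesian spectral analysis literature, not from this paper): for a single sinusoid in Gaussian noise with the noise variance marginalized out, the posterior for the frequency omega follows from the Schuster periodogram C(omega) of the N data points d_j,

```latex
C(\omega) = \frac{1}{N} \left| \sum_{j=1}^{N} d_j \, e^{\mathrm{i} \omega t_j} \right|^{2} ,
\qquad
p(\omega \mid D) \propto \left[ 1 - \frac{2\, C(\omega)}{N\, \overline{d^{2}}} \right]^{\frac{2 - N}{2}} ,
```

and "model comparison" then means ranking candidate models (one frequency, two frequencies, decaying sinusoids, and so on) by their marginal likelihoods, which is the step the paper automates.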

  1. Degenerate Ising model for atomistic simulation of crystal-melt interfaces.

    PubMed

    Schebarchov, D; Schulze, T P; Hendy, S C

    2014-02-21

    One of the simplest microscopic models for a thermally driven first-order phase transition is an Ising-type lattice system with nearest-neighbour interactions, an external field, and a degeneracy parameter. The underlying lattice and the interaction coupling constant control the anisotropic energy of the phase boundary, the field strength represents the bulk latent heat, and the degeneracy quantifies the difference in communal entropy between the two phases. We simulate the (stochastic) evolution of this minimal model by applying rejection-free canonical and microcanonical Monte Carlo algorithms, and we obtain caloric curves and heat capacity plots for square (2D) and face-centred cubic (3D) lattices with periodic boundary conditions. Since the model admits precise adjustment of bulk latent heat and communal entropy, neither of which affect the interface properties, we are able to tune the crystal nucleation barriers at a fixed degree of undercooling and verify a dimension-dependent scaling expected from classical nucleation theory. We also analyse the equilibrium crystal-melt coexistence in the microcanonical ensemble, where we detect negative heat capacities and find that this phenomenon is more pronounced when the interface is the dominant contributor to the total entropy. The negative branch of the heat capacity appears smooth only when the equilibrium interface-area-to-volume ratio is not constant but varies smoothly with the excitation energy. Finally, we simulate microcanonical crystal nucleation and subsequent relaxation to an equilibrium Wulff shape, demonstrating the model's utility in tracking crystal-melt interfaces at the atomistic level. PMID:24559357
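
A plain Metropolis sketch can make the statistical weight of this model concrete (the paper itself uses rejection-free canonical and microcanonical algorithms, and all parameter values below are illustrative). A site value of +1 stands for the crystal and -1 for the melt, with each melt site carrying g degenerate microstates, so a configuration has weight g^(N_melt) * exp(-E/T).

```python
import math
import random

# Illustrative parameters: 16 x 16 square lattice, coupling J, field h
# (bulk latent heat), degeneracy g (communal entropy), temperature T.
L, J, h, g, T = 16, 1.0, 1.0, 2.0, 1.0
random.seed(1)
spins = [[1] * L for _ in range(L)]          # start fully crystalline

def local_energy(i, j):
    """Energy terms involving site (i, j), periodic boundaries."""
    s = spins[i][j]
    nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
          + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    return -J * s * nb - h * s

for _ in range(200 * L * L):                 # 200 Monte Carlo sweeps
    i, j = random.randrange(L), random.randrange(L)
    e_old = local_energy(i, j)
    spins[i][j] *= -1                        # trial flip
    d_e = local_energy(i, j) - e_old
    # degeneracy factor g (or 1/g) for entering (leaving) the melt phase
    ratio = math.exp(-d_e / T) * (g if spins[i][j] == -1 else 1.0 / g)
    if random.random() >= min(1.0, ratio):
        spins[i][j] *= -1                    # reject: flip back

m = sum(sum(row) for row in spins) / L**2    # stays near +1 at these values
```

At these parameters the field outweighs the degeneracy-induced entropy gain of the melt, so the crystal phase remains stable; lowering h or raising g and T tips the balance the other way, which is exactly the tunability the abstract describes.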

  2. Evidence of meniscus interface transport in dip-pen nanolithography: An annular diffusion model

    NASA Astrophysics Data System (ADS)

    Nafday, Omkar A.; Vaughn, Mark W.; Weeks, Brandon L.

    2006-10-01

    Ring shaped dots were patterned with mercaptohexadecanoic acid ink by dip-pen nanolithography. These dots have an ink-free inner core surrounded by an inked annular region, making them different from the filled dots usually obtained. This suggests a different transport mechanism than the current hypothesis of bulk water meniscus transport. A meniscus interface ink transport model is proposed, and its general applicability is demonstrated by predicting the patterned dot radii of chemically diverse inks.

  3. Degenerate Ising model for atomistic simulation of crystal-melt interfaces

    SciTech Connect

    Schebarchov, D.; Schulze, T. P.; Hendy, S. C.

    2014-02-21

    One of the simplest microscopic models for a thermally driven first-order phase transition is an Ising-type lattice system with nearest-neighbour interactions, an external field, and a degeneracy parameter. The underlying lattice and the interaction coupling constant control the anisotropic energy of the phase boundary, the field strength represents the bulk latent heat, and the degeneracy quantifies the difference in communal entropy between the two phases. We simulate the (stochastic) evolution of this minimal model by applying rejection-free canonical and microcanonical Monte Carlo algorithms, and we obtain caloric curves and heat capacity plots for square (2D) and face-centred cubic (3D) lattices with periodic boundary conditions. Since the model admits precise adjustment of bulk latent heat and communal entropy, neither of which affect the interface properties, we are able to tune the crystal nucleation barriers at a fixed degree of undercooling and verify a dimension-dependent scaling expected from classical nucleation theory. We also analyse the equilibrium crystal-melt coexistence in the microcanonical ensemble, where we detect negative heat capacities and find that this phenomenon is more pronounced when the interface is the dominant contributor to the total entropy. The negative branch of the heat capacity appears smooth only when the equilibrium interface-area-to-volume ratio is not constant but varies smoothly with the excitation energy. Finally, we simulate microcanonical crystal nucleation and subsequent relaxation to an equilibrium Wulff shape, demonstrating the model's utility in tracking crystal-melt interfaces at the atomistic level.

  4. Automation based on knowledge modeling theory and its applications in engine diagnostic systems using Space Shuttle Main Engine vibrational data. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Jonnathan H.

    1995-01-01

    Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated, because modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automation tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate the labor- and time-intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).

  5. Structure and application of an interface program between a geographic-information system and a ground-water flow model

    USGS Publications Warehouse

    Van Metre, P.C.

    1990-01-01

    A computer-program interface between a geographic-information system and a groundwater flow model links two unrelated software systems for use in developing the flow models. The interface program allows the modeler to compile and manage geographic components of a groundwater model within the geographic information system. A significant savings of time and effort is realized in developing, calibrating, and displaying the groundwater flow model. Four major guidelines were followed in developing the interface program: (1) no changes to the groundwater flow model code were to be made; (2) a data structure was to be designed within the geographic information system that follows the same basic data structure as the groundwater flow model; (3) the interface program was to be flexible enough to support all basic data options available within the model; and (4) the interface program was to be as efficient as possible in terms of computer time used and online-storage space needed. Because some programs in the interface are written in control-program language, the interface will run only on a computer with the PRIMOS operating system. (USGS)

  6. Simplifying the interaction between cognitive models and task environments with the JSON Network Interface.

    PubMed

    Hope, Ryan M; Schoelles, Michael J; Gray, Wayne D

    2014-12-01

    Process models of cognition, written in architectures such as ACT-R and EPIC, should be able to interact with the same software with which human subjects interact. By eliminating the need to simulate the experiment, this approach would simplify the modeler's effort, while ensuring that all steps required of the human are also required by the model. In practice, the difficulties of allowing one software system to interact with another present a significant barrier to any modeler who is not also skilled at this type of programming. The barrier increases if the programming language used by the modeling software differs from that used by the experimental software. The JSON Network Interface simplifies this problem for ACT-R modelers, and potentially, modelers using other systems.
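
One plausible wire format for passing events between a model and a task environment is newline-delimited JSON over a byte stream; the actual JSON Network Interface protocol may differ, and the sketch below only illustrates why JSON makes cross-language interaction cheap.

```python
import json

def encode_message(msg):
    """Frame a message dict as one newline-terminated JSON line."""
    return (json.dumps(msg) + "\n").encode("utf-8")

def decode_stream(buffer):
    """Split buffered bytes into complete messages plus any partial tail."""
    lines = buffer.split(b"\n")
    msgs = [json.loads(line) for line in lines[:-1] if line]
    return msgs, lines[-1]

# Two complete messages followed by a partial one still in flight.
stream = (encode_message({"event": "display", "text": "XQJ"})
          + encode_message({"event": "keypress", "key": "f"})
          + b'{"event": "incompl')
msgs, rest = decode_stream(stream)
```

Because every mainstream language ships a JSON parser, neither the modeling software nor the experimental software needs to share a language or a serialization library with the other side.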

  7. Automated Detection and Classification of Rockfall Induced Seismic Signals with Hidden-Markov-Models

    NASA Astrophysics Data System (ADS)

    Zeckra, M.; Hovius, N.; Burtin, A.; Hammer, C.

    2015-12-01

    Originally introduced in speech recognition, Hidden Markov Models are applied in many fields of pattern recognition. In seismology, this technique has recently been introduced to improve on common detection algorithms, such as STA/LTA ratios or cross-correlation methods, and has mainly been used for monitoring volcanic activity. This study is one of the first applications to seismic signals induced by geomorphic processes. With an array of eight broadband seismometers deployed around the steep, rapidly eroding Illgraben catchment (Switzerland), we studied a sequence of landslides triggered over a period of several days in winter. A preliminary manual classification led us to identify three main seismic signal classes, which served as the starting point for the automated HMM detection and classification: (1) rockslide signals, including a failure source and debris mobilization along the slope, (2) rockfall signals from the remobilization of debris along the unstable slope, and (3) single cracking signals from the affected cliff, observed before the rockslide events. Besides classifying the whole dataset automatically, the HMM approach reflects the origin and interactions of the three signal classes, which helps us understand this geomorphic crisis and the possible triggering mechanisms of slope processes. The temporal distribution of crack events (duration > 5 s, frequency band 2-8 Hz) follows an inverse Omori law, consistent with the catastrophic behaviour of the failure mechanism and of interest for warning purposes in rockslide risk assessment. Thanks to the dense seismic array and independent weather observations in the landslide area, the dataset also provides information about the triggering mechanisms, which exhibit a tight link between rainfall and fluctuations of the freezing level.
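
HMM-based classification of this kind scores an observation sequence (for example, a string of discretized seismic features) under each class's model and picks the class with the highest likelihood. A minimal forward-algorithm sketch, with toy matrices rather than the study's trained rockslide/rockfall/crack models:

```python
def forward_likelihood(obs, start, trans, emit):
    """P(obs | HMM) via the forward recursion over hidden states."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[r] * trans[r][s] for r in range(n))
                 for s in range(n)]
    return sum(alpha)

def classify(obs, models):
    """Pick the class whose HMM assigns obs the highest likelihood."""
    return max(models, key=lambda name: forward_likelihood(obs, *models[name]))

# Toy two-state models over a binary observation alphabet {0, 1}.
models = {
    "rockslide": ([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]], [[0.9, 0.1], [0.2, 0.8]]),
    "crack":     ([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]], [[0.1, 0.9], [0.1, 0.9]]),
}
```

In practice the per-class models are trained on labeled events (for example with Baum-Welch), and log-probabilities replace raw probabilities to avoid underflow on long records.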

  8. Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model

    NASA Astrophysics Data System (ADS)

    Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.

    2009-05-01

    Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging using the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For the quantitative analysis, CBF was calculated by deconvolution with an AIF obtained from the 10 voxels with the greatest contrast enhancement. For the semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curve. We observed that when the AIFs obtained in three different ROIs (whole brain, hemisphere without lesion, and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend across operative time points; when the AIFs differed, the CBF ratios differed as well. We conclude that, using local maxima of contrast enhancement, a proper AIF can be defined without knowing the anatomical location of arteries in a stroke rat model.
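
The semi-quantitative estimate described above follows the central-volume principle: relative CBF is the area of the relaxivity-time curve (a CBV surrogate) divided by its normalized first moment (an MTT surrogate). A minimal sketch with toy data, not the study's measurements:

```python
def trapz(y, x):
    """Trapezoidal integral of y(x)."""
    return sum(0.5 * (y[k] + y[k + 1]) * (x[k + 1] - x[k])
               for k in range(len(x) - 1))

def relative_cbf(t, c):
    """Semi-quantitative rCBF = area / (first moment / area)."""
    area = trapz(c, t)                                   # ~ relative CBV
    mtt = trapz([ti * ci for ti, ci in zip(t, c)], t) / area
    return area / mtt
```

The quantitative route instead deconvolves the tissue curve with the AIF to obtain CBF directly, which is why the quality of the automatically detected AIF dominates its accuracy.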

  9. Automated Detection and Predictive Modeling of Flux Transfer Events using CLUSTER Data

    NASA Astrophysics Data System (ADS)

    Sipes, T. B.; Karimabadi, H.; Driscoll, J.; Wang, Y.; Lavraud, B.; Slavin, J. A.

    2006-12-01

    Almost all statistical studies of flux transfer events (FTEs) and traveling compression regions (TCRs) have been based on (i) visual inspection of data to compile a list of events and (ii) use of histograms and simple linear correlation analysis to study their properties and potential causes and dependencies. This approach has several major drawbacks, including being highly subjective and inefficient. The traditional use of histograms and simple linear correlation is also only useful for analyzing systems that show dominant dependencies on at most one or two variables. If the system has complex dependencies, more sophisticated statistical techniques are required. For example, Wang et al. [2006] showed evidence that FTE occurrence rates are affected by IMF Bygsm, Bzgsm, and magnitude, and by the IMF clock, tilt, spiral, and cone angles. If the initial findings that FTEs occur only during periods of southward IMF were correct, one could use the direction of the IMF as a predictor of FTE occurrence. But in light of the Wang et al. result, one cannot draw quantitative conclusions about the conditions under which FTEs occur; it may be that a certain combination of these parameters is the true controlling parameter. Uncovering this requires more sophisticated techniques. We have developed a new data mining tool called MineTool. MineTool is highly accurate, flexible, and capable of handling difficult and even noisy datasets, and it can outperform standard data mining tools such as artificial neural networks, decision/regression trees, and support vector machines. Here we present preliminary results of applying this tool to CLUSTER data to perform two tasks: (i) automated detection of FTEs and (ii) predictive modeling of FTE occurrence based on IMF and magnetospheric conditions.

  10. A novel automated behavioral test battery assessing cognitive rigidity in two genetic mouse models of autism

    PubMed Central

    Puścian, Alicja; Łęski, Szymon; Górkiewicz, Tomasz; Meyza, Ksenia; Lipp, Hans-Peter; Knapska, Ewelina

    2014-01-01

    Repetitive behaviors are a key feature of many pervasive developmental disorders, such as autism. As a heterogeneous group of symptoms, repetitive behaviors are conceptualized into two main subgroups: sensory/motor (lower-order) and cognitive rigidity (higher-order). Although lower-order repetitive behaviors are measured in mouse models in several paradigms, so far there have been no high-throughput tests directly measuring cognitive rigidity. We describe a novel approach for monitoring repetitive behaviors during reversal learning in mice in the automated IntelliCage system. During reward-motivated place preference reversal learning, designed to assess the cognitive abilities of mice, visits to the previously rewarded places were recorded to measure cognitive flexibility. Thereafter, emotional flexibility was assessed by measuring conditioned fear extinction. Additionally, to look for neuronal correlates of cognitive impairments, we measured CA3-CA1 hippocampal long-term potentiation (LTP). To standardize the designed tests, we used C57BL/6 and BALB/c mice, representing two genetic backgrounds, for induction of autism by prenatal exposure to sodium valproate. We found impairments of place learning related to perseveration, and no LTP impairments, in C57BL/6 valproate-treated mice. In contrast, BALB/c valproate-treated mice displayed severe deficits of place learning not associated with perseverative behaviors and accompanied by hippocampal LTP impairments. Alterations of cognitive flexibility observed in C57BL/6 valproate-treated mice were related neither to a restricted exploration pattern nor to emotional flexibility. Altogether, we showed that the designed tests of cognitive performance and perseverative behaviors are efficient and highly replicable. Moreover, the results suggest that genetic background is crucial for the behavioral effects of prenatal valproate treatment. PMID:24808839

  11. Finite Element Modeling of Laminated Composite Plates with Locally Delaminated Interface Subjected to Impact Loading

    PubMed Central

    Abo Sabah, Saddam Hussein; Kueh, Ahmad Beng Hong

    2014-01-01

    This paper investigates the effects of localized interface progressive delamination on the behavior of two-layer laminated composite plates when subjected to low velocity impact loading for various fiber orientations. By means of a finite element approach, the laminae stiffnesses are constructed independently from their interface, where a well-defined virtually zero-thickness interface element is discretely adopted for delamination simulation. The present model has the advantage of simulating a localized interfacial condition at arbitrary locations, for various degeneration areas and intensities, under the influence of numerous boundary conditions since the interfacial description is expressed discretely. In comparison, the model shows good agreement with existing results from the literature when modeled in a perfectly bonded state. It is found that as the local delamination area increases, so does the magnitude of the maximum displacement history. Also, as the deviation between top and bottom fiber orientations increases, both central deflection and energy absorption increase, although the relative maximum displacement correspondingly decreases in comparison with the laminate's perfectly bonded state. PMID:24696668

  12. Automated modeling of ecosystem CO2 fluxes based on closed chamber measurements: A standardized conceptual and practical approach

    NASA Astrophysics Data System (ADS)

    Hoffmann, Mathias; Jurisch, Nicole; Albiac Borraz, Elisa; Hagemann, Ulrike; Sommer, Michael; Augustin, Jürgen

    2015-04-01

    Closed chamber measurements are widely used for determining the CO2 exchange of small-scale or heterogeneous ecosystems. Alongside chamber design and operational handling, the data processing procedure is a considerable source of uncertainty in the obtained results. We developed a standardized automatic data processing algorithm, based on the language and statistical computing environment R, to (i) calculate measured CO2 flux rates, (ii) parameterize ecosystem respiration (Reco) and gross primary production (GPP) models, (iii) optionally compute an adaptive temperature model, (iv) model Reco, GPP and net ecosystem exchange (NEE), and (v) evaluate model uncertainty (calibration, validation and uncertainty prediction). The algorithm was tested for different manual and automatic chamber measurement systems (such as automated NEE chambers and the LI-8100A soil CO2 flux system) and ecosystems. Our study shows that even minor changes within the modelling approach may result in considerable differences in calculated flux rates, derived photosynthetically active radiation and temperature dependencies, and subsequently modeled Reco, GPP and NEE balances of up to 25%. Automated and standardized data processing procedures based on clearly defined criteria, such as statistical parameters and thresholds, are therefore a prerequisite for guaranteeing the reproducibility and traceability of modelling results and for improving comparability between closed-chamber-based CO2 measurements.
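
The authors' algorithm is written in R and is not reproduced here, but step (i) — turning a chamber concentration time series into a flux rate — can be sketched as follows. This is an illustrative implementation under common assumptions (linear concentration change, ideal gas law); all function names, constants, and example values are hypothetical, not taken from the paper.

```python
# Minimal sketch of closed-chamber flux estimation (step (i) above):
# fit the CO2 concentration increase over time by ordinary least squares,
# then convert the slope (ppm/s) to a flux (micromol CO2 m^-2 s^-1)
# via the ideal gas law. Names and constants are illustrative.

def linear_slope(times_s, conc_ppm):
    """Ordinary least-squares slope of concentration (ppm) vs time (s)."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_c = sum(conc_ppm) / n
    num = sum((t - mean_t) * (c - mean_c) for t, c in zip(times_s, conc_ppm))
    den = sum((t - mean_t) ** 2 for t in times_s)
    return num / den  # ppm s^-1

def chamber_flux(times_s, conc_ppm, volume_m3, area_m2,
                 pressure_pa=101325.0, temp_k=293.15):
    """Convert the concentration slope to micromol CO2 m^-2 s^-1.

    Uses n/V = p/(R T); 1 ppm = 1 micromol CO2 per mol of air.
    """
    R = 8.314  # gas constant, J mol^-1 K^-1
    slope = linear_slope(times_s, conc_ppm)      # ppm s^-1
    mol_air_per_m3 = pressure_pa / (R * temp_k)  # mol m^-3
    return slope * mol_air_per_m3 * volume_m3 / area_m2

# Example: a 2 ppm/s rise in a 0.1 m^3 chamber covering 0.25 m^2 of soil.
times = [0, 30, 60, 90, 120]
conc = [400 + 2 * t for t in times]
flux = chamber_flux(times, conc, volume_m3=0.1, area_m2=0.25)
```

In practice such algorithms also screen each fit (e.g. R², slope significance, dead-band removal) before a flux is accepted, which is exactly the kind of threshold criterion the abstract argues must be standardized.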

  13. Evidence evaluation in fingerprint comparison and automated fingerprint identification systems--modelling within finger variability.

    PubMed

    Egli, Nicole M; Champod, Christophe; Margot, Pierre

    2007-04-11

    Recent challenges and errors in fingerprint identification have highlighted the need for assessing the information content of a papillary pattern in a systematic way. In particular, estimation of the statistical uncertainty associated with this type of evidence is more and more called upon. The approach used in the present study is based on the assessment of likelihood ratios (LRs). This evaluative tool weighs the likelihood of evidence given two mutually exclusive hypotheses. The computation of likelihood ratios on a database of marks of known sources (matching the unknown and non-matching the unknown mark) allows an estimation of the evidential contribution of fingerprint evidence. LRs are computed taking advantage of the scores obtained from an automated fingerprint identification system and hence are based exclusively on level II features (minutiae). The AFIS system attributes a score to any comparison (fingerprint to fingerprint, mark to mark and mark to fingerprint), used here as a proximity measure between the respective arrangements of minutiae. The numerator of the LR addresses the within-finger variability and is obtained by comparing the same configurations of minutiae coming from the same source. Only comparisons where the same minutiae are visible both on the mark and on the print are therefore taken into account. The denominator of the LR is obtained by cross-comparison with a database of prints originating from non-matching sources. The estimation of the numerator of the LR is much more complex in terms of specific data requirements than the estimation of the denominator of the LR (which requires only a large database of prints from a non-associated population). Hence this paper addresses specific issues associated with the numerator or within-finger variability. This study aims to answer the following questions: (1) how a database for modelling within finger variability should be acquired; (2) whether or not the visualisation technique or the
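
The score-based LR construction described above can be sketched as follows: fit one density to within-source (same finger) AFIS scores, another to between-source scores, and report their ratio at the observed score. This is a generic illustration, not the study's actual model; the fixed-bandwidth Gaussian kernel density estimate and all score values are assumptions.

```python
# Hedged sketch of a score-based likelihood ratio: numerator density from
# within-source comparison scores, denominator density from between-source
# scores, LR = f_within(score) / f_between(score).
import math

def gaussian_kde(samples, bandwidth):
    """Return a density function: mean of Gaussian kernels over samples."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

def likelihood_ratio(score, within_scores, between_scores, bandwidth=5.0):
    f_within = gaussian_kde(within_scores, bandwidth)
    f_between = gaussian_kde(between_scores, bandwidth)
    return f_within(score) / f_between(score)

# Invented AFIS-like scores: same-source comparisons score high,
# different-source comparisons score low.
within = [120, 135, 150, 160, 145]
between = [20, 35, 25, 40, 30]
lr_high = likelihood_ratio(140, within, between)  # supports same source
lr_low = likelihood_ratio(30, within, between)    # supports different source
```

An LR above 1 supports the same-source hypothesis; below 1, the different-source hypothesis. The paper's point is that the numerator side requires purpose-built within-finger data, whereas the denominator only needs a large background database.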

  14. A Cognitive System Model for Human/Automation Dynamics in Airspace Management

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Pisanich, Gregory; Lebacqz, J. Victor (Technical Monitor)

    1997-01-01

    NASA has initiated a significant thrust of research and development focused on providing the flight crew and air traffic managers automation aids to increase capacity in en route and terminal area operations through the use of flexible, more fuel-efficient routing, while improving the level of safety in commercial carrier operations. In that system development, definition of cognitive requirements for integrated multi-operator dynamic aiding systems is fundamental. In order to support that cognitive function definition, we have extended the Man Machine Integrated Design and Analysis System (MIDAS) to include representation of multiple cognitive agents (both human operators and intelligent aiding systems) operating aircraft, airline operations centers and air traffic control centers in the evolving airspace. The demands of this application require representation of many intelligent agents sharing world-models, and coordinating action/intention with cooperative scheduling of goals and actions in a potentially unpredictable world of operations. The MIDAS operator models have undergone significant development in order to understand the requirements for operator aiding and the impact of that aiding in the complex nondeterminate system of national airspace operations. The operator model's structure has been modified to include attention functions, action priority, and situation assessment. The cognitive function model has been expanded to include working memory operations including retrieval from long-term store, interference, visual-motor and verbal articulatory loop functions, and time-based losses. The operator's activity structures have been developed to include prioritization and interruption of multiple parallel activities among multiple operators, to provide for anticipation (knowledge of the intention and action of remote operators), and to respond to failures of the system and other operators in the system in situation-specific paradigms. The model's internal

  15. Automating the analytical laboratory via the Chemical Analysis Automation paradigm

    SciTech Connect

    Hollen, R.; Rzeszutko, C.

    1997-10-01

    To address the need for standardization within the analytical chemistry laboratories of the nation, the Chemical Analysis Automation (CAA) program within the US Department of Energy, Office of Science and Technology's Robotic Technology Development Program is developing laboratory sample analysis systems that will automate the environmental chemical laboratories. The current laboratory automation paradigm consists of islands-of-automation that do not integrate into a system architecture. Thus, today the chemist must perform most aspects of environmental analysis manually using instrumentation that generally cannot communicate with other devices in the laboratory. CAA is working towards a standardized and modular approach to laboratory automation based upon the Standard Analysis Method (SAM) architecture. Each SAM system automates a complete chemical method. The building block of a SAM is known as the Standard Laboratory Module (SLM). The SLM, either hardware or software, automates a subprotocol of an analysis method and can operate as a standalone or as a unit within a SAM. The CAA concept allows the chemist to easily assemble an automated analysis system, from sample extraction through data interpretation, using standardized SLMs without the worry of hardware or software incompatibility or the necessity of generating complicated control programs. A Task Sequence Controller (TSC) software program schedules and monitors the individual tasks to be performed by each SLM configured within a SAM. The chemist interfaces with the operation of the TSC through the Human Computer Interface (HCI), a logical, icon-driven graphical user interface. The CAA paradigm has successfully been applied in automating EPA SW-846 Methods 3541/3620/8081 for the analysis of PCBs in a soil matrix utilizing commercially available equipment in tandem with SLMs constructed by CAA.
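
The SAM/SLM/TSC architecture described above can be sketched in a few lines: each module implements one subprotocol behind a common interface, and a controller sequences them and logs their status. This is an illustrative toy (class names, the lambda operations, and the numbers are all invented), not CAA's actual software.

```python
# Toy sketch of the SAM architecture: Standard Laboratory Modules (SLMs)
# share one interface, and a Task Sequence Controller (TSC) runs them in
# order, passing each module's output to the next and logging completion.

class SLM:
    """A Standard Laboratory Module: one step of an analysis method."""
    def __init__(self, name, operation):
        self.name = name
        self.operation = operation

    def run(self, sample):
        return self.operation(sample)

class TaskSequenceController:
    """Schedules and monitors the SLMs configured within a SAM."""
    def __init__(self, modules):
        self.modules = modules
        self.log = []

    def execute(self, sample):
        for module in self.modules:
            sample = module.run(sample)
            self.log.append(f"{module.name}: done")
        return sample

# A toy method: extraction -> cleanup -> quantification (made-up factors).
sam = TaskSequenceController([
    SLM("extraction", lambda s: {**s, "extract": s["mass_g"] * 0.9}),
    SLM("cleanup", lambda s: {**s, "extract": s["extract"] * 0.95}),
    SLM("quantification", lambda s: {**s, "pcb_ug_kg": s["extract"] * 100}),
])
result = sam.execute({"mass_g": 10.0})
```

Because every SLM honors the same `run` contract, modules can be swapped or recombined into a new SAM without rewriting control code, which is the interoperability point the abstract makes.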

  16. Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS

    NASA Technical Reports Server (NTRS)

    Wheeler, Mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.

    2009-01-01

    Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with desired multiple requests submitted in sequence, for the purpose of obtaining useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.

  17. AUTOMATED GIS WATERSHED ANALYSIS TOOLS FOR RUSLE/SEDMOD SOIL EROSION AND SEDIMENTATION MODELING

    EPA Science Inventory

    A comprehensive procedure for computing soil erosion and sediment delivery metrics has been developed using a suite of automated Arc Macro Language (AML ) scripts and a pair of processing- intensive ANSI C++ executable programs operating on an ESRI ArcGIS 8.x Workstation platform...

  18. Definition of common support equipment and space station interface requirements for IOC model technology experiments

    NASA Technical Reports Server (NTRS)

    Russell, Richard A.; Waiss, Richard D.

    1988-01-01

    A study was conducted to identify the common support equipment and Space Station interface requirements for the IOC (initial operating capabilities) model technology experiments. In particular, each principal investigator for the proposed model technology experiment was contacted and visited for technical understanding and support for the generation of the detailed technical backup data required for completion of this study. Based on the data generated, a strong case can be made for a dedicated technology experiment command and control work station consisting of a command keyboard, cathode ray tube, data processing and storage, and an alert/annunciator panel located in the pressurized laboratory.

  19. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

    PRay is a graphical user interface for interactive displaying and editing of velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray tracing results from other software. The main features are the graphical editing of nodes and fast adjusting of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase picking software. PRay is open source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X and provides a version controlled source code repository for community development.

  20. A model for the control mode man-computer interface dialogue

    NASA Technical Reports Server (NTRS)

    Chafin, R. L.

    1981-01-01

    A four-stage model is presented for the control mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of the operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues. These issues are human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.

  1. Multi-scale/multi-physical modeling in head/disk interface of magnetic data storage

    NASA Astrophysics Data System (ADS)

    Chung, Pil Seung; Smith, Robert; Vemuri, Sesha Hari; Jhon, Young In; Tak, Kyungjae; Moon, Il; Biegler, Lorenz T.; Jhon, Myung S.

    2012-04-01

    The model integration of the head-disk interface (HDI) in the hard disk drive system, which includes the hierarchy of highly interactive layers (magnetic layer, carbon overcoat (COC), lubricant, and air bearing system (ABS)), has recently been focused upon to resolve technical barriers and enhance reliability. Heat-assisted magnetic recording especially demands that the model simultaneously incorporates thermal and mechanical phenomena by considering the enormous combinatorial cases of materials and multi-scale/multi-physical phenomena. In this paper, we explore multi-scale/multi-physical simulation methods for HDI, which will holistically integrate magnetic layers, COC, lubricants, and ABS in non-isothermal conditions.

  2. Using Hydrodynamic Codes in Modeling of Multi-Interface Diverging Experiments for NIF

    NASA Astrophysics Data System (ADS)

    Grosskopf, Michael; Drake, R. P.; Kuranz, C. C.; Plewa, T.; Hearn, N.; Meakin, C.; Arnett, D.; Miles, A. R.; Robey, H. F.; Hansen, J. F.; Remington, B. A.; Hsing, W.; Edwards, M. J.

    2008-04-01

    Using the Omega Laser, researchers studying supernova dynamics have observed the growth of Rayleigh-Taylor instabilities in a high energy density system. The NIF laser is expected to provide the energy needed to extend these experiments to a diverging system. We report scaling simulations to model the interface dynamics of a multilayered, diverging Rayleigh-Taylor experiment for NIF using CALE, a hybrid adaptive Lagrangian-Eulerian code developed at LLNL. Specifically, we looked both qualitatively and quantitatively at the Rayleigh-Taylor growth and multi-interface interactions in mass-scaled systems using different materials. The simulations will assist in the target design process and help choose diagnostics to maximize the information we receive in a particular shot. Simulations are critical for experimental planning, especially for experiments on large-scale facilities.

  3. Modeling of multi-interface, diverging, hydrodynamic experiments for the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Grosskopf, M. J.; Drake, R. P.; Kuranz, C. C.; Miles, A. R.; Hansen, J. F.; Plewa, T.; Hearn, N.; Arnett, D.; Wheeler, J. C.

    2009-08-01

    The National Ignition Facility (NIF) will soon provide experiments with far more than ten times the energy previously available on laser facilities. In the context of supernova-relevant hydrodynamics, this will enable experiments in which hydrodynamic instabilities develop from multiple, coupled interfaces in a diverging explosion. This paper discusses the design of such blast-wave-driven explosions in which the relative masses of the layers are scaled to those within the star. It reports scaling simulations with CALE to model the global dynamics of such an experiment. CALE is a hybrid, Arbitrary Lagrangian-Eulerian code. The simulations probed the instability growth and multi-interface interactions in mass-scaled systems using different materials. The simulations assist in the target design process and in developing an experiment that can be diagnosed.

  4. Modeling of Multi-Interface, Diverging, Hydrodynamic Experiments for the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Grosskopf, M. J.; Drake, R. P.; Kuranz, C. C.; Miles, A. R.; Hansen, J. F.; Plewa, T.; Hearn, N.; Arnett, D.; Wheeler, J. C.

    2008-11-01

    The National Ignition Facility (NIF) will soon provide experiments with far more than ten times the energy previously available on laser facilities. In the context of supernova-relevant hydrodynamics, this will enable experiments in which hydrodynamic instabilities develop from multiple, coupled interfaces in a diverging explosion. This presentation discusses the design of such blast-wave-driven explosions in which the relative masses of the layers are scaled to those within the star. It reports scaling simulations with CALE to model the global dynamics of such an experiment. The simulations probed the instability growth and multi-interface interactions in mass-scaled systems to assess the diagnosability and experimental value of different designs using a variety of materials. Initial conditions in the simulation near the irradiated surface have been shown to lead to spurious structure on the shock; therefore, a series of simulations to understand this structure is also discussed.

  5. Testing of Environmental Satellite Bus-Instrument Interfaces Using Engineering Models

    NASA Technical Reports Server (NTRS)

    Gagnier, Donald; Hayner, Rick; Nosek, Thomas; Roza, Michael; Hendershot, James E.; Razzaghi, Andrea I.

    2004-01-01

    This paper discusses the formulation and execution of a laboratory test of the electrical interfaces between multiple atmospheric scientific instruments and the spacecraft bus that carries them. The testing, performed in 2002, used engineering models of the instruments and the Aura spacecraft bus electronics. Aura is one of NASA's Earth Observing System missions. The test was designed to evaluate the complex interfaces in the command and data handling subsystems prior to integration of the complete flight instruments on the spacecraft. A problem discovered during the flight integration phase of the observatory can cause significant cost and schedule impacts. The tests successfully revealed problems and led to their resolution before the full-up integration phase, saving significant cost and schedule. This approach could be beneficial for future environmental satellite programs involving the integration of multiple, complex scientific instruments onto a spacecraft bus.

  6. Prediction of hot spots in protein interfaces using a random forest model with hybrid features.

    PubMed

    Wang, Lin; Liu, Zhi-Ping; Zhang, Xiang-Sun; Chen, Luonan

    2012-03-01

    Prediction of hot spots in protein interfaces provides crucial information for the research on protein-protein interaction and drug design. Existing machine learning methods generally judge whether a given residue is likely to be a hot spot by extracting features only from the target residue. However, hot spots usually form a small cluster of residues which are tightly packed together at the center of the protein interface. With this in mind, we present a novel method to extract hybrid features which incorporate a wide range of information about the target residue and its spatially neighboring residues, i.e. the nearest contact residue in the other face (mirror-contact residue) and the nearest contact residue in the same face (intra-contact residue). We provide a novel random forest (RF) model to effectively integrate these hybrid features for predicting hot spots in protein interfaces. Our method can achieve accuracy (ACC) of 82.4% and Matthews correlation coefficient (MCC) of 0.482 in the Alanine Scanning Energetics Database, and ACC of 77.6% and MCC of 0.429 in the Binding Interface Database. In a comparison study, the performance of our RF model exceeds that of other existing methods, such as Robetta, FOLDEF, KFC, KFC2, MINERVA and HotPoint. Of our hybrid features, three physicochemical features of target residues (mass, polarizability and isoelectric point), the relative side-chain accessible surface area and the average depth index of mirror-contact residues are found to be the main discriminative features in hot spots prediction. We also confirm that hot spots tend to form large contact surface areas between two interacting proteins. Source data and code are available at: http://www.aporc.org/doc/wiki/HotSpot. PMID:22258275
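
The hybrid-feature construction described above can be sketched as follows: the target residue's feature vector is concatenated with the features of its mirror-contact residue (nearest contact in the other face) and its intra-contact residue (nearest contact in the same face), selected by Euclidean distance. This is an illustration of the idea, not the authors' code; the residue records, coordinates, and feature values are invented.

```python
# Sketch of hybrid feature extraction for hot-spot prediction: concatenate
# target, mirror-contact, and intra-contact residue features. Field names
# and values are illustrative.
import math
from dataclasses import dataclass

@dataclass
class Residue:
    name: str
    face: str       # "A" or "B" side of the interface
    xyz: tuple      # representative atom coordinates
    features: tuple # e.g. (mass, polarizability, isoelectric_point)

def nearest(target, residues):
    """Residue closest to the target by Euclidean distance."""
    return min(residues, key=lambda r: math.dist(target.xyz, r.xyz))

def hybrid_features(target, interface_residues):
    same = [r for r in interface_residues
            if r.face == target.face and r is not target]
    other = [r for r in interface_residues if r.face != target.face]
    intra = nearest(target, same)    # nearest contact in the same face
    mirror = nearest(target, other)  # nearest contact in the other face
    return target.features + mirror.features + intra.features

residues = [
    Residue("ARG1", "A", (0.0, 0.0, 0.0), (156.2, 0.29, 10.8)),
    Residue("TYR2", "A", (3.0, 0.0, 0.0), (163.2, 0.30, 5.7)),
    Residue("TRP3", "B", (1.0, 4.0, 0.0), (186.2, 0.41, 5.9)),
]
vec = hybrid_features(residues[0], residues)  # 9-dimensional hybrid vector
```

The resulting vectors would then be fed to a random forest classifier; the gain over target-only features comes from encoding the local cluster structure of the interface in each example.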

  7. The local structure factor near an interface; beyond extended capillary-wave models

    NASA Astrophysics Data System (ADS)

    Parry, A. O.; Rascón, C.; Evans, R.

    2016-06-01

    We investigate the local structure factor S(z; q) at a free liquid-gas interface in systems with short-ranged intermolecular forces and determine the corrections to the leading-order, capillary-wave-like, Goldstone mode divergence of S(z; q) known to occur for parallel (i.e. measured along the interface) wavevectors q → 0. We show from explicit solution of the inhomogeneous Ornstein-Zernike equation that for distances z far from the interface, where the profile decays exponentially, S(z; q) splits unambiguously into bulk and interfacial contributions. On each side of the interface, the interfacial contributions can be characterised by distinct liquid and gas wavevector-dependent surface tensions, σ_l(q) and σ_g(q), which are determined solely by the bulk two-body and three-body direct correlation functions. At high temperatures, the wavevector dependence simplifies and is determined almost entirely by the appropriate bulk structure factor, leading to positive rigidity coefficients. Our predictions are confirmed by explicit calculation of S(z; q) within square-gradient theory and the Sullivan model. The results for the latter predict a striking temperature dependence for σ_l(q) and σ_g(q), and have implications for fluctuation effects. Our results account quantitatively for the findings of a recent very extensive simulation study by Höfling and Dietrich of the total structure factor in the interfacial region, in a system with a cut-off Lennard-Jones potential, in sharp contrast to extended capillary-wave models which failed completely to describe the simulation results.
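
As a sketch of the notation (standard capillary-wave forms, not the paper's exact expressions), the Goldstone-mode divergence and the low-q organization of the wavevector-dependent tensions can be written:

```latex
% Illustrative capillary-wave-like forms: the interfacial part of the local
% structure factor diverges as q -> 0, and the corrections are absorbed
% into wavevector-dependent surface tensions with rigidity coefficients
% \kappa_{l,g} (positive at high temperature, per the abstract).
S_{\mathrm{int}}(z; q) \;\propto\; \frac{k_B T}{\sigma_{l,g}(q)\, q^{2}}
\qquad (q \to 0),
\qquad
\sigma_{l,g}(q) \;=\; \sigma\bigl(1 + \kappa_{l,g}\, q^{2} + \cdots\bigr).
```

Since σ_{l,g}(q) → σ as q → 0, the leading term reproduces the familiar k_BT/(σq²) capillary-wave divergence, with the rigidity terms supplying the corrections the paper computes.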

  8. Fracture permeability and seismic wave scattering--Poroelastic linear-slip interface model for heterogeneous fractures

    SciTech Connect

    Nakagawa, S.; Myer, L.R.

    2009-06-15

    Schoenberg's Linear-slip Interface (LSI) model for single, compliant, viscoelastic fractures has been extended to poroelastic fractures for predicting seismic wave scattering. However, this extended model results in no impact of the in-plane fracture permeability on the scattering. Recently, we proposed a variant of the LSI model considering the heterogeneity in the in-plane fracture properties. This modified model considers wave-induced, fracture-parallel fluid flow induced by passing seismic waves. The research discussed in this paper applies this new LSI model to heterogeneous fractures to examine when and how the permeability of a fracture is reflected in the scattering of seismic waves. From numerical simulations, we conclude that the heterogeneity in the fracture properties is essential for the scattering of seismic waves to be sensitive to the permeability of a fracture.

  9. Work Practice Simulation of Complex Human-Automation Systems in Safety Critical Situations: The Brahms Generalized Überlingen Model

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Linde, Charlotte; Seah, Chin; Shafto, Michael

    2013-01-01

    The transition from the current air traffic system to the next generation air traffic system will require the introduction of new automated systems, including transferring some functions from air traffic controllers to on­-board automation. This report describes a new design verification and validation (V&V) methodology for assessing aviation safety. The approach involves a detailed computer simulation of work practices that includes people interacting with flight-critical systems. The research is part of an effort to develop new modeling and verification methodologies that can assess the safety of flight-critical systems, system configurations, and operational concepts. The 2002 Ueberlingen mid-air collision was chosen for analysis and modeling because one of the main causes of the accident was one crew's response to a conflict between the instructions of the air traffic controller and the instructions of TCAS, an automated Traffic Alert and Collision Avoidance System on-board warning system. It thus furnishes an example of the problem of authority versus autonomy. It provides a starting point for exploring authority/autonomy conflict in the larger system of organization, tools, and practices in which the participants' moment-by-moment actions take place. We have developed a general air traffic system model (not a specific simulation of Überlingen events), called the Brahms Generalized Ueberlingen Model (Brahms-GUeM). Brahms is a multi-agent simulation system that models people, tools, facilities/vehicles, and geography to simulate the current air transportation system as a collection of distributed, interactive subsystems (e.g., airports, air-traffic control towers and personnel, aircraft, automated flight systems and air-traffic tools, instruments, crew). Brahms-GUeM can be configured in different ways, called scenarios, such that anomalous events that contributed to the Überlingen accident can be modeled as functioning according to requirements or in an

  10. Interfacing comprehensive rotorcraft analysis with advanced aeromechanics and vortex wake models

    NASA Astrophysics Data System (ADS)

    Liu, Haiying

    This dissertation describes three aspects of the comprehensive rotorcraft analysis. First, a physics-based methodology for the modeling of hydraulic devices within multibody-based comprehensive models of rotorcraft systems is developed. This newly proposed approach can predict the fully nonlinear behavior of hydraulic devices, and pressure levels in the hydraulic chambers are coupled with the dynamic response of the system. The proposed hydraulic device models are implemented in a multibody code and calibrated by comparing their predictions with test bench measurements for the UH-60 helicopter lead-lag damper. Predicted peak damping forces were found to be in good agreement with measurements, while the model did not predict the entire time history of damper force to the same level of accuracy. The proposed model evaluates relevant hydraulic quantities such as chamber pressures, orifice flow rates, and pressure relief valve displacements. This model could be used to design lead-lag dampers with desirable force and damping characteristics. The second part of this research is in the area of computational aeroelasticity, in which an interface between computational fluid dynamics (CFD) and computational structural dynamics (CSD) is established. This interface enables data exchange between CFD and CSD with the goal of achieving accurate airloads predictions. In this work, a loose coupling approach based on the delta-airloads method is developed in a finite-element method based multibody dynamics formulation, DYMORE. To validate this aerodynamic interface, a CFD code, OVERFLOW-2, is loosely coupled with a CSD program, DYMORE, to compute the airloads of different flight conditions for Sikorsky UH-60 aircraft. This loose coupling approach has good convergence characteristics. The predicted airloads are found to be in good agreement with the experimental data, although not for all flight conditions. 
In addition, the tight coupling interface between the CFD program, OVERFLOW

  11. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based prediction model is fast and efficient, has good prediction performance, and can improve design efficiency.

  12. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based prediction model is fast and efficient, has good prediction performance, and can improve design efficiency. PMID:26448740
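
GEP evolves a symbolic formula mapping the four comfort impact factors to a comfort score; once evolved, that formula is just an expression tree to evaluate. The sketch below shows only the evaluation step (not the evolutionary search), and the example formula, coefficients, and factor values are entirely invented.

```python
# Illustrative evaluation of a GEP-style symbolic expression: the evolved
# model is represented as a nested tuple (operator, operands...) over the
# four comfort impact factors f1..f4 and evaluated recursively.
import math

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "exp": lambda a: math.exp(a),
}

def evaluate(node, factors):
    """Evaluate an expression tree over a dict of factor values."""
    if isinstance(node, (int, float)):
        return node
    if isinstance(node, str):
        return factors[node]
    op, *args = node
    return OPS[op](*(evaluate(a, factors) for a in args))

# A made-up "evolved" model: score = 5 - 0.8*f1 - 0.5*f2*f3 + exp(-f4)
model = ("+",
         ("-", ("-", 5.0, ("*", 0.8, "f1")),
               ("*", 0.5, ("*", "f2", "f3"))),
         ("exp", ("-", 0.0, "f4")))

score = evaluate(model, {"f1": 1.2, "f2": 0.5, "f3": 0.8, "f4": 2.0})
```

The attraction of GEP here is that the fitted model is an explicit, inspectable formula over the comfort factors rather than a black-box predictor.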

  13. Cockpit automation - In need of a philosophy

    NASA Technical Reports Server (NTRS)

    Wiener, E. L.

    1985-01-01

    Concern has been expressed over the rapid development and deployment of automatic devices in transport aircraft, due mainly to the human interface and particularly the role of automation in inducing human error. The paper discusses the need for coherent philosophies of automation, and proposes several approaches: (1) flight management by exception, which states that as long as a crew stays within the bounds of regulations, air traffic control and flight safety, it may fly as it sees fit; (2) exceptions by forecasting, where the use of forecasting models would predict boundary penetration, rather than waiting for it to happen; (3) goal-sharing, where a computer is informed of overall goals, and subsequently has the capability of checking inputs and aircraft position for consistency with the overall goal or intentions; and (4) artificial intelligence and expert systems, where intelligent machines could mimic human reason.

  14. ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST

    USGS Publications Warehouse

    Winston, Richard B.

    2009-01-01

    ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.

  15. Automation for System Safety Analysis

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  16. Easy-to-use interface

    SciTech Connect

    Blattner, M M; Blattner, D O; Tong, Y

    1999-04-01

Easy-to-use interfaces are a class of interfaces that fall between public access interfaces and graphical user interfaces in usability and cognitive difficulty. We describe the characteristics of easy-to-use interfaces along four dimensions: selection, navigation, direct manipulation, and contextual metaphors. A further constraint we introduced was to include as little text as possible; the text that remains is presented in at least four languages. Formative evaluations were conducted to identify and isolate these characteristics. Our application is a visual interface for a home automation system intended for a diverse set of users. The design will be expanded to accommodate the visually disabled in the near future.

  17. Diffuse-interface modeling of liquid-vapor coexistence in equilibrium drops using smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; Troconis, Jorge; Sira, Eloy; Peña-Polo, Franklin; Klapp, Jaime

    2014-07-01

    We study numerically liquid-vapor phase separation in two-dimensional, nonisothermal, van der Waals (vdW) liquid drops using the method of smoothed particle hydrodynamics (SPH). In contrast to previous SPH simulations of drop formation, our approach is fully adaptive and follows the diffuse-interface model for a single-component fluid, where a reversible, capillary (Korteweg) force is added to the equations of motion to model the rapid but smooth transition of physical quantities through the interface separating the bulk phases. Surface tension arises naturally from the cohesive part of the vdW equation of state and the capillary forces. The drop models all start from a square-shaped liquid and spinodal decomposition is investigated for a range of initial densities and temperatures. The simulations predict the formation of stable, subcritical liquid drops with a vapor atmosphere, with the densities and temperatures of coexisting liquid and vapor in the vdW phase diagram closely matching the binodal curve. We find that the values of surface tension, as determined from the Young-Laplace equation, are in good agreement with the results of independent numerical simulations and experimental data. The models also predict the increase of the vapor pressure with temperature and the fitting to the numerical data reproduces very well the Clausius-Clapeyron relation, thus allowing for the calculation of the vaporization pressure for this vdW fluid.
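The abstract reports that fitting the simulated vapor-pressure data reproduces the Clausius-Clapeyron relation. That fit can be sketched directly: regress ln p against 1/T to recover the constants in ln p = A - B/T. The (T, p) pairs below are synthetic values generated from assumed constants, not SPH output.

```python
import math

def fit_clausius_clapeyron(T, p):
    """Least-squares fit of ln p against 1/T; returns (A, B) in ln p = A - B/T."""
    xs = [1.0 / t for t in T]
    ys = [math.log(v) for v in p]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, -slope

# Synthetic vaporization data from assumed constants (reduced vdW units).
A_true, B_true = 2.0, 4.0
T = [0.8, 0.9, 1.0, 1.1, 1.2]
p = [math.exp(A_true - B_true / t) for t in T]

A, B = fit_clausius_clapeyron(T, p)
```

With B recovered, the latent heat follows from B = L/R, and p(T) can be evaluated at any temperature in the fitted range.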

  18. Diffuse-interface modeling of liquid-vapor coexistence in equilibrium drops using smoothed particle hydrodynamics.

    PubMed

    Sigalotti, Leonardo Di G; Troconis, Jorge; Sira, Eloy; Peña-Polo, Franklin; Klapp, Jaime

    2014-07-01

    We study numerically liquid-vapor phase separation in two-dimensional, nonisothermal, van der Waals (vdW) liquid drops using the method of smoothed particle hydrodynamics (SPH). In contrast to previous SPH simulations of drop formation, our approach is fully adaptive and follows the diffuse-interface model for a single-component fluid, where a reversible, capillary (Korteweg) force is added to the equations of motion to model the rapid but smooth transition of physical quantities through the interface separating the bulk phases. Surface tension arises naturally from the cohesive part of the vdW equation of state and the capillary forces. The drop models all start from a square-shaped liquid and spinodal decomposition is investigated for a range of initial densities and temperatures. The simulations predict the formation of stable, subcritical liquid drops with a vapor atmosphere, with the densities and temperatures of coexisting liquid and vapor in the vdW phase diagram closely matching the binodal curve. We find that the values of surface tension, as determined from the Young-Laplace equation, are in good agreement with the results of independent numerical simulations and experimental data. The models also predict the increase of the vapor pressure with temperature and the fitting to the numerical data reproduces very well the Clausius-Clapeyron relation, thus allowing for the calculation of the vaporization pressure for this vdW fluid. PMID:25122383

  19. Diffuse-interface modeling of liquid-vapor coexistence in equilibrium drops using smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Klapp, Jaime; di G Sigalotti, Leonardo; Troconis, Jorge; Sira, Eloy; Pena, Franklin; ININ-IVIC Team; Cinvestav-UAM-A Team

    2014-11-01

We study numerically liquid-vapor phase separation in two-dimensional, nonisothermal, van der Waals (vdW) liquid drops using the method of Smoothed Particle Hydrodynamics (SPH). In contrast to previous SPH simulations of drop formation, our approach is fully adaptive and follows the diffuse interface model for a single-component fluid, where a reversible, capillary (Korteweg) force is added to the equations of motion to model the rapid but smooth transition of physical quantities through the interface separating the bulk phases. Surface tension arises naturally from the cohesive part of the vdW equation of state and the capillary forces. The drop models all start from a square-shaped liquid and spinodal decomposition is investigated for a range of initial densities and temperatures. The simulations predict the formation of stable, subcritical liquid drops with a vapor atmosphere, with the densities and temperatures of coexisting liquid and vapor in the vdW phase diagram closely matching the binodal curve. We find that the values of surface tension, as determined from the Young-Laplace equation, are in good agreement with the results of independent numerical simulations and experimental data. The models also predict the increase of the vapor pressure with temperature and the fitting to the numerical data reproduces very well the Clausius-Clapeyron relation, thus allowing for the calculation of the vaporization pressure for this vdW fluid.

  20. Diffuse-interface modeling of liquid-vapor coexistence in equilibrium drops using smoothed particle hydrodynamics.

    PubMed

    Sigalotti, Leonardo Di G; Troconis, Jorge; Sira, Eloy; Peña-Polo, Franklin; Klapp, Jaime

    2014-07-01

    We study numerically liquid-vapor phase separation in two-dimensional, nonisothermal, van der Waals (vdW) liquid drops using the method of smoothed particle hydrodynamics (SPH). In contrast to previous SPH simulations of drop formation, our approach is fully adaptive and follows the diffuse-interface model for a single-component fluid, where a reversible, capillary (Korteweg) force is added to the equations of motion to model the rapid but smooth transition of physical quantities through the interface separating the bulk phases. Surface tension arises naturally from the cohesive part of the vdW equation of state and the capillary forces. The drop models all start from a square-shaped liquid and spinodal decomposition is investigated for a range of initial densities and temperatures. The simulations predict the formation of stable, subcritical liquid drops with a vapor atmosphere, with the densities and temperatures of coexisting liquid and vapor in the vdW phase diagram closely matching the binodal curve. We find that the values of surface tension, as determined from the Young-Laplace equation, are in good agreement with the results of independent numerical simulations and experimental data. The models also predict the increase of the vapor pressure with temperature and the fitting to the numerical data reproduces very well the Clausius-Clapeyron relation, thus allowing for the calculation of the vaporization pressure for this vdW fluid.

  1. Modeling nurses' attitude toward using automated unit-based medication storage and distribution systems: an extension of the technology acceptance model.

    PubMed

    Escobar-Rodríguez, Tomás; Romero-Alonso, María Mercedes

    2013-05-01

    This article analyzes the attitude of nurses toward the use of automated unit-based medication storage and distribution systems and identifies influencing factors. Understanding these factors provides an opportunity to explore actions that might be taken to boost adoption by potential users. The theoretical grounding for this research is the Technology Acceptance Model. The Technology Acceptance Model specifies the causal relationships between perceived usefulness, perceived ease of use, attitude toward using, and actual usage behavior. The research model has six constructs, and nine hypotheses were generated from connections between these six constructs. These constructs include perceived risks, experience level, and training. The findings indicate that these three external variables are related to the perceived ease of use and perceived usefulness of automated unit-based medication storage and distribution systems, and therefore, they have a significant influence on attitude toward the use of these systems.

  2. Particles at fluid-fluid interfaces: A new Navier-Stokes-Cahn-Hilliard surface- phase-field-crystal model

    NASA Astrophysics Data System (ADS)

    Aland, Sebastian; Lowengrub, John; Voigt, Axel

    2012-10-01

Colloid particles that are partially wetted by two immiscible fluids can become confined to fluid-fluid interfaces. At sufficiently high volume fractions, the colloids may jam and the interface may crystallize. The fluids together with the interfacial colloids form an emulsion with interesting material properties and offer an important route to new soft materials. A promising approach to simulate these emulsions was presented in Aland et al. [Phys. Fluids 23, 062103 (2011)], where a Navier-Stokes-Cahn-Hilliard model for the macroscopic two-phase fluid system was combined with a surface phase-field-crystal model for the microscopic colloidal particles along the interface. Unfortunately this model leads to spurious velocities which require very fine spatial and temporal resolutions to accurately and stably simulate. In this paper we develop an improved Navier-Stokes-Cahn-Hilliard-surface phase-field-crystal model based on the principles of mass conservation and thermodynamic consistency. To validate our approach, we derive a sharp interface model and show agreement with the improved diffuse interface model. Using simple flow configurations, we show that the new model has much better properties and does not lead to spurious velocities. Finally, we demonstrate the solid-like behavior of the crystallized interface by simulating the fall of a solid ball through a colloid-laden multiphase fluid.
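For orientation, the macroscopic Navier-Stokes-Cahn-Hilliard part of such models is commonly written in the standard "Model H" form below (a generic sketch with phase field phi, chemical potential mu, and mobility m; the paper's surface phase-field-crystal coupling and exact scalings are omitted):

```latex
\begin{aligned}
\rho\,(\partial_t v + v\cdot\nabla v) &= -\nabla p
  + \nabla\cdot\bigl(\eta(\phi)\,\mathbf{D}v\bigr)
  - \varepsilon\,\nabla\cdot(\nabla\phi\otimes\nabla\phi),
  &\qquad \nabla\cdot v &= 0,\\
\partial_t \phi + v\cdot\nabla\phi &= \nabla\cdot\bigl(m\,\nabla\mu\bigr),
  &\qquad \mu &= \varepsilon^{-1} f'(\phi) - \varepsilon\,\Delta\phi .
\end{aligned}
```

The capillary term involving the gradient of phi is what couples interface geometry back into the momentum balance; discretizing it inconsistently is a typical source of the spurious velocities the abstract mentions.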

  3. Particles at fluid-fluid interfaces: A new Navier-Stokes-Cahn-Hilliard surface- phase-field-crystal model.

    PubMed

    Aland, Sebastian; Lowengrub, John; Voigt, Axel

    2012-10-01

    Colloid particles that are partially wetted by two immiscible fluids can become confined to fluid-fluid interfaces. At sufficiently high volume fractions, the colloids may jam and the interface may crystallize. The fluids together with the interfacial colloids form an emulsion with interesting material properties and offer an important route to new soft materials. A promising approach to simulate these emulsions was presented in Aland et al. [Phys. Fluids 23, 062103 (2011)], where a Navier-Stokes-Cahn-Hilliard model for the macroscopic two-phase fluid system was combined with a surface phase-field-crystal model for the microscopic colloidal particles along the interface. Unfortunately this model leads to spurious velocities which require very fine spatial and temporal resolutions to accurately and stably simulate. In this paper we develop an improved Navier-Stokes-Cahn-Hilliard-surface phase-field-crystal model based on the principles of mass conservation and thermodynamic consistency. To validate our approach, we derive a sharp interface model and show agreement with the improved diffuse interface model. Using simple flow configurations, we show that the new model has much better properties and does not lead to spurious velocities. Finally, we demonstrate the solid-like behavior of the crystallized interface by simulating the fall of a solid ball through a colloid-laden multiphase fluid. PMID:23214691

  4. Particles at fluid-fluid interfaces: A new Navier-Stokes-Cahn-Hilliard surface-phase-field-crystal model

    PubMed Central

    Aland, Sebastian; Lowengrub, John; Voigt, Axel

    2013-01-01

    Colloid particles that are partially wetted by two immiscible fluids can become confined to fluid-fluid interfaces. At sufficiently high volume fractions, the colloids may jam and the interface may crystallize. The fluids together with the interfacial colloids form an emulsion with interesting material properties and offer an important route to new soft materials. A promising approach to simulate these emulsions was presented in Aland et al. [Phys. Fluids 23, 062103 (2011)], where a Navier-Stokes-Cahn-Hilliard model for the macroscopic two-phase fluid system was combined with a surface phase-field-crystal model for the microscopic colloidal particles along the interface. Unfortunately this model leads to spurious velocities which require very fine spatial and temporal resolutions to accurately and stably simulate. In this paper we develop an improved Navier-Stokes-Cahn-Hilliard-surface phase-field-crystal model based on the principles of mass conservation and thermodynamic consistency. To validate our approach, we derive a sharp interface model and show agreement with the improved diffuse interface model. Using simple flow configurations, we show that the new model has much better properties and does not lead to spurious velocities. Finally, we demonstrate the solid-like behavior of the crystallized interface by simulating the fall of a solid ball through a colloid-laden multiphase fluid. PMID:23214691

  5. Development and implementation of (Q)SAR modeling within the CHARMMing web-user interface.

    PubMed

    Weidlich, Iwona E; Pevzner, Yuri; Miller, Benjamin T; Filippov, Igor V; Woodcock, H Lee; Brooks, Bernard R

    2015-01-01

Recent availability of large publicly accessible databases of chemical compounds and their biological activities (PubChem, ChEMBL) has inspired us to develop a web-based tool for structure activity relationship and quantitative structure activity relationship modeling to add to the services provided by CHARMMing (www.charmming.org). This new module implements some of the most recent advances in modern machine learning algorithms: Random Forest, Support Vector Machine, Stochastic Gradient Descent, Gradient Tree Boosting, and so forth. A user can import training data from Pubchem Bioassay data collections directly from our interface or upload their own SD files, which contain structures and activity information, to create new models (either categorical or numerical). A user can then track the model generation process and run models on new data to predict activity.
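The workflow the abstract describes (descriptor vectors plus activity labels in, a trained categorical model out, predictions on new compounds) can be sketched minimally. The module's actual learners are Random Forest, SVM, and other scikit-learn-style algorithms; a 1-nearest-neighbour classifier stands in here so the sketch stays dependency-free. The descriptors and labels below are hypothetical.

```python
def predict_activity(train, query):
    """Return the activity label of the training compound nearest to `query`."""
    def dist2(a, b):
        # Squared Euclidean distance in descriptor space.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: dist2(item[0], query))[1]

# (descriptor vector, activity label) pairs; descriptors might be e.g.
# [logP, molecular weight / 100] -- purely illustrative values.
train = [
    ([1.2, 1.8], "active"),
    ([3.5, 4.1], "inactive"),
    ([1.0, 2.0], "active"),
]
label = predict_activity(train, [1.1, 1.9])
```

Swapping the stand-in for any real learner leaves the surrounding import-train-predict workflow unchanged, which is the point of the abstract's interface.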

  6. Development and implementation of (Q)SAR modeling within the CHARMMing web-user interface.

    PubMed

    Weidlich, Iwona E; Pevzner, Yuri; Miller, Benjamin T; Filippov, Igor V; Woodcock, H Lee; Brooks, Bernard R

    2015-01-01

Recent availability of large publicly accessible databases of chemical compounds and their biological activities (PubChem, ChEMBL) has inspired us to develop a web-based tool for structure activity relationship and quantitative structure activity relationship modeling to add to the services provided by CHARMMing (www.charmming.org). This new module implements some of the most recent advances in modern machine learning algorithms: Random Forest, Support Vector Machine, Stochastic Gradient Descent, Gradient Tree Boosting, and so forth. A user can import training data from Pubchem Bioassay data collections directly from our interface or upload their own SD files, which contain structures and activity information, to create new models (either categorical or numerical). A user can then track the model generation process and run models on new data to predict activity. PMID:25362883

  7. Development and implementation of (Q)SAR modeling within the CHARMMing Web-user interface

    PubMed Central

    Weidlich, Iwona E.; Pevzner, Yuri; Miller, Benjamin T.; Filippov, Igor V.; Woodcock, H. Lee; Brooks, Bernard R.

    2014-01-01

    Recent availability of large publicly accessible databases of chemical compounds and their biological activities (PubChem, ChEMBL) has inspired us to develop a Web-based tool for SAR and QSAR modeling to add to the services provided by CHARMMing (www.charmming.org). This new module implements some of the most recent advances in modern machine learning algorithms – Random Forest, Support Vector Machine (SVM), Stochastic Gradient Descent, Gradient Tree Boosting etc. A user can import training data from Pubchem Bioassay data collections directly from our interface or upload his or her own SD files which contain structures and activity information to create new models (either categorical or numerical). A user can then track the model generation process and run models on new data to predict activity. PMID:25362883

  8. Development of an Automated Precipitation Processing Model and Applications in Hydrologic Investigations

    NASA Astrophysics Data System (ADS)

    Milewski, A. M.; Markondiah Jayaprakash, S.; Sultan, M.; Becker, R.

    2006-12-01

Given the advances in new technologies, more and more scientists are beginning to use remote sensing or satellite imagery in their research. Remote sensing data offer a synoptic view and quantitative observational parameters over large domains and thus provide cost-effective solutions by reducing the labor involved in collecting extensive field observations. One of the valuable data sets that can be extracted from remote sensing observations is precipitation. Prior to the deployment of the relevant satellite-based sensors, users had to resort to rainfall stations to obtain precipitation data. Currently, users can freely download digital Tropical Rainfall Measuring Mission (TRMM) and Special Sensor Microwave/Imager (SSM/I) precipitation data; however, the process of data extraction is not user friendly, as it requires computer programming to fully utilize these datasets. We have developed the Automated Precipitation Processing Module (APPM) to simplify the tedious manual process needed to retrieve rainfall estimates from satellite measurements. The function of the APPM is to process the TRMM and SSM/I data according to the user's spatial and temporal inputs. Using APPM, we processed all available TRMM and SSM/I data for six continents (processed data are available on six compact discs, one per continent; refer to www.esrs.wmich.edu). The input data include global SSM/I (1987-1998) and TRMM (1998-2005) data covering an area extending from 50 degrees North to 50 degrees South. Advantages of using our software include: (1) user friendly technology, (2) reduction in processing time (e.g., processing of the entire TRMM & SSM/I dataset (1987-2005) for Africa was reduced from one year to one week), and (3) reduction in required computer resources (original TRMM & SSM/I data: 1.5 terabytes; processed: 300 megabytes). The APPM reads raw binary data and allows for: (1) sub-setting the global dataset given user-defined boundaries (latitude and longitude), (2) selection of
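The sub-setting step the abstract lists first is index arithmetic on a regular global grid. A minimal sketch, assuming a 0.25-degree grid spanning 50N..50S and 180W..180E with row 0 at 50N (the grid layout and resolution are assumptions, not the APPM's documented format):

```python
def subset_indices(lat_n, lat_s, lon_w, lon_e, res=0.25):
    """Convert lat/lon bounds into (row, col) index ranges of the global grid."""
    row0 = int((50.0 - lat_n) / res)    # rows count southward from 50N
    row1 = int((50.0 - lat_s) / res)
    col0 = int((lon_w + 180.0) / res)   # columns count eastward from 180W
    col1 = int((lon_e + 180.0) / res)
    return (row0, row1), (col0, col1)

# Example bounds roughly covering Egypt.
rows, cols = subset_indices(lat_n=37.0, lat_s=22.0, lon_w=25.0, lon_e=35.0)
```

Given these index ranges, the raw binary grid can be read record by record and only the cells inside the window retained, which is where the storage reduction the abstract reports comes from.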

  9. An automated approach for extracting Barrier Island morphology from digital elevation models

    NASA Astrophysics Data System (ADS)

    Wernette, Phillipe; Houser, Chris; Bishop, Michael P.

    2016-06-01

The response and recovery of a barrier island to extreme storms depend on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we can first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach to extracting the dune toe, dune crest, and dune heel is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and features are identified from the calculated RR values. The RR approach outperformed contemporary approaches and represents a fast, objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel be spatially continuous, which is important because dune morphology is likely naturally variable alongshore.
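The multi-scale relative relief idea can be sketched on a 1-D transect: score each cell by where its elevation sits between the local minimum and maximum of a moving window, then average that score over several window sizes. The window sizes and the exact normalisation below are assumptions for illustration, not the paper's calibrated values.

```python
def relative_relief(z, half_widths=(1, 2)):
    """Mean of (z - local_min) / (local_max - local_min) across window scales."""
    out = []
    for i in range(len(z)):
        scores = []
        for s in half_widths:
            w = z[max(0, i - s):i + s + 1]      # window clipped at the edges
            rng = max(w) - min(w)
            scores.append((z[i] - min(w)) / rng if rng else 0.0)
        out.append(sum(scores) / len(scores))
    return out

# Synthetic transect: beach -> dune crest -> back-barrier.
transect = [0, 0, 0, 1, 3, 5, 3, 1, 0, 0]
rr = relative_relief(transect)
crest = rr.index(max(rr))   # the crest is a local maximum at every scale
```

Because the score is computed per pixel from the data itself, no spatially continuous dune line has to be assumed, matching the abstract's point about alongshore variability.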

  10. Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces

    PubMed Central

Malik, Wasim Q.; Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.

    2015-01-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. 
PMID
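The channel-ranking step described in this abstract can be sketched without the full state-space machinery: score each neural channel by how strongly it tracks a behavioral covariate, then sort. Squared correlation stands in here for the paper's generalized modulation-depth measure, and the synthetic signals, gains, and deterministic "noise" are all assumptions.

```python
import math

def r_squared(x, y):
    """Squared Pearson correlation, a crude proxy for modulation depth."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

kinematics = [float(t) for t in range(20)]         # intended movement covariate
noise = [math.sin(7.0 * t) for t in range(20)]     # deterministic pseudo-noise
gains = [2.0, 0.5, 0.0]                            # per-channel tuning strengths
channels = [[g * k + n for k, n in zip(kinematics, noise)] for g in gains]

# Rank channels from most to least informative about the covariate.
ranked = sorted(range(len(channels)),
                key=lambda c: r_squared(kinematics, channels[c]),
                reverse=True)
```

Decoding would then keep only the top-ranked subset, with the cutoff chosen by a model order selection criterion as in the abstract.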

  11. Modulation depth estimation and variable selection in state-space models for neural interfaces.

    PubMed

    Malik, Wasim Q; Hochberg, Leigh R; Donoghue, John P; Brown, Emery N

    2015-02-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. 
PMID

12. Automated data evaluation and modelling of simultaneous ¹⁹F-¹H medium-resolution NMR spectra for online reaction monitoring.

    PubMed

    Zientek, Nicolai; Laurain, Clément; Meyer, Klas; Paul, Andrea; Engel, Dirk; Guthausen, Gisela; Kraume, Matthias; Maiwald, Michael

    2016-06-01

Medium-resolution nuclear magnetic resonance spectroscopy (MR-NMR) is currently developing into an important analytical tool for both quality control and process monitoring. In contrast to high-resolution online NMR (HR-NMR), MR-NMR can be operated under rough environmental conditions. A continuously re-circulating stream of reaction mixture from the reaction vessel to the NMR spectrometer enables a non-invasive, volume-integrating online analysis of reactants and products. Here, we investigate the esterification of 2,2,2-trifluoroethanol with acetic acid to 2,2,2-trifluoroethyl acetate as a model system, both by ¹H HR-NMR (500 MHz) and by ¹H and ¹⁹F MR-NMR (43 MHz). The parallel online measurement is realised by splitting the flow, which allows quantitative and independent flow rates to be adjusted in both the HR-NMR probe and the MR-NMR probe, in addition to a fast bypass line back to the reactor. One of the fundamental acceptance criteria for online MR-NMR spectroscopy is a robust data treatment and evaluation strategy with the potential for automation. The MR-NMR spectra are treated with an automated baseline and phase correction using the minimum entropy method. The evaluation strategies comprise (i) direct integration, (ii) automated line fitting, (iii) indirect hard modelling (IHM) and (iv) partial least squares regression (PLS-R). To assess the potential of these evaluation strategies for MR-NMR, prediction results are compared with the line-fitting data derived from quantitative HR-NMR spectroscopy. Although superior results are obtained from both IHM and PLS-R for ¹H MR-NMR, the latter in particular demands elaborate data pretreatment, whereas the IHM models needed no prior alignment. Copyright © 2015 John Wiley & Sons, Ltd.
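The simplest of the abstract's four evaluation strategies, direct integration, amounts to integrating each species' peak region and forming a conversion from the areas. A sketch on synthetic Lorentzian peaks; the peak positions, widths, areas, and trapezoidal quadrature are assumptions, not the paper's processing chain.

```python
import math

def lorentzian(x, x0, gamma, area):
    """Lorentzian line of given integrated area centred at x0."""
    return area * gamma / (math.pi * ((x - x0) ** 2 + gamma ** 2))

def integrate(xs, ys, lo, hi):
    """Trapezoidal integral of the spectrum over the [lo, hi] ppm window."""
    pts = list(zip(xs, ys))
    total = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if lo <= x0 and x1 <= hi:
            total += 0.5 * (y0 + y1) * (x1 - x0)
    return total

# Synthetic spectrum: reactant peak near 1.5 ppm, product peak near 4.5 ppm.
xs = [i * 0.001 for i in range(6000)]    # 0..6 ppm axis
spec = [lorentzian(x, 1.5, 0.01, 30.0) + lorentzian(x, 4.5, 0.01, 70.0)
        for x in xs]

ester = integrate(xs, spec, 3.5, 5.5)    # product peak region
alcohol = integrate(xs, spec, 0.5, 2.5)  # reactant peak region
conversion = ester / (ester + alcohol)
```

Line fitting, IHM, and PLS-R replace the fixed integration windows with models of the line shapes, which is what makes them more robust to the overlapping, broad lines of MR-NMR.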

  13. Weak solutions for a non-Newtonian diffuse interface model with different densities

    NASA Astrophysics Data System (ADS)

    Abels, Helmut; Breit, Dominic

    2016-11-01

    We consider weak solutions for a diffuse interface model of two non-Newtonian viscous, incompressible fluids of power-law type with different densities in a bounded, sufficiently smooth domain. This leads to a coupled system of a nonhomogeneous generalized Navier-Stokes system and a Cahn-Hilliard equation. For the Cahn-Hilliard part, a smooth free energy density and a constant, positive mobility are assumed. Using the L∞-truncation method, we prove existence of weak solutions for a power-law exponent p > (2d+2)/(d+2), d = 2, 3.
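    For orientation, a standard power-law (generalized Newtonian) stress tensor and the exponent condition quoted above can be written out as follows; the precise constitutive form used by the authors may differ from this common model:

    ```latex
    S(Dv) = \nu \,\bigl(1 + |Dv|\bigr)^{p-2} Dv ,
    \qquad
    p > \frac{2d+2}{d+2} =
    \begin{cases}
      \tfrac{3}{2}, & d = 2,\\[2pt]
      \tfrac{8}{5}, & d = 3,
    \end{cases}
    ```

    where Dv = (∇v + ∇vᵀ)/2 is the symmetric velocity gradient; smaller exponents correspond to stronger shear-thinning and make the existence theory harder.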

  14. Molecules to modeling: Toxoplasma gondii oocysts at the human–animal–environment interface

    PubMed Central

    VanWormer, Elizabeth; Fritz, Heather; Shapiro, Karen; Mazet, Jonna A.K.; Conrad, Patricia A.

    2013-01-01

    Environmental transmission of extremely resistant Toxoplasma gondii oocysts has resulted in infection of diverse species around the world, leading to severe disease and deaths in human and animal populations. This review explores T. gondii oocyst shedding, survival, and transmission, emphasizing the importance of linking laboratory and landscape from molecular characterization of oocysts to watershed-level models of oocyst loading and transport in terrestrial and aquatic systems. Building on discipline-specific studies, a One Health approach incorporating tools and perspectives from diverse fields and stakeholders has contributed to an advanced understanding of T. gondii and is addressing transmission at the rapidly changing human–animal–environment interface. PMID:23218130

  15. Modeling Proteins at the Interface of Structure, Evolution, and Population Genetics

    NASA Astrophysics Data System (ADS)

    Teufel, Ashley I.; Grahnen, Johan A.; Liberles, David A.

    Biological systems span multiple layers of organization, and modeling across these layers enables inference that is not possible from any single layer. An example is an organism's fitness, which can be directly impacted by selection for output from a metabolic or signal transduction pathway. Even this complex process is already several layers removed from the environment and ecosystem. Within the pathway are individual enzymatic reactions and protein-protein, protein-small molecule, and protein-DNA interactions. Enzymatic and physical constants characterize these reactions and interactions, where selection dictates ranges and thresholds of values that depend in turn on the values for other links in the pathway. The physical constants (for protein-protein binding, for example) are dictated by the amino acid sequences at the interface. These constants are also constrained by the amino acid sequences necessary to maintain a properly folded structure as a scaffold for the interaction interface. As sequences evolve, population genetic and molecular evolutionary models describe the availability of combinations of amino acid changes for selection, depending in turn on parameters such as the mutation rate and effective population size. Because the systems biology level of constraints has not been thoroughly characterized, we describe this multiscale modeling problem, which captures the interplay between protein biophysical chemistry and population genetics/molecular evolution.

  16. Modelling the Bioelectronic Interface in Engineered Tethered Membranes: From Biosensing to Electroporation.

    PubMed

    Hoiles, William; Krishnamurthy, Vikram; Cornell, Bruce

    2015-06-01

    This paper studies the construction and predictive models of three novel measurement platforms: (i) a Pore Formation Measurement Platform (PFMP) for detecting the presence of pore-forming proteins and peptides, (ii) the Ion Channel Switch (ICS) biosensor for detecting the presence of analyte molecules in a fluid chamber, and (iii) an Electroporation Measurement Platform (EMP) that provides reliable measurements of the electroporation phenomenon. Common to all three measurement platforms is an engineered tethered membrane, formed via a rapid solvent-exchange technique, which gives the platform a lifetime of several months. The membrane is tethered to a gold-electrode bioelectronic interface that includes an ionic reservoir separating the membrane and the gold surface, allowing the membrane to mimic the physiological response of natural cell membranes. The electrical responses of the PFMP, ICS, and EMP are predicted using continuum theories for electrodiffusive flow coupled with boundary conditions modelling the chemical reactions and electrical double layers present at the bioelectronic interface. Experimental measurements are used to validate the predictive accuracy of the dynamic models. These include using the PFMP to measure the pore formation dynamics of the antimicrobial peptide PGLa and the protein toxin Staphylococcal α-Hemolysin; the ICS biosensor to measure nano-molar concentrations of streptavidin, ferritin, thyroid stimulating hormone (TSH), and human chorionic gonadotropin (the pregnancy hormone hCG); and the EMP to measure electroporation of membranes with different tethering densities and membrane compositions.

  17. Fluid-assisted deformation of the subduction interface: Coupled and decoupled regimes from 2-D hydromechanical modeling

    NASA Astrophysics Data System (ADS)

    Zheng, Liang; May, Dave; Gerya, Taras; Bostock, Michael

    2016-08-01

    Shear deformation accompanied by fluid activity inside the subduction interface is related to many tectonic energy-releasing events, including regular and slow earthquakes. We have numerically examined the fluid-rock interactions inside a deforming subduction interface using state-of-the-art 2-D hydromechanical numerical models, which incorporate rock fracturing as a plastic rheology that depends on the pore fluid pressure. Our modeling results suggest that the deforming subduction interface exhibits two typical dynamical regimes, namely a "coupled" and a "decoupled" regime. In the coupled regime the subduction interface is subdivided into multiple rigid blocks, each separated by a narrow shear zone inclined at an angle of 15-20° with respect to the slab surface. In contrast, in the decoupled regime the subduction interface is divided into two distinct layers moving relative to each other along a pervasive slab-surface-parallel shear zone. Through a systematic parameter study, we observe that the tensile strength (cohesion) of the material within the subduction interface dictates the resulting style of deformation within the interface: high cohesion (~60 MPa) results in the coupled regime, while low cohesion (~10 MPa) leads to the decoupled regime. We also demonstrate that the lithostatic pressure and the inflow/outflow fluid fluxes (i.e., the fluid-fluxed boundary condition) influence the location and orientation of faults. Predictions from our numerical models are supported by laboratory experiments, geological data, and geophysical observations from modern subduction settings.
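    A pore-fluid-pressure-dependent plastic rheology of the kind described above is commonly parameterised in geodynamic codes as yield = C + sin(φ)·(P − P_f); whether this matches the authors' exact formulation is an assumption. The sketch below contrasts the two cohesion end-members from the parameter study under an assumed lithostatic pressure and pore-fluid pressure factor:

    ```python
    import math

    def yield_stress(cohesion_pa, friction_angle_deg, p_solid_pa, p_fluid_pa):
        """Drucker-Prager-style plastic yield stress with pore-fluid weakening:
        sigma_yield = C + sin(phi) * (P - P_f). A common geodynamic
        parameterisation; the paper's exact formulation may differ."""
        phi = math.radians(friction_angle_deg)
        return cohesion_pa + math.sin(phi) * max(p_solid_pa - p_fluid_pa, 0.0)

    P = 1.0e9     # assumed lithostatic pressure at subduction-interface depth, Pa
    lam = 0.95    # assumed pore-fluid pressure factor, lambda = P_f / P

    # The two cohesion end-members explored in the parameter study:
    tau_coupled = yield_stress(60e6, 30.0, P, lam * P)    # high cohesion, ~60 MPa
    tau_decoupled = yield_stress(10e6, 30.0, P, lam * P)  # low cohesion, ~10 MPa
    ```

    With near-lithostatic fluid pressure the pressure-dependent term is small, so the cohesion contrast dominates the strength of the interface, consistent with cohesion controlling which regime develops.
    
    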

  18. Establishing a Novel Modeling Tool: A Python-Based Interface for a Neuromorphic Hardware System

    PubMed Central

    Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz

    2008-01-01

    Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated. PMID:19562085
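    The simulator-independent language described here follows the pattern popularised by PyNN-style APIs: one experiment description runnable on interchangeable backends. The toy Python sketch below illustrates that structure only; the class and method names are invented for this sketch and are not the authors' actual API.

    ```python
    # Toy illustration of a simulator-independent experiment description.
    # Backend and method names are invented; real frameworks of this kind
    # (e.g. PyNN) define the portable API the abstract describes.

    class SoftwareBackend:
        name = "software-simulator"
        def run(self, n_neurons, duration_ms):
            # stand-in for a numerical network simulation
            return {"backend": self.name, "spikes": n_neurons * duration_ms // 10}

    class HardwareBackend:
        name = "neuromorphic-hardware"
        def run(self, n_neurons, duration_ms):
            # stand-in for the accelerated hardware emulation
            return {"backend": self.name, "spikes": n_neurons * duration_ms // 10}

    def experiment(backend, n_neurons=100, duration_ms=1000):
        """One unchanged experiment description, runnable on any backend."""
        return backend.run(n_neurons, duration_ms)

    result_sw = experiment(SoftwareBackend())
    result_hw = experiment(HardwareBackend())
    ```

    Because the experiment function never references a concrete backend, the same description runs on the software simulator and the hardware system, which is what makes the quantitative hardware-versus-simulator comparison straightforward.
    
    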

  19. Establishing a novel modeling tool: a python-based interface for a neuromorphic hardware system.

    PubMed

    Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz

    2009-01-01

    Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated. PMID:19562085

  20. A phase field dislocation dynamics model for a bicrystal interface system: An investigation into dislocation slip transmission across cube-on-cube interfaces

    SciTech Connect

    Zeng, Y.; Hunter, A.; Beyerlein, I. J.; Koslowski, M.

    2015-09-14

    In this study, we present a phase field dislocation dynamics formulation designed to treat a system composed of two materials, differing in moduli and lattice parameters, that meet at a common interface. We apply the model to calculate the critical stress τcrit required to transmit a perfect dislocation across the bimaterial interface with a cube-on-cube orientation relationship. The calculation of τcrit accounts for the effects of: 1) the lattice mismatch (misfit or coherency stresses), 2) the elastic moduli mismatch (Koehler forces or image stresses), and 3) the formation of the residual dislocation in the interface. Our results show that the value of τcrit associated with the transmission of a dislocation from material 1 to material 2 is not the same as that from material 2 to material 1. Dislocation transmission from the material with the lower shear modulus and larger lattice parameter tends to be easier than the reverse, and this apparent asymmetry in τcrit generally increases with increases in either the lattice or the moduli mismatch, or both. In an effort to clarify the roles of lattice and moduli mismatch, we construct an analytical model for τcrit based on the formation energy of the residual dislocation. We show that path dependence in this energetic barrier can explain the asymmetry seen in the calculated τcrit values.
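    The Koehler (image-stress) ingredient of τcrit has a classical closed-form estimate for a screw dislocation near a bimaterial interface, sketched below with assumed material numbers. Note the sign: the image stress alone repels a dislocation approaching the stiffer material, i.e. it favours transmission into the softer side, so the opposite overall trend reported above must come from the other ingredients (misfit stresses and the residual dislocation), which is why decomposing τcrit matters.

    ```python
    import math

    def koehler_barrier_stress(mu_from, mu_to, b, d):
        """Classical screw-dislocation estimate of the image (Koehler) stress
        at distance d from a bimaterial interface; positive = repulsive barrier.

            tau ~ (mu_from * b / (4*pi*d)) * (mu_to - mu_from) / (mu_to + mu_from)

        This textbook estimate is only one ingredient of the paper's tau_crit."""
        return mu_from * b / (4 * math.pi * d) * (mu_to - mu_from) / (mu_to + mu_from)

    b = 0.25e-9             # Burgers vector magnitude, m (assumed)
    d = 1.0e-9              # distance from the interface, m (assumed)
    mu1, mu2 = 30e9, 60e9   # shear moduli of the two crystals, Pa (assumed)

    tau_1_to_2 = koehler_barrier_stress(mu1, mu2, b, d)  # soft -> stiff: barrier
    tau_2_to_1 = koehler_barrier_stress(mu2, mu1, b, d)  # stiff -> soft: attractive
    ```

    The two directions give barriers of different magnitude as well as opposite sign, so even this single ingredient is direction-dependent, consistent with the path dependence the authors identify in the full energetic barrier.
    
    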