Science.gov

Sample records for automated modelling interface

  1. Automated Fluid Interface System (AFIS)

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Automated remote fluid servicing will be necessary for future space missions, as future satellites will be designed for on-orbit consumable replenishment. In order to develop an on-orbit remote servicing capability, a standard interface between a tanker and the receiving satellite is needed. The objective of the Automated Fluid Interface System (AFIS) program is to design, fabricate, and functionally demonstrate compliance with all design requirements for an automated fluid interface system. A description and documentation of the Fairchild AFIS design is provided.

  2. Testing of the Automated Fluid Interface System

    NASA Technical Reports Server (NTRS)

    Johnston, A. S.; Tyler, Tony R.

    1998-01-01

    The Automated Fluid Interface System (AFIS) is an advanced development prototype satellite servicer. The device was designed to transfer consumables from one spacecraft to another. An engineering model was built and underwent development testing at Marshall Space Flight Center. While the current AFIS is not suitable for spaceflight, testing and evaluation of the AFIS provided significant experience which would be beneficial in building a flight unit.

  3. Development and testing of the Automated Fluid Interface System

    NASA Astrophysics Data System (ADS)

    Milton, Martha E.; Tyler, Tony R.

    1993-05-01

The Automated Fluid Interface System (AFIS) is an advanced development program aimed at becoming the standard interface for satellite servicing for years to come. The AFIS will be capable of transferring propellants, fluids, gases, power, and cryogens from a tanker to an orbiting satellite. The AFIS program currently under consideration is a joint venture between the NASA/Marshall Space Flight Center and Moog, Inc. An engineering model has been built and is undergoing development testing to investigate the mechanism's abilities.

  4. Development and testing of the Automated Fluid Interface System

    NASA Technical Reports Server (NTRS)

    Milton, Martha E.; Tyler, Tony R.

    1993-01-01

The Automated Fluid Interface System (AFIS) is an advanced development program aimed at becoming the standard interface for satellite servicing for years to come. The AFIS will be capable of transferring propellants, fluids, gases, power, and cryogens from a tanker to an orbiting satellite. The AFIS program currently under consideration is a joint venture between the NASA/Marshall Space Flight Center and Moog, Inc. An engineering model has been built and is undergoing development testing to investigate the mechanism's abilities.

  5. Automated identification and indexing of dislocations in crystal interfaces

    SciTech Connect

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

    Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.

  6. Automated identification and indexing of dislocations in crystal interfaces

    DOE PAGES Beta

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.
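The discrete Burgers circuit at the heart of the method in these two records can be illustrated in a few lines: sum the differences between actual bond vectors along a closed atom circuit and the corresponding bonds in the ideal template; a nonzero sum is the Burgers vector of the enclosed dislocation. The toy four-step circuit below is invented for illustration and is far simpler than the paper's full atomistic mapping:

```python
import numpy as np

def burgers_vector(circuit_actual, circuit_ideal):
    """Discrete Burgers circuit: sum the mismatch between the actual bond
    vectors along a closed circuit and the matching bonds in the ideal
    (defect-free) template configuration."""
    actual = np.asarray(circuit_actual, dtype=float)
    ideal = np.asarray(circuit_ideal, dtype=float)
    return (actual - ideal).sum(axis=0)

# Toy circuit of four bond steps around an edge dislocation: one step is
# stretched by half a lattice vector relative to the perfect-crystal template.
ideal = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
actual = [(1.5, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
b = burgers_vector(actual, ideal)
print(b)  # Burgers vector [0.5, 0, 0]
```

A perfect circuit (actual equal to ideal) sums to zero, which is how the algorithm distinguishes dislocated from coherent regions of an interface.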

  7. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  8. Automation Interfaces of the Orion GNC Executive Architecture

    NASA Technical Reports Server (NTRS)

    Hart, Jeremy

    2009-01-01

    This viewgraph presentation describes Orion mission's automation Guidance, Navigation and Control (GNC) architecture and interfaces. The contents include: 1) Orion Background; 2) Shuttle/Orion Automation Comparison; 3) Orion Mission Sequencing; 4) Orion Mission Sequencing Display Concept; and 5) Status and Forward Plans.

  9. Towards automation of user interface design

    NASA Technical Reports Server (NTRS)

    Gastner, Rainer; Kraetzschmar, Gerhard K.; Lutz, Ernst

    1992-01-01

    This paper suggests an approach to automatic software design in the domain of graphical user interfaces. There are still some drawbacks in existing user interface management systems (UIMS's) which basically offer only quantitative layout specifications via direct manipulation. Our approach suggests a convenient way to get a default graphical user interface which may be customized and redesigned easily in further prototyping cycles.

  10. Control Interface and Tracking Control System for Automated Poultry Inspection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A new visible/near-infrared inspection system interface was developed in order to conduct research to test and implement an automated chicken inspection system for online operation on commercial chicken processing lines. The spectroscopic system demonstrated effective spectral acquisition and data ...

  11. Space station automation and robotics study. Operator-systems interface

    NASA Technical Reports Server (NTRS)

    1984-01-01

    This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), establish the technologies needed to meet these requirements, and to forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.

  12. Geographic information system/watershed model interface

    USGS Publications Warehouse

    Fisher, Gary T.

    1989-01-01

    Geographic information systems allow for the interactive analysis of spatial data related to water-resources investigations. A conceptual design for an interface between a geographic information system and a watershed model includes functions for the estimation of model parameter values. Design criteria include ease of use, minimal equipment requirements, a generic data-base management system, and use of a macro language. An application is demonstrated for a 90.1-square-kilometer subbasin of the Patuxent River near Unity, Maryland, that performs automated derivation of watershed parameters for hydrologic modeling.

  13. Database-driven web interface automating gyrokinetic simulations for validation

    NASA Astrophysics Data System (ADS)

    Ernst, D. R.

    2010-11-01

We are developing a web interface to connect plasma microturbulence simulation codes with experimental data. The website automates the preparation of gyrokinetic simulations utilizing plasma profile and magnetic equilibrium data from TRANSP analysis of experiments, read from MDSPLUS over the internet. This database-driven tool saves user sessions, allowing searches of previous simulations, which can be restored to repeat the same analysis for a new discharge. The website includes a multi-tab, multi-frame, publication quality java plotter Webgraph, developed as part of this project. Input files can be uploaded as templates and edited with context-sensitive help. The website creates inputs for GS2 and GYRO using a well-tested and verified back-end, in use for several years for the GS2 code [D. R. Ernst et al., Phys. Plasmas 11(5) 2637 (2004)]. A centralized web site has the advantage that users receive bug fixes instantaneously, while avoiding the duplicated effort of local compilations. Possible extensions to the database to manage run outputs, toward prototyping for the Fusion Simulation Project, are envisioned. Much of the web development utilized support from the DoE National Undergraduate Fellowship program [e.g., A. Suarez and D. R. Ernst, http://meetings.aps.org/link/BAPS.2005.DPP.GP1.57].

  14. Automated parking garage system model

    NASA Technical Reports Server (NTRS)

    Collins, E. R., Jr.

    1975-01-01

A one-twenty-fifth scale model of the key components of an automated parking garage system is described. The design of the model required transferring a vehicle from an entry level, vertically (+Z, -Z), to a storage location at any one of four storage positions (+X, -X, +Y, -Y) on the storage levels. There are three primary subsystems: (1) a screw jack to provide the vertical motion of the elevator, (2) a cam-driven track-switching device to provide X to Y motion, and (3) a transfer cart to provide horizontal travel and a small amount of vertical motion for transfer to the storage location. Motive power is provided by dc permanent magnet gear motors, one each for the elevator and track-switching device and two for the transfer cart drive system (one driving the cart horizontally and the other providing the vertical transfer). The control system, through the use of a microprocessor, provides complete automation through a feedback system which utilizes sensing devices.

  15. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, that is, with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be succinct. The report discusses the underlying concepts and the formal methods for this approach. Two examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.

  16. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Shafto, Michael; Meyer, George; Clancy, Daniel (Technical Monitor)

    2001-01-01

This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, i.e., with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be as succinct as possible. The report discusses the underlying concepts and the formal methods for this approach. Several examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.
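The "correctness" criterion in these two reports can be sketched as a check on a state-machine abstraction: an interface that lumps machine states into blocks is adequate only if every event maps each block into a single block, so the user can always predict the abstract next state. The toy machine and function below are illustrative, not taken from the reports:

```python
def interface_is_correct(delta, blocks):
    """delta: dict (state, event) -> next state; blocks: list of state sets
    forming the interface abstraction. Returns False if two states in the
    same block react to the same event by moving to different blocks."""
    block_of = {s: i for i, blk in enumerate(blocks) for s in blk}
    events = {e for (_, e) in delta}
    for blk in blocks:
        for e in events:
            targets = {block_of[delta[(s, e)]] for s in blk if (s, e) in delta}
            if len(targets) > 1:   # same abstract state, same event,
                return False       # different abstract outcomes: incorrect
    return True

# Toy 3-state machine in which states 1 and 2 are abstractly interchangeable.
delta = {(0, "a"): 1, (0, "b"): 2,
         (1, "a"): 0, (1, "b"): 1,
         (2, "a"): 0, (2, "b"): 2}
print(interface_is_correct(delta, [{0}, {1, 2}]))  # adequate abstraction
print(interface_is_correct(delta, [{0, 1}, {2}]))  # inadequate abstraction
```

Succinctness then amounts to preferring the coarsest partition that still passes this check.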

  17. Old and New Models for Office Automation.

    ERIC Educational Resources Information Center

    Cole, Eliot

    1983-01-01

    Discusses organization design as context for office automation; mature computer-based systems as one application of organization design variables; and emerging office automation systems (organizational information management, personal information management) as another application of these variables. Management information systems models and…

  18. Theoretical considerations in designing operator interfaces for automated systems

    NASA Technical Reports Server (NTRS)

    Norman, Susan D.

    1987-01-01

    The domains most amenable to techniques based on artificial intelligence (AI) are those that are systematic or for which a systematic domain can be generated. In aerospace systems, many operational tasks are systematic owing to the highly procedural nature of the applications. However, aerospace applications can also be nonprocedural, particularly in the event of a failure or an unexpected event. Several techniques are discussed for designing automated systems for real-time, dynamic environments, particularly when a 'breakdown' occurs. A breakdown is defined as operation of an automated system outside its predetermined, conceptual domain.

  19. Alloy Interface Interdiffusion Modeled

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Garces, Jorge E.; Abel, Phillip B.

    2003-01-01

    With renewed interest in developing nuclear-powered deep space probes, attention will return to improving the metallurgical processing of potential nuclear fuels so that they remain dimensionally stable over the years required for a successful mission. Previous work on fuel alloys at the NASA Glenn Research Center was primarily empirical, with virtually no continuing research. Even when empirical studies are exacting, they often fail to provide enough insight to guide future research efforts. In addition, from a fundamental theoretical standpoint, the actinide metals (which include materials used for nuclear fuels) pose a severe challenge to modern electronic-structure theory. Recent advances in quantum approximate atomistic modeling, coupled with first-principles derivation of needed input parameters, can help researchers develop new alloys for nuclear propulsion.

  20. Cooperative control - The interface challenge for men and automated machines

    NASA Technical Reports Server (NTRS)

    Hankins, W. W., III; Orlando, N. E.

    1984-01-01

    The research issues associated with the increasing autonomy and independence of machines and their evolving relationships to human beings are explored. The research, conducted by Langley Research Center (LaRC), will produce a new social work order in which the complementary attributes of robots and human beings, which include robots' greater strength and precision and humans' greater physical and intellectual dexterity, are necessary for systems of cooperation. Attention is given to the tools for performing the research, including the Intelligent Systems Research Laboratory (ISRL) and industrial manipulators, as well as to the research approaches taken by the Automation Technology Branch (ATB) of LaRC to achieve high automation levels. The ATB is focusing on artificial intelligence research through DAISIE, a system which tends to organize its environment into hierarchical controller/planner abstractions.

  1. Automation and Accountability in Decision Support System Interface Design

    ERIC Educational Resources Information Center

    Cummings, Mary L.

    2006-01-01

    When the human element is introduced into decision support system design, entirely new layers of social and ethical issues emerge but are not always recognized as such. This paper discusses those ethical and social impact issues specific to decision support systems and highlights areas that interface designers should consider during design with an…

  2. Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.

    2000-01-01

    The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.

  3. Model-Based Design of Air Traffic Controller-Automation Interaction

    NASA Technical Reports Server (NTRS)

    Romahn, Stephan; Callantine, Todd J.; Palmer, Everett A.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    A model of controller and automation activities was used to design the controller-automation interactions necessary to implement a new terminal area air traffic management concept. The model was then used to design a controller interface that provides the requisite information and functionality. Using data from a preliminary study, the Crew Activity Tracking System (CATS) was used to help validate the model as a computational tool for describing controller performance.

  4. Reinventing the energy modelling-policy interface

    NASA Astrophysics Data System (ADS)

    Strachan, Neil; Fais, Birgit; Daly, Hannah

    2016-03-01

    Energy modelling has a crucial underpinning role for policy making, but the modelling-policy interface faces several limitations. A reinvention of this interface would better provide timely, targeted, tested, transparent and iterated insights from such complex multidisciplinary tools.

  5. Automated two-dimensional interface for capillary gas chromatography

    DOEpatents

    Strunk, Michael R.; Bechtold, William E.

    1996-02-20

    A multidimensional gas chromatograph (GC) system having wide bore capillary and narrow bore capillary GC columns in series and having a novel system interface. Heart cuts from a high flow rate sample, separated by a wide bore GC column, are collected and directed to a narrow bore GC column with carrier gas injected at a lower flow compatible with a mass spectrometer. A bimodal six-way valve is connected with the wide bore GC column outlet and a bimodal four-way valve is connected with the narrow bore GC column inlet. A trapping and retaining circuit with a cold trap is connected with the six-way valve and a transfer circuit interconnects the two valves. The six-way valve is manipulated between first and second mode positions to collect analyte, and the four-way valve is manipulated between third and fourth mode positions to allow carrier gas to sweep analyte from a deactivated cold trap, through the transfer circuit, and then to the narrow bore GC capillary column for separation and subsequent analysis by a mass spectrometer. Rotary valves have substantially the same bore width as their associated columns to minimize flow irregularities and resulting sample peak deterioration. The rotary valves are heated separately from the GC columns to avoid temperature lag and resulting sample deterioration.

  6. Automated two-dimensional interface for capillary gas chromatography

    DOEpatents

    Strunk, M.R.; Bechtold, W.E.

    1996-02-20

    A multidimensional gas chromatograph (GC) system is disclosed which has wide bore capillary and narrow bore capillary GC columns in series and has a novel system interface. Heart cuts from a high flow rate sample, separated by a wide bore GC column, are collected and directed to a narrow bore GC column with carrier gas injected at a lower flow compatible with a mass spectrometer. A bimodal six-way valve is connected with the wide bore GC column outlet and a bimodal four-way valve is connected with the narrow bore GC column inlet. A trapping and retaining circuit with a cold trap is connected with the six-way valve and a transfer circuit interconnects the two valves. The six-way valve is manipulated between first and second mode positions to collect analyte, and the four-way valve is manipulated between third and fourth mode positions to allow carrier gas to sweep analyte from a deactivated cold trap, through the transfer circuit, and then to the narrow bore GC capillary column for separation and subsequent analysis by a mass spectrometer. Rotary valves have substantially the same bore width as their associated columns to minimize flow irregularities and resulting sample peak deterioration. The rotary valves are heated separately from the GC columns to avoid temperature lag and resulting sample deterioration. 3 figs.

  7. Automated chemical mass balance receptor modeling

    SciTech Connect

    Hanrahan, P.L.; Core, J.E.

    1986-09-01

    Chemical mass balance (CMB) receptor modeling provides alternative or complementary methods to dispersion models for apportioning particulate source impacts. This method estimates particulate source contributions at a receptor by comparing the chemistry of the ambient aerosol to the chemistry of the emissions from the various sources. To minimize demands on the analyst and facilitate the processing of large volumes of data, an initial version of an automated CMB model has been developed and is operational on an IBM personal computer as well as on a Harris mini-mainframe computer. Although it currently does not have all the features of the more interactive manual model, it does show promise for reducing man-power demands. The automated model is based on an early version of the EPA CMB model, which has been converted to run on an IBM-PC or compatible microcomputer. It uses the effective variance method. The interactive manual model is also undergoing modifications under an EPA contract. Some of these new features of the EPA model have been included in one version of the automated model.
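The effective variance method named in this record is an iteratively reweighted least-squares fit: each species is weighted by a variance that folds the source-profile uncertainties, scaled by the current contribution estimates, into the measurement variance. The sketch below uses invented concentrations and uncertainties and is not the EPA implementation:

```python
import numpy as np

def cmb_effective_variance(C, sig_C, F, sig_F, iters=20):
    """Chemical mass balance by effective-variance weighted least squares.
    C: ambient species concentrations (n_species,); sig_C: their 1-sigma
    uncertainties; F: source profile matrix (n_species, n_sources); sig_F:
    profile uncertainties. Iterates because the effective variance depends
    on the current source-contribution estimate S."""
    S = np.zeros(F.shape[1])
    for _ in range(iters):
        # Effective variance per species: measurement variance plus profile
        # variance amplified by the current contributions.
        V_eff = sig_C**2 + (sig_F**2) @ (S**2)
        W = np.diag(1.0 / V_eff)
        # Weighted least squares: S = (F^T W F)^-1 F^T W C
        S = np.linalg.solve(F.T @ W @ F, F.T @ W @ C)
    return S

# Two sources, three species; true contributions are 2.0 and 1.0.
F = np.array([[0.5, 0.1],
              [0.2, 0.6],
              [0.3, 0.3]])
S_true = np.array([2.0, 1.0])
C = F @ S_true
sig_C = np.full(3, 0.05)
sig_F = np.full_like(F, 0.02)
S_hat = cmb_effective_variance(C, sig_C, F, sig_F)
print(S_hat)  # recovers the true contributions for this noise-free example
```

With noise-free synthetic data the fit is exact; on real aerosol data the iteration reweights species whose profile entries are poorly known.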

  8. FORCe: Fully Online and Automated Artifact Removal for Brain-Computer Interfacing.

    PubMed

    Daly, Ian; Scherer, Reinhold; Billinger, Martin; Müller-Putz, Gernot

    2015-09-01

A fully automated and online artifact removal method for the electroencephalogram (EEG) is developed for use in brain-computer interfacing (BCI). The method (FORCe) is based upon a novel combination of wavelet decomposition, independent component analysis, and thresholding. FORCe is able to operate on a small channel set during online EEG acquisition and does not require additional signals (e.g., electrooculogram signals). Evaluation of FORCe is performed offline on EEG recorded from 13 BCI participants with cerebral palsy (CP) and online with three healthy participants. The method outperforms the state-of-the-art automated artifact removal methods Lagged Auto-Mutual Information Clustering (LAMIC) and Fully Automated Statistical Thresholding for EEG artifact Rejection (FASTER), and is able to remove a wide range of artifact types including blink, electromyogram (EMG), and electrooculogram (EOG) artifacts. PMID:25134085
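FORCe combines wavelet decomposition, independent component analysis, and thresholding; the sketch below illustrates only the thresholding ingredient on a single-level Haar transform of a synthetic signal, and is not the published algorithm:

```python
import numpy as np

def haar_step(x):
    # One level of the Haar wavelet transform: approximation and detail bands.
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def inverse_haar_step(approx, detail):
    x = np.empty(approx.size * 2)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    # Shrink coefficients toward zero; large artifact-like spikes are clipped.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# A clean slow oscillation plus one large blink-like spike.
n = 64
t_axis = np.arange(n)
clean = np.sin(2 * np.pi * t_axis / 32)
eeg = clean.copy()
eeg[20] += 5.0                        # injected artifact
approx, detail = haar_step(eeg)
detail = soft_threshold(detail, 1.0)  # suppress spiky detail coefficients
denoised = inverse_haar_step(approx, detail)
print(np.abs(denoised - clean).max() < np.abs(eeg - clean).max())
```

One level of thresholding only attenuates the spike; FORCe's contribution is chaining such steps with ICA so the procedure runs fully automatically within an online BCI loop.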

  9. Automated modeling of medical decisions.

    PubMed Central

    Egar, J. W.; Musen, M. A.

    1993-01-01

    We have developed a graph grammar and a graph-grammar derivation system that, together, generate decision-theoretic models from unordered lists of medical terms. The medical terms represent considerations in a dilemma that confronts the patient and the health-care provider. Our current grammar ensures that several desirable structural properties are maintained in all derived decision models. PMID:8130509

  10. Design Through Manufacturing: The Solid Model - Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2003-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts which reflect every detail of the finished product. Ideally, these models should fulfill two very important functions: (1) they must provide numerical control information for automated manufacturing of precision parts, and (2) they must enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in space missions. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. The research performed during the fellowship period investigated the transition process from the solid CAD model to the FEA stress analysis model with the final goal of creating an automatic interface between the two. During the period of the fellowship a detailed multi-year program for the development of such an interface was created. The ultimate goal of this program will be the development of a fully parameterized automatic ProE/FEA translator for parts and assemblies, with the incorporation of data base management into the solution, and ultimately including computational fluid dynamics and thermal modeling in the interface.

  11. RCrane: semi-automated RNA model building

    PubMed Central

    Keating, Kevin S.; Pyle, Anna Marie

    2012-01-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems. PMID:22868764

  12. Automating Risk Analysis of Software Design Models

    PubMed Central

    Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P.

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688

  13. Automating risk analysis of software design models.

    PubMed

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688
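The identification trees described in these two records can be pictured as decision trees over design-model elements: interior nodes test properties of an element, and leaves name a threat plus a candidate mitigation. The node structure and the toy data-flow element below are hypothetical, not AutSEC's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Node:
    test: callable = None    # predicate over an element dict (interior nodes)
    yes: "Node" = None
    no: "Node" = None
    threat: str = None       # set only at leaves
    mitigation: str = None

def identify(node, element):
    """Walk the tree; return (threat, mitigation) pairs at reached leaves."""
    if node is None:
        return []
    if node.threat is not None:
        return [(node.threat, node.mitigation)]
    branch = node.yes if node.test(element) else node.no
    return identify(branch, element)

# Toy tree: an unencrypted data flow crossing a network is flagged for tampering.
tree = Node(
    test=lambda e: e["kind"] == "data_flow" and e["crosses_network"],
    yes=Node(
        test=lambda e: not e["encrypted"],
        yes=Node(threat="tampering in transit", mitigation="use TLS"),
    ),
)

flow = {"kind": "data_flow", "crosses_network": True, "encrypted": False}
print(identify(tree, flow))  # one threat with its advised mitigation
```

Mitigation trees would then rank the advised techniques against the design's cost and specification constraints.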

  14. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.
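The Bayesian Belief Network approach can be illustrated with a two-parent toy network and brute-force enumeration; the variables and probabilities below are invented for illustration, not SME-elicited AvSP data:

```python
import itertools

# Complacency (C) and SystemFailure (F) are root causes; an automation-related
# error (E) depends on both. Marginal P(E) is computed by enumeration.
P_C = {True: 0.2, False: 0.8}
P_F = {True: 0.05, False: 0.95}
P_E_GIVEN = {  # P(E=True | C, F)
    (True, True): 0.9, (True, False): 0.3,
    (False, True): 0.5, (False, False): 0.01,
}

def p_error(p_c=P_C, p_f=P_F, p_e=P_E_GIVEN):
    """Marginal P(E=True): sum over the joint states of the parents."""
    return sum(p_c[c] * p_f[f] * p_e[(c, f)]
               for c, f in itertools.product([True, False], repeat=2))

baseline = p_error()
# Inserting a mitigation technology that halves error rates under complacency:
mitigated = dict(P_E_GIVEN)
mitigated[(True, True)] = 0.45
mitigated[(True, False)] = 0.15
reduced = p_error(p_e=mitigated)
print(round(baseline, 4), round(reduced, 4))
```

Comparing the marginal before and after editing the conditional table is the mechanism by which such a model attributes relative risk reduction to an inserted technology.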

  15. Atomistic modeling of dislocation-interface interactions

    SciTech Connect

    Wang, Jian; Valone, Steven M; Beyerlein, Irene J; Misra, Amit; Germann, T. C.

    2011-01-31

    Using atomic scale models and interface defect theory, we first classify interface structures into a few types with respect to geometrical factors, then study the interfacial shear response, and further simulate dislocation-interface interactions using molecular dynamics. The results show that the atomic scale structural characteristics of both heterophase and homophase interfaces play a crucial role in (i) their mechanical responses and (ii) the ability of incoming lattice dislocations to transmit across them.

  16. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Most space mission operations systems use a timeline-based representation for operations modeling, most model a core set of state and resource types, and most provide similar capabilities on this modeling to enable (semi-)automated schedule generation. In this paper we explore the commonality of representation and services for these timelines. These commonalities offer the potential to be harmonized to enable interoperability and re-use.
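A minimal sketch of the kind of state timeline the paper surveys: timestamped transitions with a value-at-time query. Names and structure here are illustrative, not taken from any of the surveyed systems.

```python
# Minimal state-timeline abstraction: ordered transitions plus a lookup of
# the value in effect at any queried time.

import bisect

class StateTimeline:
    def __init__(self, initial):
        self.times = [0.0]
        self.values = [initial]

    def set(self, t, value):
        """Record a state transition at time t (appended in time order)."""
        assert t > self.times[-1], "transitions must be appended in order"
        self.times.append(t)
        self.values.append(value)

    def value_at(self, t):
        """Value in effect at time t."""
        i = bisect.bisect_right(self.times, t) - 1
        return self.values[i]

tl = StateTimeline("idle")
tl.set(10.0, "slewing")
tl.set(25.0, "imaging")
```

Resource timelines can be layered on the same structure by storing numeric levels instead of symbolic states; a harmonized representation would standardize exactly these queries and update operations.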

  17. Automation model of sewerage rehabilitation planning.

    PubMed

    Yang, M D; Su, T C

    2006-01-01

    The major steps of sewerage rehabilitation include inspection of sewerage, assessment of structural conditions, computation of structural condition grades, and determination of rehabilitation methods and materials. Conventionally, sewerage rehabilitation planning relies on experts with professional backgrounds, which is tedious and time-consuming. This paper proposes an automation model for planning optimal sewerage rehabilitation strategies for the sewer system by integrating image processing, clustering technology, optimization, and visualization display. Firstly, image processing techniques, such as wavelet transformation and co-occurrence feature extraction, were employed to extract various characteristics of structural failures from CCTV inspection images. Secondly, a classification neural network was established to automatically interpret the structural conditions by comparing the extracted features with the typical failures in a databank. Then, to achieve optimal rehabilitation efficiency, a genetic algorithm was used to determine appropriate rehabilitation methods and substitution materials for the pipe sections with a risk of malfunction and even collapse. Finally, the result from the automation model can be visualized in a geographic information system in which essential information of the sewer system and sewerage rehabilitation plans are graphically displayed. For demonstration, the automation model of optimal sewerage rehabilitation planning was applied to a sewer system in east Taichung, Chinese Taiwan. PMID:17302324
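The genetic-algorithm step can be sketched on a toy instance: choose a rehabilitation method per pipe section so that every section above a risk threshold is covered at minimum cost. Method costs, condition grades, and the threshold below are hypothetical stand-ins for the paper's data.

```python
# Toy GA sketch: evolve a per-section rehabilitation plan. A large penalty
# enforces coverage of high-risk sections; costs and grades are made up.

import random

METHOD_COST = {"none": 0, "liner": 3, "replace": 8}   # hypothetical units
GRADES = [1, 4, 2, 5, 3]          # condition grade per pipe section
THRESHOLD = 3                     # grades above this must be rehabilitated

def fitness(plan):
    cost = sum(METHOD_COST[m] for m in plan)
    penalty = sum(100 for g, m in zip(GRADES, plan)
                  if g > THRESHOLD and m == "none")
    return -(cost + penalty)      # higher is better

def evolve(generations=60, pop_size=20, seed=1):
    rng = random.Random(seed)
    methods = list(METHOD_COST)
    pop = [[rng.choice(methods) for _ in GRADES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(len(child))] = rng.choice(methods)  # mutate
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

A real formulation would add crossover and encode the paper's actual method/material catalogue and risk grades; the penalty-plus-cost fitness shape is the essential idea.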

  18. Automated Expert Modeling and Student Evaluation

    Energy Science and Technology Software Center (ESTSC)

    2012-09-12

    AEMASE searches a database of recorded events for combinations of events that are of interest. It compares matching combinations to a statistical model to determine similarity to previous events of interest and alerts the user as new matching examples are found. AEMASE is currently used by weapons tactics instructors to find situations of interest in recorded tactical training scenarios. AEMASE builds on a sub-component, the Relational Blackboard (RBB), which is being released as open-source software. AEMASE builds on RBB adding interactive expert model construction (automated knowledge capture) and re-evaluation of scenario data.

  19. Automated Expert Modeling and Student Evaluation

    SciTech Connect

    2012-09-12

    AEMASE searches a database of recorded events for combinations of events that are of interest. It compares matching combinations to a statistical model to determine similarity to previous events of interest and alerts the user as new matching examples are found. AEMASE is currently used by weapons tactics instructors to find situations of interest in recorded tactical training scenarios. AEMASE builds on a sub-component, the Relational Blackboard (RBB), which is being released as open-source software. AEMASE builds on RBB adding interactive expert model construction (automated knowledge capture) and re-evaluation of scenario data.

  20. Automated statistical modeling of analytical measurement systems

    SciTech Connect

    Jacobson, J J

    1992-08-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily. The processing of control standards gives us this ability.
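The bias and precision statistics the model is built from can be computed directly from control-standard results. The numbers below are made up for illustration; a real system would fit these quantities as functions of concentration level.

```python
# Bias (systematic error) and precision (random error) estimated from
# repeated measurements of a control standard with a known value.

import statistics

known = 10.0                                # certified value of the standard
measured = [10.2, 9.9, 10.1, 10.4, 9.8]     # made-up repeat measurements

bias = statistics.mean(measured) - known    # systematic offset
precision = statistics.stdev(measured)      # 1-sigma repeatability
```

Verifying that a measurement system "is operating satisfactorily" then amounts to checking that new control-standard results fall within control limits derived from these two quantities.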

  1. Enhancing SCADA and distribution automation through advanced remote terminal unit interfaces

    SciTech Connect

    Rose, M.; Lawrence, S.J.; Bassiouni, R.

    1995-12-31

    One of the essential features of a SCADA or Distribution Automation system is its ability to communicate with intelligent electronic devices (IEDs). With advances in interface hardware such as Remote Terminal Units (RTUs), utilities can more efficiently and effectively communicate with their field equipment. Mississippi Power Company (MPC) has been successful in implementing its SCADA systems with IEDs to maximize its system's effectiveness. In general, the benefits of this technology are observed in overall cost savings, increased system information, and improved reliability and control. In many cases, MPC uses a single RTU on its SCADA system to communicate with and control a group of IEDs, minimizing the amount of hardware necessary; this streamlines the system and simplifies the integration process of the IEDs. Interfaces to many intelligent field devices through an RTU, such as Quantum Smart Meters, Schweitzer relays, and Cooper 4C reclosers, will be discussed. These interfaces can help any utility by providing more information, more reliable control, and a cost-effective method of communication throughout the SCADA system. The details of how these interfaces work and the information they can provide will be illustrated, as well as the open communication system the RTU can create in various types of SCADA systems.

  2. Automated system for measuring the surface dilational modulus of liquid–air interfaces

    NASA Astrophysics Data System (ADS)

    Stadler, Dominik; Hofmann, Matthias J.; Motschmann, Hubert; Shamonin, Mikhail

    2016-06-01

    The surface dilational modulus is a crucial parameter for describing the rheological properties of aqueous surfactant solutions. These properties are important for many technological processes. The present paper describes a fully automated instrument based on the oscillating bubble technique. It works in the frequency range from 1 Hz to 500 Hz, where surfactant exchange dynamics governs the relaxation process. The originality of the instrument design lies in the consistent combination of modern measurement technologies with advanced imaging and signal processing algorithms. Key steps on the way to reliable and precise measurements are the excitation of harmonic oscillation of the bubble, phase-sensitive evaluation of the pressure response, adjustment and maintenance of the bubble shape to half-sphere geometry for compensation of thermal drifts, contour tracing of the bubble's video images, removal of noise and artefacts within the image for improving the reliability of the measurement, and, in particular, a complex trigger scheme for the measurement of the oscillation amplitude, which may vary with frequency as a result of resonances. The corresponding automation and programming tasks are described in detail. Various programming strategies, such as the use of MATLAB® software and native C++ code, are discussed. An advance in the measurement technique is demonstrated by a fully automated measurement. The instrument has the potential to mature into a standard technique in the fields of colloid and interface chemistry and provides a significant extension of the frequency range relative to established competing techniques and state-of-the-art devices based on the same measurement principle.
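The phase-sensitive evaluation step can be sketched as a software lock-in: correlate the sampled response with reference sine/cosine waves at the drive frequency over an integer number of periods. The signal below is synthetic, with no connection to the instrument's hardware.

```python
# Lock-in style sketch: recover amplitude and phase of the drive-frequency
# component of a sampled signal by correlation with reference waveforms.

import math

def lock_in(signal, fs, f_ref):
    """Amplitude and phase (cosine convention) of the f_ref component."""
    n = len(signal)
    x = sum(s * math.cos(2 * math.pi * f_ref * i / fs) for i, s in enumerate(signal))
    y = sum(s * math.sin(2 * math.pi * f_ref * i / fs) for i, s in enumerate(signal))
    x, y = 2.0 * x / n, 2.0 * y / n
    # For s(t) = A*cos(w*t + phi): x = A*cos(phi), y = -A*sin(phi)
    return math.hypot(x, y), math.atan2(-y, x)

fs, f = 10000.0, 100.0       # sample rate and drive frequency (Hz)
# Synthetic response, exactly 10 periods: amplitude 0.8, phase 0.5 rad
sig = [0.8 * math.cos(2 * math.pi * f * i / fs + 0.5) for i in range(1000)]
amp, phase = lock_in(sig, fs, f)
```

Averaging over whole periods is what rejects noise and harmonics; in the instrument this role is played by the trigger scheme that locks acquisition to the bubble excitation.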

  3. A Diffuse Interface Model with Immiscibility Preservation

    PubMed Central

    Tiwari, Arpit; Freund, Jonathan B.; Pantano, Carlos

    2013-01-01

    A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent to a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion, leading to qualitatively incorrect results. PMID:24058207

  4. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Numerous automated and semi-automated planning & scheduling systems have been developed for space applications. Most of these systems are model-based in that they encode domain knowledge necessary to predict spacecraft state and resources based on initial conditions and a proposed activity plan. The spacecraft state and resources are often modeled as a series of timelines, with a timeline or set of timelines representing each state or resource key to the operations of the spacecraft. In this paper, we first describe a basic timeline representation that can represent a set of state, resource, timing, and transition constraints. We describe a number of planning and scheduling systems designed for space applications (and in many cases deployed for use on ongoing missions) and describe how they do and do not map onto this timeline model.

  5. Automation life-cycle cost model

    NASA Technical Reports Server (NTRS)

    Gathmann, Thomas P.; Reeves, Arlinda J.; Cline, Rick; Henrion, Max; Ruokangas, Corinne

    1992-01-01

    The problem domain being addressed by this contractual effort can be summarized by the following list: Automation and Robotics (A&R) technologies appear to be viable alternatives to current, manual operations; life-cycle cost models are typically judged with suspicion due to implicit assumptions and little associated documentation; and uncertainty is a reality for increasingly complex problems, yet few models explicitly account for its effect on the solution space. The objectives for this effort range from the near-term (1-2 years) to the far-term (3-5 years). In the near-term, the envisioned capabilities of the modeling tool are annotated. In addition, a framework is defined and developed in the Decision Modelling System (DEMOS) environment. Our approach is summarized as follows: assess desirable capabilities (structured into near- and far-term); identify useful existing models/data; identify parameters for utility analysis; define the tool framework; encode a scenario thread for model validation; and provide a transition path for tool development. This report contains all relevant technical progress made on this contractual effort.

  6. Parallel computing for automated model calibration

    SciTech Connect

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.; Vail, Lance W.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed computing, cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
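Because the calibration runs are independent (no inter-process communication), the loop parallelizes trivially. The sketch below uses a stand-in model (flow = slope × t) and made-up observations; a real tool would dispatch full model executables across machines rather than threads.

```python
# Embarrassingly parallel calibration sketch: score each candidate parameter
# set independently, then pick the best. The "model" is a toy stand-in.

from concurrent.futures import ThreadPoolExecutor

OBS = [2.0, 4.1, 6.2]                      # fake observed flows at t = 1, 2, 3

def model_run(slope):
    """Run the stand-in model and return sum of squared errors vs. OBS."""
    return sum((slope * t - o) ** 2 for t, o in zip((1, 2, 3), OBS))

candidates = [s / 10 for s in range(15, 26)]      # slopes 1.5 .. 2.5
with ThreadPoolExecutor(max_workers=4) as pool:   # runs are independent
    scores = list(pool.map(model_run, candidates))
best = candidates[scores.index(min(scores))]      # best-fitting slope
```

Swapping the grid of candidates for a "smart" generator (e.g. an evolutionary search) changes only how `candidates` is produced, not the parallel scoring step.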

  7. Automated sample preparation and analysis using a sequential-injection-capillary electrophoresis (SI-CE) interface.

    PubMed

    Kulka, Stephan; Quintás, Guillermo; Lendl, Bernhard

    2006-06-01

    A fully automated sequential-injection-capillary electrophoresis (SI-CE) system was developed using commercially available components such as the syringe pump, the selection and injection valves, and the high-voltage power supply. The interface connecting the SI with the CE unit consisted of two T-pieces, where the capillary was inserted in one T-piece and a Pt electrode in the other (grounded) T-piece. By pressurising the whole system using a syringe pump, hydrodynamic injection was feasible. For characterisation, the system was applied to a mixture of adenosine and adenosine monophosphate at different concentrations. The calibration curve obtained gave a detection limit of 0.5 microg g(-1) (correlation coefficient of 0.997). The reproducibility of the injection was also assessed, resulting in a RSD value (5 injections) of 5.4%. The total time of analysis, from injection, conditioning and separation to cleaning the capillary again, was 15 minutes. In another application, employing the full power of the automated SIA-CE system, myoglobin was mixed directly, using the flow system, with different concentrations of sodium dodecyl sulfate (SDS), a known denaturing agent. The different conformations obtained in this way were analysed with the CE system, and a distinct shift in migration time and a decrease of the native peak of myoglobin (Mb) could be observed. The protein samples prepared were also analysed with off-line infrared spectroscopy (IR), confirming these results. PMID:16732362
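A detection limit from a linear calibration curve is commonly estimated as three times the blank standard deviation divided by the slope. The numbers below are illustrative, not the paper's data, and the 3-sigma convention is one common choice rather than necessarily the one the authors used.

```python
# Sketch: least-squares slope of a calibration curve and a 3-sigma
# detection-limit estimate. All concentrations and signals are made up.

import statistics

conc   = [1.0, 2.0, 4.0, 8.0]                # standard concentrations (ug/g)
signal = [2.1, 4.0, 8.2, 15.9]               # detector response
blank  = [0.10, 0.14, 0.08, 0.12, 0.11]      # repeated blank measurements

n = len(conc)
mx, my = sum(conc) / n, sum(signal) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, signal))
         / sum((x - mx) ** 2 for x in conc))

lod = 3 * statistics.stdev(blank) / slope    # 3-sigma detection limit (ug/g)
```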

  8. Systems Engineering Interfaces: A Model Based Approach

    NASA Technical Reports Server (NTRS)

    Fosse, Elyse; Delp, Christopher

    2013-01-01

    Currently: Ops Rev developed and maintains a framework that includes interface-specific language, patterns, and Viewpoints. Ops Rev implements the framework to design MOS 2.0 and its 5 Mission Services. Implementation de-couples interfaces and instances of interaction Future: A Mission MOSE implements the approach and uses the model based artifacts for reviews. The framework extends further into the ground data layers and provides a unified methodology.

  9. Bayesian Safety Risk Modeling of Human-Flightdeck Automation Interaction

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2015-01-01

    Usage of automatic systems in airliners has increased fuel efficiency, added extra capabilities, and enhanced safety and reliability, as well as improved passenger comfort, since their introduction in the late 80's. However, the original automation benefits, including reductions in flight crew workload, human error, and training requirements, were not achieved as originally expected. Instead, automation introduced new failure modes, redistributed and sometimes increased workload, brought in new cognitive and attention demands, and increased training requirements. Modern airliners have numerous flight modes, providing more flexibility (and inherently more complexity) to the flight crew. However, the price to pay for the increased flexibility is the need for increased mode awareness, as well as the need to supervise, understand, and predict automated system behavior. Also, over-reliance on automation is linked to manual flight skill degradation and complacency in commercial pilots. As a result, recent accidents involving human errors are often caused by the interactions between humans and the automated systems (e.g., the breakdown in man-machine coordination), deteriorated manual flying skills, and/or loss of situational awareness due to heavy dependence on automated systems. This paper describes the development of the increased complexity and reliance on automation baseline model, named FLAP for FLightdeck Automation Problems. The model development process starts with a comprehensive literature review followed by the construction of a framework comprised of high-level causal factors leading to an automation-related flight anomaly. The framework was then converted into a Bayesian Belief Network (BBN) using the Hugin Software v7.8. The effects of automation on flight crew are incorporated into the model, including flight skill degradation, increased cognitive demand and training requirements along with their interactions. Besides flight crew deficiencies, automation system

  10. Microcanonical model for interface formation

    SciTech Connect

    Rucklidge, A.; Zaleski, S.

    1988-04-01

    We describe a new cellular automaton model which allows us to simulate separation of phases. The model is an extension of existing cellular automata for the Ising model, such as Q2R. It conserves particle number and presents the qualitative features of spinodal decomposition. The dynamics is deterministic and does not require random number generators. The spins exchange energy with small local reservoirs or demons. The rate of relaxation to equilibrium is investigated, and the results are compared to the Lifshitz-Slyozov theory.
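The "demon" mechanism can be sketched deterministically: a spin flips only when its local energy reservoir can absorb or supply the energy change, so total energy is conserved exactly. For brevity the sketch below uses a 1-D Ising chain rather than the paper's 2-D cellular automaton.

```python
# Microcanonical demon sketch on a periodic 1-D Ising chain (J = 1).
# Each site carries a small non-negative energy reservoir (its demon).

def flip_energy(spins, i):
    """Energy change of flipping spin i (computed before the flip)."""
    left, right = spins[i - 1], spins[(i + 1) % len(spins)]
    return 2 * spins[i] * (left + right)

def demon_sweep(spins, demons):
    """One deterministic sweep: flip wherever the demon can pay the cost."""
    for i in range(len(spins)):
        dE = flip_energy(spins, i)
        if demons[i] - dE >= 0:      # reservoir must stay non-negative
            demons[i] -= dE          # demon absorbs/supplies the energy
            spins[i] = -spins[i]

def total_energy(spins, demons):
    """Spin bond energy (E = -sum s_i * s_{i+1}) plus reservoir energy."""
    bonds = sum(-s * spins[(i + 1) % len(spins)] for i, s in enumerate(spins))
    return bonds + sum(demons)

spins = [1, 1, -1, 1, -1, -1, 1, 1]
demons = [4] * len(spins)
before = total_energy(spins, demons)
demon_sweep(spins, demons)
after = total_energy(spins, demons)   # unchanged: energy is conserved
```

Note that no random numbers appear anywhere, matching the abstract's point that the dynamics is deterministic.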

  11. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, we illustrate how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
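The compilation idea, a base model passed through a pipeline of transformations that each produce a more specialized model, can be sketched with plain dictionaries. The model contents and transformation names below are illustrative stand-ins, not the paper's actual representations.

```python
# Model-compilation sketch: apply a sequence of transformations to a base
# model; the base model itself is never mutated, so it can be recompiled.

base_model = {
    "components": ["motor", "bearing", "tachometer"],
    "behavior": "detailed",
    "task": None,
}

def abstract_behavior(model):
    """Replace detailed numeric behavior with a qualitative abstraction."""
    return dict(model, behavior="qualitative")

def specialize_for(task):
    """Return a transformation that tags the model for a specific task."""
    def transform(model):
        return dict(model, task=task)
    return transform

def compile_model(model, transforms):
    for t in transforms:
        model = t(model)
    return model

troubleshooting = compile_model(
    base_model, [abstract_behavior, specialize_for("troubleshoot")])
```

Because the derived model is regenerated from the base model on demand, a change to the base model propagates automatically, which is the consistency-maintenance benefit the abstract describes.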

  12. Modeling Europa's Ice-Ocean Interface

    NASA Astrophysics Data System (ADS)

    Elsenousy, A.; Vance, S.; Bills, B. G.

    2014-12-01

    This work focuses on modeling the ice-ocean interface on Jupiter's moon Europa, mainly from the standpoint of the heat and salt transfer relationship, with emphasis on the basal ice growth rate and its implications for Europa's tidal response. Modeling the heat and salt flux at Europa's ice/ocean interface is necessary to understand the dynamics of Europa's ocean and its interaction with the upper ice shell, as well as the history of active turbulence in this region. To achieve this goal, we adapted the McPhee et al. (2008) parameterizations of Earth's ice/ocean interface to Europa's ocean dynamics. We varied one parameter at a time to test its influence on both "h", the basal ice growth rate, and "R", the double diffusion tendency strength. The double diffusion tendency "R" was calculated as the ratio of the interface heat exchange coefficient αh to the interface salt exchange coefficient αs. Our preliminary results showed a strong double diffusion tendency, R ~ 200, at Europa's ice-ocean interface for plausible changes in the heat flux due to onset or elimination of hydrothermal activity, suggesting supercooling and a strong tendency for forming frazil ice.

  13. An interface tracking model for droplet electrocoalescence.

    SciTech Connect

    Erickson, Lindsay Crowl

    2013-09-01

    This report describes an Early Career Laboratory Directed Research and Development (LDRD) project to develop an interface tracking model for droplet electrocoalescence. Many fluid-based technologies rely on electrical fields to control the motion of droplets, e.g. microfluidic devices for high-speed droplet sorting, solution separation for chemical detectors, and purification of biodiesel fuel. Precise control over droplets is crucial to these applications. However, electric fields can induce complex and unpredictable fluid dynamics. Recent experiments (Ristenpart et al. 2009) have demonstrated that oppositely charged droplets bounce rather than coalesce in the presence of strong electric fields. A transient aqueous bridge forms between approaching drops prior to pinch-off. This observation applies to many types of fluids, but neither theory nor experiments have been able to offer a satisfactory explanation. Analytic hydrodynamic approximations for interfaces become invalid near coalescence, and therefore detailed numerical simulations are necessary. This is a computationally challenging problem that involves tracking a moving interface and solving complex multi-physics and multi-scale dynamics, which are beyond the capabilities of most state-of-the-art simulations. An interface-tracking model for electro-coalescence can provide a new perspective to a variety of applications in which interfacial physics are coupled with electrodynamics, including electro-osmosis, fabrication of microelectronics, fuel atomization, oil dehydration, nuclear waste reprocessing and solution separation for chemical detectors. We present a conformal decomposition finite element (CDFEM) interface-tracking method for the electrohydrodynamics of two-phase flow to demonstrate electro-coalescence. CDFEM is a sharp interface method that decomposes elements along fluid-fluid boundaries and uses a level set function to represent the interface.

  14. A Web Interface for Eco System Modeling

    NASA Astrophysics Data System (ADS)

    McHenry, K.; Kooper, R.; Serbin, S. P.; LeBauer, D. S.; Desai, A. R.; Dietze, M. C.

    2012-12-01

    We have developed the Predictive Ecosystem Analyzer (PEcAn) as an open-source scientific workflow system and ecoinformatics toolbox that manages the flow of information in and out of regional-scale terrestrial biosphere models, facilitates heterogeneous data assimilation, tracks data provenance, and enables more effective feedback between models and field research. The over-arching goal of PEcAn is to make otherwise complex analyses transparent, repeatable, and accessible to a diverse array of researchers, allowing both novice and expert users to focus on using the models to examine complex ecosystems rather than having to deal with complex computer system setup and configuration questions in order to run the models. Through the developed web interface, we hide many of the data and model details and allow the user to simply select locations, ecosystem models, and desired data sources as inputs to the model. Novice users are guided by the web interface through setting up a model execution and plotting the results. At the same time, expert users are given enough freedom to modify specific parameters before the model gets executed. This will become more important as more models are added to the PEcAn workflow and as more data become available when NEON comes online. On the backend, we support the execution of potentially computationally expensive models on different High Performance Computing (HPC) systems and/or clusters. The system can be configured with a single XML file that gives it the flexibility needed for configuring and running the different models on different systems, using a combination of information stored in a database as well as pointers to files on the hard disk. While the web interface usually creates this configuration file, expert users can still directly edit it to fine-tune the configuration. Once a workflow is finished, the web interface will allow for the easy creation of plots over result data while also allowing the user to

  15. Interfacing a robotic station with a gas chromatograph for the full automation of the determination of organochlorine pesticides in vegetables

    SciTech Connect

    Torres, P.; Luque de Castro, M.D.

    1996-12-31

    A fully automated method for the determination of organochlorine pesticides in vegetables is proposed. The overall system acts as an "analytical black box" because a robotic station performs the preliminary operations, from weighing to capping of the leached analytes and their location in the autosampler of an automated gas chromatograph with electron capture detection. The method has been applied to the determination of lindane, heptachlor, captan, chlordane, and methoxychlor in tea, marjoram, cinnamon, pennyroyal, and mint, with good results in most cases. A gas chromatograph has been interfaced to a robotic station for the determination of pesticides in vegetables. 15 refs., 4 figs., 2 tabs.

  16. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE PAGESBeta

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat; Cheng, Danling; Broadwater, Robert P.; Scirbona, Charlie; Cocks, George; Hamilton, Stephanie; Wang, Xiaoyu

    2016-02-26

    A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.

  17. Transitions in a probabilistic interface growth model

    NASA Astrophysics Data System (ADS)

    Alves, S. G.; Moreira, J. G.

    2011-04-01

    We study a generalization of the Wolf-Villain (WV) interface growth model based on a probabilistic growth rule. In the WV model, particles are randomly deposited onto a substrate and subsequently move to a position nearby where the binding is strongest. We introduce a growth probability which is proportional to a power of the number n_i of bindings of the site i: p_i ∝ n_i^ν.
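The probabilistic growth rule amounts to weighted site selection: a site offering n_i bindings is chosen with probability proportional to a power of n_i. The exponent and binding counts below are arbitrary illustrative values.

```python
# Sketch of the probabilistic growth rule: select a deposition site with
# probability proportional to (number of bindings) ** exponent.

import random

def choose_site(bindings, exponent, rng):
    """Index of the chosen site, weighted by n_i ** exponent."""
    weights = [n ** exponent for n in bindings]
    return rng.choices(range(len(bindings)), weights=weights)[0]

rng = random.Random(0)                 # fixed seed for reproducibility
bindings = [1, 2, 3]                   # bindings offered by three sites
counts = [0, 0, 0]
for _ in range(3000):
    counts[choose_site(bindings, 2.0, rng)] += 1
# weights are 1 : 4 : 9, so the strongest-binding site dominates
```

Setting the exponent very large recovers deterministic WV-like behavior (always the strongest binding), while an exponent of zero gives random deposition; the interesting transitions lie in between.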

  18. Modeling, Instrumentation, Automation, and Optimization of Water Resource Recovery Facilities.

    PubMed

    Sweeney, Michael W; Kabouris, John C

    2016-10-01

    A review of the literature published in 2015 on topics relating to water resource recovery facilities (WRRF) in the areas of modeling, automation, measurement and sensors and optimization of wastewater treatment (or water resource reclamation) is presented. PMID:27620091

  19. Automating a human factors evaluation of graphical user interfaces for NASA applications: An update on CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.

    1993-01-01

    Capturing human factors knowledge about the design of graphical user interfaces (GUI's) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.
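One plausible way to convert quantitative RGB primaries into a qualitative color representation is nearest-neighbor matching against a reference palette. The palette and distance metric below are assumptions for illustration; the abstract does not specify CHIMES's actual method.

```python
# Nearest-reference sketch of RGB -> qualitative color name conversion.
# The reference palette is a made-up stand-in, not the CHIMES rule set.

REFERENCE = {
    "red":   (255, 0, 0),
    "green": (0, 255, 0),
    "blue":  (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def qualitative_color(rgb):
    """Name of the reference color nearest in RGB space (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE, key=lambda name: dist2(REFERENCE[name], rgb))

label = qualitative_color((250, 30, 20))   # a saturated reddish value
```

A guideline checker could then reason symbolically ("avoid red text on a blue background") over these qualitative labels instead of raw RGB triples.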

  20. XRLSim model specifications and user interfaces

    SciTech Connect

    Young, K.D.; Breitfeller, E.; Woodruff, J.P.

    1989-12-01

    The two chapters in this manual document the engineering development leading to modification of XRLSim -- an Ada-based computer program developed to provide a realistic simulation of an x-ray laser weapon platform. Complete documentation of the FY88 effort to develop XRLSim was published in April 1989, as UCID-21736: XRLSIM Model Specifications and User Interfaces, by L. C. Ng, D. T. Gavel, R. M. Shectman, P. L. Sholl, and J. P. Woodruff. The FY89 effort has been primarily to enhance the x-ray laser weapon-platform model fidelity. Chapter 1 of this manual details enhancements made to XRLSim model specifications during FY89. Chapter 2 provides the user with changes in user interfaces brought about by these enhancements. This chapter is offered as a series of deletions, replacements, and insertions to the original document to enable XRLSim users to implement enhancements developed during FY89.

  1. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

    Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment unless designed for it. An example is implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise, and it may take a developer months to learn LIS and the model's software structure. Debugging and testing of the implementation is also time-consuming when LIS or the model is not fully understood. This time is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. A general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model and to return state variables and outputs to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine; development requires only knowledge of the model and basic programming skills. With such wrappers, the logic for implementing any model is the same, and code templates defined for the general model interface can be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It accepts model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80-90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming
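
    The wrapper idea above — a fixed interface through which the framework passes forcings in and receives state and outputs back — can be sketched as follows. The real LIS wrapper is a FORTRAN 90 subroutine; this Python analogue, including the class names and the toy "bucket" model, is purely illustrative.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ModelWrapper:
        """Illustrative stand-in for a general model interface: the
        framework only ever calls step(), passing forcings and
        receiving updated state and outputs back."""
        params: dict
        state: dict = field(default_factory=dict)

        def step(self, forcings):
            raise NotImplementedError

    @dataclass
    class BucketModel(ModelWrapper):
        """Toy land-surface 'bucket' model adapted to the interface."""
        def step(self, forcings):
            storage = self.state.get("storage", 0.0)
            storage += forcings["precip"]
            runoff = max(0.0, storage - self.params["capacity"])
            self.state["storage"] = storage - runoff
            return self.state, {"runoff": runoff}

    m = BucketModel(params={"capacity": 10.0})
    for p in [4.0, 4.0, 4.0]:
        state, out = m.step({"precip": p})
    print(state, out)  # {'storage': 10.0} {'runoff': 2.0}
    ```

    Because every wrapped model exposes the same call signature, the code that connects it to the framework can be generated mechanically from a specification, which is the labor-saving step the toolkit automates.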

  2. Automation Marketplace 2010: New Models, Core Systems

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2010-01-01

    In a year when a difficult economy presented fewer opportunities for immediate gains, the major industry players have defined their business strategies with fundamentally different concepts of library automation. This is no longer an industry where companies compete on the basis of the best or the most features in similar products but one where…

  3. Modeling strategic behavior in human-automation interaction - Why an 'aid' can (and should) go unused

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex

    1993-01-01

    Task-offload aids (e.g., an autopilot, an 'intelligent' assistant) can be selectively engaged by the human operator to dynamically delegate tasks to automation. Introducing such aids eliminates some task demands but creates new ones associated with programming, engaging, and disengaging the aiding device via an interface. The burdens associated with managing automation can sometimes outweigh the potential benefits of automation for improving system performance. Aid design parameters and features of the overall multitask context combine to determine whether or not a task-offload aid will effectively support the operator. A modeling and sensitivity analysis approach is presented that identifies effective strategies for human-automation interaction as a function of three task-context parameters and three aid design parameters. The analysis and modeling approaches provide resources for predicting how a well-adapted operator will use a given task-offload aid, and for specifying aid design features that ensure that automation will provide effective operator support in a multitask environment.

  4. Perspectives of Interfacing People with Technology in the Development of Office Automation.

    ERIC Educational Resources Information Center

    Conroy, Thomas R.; Ewbank, Ray V. K.

    Noting the increasing impact of office automation on the workings of both people and organizations, this paper purposes the need for implementation methodologies, termed "self-actualizing systems," to introduce automation technologies into the office environment with a minimum of trauma to workers. Such methodologies, it contends, allow users to…

  5. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  6. Modeling of metal-ferroelectric-insulator-semiconductor structure considering the effects of interface traps

    NASA Astrophysics Data System (ADS)

    Sun, Jing; Shi, Xiao Rong; Zheng, Xue Jun; Tian, Li; Zhu, Zhe

    2015-06-01

    An improved model, which takes interface-trap effects into account, is developed by combining a quantum mechanical model, dipole switching theory, and the silicon physics of the metal-oxide-semiconductor structure to describe the electrical properties of the metal-ferroelectric-insulator-semiconductor (MFIS) structure. Using the model, the effects of the interface traps on the surface potential (ϕSi) of the semiconductor, the low-frequency (LF) capacitance-voltage (C-V) characteristics, and the memory window of the MFIS structure are simulated. The results show that the ϕSi-V and LF C-V curves shift toward the positive-voltage direction and the memory window becomes worse as the density of interface trap states increases. This paper is expected to provide some guidance for the design and performance improvement of MFIS structure devices. In addition, the improved model can be integrated into electronic design automation (EDA) software for circuit simulation.

  7. A catalog of automated analysis methods for enterprise models.

    PubMed

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

    Enterprise models are created for documenting and communicating the structure and state of the business and information technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, the process can be complicated, making omissions and miscalculations very likely. This situation has fostered research on automated analysis methods to support analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; however, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents a compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool. PMID:27047732

  8. Modeling and deadlock avoidance of automated manufacturing systems with multiple automated guided vehicles.

    PubMed

    Wu, Naiqi; Zhou, MengChu

    2005-12-01

    An automated manufacturing system (AMS) contains a number of versatile machines (or workstations), buffers, and an automated material handling system (MHS), and is computer-controlled. An effective and flexible way to implement the MHS is an automated guided vehicle (AGV) system. The deadlock issue in AMS operation is very important and has been extensively studied. Deadlock problems have traditionally been treated separately for parts in production and parts in transportation, with many techniques developed for each problem. However, such treatment does not take advantage of the flexibility offered by multiple AGVs. In general, it is intractable to obtain a maximally permissive control policy for either problem. Instead, this paper investigates the two problems in an integrated way. First we model the AGV system and the part processing processes by resource-oriented Petri nets, respectively. Then the two models are integrated by using macro transitions. Based on the combined model, a novel control policy for deadlock avoidance is proposed. It is shown to be maximally permissive with computational complexity of O(n²), where n is the number of machines in the AMS, if the complexity of controlling part transportation by AGVs is not considered. Thus, the complexity of deadlock avoidance for the whole system is bounded by the complexity of controlling the AGV system. An illustrative example shows its application and power. PMID:16366245
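
    The paper's policy is built on resource-oriented Petri nets, which the abstract does not detail. As a generic flavor of deadlock avoidance, the sketch below uses a different, classic technique: a banker's-style safety check that grants a resource request only if some completion order still lets every job finish. All resource counts and job names are hypothetical.

    ```python
    def is_safe(available, allocation, need):
        """Banker's-style safety check: True if some order lets every
        job finish given current allocation and remaining need."""
        work = list(available)
        done = [False] * len(allocation)
        progress = True
        while progress:
            progress = False
            for i, (alloc, req) in enumerate(zip(allocation, need)):
                if not done[i] and all(r <= w for r, w in zip(req, work)):
                    # Job i can finish; it releases its allocation.
                    work = [w + a for w, a in zip(work, alloc)]
                    done[i] = True
                    progress = True
        return all(done)

    # Two resource types (e.g., machines and AGVs); three jobs.
    available = [3, 3]
    allocation = [[0, 1], [2, 0], [3, 0]]
    need = [[5, 3], [1, 2], [2, 0]]
    print(is_safe(available, allocation, need))  # True
    ```

    A maximally permissive Petri-net policy of the kind the paper proposes admits strictly more states than such conservative checks, which is the point of the integrated model.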

  9. Automated particulate sampler field test model operations guide

    SciTech Connect

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and most predominantly the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve as both a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  10. Radiation budget measurement/model interface

    NASA Technical Reports Server (NTRS)

    Vonderhaar, T. H.; Ciesielski, P.; Randel, D.; Stevens, D.

    1983-01-01

    This final report includes research results from the period February 1981 through November 1982. Two new results combine to form the final portion of this work. They are the work by Hanna (1982) and Stevens to successfully test and demonstrate a low-order spectral climate model and the work by Ciesielski et al. (1983) to combine and test the new radiation budget results from NIMBUS-7 with earlier satellite measurements. Together, the two related activities set the stage for future research on radiation budget measurement/model interfacing. Such combination of results will lead to new applications of satellite data to climate problems. The objectives of this research under the present contract are therefore satisfied. Additional research reported herein includes the compilation and documentation of the radiation budget data set at Colorado State University and the definition of climate-related experiments suggested after lengthy analysis of the satellite radiation budget experiments.

  11. Modeling interfaces between solids: Application to Li battery materials

    NASA Astrophysics Data System (ADS)

    Lepley, N. D.; Holzwarth, N. A. W.

    2015-12-01

    We present a general scheme to model the energetics of interfaces between crystalline solids, quantitatively including the effects of varying configurations and lattice strain. This scheme is successfully applied to the modeling of likely interface geometries of several solid state battery materials including Li metal, Li3PO4 , Li3PS4 , Li2O , and Li2S . Our formalism, together with a partial density of states analysis, allows us to characterize the thickness, stability, and transport properties of these interfaces. We find that all of the interfaces in this study are stable with the exception of Li3PS4/Li . For this chemically unstable interface, the partial density of states helps to identify mechanisms associated with the interface reactions. Our energetic measure of interfaces and our analysis of the band alignment between interface materials indicate multiple factors which may be predictors of interface stability, an important property of solid electrolyte systems.

  12. Automated data acquisition technology development:Automated modeling and control development

    NASA Technical Reports Server (NTRS)

    Romine, Peter L.

    1995-01-01

    This report documents the completion of, and improvements made to, the software developed for automated data acquisition and automated modeling and control development on the Texas Micro rackmounted PCs. This research was initiated because the Metal Processing Branch of NASA Marshall Space Flight Center identified a need for a mobile data acquisition and data analysis system customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument strictly for data acquisition and data analysis. In addition to the data acquisition functions described in this report, the WMS also supports many functions associated with process control. The hardware and software requirements for an automated acquisition system for welding process parameters, welding equipment checkout, and welding process modeling were determined in 1992. From these recommendations, NASA purchased the necessary hardware and software. The new welding acquisition system is designed to collect welding parameter data and perform analysis to determine the voltage versus current arc-length relationship for VPPA welding. Once the results of this analysis are obtained, they can be used to develop a RAIL function to control welding startup and shutdown without torch crashing.

  13. Variational Implicit Solvation with Solute Molecular Mechanics: From Diffuse-Interface to Sharp-Interface Models.

    PubMed

    Li, Bo; Zhao, Yanxiang

    2013-01-01

    Central to a variational implicit-solvent description of biomolecular solvation is an effective free-energy functional of the solute atomic positions and the solute-solvent interface (i.e., the dielectric boundary). The free-energy functional couples together the solute molecular mechanical interaction energy, the solute-solvent interfacial energy, the solute-solvent van der Waals interaction energy, and the electrostatic energy. In recent years, the sharp-interface version of the variational implicit-solvent model has been developed and used for numerical computations of molecular solvation. In this work, we propose a diffuse-interface version of the variational implicit-solvent model with solute molecular mechanics. We also analyze both the sharp-interface and diffuse-interface models. We prove the existence of free-energy minimizers and obtain their bounds. We also prove the convergence of the diffuse-interface model to the sharp-interface model in the sense of Γ-convergence. We further discuss properties of sharp-interface free-energy minimizers, the boundary conditions and the coupling of the Poisson-Boltzmann equation in the diffuse-interface model, and the convergence of forces from diffuse-interface to sharp-interface descriptions. Our analysis relies on previous work on the problem of minimizing surface areas and on our observations on the coupling of solute molecular mechanical interactions with the continuum solvent. Our studies rigorously justify the self-consistency of the proposed diffuse-interface variational models of implicit solvation. PMID:24058213

  14. Variational Implicit Solvation with Solute Molecular Mechanics: From Diffuse-Interface to Sharp-Interface Models

    PubMed Central

    Li, Bo; Zhao, Yanxiang

    2013-01-01

    Central to a variational implicit-solvent description of biomolecular solvation is an effective free-energy functional of the solute atomic positions and the solute-solvent interface (i.e., the dielectric boundary). The free-energy functional couples together the solute molecular mechanical interaction energy, the solute-solvent interfacial energy, the solute-solvent van der Waals interaction energy, and the electrostatic energy. In recent years, the sharp-interface version of the variational implicit-solvent model has been developed and used for numerical computations of molecular solvation. In this work, we propose a diffuse-interface version of the variational implicit-solvent model with solute molecular mechanics. We also analyze both the sharp-interface and diffuse-interface models. We prove the existence of free-energy minimizers and obtain their bounds. We also prove the convergence of the diffuse-interface model to the sharp-interface model in the sense of Γ-convergence. We further discuss properties of sharp-interface free-energy minimizers, the boundary conditions and the coupling of the Poisson–Boltzmann equation in the diffuse-interface model, and the convergence of forces from diffuse-interface to sharp-interface descriptions. Our analysis relies on previous work on the problem of minimizing surface areas and on our observations on the coupling of solute molecular mechanical interactions with the continuum solvent. Our studies rigorously justify the self-consistency of the proposed diffuse-interface variational models of implicit solvation. PMID:24058213
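
    The coupled free-energy functional described in the two records above has, schematically, the following form in the general variational implicit-solvent (VISM) literature. The notation here is indicative only; the exact terms and coupling used in this paper may differ:

    \[
    G[\mathbf{x},\Gamma] \;=\; E_{\mathrm{mm}}(\mathbf{x})
    \;+\; P\,\mathrm{Vol}(\Omega_{\mathrm{m}})
    \;+\; \gamma_{0}\,\mathrm{Area}(\Gamma)
    \;+\; \rho_{\mathrm{w}} \int_{\Omega_{\mathrm{w}}} U_{\mathrm{vdW}}(\mathbf{x},\mathbf{r})\,dV
    \;+\; G_{\mathrm{elec}}[\mathbf{x},\Gamma],
    \]

    where \(\mathbf{x}\) collects the solute atomic positions, \(\Gamma\) is the solute-solvent interface separating the solute region \(\Omega_{\mathrm{m}}\) from the solvent region \(\Omega_{\mathrm{w}}\), and the terms correspond in order to the solute molecular mechanics, pressure-volume work, interfacial energy, solute-solvent van der Waals interaction, and electrostatics. The diffuse-interface version replaces the sharp boundary \(\Gamma\) with a smooth phase field.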

  15. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    NASA Astrophysics Data System (ADS)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
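
    The paper's selector combines machine learning with search algorithms; as a minimal illustration of what endmember selection means, the sketch below picks a cold pixel (coolest among dense vegetation) and a hot pixel (hottest among bare soil) from flat land-surface-temperature and NDVI arrays. The NDVI cutoffs are illustrative assumptions, not the SEBAL/METRIC calibration rules.

    ```python
    def pick_endmembers(lst, ndvi):
        """Pick (cold, hot) pixel indices from flat LST (K) and NDVI
        arrays. Cold: coolest well-vegetated pixel; hot: hottest bare
        pixel. Thresholds are illustrative only."""
        veg = [i for i, v in enumerate(ndvi) if v > 0.7]
        bare = [i for i, v in enumerate(ndvi) if v < 0.2]
        cold = min(veg, key=lambda i: lst[i])
        hot = max(bare, key=lambda i: lst[i])
        return cold, hot

    # Hypothetical five-pixel scene.
    lst = [295.0, 298.0, 312.0, 318.0, 301.0]
    ndvi = [0.80, 0.75, 0.15, 0.10, 0.45]
    print(pick_endmembers(lst, ndvi))  # (0, 3)
    ```

    The selected pair anchors the calibration of sensible heat flux; the paper's contribution is making this choice robust across climates and seasons rather than relying on fixed thresholds like these.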

  16. Automating sensitivity analysis of computer models using computer calculus

    SciTech Connect

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer-calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Computational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure to numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.
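
    GRESS injects derivative propagation into FORTRAN by compilation. The same "computer calculus" idea can be sketched with forward-mode dual numbers, where each value carries its derivative alongside it; this is an illustration of the principle, not GRESS's source-transformation machinery.

    ```python
    class Dual:
        """Forward-mode AD value: carries f and df/dx together, the
        derivative-propagation idea GRESS adds to FORTRAN code."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            # Product rule: (fg)' = f'g + fg'
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)
        __rmul__ = __mul__

    def model(x):          # some response of interest
        return 3 * x * x + 2 * x + 1

    x = Dual(2.0, 1.0)     # seed dx/dx = 1
    y = model(x)
    print(y.val, y.dot)    # 17.0 14.0
    ```

    Running the model once yields both its value and the exact sensitivity d(model)/dx, with no finite-difference perturbations; adjoint (reverse) mode, which GRESS also supports, obtains all input sensitivities of a single output in one pass.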

  17. Computational design of patterned interfaces using reduced order models

    PubMed Central

    Vattré, A. J.; Abdolrahim, N.; Kolluri, K.; Demkowicz, M. J.

    2014-01-01

    Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance. PMID:25169868

  18. A nonlinear interface model applied to masonry structures

    NASA Astrophysics Data System (ADS)

    Lebon, Frédéric; Raffa, Maria Letizia; Rizzoni, Raffaella

    2015-12-01

    In this paper, a new imperfect interface model is presented. The model includes finite strains, micro-cracks and smooth roughness. The model is consistently derived by coupling a homogenization approach for micro-cracked media and arguments of asymptotic analysis. The model is applied to brick/mortar interfaces. Numerical results are presented.

  19. Automated Environment Generation for Software Model Checking

    NASA Technical Reports Server (NTRS)

    Tkachuk, Oksana; Dwyer, Matthew B.; Pasareanu, Corina S.

    2003-01-01

    A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open since their behavior is dependent on patterns of invocation of system components and values defined outside the system but referenced within the system. Whether reasoning about the behavior of whole programs or about program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments of Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera Environment Generator (BEG) which we describe along with our experience using BEG to reason about properties of several non-trivial concurrent Java programs.

  20. Automated adaptive inference of phenomenological dynamical models

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508

  1. Automated adaptive inference of phenomenological dynamical models

    NASA Astrophysics Data System (ADS)

    Daniels, Bryan C.; Nemenman, Ilya

    2015-08-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.
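
    The core idea above — letting model complexity grow only as far as the data supports — can be illustrated with a much simpler stand-in than the authors' method: choosing a polynomial order by the Bayesian information criterion. The fitting code, data, and candidate orders are all assumptions for the example.

    ```python
    import math

    def polyfit(xs, ys, deg):
        """Least-squares polynomial fit via the normal equations,
        solved by Gaussian elimination with partial pivoting."""
        n = deg + 1
        A = [[sum(x ** (i + j) for x in xs) for j in range(n)]
             for i in range(n)]
        b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[p] = A[p], A[c]
            b[c], b[p] = b[p], b[c]
            for r in range(c + 1, n):
                f = A[r][c] / A[c][c]
                for k in range(c, n):
                    A[r][k] -= f * A[c][k]
                b[r] -= f * b[c]
        coef = [0.0] * n
        for i in reversed(range(n)):
            coef[i] = (b[i] - sum(A[i][k] * coef[k]
                                  for k in range(i + 1, n))) / A[i][i]
        return coef

    def bic(xs, ys, deg):
        """Bayesian information criterion: fit quality plus a
        complexity penalty that grows with the parameter count."""
        coef = polyfit(xs, ys, deg)
        rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
                  for x, y in zip(xs, ys))
        n = len(xs)
        return n * math.log(rss / n + 1e-12) + (deg + 1) * math.log(n)

    xs = [i / 2 for i in range(12)]
    ys = [1.0 + 0.5 * x * x for x in xs]       # truly quadratic data
    best = min(range(5), key=lambda d: bic(xs, ys, d))
    print(best)  # 2
    ```

    With quadratic data the criterion stops at order 2: higher orders fit no better and pay a larger penalty, the same trade-off the paper's adaptive models negotiate on network dynamics.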

  2. Model Search: Formalizing and Automating Constraint Solving in MDE Platforms

    NASA Astrophysics Data System (ADS)

    Kleiner, Mathias; Del Fabro, Marcos Didonet; Albert, Patrick

    Model Driven Engineering (MDE) and constraint programming (CP) have been widely used and combined in different applications. However, existing results are either ad-hoc, not fully integrated or manually executed. In this article, we present a formalization and an approach for automating constraint-based solving in a MDE platform. Our approach generalizes existing work by combining known MDE concepts with CP techniques into a single operation called model search. We present the theoretical basis for model search, as well as an automated process that details the involved operations. We validate our approach by comparing two implemented solutions (one based on Alloy/SAT, the other on OPL/CP), and by executing them over an academic use-case.

  3. Automated sample plan selection for OPC modeling

    NASA Astrophysics Data System (ADS)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving how well that data represents the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Framing pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of quality equivalent to the traditional plan of record (POR) set, but in less time.
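
    The abstract does not describe the optimizer itself. One classic building block for this kind of selection problem is greedy maximum coverage: repeatedly take the candidate pattern that covers the most still-uncovered regions of geometry/image-parameter space under a sampling budget. The pattern names and cluster sets below are hypothetical, and this greedy surrogate is not the paper's multi-objective method.

    ```python
    def greedy_plan(patterns, budget):
        """Greedy max-coverage: repeatedly take the pattern covering
        the most still-uncovered clusters, up to the budget."""
        covered, plan = set(), []
        for _ in range(budget):
            best = max(patterns,
                       key=lambda p: len(set(patterns[p]) - covered))
            if not set(patterns[best]) - covered:
                break  # nothing new to gain
            plan.append(best)
            covered |= set(patterns[best])
        return plan, covered

    # Hypothetical patterns -> clusters of parameter space they cover.
    patterns = {
        "dense_lines": {1, 2, 3},
        "iso_line":    {3, 4},
        "line_end":    {4, 5, 6},
        "contact":     {6},
    }
    plan, covered = greedy_plan(patterns, budget=2)
    print(plan, sorted(covered))  # ['dense_lines', 'line_end'] [1, 2, 3, 4, 5, 6]
    ```

    A real sample-plan optimizer would weigh several such objectives at once (coverage, measurement cost, pattern frequency in the design), which is the co-optimization the paper formalizes.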

  4. Qgui: A high-throughput interface for automated setup and analysis of free energy calculations and empirical valence bond simulations in biological systems.

    PubMed

    Isaksen, Geir Villy; Andberg, Tor Arne Heim; Åqvist, Johan; Brandsdal, Bjørn Olav

    2015-07-01

    Structural information and activity data have increased rapidly for many protein targets during the last decades. In this paper, we present a high-throughput interface (Qgui) for automated free energy and empirical valence bond (EVB) calculations that use molecular dynamics (MD) simulations for conformational sampling. Applications to ligand binding using both the linear interaction energy (LIE) method and the free energy perturbation (FEP) technique are given using the estrogen receptor (ERα) as a model system. Examples of free energy profiles obtained using the EVB method for the rate-limiting step of the enzymatic reaction catalyzed by trypsin are also shown. In addition, we present the calculation of high-precision Arrhenius plots to obtain the thermodynamic activation enthalpy and entropy with Qgui by running a large number of EVB simulations. PMID:26080356
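
    The LIE estimate that such a pipeline automates has a well-known linear form, ΔG_bind ≈ α⟨ΔU_vdW⟩ + β⟨ΔU_el⟩ + γ, where the averages are over MD snapshots of the bound and free states. The sketch below computes it from hypothetical interaction-energy samples; the coefficient values are commonly used defaults from the LIE literature, and in practice β and γ are adjusted to the system.

    ```python
    def lie_dG(dU_vdw, dU_el, alpha=0.18, beta=0.5, gamma=0.0):
        """Linear interaction energy estimate of binding free energy:
        dG ≈ alpha*<ΔU_vdW> + beta*<ΔU_el> + gamma.
        Coefficients here are illustrative defaults; beta depends on
        the ligand's chemistry and gamma on the binding site."""
        mean = lambda xs: sum(xs) / len(xs)
        return alpha * mean(dU_vdw) + beta * mean(dU_el) + gamma

    # Hypothetical bound-minus-free interaction energies, kcal/mol.
    vdw = [-20.0, -22.0, -21.0]
    el = [-8.0, -10.0, -9.0]
    print(round(lie_dG(vdw, el), 2))  # -8.28
    ```

    The point of a tool like Qgui is to automate producing these averages (setup, MD sampling, bookkeeping) so the final estimate reduces to a small calculation like this.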

  5. A power line data communication interface using spread spectrum technology in home automation

    SciTech Connect

    Shwehdi, M.H.; Khan, A.Z.

    1996-07-01

    Building automation technology is rapidly developing towards more reliable communication systems and devices that control electronic equipment. Controlling such equipment enables efficient energy management and savings on the monthly electricity bill. Power line communication (PLC) has been one of the dreams of the electronics industry for decades, especially for building automation. It is the purpose of this paper to demonstrate communication methods among electronic control devices through an AC power line carrier within buildings for more efficient energy control. The paper outlines methods of communication over a power line, namely X-10 and CEBus. It also introduces spread spectrum technology, which increases communication speed to 100-150 times that of the X-10 system. The power line carrier has tremendous applications in the field of building automation. The paper presents an attempt to realize the so-called smart house concept, in which all home electronic devices, from a coffee maker to a water heater, a microwave, and robots, are coordinated by an intelligent network whenever one wishes. The designed system may be applied very profitably to help with energy management for both customer and utility.

  6. Automated modelling of signal transduction networks

    PubMed Central

    2002-01-01

    Background Intracellular signal transduction is achieved by networks of proteins and small molecules that transmit information from the cell surface to the nucleus, where they ultimately effect transcriptional changes. Understanding the mechanisms cells use to accomplish this important process requires a detailed molecular description of the networks involved. Results We have developed a computational approach for generating static models of signal transduction networks which utilizes protein-interaction maps generated from large-scale two-hybrid screens and expression profiles from DNA microarrays. Networks are determined entirely by integrating protein-protein interaction data with microarray expression data, without prior knowledge of any pathway intermediates. In effect, this is equivalent to extracting subnetworks of the protein interaction dataset whose members have the most correlated expression profiles. Conclusion We show that our technique accurately reconstructs MAP Kinase signaling networks in Saccharomyces cerevisiae. This approach should enhance our ability to model signaling networks and to discover new components of known networks. More generally, it provides a method for synthesizing molecular data, either individual transcript abundance measurements or pairwise protein interactions, into higher level structures, such as pathways and networks. PMID:12413400
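
    The core of the approach, extracting subnetworks of the protein-interaction dataset whose members have the most correlated expression profiles, can be sketched as follows. The correlation threshold and the toy yeast data are hypothetical:

```python
from collections import defaultdict
import math

def pearson(x, y):
    """Pearson correlation of two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def correlated_subnetworks(ppi_edges, expression, r_min=0.8):
    """Keep interaction edges whose endpoints co-vary in expression,
    then return connected components as candidate pathways."""
    adj = defaultdict(set)
    for a, b in ppi_edges:
        if abs(pearson(expression[a], expression[b])) >= r_min:
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Hypothetical data: STE7 and FUS3 co-vary, CDC28 does not.
expr = {"STE7": [1, 2, 3, 4], "FUS3": [2, 4, 6, 8], "CDC28": [5, 1, 4, 2]}
edges = [("STE7", "FUS3"), ("FUS3", "CDC28")]
nets = correlated_subnetworks(edges, expr)
```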

  7. Elevator model based on a tiny PLC for teaching automation

    NASA Astrophysics Data System (ADS)

    Kim, Kee Hwan; Lee, Young Dae

    2005-12-01

    The development of control-related applications requires knowledge of different subject matters such as mechanical components, control equipment and physics. Understanding the behavior of these heterogeneous applications is not easy, especially for students beginning to study electronic engineering. In order to introduce them to the most common components and skills necessary to put together a functioning automated system, we have designed a simple elevator model controlled by a PLC that was itself designed around a microcontroller.
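
    A PLC program for such an elevator model is essentially a cyclically scanned state machine. A minimal sketch in Python (the floor logic below is illustrative, not the authors' ladder program):

```python
class Elevator:
    """Toy PLC-style elevator controller: each scan cycle moves one
    floor toward the nearest pending call."""
    def __init__(self, floors=4):
        self.floor, self.requests = 0, set()
        self.floors = floors

    def call(self, floor):
        if 0 <= floor < self.floors:
            self.requests.add(floor)

    def scan(self):
        """One scan cycle: step toward the nearest request and open
        the door (clearing the request) on arrival."""
        if not self.requests:
            return "idle"
        target = min(self.requests, key=lambda f: abs(f - self.floor))
        if target == self.floor:
            self.requests.discard(target)
            return "door_open"
        self.floor += 1 if target > self.floor else -1
        return "moving"

lift = Elevator()
lift.call(2)
states = [lift.scan() for _ in range(4)]
```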

  8. Automated refinement and inference of analytical models for metabolic networks

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael D.; Vallabhajosyula, Ravishankar R.; Jenkins, Jerry W.; Hood, Jonathan E.; Soni, Abhishek S.; Wikswo, John P.; Lipson, Hod

    2011-10-01

    The reverse engineering of metabolic networks from experimental data is traditionally a labor-intensive task requiring a priori systems knowledge. Using a proven model as a test system, we demonstrate an automated method to simplify this process by modifying an existing or related model--suggesting nonlinear terms and structural modifications--or even constructing a new model that agrees with the system's time series observations. In certain cases, this method can identify the full dynamical model from scratch without prior knowledge or structural assumptions. The algorithm selects between multiple candidate models by designing experiments to make their predictions disagree. We performed computational experiments to analyze a nonlinear seven-dimensional model of yeast glycolytic oscillations. This approach corrected mistakes reliably in both approximated and overspecified models. The method performed well to high levels of noise for most states, could identify the correct model de novo, and make better predictions than ordinary parametric regression and neural network models. We identified an invariant quantity in the model, which accurately derived kinetics and the numerical sensitivity coefficients of the system. Finally, we compared the system to dynamic flux estimation and discussed the scaling and application of this methodology to automated experiment design and control in biological systems in real time.
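
    The experiment-design step, choosing the measurement on which candidate models disagree most, can be sketched generically. The two candidate rate laws below are hypothetical stand-ins for competing metabolic models:

```python
def most_informative_input(models, candidates):
    """Pick the experimental input on which the candidate models
    disagree the most; measuring the real system there best
    discriminates between them."""
    def spread(x):
        preds = [m(x) for m in models]
        return max(preds) - min(preds)
    return max(candidates, key=spread)

# Two hypothetical rate laws fitted to the same observations:
m1 = lambda s: 2.0 * s / (1.0 + s)     # Michaelis-Menten-like
m2 = lambda s: 1.8 * s / (0.5 + s)     # similar fit, different structure
xs = [0.1 * i for i in range(1, 51)]   # candidate substrate levels
x_star = most_informative_input([m1, m2], xs)
```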

  9. Automated refinement and inference of analytical models for metabolic networks

    PubMed Central

    Schmidt, Michael D; Vallabhajosyula, Ravishankar R; Jenkins, Jerry W; Hood, Jonathan E; Soni, Abhishek S; Wikswo, John P; Lipson, Hod

    2013-01-01

    The reverse engineering of metabolic networks from experimental data is traditionally a labor-intensive task requiring a priori systems knowledge. Using a proven model as a test system, we demonstrate an automated method to simplify this process by modifying an existing or related model – suggesting nonlinear terms and structural modifications – or even constructing a new model that agrees with the system’s time-series observations. In certain cases, this method can identify the full dynamical model from scratch without prior knowledge or structural assumptions. The algorithm selects between multiple candidate models by designing experiments to make their predictions disagree. We performed computational experiments to analyze a nonlinear seven-dimensional model of yeast glycolytic oscillations. This approach corrected mistakes reliably in both approximated and overspecified models. The method performed well to high levels of noise for most states, could identify the correct model de novo, and make better predictions than ordinary parametric regression and neural network models. We identified an invariant quantity in the model, which accurately derived kinetics and the numerical sensitivity coefficients of the system. Finally, we compared the system to dynamic flux estimation and discussed the scaling and application of this methodology to automated experiment design and control in biological systems in real-time. PMID:21832805

  10. Flexible dynamic models for user interfaces

    NASA Astrophysics Data System (ADS)

    Vogelsang, Holger; Brinkschulte, Uwe; Siormanolakis, Marios

    1997-04-01

    This paper describes an approach for a platform- and implementation-independent design of user interfaces using the UIMS idea. It is the result of a detailed examination of object-oriented techniques for program specification and implementation. This analysis leads to a description of the requirements for man-machine interaction from the software developer's point of view. The final user of the whole system, on the other hand, has a different view of the system: he needs metaphors from his own world to fulfill his tasks. It is the job of the user interface designer to bring these views together. The approach described in this paper helps bring both kinds of developers together, using a well-defined interface with minimal communication overhead.

  11. Modelling melt-solid interfaces in Bridgman growth

    NASA Technical Reports Server (NTRS)

    Barber, Patrick G.; Berry, Robert F.; Debnam, William J.; Fripp, Archibald F.; Huang, Yu

    1989-01-01

    Doped epoxy models with abrupt interfaces were prepared to test radiographic and computer enhancement procedures used to study the images of melt-solid interfaces during crystal growth in Bridgman furnaces. A column averaging procedure resulted in improved images that faithfully reproduced the positions and shapes of interfaces even at very low density differences. These techniques were applied to lead tin telluride growing in Bridgman furnaces.
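
    One plausible reading of the column-averaging step: averaging each transverse line of the radiograph yields a 1-D axial density profile in which random noise cancels while the melt-solid density step survives, and the interface is then the largest jump in that profile. The synthetic densities below are illustrative:

```python
def axial_density_profile(image):
    """Average each transverse line (row) of a radiograph, giving a
    1-D axial density profile; noise averages out while the density
    step at the melt-solid interface survives."""
    return [sum(row) / len(row) for row in image]

def interface_index(profile):
    """Locate the interface as the largest jump between adjacent rows."""
    jumps = [abs(profile[i + 1] - profile[i]) for i in range(len(profile) - 1)]
    return jumps.index(max(jumps))

# Synthetic radiograph: melt (density ~1.0) above solid (~1.2), noisy.
img = [[1.00, 1.02, 0.98], [1.01, 0.99, 1.00],
       [1.20, 1.18, 1.22], [1.21, 1.19, 1.20]]
row = interface_index(axial_density_profile(img))  # boundary after row 1
```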

  12. Rapid Automated Aircraft Simulation Model Updating from Flight Data

    NASA Technical Reports Server (NTRS)

    Brian, Geoff; Morelli, Eugene A.

    2011-01-01

    Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.

  13. A model for testing centerfinding algorithms for automated optical navigation

    NASA Technical Reports Server (NTRS)

    Griffin, M. D.; Breckenridge, W. G.

    1979-01-01

    An efficient software simulation of the imaging process for optical navigation is presented, illustrating results using simple examples. The problems of image definition and optical system modeling, including ideal image containing features and realistic models of optical filtering performed by the entire camera system, are examined. A digital signal processing technique is applied to the problem of developing methods of automated optical navigation and the subsequent mathematical formulation is presented. Specific objectives such as an analysis of the effects of camera defocusing on centerfinding of planar targets, addition of noise filtering to the algorithm, and implementation of multiple frame capability were investigated.

  14. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  15. Unifying binary fluid diffuse-interface models in the sharp-interface limit

    NASA Astrophysics Data System (ADS)

    Sibley, David; Nold, Andreas; Kalliadasis, Serafim

    2013-11-01

    Flows involving free boundaries occur widely in both nature and technological applications, existing at liquid-gas interfaces (e.g. between liquid water and water vapour) or between different immiscible fluids (e.g. oil and water, and termed a binary fluid). To understand the asymptotic behaviour near a contact line, a liquid-gas diffuse-interface model has been investigated recently. In contrast, here we investigate the behaviour between two ostensibly immiscible fluids, a binary fluid, using related models where the interface has a thin but finite thickness. Quantities such as the mass fraction of the two fluid components are modelled as varying smoothly but rapidly in the interfacial region. There has been a wide variety of models used for this situation, based on Cahn-Hilliard or Allen-Cahn theories coupled to hydrodynamic equations, and we consider the effect of these differences using matched asymptotic methods in the important sharp-interface limit, where the interface thickness goes to zero. Our aim is to understand which models represent better the classical hydrodynamic model and associated free-surface boundary conditions.
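
    One commonly used member of this model family (often called Model H) couples an order parameter φ to the hydrodynamics; assuming a mobility M, an interface-thickness parameter ε, and a double-well bulk free energy f(φ), representative transport equations for the two theories mentioned above are:

```latex
\text{Cahn--Hilliard:}\quad
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi
  = \nabla\cdot\bigl(M\,\nabla\mu\bigr),
\qquad
\text{Allen--Cahn:}\quad
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi
  = -M\,\mu,
\qquad
\mu = f'(\phi) - \varepsilon^{2}\nabla^{2}\phi .
```

    The chemical potential μ feeds back into the momentum equation through a capillary forcing term proportional to μ∇φ; the sharp-interface analysis asks which of these couplings recovers the classical free-surface boundary conditions as ε goes to zero.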

  16. Automated modelling of spatially-distributed glacier ice thickness and volume

    NASA Astrophysics Data System (ADS)

    James, William H. M.; Carrivick, Jonathan L.

    2016-07-01

    Ice thickness distribution and volume are both key parameters for glaciological and hydrological applications. This study presents VOLTA (Volume and Topography Automation), a Python script tool for ArcGIS™ that requires just a digital elevation model (DEM) and glacier outline(s) to model distributed ice thickness, volume and bed topography. Ice thickness is initially estimated at points along an automatically generated centreline network based on the perfect-plasticity rheology assumption, taking into account a valley side drag component of the force balance equation. Distributed ice thickness is subsequently interpolated using a glaciologically correct algorithm. For five glaciers with independent field-measured bed topography, VOLTA-modelled volumes deviated from the field-derived values by between 26.5% (underestimate) and 16.6% (overestimate). Greatest differences occurred where an asymmetric valley cross section was present or where significant valley infill had occurred. Compared with other methods of modelling ice thickness and volume, key advantages of VOLTA are: a fully automated approach and a user-friendly graphical user interface (GUI), GIS-consistent geometry, fully automated centreline generation, inclusion of a side drag component in the force balance equation, estimation of basal shear stress for each individual glacier, fully distributed ice thickness output and the ability to process multiple glaciers rapidly. VOLTA is capable of regional-scale ice volume assessment, which is a key parameter for exploring glacier response to climate change. VOLTA also permits subtraction of modelled ice thickness from the input surface elevation to produce an ice-free DEM, which is a key input for reconstruction of former glaciers. VOLTA could assist with prediction of future glacier geometry changes and hence in projection of future meltwater fluxes.
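
    The centreline thickness estimate from the perfect-plasticity assumption with a valley shape factor can be sketched as follows; the default shape factor and basal shear stress below are generic illustrative values, not VOLTA's per-glacier estimates:

```python
import math

def ice_thickness(tau_b, slope_deg, f=0.8, rho=917.0, g=9.81):
    """Perfect-plasticity estimate of centreline ice thickness:

        h = tau_b / (f * rho * g * sin(alpha))

    tau_b     : basal shear stress (Pa)
    slope_deg : surface slope (degrees)
    f         : valley shape factor accounting for side drag
    rho, g    : ice density (kg/m^3) and gravity (m/s^2)
    """
    return tau_b / (f * rho * g * math.sin(math.radians(slope_deg)))

h = ice_thickness(tau_b=100e3, slope_deg=5.0)  # on the order of 150 m
```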

  17. Rapid Prototyping of Hydrologic Model Interfaces with IPython

    NASA Astrophysics Data System (ADS)

    Farthing, M. W.; Winters, K. D.; Ahmadia, A. J.; Hesser, T.; Howington, S. E.; Johnson, B. D.; Tate, J.; Kees, C. E.

    2014-12-01

    A significant gulf still exists between the state of practice and state of the art in hydrologic modeling. Part of this gulf is due to the lack of adequate pre- and post-processing tools for newly developed computational models. The development of user interfaces has traditionally lagged several years behind the development of a particular computational model or suite of models. As a result, models with mature interfaces often lack key advancements in model formulation, solution methods, and/or software design and technology. Part of the problem has been a focus on developing monolithic tools to provide comprehensive interfaces for the entire suite of model capabilities. Such efforts require expertise in software libraries and frameworks for creating user interfaces (e.g., Tcl/Tk, Qt, and MFC). These tools are complex and require significant investment in project resources (time and/or money) to use. Moreover, providing the required features for the entire range of possible applications and analyses creates a cumbersome interface. For a particular site or application, the modeling requirements may be simplified or at least narrowed, which can greatly reduce the number and complexity of options that need to be accessible to the user. However, monolithic tools usually are not adept at dynamically exposing specific workflows. Our approach is to deliver highly tailored interfaces to users. These interfaces may be site and/or process specific. As a result, we end up with many, customized interfaces rather than a single, general-use tool. For this approach to be successful, it must be efficient to create these tailored interfaces. We need technology for creating quality user interfaces that is accessible and has a low barrier for integration into model development efforts. Here, we present efforts to leverage IPython notebooks as tools for rapid prototyping of site and application-specific user interfaces. We provide specific examples from applications in near

  18. Building a Nationwide Bibliographic Database: The Role of Local Shared Automated Systems.

    ERIC Educational Resources Information Center

    Wetherbee, Louella V.

    1992-01-01

    Discusses the actual and potential impact of local shared automated library systems on the development of a comprehensive nationwide bibliographic database (NBD). Shared local automated systems are described; four local shared automated system models are compared; and the current interface between local shared automated library systems and the NBD…

  19. COMPASS: A Framework for Automated Performance Modeling and Prediction

    SciTech Connect

    Lee, Seyong; Meredith, Jeremy S; Vetter, Jeffrey S

    2015-01-01

    Flexible, accurate performance predictions offer numerous benefits such as gaining insight into and optimizing applications and architectures. However, the development and evaluation of such performance predictions has been a major research challenge, due to the architectural complexities. To address this challenge, we have designed and implemented a prototype system, named COMPASS, for automated performance model generation and prediction. COMPASS generates a structured performance model from the target application's source code using automated static analysis, and then, it evaluates this model using various performance prediction techniques. As we demonstrate on several applications, the results of these predictions can be used for a variety of purposes, such as design space exploration, identifying performance tradeoffs for applications, and understanding sensitivities of important parameters. COMPASS can generate these predictions across several types of applications from traditional, sequential CPU applications to GPU-based, heterogeneous, parallel applications. Our empirical evaluation demonstrates a maximum overhead of 4%, flexibility to generate models for 9 applications, speed, ease of creation, and very low relative errors across a diverse set of architectures.

  20. The Application of the Cumulative Logistic Regression Model to Automated Essay Scoring

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Sinharay, Sandip

    2010-01-01

    Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. Comparison of the performances of the linear regression model and the cumulative logit model was performed on a…
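
    A cumulative logit (proportional-odds) model maps an essay feature to probabilities over ordered score categories via P(Y <= k) = logistic(theta_k - beta * x). A sketch with a single combined feature and illustrative, not fitted, coefficients:

```python
import math

def cumulative_logit_probs(x, thetas, beta):
    """Category probabilities under a cumulative logit model for an
    ordinal score. thetas are increasing cutpoints; x is a single
    combined essay feature (coefficients here are illustrative)."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(t - beta * x) for t in thetas] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

p = cumulative_logit_probs(x=1.2, thetas=[-1.0, 0.5, 2.0], beta=1.5)
score = p.index(max(p))  # most probable score category, 0..3
```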

  1. Structured Analysis Tool interface to the Strategic Defense Initiative architecture dataflow modeling technique. Master's thesis

    SciTech Connect

    Austin, K.A.

    1989-12-01

    A software interface was designed and implemented that extends the use of Structured Analysis (SA) Tool (SAtool) as a graphical front-end to the Strategic Defense Initiative Architecture Dataflow Modeling Technique (SADMT). SAtool is a computer-aided software engineering tool developed at the Air Force Institute of Technology that automates the requirements analysis phase of software development using a graphics editor. The tool automates two approaches for documenting software requirements analysis: SA diagrams and data dictionaries. SADMT is an Ada based simulation framework that enables users to model real-world architectures for simulation purposes. This research was accomplished in three phases. During the first phase, entity-relationship (E-R) models of each software package were developed. From these E-R models, relationships between the two software packages were identified and used to develop a mapping from SAtool to SADMT. The next phase of the research was the development of a software interface in Ada based on the mapping developed in the first phase. A combination of a top-down and a bottom-up approach was used in developing the software.

  2. Developing a Graphical User Interface to Automate the Estimation and Prediction of Risk Values for Flood Protective Structures using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Hasan, M.; Helal, A.; Gabr, M.

    2014-12-01

    In this project, we focus on providing a computer-automated platform for a better assessment of the potential failures and retrofit measures of flood-protecting earth structures, e.g., dams and levees. Such structures play an important role during extreme flooding events as well as during normal operating conditions. Furthermore, they are part of other civil infrastructure such as water storage and hydropower generation. Hence, there is a clear need for accurate evaluation of stability and functionality levels during their service lifetime so that rehabilitation and maintenance costs are effectively guided. Among condition assessment approaches based on the factor of safety, the limit states (LS) approach utilizes numerical modeling to quantify the probability of potential failures. The parameters for LS numerical modeling include i) geometry and side slopes of the embankment, ii) loading conditions in terms of rate of rise and duration of high water levels in the reservoir, and iii) cycles of rising and falling water levels simulating the effect of consecutive storms throughout the service life of the structure. Sample data regarding the correlations of these parameters are available through previous research studies. We have unified these criteria and extended the risk assessment in terms of loss of life through the implementation of a graphical user interface that automates the input parameters, divides data into training and testing sets, and then feeds them into an Artificial Neural Network (ANN) tool through MATLAB programming. The ANN modeling allows us to predict risk values of flood protective structures based on user feedback quickly and easily. In the future, we expect to fine-tune the software by adding extensive data on variations of parameters.

  3. Modeling and Control of the Automated Radiator Inspection Device

    NASA Technical Reports Server (NTRS)

    Dawson, Darren

    1991-01-01

    Many of the operations performed at the Kennedy Space Center (KSC) are dangerous and repetitive tasks which make them ideal candidates for robotic applications. For one specific application, KSC is currently in the process of designing and constructing a robot called the Automated Radiator Inspection Device (ARID), to inspect the radiator panels on the orbiter. The following aspects of the ARID project are discussed: modeling of the ARID; design of control algorithms; and nonlinear based simulation of the ARID. Recommendations to assist KSC personnel in the successful completion of the ARID project are given.

  4. Automated Physico-Chemical Cell Model Development through Information Theory

    SciTech Connect

    Peter J. Ortoleva

    2005-11-29

    The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is optimization of environmental remediation and energy-producing biotechnical processes.The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

  5. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  6. Finite element modeling of frictionally restrained composite interfaces

    NASA Technical Reports Server (NTRS)

    Ballarini, Roberto; Ahmed, Shamim

    1989-01-01

    The use of special interface finite elements to model frictional restraint in composite interfaces is described. These elements simulate Coulomb friction at the interface, and are incorporated into a standard finite element analysis of a two-dimensional isolated fiber pullout test. Various interfacial characteristics, such as the distribution of stresses at the interface, the extent of slip and delamination, load diffusion from fiber to matrix, and the amount of fiber extraction or depression are studied for different friction coefficients. The results are compared to those obtained analytically using a singular integral equation approach, and those obtained by assuming a constant interface shear strength. The usefulness of these elements in micromechanical modeling of fiber-reinforced composite materials is highlighted.

  7. Ray tracing in discontinuous velocity model with implicit Interface

    NASA Astrophysics Data System (ADS)

    Zhang, Jianxing; Yang, Qin; Meng, Xianhai; Li, Jigang

    2016-07-01

    Ray tracing in velocity models containing complex discontinuities still faces many challenges. The main difficulty arises from detecting the spatial relationship between the rays and the interfaces, which are usually described in non-linear parametric forms. We propose a novel model representation method that facilitates the implementation of classical shooting-ray methods. In the representation scheme, each interface is expressed as the zero contour of a signed distance field. A multi-copy strategy is adopted to describe the volumetric properties within blocks. The implicit description of the interface makes it easier to detect the ray-interface intersection: the direct calculation of the intersection point is converted into the problem of judging the signs of a ray segment's endpoints. More importantly, the normal to the interface at the intersection point can be easily acquired from the signed distance field of the interface. Storing the velocity property multiple times in the proximity of the interface provides accurate and unambiguous velocity information at the intersection point, so the departing ray path can be determined easily and robustly. In addition, the new representation method can describe velocity models containing very complex geological structures, such as faults, salt domes, intrusions, and pinches, without any simplification. Examples on synthetic and real models validate the robustness and accuracy of ray tracing based on the proposed model representation scheme.
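
    The sign-test idea (bracketing the interface where the signed distance field changes sign along a ray segment, then refining, with the normal taken from the field's gradient) can be sketched generically as follows. This is an illustrative 2-D sketch, not the authors' implementation:

```python
def ray_interface_hit(phi, origin, direction, t_max=10.0, dt=0.1, tol=1e-8):
    """March a ray through a signed distance field phi(x, y); a sign
    change between segment endpoints brackets the interface, which is
    then refined by bisection. Returns the ray parameter t or None."""
    ox, oy = origin
    dx, dy = direction
    p = lambda t: phi(ox + t * dx, oy + t * dy)
    t0 = 0.0
    while t0 < t_max:
        t1 = min(t0 + dt, t_max)
        if p(t0) * p(t1) <= 0.0:          # endpoints straddle the interface
            lo, hi = t0, t1
            while hi - lo > tol:          # bisection refinement
                mid = 0.5 * (lo + hi)
                if p(lo) * p(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        t0 = t1
    return None

def normal(phi, x, y, h=1e-6):
    """Unit normal from the finite-difference gradient of phi."""
    nx = (phi(x + h, y) - phi(x - h, y)) / (2 * h)
    ny = (phi(x, y + h) - phi(x, y - h)) / (2 * h)
    n = (nx * nx + ny * ny) ** 0.5
    return nx / n, ny / n

# Horizontal interface y = 2 (phi < 0 below, > 0 above):
t_hit = ray_interface_hit(lambda x, y: y - 2.0, (0.0, 0.0), (0.0, 1.0))
nx, ny = normal(lambda x, y: y - 2.0, 0.0, 2.0)
```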

  8. Flexible automated parameterization of hydrologic models using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Samanta, Sudeep; Mackay, D. Scott

    2003-01-01

    Recent developments in model calibration suggest that information obtained from calibration is inherently uncertain in nature. Therefore, identification of optimum parameter values is often highly nonspecific. A calibration framework using fuzzy logic is presented to deal with such uncertain information. An application of this technique to calibrate the streamflow of a hydrologic submodel embedded within an ecosystem simulation model demonstrates that objective estimates of parameter values, and the range of model output associated with a failure to identify a unique solution, can be obtained with suitable choices of objective functions. An iterative refinement in parameter estimates through a process of elimination was possible by incorporating multiple objective functions in calibration, thereby reducing the range of parameter values that capture the streamflow response. It is shown that objective function tradeoffs can lead to suboptimal solutions using the process of elimination without an automated procedure for reevaluation. Owing to its computational simplicity and flexibility, this framework could be extended into a nonmonotonic system for automated parameter estimation.
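
    The elimination step, discarding parameter sets whose combined fuzzy membership across multiple objective functions falls below an alpha-cut, might look like the following. The membership shapes, objectives and data are illustrative, not the paper's scheme:

```python
def membership(value, good, bad):
    """Linear fuzzy membership: 1 at 'good' performance, 0 at 'bad'."""
    if good == bad:
        return 1.0 if value == good else 0.0
    m = (value - bad) / (good - bad)
    return max(0.0, min(1.0, m))

def acceptable_parameters(candidates, objectives, alpha=0.5):
    """Keep parameter sets whose combined (min) membership over all
    objective functions exceeds the alpha-cut; one elimination pass
    of a fuzzy calibration loop."""
    kept = []
    for p in candidates:
        mu = min(membership(obj(p), good, bad)
                 for obj, good, bad in objectives)
        if mu >= alpha:
            kept.append(p)
    return kept

# Hypothetical: two error metrics of a one-parameter streamflow model.
objs = [(lambda p: abs(p - 3.0), 0.0, 2.0),     # bias: good=0, bad=2
        (lambda p: (p - 2.5) ** 2, 0.0, 4.0)]   # squared error
kept = acceptable_parameters([1.0, 2.5, 3.0, 5.0], objs)
```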

  9. A new seismically constrained subduction interface model for Central America

    NASA Astrophysics Data System (ADS)

    Kyriakopoulos, C.; Newman, A. V.; Thomas, A. M.; Moore-Driskell, M.; Farmer, G. T.

    2015-08-01

    We provide a detailed, seismically defined three-dimensional model for the subducting plate interface along the Middle America Trench between northern Nicaragua and southern Costa Rica. The model uses data from a weighted catalog of about 30,000 earthquake hypocenters compiled from nine catalogs to constrain the interface through a process we term the "maximum seismicity method." The method determines the average position of the largest cluster of microseismicity beneath an a priori functional surface above the interface. This technique is applied to all seismicity above 40 km depth, the approximate intersection of the hanging wall Mohorovičić discontinuity, where seismicity likely lies along the plate interface. Below this depth, an envelope above 90% of seismicity approximates the slab surface. Because of station proximity to the interface, this model provides highest precision along the interface beneath the Nicoya Peninsula of Costa Rica, an area where marked geometric changes coincide with crustal transitions and topography observed seaward of the trench. The new interface is useful for a number of geophysical studies that aim to understand subduction zone earthquake behavior and geodynamic and tectonic development of convergent plate boundaries.
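
    In one dimension, the "maximum seismicity method" amounts to taking the centre of the most populated depth bin of hypocenters within a cell; a caricature sketch (the bin width and catalogue below are hypothetical):

```python
from collections import Counter

def max_seismicity_depth(hypocenter_depths, bin_km=2.0):
    """Estimate the interface depth in one along-dip cell as the centre
    of the most populated depth bin of microseismicity (a 1-D caricature
    of the catalogue-clustering step)."""
    bins = Counter(int(d // bin_km) for d in hypocenter_depths)
    best = max(bins, key=bins.get)
    return (best + 0.5) * bin_km

depths = [14.9, 15.3, 16.1, 15.8, 22.0, 30.5, 15.2]  # km, toy catalogue
z = max_seismicity_depth(depths)  # cluster near ~15 km
```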

  10. Molecular modeling of cracks at interfaces in nanoceramic composites

    NASA Astrophysics Data System (ADS)

    Pavia, F.; Curtin, W. A.

    2013-10-01

    Toughness in Ceramic Matrix Composites (CMCs) is achieved if crack deflection can occur at the fiber/matrix interface, preventing crack penetration into the fiber and enabling energy-dissipating fiber pullout. To investigate toughening in nanoscale CMCs, direct atomistic models are used to study how matrix cracks behave as a function of the degree of interfacial bonding/sliding, as controlled by the density of C interstitial atoms, at the interface between carbon nanotubes (CNTs) and a diamond matrix. Under all interface conditions studied, incident matrix cracks do not penetrate into the nanotube. Under increased loading, weaker interfaces fail in shear while stronger interfaces do not fail and, instead, the CNT fails once the stress on the CNT reaches its tensile strength. An analytic shear lag model captures all of the micromechanical details as a function of loading and material parameters. Interface deflection versus fiber penetration is found to depend on the relative bond strengths of the interface and the CNT, with CNT failure occurring well below the prediction of the toughness-based continuum He-Hutchinson model. The shear lag model, in contrast, predicts the CNT failure point and shows that the nanoscale embrittlement transition occurs at an interface shear strength scaling as τs~ɛσ rather than τs~σ typically prevailing for micron scale composites, where ɛ and σ are the CNT failure strain and stress, respectively. Interface bonding also lowers the effective fracture strength in SWCNTs, due to formation of defects, but does not play a role in DWCNTs having interwall coupling, which are weaker than SWCNTs but less prone to damage in the outerwall.

  11. Control of a Wheelchair in an Indoor Environment Based on a Brain-Computer Interface and Automated Navigation.

    PubMed

    Zhang, Rui; Li, Yuanqing; Yan, Yongyong; Zhang, Hao; Wu, Shaoyu; Yu, Tianyou; Gu, Zhenghui

    2016-01-01

    The concept of controlling a wheelchair using brain signals is promising. However, the continuous control of a wheelchair based on unstable and noisy electroencephalogram signals is unreliable and generates a significant mental burden for the user. A feasible solution is to integrate a brain-computer interface (BCI) with automated navigation techniques. This paper presents a brain-controlled intelligent wheelchair with the capability of automatic navigation. Using an autonomous navigation system, candidate destinations and waypoints are automatically generated based on the existing environment. The user selects a destination using a motor imagery (MI)-based or P300-based BCI. According to the determined destination, the navigation system plans a short and safe path and navigates the wheelchair to the destination. During the movement of the wheelchair, the user can issue a stop command with the BCI. Using our system, the mental burden of the user can be substantially alleviated. Furthermore, our system can adapt to changes in the environment. Two experiments based on MI and P300 were conducted to demonstrate the effectiveness of our system. PMID:26054072

  12. An Interactive Tool For Semi-automated Statistical Prediction Using Earth Observations and Models

    NASA Astrophysics Data System (ADS)

    Zaitchik, B. F.; Berhane, F.; Tadesse, T.

    2015-12-01

    We developed a semi-automated statistical prediction tool applicable to concurrent analysis or seasonal prediction of any time series variable in any geographic location. The tool was developed using Shiny, JavaScript, HTML and CSS. A user can extract a predictand by drawing a polygon over a region of interest on the provided user interface (global map). The user can select the Climatic Research Unit (CRU) precipitation or Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) as predictand. They can also upload their own predictand time series. Predictors can be extracted from sea surface temperature, sea level pressure, winds at different pressure levels, air temperature at various pressure levels, and geopotential height at different pressure levels. By default, reanalysis fields are applied as predictors, but the user can also upload their own predictors, including a wide range of compatible satellite-derived datasets. The package generates correlations of the variables selected with the predictand. The user also has the option to generate composites of the variables based on the predictand. Next, the user can extract predictors by drawing polygons over the regions that show strong correlations (composites). Then, the user can select some or all of the statistical prediction models provided. Provided models include Linear Regression models (GLM, SGLM), Tree-based models (bagging, random forest, boosting), Artificial Neural Network, and other non-linear models such as Generalized Additive Model (GAM) and Multivariate Adaptive Regression Splines (MARS). Finally, the user can download the analysis steps they used, such as the region they selected, the time period they specified, the predictand and predictors they chose and preprocessing options they used, and the model results in PDF or HTML format. Key words: Semi-automated prediction, Shiny, R, GLM, ANN, RF, GAM, MARS
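
As a minimal stand-in for the tool's fit-and-verify step (polygon-averaged predictors, train/verify split), one member of its model menu, a linear regression, can be sketched with NumPy. The data below are synthetic placeholders, not CRU/CHIRPS or reanalysis fields:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: rows = years; columns = predictors averaged over
# user-drawn polygons (e.g. SST or SLP boxes); y = a seasonal rainfall index.
X = rng.normal(size=(40, 3))
y = 2.0 * X[:, 0] - X[:, 2] + 0.1 * rng.normal(size=40)

# One model from the tool's menu (a GLM analogue): ordinary least squares
# trained on the first 30 years, then verified on the final 10.
A = np.column_stack([np.ones(30), X[:30]])
coef, *_ = np.linalg.lstsq(A, y[:30], rcond=None)
pred = np.column_stack([np.ones(10), X[30:]]) @ coef
skill = float(np.corrcoef(pred, y[30:])[0, 1])
print(round(skill, 3))   # correlation skill on the held-out years
```

The tool's tree-based, neural-network and spline models plug into the same predictand/predictor workflow; only the fitting step changes.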

  13. User interface for ground-water modeling: ArcView extension

    USGS Publications Warehouse

    Tsou, M.-S.; Whittemore, D.O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software and provides a robust, user-friendly interface to facilitate data handling and display. An extension is an add-on program that provides ArcView with additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension to facilitate modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be calibrated automatically through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  14. An Automated 3d Indoor Topological Navigation Network Modelling

    NASA Astrophysics Data System (ADS)

    Jamali, A.; Rahman, A. A.; Boguslawski, P.; Gold, C. M.

    2015-10-01

    Indoor navigation is important for applications such as disaster management and safety analysis. In the last decade, the indoor environment has been a focus of extensive research, including techniques for acquiring indoor data (e.g. terrestrial laser scanning), 3D indoor modelling and 3D indoor navigation models. In this paper, an automated 3D topological indoor network generated from inaccurate 3D building models is proposed. Normally, deriving a 3D indoor navigation network requires accurate 3D models free of errors (e.g. gaps or intersections), and two cells (e.g. rooms, corridors) must touch each other for their connection to be built. The presented 3D modelling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. To reduce the time and cost of the indoor data acquisition process, a Trimble LaserAce 1000 was used as the surveying instrument. The modelling results were validated against an accurate geometry of the indoor building environment acquired with a Trimble M3 total station.

  15. Empirical rheological model for rough or grooved bonded interfaces.

    PubMed

    Belloncle, Valentina Vlasie; Rousseau, Martine

    2007-12-01

    In the industrial sector, it is common to use metal/adhesive/metal structural bonds. The cohesion of such structures can be improved by preliminary chemical treatments (degreasing with solvents, alkaline, or acid pickling), electrochemical treatments (anodising), or mechanical treatments (abrasion, sandblasting, grooving) of the metallic plates. All these pretreatments create some asperities, ranging from roughnesses to grooves. On the other hand, in damage solid mechanics and in non-destructive testing, rheological models are used to measure the strength of bonded interfaces. However, these models do not take into account the interlocking of the adhesive in the porosities. Here, an empirical rheological model taking into account the interlocking effects is developed. This model depends on a characteristic parameter representing the average porosity along the interface, which considerably simplifies the corresponding stress and displacement jump conditions. The paper deals with the influence of this interface model on the ultrasonic guided modes of the structure. PMID:17659313

  16. Ab initio diffuse-interface model for lithiated electrode interface evolution

    NASA Astrophysics Data System (ADS)

    Stournara, Maria E.; Kumar, Ravi; Qi, Yue; Sheldon, Brian W.

    2016-07-01

    The study of chemical segregation at interfaces, and in particular the ability to predict the thickness of segregated layers via analytical expressions or computational modeling, is a fundamentally challenging topic in the design of novel heterostructured materials. This issue is particularly relevant for the phase-field (PF) methodology, which has become a prominent tool for describing phase transitions. These models rely on phenomenological parameters that pertain to the interfacial energy and thickness, quantities that cannot be experimentally measured. Instead of back-calculating these parameters from experimental data, here we combine a set of analytical expressions based on the Cahn-Hilliard approach with ab initio calculations to compute the gradient energy parameter κ and the thickness λ of the segregated Li layer at the LixSi-Cu interface. With this bottom-up approach we calculate the thickness λ of the Li diffuse interface to be on the order of a few nm, in agreement with prior experimental secondary ion mass spectrometry observations. Our analysis indicates that Li segregation is primarily driven by solution thermodynamics, while the strain contribution in this system is relatively small. This combined scheme provides an essential first step in the systematic evaluation of the thermodynamic parameters of the PF methodology, and we believe that it can serve as a framework for the development of quantitative interface models in the field of Li-ion batteries.
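
The roles of the gradient energy parameter κ and the interface thickness λ can be illustrated with the standard Cahn-Hilliard double-well construction. The sketch below uses hypothetical κ and W values (not the paper's ab initio LixSi-Cu parameters) and checks the equilibrium tanh profile numerically, recovering a diffuse-interface thickness of order 1 nm:

```python
import numpy as np

# Generic Cahn-Hilliard double well F = ∫ [W c²(1-c)² + (κ/2)|∇c|²] dx with
# hypothetical κ, W (NOT the paper's ab initio LixSi-Cu values).
kappa, W = 2.0e-10, 1.0e8                # J/m and J/m^3, assumed
x = np.linspace(-5e-9, 5e-9, 2001)       # metres
k = np.sqrt(2.0 * W / kappa)             # inverse length scale of the profile
c = 0.5 * (1.0 + np.tanh(0.5 * k * x))   # equilibrium tanh profile

# Equipartition check at the midpoint: (κ/2) c'² = W c²(1-c)² holds for the
# exact profile; lam is the resulting diffuse-interface thickness scale.
dcdx = np.gradient(c, x)
mid = len(x) // 2
lhs = 0.5 * kappa * dcdx[mid] ** 2
rhs = W * (c[mid] * (1.0 - c[mid])) ** 2
lam = np.sqrt(kappa / (2.0 * W))
print(lam)   # 1e-09 m, i.e. ~1 nm for these assumed values
```

The paper's contribution is to obtain κ and λ from ab initio calculations rather than assuming them, but the length scale enters through the same √(κ/W) combination.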

  17. Ab initio diffuse-interface model for lithiated electrode interface evolution.

    PubMed

    Stournara, Maria E; Kumar, Ravi; Qi, Yue; Sheldon, Brian W

    2016-07-01

    The study of chemical segregation at interfaces, and in particular the ability to predict the thickness of segregated layers via analytical expressions or computational modeling, is a fundamentally challenging topic in the design of novel heterostructured materials. This issue is particularly relevant for the phase-field (PF) methodology, which has become a prominent tool for describing phase transitions. These models rely on phenomenological parameters that pertain to the interfacial energy and thickness, quantities that cannot be experimentally measured. Instead of back-calculating these parameters from experimental data, here we combine a set of analytical expressions based on the Cahn-Hilliard approach with ab initio calculations to compute the gradient energy parameter κ and the thickness λ of the segregated Li layer at the LixSi-Cu interface. With this bottom-up approach we calculate the thickness λ of the Li diffuse interface to be on the order of a few nm, in agreement with prior experimental secondary ion mass spectrometry observations. Our analysis indicates that Li segregation is primarily driven by solution thermodynamics, while the strain contribution in this system is relatively small. This combined scheme provides an essential first step in the systematic evaluation of the thermodynamic parameters of the PF methodology, and we believe that it can serve as a framework for the development of quantitative interface models in the field of Li-ion batteries. PMID:27575197

  18. Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.

  19. Diffuse Interface Model for Microstructure Evolution

    NASA Astrophysics Data System (ADS)

    Nestler, Britta

    A phase-field model for a general class of multi-phase metallic alloys is proposed which describes both multi-phase solidification phenomena and polycrystalline grain structures. The model serves as a computational method to simulate the motion and kinetics of multiple phase boundaries and enables the visualization of diffusion processes and phase transitions in multi-phase systems. Numerical simulations are presented which illustrate the capability of the phase-field model to recover a variety of complex experimental growth structures. In particular, the phase-field model can be used to simulate microstructure evolution in eutectic, peritectic and monotectic alloys. In addition, polycrystalline grain structures with effects such as wetting, grain growth, symmetry properties of adjacent triple junctions in thin-film samples and stability criteria at multiple junctions are described by phase-field simulations.

  20. Back to the Future: A Non-Automated Method of Constructing Transfer Models

    ERIC Educational Resources Information Center

    Feng, Mingyu; Beck, Joseph

    2009-01-01

    Representing domain knowledge is important for constructing educational software, and automated approaches have been proposed to construct and refine such models. In this paper, instead of applying automated and computationally intensive approaches, we simply start with existing hand-constructed transfer models at various levels of granularity and…

  1. A distributed data component for the open modeling interface

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As the volume of collected data continues to increase in the environmental sciences, so does the need for effective means for accessing those data. We have developed an Open Modeling Interface (OpenMI) data component that retrieves input data for model components from environmental information syste...

  2. Integration of finite element modeling with solid modeling through a dynamic interface

    NASA Technical Reports Server (NTRS)

    Shephard, Mark S.

    1987-01-01

    Finite element modeling is dominated by geometric modeling type operations. Therefore, an effective interface to geometric modeling requires access to both the model and the modeling functionality used to create it. The use of a dynamic interface that addresses these needs through the use of boundary data structures and geometric operators is discussed.

  3. Automated piecewise power-law modeling of biological systems.

    PubMed

    Machina, Anna; Ponosov, Arkady; Voit, Eberhard O

    2010-09-01

    Recent trends suggest that future biotechnology will increasingly rely on mathematical models of the biological systems under investigation. In particular, metabolic engineering will make wider use of metabolic pathway models in stoichiometric or fully kinetic format. A significant obstacle to the use of pathway models is the identification of suitable process descriptions and their parameters. We recently showed that, at least under favorable conditions, Dynamic Flux Estimation (DFE) permits the numerical characterization of fluxes from sets of metabolic time series data. However, DFE does not prescribe how to convert these numerical results into functional representations. In some cases, Michaelis-Menten rate laws or canonical formats are well suited, in which case the estimation of parameter values is easy. However, in other cases, appropriate functional forms are not evident, and exhaustive searches among all possible candidate models are not feasible. We show here how piecewise power-law functions of one or more variables offer an effective default solution for the almost unbiased representation of uni- and multivariate time series data. The results of an automated algorithm for their determination are piecewise power-law fits, whose accuracy is only limited by the available data. The individual power-law pieces may lead to discontinuities at break points or boundaries between sub-domains. In many practical applications, these boundary gaps do not cause problems. Potential smoothing techniques, based on differential inclusions and Filippov's theory, are discussed in Appendix A. PMID:20060428
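
A minimal sketch of the piecewise power-law representation: on each sub-domain, y ≈ a·x^b is fit by linear regression in log-log space. Here the break point is supplied by hand, whereas the paper's automated algorithm determines the pieces from the data:

```python
import numpy as np

def piecewise_powerlaw_fit(x, y, breakpoints):
    """Fit y ≈ a * x**b on each sub-domain by linear regression in log-log
    space. Break points are user-supplied in this sketch; the paper's
    automated algorithm would choose the pieces itself."""
    fits = []
    edges = [x.min()] + list(breakpoints) + [x.max() + 1e-12]
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (x >= lo) & (x < hi)
        b, log_a = np.polyfit(np.log(x[m]), np.log(y[m]), 1)
        fits.append((float(np.exp(log_a)), float(b)))  # (prefactor, exponent)
    return fits

x = np.linspace(0.1, 10.0, 500)
y = np.where(x < 1.0, 2.0 * x ** 0.5, 2.0 * x ** 1.5)  # two power-law pieces
print(piecewise_powerlaw_fit(x, y, [1.0]))  # ≈ [(2.0, 0.5), (2.0, 1.5)]
```

The discontinuity at the break point visible here is exactly the "boundary gap" issue the abstract mentions; the paper's Appendix A discusses smoothing it via differential inclusions.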

  4. An interface model for dosage adjustment connects hematotoxicity to pharmacokinetics.

    PubMed

    Meille, C; Iliadis, A; Barbolosi, D; Frances, N; Freyer, G

    2008-12-01

    When modeling is required to describe pharmacokinetics and pharmacodynamics simultaneously, it is difficult to link time-concentration profiles and drug effects. For patients under chemotherapy, despite the large number of blood-count measurements, there is a lack of exposure variables relating hematotoxicity to circulating drug blood levels. We developed an interface model that transforms circulating pharmacokinetic concentrations into adequate exposures, intended as inputs to the pharmacodynamic process. The model is materialized by a nonlinear differential equation involving three parameters. The relevance of the interface model for dosage adjustment is illustrated by numerous simulations. In particular, the interface model is incorporated into a complex system including pharmacokinetics and neutropenia induced by docetaxel and by cisplatin. Emphasis is placed on the sensitivity of neutropenia with respect to variations in the drug amount. This complex system including pharmacokinetic, interface, and pharmacodynamic hematotoxicity models is an interesting tool for the analysis of hematotoxicity induced by anticancer agents. The model could be a new basis for further improvements aimed at incorporating new experimental features. PMID:19107581
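
The abstract does not give the closed form of its three-parameter equation, so the sketch below is purely illustrative: a hypothetical nonlinear first-order ODE (parameters alpha, beta, gamma invented here) mapping a plasma-concentration pulse C(t) to an exposure E(t), integrated by explicit Euler:

```python
# Purely illustrative interface stage: the paper's actual three-parameter
# nonlinear ODE is not reproduced here. alpha, beta, gamma and the saturable
# form are invented; C(t) is a square concentration pulse, E(t) the derived
# exposure that would drive the pharmacodynamic (hematotoxicity) model.
alpha, beta, gamma = 1.0, 0.5, 2.0   # hypothetical interface parameters

def dE_dt(E, C):
    # saturable input in C, linear washout in E
    return alpha * C / (1.0 + C / gamma) - beta * E

dt, T = 0.01, 20.0
E, history = 0.0, []
for i in range(int(T / dt)):
    t = i * dt
    C = 4.0 if t < 5.0 else 0.0      # concentration pulse for 5 time units
    E += dt * dE_dt(E, C)            # explicit Euler step
    history.append(E)
print(round(max(history), 2))        # peak exposure lags and smooths the pulse
```

The point of such an interface stage is visible even in this toy: the exposure rises and decays smoothly and nonlinearly in response to the concentration input, giving the PD model a more physiological driving variable.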

  5. Automated robust generation of compact 3D statistical shape models

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Tomazevic, Dejan; Pernus, Franjo

    2004-05-01

    Ascertaining the detailed shape and spatial arrangement of anatomical structures is important not only within diagnostic settings but also in the areas of planning, simulation, intraoperative navigation, and tracking of pathology. Robust, accurate and efficient automated segmentation of anatomical structures is difficult because of their complexity and inter-patient variability. Furthermore, the position of the patient during image acquisition, the imaging device and protocol, image resolution, and other factors induce additional variations in shape and appearance. Statistical shape models (SSMs) have proven quite successful in capturing structural variability. A possible approach to obtain a 3D SSM is to extract reference voxels by precisely segmenting the structure in one, reference image. The corresponding voxels in other images are determined by registering the reference image to each other image. The SSM obtained in this way describes statistically plausible shape variations over the given population as well as variations due to imperfect registration. In this paper, we present a completely automated method that significantly reduces shape variations induced by imperfect registration, thus allowing a more accurate description of variations. At each iteration, the derived SSM is used for coarse registration, which is further improved by describing finer variations of the structure. The method was tested on 64 lumbar spinal column CT scans, from which 23, 38, 45, 46 and 42 volumes of interest containing vertebra L1, L2, L3, L4 and L5, respectively, were extracted. Separate SSMs were generated for each vertebra. The results show that the method is capable of reducing the variations induced by registration errors.
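
Once shapes are registered, the core of a statistical shape model is a principal-component analysis of stacked landmark vectors. A toy 2D sketch with one synthetic mode of variation (the paper applies the same machinery to registered vertebral CT volumes):

```python
import numpy as np

# Toy SSM: each training shape is a circle whose radius varies across a
# synthetic population; PCA on the stacked landmark vectors recovers that
# single mode of variation.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
shapes = []
for _ in range(30):
    r = 1.0 + 0.1 * rng.normal()                  # one mode: overall size
    shapes.append(np.c_[r * np.cos(t), r * np.sin(t)].ravel())
X = np.array(shapes)                              # 30 shapes x 100 coords

mean_shape = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)               # variance per mode
print(round(float(explained[0]), 3))              # first mode dominates
```

In real data, registration errors add spurious modes to this decomposition; the paper's iterative scheme is designed to shrink exactly that contribution.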

  6. Designing a flexible grid enabled scientific modeling interface.

    SciTech Connect

    Dvorak, M.; Taylor, J.; Mickelson, S.

    2002-08-15

    The Espresso Scientific Modeling Interface (Espresso) is a scientific modeling productivity tool developed for climate modelers. Espresso was designed to be an extensible interface to both scientific models and Grid resources. It also aims to be a contemporary piece of software that relies on Globus.org's Java CoG Kit as a Grid toolkit and Sun's Java 2 API, and is configured using XML. This article covers the design and implementation of Espresso's Grid functionality and how it interacts with existing scientific models. The authors give specific examples of how they have designed Espresso to perform climate simulations using the PSU/NCAR MM5 atmospheric model. Plans to incorporate the CCSM and FOAM climate models are also discussed.

  7. Improved Sharp Interface Models in Coastal Aquifers of Finite Dimensions

    NASA Astrophysics Data System (ADS)

    Christelis, Vasileios; Mantoglou, Aristotelis

    2013-04-01

    Coastal aquifer management often involves aquifers of finite dimensions where optimal pumping rates must be calculated through a combined simulation-optimization procedure. Variable-density numerical models are considered more exact than sharp interface models as they better describe the governing flow and transport equations. However, such models are not always preferable in pumping optimization studies, due to their complexity and computational burden. On the other hand, sharp interface models are approximate and can lead to large errors if they are not applied properly, particularly when model boundaries are not treated correctly. The present paper proposes improved sharp interface models considering aquifer boundaries in a proper way. Two sharp interface models are investigated based on the single potential formulation and the Ghyben-Herzberg relation. The first model (Strack, 1976) is based on the assumption of a semi-infinite aquifer with a sea-boundary only. The second model (Mantoglou, 2003) is based on an analytical solution developed for coastal aquifers of finite size and accounts for inland and lateral aquifer boundaries. Next, both models are modified using an empirical correction factor (similar to Pool and Carrera, 2011) which accounts for mixing. A simple pumping optimization problem with a single well in a confined coastal aquifer of finite size with four boundaries (sea, inland and lateral impervious boundaries) is employed. The constraint prevents the toe of the interface to reach the well and the optimal pumping rates are calculated for different locations of the pumping well and different combinations of aquifer parameters. Then the results of the sharp interface models are compared to the 'true' results of the corresponding variable-density numerical model in order to evaluate the performance of the sharp interface models. The results indicate that when the location of the well is close to the sea-boundary, the semi-infinite and the finite
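
Both sharp-interface models rest on the Ghyben-Herzberg relation, which for standard densities places the freshwater/saltwater interface roughly 40 head-heights below sea level; a one-line illustration with assumed values:

```python
# Ghyben-Herzberg relation underlying sharp-interface models: the interface
# depth below sea level is z = rho_f / (rho_s - rho_f) * h, about 40 times
# the freshwater head h for standard densities. Values are illustrative.
rho_f, rho_s = 1000.0, 1025.0   # freshwater / seawater density, kg/m^3
h = 0.5                          # freshwater head above sea level, m
z = rho_f / (rho_s - rho_f) * h
print(z)   # 20.0 -> interface ~20 m below sea level
```

The correction factors discussed in the abstract adjust this idealized sharp-interface picture to account for the mixing zone that variable-density models resolve explicitly.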

  8. NASA: Model development for human factors interfacing

    NASA Technical Reports Server (NTRS)

    Smith, L. L.

    1984-01-01

    The results of an intensive literature review in the general topics of human error analysis, stress and job performance, and accident and safety analysis revealed no usable techniques or approaches for analyzing human error in ground or space operations tasks. A task review model is described and proposed to be developed in order to reduce the degree of labor intensiveness in ground and space operations tasks. An extensive number of annotated references are provided.

  9. Modelling interfacial cracking with non-matching cohesive interface elements

    NASA Astrophysics Data System (ADS)

    Nguyen, Vinh Phu; Nguyen, Chi Thanh; Bordas, Stéphane; Heidarpour, Amin

    2016-07-01

    Interfacial cracking occurs in many engineering problems, such as delamination in composite laminates and matrix/interface debonding in fibre-reinforced composites. Computational modelling of these interfacial cracks usually employs compatible or matching cohesive interface elements. In this paper, incompatible or non-matching cohesive interface elements are proposed for interfacial fracture mechanics problems. They allow non-matching finite element discretisations of the opposite crack faces, thus lifting the constraint of compatible discretisation of the domains sharing the interface. The formulation is based on a discontinuous Galerkin method and works with both initially elastic and rigid cohesive laws. The proposed formulation has two advantages over classical interface elements: (i) non-matching discretisations of the domains and (ii) no high dummy stiffness. Two- and three-dimensional quasi-static fracture simulations are conducted to demonstrate the method. Our method not only simplifies the meshing process but also requires less computational effort than standard interface elements for problems involving materials/solids with a large mismatch in stiffness.

  10. Molecular Modeling of Water Interfaces: From Molecular Spectroscopy to Thermodynamics.

    PubMed

    Nagata, Yuki; Ohto, Tatsuhiko; Backus, Ellen H G; Bonn, Mischa

    2016-04-28

    Understanding aqueous interfaces at the molecular level is not only fundamentally important, but also highly relevant for a variety of disciplines. For instance, electrode-water interfaces are relevant for electrochemistry, as are mineral-water interfaces for geochemistry and air-water interfaces for environmental chemistry; water-lipid interfaces constitute the boundaries of the cell membrane, and are thus relevant for biochemistry. One of the major challenges in these fields is to link macroscopic properties such as interfacial reactivity, solubility, and permeability as well as macroscopic thermodynamic and spectroscopic observables to the structure, structural changes, and dynamics of molecules at these interfaces. Simulations, by themselves, or in conjunction with appropriate experiments, can provide such molecular-level insights into aqueous interfaces. In this contribution, we review the current state-of-the-art of three levels of molecular dynamics (MD) simulation: ab initio, force field, and coarse-grained. We discuss the advantages, the potential, and the limitations of each approach for studying aqueous interfaces, by assessing computations of the sum-frequency generation spectra and surface tension. The comparison of experimental and simulation data provides information on the challenges of future MD simulations, such as improving the force field models and the van der Waals corrections in ab initio MD simulations. Once good agreement between experimental observables and simulation can be established, the simulation can be used to provide insights into the processes at a level of detail that is generally inaccessible to experiments. As an example we discuss the mechanism of the evaporation of water. We finish by presenting an outlook outlining four future challenges for molecular dynamics simulations of aqueous interfacial systems. PMID:27010817

  11. Critical interfaces and duality in the Ashkin-Teller model

    SciTech Connect

    Picco, Marco; Santachiara, Raoul

    2011-06-15

    We report numerical measurements of different spin interfaces and Fortuin-Kasteleyn (FK) cluster boundaries in the Ashkin-Teller (AT) model. For a general point on the AT critical line, we find that the fractal dimension of a generic spin cluster interface can take one of four different possible values. In particular, we found spin interfaces whose fractal dimension is d_f = 3/2 all along the critical line. Furthermore, the fractal dimension of the boundaries of FK clusters was found to satisfy, all along the AT critical line, a duality relation with the fractal dimension of their outer boundaries. This result provides clear numerical evidence that such a duality, which is well known in the case of the O(n) model, exists in an extended conformal field theory.

  12. Attenuation of numerical artefacts in the modelling of fluid interfaces

    NASA Astrophysics Data System (ADS)

    Evrard, Fabien; van Wachem, Berend G. M.; Denner, Fabian

    2015-11-01

    Numerical artefacts in the modelling of fluid interfaces, such as parasitic currents or spurious capillary waves, present a considerable problem in two-phase flow modelling. Parasitic currents result from an imperfect evaluation of the interface curvature and can severely affect the flow, whereas spatially underresolved (spurious) capillary waves impose strict limits on the time-step and, hence, dictate the required computational resources for surface-tension-dominated flows. By applying an additional shear stress term at the fluid interface, thereby dissipating the surface energy associated with small wavelengths, we have been able to considerably reduce the adverse impact of parasitic currents and mitigate the time-step limit imposed by capillary waves. However, a careful choice of the applied interface viscosity is crucial, since an excess of additional dissipation compromises the accuracy of the solution. We present the derivation of the additional interfacial shear stress term, explain the underlying physical mechanism and discuss the impact on parasitic currents and interface instabilities based on a variety of numerical experiments. We acknowledge financial support from the Engineering and Physical Sciences Research Council (EPSRC) through Grant No. EP/M021556/1 and from PETROBRAS.
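
The time-step limit imposed by spatially underresolved capillary waves is commonly estimated with a Brackbill-type constraint. The sketch below (illustrative air/water values, not parameters from the paper) shows how refining the mesh by 10× tightens the limit by about 10^1.5 ≈ 32×:

```python
import math

# Brackbill-type capillary time-step constraint:
#   dt <= sqrt((rho1 + rho2) * dx**3 / (2 * pi * sigma))
# Values are illustrative (roughly air/water), not taken from the paper.
rho1, rho2 = 1000.0, 1.2    # phase densities, kg/m^3
sigma = 0.072               # surface tension coefficient, N/m

for dx in (1e-3, 1e-4, 1e-5):
    dt = math.sqrt((rho1 + rho2) * dx ** 3 / (2.0 * math.pi * sigma))
    print(f"dx = {dx:.0e} m  ->  dt <= {dt:.2e} s")
```

The dt ∝ dx^(3/2) scaling is what makes surface-tension-dominated simulations expensive at fine resolution, and is the limit the interfacial shear-stress term in the abstract aims to relax.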

  13. Computer modelling studies of the bilayer/water interface.

    PubMed

    Pasenkiewicz-Gierula, Marta; Baczynski, Krzysztof; Markiewicz, Michal; Murzyn, Krzysztof

    2016-10-01

    This review summarises high resolution studies on the interface of lamellar lipid bilayers composed of the most typical lipid molecules which constitute the lipid matrix of biomembranes. The presented results were obtained predominantly by computer modelling methods. Whenever possible, the results were compared with experimental results obtained for similar systems. The first and main section of the review is concerned with the bilayer-water interface and is divided into four subsections. The first describes the simplest case, where the interface consists only of lipid head groups and water molecules and focuses on interactions between the lipid heads and water molecules; the second describes the interface containing also mono- and divalent ions and concentrates on lipid-ion interactions; the third describes direct inter-lipid interactions. These three subsections are followed by a discussion on the network of direct and indirect inter-lipid interactions at the bilayer interface. The second section summarises recent computer simulation studies on the interactions of antibacterial membrane active compounds with various models of the bacterial outer membrane. This article is part of a Special Issue entitled: Biosimulations edited by Ilpo Vattulainen and Tomasz Róg. PMID:26825705

  14. Automated optic disk boundary detection by modified active contour model.

    PubMed

    Xu, Juan; Chutatape, Opas; Chew, Paul

    2007-03-01

    This paper presents a novel deformable-model-based algorithm for fully automated detection of the optic disk boundary in fundus images. The proposed method improves and extends the original snake (a deforming-only technique) in two aspects: clustering and a smoothing update. The contour points are first self-separated into an edge-point group or an uncertain-point group by clustering after each deformation, and these contour points are then updated by different criteria based on their group. The updating process combines both the local and global information of the contour to achieve a balance of contour stability and accuracy. The modifications make the proposed algorithm more accurate and robust to blood vessel occlusions, noise, ill-defined edges and fuzzy contour shapes. The comparative results show that the proposed method can estimate the disk boundaries of 100 test images closer to the ground truth, as measured by mean distance to closest point (MDCP) <3 pixels, with a better success rate than gradient vector flow snake (GVF-snake) and modified active shape models (ASM). PMID:17355059
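
The MDCP measure used to score the detected boundaries is straightforward to compute. A hedged one-sided reading (mean over detected-contour points of the distance to the closest ground-truth point), demonstrated on synthetic circular contours:

```python
import numpy as np

def mdcp(detected, truth):
    """One-sided mean distance to closest point: for each point of the
    detected contour, the distance to the nearest ground-truth point,
    averaged. (A hedged reading of the MDCP measure; synthetic data below.)"""
    d = np.linalg.norm(detected[:, None, :] - truth[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

t = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
detected = np.c_[50.0 * np.cos(t), 50.0 * np.sin(t)]  # estimated disk boundary
truth = np.c_[52.0 * np.cos(t), 52.0 * np.sin(t)]     # ground-truth boundary
print(round(mdcp(detected, truth), 2))  # 2.0 px -> within the <3 px criterion
```

A symmetric variant would average the two one-sided values; either way, for these concentric circles the measure recovers the 2-pixel radial offset.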

  15. Model annotation for synthetic biology: automating model to nucleotide sequence conversion

    PubMed Central

    Misirli, Goksel; Hallinan, Jennifer S.; Yu, Tommy; Lawson, James R.; Wimalaratne, Sarala M.; Cooling, Michael T.; Wipat, Anil

    2011-01-01

    Motivation: The need for the automated computational design of genetic circuits is becoming increasingly apparent with the advent of ever more complex and ambitious synthetic biology projects. Currently, most circuits are designed through the assembly of models of individual parts such as promoters, ribosome binding sites and coding sequences. These low level models are combined to produce a dynamic model of a larger device that exhibits a desired behaviour. The larger model then acts as a blueprint for physical implementation at the DNA level. However, the conversion of models of complex genetic circuits into DNA sequences is a non-trivial undertaking due to the complexity of mapping the model parts to their physical manifestation. Automating this process is further hampered by the lack of computationally tractable information in most models. Results: We describe a method for automatically generating DNA sequences from dynamic models implemented in CellML and Systems Biology Markup Language (SBML). We also identify the metadata needed to annotate models to facilitate automated conversion, and propose and demonstrate a method for the markup of these models using RDF. Our algorithm has been implemented in a software tool called MoSeC. Availability: The software is available from the authors' web site http://research.ncl.ac.uk/synthetic_biology/downloads.html. Contact: anil.wipat@ncl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21296753

  16. Automated macromolecular model building for X-ray crystallography using ARP/wARP version 7

    PubMed Central

    Langer, Gerrit G; Cohen, Serge X; Lamzin, Victor S; Perrakis, Anastassis

    2008-01-01

    ARP/wARP is a software suite to build macromolecular models in X-ray crystallography electron density maps. Structural genomics initiatives and the study of complex macromolecular assemblies and membrane proteins all rely on advanced methods for 3D structure determination. ARP/wARP meets these needs by providing the tools to obtain a macromolecular model automatically, with a reproducible computational procedure. ARP/wARP 7.0 tackles several tasks: iterative protein model building including a high-level decision-making control module; fast construction of the secondary structure of a protein; building flexible loops in alternate conformations; fully automated placement of ligands, including a choice of the best fitting ligand from a “cocktail”; and finding ordered water molecules. All protocols are easy to handle by a non-expert user through a graphical user interface or a command line. The time required is typically a few minutes although iterative model building may take a few hours. PMID:18600222

  17. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execu...

  18. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parame...

  19. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execut...

  20. Interface fracture and composite deformation of model laminates

    NASA Astrophysics Data System (ADS)

    Fox, Matthew R.

    Model laminates were studied to improve the understanding of composite mechanical behavior. NiAl/Mo and NiAl/Cr model laminates, with a series of interfaces, were bonded at 1100°C. Reaction layers were present in all laminates, varying in thickness with bonding conditions. Interface fracture strengths and resistances were determined under primarily mode II loading conditions using a novel technique, the asymmetrically-loaded shear (ALS) test, in which one layer of the laminate was loaded in compression, producing a stable interface crack. The NiAl/Mo interface was also fractured in four-point bending. A small amount of plasticity was found to play a role in crack initiation. During steady-state mode II interface fracture of NiAl/Mo model laminates, large-scale slip was observed near the crack tip in the NiAl adjacent to the interface. After testing, the local slope and curvature of the interface were characterized at intervals along the interface and at slip locations to qualitatively describe the local stresses present at and just ahead of the crack tip. The greatest percentage of slip occurred where closing forces on the crack tip were below the maximum value and were decreasing with crack growth. A mechanism for crack propagation is presented, describing the role of large-scale slip in crack propagation. Data on the mechanical response of structural laminates in 3-D stress states, as would be present in a polycrystalline aggregate composed of lamellar grains, are lacking. In order to understand the response of laminates composed of hard and soft phases, Pb/Zn laminates were prepared and tested in compression with varying lamellar orientation relative to the loading axis. A model describing the mechanical response in a general stress state, assuming elastic-perfectly plastic isotropic layers, was developed. For the 90° laminate, a different approach was applied, using the friction hill concepts used in forging analyses. With increasing ratios of cross-sectional radius to layer

  1. Aviation Safety: Modeling and Analyzing Complex Interactions between Humans and Automated Systems

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Brat, Guillaume; Clancey, William J.; Linde, Charlotte; Raimondi, Franco; Seah, Chin; Shafto, Michael

    2013-01-01

    The on-going transformation from the current US Air Traffic System (ATS) to the Next Generation Air Traffic System (NextGen) will force the introduction of new automated systems and will most likely cause automation to migrate from ground to air. This will yield new function allocations between humans and automation and therefore change the roles and responsibilities in the ATS. Yet, safety in NextGen is required to be at least as good as in the current system. We therefore need techniques to evaluate the safety of the interactions between humans and automation. We think that current human factors studies and simulation-based techniques will fall short in the face of the ATS's complexity, and that we need to add more automated techniques to simulations, such as model checking, which offers exhaustive coverage of the non-deterministic behaviors in nominal and off-nominal scenarios. In this work, we present a verification approach based both on simulations and on model checking for evaluating the roles and responsibilities of humans and automation. Models are created using Brahms (a multi-agent framework), and we show that the traditional Brahms simulations can be integrated with automated exploration techniques based on model checking, thus offering a complete exploration of the behavioral space of the scenario. Our formal analysis supports the notion of beliefs and probabilities to reason about human behavior. We demonstrate the technique with the Überlingen accident, since it exemplifies authority problems when receiving conflicting advice from human and automated systems.

  2. Individual Differences in Response to Automation: The Five Factor Model of Personality

    ERIC Educational Resources Information Center

    Szalma, James L.; Taylor, Grant S.

    2011-01-01

    This study examined the relationship of operator personality (Five Factor Model) and characteristics of the task and of adaptive automation (reliability and adaptiveness--whether the automation was well-matched to changes in task demand) to operator performance, workload, stress, and coping. This represents the first investigation of how the Five…

  3. Designers' models of the human-computer interface

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Breedin, Sarah D.

    1993-01-01

    Understanding design models of the human-computer interface (HCI) may produce two types of benefits. First, interface development often requires input from two different types of experts: human factors specialists and software developers. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet, they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers do not start with a blank slate; rather they begin with a general model of the object they are designing. The authors' approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: human factors specialists with HCI design experience, software developers with HCI design experience, and a baseline group of computer users with no experience in HCI design. The components of the user interface included both display components such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help. The judgments of the three groups were analyzed using hierarchical cluster analysis and Pathfinder. These methods indicated, respectively, how the groups categorized the concepts, and network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.
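    The hierarchical cluster analysis step can be sketched with SciPy. The concept list and dissimilarity matrix below are invented for illustration (1 = judged very different, 0 = judged identical); they are not the study's judgment data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Hypothetical interface concepts and a made-up dissimilarity matrix:
# display components (first three) vs interaction concepts (last three).
concepts = ["window", "text", "graphics", "command language", "editing", "help"]
dissim = np.array([
    [0.0, 0.3, 0.2, 0.9, 0.8, 0.7],
    [0.3, 0.0, 0.4, 0.8, 0.6, 0.7],
    [0.2, 0.4, 0.0, 0.9, 0.8, 0.8],
    [0.9, 0.8, 0.9, 0.0, 0.3, 0.5],
    [0.8, 0.6, 0.8, 0.3, 0.0, 0.4],
    [0.7, 0.7, 0.8, 0.5, 0.4, 0.0],
])

# Average-linkage clustering on the condensed distance matrix,
# then cut the dendrogram into two top-level categories
Z = linkage(squareform(dissim), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
for concept, lab in zip(concepts, labels):
    print(concept, lab)
```

    With these numbers the cut cleanly separates the display components from the interaction concepts, mirroring the kind of categorical grouping the study extracted from similarity judgments.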

  4. Automated MRI Cerebellar Size Measurements Using Active Appearance Modeling

    PubMed Central

    Price, Mathew; Cardenas, Valerie A.; Fein, George

    2014-01-01

    Although the human cerebellum has been increasingly identified as an important hub that shows potential for helping in the diagnosis of a large spectrum of disorders, such as alcoholism, autism, and fetal alcohol spectrum disorder, the high costs associated with manual segmentation and the low availability of reliable automated cerebellar segmentation tools have resulted in a limited focus on cerebellar measurement in human neuroimaging studies. We present here the CATK (Cerebellar Analysis Toolkit), which is based on the Bayesian framework implemented in FMRIB’s FIRST. This approach involves training Active Appearance Models (AAM) using hand-delineated examples. CATK can currently delineate the cerebellar hemispheres and three vermal groups (lobules I–V, VI–VII, and VIII–X). Linear registration with the low-resolution MNI152 template is used to provide initial alignment, and Point Distribution Models (PDM) are parameterized using stellar sampling. The Bayesian approach models the relationship between shape and texture through computation of conditionals in the training set. Our method varies from the FIRST framework in that initial fitting is driven by 1D intensity profile matching, and the conditional likelihood function is subsequently used to refine fitting. The method was developed using T1-weighted images from 63 subjects that were imaged and manually labeled: 43 subjects were scanned once and were used for training models, and 20 subjects were imaged twice (with manual labeling applied to both runs) and used to assess reliability and validity. Intraclass correlation analysis shows that CATK is highly reliable (average test-retest ICCs of 0.96) and offers excellent agreement with the gold standard (average validity ICC of 0.87 against manual labels). Comparisons against an alternative atlas-based approach, SUIT (Spatially Unbiased Infratentorial Template), that registers images with a high-resolution template of the cerebellum, show that our AAM

  5. Automated forward mechanical modeling of wrinkle ridges on Mars

    NASA Astrophysics Data System (ADS)

    Nahm, Amanda; Peterson, Samuel

    2016-04-01

    One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometry [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time consuming, requiring user inputs to change model parameters and to calculate the associated displacements for each model, which limits the number of models and the parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites. For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B
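    The batch set-up described (pre-generating input files over ranges of D, dip angle, t, and B so the GUI can be bypassed) amounts to a parameter sweep. A minimal sketch; the file names, key=value format, and parameter values are hypothetical, not Coulomb's actual input format:

```python
import itertools
import os

# Illustrative parameter ranges (names follow the abstract: displacement D,
# dip angle, depth to upper fault tip t, depth to lower fault tip B).
D_vals = [0.5, 1.0]        # fault displacement, km
dip_vals = [20, 30, 40]    # dip angle, degrees
t_vals = [0.5, 1.0]        # depth to upper fault tip, km
B_vals = [4.0, 6.0]        # depth to lower fault tip, km

os.makedirs("inputs", exist_ok=True)
n = 0
for D, dip, t, B in itertools.product(D_vals, dip_vals, t_vals, B_vals):
    if t >= B:             # sanity guard: upper tip must lie above lower tip
        continue
    n += 1
    # One input file per parameter combination, ready for batch runs
    with open(f"inputs/model_{n:04d}.inp", "w") as f:
        f.write(f"D={D}\ndip={dip}\nt={t}\nB={B}\n")
print(n, "input files written")  # 24
```

    Generating every combination up front is what makes the sweep reproducible: the same fixed file set can be re-run, subset, or extended without any interactive steps.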

  6. Atomic Models of Strong Solids Interfaces Viewed as Composite Structures

    NASA Astrophysics Data System (ADS)

    Staffell, I.; Shang, J. L.; Kendall, K.

    2014-02-01

    This paper looks back through the 1960s to the invention of carbon fibres and the theories of Strong Solids. In particular it focuses on the fracture mechanics paradox of strong composites containing weak interfaces. From Griffith theory, it is clear that three parameters must be considered in producing a high strength composite:- minimising defects; maximising the elastic modulus; and raising the fracture energy along the crack path. The interface then introduces two further factors:- elastic modulus mismatch causing crack stopping; and debonding along a brittle interface due to low interface fracture energy. Consequently, an understanding of the fracture energy of a composite interface is needed. Using an interface model based on atomic interaction forces, it is shown that a single layer of contaminant atoms between the matrix and the reinforcement can reduce the interface fracture energy by an order of magnitude, giving a large delamination effect. The paper also looks to a future in which cars will be made largely from composite materials. Radical improvements in automobile design are necessary because the number of cars worldwide is predicted to double. This paper predicts gains in fuel economy by suggesting a new theory of automobile fuel consumption using an adaptation of Coulomb's friction law. It is demonstrated both by experiment and by theoretical argument that the energy dissipated in standard vehicle tests depends only on weight. Consequently, moving from metal to fibre construction can give a factor 2 improved fuel economy performance, roughly the same as moving from a petrol combustion drive to hydrogen fuel cell propulsion. Using both options together can give a factor 4 improvement, as demonstrated by testing a composite car using the ECE15 protocol.

  7. Interfaces between phases in a lattice model of microemulsions

    NASA Astrophysics Data System (ADS)

    Dawson, K. A.

    1987-02-01

    A lattice model which has recently been developed to aid the study of microemulsions is briefly reviewed. The local-density mean-field equations are presented and the interfacial profiles and surface tensions are computed using a variational method. These density profiles describing the interface between oil rich and water rich phases, both of which are isotropic, are structured and nonmonotonic. Some comments about a perturbation expansion which confirms these conclusions are made. It is possible to compute the surface tension to high numerical accuracy using the variational procedure. This permits discussion of the question of wetting of the oil-water interface by a microemulsion phase. The interfacial tensions along the oil-water-microemulsion coexistence line are ultra-low. The oil-water interface is not wet by microemulsion throughout most of the bicontinuous regime.

  8. Automated medical diagnosis with fuzzy stochastic models: monitoring chronic diseases.

    PubMed

    Jeanpierre, Laurent; Charpillet, François

    2004-01-01

    As the world population ages, the patients-per-physician ratio keeps on increasing. This is even more important in the domain of chronic pathologies, where people are usually monitored for years and need regular consultations. To address this problem, we propose an automated system to monitor a patient population, detecting anomalies in instantaneous data and in their temporal evolution so that it can alert physicians. By handling the population of healthy patients autonomously and by drawing the physicians' attention to the patients at risk, the system allows physicians to spend comparatively more time with patients who need their services. In such a system, the interaction between the patients, the diagnosis module, and the physicians is very important. We have based this system on a combination of stochastic models, fuzzy filters, and strong medical semantics. We focused on one tele-medicine application in particular: the Diatelic Project. Its objective is to monitor chronic kidney-insufficient patients and to detect hydration troubles. During two years, physicians from the ALTIR have conducted a prospective randomized study of the system. This experiment clearly shows that the proposed system is really beneficial to the patients' health. PMID:15520535
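    The role of fuzzy filters in such monitoring can be illustrated with a single membership function that grades a daily measurement instead of applying a hard alert threshold. The variable, thresholds, and data below are invented for illustration and are not the Diatelic model:

```python
# Minimal sketch: a trapezoidal membership function grades how "abnormal"
# a daily weight change is, so alerts degrade gracefully rather than
# flipping on at a single cutoff. Thresholds are hypothetical.

def abnormality(delta_kg, soft=0.5, hard=1.5):
    """0 below `soft`, 1 above `hard`, linear in between."""
    d = abs(delta_kg)
    if d <= soft:
        return 0.0
    if d >= hard:
        return 1.0
    return (d - soft) / (hard - soft)

# Daily weight changes (kg) for one hypothetical patient
changes = [0.2, -0.4, 0.9, 1.8, 0.3]
scores = [abnormality(c) for c in changes]
print([round(s, 2) for s in scores])  # [0.0, 0.0, 0.4, 1.0, 0.0]
```

    A graded score like this can then feed a stochastic model of the patient's temporal evolution, rather than triggering an alert in isolation.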

  9. Automation of sample plan creation for process model calibration

    NASA Astrophysics Data System (ADS)

    Oberschmidt, James; Abdo, Amr; Desouky, Tamer; Al-Imam, Mohamed; Krasnoperova, Azalia; Viswanathan, Ramya

    2010-04-01

    Preparing a sample plan for optical and resist model calibration has always been tedious, not only because the plan must accurately represent full-chip designs with countless combinations of widths, spaces and environments, but also because of constraints imposed by metrology, which may limit the number of structures that can be measured. There are also limits on the types of these structures, mainly because measurement accuracy varies across different types of geometries: pitch measurements, for instance, are normally more accurate than corner-rounding measurements, so only certain geometrical shapes are usually considered for a sample plan. In addition, the time factor is becoming crucial as the number of development and production nodes increases with each technology migration, and the process becomes more complicated still if process-window-aware models are to be developed in a reasonable time frame. There is therefore a need for reliable methods of choosing sample plans that also help reduce cycle time. In this context, an automated flow is proposed for sample plan creation. Once the illumination and film stack are defined, all errors in the input data are fixed and sites are centered. Then, bad sites are excluded. Afterwards, the clean data are reduced based on geometrical resemblance. An editable database of measurement-reliable and critical structures is also provided, and their percentage in the final sample plan, as well as the total number of 1D/2D samples, can be predefined. The flow has the advantage of eliminating manual selection or filtering techniques, it provides powerful tools for customizing the final plan, and the time needed to generate these plans is greatly reduced.
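    The "reduced based on geometrical resemblance" step admits a simple sketch: keep a measurement site only if no already-kept site has nearly the same width/space geometry. The site data and tolerance below are hypothetical, not the flow's actual reduction criteria:

```python
# Greedy reduction of measurement sites by geometric resemblance:
# a site is redundant if a kept site already covers nearly the same
# (width, space) combination within a tolerance (all values in nm).

def reduce_sites(sites, tol=2.0):
    kept = []
    for w, s in sites:
        # keep only if every kept site differs by more than tol
        if all(max(abs(w - kw), abs(s - ks)) > tol for kw, ks in kept):
            kept.append((w, s))
    return kept

sites = [(45, 90), (46, 91), (45, 120), (80, 90), (81, 92), (45, 89)]
print(reduce_sites(sites))  # [(45, 90), (45, 120), (80, 90)]
```

    A greedy pass like this is order-dependent; a production flow would typically also weight sites by measurement reliability and criticality, as the editable database described above suggests.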

  10. A polarizable continuum model for molecules at spherical diffuse interfaces

    NASA Astrophysics Data System (ADS)

    Di Remigio, Roberto; Mozgawa, Krzysztof; Cao, Hui; Weijo, Ville; Frediani, Luca

    2016-03-01

    We present an extension of the Polarizable Continuum Model (PCM) to simulate solvent effects at diffuse interfaces with spherical symmetry, such as nanodroplets and micelles. We derive the form of the Green's function for a spatially varying dielectric permittivity with spherical symmetry and exploit the integral equation formalism of the PCM for general dielectric environments to recast the solvation problem into a continuum solvation framework. This allows the investigation of the solvation of ions and molecules in nonuniform dielectric environments, such as liquid droplets, micelles or membranes, while maintaining the computationally appealing characteristics of continuum solvation models. We describe in detail our implementation, both for the calculation of the Green's function and for its subsequent use in the PCM electrostatic problem. The model is then applied on a few test systems, mainly to analyze the effect of interface curvature on solvation energetics.

  11. A polarizable continuum model for molecules at spherical diffuse interfaces.

    PubMed

    Di Remigio, Roberto; Mozgawa, Krzysztof; Cao, Hui; Weijo, Ville; Frediani, Luca

    2016-03-28

    We present an extension of the Polarizable Continuum Model (PCM) to simulate solvent effects at diffuse interfaces with spherical symmetry, such as nanodroplets and micelles. We derive the form of the Green's function for a spatially varying dielectric permittivity with spherical symmetry and exploit the integral equation formalism of the PCM for general dielectric environments to recast the solvation problem into a continuum solvation framework. This allows the investigation of the solvation of ions and molecules in nonuniform dielectric environments, such as liquid droplets, micelles or membranes, while maintaining the computationally appealing characteristics of continuum solvation models. We describe in detail our implementation, both for the calculation of the Green's function and for its subsequent use in the PCM electrostatic problem. The model is then applied on a few test systems, mainly to analyze the effect of interface curvature on solvation energetics. PMID:27036423
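    A spatially varying permittivity with spherical symmetry, of the kind this extended PCM targets, can be illustrated with a smooth sigmoidal profile across the diffuse interface. The functional form and parameter values here are illustrative assumptions, not the authors' Green's-function machinery:

```python
import math

# Illustrative diffuse-interface permittivity: eps_in inside a droplet of
# radius r0, eps_out far outside, with a tanh transition of the given width.
# Values (water-like interior, vacuum-like exterior) are assumptions.

def eps(r, eps_in=78.4, eps_out=1.0, r0=20.0, width=2.0):
    """Sigmoidal permittivity profile as a function of radial distance r."""
    t = math.tanh((r - r0) / width)
    return 0.5 * (eps_in + eps_out) + 0.5 * (eps_out - eps_in) * t

for r in (0.0, 20.0, 40.0):
    print(r, round(eps(r), 2))
```

    The curvature effect studied in the paper enters through r0: shrinking the droplet radius relative to the solute's position changes the dielectric environment the solute feels, and hence its solvation energetics.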

  12. Computer modelling of nanoscale diffusion phenomena at epitaxial interfaces

    NASA Astrophysics Data System (ADS)

    Michailov, M.; Ranguelov, B.

    2014-05-01

    The present study outlines an important area in the application of computer modelling to interface phenomena. Being relevant to the fundamental physical problem of competing atomic interactions in systems with reduced dimensionality, these phenomena attract special academic attention. On the other hand, from a technological point of view, detailed knowledge of the fine atomic structure of surfaces and interfaces correlates with a large number of practical problems in materials science. Typical examples are formation of nanoscale surface patterns, two-dimensional superlattices, atomic intermixing at an epitaxial interface, atomic transport phenomena, structure and stability of quantum wires on surfaces. We discuss here a variety of diffusion mechanisms that control surface-confined atomic exchange, formation of alloyed atomic stripes and islands, relaxation of pure and alloyed atomic terraces, diffusion of clusters and their stability in an external field. The computational model refines important details of diffusion of adatoms and clusters accounting for the energy barriers at specific atomic sites: smooth domains, terraces, steps and kinks. The diffusion kinetics, integrity and decomposition of atomic islands in an external field are considered in detail and assigned to specific energy regions depending on the cluster stability in mass transport processes. The presented ensemble of diffusion scenarios opens a way for nanoscale surface design towards regular atomic interface patterns with exotic physical features.

  13. Multiscale modeling of droplet interface bilayer membrane networks.

    PubMed

    Freeman, Eric C; Farimani, Amir B; Aluru, Narayana R; Philen, Michael K

    2015-11-01

    Droplet interface bilayer (DIB) networks are considered for the development of stimuli-responsive membrane-based materials inspired by cellular mechanics. These DIB networks are often modeled as combinations of electrical circuit analogues, creating complex networks of capacitors and resistors that mimic the biomolecular structures. These empirical models are capable of replicating data from electrophysiology experiments, but these models do not accurately capture the underlying physical phenomena and consequently do not allow for simulations of material functionalities beyond the voltage-clamp or current-clamp conditions. The work presented here provides a more robust description of DIB network behavior through the development of a hierarchical multiscale model, recognizing that the macroscopic network properties are functions of their underlying molecular structure. The result of this research is a modeling methodology based on controlled exchanges across the interfaces of neighboring droplets. This methodology is validated against experimental data, and an extension case is provided to demonstrate possible future applications of droplet interface bilayer networks. PMID:26594262

  14. Developing A Laser Shockwave Model For Characterizing Diffusion Bonded Interfaces

    SciTech Connect

    James A. Smith; Jeffrey M. Lacy; Barry H. Rabin

    2014-07-01

    Presented at the 41st Annual Review of Progress in Quantitative Nondestructive Evaluation (QNDE) Conference, July 20-25, 2014, Boise Centre, Boise, Idaho. ABSTRACT: The US National Nuclear Security Administration has a Global Threat Reduction Initiative (GTRI) charged with reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low-enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU. The new LEU fuel is based on a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to complete the fuel qualification process, the laser shock technique is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. The Laser Shockwave Technique (LST) is a non-contact method that uses lasers for the generation and detection of large-amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, the deposition of laser energy into the containment layer on the specimen's surface is intractably complex. The shock wave energy is inferred from the velocity on the back face and the depth of the impression left on the surface by the high-pressure plasma pulse created by the shock laser. To help quantify the stresses and strengths at the interface, a finite element model is being developed and validated by comparing numerical and experimental results for back-face velocities and front-face depressions. This paper will report on initial efforts to develop a finite element model for laser

  15. A visual interface for the SUPERFLEX hydrological modelling framework

    NASA Astrophysics Data System (ADS)

    Gao, H.; Fenicia, F.; Kavetski, D.; Savenije, H. H. G.

    2012-04-01

    The SUPERFLEX framework is a modular modelling system for conceptual hydrological modelling at the catchment scale. This work reports the development of a visual interface for the SUPERFLEX model, which aims to enhance communication between hydrologic experimentalists and modelers, in particular by further bridging the gap between soft field data and the modeler's knowledge. In collaboration with field experimentalists, modelers can visually and intuitively hypothesize different model architectures and combinations of reservoirs, select from a library of constitutive functions describing the relationship between a reservoir's storage and discharge, specify the shape of lag functions and, finally, set parameter values. The software helps hydrologists take advantage of any existing insights into the study site, translate them into a conceptual hydrological model and implement it within a computationally robust algorithm. The tool also helps challenge and contrast competing paradigms such as "uniqueness of place" vs "one model fits all". Using this interface, hydrologists can test different hypotheses and model representations, and stepwise build a deeper understanding of the watershed of interest.

  16. Symmetric model of compressible granular mixtures with permeable interfaces

    NASA Astrophysics Data System (ADS)

    Saurel, Richard; Le Martelot, Sébastien; Tosello, Robert; Lapébie, Emmanuel

    2014-12-01

    Compressible granular materials are involved in many applications, some of them being related to energetic porous media. Gas permeation effects are important during their compaction stage, as well as their eventual chemical decomposition. Also, many situations involve porous media separated from pure fluids through two-phase interfaces. It is thus important to develop theoretical and numerical formulations to deal with granular materials in the presence of both two-phase interfaces and gas permeation effects. A similar topic was addressed for fluid mixtures and interfaces with the Discrete Equations Method (DEM) [R. Abgrall and R. Saurel, "Discrete equations for physical and numerical compressible multiphase mixtures," J. Comput. Phys. 186(2), 361-396 (2003)], but it seemed impossible to extend this approach to granular media, as intergranular stress [K. K. Kuo, V. Yang, and B. B. Moore, "Intragranular stress, particle-wall friction and speed of sound in granular propellant beds," J. Ballist. 4(1), 697-730 (1980)] and the associated configuration energy [J. B. Bdzil, R. Menikoff, S. F. Son, A. K. Kapila, and D. S. Stewart, "Two-phase modeling of deflagration-to-detonation transition in granular materials: A critical examination of modeling issues," Phys. Fluids 11, 378 (1999)] were present with significant effects. An approach to deal with fluid-porous media interfaces was derived in Saurel et al. ["Modelling dynamic and irreversible powder compaction," J. Fluid Mech. 664, 348-396 (2010)], but its validity was restricted to weak velocity disequilibrium only. Thanks to a deeper analysis, the DEM is successfully extended to granular media modelling in the present paper. It results in an enhanced version of the Baer and Nunziato ["A two-phase mixture theory for the deflagration-to-detonation transition (DDT) in reactive granular materials," Int. J. Multiphase Flow 12(6), 861-889 (1986)] model, as symmetry of the formulation is now preserved. Several computational examples are

  17. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29%, respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. Fully
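The 7% and 18% deviations between automated and manual segmentation quoted above can be illustrated with a toy disagreement metric over binary voxel masks. The masks and the exact error definition below are illustrative assumptions, not the paper's actual evaluation procedure.

```python
# Sketch: relative deviation between an automated and a manual binary
# segmentation mask (voxel disagreements normalized by manual volume).

def relative_deviation(auto_mask, manual_mask):
    """Fraction of disagreeing voxels relative to the manual mask volume."""
    disagree = sum(a != m for a, m in zip(auto_mask, manual_mask))
    volume = sum(manual_mask)
    return disagree / volume

manual = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
auto   = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]  # one missed voxel, one false positive
print(relative_deviation(auto, manual))  # 2 disagreements / 5 voxels = 0.4
```

A volumetric overlap score such as the Dice coefficient would serve the same role; the point is only that mask-level comparison reduces to counting voxel agreements.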

  18. Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head

    PubMed Central

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-01-01

Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29%, respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly

  19. A Multiple Agent Model of Human Performance in Automated Air Traffic Control and Flight Management Operations

    NASA Technical Reports Server (NTRS)

    Corker, Kevin; Pisanich, Gregory; Condon, Gregory W. (Technical Monitor)

    1995-01-01

A predictive model of human operator performance (flight crew and air traffic control (ATC)) has been developed and applied in order to evaluate the impact of automation developments in flight management and air traffic control. The model is used to predict the performance of a two-person flight crew and the ATC operators generating and responding to clearances aided by the Center TRACON Automation System (CTAS). The purpose of the modeling is to support evaluation and design of automated aids for flight management and airspace management, and to predict required changes in procedures, both air and ground, in response to advancing automation in both domains. Additional information is contained in the original extended abstract.

  20. Industrial Automation Mechanic Model Curriculum Project. Final Report.

    ERIC Educational Resources Information Center

    Toledo Public Schools, OH.

    This document describes a demonstration program that developed secondary level competency-based instructional materials for industrial automation mechanics. Program activities included task list compilation, instructional materials research, learning activity packet (LAP) development, construction of lab elements, system implementation,…

  1. Bacterial Adhesion to Hexadecane (Model NAPL)-Water Interfaces

    NASA Astrophysics Data System (ADS)

    Ghoshal, S.; Zoueki, C. R.; Tufenkji, N.

    2009-05-01

The rates of biodegradation of NAPLs have been shown to be influenced by the adhesion of hydrocarbon-degrading microorganisms as well as their proximity to the NAPL-water interface. Several studies provide evidence for bacterial adhesion or biofilm formation at alkane- or crude oil-water interfaces, but there is a significant knowledge gap in our understanding of the processes that influence initial adhesion of bacteria onto NAPL-water interfaces. In this study, bacterial adhesion to hexadecane, and to a series of NAPLs comprised of hexadecane amended with toluene and/or with asphaltenes and resins (the surface-active fractions of crude oils), was examined using a Microbial Adhesion to Hydrocarbons (MATH) assay. The microorganisms employed were Mycobacterium kubicae, Pseudomonas aeruginosa and Pseudomonas putida, which are hydrocarbon degraders or soil microorganisms. MATH assays as well as electrophoretic mobility measurements of the bacterial cells and the NAPL droplet surfaces in aqueous solutions were conducted at three solution pHs (4, 6 and 7). Asphaltenes and resins were shown to generally decrease microbial adhesion. Results of the MATH assay were not in qualitative agreement with theoretical predictions of bacteria-hydrocarbon interactions based on the extended Derjaguin-Landau-Verwey-Overbeek (XDLVO) model of the free energy of interaction between the cell and NAPL droplets. In this model the free energy of interaction between two colloidal particles is predicted based on electrical double layer, van der Waals and hydrophobic forces. It is likely that steric repulsion between bacteria and NAPL surfaces, caused by biopolymers on bacterial surfaces and by asphaltenes and resins at the NAPL-water interface, contributed to the decreased adhesion compared to that predicted by the XDLVO model.
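The XDLVO free energy mentioned in the abstract is the sum of van der Waals, electrical double layer, and hydrophobic (acid-base) terms. The sketch below evaluates a common sphere-sphere form of that sum; the functional forms are standard textbook approximations and every parameter value is an illustrative assumption, not data from this study.

```python
import math

# Hedged XDLVO sketch: total interaction energy between two spheres
# (e.g., a bacterium and a NAPL droplet) at surface separation h.
def xdlvo_energy(h, r1, r2, hamaker, kappa, psi1, psi2, dg_ab, lam_ab,
                 h0=0.157e-9):
    """Total XDLVO interaction energy (J) at separation h (m)."""
    r_eff = r1 * r2 / (r1 + r2)            # reduced radius, sphere-sphere
    eps = 80 * 8.854e-12                   # permittivity of water (F/m)
    u_vdw = -hamaker * r_eff / (6 * h)     # van der Waals attraction
    # Electrical double layer (linear superposition approximation)
    u_edl = 4 * math.pi * eps * r_eff * psi1 * psi2 * math.exp(-kappa * h)
    # Acid-base (hydrophobic) term decaying from contact separation h0
    u_ab = 2 * math.pi * r_eff * lam_ab * dg_ab * math.exp((h0 - h) / lam_ab)
    return u_vdw + u_edl + u_ab

# Illustrative micron-scale particles, -20 mV potentials, 10 nm Debye length
for h in (1e-9, 5e-9, 2e-8):
    u = xdlvo_energy(h, 1e-6, 1e-6, 1e-20, 1e8, -0.02, -0.02, -5e-3, 1e-9)
    print(f"h = {h:.0e} m, U = {u:.2e} J")
```

The interaction decays rapidly with separation; the steric repulsion from surface biopolymers invoked in the abstract is precisely the contribution this classical three-term sum omits.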

  2. ShowFlow: A practical interface for groundwater modeling

    SciTech Connect

    Tauxe, J.D.

    1990-12-01

ShowFlow was created to provide a user-friendly, intuitive environment for researchers and students who use computer modeling software. What traditionally has been a workplace available only to those familiar with command-line based computer systems is now within reach of almost anyone interested in the subject of modeling. In the case of this edition of ShowFlow, the user can easily experiment with simulations using the steady-state Gaussian plume groundwater pollutant transport model SSGPLUME, though ShowFlow can be rewritten to provide a similar interface for any computer model. Included in this thesis are all the source code for both the ShowFlow application for Microsoft® Windows™ and the SSGPLUME model, a User's Guide, and a Developer's Guide for converting ShowFlow to run other model programs. 18 refs., 13 figs.
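A steady-state Gaussian plume of the general kind SSGPLUME solves can be sketched in a few lines: a continuous source in uniform flow along x, with transverse dispersion spreading the plume as the square root of travel distance. The formula below is the standard 2-D textbook solution, and all parameter values are illustrative assumptions, not ShowFlow defaults.

```python
import math

# Hedged sketch: 2-D steady-state Gaussian plume from a continuous
# point source in uniform groundwater flow (longitudinal dispersion
# neglected). Units are arbitrary but self-consistent.
def concentration(x, y, mass_rate=1.0, velocity=0.5, disp_t=0.1, depth=1.0):
    """Concentration at (x, y) downstream of a continuous source at the origin."""
    t = x / velocity                     # advective travel time to x
    sigma = math.sqrt(2 * disp_t * t)    # transverse plume spread
    peak = mass_rate / (velocity * depth * math.sqrt(2 * math.pi) * sigma)
    return peak * math.exp(-y ** 2 / (2 * sigma ** 2))

# Centerline concentration decays as 1/sqrt(x); off-axis values are lower
print(concentration(10, 0), concentration(10, 2))
```

This 1/sqrt(x) centerline decay is what a GUI like ShowFlow lets a student discover interactively rather than by deriving it.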

  3. The MineTool Software Suite: A Novel Data Mining Palette of Tools for Automated Modeling of Space Physics Data

    NASA Astrophysics Data System (ADS)

    Sipes, T.; Karimabadi, H.; Roberts, A.

    2009-12-01

We present a new data mining software tool called MineTool for analysis and modeling of space physics data. MineTool is a graphical user interface implementation that merges two data mining algorithms into an easy-to-use software tool: an algorithm for analysis and modeling of static data [Karimabadi et al., 2007] and MineTool-TS, an algorithm for data mining of time series data [Karimabadi et al., 2009]. By virtue of automating the modeling process and model evaluations, MineTool makes data mining and predictive modeling more accessible to non-experts. The software is entirely in Java and freeware. By ranking all inputs as predictors of the outcome before constructing a model, MineTool also enables inclusion of only the relevant variables. The technique aggregates the various stages of model building into a four-step process consisting of (i) data segmentation and sampling, (ii) variable pre-selection and transform generation, (iii) predictive model estimation and validation, and (iv) final model selection. Optimal strategies are chosen for each modeling step. A notable feature of the technique is that the final model is always in closed analytical form rather than the “black box” form characteristic of some other techniques. Having the analytical model enables deciphering the importance of the various variables in determining the outcome. The MineTool suite also provides capabilities for data preparation for data mining as well as visualization of the datasets. MineTool has successfully been used to develop models for automated detection of flux transfer events (FTEs) at Earth’s magnetopause in Cluster spacecraft time series data and for 3D magnetopause modeling. In this presentation, we demonstrate the ease of use of the software through examples, including how it was used in the FTE problem.
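The four-step workflow described in the abstract can be caricatured in a few lines: sample the data, rank candidate predictors, fit a closed-form model on the best one, and validate on held-out data. The ranking metric (absolute correlation) and the single-variable least-squares model below are illustrative stand-ins, not MineTool's actual algorithms.

```python
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

random.seed(0)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]        # informative input
x2 = [random.gauss(0, 1) for _ in range(n)]        # pure noise input
y = [2 * a + random.gauss(0, 0.1) for a in x1]     # outcome depends on x1 only

# (i) segment the data into training and validation samples
train, hold = slice(0, 150), slice(150, n)
# (ii) rank inputs as predictors and keep the strongest
ranks = {"x1": abs(pearson(x1[train], y[train])),
         "x2": abs(pearson(x2[train], y[train]))}
best = max(ranks, key=ranks.get)
# (iii) estimate a closed-form model y = w * x for the selected input
xs = {"x1": x1, "x2": x2}[best]
w = sum(a * b for a, b in zip(xs[train], y[train])) / sum(a * a for a in xs[train])
# (iv) validate: mean absolute error on the held-out sample
mae = statistics.mean(abs(w * a - b) for a, b in zip(xs[hold], y[hold]))
print(best, round(w, 2), round(mae, 3))
```

The payoff of keeping the model in closed analytical form, as the abstract notes, is that the fitted coefficient (here, w near 2) directly reveals how the selected variable drives the outcome.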

  4. Behavior of asphaltene model compounds at w/o interfaces.

    PubMed

    Nordgård, Erland L; Sørland, Geir; Sjöblom, Johan

    2010-02-16

Asphaltenes, present in significant amounts in heavy crude oil, contain subfractions capable of stabilizing water-in-oil emulsions. Still, the composition of these subfractions is not known in detail, and the actual mechanism behind emulsion stability is dependent on perceived interfacial concentrations and compositions. This study aims at utilizing polyaromatic surfactants which contain an acidic moiety as model compounds for the surface-active subfraction of asphaltenes. A modified pulse-field gradient (PFG) NMR method has been used to study droplet sizes and stability of emulsions prepared with asphaltene model compounds. The method has been compared to the standard microscopy droplet counting method. Arithmetic and volumetric mean droplet sizes as a function of surfactant concentration and water content clearly showed that the interfacial area was dependent on the surfactant available at the emulsion interface. Adsorption of the model compounds onto hydrophilic silica has been investigated by UV depletion, and minor differences in the chemical structure of the model compounds caused significant differences in the affinity toward this highly polar surface. The cross-sectional areas obtained have been compared to areas from the surface-to-volume ratio found by NMR and gave similar results for one of the two model compounds. The mean molecular area for this compound suggested a tilted geometry of the aromatic core with respect to the interface, which has also been proposed for real asphaltenic samples. The film behavior was further investigated using a liquid-liquid Langmuir trough, supporting the ability to form stable interfacial films. This study supports the view that acidic, or strongly hydrogen-bonding, fractions can promote stable water-in-oil emulsions. The use of model compounds opens up the study of emulsion behavior and demulsifier efficiency based on true interfacial concentrations rather than perceived interfaces. PMID:19852481

  5. Thermal Edge-Effects Model for Automated Tape Placement of Thermoplastic Composites

    NASA Technical Reports Server (NTRS)

    Costen, Robert C.

    2000-01-01

    Two-dimensional thermal models for automated tape placement (ATP) of thermoplastic composites neglect the diffusive heat transport that occurs between the newly placed tape and the cool substrate beside it. Such lateral transport can cool the tape edges prematurely and weaken the bond. The three-dimensional, steady state, thermal transport equation is solved by the Green's function method for a tape of finite width being placed on an infinitely wide substrate. The isotherm for the glass transition temperature on the weld interface is used to determine the distance inward from the tape edge that is prematurely cooled, called the cooling incursion Delta a. For the Langley ATP robot, Delta a = 0.4 mm for a unidirectional lay-up of PEEK/carbon fiber composite, and Delta a = 1.2 mm for an isotropic lay-up. A formula for Delta a is developed and applied to a wide range of operating conditions. A surprise finding is that Delta a need not decrease as the Peclet number Pe becomes very large, where Pe is the dimensionless ratio of inertial to diffusive heat transport. Conformable rollers that increase the consolidation length would also increase Delta a, unless other changes are made, such as proportionally increasing the material speed. To compensate for premature edge cooling, the thermal input could be extended past the tape edges by the amount Delta a. This method should help achieve uniform weld strength and crystallinity across the width of the tape.
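The Peclet number invoked above is simply Pe = V·L/α, the ratio of advective to diffusive heat transport. The material speed, consolidation length, and thermal diffusivity below are illustrative values, not the paper's operating conditions for the Langley ATP robot.

```python
# Hedged sketch of the Peclet number used in the abstract.
def peclet(speed, length, diffusivity):
    """Pe = V * L / alpha: advective vs. diffusive heat transport."""
    return speed * length / diffusivity

V = 0.05       # m/s, tape placement speed (assumed)
L = 0.01       # m, consolidation length (assumed)
alpha = 4e-7   # m^2/s, composite thermal diffusivity (assumed)
print(f"Pe = {peclet(V, L, alpha):.0f}")  # large Pe: advection dominates
```

The abstract's surprise finding is that the cooling incursion Delta a need not shrink even as this ratio grows large, because lateral diffusion at the tape edge acts on a different length scale than the through-thickness transport Pe captures.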

  6. Language Model Applications to Spelling with Brain-Computer Interfaces

    PubMed Central

    Mora-Cortes, Anderson; Manyakov, Nikolay V.; Chumerin, Nikolay; Van Hulle, Marc M.

    2014-01-01

Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems in conjunction with other AAL technologies. PMID:24675760
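The predictive-spelling idea the review classifies can be shown with the smallest possible language model: a character-level bigram count that re-ranks the speller's next-letter candidates. The tiny corpus is illustrative; real BCI spellers use far larger models and fuse the ranking with the EEG classifier's evidence.

```python
from collections import defaultdict

# Hedged sketch: character bigram counts for next-letter prediction.
corpus = "the brain computer interface enables communication"
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def next_letter_ranking(prev):
    """Candidate next letters ranked by bigram frequency after `prev`."""
    options = counts[prev]
    return sorted(options, key=options.get, reverse=True)

print(next_letter_ranking("t"))  # 'e' ranks first on this corpus
```

Presenting or weighting likely letters first is exactly how such models raise the effective communication rate of a slow, noisy BCI channel.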

  7. Developing a laser shockwave model for characterizing diffusion bonded interfaces

    SciTech Connect

Lacy, Jeffrey M.; Smith, James A.; Rabin, Barry H.

    2015-03-31

The US National Nuclear Security Administration has a Global Threat Reduction Initiative (GTRI) with the goal of reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU in high-power research reactors. The new LEU fuel is a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to support the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, because the deposition of laser energy into the containment layer on a specimen's surface is intractably complex, the shock wave energy is inferred from the surface velocity measured on the backside of the fuel plate and the depth of the impression left on the surface by the high pressure plasma pulse created by the shock laser. To help quantify the stresses generated at the interfaces, a finite element method (FEM) model is being utilized. This paper will report on initial efforts to develop and validate the model by comparing numerical and experimental results for back surface velocities and front surface depressions in a single aluminum plate representative of the fuel cladding.

  8. Developing a laser shockwave model for characterizing diffusion bonded interfaces

    NASA Astrophysics Data System (ADS)

    Lacy, Jeffrey M.; Smith, James A.; Rabin, Barry H.

    2015-03-01

The US National Nuclear Security Administration has a Global Threat Reduction Initiative (GTRI) with the goal of reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU in high-power research reactors. The new LEU fuel is a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to support the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, because the deposition of laser energy into the containment layer on a specimen's surface is intractably complex, the shock wave energy is inferred from the surface velocity measured on the backside of the fuel plate and the depth of the impression left on the surface by the high pressure plasma pulse created by the shock laser. To help quantify the stresses generated at the interfaces, a finite element method (FEM) model is being utilized. This paper will report on initial efforts to develop and validate the model by comparing numerical and experimental results for back surface velocities and front surface depressions in a single aluminum plate representative of the fuel cladding.

  9. A diffuse interface model of grain boundary faceting

    NASA Astrophysics Data System (ADS)

    Abdeljawad, Fadi; Medlin, Douglas; Zimmerman, Jonathan; Hattar, Khalid; Foiles, Stephen

Incorporating anisotropy into thermodynamic treatments of interfaces dates back over a century. For a given orientation of two abutting grains in a pure metal, depressions in the grain boundary (GB) energy may exist as a function of GB inclination, defined by the plane normal. Therefore, an initially flat GB may facet, resulting in a hill-and-valley structure. Herein, we present a diffuse interface model of GB faceting that is capable of capturing anisotropic GB energies and mobilities, and of accounting for the excess energy due to facet junctions and their non-local interactions. The hallmark of our approach is the ability to independently examine the role of each of the interface properties on the faceting behavior. As a demonstration, we consider the Σ5 <001> tilt GB in iron, where faceting along the {310} and {210} planes was experimentally observed. Linear stability analysis and numerical examples highlight the role of junction energy and associated non-local interactions on the resulting facet length scales. On the whole, our modeling approach provides a general framework to examine the spatio-temporal evolution of highly anisotropic GBs in polycrystalline metals. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  10. Diffuse-interface modeling of three-phase interactions

    NASA Astrophysics Data System (ADS)

    Park, Jang Min; Anderson, Patrick D.

    2016-05-01

    In this work, a numerical model is developed to study the three-phase interactions which take place when two immiscible drops suspended in a third immiscible liquid are brought together. The diffuse-interface model coupled with the hydrodynamic equations is solved by a standard finite element method. Partial and complete engulfing between two immiscible drops is studied, and the effects of several parameters are discussed. In the partial-engulfing case, two stages of wetting and pulling are identified, which qualitatively agrees with the experiment. In the complete-engulfing case, three stages of wetting and/or penetration, pulling, and spreading are identified.
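The diffuse-interface idea underlying the model can be shown in one dimension: a phase field relaxing under Allen-Cahn dynamics, d(phi)/dt = eps^2 phi'' - phi^3 + phi, smears a sharp interface into a smooth tanh-like profile of width set by eps. This is only the interface-description mechanism; the paper's coupled three-phase hydrodynamic system is far richer, and all numbers below are illustrative.

```python
# Hedged 1-D sketch of diffuse-interface relaxation (explicit Euler).
n, dx, dt, eps = 100, 0.1, 0.002, 0.3
phi = [-1.0] * (n // 2) + [1.0] * (n // 2)   # sharp interface at the middle

for _ in range(2000):
    lap = [0.0] * n
    for i in range(1, n - 1):
        lap[i] = (phi[i - 1] - 2 * phi[i] + phi[i + 1]) / dx ** 2
    # Allen-Cahn update: diffusion plus double-well reaction term
    phi = [p + dt * (eps ** 2 * l - p ** 3 + p) for p, l in zip(phi, lap)]

print(phi[n // 2 - 1], phi[n // 2])  # small opposite values: a smooth profile
```

In a finite element setting like the paper's, the same smooth profile is what lets surface tension and three-phase contact lines be represented without explicitly tracking the interface.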

  11. Numerical modeling of materials processes with fluid-fluid interfaces

    NASA Astrophysics Data System (ADS)

    Yanke, Jeffrey Michael

A numerical model has been developed to study material processes that depend on the interaction between fluids with a large discontinuity in thermophysical properties. A base model capable of solving equations of mass, momentum, and energy conservation, and solidification, has been altered to enable tracking of the interface between two immiscible fluids and to correctly predict the interface deformation using a volume of fluid (VOF) method. Two materials processes investigated using this technique are Electroslag Remelting (ESR) and plasma spray deposition. ESR is a secondary melting technique that passes an AC current through an electrically resistive slag to provide the heat necessary to melt the alloy. The simulation tracks the interface between the slag and metal. The model was validated against industrial-scale ESR ingots and was able to predict trends in melt rate, sump depth, macrosegregation, and liquid sump depth. In order to better understand the underlying physics of the process, several constant-current ESR runs simulated the effects of freezing slag in the model. Including the solidifying slag in the simulations was found to have an effect on the melt rate and sump shape, but there is too much uncertainty in ESR slag property data at this time for quantitative predictions. The second process investigated in this work is the deposition of ceramic coatings via plasma spray deposition. In plasma spray deposition, powderized coating material is injected into a plasma that melts and carries the powder towards the substrate, where it impacts, flattening out and freezing. The impacting droplets pile up to form a porous coating. The model is used to simulate this rain of liquid ceramic particles impacting the substrate and forming a coating. Trends in local solidification time and porosity are calculated for various particle sizes and velocities. The prediction of decreasing porosity with increasing particle velocity matches previous experimental results. Also, a

  12. Automated Eukaryotic Gene Structure Annotation Using EVidenceModeler and the Program to Assemble Spliced Alignments

    SciTech Connect

    Haas, B J; Salzberg, S L; Zhu, W; Pertea, M; Allen, J E; Orvis, J; White, O; Buell, C R; Wortman, J R

    2007-12-10

    EVidenceModeler (EVM) is presented as an automated eukaryotic gene structure annotation tool that reports eukaryotic gene structures as a weighted consensus of all available evidence. EVM, when combined with the Program to Assemble Spliced Alignments (PASA), yields a comprehensive, configurable annotation system that predicts protein-coding genes and alternatively spliced isoforms. Our experiments on both rice and human genome sequences demonstrate that EVM produces automated gene structure annotation approaching the quality of manual curation.
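The weighted-consensus idea behind EVM can be sketched as a weighted vote: each evidence source supports a candidate gene structure with a configurable weight, and the highest-scoring structure wins. The evidence names and weights below are illustrative stand-ins, not EVM's actual inputs or scoring over exon segments.

```python
# Hedged sketch of evidence-weighted consensus, EVM-style.
weights = {"ab_initio": 1, "protein_alignment": 5, "pasa_transcript": 10}

# Which candidate structure each evidence source supports (illustrative)
votes = {
    "ab_initio": "modelA",
    "protein_alignment": "modelB",
    "pasa_transcript": "modelB",
}

scores = {}
for source, model in votes.items():
    scores[model] = scores.get(model, 0) + weights[source]

consensus = max(scores, key=scores.get)
print(consensus, scores)  # modelB wins: 5 + 10 = 15 vs 1
```

The configurability the abstract mentions lives in the weight table: raising the transcript-alignment weight, for example, biases the consensus toward experimentally supported structures.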

  13. Electrochemical Stability of Model Polymer Electrolyte/Electrode Interfaces

    NASA Astrophysics Data System (ADS)

    Hallinan, Daniel; Yang, Guang

    2015-03-01

Polymer electrolytes are promising materials for high-energy-density rechargeable batteries. However, typical polymer electrolytes are not electrochemically stable at the charging voltage of advanced positive electrode materials. Although not yet reported in the literature, decomposition is expected to adversely affect the performance and lifetime of polymer-electrolyte-based batteries. In an attempt to better understand polymer electrolyte oxidation and to design stable polymer electrolyte/positive electrode interfaces, we are studying electron transfer across model interfaces comprising gold nanoparticles and organic protecting ligands assembled into monolayer films. Gold nanoparticles provide a large interfacial surface area, yielding a measurable electrochemical signal. They are inert and hence non-reactive with most polymer electrolytes and lithium salts. The surface can be easily modified with ligands of different chemistry and molecular weight. In our study, poly(ethylene oxide) (PEO) will serve as the polymer electrolyte and lithium bis(trifluoromethanesulfonyl)imide salt (LiTFSI) will be the lithium salt. The effect of ligand type and molecular weight on both the optical and electrical properties of the gold nanoparticle film will be presented. Finally, the electrochemical stability of the electrode/electrolyte interface and its dependence on interfacial properties will be presented.

  14. A biological model for controlling interface growth and morphology.

    SciTech Connect

    Hoyt, Jeffrey John; Holm, Elizabeth Ann

    2004-01-01

Biological systems create proteins that perform tasks more efficiently and precisely than conventional chemicals. For example, many plants and animals produce proteins to control the freezing of water. Biological antifreeze proteins (AFPs) inhibit the solidification process, even below the freezing point. These molecules bond to specific sites at the ice/water interface and are theorized to suppress solidification chemically or geometrically. In this project, we investigated the theoretical and experimental data on AFPs and performed analyses to understand the unique physics of AFPs. The experimental literature was analyzed to determine chemical mechanisms and effects of protein bonding at ice surfaces, specifically thermodynamic freezing point depression, suppression of ice nucleation, decrease in dendrite growth kinetics, solute drag on the moving solid/liquid interface, and steric pinning of the ice interface. Steric pinning was found to be the most likely candidate to explain experimental results, including freezing point depression, growth morphologies, and thermal hysteresis. A new steric pinning model was developed and applied to AFPs, with excellent quantitative results. Understanding biological antifreeze mechanisms could enable important medical and engineering applications, but considerable future work will be necessary.
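The geometric pinning picture can be quantified with the Gibbs-Thomson effect: proteins adsorbed with spacing d force the advancing ice front to bow with curvature radius of order d/2, depressing the local freezing point by dT = 2·gamma·Tm / (rho·L·r). The ice/water property values below are illustrative textbook numbers, not data from this report.

```python
# Hedged Gibbs-Thomson sketch of curvature-induced freezing point
# depression for an ice front bowing between pinning proteins.
gamma = 0.03    # J/m^2, ice-water interfacial energy (assumed)
Tm = 273.15     # K, equilibrium melting point
rho_L = 3.3e8   # J/m^3, volumetric latent heat of fusion (assumed)

def depression(spacing):
    """Freezing point depression (K) for pinning sites a distance apart."""
    r = spacing / 2.0   # radius of the bowed front between pins
    return 2 * gamma * Tm / (rho_L * r)

print(f"{depression(20e-9):.2f} K")  # for an assumed 20 nm protein spacing
```

The strong 1/spacing dependence is the appeal of the mechanism: closely packed proteins produce a depression in the kelvin range, qualitatively matching the thermal hysteresis the abstract cites.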

  15. A diffuse interface model of grain boundary faceting

    NASA Astrophysics Data System (ADS)

    Abdeljawad, F.; Medlin, D. L.; Zimmerman, J. A.; Hattar, K.; Foiles, S. M.

    2016-06-01

Interfaces, free or internal, greatly influence the physical properties and stability of materials microstructures. Of particular interest are the processes that occur due to anisotropic interfacial properties. In the case of grain boundaries (GBs) in metals, several experimental observations revealed that an initially flat GB may facet into hill-and-valley structures with well-defined planes and corners/edges connecting them. Herein, we present a diffuse interface model that is capable of accounting for strongly anisotropic GB properties and capturing the formation of hill-and-valley morphologies. The hallmark of our approach is the ability to independently examine the various factors affecting GB faceting and subsequent facet coarsening. More specifically, our formulation incorporates higher-order expansions to account for the excess energy due to facet junctions and their non-local interactions. As a demonstration of the modeling capability, we consider the Σ5 <001> tilt GB in body-centered-cubic iron, where faceting along the {210} and {310} planes was experimentally observed. Atomistic calculations were utilized to determine the inclination-dependent GB energy, which was then used as an input to our model. Linear stability analysis and simulation results highlight the role of junction energy and associated non-local interactions on the resulting facet length scales. Broadly speaking, our modeling approach provides a general framework to examine the microstructural stability of polycrystalline systems with highly anisotropic GBs.
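The linear stability analysis mentioned above rests on the classical interface-stiffness criterion: an inclination theta is unstable to faceting where gamma(theta) + gamma''(theta) < 0. The sketch below checks that criterion numerically for a generic fourfold anisotropy; the functional form and anisotropy strength are illustrative assumptions, not the atomistic iron data used in the paper.

```python
import math

# Hedged sketch: inclination-dependent GB energy with fourfold anisotropy
def gamma(theta, g0=1.0, delta=0.15, k=4):
    """Illustrative GB energy vs. inclination (arbitrary units)."""
    return g0 * (1 + delta * math.cos(k * theta))

def stiffness(theta, h=1e-4):
    """Interface stiffness gamma + gamma'' via central differences."""
    g2 = (gamma(theta - h) - 2 * gamma(theta) + gamma(theta + h)) / h ** 2
    return gamma(theta) + g2

# Energy-maximum orientation is unstable (facets); minimum is stable
print(stiffness(0.0), stiffness(math.pi / 4))
```

With delta·k² > 1, as here, the stiffness changes sign across orientations, which is precisely the regime where an initially flat boundary breaks into the hill-and-valley facets the abstract describes.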

  16. A symbolic/subsymbolic interface protocol for cognitive modeling

    PubMed Central

    Simen, Patrick; Polk, Thad

    2009-01-01

    Researchers studying complex cognition have grown increasingly interested in mapping symbolic cognitive architectures onto subsymbolic brain models. Such a mapping seems essential for understanding cognition under all but the most extreme viewpoints (namely, that cognition consists exclusively of digitally implemented rules; or instead, involves no rules whatsoever). Making this mapping reduces to specifying an interface between symbolic and subsymbolic descriptions of brain activity. To that end, we propose parameterization techniques for building cognitive models as programmable, structured, recurrent neural networks. Feedback strength in these models determines whether their components implement classically subsymbolic neural network functions (e.g., pattern recognition), or instead, logical rules and digital memory. These techniques support the implementation of limited production systems. Though inherently sequential and symbolic, these neural production systems can exploit principles of parallel, analog processing from decision-making models in psychology and neuroscience to explain the effects of brain damage on problem solving behavior. PMID:20711520
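The claim that feedback strength determines whether a unit behaves subsymbolically (graded) or symbolically (digital memory) can be shown with a single sigmoidal unit. With strong self-excitation the unit latches a transient input like a flip-flop; with weak self-excitation its state decays when the input is removed. The gain and weight values are illustrative, not the paper's parameterization.

```python
import math

def step(x, w_self, drive, gain=10.0, theta=0.5):
    """One update of a sigmoidal unit with self-feedback weight w_self."""
    return 1 / (1 + math.exp(-gain * (w_self * x + drive - theta)))

def settle(x, w_self, drive, steps=200):
    """Iterate the unit from state x until it settles."""
    for _ in range(steps):
        x = step(x, w_self, drive)
    return x

# Strong self-feedback: a transient input pulse is latched (digital memory)
x = settle(0.0, 1.0, 0.4)        # input pulse drives the unit high
x_hold = settle(x, 1.0, 0.0)     # pulse removed: high state is retained
# Weak self-feedback: a graded analog unit whose state decays without input
y = settle(0.0, 0.2, 0.4)
y_hold = settle(y, 0.2, 0.0)
print(round(x_hold, 2), round(y_hold, 2))
```

Bistability from strong recurrence is the basic ingredient that lets the authors' structured recurrent networks implement rule-like production memory on analog hardware.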

  17. Interface Management for a NASA Flight Project Using Model-Based Systems Engineering (MBSE)

    NASA Technical Reports Server (NTRS)

    Vipavetz, Kevin; Shull, Thomas A.; Infeld, Samatha; Price, Jim

    2016-01-01

    The goal of interface management is to identify, define, control, and verify interfaces; ensure compatibility; provide an efficient system development; be on time and within budget; while meeting stakeholder requirements. This paper will present a successful seven-step approach to interface management used in several NASA flight projects. The seven-step approach using Model Based Systems Engineering will be illustrated by interface examples from the Materials International Space Station Experiment-X (MISSE-X) project. The MISSE-X was being developed as an International Space Station (ISS) external platform for space environmental studies, designed to advance the technology readiness of materials and devices critical for future space exploration. Emphasis will be given to best practices covering key areas such as interface definition, writing good interface requirements, utilizing interface working groups, developing and controlling interface documents, handling interface agreements, the use of shadow documents, the importance of interface requirement ownership, interface verification, and product transition.

  18. Groundwater modeling and remedial optimization design using graphical user interfaces

    SciTech Connect

    Deschaine, L.M.

    1997-05-01

The ability to accurately predict the behavior of chemicals in groundwater systems under natural flow circumstances or remedial screening and design conditions is the cornerstone of the environmental industry. The ability to do this efficiently, and to effectively communicate the information to the client and regulators, is what differentiates effective consultants from ineffective consultants. Recent advances in groundwater modeling graphical user interfaces (GUIs) are doing for numerical modeling what Windows™ did for DOS™. GUIs facilitate both the modeling process and the information exchange. This Test Drive evaluates the performance of two GUIs--Groundwater Vistas and ModIME--on an actual groundwater model calibration and remedial design optimization project. In the early days of numerical modeling, data input consisted of large arrays of numbers that required intensive labor to input and troubleshoot. Model calibration was also manual, as was interpreting the reams of computer output for each of the tens or hundreds of simulations required to calibrate and perform optimal groundwater remedial design. During this period, the majority of the modeler's effort (and budget) was spent just getting the model running, as opposed to solving the environmental challenge at hand. GUIs take the majority of the grunt work out of the modeling process, thereby allowing the modeler to focus on designing optimal solutions.

  19. A Model of Process-Based Automation: Cost and Quality Implications in the Medication Management Process

    ERIC Educational Resources Information Center

    Spaulding, Trent Joseph

    2011-01-01

    The objective of this research is to understand how a set of systems, as defined by the business process, creates value. The three studies contained in this work develop the model of process-based automation. The model states that complementarities among systems are specified by handoffs in the business process. The model also provides theory to…

  20. Towards an Improved Pilot-Vehicle Interface for Highly Automated Aircraft: Evaluation of the Haptic Flight Control System

    NASA Technical Reports Server (NTRS)

    Schutte, Paul; Goodrich, Kenneth; Williams, Ralph

    2012-01-01

The control automation and interaction paradigm (e.g., manual, autopilot, flight management system) used on virtually all large highly automated aircraft has long been an exemplar of breakdowns in human factors and human-centered design. An alternative paradigm is the Haptic Flight Control System (HFCS) that is part of NASA Langley Research Center's Naturalistic Flight Deck Concept. The HFCS uses only stick and throttle for easily and intuitively controlling the actual flight of the aircraft without losing any of the efficiency and operational benefits of the current paradigm. Initial prototypes of the HFCS are being evaluated and this paper describes one such evaluation. In this evaluation we examined claims regarding improved situation awareness, appropriate workload, graceful degradation, and improved pilot acceptance. Twenty-four instrument-rated pilots were instructed to plan and fly four different flights in a fictitious airspace using a moderate fidelity desktop simulation. Three different flight control paradigms were tested: Manual control, Full Automation control, and a simplified version of the HFCS. Dependent variables included both subjective (questionnaire) and objective (SAGAT) measures of situation awareness, workload (NASA-TLX), secondary task performance, time to recognize automation failures, and pilot preference (questionnaire). The results showed a statistically significant advantage for the HFCS in a number of measures. Results that were not statistically significant still favored the HFCS. The results suggest that the HFCS does offer an attractive and viable alternative to the tactical components of today's FMS/autopilot control system. The paper describes further studies that are planned to continue to evaluate the HFCS.

  1. Spherical wave reflection in layered media with rough interfaces: Three-dimensional modeling.

    PubMed

    Pinson, Samuel; Cordioli, Julio; Guillon, Laurent

    2016-08-01

In the context of sediment characterization, layer interface roughnesses may be responsible for sound-speed profile measurement uncertainties. To study the roughness influence, a three-dimensional (3D) modeling of a layered seafloor with rough interfaces is necessary. Although roughness scattering has an abundant literature, 3D modeling of spherical wave reflection on rough interfaces is generally limited to a single interface (using the Kirchhoff-Helmholtz integral) or computationally expensive techniques (finite difference or finite element method). In this work, it is demonstrated that the wave reflection over a layered medium with irregular interfaces can be modeled as a sum of integrals over each interface. The main approximations of the method are the tangent-plane approximation, the Born approximation (multiple reflections between interfaces are neglected), and the flat-interface approximation for the transmitted waves into the sediment. The integration over layer interfaces results in a method with reasonable computation cost. PMID:27586741
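The flat-interface approximation mentioned above rests on standard plane-wave reflection coefficients. As a minimal sketch of that single ingredient (not the paper's full 3D integral method), the Rayleigh coefficient for a fluid-fluid interface can be computed directly; the water-over-sand density and sound-speed values below are illustrative assumptions:

```python
import cmath
import math

def rayleigh_reflection(rho1, c1, rho2, c2, theta1_deg):
    """Plane-wave (Rayleigh) reflection coefficient at a fluid-fluid
    interface: R = (Z2 - Z1) / (Z2 + Z1), where the acoustic impedance
    of medium i is Z_i = rho_i * c_i / cos(theta_i) and Snell's law
    relates the incidence and transmission angles."""
    theta1 = math.radians(theta1_deg)
    sin_t2 = (c2 / c1) * math.sin(theta1)   # Snell's law
    cos_t2 = cmath.sqrt(1 - sin_t2 ** 2)    # complex beyond the critical angle
    z1 = rho1 * c1 / math.cos(theta1)
    z2 = rho2 * c2 / cos_t2
    return (z2 - z1) / (z2 + z1)

# Assumed illustrative values: water (1000 kg/m^3, 1500 m/s) over a
# sandy sediment (1900 kg/m^3, 1650 m/s), 20 degrees from normal.
R = rayleigh_reflection(1000.0, 1500.0, 1900.0, 1650.0, 20.0)
```

Because the transmitted-angle cosine is evaluated with a complex square root, the same expression yields |R| = 1 automatically for post-critical incidence (total internal reflection).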

  2. Modeling the Energy Use of a Connected and Automated Transportation System (Poster)

    SciTech Connect

    Gonder, J.; Brown, A.

    2014-07-01

    Early research points to large potential impacts of connected and automated vehicles (CAVs) on transportation energy use - dramatic savings, increased use, or anything in between. Due to a lack of suitable data and integrated modeling tools to explore these complex future systems, analyses to date have relied on simple combinations of isolated effects. This poster proposes a framework for modeling the potential energy implications from increasing penetration of CAV technologies and for assessing technology and policy options to steer them toward favorable energy outcomes. Current CAV modeling challenges include estimating behavior change, understanding potential vehicle-to-vehicle interactions, and assessing traffic flow and vehicle use under different automation scenarios. To bridge these gaps and develop a picture of potential future automated systems, NREL is integrating existing modeling capabilities with additional tools and data inputs to create a more fully integrated CAV assessment toolkit.

  3. Modeling organohalide perovskites for photovoltaic applications: From materials to interfaces

    NASA Astrophysics Data System (ADS)

    de Angelis, Filippo

    2015-03-01

The field of hybrid/organic photovoltaics was revolutionized in 2012 by the first reports of solid-state solar cells based on organohalide perovskites, now topping 20% efficiency. First-principles modeling has been widely applied to the dye-sensitized solar cells field, and more recently to perovskite-based solar cells. The computational design and screening of new materials has played a major role in advancing the DSCs field. Suitable modeling strategies may also offer a view of the crucial heterointerfaces ruling the device operational mechanism. I will illustrate how simulation tools can be employed in the emerging field of perovskite solar cells. The performance of the proposed simulation toolbox along with the fundamental modeling strategies are presented using selected examples of relevant materials and interfaces. The main issue with hybrid perovskite modeling is to accurately describe their structural, electronic and optical features. These materials show a degree of short-range disorder, due to the presence of mobile organic cations embedded within the inorganic matrix, requiring their properties to be averaged over a molecular dynamics trajectory. Due to the presence of heavy atoms (e.g. Sn and Pb), their electronic structure must take into account spin-orbit coupling (SOC) in an effective way, possibly including GW corrections. The proposed SOC-GW method constitutes the basis for tuning the materials' electronic and optical properties, rationalizing experimental trends. Modeling charge generation in perovskite-sensitized TiO2 interfaces is then approached based on a SOC-DFT scheme, describing alignment of energy levels in a qualitatively correct fashion. The role of interfacial chemistry on the device performance is finally discussed. The research leading to these results has received funding from the European Union Seventh Framework Programme [FP7/2007-2013] under Grant Agreement No. 604032 of the MESO project.

  4. Intelligent User Interfaces for Information Analysis: A Cognitive Model

    SciTech Connect

    Schwarting, Irene S.; Nelson, Rob A.; Cowell, Andrew J.

    2006-01-29

    Intelligent user interfaces (IUIs) for information analysis (IA) need to be designed with an intrinsic understanding of the analytical objectives and the dimensions of the information space. These analytical objectives are oriented around the requirement to provide decision makers with courses of action. Most tools available to support analysis barely skim the surface of the dimensions and categories of information used in analysis, and almost none are designed to address the ultimate requirement of decision support. This paper presents a high-level model of the cognitive framework of information analysts in the context of doing their jobs. It is intended that this model will enable the derivation of design requirements for advanced IUIs for IA.

  5. ORIGAMI -- The Oak Ridge Geometry Analysis and Modeling Interface

    SciTech Connect

    Burns, T.J.

    1996-04-01

A revised "ray-tracing" package which is a superset of the geometry specifications of the radiation transport codes MORSE, MASH (GIFT Versions 4 and 5), HETC, and TORT has been developed by ORNL. Two additional CAD-based formats are also included as part of the superset: the native format of the BRL-CAD system--MGED, and the solid constructive geometry subset of the IGES specification. As part of this upgrade effort, ORNL has designed an X Windows-based utility (ORIGAMI) to facilitate the construction, manipulation, and display of the geometric models required by the MASH code. Since the primary design criterion for this effort was that the utility "see" the geometric model exactly as the radiation transport code does, ORIGAMI is designed to utilize the same "ray-tracing" package as the revised version of MASH. ORIGAMI incorporates the functionality of two previously developed graphical utilities, CGVIEW and ORGBUG, into a single consistent interface.

  6. PyGSM: Python interface to the Global Sky Model

    NASA Astrophysics Data System (ADS)

    Price, Danny C.

    2016-03-01

    PyGSM is a Python interface for the Global Sky Model (GSM, ascl:1011.010). The GSM is a model of diffuse galactic radio emission, constructed from a variety of all-sky surveys spanning the radio band (e.g. Haslam and WMAP). PyGSM uses the GSM to generate all-sky maps in Healpix format of diffuse Galactic radio emission from 10 MHz to 94 GHz. The PyGSM module provides visualization utilities, file output in FITS format, and the ability to generate observed skies for a given location and date. PyGSM requires Healpy, PyEphem (ascl:1112.014), and AstroPy (ascl:1304.002).

  7. The electrical behavior of GaAs-insulator interfaces - A discrete energy interface state model

    NASA Technical Reports Server (NTRS)

    Kazior, T. E.; Lagowski, J.; Gatos, H. C.

    1983-01-01

    The relationship between the electrical behavior of GaAs Metal Insulator Semiconductor (MIS) structures and the high density discrete energy interface states (0.7 and 0.9 eV below the conduction band) was investigated utilizing photo- and thermal emission from the interface states in conjunction with capacitance measurements. It was found that all essential features of the anomalous behavior of GaAs MIS structures, such as the frequency dispersion and the C-V hysteresis, can be explained on the basis of nonequilibrium charging and discharging of the high density discrete energy interface states.

  8. Data for Environmental Modeling (D4EM): Background and Applications of Data Automation

    EPA Science Inventory

    The Data for Environmental Modeling (D4EM) project demonstrates the development of a comprehensive set of open source software tools that overcome obstacles to accessing data needed by automating the process of populating model input data sets with environmental data available fr...

  9. A Binary Programming Approach to Automated Test Assembly for Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Finkelman, Matthew D.; Kim, Wonsuk; Roussos, Louis; Verschoor, Angela

    2010-01-01

    Automated test assembly (ATA) has been an area of prolific psychometric research. Although ATA methodology is well developed for unidimensional models, its application alongside cognitive diagnosis models (CDMs) is a burgeoning topic. Two suggested procedures for combining ATA and CDMs are to maximize the cognitive diagnostic index and to use a…

  10. Growth/reflectance model interface for wheat and corresponding model

    NASA Technical Reports Server (NTRS)

    Suits, G. H.; Sieron, R.; Odenweller, J.

    1984-01-01

The use of modeling to explore the possibility of discovering new and useful crop condition indicators which might be available from the Thematic Mapper, and to connect these symptoms to the biological causes in the crop, is discussed. A crop growth model was used to predict the day-to-day growth features of the crop as it responds biologically to the various environmental factors. A reflectance model was used to predict the character of the interaction of daylight with the predicted growth features. An atmospheric path radiance was added to the reflected daylight to simulate the radiance appearing at the sensor. Finally, the digitized data sent to a ground station were calculated. The crop under investigation is wheat.
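The chain described above (surface reflectance plus path radiance, then digitization at the sensor) can be sketched with the usual Lambertian approximation. This is a generic radiometric-chain illustration, not the paper's specific growth or reflectance model, and every numeric value below is an assumed placeholder:

```python
import math

def at_sensor_radiance(reflectance, irradiance, transmittance, path_radiance):
    """Lambertian at-sensor radiance: L = rho * E * T / pi + L_path."""
    return reflectance * irradiance * transmittance / math.pi + path_radiance

def to_digital_number(radiance, gain, offset, bits=8):
    """Linear sensor model, rounded and clipped to the quantizer range."""
    dn = round(gain * radiance + offset)
    return max(0, min(2 ** bits - 1, dn))

# Assumed illustrative values: 45% canopy reflectance, 1000 W/m^2/um
# solar irradiance, 80% atmospheric transmittance, small path radiance.
L = at_sensor_radiance(0.45, 1000.0, 0.8, 5.0)
dn = to_digital_number(L, gain=1.5, offset=2.0)
```

The gain/offset pair stands in for the sensor calibration that maps radiance to the digital counts transmitted to the ground station.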

  11. A robust and flexible Geospatial Modeling Interface (GMI) for environmental model deployment and evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper provides an overview of the GMI (Geospatial Modeling Interface) simulation framework for environmental model deployment and assessment. GMI currently provides access to multiple environmental models including AgroEcoSystem-Watershed (AgES-W), Nitrate Leaching and Economic Analysis 2 (NLEA...

  12. Modeling and diagnosing interface mix in layered ICF implosions

    NASA Astrophysics Data System (ADS)

    Weber, C. R.; Berzak Hopkins, L. F.; Clark, D. S.; Haan, S. W.; Ho, D. D.; Meezan, N. B.; Milovich, J. L.; Robey, H. F.; Smalyuk, V. A.; Thomas, C. A.

    2015-11-01

Mixing at the fuel-ablator interface of an inertial confinement fusion (ICF) implosion can arise from an unfavorable in-flight Atwood number between the cryogenic DT fuel and the ablator. High-Z dopant is typically added to the ablator to control the Atwood number, but recent high-density carbon (HDC) capsules have been shot at the National Ignition Facility (NIF) without this added dopant. Highly resolved post-shot modeling of these implosions shows that there was significant mixing of ablator material into the dense DT fuel. This mix lowers the fuel density and results in less overall compression, helping to explain the measured ratio of down-scattered-to-primary neutrons. Future experimental designs will seek to improve this issue through adding dopant and changing the x-ray spectra with a different hohlraum wall material. To test these changes, we are designing an experimental platform to look at the growth of this mixing layer. This technique uses side-on radiography to measure the spatial extent of an embedded high-Z tracer layer near the interface. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  13. Modeling the Electrical Contact Resistance at Steel-Carbon Interfaces

    NASA Astrophysics Data System (ADS)

    Brimmo, Ayoola T.; Hassan, Mohamed I.

    2016-01-01

In the aluminum smelting industry, electrical contact resistance at the stub-carbon (steel-carbon) interface has been recurrently reported to be of magnitudes that warrant concern. Mitigating this via finite element modeling has been the focus of a number of investigations, with the pressure- and temperature-dependent contact resistance relation frequently cited as a factor that limits the accuracy of such models. In this study, pressure- and temperature-dependent relations are derived from the most extensively cited works that have experimentally characterized the electrical contact resistance at these contacts. These relations are applied in a validated thermo-electro-mechanical finite element model used to estimate the voltage drop across a steel-carbon laboratory setup. By comparing the models' estimates of the contact electrical resistance with experimental measurements, we deduce the applicability of the different relations over a range of temperatures. The ultimate goal of this study is to apply mathematical modeling in providing pressure- and temperature-dependent relations that best describe the steel-carbon electrical contact resistance and identify the best-fit relation at specific thermodynamic conditions.

  14. SN_GUI: a graphical user interface for snowpack modeling

    NASA Astrophysics Data System (ADS)

    Spreitzhofer, G.; Fierz, C.; Lehning, M.

    2004-10-01

    SNOWPACK is a physical snow cover model. The model not only serves as a valuable research tool, but also runs operationally on a network of high Alpine automatic weather and snow measurement sites. In order to facilitate the operation of SNOWPACK and the interpretation of the results obtained by this model, a user-friendly graphical user interface for snowpack modeling, named SN_GUI, was created. This Java-based and thus platform-independent tool can be operated in two modes, one designed to fulfill the requirements of avalanche warning services (e.g. by providing information about critical layers within the snowpack that are closely related to the avalanche activity), and the other one offering a variety of additional options satisfying the needs of researchers. The user of SN_GUI is graphically guided through the entire process of creating snow cover simulations. The starting point is the efficient creation of input parameter files for SNOWPACK, followed by the launching of SNOWPACK with a variety of parameter settings. Finally, after the successful termination of the run, a number of interactive display options may be used to visualize the model output. Among these are vertical profiles and time profiles for many parameters. Besides other features, SN_GUI allows the use of various color, time and coordinate scales, and the comparison of measured and observed parameters.

  15. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

With the increasing knowledge about the natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further impede rapid modeling and analysis. Using the widely-applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology—Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a work station. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time cost becomes lower with an increasing number of processes (from two to five), this enhancement becomes less due to the accompanying increase in demand for message passing procedures between the master and all slave processes. Our case study demonstrates that the P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, the P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
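The reported figures are consistent with the identity speedup = 1 / (1 - reduction): a 42% runtime reduction is a speedup of about 1.72, and 70% is about 3.33. The master/worker task split described above can be sketched with the standard library; this uses a thread pool rather than MPI (so it illustrates the distribution scheme, not the actual multi-process performance), and `simulate_subbasin` is a hypothetical stand-in for one subbasin's computation:

```python
from concurrent.futures import ThreadPoolExecutor

def speedup_from_reduction(reduction):
    """Convert a fractional runtime reduction into a speedup factor."""
    return 1.0 / (1.0 - reduction)

def simulate_subbasin(sub_id):
    # Hypothetical stand-in for one subbasin's routing computation.
    return sum(i * i for i in range(10000)) + sub_id

def run_parallel(subbasins, workers, master_share=0.2):
    """Keep a fixed share of tasks on the master and farm the rest out
    to workers, mimicking the master/slave split in the abstract."""
    k = int(len(subbasins) * master_share)
    master_part = [simulate_subbasin(s) for s in subbasins[:k]]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        worker_part = list(pool.map(simulate_subbasin, subbasins[k:]))
    return master_part + worker_part

results = run_parallel(list(range(20)), workers=4)
```

Tuning `master_share` and `workers` is the stdlib analogue of the two P-SWAT parameters the authors optimize.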

  16. Modeling interface-controlled phase transformation kinetics in thin films

    NASA Astrophysics Data System (ADS)

    Pang, E. L.; Vo, N. Q.; Philippe, T.; Voorhees, P. W.

    2015-05-01

    The Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation is widely used to describe phase transformation kinetics. This description, however, is not valid in finite size domains, in particular, thin films. A new computational model incorporating the level-set method is employed to study phase evolution in thin film systems. For both homogeneous (bulk) and heterogeneous (surface) nucleation, nucleation density and film thickness were systematically adjusted to study finite-thickness effects on the Avrami exponent during the transformation process. Only site-saturated nucleation with isotropic interface-kinetics controlled growth is considered in this paper. We show that the observed Avrami exponent is not constant throughout the phase transformation process in thin films with a value that is not consistent with the dimensionality of the transformation. Finite-thickness effects are shown to result in reduced time-dependent Avrami exponents when bulk nucleation is present, but not necessarily when surface nucleation is present.
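For reference, the JMAK description gives the transformed fraction X(t) = 1 - exp(-K t^n), so the Avrami exponent n is the slope of ln(-ln(1-X)) versus ln(t); a time-dependent exponent, as reported above for thin films, shows up as a varying local slope. A minimal sketch (illustrative K and n, not the paper's level-set model) recovering the exponent from two points on an ideal JMAK curve:

```python
import math

def jmak_fraction(t, K, n):
    """JMAK transformed fraction X(t) = 1 - exp(-K * t**n)."""
    return 1.0 - math.exp(-K * t ** n)

def local_avrami_exponent(t1, x1, t2, x2):
    """Finite-difference slope of ln(-ln(1 - X)) versus ln(t)."""
    y1 = math.log(-math.log(1.0 - x1))
    y2 = math.log(-math.log(1.0 - x2))
    return (y2 - y1) / (math.log(t2) - math.log(t1))

# Ideal bulk, site-saturated, interface-controlled 3D growth has n = 3;
# K and the sample times below are assumed illustrative values.
K, n = 1e-3, 3.0
t1, t2 = 5.0, 10.0
n_est = local_avrami_exponent(t1, jmak_fraction(t1, K, n),
                              t2, jmak_fraction(t2, K, n))
```

Applied to thin-film data instead of the ideal curve, the same local-slope estimator would show the reduced, time-dependent exponents the abstract describes.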

  17. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  18. Automated NMR Fragment Based Screening Identified a Novel Interface Blocker to the LARG/RhoA Complex

    PubMed Central

    Gao, Jia; Ma, Rongsheng; Wang, Wei; Wang, Na; Sasaki, Ryan; Snyderman, David; Wu, Jihui; Ruan, Ke

    2014-01-01

The small GTPase cycles between the inactive GDP form and the activated GTP form, catalyzed by upstream guanine exchange factors. The modulation of this process by small molecules has proven to be a fruitful route for therapeutic intervention to prevent the over-activation of the small GTPase. The fragment-based approach that emerged in the past decade has demonstrated its paramount potential in the discovery of inhibitors targeting such novel and challenging protein-protein interactions. The details of the procedure of NMR fragment screening from scratch have rarely been disclosed comprehensively, which restricts its wider application. To achieve a consistent screening process applicable to a number of targets, we developed a highly automated protocol covering every aspect of NMR fragment screening, including the construction of a small but diverse library, determination of aqueous solubility by NMR, grouping of compounds with mutually dispersed peaks into cocktails, and the automated processing and visualization of the ligand-based screening spectra. We exemplified our streamlined screening on RhoA alone and on the complex of the small GTPase RhoA and its upstream guanine exchange factor LARG. Two hits were confirmed from the primary screening in cocktails and secondary screening over individual hits for the LARG/RhoA complex, while one of them was also identified in the screening for RhoA alone. HSQC titration of the two hits against RhoA and LARG alone, respectively, identified one compound binding to RhoA.GDP with 0.11 mM affinity that perturbed residues in the switch II region of RhoA. This hit blocked the formation of the LARG/RhoA complex, as validated by native gel electrophoresis and by titration of RhoA into 15N-labeled LARG in the absence and presence of the compound. It therefore provides a starting point toward a more potent inhibitor of RhoA activation catalyzed by LARG. PMID:24505392
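The cocktail-grouping step (placing compounds together only when their peaks are mutually dispersed) can be sketched as a greedy assignment. This is a toy illustration of the idea, not the authors' published algorithm; the peak lists, tolerance, and cocktail size are all assumed values:

```python
def group_into_cocktails(peaks, min_separation=0.05, max_size=5):
    """Greedily assign compounds (each given as a list of 1H chemical
    shifts, in ppm) to cocktails so that no two compounds in the same
    cocktail have peaks closer than `min_separation` ppm."""
    cocktails = []  # each entry: (member indices, all shifts used so far)
    for idx, shifts in enumerate(peaks):
        placed = False
        for members, used in cocktails:
            if len(members) < max_size and all(
                abs(s - u) >= min_separation for s in shifts for u in used
            ):
                members.append(idx)
                used.extend(shifts)
                placed = True
                break
        if not placed:
            cocktails.append(([idx], list(shifts)))
    return [members for members, _ in cocktails]

# Hypothetical shift lists: compounds 0 and 1 overlap at ~1.0 ppm,
# so they land in different cocktails; compound 2 joins cocktail 0.
cocktails = group_into_cocktails([[1.0, 2.0], [1.02, 3.0], [4.0]])
```

Keeping overlapping compounds apart is what lets a single 1D spectrum of a cocktail be deconvolved back to individual ligands during primary screening.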

  19. Resistive switching near electrode interfaces: Estimations by a current model

    NASA Astrophysics Data System (ADS)

    Schroeder, Herbert; Zurhelle, Alexander; Stemmer, Stefanie; Marchewka, Astrid; Waser, Rainer

    2013-02-01

The growing resistive switching database is accompanied by many detailed mechanisms that often are pure hypotheses. Some of these suggested models can be verified by checking their predictions against the benchmarks of future memory cells. The valence change memory model assumes that the different resistances in the ON and OFF states arise from changing the defect density profiles in a sheet near one working electrode during switching. The resulting different READ current densities in the ON and OFF states were calculated using an appropriate simulation model with variation of several important defect and material parameters of the metal/insulator (oxide)/metal thin film stack, such as the defect density and its profile change (in density and thickness), the height of the interface barrier, the dielectric permittivity, and the applied voltage. The results were compared to the benchmarks and some memory windows of the varied parameters can be defined: The required ON state READ current density of 105 A/cm2 can only be achieved for barriers smaller than 0.7 eV and defect densities larger than 3 × 1020 cm-3. The required current ratio between ON and OFF states of at least 10 requires a defect density reduction of approximately an order of magnitude in a sheet several nanometers thick near the working electrode.

  20. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGICAL MODELING TOOL FOR WATERSHED MANAGEMENT AND LANDSCAPE ASSESSMENT

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment (http://www.epa.gov/nerlesd1/land-sci/agwa/introduction.htm and www.tucson.ars.ag.gov/agwa) tool is a GIS interface jointly developed by the U.S. Environmental Protection Agency, USDA-Agricultural Research Service, and the University ...

  1. Modeling Multiple Human-Automation Distributed Systems using Network-form Games

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume

    2012-01-01

    The paper describes at a high-level the network-form game framework (based on Bayes net and game theory), which can be used to model and analyze safety issues in large, distributed, mixed human-automation systems such as NextGen.

  2. Automated volumetric grid generation for finite element modeling of human hand joints

    SciTech Connect

    Hollerbach, K.; Underhill, K.; Rainsberger, R.

    1995-02-01

    We are developing techniques for finite element analysis of human joints. These techniques need to provide high quality results rapidly in order to be useful to a physician. The research presented here increases model quality and decreases user input time by automating the volumetric mesh generation step.

  3. Evaluation of automated cell disruptor methods for oomycetous and ascomycetous model organisms

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Two automated cell disruptor-based methods for RNA extraction; disruption of thawed cells submerged in TRIzol Reagent (method QP), and direct disruption of frozen cells on dry ice (method CP), were optimized for a model oomycete, Phytophthora capsici, and compared with grinding in a mortar and pestl...

  4. Automated Discovery and Modeling of Sequential Patterns Preceding Events of Interest

    NASA Technical Reports Server (NTRS)

    Rohloff, Kurt

    2010-01-01

The integration of emerging data manipulation technologies has enabled a paradigm shift in practitioners' abilities to understand and anticipate events of interest in complex systems. Example events of interest include outbreaks of socio-political violence in nation-states. Rather than relying on human-centric modeling efforts that are limited by the availability of subject matter experts (SMEs), automated data processing technologies have enabled the development of innovative automated complex system modeling and predictive analysis technologies. We introduce one such emerging modeling technology - the sequential pattern methodology. We have applied the sequential pattern methodology to automatically identify patterns of observed behavior that precede outbreaks of socio-political violence such as riots, rebellions and coups in nation-states. The sequential pattern methodology is a groundbreaking approach to automated complex system model discovery because it generates easily interpretable patterns, based on direct observations of sampled factor data, that support a deeper understanding of societal behaviors and are tolerant of observation noise and missing data. The discovered patterns are simple to interpret and mimic human identification of observed trends in temporal data. Discovered patterns also provide an automated forecasting ability: we discuss an example of using discovered patterns coupled with a rich data environment to forecast various types of socio-political violence in nation-states.
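The core idea of mining observation sequences that precede labeled events can be illustrated with a toy counter: collect the window of observations immediately before each event and tally how often each ordered pattern occurs. This is a deliberately simplified sketch of the general approach (real sequential pattern mining handles gaps, noise, and missing data); the factor codes and event indices below are hypothetical:

```python
from collections import Counter

def patterns_preceding_events(observations, events, window=3):
    """Count the ordered tuple of observations in the `window` steps
    before each event time. Events too early in the series to have a
    full preceding window are skipped."""
    counts = Counter()
    for t in events:
        if t >= window:
            counts[tuple(observations[t - window:t])] += 1
    return counts

# Hypothetical daily factor codes for a nation-state; events (coups)
# occur at indices 4 and 9.
obs = ["calm", "protest", "strike", "riot", "coup",
       "calm", "protest", "strike", "riot", "coup"]
counts = patterns_preceding_events(obs, events=[4, 9], window=3)
```

A pattern that recurs before many events but rarely otherwise becomes a candidate forecasting rule, which is the forecasting ability the abstract describes.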

  5. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  6. Workstation Modelling and Development: Clinical Definition of a Picture Archiving and Communications System (PACS) User Interface

    NASA Astrophysics Data System (ADS)

    Braudes, Robert E.; Mun, Seong K.; Sibert, John L.; Schnizlein, John; Horii, Steven C.

    1989-05-01

    A PACS must provide a user interface which is acceptable to all potential users of the system. Observations and interviews have been conducted with six radiology services at the Georgetown University Medical Center, Department of Radiology, in order to evaluate user interface requirements for a PACS system. Based on these observations, a conceptual model of radiology has been developed. These discussions have also revealed some significant differences in the user interface requirements between the various services. Several underlying factors have been identified which may be used as initial predictors of individual user interface styles. A user model has been developed which incorporates these factors into the specification of a tailored PACS user interface.

  7. Graphical User Interface for Simulink Integrated Performance Analysis Model

    NASA Technical Reports Server (NTRS)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X Engine (built by Pratt & Whitney Rocketdyne) in the Upper Stage of the Ares I Crew Launch Vehicle will only start within a certain range of temperature and pressure for Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible in order to save the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink Model, without opening or altering the contents of the model. The GUI must allow for test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink Model, and get the output from the Simulink Model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI will construct a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.

  8. An automated construction of error models for uncertainty quantification and model calibration

    NASA Astrophysics Data System (ADS)

    Josset, L.; Lunati, I.

    2015-12-01

    To reduce the computational cost of stochastic predictions, it is common practice to rely on approximate flow solvers (or "proxy"), which provide an inexact, but computationally inexpensive response [1,2]. Error models can be constructed to correct the proxy response: based on a learning set of realizations for which both exact and proxy simulations are performed, a transformation is sought to map proxy into exact responses. Once the error model is constructed, a prediction of the exact response is obtained at the cost of a proxy simulation for any new realization. Despite its effectiveness [2,3], the methodology relies on several user-defined parameters, which impact the accuracy of the predictions. To achieve a fully automated construction, we propose a novel methodology based on an iterative scheme: we first initialize the error model with a small training set of realizations; then, at each iteration, we add a new realization both to improve the model and to evaluate its performance. More specifically, at each iteration we use the responses predicted by the updated model to identify the realizations that need to be considered to compute the quantity of interest. Another user-defined parameter is the number of dimensions of the response spaces between which the mapping is sought. To identify the space dimensions that optimally balance mapping accuracy and risk of overfitting, we follow a Leave-One-Out Cross-Validation. Also, the definition of a stopping criterion is central to an automated construction. We use a stability measure based on bootstrap techniques to stop the iterative procedure when the iterative model has converged. The methodology is illustrated with two test cases in which an inverse problem has to be solved, which allows us to assess the performance of the method. We show that an iterative scheme is crucial to increase the applicability of the approach. [1] Josset, L., and I. Lunati, Local and global error models for improving uncertainty quantification, Math
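
    As an illustration of the iterative construction described above (with toy stand-ins for the exact and proxy solvers; none of the functions, parameters, or tolerances below come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def exact_solver(x):   # stand-in for the expensive flow simulation
    return 1.2 * np.sin(x) + 0.3 + 0.05 * x

def proxy_solver(x):   # stand-in for the cheap, inexact proxy
    return np.sin(x)

def fit_error_model(xs):
    """Least-squares map from proxy to exact response on the learning set."""
    p, e = proxy_solver(xs), exact_solver(xs)
    A = np.vstack([p, np.ones_like(p)]).T
    coef, *_ = np.linalg.lstsq(A, e, rcond=None)
    return lambda y: coef[0] * y + coef[1]

pool = rng.uniform(0.0, 3.0, 50)   # candidate realizations
train = list(pool[:3])             # small initial learning set
prev = None
for x_new in pool[3:]:
    train.append(x_new)            # add one realization per iteration
    model = fit_error_model(np.array(train))
    pred = model(proxy_solver(pool))   # corrected predictions for the pool
    if prev is not None and np.max(np.abs(pred - prev)) < 1e-3:
        break                      # stability-based stopping criterion
    prev = pred
```

    Each corrected prediction now costs only a proxy run; the Leave-One-Out Cross-Validation over the learning set (omitted here) would additionally select the dimension of the response spaces.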

  9. The Automated Geospatial Watershed Assessment Tool (AGWA): Developing Post-Fire Model Parameters Using Precipitation and Runoff Records from Gauged Watersheds

    NASA Astrophysics Data System (ADS)

    Sheppard, B. S.; Goodrich, D. C.; Guertin, D. P.; Burns, I. S.; Canfield, E.; Sidman, G.

    2014-12-01

    New tools and functionality have been incorporated into the Automated Geospatial Watershed Assessment Tool (AGWA) to assess the impacts of wildfire on runoff and erosion. AGWA (see: www.tucson.ars.ag.gov/agwa or http://www.epa.gov/esd/land-sci/agwa/) is a GIS interface jointly developed by the USDA-Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execution of a suite of hydrologic and erosion models (RHEM, WEPP, KINEROS2 and SWAT). Through an intuitive interface the user selects an outlet from which AGWA delineates and discretizes the watershed using a Digital Elevation Model (DEM). The watershed model elements are then intersected with terrain, soils, and land cover data layers to derive the requisite model input parameters. With the addition of a burn severity map, AGWA can be used to model post-wildfire changes to a catchment. By applying the same design storm to burned and unburned conditions, a rapid assessment of the watershed can be made and the areas most prone to flooding can be identified. Post-fire precipitation and runoff records from gauged forested watersheds are now being used to improve post-fire model input parameters. Rainfall and runoff pairs have been selected from these records in order to calibrate parameter values for surface roughness and saturated hydraulic conductivity used in the KINEROS2 model. Several objective functions will be tried in the calibration process, and the results will be validated. Currently, Department of the Interior Burned Area Emergency Response (DOI BAER) teams are using the AGWA-KINEROS2 modeling interface to assess hydrologically imposed risk immediately following wildfire. These parameter refinements are being made to further improve the quality of these assessments.

  10. Driven Interfaces: From Flow to Creep Through Model Reduction

    NASA Astrophysics Data System (ADS)

    Agoritsas, Elisabeth; García-García, Reinaldo; Lecomte, Vivien; Truskinovsky, Lev; Vandembroucq, Damien

    2016-08-01

    The response of spatially extended systems to a force leading their steady state out of equilibrium is strongly affected by the presence of disorder. We focus on the mean velocity induced by a constant force applied on one-dimensional interfaces. In the absence of disorder, the velocity is linear in the force. In the presence of disorder, it is widely accepted, as well as experimentally and numerically verified, that the velocity exhibits a stretched exponential dependence on the force (the so-called `creep law'), which is out of reach of linear response, or more generically of direct perturbative expansions at small force. In dimension one, there is no exact analytical derivation of such a law, even from a theoretical physical point of view. We propose an effective model with two degrees of freedom, constructed from the full spatially extended model, that captures many aspects of the creep phenomenology. It provides a justification of the creep law form of the velocity-force characteristics, in a quasistatic approximation. Moreover, it allows us to capture the non-trivial effects of short-range correlations in the disorder, which govern the low-temperature asymptotics. It enables us to establish a phase diagram where the creep law manifests itself in the vicinity of the origin in the force-system-size-temperature coordinates. Conjointly, we characterise the crossover between the creep regime and a linear-response regime that arises due to finite system size.
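
    For reference, the creep law discussed above is commonly written (in generic notation, not necessarily the authors') as a stretched exponential in the inverse force,

```latex
v(f) \sim \exp\!\left[-\frac{U_c}{k_B T}\left(\frac{f_c}{f}\right)^{\mu}\right],
\qquad
\mu = \frac{d - 2 + 2\zeta_{\mathrm{eq}}}{2 - \zeta_{\mathrm{eq}}},
```

    where $f_c$ is the zero-temperature depinning threshold and $\zeta_{\mathrm{eq}}$ the equilibrium roughness exponent; for a one-dimensional interface with random-bond disorder, $\zeta_{\mathrm{eq}} = 2/3$ gives $\mu = 1/4$.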

  11. Challenges in Modeling of the Plasma-Material Interface

    NASA Astrophysics Data System (ADS)

    Krstic, Predrag; Meyer, Fred; Allain, Jean Paul

    2013-09-01

    The plasma-material interface mixes materials of two worlds, creating a new entity, a dynamical surface, which communicates between the two and represents one of the most challenging areas of multidisciplinary science, with many fundamental processes and synergies. How can an integrated theoretical-experimental approach be built? Without mutual validation of experiment and theory, the chances of obtaining believable results are slim. The outreach of PMI science modeling at fusion plasma facilities is illustrated by the significant step forward in understanding achieved recently by the quantum-classical modeling of lithiated carbon surfaces irradiated by deuterium, showing a surprisingly large role of oxygen in deuterium retention and erosion chemistry. The plasma-facing walls of next-generation fusion reactors will be exposed to high fluxes of neutrons and plasma particles and will operate at high temperatures for thermodynamic efficiency. To this end we have been studying the evolution dynamics of vacancies and interstitials to the saturated dpa doses of tungsten surfaces bombarded by self-atoms, as well as the plasma-surface interactions of the damaged surfaces (erosion, hydrogen and helium uptake, and fuzz formation). PSK and FWM acknowledge support of the ORNL LDRD program.

  12. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

    A method is described to improve a linear nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, achieved without eigensolutions or inversion of a large matrix.

  13. Phase field modeling of a glide dislocation transmission across a coherent sliding interface

    NASA Astrophysics Data System (ADS)

    Zheng, Songlin; Ni, Yong; He, Linghui

    2015-04-01

    Three-dimensional phase field microelasticity modeling and simulation capable of representing core structure and elastic interactions of dislocations are used to study a glide dislocation transmission across a coherent sliding interface in face-centered cubic metals. We investigate the role of the interface sliding process, which is described as the reversible motion of interface dislocation, on the interfacial barrier strength to transmission. Numerical results show that a wider transient interface sliding zone develops on the interface with a lower interfacial unstable stacking fault energy to trap the glide dislocation, leading to a stronger barrier to transmission. The interface sliding zone shrinks in the case of high applied stress and low mobility for the interfacial dislocation. This indicates that such interfacial barrier strength might be rate dependent. We discuss the calculated interfacial barrier strength for the Cu/Ni interface, including the contribution of interface sliding, which is comparable to previous atomistic simulations.

  14. Model-based automated segmentation of kinetochore microtubule from electron tomography.

    PubMed

    Jiang, Ming; Ji, Qiang; McEwen, Bruce

    2004-01-01

    The segmentation of kinetochore microtubules from electron tomography is challenging due to the poor quality of the acquired data and the cluttered cellular surroundings. We propose to automate the microtubule segmentation by extending the active shape model (ASM) in two aspects. First, we develop a higher order boundary model obtained by 3-D local surface estimation that characterizes the microtubule boundary better than the gray level appearance model in the 2-D microtubule cross section. We then incorporate this model into the weight matrix of the fitting error measurement to increase the influence of salient features. Second, we integrate the ASM with Kalman filtering to utilize the shape information along the longitudinal direction of the microtubules. The ASM modified in this way is robust against missing data and outliers frequently present in the kinetochore tomography volume. Experimental results demonstrate that our automated method outperforms the manual process while requiring only a fraction of the time. PMID:17272020
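
    The Kalman filtering step can be illustrated with a minimal one-dimensional constant-velocity filter that propagates a centerline coordinate from slice to slice and simply skips the measurement update on missing slices. All noise parameters below are invented for illustration; they are not taken from the paper.

```python
import numpy as np

# 1-D constant-velocity Kalman filter tracking a microtubule centerline
# coordinate from one tomographic slice to the next.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, drift)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = np.diag([0.01, 0.01])                # process noise (illustrative)
R = np.array([[0.5]])                    # measurement noise (illustrative)

def track(measurements):
    """Filter a list of per-slice positions; None marks a missing slice.
    Assumes the first slice is observed."""
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    path = []
    for z in measurements:
        x = F @ x                        # predict along the long axis
        P = F @ P @ F.T + Q
        if z is not None:                # skip update on missing slices
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        path.append(x[0])
    return path

# Noisy centerline with a missing observation (e.g., an occluded slice).
print(track([0.0, 1.1, None, 2.9, 4.2]))
```

    In the actual segmentation, the filtered state would seed the ASM fit in the next cross section, which is what makes the tracker robust to outliers and gaps.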

  15. Reduced complexity structural modeling for automated airframe synthesis

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1987-01-01

    A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional cantilever and joined-wing configurations.

  16. Combining neural network models for automated diagnostic systems.

    PubMed

    Ubeyli, Elif Derya

    2006-12-01

    This paper illustrates the use of combined neural network (CNN) models to guide model selection for diagnosis of internal carotid arterial (ICA) disorders. The ICA Doppler signals were decomposed into time-frequency representations using the discrete wavelet transform, and statistical features were calculated to depict their distribution. The first-level networks were implemented for the diagnosis of ICA disorders using the statistical features as inputs. To improve diagnostic accuracy, the second-level network was trained using the outputs of the first-level networks as input data. The CNN models achieved accuracy rates which were higher than those of the stand-alone neural network models. PMID:17233161
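
    The two-level combination can be sketched with tiny logistic-regression "networks" on synthetic data. The data below are stand-ins for the wavelet-derived Doppler features; nothing here reproduces the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_logistic(X, y, steps=2000, lr=0.5):
    """Tiny logistic-regression 'network' trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return lambda X: 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Synthetic two-class feature data (illustrative only).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(float)

# First level: two networks, each seeing only half of the features.
m1 = train_logistic(X[:, :2], y)
m2 = train_logistic(X[:, 2:], y)

# Second level: a network trained on the first-level outputs.
Z = np.column_stack([m1(X[:, :2]), m2(X[:, 2:])])
combiner = train_logistic(Z, y)

acc = ((combiner(Z) > 0.5) == y).mean()
```

    The second-level model sees only the first-level outputs, which is what lets it correct their individual biases.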

  17. Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces

    SciTech Connect

    Lomov, I; Antoun, T; Vorobiev, O

    2009-12-16

    Accurate representation of discontinuities such as joints and faults is a key ingredient for high fidelity modeling of shock propagation in geologic media. The following study was done to improve treatment of discontinuities (joints) in the Eulerian hydrocode GEODYN (Lomov and Liu 2005). Lagrangian methods with conforming meshes and explicit inclusion of joints in the geologic model are well suited for such an analysis. Unfortunately, current meshing tools are unable to automatically generate adequate hexahedral meshes for large numbers of irregular polyhedra. Another concern is that joint stiffness in such explicit computations requires significantly reduced time steps, with negative implications for both the efficiency and quality of the numerical solution. An alternative approach is to use non-conforming meshes and embed joint information into regular computational elements. However, once slip displacement on the joints become comparable to the zone size, Lagrangian (even non-conforming) meshes could suffer from tangling and decreased time step problems. The use of non-conforming meshes in an Eulerian solver may alleviate these difficulties and provide a viable numerical approach for modeling the effects of faults on the dynamic response of geologic materials. We studied shock propagation in jointed/faulted media using a Lagrangian and two Eulerian approaches. To investigate the accuracy of this joint treatment the GEODYN calculations have been compared with results from the Lagrangian code GEODYN-L which uses an explicit treatment of joints via common plane contact. We explore two approaches to joint treatment in the code, one for joints with finite thickness and the other for tight joints. In all cases the sliding interfaces are tracked explicitly without homogenization or blending the joint and block response into an average response. In general, rock joints will introduce an increase in normal compliance in addition to a reduction in shear strength. 
In the

  18. Proteomics for Validation of Automated Gene Model Predictions

    SciTech Connect

    Zhou, Kemin; Panisko, Ellen A.; Magnuson, Jon K.; Baker, Scott E.; Grigoriev, Igor V.

    2008-02-14

    High-throughput liquid chromatography mass spectrometry (LC-MS)-based proteomic analysis has emerged as a powerful tool for functional annotation of genome sequences. These analyses complement the bioinformatic and experimental tools used for deriving, verifying, and functionally annotating models of genes and their transcripts. Furthermore, proteomics extends verification and functional annotation to the level of the translation product of the gene model.

  19. Achieving runtime adaptability through automated model evolution and variant selection

    NASA Astrophysics Data System (ADS)

    Mosincat, Adina; Binder, Walter; Jazayeri, Mehdi

    2014-01-01

    Dynamically adaptive systems propose adaptation by means of variants that are specified in the system model at design time and allow for a fixed set of different runtime configurations. However, in a dynamic environment, unanticipated changes may result in the inability of the system to meet its quality requirements. To allow the system to react to these changes, this article proposes a solution for automatically evolving the system model by integrating new variants and periodically validating the existing ones based on updated quality parameters. To illustrate this approach, the article presents a BPEL-based framework using a service composition model to represent the functional requirements of the system. The framework estimates quality of service (QoS) values based on information provided by a monitoring mechanism, ensuring that changes in QoS are reflected in the system model. The article shows how the evolved model can be used at runtime to increase the system's autonomic capabilities and delivered QoS.

  20. Automated parametrical antenna modelling for ambient assisted living applications

    NASA Astrophysics Data System (ADS)

    Kazemzadeh, R.; John, W.; Mathis, W.

    2012-09-01

    In this paper a parametric modeling technique for a fast polynomial extraction of the physically relevant parameters of inductively coupled RFID/NFC (radio frequency identification/near field communication) antennas is presented. The polynomial model equations are obtained by means of a three-step procedure: first, full Partial Element Equivalent Circuit (PEEC) antenna models are determined by means of a number of parametric simulations within the input parameter range of a certain antenna class. Based on these models, the RLC antenna parameters are extracted in a subsequent model reduction step. Employing these parameters, polynomial equations describing the antenna parameter with respect to (w.r.t.) the overall antenna input parameter range are extracted by means of polynomial interpolation and approximation of the change of the polynomials' coefficients. The described approach is compared to the results of a reference PEEC solver with regard to accuracy and computation effort.

  1. Man power/cost estimation model: Automated planetary projects

    NASA Technical Reports Server (NTRS)

    Kitchen, L. D.

    1975-01-01

    A manpower/cost estimation model is developed which is based on a detailed level of financial analysis of over 30 million raw data points which are then compacted by more than three orders of magnitude to the level at which the model is applicable. The major parameter of expenditure is manpower (specifically direct labor hours) for all spacecraft subsystem and technical support categories. The resultant model is able to provide a mean absolute error of less than fifteen percent for the eight programs comprising the model data base. The model includes cost-saving inheritance factors, broken down into four levels, for estimating follow-on type programs where hardware and design inheritance are evident or expected.

  2. Analytical and numerical modeling of non-collinear shear wave mixing at an imperfect interface.

    PubMed

    Zhang, Ziyin; Nagy, Peter B; Hassan, Waled

    2016-02-01

    Non-collinear shear wave mixing at an imperfect interface between two solids can be exploited for nonlinear ultrasonic assessment of bond quality. In this study we developed two analytical models for nonlinear imperfect interfaces. The first model uses a finite nonlinear interfacial stiffness representation of an imperfect interface of vanishing thickness, while the second model relies on a thin nonlinear interphase layer to represent an imperfect interface region. The second model is actually a derivative of the first model obtained by calculating the equivalent interfacial stiffness of a thin isotropic nonlinear interphase layer in the quasi-static approximation. The predictions of both analytical models were numerically verified by comparison to COMSOL finite element simulations. These models can accurately predict the additional nonlinearity caused by interface imperfections based on the strength of the reflected and transmitted mixed longitudinal waves produced by them under non-collinear shear wave interrogation. PMID:26482394
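
    As background for the quasi-static reduction mentioned above: a thin isotropic layer of thickness $h$ with Lamé constants $\lambda$ and $\mu$, sandwiched between two half-spaces, acts at wavelengths much larger than $h$ like distributed normal and transverse interfacial springs (a standard spring boundary condition; the notation here is generic rather than the paper's),

```latex
K_n = \frac{\lambda + 2\mu}{h},
\qquad
K_t = \frac{\mu}{h},
```

    so that the tractions relate to the displacement jumps across the layer by $\sigma_n = K_n\,\Delta u_n$ and $\sigma_t = K_t\,\Delta u_t$.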

  3. Analytical and numerical modeling of non-collinear shear wave mixing at an imperfect interface

    NASA Astrophysics Data System (ADS)

    Zhang, Ziyin; Nagy, Peter B.; Hassan, Waled

    2016-02-01

    Non-collinear shear wave mixing at an imperfect interface between two solids can be exploited for nonlinear ultrasonic assessment of bond quality. In this study we developed two analytical models for nonlinear imperfect interfaces. The first model uses a finite nonlinear interfacial stiffness representation of an imperfect interface of vanishing thickness, while the second model relies on a thin nonlinear interphase layer to represent an imperfect interface region. The second model is actually a derivative of the first model obtained by calculating the equivalent interfacial stiffness of a thin isotropic nonlinear interphase layer in the quasi-static approximation. The predictions of both analytical models were numerically verified by comparison to COMSOL finite element simulations. These models can accurately predict the excess nonlinearity caused by interface imperfections based on the strength of the reflected and transmitted mixed longitudinal waves produced by them under non-collinear shear wave interrogation.

  4. Web Interface for Modeling Fog Oil Dispersion During Training

    NASA Astrophysics Data System (ADS)

    Lozar, Robert C.

    2002-08-01

    Predicting the dispersion of military camouflage training materials, Smokes and Obscurants (SO), is a rapidly improving science. The Defense Threat Reduction Agency (DTRA) developed the Hazard Prediction and Assessment Capability (HPAC), a software package that allows the modeling of the dispersion of several potentially detrimental materials. ERDC/CERL characterized the most commonly used SO material, fog oil in HPAC terminology, to predict the SO dispersion characteristics in various training scenarios that might have an effect on Threatened and Endangered Species (TES) at DoD installations. To make the configuration more user-friendly, the researchers implemented an initial web-interface version of HPAC with a modifiable fog-oil component that can be applied at any installation in the world. By this method, an installation SO trainer can plan the location and time of fog oil training activities and is able to predict the degree to which various areas will be affected, which is particularly important in ensuring the appropriate management of TES on a DoD installation.

  5. Petri net-based modelling of human-automation conflicts in aviation.

    PubMed

    Pizziol, Sergio; Tessier, Catherine; Dehais, Frédéric

    2014-01-01

    Analyses of aviation safety reports reveal that human-machine conflicts induced by poor automation design are remarkable precursors of accidents. A review of different crew-automation conflicting scenarios shows that they have a common denominator: the autopilot behaviour interferes with the pilot's goal regarding the flight guidance via 'hidden' mode transitions. Considering both the human operator and the machine (i.e. the autopilot or the decision functions) as agents, we propose a Petri net model of those conflicting interactions, which allows them to be detected as deadlocks in the Petri net. In order to test our Petri net model, we designed an autoflight system that was formally analysed to detect conflicting situations. We identified three conflicting situations that were integrated in an experimental scenario in a flight simulator with 10 general aviation pilots. The results showed that the conflicts we had identified a priori as critical impacted the pilots' performance. Indeed, the first conflict remained unnoticed by eight participants and led to a potential collision with another aircraft. The second conflict was detected by all the participants, but three of them did not manage the situation correctly. The last conflict was also detected by all the participants but provoked a typical automation surprise situation, as only one declared that he had understood the autopilot behaviour. These behavioural results are discussed in terms of workload and number of fired 'hidden' transitions. Eventually, this study reveals that both formal and experimental approaches are complementary to identify and assess the criticality of human-automation conflicts. Practitioner Summary: We propose a Petri net model of human-automation conflicts. An experiment was conducted with general aviation pilots performing a scenario involving three conflicting situations to test the soundness of our formal approach. 
This study reveals that both formal and experimental approaches
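
    The deadlock detection underlying this approach can be sketched as a breadth-first reachability search over markings. The toy net below (place and transition names invented, far simpler than the paper's autoflight model) has a 'hidden' mode change that strands the system in a marking with no enabled transition:

```python
from collections import deque

# Transitions as (pre, post): tokens consumed from and produced in places.
transitions = {
    "pilot_engages":   ({"manual": 1}, {"guided": 1}),
    "hidden_mode_chg": ({"guided": 1}, {"alt_hold": 1}),
    "pilot_disengage": ({"guided": 1}, {"manual": 1}),
}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}   # canonical marking

def find_deadlocks(initial):
    """BFS over reachable markings; a marking with no enabled
    transition is a deadlock (a conflict the crew cannot leave)."""
    seen, queue, deadlocks = set(), deque([initial]), []
    while queue:
        m = queue.popleft()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        successors = [fire(m, pre, post)
                      for pre, post in transitions.values() if enabled(m, pre)]
        if not successors:
            deadlocks.append(m)
        queue.extend(successors)
    return deadlocks

print(find_deadlocks({"manual": 1}))   # [{'alt_hold': 1}]
```

    Here the hidden transition consumes the 'guided' token that the pilot's disengage action needs, so the net deadlocks in 'alt_hold'.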

  6. Continental hydrosystem modelling: the concept of nested stream-aquifer interfaces

    NASA Astrophysics Data System (ADS)

    Flipo, N.; Mouhri, A.; Labarthe, B.; Biancamaria, S.

    2014-01-01

    Recent developments in hydrological modelling are based on a view of the interface being a single continuum through which water flows. These coupled hydrological-hydrogeological models, emphasising the importance of the stream-aquifer interface, are more and more used in hydrological sciences for pluri-disciplinary studies aiming at investigating environmental issues. This notion of a single continuum, which is accepted by the hydrological modellers, originates in the historical modelling of hydrosystems based on the hypothesis of a homogeneous media that led to the Darcy law. There is then a need to first bridge the gap between hydrological and eco-hydrological views of the stream-aquifer interfaces, and, secondly, to rationalise the modelling of stream-aquifer interface within a consistent framework that fully takes into account the multi-dimensionality of the stream-aquifer interfaces. We first define the concept of nested stream-aquifer interfaces as a key transitional component of continental hydrosystem. Based on a literature review, we then demonstrate the usefulness of the concept for the multi-dimensional study of the stream-aquifer interface, with a special emphasis on the stream network, which is identified as the key component for scaling hydrological processes occurring at the interface. Finally we focus on the stream-aquifer interface modelling at different scales, with up-to-date methodologies and give some guidances for the multi-dimensional modelling of the interface using the innovative methodology MIM (Measurements-Interpolation-Modelling), which is graphically developed, scaling in space the three pools of methods needed to fully understand stream-aquifer interfaces at various scales. The outcome of MIM is the localisation in space of the stream-aquifer interface types that can be studied by a given approach. The efficiency of the method is demonstrated with two approaches from the local (~1 m) to the continental (<10 M km2) scale.

  7. Automated mask creation from a 3D model using Faethm.

    SciTech Connect

    Schiek, Richard Louis; Schmidt, Rodney Cannon

    2007-11-01

    We have developed and implemented a method which given a three-dimensional object can infer from topology the two-dimensional masks needed to produce that object with surface micro-machining. The masks produced by this design tool can be generic, process independent masks, or if given process constraints, specific for a target process. This design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model.

  8. A 2-D Interface Element for Coupled Analysis of Independently Modeled 3-D Finite Element Subdomains

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.

    1998-01-01

    Over the past few years, the development of the interface technology has provided an analysis framework for embedding detailed finite element models within finite element models which are less refined. This development has enabled the use of cascading substructure domains without the constraint of coincident nodes along substructure boundaries. The approach used for the interface element is based on an alternate variational principle often used in deriving hybrid finite elements. The resulting system of equations exhibits a high degree of sparsity but gives rise to a non-positive definite system which causes difficulties with many of the equation solvers in general-purpose finite element codes. Hence, the global system of equations is generally solved using a decomposition procedure with pivoting. The research reported to date for the interface element includes the one-dimensional line interface element and the two-dimensional surface interface element. Several large-scale simulations, including geometrically nonlinear problems, have been reported using the one-dimensional interface element technology; however, only limited applications are available for the surface interface element. In the applications reported to date, the geometry of the interfaced domains exactly match each other even though the spatial discretization within each domain may be different. As such, the spatial modeling of each domain, the interface elements and the assembled system is still laborious. The present research is focused on developing a rapid modeling procedure based on a parametric interface representation of independently defined subdomains which are also independently discretized.

  9. Kriging and automated variogram modeling within a moving window

    NASA Astrophysics Data System (ADS)

    Haas, Timothy C.

    A spatial estimation procedure based on ordinary kriging is described and evaluated which consists of using only sampling sites contained within a moving window centered at the estimate location for modeling the covariance structure and constructing the kriging equations. The moving window, by depending on local data only to estimate the spatial covariance structure and calculate the estimate, is less affected by spatial trend in the data than conventional kriging approaches and implicitly models covariance nonstationarity. The window's covariance structure is estimated by automatically fitting a spherical variogram model to the unbiased estimates of semi-variance calculated at several lags. The automatic fit uses nonlinear least squares regression constrained by the nugget parameter being nonnegative. This estimation method is compared to the more standard method of ordinary kriging over fixed subregions by using both procedures in the analysis of NADP/NTN sulfate deposition data in the conterminous U.S. For this analysis, we find that the moving window scheme provides local variogram models which are minimally affected by trend, and that this use of an ensemble of variograms also allows the accurate modeling of a spatially changing covariance structure. Accurate spatial covariance modeling is needed by acid deposition effects researchers because it is a prerequisite for the calculation of defensible deposition confidence intervals from the error (kriging) variance.
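
    A minimal version of the constrained automatic fit, with a numpy-only grid search standing in for the nonlinear least-squares regression; the lag values and parameters are synthetic, not from the NADP/NTN analysis:

```python
import numpy as np

def spherical(h, nugget, sill, rng_):
    """Spherical variogram: nugget plus a ramp to the sill at the range."""
    gamma = np.where(
        h < rng_,
        nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3),
        sill,
    )
    return np.where(h == 0, 0.0, gamma)   # zero semivariance at zero lag

def fit_spherical(lags, gammas):
    """Constrained least squares by grid search; the nugget is kept
    nonnegative, as in the moving-window procedure described above."""
    best, best_sse = None, np.inf
    for nugget in np.linspace(0.0, gammas.max(), 20):        # nugget >= 0
        for sill in np.linspace(nugget, 1.5 * gammas.max(), 30):
            for rng_ in np.linspace(lags[1], 2 * lags.max(), 30):
                sse = np.sum((spherical(lags, nugget, sill, rng_) - gammas) ** 2)
                if sse < best_sse:
                    best, best_sse = (nugget, sill, rng_), sse
    return best

# Empirical semivariances at several lags (synthetic, illustrative).
lags = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
true = spherical(lags, 0.1, 1.0, 4.0)
nugget, sill, rng_ = fit_spherical(lags, true)
```

    In the moving-window procedure this fit would be repeated for the sampling sites inside each window, yielding the ensemble of local variograms.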

  10. Automated biowaste sampling system urine subsystem operating model, part 1

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Mangialardi, J. K.; Rosen, F.

    1973-01-01

    The urine subsystem automatically provides for the collection, volume sensing, and sampling of urine from six subjects during space flight. Verification of the subsystem design was a primary objective of the current effort, which was accomplished through the detailed design, fabrication, and verification testing of an operating model of the subsystem.

  11. A simplified cellular automaton model for city traffic

    SciTech Connect

    Simon, P.M.; Nagel, K.

    1997-12-31

    The authors systematically investigate the effect of blockage sites in a cellular automata model for traffic flow. Different scheduling schemes for the blockage sites are considered. None of them returns a linear relationship between the fraction of green time and the throughput. The authors use this information for a fast implementation of traffic in Dallas.
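A minimal cellular-automaton sketch of a blockage (traffic-light) site, assuming a single-lane ring with maximum speed 1 and a fixed-cycle signal; the lattice size, density, and cycle are illustrative, not the Dallas configuration:

```python
import numpy as np

def step(road, site, green):
    """One parallel update of a v_max = 1 CA on a ring: a car advances
    when the cell ahead is empty, except that entry into the blockage
    site also requires a green signal. Returns the new occupancy and
    the number of cars that crossed into the blockage site."""
    n = len(road)
    new = np.zeros_like(road)
    crossed = 0
    for i in range(n):
        if road[i]:
            j = (i + 1) % n
            if road[j] == 0 and not (j == site and not green):
                new[j] = 1
                crossed += (j == site)
            else:
                new[i] = 1
    return new, crossed

def throughput(green_steps, period=10, steps=2000):
    """Mean crossings per step at the blockage for a fixed-cycle signal
    that is green for `green_steps` out of every `period` steps."""
    road = np.zeros(100, dtype=int)
    road[::3] = 1                      # 34 cars, density 0.34
    total = 0
    for t in range(steps):
        road, c = step(road, site=50, green=(t % period) < green_steps)
        total += c
    return total / steps

# Sweeping the green fraction shows that throughput is not simply
# proportional to green time, in line with the abstract's finding.
flows = [throughput(g) for g in (2, 5, 8, 10)]
```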

  12. A Voyage to Arcturus: A model for automated management of a WLCG Tier-2 facility

    NASA Astrophysics Data System (ADS)

    Roy, Gareth; Crooks, David; Mertens, Lena; Mitchell, Mark; Purdie, Stuart; Cadellin Skipsey, Samuel; Britton, David

    2014-06-01

    With the current trend towards "On Demand Computing" in big data environments it is crucial that the deployment of services and resources becomes increasingly automated. Deployment based on cloud platforms is available for large scale data centre environments but these solutions can be too complex and heavyweight for smaller, resource constrained WLCG Tier-2 sites. Along with a greater need for bespoke monitoring and collection of Grid-related metrics, a more lightweight and modular approach is desired. In this paper we present a model for a lightweight automated framework which can be used to build WLCG grid sites, based on "off the shelf" software components. As part of the research into an automation framework the use of both IPMI and SNMP for physical device management will be included, as well as the use of SNMP as a monitoring/data sampling layer such that more comprehensive decision making can take place and potentially be automated. This could lead to reduced downtime and better performance as services are recognised to be in a non-functional state by autonomous systems.

  13. Automated volumetric breast density derived by shape and appearance modeling

    NASA Astrophysics Data System (ADS)

    Malkov, Serghei; Kerlikowske, Karla; Shepherd, John

    2014-03-01

    The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by a single-energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal Component Analysis (PCA) was applied to extract significant and uncorrelated components of %FGV. To build an appearance model, we transformed the breast images into the mean texture image by piecewise linear image transformation. Using PCA, the image pixel grey-scale values were converted into a reduced set of shape and texture features. Stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV from the shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume, actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient, 0.77, was between the first shape PCA component and actual breast volume. The stepwise regression method with ten-fold cross-validation to predict %FGV from shape and appearance variables and other system outcome parameters generated a model with a correlation of r^2 = 0.8. In conclusion, a shape and appearance model demonstrated excellent feasibility for extracting variables useful for automatic %FGV estimation. Further exploration and testing of this approach is warranted.
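The PCA step that turns aligned shapes into a reduced set of uncorrelated scores can be sketched as follows. The data here are synthetic stand-ins (the real model uses aligned mammogram landmarks plus warped texture values); the 95% variance cutoff is an assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in: 200 cases, each described by 20 aligned landmark
# coordinates generated from 3 underlying modes plus noise.
n, p = 200, 20
latent = rng.normal(size=(n, 3))
modes = rng.normal(size=(3, p))
X = latent @ modes + 0.05 * rng.normal(size=(n, p))

# PCA via SVD of the centered data: the scores (projections onto the
# principal components) are the uncorrelated "shape features" that
# would be fed to the stepwise regression.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1   # keep 95% variance
scores = Xc @ Vt[:k].T                                     # reduced feature set
```

Because the scores are projections onto orthogonal components, their sample cross-correlations vanish by construction.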

  14. Automated Volumetric Breast Density derived by Shape and Appearance Modeling.

    PubMed

    Malkov, Serghei; Kerlikowske, Karla; Shepherd, John

    2014-03-22

    The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by a single-energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal Component Analysis (PCA) was applied to extract significant and uncorrelated components of %FGV. To build an appearance model, we transformed the breast images into the mean texture image by piecewise linear image transformation. Using PCA, the image pixel grey-scale values were converted into a reduced set of shape and texture features. Stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV from the shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume, actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient, 0.77, was between the first shape PCA component and actual breast volume. The stepwise regression method with ten-fold cross-validation to predict %FGV from shape and appearance variables and other system outcome parameters generated a model with a correlation of r^2 = 0.8. In conclusion, a shape and appearance model demonstrated excellent feasibility for extracting variables useful for automatic %FGV estimation. Further exploration and testing of this approach is warranted. PMID:25083119

  15. Fundamental processes of exciton scattering at organic solar-cell interfaces: One-dimensional model calculation

    NASA Astrophysics Data System (ADS)

    Masugata, Yoshimitsu; Iizuka, Hideyuki; Sato, Kosuke; Nakayama, Takashi

    2016-08-01

    Fundamental processes of exciton scattering at organic solar-cell interfaces were studied using a one-dimensional tight-binding model and by performing a time-evolution simulation of electron–hole pair wave packets. We found the fundamental features of exciton scattering: the scattering promotes not only the dissociation of excitons and the generation of interface-bound (charge-transferred) excitons but also the transmission and reflection of excitons, depending on the electron and hole interface offsets. In particular, the dissociation increases over a certain range of interface offsets, while the transmission shows resonances with higher-energy bound-exciton and interface bound-exciton states. We also studied the effects of carrier-transfer and potential modulations at the interface and the scattering of charged excitons, and we found trap dissociations in which one of the carriers is trapped around the interface after the dissociation.
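A single-particle sketch of the wave-packet methodology (the paper evolves electron–hole pair packets; the chain length, hopping, and interface offset below are assumed, not the paper's parameters):

```python
import numpy as np

# 1D tight-binding chain with hopping t_hop and an interface potential
# step of height V on the right half (all values illustrative).
N, t_hop, V = 200, 1.0, 0.5
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = -t_hop        # nearest-neighbor hopping
idx = np.arange(N // 2, N)
H[idx, idx] = V                               # offset across the interface

# Gaussian wave packet left of the interface, moving right.
x = np.arange(N)
k0, x0, sigma = 1.2, N // 4, 10.0
psi0 = np.exp(-(x - x0) ** 2 / (4.0 * sigma**2) + 1j * k0 * x)
psi0 /= np.linalg.norm(psi0)

# Exact time evolution by spectral decomposition of H.
E, U = np.linalg.eigh(H)
psi_t = U @ (np.exp(-1j * E * 60.0) * (U.conj().T @ psi0))

T = np.sum(np.abs(psi_t[N // 2:]) ** 2)       # transmitted fraction
R = np.sum(np.abs(psi_t[:N // 2]) ** 2)       # reflected fraction
```

Sweeping `V` in such a simulation is how transmission/reflection versus interface offset would be mapped out.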

  16. An Improvement in Thermal Modelling of Automated Tape Placement Process

    SciTech Connect

    Barasinski, Anaies; Leygue, Adrien; Poitou, Arnaud; Soccard, Eric

    2011-01-17

    The thermoplastic tape placement process offers the possibility of manufacturing large laminated composite parts with all kinds of geometries (e.g., doubly curved). This process is based on the fusion bonding of a thermoplastic tape on a substrate. It has received growing interest in recent years because it does not require an autoclave. In order to control and optimize the quality of the manufactured part, we need to predict the temperature field throughout the processing of the laminate. In this work, we focus on a thermal model of this process which takes into account the imperfect bonding between the different layers of the substrate by introducing a thermal contact resistance into the model. This study draws on experimental results which show that the value of the thermal resistance evolves with the temperature and pressure applied to the material.
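The role of a thermal contact resistance can be illustrated with a steady one-dimensional conduction calculation through two bonded plies; all property values below are assumed for illustration, not taken from the paper:

```python
# Steady 1-D conduction through two plies with an imperfect interface,
# modeled as a thermal contact resistance R_c in series with the ply
# conduction resistances (all values illustrative).
k1, L1 = 0.7, 0.2e-3     # ply 1: conductivity [W/m/K], thickness [m]
k2, L2 = 0.7, 0.2e-3     # ply 2
R_c    = 1.0e-3          # contact resistance [m^2 K / W]
T_hot, T_cold = 400.0, 300.0    # surface temperatures [K]

R_total = L1 / k1 + R_c + L2 / k2      # series resistances per unit area
q = (T_hot - T_cold) / R_total         # heat flux [W/m^2]
dT_interface = q * R_c                 # temperature jump at the bond line
```

With these numbers the interface jump dominates the total temperature drop, which is why the evolution of R_c with temperature and pressure matters for predicting the field during consolidation.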

  17. An Improvement in Thermal Modelling of Automated Tape Placement Process

    NASA Astrophysics Data System (ADS)

    Barasinski, Anaïs; Leygue, Adrien; Soccard, Eric; Poitou, Arnaud

    2011-01-01

    The thermoplastic tape placement process offers the possibility of manufacturing large laminated composite parts with all kinds of geometries (e.g., doubly curved). This process is based on the fusion bonding of a thermoplastic tape on a substrate. It has received growing interest in recent years because it does not require an autoclave. In order to control and optimize the quality of the manufactured part, we need to predict the temperature field throughout the processing of the laminate. In this work, we focus on a thermal model of this process which takes into account the imperfect bonding between the different layers of the substrate by introducing a thermal contact resistance into the model. This study draws on experimental results which show that the value of the thermal resistance evolves with the temperature and pressure applied to the material.

  18. Model-based metrics of human-automation function allocation in complex work environments

    NASA Astrophysics Data System (ADS)

    Kim, So Young

    Function allocation is the design decision which assigns work functions to all agents in a team, both human and automated. Efforts to guide function allocation systematically have been studied in many fields such as engineering, human factors, team and organization design, management science, and cognitive systems engineering. Each field focuses on certain aspects of function allocation, but not all; thus, an independent discussion of each does not address all necessary issues with function allocation. Four distinctive perspectives emerged from a review of these fields: technology-centered, human-centered, team-oriented, and work-oriented. Each perspective focuses on different aspects of function allocation: capabilities and characteristics of agents (automation or human), team structure and processes, and work structure and the work environment. Together, these perspectives identify the following eight issues with function allocation: 1) Workload, 2) Incoherency in function allocations, 3) Mismatches between responsibility and authority, 4) Interruptive automation, 5) Automation boundary conditions, 6) Function allocation preventing human adaptation to context, 7) Function allocation destabilizing the humans' work environment, and 8) Mission performance. Addressing these issues systematically requires formal models and simulations that include all necessary aspects of human-automation function allocation: the work environment, the dynamics inherent to the work, agents, and relationships among them. Also, addressing these issues requires not only a (static) model, but also a (dynamic) simulation that captures temporal aspects of work, such as the timing of actions and their impact on the agent's work.
Therefore, with properly modeled work as described by the work environment, the dynamics inherent to the work, agents, and relationships among them, a modeling framework developed by this thesis, which includes static work models and dynamic simulation, can capture the

  19. Automated Antenna Orientation For Wireless Data Transfer Using Bayesian Modeling

    NASA Astrophysics Data System (ADS)

    Guttman, Rotem D.

    2009-12-01

    The problem of attaining a usable wireless connection at an arbitrary location is one of great concern to mobile end users. The majority of antennae currently in use for mobile devices conducting two way communications are omnidirectional. The use of a directional antenna allows for increased effective coverage area without increasing power consumption. However, directional antennae must be oriented toward a wireless network access point in order for their benefits to be realized. This paper outlines a system for determining the optimal orientation of a directional antenna without the need for additional hardware. The response of the antenna is described by the use of a parameterized model corresponding to the sum of a set of cardioid functions. Signal strength is measured at several antenna orientations and is used by a Metropolis-Hastings search algorithm to estimate the model parameter values that best describe the antenna's response pattern. Using this model the antenna can be oriented to respond optimally to the wireless network access point's broadcast pattern.
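The parameter search can be sketched with a random-walk Metropolis-Hastings sampler. The single-cardioid response, the noise level, and the measurement setup below are assumptions (the paper fits a sum of cardioids):

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed antenna response: one cardioid of peak gain A pointed at
# theta0; all numbers are illustrative.
def response(theta, A, theta0):
    return A * (1.0 + np.cos(theta - theta0)) / 2.0

true_A, true_theta0, noise = 1.0, 2.0, 0.05
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
obs = response(angles, true_A, true_theta0) + noise * rng.normal(size=8)

def log_post(params):
    A, theta0 = params
    if A <= 0.0:
        return -np.inf
    resid = obs - response(angles, A, theta0)
    return -0.5 * np.sum(resid ** 2) / noise ** 2   # flat priors assumed

# Random-walk Metropolis-Hastings over the model parameters.
cur = np.array([0.5, 0.0])
cur_lp = log_post(cur)
samples = []
for _ in range(5000):
    prop = cur + rng.normal(scale=[0.05, 0.1])
    lp = log_post(prop)
    if lp - cur_lp > np.log(rng.random()):
        cur, cur_lp = prop, lp
    samples.append(cur.copy())

A_hat, theta0_hat = np.mean(samples[2500:], axis=0)
# The antenna would then be steered toward theta0_hat.
```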

  20. Automated target recognition using passive radar and coordinated flight models

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2003-09-01

    Rather than emitting pulses, passive radar systems rely on illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. These systems are particularly attractive since they allow receivers to operate without emitting energy, rendering them covert. Many existing passive radar systems estimate the locations and velocities of targets. This paper focuses on adding an automatic target recognition (ATR) component to such systems. Our approach to ATR compares the Radar Cross Section (RCS) of targets detected by a passive radar system to the simulated RCS of known targets. To make the comparison as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. The estimated positions become inputs for an algorithm that uses a coordinated flight model to compute probable aircraft orientation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of several potential target classes as they execute the estimated maneuvers. The RCS is then scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. The Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, so that the RCS can be further scaled. The Rician model compares the RCS of the illuminated aircraft with those of the potential targets. This comparison results in target identification.

  1. Numerical simulation of continuum models for fluid-fluid interface dynamics

    NASA Astrophysics Data System (ADS)

    Gross, S.; Reusken, A.

    2013-05-01

    This paper is concerned with numerical methods for two-phase incompressible flows assuming a sharp interface model for interfacial stresses. Standard continuum models for the fluid dynamics in the bulk phases, for mass transport of a solute between the phases and for surfactant transport on the interface are given. We review some recently developed finite element methods for the appropriate discretization of such models, e.g., a pressure extended finite element (XFE) space which is suitable to represent the pressure jump, a space-time extended finite element discretization for the mass transport equation of a solute and a surface finite element method (SurFEM) for surfactant transport. Numerical experiments based on level set interface capturing and adaptive multilevel finite element discretization are presented for rising droplets with a clean interface model and a spherical droplet in a Poiseuille flow with a Boussinesq-Scriven interface model.

  2. Interface-tracking electro-hydrodynamic model for droplet coalescence

    NASA Astrophysics Data System (ADS)

    Crowl Erickson, Lindsay; Noble, David

    2012-11-01

    Many fluid-based technologies rely on electrical fields to control the motion of droplets, e.g. micro-fluidic devices for high-speed droplet sorting, solution separation for chemical detectors, and purification of biodiesel fuel. Precise control over droplets is crucial to these applications. However, electric fields can induce complex and unpredictable fluid dynamics. Recent experiments (Ristenpart et al. 2009) have demonstrated that oppositely charged droplets bounce rather than coalesce in the presence of strong electric fields. Analytic hydrodynamic approximations for interfaces become invalid near coalescence, and therefore detailed numerical simulations are necessary. We present a conformal decomposition finite element (CDFEM) interface-tracking method for two-phase flow to demonstrate electro-coalescence. CDFEM is a sharp interface method that decomposes elements along fluid-fluid boundaries and uses a level set function to represent the interface. The electro-hydrodynamic equations solved allow for convection of charge and charge accumulation at the interface, both of which may be important factors for the pinch-off dynamics in this parameter regime.

  3. Flashover of a vacuum-insulator interface: A statistical model

    NASA Astrophysics Data System (ADS)

    Stygar, W. A.; Ives, H. C.; Wagoner, T. C.; Lott, J. A.; Anaya, V.; Harjes, H. C.; Corley, J. P.; Shoup, R. W.; Fehl, D. L.; Mowrer, G. R.; Wallace, Z. R.; Anderson, R. A.; Boyes, J. D.; Douglas, J. W.; Horry, M. L.; Jaramillo, T. F.; Johnson, D. L.; Long, F. W.; Martin, T. H.; McDaniel, D. H.; Milton, O.; Mostrom, M. A.; Muirhead, D. A.; Mulville, T. D.; Ramirez, J. J.; Ramirez, L. E.; Romero, T. M.; Seamen, J. F.; Smith, J. W.; Speas, C. S.; Spielman, R. B.; Struve, K. W.; Vogtlin, G. E.; Walsh, D. E.; Walsh, E. D.; Walsh, M. D.; Yamamoto, O.

    2004-07-01

    We have developed a statistical model for the flashover of a 45° vacuum-insulator interface (such as would be found in an accelerator) subject to a pulsed electric field. The model assumes that the initiation of a flashover plasma is a stochastic process, that the characteristic statistical component of the flashover delay time is much greater than the plasma formative time, and that the average rate at which flashovers occur is a power-law function of the instantaneous value of the electric field. Under these conditions, we find that the flashover probability is given by 1 - exp(-Ep^β teff C / k^β), where Ep is the peak value in time of the spatially averaged electric field E(t), teff ≡ ∫[E(t)/Ep]^β dt is the effective pulse width, C is the insulator circumference, k ∝ exp(λ/d), and β and λ are constants. We define E(t) as V(t)/d, where V(t) is the voltage across the insulator and d is the insulator thickness. Since the model assumes that flashovers occur at random azimuthal locations along the insulator, it does not apply to systems that have a significant defect, i.e., a location contaminated with debris or compromised by an imperfection at which flashovers repeatedly take place, and which prevents a random spatial distribution. The model is consistent with flashover measurements to within 7% for pulse widths between 0.5 ns and 10 μs, and to within a factor of 2 between 0.5 ns and 90 s (a span of over 11 orders of magnitude). For these measurements, Ep ranges from 64 to 651 kV/cm, d from 0.50 to 4.32 cm, and C from 4.96 to 95.74 cm. The model is significantly more accurate, and is valid over a wider range of parameters, than the J. C. Martin flashover relation that has been in use since 1971 [J. C. Martin on Pulsed Power, edited by T. H. Martin, A. H. Guenther, and M. Kristiansen (Plenum, New York, 1996)]. We have generalized the statistical model to estimate the total-flashover probability of an insulator stack (i.e., an assembly of insulator-electrode systems
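Evaluating the flashover probability for a given field waveform is a short numerical exercise. The constants β and k and the pulse below are placeholders in arbitrary units, not the fitted values from the paper:

```python
import numpy as np

# P = 1 - exp(-Ep^beta * teff * C / k^beta), with
# teff = integral of (E(t)/Ep)^beta dt over the pulse.
beta = 7.0      # assumed power-law exponent
k = 1.0         # assumed scale constant (arbitrary consistent units)

def flashover_probability(t, E, C):
    Ep = E.max()
    y = (E / Ep) ** beta
    teff = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))  # trapezoid rule
    return 1.0 - np.exp(-(Ep ** beta) * teff * C / k ** beta)

t = np.linspace(0.0, 1.0, 1000)
E = 2.0 * np.sin(np.pi * t)                       # half-sine pulse
p1 = flashover_probability(t, E, C=0.05)
p2 = flashover_probability(t, 1.5 * E, C=0.05)    # higher peak field
```

Because the argument of the exponential grows as Ep^β, modest increases in peak field push the probability sharply toward 1, which is the qualitative behavior the power-law assumption encodes.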

  4. User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework

    USGS Publications Warehouse

    Markstrom, Steven L.; Koczot, Kathryn M.

    2008-01-01

    The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.

  5. An Energy Approach to a Micromechanics Model Accounting for Nonlinear Interface Debonding.

    SciTech Connect

    Tan, H.; Huang, Y.; Geubelle, P. H.; Liu, C.; Breitenfeld, M. S.

    2005-01-01

    We developed a micromechanics model to study the effect of nonlinear interface debonding on the constitutive behavior of composite materials. While implementing this micromechanics model in a large simulation code for solid rockets, we were challenged by problems such as tension/shear coupling and the nonuniform distribution of the displacement jump at the particle/matrix interfaces. We therefore propose an energy approach to solve these problems. This energy approach calculates the potential energy of the representative volume element, including the contribution from the interface debonding. By minimizing the potential energy with respect to the variation of the interface displacement jump, the traction-balanced interface debonding can be found and the macroscopic constitutive relations established. This energy approach has the ability to treat different load conditions in a unified way, and the interface cohesive law can take any arbitrary form. In this paper, the energy approach is verified to give the same constitutive behaviors as reported before.
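The energy approach admits a one-dimensional sketch: a linear "matrix" spring in series with a nonlinear cohesive interface, with the equilibrium jump found by minimizing the potential energy. The exponential cohesive law and all parameter values below are assumptions for illustration, not the paper's model:

```python
import numpy as np
from scipy.optimize import minimize_scalar

k_m     = 100.0    # matrix stiffness (illustrative)
sig_max = 5.0      # cohesive strength
delta_c = 0.05     # critical opening of the cohesive law

def cohesive_energy(d):
    # integral of the cohesive law T(d) = sig_max * (d/delta_c) * exp(1 - d/delta_c)
    return sig_max * delta_c * np.e * (1.0 - (1.0 + d / delta_c) * np.exp(-d / delta_c))

def potential(d, u):
    # spring strain energy + interface debonding energy
    return 0.5 * k_m * (u - d) ** 2 + cohesive_energy(d)

u = 0.03                          # applied displacement
res = minimize_scalar(lambda d: potential(d, u), bounds=(0.0, u), method="bounded")
d_star = res.x

# At the minimum, the spring traction balances the cohesive traction,
# which is the traction-balance condition the abstract describes.
T_spring = k_m * (u - d_star)
T_coh = sig_max * (d_star / delta_c) * np.exp(1.0 - d_star / delta_c)
```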

  6. A formalism for modeling solid electrolyte/electrode interfaces using first principles methods

    NASA Astrophysics Data System (ADS)

    Lepley, Nicholas; Holzwarth, Natalie

    We describe a scheme based on the interface energy for analyzing interfaces between crystalline solids, quantitatively including the effect of lattice strain. This scheme is applied to the modeling of likely interface geometries of several solid state battery materials including Li metal, Li3PO4, Li3PS4, Li2O, and Li2S. We find that all of the interfaces in this study are stable with the exception of Li3PS4/Li. For this chemically unstable interface, the partial density of states helps to identify mechanisms associated with the interface reactions. We also consider the case of charged defects at the interface, and show that accurately modeling them requires a careful treatment of the resulting electric fields. Our energetic measure of interfaces and our analysis of the band alignment between interface materials indicate multiple factors which may be predictors of interface stability, an important property of solid electrolyte systems. Supported by NSF Grant DMR-1105485 and DMR-1507942.

  7. Piloted Simulation of a Model-Predictive Automated Recovery System

    NASA Technical Reports Server (NTRS)

    Liu, James (Yuan); Litt, Jonathan; Sowers, T. Shane; Owens, A. Karl; Guo, Ten-Huei

    2014-01-01

    This presentation describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.

  8. Optical modeling of a-Si:H solar cells with rough interfaces: Effect of back contact and interface roughness

    NASA Astrophysics Data System (ADS)

    Zeman, M.; van Swaaij, R. A. C. M. M.; Metselaar, J. W.; Schropp, R. E. I.

    2000-12-01

    An approach to study the optical behavior of hydrogenated amorphous silicon solar cells with rough interfaces using computer modeling is presented. In this approach the descriptive haze parameters of a light scattering interface are related to the root mean square roughness of the interface. Using this approach we investigated the effect of front window contact roughness and back contact material on the optical properties of a single junction a-Si:H superstrate solar cell. The simulation results for a-Si:H solar cells with SnO2:F as a front contact and ideal Ag, ZnO/Ag, and Al/Ag as a back contact are shown. For cells with an absorber layer thickness of 150-600 nm the simulations demonstrate that the gain in photogenerated current density due to the use of a textured superstrate is around 2.3 mA cm-2 in comparison to solar cells with flat interfaces. The effect of the front and back contact roughness on the external quantum efficiency (QE) of the solar cell for different parts of the light spectrum was determined. The choice of the back contact strongly influences the QE and the absorption in the nonactive layers for the wavelengths above 650 nm. A practical Ag back contact can be successfully simulated by introducing a thin buffer layer between the n-type a-Si:H and Ag back contact, which has optical properties similar to Al, indicating that the actual reflection at the n-type a-Si:H/Ag interface is smaller than what is expected from the respective bulk optical parameters. In comparison to the practical Ag contact the QE of the cell can be strongly improved by using a ZnO layer at the Ag back contact or an ideal Ag contact. The photogenerated current densities for a solar cell with a 450 nm thick intrinsic a-Si:H layer with ZnO/Ag and ideal Ag are 16.7 and 17.3 mA cm-2, respectively, compared to 14.4 mA cm-2 for the practical Ag back contact. 
The effect of increasing the roughness of the contact interfaces was investigated for both superstrate and substrate types

  9. Modeling the flow in diffuse interface methods of solidification

    NASA Astrophysics Data System (ADS)

    Subhedar, A.; Steinbach, I.; Varnik, F.

    2015-08-01

    Fluid dynamical equations in the presence of a diffuse solid-liquid interface are investigated via a volume averaging approach. The resulting equations exhibit the same structure as the standard Navier-Stokes equation for a Newtonian fluid with a constant viscosity, the effect of the solid phase fraction appearing in the drag force only. This considerably simplifies the use of the lattice Boltzmann method as a fluid dynamics solver in solidification simulations. Galilean invariance is also satisfied within this approach. Further, we investigate deviations between the diffuse and sharp interface flow profiles via both quasiexact numerical integration and lattice Boltzmann simulations. It emerges from these studies that the freedom in choosing the solid-liquid coupling parameter h provides a flexible way of optimizing the diffuse interface-flow simulations. Once h is adapted for a given spatial resolution, the simulated flow profiles reach an accuracy comparable to quasiexact numerical simulations.

  10. Modeling the flow in diffuse interface methods of solidification.

    PubMed

    Subhedar, A; Steinbach, I; Varnik, F

    2015-08-01

    Fluid dynamical equations in the presence of a diffuse solid-liquid interface are investigated via a volume averaging approach. The resulting equations exhibit the same structure as the standard Navier-Stokes equation for a Newtonian fluid with a constant viscosity, the effect of the solid phase fraction appearing in the drag force only. This considerably simplifies the use of the lattice Boltzmann method as a fluid dynamics solver in solidification simulations. Galilean invariance is also satisfied within this approach. Further, we investigate deviations between the diffuse and sharp interface flow profiles via both quasiexact numerical integration and lattice Boltzmann simulations. It emerges from these studies that the freedom in choosing the solid-liquid coupling parameter h provides a flexible way of optimizing the diffuse interface-flow simulations. Once h is adapted for a given spatial resolution, the simulated flow profiles reach an accuracy comparable to quasiexact numerical simulations. PMID:26382542

  11. A COMSOL-GEMS interface for modeling coupled reactive-transport geochemical processes

    NASA Astrophysics Data System (ADS)

    Azad, Vahid Jafari; Li, Chang; Verba, Circe; Ideker, Jason H.; Isgor, O. Burkan

    2016-07-01

    An interface was developed between COMSOL Multiphysics™ finite element analysis software and the (geo)chemical modeling platform GEMS for the reactive-transport modeling of (geo)chemical processes in variably saturated porous media. The two standalone software packages are managed from the interface, which uses a non-iterative operator splitting technique to couple the transport (COMSOL) and reaction (GEMS) processes. The interface allows modeling media with complex chemistry (e.g. cement) using GEMS thermodynamic database formats. Benchmark comparisons show that the developed interface can be used to predict a variety of reactive-transport processes accurately. The full functionality of the interface was demonstrated by modeling transport processes, governed by the extended Nernst-Planck equation, in Class H Portland cement samples in high-pressure and high-temperature autoclaves simulating systems that are used to store captured carbon dioxide (CO2) in geological reservoirs.
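The non-iterative operator-splitting loop can be sketched in a few lines: each time step performs a transport solve, then hands the transported concentrations to a chemistry solve. Both operators below are illustrative stand-ins (explicit 1-D diffusion and a first-order kinetic reaction), not COMSOL or GEMS calls:

```python
import numpy as np

nx, dx, dt = 50, 1.0, 0.1
D, k_rxn = 1.0, 0.05             # diffusivity, reaction rate (illustrative)
c = np.zeros(nx)
c[0] = 1.0                       # fixed-concentration inlet

def transport_step(c):
    """Explicit 1-D diffusion update (stable since D*dt/dx^2 <= 0.5)."""
    cn = c.copy()
    cn[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    cn[0] = 1.0                  # Dirichlet boundary
    cn[-1] = cn[-2]              # zero-gradient outlet
    return cn

def reaction_step(c):
    # placeholder for the call the real interface makes to the chemistry solver
    return c * np.exp(-k_rxn * dt)

# Sequential (non-iterative) splitting: transport, then reaction, per step.
for _ in range(2000):
    c = reaction_step(transport_step(c))
```

The splitting is "non-iterative" in that the two operators are applied once per step in sequence, with no inner convergence loop between them.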

  12. Automated Finite Element Modeling of Wing Structures for Shape Optimization

    NASA Technical Reports Server (NTRS)

    Harvey, Michael Stephen

    1993-01-01

    The displacement formulation of the finite element method is the most general and most widely used technique for structural analysis of airplane configurations. Modern structural synthesis techniques based on the finite element method have reached a certain maturity in recent years, and large airplane structures can now be optimized with respect to sizing-type design variables for many load cases subject to a rich variety of constraints including stress, buckling, frequency, stiffness and aeroelastic constraints (Refs. 1-3). These structural synthesis capabilities use gradient-based nonlinear programming techniques to search for improved designs. For these techniques to be practical, a major improvement was required in the computational cost of finite element analyses (needed repeatedly in the optimization process). Thus, associated with the progress in structural optimization, a new perspective of structural analysis has emerged, namely, structural analysis specialized for design optimization applications, or what is known as "design oriented structural analysis" (Ref. 4). This discipline includes approximation concepts and methods for obtaining behavior sensitivity information (Ref. 1), all needed to make the optimization of large structural systems (modeled by thousands of degrees of freedom and thousands of design variables) practical and cost effective.

  13. Automation based on knowledge modeling theory and its applications in engine diagnostic systems using Space Shuttle Main Engine vibrational data

    NASA Astrophysics Data System (ADS)

    Kim, Jonnathan H.

    1995-04-01

    Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor and time intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).

  14. A Contextual Model for Identity Management (IdM) Interfaces

    ERIC Educational Resources Information Center

    Fuller, Nathaniel J.

    2014-01-01

    The usability of Identity Management (IdM) systems is highly dependent upon design that simplifies the processes of identification, authentication, and authorization. Recent findings reveal two critical problems that degrade IdM usability: (1) unfeasible techniques for managing various digital identifiers, and (2) ambiguous security interfaces.…

  15. Automated calibration of a stream solute transport model: Implications for interpretation of biogeochemical parameters

    USGS Publications Warehouse

    Scott, D.T.; Gooseff, M.N.; Bencala, K.E.; Runkel, R.L.

    2003-01-01

    The hydrologic processes of advection, dispersion, and transient storage are the primary physical mechanisms affecting solute transport in streams. The estimation of parameters for a conservative solute transport model is an essential step to characterize transient storage and other physical features that cannot be directly measured, and often is a preliminary step in the study of reactive solutes. Our study used inverse modeling to estimate parameters of the transient storage model OTIS (One-dimensional Transport with Inflow and Storage). Observations from a tracer injection experiment performed on Uvas Creek, California, USA, are used to illustrate the application of automated solute transport model calibration to conservative and nonconservative stream solute transport. A computer code for universal inverse modeling (UCODE) is used for the calibrations. Results of this procedure are compared with a previous study that used a trial-and-error parameter estimation approach. The results demonstrated: 1) the importance of proper estimation of discharge and lateral inflow within the stream system; 2) that although the fit of the observations is not much better when transient storage is invoked, a more randomly distributed set of residuals resulted (suggesting non-systematic error), indicating that transient storage is occurring; 3) that inclusion of transient storage for a reactive solute (Sr2+) provided a better fit to the observations, highlighting the importance of robust model parameterization; and 4) that applying an automated inverse-modeling calibration approach resulted in a comprehensive understanding of the model results and the limitations of the input data.
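
    The core of automated calibration, adjusting model parameters to minimize the misfit between simulated and observed tracer concentrations, can be sketched with a toy forward model in place of OTIS and a grid search in place of UCODE's regression. All names and values below are hypothetical illustrations, not the actual OTIS/UCODE workflow.

```python
import math

def simulate(k, times, c0=10.0):
    """Toy forward model: first-order tracer loss, c(t) = c0 * exp(-k t)."""
    return [c0 * math.exp(-k * t) for t in times]

def sse(model, obs):
    """Sum of squared errors between modeled and observed concentrations."""
    return sum((m - o) ** 2 for m, o in zip(model, obs))

def calibrate(times, obs, k_grid):
    """Automated calibration: pick the rate constant minimizing the misfit."""
    return min(k_grid, key=lambda k: sse(simulate(k, times), obs))

# synthetic, noise-free observations generated with k = 0.3
times = [0, 1, 2, 4, 8]
obs = [10.0 * math.exp(-0.3 * t) for t in times]
k_grid = [i / 100 for i in range(1, 101)]   # candidate rates 0.01 ... 1.00
k_best = calibrate(times, obs, k_grid)      # 0.3 is on the grid, so it is recovered exactly
```

Real calibration codes replace the grid search with gradient-based regression and also report parameter sensitivities and residual statistics, which is what supports the residual-distribution arguments made in the abstract.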

  16. A comparison of automated anatomical–behavioural mapping methods in a rodent model of stroke

    PubMed Central

    Crum, William R.; Giampietro, Vincent P.; Smith, Edward J.; Gorenkova, Natalia; Stroemer, R. Paul; Modo, Michel

    2013-01-01

    Neurological damage, due to conditions such as stroke, results in a complex pattern of structural changes and significant behavioural dysfunctions; the automated analysis of magnetic resonance imaging (MRI) and discovery of structural–behavioural correlates associated with these disorders remains challenging. Voxel lesion symptom mapping (VLSM) has been used to associate behaviour with lesion location in MRI, but this analysis requires the definition of lesion masks on each subject and does not exploit the rich structural information in the images. Tensor-based morphometry (TBM) has been used to perform voxel-wise structural analyses over the entire brain; however, a combination of lesion hyper-intensities and subtle structural remodelling away from the lesion might confound the interpretation of TBM. In this study, we compared and contrasted these techniques in a rodent model of stroke (n = 58) to assess the efficacy of these techniques in a challenging pre-clinical application. The results from the automated techniques were compared using manually derived region-of-interest measures of the lesion, cortex, striatum, ventricle and hippocampus, and considered against model power calculations. The automated TBM techniques successfully detect both lesion and non-lesion effects, consistent with manual measurements. These techniques do not require manual segmentation to the same extent as VLSM and should be considered part of the toolkit for the unbiased analysis of pre-clinical imaging-based studies. PMID:23727124

  17. IDEF3 and IDEF4 automation system requirements document and system environment models

    NASA Technical Reports Server (NTRS)

    Blinn, Thomas M.

    1989-01-01

    The requirements specification is provided for the IDEF3 and IDEF4 tools that provide automated support for IDEF3 and IDEF4 modeling. The IDEF3 method is a scenario-driven process flow description capture method intended to be used by domain experts to represent knowledge about how a particular system or process works. The IDEF3 method provides modes to represent both (1) Process Flow Descriptions, to capture the relationships between actions within the context of a specific scenario, and (2) Object State Transitions, to capture the allowable transitions of an object in the domain. The IDEF4 method provides a means for capturing (1) the Class Submodel, or object hierarchy, (2) the Method Submodel, or the procedures associated with each class of objects, and (3) the Dispatch Mapping, or the relationships between the objects and methods in the object-oriented design. The requirements specified describe the capabilities that a fully functional IDEF3 or IDEF4 automated tool should support.

  18. A New Tool for Inundation Modeling: Community Modeling Interface for Tsunamis (ComMIT)

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Moore, C. W.; Greenslade, D. J. M.; Pattiaratchi, C.; Badal, R.; Synolakis, C. E.; Kânoğlu, U.

    2011-11-01

    Almost 5 years after the 26 December 2004 Indian Ocean tragedy, the 10 August 2009 Andaman tsunami demonstrated that accurate forecasting is possible using the tsunami community modeling tool Community Model Interface for Tsunamis (ComMIT). ComMIT is designed for ease of use, and allows dissemination of results to the community while addressing concerns associated with proprietary issues of bathymetry and topography. It uses initial conditions from a precomputed propagation database, has an easy-to-interpret graphical interface, and requires only portable hardware. ComMIT was initially developed for Indian Ocean countries with support from the United Nations Educational, Scientific, and Cultural Organization (UNESCO), the United States Agency for International Development (USAID), and the National Oceanic and Atmospheric Administration (NOAA). To date, more than 60 scientists from 17 countries in the Indian Ocean have been trained and are using it in operational inundation mapping.

  19. Swimming of a model ciliate near an air-liquid interface.

    PubMed

    Wang, S; Ardekani, A M

    2013-06-01

    In this work, the role of the hydrodynamic forces on a swimming microorganism near an air-liquid interface is studied. Lubrication theory is utilized to analyze hydrodynamic effects within the narrow gap between a flat interface and a small swimmer. By using an archetypal low-Reynolds-number swimming model called the "squirmer," we find that the magnitude of the vertical swimming velocity is of order O(ε ln ε), where ε is the ratio of the gap width to the swimmer's body size. The reduced swimming velocity near an interface can explain experimental observations of the aggregation of microorganisms near a liquid interface. PMID:23848775

  20. Progress in Modeling of Ion Effects at the Vapor/Water Interface

    NASA Astrophysics Data System (ADS)

    Netz, Roland R.; Horinek, Dominik

    2012-05-01

    The behavior of halide salts at the vapor/water interface has been the focus of a tremendous amount of work in the past ten years. A molecular view of the interface has been introduced with the observation that large anions have some affinity for the interface, but a quantitative description of the driving forces that determine ion adsorption or repulsion at the interface is still missing. This review discusses recent developments that are based on classical and quantum-chemical molecular simulations as well as developments that are based on simple potential models.

  1. Modeling Auditory-Haptic Interface Cues from an Analog Multi-line Telephone

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Anderson, Mark R.; Bittner, Rachael M.

    2012-01-01

    The Western Electric Company produced a multi-line telephone during the 1940s-1970s using a six-button interface design that provided robust tactile, haptic and auditory cues regarding the "state" of the communication system. This multi-line telephone was used as a model for a trade study comparing two interfaces: a touchscreen interface (iPad) versus a pressure-sensitive strain gauge button interface (Phidget USB interface controllers). The experiment and its results are detailed in the authors' AES 133rd convention paper "Multimodal Information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays". This Engineering Brief describes how the interface logic, visual indications, and auditory cues of the original telephone were synthesized using MAX/MSP, including the logic for line selection, line hold, and priority line activation.

  2. An automation of design and modelling tasks in NX Siemens environment with original software - generator module

    NASA Astrophysics Data System (ADS)

    Zbiciak, M.; Grabowik, C.; Janik, W.

    2015-11-01

    Nowadays the design process is almost exclusively aided with CAD/CAE/CAM systems. It is estimated that nearly 80% of design activities have a routine nature. These routine design tasks are highly susceptible to automation. Design automation is usually achieved with API tools which allow building original software to support different engineering activities. In this paper, original software worked out to automate engineering tasks at the stage of a product's geometrical shape design is presented. The elaborated software works exclusively in the NX Siemens CAD/CAM/CAE environment and was prepared in Microsoft Visual Studio with application of the .NET technology and the NX SNAP library. The software functionality allows designing and modelling of spur and helicoidal involute gears. Moreover, it is possible to estimate relative manufacturing costs. With the Generator module it is possible to design and model both standard and non-standard gear wheels. The main advantage of a model generated in this way is its better representation of the involute curve in comparison to those drawn with standard CAD system tools. This stems from the fact that in CAD systems an involute curve is usually drawn through 3 points, corresponding to points located on the addendum circle, the reference diameter of the gear and the base circle, respectively. In the Generator module the involute curve is drawn through 11 involute points located on and above the base circle, up to the addendum circle, so the 3D gear wheel models are highly accurate. Application of the Generator module makes the modelling process very rapid, reducing gear wheel modelling time to several seconds. During the conducted research, an analysis of the differences between the standard 3-point and the 11-point involutes was made. The results and conclusions drawn from the analysis are presented in detail.
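
    The 11-point sampling of an involute described above can be sketched using the standard parametric form of the involute of a base circle; the radius along the curve is r(t) = r_base·√(1 + t²), so the parameter range follows from the addendum radius. The code below is illustrative only (the radii are hypothetical), not the Generator module itself.

```python
import math

def involute_points(r_base, r_addendum, n=11):
    """Sample n points on the involute of a base circle, from the base
    circle (t = 0) out to the addendum circle.  Uses the parametric form
    x = r_b (cos t + t sin t), y = r_b (sin t - t cos t), whose radius
    is r(t) = r_b * sqrt(1 + t^2)."""
    t_max = math.sqrt((r_addendum / r_base) ** 2 - 1.0)
    pts = []
    for i in range(n):
        t = t_max * i / (n - 1)
        x = r_base * (math.cos(t) + t * math.sin(t))
        y = r_base * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts

# hypothetical gear: base radius 40 mm, addendum radius 48 mm
pts = involute_points(r_base=40.0, r_addendum=48.0, n=11)
```

The first sampled point lies exactly on the base circle and the last exactly on the addendum circle; a 3-point spline through the same endpoints would interpolate the nine interior points only approximately, which is the accuracy difference the abstract refers to.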

  3. Interface-modified random circuit breaker network model applicable to both bipolar and unipolar resistance switching

    NASA Astrophysics Data System (ADS)

    Lee, S. B.; Lee, J. S.; Chang, S. H.; Yoo, H. K.; Kang, B. S.; Kahng, B.; Lee, M.-J.; Kim, C. J.; Noh, T. W.

    2011-01-01

    We observed reversible-type changes between bipolar (BRS) and unipolar resistance switching (URS) in one Pt/SrTiOx/Pt capacitor. To explain both BRS and URS in a unified scheme, we introduce the "interface-modified random circuit breaker network model," in which the bulk medium is represented by a percolating network of circuit breakers. To consider interface effects in BRS, we introduce circuit breakers to investigate resistance states near the interface. This percolation model explains the reversible-type changes in terms of connectivity changes in the circuit breakers and provides insights into many experimental observations of BRS which are under debate by earlier theoretical models.

  4. Reconciling lattice and continuum models for polymers at interfaces.

    PubMed

    Fleer, G J; Skvortsov, A M

    2012-04-01

    It is well known that lattice and continuum descriptions for polymers at interfaces are, in principle, equivalent. In order to compare the two models quantitatively, one needs a relation between the inverse extrapolation length c as used in continuum theories and the lattice adsorption parameter Δχ(s) (defined with respect to the critical point). So far, this has been done only for ideal chains with zero segment volume in extremely dilute solutions. The relation Δχ(s)(c) is obtained by matching the boundary conditions in the two models. For depletion (positive c and Δχ(s)) the result is very simple: Δχ(s) = ln(1 + c/5). For adsorption (negative c and Δχ(s)) the ideal-chain treatment leads to an unrealistic divergence for strong adsorption: c decreases without bound and the train volume fraction exceeds unity. This is due to the fact that for ideal chains the volume filling cannot be accounted for. We extend the treatment to real chains with finite segment volume at finite concentrations, for both good and theta solvents. For depletion the volume filling is not important and the ideal-chain result Δχ(s) = ln(1 + c/5) is generally valid also for non-ideal chains, at any concentration, chain length, or solvency. Depletion profiles can be accurately described in terms of two length scales: ρ = tanh²[(z + p)/δ], where the depletion thickness (distal length) δ is a known function of chain length and polymer concentration, and the proximal length p is a known function of c (or Δχ(s)) and δ. For strong repulsion p = 1/c (then the proximal length equals the extrapolation length), for weaker repulsion p depends also on chain length and polymer concentration (then p is smaller than 1/c). In very dilute solutions we find quantitative agreement with previous analytical results for ideal chains, for any chain length, down to oligomers. In more concentrated solutions there is excellent agreement with numerical self-consistent depletion profiles, for both weak
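
    The two-length-scale depletion profile ρ = tanh²[(z + p)/δ], with p = 1/c in the strong-repulsion limit, can be evaluated directly. A minimal sketch with hypothetical parameter values:

```python
import math

def depletion_profile(z, delta, c):
    """Depletion volume-fraction profile rho(z) = tanh^2((z + p)/delta),
    with proximal length p = 1/c (strong-repulsion limit)."""
    p = 1.0 / c
    return math.tanh((z + p) / delta) ** 2

# rho is strongly depleted at the wall and approaches the bulk value
# (rho -> 1 in these normalized units) far from it
near = depletion_profile(z=0.0, delta=5.0, c=2.0)
far = depletion_profile(z=50.0, delta=5.0, c=2.0)
```

Note the profile vanishes at z = -p rather than at the wall itself, which is how the proximal length shifts the apparent depletion boundary.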

  5. Mathematical analysis of a sharp-diffuse interfaces model for seawater intrusion

    NASA Astrophysics Data System (ADS)

    Choquet, C.; Diédhiou, M. M.; Rosier, C.

    2015-10-01

    We consider a new model mixing sharp and diffuse interface approaches for seawater intrusion phenomena in free aquifers. More precisely, a phase field model is introduced in the boundary conditions on the virtual sharp interfaces. We thus include in the model the existence of diffuse transition zones while preserving the simplified structure allowing front tracking. The three-dimensional problem then reduces to a two-dimensional model involving a strongly coupled system of partial differential equations of parabolic type describing the evolution of the depths of the two free surfaces, that is, the interface between salt- and freshwater, and the water table. We prove the existence of a weak solution for the model completed with initial and boundary conditions. We also prove that the depths of the two interfaces satisfy a coupled maximum principle.

  6. Simulation of evaporation of a sessile drop using a diffuse interface model

    NASA Astrophysics Data System (ADS)

    Sefiane, Khellil; Ding, Hang; Sahu, Kirti; Matar, Omar

    2008-11-01

    We consider here the evaporation dynamics of a Newtonian sessile drop using an improved diffuse interface model. The governing equations for the drop and the surrounding vapour are both solved, separated by the order parameter (i.e. volume fraction), based on the previous work of Ding et al. (JCP 2007). The diffuse interface model has been shown to be successful in modelling moving contact line problems (Jacqmin 2000; Ding and Spelt 2007, 2008). Here, a pinned contact line of the drop is assumed. The evaporative mass flux at the liquid-vapour interface is constitutively a function of local temperature and is treated as a source term in the interface evolution equation, i.e. the Cahn-Hilliard equation. The model is validated by comparing its predictions with data available in the literature. The evaporative dynamics are illustrated in terms of drop snapshots, and a quantitative comparison with results using a free surface model is made.

  7. Coherent description of transport across the water interface: From nanodroplets to climate models.

    PubMed

    Wilhelmsen, Øivind; Trinh, Thuat T; Lervik, Anders; Badam, Vijay Kumar; Kjelstrup, Signe; Bedeaux, Dick

    2016-03-01

    Transport of mass and energy across the vapor-liquid interface of water is of central importance in a variety of contexts such as climate models, weather forecasts, and power plants. We provide a complete description of the transport properties of the vapor-liquid interface of water within the framework of nonequilibrium thermodynamics. Transport across the planar interface is then described by 3 interface transfer coefficients, with 9 more coefficients extending the description to curved interfaces. We obtain all coefficients in the range 260-560 K by taking advantage of water evaporation experiments at low temperatures, nonequilibrium molecular dynamics with the TIP4P/2005 rigid-water-molecule model at high temperatures, and square gradient theory to represent the whole range. Square gradient theory is used to link the region where experiments are possible (low vapor pressures) to the region where nonequilibrium molecular dynamics can be done (high vapor pressures). This enables a description of transport across the planar water interface, interfaces of bubbles, and droplets, as well as interfaces of water structures with complex geometries. The results are likely to improve the description of evaporation and condensation of water at widely different scales; they open a route to improve the understanding of nanodroplets on a small scale and the precision of climate models on a large scale. PMID:27078427

  8. Coherent description of transport across the water interface: From nanodroplets to climate models

    NASA Astrophysics Data System (ADS)

    Wilhelmsen, Øivind; Trinh, Thuat T.; Lervik, Anders; Badam, Vijay Kumar; Kjelstrup, Signe; Bedeaux, Dick

    2016-03-01

    Transport of mass and energy across the vapor-liquid interface of water is of central importance in a variety of contexts such as climate models, weather forecasts, and power plants. We provide a complete description of the transport properties of the vapor-liquid interface of water within the framework of nonequilibrium thermodynamics. Transport across the planar interface is then described by 3 interface transfer coefficients, with 9 more coefficients extending the description to curved interfaces. We obtain all coefficients in the range 260-560 K by taking advantage of water evaporation experiments at low temperatures, nonequilibrium molecular dynamics with the TIP4P/2005 rigid-water-molecule model at high temperatures, and square gradient theory to represent the whole range. Square gradient theory is used to link the region where experiments are possible (low vapor pressures) to the region where nonequilibrium molecular dynamics can be done (high vapor pressures). This enables a description of transport across the planar water interface, interfaces of bubbles, and droplets, as well as interfaces of water structures with complex geometries. The results are likely to improve the description of evaporation and condensation of water at widely different scales; they open a route to improve the understanding of nanodroplets on a small scale and the precision of climate models on a large scale.

  9. Approximation of skewed interfaces with tensor-based model reduction procedures: Application to the reduced basis hierarchical model reduction approach

    NASA Astrophysics Data System (ADS)

    Ohlberger, Mario; Smetana, Kathrin

    2016-09-01

    In this article we introduce a procedure that allows one to recover the potentially very good approximation properties of tensor-based model reduction procedures for the solution of partial differential equations in the presence of interfaces or strong gradients in the solution which are skewed with respect to the coordinate axes. The two key ideas are the location of the interface, either by solving a lower-dimensional partial differential equation or by using data functions, and the subsequent removal of the interface from the solution by choosing the determined interface as the lifting function of the Dirichlet boundary conditions. We demonstrate in numerical experiments for linear elliptic equations and the reduced basis hierarchical model reduction approach that the proposed procedure locates the interface well and yields significantly improved convergence behavior, even in the case when we only consider an approximation of the interface.

  10. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  11. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

    The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  12. Interface-capturing lattice Boltzmann equation model for two-phase flows

    NASA Astrophysics Data System (ADS)

    Lou, Qin; Guo, Zhaoli

    2015-01-01

    In this work, an interface-capturing lattice Boltzmann equation (LBE) model is proposed for two-phase flows. In the model, a Lax-Wendroff propagation scheme and a properly chosen equilibrium distribution function are employed. The Lax-Wendroff scheme is used to provide an adjustable Courant-Friedrichs-Lewy (CFL) number, and the equilibrium distribution is presented to remove the dependence of the relaxation time on the CFL number. As a result, the interface can be captured accurately by decreasing the CFL number. A theoretical expression is derived for the chemical potential gradient by solving the LBE directly for a two-phase system with a flat interface. The result shows that the gradient of the chemical potential is proportional to the square of the CFL number, which explains why the proposed model is able to capture the interface naturally with a small CFL number, and why large interface error exists in the standard LBE model. Numerical tests, including a one-dimensional flat interface problem, a two-dimensional circular droplet problem, and a three-dimensional spherical droplet problem, demonstrate that the proposed LBE model performs well and can capture a sharp interface with a suitable CFL number.
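
    The role of the Lax-Wendroff scheme in providing an adjustable Courant-Friedrichs-Lewy (CFL) number can be illustrated on scalar 1D linear advection; this is a sketch of the propagation scheme only, not of the two-phase LBE model itself, and the grid size and CFL value are hypothetical.

```python
def lax_wendroff_step(u, cfl):
    """One Lax-Wendroff step for 1D linear advection on a periodic grid.
    cfl = a * dt / dx is the adjustable Courant-Friedrichs-Lewy number;
    stability requires |cfl| <= 1."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        um, up = u[i - 1], u[(i + 1) % n]    # periodic neighbours
        new[i] = (u[i]
                  - 0.5 * cfl * (up - um)               # first-order term
                  + 0.5 * cfl ** 2 * (up - 2 * u[i] + um))  # correction term
    return new

# advect a square pulse with a small CFL number
u = [1.0 if 4 <= i < 8 else 0.0 for i in range(32)]
for _ in range(20):
    u = lax_wendroff_step(u, cfl=0.4)
```

On a periodic grid both difference terms telescope to zero, so the scheme conserves the total of u exactly; decreasing `cfl` refines the time step relative to the grid spacing, which is the knob the abstract exploits to sharpen the captured interface.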

  13. Interfacing air pathway models with other media models for impact assessment

    SciTech Connect

    Drake, R.L.

    1980-10-01

    The assessment of the impacts/effects of a coal conversion industry on human health, ecological systems, property and aesthetics requires knowledge about effluent and fugitive emissions, dispersion of pollutants in abiotic media, chemical and physical transformations of pollutants during transport, and pollutant fate passing through biotic pathways. Some of the environmental impacts that result from coal conversion facility effluents are subtle, acute, subacute or chronic effects in humans and other ecosystem members, acute or chronic damage of materials and property, odors, impaired atmospheric visibility, and impacts on local, regional and global weather and climate. This great variety of impacts and effects places great demands on the abiotic and biotic numerical simulators (modelers) in terms of time and space scales, transformation rates, and system structure. This paper primarily addresses the demands placed on the atmospheric analyst. The paper considers the important air pathway processes, the interfacing of air pathway models with other media models, and the classes of air pathway models currently available. In addition, a strong plea is made for interaction and communication between all modeling groups to promote efficient construction of intermedia models that truly interface across pathway boundaries.

  14. Statistical modelling of networked human-automation performance using working memory capacity.

    PubMed

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models; and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models. PMID:24308716
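
    The first of the three approaches, classical linear regression of performance on WM capacity, can be sketched with ordinary least squares. The data below are hypothetical stand-ins for the experimental scores, not values from the study.

```python
def ols_fit(x, y):
    """Ordinary least squares for the single-predictor model y = b0 + b1*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    b1 = num / den          # slope: performance gain per unit WM capacity
    b0 = my - b1 * mx       # intercept
    return b0, b1

# hypothetical data: WM capacity score vs. supervisory task accuracy
wm = [2.0, 3.0, 4.0, 5.0, 6.0]
acc = [0.60, 0.65, 0.70, 0.75, 0.80]
b0, b1 = ols_fit(wm, acc)   # data lie on acc = 0.5 + 0.05 * wm
```

Such a fit is reliable only near the range of observed conditions; extrapolating beyond them is where the abstract's Gaussian process and Bayesian network models come in.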

  15. Ab-initio molecular modeling of interfaces in tantalum-carbon system

    SciTech Connect

    Balani, Kantesh; Mungole, Tarang; Bakshi, Srinivasa Rao; Agarwal, Arvind

    2012-03-15

    Processing of the ultrahigh-temperature ceramic TaC with a B₄C sintering additive and carbon nanotube (CNT) reinforcement gives rise to the possible formation of several interfaces (Ta₂C-TaC, TaC-CNT, Ta₂C-CNT, TaB₂-TaC, and TaB₂-CNT) that could influence the resulting properties. The current work focuses on interfaces developed during spark plasma sintering of the TaC system and on ab initio molecular modeling of the interfaces generated during processing of TaC-B₄C and TaC-CNT composites. The energies of the various interfaces have been evaluated and compared with the Ta₂C-TaC interface. Iso-surface electronic contours extracted from the calculations show the enhanced stability of the TaC-CNT interface by 72.2%. CNTs form stable interfaces with the Ta₂C and TaB₂ phases, with reductions in energy of 35.8% and 40.4%, respectively. The computed Ta-C-B interfaces are also compared with experimentally observed interfaces in high-resolution TEM images.

  16. Phase-field modeling and experimental observation of the irregular interface morphology during directional solidification

    NASA Astrophysics Data System (ADS)

    Guo, Taiming

    Evolution of the complex solid-liquid interface morphology during a solidification process is an important issue in solidification theory, since the morphology eventually dictates the final microstructure of the solidified material and therefore the material properties. Significant progress has been made in recent years in the study of the formation and development of regular dendritic growth, while only limited understanding has been achieved for the irregular interface patterns observed in many industrial processes. This dissertation focuses on the physical mechanisms of the development and transition of various irregular interface patterns, including the tilted dendritic, the seaweed, and the degenerate patterns. Both experimental observations and numerical simulation using phase field modeling are performed. A special effort is devoted to the effects of the capillary anisotropy and the kinetic anisotropy on the evolution of the interface morphology during solidification. Experimentally, a directional solidification system is constructed to observe in situ the interface morphology using the transparent organic material succinonitrile. With such a system, both the regular interface patterns (cellular and dendritic) and the irregular interface patterns (seaweed, degenerate and tilted dendritic) are observed. The effects of the temperature gradient and the interface velocity on the development and transition of the irregular interface patterns are investigated. It is found that the interface morphology transitions from the seaweed to the tilted dendritic pattern as the interface velocity increases, while the tilted dendritic pattern may transition to the degenerate seaweed pattern as the temperature gradient increases. Under certain conditions, dendrites and seaweed coexist within the same grain. The dynamic transitions among various patterns and the effect of the solidification conditions are examined in detail. Numerically, a 2-D phase field model is developed to

  17. A Psycholinguistic Model for Simultaneous Translation, and Proficiency Assessment by Automated Acoustic Analysis of Discourse.

    NASA Astrophysics Data System (ADS)

    Yaghi, Hussein M.

    Two separate but related issues are addressed: how simultaneous translation (ST) works on a cognitive level and how such translation can be objectively assessed. Both of these issues are discussed in the light of qualitative and quantitative analyses of a large corpus of recordings of ST and shadowing. The proposed ST model utilises knowledge derived from a discourse analysis of the data, many accepted facts in the psychology tradition, and evidence from controlled experiments that are carried out here. This model has three advantages: (i) it is based on analyses of extended spontaneous speech rather than word-, syllable-, or clause-bound stimuli; (ii) it draws equally on linguistic and psychological knowledge; and (iii) it adopts a non-traditional view of language called 'the linguistic construction of reality'. The discourse-based knowledge is also used to develop three computerised systems for the assessment of simultaneous translation: one is a semi-automated system that treats the content of the translation; and two are fully automated, one of which is based on the time structure of the acoustic signals whilst the other is based on their cross-correlation. For each system, several parameters of performance are identified, and they are correlated with assessments rendered by the traditional, subjective, qualitative method. Using signal processing techniques, the acoustic analysis of discourse leads to the conclusion that quality in simultaneous translation can be assessed quantitatively with varying degrees of automation. It identifies as measures of performance (i) three content-based standards; (ii) four time management parameters that reflect the influence of the source on the target language time structure; and (iii) two types of acoustical signal coherence. Proficiency in ST is shown to be directly related to coherence and speech rate but inversely related to omission and delay. High proficiency is associated with a high degree of simultaneity and

  18. AIDE, A SYSTEM FOR DEVELOPING INTERACTIVE USER INTERFACES FOR ENVIRONMENTAL MODELS

    EPA Science Inventory

    Recent progress in environmental science and engineering has seen increasing use of interactive interfaces for computer models. Initial applications centered on the use of interactive software to assist in building complicated input sequences required by batch programs. From these ...

  19. A coupled damage-plasticity model for the cyclic behavior of shear-loaded interfaces

    NASA Astrophysics Data System (ADS)

    Carrara, P.; De Lorenzis, L.

    2015-12-01

    The present work proposes a novel thermodynamically consistent model for the behavior of interfaces under shear (i.e., mode-II) cyclic loading conditions. The interface behavior is defined by coupling damage and plasticity. The domain of admissible states is formulated by restricting the tangential interface stress to non-negative values, which makes the model suitable, e.g., for interfaces with thin adherends. Linear softening is assumed so as to reproduce, under monotonic conditions, a bilinear mode-II interface law. Two damage variables govern, respectively, the loss of strength and of stiffness of the interface. The proposed model requires the evaluation of only four independent parameters: three defining the monotonic mode-II interface law, and one governing the fatigue behavior. This limited number of parameters and their clear physical meaning facilitate experimental calibration. Model predictions are compared with experimental results on fiber-reinforced polymer sheets externally bonded to concrete under different load histories, and excellent agreement is obtained.

  20. Intelligent sensor-model automated control of PMR-15 autoclave processing

    NASA Technical Reports Server (NTRS)

    Hart, S.; Kranbuehl, D.; Loos, A.; Hinds, B.; Koury, J.

    1992-01-01

    An intelligent sensor model system has been built and used for automated control of the PMR-15 cure process in the autoclave. The system uses frequency-dependent electromagnetic sensing (FDEMS), the Loos processing model, and the Air Force QPAL intelligent software shell. The Loos model is used to predict and optimize the cure process, including the time-temperature dependence of the extent of reaction, flow, and part consolidation. The FDEMS sensing system in turn monitors, in situ, the removal of solvent, changes in viscosity, reaction advancement, and cure completion in the mold continuously throughout the processing cycle. The sensor information is compared with the optimum processing conditions from the model. The QPAL composite cure control system allows the comparison of the sensor monitoring with the model predictions to be broken down into a series of discrete steps and provides a language for making decisions on what to do next regarding time-temperature and pressure.

  2. The enhanced Software Life Cycle Support Environment (ProSLCSE): Automation for enterprise and process modeling

    NASA Technical Reports Server (NTRS)

    Milligan, James R.; Dutton, James E.

    1993-01-01

    In this paper, we have introduced a comprehensive method for enterprise modeling that addresses the three important aspects of how an organization goes about its business. FirstEP includes infrastructure modeling, information modeling, and process modeling notations that are intended to be easy to learn and use. The notations stress the use of straightforward visual languages that are intuitive, syntactically simple, and semantically rich. ProSLCSE will be developed with automated tools and services to facilitate enterprise modeling and process enactment. In the spirit of FirstEP, ProSLCSE tools will also be seductively easy to use. Achieving fully managed, optimized software development and support processes will be a long and arduous undertaking for most software organizations, and many serious problems will have to be solved along the way. ProSLCSE will provide the ability to document, communicate, and modify existing processes, which is the necessary first step.

  3. Modeling Speech Disfluency to Predict Conceptual Misalignment in Speech Survey Interfaces

    ERIC Educational Resources Information Center

    Ehlen, Patrick; Schober, Michael F.; Conrad, Frederick G.

    2007-01-01

    Computer-based interviewing systems could use models of respondent disfluency behaviors to predict a need for clarification of terms in survey questions. This study compares simulated speech interfaces that use two such models--a generic model and a stereotyped model that distinguishes between the speech of younger and older speakers--to several…

  4. Design Through Manufacturing: The Solid Model-Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2002-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts reflecting every detail of the finished product. Ideally, in the aerospace industry, these models should fulfill two very important functions: (1) provide numerical control information for automated manufacturing of precision parts, and (2) enable analysts to easily evaluate the stress levels (using finite element analysis, FEA) for all structurally significant parts used in aircraft and space vehicles. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. Presently, the process of preparing CAD models for FEA consumes a great deal of the analyst's time.

  5. The development of an automated ward independent delirium risk prediction model.

    PubMed

    de Wit, Hugo A J M; Winkens, Bjorn; Mestres Gonzalvo, Carlota; Hurkens, Kim P G M; Mulder, Wubbo J; Janknegt, Rob; Verhey, Frans R; van der Kuy, Paul-Hugo M; Schols, Jos M G A

    2016-08-01

    Background Delirium is common in hospital settings, resulting in increased mortality and costs. Prevention of delirium is clearly preferred over treatment. A delirium risk prediction model can help identify patients at risk of delirium, allowing preventive treatment to be started. Current risk prediction models rely on manual calculation of the individual patient risk. Objective The aim of this study was to develop an automated, ward-independent delirium risk prediction model, and to show that such a model can be constructed exclusively from electronically available risk factors and thereby implemented into a clinical decision support system (CDSS) to optimally support the physician in initiating preventive treatment. Setting A Dutch teaching hospital. Methods A retrospective cohort study in which patients aged 60 years or older were selected when admitted to the hospital, with no delirium diagnosis at presentation or during the first day of admission. We used logistic regression analysis to develop a delirium prediction model from the electronically available predictive variables. Main outcome measure A delirium risk prediction model. Results A delirium risk prediction model was developed using the predictive variables that were significant in the univariable regression analyses. The area under the receiver operating characteristic curve of the "medication model" was 0.76 after internal validation. Conclusions CDSSs can be used to automatically predict the risk of delirium in individual hospitalised patients by exclusively using electronically available predictive variables. To increase the use and improve the quality of predictive models, clinical risk factors should be documented ready for automated use. PMID:27177868
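
    As a concrete illustration of the kind of model described above, the sketch below fits a logistic regression risk model to synthetic patient data and scores it with the area under the ROC curve. All predictors, coefficients, and data here are invented for illustration; they are not taken from the study.

```python
import numpy as np

# Hypothetical sketch: fitting a delirium-risk logistic regression from
# electronically available predictors. All data and coefficients are synthetic.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(60, 95, n),    # age in years
    rng.integers(0, 15, n),    # number of prescribed drugs
    rng.integers(0, 2, n),     # abnormal lab result (0/1)
]).astype(float)

# Synthetic ground truth: older, more heavily medicated patients are at risk.
logits = -12.0 + 0.12 * X[:, 0] + 0.15 * X[:, 1] + 0.8 * X[:, 2]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Standardize predictors, then fit by plain gradient descent on the log-loss.
Xs = np.column_stack([np.ones(n), (X - X.mean(0)) / X.std(0)])
w = np.zeros(Xs.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xs @ w))
    w -= 0.5 * Xs.T @ (p - y) / n

p = 1.0 / (1.0 + np.exp(-Xs @ w))
# Area under the ROC curve via the rank-sum identity.
pos, neg = p[y == 1], p[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(f"AUC = {auc:.2f}")
```

    Because every input is already electronic, a fitted model of this shape can be evaluated automatically inside a CDSS, which is the point the abstract makes.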

  6. Effects of modeling errors on trajectory predictions in air traffic control automation

    NASA Technical Reports Server (NTRS)

    Jackson, Michael R. C.; Zhao, Yiyuan; Slattery, Rhonda

    1996-01-01

    Air traffic control automation synthesizes aircraft trajectories for the generation of advisories. Trajectory computation employs models of aircraft performance and weather conditions. In contrast, actual trajectories are flown by real aircraft under actual conditions. Since synthetic trajectories are used in landing scheduling and conflict probing, it is very important to understand the differences between computed and actual trajectories. This paper examines the effects of aircraft modeling errors on the accuracy of trajectory predictions in air traffic control automation. Three-dimensional point-mass aircraft equations of motion are assumed to be able to generate actual aircraft flight paths. Modeling errors are described as uncertain parameters or uncertain input functions. Pilot or autopilot feedback actions are expressed as equality constraints to satisfy control objectives. A typical trajectory is defined by a series of flight segments, with different control objectives for each flight segment and conditions that define segment transitions. A constrained linearization approach is used to analyze trajectory differences caused by various modeling errors, by developing a linear time-varying system that describes the trajectory errors, with expressions to transfer the trajectory errors across moving segment transitions. A numerical example is presented for a complete commercial aircraft descent trajectory consisting of several flight segments.
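
    As a minimal numerical illustration of how a modeling error propagates along a flight segment (the speeds and segment length are invented, not taken from the paper), a constant 1% ground-speed modeling error integrates into a steadily growing along-track prediction error:

```python
import math

# Hypothetical sketch: a predictor assumes a ground speed 1% below the actual
# speed; the error dynamics de/dt = (v_true - v_model) are Euler-integrated
# over a 300 s cruise segment. All numbers are illustrative only.
dt = 1.0
v_true = 220.0            # m/s, actual ground speed
v_model = 0.99 * v_true   # m/s, speed assumed by the trajectory predictor
e_pos = 0.0               # along-track position error (m)
for _ in range(300):
    e_pos += (v_true - v_model) * dt

print(f"along-track error after 300 s: {e_pos:.0f} m")
```

    Even this trivial case shows why speed-model calibration matters for conflict probing: the error here is of the order of half a nautical mile after five minutes.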

  7. An Evaluation of Sharp Interface Models for CO2-Brine Displacement in Aquifers.

    PubMed

    Swickrath, Michael J; Mishra, Srikanta; Ravi Ganesh, Priya

    2016-05-01

    Understanding multiphase transport within saline aquifers is necessary for safe and efficient CO2 sequestration. To that end, numerous full-physics codes exist for rigorously modeling multiphase flow within porous and permeable rock formations. High-fidelity simulation with such codes is data- and computation-intensive, and may not be suitable for screening-level calculations. Alternatively, under conditions of vertical equilibrium, a class of sharp-interface models result in simplified relationships that can be solved with limited computing resources and geologic/fluidic data. In this study, the sharp-interface model of Nordbotten and Celia (2006a, 2006b) is evaluated against results from a commercial full-physics simulator for a semi-confined system with vertical permeability heterogeneity. In general, significant differences were observed between the simulator and the sharp-interface model results. A variety of adjustments were made to the sharp-interface model, including modifications to the fluid saturation and effective viscosity in the two-phase region behind the CO2-brine interface. These adaptations significantly improved the predictive ability of the sharp-interface model while maintaining overall tractability. PMID:26333189

  8. A new lattice Boltzmann model for interface reactions between immiscible fluids

    NASA Astrophysics Data System (ADS)

    Di Palma, Paolo Roberto; Huber, Christian; Viotti, Paolo

    2015-08-01

    In this paper, we describe a lattice Boltzmann model to simulate chemical reactions taking place at the interface between two immiscible fluids. The phase-field approach is used to identify the interface and its orientation; the concentration of reactant at the interface is then calculated iteratively to impose the correct reactive flux condition. The main advantage of the model is that interfaces are treated as part of the bulk dynamics, with the corrective reactive flux introduced as a source/sink term in the collision step; as a consequence, the model's implementation and performance are independent of the interface geometry and orientation. Results obtained with the proposed model are compared to analytical solutions for three different benchmark tests (stationary flat boundary, moving flat boundary, and dissolving droplet). We find excellent agreement between analytical and numerical solutions in all cases. Finally, we present a simulation coupling the Shan-Chen multiphase model and the interface reaction model to simulate the dissolution of a collection of immiscible droplets of different sizes rising by buoyancy in a stagnant fluid.

  9. Streamflow forecasting using the modular modeling system and an object-user interface

    USGS Publications Warehouse

    Jeton, A.E.

    2001-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Bureau of Reclamation (BOR), developed a computer program to provide a general framework needed to couple disparate environmental resource models and to manage the necessary data. The Object-User Interface (OUI) is a map-based interface for models and modeling data. It provides a common interface to run hydrologic models and acquire, browse, organize, and select spatial and temporal data. One application is to assist river managers in utilizing streamflow forecasts generated with the Precipitation-Runoff Modeling System running in the Modular Modeling System (MMS), a distributed-parameter watershed model, and the National Weather Service Extended Streamflow Prediction (ESP) methodology.

  10. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.

  11. An automated procedure for material parameter evaluation for viscoplastic constitutive models

    NASA Technical Reports Server (NTRS)

    Imbrie, P. K.; James, G. H.; Hill, P. S.; Allen, D. H.; Haisler, W. E.

    1988-01-01

    An automated procedure is presented for evaluating the material parameters in Walker's exponential viscoplastic constitutive model for metals at elevated temperature. Both physical and numerical approximations are utilized to compute the constants for Inconel 718 at 1100 F. When intermediate results are carefully scrutinized and engineering judgement applied, parameters may be computed which yield stress output histories that are in agreement with experimental results. A qualitative assessment of the theta-plot method for predicting the limiting value of stress is also presented. The procedure may also be used as a basis to develop evaluation schemes for other viscoplastic constitutive theories of this type.

  12. Automated optimization of water-water interaction parameters for a coarse-grained model.

    PubMed

    Fogarty, Joseph C; Chiu, See-Wing; Kirby, Peter; Jakobsson, Eric; Pandit, Sagar A

    2014-02-13

    We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder-Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, electric field permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment. PMID:24460506
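
    To make the optimization procedure concrete, here is a minimal Nelder-Mead simplex loop written from scratch (a sketch, not the authors' ParOpt framework) that tunes two hypothetical parameters so that toy "observables" match target values, mirroring the target-function idea in the abstract:

```python
import numpy as np

# Toy stand-in for "run the water model and measure observables":
# a smooth two-parameter map whose output should hit TARGET.
def observables(p):
    a, b = p
    return np.array([a**2 + b, a - b**2])

TARGET = np.array([1.2, -0.3])

def cost(p):
    return float(np.sum((observables(p) - TARGET) ** 2))

def nelder_mead(f, x0, step=0.5, iters=200):
    n = len(x0)
    # Initial simplex: x0 plus one perturbed vertex per dimension.
    simplex = [np.array(x0, float)]
    for i in range(n):
        v = np.array(x0, float)
        v[i] += step
        simplex.append(v)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)
        refl = centroid + (centroid - worst)             # reflection
        if f(refl) < f(best):
            exp = centroid + 2 * (centroid - worst)      # expansion
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = centroid + 0.5 * (worst - centroid)  # contraction
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                                        # shrink toward best
                simplex = [best] + [best + 0.5 * (v - best) for v in simplex[1:]]
    return min(simplex, key=f)

p_opt = nelder_mead(cost, [0.0, 0.0])
print("optimized parameters:", p_opt, "cost:", cost(p_opt))
```

    The attraction of the simplex method for this application, as the abstract implies, is that it needs only cost-function evaluations (simulation runs), no gradients.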

  13. PDB_REDO: automated re-refinement of X-ray structure models in the PDB.

    PubMed

    Joosten, Robbie P; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J; Vriend, Gert

    2009-06-01

    Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour. PMID:22477769

  15. Automated Translation and Thermal Zoning of Digital Building Models for Energy Analysis

    SciTech Connect

    Jones, Nathaniel L.; McCrone, Colin J.; Walter, Bruce J.; Pratt, Kevin B.; Greenberg, Donald P.

    2013-08-26

    Building energy simulation is valuable during the early stages of design, when decisions can have the greatest impact on energy performance. However, preparing digital design models for building energy simulation typically requires tedious manual alteration. This paper describes a series of five automated steps to translate geometric data from an unzoned CAD model into a multi-zone building energy model. First, CAD input is interpreted as geometric surfaces with materials. Second, surface pairs defining walls of various thicknesses are identified. Third, normal directions of unpaired surfaces are determined. Fourth, space boundaries are defined. Fifth, optionally, settings from previous simulations are applied, and spaces are aggregated into a smaller number of thermal zones. Building energy models created quickly using this method can offer guidance throughout the design process.
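
    The second step, pairing surfaces into walls, can be sketched as follows; the geometry, tolerance, and simplified representation (centroid plus unit normal) are invented for illustration and are not the paper's algorithm:

```python
import numpy as np

# Hypothetical sketch of wall-pairing: two surfaces form a wall when their
# normals oppose and their separation along the normal is within a plausible
# wall thickness. Surfaces are simplified to (centroid, unit normal).
surfaces = [
    (np.array([0.0, 0.0, 1.5]), np.array([1.0, 0.0, 0.0])),   # wall face A
    (np.array([0.2, 0.0, 1.5]), np.array([-1.0, 0.0, 0.0])),  # wall face B
    (np.array([5.0, 3.0, 3.0]), np.array([0.0, 0.0, 1.0])),   # roof face
]

MAX_THICKNESS = 0.5  # metres; a screening tolerance, not a real standard

def is_wall_pair(s1, s2):
    (c1, n1), (c2, n2) = s1, s2
    antiparallel = np.dot(n1, n2) < -0.99   # normals oppose
    gap = abs(np.dot(c2 - c1, n1))          # separation along the normal
    return antiparallel and 0 < gap <= MAX_THICKNESS

pairs = [(i, j) for i in range(len(surfaces))
         for j in range(i + 1, len(surfaces))
         if is_wall_pair(surfaces[i], surfaces[j])]
print("wall pairs:", pairs)
```

    In this toy scene only surfaces 0 and 1 pair up, so they would be merged into a single 0.2 m wall in the energy model.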

  16. A Study on Automated Context-aware Access Control Model Using Ontology

    NASA Astrophysics Data System (ADS)

    Jang, Bokman; Jang, Hyokyung; Choi, Euiin

    Applications in a context-aware computing environment are connected over wireless networks to a wide variety of devices. Consequently, uncontrolled access to information resources can compromise the system, so managing access authority, for both information resources and the systems that hold them, by establishing a security policy suited to each system is a very important issue. Existing security models, however, grant access to resources simply through a user ID and password, and take no account of the user's environment information. In this paper, we propose an automated context-aware access control model using ontology that can control access to resources more efficiently through inference and judgment over context information, by collecting the user's information and the user's environmental context information and modeling them with an ontology.

  17. Automated model-based bias field correction of MR images of the brain.

    PubMed

    Van Leemput, K; Maes, F; Vandermeulen, D; Suetens, P

    1999-10-01

    We propose a model-based method for fully automated bias field correction of MR brain images. The MR signal is modeled as a realization of a random process with a parametric probability distribution that is corrupted by a smooth polynomial inhomogeneity or bias field. The method we propose applies an iterative expectation-maximization (EM) strategy that interleaves pixel classification with estimation of class distribution and bias field parameters, improving the likelihood of the model parameters at each iteration. The algorithm, which can handle multichannel data and slice-by-slice constant intensity offsets, is initialized with information from a digital brain atlas about the a priori expected location of tissue classes. This allows full automation of the method without need for user interaction, yielding more objective and reproducible results. We have validated the bias correction algorithm on simulated data and we illustrate its performance on various MR images with important field inhomogeneities. We also relate the proposed algorithm to other bias correction algorithms. PMID:10628948
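
    The interleaved E/M structure can be sketched in one dimension; the two-class data, quadratic bias field, and fixed class variances below are synthetic simplifications for illustration, not the authors' implementation:

```python
import numpy as np

# Schematic EM sketch on a synthetic 1-D "scan line": alternate soft tissue
# classification (E-step) with a polynomial bias-field fit (M-step).
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 400)
true_bias = 0.25 * x + 0.2 * x**2          # smooth intensity inhomogeneity
labels = rng.integers(0, 2, x.size)
means = np.array([1.0, 3.0])               # "tissue class" intensities
y = means[labels] + true_bias + 0.05 * rng.standard_normal(x.size)

mu = np.array([0.9, 3.1])                  # rough atlas-style initialization
sigma = 0.3                                # fixed class std, for simplicity
bias = np.zeros_like(x)
V = np.column_stack([x, x**2])             # polynomial bias basis
for _ in range(20):
    r = y - bias
    # E-step: posterior class probabilities for the bias-corrected signal.
    ll = np.exp(-0.5 * ((r[:, None] - mu) / sigma) ** 2)
    post = ll / ll.sum(axis=1, keepdims=True)
    # M-step: update class means, then least-squares fit of the bias field.
    mu = (post * r[:, None]).sum(0) / post.sum(0)
    resid = y - post @ mu
    bias = V @ np.linalg.lstsq(V, resid, rcond=None)[0]

err = np.abs(bias - true_bias).mean()
print(f"mean bias-field error: {err:.3f}")
```

    The key point matches the abstract: classification and bias estimation each improve the other, so the alternation converges to a consistent pair of tissue labels and smooth bias field.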

  18. Model-based automated extraction of microtubules from electron tomography volume.

    PubMed

    Jiang, Ming; Ji, Qiang; McEwen, Bruce F

    2006-07-01

    We propose a model-based automated approach to extracting microtubules from noisy electron tomography volumes. Our approach consists of volume enhancement, microtubule localization, and boundary segmentation to exploit the unique geometric and photometric properties of microtubules. The enhancement starts with an anisotropic invariant wavelet transform to enhance the microtubules globally, followed by a three-dimensional (3-D) tube-enhancing filter based on the Weingarten matrix to further accentuate the tubular structures locally. The enhancement ends with a modified coherence-enhancing diffusion to complete the interruptions along the microtubules. The microtubules are then localized with a centerline extraction algorithm adapted for tubular objects. To perform segmentation, we modify and extend the active shape model method. We first use 3-D local surface enhancement to characterize the microtubule boundary and improve shape searching by relating the boundary strength to the weight matrix of the searching error. We then integrate the active shape model with Kalman filtering to exploit the longitudinal smoothness along the microtubules. The segmentation improved in this way is robust against missing boundaries and outliers, which are often present in the tomography volume. Experimental results demonstrate that our automated method produces results close to those of the manual process in only a fraction of the time. PMID:16871731

  19. Generating Phenotypical Erroneous Human Behavior to Evaluate Human-automation Interaction Using Model Checking

    PubMed Central

    Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.

    2012-01-01

    Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the statespace and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
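
    The zero-order phenotype generation can be illustrated with a toy normative sequence; the task, action names, and intrusion set below are invented, and the paper works with formal task-analytic models rather than plain action lists:

```python
# Toy sketch: generate Hollnagel-style zero-order erroneous variants of a
# normative action sequence (omissions, repetitions, intrusions) for a model
# checker to explore alongside the normative behavior. Names are hypothetical.
normative = ["select_dose", "confirm_dose", "fire_beam"]
intrusions = ["open_door"]

def variants(seq):
    out = set()
    for i in range(len(seq)):
        out.add(tuple(seq[:i] + seq[i + 1:]))                 # omit step i
        out.add(tuple(seq[:i + 1] + [seq[i]] + seq[i + 1:]))  # repeat step i
        for act in intrusions:
            out.add(tuple(seq[:i] + [act] + seq[i:]))         # intrude before i
    return out

errs = variants(normative)
print(len(errs), "erroneous variants")
```

    Higher-order phenotypes arise, as the abstract notes, by allowing several such acts in sequence; a model checker then searches all of these behaviors against the system's safety properties.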

  20. Incorporation of the electrode electrolyte interface into finite-element models of metal microelectrodes

    NASA Astrophysics Data System (ADS)

    Cantrell, Donald R.; Inayat, Samsoon; Taflove, Allen; Ruoff, Rodney S.; Troy, John B.

    2008-03-01

    An accurate description of the electrode-electrolyte interfacial impedance is critical to the development of computational models of neural recording and stimulation that aim to improve understanding of neuro-electric interfaces and to expedite electrode design. This work examines the effect that the electrode-electrolyte interfacial impedance has upon the solutions generated from time-harmonic finite-element models of cone- and disk-shaped platinum microelectrodes submerged in physiological saline. A thin-layer approximation is utilized to incorporate a platinum-saline interfacial impedance into the finite-element models. This approximation is easy to implement and is not computationally costly. Using an iterative nonlinear solver, solutions were obtained for systems in which the electrode was driven at ac potentials with amplitudes from 10 mV to 500 mV and frequencies from 100 Hz to 100 kHz. The results of these simulations indicate that, under certain conditions, incorporation of the interface may strongly affect the solutions obtained. This effect, however, is dependent upon the amplitude of the driving potential and, to a lesser extent, its frequency. The solutions are most strongly affected at low amplitudes where the impedance of the interface is large. Here, the current density distribution that is calculated from models incorporating the interface is much more uniform than the current density distribution generated by models that neglect the interface. At higher potential amplitudes, however, the impedance of the interface decreases, and its effect on the solutions obtained is attenuated.
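
    The frequency dependence described above can be illustrated with a textbook-style lumped approximation of the interface, a charge-transfer resistance in parallel with a double-layer capacitance; the component values are hypothetical and not taken from the paper:

```python
import math

# Generic illustration: |Z| of a parallel R-C interface model falls with
# frequency, which is why the interface dominates at low frequencies.
R_ct = 1e6      # ohms, hypothetical charge-transfer resistance
C_dl = 1e-8     # farads, hypothetical double-layer capacitance

def z_mag(f):
    w = 2 * math.pi * f
    return R_ct / math.sqrt(1 + (w * R_ct * C_dl) ** 2)

for f in (1e2, 1e4, 1e5):
    print(f"{f:>8.0f} Hz  |Z| = {z_mag(f):.3e} ohm")
```

    This mirrors the abstract's observation that the interface most strongly affects the finite-element solutions at low frequencies (and, in the nonlinear case, low amplitudes), where its impedance is largest.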

  1. A conceptual model of the automated credibility assessment of the volunteered geographic information

    NASA Astrophysics Data System (ADS)

    Idris, N. H.; Jackson, M. J.; Ishak, M. H. I.

    2014-02-01

    The use of Volunteered Geographic Information (VGI) in collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information has been seen to complement the maintenance process of authoritative mapping data sources and to support the development of Digital Earth. The main barrier to the use of these data in supporting this bottom-up approach is the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assessing these data is to rely on an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches to automatically assess the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced based applications. Two main components are proposed to be assessed in the conceptual model: metadata and data. The metadata component comprises indicators for the hosting websites and the sources of the data/information. The data component comprises indicators to assess absolute and relative data positioning, attribute, thematic, temporal and geometric correctness and consistency. This paper suggests approaches to assess both components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess the correctness and consistency of the data component, we suggest a matching validation approach using current emerging technologies from Linked Data infrastructures together with third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality issues of data contributed by web citizen providers.
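
    The proposed automated text categorization of the metadata component can be sketched with a tiny multinomial naive Bayes classifier; the training snippets, labels, and query below are invented solely for illustration:

```python
from collections import Counter
import math

# Toy sketch: classify a hosting-site description as "authoritative" or
# "unverified" with Laplace-smoothed multinomial naive Bayes. Invented data.
train = [
    ("national mapping agency official portal", "authoritative"),
    ("government survey data portal", "authoritative"),
    ("my personal blog of random map doodles", "unverified"),
    ("anonymous forum post with a map link", "unverified"),
]

vocab = {w for text, _ in train for w in text.split()}
counts = {"authoritative": Counter(), "unverified": Counter()}
docs = Counter()
for text, label in train:
    counts[label].update(text.split())
    docs[label] += 1

def classify(text):
    scores = {}
    for label, cnt in counts.items():
        total = sum(cnt.values())
        # log prior + Laplace-smoothed log likelihood of each word
        s = math.log(docs[label] / sum(docs.values()))
        for w in text.split():
            s += math.log((cnt[w] + 1) / (total + len(vocab)))
        scores[label] = s
    return max(scores, key=scores.get)

print(classify("official government mapping portal"))  # -> authoritative
```

    A production system would of course train on far more examples and richer features, but the decision structure (score each credibility class, pick the maximum) is the same.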

  2. A two-sided interface model for dissipation in structural systems with frictional joints

    NASA Astrophysics Data System (ADS)

    Miller, Jason D.; Dane Quinn, D.

    2009-03-01

    Modeling mechanical joints in an accurate and computationally efficient manner is of great importance in the analysis of structural systems, which can be composed of a large number of connected components. This work presents an interface model that can be decomposed into a series-series Iwan model together with an elastic chain, subject to interfacial shear loads. A reduced-order formulation of the resulting model is developed that significantly reduces the computational requirements for the simulation of frictional damping. Results are presented as the interface is subjected to harmonic loading of varying amplitude. The models presented are able to qualitatively reproduce experimentally observed dissipation scalings. Finally, the interface models are embedded within a larger structural system to illustrate their effectiveness in capturing the structural damping induced by mechanical joints.
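
    The micro-slip behavior that Iwan models capture can be illustrated with a minimal sketch. Note the hedge: the paper uses a series-series Iwan formulation coupled to an elastic chain; the simpler parallel-series variant below (parallel Jenkins elements, each a spring in series with a Coulomb slider) only demonstrates how a distribution of slip thresholds yields gradual transition from stick to full slip. All parameter values are illustrative.

```python
# Parallel-series Iwan model under monotonic loading (illustrative sketch,
# not the paper's series-series formulation). Each Jenkins element carries
# elastic force k*x until its slider reaches its slip threshold f, after
# which it saturates.
def iwan_force(x, stiffnesses, slip_forces):
    """Total interface force at monotonic displacement x (x >= 0)."""
    return sum(min(k * x, f) for k, f in zip(stiffnesses, slip_forces))

k = [1.0, 1.0, 1.0, 1.0]   # element stiffnesses (illustrative)
f = [1.0, 2.0, 3.0, 4.0]   # slip thresholds (illustrative)
for x in [0.5, 2.5, 5.0]:
    print(f"x = {x}: force = {iwan_force(x, k, f)}")
```

At small displacement the response is purely elastic; as displacement grows, elements slip one by one until the force saturates at the sum of the slip thresholds, which is the source of amplitude-dependent frictional dissipation.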

  3. A comparison of molecular dynamics and diffuse interface model predictions of Lennard-Jones fluid evaporation

    SciTech Connect

    Barbante, Paolo; Frezzotti, Aldo; Gibelli, Livio

    2014-12-09

    The unsteady evaporation of a thin planar liquid film is studied by molecular dynamics simulations of Lennard-Jones fluid. The obtained results are compared with the predictions of a diffuse interface model in which capillary Korteweg contributions are added to hydrodynamic equations, in order to obtain a unified description of the liquid bulk, liquid-vapor interface and vapor region. Particular care has been taken in constructing a diffuse interface model matching the thermodynamic and transport properties of the Lennard-Jones fluid. The comparison of diffuse interface model and molecular dynamics results shows that, although good agreement is obtained in equilibrium conditions, remarkable deviations of diffuse interface model predictions from the reference molecular dynamics results are observed in the simulation of liquid film evaporation. It is also observed that molecular dynamics results are in good agreement with preliminary results obtained from a composite model which describes the liquid film by a standard hydrodynamic model and the vapor by the Boltzmann equation. The two mathematical models are connected by kinetic boundary conditions assuming unit evaporation coefficient.

  4. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
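
    The best-known member of the GOMS family mentioned above is the Keystroke-Level Model (KLM), which predicts expert execution time by summing standardized operator durations. The sketch below uses approximate textbook operator times; the task encoding is a hypothetical example, not one from the paper.

```python
# Keystroke-Level Model (KLM) sketch: estimate task execution time as a sum
# of standard operator durations. Times (seconds) are approximate values
# from the HCI literature; treat them as illustrative.
KLM_OPERATORS = {
    "K": 0.28,  # press a key (average typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence):
    """Estimate execution time (s) for a sequence of KLM operator codes."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Hypothetical task: think, point at a field, home to keyboard, type 4 chars.
task = ["M", "P", "H", "K", "K", "K", "K"]
print(f"predicted time: {klm_estimate(task):.2f} s")
```

Comparing such predicted times across alternative interface designs is one way analytical models can substitute for some early usability testing.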

  5. Modeling and Preliminary Testing Socket-Residual Limb Interface Stiffness of Above-Elbow Prostheses

    PubMed Central

    Sensinger, Jonathon W.; Weir, Richard F. ff.

    2011-01-01

    The interface between the socket and residual limb can have a significant effect on the performance of a prosthesis. Specifically, knowledge of the rotational stiffness of the socket-residual limb (S-RL) interface is extremely useful in designing new prostheses and evaluating new control paradigms, as well as in comparing existing and new socket technologies. No previous studies, however, have examined the rotational stiffness of S-RL interfaces. To address this problem, a mathematical model is compared to a more complex finite element analysis, to see if the mathematical model sufficiently captures the main effects of S-RL interface rotational stiffness. Both of these models are then compared to preliminary empirical testing, in which a series of fluoroscopic X-ray images is taken to capture the movement of the bone relative to the socket. Force data are simultaneously recorded, and the combination of force and movement data is used to calculate the empirical rotational stiffness of the elbow S-RL interface. The empirical rotational stiffness values are then compared to the models, to see if values of Young’s modulus obtained in other studies at localized points may be used to determine the global rotational stiffness of the S-RL interface. Findings include agreement between the models and empirical results and the ability of persons to significantly modulate the rotational stiffness of their S-RL interface by a little less than one order of magnitude. The floor and ceiling of this range depend significantly on socket length and co-contraction levels, but not on residual limb diameter or bone diameter. Measured trans-humeral S-RL interface rotational stiffness values ranged from 24–140 Nm/rad for the four subjects tested in this study. PMID:18403287
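
    The empirical stiffness calculation described above amounts to fitting a slope to paired torque/rotation measurements. A minimal sketch, using synthetic illustrative data rather than values from the study:

```python
# Estimate rotational stiffness (Nm/rad) of a socket-residual limb interface
# from paired torque/rotation samples, as a least-squares slope through the
# origin. The sample data below are synthetic, chosen so the true slope is
# roughly 100 Nm/rad.
def rotational_stiffness(torques_nm, angles_rad):
    """Fit tau = k * theta by least squares through the origin."""
    num = sum(t, ) if False else sum(t * a for t, a in zip(torques_nm, angles_rad))
    den = sum(a * a for a in angles_rad)
    return num / den

angles = [0.01, 0.02, 0.03, 0.04]   # rad (from fluoroscopy-style tracking)
torques = [0.9, 2.1, 2.9, 4.1]      # Nm (simultaneously recorded, noisy)
k = rotational_stiffness(torques, angles)
print(f"estimated stiffness: {k:.1f} Nm/rad")
```

The values reported in the paper (24-140 Nm/rad) fall in the range such a fit would produce for trans-humeral sockets.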

  6. FE Modeling of Guided Wave Propagation in Structures with Weak Interfaces

    NASA Astrophysics Data System (ADS)

    Hosten, Bernard; Castaings, Michel

    2005-04-01

    This paper describes the use of a Finite Element code for modeling the effects of weak interfaces on the propagation of low order Lamb modes. The variable properties of the interface are modeled by uniform distributions of compression and shear springs that ensure continuity of the stresses and allow a discontinuity in the displacement field. The method is tested by comparison with measurements that were presented at a previous QNDE conference (B. W. Drinkwater, M. Castaings, and B. Hosten, "The interaction of Lamb waves with solid-solid interfaces", Q.N.D.E. Vol. 22, (2003) 1064-1071). The interface was the contact between a rough elastomer with high internal damping loaded against one surface of a glass plate. Both normal and shear stiffnesses of the interface were quantified from the attenuation of A0 and S0 Lamb waves caused by leakage of energy from the plate into the elastomer, measured at each step of a compressive loading. The FE model is formulated in the frequency domain, allowing the viscoelastic properties of the elastomer to be modeled using complex moduli as input data. By introducing the interface stiffnesses into the code, the predicted guided wave attenuations are compared to the experimental results to validate the numerical FE method.

  7. Automation, Control and Modeling of Compound Semiconductor Thin-Film Growth

    SciTech Connect

    Breiland, W.G.; Coltrin, M.E.; Drummond, T.J.; Horn, K.M.; Hou, H.Q.; Klem, J.F.; Tsao, J.Y.

    1999-02-01

    This report documents the results of a laboratory-directed research and development (LDRD) project on control and agile manufacturing in the critical metalorganic chemical vapor deposition (MOCVD) and molecular beam epitaxy (MBE) materials growth processes essential to high-speed microelectronics and optoelectronic components. This effort is founded on a modular and configurable process automation system that serves as a backbone allowing integration of process-specific models and sensors. We have developed and integrated MOCVD- and MBE-specific models in this system, and demonstrated the effectiveness of sensor-based feedback control in improving the accuracy and reproducibility of semiconductor heterostructures. In addition, within this framework we have constructed "virtual reactor" models for growth processes, with the goal of greatly shortening the epitaxial growth process development cycle.
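
    Sensor-based feedback control of a growth process can be sketched with a discrete PI loop: a sensor reading of the growth rate is compared against a setpoint and the actuator command is adjusted each cycle. The plant model and gains below are hypothetical illustrations, not the report's actual control system.

```python
# Discrete PI control of a (hypothetical) first-order growth-rate plant.
# kp/ki gains and the plant time constant are illustrative values only.
def simulate_pi(setpoint, steps=200, kp=0.5, ki=0.2, dt=1.0):
    rate, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - rate          # sensor feedback vs. target
        integral += error * dt
        actuator = kp * error + ki * integral
        # first-order plant: growth rate relaxes toward the actuator command
        rate += (actuator - rate) * 0.1 * dt
    return rate

final = simulate_pi(setpoint=1.0)        # target: 1.0 monolayer/s
print(f"growth rate after settling: {final:.3f}")
```

The integral term is what removes steady-state error, which is the basic mechanism by which in-situ sensing improves run-to-run reproducibility.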

  8. Automated decomposition algorithm for Raman spectra based on a Voigt line profile model.

    PubMed

    Chen, Yunliang; Dai, Liankui

    2016-05-20

    Raman spectra measured by spectrometers usually suffer from band overlap and random noise. In this paper, an automated decomposition algorithm based on a Voigt line profile model for Raman spectra is proposed to solve this problem. To decompose a measured Raman spectrum, a Voigt line profile model is introduced to parameterize the measured spectrum, and a Gaussian function is used as the instrumental broadening function. Hence, the issue of spectral decomposition is transformed into a multiparameter optimization problem of the Voigt line profile model parameters. The algorithm can eliminate instrumental broadening, obtain a recovered Raman spectrum, resolve overlapping bands, and suppress random noise simultaneously. Moreover, the recovered spectrum can be decomposed to a group of Lorentzian functions. Experimental results on simulated Raman spectra show that the performance of this algorithm is much better than a commonly used blind deconvolution method. The algorithm has also been tested on the industrial Raman spectra of ortho-xylene and proved to be effective. PMID:27411136
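
    The line-shape parameterization at the heart of such a decomposition can be sketched compactly. The paper fits true Voigt profiles (Gaussian-convolved Lorentzians); for a dependency-free illustration the sketch below uses the common pseudo-Voigt approximation, a weighted sum of a Gaussian and a Lorentzian of equal width. All band positions and widths are illustrative, not from the paper.

```python
import math

# Pseudo-Voigt line shape: eta blends Lorentzian (eta=1) and Gaussian
# (eta=0) components of equal full width at half maximum (FWHM).
def pseudo_voigt(x, center, fwhm, eta, amplitude=1.0):
    hw = fwhm / 2.0
    lorentz = hw ** 2 / ((x - center) ** 2 + hw ** 2)
    gauss = math.exp(-math.log(2) * ((x - center) / hw) ** 2)
    return amplitude * (eta * lorentz + (1 - eta) * gauss)

# A measured spectrum modeled as two overlapping bands plus a flat baseline;
# decomposition then means recovering these component parameters by fitting.
def model_spectrum(x):
    return (pseudo_voigt(x, 1000.0, 12.0, 0.7, 5.0)
            + pseudo_voigt(x, 1015.0, 10.0, 0.3, 3.0) + 0.1)

print(f"intensity at 1000 cm^-1: {model_spectrum(1000.0):.3f}")
```

In the full algorithm these parameters would be found by multiparameter optimization against the measured spectrum, with a Gaussian instrumental broadening function removed in the same step.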

  9. Automated High-Throughput Characterization of Single Neurons by Means of Simplified Spiking Models

    PubMed Central

    Hagens, Olivier; Naud, Richard; Koch, Christof; Gerstner, Wulfram

    2015-01-01

    Single-neuron models are useful not only for studying the emergent properties of neural circuits in large-scale simulations, but also for extracting and summarizing in a principled way the information contained in electrophysiological recordings. Here we demonstrate that, using a convex optimization procedure we previously introduced, a Generalized Integrate-and-Fire model can be accurately fitted with a limited amount of data. The model is capable of predicting both the spiking activity and the subthreshold dynamics of different cell types, and can be used for online characterization of neuronal properties. A protocol is proposed that, combined with emergent technologies for automatic patch-clamp recordings, permits automated, in vitro high-throughput characterization of single neurons. PMID:26083597
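
    The core of the model class used here can be illustrated with a plain leaky integrate-and-fire neuron; the full Generalized Integrate-and-Fire model adds spike-triggered currents and a stochastic threshold, which this sketch omits. All parameters are illustrative round numbers, not fitted values from the paper.

```python
# Leaky integrate-and-fire neuron: integrate the membrane equation
#   tau * dV/dt = -(V - V_rest) + R * I
# with forward Euler; emit a spike and reset when V crosses threshold.
def simulate_lif(i_ext, t_max=200.0, dt=0.1,
                 tau=20.0, v_rest=-70.0, v_thresh=-50.0, v_reset=-65.0,
                 r_m=10.0):
    v, spikes = v_rest, 0
    for _ in range(int(t_max / dt)):
        v += dt / tau * (-(v - v_rest) + r_m * i_ext)
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

# With these parameters, 3 nA drives the cell above threshold; 1 nA does not.
print("spikes with 3 nA:", simulate_lif(3.0))
print("spikes with 1 nA:", simulate_lif(1.0))
```

Fitting such a model to recorded voltage traces reduces characterization of a neuron to estimating a handful of interpretable parameters, which is what makes high-throughput automation feasible.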

  10. Analytical model for thermal boundary conductance and equilibrium thermal accommodation coefficient at solid/gas interfaces.

    PubMed

    Giri, Ashutosh; Hopkins, Patrick E

    2016-02-28

    We develop an analytical model for the thermal boundary conductance between a solid and a gas. By considering the thermal fluxes in the solid and the gas, we describe the transmission of energy across the solid/gas interface with diffuse mismatch theory. From the predicted thermal boundary conductances across solid/gas interfaces, the equilibrium thermal accommodation coefficient is determined and compared to predictions from molecular dynamics simulations on the model solid-gas systems. We show that our model is applicable for modeling the thermal accommodation of gases on solid surfaces at non-cryogenic temperatures and relatively strong solid-gas interactions (εsf ≳ kBT). PMID:26931716

  11. Improved automated diagnosis of misfire in internal combustion engines based on simulation models

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Bond Randall, Robert

    2015-12-01

    In this paper, a new advance in the application of Artificial Neural Networks (ANNs) to the automated diagnosis of misfires in Internal Combustion (IC) engines is detailed. The automated diagnostic system comprises three stages: fault detection, fault localization and fault severity identification. In particular, in the severity identification stage, separate Multi-Layer Perceptron networks (MLPs) with saturating linear transfer functions were designed for individual speed conditions, so they could achieve finer classification. In order to obtain sufficient data for the network training, numerical simulation was used to simulate different ranges of misfires in the engine. The simulation models need to be updated and evaluated using experimental data, so a series of experiments was first carried out on the engine test rig to capture the vibration signals for both the normal condition and a range of misfires. Two methods were used for the misfire diagnosis: one based on the torsional vibration signals of the crankshaft and the other on the angular acceleration signals (rotational motion) of the engine block. Following the signal processing of the experimental and simulation signals, the best features were selected as the inputs to the ANN networks. The ANN systems were trained using only the simulated data and tested using real experimental cases, indicating that the simulation model remains valid over a wider range of faults. The final results have shown that the simulation-based diagnostic system can efficiently diagnose misfire, including its location and severity.
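
    The severity-identification network type named above, an MLP with a saturating linear hidden transfer function, can be sketched as a forward pass. The weights and the two input features below are arbitrary illustrative values, not quantities trained on engine data.

```python
# Forward pass of a tiny MLP with a saturating linear ("satlin") hidden
# transfer function. Weights are illustrative, not trained values.
def satlin(x):
    """Saturating linear transfer: clip to [0, 1]."""
    return max(0.0, min(1.0, x))

def mlp_forward(features, w_hidden, b_hidden, w_out, b_out):
    hidden = [satlin(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Two features (e.g., vibration harmonic amplitudes), three hidden units.
w_h = [[0.8, -0.2], [0.5, 0.5], [-0.3, 0.9]]
b_h = [0.0, -0.1, 0.2]
w_o = [0.4, 0.3, 0.3]
severity = mlp_forward([0.6, 0.4], w_h, b_h, w_o, b_out=0.05)
print(f"severity score: {severity:.3f}")
```

Training separate such networks per speed condition, as the paper does, keeps each network's input distribution narrow and allows finer severity classification.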

  12. Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter

    NASA Technical Reports Server (NTRS)

    Belknap, Shannon; Zhang, Michael

    2013-01-01

    The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to quickly and accurately perform 3D thermal analyses on damaged lower surface acreage tiles and the structures beneath the damaged locations on a Space Shuttle Orbiter. The 3D model builder created both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to quickly and accurately create a 3D model of a damaged lower surface tile on the orbiter. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which takes a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage, the correct tile type, and the tile, structure, and SIP thicknesses at the damage site, so that the model builder program can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.

  13. TOBAGO — a semi-automated approach for the generation of 3-D building models

    NASA Astrophysics Data System (ADS)

    Gruen, Armin

    3-D city models are in increasing demand for a great number of applications. Photogrammetry is a relevant technology that can provide an abundance of geometric, topologic and semantic information concerning these models. The pressure to generate a large amount of data with a high degree of accuracy and completeness poses a great challenge to photogrammetry. The development of automated and semi-automated methods for the generation of those data sets is therefore a key issue in photogrammetric research. We present in this article a strategy and methodology for the efficient generation of even fairly complex building models. Within this concept the operator measures the house roofs from a stereomodel in the form of an unstructured point cloud. According to our experience this can be done very quickly: even a non-experienced operator can measure several hundred roofs or roof units per day. In a second step we fit generic building models fully automatically to these point clouds. The structure information is inherently included in these building models. In this way geometric, topologic and even semantic data can be handed over to a CAD system, in our case AutoCad, for further visualization and manipulation. The structuring is achieved in three steps. First, a classifier recognizes the class of houses to which a particular roof point cloud belongs; this recognition step is primarily based on the analysis of the number of ridge points. In the second and third steps the concrete topological relations between roof points are investigated and generic building models are fitted to the point clouds. Based on the technique of constraint-based reasoning, two geometrical parsers solve this problem. We have tested the methodology under a variety of different conditions in several pilot projects. The results indicate the good performance of our approach. In addition we demonstrate how the results can be used for visualization (texture

  14. Framework for non-coherent interface models at finite displacement jumps and finite strains

    NASA Astrophysics Data System (ADS)

    Ottosen, Niels Saabye; Ristinmaa, Matti; Mosler, Jörn

    2016-05-01

    This paper deals with a novel constitutive framework suitable for non-coherent interfaces, such as cracks, undergoing large deformations in a geometrically exact setting. For this type of interface, the displacement field shows a jump across the interface. Within the engineering community, so-called cohesive zone models are frequently applied in order to describe non-coherent interfaces. However, for existing models to comply with the restrictions imposed by (a) thermodynamical consistency (e.g., the second law of thermodynamics), (b) balance equations (in particular, balance of angular momentum) and (c) material frame indifference, these models are essentially fiber models, i.e. models where the traction vector is collinear with the displacement jump. This constrains the ability to model shear and, in addition, anisotropic effects are excluded. A novel, extended constitutive framework which is consistent with the above mentioned fundamental physical principles is elaborated in this paper. In addition to the classical tractions associated with a cohesive zone model, the main idea is to consider additional tractions related to membrane-like forces and out-of-plane shear forces acting within the interface. For zero displacement jump, i.e. coherent interfaces, this framework degenerates to existing formulations presented in the literature. For hyperelasticity, the Helmholtz energy of the proposed novel framework depends on the displacement jump as well as on the tangent vectors of the interface with respect to the current configuration - or equivalently - the Helmholtz energy depends on the displacement jump and the surface deformation gradient. It turns out that by defining the Helmholtz energy in terms of the invariants of these variables, all above-mentioned fundamental physical principles are automatically fulfilled. Extensions of the novel framework necessary for material degradation (damage) and plasticity are also covered.

  15. Theoretical modelling of the semiconductor-electrolyte interface

    NASA Astrophysics Data System (ADS)

    Schelling, Patrick Kenneth

    We have developed tight-binding models of transition metal oxides. In contrast to many tight-binding models, these models include a description of electron-electron interactions. After parameterizing to bulk first-principles calculations, we demonstrated the transferability of the model by calculating atomic and electronic structure of rutile surfaces, which compared well with experiment and first-principles calculations. We also studied the structure of twist grain boundaries in rutile. Molecular dynamics simulations using the model were also carried out to describe polaron localization. We have also demonstrated that tight-binding models can be constructed to describe metallic systems. The computational cost of tight-binding simulations was greatly reduced by incorporating O(N) electronic structure methods. We have also interpreted photoluminescence experiments on GaAs electrodes in contact with an electrolyte using drift-diffusion models. Electron transfer velocities were obtained by fitting to experimental results.

  16. A multilayered sharp interface model of coupled freshwater and saltwater flow in coastal systems: model development and application

    USGS Publications Warehouse

    Essaid, H.I.

    1990-01-01

    The model allows for regional simulation of coastal groundwater conditions, including the effects of saltwater dynamics on the freshwater system. Vertically integrated freshwater and saltwater flow equations incorporating the interface boundary condition are solved within each aquifer. Leakage through confining layers is calculated by Darcy's law, accounting for density differences across the layer. The locations of the interface tip and toe, within grid blocks, are tracked by linearly extrapolating the position of the interface. The model has been verified using available analytical solutions and experimental results and applied to the Soquel-Aptos basin, Santa Cruz County, California. -from Author
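
    The density-corrected Darcy leakage step described above can be sketched by converting point-water heads on either side of the confining layer to equivalent freshwater heads before applying Darcy's law. This is a standard textbook construction offered as background; the parameter values are illustrative, not from the Soquel-Aptos model.

```python
# Leakage through a confining layer by Darcy's law, accounting for density
# differences by converting point-water heads to equivalent freshwater heads
# at the layer midpoint elevation. All values are illustrative.
def leakage(k_prime, b_prime, head_top, head_bot, elev_mid,
            rho_top=1000.0, rho_bot=1025.0, rho_f=1000.0):
    """Vertical specific discharge (m/s), positive downward."""
    hf_top = elev_mid + (rho_top / rho_f) * (head_top - elev_mid)
    hf_bot = elev_mid + (rho_bot / rho_f) * (head_bot - elev_mid)
    return k_prime / b_prime * (hf_top - hf_bot)

# Fresh water above, salt water below a 2 m thick layer at -20 m elevation.
q = leakage(k_prime=1e-8, b_prime=2.0, head_top=5.0, head_bot=4.0,
            elev_mid=-20.0)
print(f"leakage: {q:.2e} m/s downward")
```

Without the freshwater-head conversion the same heads would give a larger apparent gradient, which is why the density correction matters in coastal systems.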

  17. Model of bound interface dynamics for coupled magnetic domain walls

    NASA Astrophysics Data System (ADS)

    Politi, P.; Metaxas, P. J.; Jamet, J.-P.; Stamps, R. L.; Ferré, J.

    2011-08-01

    A domain wall in a ferromagnetic system will move under the action of an external magnetic field. Ultrathin Co layers sandwiched between Pt have been shown to be a suitable experimental realization of a weakly disordered 2D medium in which to study the dynamics of 1D interfaces (magnetic domain walls). The behavior of these systems is encapsulated in the velocity-field response v(H) of the domain walls. In a recent paper [P. J. Metaxas et al., Phys. Rev. Lett. 104, 237206 (2010)] we studied the effect of ferromagnetic coupling between two such ultrathin layers, each exhibiting different v(H) characteristics. The main result was the existence of bound states over finite-width field ranges, wherein walls in the two layers moved together at the same speed. Here we discuss in detail the theory of domain wall dynamics in coupled systems. In particular, we show that a bound creep state is expected for vanishing H and we give the analytical, parameter-free expression for its velocity which agrees well with experimental results.

  18. An automated shell for management of parametric dispersion/deposition modeling

    SciTech Connect

    Paddock, R.A.; Absil, M.J.G.; Peerenboom, J.P.; Newsom, D.E.; North, M.J.; Coskey, R.J. Jr.

    1994-03-01

    In 1993, the US Army tasked Argonne National Laboratory to perform a study of chemical agent dispersion and deposition for the Chemical Stockpile Emergency Preparedness Program using an existing Army computer model. The study explored a wide range of situations in terms of six parameters: agent type, quantity released, liquid droplet size, release height, wind speed, and atmospheric stability. A number of discrete values of interest were chosen for each parameter resulting in a total of 18,144 possible different combinations of parameter values. Therefore, the need arose for a systematic method to assemble the large number of input streams for the model, filter out unrealistic combinations of parameter values, run the model, and extract the results of interest from the extensive model output. To meet these needs, we designed an automated shell for the computer model. The shell processed the inputs, ran the model, and reported the results of interest. By doing so, the shell compressed the time needed to perform the study and freed the researchers to focus on the evaluation and interpretation of the model predictions. The results of the study are still under review by the Army and other agencies; therefore, it would be premature to discuss the results in this paper. However, the design of the shell could be applied to other hazards for which multiple-parameter modeling is performed. This paper describes the design and operation of the shell as an example for other hazards and models.
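
    The shell's core bookkeeping, enumerating every combination of discrete parameter values and filtering out unrealistic ones before queuing model runs, can be sketched with the standard-library `itertools.product`. The six parameter names mirror the study, but the specific values and the example filter below are hypothetical stand-ins (the study's full grid had 18,144 combinations).

```python
import itertools

# Enumerate all combinations of discrete parameter values, then filter.
# Values and the realism filter are illustrative, not the study's.
params = {
    "agent": ["GB", "VX"],
    "quantity_kg": [10, 100, 1000],
    "droplet_um": [100, 500],
    "release_m": [0, 10],
    "wind_mps": [1, 5, 10],
    "stability": ["A", "D", "F"],
}

def realistic(combo):
    # example filter: very stable atmosphere (class F) excludes high wind
    return not (combo["stability"] == "F" and combo["wind_mps"] == 10)

keys = list(params)
combos = [dict(zip(keys, values))
          for values in itertools.product(*params.values())]
runs = [c for c in combos if realistic(c)]
print(f"{len(combos)} raw combinations, {len(runs)} after filtering")
```

Each surviving dictionary would then be rendered into one model input stream, run, and have its outputs of interest harvested, which is exactly the drudgery the shell automated.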

  19. Interfaces with internal structures in generalized rock-paper-scissors models.

    PubMed

    Avelino, P P; Bazeia, D; Losano, L; Menezes, J; de Oliveira, B F

    2014-04-01

    In this work we investigate the development of stable dynamical structures along interfaces separating domains belonging to enemy partnerships in the context of cyclic predator-prey models with an even number of species N≥8. We use both stochastic and field theory simulations in one and two spatial dimensions, as well as analytical arguments, to describe the association at the interfaces of mutually neutral individuals belonging to enemy partnerships and to probe their role in the development of the dynamical structures at the interfaces. We identify an interesting behavior associated with the symmetric or asymmetric evolution of the interface profiles depending on whether N/2 is odd or even, respectively. We also show that the macroscopic evolution of the interface network is not very sensitive to the internal structure of the interfaces. Although this work focuses on cyclic predator-prey models with an even number of species, we argue that the results are expected to be quite generic in the context of spatial stochastic May-Leonard models. PMID:24827281

  20. Analytic Element Modeling of Steady Interface Flow in Multilayer Aquifers Using AnAqSim.

    PubMed

    Fitts, Charles R; Godwin, Joshua; Feiner, Kathleen; McLane, Charles; Mullendore, Seth

    2015-01-01

    This paper presents the analytic element modeling approach implemented in the software AnAqSim for simulating steady groundwater flow with a sharp fresh-salt interface in multilayer (three-dimensional) aquifer systems. Compared with numerical methods for variable-density interface modeling, this approach allows quick model construction and can yield useful guidance about the three-dimensional configuration of an interface even at a large scale. The approach employs subdomains and multiple layers as outlined by Fitts (2010) with the addition of discharge potentials for shallow interface flow (Strack 1989). The following simplifying assumptions are made: steady flow, a sharp interface between fresh- and salt water, static salt water, and no resistance to vertical flow and hydrostatic heads within each fresh water layer. A key component of this approach is a transition to a thin fixed minimum fresh water thickness mode when the fresh water thickness approaches zero. This allows the solution to converge and determine the steady interface position without a long transient simulation. The approach is checked against the widely used numerical codes SEAWAT and SWI/MODFLOW and a hypothetical application of the method to a coastal wellfield is presented. PMID:24942663
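
    The sharp-interface assumptions listed above (static salt water, hydrostatic fresh water) lead to the classical Ghyben-Herzberg relation, in which the interface depth below sea level is proportional to the freshwater head above it. This is background theory offered as a sketch, not AnAqSim code; the densities are typical values.

```python
# Ghyben-Herzberg relation: depth of the fresh-salt interface below sea
# level, given the fresh water head above sea level. Typical densities.
RHO_FRESH = 1000.0   # kg/m^3
RHO_SALT = 1025.0    # kg/m^3

def interface_depth(head_m):
    """Interface depth below sea level (m) for freshwater head head_m."""
    return RHO_FRESH / (RHO_SALT - RHO_FRESH) * head_m

# A 0.5 m freshwater head implies an interface about 20 m below sea level
# (the familiar factor of ~40).
print(f"interface depth: {interface_depth(0.5):.1f} m")
```

When pumping draws the head toward zero, this depth collapses, which is why the code's transition to a fixed minimum freshwater thickness is needed for the solution to converge.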

  1. Pilot interaction with cockpit automation 2: An experimental study of pilots' model and awareness of the Flight Management System

    NASA Technical Reports Server (NTRS)

    Sarter, Nadine B.; Woods, David D.

    1994-01-01

    Technological developments have made it possible to automate more and more functions on the commercial aviation flight deck and in other dynamic high-consequence domains. This increase in the degrees of freedom in design has shifted questions away from narrow technological feasibility. Many concerned groups, from designers and operators to regulators and researchers, have begun to ask questions about how we should use the possibilities afforded by technology skillfully to support and expand human performance. In this article, we report on an experimental study that addressed these questions by examining pilot interaction with the current generation of flight deck automation. Previous results on pilot-automation interaction derived from pilot surveys, incident reports, and training observations have produced a corpus of features and contexts in which human-machine coordination is likely to break down (e.g., automation surprises). We used these data to design a simulated flight scenario that contained a variety of probes designed to reveal pilots' mental model of one major component of flight deck automation: the Flight Management System (FMS). The events within the scenario were also designed to probe pilots' ability to apply their knowledge and understanding in specific flight contexts and to examine their ability to track the status and behavior of the automated system (mode awareness). Although pilots were able to 'make the system work' in standard situations, the results reveal a variety of latent problems in pilot-FMS interaction that can affect pilot performance in nonnormal time critical situations.

  2. Thermal modeling of roll and strip interfaces in rolling processes. Part 2: Simulation

    SciTech Connect

    Tseng, A.A.

    1999-02-12

    Part 1 of this paper reviewed the modeling approaches and correlations used to study the interface heat transfer phenomena of the roll-strip contact region in rolling processes. The thermal contact conductance approach was recommended for modeling the interface phenomena. To illustrate, the recommended approach and selected correlations are adopted in the present study for modeling of the roll-strip interface region. The specific values of the parameters used to correlate the corresponding thermal contact conductance for the typical cold and hot rolling of steels are first estimated. The influence of thermal contact resistance on the temperature distributions of the roll and strip is then studied. Comparing the present simulation results with previously published experimental and analytical results shows that the thermal contact conductance approach and numerical models used can reliably simulate the heat transfer behavior of the rolling process.

  3. Automated measurement of mouse apolipoprotein B: convenient screening tool for mouse models of atherosclerosis.

    PubMed

    Levine, D M; Williams, K J

    1997-04-01

    Although mice are commonly used for studies of atherosclerosis, investigators have had no convenient way to quantify apolipoprotein (apo) B, the major protein of atherogenic lipoproteins, in this model. We now report an automated immunoturbidimetric assay for mouse apo B with an NCCLS imprecision study CV < 5%. Added hemoglobin up to 50 g/L did not interfere with the assay, nor did one freeze-thaw cycle of serum samples. Assay linearity extends to apo B concentrations of 325 mg/L. We have used the assay to determine serum apo B concentrations under several atherogenic conditions, including the apo E "knock-out" genotype and treatment with a high-cholesterol diet. Our assay can be used to survey inbred mouse strains for variants in apo B concentrations or regulation. Moreover, the mouse can now be used as a convenient small-animal model to screen compounds that may lower apo B concentrations. PMID:9105271

  4. CHANNEL MORPHOLOGY TOOL (CMT): A GIS-BASED AUTOMATED EXTRACTION MODEL FOR CHANNEL GEOMETRY

    SciTech Connect

    JUDI, DAVID; KALYANAPU, ALFRED; MCPHERSON, TIMOTHY; BERSCHEID, ALAN

    2007-01-17

    This paper describes an automated Channel Morphology Tool (CMT) developed in ArcGIS 9.1 environment. The CMT creates cross-sections along a stream centerline and uses a digital elevation model (DEM) to create station points with elevations along each of the cross-sections. The generated cross-sections may then be exported into a hydraulic model. Along with the rapid cross-section generation the CMT also eliminates any cross-section overlaps that might occur due to the sinuosity of the channels using the Cross-section Overlap Correction Algorithm (COCoA). The CMT was tested by extracting cross-sections from a 5-m DEM for a 50-km channel length in Houston, Texas. The extracted cross-sections were compared directly with surveyed cross-sections in terms of the cross-section area. Results indicated that the CMT-generated cross-sections satisfactorily matched the surveyed data.
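
    The cross-section generation step can be sketched as laying out station points along a line perpendicular to the local stream direction. The coordinates and spacing below are illustrative; the actual CMT additionally samples a DEM for station elevations and applies the COCoA overlap correction.

```python
import math

# At a point on the stream centerline, generate station points along a line
# perpendicular to the local flow direction (illustrative geometry only).
def cross_section_stations(pt, next_pt, half_width, spacing):
    """Return station (x, y) points perpendicular to segment pt->next_pt."""
    dx, dy = next_pt[0] - pt[0], next_pt[1] - pt[1]
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length     # unit normal to the centerline
    n_side = int(half_width / spacing)
    return [(pt[0] + nx * spacing * i, pt[1] + ny * spacing * i)
            for i in range(-n_side, n_side + 1)]

# Centerline heading east: stations run north-south through the point.
stations = cross_section_stations((0.0, 0.0), (10.0, 0.0),
                                  half_width=20.0, spacing=5.0)
print(len(stations), "stations:", stations)
```

In a sinuous channel, consecutive normals can cross, which is the overlap condition the CMT's correction algorithm is designed to eliminate.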

  5. The use of automated parameter searches to improve ion channel kinetics for neural modeling.

    PubMed

    Hendrickson, Eric B; Edgerton, Jeremy R; Jaeger, Dieter

    2011-10-01

    The voltage and time dependence of ion channels can be regulated, notably by phosphorylation, interaction with phospholipids, and binding to auxiliary subunits. Many parameter variation studies have set conductance densities free while leaving kinetic channel properties fixed as the experimental constraints on the latter are usually better than on the former. Because individual cells can tightly regulate their ion channel properties, we suggest that kinetic parameters may be profitably set free during model optimization in order to both improve matches to data and refine kinetic parameters. To this end, we analyzed the parameter optimization of reduced models of three electrophysiologically characterized and morphologically reconstructed globus pallidus neurons. We performed two automated searches with different types of free parameters. First, conductance density parameters were set free. Even the best resulting models exhibited unavoidable problems which were due to limitations in our channel kinetics. We next set channel kinetics free for the optimized density matches and obtained significantly improved model performance. Some kinetic parameters consistently shifted to similar new values in multiple runs across three models, suggesting the possibility for tailored improvements to channel models. These results suggest that optimized channel kinetics can improve model matches to experimental voltage traces, particularly for channels characterized under different experimental conditions than recorded data to be matched by a model. The resulting shifts in channel kinetics from the original template provide valuable guidance for future experimental efforts to determine the detailed kinetics of channel isoforms and possible modulated states in particular types of neurons. PMID:21243419
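The two-stage search the abstract describes can be illustrated with a toy model: fit a conductance density first with kinetics fixed, then free a kinetic parameter (here a half-activation voltage shift). The model form, grids, and values below are all invented for illustration:

```python
import numpy as np

def activation(v, v_half):
    return 1.0 / (1.0 + np.exp(-(v - v_half) / 10.0))  # Boltzmann activation

def trace(v, g, v_half):
    return g * activation(v, v_half) * (v + 80.0)      # toy current response

v = np.linspace(-60.0, 40.0, 101)
target = trace(v, 2.0, 5.0)                            # "experimental" data

def best_fit(g_grid, vh_grid):
    # brute-force grid search; returns (error, g, v_half)
    return min((np.mean((trace(v, g, vh) - target) ** 2), g, vh)
               for g in g_grid for vh in vh_grid)

# stage 1: conductance density free, kinetics fixed at the template value (0 mV)
err1, g1, _ = best_fit(np.linspace(0.5, 4.0, 36), [0.0])
# stage 2: kinetics freed around the stage-1 density match
err2, g2, vh2 = best_fit([g1], np.linspace(-10.0, 10.0, 81))
print(err1, err2, vh2)   # freeing the kinetic parameter improves the match
```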

  6. General MACOS Interface for Modeling and Analysis for Controlled Optical Systems

    NASA Technical Reports Server (NTRS)

    Sigrist, Norbert; Basinger, Scott A.; Redding, David C.

    2012-01-01

    The General MACOS Interface (GMI) for Modeling and Analysis for Controlled Optical Systems (MACOS) enables the use of MATLAB as a front-end for JPL's critical optical modeling package, MACOS. MACOS is JPL's in-house optical modeling software, which has proven to be a superb tool for advanced systems engineering of optical systems. GMI, coupled with MACOS, allows for seamless interfacing with modeling tools from other disciplines to make possible integration of dynamics, structures, and thermal models with the addition of control systems for deformable optics and other actuated optics. This software package is designed as a tool for analysts to quickly and easily use MACOS without needing to be an expert at programming MACOS. The strength of MACOS is its ability to interface with various modeling/development platforms, allowing evaluation of system performance with thermal, mechanical, and optical modeling parameter variations. GMI provides an improved means for accessing selected key MACOS functionalities. The main objective of GMI is to marry the vast mathematical and graphical capabilities of MATLAB with the powerful optical analysis engine of MACOS, thereby providing a useful tool to anyone who can program in MATLAB. GMI also improves modeling efficiency by eliminating the need to write an interface function for each task/project, reducing error sources, speeding up user/modeling tasks, and making MACOS well suited for fast prototyping.

  7. Automated generation of uniform Group Technology part codes from solid model data

    SciTech Connect

    Ames, A.L.

    1987-01-01

    Group Technology is a manufacturing theory based on the identification of similar parts and the subsequent grouping of these parts to enhance the manufacturing process. Part classification and coding systems group parts into families based on design and manufacturing attributes. Traditionally, humans code parts by examining a blueprint of the part to find important features as defined in a set of part classification rules. This process can be difficult and time-consuming due to the complexity of the classification system. Coding specifications can require considerable interpretation, making consistency a problem for organizations employing many (human) part coders. A solution to these problems is to automate the part coding process in software, using a CAD database as input. It is straightforward to translate the part classification rules into a rule-based expert system. A more difficult task is the recognition of part coding features from a CAD database. Previous research in feature recognition has concentrated on material removal features (depressions such as holes, pockets and slots). Part classification requires the ability to recognize such features, plus other features such as hole patterns, symmetries and overall part shape. This paper extends feature recognition to include part classification and coding features and describes an expert system for automated part classification and coding that is under development. This system accepts boundary-representation solid model data and generates a part code. Specific feature recognition problems (such as intersecting features) and the methods developed to solve these problems are presented.
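The rule-based mapping from recognized features to code digits can be illustrated with a toy coder. The digits and rules below are invented and do not follow any real GT coding standard; real feature recognition from a boundary-representation model is the hard part the paper addresses:

```python
# Minimal rule-based coder in the spirit described: features already recognized
# from the solid model (here just a hand-written dict) map to code digits.
def classify_part(features):
    """Return a 3-digit Group Technology-style code from recognized features."""
    shape, holes, symmetric = features["shape"], features["holes"], features["symmetric"]
    d1 = {"rotational": 1, "prismatic": 2, "sheet": 3}.get(shape, 0)  # overall shape
    d2 = 0 if holes == 0 else (1 if holes == 1 else 2)                # hole pattern
    d3 = 1 if symmetric else 0                                        # symmetry flag
    return f"{d1}{d2}{d3}"

part = {"shape": "rotational", "holes": 4, "symmetric": True}
print(classify_part(part))  # -> "121"
```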

  8. Automated Reconstruction of Walls from Airborne LIDAR Data for Complete 3d Building Modelling

    NASA Astrophysics Data System (ADS)

    He, Y.; Zhang, C.; Awrangjeb, M.; Fraser, C. S.

    2012-07-01

    Automated 3D building model generation continues to attract research interests in photogrammetry and computer vision. Airborne Light Detection and Ranging (LIDAR) data with increasing point density and accuracy has been recognized as a valuable source for automated 3D building reconstruction. While considerable achievements have been made in roof extraction, limited research has been carried out in modelling and reconstruction of walls, which constitute important components of a full building model. Low point density and irregular point distribution of LIDAR observations on vertical walls render this task complex. This paper develops a novel approach for wall reconstruction from airborne LIDAR data. The developed method commences with point cloud segmentation using a region growing approach. Seed points for planar segments are selected through principal component analysis, and points in the neighbourhood are collected and examined to form planar segments. Afterwards, segment-based classification is performed to identify roofs, walls and planar ground surfaces. For walls with sparse LIDAR observations, a search is conducted in the neighbourhood of each individual roof segment to collect wall points, and the walls are then reconstructed using geometrical and topological constraints. Finally, walls which were not illuminated by the LIDAR sensor are determined via both reconstructed roof data and neighbouring walls. This leads to the generation of topologically consistent and geometrically accurate and complete 3D building models. Experiments have been conducted in two test sites in the Netherlands and Australia to evaluate the performance of the proposed method. Results show that planar segments can be reliably extracted in the two reported test sites, which have different point density, and the building walls can be correctly reconstructed if the walls are illuminated by the LIDAR sensor.
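The seed-selection idea, PCA over a local point neighbourhood to test planarity and orient the segment, can be sketched as follows. The planarity score and the wall/roof threshold are illustrative choices, not the paper's:

```python
import numpy as np

def segment_plane(points):
    """Fit a plane to a point neighbourhood via PCA; return (normal, planarity).
    Planarity is high when the smallest covariance eigenvalue is small."""
    pts = np.asarray(points, float)
    cov = np.cov((pts - pts.mean(axis=0)).T)
    evals, evecs = np.linalg.eigh(cov)           # eigenvalues ascending
    normal = evecs[:, 0]                         # direction of least variance
    planarity = 1.0 - evals[0] / evals.sum()
    return normal, planarity

def label(normal, vertical_tol=0.2):
    """Crude segment label from the vertical component of the plane normal."""
    nz = abs(normal[2] / np.linalg.norm(normal))
    return "wall" if nz < vertical_tol else "roof_or_ground"

rng = np.random.default_rng(0)
# noisy vertical plane x = 0 (a wall): the normal should point along x
wall_pts = np.column_stack([0.01 * rng.standard_normal(200),
                            rng.uniform(0, 5, 200), rng.uniform(0, 3, 200)])
n, p = segment_plane(wall_pts)
print(label(n), round(p, 3))
```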

  9. Further investigations of automated surface observing system (ASOS) winds used in air quality modeling applications

    SciTech Connect

    Brower, R.P.; Jones, W.B.; Sherwell, J.

    1999-07-01

    Since 1992, a significant shift in the way standard surface meteorological data are observed and collected has occurred across the country. The National Weather Service, the Federal Aviation Administration, and the Department of Defense have been deploying the Automated Surface Observing System (ASOS) at nearly one thousand sites. Prior to ASOS, manual observation and recording were the norm. With the advent of ASOS, an unprecedented level of meteorological data is now available; observations of standard meteorological variables are available almost real-time at more sites. However, with ASOS, meteorological data are being gathered in a fundamentally different way. New automated instruments sample, analyze, and record meteorological observations without human intervention. Many of these meteorological observations are key inputs to predictive air quality models. Reliable estimates of plume transport and dispersion require reliable and available meteorological data. The effect of the ASOS method of data collection on the dispersion modeling community is not clear. Because the hourly data now being reported at most stations across the country are being gathered in a fundamentally different way than previously, it is prudent to examine the differences between hourly meteorological observations gathered before and after ASOS. A preliminary analysis [1] of pre-ASOS and ASOS data suggested that the differences in the observations could impact the data's application to air quality models. This expanded study examines more thoroughly the differences between wind data gathered before and after ASOS implementation in order to identify potential ramifications for air quality modeling. Pre-ASOS and ASOS data, from five stations in and around Maryland that represent the diversity of urbanization and topography of the region and that have a reasonably long record of ASOS observations, are examined.

  10. Atomistic Cohesive Zone Models for Interface Decohesion in Metals

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin I.; Saether, Erik; Glaessgen, Edward H.

    2009-01-01

    Using a statistical mechanics approach, a cohesive-zone law in the form of a traction-displacement constitutive relationship characterizing the load transfer across the plane of a growing edge crack is extracted from atomistic simulations for use within a continuum finite element model. The methodology for the atomistic derivation of a cohesive-zone law is presented. This procedure can be implemented to build cohesive-zone finite element models for simulating fracture in nanocrystalline or ultrafine grained materials.
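For context, a cohesive-zone law is simply a traction-displacement curve that the continuum finite element model consumes. The block below evaluates a generic exponential traction-separation form commonly used in continuum work; the atomistically derived law of the paper would replace this closed form, and the parameter values here are invented:

```python
import numpy as np

def traction(delta, t_max=2.0, delta_0=0.5):
    """Traction across the crack plane vs. opening displacement: rises to a
    peak t_max at delta == delta_0, then decays (softening branch)."""
    x = delta / delta_0
    return t_max * x * np.exp(1.0 - x)

d = np.linspace(0.0, 3.0, 301)
t = traction(d)
print(d[np.argmax(t)], t.max())   # peak sits at delta_0 with traction t_max
```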

  11. Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments

    NASA Astrophysics Data System (ADS)

    Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin

    Developments in computer technology over the last decade have changed the way computers are used. The emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments, and especially their user interfaces, a challenging and time-consuming task. We propose a model-based approach, which allows adapting the user interface at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language, defining the fundamentals and constraints of the user interface, a runtime architecture exploits the description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting applications even to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as running example to illustrate the approach.

  12. Fullerene film on metal surface: Diffusion of metal atoms and interface model

    SciTech Connect

    Li, Wen-jie; Li, Hai-Yang; Li, Hong-Nian; Wang, Peng; Wang, Xiao-Xiong; Wang, Jia-Ou; Wu, Rui; Qian, Hai-Jie; Ibrahim, Kurash

    2014-05-12

    We try to understand the fact that fullerene film behaves as n-type semiconductor in electronic devices and establish a model describing the energy level alignment at fullerene/metal interfaces. The C60/Ag(100) system was taken as a prototype and studied with photoemission measurements. The photoemission spectra revealed that the Ag atoms of the substrate diffused far into the C60 film and donated electrons to the molecules. So the C60 film became n-type semiconductor with the Ag atoms acting as dopants. The C60/Ag(100) interface should be understood as two sub-interfaces on both sides of the molecular layer directly contacting with the substrate. One sub-interface is Fermi level alignment, and the other is vacuum level alignment.

  13. Proteins at air-water interfaces: a coarse-grained model.

    PubMed

    Cieplak, Marek; Allan, Daniel B; Leheny, Robert L; Reich, Daniel H

    2014-11-01

    We present a coarse-grained model to describe the adsorption and deformation of proteins at an air-water interface. The interface is introduced empirically in the form of a localized field that couples to a hydropathy scale of amino acids. We consider three kinds of proteins: protein G, egg-white lysozyme, and hydrophobin. We characterize the nature of the deformation and the orientation of the proteins induced by their proximity to and association with the interface. We also study protein diffusion in the layer formed at the interface and show that the diffusion slows with increasing concentration in a manner similar to that for a colloidal suspension approaching the glass transition. PMID:25310625

  14. Toward automated model building from video in computer-assisted diagnoses in colonoscopy

    NASA Astrophysics Data System (ADS)

    Koppel, Dan; Chen, Chao-I.; Wang, Yuan-Fang; Lee, Hua; Gu, Jia; Poirson, Allen; Wolters, Rolf

    2007-03-01

    A 3D colon model is an essential component of a computer-aided diagnosis (CAD) system in colonoscopy to assist surgeons in visualization, surgical planning, and training. This research is thus aimed at developing the ability to construct a 3D colon model from endoscopic videos (or images). This paper summarizes our ongoing research in automated model building in colonoscopy. We have developed the mathematical formulations and algorithms for modeling static, localized 3D anatomic structures within a colon that can be rendered from multiple novel view points for close scrutiny and precise dimensioning. This ability is useful for the scenario when a surgeon notices some abnormal tissue growth and wants a close inspection and precise dimensioning. Our modeling system uses only video images and follows a well-established computer-vision paradigm for image-based modeling. We extract prominent features from images and establish their correspondences across multiple images by continuous tracking and discrete matching. We then use these feature correspondences to infer the camera's movement. The camera motion parameters allow us to rectify images into a standard stereo configuration and calculate pixel movements (disparity) in these images. The inferred disparity is then used to recover 3D surface depth. The inferred 3D depth, together with texture information recorded in images, allow us to construct a 3D model with both structure and appearance information that can be rendered from multiple novel view points.
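The final depth-recovery step reduces, once frames are rectified into a standard stereo pair, to similar triangles: depth is focal length times baseline over disparity. A minimal sketch with invented camera numbers:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Z = f * B / d for each pixel; zero disparity means no match (infinite)."""
    d = np.asarray(disparity_px, float)
    z = np.full_like(d, np.inf)
    valid = d > 0
    z[valid] = focal_px * baseline_mm / d[valid]
    return z  # depth in mm

disp = np.array([[8.0, 4.0], [2.0, 0.0]])      # 0 = no match found
z = depth_from_disparity(disp, focal_px=400.0, baseline_mm=5.0)
print(z)   # larger disparity -> closer surface
```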

  15. Rethinking Design Process: Using 3D Digital Models as an Interface in Collaborative Session

    ERIC Educational Resources Information Center

    Ding, Suining

    2008-01-01

    This paper describes a pilot study for an alternative design process by integrating a designer-user collaborative session with digital models. The collaborative session took place in a 3D AutoCAD class for a real world project. The 3D models served as an interface for designer-user collaboration during the design process. Students not only learned…

  16. Dosimetry Modeling for Predicting Radiolytic Production at the Spent Fuel - Water Interface

    SciTech Connect

    Miller, William H.; Kline, Amanda J.; Hanson, Brady D.

    2006-04-30

    Modeling of the alpha, beta, and gamma dose from spent fuel as a function of particle size and fuel to water ratio was examined. These doses will be combined with modeling of G values and interactions to determine the concentration of various species formed at the fuel water interface and their effect on dissolution rates.

  17. Atomistic modeling of the Au droplet-GaAs interface for size-selective nanowire growth

    NASA Astrophysics Data System (ADS)

    Sakong, Sung; Du, Yaojun A.; Kratzer, Peter

    2013-10-01

    Density functional theory calculations within both the local density approximation and the generalized gradient approximation are used to study Au-catalyzed growth under near-equilibrium conditions. We discuss both the chemical equilibrium of a GaAs nanowire with an As2 gas atmosphere and the mechanical equilibrium between the capillary forces at the nanowire tip. For the latter goal, the interface between the gold nanoparticle and the nanowire is modeled atomically within a slab approach, and the interface energies are evaluated from the total energies of the model systems. We discuss three growth regimes, one catalyzed by an (almost) pure Au particle, an intermediate alloy-catalyzed growth regime, and a Ga-catalyzed growth regime. Using the interface energies calculated from the atomic models, as well as the surface energies of the nanoparticle and the nanowire sidewalls, we determine the optimized geometry of the nanoparticle-capped nanowire by minimizing the free energy of a continuum model. Under typical experimental conditions of 10^-4 Pa As2 and 700 K, our results in the local density approximation are insensitive to the Ga concentration in the nanoparticle. In these growth conditions, the energetically most favored interface has an interface energy of around 45 meV/Å^2, and the correspondingly optimized droplet on top of a GaAs nanowire is somewhat larger than a hemisphere and forms a contact angle around 130° for both pure Au and Au-Ga alloy nanoparticles.

  18. Conservative phase-field lattice Boltzmann model for interface tracking equation.

    PubMed

    Geier, Martin; Fakhari, Abbas; Lee, Taehun

    2015-06-01

    Based on the phase-field theory, we propose a conservative lattice Boltzmann method to track the interface between two different fluids. The presented model recovers the conservative phase-field equation and conserves mass locally and globally. Two entirely different approaches are used to calculate the gradient of the phase field, which is needed in computation of the normal to the interface. One approach uses finite-difference stencils similar to many existing lattice Boltzmann models for tracking the two-phase interface, while the other one invokes central moments to calculate the gradient of the phase field without any finite differences involved. The former approach suffers from the nonlocality of the collision operator while the latter is entirely local making it highly suitable for massive parallel implementation. Several benchmark problems are carried out to assess the accuracy and stability of the proposed model. PMID:26172824
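The finite-difference variant of the gradient computation can be sketched with a central-difference stencil on a 2D phase field; the interface normal is the normalized gradient. (The central-moment variant avoids such stencils entirely, which is what makes it local.) The circular-drop test field below is invented for illustration:

```python
import numpy as np

def gradient_and_normal(phi, eps=1e-12):
    """Central-difference gradient of phi (periodic via roll) and the unit
    normal n = grad(phi) / |grad(phi)|, pointing toward increasing phi."""
    gx = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / 2.0
    gy = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / 2.0
    mag = np.sqrt(gx**2 + gy**2)
    return gx, gy, gx / (mag + eps), gy / (mag + eps)

# circular drop: phi = 1 inside radius 10 around (32, 32), 0 outside
y, x = np.mgrid[0:64, 0:64]
phi = ((x - 32.0) ** 2 + (y - 32.0) ** 2 < 100.0).astype(float)
gx, gy, nx, ny = gradient_and_normal(phi)
print(nx[32, 42], ny[32, 42])  # at the right edge the normal points inward (-x)
```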

  19. Atomistic Modeling of Corrosion Events at the Interface between a Metal and Its Environment

    DOE PAGES

    Taylor, Christopher D.

    2012-01-01

    Atomistic simulation is a powerful tool for probing the structure and properties of materials and the nature of chemical reactions. Corrosion is a complex process that involves chemical reactions occurring at the interface between a material and its environment and is, therefore, highly suited to study by atomistic modeling techniques. In this paper, the complex nature of corrosion processes and mechanisms is briefly reviewed. Various atomistic methods for exploring corrosion mechanisms are then described, and recent applications in the literature surveyed. Several instances of the application of atomistic modeling to corrosion science are then reviewed in detail, including studies of the metal-water interface, the reaction of water on electrified metallic interfaces, the dissolution of metal atoms from metallic surfaces, and the role of competitive adsorption in controlling the chemical nature and structure of a metallic surface. Some perspectives are then given concerning the future of atomistic modeling in the field of corrosion science.

  20. Segment-to-segment contact elements for modelling joint interfaces in finite element analysis

    NASA Astrophysics Data System (ADS)

    Mayer, M. H.; Gaul, L.

    2007-02-01

    This paper presents an efficient approach to model contact interfaces of joints in finite element analysis (FEA) with segment-to-segment contact elements like thin layer or zero thickness elements. These elements originate from geomechanics and have been applied recently in modal analysis as an efficient way to define the contact stiffness of fixed joints for model updating. A big advantage of these elements is that no global contact search algorithm is employed as used in master-slave contacts. Contact search algorithms are not necessary for modelling contact interfaces of fixed joints since the interfaces are always in contact and restricted to small relative movements, which saves much computing time. We first give an introduction into the theory of segment-to-segment contact elements leading to zero thickness and thin layer elements. As a new application of zero thickness elements, we demonstrate the implementation of a structural contact damping model, derived from a Masing model, as non-linear constitutive laws for the contact element. This damping model takes into account the non-linear influence of frictional microslip in the contact interface of fixed joints. With this model we simulate the non-linear response of a bolted structure. This approach constitutes a new way to simulate multi-degree-of-freedom systems with structural joints and predict modal damping properties.
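The constitutive idea behind the Masing-type damping model can be illustrated with a single Jenkins (elasto-slip) element: elastic response up to a friction limit, then microslip, which produces hysteresis over a load-unload cycle. This is a generic textbook element, not the paper's implementation, and the parameters are invented:

```python
def jenkins_force(displacements, k=100.0, f_slip=5.0):
    """Tangential contact force history for a displacement history: elastic
    with stiffness k until |force| reaches f_slip, then the slider yields."""
    force, u_slider, history = 0.0, 0.0, []
    for u in displacements:
        force = k * (u - u_slider)
        if abs(force) > f_slip:                 # slider yields: microslip
            force = f_slip if force > 0 else -f_slip
            u_slider = u - force / k
        history.append(force)
    return history

# load up, then unload: the return path differs (hysteresis = frictional damping)
up = [0.0, 0.02, 0.04, 0.06, 0.08, 0.10]
down = [0.08, 0.06, 0.04, 0.02, 0.0]
f = jenkins_force(up + down)
print(f)   # at u = 0.08: force 5.0 on loading but about 3.0 on unloading
```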

  1. Automated Generation of Fault Management Artifacts from a Simple System Model

    NASA Technical Reports Server (NTRS)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
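The artifact-generation step can be sketched as a graph traversal: walk the system representation from each failure mode to everything it can propagate to, and emit one FMEA row per mode. The component names and modes below are invented, and a real implementation would query the SysML model rather than a hand-written dict:

```python
def fmea_rows(feeds, failure_modes):
    """feeds[c] lists components directly downstream of c; a failure's effects
    are taken as everything reachable from the failed component."""
    rows = []
    for comp, modes in failure_modes.items():
        # depth-first reachability = propagation footprint of the failure
        seen, stack = set(), list(feeds.get(comp, []))
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(feeds.get(c, []))
        for mode in modes:
            rows.append((comp, mode, sorted(seen)))
    return rows

feeds = {"battery": ["power_bus"], "power_bus": ["radio", "computer"],
         "computer": ["radio"]}
modes = {"battery": ["cell short"], "computer": ["watchdog reset"]}
rows = fmea_rows(feeds, modes)
for row in rows:
    print(row)   # (component, failure mode, downstream effects)
```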

  2. Examining Uncertainty in Demand Response Baseline Models and Variability in Automated Response to Dynamic Pricing

    SciTech Connect

    Mathieu, Johanna L.; Callaway, Duncan S.; Kiliccote, Sila

    2011-08-15

    Controlling electric loads to deliver power system services presents a number of interesting challenges. For example, changes in electricity consumption of Commercial and Industrial (C&I) facilities are usually estimated using counterfactual baseline models, and model uncertainty makes it difficult to precisely quantify control responsiveness. Moreover, C&I facilities exhibit variability in their response. This paper seeks to understand baseline model error and demand-side variability in responses to open-loop control signals (i.e. dynamic prices). Using a regression-based baseline model, we define several Demand Response (DR) parameters, which characterize changes in electricity use on DR days, and then present a method for computing the error associated with DR parameter estimates. In addition to analyzing the magnitude of DR parameter error, we develop a metric to determine how much observed DR parameter variability is attributable to real event-to-event variability versus simply baseline model error. Using data from 38 C&I facilities that participated in an automated DR program in California, we find that DR parameter errors are large. For most facilities, observed DR parameter variability is likely explained by baseline model error, not real DR parameter variability; however, a number of facilities exhibit real DR parameter variability. In some cases, the aggregate population of C&I facilities exhibits real DR parameter variability, resulting in implications for the system operator with respect to both resource planning and system stability.
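A minimal version of a regression-based baseline and a DR parameter estimate, on synthetic data. The paper's baseline model uses more regressors and adds a formal error analysis on top; everything below is an invented illustration:

```python
import numpy as np

# Fit a counterfactual baseline (demand vs. outside-air temperature) on
# non-event days, then define the DR parameter as the mean gap between the
# baseline prediction and the curtailed demand during the event.
rng = np.random.default_rng(1)
temp = rng.uniform(15, 35, 200)                       # deg C, non-event days
demand = 50.0 + 2.0 * temp + rng.normal(0, 1.0, 200)  # kW

X = np.column_stack([np.ones_like(temp), temp])
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)     # baseline coefficients

event_temp = np.array([30.0, 31.0, 32.0])
event_demand = np.array([100.0, 101.0, 102.0])        # curtailed load during DR
baseline = np.column_stack([np.ones_like(event_temp), event_temp]) @ beta
dr_shed = float(np.mean(baseline - event_demand))     # kW shed vs. baseline
print(round(dr_shed, 1))   # close to the 11 kW gap built into the data
```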

  3. A Sketching Interface for Freeform 3D Modeling

    NASA Astrophysics Data System (ADS)

    Igarashi, Takeo

    This chapter introduces Teddy, a sketch-based modeling system to quickly and easily design freeform models such as stuffed animals and other rotund objects. The user draws several 2D freeform strokes interactively on the screen and the system automatically constructs plausible 3D polygonal surfaces. Our system supports several modeling operations, including the operation to construct a 3D polygonal surface from a 2D silhouette drawn by the user: it inflates the region surrounded by the silhouette, making wide areas fat and narrow areas thin. Teddy, our prototype system, is implemented as a Java program, and the mesh construction is done in real-time on a standard PC. Our informal user study showed that a first-time user masters the operations within 10 minutes, and can construct interesting 3D models within minutes. We also report the result of a case study where a high school teacher taught various 3D concepts in geography using the system.

  4. ITER physics-safety interface: models and assessments

    SciTech Connect

    Uckan, N.A.; Putvinski, S.; Wesley, J.; Bartels, H-W.; Honda, T.; Amano, T.; Boucher, D.; Fujisawa, N.; Post, D.; Rosenbluth, M.

    1996-10-01

    Plasma operation conditions and physics requirements to be used as a basis for safety analysis studies are developed and physics results motivated by safety considerations are presented for the ITER design. Physics guidelines and specifications for enveloping plasma dynamic events for Category I (operational event), Category II (likely event), and Category III (unlikely event) are characterized. Safety-related physics areas that are considered are: (i) effect of plasma on the machine and on safety (disruptions, runaway electrons, fast plasma shutdown) and (ii) plasma response to ex-vessel LOCA from the first wall, providing a potential passive plasma shutdown due to Be evaporation. The physics models and expressions developed are implemented in the safety analysis code SAFALY, which couples a 0-D dynamic plasma model to the thermal response of the in-vessel components. Results from SAFALY are presented.

  5. On the moving contact line singularity: asymptotics of a diffuse-interface model.

    PubMed

    Sibley, David N; Nold, Andreas; Savva, Nikos; Kalliadasis, Serafim

    2013-03-01

    The behaviour of a solid-liquid-gas system near the three-phase contact line is considered using a diffuse-interface model with no-slip at the solid and where the fluid phase is specified by a continuous density field. Relaxation of the classical approach of a sharp liquid-gas interface and careful examination of the asymptotic behaviour as the contact line is approached is shown to resolve the stress and pressure singularities associated with the moving contact line problem. Various features of the model are scrutinised, alongside extensions to incorporate slip, finite-time relaxation of the chemical potential, or a precursor film at the wall. PMID:23515762

  6. Modeling Nitrogen Cycle at the Surface-Subsurface Water Interface

    NASA Astrophysics Data System (ADS)

    Marzadri, A.; Tonina, D.; Bellin, A.

    2011-12-01

    Anthropogenic activities, primarily food and energy production, have altered the global nitrogen cycle, increasing the availability of reactive dissolved inorganic nitrogen, Nr, chiefly ammonium NH4+ and nitrate NO3-, in many streams worldwide. Increased Nr promotes biological activity, often with negative consequences such as water body eutrophication and emission of nitrous oxide gas, N2O, an important greenhouse gas, as a by-product of denitrification. The hyporheic zone may play an important role in processing Nr and returning it to the atmosphere. Here, we present a process-based three-dimensional semi-analytical model, which couples hyporheic hydraulics with biogeochemical reactions and transport equations. Transport is solved by means of particle tracking with negligible local dispersion, and biogeochemical reactions are modeled by linearized Monod kinetics with temperature-dependent reaction rate coefficients. Comparison of measured and predicted N2O emissions from 7 natural streams shows a good match. We apply our model to gravel bed rivers with alternate bar morphology to investigate the role of hyporheic hydraulics, depth of alluvium, relative availability of stream NO3- and NH4+ concentrations, and water temperature on nitrogen gradients within the sediment. Our model shows complex concentration dynamics within the hyporheic zone, which depend on the hyporheic residence time distribution and consequently on streambed morphology. Nitrogen gas emissions from the hyporheic zone increase with alluvium depth in large low-gradient streams but not in small steep streams. On the other hand, hyporheic water temperature influences nitrification/denitrification processes more in small steep streams than in large low-gradient streams, because the long residence times in the latter offset the slow reaction rates induced by low temperatures. The overall conclusion of our analysis is that river morphology has a major impact on biogeochemical processes such as nitrification and denitrification.
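The reaction step can be sketched directly: linearized Monod kinetics reduce to first-order decay along a hyporheic flow path, C(tau) = C0 * exp(-k(T) * tau), with a theta-model temperature correction of the rate. All parameter values below are invented for illustration:

```python
import numpy as np

def nitrate_remaining(c0, tau_hours, temp_c, k20=0.05, theta=1.07):
    """Concentration left after residence time tau at temperature temp_c."""
    k = k20 * theta ** (temp_c - 20.0)        # 1/h, slower when colder
    return c0 * np.exp(-k * np.asarray(tau_hours))

residence = np.array([1.0, 10.0, 100.0])      # hours in the hyporheic zone
warm = nitrate_remaining(5.0, residence, temp_c=20.0)   # mg/L NO3- remaining
cold = nitrate_remaining(5.0, residence, temp_c=5.0)
print(warm)
print(cold)   # colder water removes less nitrate for the same residence time
```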

  7. An Automated Application Framework to Model Disordered Materials Based on a High Throughput First Principles Approach

    NASA Astrophysics Data System (ADS)

    Oses, Corey; Yang, Kesong; Curtarolo, Stefano; Duke Univ Collaboration; UC San Diego Collaboration

    Predicting material properties of disordered systems remains a long-standing and formidable challenge in rational materials design. To address this issue, we introduce an automated software framework capable of modeling partial occupation within disordered materials using a high-throughput (HT) first principles approach. At the heart of the approach is the construction of supercells containing a virtually equivalent stoichiometry to the disordered material. All unique supercell permutations are enumerated and material properties of each are determined via HT electronic structure calculations. In accordance with a canonical ensemble of supercell states, the framework evaluates ensemble average properties of the system as a function of temperature. As proof of concept, we examine the framework's final calculated properties of a zinc chalcogenide (ZnS1-xSex), a wide-gap oxide semiconductor (MgxZn1-xO), and an iron alloy (Fe1-xCux) at various stoichiometries.
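The ensemble-averaging step can be sketched directly from the Boltzmann weighting it describes: each unique supercell configuration i with energy E_i and degeneracy g_i contributes with weight g_i * exp(-E_i / kT). The energies, degeneracies, and averaged property below are placeholders for real DFT output:

```python
import numpy as np

K_B = 8.617333262e-5                          # Boltzmann constant, eV/K

def ensemble_average(energies, degeneracies, properties, temp_k):
    """Canonical-ensemble average of a per-configuration property."""
    e = np.asarray(energies, float) - np.min(energies)   # shift for stability
    w = np.asarray(degeneracies, float) * np.exp(-e / (K_B * temp_k))
    return np.sum(w * np.asarray(properties)) / np.sum(w)

E = [0.00, 0.05, 0.20]          # eV per supercell configuration
g = [1, 4, 2]                   # permutations sharing each energy
band_gap = [1.10, 1.25, 1.60]   # eV, the property being averaged

low = ensemble_average(E, g, band_gap, 300.0)
high = ensemble_average(E, g, band_gap, 3000.0)
print(low, high)   # higher T mixes in the excited configurations
```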

  8. An automated statistical shape model developmental pipeline: application to the human scapula and humerus.

    PubMed

    Mutsvangwa, Tinashe; Burdin, Valérie; Schwartz, Cédric; Roux, Christian

    2015-04-01

    This paper presents the development of statistical shape models based on robust, rigid groupwise registration followed by nonrigid pointset registration. The main advantages of the pipeline are its automation, in that the method relies on neither manual landmarks nor a regionalization step; the absence of bias in the choice of reference during the correspondence steps; and the use of the probabilistic principal component analysis framework, which increases the domain of shape variability. A comparison between the widely used expectation maximization-iterative closest point algorithm and a recently reported groupwise method on publicly available data (hippocampus), using the well-known criteria of generality, specificity, and compactness, is also presented. The proposed method gives similar values, but its generality and specificity curves are superior to those of the other two methods. Finally, the method is applied to the human scapula, a notoriously difficult structure, and the human humerus. PMID:25389238
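The shape-model construction at the end of such a pipeline builds on the classic point-distribution model. Below is a minimal sketch of plain (non-probabilistic) PCA shape modeling, assuming point correspondence and alignment have already been established by the registration steps; the data are synthetic:

```python
import numpy as np

def build_ssm(shapes):
    """Point-distribution statistical shape model via PCA.

    shapes: (n_shapes, n_points * dim) array of corresponding, aligned
    pointsets; correspondence and alignment are assumed to come from the
    registration steps. Returns the mean shape, the modes (as rows), and
    the variance explained by each mode."""
    X = np.asarray(shapes, float)
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = s ** 2 / (X.shape[0] - 1)   # covariance eigenvalues
    return mean, Vt, variances

def synthesize(mean, modes, variances, b):
    """New shape from mode coefficients b, given in standard deviations."""
    b = np.asarray(b, float)
    k = b.size
    return mean + (b * np.sqrt(variances[:k])) @ modes[:k]

# Toy "aligned shapes": random perturbations around a common trend
rng = np.random.default_rng(1)
shapes = rng.normal(scale=0.1, size=(20, 30)) + np.linspace(0.0, 1.0, 30)
mean, modes, var = build_ssm(shapes)
new_shape = synthesize(mean, modes, var, b=[1.5, -0.5])
```

Generality, specificity, and compactness are then evaluated on exactly such a mean-plus-modes decomposition.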

  9. Modelling and representation issues in automated feature extraction from aerial and satellite images

    NASA Astrophysics Data System (ADS)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are also increasing for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills to automate these tasks of information extraction. The paper reviews some of the approaches used in knowledge representation and modelling for machine vision, and gives examples of their applications in research on image understanding of aerial and satellite imagery.

  10. Accident prediction model for railway-highway interfaces.

    PubMed

    Oh, Jutaek; Washington, Simon P; Nam, Doohee

    2006-03-01

    Considerable past research has explored relationships between vehicle accidents and the geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States and the safety effects of crossing elements obtained using Korean data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad-crossing-related vehicle crashes. PMID:16297846
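The gamma probability (gamma count) model handles underdispersed crash counts that a Poisson model cannot. A sketch of its probability mass function follows, assuming the standard renewal-process formulation; the parameter values are illustrative, not fitted to the Korean crossing data:

```python
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma

def gamma_count_pmf(n, alpha, beta, t=1.0):
    """Gamma count model:
        P(N(t) = n) = G(n*alpha, beta*t) - G((n+1)*alpha, beta*t),
    with G the regularized lower incomplete gamma and G(0, x) defined as 1.
    alpha > 1 yields underdispersion (variance < mean); alpha = 1 recovers
    the Poisson distribution."""
    n = np.asarray(n)
    upper = gammainc((n + 1) * alpha, beta * t)
    lower = np.where(n == 0, 1.0, gammainc(np.maximum(n, 1) * alpha, beta * t))
    return lower - upper

print(gamma_count_pmf(np.arange(6), alpha=2.0, beta=10.0))
```

With alpha above 1, the variance of the resulting count distribution falls below its mean, which is exactly the underdispersion regime the paper addresses.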

  11. Smart Frameworks and Self-Describing Models: Model Metadata for Automated Coupling of Hydrologic Process Components (Invited)

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2013-12-01

    Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. 
If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework
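The standardized interface described above (control functions plus description functions) can be sketched as follows. Method names are illustrative, in the spirit of CSDMS's Basic Model Interface, and are not the official BMI signatures; the linear-reservoir model is a toy example:

```python
class ModelInterface:
    """Sketch of a standardized model interface: control functions plus
    self-description functions. Names are illustrative, not official BMI."""
    # --- control functions ---
    def initialize(self, config=None): ...
    def update(self): ...     # advance state variables by one time step
    def finalize(self): ...
    # --- description functions ---
    def get_output_var_names(self): ...
    def get_var_units(self, name): ...
    def get_value(self, name): ...

class LinearReservoir(ModelInterface):
    """Toy process model: storage decays at rate k (a linear reservoir)."""
    def initialize(self, config=None):
        self.storage, self.k, self.dt, self.time = 100.0, 0.1, 1.0, 0.0
    def update(self):
        self.storage -= self.k * self.storage * self.dt
        self.time += self.dt
    def finalize(self):
        pass
    def get_output_var_names(self):
        return ["water_storage"]
    def get_var_units(self, name):
        return {"water_storage": "mm"}[name]
    def get_value(self, name):
        return self.storage

# A framework can drive any conforming model without knowing its internals:
model = LinearReservoir()
model.initialize()
for _ in range(10):
    model.update()
print(model.get_value("water_storage"), model.get_var_units("water_storage"))
model.finalize()
```

Because the caller only touches the interface, the same driver loop works for any model that conforms to it, which is the essence of plug-and-play coupling.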

  12. A graphical user interface for numerical modeling of acclimation responses of vegetation to climate change

    NASA Astrophysics Data System (ADS)

    Le, Phong V. V.; Kumar, Praveen; Drewry, Darren T.; Quijano, Juan C.

    2012-12-01

    Ecophysiological models that vertically resolve vegetation canopy states are becoming a powerful tool for studying the exchange of mass, energy, and momentum between the land surface and the atmosphere. A mechanistic multilayer canopy-soil-root system model (MLCan) developed by Drewry et al. (2010a) has been used to capture the emergent vegetation responses to elevated atmospheric CO2 for both C3 and C4 plants under various climate conditions. However, processing input data and setting up such a model can be time-consuming and error-prone. In this paper, a graphical user interface that has been developed for MLCan is presented. The design of this interface aims to provide visualization capabilities and interactive support for processing input meteorological forcing data and vegetation parameter values to facilitate the use of this model. In addition, the interface also provides graphical tools for analyzing the forcing data and simulated numerical results. The model and its interface are both written in the MATLAB programming language. Finally, an application of this model package for capturing the ecohydrological responses of three bioenergy crops (maize, miscanthus, and switchgrass) to local environmental drivers at two different sites in the Midwestern United States is presented.

  13. Modeling and matching of landmarks for automation of Mars Rover localization

    NASA Astrophysics Data System (ADS)

    Wang, Jue

    The Mars Exploration Rover (MER) mission, begun in January 2004, has been extremely successful. However, decision-making for many operation tasks of the current MER mission and the 1997 Mars Pathfinder mission is performed on Earth through a predominantly manual, time-consuming process. Unmanned planetary rover navigation is ideally expected to reduce rover idle time, diminish the need for entering safe-mode, and dynamically handle opportunistic science events without requiring communication to Earth. Successful automation of rover navigation and localization during extraterrestrial exploration requires that accurate position and attitude information can be received by a rover and that the rover has the support of simultaneous localization and mapping. An integrated approach with Bundle Adjustment (BA) and Visual Odometry (VO) can efficiently refine the rover position. However, during the MER mission, BA is done manually because of the difficulty in automating the selection of cross-site tie points. This dissertation proposes an automatic approach to select cross-site tie points from multiple rover sites based on the methods of landmark extraction, landmark modeling, and landmark matching. The first step in this approach is that important landmarks such as craters and rocks are defined. Methods of automatic feature extraction and landmark modeling are then introduced. Complex models with orientation angles and simple models without those angles are compared. The results have shown that simple models can provide reasonably good results. Next, the sensitivity of different modeling parameters is analyzed. Based on this analysis, cross-site rocks are matched through two complementary stages: rock distribution pattern matching and rock model matching. In addition, a preliminary experiment on orbital and ground landmark matching is also briefly introduced. Finally, the reliability of the cross-site tie points selection is validated by fault detection, which
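The rock distribution pattern matching stage can be illustrated with a simple translation- and rotation-invariant signature (sorted distances to the k nearest rocks within the same site) and a globally optimal assignment. This is a hypothetical stand-in for the dissertation's actual two-stage matcher:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def pattern_descriptor(points, k=3):
    """Per-rock signature: sorted distances to its k nearest neighbours in
    the same site. Invariant to translation and rotation of the whole site."""
    d = cdist(points, points)
    d.sort(axis=1)
    return d[:, 1:k + 1]          # drop the zero self-distance

def match_rocks(site_a, site_b, k=3):
    """Globally optimal one-to-one matching of rocks by signature similarity."""
    cost = cdist(pattern_descriptor(site_a, k), pattern_descriptor(site_b, k))
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))

rng = np.random.default_rng(2)
rocks_a = rng.uniform(0.0, 10.0, size=(6, 2))
# Site B sees the same rocks after an unknown rigid motion:
theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rocks_b = rocks_a @ R.T + np.array([3.0, -1.0])
print(match_rocks(rocks_a, rocks_b))
```

Because the descriptor depends only on inter-rock distances, the matching survives the unknown rigid transform between the two rover sites, which is the key requirement for selecting cross-site tie points.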

  14. Operando X-ray Investigation of Electrode/Electrolyte Interfaces in Model Solid Oxide Fuel Cells

    PubMed Central

    2016-01-01

    We employed operando anomalous surface X-ray diffraction to investigate the buried interface between the cathode and the electrolyte of a model solid oxide fuel cell with atomic resolution. The cell was studied under different oxygen pressures at elevated temperatures and polarizations by external potential control. Making use of anomalous X-ray diffraction effects at the Y and Zr K-edges allowed us to resolve the interfacial structure and chemical composition of a (100)-oriented, 9.5 mol % yttria-stabilized zirconia (YSZ) single crystal electrolyte below a La0.6Sr0.4CoO3−δ (LSC) electrode. We observe yttrium segregation toward the YSZ/LSC electrolyte/electrode interface under reducing conditions. Under oxidizing conditions, the interface becomes Y depleted. The yttrium segregation is corroborated by an enhanced outward relaxation of the YSZ interfacial metal ion layer. At the same time, an increase in point defect concentration in the electrolyte at the interface was observed, as evidenced by reduced YSZ crystallographic site occupancies for the cations as well as the oxygen ions. Such changes in composition are expected to strongly influence the oxygen ion transport through this interface which plays an important role for the performance of solid oxide fuel cells. The structure of the interface is compared to the bare YSZ(100) surface structure near the microelectrode under identical conditions and to the structure of the YSZ(100) surface prepared under ultrahigh vacuum conditions. PMID:27346923

  15. Modeling the Assembly of Polymer-Grafted Nanoparticles at Oil-Water Interfaces.

    PubMed

    Yong, Xin

    2015-10-27

    Using dissipative particle dynamics (DPD), I model the interfacial adsorption and self-assembly of polymer-grafted nanoparticles at a planar oil-water interface. The amphiphilic core-shell nanoparticles irreversibly adsorb to the interface and create a monolayer covering the interface. The polymer chains of the adsorbed nanoparticles are significantly deformed by surface tension to conform to the interface. I quantitatively characterize the properties of the particle-laden interface and the structure of the monolayer in detail at different surface coverages. I observe that the monolayer of particles grafted with long polymer chains undergoes an intriguing liquid-crystalline-amorphous phase transition in which the relationship between the monolayer structure and the surface tension/pressure of the interface is elucidated. Moreover, my results indicate that the amorphous state at high surface coverage is induced by the anisotropic distribution of the randomly grafted chains on each particle core, which leads to noncircular in-plane morphology formed under excluded volume effects. These studies provide a fundamental understanding of the interfacial behavior of polymer-grafted nanoparticles for achieving complete control of the adsorption and subsequent self-assembly. PMID:26439456

  16. Tape-Drop Transient Model for In-Situ Automated Tape Placement of Thermoplastic Composites

    NASA Technical Reports Server (NTRS)

    Costen, Robert C.; Marchello, Joseph M.

    1998-01-01

    Composite parts of nonuniform thickness can be fabricated by in-situ automated tape placement (ATP) if the tape can be started and stopped at interior points of the part instead of always at its edges. This technique is termed start/stop-on-the-part or, alternatively, tape-add/tape-drop. The resulting thermal transients need to be managed in order to achieve net shape and maintain uniform interlaminar weld strength and crystallinity. Starting-on-the-part has been treated previously. This paper continues the study with a thermal analysis of stopping-on-the-part. The thermal source is switched off when the trailing end of the tape enters the nip region of the laydown/consolidation head. The thermal transient is determined by a Fourier-Laplace transform solution of the two-dimensional, time-dependent thermal transport equation. This solution requires that the Peclet number Pe (the dimensionless ratio of inertial to diffusive heat transport) be independent of time and much greater than 1. Plotted isotherms show that the trailing tape-end cools more rapidly than the downstream portions of tape. This cooling can weaken the bond near the tape end; however, the length of the affected region is found to be less than 2 mm. To achieve net shape, the consolidation head must continue to move after cut-off until the temperature on the weld interface decreases to the glass transition temperature. The time and elapsed distance for this condition to occur are computed for the Langley ATP robot applying PEEK/carbon fiber composite tape and for two upgrades in robot performance. The elapsed distance after cut-off ranges from about 1 mm for the present robot to about 1 cm for the second upgrade.

  17. Semi-automated method for delineation of landmarks on models of the cerebral cortex

    PubMed Central

    Shattuck, David W.; Joshi, Anand A.; Pantazis, Dimitrios; Kan, Eric; Dutton, Rebecca A.; Sowell, Elizabeth R.; Thompson, Paul M.; Toga, Arthur W.; Leahy, Richard M.

    2009-01-01

    Sulcal and gyral landmarks on the human cerebral cortex are required for various studies of the human brain. Whether used directly to examine sulcal geometry, or indirectly to drive cortical surface registration methods, the accuracy of these landmarks is essential. While several methods have been developed to automatically identify sulci and gyri, their accuracy may be insufficient for certain neuroanatomical studies. We describe a semi-automated procedure that delineates a sulcus or gyrus given a limited number of user-selected points. The method uses a graph theory approach to identify the lowest-cost path between the points, where the cost is a combination of local curvature features and the distance between vertices on the surface representation. We implemented the algorithm in an interface that guides the user through a cortical surface delineation protocol, and we incorporated this tool into our BrainSuite software. We performed a study to compare the results produced using our method with results produced using Display, a popular tool that has been used extensively for manual delineation of sulcal landmarks. Six raters were trained on the delineation protocol. They performed delineations on 12 brains using both software packages. We performed a statistical analysis of 3 aspects of the delineation task: time required to delineate the surface, registration accuracy achieved compared to an expert-delineated gold-standard, and variation among raters. Our new method was shown to be faster to use, to provide reduced inter-rater variability, and to provide results that were at least as accurate as those produced using Display. PMID:19162074
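The lowest-cost path computation described above is a standard shortest-path problem on the surface mesh graph. A sketch using Dijkstra's algorithm follows, with a caller-supplied cost combining edge traversal and a local curvature penalty; the toy graph and curvature values are illustrative, not BrainSuite's actual cost function:

```python
import heapq

def lowest_cost_path(adjacency, cost, start, goal):
    """Dijkstra's algorithm on a surface mesh graph. `adjacency` maps a
    vertex to its neighbours; `cost(u, v)` is the (non-negative) cost of
    stepping to v, here combining a unit edge length with a curvature
    penalty so the delineation follows low-cost vertices."""
    dist, prev, seen = {start: 0.0}, {}, set()
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == goal:
            break
        for v in adjacency[u]:
            nd = d + cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy 4-vertex mesh; vertex 1 is heavily penalized, so the path avoids it.
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
curvature = {0: 0.0, 1: 5.0, 2: 0.1, 3: 0.0}
step_cost = lambda u, v: 1.0 + curvature[v]
print(lowest_cost_path(adjacency, step_cost, 0, 3))   # -> [0, 2, 3]
```

In the semi-automated protocol the user-selected points play the role of `start` and `goal`, and the path between them snaps to the sulcal geometry encoded in the cost.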

  18. Automated structure modeling of large protein assemblies using crosslinks as distance restraints.

    PubMed

    Ferber, Mathias; Kosinski, Jan; Ori, Alessandro; Rashid, Umar J; Moreno-Morcillo, María; Simon, Bernd; Bouvier, Guillaume; Batista, Paulo Ricardo; Müller, Christoph W; Beck, Martin; Nilges, Michael

    2016-06-01

    Crosslinking mass spectrometry is increasingly used for structural characterization of multisubunit protein complexes. Chemical crosslinking captures conformational heterogeneity, which typically results in conflicting crosslinks that cannot be satisfied in a single model, making detailed modeling a challenging task. Here we introduce an automated modeling method dedicated to large protein assemblies ('XL-MOD' software is available at http://aria.pasteur.fr/supplementary-data/x-links) that (i) uses a form of spatial restraints that realistically reflects the distribution of experimentally observed crosslinked distances; (ii) automatically deals with ambiguous and/or conflicting crosslinks and identifies alternative conformations within a Bayesian framework; and (iii) allows subunit structures to be flexible during conformational sampling. We demonstrate our method by testing it on known structures and available crosslinking data. We also crosslinked and modeled the 17-subunit yeast RNA polymerase III at atomic resolution; the resulting model agrees remarkably well with recently published cryoelectron microscopy structures and provides additional insights into the polymerase structure. PMID:27111507
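The core idea of scoring ambiguous crosslinks (each link contributes only through its best-satisfied candidate pair) can be sketched as below. The log-normal restraint form and the parameter values are assumptions for illustration, not XL-MOD's exact Bayesian treatment:

```python
import numpy as np

def crosslink_score(model_distances, linker_length=30.0, sigma=0.3):
    """Score crosslinks against a model. Each crosslink may be ambiguous
    (several candidate residue pairs); only its best-satisfied alternative
    contributes, a simple way to handle ambiguous/conflicting links.
    The log-normal restraint centred on the linker length is an assumed,
    illustrative form."""
    total = 0.0
    for alternatives in model_distances:        # one entry per crosslink
        d = np.asarray(alternatives, float)     # candidate distances, angstrom
        nll = np.log(d / linker_length) ** 2 / (2.0 * sigma ** 2) + np.log(d)
        total += float(nll.min())
    return total

# Two crosslinks: the first unambiguous, the second with two candidate pairs
print(crosslink_score([[28.0], [55.0, 31.0]]))
```

Taking the minimum over alternatives is what lets conflicting links coexist: a link violated in one conformation can be satisfied by another candidate pair without dominating the score.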

  19. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.

    2016-03-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
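The experiment pattern the platform automates (perturb uncertain kinematic parameters, rerun the forward model, aggregate the ensemble) can be sketched generically. The toy one-fault "model" below is purely illustrative and does not use the real pynoddy API; see the project's documentation and tutorials for that:

```python
import numpy as np

def kinematic_model(dip_deg, slip, base_depths=(100.0, 200.0, 300.0)):
    """Toy stand-in for a kinematic forward model: vertical throw of three
    stratigraphic horizons after one fault event. Purely illustrative."""
    throw = slip * np.sin(np.radians(dip_deg))
    return np.asarray(base_depths) - throw

def perturb_and_run(n=1000, seed=0):
    """Monte Carlo loop over uncertain kinematic parameters: draw, rerun
    the forward model, and collect the ensemble of outcomes."""
    rng = np.random.default_rng(seed)
    dips = rng.normal(60.0, 5.0, n)      # fault dip, degrees
    slips = rng.normal(50.0, 10.0, n)    # fault slip, metres
    return np.array([kinematic_model(d, s) for d, s in zip(dips, slips)])

results = perturb_and_run()
print("mean horizon depths:", results.mean(axis=0))
print("depth std (kinematic uncertainty):", results.std(axis=0))
```

Encapsulating the loop in a function mirrors the paper's point that whole kinematic experiments can live inside high-level, reproducible script definitions.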

  20. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.

    2015-11-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  1. Automation's Effect on Library Personnel.

    ERIC Educational Resources Information Center

    Dakshinamurti, Ganga

    1985-01-01

    Reports on survey studying the human-machine interface in Canadian university, public, and special libraries. Highlights include position category and educational background of 118 participants, participants' feelings toward automation, physical effects of automation, diffusion in decision making, interpersonal communication, future trends,…

  2. Mucosal inflammation at the respiratory interface: a zebrafish model.

    PubMed

    Progatzky, Fränze; Cook, H Terence; Lamb, Jonathan R; Bugeon, Laurence; Dallman, Margaret J

    2016-03-15

    Inflammatory diseases of the respiratory system such as asthma and chronic obstructive pulmonary disease are increasing globally and remain poorly understood conditions. Although attention has long focused on the activation of type 1 and type 2 helper T cells of the adaptive immune system in these diseases, it is becoming increasingly apparent that there is also a need to understand the contributions and interactions between innate immune cells and the epithelial lining of the respiratory system. Cigarette smoke predisposes the respiratory tissue to a higher incidence of inflammatory disease, and here we have used zebrafish gills as a model to study the effect of cigarette smoke on the respiratory epithelium. Zebrafish gills fulfill the same gas-exchange function as the mammalian airways and have a similar structure. Exposure to cigarette smoke extracts resulted in an increase in transcripts of the proinflammatory cytokines TNF-α, IL-1β, and MMP9 in the gill tissue, which was at least in part mediated via NF-κB activation. Longer term exposure of fish for 6 wk to cigarette smoke extract resulted in marked structural changes to the gills with lamellar fusion and mucus cell formation, while signs of inflammation or fibrosis were absent. This shows, for the first time, that zebrafish gills are a relevant model for studying the effect of inflammatory stimuli on a respiratory epithelium, since they mimic the immunopathology involved in respiratory inflammatory diseases of humans. PMID:26719149

  3. Photometric model of diffuse surfaces described as a distribution of interfaced Lambertian facets.

    PubMed

    Simonot, Lionel

    2009-10-20

    The Lambertian model for diffuse reflection is widely used for the sake of its simplicity. Nevertheless, this model is known to be inaccurate in describing many real-world objects, including those that present a matte surface. To overcome this difficulty, we propose a photometric model where the surfaces are described as a distribution of facets, each consisting of a flat interface on a Lambertian background. Compared to the Lambertian model, it includes two additional physical parameters: an interface roughness parameter and the ratio between the refractive indices of the background binder and of the upper medium. The Torrance-Sparrow model (a distribution of strictly specular facets) and the Oren-Nayar model (a distribution of strictly Lambertian facets) appear as special cases. PMID:19844317
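The Oren-Nayar special case (strictly Lambertian facets) is easy to sketch with its standard first-order approximation; setting the roughness to zero recovers the plain Lambertian model:

```python
import math

def oren_nayar(theta_i, theta_r, phi_diff, sigma, albedo=1.0):
    """Oren-Nayar reflectance, standard first-order approximation: the
    strictly-Lambertian-facet limit of the facet models discussed above.
    sigma is the facet slope roughness in radians; sigma = 0 recovers the
    plain Lambertian cosine law."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return (albedo / math.pi) * math.cos(theta_i) * (
        A + B * max(0.0, math.cos(phi_diff)) * math.sin(alpha) * math.tan(beta))

print(oren_nayar(0.5, 0.3, 0.0, sigma=0.0))   # Lambertian: cos(0.5)/pi
print(oren_nayar(0.5, 0.3, 0.0, sigma=0.3))   # roughened surface
```

The proposed interfaced-Lambertian-facet model adds a Fresnel interface on top of each such facet, which is what introduces the refractive-index ratio as a second parameter.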

  4. Analytical solutions in a hydraulic model of seepage with sharp interfaces

    NASA Astrophysics Data System (ADS)

    Kacimov, A. R.

    2002-02-01

    Flows in horizontal homogeneous porous layers are studied in terms of a hydraulic model with an abrupt interface between two incompressible Darcian fluids of contrasting density driven by an imposed gradient along the layer. The flow of one fluid moving above a resting finger-type pool of another is studied. A straight interface between two moving fluids is shown to slump, rotate and propagate deeper under periodic drive conditions than in a constant-rate regime. Superpropagation of the interface is related to Philip's superelevation in tidal dynamics and acceleration of the front in vertical infiltration in terms of the Green-Ampt model with an oscillating ponding water level. All solutions studied are based on reduction of the governing PDE to nonlinear ODEs and further analytical and numerical integration by computer algebra routines.
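The Green-Ampt analogue with an oscillating ponding level, mentioned above, reduces to a single nonlinear ODE for cumulative infiltration F(t) that can be integrated numerically. A sketch with illustrative parameter values follows (the paper's solutions are analytical; numerical integration here only shows the setup):

```python
import numpy as np
from scipy.integrate import solve_ivp

def green_ampt_rhs(t, F, K=1.0, psi=10.0, dtheta=0.3,
                   h_mean=5.0, amp=4.0, omega=2.0 * np.pi):
    """Green-Ampt infiltration with an oscillating ponded depth
    h0(t) = h_mean + amp*sin(omega*t):
        dF/dt = K * (1 + (psi + h0(t)) * dtheta / F)
    F is cumulative infiltration; all parameter values are illustrative."""
    h0 = h_mean + amp * np.sin(omega * t)
    return K * (1.0 + (psi + h0) * dtheta / F)

t_span = (0.0, 5.0)
sol_osc = solve_ivp(green_ampt_rhs, t_span, [0.1], max_step=0.01)
sol_const = solve_ivp(lambda t, F: green_ampt_rhs(t, F, amp=0.0),
                      t_span, [0.1], max_step=0.01)
print("final F, oscillating vs constant ponding:",
      sol_osc.y[0, -1], sol_const.y[0, -1])
```

Comparing the oscillating and constant-ponding runs is the numerical counterpart of the paper's question of whether a periodic drive propagates the front deeper than a constant-rate regime.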

  5. Laboratory measurements and theoretical modeling of seismoelectric interface response and coseismic wave fields

    SciTech Connect

    Schakel, M. D.; Slob, E. C.; Heller, H. K. J.; Smeulders, D. M. J.

    2011-04-01

    A full-waveform seismoelectric numerical model incorporating the directivity pattern of a pressure source is developed. This model provides predictions of coseismic electric fields and the electromagnetic waves that originate from a fluid/porous-medium interface. An experimental setup in which coseismic electric fields and interface responses are measured is constructed. The seismoelectric origin of the signals is confirmed. The numerically predicted polarity reversal of the interfacial signal and seismoelectric effects due to multiple scattering are detected in the measurements. Both the simulated coseismic electric fields and the electromagnetic waves originating from interfaces agree with the measurements in terms of travel times, waveform, polarity, amplitude, and spatial amplitude decay, demonstrating that seismoelectric effects are comprehensively described by theory.

  6. Diffuse interface models of locally inextensible vesicles in a viscous fluid

    NASA Astrophysics Data System (ADS)

    Aland, Sebastian; Egerer, Sabine; Lowengrub, John; Voigt, Axel

    2014-11-01

    We present a new diffuse interface model for the dynamics of inextensible vesicles in a viscous fluid with inertial forces. A new feature of this work is the implementation of the local inextensibility condition in the diffuse interface context. Local inextensibility is enforced by using a local Lagrange multiplier, which provides the necessary tension force at the interface. We introduce a new equation for the local Lagrange multiplier whose solution essentially provides a harmonic extension of the multiplier off the interface while maintaining the local inextensibility constraint near the interface. We also develop a local relaxation scheme that dynamically corrects local stretching/compression errors thereby preventing their accumulation. Asymptotic analysis is presented that shows that our new system converges to a relaxed version of the inextensible sharp interface model. This is also verified numerically. To solve the equations, we use an adaptive finite element method with implicit coupling between the Navier-Stokes and the diffuse interface inextensibility equations. Numerical simulations of a single vesicle in a shear flow at different Reynolds numbers demonstrate that errors in enforcing local inextensibility may accumulate and lead to large differences in the dynamics in the tumbling regime and smaller differences in the inclination angle of vesicles in the tank-treading regime. The local relaxation algorithm is shown to prevent the accumulation of stretching and compression errors very effectively. Simulations of two vesicles in an extensional flow show that local inextensibility plays an important role when vesicles are in close proximity by inhibiting fluid drainage in the near contact region.

  7. Diffuse interface models of locally inextensible vesicles in a viscous fluid.

    PubMed

    Aland, Sebastian; Egerer, Sabine; Lowengrub, John; Voigt, Axel

    2014-11-15

    We present a new diffuse interface model for the dynamics of inextensible vesicles in a viscous fluid with inertial forces. A new feature of this work is the implementation of the local inextensibility condition in the diffuse interface context. Local inextensibility is enforced by using a local Lagrange multiplier, which provides the necessary tension force at the interface. We introduce a new equation for the local Lagrange multiplier whose solution essentially provides a harmonic extension of the multiplier off the interface while maintaining the local inextensibility constraint near the interface. We also develop a local relaxation scheme that dynamically corrects local stretching/compression errors thereby preventing their accumulation. Asymptotic analysis is presented that shows that our new system converges to a relaxed version of the inextensible sharp interface model. This is also verified numerically. To solve the equations, we use an adaptive finite element method with implicit coupling between the Navier-Stokes and the diffuse interface inextensibility equations. Numerical simulations of a single vesicle in a shear flow at different Reynolds numbers demonstrate that errors in enforcing local inextensibility may accumulate and lead to large differences in the dynamics in the tumbling regime and smaller differences in the inclination angle of vesicles in the tank-treading regime. The local relaxation algorithm is shown to prevent the accumulation of stretching and compression errors very effectively. Simulations of two vesicles in an extensional flow show that local inextensibility plays an important role when vesicles are in close proximity by inhibiting fluid drainage in the near contact region. PMID:25246712

  8. Models for ultrasonic characterization of environmental degradation of interfaces in adhesive joints

    SciTech Connect

    Lavrentyev, A.I.; Rokhlin, S.I.

    1994-10-15

    In this paper we discuss two models of environmental degradation of adhesive joints developed from experimental observation of the joint failure mode. It is found that after severe degradation, failure is dominated by the interfacial mode, i.e., by failure at the interface between adhesive and adherend. The fraction of failure in the interfacial mode was found to be related to the joint strength and to be proportional to the frequency shift of a minimum in the spectrum of the reflected ultrasonic signal. One model considers the interface as an interphase in the form of a nonhomogeneous layer composed of two phases: a "soft" viscoelastic phase (the degraded part of the interphase) and a "stiff" phase corresponding to the nondamaged interphase. An increase of the "soft" phase fraction corresponds to the process of degradation in the interphase. The second model describes degradation in the form of disbonds filled by absorbed water at the interface. The disbonded interface is modeled by transverse spring boundary conditions, with the complex spring stiffness representing the quality of the bond. The influence of different disbond growth scenarios is considered. Advantages and drawbacks of these models are discussed.

  9. Context based mixture model for cell phase identification in automated fluorescence microscopy

    PubMed Central

    Wang, Meng; Zhou, Xiaobo; King, Randy W; Wong, Stephen TC

    2007-01-01

    Background Automated identification of cell cycle phases of individual live cells in a large population captured via automated fluorescence microscopy is important for cancer drug discovery and cell cycle studies. Time-lapse fluorescence microscopy images provide an important method to study the cell cycle process under different conditions of perturbation. Existing methods are limited in dealing with such time-lapse data sets, while manual analysis is not feasible. This paper presents statistical data analysis and statistical pattern recognition to perform this task. Results The data is generated from HeLa H2B-GFP cells imaged during a 2-day period with images acquired 15 minutes apart using automated time-lapse fluorescence microscopy. The patterns are described with four kinds of features, including twelve general features, Haralick texture features, Zernike moment features, and wavelet features. To generate a new set of features with more discriminative power, the commonly used feature reduction techniques are used, which include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Maximum Margin Criterion (MMC), Stepwise Discriminant Analysis based Feature Selection (SDAFS), and Genetic Algorithm based Feature Selection (GAFS). Then, we propose a Context Based Mixture Model (CBMM) for dealing with the time-series cell sequence information and compare it to other traditional classifiers: Support Vector Machine (SVM), Neural Network (NN), and K-Nearest Neighbor (KNN). Following standard practice in machine learning, we systematically compare the performance of a number of common feature reduction techniques and classifiers to select an optimal combination of a feature reduction technique and a classifier. A cellular database containing 100 manually labelled subsequences is built for evaluating the performance of the classifiers. The generalization error is estimated using cross-validation. The experimental results show
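
As a concrete illustration of the feature-reduction-plus-classifier pipeline the paper evaluates, here is a minimal numpy-only sketch combining PCA with a plain KNN classifier on synthetic two-class data (the toy features and dimensions are illustrative, not the paper's cell features):

```python
import numpy as np

rng = np.random.default_rng(0)

def pca(X, n_components):
    """Project X onto its top principal components (numpy-only sketch)."""
    Xc = X - X.mean(axis=0)
    # principal directions via SVD of the centred data matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-nearest-neighbour classifier (majority vote)."""
    preds = []
    for x in X_test:
        idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
        vals, counts = np.unique(y_train[idx], return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Two toy "phases" in a 50-D feature space (stand-in for texture/moment features)
X0 = rng.normal(0.0, 1.0, (40, 50))
X1 = rng.normal(2.0, 1.0, (40, 50))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)
Z = pca(X, 2)                                # reduced features
yhat = knn_predict(Z[::2], y[::2], Z[1::2])  # odd/even train-test split
accuracy = np.mean(yhat == y[1::2])
```

The same skeleton applies to any of the reduction/classifier pairs compared in the paper; only the projection and the decision rule change.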

  10. Sequential Model-Based Parameter Optimization: an Experimental Investigation of Automated and Interactive Approaches

    NASA Astrophysics Data System (ADS)

    Hutter, Frank; Bartz-Beielstein, Thomas; Hoos, Holger H.; Leyton-Brown, Kevin; Murphy, Kevin P.

    This work experimentally investigates model-based approaches for optimizing the performance of parameterized randomized algorithms. Such approaches build a response surface model and use this model for finding good parameter settings of the given algorithm. We evaluated two methods from the literature that are based on Gaussian process models: sequential parameter optimization (SPO) (Bartz-Beielstein et al. 2005) and sequential Kriging optimization (SKO) (Huang et al. 2006). SPO performed better "out-of-the-box," whereas SKO was competitive when response values were log transformed. We then investigated key design decisions within the SPO paradigm, characterizing the performance consequences of each. Based on these findings, we propose a new version of SPO, dubbed SPO+, which extends SPO with a novel intensification procedure and a log-transformed objective function. In a domain for which performance results for other (model-free) parameter optimization approaches are available, we demonstrate that SPO+ achieves state-of-the-art performance. Finally, we compare this automated parameter tuning approach to an interactive, manual process that makes use of classical
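
The sequential model-based loop (fit a response-surface model, pick a promising configuration, evaluate it, refit) can be sketched with a tiny numpy Gaussian process and a lower-confidence-bound acquisition rule. This is a stand-in for SPO's actual machinery; the response function, kernel length scale, and all constants below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between 1-D parameter vectors."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xo, yo, Xq, noise=1e-6):
    """GP posterior mean/variance at query points Xq given observations."""
    K = rbf(Xo, Xo) + noise * np.eye(len(Xo))
    Ks = rbf(Xo, Xq)
    mu = Ks.T @ np.linalg.solve(K, yo)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def runtime(theta):
    """Hypothetical algorithm-runtime response surface to minimise."""
    return 0.1 + (theta - 0.7) ** 2

# Sequential model-based optimisation: fit GP to log-runtimes, pick the
# grid point with the lowest lower confidence bound, evaluate, repeat.
X = list(rng.uniform(0, 1, 3))
y = [np.log(runtime(x)) for x in X]      # log transform of the objective
grid = np.linspace(0, 1, 201)
for _ in range(10):
    mu, var = gp_posterior(np.array(X), np.array(y), grid)
    x_next = grid[np.argmin(mu - 2.0 * np.sqrt(var))]
    X.append(x_next)
    y.append(np.log(runtime(x_next)))
best = X[int(np.argmin(y))]
```

The log transform mirrors the finding quoted above: runtimes are heavy-tailed, and modelling log-responses typically gives the GP a far better-behaved surface.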

  11. Fast Model Adaptation for Automated Section Classification in Electronic Medical Records.

    PubMed

    Ni, Jian; Delaney, Brian; Florian, Radu

    2015-01-01

    Medical information extraction is the automatic extraction of structured information from electronic medical records, where such information can be used for improving healthcare processes and medical decision making. In this paper, we study one important medical information extraction task called section classification. The objective of section classification is to automatically identify sections in a medical document and classify them into one of the pre-defined section types. Training section classification models typically requires large amounts of human-labeled training data to achieve high accuracy. Annotating institution-specific data, however, can be both expensive and time-consuming, which poses a major hurdle for adapting a section classification model to new medical institutions. In this paper, we apply two advanced machine learning techniques, active learning and distant supervision, to reduce annotation cost and achieve fast model adaptation for automated section classification in electronic medical records. Our experimental results show that active learning reduces the annotation cost and time by more than 50%, and that distant supervision can achieve good model accuracy using only weakly labeled training data. PMID:26262005
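
Pool-based uncertainty sampling, the core of the active-learning strategy described, can be sketched as follows. The nearest-centroid scorer and the toy two-class pool are stand-ins for the paper's actual section classifier and EMR data:

```python
import numpy as np

rng = np.random.default_rng(2)

def centroid_scores(X_lab, y_lab, X):
    """Soft class scores from a nearest-centroid model (illustrative)."""
    cents = np.array([X_lab[y_lab == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    p = np.exp(-d)
    return p / p.sum(axis=1, keepdims=True)

# Toy pool: two section "types" in a 5-D feature space
X_pool = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(1.5, 1, (100, 5))])
y_pool = np.array([0] * 100 + [1] * 100)   # oracle labels (hidden from learner)
labelled = [0, 100]                        # one seed example per class

for _ in range(20):                        # uncertainty-sampling loop
    p = centroid_scores(X_pool[labelled], y_pool[labelled], X_pool)
    margin = np.abs(p[:, 0] - p[:, 1])
    margin[labelled] = np.inf              # never re-query a labelled item
    labelled.append(int(np.argmin(margin)))  # query the most uncertain item

p = centroid_scores(X_pool[labelled], y_pool[labelled], X_pool)
accuracy = np.mean((p[:, 1] > 0.5) == (y_pool == 1))
```

The annotation savings reported in the paper come from exactly this effect: querying only low-margin items concentrates labelling effort near the decision boundary.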

  12. Lattice gas cellular automaton model for rippling and aggregation in myxobacteria

    NASA Astrophysics Data System (ADS)

    Alber, Mark S.; Jiang, Yi; Kiskowski, Maria A.

    2004-05-01

    A lattice gas cellular automaton (LGCA) model is used to simulate rippling and aggregation in myxobacteria. An efficient way of representing cells of different cell size, shape and orientation is presented that may be easily extended to model later stages of fruiting body formation. This LGCA model is designed to investigate whether a refractory period, a minimum response time, a maximum oscillation period and non-linear dependence of cell reversals on C-factor are necessary assumptions for rippling. It is shown that a refractory period of 2-3 min, a minimum response time of up to 1 min and no maximum oscillation period best reproduce rippling in experiments on Myxococcus xanthus. Non-linear dependence of reversals on C-factor is critical at high cell density. Quantitative simulations demonstrate that the increase in wavelength of ripples when a culture is diluted with non-signaling cells can be explained entirely by the decreased density of C-signaling cells. This result further supports the hypothesis that levels of C-signaling quantitatively depend on and modulate cell density. Analysis of the interpenetrating high density waves shows the presence of a phase shift analogous to the phase shift of interpenetrating solitons. Finally, a model for swarming, aggregation and early fruiting body formation is presented.
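
A minimal 1-D caricature of the LGCA reversal rule, collision-induced reversals gated by a refractory period, is sketched below. The lattice size, cell count and the 3-step refractory period are arbitrary choices for illustration, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(3)
L, N, T_REFRACT = 100, 300, 3      # lattice nodes, cells, refractory steps

pos = rng.integers(0, L, N)        # node occupied by each cell
vel = rng.choice([-1, 1], N)       # orientation (+1 right, -1 left)
refract = np.zeros(N, dtype=int)   # steps until a cell may reverse again

for _ in range(200):
    pos = (pos + vel) % L          # drift step on the periodic lattice
    # C-signal proxy: number of counter-migrating cells on the same node
    right = np.bincount(pos[vel == 1], minlength=L)
    left = np.bincount(pos[vel == -1], minlength=L)
    opposing = np.where(vel == 1, left[pos], right[pos])
    flip = (opposing > 0) & (refract == 0)
    vel[flip] *= -1                # collision-induced reversal
    refract[flip] = T_REFRACT      # enter refractory period
    refract = np.maximum(refract - 1, 0)

density, _ = np.histogram(pos, bins=L, range=(0, L))
```

In the full 2-D model the same ingredients (reversal on C-signal, refractory gating) produce the counter-propagating density waves described in the abstract; here they merely organise cells into transient opposing streams.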

  13. Interface modeling to predict well casing damage for the Big Hill Strategic Petroleum Reserve.

    SciTech Connect

    Ehgartner, Brian L.; Park, Byoung Yoon

    2012-02-01

    Oil leaks were found in well casings of Caverns 105 and 109 at the Big Hill Strategic Petroleum Reserve site. According to the field observations, two instances of casing damage occurred at the depth of the interface between the caprock and the top of salt. This damage could be caused by interface movement induced by cavern volume closure due to salt creep. A three-dimensional finite element model, which allows each cavern to be configured individually, was constructed to investigate shear and vertical displacements across each interface. The model contains interfaces between each lithology and a shear zone to examine the interface behavior in a realistic manner. The analysis results indicate that the casings of Caverns 105 and 109 failed, respectively, by shear stress that exceeded shear strength due to the horizontal movement of the top of salt relative to the caprock, and by tensile stress due to the downward movement of the top of salt away from the caprock. The casings of Caverns 101, 110, 111 and 114, located at the far ends of the field, are predicted to fail by shear stress in the near future. The casings of the innermost Caverns 107 and 108 are predicted to fail by tensile stress in the near future.

  14. The performance of automated case-mix adjustment regression model building methods in a health outcome prediction setting.

    PubMed

    Jen, Min-Hua; Bottle, Alex; Kirkwood, Graham; Johnston, Ron; Aylin, Paul

    2011-09-01

    We have previously described a system for monitoring a number of healthcare outcomes using case-mix adjustment models. It is desirable to automate the model fitting process in such a system if monitoring covers a large number of outcome measures or subgroup analyses. Our aim was to compare the performance of three different variable selection strategies: "manual", "automated" backward elimination and re-categorisation, and including all variables at once, irrespective of their apparent importance, with automated re-categorisation. Logistic regression models for predicting in-hospital mortality and emergency readmission within 28 days were fitted to an administrative database for 78 diagnosis groups and 126 procedures from 1996 to 2006 for National Health Service hospital trusts in England. The performance of models was assessed with Receiver Operating Characteristic (ROC) c statistics (measuring discrimination) and the Brier score (assessing average predictive accuracy). Overall, discrimination was similar for diagnoses and procedures and consistently better for mortality than for emergency readmission. Brier scores were generally low overall (indicating higher accuracy) and were lower for procedures than diagnoses, with a few exceptions for emergency readmission within 28 days. Among the three variable selection strategies, the automated procedure had similar performance to the manual method in almost all cases except low-risk groups with few outcome events. For the rapid generation of multiple case-mix models we suggest applying automated modelling to reduce the time required, in particular when examining different outcomes of large numbers of procedures and diseases in routinely collected administrative health data. PMID:21556848
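
The two performance measures used here, the ROC c statistic and the Brier score, are easy to compute directly. A numpy sketch with made-up predicted risks follows (ties in predictions are not handled, for brevity):

```python
import numpy as np

def roc_auc(y, p):
    """ROC c statistic via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(p)
    ranks = np.empty(len(p))
    ranks[order] = np.arange(1, len(p) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

def brier(y, p):
    """Mean squared difference between predicted risk and outcome."""
    return np.mean((p - y) ** 2)

y = np.array([0, 0, 0, 1, 1])            # e.g. in-hospital mortality outcomes
p = np.array([0.1, 0.2, 0.4, 0.3, 0.9])  # model-predicted risks (illustrative)
```

For these five cases the c statistic is 5/6 (five of the six positive-negative pairs are correctly ordered) and the Brier score is 0.142.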

  15. Semi-automated calibration method for modelling of mountain permafrost evolution in Switzerland

    NASA Astrophysics Data System (ADS)

    Marmy, A.; Rajczak, J.; Delaloye, R.; Hilbich, C.; Hoelzle, M.; Kotlarski, S.; Lambiel, C.; Noetzli, J.; Phillips, M.; Salzmann, N.; Staub, B.; Hauck, C.

    2015-09-01

    Permafrost is a widespread phenomenon in the European Alps. Topics such as the future evolution of permafrost under climate change and the detection of permafrost at potential natural hazard sites are of major concern to society. Numerical permafrost models are the only tools that allow projection of the future evolution of permafrost. Due to the complexity of the processes involved and the heterogeneity of Alpine terrain, models must be carefully calibrated and results should be compared with observations at the site (borehole) scale. However, a large number of local point data are necessary to obtain a broad overview of the thermal evolution of mountain permafrost over a larger area, such as the Swiss Alps, and site-specific model calibration of each point would be time-consuming. To address this issue, this paper presents a semi-automated calibration method using Generalized Likelihood Uncertainty Estimation (GLUE) as implemented in a 1-D soil model (CoupModel) and applies it to six permafrost sites in the Swiss Alps prior to long-term permafrost evolution simulations. We show that this automated calibration method accurately reproduces the main thermal characteristics, with some limitations at sites with unique conditions such as 3-D air or water circulation, which have to be calibrated manually. The calibration obtained was used for RCM-based long-term simulations under the A1B climate scenario, specifically downscaled at each borehole site. The projection shows general permafrost degradation, with thawing at 10 m depth, in places even reaching 20 m, by the end of the century, but with different timing among the sites. The degradation is more rapid at bedrock sites, whereas ice-rich sites with a blocky surface cover showed a reduced sensitivity to climate change. The snow cover duration is expected to be reduced drastically (by 20-37%), impacting the ground thermal regime. However
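
The GLUE procedure the authors adapt can be sketched in a few lines: sample parameter sets, score each run with an informal likelihood against observations, keep the "behavioural" runs above a threshold, and summarise the retained ensemble. The stand-in ground-temperature model, likelihood shape and thresholds below are illustrative assumptions, not those of the CoupModel setup:

```python
import numpy as np

rng = np.random.default_rng(4)

def soil_model(theta, t):
    """Stand-in 1-D model: seasonal ground-temperature cycle (hypothetical)."""
    amp, lag = theta
    return amp * np.sin(2 * np.pi * (t - lag) / 365.0)

t = np.arange(365.0)
obs = soil_model((4.0, 20.0), t) + rng.normal(0, 0.5, t.size)  # synthetic "borehole" data

# GLUE: Monte Carlo sampling, informal likelihood, behavioural threshold
thetas = np.column_stack([rng.uniform(1, 8, 2000),    # amplitude candidates
                          rng.uniform(0, 60, 2000)])  # lag candidates (days)
sse = np.array([np.sum((soil_model(th, t) - obs) ** 2) for th in thetas])
like = np.exp(-sse / sse.min())        # informal likelihood measure
behavioural = thetas[like > 0.01]      # keep runs above the threshold
amp_est = behavioural[:, 0].mean()     # ensemble summary of one parameter
```

In the paper's setting the same loop runs the full 1-D soil model per sample, and the behavioural ensemble feeds the long-term RCM-driven simulations.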

  16. Damage evolution of bi-body model composed of weakly cemented soft rock and coal considering different interface effect.

    PubMed

    Zhao, Zenghui; Lv, Xianzhou; Wang, Weiming; Tan, Yunliang

    2016-01-01

    Considering the structure effect on tunnel stability in western China mining, three typical numerical models were built, based on a strain-softening constitutive model for the soft rock and a linear elastic-perfectly plastic model for the interface: R-M, R-C(s)-M and R-C(w)-M. Calculation results revealed that the stress-strain relation and failure characteristics differ between the three models. The combination model without an interface or with a strong interface presented continuous failure, while a weak interface exhibited a 'cut off' effect. Thus, conceptual 'bi-material' and 'bi-body' models were established, and numerical tri-axial compression experiments were carried out for both. The relationships between stress evolution, failure zone and deformation rate fluctuations, as well as the displacement of the interface, were analyzed in detail. Results show that the two breakaway points of the deformation rate mark the initiation and penetration of the main rupture, respectively, and are distinguishable by their large fluctuations. The bi-material model shows generally continuous failure, while the bi-body model shows a 'V'-type shear zone in the weak body and failure in the strong body near the interface, due to the interface effect. With increasing confining pressure, the 'cut off' effect of the weak interface becomes less pronounced. These conclusions lay the theoretical foundation for further development of a constitutive model for the soft rock-coal combination body. PMID:27066329

  17. Development and Implementation of an Extensible Interface-Based Spatiotemporal Geoprocessing and Modeling Toolbox

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Ames, D. P.

    2011-12-01

    This poster presents an object oriented and interface-based spatiotemporal data processing and modeling toolbox that can be extended by third parties to include complete suites of new tools through the implementation of simple interfaces. The resulting software implementation includes both a toolbox and a workflow designer or "model builder" constructed using the underlying open source DotSpatial library and MapWindow desktop GIS. The unique contribution of this research and software development activity is in the creation and use of an extensibility architecture for both specific tools (through a so-called "ITool" interface) and batches of tools (through a so-called "IToolProvider" interface). This concept is introduced to allow for seamless integration of geoprocessing tools from various sources (e.g. distinct libraries of spatiotemporal processing code) - including online sources - within a single user environment. In this way, the IToolProvider interface allows developers to wrap large existing collections of data analysis code without having to re-write it for interoperability. Additionally, developers do not need to design the user interfaces for loading, displaying or interacting with their specific tools, but rather can simply implement the provided interfaces and have their tools and tool collections appear in the toolbox alongside other tools. The demonstration software presented here is based on an implementation of the interfaces and sample tool libraries using the C# .NET programming language. This poster will include a summary of the interfaces as well as a demonstration of the system using the Whitebox Geospatial Analysis Tools (GAT) as an example case of a large number of existing tools that can be exposed to users through this new system. Vector analysis tools which are native in DotSpatial are linked to the Whitebox raster analysis tools in the model builder environment for ease of execution and consistent/repeatable use. We expect that this
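
The two extensibility contracts can be illustrated in miniature. The original system is C#/.NET; the sketch below is a hypothetical Python analogue of the "ITool"/"IToolProvider" idea (all class and method names are invented for illustration, not the actual DotSpatial API):

```python
from abc import ABC, abstractmethod

# A host toolbox discovers tools only through these two interfaces,
# never through concrete classes, so third-party libraries plug in
# without the host knowing their internals.

class ITool(ABC):
    @property
    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def execute(self, inputs: dict) -> dict: ...

class IToolProvider(ABC):
    @abstractmethod
    def get_tools(self) -> list:
        """Expose a whole library of tools without rewriting it."""

class SlopeTool(ITool):
    name = "Slope"
    def execute(self, inputs):
        # stand-in for a real raster operation
        return {"slope_grid": f"slope({inputs['dem']})"}

class RasterToolProvider(IToolProvider):
    """Wraps an existing tool collection behind the provider interface."""
    def get_tools(self):
        return [SlopeTool()]

# The host enumerates providers and flattens their tools into one toolbox.
toolbox = [t for p in [RasterToolProvider()] for t in p.get_tools()]
```

The design choice mirrors the abstract: wrapping a legacy collection means implementing one provider class, after which every contained tool appears in the shared toolbox UI for free.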

  18. Automated rodent in situ muscle contraction assay and myofiber organization analysis in sarcopenia animal models.

    PubMed

    Weber, H; Rauch, A; Adamski, S; Chakravarthy, K; Kulkarni, A; Dogdas, B; Bendtsen, C; Kath, G; Alves, S E; Wilkinson, H A; Chiu, C-S

    2012-06-01

    Age-related sarcopenia results in frailty and decreased mobility, which are associated with increased falls and long-term disability in the elderly. Given the global increase in lifespan, sarcopenia is a growing, unmet medical need. This report aims to systematically characterize muscle aging in preclinical models, which may facilitate the development of sarcopenia therapies. Naïve rats and mice were subjected to noninvasive micro X-ray computed tomography (micro-CT) imaging, terminal in situ muscle function characterizations, and ATPase-based myofiber analysis. We developed a Definiens (Parsippany, NJ)-based algorithm to automate micro-CT image analysis, which facilitates longitudinal in vivo muscle mass analysis. We report development and characterization of translational in situ skeletal muscle performance assay systems in rat and mouse. The systems incorporate a custom-designed animal assay stage, resulting in enhanced force measurement precision, and LabVIEW (National Instruments, Austin, TX)-based algorithms to support automated data acquisition and data analysis. We used ATPase-staining techniques for myofibers to characterize fiber subtypes and distribution. Major parameters contributing to muscle performance were identified using data mining and integration, enabled by Labmatrix (BioFortis, Columbia, MD). These technologies enabled the systemic and accurate monitoring of muscle aging from a large number of animals. The data indicated that longitudinal muscle cross-sectional area measurement effectively monitors change of muscle mass and function during aging. Furthermore, the data showed that muscle performance during aging is also modulated by myofiber remodeling factors, such as changes in myofiber distribution patterns and changes in fiber shape, which affect myofiber interaction. This in vivo muscle assay platform has been applied to support identification and validation of novel targets for the treatment of sarcopenia. PMID:22461442

  19. Automated generation of high-quality training data for appearance-based object models

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Voelker, Arno; Kieritz, Hilke; Hübner, Wolfgang; Arens, Michael

    2013-11-01

    Methods for automated person detection and person tracking are essential core components in modern security and surveillance systems. Most state-of-the-art person detectors follow a statistical approach, where prototypical appearances of persons are learned from training samples with known class labels. Selecting appropriate learning samples has a significant impact on the quality of the generated person detectors. For example, training a classifier on a rigid body model using training samples with strong pose variations is in general not effective, irrespective of the classifier's capabilities. Generation of high-quality training data is, apart from performance issues, a very time-consuming process, comprising a significant amount of manual work. Furthermore, due to inevitable limitations of freely available training data, corresponding classifiers are not always transferable to a given sensor and are only applicable in a well-defined narrow variety of scenes and camera setups. Semi-supervised learning methods are a commonly used alternative to supervised training, in general requiring only few labeled samples. However, as a drawback, semi-supervised methods always include a generative component, which is known to be difficult to learn. Therefore, automated processes for generating training data sets for supervised methods are needed. Such approaches could either help to better adjust classifiers to respective hardware, or serve as a complement to existing data sets. Towards this end, this paper provides some insights into the quality requirements of automatically generated training data for supervised learning methods. Assuming a static camera, labels are generated based on motion detection by background subtraction with respect to weak constraints on the enclosing bounding box of the motion blobs. Since this labeling method consists of standard components, we illustrate the effectiveness by adapting a person detector to cameras of a sensor network. While varying
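
The labeling scheme described, motion masks from background subtraction filtered by weak constraints on the enclosing bounding box, can be sketched with plain numpy. The synthetic frames and thresholds are illustrative; a real deployment would use a proper background model (e.g. a Gaussian mixture) rather than a fixed reference frame:

```python
import numpy as np

def label_from_motion(frames, bg, thresh=30, min_area=50):
    """Generate bounding-box labels from background subtraction.

    frames: stack of greyscale frames (T, H, W); bg: static background frame.
    Returns one (x0, y0, x1, y1) box per frame, or None when the motion
    blob is too small to be a plausible person (weak box constraint).
    """
    boxes = []
    for f in frames:
        mask = np.abs(f.astype(int) - bg.astype(int)) > thresh
        ys, xs = np.nonzero(mask)
        if len(xs) < min_area:
            boxes.append(None)          # reject tiny/noisy detections
            continue
        boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return boxes

# Synthetic demo: a bright 20x10 "person" moving across a flat background
bg = np.zeros((120, 160), dtype=np.uint8)
frames = np.zeros((5, 120, 160), dtype=np.uint8)
for t in range(5):
    frames[t, 50:70, 10 + 15 * t:20 + 15 * t] = 255
boxes = label_from_motion(frames, bg)
```

Each returned box plus the source frame forms one automatically labeled positive sample of the kind the paper feeds to the supervised detector.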

  20. AgRISTARS: Yield model development/soil moisture. Interface control document

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The interactions and support functions required between the crop Yield Model Development (YMD) Project and Soil Moisture (SM) Project are defined. The requirements for YMD support of SM and vice-versa are outlined. Specific tasks in support of these interfaces are defined for development of support functions.

  1. Self-Observation Model Employing an Instinctive Interface for Classroom Active Learning

    ERIC Educational Resources Information Center

    Chen, Gwo-Dong; Nurkhamid; Wang, Chin-Yeh; Yang, Shu-Han; Chao, Po-Yao

    2014-01-01

    In a classroom, designing for learning that is active, fully focused, and engaging is often difficult. In this study, we propose a self-observation model that employs an instinctive interface for classroom active learning. Students can communicate with virtual avatars on the vertical screen and can react naturally according to the…

  2. Importance of interfaces in governing thermal transport in composite materials: modeling and experimental perspectives.

    PubMed

    Roy, Ajit K; Farmer, Barry L; Varshney, Vikas; Sihn, Sangwook; Lee, Jonghoon; Ganguli, Sabyasachi

    2012-02-01

    Thermal management in polymeric composite materials has become increasingly critical in the air-vehicle industry because of the increasing thermal load in small-scale composite devices extensively used in electronics and aerospace systems. The thermal transport phenomenon in these small-scale heterogeneous systems is essentially controlled by the interface thermal resistance because of the large surface-to-volume ratio. In this review article, several modeling strategies are discussed for different length scales, complemented by our experimental efforts to tailor the thermal transport properties of polymeric composite materials. Progress in the molecular modeling of thermal transport in thermosets is reviewed along with a discussion on the interface thermal resistance between functionalized carbon nanotube and epoxy resin systems. For the thermal transport in fiber-reinforced composites, various micromechanics-based analytical and numerical modeling schemes are reviewed in predicting the transverse thermal conductivity. Numerical schemes used to realize and scale the interface thermal resistance and the finite mean free path of the energy carrier in the mesoscale are discussed in the frame of the lattice Boltzmann-Peierls-Callaway equation. Finally, guided by modeling, complementary experimental efforts are discussed for exfoliated graphite and vertically aligned nanotubes based composites toward improving their effective thermal conductivity by tailoring interface thermal resistance. PMID:22295993
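
At the simplest continuum level, the dominance of interface thermal resistance in such composites can be seen from a textbook one-dimensional series-resistance estimate. This is a back-of-the-envelope sketch with illustrative numbers, not the mesoscale lattice Boltzmann-Peierls-Callaway scheme reviewed above:

```python
def effective_conductivity(layers, r_interface):
    """Series thermal-resistance estimate for a layered composite.

    layers: list of (thickness_m, conductivity_W_per_mK) tuples;
    r_interface: interface thermal resistance (m^2 K/W) per junction.
    A textbook series model: k_eff = L_total / (sum(L_i/k_i) + n*R_int).
    """
    total_thickness = sum(t for t, _ in layers)
    resistance = sum(t / k for t, k in layers)
    resistance += r_interface * (len(layers) - 1)
    return total_thickness / resistance

# Epoxy (0.2 W/mK) plus a highly conductive graphite layer (100 W/mK):
# with a finite interface resistance, the junction, not the filler,
# limits the composite (all values illustrative)
k_perfect = effective_conductivity([(1e-4, 0.2), (1e-4, 100.0)], 0.0)
k_real = effective_conductivity([(1e-4, 0.2), (1e-4, 100.0)], 1e-4)
```

Even this crude model reproduces the qualitative point of the review: adding a high-conductivity phase buys little unless the interface resistance is tailored down as well.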

  3. Development of a GIS interface for WEPP model application to Great Lakes forested watersheds

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This presentation will highlight efforts on development of a new WEPP GIS interface, targeted toward application in forested regions bordering the Great Lakes. The key components and algorithms of the online GIS system will be outlined. The general procedures used to provide input to the WEPP model ...

  4. Numerical simulations of the moving contact line problem using a diffuse-interface model

    NASA Astrophysics Data System (ADS)

    Afzaal, Muhammad; Sibley, David; Duncan, Andrew; Yatsyshin, Petr; Duran-Olivencia, Miguel A.; Nold, Andreas; Savva, Nikos; Schmuck, Markus; Kalliadasis, Serafim

    2015-11-01

    Moving contact lines are a ubiquitous phenomenon both in nature and in many modern technologies. One prevalent way of numerically tackling the problem is with diffuse-interface (phase-field) models, where the classical sharp-interface model of continuum mechanics is relaxed to one with a finite-thickness fluid-fluid interface, capturing physics from mesoscopic length scales. The present work is devoted to the study of the contact line between two fluids confined by two parallel plates, i.e. a dynamically moving meniscus. Our approach is based on a coupled Navier-Stokes/Cahn-Hilliard model. This system of partial differential equations allows a tractable numerical solution to be computed, capturing diffusive and advective effects in a prototypical case study in a finite-element framework. Particular attention is paid to the static and dynamic contact angle of the meniscus advancing or receding between the plates. The results obtained from our approach are compared to the classical sharp-interface model to elicit the importance of considering diffusion and associated effects. We acknowledge financial support from the European Research Council via Advanced Grant No. 247031.
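
The phase-field ingredient of such models can be sketched in one dimension: explicit finite-difference time-stepping of the Cahn-Hilliard equation with periodic boundaries. All parameter values here are arbitrary, and the paper's actual scheme is a coupled Navier-Stokes/Cahn-Hilliard finite-element solver, not this toy integrator:

```python
import numpy as np

rng = np.random.default_rng(5)

def lap(f, dx=1.0):
    """Periodic 1-D Laplacian (second-order central differences)."""
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

# Explicit time-stepping of the 1-D Cahn-Hilliard equation
#   dphi/dt = M * lap(mu),   mu = phi**3 - phi - eps**2 * lap(phi)
# phi is the phase field: +1 in one fluid, -1 in the other, with a
# diffuse interface of thickness ~eps in between.
eps, M, dt = 1.0, 1.0, 0.01
phi = 0.1 * rng.standard_normal(64)   # small random initial mixture
mass0 = phi.mean()                    # the scheme conserves this exactly
for _ in range(2000):
    mu = phi**3 - phi - eps**2 * lap(phi)
    phi = phi + dt * M * lap(mu)
```

Starting from a near-uniform mixture, the field separates into regions near +1 and -1 joined by diffuse interfaces, the same structure that replaces the sharp interface in the meniscus problem above.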

  5. A Monthly Water-Balance Model Driven By a Graphical User Interface

    USGS Publications Warehouse

    McCabe, Gregory J.; Markstrom, Steven L.

    2007-01-01

    This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.
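
The core of the Thornthwaite method is a short closed-form computation. The sketch below omits the daylight/latitude correction table for brevity (i.e. it assumes 30-day months and 12-hour days), so outputs are approximate; the input temperatures are illustrative:

```python
def thornthwaite_pet(monthly_temp_c):
    """Monthly potential evapotranspiration (mm), Thornthwaite (1948).

    monthly_temp_c: 12 mean monthly air temperatures (deg C).
    The daylight/latitude correction factors are omitted here, so the
    values are uncorrected approximations.
    """
    # annual heat index from the months above freezing
    heat_index = sum((t / 5.0) ** 1.514 for t in monthly_temp_c if t > 0)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * t / heat_index) ** a if t > 0 else 0.0
            for t in monthly_temp_c]

temps = [-2, 0, 4, 9, 14, 18, 21, 20, 16, 10, 4, -1]  # illustrative mid-latitude year
pet = thornthwaite_pet(temps)
water_balance = [p - e for p, e in zip([60] * 12, pet)]  # precip minus PET (mm)
```

Subtracting PET from monthly precipitation, as in the last line, is the first step of the water-balance bookkeeping (soil-moisture storage, surplus and deficit) that the program performs for a specified location.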

  6. Facial pressure zones of an oronasal interface for noninvasive ventilation: a computer model analysis

    PubMed Central

    Barros, Luana Souto; Talaia, Pedro; Drummond, Marta; Natal-Jorge, Renato

    2014-01-01

    OBJECTIVE: To study the effects of an oronasal interface (OI) for noninvasive ventilation, using a three-dimensional (3D) computational model with the ability to simulate and evaluate the main pressure zones (PZs) of the OI on the human face. METHODS: We used a 3D digital model of the human face, based on a pre-established geometric model. The model simulated soft tissues, skull, and nasal cartilage. The geometric model was obtained by 3D laser scanning and post-processed for use in the model created, with the objective of separating the cushion from the frame. A computer simulation was performed to determine the pressure required in order to create the facial PZs. We obtained descriptive graphical images of the PZs and their intensity. RESULTS: For the graphical analyses of each face-OI model pair and their respective evaluations, we ran 21 simulations. The computer model identified several high-impact PZs in the nasal bridge and paranasal regions. The variation in soft tissue depth had a direct impact on the amount of pressure applied (438-724 cmH2O). CONCLUSIONS: The computer simulation results indicate that, in patients submitted to noninvasive ventilation with an OI, the probability of skin lesion is higher in the nasal bridge and paranasal regions. This methodology could increase the applicability of biomechanical research on noninvasive ventilation interfaces, providing the information needed in order to choose the interface that best minimizes the risk of skin lesion. PMID:25610506

  7. Modeling the current distribution across the depth electrode-brain interface in deep brain stimulation.

    PubMed

    Yousif, Nada; Liu, Xuguang

    2007-09-01

    The mismatch between the extensive clinical use of deep brain stimulation (DBS), which is being used to treat an increasing number of neurological disorders, and the lack of understanding of the underlying mechanisms is confounded by the difficulty of measuring the spread of electric current in the brain in vivo. In this article we present a brief review of the recent computational models that simulate the electric current and field distribution in 3D space and, consequently, make estimations of the brain volume being modulated by therapeutic DBS. Such structural modeling work can be categorized into three main approaches: target-specific modeling, models of instrumentation and modeling the electrode-brain interface. Comments are made for each of these approaches with emphasis on our electrode-brain interface modeling, since the stimulating current must travel across the electrode-brain interface in order to reach the surrounding brain tissue and modulate the pathological neural activity. For future modeling work, a combined approach needs to be taken to reveal the underlying mechanisms, and both structural and dynamic models need to be clinically validated to make reliable predictions about the therapeutic effect of DBS in order to assist clinical practice. PMID:17850197
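
The simplest structural model in this family, a monopole point source in a homogeneous, isotropic medium, gives the extracellular potential V(r) = I / (4*pi*sigma*r). It is a deliberately crude baseline; the models reviewed above add electrode geometry, tissue inhomogeneity and the electrode-brain interface layer. The stimulation current, conductivity and activation threshold below are illustrative values only:

```python
import numpy as np

def potential(r_m, current_a=1e-3, sigma=0.2):
    """Point-source extracellular potential V = I/(4*pi*sigma*r).

    sigma: assumed bulk conductivity of grey matter (~0.2 S/m).
    """
    return current_a / (4 * np.pi * sigma * np.asarray(r_m))

r = np.linspace(0.5e-3, 10e-3, 50)     # radial distances from the contact (m)
V = potential(r)
# radius within which V exceeds a (hypothetical) activation threshold,
# a crude proxy for the "volume of tissue modulated" by the stimulus
threshold = 0.1                        # volts
r_activated = r[V > threshold].max() if np.any(V > threshold) else 0.0
```

Replacing this analytic field with a finite-element solution over realistic geometry, and the voltage threshold with axon-model activation, is precisely what the reviewed target-specific and interface models do.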

  8. Automated Geospatial Watershed Assessment

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment (AGWA) tool is a Geographic Information Systems (GIS) interface jointly developed by the U.S. Environmental Protection Agency, the U.S. Department of Agriculture (USDA) Agricultural Research Service, and the University of Arizona to a...

  9. Microcontroller for automation application

    NASA Technical Reports Server (NTRS)

    Cooper, H. W.

    1975-01-01

    The description of a microcontroller currently being developed for automation application was given. It is basically an 8-bit microcomputer with a 40K byte random access memory/read only memory, and can control a maximum of 12 devices through standard 15-line interface ports.

  10. An ASM/ADM model interface for dynamic plant-wide simulation.

    PubMed

    Nopens, Ingmar; Batstone, Damien J; Copp, John B; Jeppsson, Ulf; Volcke, Eveline; Alex, Jens; Vanrolleghem, Peter A

    2009-04-01

    Mathematical modelling has proven to be very useful in process design, operation and optimisation. A recent trend in WWTP modelling is to include the different subunits in so-called plant-wide models rather than focusing on parts of the entire process. One example of a typical plant-wide model is the coupling of an upstream activated sludge plant (including primary settler and secondary clarifier) to an anaerobic digester for sludge digestion. One of the key challenges when coupling these processes has been the definition of an interface between the well-accepted activated sludge model (ASM1) and the anaerobic digestion model (ADM1). Current characterisation and interface models have key limitations, the most critical of which is the over-use of the X(c) (lumped complex) variable as a main input to the ADM1. Over-use of X(c) does not allow for variation of degradability, carbon oxidation state or nitrogen content. In addition, achieving a target influent pH through the proper definition of the ionic system can be difficult. In this paper, we define an interface and characterisation model that maps degradable components directly to carbohydrates, proteins and lipids (and their soluble analogues), as well as organic acids, rather than using X(c). While this interface has been designed for use with the Benchmark Simulation Model No. 2 (BSM2), it is widely applicable to ADM1 input characterisation in general. We have demonstrated the model both hypothetically (BSM2) and practically on a full-scale anaerobic digester treating sewage sludge. PMID:19232670

  11. Automated Verification of Code Generated from Models: Comparing Specifications with Observations

    NASA Astrophysics Data System (ADS)

    Gerlich, R.; Sigg, D.; Gerlich, R.

    2008-08-01

    Interest in automatic code generation from models is increasing. A specification is expressed as a model, and verification and validation are performed in the application domain. Once the model is formally correct and complete, code can be generated automatically. The general belief is that this code should be correct as well. However, this may not be true: many parameters affect code generation and its correctness. Correctness depends on conditions that change from application to application, and the properties of the code depend on the environment in which it is executed. From the principles of ISVV (Independent Software Verification and Validation), it must even be doubted that the automatically generated code is correct. An additional activity is therefore required to prove the correctness of the whole chain, from the modelling level down to execution on the target platform. Certification of a code generator is the state-of-the-art approach to dealing with such risks. Scade [1] was the first code generator certified according to DO-178B. The certification costs are a significant disadvantage of this approach: all code needs to be analysed manually, and the procedure has to be repeated for recertification after each maintenance step. Moreover, certification does not guarantee that the generated code complies with the model. Certification is based on compliance of the code generator's code with given standards, and such compliance can never guarantee correctness of the whole chain through transformation down to the execution environment, though the belief is that certification implies well-formed code at a reduced fault rate. The approach presented here takes a direction different from manual certification. It is guided by the idea of automated proof: each time code is generated from a model, the properties of the code when executed in its environment are compared with the properties specified in the model.
This allows one to conclude on the correctness of
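    A toy illustration of the comparison idea in this abstract: properties stated at model level are re-checked automatically against observations of the generated code executing. The "generated" function and the property set below are invented stand-ins, not the authors' tooling.

```python
# Invented stand-in for a generated function: a saturating counter whose
# model-level specification promises a range invariant and wrap-around.
def generated_step(counter, limit=10):
    return counter + 1 if counter < limit else 0

def observe(n_steps):
    """Run the generated code and record its observable behaviour."""
    c, trace = 0, []
    for _ in range(n_steps):
        c = generated_step(c)
        trace.append(c)
    return trace

trace = observe(50)
# Model-level properties, re-checked automatically on every generation run:
assert all(0 <= c <= 10 for c in trace)   # range invariant from the model
assert trace[10] == 0                     # wrap-around behaviour as specified
```

    The point of the automated-proof approach is that these checks run against each freshly generated build in its target environment, rather than trusting the generator's certification.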

  12. Semi-automated DIRSIG scene modeling from three-dimensional lidar and passive imagery

    NASA Astrophysics Data System (ADS)

    Lach, Stephen R.

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to long wave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (lidar) models have also been incorporated into the software, providing an extremely powerful tool for multi-sensor algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG's ability to generate scenes "on demand." To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects and background maps have to be created and attributed manually. To shorten the time required for this process, this research developed an approach to reduce the man-in-the-loop requirements for several aspects of synthetic scene construction. Through a fusion of 3D lidar data with passive imagery, we were able to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, many of the remaining tasks realized a shortened implementation time through this application of multi-modal imagery. Lidar data is exploited to identify ground and object features as well as to define initial tree location and building parameter estimates. These estimates are then refined by analyzing high-resolution frame array imagery using the concepts of projective geometry in lieu of the more common Euclidean approach found in most traditional photogrammetric references. Spectral imagery is also used to assign material characteristics to the modeled geometric objects. This is achieved through a modified atmospheric compensation applied to raw hyperspectral imagery. These techniques have been successfully applied to imagery collected over the RIT campus and the greater Rochester area. The data used

  13. Modelling and interpreting biologically crusted dryland soil sub-surface structure using automated micropenetrometry

    NASA Astrophysics Data System (ADS)

    Hoon, Stephen R.; Felde, Vincent J. M. N. L.; Drahorad, Sylvie L.; Felix-Henningsen, Peter

    2015-04-01

    Soil penetrometers are used routinely to determine the shear strength of soils and deformable sediments, both at the surface and throughout a depth profile, in disciplines as diverse as soil science, agriculture, geoengineering and alpine avalanche safety (e.g. Grunwald et al. 2001, Van Herwijnen et al. 2009). Generically, penetrometers comprise two principal components: an advancing probe, and a transducer to measure the pressure or force required to cause the probe to penetrate or advance through the soil or sediment. The force transducer employed to determine the pressure can range, for example, from a simple mechanical spring gauge to an automatically data-logged electronic transducer. Automated computer control of the penetrometer step size and probe advance rate enables precise measurements to be made down to a resolution of tens of microns (e.g. the automated electronic micropenetrometer (EMP) described by Drahorad 2012). Here we discuss the determination, modelling and interpretation of biologically crusted dryland soil sub-surface structures using automated micropenetrometry. We outline a model enabling the interpretation of depth-dependent penetration resistance (PR) profiles and their spatial differentials using the model equations σ(z) = σ_c0 + Σ_1^n [σ_n(z) + a_n z + b_n z^2] and dσ/dz = Σ_1^n [dσ_n(z)/dz + Fr_n(z)], where σ_c0 and σ_n are the plastic deformation stresses for the surface and the nth soil structure (e.g. soil crust, layer, horizon or void) respectively, and Fr_n(z)dz is the frictional work done per unit volume by sliding the penetrometer rod an incremental distance, dz, through the nth layer. Both σ_n(z) and Fr_n(z) are related to soil structure. They determine the form of σ(z) measured by the EMP transducer. The model enables pores (regions of zero deformation stress) to be distinguished from changes in layer structure or probe friction. We have applied this method to both artificial calibration soils in the
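    The layered model equation above can be evaluated numerically; a small sketch follows, with hypothetical layer parameters chosen only to show how a pore (a depth interval of zero deformation stress) appears in the profile. The friction term Fr_n(z) is omitted here for brevity.

```python
import numpy as np

# Evaluate sigma(z) = sigma_c0 + sum_n [sigma_n + a_n*z + b_n*z**2] for a
# stack of layers; pores are simply gaps not covered by any layer.
def resistance_profile(z, sigma_c0, layers):
    """layers: list of (z_top, z_bot, sigma_n, a_n, b_n) tuples."""
    sigma = np.full_like(z, sigma_c0, dtype=float)
    for z_top, z_bot, s_n, a_n, b_n in layers:
        in_layer = (z >= z_top) & (z < z_bot)
        sigma[in_layer] += s_n + a_n * z[in_layer] + b_n * z[in_layer] ** 2
    return sigma

z = np.linspace(0.0, 30.0, 301)          # depth in mm (hypothetical units)
layers = [(0.0, 5.0, 2.0, 0.1, 0.0),     # surface crust
          (8.0, 30.0, 1.0, 0.05, 0.001)] # deeper horizon; 5-8 mm gap is a pore
sigma = resistance_profile(z, 0.5, layers)
pore = (z > 5.1) & (z < 7.9)
assert np.allclose(sigma[pore], 0.5)     # in the pore only the surface term remains
```

    In a measured profile the same signature, a drop of σ(z) to the baseline with no change in its differential, is what distinguishes a pore from a change in layer structure or probe friction.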

  14. VoICE: A semi-automated pipeline for standardizing vocal analysis across models

    PubMed Central

    Burkett, Zachary D.; Day, Nancy F.; Peñagarikano, Olga; Geschwind, Daniel H.; White, Stephanie A.

    2015-01-01

    The study of vocal communication in animal models provides key insight into the neurogenetic basis for speech and communication disorders. Current methods for vocal analysis suffer from a lack of standardization, creating ambiguity in cross-laboratory and cross-species comparisons. Here, we present VoICE (Vocal Inventory Clustering Engine), an approach to grouping vocal elements by creating a high dimensionality dataset through scoring spectral similarity between all vocalizations within a recording session. This dataset is then subjected to hierarchical clustering, generating a dendrogram that is pruned into meaningful vocalization “types” by an automated algorithm. When applied to birdsong, a key model for vocal learning, VoICE captures the known deterioration in acoustic properties that follows deafening, including altered sequencing. In a mammalian neurodevelopmental model, we uncover a reduced vocal repertoire of mice lacking the autism susceptibility gene, Cntnap2. VoICE will be useful to the scientific community as it can standardize vocalization analyses across species and laboratories. PMID:26018425
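    A toy sketch of the pipeline shape described here: score pairwise spectral similarity across all vocal elements, then group them into "types". The feature vectors are synthetic, similarity is plain correlation, and the dendrogram-pruning step is approximated by single-linkage grouping at a fixed threshold; VoICE's actual scoring and pruning differ.

```python
import numpy as np

def similarity_matrix(feats):
    """Pairwise correlation of spectral feature vectors, in [-1, 1]."""
    f = feats - feats.mean(axis=1, keepdims=True)
    f /= np.linalg.norm(f, axis=1, keepdims=True)
    return f @ f.T

def cluster_types(sim, threshold=0.9):
    """Label connected components of {sim >= threshold} (single linkage)."""
    n = sim.shape[0]
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack, labels[i] = [i], current
        while stack:
            j = stack.pop()
            for k in np.nonzero((sim[j] >= threshold) & (labels < 0))[0]:
                labels[k] = current
                stack.append(k)
        current += 1
    return labels

a = np.arange(8.0)                                     # one "syllable" feature vector
feats = np.vstack([a,                                  # two near-identical renditions
                   a + np.array([0.0, 0.01, -0.01, 0.0, 0.01, 0.0, -0.01, 0.0]),
                   np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0])])
labels = cluster_types(similarity_matrix(feats))
assert labels[0] == labels[1] and labels[0] != labels[2]  # two types recovered
```

    The standardization benefit comes from the fact that the same similarity-then-cluster procedure applies unchanged to birdsong syllables and mouse ultrasonic vocalizations.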

  15. DockTope: a Web-based tool for automated pMHC-I modelling

    PubMed Central

    Menegatti Rigo, Maurício; Amaral Antunes, Dinler; Vaz de Freitas, Martiela; Fabiano de Almeida Mendes, Marcus; Meira, Lindolfo; Sinigaglia, Marialva; Fioravanti Vieira, Gustavo

    2015-01-01

    The immune system is constantly challenged, being required to protect the organism against a wide variety of infectious pathogens and, at the same time, to avoid autoimmune disorders. One of the most important molecules involved in these events is the Major Histocompatibility Complex class I (MHC-I), responsible for binding and presenting small peptides from the intracellular environment to CD8+ T cells. The study of peptide:MHC-I (pMHC-I) molecules at a structural level is crucial to understand the molecular mechanisms underlying immunologic responses. Unfortunately, there are few pMHC-I structures in the Protein Data Bank (PDB) (especially considering the total number of complexes that could be formed combining different peptides), and pMHC-I modelling tools are scarce. Here, we present DockTope, a free and reliable web-based tool for pMHC-I modelling, based on crystal structures from the PDB. DockTope is fully automated and allows any researcher to construct a pMHC-I complex in an efficient way. We have reproduced a dataset of 135 non-redundant pMHC-I structures from the PDB (Cα RMSD below 1 Å). Modelling of pMHC-I complexes is remarkably important, contributing to the knowledge of important events such as cross-reactivity, autoimmunity, cancer therapy, transplantation and rational vaccine design. PMID:26674250
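    The validation above reports Cα RMSD below 1 Å against crystal structures. For reference, the RMSD between two already-superposed coordinate sets is computed as below; the coordinates are synthetic, and real use would first align the structures (e.g. with the Kabsch algorithm), which is not shown.

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between matched, superposed atom sets."""
    diff = np.asarray(coords_a) - np.asarray(coords_b)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Three synthetic C-alpha positions and a copy shifted by 0.3 Angstrom in x
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
b = a + np.array([0.3, 0.0, 0.0])
assert abs(rmsd(a, b) - 0.3) < 1e-9
```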

  17. Partially Automated Method for Localizing Standardized Acupuncture Points on the Heads of Digital Human Models

    PubMed Central

    Kim, Jungdae; Kang, Dae-In

    2015-01-01

    Modern imaging tools for precisely positioning acupuncture points on the human body, where this traditional therapeutic method is applied, are essential. For that reason, we suggest a more systematic positioning method that uses X-ray computed tomographic images to precisely position acupoints. Digital Korean human data were obtained to construct three-dimensional head-skin and skull surface models of six individuals. Depending on the method used to pinpoint the positions of the acupoints, every acupoint was classified into one of three types: anatomical points, proportional points, and morphological points. A computational algorithm and procedure were developed for partial automation of the positioning. The anatomical points were selected by using the structural characteristics of the skin surface and skull. The proportional points were calculated from the positions of the anatomical points. The morphological points were also calculated by using some control points related to the connections between the source and the target models. All the acupoints on the heads of the six individuals were displayed on three-dimensional computer graphical image models. This method may be helpful for developing more accurate experimental designs and for providing more quantitative volumetric methods for performing analyses in acupuncture-related research. PMID:26101534

  18. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    SciTech Connect

    Robertson, Joseph; Polly, Ben; Collis, Jon

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
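    The simplest of the four methods compared, output ratio calibration, can be sketched as below: the uncalibrated model's predictions are scaled by the ratio of measured to simulated totals, and retrofit savings are predicted with the same factor. The numbers are synthetic, not from the study.

```python
# Hedged sketch of a simple "output ratio" calibration, assuming a single
# multiplicative factor is adequate (the study's other three methods adjust
# model inputs instead).

def output_ratio_calibrate(simulated, measured):
    """Return a single multiplicative calibration factor."""
    return sum(measured) / sum(simulated)

sim_monthly = [900, 950, 1100, 1200]    # uncalibrated model output (kWh)
bill_monthly = [990, 1045, 1210, 1320]  # synthetic utility bills (kWh)
k = output_ratio_calibrate(sim_monthly, bill_monthly)
assert abs(k - 1.1) < 1e-9              # bills are exactly 10% above the model

# Apply the factor to the model's prediction for a retrofit scenario
retrofit_sim = [800, 840, 970, 1060]
predicted = [k * e for e in retrofit_sim]
assert abs(sum(predicted) - 1.1 * sum(retrofit_sim)) < 1e-9
```

    Its appeal is automation and negligible computational cost; its weakness, which the study's accuracy comparison probes, is that a single ratio cannot correct input errors that affect retrofit savings differently from baseline use.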

  20. Automated Geometric Model Builder Using Range Image Sensor Data: Final Acquistion

    SciTech Connect

    Diegert, C.; Sackos, J.

    1999-02-01

    This report documents a data collection where we recorded redundant range image data from multiple views of a simple scene, and recorded accurate survey measurements of the same scene. Collecting these data was a focus of the research project Automated Geometric Model Builder Using Range Image Sensor Data (96-0384), supported by Sandia's Laboratory-Directed Research and Development (LDRD) Program during fiscal years 1996, 1997, and 1998. The data described here are available from the authors on CDROM, or electronically over the Internet. Included in this data distribution are Computer-Aided Design (CAD) models we constructed from the survey measurements. The CAD models are compatible with the SolidWorks 98 Plus system, the modern Computer-Aided Design software system that is central to Sandia's DeskTop Engineering Project (DTEP). Integration of our measurements (as built) with the constructive geometry process of the CAD system (as designed) delivers on a vision of the research project. This report on our final data collection will also serve as a final report on the project.

  1. Automated Probing and Inference of Analytical Models for Metabolic Network Dynamics

    NASA Astrophysics Data System (ADS)

    Wikswo, John; Schmidt, Michael; Jenkins, Jerry; Hood, Jonathan; Lipson, Hod

    2010-03-01

    We introduce a method to automatically construct mathematical models of a biological system, and apply this technique to infer a seven-dimensional nonlinear model of glycolytic oscillations in yeast -- based only on noisy observational data obtained from in silico experiments. Graph-based symbolic encoding, fitness prediction, and estimation-exploration can for the first time provide the level of symbolic regression required for biological applications. With no a priori knowledge of the system, the Cornell algorithm in several hours of computation correctly identified all seven nonlinear ordinary differential equations, the most complicated of which was dA3/dt = -1.12 A3 - 192.24 A3 S1 + 12.50 A3^4 + 124.92 S3 + 31.69 A3 S3, where A3 = [ATP], S1 = [glucose], and S3 = [cytosolic pyruvate and acetaldehyde pool]. Errors on the 26 parameters ranged from 0 to 14.5%. The algorithm also automatically identified new and potentially useful chemical constants of the motion, e.g. -k1 N2 + K2 v1 + k2 S1 A3 - (k4 - k5 v1) A3^4 + k6 ≈ 0. This approach may enable automated design, control and analysis of wet-lab experiments for model identification/refinement.
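    A toy version of the inference loop described above: candidate right-hand sides are scored against derivatives estimated from noisy observations, and the best-scoring candidate survives. The "true" system here is a one-dimensional stand-in, not the seven-dimensional yeast model, and the candidate pool is enumerated rather than evolved.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 200)
x = np.exp(-1.5 * t) + 0.001 * rng.normal(size=t.size)  # noisy data from dx/dt = -1.5 x

dx_dt = np.gradient(x, t)  # numerical derivative of the observations

# A tiny hand-written candidate pool standing in for symbolic regression
candidates = {
    "dx/dt = -1.5*x": lambda x: -1.5 * x,
    "dx/dt = -x**2":  lambda x: -x ** 2,
    "dx/dt = 0.5*x":  lambda x: 0.5 * x,
}

def score(rhs):
    """Mean squared residual between candidate RHS and estimated derivative."""
    return float(np.mean((rhs(x) - dx_dt) ** 2))

best = min(candidates, key=lambda name: score(candidates[name]))
assert best == "dx/dt = -1.5*x"
```

    The full method additionally proposes new experiments (estimation-exploration) to distinguish candidates that fit the existing data equally well.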

  2. Fully Automated Generation of Accurate Digital Surface Models with Sub-Meter Resolution from Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Wohlfeil, J.; Hirschmüller, H.; Piltz, B.; Börner, A.; Suppa, M.

    2012-07-01

    Modern pixel-wise image matching algorithms like Semi-Global Matching (SGM) are able to compute high resolution digital surface models from airborne and spaceborne stereo imagery. Although image matching itself can be performed automatically, there are prerequisites, like high geometric accuracy, which are essential for ensuring the high quality of resulting surface models. Especially for line cameras, these prerequisites currently require laborious manual interaction using standard tools, which is a growing problem due to continually increasing demand for such surface models. The tedious work includes partly or fully manual selection of tie and/or ground control points for ensuring the required accuracy of the relative orientation of images for stereo matching. It also includes masking of large water areas that seriously reduce the quality of the results. Furthermore, a good estimate of the depth range is required, since accurate estimates can seriously reduce the processing time for stereo matching. In this paper an approach is presented that allows all these steps to be performed fully automatically. It includes very robust and precise tie point selection, enabling the accurate calculation of the images' relative orientation via bundle adjustment. It is also shown how water masking and elevation range estimation can be performed automatically on the basis of freely available SRTM data. Extensive tests with a large number of different satellite images from QuickBird and WorldView are presented as proof of the robustness and reliability of the proposed method.
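    Two of the automated pre-processing steps mentioned above, water masking and elevation-range estimation from an elevation reference, can be sketched on a synthetic tile standing in for SRTM data. The flatness criterion and the percentile margins are invented heuristics, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)
tile = 200.0 + 30.0 * rng.random((100, 100))   # synthetic terrain, 200-230 m
tile[10:40, 10:60] = 150.0                      # a perfectly flat lake surface

def water_mask(elev, flat_tol=1e-3):
    """Mark cells whose full 3x3 neighbourhood is flat to within flat_tol."""
    mask = np.ones_like(elev, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            mask &= np.abs(np.roll(elev, (di, dj), axis=(0, 1)) - elev) <= flat_tol
    return mask

def elevation_range(elev, lo=1.0, hi=99.0, margin=10.0):
    """Robust min/max for the stereo search range, padded by a safety margin."""
    return np.percentile(elev, lo) - margin, np.percentile(elev, hi) + margin

mask = water_mask(tile)
assert mask[20, 30] and not mask[70, 70]        # lake interior flagged, terrain not
z_min, z_max = elevation_range(tile)
assert z_min < 150.0 and z_max > 229.0
```

    In the paper's pipeline such a mask excludes low-texture water from matching, and the elevation bounds shrink the disparity search, cutting SGM processing time.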

  3. Automated System Marketplace 1987: Maturity and Competition.

    ERIC Educational Resources Information Center

    Walton, Robert A.; Bridge, Frank R.

    1988-01-01

    This annual review of the library automation marketplace presents profiles of 15 major library automation firms and looks at emerging trends. Seventeen charts and tables provide data on market shares, number and size of installations, hardware availability, operating systems, and interfaces. A directory of 49 automation sources is included. (MES)

  4. Coarse Grained Modeling of the Interface Between Water and Heterogeneous Surfaces

    SciTech Connect

    Willard, Adam; Chandler, David

    2008-06-23

    Using coarse grained models we investigate the behavior of water adjacent to an extended hydrophobic surface peppered with various fractions of hydrophilic patches of different sizes. We study the spatial dependence of the mean interface height, the solvent density fluctuations related to drying the patchy substrate, and the spatial dependence of interfacial fluctuations. We find that adding small uniform attractive interactions between the substrate and solvent causes the mean position of the interface to be very close to the substrate. Nevertheless, the interfacial fluctuations are large and spatially heterogeneous in response to the underlying patchy substrate. We discuss the implications of these findings for the assembly of heterogeneous surfaces.

  5. Motor-model-based dynamic scaling in human-computer interfaces.

    PubMed

    Muñoz, Luis Miguel; Casals, Alícia; Frigola, Manel; Amat, Josep

    2011-04-01

    This paper presents a study on how the application of scaling techniques to an interface affects its performance. A progressive scaling factor based on the position and velocity of the cursor and the targets improves the efficiency of an interface, thereby reducing the user's workload. The study uses several human-motor models to interpret human intention and thus contribute to defining and adapting the scaling parameters to the execution of the task. Two techniques for varying the control-display ratio are compared, and a new method for aiding in the task of steering is proposed. PMID:21411399
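    An illustrative sketch (not the paper's actual control law) of a progressive control-display gain driven by cursor-target distance and cursor velocity: gain drops when the cursor is near a target and moving slowly, easing fine positioning. The gain bounds and the blending rule are hypothetical.

```python
# Hypothetical progressive control-display gain: full gain for ballistic
# motion, reduced gain for fine positioning near a target.

def cd_gain(dist_to_target, speed, g_min=0.3, g_max=1.0,
            dist_scale=100.0, speed_scale=200.0):
    """Blend toward g_min when close to the target and moving slowly."""
    closeness = max(0.0, 1.0 - dist_to_target / dist_scale)   # 1 at the target
    slowness = max(0.0, 1.0 - speed / speed_scale)            # 1 when still
    return g_max - (g_max - g_min) * closeness * slowness

# Far away or fast: full gain; at the target and still: minimum gain.
assert cd_gain(500.0, 300.0) == 1.0
assert abs(cd_gain(0.0, 0.0) - 0.3) < 1e-9
assert 0.3 < cd_gain(50.0, 100.0) < 1.0
```

    The human-motor models in the paper serve precisely to choose such parameters from the inferred phase of the movement rather than fixing them by hand.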

  6. World-wide distribution automation systems

    SciTech Connect

    Devaney, T.M.

    1994-12-31

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include distribution management systems; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; and computer modeling of the system.

  7. Mapping local laboratory interface terms to LOINC at a German university hospital using RELMA V.5: a semi-automated approach

    PubMed Central

    Bürkle, Thomas; Prokosch, Hans-Ulrich; Ganslandt, Thomas

    2013-01-01

    Objective Logical Observation Identifiers Names and Codes (LOINC) mapping of laboratory data is often a question of the effort of mapping compared with the benefits of the structure achieved. The new LOINC mapping assistant RELMA (version 2011) has the potential to reduce the effort required for semi-automated mapping. We examined quality, time effort, and sustainability of such mapping. Methods To verify the mapping quality, two samples of 100 laboratory terms were extracted from the laboratory system of a German university hospital and processed in a semi-automated fashion with RELMA V.5 and LOINC V.2.34 German translation DIMDI to obtain LOINC codes. These codes were reviewed by two experts from each of two laboratories. Then all 2148 terms used in these two laboratories were processed in the same way. Results In the initial samples, 93 terms from one laboratory system and 92 terms from the other were correctly mapped. Of the total 2148 terms, 1660 could be mapped. An average of 500 terms per day or 60 terms per hour could be mapped. Of the laboratory terms used in 2010, 99% could be mapped. Discussion Semi-automated LOINC mapping of non-English laboratory terms has become promising in terms of effort and mapping quality using the new version RELMA V.5. The effort is probably lower than for previous manual mapping. The mapping quality equals that of manual mapping and is far better than that reported with previous automated mapping activities. Conclusion RELMA V.5 and LOINC V.2.34 offer the opportunity to start thinking again about LOINC mapping even in non-English languages, since mapping effort is acceptable and mapping results equal those of previous manual mapping reports. PMID:22802268
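    The semi-automated step can be sketched generically: candidate LOINC names are ranked against a local laboratory term so the human mapper only confirms the top suggestions. The tiny candidate list and the token-overlap score below are illustrative; the actual work used RELMA against the full LOINC table.

```python
# Hypothetical candidate ranking for semi-automated term mapping, using a
# simple token Jaccard overlap as the similarity score.

def rank_candidates(local_term, loinc_names, top=3):
    q = set(local_term.lower().split())
    def score(name):
        tokens = set(name.lower().replace("[", " ").replace("]", " ")
                                 .replace("/", " ").split())
        return len(q & tokens) / len(q | tokens)   # Jaccard overlap
    return sorted(loinc_names, key=score, reverse=True)[:top]

candidates = ["Glucose [Mass/volume] in Serum or Plasma",
              "Sodium [Moles/volume] in Serum or Plasma",
              "Glucose [Moles/volume] in Urine"]
best = rank_candidates("glucose serum", candidates, top=1)[0]
assert best.startswith("Glucose [Mass/volume]")
```

    The study's throughput figure (roughly 60 terms per hour) reflects exactly this division of labour: the machine proposes, the expert disposes.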

  8. Continuity-based model interfacing for plant-wide simulation: a general approach.

    PubMed

    Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A

    2006-08-01

    In plant-wide simulation studies of wastewater treatment facilities, existing models of different origins often need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework to construct model interfaces for models of wastewater systems, taking into account conservation principles. In this contribution, the CBIM approach is applied to study the effect of sludge digestion reject water treatment with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark Simulation Model No. 2 (BSM2) is used to simulate the behaviour of the complete WWTP including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach was further refined, and particular issues arising when coupling models in which pH is considered as a state variable are pointed out. PMID:16846629
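    The conservation principle behind CBIM can be sketched as a continuity check: any interface converting source-model state variables into destination-model variables must preserve the chosen conserved quantities (here COD and nitrogen). The compositions and the candidate transformation below are invented for illustration.

```python
import numpy as np

# Per-unit composition of each state variable: columns = (COD, N) content.
src_comp = np.array([[1.0, 0.07],   # source variable S_a (N-bearing)
                     [1.0, 0.00]])  # source variable S_b (N-free)
dst_comp = np.array([[1.0, 0.10],   # destination variable X_pr (N-rich)
                     [1.0, 0.00]])  # destination variable X_ch (N-free)

def check_continuity(src_state, dst_state, atol=1e-9):
    """True iff total COD and N are conserved across the interface."""
    src_totals = src_state @ src_comp   # COD and N entering the interface
    dst_totals = dst_state @ dst_comp   # COD and N leaving it
    return np.allclose(src_totals, dst_totals, atol=atol)

src = np.array([2.0, 1.0])            # 2 units S_a, 1 unit S_b
dst = np.array([1.4, 1.6])            # candidate interface output
assert check_continuity(src, dst)     # COD: 3 = 3, N: 0.14 = 0.14
```

    In the full method the same bookkeeping, with more conserved elements and charge, constrains which interface transformations are admissible at all.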

  9. Interface localization in the 2D Ising model with a driven line

    NASA Astrophysics Data System (ADS)

    Cohen, O.; Mukamel, D.

    2016-04-01

    We study the effect of a one-dimensional driving field on the interface between two coexisting phases in a two dimensional model. This is done by considering an Ising model on a cylinder with Glauber dynamics in all sites and additional biased Kawasaki dynamics in the central ring. Based on the exact solution of the two-dimensional Ising model, we are able to compute the phase diagram of the driven model within a special limit of fast drive and slow spin flips in the central ring. The model is found to exhibit two phases where the interface is pinned to the central ring: one in which it fluctuates symmetrically around the central ring and another where it fluctuates asymmetrically. In addition, we find a phase where the interface is centered in the bulk of the system, either below or above the central ring of the cylinder. In the latter case, the symmetry breaking is ‘stronger’ than that found in equilibrium when considering a repulsive potential on the central ring. This equilibrium model is analyzed here by using a restricted solid-on-solid model.
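    The dynamics described above can be sketched as a small Monte Carlo simulation: Glauber spin flips everywhere plus biased Kawasaki exchanges along the central ring of a cylinder with fixed top and bottom rows. The lattice size, temperature, bias and exchange-acceptance rule are illustrative; this toy run is far too small to reproduce the paper's phase diagram.

```python
import numpy as np

rng = np.random.default_rng(2)
L, H, beta, bias = 16, 17, 0.6, 0.8   # circumference, height, 1/T, drive bias
mid = H // 2                          # index of the driven central ring
# Two coexisting phases: +1 above, -1 below; rows 0 and H-1 act as fixed walls
spins = np.where(np.arange(H)[:, None] <= mid, 1, -1) * np.ones((H, L), dtype=int)

def flip_cost(s, i, j):
    """Energy change of flipping spin (i, j); periodic only around the ring."""
    nb = s[i + 1, j] + s[i - 1, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    return 2 * s[i, j] * nb

for _ in range(5000):
    # Glauber spin flip on a bulk site (boundary rows stay fixed)
    i, j = rng.integers(1, H - 1), rng.integers(L)
    if rng.random() < 1.0 / (1.0 + np.exp(beta * flip_cost(spins, i, j))):
        spins[i, j] *= -1
    # Biased Kawasaki exchange between neighbouring sites of the central ring
    j = rng.integers(L)
    k = (j + 1) % L if rng.random() < bias else (j - 1) % L
    if spins[mid, j] != spins[mid, k]:
        dE = flip_cost(spins, mid, j) + flip_cost(spins, mid, k) + 4  # shared bond
        if rng.random() < min(1.0, np.exp(-beta * dE)):
            spins[mid, j], spins[mid, k] = spins[mid, k], spins[mid, j]

assert np.all(spins[0] == 1) and np.all(spins[-1] == -1)  # walls untouched
```

    Measuring the interface height distribution relative to the driven ring over long runs is what distinguishes the pinned-symmetric, pinned-asymmetric and bulk phases.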

  10. Integrating Automated Data into Ecosystem Models: How Can We Drink from a Firehose?

    NASA Astrophysics Data System (ADS)

    Allen, M. F.; Harmon, T. C.

    2014-12-01

    Sensors and imaging are changing the way we are measuring ecosystem behavior. Within short time frames, we are able to capture how organisms behave in response to rapid change, and detect events that alter composition and shift states. To transform these observations into process-level understanding, we need to efficiently interpret signals. One way to do this is to automatically integrate the data into ecosystem models. In our soil carbon cycling studies, we collect continuous time series for meteorological conditions, soil processes, and automated imagery. To characterize the timing and clarity of change behavior in our data, we adopted signal-processing approaches like coupled wavelet/coherency analyses. In situ CO2 measurements allow us to visualize when root/microbial activity results in CO2 being respired from the soil surface, versus when other chemical/physical phenomena may alter gas pathways. While these approaches are interesting in understanding individual phenomena, they fail to get us beyond the study of individual processes. Sensor data are compared with the outputs from ecosystem models to detect the patterns in specific phenomena or to revise model parameters or traits. For instance, we measured unexpected levels of soil CO2 in a tropical ecosystem. By examining small-scale ecosystem model parameters, we were able to pinpoint those parameters that needed to be altered to resemble the data outputs. However, we do not capture the essence of large-scale ecosystem shifts. The time is right to utilize real-time data assimilation as an additional forcing of ecosystem models. Continuous, diurnal soil temperature and moisture, along with hourly hyphal or root growth could feed into well-established ecosystem models such as HYDRUS or DayCENT. This approach would provide instantaneous "measurements" of shifting ecosystem processes as they occur, allowing us to identify critical process connections more efficiently.
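    A hypothetical sketch of the "real-time data assimilation as a forcing" idea: at each step the model's soil-temperature state is nudged toward the latest sensor reading. The trivial linear model and the gain are stand-ins for an ecosystem model such as HYDRUS or DayCENT.

```python
# Simple nudging assimilation: relax the model forecast toward each new
# streaming observation. Gain and model are illustrative placeholders.

def step(state, sensor_reading, gain=0.3, drift=0.0):
    model_pred = state + drift                       # stand-in model forecast
    return model_pred + gain * (sensor_reading - model_pred)

state = 20.0
readings = [21.0, 22.0, 22.5, 23.0]                  # hourly soil temperature (C)
for r in readings:
    state = step(state, r)
assert 20.0 < state < 23.0                           # state tracks the stream
```

    Run continuously against the sensor firehose, this kind of forcing turns the model itself into the interpreter of the data stream, flagging state shifts as they occur rather than in post hoc analysis.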

  11. Dynamics modeling for parallel haptic interfaces with force sensing and control.

    PubMed

    Bernstein, Nicholas; Lawrence, Dale; Pao, Lucy

    2013-01-01

    Closed-loop force control can be used on haptic interfaces (HIs) to mitigate the effects of mechanism dynamics. A single multidimensional force-torque sensor is often employed to measure the interaction force between the haptic device and the user's hand. The parallel haptic interface at the University of Colorado (CU) instead employs smaller 1D force sensors oriented along each of the five actuating rods to build up a 5D force vector. This paper shows that a particular manipulandum/hand partition in the system dynamics is induced by the placement and type of force sensing, and discusses the implications on force and impedance control for parallel haptic interfaces. The details of a "squaring down" process are also discussed, showing how to obtain reduced degree-of-freedom models from the general six degree-of-freedom dynamics formulation. PMID:24808395
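    The idea of building a multi-axis force estimate from several 1D rod sensors can be sketched as a least-squares problem: each sensor reads the force component along its rod axis, and the net force is recovered from the rod geometry. The rod directions below are invented, and only a 3D force (no torques) is reconstructed; the CU device's actual 5D geometry differs.

```python
import numpy as np

# Invented rod axes for five 1D force sensors (rows), normalised to unit length
rod_dirs = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 1.0]])
rod_dirs /= np.linalg.norm(rod_dirs, axis=1, keepdims=True)

def net_force(rod_readings):
    """Each reading r_i = d_i . F; solve the stacked system for F."""
    F, *_ = np.linalg.lstsq(rod_dirs, rod_readings, rcond=None)
    return F

F_true = np.array([1.0, -2.0, 0.5])
readings = rod_dirs @ F_true           # ideal, noise-free sensor readings
assert np.allclose(net_force(readings), F_true)
```

    The paper's point is that where the scalar sensors sit along the rods determines which inertia terms end up on the "hand side" of the measured force, which matters for force and impedance control design.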

  12. Modeling Complex Cross-Systems Software Interfaces Using SysML

    NASA Technical Reports Server (NTRS)

    Mandutianu, Sanda; Morillo, Ron; Simpson, Kim; Liepack, Otfrid; Bonanne, Kevin

    2013-01-01

    The complex flight and ground systems for NASA human space exploration are designed, built, operated and managed as separate programs and projects. However, each system relies on one or more of the other systems in order to accomplish specific mission objectives, creating a complex, tightly coupled architecture. Thus, there is a fundamental need to understand how each system interacts with the others. To determine if a model-based system engineering approach could be utilized to assist with understanding the complex system interactions, the NASA Engineering and Safety Center (NESC) sponsored a task to develop an approach for performing cross-system behavior modeling. This paper presents the results of applying Model Based Systems Engineering (MBSE) principles using the System Modeling Language (SysML) to define cross-system behaviors and how they map to cross-system software interfaces documented in system-level Interface Control Documents (ICDs).

  13. Perturbative approach to the structure of a planar interface in the Landau-de Gennes model.

    PubMed

    Pełka, Robert; Saito, Kazuya

    2006-10-01

    The structure of nearly static planar interfaces is studied within the framework of the Landau-de Gennes model with the dynamics governed by the time-dependent Ginzburg-Landau equation. To account for the full elastic anisotropy the free energy expansion is extended to include a third order gradient term. The solutions corresponding to the in-plane or homeotropic director alignment at the interface are sought. For this purpose a consistent perturbative scheme is constructed which enables one to calculate successive corrections to the velocity and the order parameter of the interface. The implications of the solutions are discussed. The elastic anisotropy introduces asymmetry into the order parameter and free energy profiles, even for the high symmetry homeotropic configuration. The velocity of the interface with the homeotropic or in-plane alignment is enhanced or reduced, respectively. There is no reorientation of the optical axis in the boundary layer. For the class of nematogens with approximate splay-bend degeneracy the temperature dependence of the interface velocity is weakly affected by the remaining twist anisotropy. PMID:17155076

  14. Nuclear Reactor/Hydrogen Process Interface Including the HyPEP Model

    SciTech Connect

    Steven R. Sherman

    2007-05-01

    The Nuclear Reactor/Hydrogen Plant interface is the intermediate heat transport loop that will connect a very high temperature gas-cooled nuclear reactor (VHTR) to a thermochemical, high-temperature electrolysis, or hybrid hydrogen production plant. A prototype plant called the Next Generation Nuclear Plant (NGNP) is planned for construction and operation at the Idaho National Laboratory in the 2018-2021 timeframe, and will involve a VHTR, a high-temperature interface, and a hydrogen production plant. The interface is responsible for transporting high-temperature thermal energy from the nuclear reactor to the hydrogen production plant while protecting the nuclear plant from operational disturbances at the hydrogen plant. Development of the interface is occurring under the DOE Nuclear Hydrogen Initiative (NHI) and involves the study, design, and development of high-temperature heat exchangers, heat transport systems, materials, safety, and integrated system models. Research and development work on the system interface began in 2004 and is expected to continue at least until the start of construction of an engineering-scale demonstration plant.

  15. NURBS- and T-spline-based isogeometric cohesive zone modeling of interface debonding

    NASA Astrophysics Data System (ADS)

    Dimitri, R.; De Lorenzis, L.; Wriggers, P.; Zavarise, G.

    2014-08-01

    Cohesive zone (CZ) models have long been used by the scientific community to analyze the progressive damage of materials and interfaces. In these models, non-linear relationships between tractions and relative displacements are assumed, which dictate both the work of separation per unit fracture surface and the peak stress that must be reached for crack formation. This contribution deals with isogeometric CZ modeling of interface debonding. The interface is discretized with generalized contact elements which account for both contact and cohesive debonding within a unified framework. The formulation is suitable for non-matching discretizations of the interacting surfaces in the presence of large deformations and large relative displacements. The isogeometric discretizations are based on non-uniform rational B-splines (NURBS) as well as analysis-suitable T-splines enabling local refinement. Conventional Lagrange polynomial discretizations are also used for comparison purposes. Numerical examples demonstrate that the proposed formulation based on isogeometric analysis is a computationally accurate and efficient technology for solving challenging interface debonding problems in 2D and 3D.
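    As a concrete, simplified example of a CZ traction-separation relationship, a bilinear law ties the peak stress and the work of separation together. The functional form and parameter names below are a common textbook choice, not necessarily the law used in this work:

    ```python
    # Sketch of a bilinear cohesive (traction-separation) law. Parameters:
    # delta_0 = opening at peak traction, delta_f = final opening at full
    # decohesion, t_max = peak (cohesive) stress. Values are illustrative.

    def bilinear_traction(delta, delta_0, delta_f, t_max):
        """Traction at opening `delta`: linear rise to t_max at delta_0,
        then linear softening to zero at delta_f."""
        if delta <= 0.0 or delta >= delta_f:
            return 0.0
        if delta <= delta_0:
            return t_max * delta / delta_0
        return t_max * (delta_f - delta) / (delta_f - delta_0)

    def work_of_separation(delta_f, t_max):
        """Area under the bilinear curve = fracture energy per unit surface."""
        return 0.5 * t_max * delta_f

    print(work_of_separation(0.1, 50.0))  # 2.5 (energy units per unit area)
    ```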

  16. Analytical model for radiative transfer including the effects of a rough material interface.

    PubMed

    Giddings, Thomas E; Kellems, Anthony R

    2016-08-20

    The reflected and transmitted radiance due to a source located above a water surface is computed based on models for radiative transfer in continuous optical media separated by a discontinuous air-water interface with random surface roughness. The air-water interface is described as the superposition of random, unresolved roughness on a deterministic realization of a stochastic wave surface at resolved scales. Under the geometric optics assumption, the bidirectional reflection and transmission functions for the air-water interface are approximated by applying regular perturbation methods to Snell's law and including the effects of a random surface roughness component. Formal analytical solutions to the radiative transfer problem under the small-angle scattering approximation account for the effects of scattering and absorption as light propagates through the atmosphere and water and also capture the diffusive effects due to the interaction of light with the rough material interface that separates the two optical media. Results of the analytical models are validated against Monte Carlo simulations, and the approximation to the bidirectional reflection function is also compared to another well-known analytical model. PMID:27556978
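    Under the geometric optics assumption the basic building block is Snell refraction at a (possibly tilted) facet. The sketch below perturbs the local surface normal with a small Gaussian tilt as a crude Monte Carlo stand-in for the paper's perturbation expansion; the 2D geometry, tilt model, and all values are illustrative:

    ```python
    import math
    import random

    def snell_refraction_angle(theta_i, n1=1.0, n2=1.33):
        """Transmitted angle (rad) from Snell's law, n1*sin(theta_i) = n2*sin(theta_t)."""
        return math.asin(math.sin(theta_i) * n1 / n2)

    def rough_surface_refraction(theta_i, sigma_tilt, n_rays=10_000, seed=0):
        """Average transmitted angle (2D) when the local facet normal is tilted
        by a zero-mean Gaussian angle with std sigma_tilt (rad)."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_rays):
            tilt = rng.gauss(0.0, sigma_tilt)
            local = theta_i - tilt                         # incidence w.r.t. tilted normal
            total += snell_refraction_angle(local) + tilt  # back to the mean-surface frame
        return total / n_rays

    print(math.degrees(snell_refraction_angle(math.radians(30.0))))  # ~22.08 deg
    ```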

  17. Integrated surface and groundwater modelling in the Thames Basin, UK using the Open Modelling Interface

    NASA Astrophysics Data System (ADS)

    Mackay, Jonathan; Abesser, Corinna; Hughes, Andrew; Jackson, Chris; Kingdon, Andrew; Mansour, Majdi; Pachocka, Magdalena; Wang, Lei; Williams, Ann

    2013-04-01

    The River Thames catchment is situated in the south-east of England. It covers approximately 16,000 km² and is the most heavily populated river basin in the UK. It is also one of the driest and has experienced severe drought events in the recent past. With the onset of climate change and human exploitation of our environment, there are now serious concerns over the sustainability of water resources in this basin with 6 million m³ consumed every day for public water supply alone. Groundwater in the Thames basin is extremely important, providing 40% of water for public supply. The principal aquifer is the Chalk, a dual permeability limestone, which has been extensively studied to understand its hydraulic properties. The fractured Jurassic limestone in the upper catchment also forms an important aquifer, supporting baseflow downstream during periods of drought. These aquifers are unconnected other than through the River Thames and its tributaries, which provide two-thirds of London's drinking water. Therefore, to manage these water resources sustainably and to make robust projections into the future, surface and groundwater processes must be considered in combination. This necessitates the simulation of the feedbacks and complex interactions between different parts of the water cycle, and the development of integrated environmental models. The Open Modelling Interface (OpenMI) standard provides a method through which environmental models of varying complexity and structure can be linked, allowing them to run simultaneously and exchange data at each timestep. This architecture has allowed us to represent the surface and subsurface flow processes within the Thames basin at an appropriate level of complexity based on our understanding of particular hydrological processes and features. We have developed a hydrological model in OpenMI which integrates a process-driven, gridded finite difference groundwater model of the Chalk with a more simplistic, semi
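    The timestep-level data exchange that OpenMI standardizes can be illustrated with two toy components run in lockstep. The classes, equations, and coefficients below are invented for illustration and bear no relation to the OpenMI API or the Thames model:

    ```python
    # Toy OpenMI-style linking: a groundwater store and a river component
    # exchange values at every coupled timestep. All numbers are illustrative.

    class GroundwaterModel:
        def __init__(self, head=10.0):
            self.head = head
        def step(self, recharge):
            self.head += recharge - 0.1 * self.head  # linear-storage outflow
            return 0.1 * self.head                   # baseflow handed to the river

    class RiverModel:
        def __init__(self):
            self.flow = 0.0
        def step(self, runoff, baseflow):
            self.flow = runoff + baseflow            # river flow = runoff + baseflow
            return self.flow

    gw, river = GroundwaterModel(), RiverModel()
    flows = []
    for t in range(5):                               # 5 coupled timesteps
        baseflow = gw.step(recharge=0.5)
        flows.append(river.step(runoff=1.0, baseflow=baseflow))
    print(flows)  # declines toward the 1.5 steady state as the store drains
    ```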

  18. What determines the take-over time? An integrated model approach of driver take-over after automated driving.

    PubMed

    Zeeb, Kathrin; Buchner, Axel; Schrauf, Michael

    2015-05-01

    In recent years the automation level of driver assistance systems has increased continuously. One of the major challenges for highly automated driving is to ensure a safe driver take-over of the vehicle guidance. This must be ensured especially when the driver is engaged in non-driving-related secondary tasks. For this purpose it is essential to find indicators of the driver's readiness to take over and to gain more knowledge about the take-over process in general. A simulator study was conducted to explore how drivers' allocation of visual attention during highly automated driving influences a take-over action in response to an emergency situation. Therefore we recorded drivers' gaze behavior during automated driving while they simultaneously engaged in a visually demanding secondary task, and measured their reaction times in a take-over situation. According to their gaze behavior the drivers were categorized into "high", "medium" and "low-risk". The gaze parameters were found to be suitable for predicting the readiness to take over the vehicle, in that high-risk drivers reacted late and more often inappropriately in the take-over situation. However, there was no difference among the driver groups in the time required to establish motor readiness to intervene after the take-over request. An integrated model approach of driver behavior in emergency take-over situations during automated driving is presented. It is argued that primarily cognitive and not motor processes determine the take-over time. Given this, insights can be derived for further research and the development of automated systems. PMID:25794922

  19. MaxMod: a hidden Markov model based novel interface to MODELLER for improved prediction of protein 3D models.

    PubMed

    Parida, Bikram K; Panda, Prasanna K; Misra, Namrata; Mishra, Barada K

    2015-02-01

    Modeling the three-dimensional (3D) structures of proteins assumes great significance because of its manifold applications in biomolecular research. Toward this goal, we present MaxMod, a graphical user interface (GUI) of the MODELLER program that combines the profile hidden Markov model (profile HMM) method with the Clustal Omega program to significantly improve the selection of homologous templates and target-template alignment for construction of accurate 3D protein models. MaxMod distinguishes itself from other existing GUIs of the MODELLER software by implementing effortless modeling of proteins using templates that bear modified residues. Additionally, it provides various features such as loop optimization, express modeling (a feature where a protein model can be generated directly from its sequence, without any further user intervention) and automatic update of the PDB database, thus enhancing the user-friendly control of computational tasks. We find that HMM-based MaxMod performs better than other modeling packages in terms of execution time and model quality. MaxMod is freely available as a downloadable standalone tool for academic and non-commercial purposes at http://www.immt.res.in/maxmod/. PMID:25636267

  20. Sloan Digital Sky Survey photometric telescope automation and observing software

    SciTech Connect

    Eric H. Neilsen, Jr. et al.

    2002-10-16

    The photometric telescope (PT) provides observations necessary for the photometric calibration of the Sloan Digital Sky Survey (SDSS). Because the attention of the observing staff is occupied by the operation of the 2.5 meter telescope which takes the survey data proper, the PT must reliably take data with little supervision. In this paper we describe the PT's observing program, MOP, which automates most tasks necessary for observing. MOP's automated target selection is closely modeled on the actions a human observer might take, and is built upon a user interface that can be (and has been) used for manual operation. This results in an interface that makes it easy for an observer to track the activities of the automating procedures and intervene with minimum disturbance when necessary. MOP selects targets from the same list of standard star and calibration fields presented to the user, and chooses standard star fields covering ranges of airmass, color, and time necessary to monitor atmospheric extinction and produce a photometric solution. The software determines when additional standard star fields are unnecessary, and selects survey calibration fields according to availability and priority. Other automated features of MOP, such as maintaining the focus and keeping a night log, are also built around still functional manual interfaces, allowing the observer to be as active in observing as desired; MOP's automated features may be used as tools for manual observing, ignored entirely, or allowed to run the telescope with minimal supervision when taking routine data.
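    The airmass-coverage logic behind automated standard-star selection can be sketched as a simple bin-filling loop; the field names and airmass bins below are invented, not MOP's actual target list:

    ```python
    # Illustrative sketch, loosely in the spirit of MOP's standard-star
    # scheduling: for each required airmass bin, pick one available field.

    def pick_fields_for_airmass_coverage(fields, bins):
        """fields: {name: current airmass}. For each (lo, hi) bin, pick the
        first (alphabetically) not-yet-chosen field falling inside it."""
        chosen = []
        for lo, hi in bins:
            for name, airmass in sorted(fields.items()):
                if lo <= airmass < hi and name not in chosen:
                    chosen.append(name)
                    break
        return chosen

    fields = {"BD+25": 1.05, "PG0231": 1.45, "SA95": 1.9, "Ross530": 1.1}
    bins = [(1.0, 1.2), (1.2, 1.6), (1.6, 2.2)]
    print(pick_fields_for_airmass_coverage(fields, bins))
    ```

    A real scheduler would also weigh color coverage, time since the last extinction measurement, and field priority, as the abstract describes.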

  1. An elasto-viscoplastic interface model for investigating the constitutive behavior of nacre

    NASA Astrophysics Data System (ADS)

    Tang, H.; Barthelat, F.; Espinosa, H. D.

    2007-07-01

    In order to better understand the strengthening mechanism observed in nacre, we have developed an interface computational model to simulate the behavior of the organic material present at the interface between aragonite tablets. In the model, the single polymer-chain behavior is characterized by the worm-like-chain (WLC) model, which is in turn incorporated into the eight-chain cell model developed by Arruda and Boyce [Arruda, E.M., Boyce, M.C., 1993a. A three-dimensional constitutive model for the large stretches, with application to polymeric glasses. Int. J. Solids Struct. 40, 389-412] to achieve a continuum interface constitutive description. The interface model is formulated within a finite-deformation framework. A fully implicit time-integration algorithm is used for solving the discretized governing equations. Finite element simulations were performed on a representative volume element (RVE) to investigate the tensile response of nacre. The staggered arrangement of tablets and interface waviness obtained experimentally by Barthelat et al. [Barthelat, F., Tang, H., Zavattieri, P.D., Li, C.-M., Espinosa, H.D., 2007. On the mechanics of mother-of-pearl: a key feature in the material hierarchical structure. J. Mech. Phys. Solids 55 (2), 306-337] was included in the RVE simulations. The simulations showed that both the rate-dependence of the tensile response and hysteresis loops during loading, unloading and reloading cycles were captured by the model. Through a parametric study, the effect of the polymer constitutive response during tablet-climbing and its relation to interface hardening was investigated. It is shown that stiffening of the organic material is not required to achieve the experimentally observed strain hardening of nacre during tension. In fact, when ratios of contour length/persistent length experimentally identified are employed in the simulations, the predicted stress-strain behavior exhibits a deformation hardening consistent with the one measured
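    For reference, a single-chain WLC force-extension law of the kind fed into the eight-chain model is often written in the Marko-Siggia interpolation; the sketch below uses that common form with illustrative parameter values, not the contour/persistence lengths fitted for nacre:

    ```python
    # Marko-Siggia interpolation of the worm-like-chain force-extension law.
    # kT in pN*nm and persistence length P in nm are illustrative values.

    def wlc_force(x_over_L, kT=4.11, P=0.4):
        """Entropic force (pN) at relative extension x/L = r; diverges as r -> 1:
        F = (kT/P) * [ 1/(4(1-r)^2) - 1/4 + r ]."""
        r = x_over_L
        return (kT / P) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

    print(wlc_force(0.5))  # (kT/P) * 1.25 = 12.84375 pN for these values
    ```

    The steep stiffening as x/L approaches 1 is what makes the ratio of contour length to persistence length the controlling parameter discussed in the abstract.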

  2. a Plugin to Interface Openmodeller from Qgis for SPECIES' Potential Distribution Modelling

    NASA Astrophysics Data System (ADS)

    Becker, Daniel; Willmes, Christian; Bareth, Georg; Weniger, Gerd-Christian

    2016-06-01

    This contribution describes the development of a plugin for the geographic information system QGIS to interface the openModeller software package. The aim is to use openModeller to generate species' potential distribution models for various archaeological applications (site catchment analysis, for example). Since the usage of openModeller's command-line interface and configuration files can be inconvenient, an extension of the QGIS user interface to handle these tasks, in combination with the management of the geographic data, was required. The implementation was realized in Python using PyQGIS and PyQt. The plugin, in combination with QGIS, handles the tasks of managing geographical data, data conversion, generation of the configuration files required by openModeller and compilation of a project folder. The plugin proved to be very helpful for compiling project datasets and configuration files for multiple instances of species occurrence datasets and for the overall handling of openModeller. In addition, the plugin is easily extensible to accommodate potential new requirements in the future.

  3. Modeling strategic use of human computer interfaces with novel hidden Markov models.

    PubMed

    Mariano, Laura J; Poore, Joshua C; Krum, David M; Schwartz, Jana L; Coskren, William D; Jones, Eric M

    2015-01-01

    Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM) for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game's functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic task pursuit. PMID

  5. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    NASA Astrophysics Data System (ADS)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  6. An automated method for generating analogic signals that embody the Markov kinetics of model ionic channels.

    PubMed

    Luchian, Tudor

    2005-08-30

    In this work we present an automated method for generating electrical signals which reflect the kinetics of ionic channels that have custom-tailored intermediate sub-states and intermediate reaction constants. The concept of our virtual single-channel waveform generator makes use of two software platforms, one for the numerical generation of single channel traces stemming from a pre-defined model and another for the digital-to-analog conversion of such numerically generated single channel traces. This technique of continuous generation and recording of the activity of a model ionic channel provides an efficient protocol to teach neophytes in the field of single-channel electrophysiology about its major phenomenological facets. Random analogic signals generated by using our technique can be successfully employed in a number of applications, such as: assisted learning of single-molecule kinetic investigation via electrical recordings, impedance spectroscopy, the evaluation of the linear frequency response of neurons and the study of stochastic resonance of ion channels. PMID:16054511
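    The numerical half of such a generator reduces to sampling a Markov chain. A minimal discrete-time two-state (closed/open) sketch is shown below; the rates, time step, and trace length are illustrative, and a real generator would then play the trace out through a DAC as the abstract describes:

    ```python
    import random

    def simulate_channel(k_open=50.0, k_close=100.0, dt=1e-4, n=200_000, seed=1):
        """Return a list of 0/1 (closed/open) states; the per-sample transition
        probability is approximated as rate * dt (valid for rate*dt << 1)."""
        rng = random.Random(seed)
        state, trace = 0, []
        for _ in range(n):
            p = k_open * dt if state == 0 else k_close * dt
            if rng.random() < p:
                state = 1 - state
            trace.append(state)
        return trace

    trace = simulate_channel()
    print(sum(trace) / len(trace))  # open-state occupancy, near k_open/(k_open+k_close)
    ```

    Intermediate sub-states as in the abstract would simply enlarge the state set and the transition-rate table.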

  7. The location of the thermodynamic atmosphere-ice interface in fully-coupled models

    NASA Astrophysics Data System (ADS)

    West, A. E.; McLaren, A. J.; Hewitt, H. T.; Best, M. J.

    2015-11-01

    In fully-coupled climate models, it is now normal to include a sea ice component with multiple layers, each with its own temperature. When coupling this component to an atmosphere model, it is more common for surface variables to be calculated in the sea ice component of the model, the equivalent of placing an interface immediately above the surface. This study uses a one-dimensional (1-D) version of the Los Alamos sea ice model (CICE) thermodynamic solver and the Met Office atmospheric surface exchange solver (JULES) to compare this method with that of allowing the surface variables to be calculated instead in the atmosphere, the equivalent of placing an interface immediately below the surface. The model is forced with a sensible heat flux derived from a sinusoidally varying near-surface air temperature. The two coupling methods are tested first with a 1-h coupling frequency, and then with a 3-h coupling frequency, both commonly used. With an above-surface interface, the resulting surface temperature and flux cycles contain large phase and amplitude errors, as well as having a very "blocky" shape. The simulation of both quantities is greatly improved when the interface is instead placed within the top ice layer, allowing surface variables to be calculated on the shorter timescale of the atmosphere. There is also an unexpected slight improvement in the simulation of the top-layer ice temperature by the ice model. The study concludes with a discussion of the implications of these results to three-dimensional modelling. An appendix examines the stability of the alternative method of coupling under various physically realistic scenarios.
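    The qualitative effect of coupling frequency can be reproduced with a toy relaxation model: a surface temperature chasing a sinusoidal air temperature whose value is only refreshed at each coupling time. All constants below are illustrative and unrelated to CICE or JULES:

    ```python
    import math

    def run(coupling_steps, dt=600.0, n=864, tau=7200.0):
        """Integrate dTs/dt = (Ta - Ts)/tau over n steps of dt seconds,
        refreshing the diurnal air temperature Ta only every coupling_steps."""
        Ts, Ta, out = 250.0, 250.0, []
        for i in range(n):
            if i % coupling_steps == 0:   # forcing refreshed at coupling times
                Ta = 250.0 + 10.0 * math.sin(2 * math.pi * i * dt / 86400.0)
            Ts += dt * (Ta - Ts) / tau
            out.append(Ts)
        return out

    ref = run(1)      # forcing refreshed every 10-min step
    coarse = run(18)  # refreshed every 3 h, mimicking a 3-h coupling frequency
    err = max(abs(a - b) for a, b in zip(ref, coarse))
    print(err)        # the coarsely coupled surface lags the diurnal cycle
    ```

    The stale, piecewise-constant forcing is the source of the "blocky" surface cycles the study reports.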

  8. Visualization: A Mind-Machine Interface for Discovery.

    PubMed

    Nielsen, Cydney B

    2016-02-01

    Computation is critical for enabling us to process data volumes and model data complexities that are unthinkable by manual means. However, we are far from automating the sense-making process. Human knowledge and reasoning are critical for discovery. Visualization offers a powerful interface between mind and machine that should be further exploited in future genome analysis tools. PMID:26739384

  9. An Agent-Based Interface to Terrestrial Ecological Forecasting

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Nemani, Ramakrishna; Pang, Wan-Lin; Votava, Petr; Etzioni, Oren

    2004-01-01

    This paper describes a flexible agent-based ecological forecasting system that combines multiple distributed data sources and models to provide near-real-time answers to questions about the state of the Earth system We build on novel techniques in automated constraint-based planning and natural language interfaces to automatically generate data products based on descriptions of the desired data products.

  10. A DIFFUSE-INTERFACE APPROACH FOR MODELING TRANSPORT, DIFFUSION AND ADSORPTION/DESORPTION OF MATERIAL QUANTITIES ON A DEFORMABLE INTERFACE.

    PubMed

    Teigen, Knut Erik; Li, Xiangrong; Lowengrub, John; Wang, Fan; Voigt, Axel

    2009-12-01

    A method is presented to solve two-phase problems involving a material quantity on an interface. The interface can be advected, stretched, and change topology, and material can be adsorbed to or desorbed from it. The method is based on the use of a diffuse interface framework, which allows a simple implementation using standard finite-difference or finite-element techniques. Here, finite-difference methods on a block-structured adaptive grid are used, and the resulting equations are solved using a non-linear multigrid method. Interfacial flow with soluble surfactants is used as an example of the application of the method, and several test cases are presented demonstrating its accuracy and convergence. PMID:21373370

  12. Third-generation electrokinetically pumped sheath-flow nanospray interface with improved stability and sensitivity for automated capillary zone electrophoresis-mass spectrometry analysis of complex proteome digests.

    PubMed

    Sun, Liangliang; Zhu, Guijie; Zhang, Zhenbin; Mou, Si; Dovichi, Norman J

    2015-05-01

    We have reported a set of electrokinetically pumped sheath flow nanoelectrospray interfaces to couple capillary zone electrophoresis with mass spectrometry. A separation capillary is threaded through a cross into a glass emitter. A side arm provides fluidic contact with a sheath buffer reservoir that is connected to a power supply. The potential applied to the sheath buffer drives electro-osmosis in the emitter to pump the sheath fluid at nanoliter per minute rates. Our first-generation interface placed a flat-tipped capillary in the emitter. Sensitivity was inversely related to orifice size and to the distance from the capillary tip to the emitter orifice. A second-generation interface used a capillary with an etched tip that allowed the capillary exit to approach within a few hundred micrometers of the emitter orifice, resulting in a significant increase in sensitivity. In both the first- and second-generation interfaces, the emitter diameter was typically 8 μm; these narrow orifices were susceptible to plugging and tended to have limited lifetimes. We now report a third-generation interface that employs a larger diameter emitter orifice with a very short distance between the capillary tip and the emitter orifice. This modified interface is much more robust and produces a much longer lifetime than our previous designs with no loss in sensitivity. We evaluated the third-generation interface for a 5000 min (127 runs, 3.5 days) repetitive analysis of bovine serum albumin digest using an uncoated capillary. We observed a 10% relative standard deviation in peak area, an average of 160,000 theoretical plates, and very low carry-over (much less than 1%). We employed a linear-polyacrylamide (LPA)-coated capillary for single-shot, bottom-up proteomic analysis of 300 ng of Xenopus laevis fertilized egg proteome digest and identified 1249 protein groups and 4038 peptides in a 110 min separation using an LTQ-Orbitrap Velos mass spectrometer; peak capacity was ∼330.

  13. Modeling the Effect of Interface Wear on Fatigue Hysteresis Behavior of Carbon Fiber-Reinforced Ceramic-Matrix Composites

    NASA Astrophysics Data System (ADS)

    Longbiao, Li

    2015-12-01

    An analytical method has been developed to investigate the effect of interface wear on fatigue hysteresis behavior in carbon fiber-reinforced ceramic-matrix composites (CMCs). The damage mechanisms, i.e., matrix multicracking, fiber/matrix interface debonding and interface wear, fiber fracture, slip and pull-out, have been considered. The statistical matrix multicracking model and fracture mechanics interface debonding criterion were used to determine the matrix crack spacing and interface debonded length. Upon first loading to fatigue peak stress and subsequent cyclic loading, the fiber failure probabilities and fracture locations were determined by combining the interface wear model and fiber statistical failure model based on the assumption that the loads carried by broken and intact fibers satisfy the global load sharing criterion. The effects of matrix properties, i.e., matrix cracking characteristic strength and matrix Weibull modulus, interface properties, i.e., interface shear stress and interface debonded energy, fiber properties, i.e., fiber Weibull modulus and fiber characteristic strength, and cycle number on fiber failure, hysteresis loops and interface slip, have been investigated. The hysteresis loops under fatigue loading from the present analytical method were in good agreement with experimental data.
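    A hedged sketch of the global load sharing idea underlying the fiber-failure part of such models: intact fibers carry the applied stress, and the broken fraction follows a Weibull law of the intact-fiber stress, solved by fixed-point iteration. The specific Weibull form and every parameter below are illustrative, not the paper's:

    ```python
    import math

    def broken_fraction(sigma_applied, Vf=0.3, sigma_c=2000.0, m=5.0, iters=50):
        """Fixed-point solve of the coupled pair
            T = sigma_applied / (Vf * (1 - q))   (stress on intact fibers, GLS)
            q = 1 - exp(-(T / sigma_c)**m)       (Weibull broken fraction).
        Stresses in MPa; Vf is fiber volume fraction."""
        q = 0.0
        for _ in range(iters):
            if q > 0.999:               # runaway: no stable intact fraction
                return 1.0              # treat as composite failure
            T = sigma_applied / (Vf * (1.0 - q))
            q = 1.0 - math.exp(-((T / sigma_c) ** m))
        return q

    print(broken_fraction(200.0))  # low applied stress: almost no fiber breaks
    ```

    The interface-wear effect in the paper enters through a cycle-dependent interface shear stress, which this sketch omits.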

  14. Numerical analysis of composite systems by using interphase/interface models

    NASA Astrophysics Data System (ADS)

    Chaboche, J. L.; Girard, R.; Schaff, A.

    1997-07-01

    The paper considers two classes of approaches for the numerical analysis of composite systems: the first discretizes the assumed interphase (between matrix and fibre) with volumic elements and uses material models derived from Continuum Damage Mechanics. The second introduces interface elements that relate the normal and tangential tractions nonlinearly to the corresponding displacement discontinuities, incorporating progressive decohesion along the lines of Needleman (1987) and Tvergaard (1990). The respective capabilities of these two approaches are discussed on the basis of numerical results obtained for a unidirectional metal matrix composite system. When the models are consistently adjusted they are able to reproduce the same kind of results. The advantages of the second class of methods are underlined, and two new versions of interface models are proposed that guarantee the continuity and monotonicity of the shear stiffness between the progressive decohesion phase and the subsequent contact/friction law that plays a role under compressive shear after complete separation.

  15. Integration Of Heat Transfer Coefficient In Glass Forming Modeling With Special Interface Element

    SciTech Connect

    Moreau, P.; Gregoire, S.; Lochegnies, D.; Cesar de Sa, J.

    2007-05-17

    Numerical modeling of glass forming processes requires accurate knowledge of the heat exchange between the glass and the forming tools. A laboratory test is developed to determine the evolution of the heat transfer coefficient under different glass/mould contact conditions (contact pressure, temperature, lubrication...). In this paper, trials are performed to determine heat transfer coefficient evolutions under experimental conditions close to those of the industrial blow-and-blow process. In parallel with this work, a special interface element is implemented in a commercial Finite Element code in order to handle heat transfer between glass and mould for non-matching meshes and evolving contact. This special interface element, implemented through user subroutines, makes it possible to introduce the measured heat transfer coefficient evolutions into the numerical models at the glass/mould interface as a function of the local temperatures, contact pressures, contact time, and type of lubrication. The blow-and-blow forming simulation of a perfume bottle is finally performed to assess the performance of the special interface element.
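    As a rough illustration of how a measured heat transfer coefficient evolution might be fed to such an interface element, the sketch below linearly interpolates h from a hypothetical (pressure, h) table and evaluates the resulting glass/mould flux. The table values and function names are invented for illustration; the paper's element additionally accounts for temperature, contact time, and lubrication.

```python
def interp_htc(pressure, table):
    """Piecewise-linear lookup of a measured heat transfer coefficient
    h(p) from (pressure, h) pairs sorted by increasing pressure."""
    if pressure <= table[0][0]:
        return table[0][1]
    for (p0, h0), (p1, h1) in zip(table, table[1:]):
        if pressure <= p1:
            return h0 + (h1 - h0) * (pressure - p0) / (p1 - p0)
    return table[-1][1]  # clamp beyond the last measured point

def interface_flux(pressure, t_glass, t_mould, table):
    """Heat flux across the glass/mould contact: q = h(p) * (T_glass - T_mould)."""
    return interp_htc(pressure, table) * (t_glass - t_mould)
```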

  16. Integration Of Heat Transfer Coefficient In Glass Forming Modeling With Special Interface Element

    NASA Astrophysics Data System (ADS)

    Moreau, P.; César de Sá, J.; Grégoire, S.; Lochegnies, D.

    2007-05-01

    Numerical modeling of glass forming processes requires accurate knowledge of the heat exchange between the glass and the forming tools. A laboratory test is developed to determine the evolution of the heat transfer coefficient under different glass/mould contact conditions (contact pressure, temperature, lubrication…). In this paper, trials are performed to determine heat transfer coefficient evolutions under experimental conditions close to those of the industrial blow-and-blow process. In parallel with this work, a special interface element is implemented in a commercial Finite Element code in order to handle heat transfer between glass and mould for non-matching meshes and evolving contact. This special interface element, implemented through user subroutines, makes it possible to introduce the measured heat transfer coefficient evolutions into the numerical models at the glass/mould interface as a function of the local temperatures, contact pressures, contact time, and type of lubrication. The blow-and-blow forming simulation of a perfume bottle is finally performed to assess the performance of the special interface element.

  17. Lattice-gas models of phase separation: interfaces, phase transitions, and multiphase flow

    SciTech Connect

    Rothman, D. H.; Zaleski, S.

    1994-10-01

    Momentum-conserving lattice gases are simple, discrete, microscopic models of fluids. This review describes their hydrodynamics, with particular attention given to the derivation of macroscopic constitutive equations from microscopic dynamics. Lattice-gas models of phase separation receive special emphasis. The current understanding of phase transitions in these momentum-conserving models is reviewed; included in this discussion is a summary of the dynamical properties of interfaces. Because the phase-separation models are microscopically time irreversible, interesting questions are raised about their relationship to real fluid mixtures. Simulation of certain complex-fluid problems, such as multiphase flow through porous media and the interaction of phase transitions with hydrodynamics, is illustrated.

  18. Distribution automation applications of fiber optics

    NASA Technical Reports Server (NTRS)

    Kirkham, Harold; Johnston, A.; Friend, H.

    1989-01-01

    Motivations for interest and research in distribution automation are discussed. The communication requirements of distribution automation are examined and shown to exceed the capabilities of power line carrier, radio, and telephone systems. A fiber optic based communication system is described that is co-located with the distribution system and that could satisfy the data rate and reliability requirements. A cost comparison shows that it could be constructed at a cost that is similar to that of a power line carrier system. The requirements for fiber optic sensors for distribution automation are discussed. The design of a data link suitable for optically-powered electronic sensing is presented. Empirical results are given. A modeling technique that was used to understand the reflections of guided light from a variety of surfaces is described. An optical position-indicator design is discussed. Systems aspects of distribution automation are discussed, in particular, the lack of interface, communications, and data standards. The economics of distribution automation are examined.

  19. Role of bulk and of interface contacts in the behavior of lattice model dimeric proteins.

    PubMed

    Tiana, G; Provasi, D; Broglia, R A

    2003-05-01

    Some dimeric proteins first fold and then dimerize (three-state dimers), while others first dimerize and then fold (two-state dimers). Within the framework of a minimal lattice model, we can distinguish between sequences following one or the other mechanism on the basis of the distribution of the ground-state energy between bulk and interface contacts. The topology of contacts is very different for the bulk than for the interface: while the bulk displays a rich network of interactions, the dimer interface is built up of a set of essentially independent contacts. Consequently, the two sets of interactions play very different roles, both in the folding and in the evolutionary history of the protein. Three-state dimers, where a large fraction of the energy is concentrated in a few contacts buried in the bulk, and where the relative contact energy of interface contacts is considerably smaller than that associated with bulk contacts, fold according to a hierarchical pathway controlled by local elementary structures, as also happens in the folding of single-domain monomeric proteins. Two-state dimers, on the other hand, display a relative contact energy of interface contacts that is larger than the corresponding bulk quantity. In this case, the assembly of the interface stabilizes the system and leads the two chains to fold. The specific properties of three-state dimers acquired through evolution are expected to be more robust than those of two-state dimers, a fact that has consequences for proteins connected with viral diseases. PMID:12786180

  20. Phononic band structures and stability analysis using radial basis function method with consideration of different interface models

    NASA Astrophysics Data System (ADS)

    Yan, Zhi-zhong; Wei, Chun-qiu; Zheng, Hui; Zhang, Chuanzeng

    2016-05-01

    In this paper, a meshless radial basis function (RBF) collocation method is developed to calculate phononic band structures taking account of different interface models. The present method is validated against analytical results for the case of perfect interfaces. Its stability is fully discussed with respect to the types of RBFs, the shape parameters, and the node numbers, and the advantages of the proposed RBF method over the finite element method (FEM) are also illustrated. In addition, the influences of the spring-interface model and the three-phase model on the wave band gaps are investigated by comparison with perfect interfaces. For both interface models, the effects of various interface conditions, length ratios, and density ratios on the band gap width are analyzed. The comparison of the two models shows that a weakly bonded interface has a significant effect on the properties of phononic crystals. The band structures of the spring-interface model also show certain similarities to, and differences from, those of the three-phase model.
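    The core RBF collocation idea, stripped to its simplest form, is to expand the unknown field in radial basis functions centered at the nodes and enforce the governing equations at those same nodes. The sketch below reduces this to 1-D Gaussian RBF interpolation with a naive dense solver; the actual band-structure computation in the paper is far more involved.

```python
import math

def gaussian_rbf(r, eps):
    """Gaussian radial basis function phi(r) = exp(-(eps*r)^2)."""
    return math.exp(-((eps * r) ** 2))

def solve(a, b):
    """Naive Gaussian elimination with partial pivoting (dense, small systems)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def rbf_interpolate(nodes, values, eps):
    """Collocation: find weights w so that sum_j w_j phi(|x_i - x_j|) = f_i
    at every node x_i, then return the resulting interpolant."""
    a = [[gaussian_rbf(abs(xi - xj), eps) for xj in nodes] for xi in nodes]
    w = solve(a, values)
    return lambda x: sum(wj * gaussian_rbf(abs(x - xj), eps)
                         for wj, xj in zip(w, nodes))
```

    The shape parameter `eps` plays the stability role discussed in the abstract: small values flatten the basis functions and make the collocation matrix ill-conditioned, large values localize them and degrade accuracy.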

  1. Wall modeling for implicit large-eddy simulation and immersed-interface methods

    NASA Astrophysics Data System (ADS)

    Chen, Zhen Li; Hickel, Stefan; Devesa, Antoine; Berland, Julien; Adams, Nikolaus A.

    2014-02-01

    We propose and analyze a wall model based on the turbulent boundary layer equations (TBLE) for implicit large-eddy simulation (LES) of high-Reynolds-number wall-bounded flows, in conjunction with a conservative immersed-interface method for mapping complex boundaries onto Cartesian meshes. Both the implicit subgrid-scale model and the immersed-interface treatment of boundaries offer high computational efficiency for complex flow configurations. The wall model operates directly on the Cartesian computational mesh without the need for a dual boundary-conforming mesh. The combination of wall model and implicit LES is investigated in detail for turbulent channel flow at friction Reynolds numbers from Re_τ = 395 up to Re_τ = 100,000 on very coarse meshes. The TBLE wall model with implicit LES gives results of better quality than current explicit LES based on eddy-viscosity subgrid-scale models with similar wall models. A straightforward formulation of the wall model performs well at moderately large Reynolds numbers. A logarithmic-layer mismatch, observed only at very large Reynolds numbers, is removed by introducing a new structure-based damping function. The performance of the overall approach is assessed for two generic configurations with flow separation: the backward-facing step at Re_h = 5,000 and the periodic hill at Re_H = 10,595 and Re_H = 37,000 on very coarse meshes. The results confirm the observations made for the channel flow with respect to prediction quality and indicate that the combination of implicit LES, immersed-interface method, and TBLE-based wall modeling is a viable approach for simulating complex aerodynamic flows at high Reynolds numbers. They also reflect the limitations of TBLE-based wall models.
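    A common building block of TBLE-type wall models is the equilibrium log law, which relates the resolved velocity at the matching point to the wall friction. The sketch below solves it for the friction velocity by fixed-point iteration, assuming the standard constants κ = 0.41 and B = 5.2 (which may differ from those used in the paper) and a matching point well inside the logarithmic layer.

```python
import math

KAPPA, B = 0.41, 5.2  # standard log-law constants (assumed values)

def friction_velocity(u, y, nu, iters=100):
    """Fixed-point solve of the equilibrium log law
        u / u_tau = (1/kappa) * ln(y * u_tau / nu) + B
    for the friction velocity u_tau, given the velocity u sampled at
    wall distance y and kinematic viscosity nu."""
    u_tau = max(1e-8, 0.05 * u)  # rough initial guess (~5% of u)
    for _ in range(iters):
        u_tau = u / ((1.0 / KAPPA) * math.log(y * u_tau / nu) + B)
    return u_tau
```

    The wall shear stress fed back to the LES is then τ_w = ρ u_τ².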

  2. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost-function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of
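    The cost-function formulation of model fitting described in this record's background can be sketched as follows. This toy global search is illustrative only, with invented names and bounds; it is not the Bayesian statistical model checking algorithm the paper develops.

```python
import random

def fit(model, data, bounds, n_samples=2000, rng=None):
    """Toy parameter estimation by uniform random search: sample parameter
    vectors inside the given bounds and keep the one minimizing a
    sum-of-squares cost between model(theta, x) and observations y."""
    rng = rng or random.Random(0)
    best, best_cost = None, float("inf")
    for _ in range(n_samples):
        theta = [rng.uniform(lo, hi) for lo, hi in bounds]
        cost = sum((model(theta, x) - y) ** 2 for x, y in data)
        if cost < best_cost:
            best, best_cost = theta, cost
    return best, best_cost
```

    Real pipelines replace the blind sampling with local refinement, global heuristics, or, when priors are available, Bayesian inference, as the abstract notes.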

  3. Micromechanical modeling of the cement-bone interface: the effect of friction, morphology and material properties on the micromechanical response

    PubMed Central

    Janssen, Dennis; Mann, Kenneth A.; Verdonschot, Nico

    2008-01-01

    In order to gain insight into the micromechanical behavior of the cement-bone interface, the effects of parametric variations of frictional, morphological, and material properties on the mechanical response of the cement-bone interface were analyzed using a finite element approach. Finite element models of a cement-bone interface specimen were created from micro-computed tomography data of a physical specimen sectioned from an in vitro cemented total hip arthroplasty. In five models the friction coefficient was varied (μ = 0.0, 0.3, 0.7, 1.0, and 3.0), while in one model an ideally bonded interface was assumed. In two models, cement interface gaps and an optimal cement penetration were simulated. Finally, the effect of bone cement stiffness variations was simulated (2.0 and 2.5 GPa, relative to the default 3.0 GPa). All models were loaded for a cycle of fully reversible tension-compression. From the simulated stress-displacement curves the interface deformation, stiffness, and hysteresis were calculated. The results indicate that in the current model the mechanical properties of the cement-bone interface arise from frictional phenomena at the shape-closed interlock rather than from adhesive properties of the cement. Our findings furthermore show that in our model maximizing cement penetration improved the micromechanical stiffness response of the cement-bone interface, while interface gaps had a detrimental effect. Relative to the frictional and morphological variations, variations in the cement stiffness had only a modest effect on the micromechanical behavior of the cement-bone interface. The current study provides information that may help to better understand the load transfer mechanisms taking place at the cement-bone interface. PMID:18848699

  4. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.

  5. Multi-fractal analysis for vehicle distribution based on a cellular automaton model

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Li, Shi-Gao

    2015-09-01

    It is well known that traffic flow presents multi-fractal characteristics at temporal scales. The aim of this study is to test its multi-fractality at spatial scales. A vehicular cellular automaton (CA) model is chosen as a tool to obtain vehicle positions on a single-lane road. First, the multi-fractality of the vehicle distribution is checked, and multi-fractal spectrums are plotted. Second, the analysis shows that the width of a multi-fractal spectrum expresses the ratio of the maximum to minimum densities, and the height difference between the left and right vertexes represents the relative size between the numbers of sections with the maximum and minimum densities. Finally, the effects of the random deceleration probability and the average density on the homogeneity of the vehicle distribution are analyzed. The results show that random deceleration increases the ratio of the maximum to minimum densities and decreases the relative size between the numbers of sections with the maximum and minimum densities, when the global density is limited to a specific range. Therefore, the multi-fractal spectrum can be used to quantify the homogeneity of the spatial distribution of traffic flow.
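    A standard vehicular CA of the kind referred to above is the Nagel-Schreckenberg model; the sketch below is a minimal parallel-update implementation (not necessarily the exact variant used in the study) that produces the vehicle positions whose spatial distribution could then be analyzed.

```python
import random

def nasch_step(pos, vel, length, v_max, p_slow, rng):
    """One parallel update of the Nagel-Schreckenberg traffic CA on a
    circular road of `length` cells: accelerate, brake to the gap,
    randomly decelerate with probability p_slow, then move."""
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for k, i in enumerate(order):
        ahead = order[(k + 1) % len(order)]          # next car downstream
        gap = (pos[ahead] - pos[i] - 1) % length     # empty cells in front
        v = min(vel[i] + 1, v_max, gap)              # accelerate, then brake
        if v > 0 and rng.random() < p_slow:          # random deceleration
            v -= 1
        new_vel[i], new_pos[i] = v, (pos[i] + v) % length
    return new_pos, new_vel
```

    Iterating this step and recording the cell occupancies yields the spatial density field on which a multi-fractal spectrum can be computed.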

  6. Multiscale Modeling of Intergranular Fracture in Aluminum: Constitutive Relation For Interface Debonding

    NASA Technical Reports Server (NTRS)

    Yamakov, V.; Saether, E.; Glaessgen, E. H.

    2008-01-01

    Intergranular fracture is a dominant mode of failure in ultrafine grained materials. In the present study, the atomistic mechanisms of grain-boundary debonding during intergranular fracture in aluminum are modeled using a coupled molecular dynamics finite element simulation. Using a statistical mechanics approach, a cohesive-zone law in the form of a traction-displacement constitutive relationship, characterizing the load transfer across the plane of a growing edge crack, is extracted from atomistic simulations and then recast in a form suitable for inclusion within a continuum finite element model. The cohesive-zone law derived by the presented technique is free of finite size effects and is statistically representative for describing the interfacial debonding of a grain boundary (GB) interface examined at atomic length scales. By incorporating the cohesive-zone law in cohesive-zone finite elements, the debonding of a GB interface can be simulated in a coupled continuum-atomistic model, in which a crack starts in the continuum environment, smoothly penetrates the continuum-atomistic interface, and continues its propagation in the atomistic environment. This study is a step towards relating atomistically derived decohesion laws to macroscopic predictions of fracture and constructing multiscale models for nanocrystalline and ultrafine grained materials.

  7. The DaveMLTranslator: An Interface for DAVE-ML Aerodynamic Models

    NASA Technical Reports Server (NTRS)

    Hill, Melissa A.; Jackson, E. Bruce

    2007-01-01

    It can take weeks or months to incorporate a new aerodynamic model into a vehicle simulation and validate the performance of the model. The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) has been proposed as a means to reduce the time required to accomplish this task by defining a standard format for typical components of a flight dynamic model. The purpose of this paper is to describe an object-oriented C++ implementation of a class that interfaces a vehicle subsystem model specified in DAVE-ML and a vehicle simulation. Using the DaveMLTranslator class, aerodynamic or other subsystem models can be automatically imported and verified at run-time, significantly reducing the elapsed time between receipt of a DAVE-ML model and its integration into a simulation environment. The translator performs variable initializations, data table lookups, and mathematical calculations for the aerodynamic build-up, and executes any embedded static check-cases for verification. The implementation is efficient, enabling real-time execution. Simple interface code for the model inputs and outputs is the only requirement to integrate the DaveMLTranslator as a vehicle aerodynamic model. The translator makes use of existing table-lookup utilities from the Langley Standard Real-Time Simulation in C++ (LaSRS++). The design and operation of the translator class is described and comparisons with existing, conventional, C++ aerodynamic models of the same vehicle are given.

  8. Numerical modeling of the evolution of a generic clay/cement interface

    NASA Astrophysics Data System (ADS)

    Kosakowski, G.; Kulik, D. A.; Shao, H.; Dmytrieva, S. V.; Kolditz, O.

    2009-04-01

    The long-term evolution of interfaces between different materials in deep geological repositories for nuclear waste is governed by geochemical interactions in conjunction with mass and energy transport processes. Knowledge of long-term changes at the interfaces between different materials plays a key role in the design of the multi-barrier system. Recently, the GEMS-PSI research package (http://gems.web.psi.ch) for thermodynamic modeling of aquatic (geo)chemical systems by Gibbs Energy Minimization was coupled to the T(hermo)-H(ydro)-M(echanical)-C(hemical) transport code Geosys/Rockflow (http://www.ufz.de/index.php?en=11877). The GEM convex programming approach is complementary to the often-used Law of Mass Action (LMA) approach. It is computationally more expensive than LMA and requires more thermodynamic data, but has advantages for describing complex geochemical environments, such as aqueous-solid solution equilibria that include two or more multi-component phases. We believe that the use of the GEM method in reactive transport codes is a step towards a more realistic description of complex geochemical systems. The coupled code was verified against a widely used benchmark of dissolution-precipitation in a calcite-dolomite system, the retardation of radium close to a bentonite/cement interface due to incorporation in solid solutions, and the evolution of a generic clay/cement interface. The reactive transport simulations presented in this work were not adapted to specific cement or clay material compositions. We concentrated on a simplified, generic geochemical model and a simplified, diffusion-dominated setup for the transport. This makes it easier to test the coupling of the codes and to investigate the effects of the numerical and conceptual parameters (e.g., discretization) on the evolution of the interface.

  9. Designing geo-spatial interfaces to scale process models: the GeoWEPP approach

    NASA Astrophysics Data System (ADS)

    Renschler, Chris S.

    2003-04-01

    Practical decision making in spatially distributed environmental assessment and management is increasingly based on environmental process models linked to geographical information systems. Powerful personal computers and Internet-accessible assessment tools are providing much greater public access to, and use of, environmental models and geo-spatial data. However, traditional process models, such as the Water Erosion Prediction Project (WEPP), were not typically developed with a flexible graphical user interface (GUI) for applications across a wide range of spatial and temporal scales, utilizing readily available geo-spatial data of highly variable precision and accuracy, and communicating with a diverse spectrum of users with different levels of expertise. As the development of the geo-spatial interface for WEPP (GeoWEPP) demonstrates, the GUI plays a key role in facilitating effective communication between the tool developer and user about data and model scales. The GeoWEPP approach illustrates that it is critical to develop a scientific and functional framework for the design, implementation, and use of such geo-spatial model assessment tools. The way that GeoWEPP was developed and implemented suggests a framework and scaling theory leading to a practical approach for developing geo-spatial interfaces for process models. GeoWEPP accounts for fundamental water erosion processes, the model, and user needs; most importantly, it also matches realistic data availability and environmental settings by enabling even non-GIS-literate users to assemble the available geo-spatial data quickly to start soil and water conservation planning. In general, it is potential users' spatial and temporal scales of interest, and the scales of readily available data, that should drive model design or selection, as opposed to using or designing the most sophisticated process model as the starting point and then determining data needs and result scales.

  10. Modeling of ultrasound transmission through a solid-liquid interface comprising a network of gas pockets

    NASA Astrophysics Data System (ADS)

    Paumel, K.; Moysan, J.; Chatain, D.; Corneloup, G.; Baqué, F.

    2011-08-01

    Ultrasonic inspection of sodium-cooled fast reactors requires good acoustic coupling between the transducer and the liquid sodium. Ultrasonic transmission through a solid surface in contact with liquid sodium can be complex due to the presence of microscopic gas pockets entrapped by the surface roughness. Experiments are run using substrates with controlled roughness consisting of a network of holes, and a modeling approach is then developed. In this model, a gas pocket stiffness at a partial solid-liquid interface is defined. This stiffness is then used to calculate the transmission coefficient of ultrasound across the entire interface. The gas pocket stiffness has a static as well as an inertial component, which depends on the ultrasonic frequency and the radiative mass.
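    The role an interface stiffness plays in the transmission coefficient can be illustrated with the classical quasi-static spring model of an imperfect interface. This is a generic textbook form assumed here for illustration, not the gas-pocket model developed in the paper (which adds an inertial term):

```python
def transmission(omega, z1, z2, k):
    """Magnitude of the transmission coefficient through a spring-like
    interface of stiffness k (per unit area) between half-spaces of
    acoustic impedances z1 and z2 (quasi-static spring model):
    |T| = |2 z2 / (z1 + z2 + i*omega*z1*z2/k)|."""
    return abs(2.0 * z2 / (z1 + z2 + 1j * omega * z1 * z2 / k))
```

    As k grows the imaginary term vanishes and |T| recovers the welded-contact value 2 z2 / (z1 + z2); as the frequency grows at fixed k, transmission drops, which is why entrapped gas pockets degrade the coupling.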

  11. Modeling of ultrasound transmission through a solid-liquid interface comprising a network of gas pockets

    SciTech Connect

    Paumel, K.; Baque, F.; Moysan, J.; Corneloup, G.; Chatain, D.

    2011-08-15

    Ultrasonic inspection of sodium-cooled fast reactors requires good acoustic coupling between the transducer and the liquid sodium. Ultrasonic transmission through a solid surface in contact with liquid sodium can be complex due to the presence of microscopic gas pockets entrapped by the surface roughness. Experiments are run using substrates with controlled roughness consisting of a network of holes, and a modeling approach is then developed. In this model, a gas pocket stiffness at a partial solid-liquid interface is defined. This stiffness is then used to calculate the transmission coefficient of ultrasound across the entire interface. The gas pocket stiffness has a static as well as an inertial component, which depends on the ultrasonic frequency and the radiative mass.

  12. A user interface for the Kansas Geological Survey slug test model.

    PubMed

    Esling, Steven P; Keller, John E

    2009-01-01

    The Kansas Geological Survey (KGS) developed a semianalytical solution for slug tests that incorporates the effects of partial penetration, anisotropy, and the presence of variable conductivity well skins. The solution can simulate either confined or unconfined conditions. The original model, written in FORTRAN, has a text-based interface with rigid input requirements and limited output options. We re-created the main routine for the KGS model as a Visual Basic macro that runs in most versions of Microsoft Excel and built a simple-to-use Excel spreadsheet interface that automatically displays the graphical results of the test. A comparison of the output from the original FORTRAN code to that of the new Excel spreadsheet version for three cases produced identical results. PMID:19583592

  13. Two-dimensional model of flows and interface instability in aluminum reduction cells

    NASA Astrophysics Data System (ADS)

    Zikanov, Oleg; Sun, Haijun; Ziegler, Donald

    2003-11-01

    We derive a two-dimensional model for the melt flows and interface instability in aluminum reduction cells. The model is based on the de St. Venant shallow water equations and incorporates the essential features of the system such as the magnetohydrodynamic instability mechanism and nonlinear coupling between the flows and interfacial waves. The model is applied to verify a recently proposed theory that explains the instability through the interaction between perturbations of horizontal electric currents in the aluminum layer and the imposed vertical magnetic field. We investigate the role of other factors, in particular, background melt flows and magnetic field perturbations.

  14. Ergonomic Models of Anthropometry, Human Biomechanics and Operator-Equipment Interfaces

    NASA Technical Reports Server (NTRS)

    Kroemer, Karl H. E. (Editor); Snook, Stover H. (Editor); Meadows, Susan K. (Editor); Deutsch, Stanley (Editor)

    1988-01-01

    The Committee on Human Factors was established in October 1980 by the Commission on Behavioral and Social Sciences and Education of the National Research Council. The committee is sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Institute for the Behavioral and Social Sciences, the National Aeronautics and Space Administration, and the National Science Foundation. The workshop discussed the following: anthropometric models; biomechanical models; human-machine interface models; and research recommendations. A 17-page bibliography is included.

  15. Formulation of consumables management models: Mission planning processor payload interface definition

    NASA Technical Reports Server (NTRS)

    Torian, J. G.

    1977-01-01

    Consumables models required for the mission planning and scheduling function are formulated. The relation of the models to prelaunch, onboard, ground support, and postmission functions for the space transportation systems is established. Analytical models consisting of an orbiter planning processor with a consumables database are developed. A method of recognizing potential constraint violations in both the planning and flight operations functions is presented, together with a flight data file that provides storage and retrieval of information over an extended period and interfaces with a flight operations processor for monitoring of the actual flights.

  16. 19 CFR 24.25 - Statement processing and Automated Clearinghouse.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 1 2013-04-01 2013-04-01 false Statement processing and Automated Clearinghouse... processing and Automated Clearinghouse. (a) Description. Statement processing is a voluntary automated program for participants in the Automated Broker Interface (ABI), allowing the grouping of...

  1. Rigorous interpolation near tilted interfaces in 3-D finite-difference EM modelling

    NASA Astrophysics Data System (ADS)

    Shantsev, Daniil V.; Maaø, Frank A.

    2015-02-01

    We present a rigorous method for interpolation of electric and magnetic fields close to an interface with a conductivity contrast. The method takes into account not only a well-known discontinuity in the normal electric field, but also discontinuity in all the normal derivatives of electric and magnetic tangential fields. The proposed method is applied to marine 3-D controlled-source electromagnetic modelling (CSEM) where sources and receivers are located close to the seafloor separating conductive seawater and resistive formation. For the finite-difference scheme based on the Yee grid, the new interpolation is demonstrated to be much more accurate than alternative methods (interpolation using nodes on one side of the interface or interpolation using nodes on both sides, but ignoring the derivative jumps). The rigorous interpolation can handle arbitrary orientation of interface with respect to the grid, which is demonstrated on a marine CSEM example with a dipping seafloor. The interpolation coefficients are computed by minimizing a misfit between values at the nearest nodes and linear expansions of the continuous field components in the coordinate system aligned with the interface. The proposed interpolation operators can handle either uniform or non-uniform grids and can be applied to interpolation for both sources and receivers.
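The contrast between jump-aware and naive interpolation can be sketched in a toy 1D setting (all numbers are illustrative, not the paper's 3-D least-squares construction): a field that is continuous across an interface at x = 0 but whose derivative jumps by a known ratio is interpolated far more accurately when node values are first re-expanded on the query point's side of the interface.

```python
# Toy 1D illustration (not the paper's 3-D scheme): interpolate a field that is
# continuous at an interface (x = 0) but whose derivative jumps by a known
# ratio r, as tangential fields do across a conductivity contrast.
r = 5.0                                  # hypothetical derivative-jump ratio

def E(x):
    return x if x < 0 else r * x         # continuous at 0, slope jumps 1 -> r

x_lo, x_hi = -0.5, 0.5                   # grid nodes straddling the interface
x_q = 0.25                               # query point on the x > 0 side
w = (x_q - x_lo) / (x_hi - x_lo)         # linear interpolation weight

# Naive linear interpolation ignores the derivative jump:
naive = (1 - w) * E(x_lo) + w * E(x_hi)

# Jump-aware interpolation re-expands the x < 0 node value with the x > 0
# slope before interpolating -- the 1D analogue of expanding fields in
# interface-aligned coordinates:
aware = (1 - w) * (r * x_lo) + w * E(x_hi)

err_naive = abs(naive - E(x_q))          # 0.5 for these numbers
err_aware = abs(aware - E(x_q))          # exact for a piecewise-linear field
```

For a piecewise-linear field the jump-aware rule is exact, while the naive rule leaves an O(1) error that does not vanish with grid refinement near the interface.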

  2. Modeling Geometry and Progressive Failure of Material Interfaces in Plain Weave Composites

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Cheng, Ron-Bin

    2010-01-01

    A procedure combining a geometrically nonlinear, explicit-dynamics contact analysis, computer aided design techniques, and elasticity-based mesh adjustment is proposed to efficiently generate realistic finite element models for meso-mechanical analysis of progressive failure in textile composites. In the procedure, the geometry of fiber tows is obtained by imposing a fictitious expansion on the tows. Meshes resulting from the procedure are conformal with the computed tow-tow and tow-matrix interfaces but are incongruent at the interfaces. The mesh interfaces are treated as cohesive contact surfaces not only to resolve the incongruence but also to simulate progressive failure. The method is employed to simulate debonding at the material interfaces in a ceramic-matrix plain weave composite with matrix porosity and in a polymeric matrix plain weave composite without matrix porosity, both subject to uniaxial cyclic loading. The numerical results indicate progression of the interfacial damage during every loading and reverse loading event in a constant strain amplitude cyclic process. However, the composites show different patterns of damage advancement.
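The cohesive-surface treatment relies on a traction-separation law; a minimal bilinear version (parameter values are illustrative, and the paper's law may differ in form) looks like this:

```python
# Minimal bilinear cohesive (traction-separation) law of the kind commonly
# used on cohesive contact surfaces; parameters are illustrative.

def bilinear_traction(delta, delta0=0.01, delta_f=0.1, t_max=50.0):
    """Traction for an opening delta: linear ramp to (delta0, t_max),
    linear softening to zero at delta_f, zero (fully debonded) beyond."""
    if delta <= 0:
        return 0.0
    if delta < delta0:                       # elastic loading branch
        return t_max * delta / delta0
    if delta < delta_f:                      # damage / softening branch
        return t_max * (delta_f - delta) / (delta_f - delta0)
    return 0.0                               # complete decohesion

# The fracture energy dissipated by full debonding is the area under the
# curve: 0.5 * t_max * delta_f.
```

Progressive failure then amounts to tracking, for each interface point, the maximum opening reached and evaluating the softened traction accordingly.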

  3. Mathematical modeling of planar and spherical vapor-liquid phase interfaces for multicomponent fluids

    NASA Astrophysics Data System (ADS)

    Celný, David; Vinš, Václav; Planková, Barbora; Hrubý, Jan

    2016-03-01

    Development of methods for accurate modeling of phase interfaces is important for understanding various natural processes and for applications in technology such as power production and carbon dioxide separation and storage. In particular, prediction of the course of non-equilibrium phase transition processes requires knowledge of the properties of the strongly curved phase interfaces of microscopic droplets. In our work, we focus on the spherical vapor-liquid phase interfaces for binary mixtures. We developed a robust computational method to determine the density and concentration profiles. The fundamentals of our approach lie in the Cahn-Hilliard gradient theory, which allows us to transcribe the functional formulation into a system of ordinary Euler-Lagrange equations. This system is then split and modified into a shape suitable for iterative computation. For this task, we combine the Newton-Raphson and shooting methods, providing a good convergence speed. For the thermodynamic properties, the PC-SAFT equation of state is used. We determine the density and concentration profiles for spherical phase interfaces at various saturation factors for the binary mixture of CO2 and C9H20. The computed concentration profiles allow us to determine the work of formation and other characteristics of the microscopic droplets.
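The combined shooting/Newton-Raphson strategy can be sketched on a much simpler two-point boundary-value problem (u'' = u rather than the paper's Euler-Lagrange system; a secant update stands in for the Newton step):

```python
import math

# Toy shooting-method sketch: solve u'' = u with u(0) = 0, u(1) = 1 by
# shooting on the initial slope s = u'(0).  A secant iteration plays the
# role of the Newton-Raphson update on the boundary mismatch.

def integrate(s, n=1000):
    """RK4 integration of u'' = u from x = 0 to 1 with u(0)=0, u'(0)=s."""
    h = 1.0 / n
    u, v = 0.0, s
    f = lambda u, v: (v, u)                  # (u', v') with v = u'
    for _ in range(n):
        k1 = f(u, v)
        k2 = f(u + 0.5*h*k1[0], v + 0.5*h*k1[1])
        k3 = f(u + 0.5*h*k2[0], v + 0.5*h*k2[1])
        k4 = f(u + h*k3[0], v + h*k3[1])
        u += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return u                                  # u(1) for this trial slope

def shoot(target=1.0, s0=0.5, s1=1.5, tol=1e-12):
    """Iterate the trial slope until the far boundary condition is met."""
    f0, f1 = integrate(s0) - target, integrate(s1) - target
    while abs(f1) > tol:
        s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)   # secant step
        f0, f1 = f1, integrate(s1) - target
    return s1

slope = shoot()      # exact answer for this toy problem: 1 / sinh(1)
```

The same loop structure carries over to the density-profile equations: integrate the split ODE system for a trial boundary value, measure the mismatch at the far boundary, and update.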

  4. Characterizing and Modeling Brittle Bi-material Interfaces Subjected to Shear

    NASA Astrophysics Data System (ADS)

    Anyfantis, Konstantinos N.; Berggreen, Christian

    2014-12-01

    This work is based on the investigation, both experimentally and numerically, of the Mode II fracture process and bond strength of bondlines formed in co-cured composite/metal joints. To this end, GFRP-to-steel double strap joints were tested in tension, so that the bi-material interface was subjected to shear with debonding occurring under Mode II conditions. The study of the debonding process and thus failure of the joints was based both on stress and energy considerations. Analytical formulas were utilized for the derivation of the respective shear strength and fracture toughness measures which characterize the bi-material interface, by considering the joint's failure load, geometry and involved materials. The derived stress and toughness magnitudes were further utilized as the parameters of an extrinsic cohesive law, applied in connection with the modeling the bi-material interface in a finite element simulation environment. It was concluded that interfacial fracture in the considered joints was driven by the fracture toughness and not by strength considerations, and that LEFM is well suited to analyze the failure of the joint. Additionally, the double strap joint geometry was identified and utilized as a characterization test for measuring the Mode II fracture toughness of brittle bi-material interfaces.

  5. Interfacing Cultured Neurons to Microtransducers Arrays: A Review of the Neuro-Electronic Junction Models

    PubMed Central

    Massobrio, Paolo; Massobrio, Giuseppe; Martinoia, Sergio

    2016-01-01

    Microtransducer arrays, both metal microelectrodes and silicon-based devices, are widely used as neural interfaces to measure, extracellularly, the electrophysiological activity of excitable cells. Starting from the pioneering works at the beginning of the 70's, improvements in manufacturing methods, materials, and geometrical shape have been made. Nowadays, these devices are routinely used in different experimental conditions (both in vivo and in vitro), and for several applications ranging from basic research in neuroscience to more biomedically oriented applications. However, the use of these micro-devices depends strongly on the nature of the interface (coupling) between the cell membrane and the sensitive active surface of the microtransducer. Thus, many efforts have been directed at improving coupling conditions. In particular, in recent years, two innovations related to the use of carbon nanotubes as interface material and to the development of micro-structures which can be engulfed by the cell membrane have been proposed. In this work, we review what can be simulated by using simple circuital models and what happens at the interface between the sensitive active surface of the microtransducer and the neuronal membrane of in vitro neurons. We finally focus our attention on these two novel technological solutions capable of improving the coupling between neuron and micro-nano transducer. PMID:27445657
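A simple circuital model of the junction, of the kind such reviews discuss, can be sketched as a resistive divider plus an RC low-pass; the topology and every parameter value here are illustrative assumptions, not taken from the review:

```python
import math

# Schematic neuro-electronic junction sketch (illustrative values): the
# recorded trace is modeled as the membrane signal attenuated by a resistive
# divider (seal vs. spreading resistance) and low-pass filtered by the
# junction RC time constant.

R_seal = 1e6        # ohm, hypothetical seal resistance
R_spread = 1e5      # ohm, hypothetical spreading resistance
C_e = 1e-9          # F, hypothetical electrode capacitance

attenuation = R_seal / (R_seal + R_spread)                  # resistive divider
tau = (R_seal * R_spread / (R_seal + R_spread)) * C_e       # junction RC

def record(v_mem, dt=1e-5):
    """First-order low-pass of the membrane trace, then divider attenuation."""
    out, y = [], 0.0
    a = dt / (tau + dt)
    for v in v_mem:
        y += a * (v - y)            # discrete one-pole low-pass filter
        out.append(attenuation * y)
    return out

# A crude action-potential-like pulse (100 mV peak, ~0.1 ms width):
spike = [100e-3 * math.exp(-((i * 1e-5 - 5e-4) / 1e-4) ** 2)
         for i in range(200)]
trace = record(spike)
```

The qualitative point survives the crude model: the extracellular trace is a strongly attenuated, filtered version of the membrane signal, and improving the seal (raising R_seal) improves the coupling.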

  7. In Vitro Multitissue Interface Model Supports Rapid Vasculogenesis and Mechanistic Study of Vascularization across Tissue Compartments.

    PubMed

    Buno, Kevin P; Chen, Xuemei; Weibel, Justin A; Thiede, Stephanie N; Garimella, Suresh V; Yoder, Mervin C; Voytik-Harbin, Sherry L

    2016-08-31

    A significant challenge facing tissue engineers is the design and development of complex multitissue systems, including vascularized tissue-tissue interfaces. While conventional in vitro models focus on either vasculogenesis (de novo formation of blood vessels) or angiogenesis (vessels sprouting from existing vessels or endothelial monolayers), successful therapeutic vascularization strategies will likely rely on coordinated integration of both processes. To address this challenge, we developed a novel in vitro multitissue interface model in which human endothelial colony forming cell (ECFC)-encapsulated tissue spheres are embedded within a surrounding tissue microenvironment. This highly reproducible approach exploits biphilic surfaces (nanostructured surfaces with distinct superhydrophobic and hydrophilic regions) to (i) support tissue compartments with user-specified matrix composition and physical properties as well as cell type and density and (ii) introduce boundary conditions that prevent the cell-mediated tissue contraction routinely observed with conventional three-dimensional monodispersion cultures. This multitissue interface model was applied to test the hypothesis that independent control of cell-extracellular matrix (ECM) and cell-cell interactions would affect vascularization within the tissue sphere as well as across the tissue-tissue interface. We found that high-cell-density tissue spheres containing 5 × 10^6 ECFCs/mL exhibit rapid and robust vasculogenesis, forming highly interconnected, stable (as indicated by type IV collagen deposition) vessel networks within only 3 days. Addition of adipose-derived stromal cells (ASCs) in the surrounding tissue further enhanced vasculogenesis within the sphere as well as angiogenic vessel elongation across the tissue-tissue boundary, with both effects being dependent on the ASC density. Overall, results show that the ECFC density and ECFC-ASC crosstalk, in terms of paracrine and mechanophysical signaling

  8. Degenerate Ising model for atomistic simulation of crystal-melt interfaces

    SciTech Connect

    Schebarchov, D.; Schulze, T. P.; Hendy, S. C.

    2014-02-21

    One of the simplest microscopic models for a thermally driven first-order phase transition is an Ising-type lattice system with nearest-neighbour interactions, an external field, and a degeneracy parameter. The underlying lattice and the interaction coupling constant control the anisotropic energy of the phase boundary, the field strength represents the bulk latent heat, and the degeneracy quantifies the difference in communal entropy between the two phases. We simulate the (stochastic) evolution of this minimal model by applying rejection-free canonical and microcanonical Monte Carlo algorithms, and we obtain caloric curves and heat capacity plots for square (2D) and face-centred cubic (3D) lattices with periodic boundary conditions. Since the model admits precise adjustment of bulk latent heat and communal entropy, neither of which affect the interface properties, we are able to tune the crystal nucleation barriers at a fixed degree of undercooling and verify a dimension-dependent scaling expected from classical nucleation theory. We also analyse the equilibrium crystal-melt coexistence in the microcanonical ensemble, where we detect negative heat capacities and find that this phenomenon is more pronounced when the interface is the dominant contributor to the total entropy. The negative branch of the heat capacity appears smooth only when the equilibrium interface-area-to-volume ratio is not constant but varies smoothly with the excitation energy. Finally, we simulate microcanonical crystal nucleation and subsequent relaxation to an equilibrium Wulff shape, demonstrating the model's utility in tracking crystal-melt interfaces at the atomistic level.
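A minimal sketch of such a degenerate lattice model is below, using plain Metropolis sampling for brevity rather than the paper's rejection-free canonical and microcanonical algorithms; all parameter values are illustrative:

```python
import math, random

# Degenerate Ising lattice-gas sketch: cells hold s in {0, 1}; "liquid"
# cells (s = 1) carry a degeneracy g, so a 0 -> 1 flip gains entropy ln(g)
# and the Metropolis acceptance ratio picks up a factor g**ds.
# Energy: E = -J * sum_<ij> s_i s_j - H * sum_i s_i.

random.seed(0)
L, J, H, g, T = 16, 1.0, 0.5, 2.0, 0.5    # illustrative parameters
grid = [[1] * L for _ in range(L)]         # start in the ordered s = 1 phase

def neighbors_sum(i, j):
    return (grid[(i+1) % L][j] + grid[(i-1) % L][j]
            + grid[i][(j+1) % L] + grid[i][(j-1) % L])

def sweep():
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        s = grid[i][j]
        ds = 1 - 2 * s                     # proposed change in occupancy
        dE = -ds * (J * neighbors_sum(i, j) + H)
        # degeneracy enters as an entropic weight g**ds on the flip:
        if random.random() < (g ** ds) * math.exp(-dE / T):
            grid[i][j] = 1 - s

for _ in range(50):
    sweep()
occupancy = sum(map(sum, grid)) / (L * L)  # stays near 1 deep in this phase
```

At this low temperature and positive field the degeneracy factor alone cannot destabilize the dense phase, so the occupancy stays close to one; raising T or g shifts the balance between energy and communal entropy, which is exactly the knob the paper exploits.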

  9. Molecular simulation of water vapor-liquid phase interfaces using TIP4P/2005 model

    NASA Astrophysics Data System (ADS)

    Planková, Barbora; Vinš, Václav; Hrubý, Jan; Duška, Michal; Němec, Tomáš; Celný, David

    2015-05-01

    Molecular dynamics simulations for water were run using the TIP4P/2005 model for temperatures ranging from 250 K to 600 K. The density profile, the surface tension, and the thickness of the phase interface were calculated as preliminary results. The surface tension values matched the IAPWS correlation well over a wide range of temperatures. As a partial result, DL_POLY Classic was successfully used for tests of the new computing cluster in our institute.

  11. An approximate model and empirical energy function for solute interactions with a water-phosphatidylcholine interface.

    PubMed Central

    Sanders, C R; Schwonek, J P

    1993-01-01

    An empirical model of a liquid crystalline (L alpha phase) phosphatidylcholine (PC) bilayer interface is presented along with a function which calculates the position-dependent energy of associated solutes. The model approximates the interface as a gradual two-step transition, the first step being from an aqueous phase to a phase of reduced polarity, but which maintains a high enough concentration of water and/or polar head group moieties to satisfy the hydrogen bond-forming potential of the solute. The second transition is from the hydrogen bonding/low polarity region to an effectively anhydrous hydrocarbon phase. The "interfacial energies" of solutes within this variable medium are calculated based upon atomic positions and atomic parameters describing general polarity and hydrogen bond donor/acceptor propensities. This function was tested for its ability to reproduce experimental water-solvent partitioning energies and water-bilayer partitioning data. In both cases, the experimental data was reproduced fairly well. Energy minimizations carried out on beta-hexyl glucopyranoside led to identification of a global minimum for the interface-associated glycolipid which exhibited glycosidic torsion angles in agreement with prior results (Hare, B.J., K.P. Howard, and J.H. Prestegard. 1993. Biophys. J. 64:392-398). Molecular dynamics simulations carried out upon this same molecule within the simulated interface led to results which were consistent with a number of experimentally based conclusions from previous work, but failed to quantitatively reproduce an available NMR quadrupolar/dipolar coupling data set (Sanders, C.R., and J.H. Prestegard. 1991. J. Am. Chem. Soc. 113:1987-1996). The proposed model and functions are readily incorporated into computational energy modeling algorithms and may prove useful in future studies of membrane-associated molecules. PMID:8241401
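The two-step transition can be sketched as a sum of two sigmoidal energy steps in the membrane-depth coordinate; the functional form, parameter names, and values below are assumptions for illustration, not the published function:

```python
import math

# Illustrative two-step interface profile: solute transfer energy varies with
# distance z from the bilayer centre through two sigmoidal transitions, one
# from water into a hydrogen-bonding/low-polarity region and a second into
# the anhydrous hydrocarbon core.  All parameters are hypothetical.

def sigmoid(z, z0, w):
    return 1.0 / (1.0 + math.exp(-(z - z0) / w))

def transfer_energy(z, dG1=2.0, dG2=3.0, z1=15.0, z2=10.0, w=1.5):
    """Position-dependent solute energy (arbitrary units).
    z: distance from the bilayer centre (angstrom); z1, z2 locate the two
    transitions; dG1, dG2 are the energy steps across them."""
    step1 = dG1 * sigmoid(-z, -z1, w)   # water -> H-bonding/low-polarity zone
    step2 = dG2 * sigmoid(-z, -z2, w)   # -> anhydrous hydrocarbon core
    return step1 + step2
```

Far from the bilayer the energy vanishes (aqueous reference state) and at the centre it reaches the full transfer energy dG1 + dG2, with a smooth intermediate plateau representing the hydrogen-bonding region.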

  12. Experiments and modeling of freshwater lenses in layered aquifers: Steady state interface geometry

    NASA Astrophysics Data System (ADS)

    Dose, Eduardo J.; Stoeckl, Leonard; Houben, Georg J.; Vacher, H. L.; Vassolo, Sara; Dietrich, Jörg; Himmelsbach, Thomas

    2014-02-01

    The interface geometry of freshwater lenses in layered aquifers was investigated by physical 2D laboratory experiments. The resulting steady-state geometries of the lenses were compared to existing analytical expressions from Dupuit-Ghyben-Herzberg (DGH) analysis of strip-island lenses for various cases of heterogeneity. Despite the vertical exaggeration of the physical models, which would seem to vitiate the assumption of vertical equipotentials, the fits with the DGH models were generally satisfactory. Observed deviations between the analytical and physical models can be attributed mainly to outflow zones along the shore line, which are not considered in the analytical models. As unconfined natural lenses have small outflow zones compared to their overall dimensions, and flow is mostly horizontal, the DGH analytical models should perform even better at full scale. Numerical models that do consider the outflow face generally gave a good fit to the physical models.
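The Ghyben-Herzberg relation underlying the DGH analysis fixes the equilibrium interface depth below sea level at roughly forty times the water-table elevation:

```python
# Classic Ghyben-Herzberg relation: at hydrostatic equilibrium the
# fresh-saltwater interface sits z = alpha * h below sea level for a water
# table h above sea level, with alpha = rho_f / (rho_s - rho_f) ~ 40.

rho_f = 1000.0   # kg/m^3, freshwater density
rho_s = 1025.0   # kg/m^3, seawater density
alpha = rho_f / (rho_s - rho_f)     # = 40 for these densities

def interface_depth(h):
    """Depth (m) of the freshwater-saltwater interface below sea level for a
    water-table elevation h (m) above sea level."""
    return alpha * h
```

The DGH analytical lens solutions combine this ratio with Dupuit (horizontal-flow) assumptions; the physical experiments in the paper probe where those assumptions break down, notably near the outflow zone at the shoreline.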

  13. Structure and application of an interface program between a geographic-information system and a ground-water flow model

    USGS Publications Warehouse

    Van Metre, P.C.

    1990-01-01

    A computer-program interface between a geographic-information system and a groundwater flow model links two unrelated software systems for use in developing the flow models. The interface program allows the modeler to compile and manage geographic components of a groundwater model within the geographic information system. A significant savings of time and effort is realized in developing, calibrating, and displaying the groundwater flow model. Four major guidelines were followed in developing the interface program: (1) no changes to the groundwater flow model code were to be made; (2) a data structure was to be designed within the geographic information system that follows the same basic data structure as the groundwater flow model; (3) the interface program was to be flexible enough to support all basic data options available within the model; and (4) the interface program was to be as efficient as possible in terms of computer time used and online-storage space needed. Because some programs in the interface are written in control-program language, the interface will run only on a computer with the PRIMOS operating system. (USGS)

  14. Evidence evaluation in fingerprint comparison and automated fingerprint identification systems--Modeling between finger variability.

    PubMed

    Egli Anthonioz, N M; Champod, C

    2014-02-01

    In the context of the investigation of the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study presents investigations into the variability of scores from an AFIS system when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios, which allows the evaluation of mark-to-print comparisons. In particular, this model, through its use of AFIS technology, benefits from the possibility of using a large amount of data, as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores issued from comparisons between impressions from the same source and showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR. We refer to it by the generic term of between-finger variability. The issues addressed in this paper in relation to between-finger variability are the required sample size, the influence of the finger number and general pattern, as well as that of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination as defined by the mark. Furthermore, strategies of obtaining between-finger variability when these elements cannot be conclusively seen on the mark (and its position with respect to other marks for finger number) have been presented. These results immediately allow case-by-case estimation of the between-finger variability in an operational setting. PMID:24447455
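The likelihood-ratio construction can be sketched with simple normal score models (real casework uses on the order of 10,000 between-finger scores and more careful density estimation; the scores below are synthetic):

```python
from statistics import NormalDist, mean, stdev

# Score-based LR sketch: fit a normal model to same-source scores (numerator)
# and to scores from comparisons against non-matching sources (denominator,
# the "between-finger variability"), then evaluate the density ratio at the
# questioned score.  All score values here are synthetic.

same_source = [120, 135, 128, 142, 130, 125, 138, 133]   # synthetic AFIS scores
non_match = [40, 55, 48, 60, 52, 45, 58, 50, 47, 53]

num = NormalDist(mean(same_source), stdev(same_source))
den = NormalDist(mean(non_match), stdev(non_match))

def likelihood_ratio(score):
    """LR > 1 supports the same-source proposition; LR < 1 the alternative."""
    return num.pdf(score) / den.pdf(score)

lr_high = likelihood_ratio(130)   # score typical of a same-source comparison
lr_low = likelihood_ratio(50)     # score typical of between-finger variability
```

The paper's findings on sample size and on conditioning the denominator by finger number and general pattern translate, in this sketch, to how `non_match` is sampled before the density is fit.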

  15. Finite Element Modeling of Laminated Composite Plates with Locally Delaminated Interface Subjected to Impact Loading

    PubMed Central

    Abo Sabah, Saddam Hussein; Kueh, Ahmad Beng Hong

    2014-01-01

    This paper investigates the effects of localized interface progressive delamination on the behavior of two-layer laminated composite plates when subjected to low velocity impact loading for various fiber orientations. By means of a finite element approach, the laminae stiffnesses are constructed independently from their interface, where a well-defined virtually zero-thickness interface element is discretely adopted for delamination simulation. The present model has the advantage of simulating a localized interfacial condition at arbitrary locations, for various degeneration areas and intensities, under the influence of numerous boundary conditions, since the interfacial description is expressed discretely. In comparison, the model shows good agreement with existing results from the literature when modeled in a perfectly bonded state. It is found that as the local delamination area increases, so does the magnitude of the maximum displacement history. Also, as the deviation between top and bottom fiber orientations increases, both central deflection and energy absorption increase, although the relative maximum displacement correspondingly decreases compared to the laminate's perfectly bonded state. PMID:24696668

  16. Reduction of nonlinear embedded boundary models for problems with evolving interfaces

    NASA Astrophysics Data System (ADS)

    Balajewicz, Maciej; Farhat, Charbel

    2014-10-01

    Embedded boundary methods alleviate many computational challenges, including those associated with meshing complex geometries and solving problems with evolving domains and interfaces. Developing model reduction methods for computational frameworks based on such methods seems however to be challenging. Indeed, most popular model reduction techniques are projection-based, and rely on basis functions obtained from the compression of simulation snapshots. In a traditional interface-fitted computational framework, the computation of such basis functions is straightforward, primarily because the computational domain does not contain in this case a fictitious region. This is not the case however for an embedded computational framework because the computational domain typically contains in this case both real and ghost regions whose definitions complicate the collection and compression of simulation snapshots. The problem is exacerbated when the interface separating both regions evolves in time. This paper addresses this issue by formulating the snapshot compression problem as a weighted low-rank approximation problem where the binary weighting identifies the evolving component of the individual simulation snapshots. The proposed approach is application independent and therefore comprehensive. It is successfully demonstrated for the model reduction of several two-dimensional, vortex-dominated, fluid-structure interaction problems.
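The weighted low-rank idea can be sketched in its simplest rank-1 form with binary weights, solved by alternating least squares (a simplified stand-in for the paper's formulation, not its actual algorithm):

```python
# Weighted rank-1 approximation by alternating least squares: minimize
# sum_ij W_ij * (A_ij - u_i * v_j)^2, where the binary weight W marks the
# "real" (non-ghost) entries of each snapshot and masked entries are free.

def weighted_rank1(A, W, iters=100):
    m, n = len(A), len(A[0])
    u = [1.0] * m
    v = [1.0] * n
    for _ in range(iters):
        for i in range(m):            # update u with v fixed
            num = sum(W[i][j] * A[i][j] * v[j] for j in range(n))
            den = sum(W[i][j] * v[j] * v[j] for j in range(n))
            u[i] = num / den if den else 0.0
        for j in range(n):            # update v with u fixed
            num = sum(W[i][j] * A[i][j] * u[i] for i in range(m))
            den = sum(W[i][j] * u[i] * u[i] for i in range(m))
            v[j] = num / den if den else 0.0
    return u, v

# Exact rank-1 data with two entries masked out and corrupted, standing in
# for fictitious ghost-region values that must not constrain the fit:
a = [2.0, -1.0, 0.5, 3.0]
b = [1.0, 4.0, -2.0]
A = [[ai * bj for bj in b] for ai in a]
W = [[1.0] * 3 for _ in range(4)]
W[0][0] = W[2][1] = 0.0               # mark these entries as ghost values
A[0][0] = A[2][1] = 99.0              # corrupt them; weighting ignores this

u, v = weighted_rank1(A, W)
err = max(abs(u[i] * v[j] - a[i] * b[j])
          for i in range(4) for j in range(3) if W[i][j])
```

Because the corrupted entries carry zero weight, the recovered factors reproduce the true rank-1 structure on the observed entries, which is exactly why the weighted formulation sidesteps the ghost-region contamination problem.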

  17. A virtual interface for interactions with 3D models of the human body.

    PubMed

    De Paolis, Lucio T; Pulimeno, Marco; Aloisio, Giovanni

    2009-01-01

    The developed system is the first prototype of a virtual interface designed to avoid contact with the computer so that the surgeon is able to visualize 3D models of the patient's organs more effectively during surgical procedure or to use this in the pre-operative planning. The doctor will be able to rotate, to translate and to zoom in on 3D models of the patient's organs simply by moving his finger in free space; in addition, it is possible to choose to visualize all of the organs or only some of them. All of the interactions with the models happen in real-time using the virtual interface which appears as a touch-screen suspended in free space in a position chosen by the user when the application is started up. Finger movements are detected by means of an optical tracking system and are used to simulate touch with the interface and to interact by pressing the buttons present on the virtual screen. PMID:19377116
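The core interaction, projecting a tracked fingertip onto the virtual screen plane and hit-testing buttons, can be sketched as follows (plane placement, button names, and layout are illustrative, not the system's actual configuration):

```python
# Virtual touch-screen sketch: the interface plane is defined by an origin,
# a unit normal, and two in-plane unit axes.  A button is "pressed" when the
# tracked fingertip crosses the plane inside the button's rectangle.

def press(finger, plane_origin, normal, u_axis, v_axis, buttons):
    """finger: (x, y, z) from the optical tracker.
    buttons: {name: (u_min, v_min, u_max, v_max)} in plane coordinates.
    Returns the pressed button name, or None if the finger is still in
    front of the plane or outside every button."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    rel = tuple(f - o for f, o in zip(finger, plane_origin))
    if dot(rel, normal) > 0:          # finger has not crossed the plane yet
        return None
    u, v = dot(rel, u_axis), dot(rel, v_axis)
    for name, (u0, v0, u1, v1) in buttons.items():
        if u0 <= u <= u1 and v0 <= v <= v1:
            return name
    return None

# Hypothetical layout: plane at the origin facing +z, two buttons.
buttons = {"rotate": (0, 0, 10, 5), "zoom": (0, 6, 10, 11)}
hit = press((3, 2, -0.5), (0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0), buttons)
```

Rotation, translation, and zoom gestures then reduce to tracking the fingertip's (u, v) trajectory while it stays behind the plane.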

  18. EzGal: A Flexible Interface for Stellar Population Synthesis Models

    NASA Astrophysics Data System (ADS)

    Mancone, Conor L.; Gonzalez, Anthony H.

    2012-06-01

    We present EzGal, a flexible Python program designed to easily generate observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models. As has been demonstrated by various authors, for many applications the choice of input SPS models can be a significant source of systematic uncertainty. A key strength of EzGal is that it enables simple, direct comparison of different model sets so that the uncertainty introduced by choice of model set can be quantified. Its ability to work with new models will allow EzGal to remain useful as SPS modeling evolves to keep up with the latest research (such as varying IMFs). EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and it can be used to interpolate between metallicities for a given model set. To facilitate use, we have created an online interface to run EzGal and quickly generate magnitude and mass-to-light ratio predictions for a variety of star-formation histories and model sets. We make many commonly used SPS models available from the online interface, including the canonical Bruzual & Charlot models, an updated version of these models, the Maraston models, the BaSTI models, and the Flexible Stellar Population Synthesis (FSPS) models. We use EzGal to compare magnitude predictions for the model sets as a function of wavelength, age, metallicity, and star-formation history. From this comparison we quickly recover the well-known result that the models agree best in the optical for old solar-metallicity models, with differences at the ~0.1 mag level. Similarly, the most problematic regime for SPS modeling is for young ages (≲ 2 Gyr) and long wavelengths (λ ≳ 7500 Å), where thermally pulsating AGB stars are important and scatter between models can vary from 0.3 mag (Sloan i) to 0.7 mag (Ks). We find that these differences are not caused by one discrepant model

  19. An automated system to simulate the River discharge in Kyushu Island using the H08 model

    NASA Astrophysics Data System (ADS)

    Maji, A.; Jeon, J.; Seto, S.

    2015-12-01

    Kyushu Island is located in the southwestern part of Japan and is often affected by typhoons and a Baiu front. Severe water-related disasters have been recorded on the island. At the same time, because of its high population density and agricultural demand, water resources are an important issue for Kyushu Island. The simulation of river discharge is important for water resource management and early warning of water-related disasters. This study attempts to apply the H08 model to simulate river discharge in Kyushu Island. Geospatial meteorological and topographical data were obtained from the Japanese Ministry of Land, Infrastructure, Transport and Tourism (MLIT) and the Automated Meteorological Data Acquisition System (AMeDAS) of the Japan Meteorological Agency (JMA). The number of AMeDAS observation stations is limited and not fully adequate for applying water resources models in Kyushu, so it is necessary to spatially interpolate the point data to produce a gridded dataset. The meteorological grid dataset is produced by taking elevation dependence into account. Solar radiation is estimated from hourly sunshine duration by a conventional formula. We successfully improved the accuracy of the interpolated data simply by considering elevation dependence and found that the bias is related to geographical location. The rain/snow classification is done by the H08 model and is validated by comparing estimated and observed snow rates; the estimates tend to be larger than the corresponding observed values. A system to automatically produce daily meteorological grid datasets is being constructed. The geospatial river network data were produced by ArcGIS and used in the H08 model to simulate the river discharge. Firstly, this research compares simulated and measured specific discharge, which is the ratio of discharge to watershed area. Significant errors between simulated and measured data were seen in some rivers.
Secondly, the outputs by the coupled model including crop growth
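Elevation-dependent interpolation of station data can be sketched with inverse-distance weighting plus a constant lapse rate (the lapse-rate treatment and station values below are illustrative, not the study's actual scheme):

```python
# Elevation-aware interpolation sketch: each station temperature is reduced
# to sea level with a constant lapse rate, IDW-interpolated horizontally,
# then lifted back to the target cell's elevation.

LAPSE = -0.0065   # K per metre, standard atmospheric lapse rate (assumed)

def interp_temperature(cell_xy, cell_elev, stations):
    """stations: list of ((x, y), elevation_m, temperature_C)."""
    num = den = 0.0
    for (x, y), elev, temp in stations:
        d2 = (x - cell_xy[0]) ** 2 + (y - cell_xy[1]) ** 2
        w = 1.0 / (d2 + 1e-12)              # inverse-distance-squared weight
        t_sea = temp - LAPSE * elev         # reduce to sea level
        num += w * t_sea
        den += w
    return num / den + LAPSE * cell_elev    # lift to the cell elevation

# Two hypothetical stations at different elevations:
stations = [((0, 0), 100.0, 20.0), ((10, 0), 500.0, 17.4)]
t = interp_temperature((5, 0), 300.0, stations)   # cell at 300 m elevation
```

Interpolating the sea-level-reduced values rather than the raw temperatures is what removes the systematic elevation bias the abstract refers to.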

  20. Modeling the Interface Instability and Mixing Flow During the Process of Liquid Explosion Dissemination

    NASA Astrophysics Data System (ADS)

    Li, L.; Xu, S. L.; Ren, Y. J.; Liu, G. R.; Ren, X. B.; Xie, W. J.; Li, Y. C.; Wang, Z. L.

    The liquid flow during the process of liquid explosion dissemination is a typical complex high-speed unsteady motion with multiple scales in space and time. The motion of the liquid flow may be partitioned into several stages. The first is the initial liquid expansion under the action of the shock wave and explosive gaseous products. The second is the breakup of the liquid annulus and turbulent mixing, called near-field flow. The third is the two-phase mixing flow of gas and liquid drops, called far-field flow. For the first stage, a compressible inviscid liquid model was used, while an elastic-plastic model was used to describe the expansion of the solid shell. A two-dimensional numerical study was performed using the Arbitrary Lagrangian-Eulerian (ALE) method. In the near field, the unstable flow of the liquid annulus is dominated by several factors: (1) the shock action of the gaseous expansive products; (2) the geometric structure of the wave system in the liquid; (3) local bubble and cavitating flow in the annulus, which induces many local interface instabilities, tears up interfaces, and enhances the instability and breakup of the liquid annulus. In this paper, it is postulated that cavitation in the liquid annulus is induced by the shock wave and that the flow of the liquid annulus is a two-phase flow (liquid and discrete bubble groups). Experimental results are presented in which the breakup of the interface and turbulent mixing are visualized qualitatively and measured quantitatively using the shadow photography method. The primary results are flow patterns of the interfaces and transient flow parameters from which the nonlinear character is obtained, providing experimental support for modeling the unstable interface flow and turbulent mixing.
The two-phase mixing flow between liquid drops and gas in the far field can be studied by numerical methods where the turbulent motion of the gas phase is represented with a k-ɛ model in the Euler system, and the motion of the particle phase is represented with particle stochastic

  1. Object-Based Integration of Photogrammetric and LiDAR Data for Automated Generation of Complex Polyhedral Building Models

    PubMed Central

    Kim, Changjae; Habib, Ayman

    2009-01-01

    This research is concerned with a methodology for automated generation of polyhedral building models for complex structures, whose rooftops are bounded by straight lines. The process starts by utilizing LiDAR data for building hypothesis generation and derivation of individual planar patches constituting building rooftops. Initial boundaries of these patches are then refined through the integration of LiDAR and photogrammetric data and hierarchical processing of the planar patches. Building models for complex structures are finally produced using the refined boundaries. The performance of the developed methodology is evaluated through qualitative and quantitative analysis of the generated building models from real data. PMID:22346722
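    The patch-derivation step above rests on fitting planes to LiDAR returns grouped by rooftop face. A minimal sketch of that core operation, assuming points arrive as an (N, 3) array; the function name and synthetic patch are illustrative, not the authors' implementation, and a real pipeline would add a robust estimator such as RANSAC to reject off-plane returns:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a plane z = a*x + b*y + c to an (N, 3) point array.

    Returns (a, b, c). Assumes the patch is not near-vertical, so the
    rooftop can be written as a height field z(x, y).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Synthetic rooftop patch: z = 0.1*x - 0.2*y + 5 plus small measurement noise
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(200, 2))
z = 0.1 * pts[:, 0] - 0.2 * pts[:, 1] + 5 + rng.normal(0, 0.01, 200)
a, b, c = fit_plane(np.column_stack([pts, z]))
```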

  2. Automated quantification of carotid artery stenosis on contrast-enhanced MRA data using a deformable vascular tube model.

    PubMed

    Suinesiaputra, Avan; de Koning, Patrick J H; Zudilova-Seinstra, Elena; Reiber, Johan H C; van der Geest, Rob J

    2012-08-01

The purpose of this study was to develop and validate a method for automated segmentation of the carotid artery lumen from volumetric MR angiography (MRA) images using a deformable tubular 3D Non-Uniform Rational B-Spline (NURBS) model. A flexible 3D tubular NURBS model was designed to delineate the carotid arterial lumen. User interaction was allowed to guide the model by placement of forbidden areas. Contrast-enhanced MRA (CE-MRA) scans from 21 patients with carotid atherosclerotic disease were included in this study. The validation was performed against expert-drawn contours on multi-planar reformatted image slices perpendicular to the artery. Excellent linear correlations were found for cross-sectional area measurement (r = 0.98, P < 0.05) and luminal diameter (r = 0.98, P < 0.05). Strong agreement in terms of the Dice similarity index was achieved: 0.95 ± 0.02 (common carotid artery), 0.90 ± 0.07 (internal carotid artery), 0.87 ± 0.07 (external carotid artery), 0.88 ± 0.09 (carotid bifurcation) and 0.75 ± 0.20 (stenosed segments). Slight overestimation of stenosis grading by the automated method was observed: the mean differences were 7.20% (SD = 21.00%) and 5.2% (SD = 21.96%) when validated against two observers. Reproducibility of the stenosis grade calculated by the automated method was high; the mean difference between two repeated analyses was 1.9 ± 7.3%. In conclusion, the automated method shows high potential for clinical application in the analysis of CE-MRA of carotid arteries. PMID:22160666
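    The Dice similarity index used for validation here is straightforward to compute from two binary segmentation masks; a minimal sketch (function name and example masks are illustrative):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity index between two binary segmentation masks:
    2*|A n B| / (|A| + |B|). Two empty masks agree perfectly by convention."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Identical masks give 1.0; partial overlap gives 2*1/(2+1) = 2/3
full = dice([1, 1, 0, 0], [1, 1, 0, 0])
part = dice([1, 1, 0, 0], [1, 0, 0, 0])
```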

  3. Automated Bayesian model development for frequency detection in biological time series

    PubMed Central

    2011-01-01

    sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure. PMID:21702910
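    One way to make the Bayesian alternative concrete: fit a sinusoid at each candidate frequency and compare the resulting likelihoods, which works on irregularly sampled series where a plain FFT does not apply. This sketch uses a profile likelihood as a stand-in for the full marginal likelihood that the paper's model comparison would use; all names and the synthetic data are illustrative:

```python
import numpy as np

def best_frequency(t, y, freqs, sigma=1.0):
    """For each candidate frequency, fit a*sin + b*cos + c by least squares
    and score it by the Gaussian log-likelihood of the residuals.
    A full Bayesian treatment would integrate out (a, b, c) to obtain the
    marginal likelihood; the profile likelihood here is a common proxy.
    """
    scores = []
    for f in freqs:
        X = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             np.ones_like(t)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        scores.append(-0.5 * np.sum(resid ** 2) / sigma ** 2)
    return freqs[int(np.argmax(scores))]

# Irregularly sampled series with true frequency 0.5, unsuitable for an FFT
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 20, 150))
y = np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 0.3, t.size)
f_hat = best_frequency(t, y, np.linspace(0.1, 1.0, 91))
```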

  4. Automation based on knowledge modeling theory and its applications in engine diagnostic systems using Space Shuttle Main Engine vibrational data. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Jonnathan H.

    1995-01-01

    Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor and time intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).

  5. Modeling the Charge Transport in Graphene Nano Ribbon Interfaces for Nano Scale Electronic Devices

    NASA Astrophysics Data System (ADS)

    Kumar, Ravinder; Engles, Derick

    2015-05-01

In this work we modeled, simulated and compared electronic charge transport across metal-semiconductor-metal interfaces of graphene nanoribbons (GNRs) with different geometries using first-principles calculations and the Non-Equilibrium Green's Function (NEGF) method. We modeled junctions of an armchair GNR strip sandwiched between two zigzag strips (Z-A-Z) and of a zigzag GNR strip sandwiched between two armchair strips (A-Z-A) using semi-empirical Extended Hückel Theory (EHT) within the NEGF framework. I-V characteristics of the interfaces were visualized for various transport parameters. Distinct changes in the conductance and I-V curves are reported as the width across the layers and the channel length (central part) were varied, at bias voltages from -1 V to 1 V in steps of 0.25 V. From the simulated results we observed that the conductance through the A-Z-A graphene junction is of the order of 10^-13 S, whereas the conductance through the Z-A-Z graphene junction is of the order of 10^-5 S. These conductance-controlled mechanisms for charge transport in graphene interfaces with different geometries are important for the design of graphene-based nanoscale electronic devices such as graphene FETs and sensors.

  6. Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS

    NASA Technical Reports Server (NTRS)

Wheeler, Mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.

    2009-01-01

Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with multiple requests submitted in sequence, to obtain useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT model runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.

  7. Modeling interface trapping effect in organic field-effect transistor under illumination

    NASA Astrophysics Data System (ADS)

    Kwok, H. L.

    2009-02-01

Organic field-effect transistors (OFETs) have received significant attention recently because of their potential application in low-cost flexible electronics. The physics behind their operation is relatively complex and requires careful consideration, particularly with respect to charge trapping at the insulator-semiconductor interface and the field effect in a region only a few molecular layers thick. Recent studies have shown that the so-called “onset” voltage (V_onset) in the rubrene OFET can vary significantly depending on past illumination and bias history. It is therefore important to define the role of the interface trap states in more concrete terms and to show how they may affect device performance. In this work, we propose an equivalent-circuit model for the OFET that includes mechanisms linked to trapping, among them a light-sensitive “resistor” controlling charge flow into and out of the interface trap states. Based on the proposed equivalent-circuit model, an analytical expression for V_onset is derived showing how it can depend on gate bias and illumination. Using data from the literature, we analyzed the I-V characteristics of a rubrene OFET after pulsed illumination and of a tetracene OFET during steady-state illumination.

  8. Definition of common support equipment and space station interface requirements for IOC model technology experiments

    NASA Technical Reports Server (NTRS)

    Russell, Richard A.; Waiss, Richard D.

    1988-01-01

A study was conducted to identify the common support equipment and Space Station interface requirements for the IOC (initial operational capability) model technology experiments. In particular, each principal investigator for the proposed model technology experiments was contacted and visited to develop the technical understanding needed and to support the generation of the detailed technical backup data required for completion of this study. Based on the data generated, a strong case can be made for a dedicated technology-experiment command and control workstation consisting of a command keyboard, cathode ray tube, data processing and storage, and an alert/annunciator panel located in the pressurized laboratory.

  9. A model for the control mode man-computer interface dialogue

    NASA Technical Reports Server (NTRS)

    Chafin, R. L.

    1981-01-01

A four-stage model is presented for the control mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues: human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.

  10. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

PRay is a graphical user interface for interactively displaying and editing velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray-tracing results from other software. The main features are graphical editing of nodes and fast adjustment of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase-picking software. PRay is open source software written in the scripting language Perl; it runs on Unix-like operating systems, including Mac OS X, and provides a version-controlled source code repository for community development.

  11. Multi-scale/multi-physical modeling in head/disk interface of magnetic data storage

    NASA Astrophysics Data System (ADS)

    Chung, Pil Seung; Smith, Robert; Vemuri, Sesha Hari; Jhon, Young In; Tak, Kyungjae; Moon, Il; Biegler, Lorenz T.; Jhon, Myung S.

    2012-04-01

    The model integration of the head-disk interface (HDI) in the hard disk drive system, which includes the hierarchy of highly interactive layers (magnetic layer, carbon overcoat (COC), lubricant, and air bearing system (ABS)), has recently been focused upon to resolve technical barriers and enhance reliability. Heat-assisted magnetic recording especially demands that the model simultaneously incorporates thermal and mechanical phenomena by considering the enormous combinatorial cases of materials and multi-scale/multi-physical phenomena. In this paper, we explore multi-scale/multi-physical simulation methods for HDI, which will holistically integrate magnetic layers, COC, lubricants, and ABS in non-isothermal conditions.

  12. Automated Detection and Classification of Rockfall Induced Seismic Signals with Hidden-Markov-Models

    NASA Astrophysics Data System (ADS)

    Zeckra, M.; Hovius, N.; Burtin, A.; Hammer, C.

    2015-12-01

Originally introduced in speech recognition, Hidden Markov Models are applied in many fields of pattern recognition. In seismology, the technique has recently been introduced to improve on common detection algorithms such as the STA/LTA ratio or cross-correlation methods. Mainly used so far for the monitoring of volcanic activity, the technique is here applied in one of the first studies of seismic signals induced by geomorphic processes. With an array of eight broadband seismometers deployed around the steep, highly erosive Illgraben catchment (Switzerland), we studied a sequence of landslides triggered over a period of several days in winter. A preliminary manual classification led us to identify three main seismic signal classes that were used as a starting point for the HMM automated detection and classification: (1) rockslide signals, including a failure source and debris mobilization along the slope; (2) rockfall signals from the remobilization of debris along the unstable slope; and (3) single cracking signals from the affected cliff, observed before the rockslide events. Besides the ability to classify the whole dataset automatically, the HMM approach reflects the origin and interactions of the three signal classes, which helps us to understand this geomorphic crisis and the possible triggering mechanisms for slope processes. The temporal distribution of crack events (duration > 5 s, frequency band 2-8 Hz) follows an inverse Omori law, pointing to the catastrophic behaviour of the failure mechanism and to its interest for warning purposes in rockslide risk assessment. Thanks to the dense seismic array and independent weather observations in the landslide area, this dataset also provides information about the triggering mechanisms, which exhibit a tight link between rainfall and freezing-level fluctuations.
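    As a baseline for comparison, the STA/LTA ratio the abstract mentions can be sketched in a few lines: the ratio of a short-term to a long-term energy average spikes at impulsive arrivals. The windows, threshold, and synthetic trace below are illustrative, not those used in the study:

```python
import numpy as np

def sta_lta(trace, n_sta, n_lta):
    """Classic short-term/long-term average ratio for event detection.
    Returns the STA/LTA ratio per sample (0 where the LTA window is undefined).
    """
    energy = trace.astype(float) ** 2
    csum = np.concatenate([[0.0], np.cumsum(energy)])
    ratio = np.zeros(trace.size)
    for i in range(n_lta, trace.size):
        sta = (csum[i + 1] - csum[i + 1 - n_sta]) / n_sta
        lta = (csum[i + 1] - csum[i + 1 - n_lta]) / n_lta
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

# Synthetic trace: background noise with a transient "rockfall" burst
rng = np.random.default_rng(2)
trace = rng.normal(0, 1.0, 2000)
trace[1200:1260] += rng.normal(0, 8.0, 60)   # impulsive event
r = sta_lta(trace, n_sta=20, n_lta=200)
onset = int(np.argmax(r > 4.0))              # first sample above threshold
```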

  13. Temperature Control of Fimbriation Circuit Switch in Uropathogenic Escherichia coli: Quantitative Analysis via Automated Model Abstraction

    PubMed Central

    Kuwahara, Hiroyuki; Myers, Chris J.; Samoilov, Michael S.

    2010-01-01

    Uropathogenic Escherichia coli (UPEC) represent the predominant cause of urinary tract infections (UTIs). A key UPEC molecular virulence mechanism is type 1 fimbriae, whose expression is controlled by the orientation of an invertible chromosomal DNA element—the fim switch. Temperature has been shown to act as a major regulator of fim switching behavior and is overall an important indicator as well as functional feature of many urologic diseases, including UPEC host-pathogen interaction dynamics. Given this panoptic physiological role of temperature during UTI progression and notable empirical challenges to its direct in vivo studies, in silico modeling of corresponding biochemical and biophysical mechanisms essential to UPEC pathogenicity may significantly aid our understanding of the underlying disease processes. However, rigorous computational analysis of biological systems, such as fim switch temperature control circuit, has hereto presented a notoriously demanding problem due to both the substantial complexity of the gene regulatory networks involved as well as their often characteristically discrete and stochastic dynamics. To address these issues, we have developed an approach that enables automated multiscale abstraction of biological system descriptions based on reaction kinetics. Implemented as a computational tool, this method has allowed us to efficiently analyze the modular organization and behavior of the E. coli fimbriation switch circuit at different temperature settings, thus facilitating new insights into this mode of UPEC molecular virulence regulation. In particular, our results suggest that, with respect to its role in shutting down fimbriae expression, the primary function of FimB recombinase may be to effect a controlled down-regulation (rather than increase) of the ON-to-OFF fim switching rate via temperature-dependent suppression of competing dynamics mediated by recombinase FimE. Our computational analysis further implies that this down
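    The "characteristically discrete and stochastic dynamics" of such a switch are typically simulated with Gillespie's algorithm. A toy two-state sketch (rates, names, and interpretation are illustrative, not the paper's abstracted model) shows how suppressing the ON-to-OFF switching rate raises ON-state occupancy, mirroring the down-regulation argument above:

```python
import random

def simulate_switch(k_on_off, k_off_on, t_end, seed=0):
    """Gillespie-style simulation of a two-state (OFF/ON) switch.
    Returns the fraction of time spent ON. k_on_off is the ON->OFF rate
    (FimE-mediated in the fim system, illustratively), k_off_on the
    OFF->ON rate.
    """
    rng = random.Random(seed)
    t, state, time_on = 0.0, 1, 0.0          # start in the ON orientation
    while t < t_end:
        rate = k_on_off if state == 1 else k_off_on
        dt = min(rng.expovariate(rate), t_end - t)
        if state == 1:
            time_on += dt
        t += dt
        state = 1 - state                    # flip orientation
    return time_on / t_end

# Suppressing the ON->OFF rate (as temperature-dependent suppression of
# FimE-mediated dynamics would) raises the ON-state occupancy.
frac_fast = simulate_switch(k_on_off=1.0, k_off_on=1.0, t_end=2000.0)
frac_slow = simulate_switch(k_on_off=0.2, k_off_on=1.0, t_end=2000.0)
```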

  14. ISMARA: automated modeling of genomic signals as a democracy of regulatory motifs.

    PubMed

    Balwierz, Piotr J; Pachkov, Mikhail; Arnold, Phil; Gruber, Andreas J; Zavolan, Mihaela; van Nimwegen, Erik

    2014-05-01

Accurate reconstruction of the regulatory networks that control gene expression is one of the key current challenges in molecular biology. Although gene expression and chromatin state dynamics are ultimately encoded by constellations of binding sites recognized by regulators such as transcription factors (TFs) and microRNAs (miRNAs), our understanding of this regulatory code and its context-dependent read-out remains very limited. Given that there are thousands of potential regulators in mammals, it is not practical to use direct experimentation to identify which of these play a key role for a particular system of interest. We developed a methodology that models gene expression or chromatin modifications in terms of genome-wide predictions of regulatory sites and completely automated it into a web-based tool called ISMARA (Integrated System for Motif Activity Response Analysis). Given only gene expression or chromatin state data across a set of samples as input, ISMARA identifies the key TFs and miRNAs driving expression/chromatin changes and makes detailed predictions regarding their regulatory roles. These include predicted activities of the regulators across the samples, their genome-wide targets, enriched gene categories among the targets, and direct interactions between the regulators. Applying ISMARA to data sets from well-studied systems, we show that it consistently identifies known key regulators ab initio. We also present a number of novel predictions including regulatory interactions in innate immunity, a master regulator of mucociliary differentiation, TFs consistently dysregulated in cancer, and TFs that mediate specific chromatin modifications. PMID:24515121

  15. Testing of Environmental Satellite Bus-Instrument Interfaces Using Engineering Models

    NASA Technical Reports Server (NTRS)

    Gagnier, Donald; Hayner, Rick; Nosek, Thomas; Roza, Michael; Hendershot, James E.; Razzaghi, Andrea I.

    2004-01-01

This paper discusses the formulation and execution of a laboratory test of the electrical interfaces between multiple atmospheric scientific instruments and the spacecraft bus that carries them. The testing, performed in 2002, used engineering models of the instruments and the Aura spacecraft bus electronics. Aura is one of NASA's Earth Observing System missions. The test was designed to evaluate the complex interfaces in the command and data handling subsystems prior to integration of the complete flight instruments on the spacecraft. A problem discovered during the flight integration phase of the observatory can cause significant cost and schedule impacts. The tests successfully revealed problems and led to their resolution before the full-up integration phase, saving significant cost and schedule. This approach could be beneficial for future environmental satellite programs involving the integration of multiple, complex scientific instruments onto a spacecraft bus.

  16. Automated modeling of ecosystem CO2 fluxes based on closed chamber measurements: A standardized conceptual and practical approach

    NASA Astrophysics Data System (ADS)

    Hoffmann, Mathias; Jurisch, Nicole; Albiac Borraz, Elisa; Hagemann, Ulrike; Sommer, Michael; Augustin, Jürgen

    2015-04-01

Closed chamber measurements are widely used for determining the CO2 exchange of small-scale or heterogeneous ecosystems. Alongside chamber design and operational handling, the data processing procedure is a considerable source of uncertainty in the obtained results. We developed a standardized automatic data processing algorithm, based on the language and statistical computing environment R, to (i) calculate measured CO2 flux rates, (ii) parameterize ecosystem respiration (Reco) and gross primary production (GPP) models, (iii) optionally compute an adaptive temperature model, (iv) model Reco, GPP and net ecosystem exchange (NEE), and (v) evaluate model uncertainty (calibration, validation and uncertainty prediction). The algorithm was tested for different manual and automatic chamber measurement systems (such as automated NEE chambers and the LI-8100A soil CO2 flux system) and ecosystems. Our study shows that even minor changes in the modelling approach may produce considerable differences in calculated flux rates, in the derived photosynthetically active radiation and temperature dependencies, and subsequently in the modelled Reco, GPP and NEE balance, of up to 25%. Automated and standardized data processing procedures based on clearly defined criteria, such as statistical parameters and thresholds, are therefore a prerequisite for guaranteeing the reproducibility and traceability of modelling results and for encouraging better comparability between closed-chamber-based CO2 measurements.
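    One common Reco parameterization that such an algorithm might fit from chamber data is the Q10 model. The paper's tool is written in R; this Python sketch (model form, names, and synthetic data are illustrative, not the authors' code) estimates the parameters by log-space regression:

```python
import numpy as np

def fit_q10(temp, reco, t_ref=10.0):
    """Fit the common Q10 respiration model
        Reco(T) = R_ref * Q10 ** ((T - t_ref) / 10)
    by linear regression in log space. Returns (R_ref, Q10)."""
    slope, intercept = np.polyfit((temp - t_ref) / 10.0, np.log(reco), 1)
    return np.exp(intercept), np.exp(slope)

# Synthetic chamber fluxes: R_ref = 2.0, Q10 = 2.5, multiplicative noise
rng = np.random.default_rng(3)
T = rng.uniform(0, 25, 120)
flux = 2.0 * 2.5 ** ((T - 10.0) / 10.0) * np.exp(rng.normal(0, 0.05, 120))
r_ref, q10 = fit_q10(T, flux)
```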

  17. Prediction of hot spots in protein interfaces using a random forest model with hybrid features.

    PubMed

    Wang, Lin; Liu, Zhi-Ping; Zhang, Xiang-Sun; Chen, Luonan

    2012-03-01

Prediction of hot spots in protein interfaces provides crucial information for research on protein-protein interaction and drug design. Existing machine learning methods generally judge whether a given residue is likely to be a hot spot by extracting features only from the target residue. However, hot spots usually form a small cluster of residues which are tightly packed together at the center of the protein interface. With this in mind, we present a novel method to extract hybrid features which incorporate a wide range of information on the target residue and its spatially neighboring residues, i.e. the nearest contact residue in the other face (mirror-contact residue) and the nearest contact residue in the same face (intra-contact residue). We provide a novel random forest (RF) model to effectively integrate these hybrid features for predicting hot spots in protein interfaces. Our method achieves accuracy (ACC) of 82.4% and a Matthews correlation coefficient (MCC) of 0.482 on the Alanine Scanning Energetics Database, and ACC of 77.6% and MCC of 0.429 on the Binding Interface Database. In a comparison study, the performance of our RF model exceeds that of other existing methods, such as Robetta, FOLDEF, KFC, KFC2, MINERVA and HotPoint. Of our hybrid features, three physicochemical features of target residues (mass, polarizability and isoelectric point), the relative side-chain accessible surface area and the average depth index of mirror-contact residues are found to be the main discriminative features in hot spot prediction. We also confirm that hot spots tend to form large contact surface areas between two interacting proteins. Source data and code are available at: http://www.aporc.org/doc/wiki/HotSpot. PMID:22258275
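    The classification setup can be sketched with a random forest on synthetic stand-in features; the feature semantics and data below are invented for illustration and do not reproduce the paper's hybrid features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for hybrid interface features: columns mimic
# target-residue properties plus neighbour-derived terms (values invented).
rng = np.random.default_rng(4)
n = 400
X = rng.normal(0, 1, (n, 5))
# Hot spots (y=1) occur where two of the features are jointly high --
# a toy analogue of tightly packed residue clusters.
y = ((X[:, 1] + X[:, 3] + rng.normal(0, 0.5, n)) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(rf, X, y, cv=5, scoring="accuracy").mean()
```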

  18. The local structure factor near an interface; beyond extended capillary-wave models.

    PubMed

    Parry, A O; Rascón, C; Evans, R

    2016-06-22

We investigate the local structure factor S(z; q) at a free liquid-gas interface in systems with short-ranged intermolecular forces and determine the corrections to the leading-order, capillary-wave-like, Goldstone mode divergence of S(z; q) known to occur for parallel (i.e. measured along the interface) wavevectors q → 0. We show from explicit solution of the inhomogeneous Ornstein-Zernike equation that for distances z far from the interface, where the profile decays exponentially, S(z; q) splits unambiguously into bulk and interfacial contributions. On each side of the interface, the interfacial contributions can be characterised by distinct liquid and gas wavevector-dependent surface tensions, σ_l(q) and σ_g(q), which are determined solely by the bulk two-body and three-body direct correlation functions. At high temperatures, the wavevector dependence simplifies and is determined almost entirely by the appropriate bulk structure factor, leading to positive rigidity coefficients. Our predictions are confirmed by explicit calculation of S(z; q) within square-gradient theory and the Sullivan model. The results for the latter predict a striking temperature dependence for σ_l(q) and σ_g(q), and have implications for fluctuation effects. Our results account quantitatively for the findings of a recent very extensive simulation study by Höfling and Dietrich of the total structure factor in the interfacial region, in a system with a cut-off Lennard-Jones potential, in sharp contrast to extended capillary-wave models which failed completely to describe the simulation results. PMID:27115774

  19. The local structure factor near an interface; beyond extended capillary-wave models

    NASA Astrophysics Data System (ADS)

    Parry, A. O.; Rascón, C.; Evans, R.

    2016-06-01

We investigate the local structure factor S(z; q) at a free liquid–gas interface in systems with short-ranged intermolecular forces and determine the corrections to the leading-order, capillary-wave-like, Goldstone mode divergence of S(z; q) known to occur for parallel (i.e. measured along the interface) wavevectors q → 0. We show from explicit solution of the inhomogeneous Ornstein–Zernike equation that for distances z far from the interface, where the profile decays exponentially, S(z; q) splits unambiguously into bulk and interfacial contributions. On each side of the interface, the interfacial contributions can be characterised by distinct liquid and gas wavevector-dependent surface tensions, σ_l(q) and σ_g(q), which are determined solely by the bulk two-body and three-body direct correlation functions. At high temperatures, the wavevector dependence simplifies and is determined almost entirely by the appropriate bulk structure factor, leading to positive rigidity coefficients. Our predictions are confirmed by explicit calculation of S(z; q) within square-gradient theory and the Sullivan model. The results for the latter predict a striking temperature dependence for σ_l(q) and σ_g(q), and have implications for fluctuation effects. Our results account quantitatively for the findings of a recent very extensive simulation study by Höfling and Dietrich of the total structure factor in the interfacial region, in a system with a cut-off Lennard-Jones potential, in sharp contrast to extended capillary-wave models which failed completely to describe the simulation results.

  20. Fracture permeability and seismic wave scattering--Poroelastic linear-slip interface model for heterogeneous fractures

    SciTech Connect

    Nakagawa, S.; Myer, L.R.

    2009-06-15

Schoenberg's Linear-slip Interface (LSI) model for single, compliant, viscoelastic fractures has been extended to poroelastic fractures for predicting seismic wave scattering. However, this extended model predicts that the in-plane fracture permeability has no impact on the scattering. Recently, we proposed a variant of the LSI model that accounts for heterogeneity in the in-plane fracture properties. This modified model considers fracture-parallel fluid flow induced by passing seismic waves. The research discussed in this paper applies this new LSI model to heterogeneous fractures to examine when and how the permeability of a fracture is reflected in the scattering of seismic waves. From numerical simulations, we conclude that heterogeneity in the fracture properties is essential for the scattering of seismic waves to be sensitive to the permeability of a fracture.