Science.gov

Sample records for automated modelling interface

  1. Testing of the Automated Fluid Interface System

    NASA Technical Reports Server (NTRS)

    Johnston, A. S.; Tyler, Tony R.

    1998-01-01

    The Automated Fluid Interface System (AFIS) is an advanced development prototype satellite servicer. The device was designed to transfer consumables from one spacecraft to another. An engineering model was built and underwent development testing at Marshall Space Flight Center. While the current AFIS is not suitable for spaceflight, testing and evaluation of the AFIS provided significant experience which would be beneficial in building a flight unit.

  2. Workload-Based Automated Interface Mode Selection

    DTIC Science & Technology

    2012-03-22

Exposing more control and information gives the operator the ability to understand the state of the system better and take more complex actions, but at the...and require fast response, and may be a better target for these types of interface features. Many computer games already use adaptive interfaces to...introduces an agent into the system interface to assume responsibility for managing automation mode selection. The agent uses a novel dynamic scheme for

  3. Development and testing of the Automated Fluid Interface System

    NASA Technical Reports Server (NTRS)

    Milton, Martha E.; Tyler, Tony R.

    1993-01-01

The Automated Fluid Interface System (AFIS) is an advanced development program aimed at becoming the standard interface for satellite servicing for years to come. The AFIS will be capable of transferring propellants, fluids, gases, power, and cryogens from a tanker to an orbiting satellite. The AFIS program currently under consideration is a joint venture between the NASA/Marshall Space Flight Center and Moog, Inc. An engineering model has been built and is undergoing development testing to investigate the mechanism's abilities.

  4. Automated identification and indexing of dislocations in crystal interfaces

    DOE PAGES

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.
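The core idea of a discrete Burgers circuit can be illustrated with a toy sketch: map each (distorted) bond of a closed circuit in the dislocated crystal to its nearest counterpart in the perfect reference lattice; if the mapped bonds fail to close, the closure failure is the Burgers vector. The square lattice and bond values below are invented for demonstration and are far simpler than the paper's general algorithm.

```python
# Toy discrete Burgers circuit on a unit square lattice (illustrative only).

IDEAL_BONDS = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]  # perfect lattice

def nearest_ideal(bond):
    """Map a distorted bond to the closest bond of the perfect reference lattice."""
    return min(IDEAL_BONDS,
               key=lambda b: (b[0] - bond[0])**2 + (b[1] - bond[1])**2)

def burgers_vector(circuit):
    """Sum the reference images of the bonds around a closed circuit.

    A nonzero sum (closure failure) is the Burgers vector of the
    dislocation enclosed by the circuit."""
    bx = sum(nearest_ideal(b)[0] for b in circuit)
    by = sum(nearest_ideal(b)[1] for b in circuit)
    return (bx, by)

# A circuit that closes in the dislocated crystal: three compressed bonds
# cross the inserted half-plane of an edge dislocation.
circuit = [(0.67, 0.0), (0.66, 0.0), (0.67, 0.0),   # bottom: 3 bonds, span 2.0
           (0.0, 1.0),                               # right side
           (-1.0, 0.0), (-1.0, 0.0),                 # top: 2 bonds
           (0.0, -1.0)]                              # left side

print(burgers_vector(circuit))   # closure failure of one lattice spacing: (1.0, 0.0)
```

The three compressed bottom bonds each map back to a full lattice vector, so the reference circuit overshoots by exactly one lattice spacing, which is the signature of an edge dislocation.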

  5. Automated identification and indexing of dislocations in crystal interfaces

    SciTech Connect

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

    Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.

  6. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  7. Automation Interfaces of the Orion GNC Executive Architecture

    NASA Technical Reports Server (NTRS)

    Hart, Jeremy

    2009-01-01

    This viewgraph presentation describes Orion mission's automation Guidance, Navigation and Control (GNC) architecture and interfaces. The contents include: 1) Orion Background; 2) Shuttle/Orion Automation Comparison; 3) Orion Mission Sequencing; 4) Orion Mission Sequencing Display Concept; and 5) Status and Forward Plans.

  8. Towards automation of user interface design

    NASA Technical Reports Server (NTRS)

    Gastner, Rainer; Kraetzschmar, Gerhard K.; Lutz, Ernst

    1992-01-01

    This paper suggests an approach to automatic software design in the domain of graphical user interfaces. There are still some drawbacks in existing user interface management systems (UIMS's) which basically offer only quantitative layout specifications via direct manipulation. Our approach suggests a convenient way to get a default graphical user interface which may be customized and redesigned easily in further prototyping cycles.

  9. Control Interface and Tracking Control System for Automated Poultry Inspection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A new visible/near-infrared inspection system interface was developed in order to conduct research to test and implement an automated chicken inspection system for online operation on commercial chicken processing lines. The spectroscopic system demonstrated effective spectral acquisition and data ...

  10. Space station automation and robotics study. Operator-systems interface

    NASA Technical Reports Server (NTRS)

    1984-01-01

    This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), establish the technologies needed to meet these requirements, and to forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.

  11. Visual automated macromolecular model building.

    PubMed

    Langer, Gerrit G; Hazledine, Saul; Wiegels, Tim; Carolan, Ciaran; Lamzin, Victor S

    2013-04-01

    Automated model-building software aims at the objective interpretation of crystallographic diffraction data by means of the construction or completion of macromolecular models. Automated methods have rapidly gained in popularity as they are easy to use and generate reproducible and consistent results. However, the process of model building has become increasingly hidden and the user is often left to decide on how to proceed further with little feedback on what has preceded the output of the built model. Here, ArpNavigator, a molecular viewer tightly integrated into the ARP/wARP automated model-building package, is presented that directly controls model building and displays the evolving output in real time in order to make the procedure transparent to the user.

  12. SWISS-MODEL: an automated protein homology-modeling server

    PubMed Central

    Schwede, Torsten; Kopp, Jürgen; Guex, Nicolas; Peitsch, Manuel C.

    2003-01-01

SWISS-MODEL (http://swissmodel.expasy.org) is a server for automated comparative modeling of three-dimensional (3D) protein structures. It pioneered the field of automated modeling starting in 1993 and is the most widely used free web-based automated modeling facility today. In 2002 the server computed 120 000 user requests for 3D protein models. SWISS-MODEL provides several levels of user interaction through its World Wide Web interface: in the ‘first approach mode’ only an amino acid sequence of a protein is submitted to build a 3D model. Template selection, alignment and model building are performed fully automatically by the server. In the ‘alignment mode’, the modeling process is based on a user-defined target-template alignment. Complex modeling tasks can be handled with the ‘project mode’ using DeepView (Swiss-PdbViewer), an integrated sequence-to-structure workbench. All models are sent back via email with a detailed modeling report. WhatCheck analyses and ANOLEA evaluations are provided optionally. The reliability of SWISS-MODEL is continuously evaluated in the EVA-CM project. The SWISS-MODEL server is under constant development to improve the successful implementation of expert knowledge into an easy-to-use server. PMID:12824332
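The automated template-selection step of the ‘first approach mode’ can be caricatured as ranking candidate templates by sequence similarity to the target. The sketch below uses a bare ungapped percent identity and an invented mini-database of template sequences; SWISS-MODEL's real pipeline is considerably more sophisticated (profile searches, alignment quality checks, and so on).

```python
# Hypothetical sketch of automated template ranking for homology modeling.

def percent_identity(target, template):
    """Identity over the overlapping (ungapped) region of two sequences."""
    n = min(len(target), len(template))
    matches = sum(1 for a, b in zip(target, template) if a == b)
    return 100.0 * matches / n

def select_template(target, template_db, cutoff=30.0):
    """Return the best-scoring template above the identity cutoff, or None."""
    scored = [(percent_identity(target, seq), name)
              for name, seq in template_db.items()]
    best_score, best_name = max(scored)
    return best_name if best_score >= cutoff else None

templates = {                       # invented mini-database of template sequences
    "1abc": "MKTAYIAKQRQISFVK",
    "2xyz": "MKTAYIGKQRELSFVK",
}
target = "MKTAYIAKQRQISFVR"
print(select_template(target, templates))   # prints 1abc
```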

  13. Economics of automation for the design-to-mask interface

    NASA Astrophysics Data System (ADS)

    Erck, Wesley

    2009-04-01

Mask order automation has increased steadily over the years through a variety of individual mask customer implementations. These have been supported by customer-specific software at the mask suppliers to support the variety of customer output formats. Some customers use the SEMI P10 standard, some use supplier-specific formats, and some use customer-specific formats. Some customers use little automation and depend instead on close customer-supplier relationships. Implementations are varied in quality and effectiveness. A major factor which has prolonged the adoption of more advanced and effective solutions has been a lack of understanding of the economic benefits. Some customers think standardized automation mainly benefits the mask supplier in order entry automation, but this ignores a number of other significant benefits which differ dramatically for each party in the supply chain. This paper discusses the nature of those differing advantages and presents simple models suited to four business cases: integrated device manufacturers (IDM), fabless companies, foundries and mask suppliers. Examples and estimates of the financial advantages for these business types will be shown.

  14. Automated ultrareliability models - A review

    NASA Technical Reports Server (NTRS)

    Bridgman, M. S.; Ness, W. G.

    1984-01-01

    Analytic models are required to assess the reliability of systems designed to ultrareliability requirements. This paper reviews the capabilities and limitations of five currently available automated reliability models which are applicable to fault-tolerant flight control systems. 'System' includes sensors, computers, and actuators. A set of review criteria including validation, configuration adaptability, and resource requirements for model evaluation are described. Five models, ARIES, CARE II, CARE III, CARSRA, and CAST, are assessed against the criteria, thereby characterizing their capabilities and limitations. This review should be helpful to potential users of the models.
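The kind of closed-form reliability arithmetic these tools automate can be shown with a classic textbook case (not one of the five reviewed models): a triple-modular-redundant (TMR) channel of identical components with constant failure rate survives while at least two of three components work, giving R_sys = 3R^2 - 2R^3. The failure rate below is illustrative.

```python
# Minimal analytic reliability sketch: simplex vs. 2-out-of-3 TMR.
import math

def component_reliability(lam, t):
    """Exponential (constant-failure-rate) component reliability R(t) = e^(-lam*t)."""
    return math.exp(-lam * t)

def tmr_reliability(lam, t):
    """2-out-of-3 majority voting: R_sys = 3R^2 - 2R^3."""
    r = component_reliability(lam, t)
    return 3 * r**2 - 2 * r**3

lam = 1e-4          # failures per hour (illustrative value)
for t in (10.0, 100.0, 1000.0):
    print(f"t = {t:6.0f} h  simplex = {component_reliability(lam, t):.6f}"
          f"  TMR = {tmr_reliability(lam, t):.6f}")
```

Note the well-known crossover: TMR beats a single channel for short missions but falls below it once R(t) drops under 0.5, which is one reason automated tools are needed for realistic architectures with repair and coverage effects.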

  15. Geographic information system/watershed model interface

    USGS Publications Warehouse

    Fisher, Gary T.

    1989-01-01

    Geographic information systems allow for the interactive analysis of spatial data related to water-resources investigations. A conceptual design for an interface between a geographic information system and a watershed model includes functions for the estimation of model parameter values. Design criteria include ease of use, minimal equipment requirements, a generic data-base management system, and use of a macro language. An application is demonstrated for a 90.1-square-kilometer subbasin of the Patuxent River near Unity, Maryland, that performs automated derivation of watershed parameters for hydrologic modeling.
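The "automated derivation of watershed parameters" such an interface performs can be sketched in miniature: from a gridded elevation model, compute the basin area and a mean slope. The tiny DEM and 30 m cell size below are invented for illustration; a real interface would read these from the GIS data base.

```python
# Sketch of automated watershed-parameter derivation from a gridded DEM.

CELL = 30.0   # cell size in metres (hypothetical)

dem = [                       # toy elevation grid (metres); None = outside basin
    [110.0, 108.0, 106.0],
    [109.0, 105.0, 103.0],
    [107.0, 104.0, 100.0],
]

def basin_area_km2(grid, cell):
    """Count in-basin cells and convert to square kilometres."""
    cells = sum(1 for row in grid for z in row if z is not None)
    return cells * cell * cell / 1e6

def mean_slope(grid, cell):
    """Average magnitude of the elevation gradient between east/south neighbours."""
    slopes = []
    for i in range(len(grid)):
        for j in range(len(grid[0])):
            if grid[i][j] is None:
                continue
            if j + 1 < len(grid[0]) and grid[i][j + 1] is not None:
                slopes.append(abs(grid[i][j] - grid[i][j + 1]) / cell)
            if i + 1 < len(grid) and grid[i + 1][j] is not None:
                slopes.append(abs(grid[i][j] - grid[i + 1][j]) / cell)
    return sum(slopes) / len(slopes)

print(basin_area_km2(dem, CELL))            # 9 cells of 30 m -> 0.0081 km^2
print(round(mean_slope(dem, CELL), 4))      # -> 0.0833 (dimensionless gradient)
```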

  16. Sharing control between humans and automation using haptic interface: primary and secondary task performance benefits.

    PubMed

    Griffiths, Paul G; Gillespie, R Brent

    2005-01-01

This paper describes a paradigm for human/automation control sharing in which the automation acts through a motor coupled to a machine's manual control interface. The manual interface becomes a haptic display, continually informing the human about automation actions. While monitoring by feel, users may choose either to conform to the automation or override it and express their own control intentions. This paper's objective is to demonstrate that adding automation through haptic display can be used not only to improve performance on a primary task but also to reduce perceptual demands or free attention for a secondary task. Results are presented from three experiments in which 11 participants completed a lane-following task using a motorized steering wheel on a fixed-base driving simulator. The automation behaved like a copilot, assisting with lane following by applying torques to the steering wheel. Results indicate that haptic assist improves lane following by at least 30%, p < .0001, while reducing visual demand by 29%, p < .0001, or improving reaction time in a secondary tone localization task by 18 ms, p = .0009. Potential applications of this research include the design of automation interfaces based on haptics that support human/automation control sharing better than traditional push-button automation interfaces.
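The shared-control idea reduces to torques summing at the motorized wheel: the automation adds an assist torque (here a simple proportional pull toward the lane centre), and the human can always override by applying torque of their own. The gain and torque values are invented; the paper's controller is more elaborate than this sketch.

```python
# Minimal sketch of haptic shared control at a motorized steering wheel.

K_ASSIST = 2.0        # N*m per metre of lane error (hypothetical gain)

def assist_torque(lane_error_m):
    """Copilot torque pulling the wheel back toward the lane centre."""
    return -K_ASSIST * lane_error_m

def net_wheel_torque(human_torque, lane_error_m):
    """Torques sum at the wheel; the human feels the assist as haptic display."""
    return human_torque + assist_torque(lane_error_m)

# Drifting 0.5 m right with hands off: the assist alone steers back.
print(net_wheel_torque(0.0, 0.5))     # -1.0 N*m
# The human overrides (e.g. to avoid an obstacle) with +3 N*m.
print(net_wheel_torque(3.0, 0.5))     # 2.0 N*m
```

Because the assist acts through the same physical interface the human holds, overriding it is continuous rather than an on/off mode switch, which is the contrast the paper draws with push-button automation.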

  17. Automated parking garage system model

    NASA Technical Reports Server (NTRS)

    Collins, E. R., Jr.

    1975-01-01

A one-twenty-fifth scale model of the key components of an automated parking garage system is described. The design of the model required transferring a vehicle from an entry level, vertically (+Z, -Z), to a storage location at any one of four storage positions (+X, -X, +Y, -Y) on the storage levels. There are three primary subsystems: (1) a screw jack to provide the vertical motion of the elevator, (2) a cam-driven track-switching device to provide X to Y motion, and (3) a transfer cart to provide horizontal travel and a small amount of vertical motion for transfer to the storage location. Motive power is provided by dc permanent magnet gear motors, one each for the elevator and track switching device and two for the transfer cart drive system (one driving the cart horizontally and the other providing the vertical transfer). The control system, through the use of a microprocessor, provides complete automation through a feedback system which utilizes sensing devices.

  18. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, that is, with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be succinct. The report discusses the underlying concepts and the formal methods for this approach. Two examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.

  19. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Shafto, Michael; Meyer, George; Clancy, Daniel (Technical Monitor)

    2001-01-01

This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, i.e., with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be as succinct as possible. The report discusses the underlying concepts and the formal methods for this approach. Several examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.

  20. Old and New Models for Office Automation.

    ERIC Educational Resources Information Center

    Cole, Eliot

    1983-01-01

    Discusses organization design as context for office automation; mature computer-based systems as one application of organization design variables; and emerging office automation systems (organizational information management, personal information management) as another application of these variables. Management information systems models and…

  1. Designing effective human-automation-plant interfaces: a control-theoretic perspective.

    PubMed

    Jamieson, Greg A; Vicente, Kim J

    2005-01-01

    In this article, we propose the application of a control-theoretic framework to human-automation interaction. The framework consists of a set of conceptual distinctions that should be respected in automation research and design. We demonstrate how existing automation interface designs in some nuclear plants fail to recognize these distinctions. We further show the value of the approach by applying it to modes of automation. The design guidelines that have been proposed in the automation literature are evaluated from the perspective of the framework. This comparison shows that the framework reveals insights that are frequently overlooked in this literature. A new set of design guidelines is introduced that builds upon the contributions of previous research and draws complementary insights from the control-theoretic framework. The result is a coherent and systematic approach to the design of human-automation-plant interfaces that will yield more concrete design criteria and a broader set of design tools. Applications of this research include improving the effectiveness of human-automation interaction design and the relevance of human-automation interaction research.

  2. Alloy Interface Interdiffusion Modeled

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Garces, Jorge E.; Abel, Phillip B.

    2003-01-01

    With renewed interest in developing nuclear-powered deep space probes, attention will return to improving the metallurgical processing of potential nuclear fuels so that they remain dimensionally stable over the years required for a successful mission. Previous work on fuel alloys at the NASA Glenn Research Center was primarily empirical, with virtually no continuing research. Even when empirical studies are exacting, they often fail to provide enough insight to guide future research efforts. In addition, from a fundamental theoretical standpoint, the actinide metals (which include materials used for nuclear fuels) pose a severe challenge to modern electronic-structure theory. Recent advances in quantum approximate atomistic modeling, coupled with first-principles derivation of needed input parameters, can help researchers develop new alloys for nuclear propulsion.

  3. Automated selection of brain regions for real-time fMRI brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Lührs, Michael; Sorger, Bettina; Goebel, Rainer; Esposito, Fabrizio

    2017-02-01

    Objective. Brain-computer interfaces (BCIs) implemented with real-time functional magnetic resonance imaging (rt-fMRI) use fMRI time-courses from predefined regions of interest (ROIs). To reach best performances, localizer experiments and on-site expert supervision are required for ROI definition. To automate this step, we developed two unsupervised computational techniques based on the general linear model (GLM) and independent component analysis (ICA) of rt-fMRI data, and compared their performances on a communication BCI. Approach. 3 T fMRI data of six volunteers were re-analyzed in simulated real-time. During a localizer run, participants performed three mental tasks following visual cues. During two communication runs, a letter-spelling display guided the subjects to freely encode letters by performing one of the mental tasks with a specific timing. GLM- and ICA-based procedures were used to decode each letter, respectively using compact ROIs and whole-brain distributed spatio-temporal patterns of fMRI activity, automatically defined from subject-specific or group-level maps. Main results. Letter-decoding performances were comparable to supervised methods. In combination with a similarity-based criterion, GLM- and ICA-based approaches successfully decoded more than 80% (average) of the letters. Subject-specific maps yielded optimal performances. Significance. Automated solutions for ROI selection may help accelerating the translation of rt-fMRI BCIs from research to clinical applications.
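The GLM-style decoding step can be caricatured as comparing the observed ROI time-course against one expected response per mental task and letting the best match identify the encoded letter. The signals, task names, and regressors below are invented toy data; the study used full fMRI GLM/ICA pipelines on real time-courses.

```python
# Hedged sketch of regressor-matching decoding for an fMRI letter-spelling BCI.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def correlation(a, b):
    """Pearson correlation of two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    return dot(da, db) / (dot(da, da) ** 0.5 * dot(db, db) ** 0.5)

def decode(observed, regressors):
    """Return the task whose expected time-course best matches the data."""
    return max(regressors, key=lambda task: correlation(observed, regressors[task]))

regressors = {                      # expected responses for three mental tasks
    "inner_speech":  [0, 1, 1, 1, 0, 0, 0, 0],
    "mental_calc":   [0, 0, 0, 1, 1, 1, 0, 0],
    "motor_imagery": [0, 0, 0, 0, 0, 1, 1, 1],
}
observed = [0.1, 0.0, 0.2, 0.9, 1.1, 0.8, 0.1, 0.0]   # noisy measured response
print(decode(observed, regressors))                    # prints mental_calc
```

The timing of the response within the trial is what distinguishes the three tasks here, mirroring how the spelling display assigned each letter a task-and-timing combination.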

  4. Automation and Accountability in Decision Support System Interface Design

    ERIC Educational Resources Information Center

    Cummings, Mary L.

    2006-01-01

    When the human element is introduced into decision support system design, entirely new layers of social and ethical issues emerge but are not always recognized as such. This paper discusses those ethical and social impact issues specific to decision support systems and highlights areas that interface designers should consider during design with an…

  5. Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.

    2000-01-01

The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi-stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.

  6. Automated operations planning: Modeling MLRS operations

    SciTech Connect

    Cunningham, C.T.

    1992-03-05

The multiple launch rocket system (MLRS) is a highly survivable and automated complement to conventional cannon artillery. For best survivability against counter-battery fire, MLRS operations rely on rapid "shoot-and-scoot" tactics by widely dispersed launchers. Such tactics may be difficult to include in a battlefield simulation without requiring players for the individual MLRS items: launchers and resupply vehicles. To reduce this demand on player resources, a computer model has been developed to automate the behavior of the items, consistent with the published operations doctrine. A player is required to determine the area of operation and certain key locations for an MLRS firing platoon. Analysis of trafficability in the operations area and direction of the movement of the items, as they perform fire missions, resupply, and replenishment of platoon stocks, is completely automated. A finite state machine representation of the items is used. The model is currently implemented on a VAX 6310. It will be integrated with the Janus battlefield trainer.

  7. Automated operations planning: Modeling MLRS operations

    SciTech Connect

    Cunningham, C.T.

    1992-03-05

The multiple launch rocket system (MLRS) is a highly survivable and automated complement to conventional cannon artillery. For best survivability against counter-battery fire, MLRS operations rely on rapid "shoot-and-scoot" tactics by widely dispersed launchers. Such tactics may be difficult to include in a battlefield simulation without requiring players for the individual MLRS items: launchers and resupply vehicles. To reduce this demand on player resources, a computer model has been developed to automate the behavior of the items, consistent with the published operations doctrine. A player is required to determine the area of operation and certain key locations for an MLRS firing platoon. Analysis of trafficability in the operations area and direction of the movement of the items, as they perform fire missions, resupply, and replenishment of platoon stocks, is completely automated. A finite state machine representation of the items is used. The model is currently implemented on a VAX 6310. It will be integrated with the Janus battlefield trainer.
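A finite-state-machine representation of launcher behaviour, as both records above describe, can be sketched as a transition table keyed by (state, event). The states and events below are invented for illustration; the actual model encodes the published operations doctrine.

```python
# Toy finite state machine for an automated "shoot-and-scoot" launcher.

TRANSITIONS = {
    ("hide", "fire_mission"):            "move_to_firing_point",
    ("move_to_firing_point", "arrived"): "fire",
    ("fire", "mission_complete"):        "scoot",
    ("scoot", "ammo_low"):               "resupply",
    ("scoot", "ammo_ok"):                "hide",
    ("resupply", "reloaded"):            "hide",
}

def step(state, event):
    """Advance the launcher FSM; unrecognized events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "hide"
for event in ["fire_mission", "arrived", "mission_complete", "ammo_low", "reloaded"]:
    state = step(state, event)
    print(state)
# the launcher ends back in "hide" after firing, scooting and resupplying
```

Driving many such item-level machines from a shared event stream is what lets the simulation dispense with a human player per launcher and resupply vehicle.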

  8. Old and new models for office automation.

    PubMed

    Cole, E

    1983-05-01

The emerging generation of office automation systems combines new and existing software and procedures. While managers may be able to select from a broad array of software tools, they may also be required to use certain others. This article discusses organization design as the context for office automation; mature computer-based systems as one application of organization design variables; and emerging office automation systems as another application of those variables. The article concludes that Management Information System models developed for mature systems may be helpful where the use of a software application is required for the individual worker; diffusion of innovation models recently developed for computing systems may be helpful where the type of software is optional for the individual worker.

  9. Automated two-dimensional interface for capillary gas chromatography

    DOEpatents

    Strunk, Michael R.; Bechtold, William E.

    1996-02-20

    A multidimensional gas chromatograph (GC) system having wide bore capillary and narrow bore capillary GC columns in series and having a novel system interface. Heart cuts from a high flow rate sample, separated by a wide bore GC column, are collected and directed to a narrow bore GC column with carrier gas injected at a lower flow compatible with a mass spectrometer. A bimodal six-way valve is connected with the wide bore GC column outlet and a bimodal four-way valve is connected with the narrow bore GC column inlet. A trapping and retaining circuit with a cold trap is connected with the six-way valve and a transfer circuit interconnects the two valves. The six-way valve is manipulated between first and second mode positions to collect analyte, and the four-way valve is manipulated between third and fourth mode positions to allow carrier gas to sweep analyte from a deactivated cold trap, through the transfer circuit, and then to the narrow bore GC capillary column for separation and subsequent analysis by a mass spectrometer. Rotary valves have substantially the same bore width as their associated columns to minimize flow irregularities and resulting sample peak deterioration. The rotary valves are heated separately from the GC columns to avoid temperature lag and resulting sample deterioration.

  10. Automated two-dimensional interface for capillary gas chromatography

    DOEpatents

    Strunk, M.R.; Bechtold, W.E.

    1996-02-20

    A multidimensional gas chromatograph (GC) system is disclosed which has wide bore capillary and narrow bore capillary GC columns in series and has a novel system interface. Heart cuts from a high flow rate sample, separated by a wide bore GC column, are collected and directed to a narrow bore GC column with carrier gas injected at a lower flow compatible with a mass spectrometer. A bimodal six-way valve is connected with the wide bore GC column outlet and a bimodal four-way valve is connected with the narrow bore GC column inlet. A trapping and retaining circuit with a cold trap is connected with the six-way valve and a transfer circuit interconnects the two valves. The six-way valve is manipulated between first and second mode positions to collect analyte, and the four-way valve is manipulated between third and fourth mode positions to allow carrier gas to sweep analyte from a deactivated cold trap, through the transfer circuit, and then to the narrow bore GC capillary column for separation and subsequent analysis by a mass spectrometer. Rotary valves have substantially the same bore width as their associated columns to minimize flow irregularities and resulting sample peak deterioration. The rotary valves are heated separately from the GC columns to avoid temperature lag and resulting sample deterioration. 3 figs.

  11. The Automation of Nowcast Model Assessment Processes

    DTIC Science & Technology

    2016-09-01

developed at NCAR through a grant from the United States Air Force 557th Weather Wing (formerly the Air Force Weather Agency), where NCAR is sponsored...that will automate real-time WRE-N model simulations, collect and quality-control check weather observations for assimilation and verification, and...observations and performing quality-control checks for the pre-forecast data assimilation period. 2. Run the WRE-N model to generate model forecast data

  12. Design Through Manufacturing: The Solid Model - Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2003-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts which reflect every detail of the finished product. Ideally, these models should fulfill two very important functions: (1) they must provide numerical control information for automated manufacturing of precision parts, and (2) they must enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in space missions. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. The research performed during the fellowship period investigated the transition process from the solid CAD model to the FEA stress analysis model with the final goal of creating an automatic interface between the two. During the period of the fellowship a detailed multi-year program for the development of such an interface was created. The ultimate goal of this program will be the development of a fully parameterized automatic ProE/FEA translator for parts and assemblies, with the incorporation of data base management into the solution, and ultimately including computational fluid dynamics and thermal modeling in the interface.

  13. RCrane: semi-automated RNA model building.

    PubMed

    Keating, Kevin S; Pyle, Anna Marie

    2012-08-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  14. Automated modeling of RNA 3D structure.

    PubMed

    Rother, Kristian; Rother, Magdalena; Skiba, Pawel; Bujnicki, Janusz M

    2014-01-01

    This chapter gives an overview of the current methods for automated modeling of RNA structures, with emphasis on template-based methods. The currently used approaches to RNA modeling are presented with a side view on the protein world, where many similar ideas have been used. Two main programs for automated template-based modeling are presented: ModeRNA, which assembles structures from fragments, and MacroMoleculeBuilder, which performs a simulation to satisfy spatial restraints. Both approaches have in common that they require an alignment of the target sequence to a known RNA structure that is used as a modeling template. As a way to find promising template structures and to align the target and template sequences, we propose a pipeline combining the ParAlign and Infernal programs on RNA family data from Rfam. We also briefly summarize template-free methods for RNA 3D structure prediction. Typically, RNA structures generated by automated modeling methods require local or global optimization. Thus, we also discuss methods that can be used for local or global refinement of RNA structures.

  15. Automating risk analysis of software design models.

    PubMed

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.
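The identification-tree idea can be sketched in miniature: internal nodes test properties of a data-flow element, and leaves name candidate threats. The tree contents, element fields, and threat names below are hypothetical illustrations, not AutSEC's actual rule base:

```python
# Minimal sketch of an "identification tree": internal nodes test a
# property of a data-flow-diagram element; leaves name a candidate threat.
# All fields and threat names here are invented for illustration.

def identify_threats(element, node):
    """Walk the tree, returning the threats whose conditions hold."""
    if "threat" in node:                       # leaf: a candidate threat
        return [node["threat"]]
    branch = node["yes"] if node["test"](element) else node["no"]
    return identify_threats(element, branch)

tree = {
    "test": lambda e: e["crosses_trust_boundary"],
    "yes": {
        "test": lambda e: not e["encrypted"],
        "yes": {"threat": "information disclosure on the wire"},
        "no":  {"threat": "tampering via boundary-crossing flow"},
    },
    "no": {"threat": "none"},
}

flow = {"crosses_trust_boundary": True, "encrypted": False}
threats = identify_threats(flow, tree)
```

A mitigation tree could be walked the same way, with leaves advising countermeasures weighted by cost.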

  16. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.
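The modeling idea can be illustrated with a toy two-parent Bayesian network evaluated by enumeration; the causal factors and all probabilities below are invented for illustration and are not FLAP's SME-elicited values:

```python
# Toy Bayesian network in the spirit of FLAP (all numbers hypothetical):
# P(automation-related anomaly) depends on two latent causal factors.

p_compl = 0.2                     # P(pilot complacency)
p_skill = 0.3                     # P(manual-skill degradation)
# CPT: P(anomaly | complacency, skill degradation)
cpt = {(True, True): 0.40, (True, False): 0.15,
       (False, True): 0.10, (False, False): 0.02}

def p_anomaly(p_c, p_s):
    """Marginal anomaly probability by enumerating the parent states."""
    total = 0.0
    for c in (True, False):
        for s in (True, False):
            pc = p_c if c else 1 - p_c
            ps = p_s if s else 1 - p_s
            total += pc * ps * cpt[(c, s)]
    return total

baseline = p_anomaly(p_compl, p_skill)
# "Inserting" a training technology modeled as halving skill degradation:
mitigated = p_anomaly(p_compl, p_skill / 2)
```

In the full model, a technology insertion is scored by the drop from the baseline risk, here from about 0.080 to 0.063.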

  17. Model study of protein unfolding by interfaces

    NASA Astrophysics Data System (ADS)

    Chakarova, S. D.; Carlsson, A. E.

    2004-02-01

    We study interface-induced protein unfolding on hydrophobic and polar interfaces by means of a two-dimensional lattice model and an exhaustive enumeration ground-state structure search, for a set of model proteins of length 20 residues. We compare the effects of the two types of interfaces, and search for criteria that influence the retention of a protein’s native-state structure upon adsorption. We find that the unfolding proceeds by a large, sudden loss of native contacts. The unfolding at polar interfaces exhibits similar behavior to that at hydrophobic interfaces but with a much weaker interface coupling strength. Further, we find that the resistance of proteins to unfolding in our model is positively correlated with the magnitude of the folding energy in the native-state structure, the thermal stability (or energy gap) for that structure, and the interface energy for native-state adsorption. We find these factors to be of roughly equal importance.
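The exhaustive ground-state structure search can be sketched with a standard HP-style lattice model. The sketch below uses a shorter hypothetical sequence than the paper's 20 residues and omits the interface term, so it only illustrates the enumeration step:

```python
# Exhaustive ground-state search for a short HP-model chain on a 2D square
# lattice. Sequence, length, and energy function are illustrative only.

def enumerate_walks(n):
    """All self-avoiding walks of n sites from the origin.
    Symmetry is reduced by forcing the first step to +x."""
    walks = []
    def extend(walk, occupied):
        if len(walk) == n:
            walks.append(list(walk))
            return
        x, y = walk[-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt not in occupied:
                occupied.add(nxt)
                walk.append(nxt)
                extend(walk, occupied)
                walk.pop()
                occupied.remove(nxt)
    extend([(0, 0), (1, 0)], {(0, 0), (1, 0)})
    return walks

def energy(seq, walk):
    """HP energy: -1 for each non-bonded H-H lattice contact."""
    pos = {p: i for i, p in enumerate(walk)}
    e = 0
    for (x, y), i in pos.items():
        for nb in ((x + 1, y), (x, y + 1)):   # count each contact once
            j = pos.get(nb)
            if j is not None and abs(i - j) > 1 and seq[i] == 'H' == seq[j]:
                e -= 1
    return e

seq = "HHPPHHPPHH"                 # hypothetical 10-residue sequence
walks = enumerate_walks(len(seq))
ground = min(energy(seq, w) for w in walks)
```

Adding an interface simply adds a per-residue adsorption term to `energy`, after which the same enumeration yields the adsorbed ground state.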

  18. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Most space mission planning and scheduling systems use a timeline-based representation for operations modeling. Most model a core set of state and resource types, and most provide similar capabilities on this modeling to enable (semi-)automated schedule generation. In this paper we explore the commonality of representation and services for these timelines. These commonalities offer the potential to be harmonized to enable interoperability and re-use.

  19. Automating the Modeling of the SEE Cross Section's Angular Dependence

    NASA Technical Reports Server (NTRS)

    Patterson, J. D.; Edmonds, L. D.

    2003-01-01

    An algorithm that automates the application of the alpha law in any SEE analysis is presented. This automation is essential for the widespread acceptance of the sophisticated cross section angular dependence model.

  20. Automation model of sewerage rehabilitation planning.

    PubMed

    Yang, M D; Su, T C

    2006-01-01

    The major steps of sewerage rehabilitation include inspection of sewerage, assessment of structural conditions, computation of structural condition grades, and determination of rehabilitation methods and materials. Conventionally, sewerage rehabilitation planning relies on experts with professional backgrounds and is tedious and time-consuming. This paper proposes an automation model that plans optimal sewerage rehabilitation strategies for the sewer system by integrating image processing, clustering technology, optimization, and visualization display. Firstly, image processing techniques, such as wavelet transformation and co-occurrence feature extraction, were employed to extract various characteristics of structural failures from CCTV inspection images. Secondly, a classification neural network was established to automatically interpret the structural conditions by comparing the extracted features with the typical failures in a databank. Then, to achieve optimal rehabilitation efficiency, a genetic algorithm was used to determine appropriate rehabilitation methods and substitution materials for the pipe sections at risk of malfunction and even collapse. Finally, the result from the automation model can be visualized in a geographic information system in which essential information on the sewer system and sewerage rehabilitation plans is graphically displayed. For demonstration, the automation model of optimal sewerage rehabilitation planning was applied to a sewer system in east Taichung, Chinese Taiwan.
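The genetic-algorithm step can be sketched as follows; the condition grades, method costs, repair capabilities, penalty, and GA settings are all hypothetical, not the paper's:

```python
import random
random.seed(0)

# Toy setting (all numbers hypothetical): each pipe section has a
# structural condition grade 0-3; each rehabilitation method has a cost
# and the worst grade it can repair. A chromosome assigns one method per
# section; unrepaired failures incur a heavy penalty.
grades  = [0, 3, 1, 2, 3, 0, 2, 1]           # per-section condition grades
methods = [(0, 0), (1, 1), (3, 2), (8, 3)]   # (cost, max repairable grade)

def fitness(chrom):
    cost = 0
    for g, m in zip(grades, chrom):
        c, cap = methods[m]
        cost += c
        if cap < g:                # section left at risk: penalize
            cost += 100
    return cost

def evolve(pop_size=60, gens=100):
    pop = [[random.randrange(len(methods)) for _ in grades]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                # elitist selection
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(grades))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                # point mutation
                child[random.randrange(len(grades))] = \
                    random.randrange(len(methods))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

Here the cheapest penalty-free plan costs 24; the GA searches the 4^8 assignment space for it.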

  1. A fully automated liquid–liquid extraction system utilizing interface detection

    PubMed Central

    Maslana, Eugene; Schmitt, Robert; Pan, Jeffrey

    2000-01-01

    The development of the Abbott Liquid-Liquid Extraction Station was a result of the need for an automated system to perform aqueous extraction on large sets of newly synthesized organic compounds used for drug discovery. The system utilizes a cylindrical laboratory robot to shuttle sample vials between two loading racks, two identical extraction stations, and a centrifuge. Extraction is performed by detecting the phase interface (by difference in refractive index) of the moving column of fluid drawn from the bottom of each vial containing a biphasic mixture. The integration of interface detection with fluid extraction maximizes sample throughput. Abbott-developed electronics process the detector signals. Sample mixing is performed by high-speed solvent injection. Centrifuging of the samples reduces interface emulsions. Operating software permits the user to program wash protocols with any one of six solvents per wash cycle with as many cycle repeats as necessary. Station capacity is eighty 15 ml vials. This system has proven successful with a broad spectrum of both ethyl acetate and methylene chloride based chemistries. The development and characterization of this automated extraction system will be presented. PMID:18924693
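The core of the interface-detection logic, finding where the detector output steps between the two phase baselines in the aspirated column, can be sketched as a smoothed threshold crossing; the voltages, window size, and function name are invented for illustration:

```python
# Sketch: the refractive-index detector output steps from one phase
# baseline to the other at the boundary; smooth the trace with a moving
# average and report the first crossing of the midpoint threshold.
# All signal values are hypothetical.

def find_interface(trace, low, high, window=5):
    threshold = (low + high) / 2
    smoothed = []
    for i in range(len(trace)):
        chunk = trace[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    for i, v in enumerate(smoothed):
        if v >= threshold:
            return i          # sample index where flow should be diverted
    return None

# organic phase baseline ~0.2 V, aqueous phase baseline ~0.8 V
trace = [0.2, 0.2, 0.2, 0.2, 0.2, 0.6, 0.8, 0.8, 0.8, 0.8]
cut = find_interface(trace, low=0.2, high=0.8)
```

The smoothing window trades detection latency against immunity to emulsion noise, which is why the system also centrifuges samples first.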

  2. A Method for Automated Detection of Usability Problems from Client User Interface Events

    PubMed Central

    Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.

    2005-01-01

    Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and automatically computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121
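The flavor of the detection algorithm, grouping interface events into subgoals and flagging those whose duration exceeds an expert time limit scaled by the think-aloud correction factor, can be sketched as follows; subgoal names, limits, and the factor value are hypothetical:

```python
# Sketch of an event-log criterion: a subgoal is flagged as a usability
# problem when its observed duration exceeds an expert time limit scaled
# by a think-aloud correction factor. Numbers are illustrative only.

CORRECTION = 1.4   # assume think-aloud users work ~40% slower

def flag_problems(events, limits):
    """events: (timestamp_s, subgoal) pairs; limits: subgoal -> seconds."""
    spans = {}
    for t, goal in events:
        first, last = spans.get(goal, (t, t))
        spans[goal] = (min(first, t), max(last, t))
    return [g for g, (a, b) in spans.items()
            if b - a > limits[g] * CORRECTION]

events = [(0.0, "locate-lesion"), (3.0, "locate-lesion"),
          (3.5, "classify"), (30.0, "classify"), (31.0, "report")]
limits = {"locate-lesion": 5.0, "classify": 10.0, "report": 8.0}
problems = flag_problems(events, limits)
```
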

  3. Automated system for measuring the surface dilational modulus of liquid-air interfaces

    NASA Astrophysics Data System (ADS)

    Stadler, Dominik; Hofmann, Matthias J.; Motschmann, Hubert; Shamonin, Mikhail

    2016-06-01

    The surface dilational modulus is a crucial parameter for describing the rheological properties of aqueous surfactant solutions. These properties are important for many technological processes. The present paper describes a fully automated instrument based on the oscillating bubble technique. It works in the frequency range from 1 Hz to 500 Hz, where surfactant exchange dynamics governs the relaxation process. The originality of the instrument design is the consistent combination of modern measurement technologies with advanced imaging and signal processing algorithms. Key steps on the way to reliable and precise measurements are the excitation of harmonic oscillation of the bubble, phase-sensitive evaluation of the pressure response, adjustment and maintenance of the bubble shape to half-sphere geometry for compensation of thermal drifts, contour tracing of the bubble's video images, removal of noise and artefacts within the images for improving the reliability of the measurement, and, in particular, a complex trigger scheme for the measurement of the oscillation amplitude, which may vary with frequency as a result of resonances. The corresponding automation and programming tasks are described in detail. Various programming strategies, such as the use of MATLAB® software and native C++ code, are discussed. An advance in the measurement technique is demonstrated by a fully automated measurement. The instrument has the potential to mature into a standard technique in the fields of colloid and interface chemistry, and it provides a significant extension of the frequency range relative to established competing techniques and state-of-the-art devices based on the same measurement principle.
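The phase-sensitive evaluation of the pressure response amounts to lock-in detection: multiply the sampled signal by sine and cosine references at the drive frequency and average over an integer number of periods. The sketch below uses invented parameters, not the instrument's:

```python
import math

# Lock-in style phase-sensitive detection: the in-phase (i) and
# quadrature (q) averages recover the amplitude and phase of the response
# at the drive frequency. Frequencies and amplitudes are illustrative.

def lockin(signal, f, fs):
    n = len(signal)
    i = 2.0 / n * sum(s * math.sin(2 * math.pi * f * k / fs)
                      for k, s in enumerate(signal))
    q = 2.0 / n * sum(s * math.cos(2 * math.pi * f * k / fs)
                      for k, s in enumerate(signal))
    return math.hypot(i, q), math.atan2(q, i)   # amplitude, phase

fs, f, amp_true, phase_true = 5000.0, 50.0, 2.0, 0.7
signal = [amp_true * math.sin(2 * math.pi * f * k / fs + phase_true)
          for k in range(500)]                   # exactly 5 drive periods
amp, phase = lockin(signal, f, fs)
```

Averaging over whole periods is what makes the estimate exact for a clean harmonic; with noise, longer records simply average it away.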

  4. A Diffuse Interface Model with Immiscibility Preservation

    PubMed Central

    Tiwari, Arpit; Freund, Jonathan B.; Pantano, Carlos

    2013-01-01

    A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent to a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion, leading to qualitatively incorrect results. PMID:24058207

  5. A diffuse interface model with immiscibility preservation

    SciTech Connect

    Tiwari, Arpit; Freund, Jonathan B.; Pantano, Carlos

    2013-11-01

    A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical-bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent to a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion, leading to qualitatively incorrect results.

  6. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Numerous automated and semi-automated planning & scheduling systems have been developed for space applications. Most of these systems are model-based in that they encode domain knowledge necessary to predict spacecraft state and resources based on initial conditions and a proposed activity plan. The spacecraft state and resources are often modeled as a series of timelines, with a timeline or set of timelines representing a state or resource key to the operations of the spacecraft. In this paper, we first describe a basic timeline representation that can represent a set of state, resource, timing, and transition constraints. We describe a number of planning and scheduling systems designed for space applications (and in many cases deployed for use on ongoing missions) and describe how they do and do not map onto this timeline model.
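A minimal version of such a resource timeline, with the kind of capacity-consistency service these schedulers layer on top, might look like the sketch below (the class structure and numbers are hypothetical, not any particular mission system's):

```python
# Minimal resource-timeline sketch: timed usages of a capacity-limited
# resource, with a peak-usage consistency check via an event sweep.
# Class layout and values are illustrative assumptions.

class ResourceTimeline:
    def __init__(self, capacity):
        self.capacity = capacity
        self.usages = []                         # (start, end, amount)

    def add(self, start, end, amount):
        self.usages.append((start, end, amount))

    def peak(self):
        """Maximum concurrent usage, via a +amount/-amount event sweep."""
        events = []
        for s, e, a in self.usages:
            events += [(s, a), (e, -a)]
        events.sort(key=lambda ev: (ev[0], ev[1]))   # releases sort first at ties
        level = best = 0
        for _, delta in events:
            level += delta
            best = max(best, level)
        return best

    def consistent(self):
        return self.peak() <= self.capacity

power = ResourceTimeline(capacity=100)   # e.g. a 100 W power bus
power.add(0, 10, 60)                     # downlink pass
power.add(5, 8, 30)                      # instrument warm-up
```

State timelines work analogously, with transition constraints checked along the sorted sequence of intervals instead of a numeric capacity.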

  7. Modeling Hydraulic Components for Automated FMEA of a Braking System

    DTIC Science & Technology

    2014-12-23

    has to be based on a library of generic, context-independent component models. The systems that offer support to the automated generation of fault ...Modeling Hydraulic Components for Automated FMEA of a Braking System Peter Struss, Alessandro Fraracci Tech. Univ. of Munich, 85748 Garching...the hydraulic part of a vehicle braking system . We describe the FMEA task and the application problem and outline the foundations for automating the

  8. Automation life-cycle cost model

    NASA Technical Reports Server (NTRS)

    Gathmann, Thomas P.; Reeves, Arlinda J.; Cline, Rick; Henrion, Max; Ruokangas, Corinne

    1992-01-01

    The problem domain being addressed by this contractual effort can be summarized by the following list: Automation and Robotics (A&R) technologies appear to be viable alternatives to current, manual operations; life-cycle cost models are typically judged with suspicion due to implicit assumptions and little associated documentation; and uncertainty is a reality for increasingly complex problems, yet few models explicitly account for its effect on the solution space. The objectives for this effort range from the near-term (1-2 years) to far-term (3-5 years). In the near-term, the envisioned capabilities of the modeling tool are annotated. In addition, a framework is defined and developed in the Decision Modelling System (DEMOS) environment. Our approach is summarized as follows: assess desirable capabilities (structured into near- and far-term); identify useful existing models/data; identify parameters for utility analysis; define the tool framework; encode a scenario thread for model validation; and provide a transition path for tool development. This report contains all relevant, technical progress made on this contractual effort.

  9. Systems Engineering Interfaces: A Model Based Approach

    NASA Technical Reports Server (NTRS)

    Fosse, Elyse; Delp, Christopher

    2013-01-01

    Currently: Ops Rev develops and maintains a framework that includes interface-specific language, patterns, and Viewpoints. Ops Rev implements the framework to design MOS 2.0 and its 5 Mission Services. The implementation de-couples interfaces and instances of interaction. Future: A Mission MOSE implements the approach and uses the model-based artifacts for reviews. The framework extends further into the ground data layers and provides a unified methodology.

  10. Geometric Modeling Application Interface Program

    DTIC Science & Technology

    1990-11-01

    Manual IDEF-Extended ( IDEFIX ) Integrated Information Support System (IISS), ICAM Project 6201, Contract F33615-80-C-5155, December 1985. Interim...Differential Geometry of Curves and Surfaces, M. P. de Carmo, Prentice-Hall, Inc., 1976. IDEFIX Readers Reference, D. Appleton Company, December 1985...Modeling. IDEFI -- IDEF Information Modeling. IDEFIX -- IDEF Extended Information Modeling. IDEF2 -- IDEF Dynamics Modeling. IDSS -- Integrated Decision

  11. Bayesian Safety Risk Modeling of Human-Flightdeck Automation Interaction

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2015-01-01

    Usage of automatic systems in airliners has increased fuel efficiency, added extra capabilities, and enhanced safety and reliability, as well as provided improved passenger comfort, since their introduction in the late 1980s. However, original automation benefits, including reduced flight crew workload, human errors, or training requirements, were not achieved as originally expected. Instead, automation introduced new failure modes, redistributed and sometimes increased workload, brought in new cognitive and attention demands, and increased training requirements. Modern airliners have numerous flight modes, providing more flexibility (and inherently more complexity) to the flight crew. However, the price to pay for the increased flexibility is the need for increased mode awareness, as well as the need to supervise, understand, and predict automated system behavior. Also, over-reliance on automation is linked to manual flight skill degradation and complacency in commercial pilots. As a result, recent accidents involving human errors are often caused by the interactions between humans and the automated systems (e.g., the breakdown in man-machine coordination), deteriorated manual flying skills, and/or loss of situational awareness due to heavy dependence on automated systems. This paper describes the development of the increased complexity and reliance on automation baseline model, named FLAP for FLightdeck Automation Problems. The model development process starts with a comprehensive literature review followed by the construction of a framework comprised of high-level causal factors leading to an automation-related flight anomaly. The framework was then converted into a Bayesian Belief Network (BBN) using the Hugin Software v7.8. The effects of automation on flight crew are incorporated into the model, including flight skill degradation, increased cognitive demand and training requirements along with their interactions. Besides flight crew deficiencies, automation system

  12. An interface tracking model for droplet electrocoalescence.

    SciTech Connect

    Erickson, Lindsay Crowl

    2013-09-01

    This report describes an Early Career Laboratory Directed Research and Development (LDRD) project to develop an interface tracking model for droplet electrocoalescence. Many fluid-based technologies rely on electrical fields to control the motion of droplets, e.g. microfluidic devices for high-speed droplet sorting, solution separation for chemical detectors, and purification of biodiesel fuel. Precise control over droplets is crucial to these applications. However, electric fields can induce complex and unpredictable fluid dynamics. Recent experiments (Ristenpart et al. 2009) have demonstrated that oppositely charged droplets bounce rather than coalesce in the presence of strong electric fields. A transient aqueous bridge forms between approaching drops prior to pinch-off. This observation applies to many types of fluids, but neither theory nor experiments have been able to offer a satisfactory explanation. Analytic hydrodynamic approximations for interfaces become invalid near coalescence, and therefore detailed numerical simulations are necessary. This is a computationally challenging problem that involves tracking a moving interface and solving complex multi-physics and multi-scale dynamics, which are beyond the capabilities of most state-of-the-art simulations. An interface-tracking model for electro-coalescence can provide a new perspective to a variety of applications in which interfacial physics are coupled with electrodynamics, including electro-osmosis, fabrication of microelectronics, fuel atomization, oil dehydration, nuclear waste reprocessing and solution separation for chemical detectors. We present a conformal decomposition finite element (CDFEM) interface-tracking method for the electrohydrodynamics of two-phase flow to demonstrate electro-coalescence. CDFEM is a sharp interface method that decomposes elements along fluid-fluid boundaries and uses a level set function to represent the interface.

  13. Interfacing a robotic station with a gas chromatograph for the full automation of the determination of organochlorine pesticides in vegetables

    SciTech Connect

    Torres, P.; Luque de Castro, M.D.

    1996-12-31

    A fully automated method for the determination of organochlorine pesticides in vegetables is proposed. The overall system acts as an "analytical black box" because a robotic station performs the preliminary operations, from weighing to capping of the leached analytes and their location in the autosampler of an automated gas chromatograph with electron capture detection. The method has been applied to the determination of lindane, heptachlor, captan, chlordane, and methoxychlor in tea, marjoram, cinnamon, pennyroyal, and mint with good results in most cases. A gas chromatograph has been interfaced to a robotic station for the determination of pesticides in vegetables. 15 refs., 4 figs., 2 tabs.

  14. Computer modelling of metal - oxide interfaces

    NASA Astrophysics Data System (ADS)

    Purton, J.; Parker, S. C.; Bullett, D. W.

    1997-07-01

    We have used atomistic simulations to model oxide - metal interfaces. We have, for the first time, allowed the atoms on both sides of the interface to relax. The efficiency of the computational method means that calculations can be performed on complex interfaces containing several thousand atoms and do not require an arbitrary definition of the image plane to model the electrostatics across the dielectric discontinuity. We demonstrate the viability of the approach and the effect of relaxation on a range of MgO - Ag interfaces. Defective and faceted interfaces, as well as the ideal case, have been studied. The latter was chosen for comparison with previous theoretical calculations and experimental results. The wetting angle and work of adhesion for MgO{100} - Ag{100} are in reasonable agreement with experiment. As with ab initio electronic structure calculations, the silver atoms have been shown to favour the position above the oxygen site.

  15. A Web Interface for Eco System Modeling

    NASA Astrophysics Data System (ADS)

    McHenry, K.; Kooper, R.; Serbin, S. P.; LeBauer, D. S.; Desai, A. R.; Dietze, M. C.

    2012-12-01

    We have developed the Predictive Ecosystem Analyzer (PEcAn) as an open-source scientific workflow system and ecoinformatics toolbox that manages the flow of information in and out of regional-scale terrestrial biosphere models, facilitates heterogeneous data assimilation, tracks data provenance, and enables more effective feedback between models and field research. The over-arching goal of PEcAn is to make otherwise complex analyses transparent, repeatable, and accessible to a diverse array of researchers, allowing both novice and expert users to focus on using the models to examine complex ecosystems rather than having to deal with complex computer system setup and configuration questions in order to run the models. Through the developed web interface we hide much of the data and model details and allow the user to simply select locations, ecosystem models, and desired data sources as inputs to the model. Novice users are guided by the web interface through setting up a model execution and plotting the results. At the same time expert users are given enough freedom to modify specific parameters before the model gets executed. This will become more important as more models are added to the PEcAn workflow and as more data become available once NEON comes online. On the backend we support the execution of potentially computationally expensive models on different High Performance Computers (HPC) and/or clusters. The system can be configured with a single XML file that gives it the flexibility needed for configuring and running the different models on different systems using a combination of information stored in a database as well as pointers to files on the hard disk. While the web interface usually creates this configuration file, expert users can still directly edit it to fine-tune the configuration. Once a workflow is finished, the web interface will allow for the easy creation of plots over result data while also allowing the user to

  16. Empirical Movement Models for Brain Computer Interfaces.

    PubMed

    Matlack, Charles; Chizeck, Howard; Moritz, Chet T

    2016-06-30

    For brain-computer interfaces (BCIs) that provide the user continuous position control, there is little standardization of performance metrics or evaluative tasks. One candidate metric is Fitts's law, which has been used to describe aimed movements across a range of computer interfaces and has recently been applied to BCI tasks. Reviewing selected studies, we identify two basic problems with Fitts's law: its predictive performance is fragile, and the estimation of 'information transfer rate' from the model is unsupported. Our main contribution is the adaptation and validation of an alternative model to Fitts's law in the BCI context. We show that the Shannon-Welford model outperforms Fitts's law, showing robust predictive power when target distance and width have disproportionate effects on difficulty. Building on a prior study of the Shannon-Welford model, we show that identified model parameters offer a novel approach to quantitatively assess the role of control-display gain in speed/accuracy performance tradeoffs during brain control.
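The comparison can be sketched numerically. Fitts's law, MT = a + b log2(D/W + 1), is the constrained case of the Shannon-Welford form MT = a + b1 log2(D + W) - b2 log2(W) with b1 = b2, so synthetic data in which distance and width carry different weights is fit exactly by the latter but not the former (all coefficients and conditions below are invented):

```python
import math

# Fit Fitts's law (2 parameters) and Shannon-Welford (3 parameters) to
# synthetic movement times generated from a Shannon-Welford ground truth
# whose distance and width coefficients differ. Numbers are illustrative.

def lstsq(X, y):
    """Ordinary least squares for small problems via normal equations."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # Gaussian elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

def rss(X, y, beta):
    return sum((yi - sum(w * x for w, x in zip(beta, r))) ** 2
               for r, yi in zip(X, y))

conds = [(D, W) for D in (64, 128, 256, 512) for W in (8, 16, 32, 64)]
mt = [0.3 + 0.2 * math.log2(D + W) - 0.05 * math.log2(W) for D, W in conds]

X_fitts = [[1.0, math.log2(D / W + 1)] for D, W in conds]
X_sw = [[1.0, math.log2(D + W), math.log2(W)] for D, W in conds]

rss_fitts = rss(X_fitts, mt, lstsq(X_fitts, mt))
rss_sw = rss(X_sw, mt, lstsq(X_sw, mt))
```

Because log2(D/W + 1) = log2(D + W) - log2(W), the residual gap directly measures how disproportionately D and W affect difficulty.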

  17. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE PAGES

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat; ...

    2016-02-26

    A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.
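The storm-condition evaluation pairs Monte Carlo outage sampling with restoration modeling; a toy version with invented failure and switching parameters (not the study's data) conveys the structure:

```python
import random
random.seed(1)

# Toy Monte Carlo storm study: per storm, each feeder fails independently;
# restoration takes a manual crew time unless automated reconfiguration
# can back-feed the load quickly. All probabilities/times are hypothetical.

def simulate(automated, trials=2000):
    total = 0.0
    for _ in range(trials):
        for _ in range(14):                # 14 feeders, as in the study
            if random.random() < 0.1:      # feeder failure this storm
                t = random.uniform(2, 8)   # manual repair hours
                if automated and random.random() < 0.7:
                    t = 0.1                # automated switchover instead
                total += t
    return total / trials                  # mean outage hours per storm

outage_auto = simulate(automated=True)
outage_manual = simulate(automated=False)
```

A real study would drive this with the circuit model itself, so the same data used for design also determines which loads can actually be back-fed.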

  18. Model-centric distribution automation: Capacity, reliability, and efficiency

    SciTech Connect

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat; Cheng, Danling; Broadwater, Robert P.; Scirbona, Charlie; Cocks, George; Hamilton, Stephanie; Wang, Xiaoyu

    2016-02-26

    A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.
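The storm-condition evaluation described in the two records above couples Monte Carlo fault draws with reconfiguration-for-restoration. A minimal toy sketch of that coupling, with a deliberately trivial restoration step (tie switches can pick up at most `tie_capacity` failed feeders), is shown below; all parameters are hypothetical, not taken from the paper.

```python
import random

def expected_unserved_customers(n_feeders, fail_prob, customers_per_feeder,
                                tie_capacity, trials, seed=0):
    """Monte Carlo estimate of expected unserved customers per storm.

    Each trial draws independent feeder failures, then a simplified
    reconfiguration-for-restoration step restores up to tie_capacity
    failed feeders through tie switches; the rest stay unserved.
    """
    rng = random.Random(seed)  # seeded for reproducible estimates
    total_unserved = 0
    for _ in range(trials):
        failed = sum(1 for _ in range(n_feeders) if rng.random() < fail_prob)
        restored = min(failed, tie_capacity)
        total_unserved += (failed - restored) * customers_per_feeder
    return total_unserved / trials
```

A real study would replace the `min(failed, tie_capacity)` step with a full reconfiguration calculation against the circuit model, but the surrounding Monte Carlo loop has the same shape.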

  19. Automating a human factors evaluation of graphical user interfaces for NASA applications: An update on CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.

    1993-01-01

    Capturing human factors knowledge about the design of graphical user interfaces (GUI's) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.
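The abstract mentions a method for converting quantitative RGB primaries into qualitative color representations. CHIMES's actual method is not given, but a common way to do this is to move to HSV space and threshold hue, saturation, and value; the sketch below uses illustrative thresholds of my own choosing.

```python
import colorsys

def rgb_to_qualitative(r, g, b):
    """Map 0-255 RGB primaries to a coarse qualitative color name.

    Conversion goes through HSV: low value reads as black, low
    saturation as white or gray, and otherwise the hue angle is
    binned into named color regions (thresholds are illustrative).
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if v < 0.2:
        return "black"
    if s < 0.15:
        return "white" if v > 0.85 else "gray"
    hue = h * 360.0
    for limit, name in ((15, "red"), (45, "orange"), (70, "yellow"),
                        (160, "green"), (250, "blue"), (290, "purple"),
                        (330, "magenta"), (360, "red")):
        if hue < limit:
            return name
    return "red"
```

A qualitative representation like this is what lets a guideline checker reason about rules such as "avoid red text on a blue background" without exact RGB matches.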

  20. Modeling, Instrumentation, Automation, and Optimization of Water Resource Recovery Facilities.

    PubMed

    Sweeney, Michael W; Kabouris, John C

    2016-10-01

    A review of the literature published in 2015 on topics relating to water resource recovery facilities (WRRF) in the areas of modeling, automation, measurement and sensors, and optimization of wastewater treatment (or water resource reclamation) is presented.

  1. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    PubMed

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
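Keeping the user's gaze point at the center of the video monitor, as described above, is at heart a servo problem. A minimal proportional-control sketch (with a deadzone so normal fixations near the center do not jitter the camera) is given below; gains and the deadzone size are hypothetical, and the real Aesop interface is not shown.

```python
def camera_command(gaze_x, gaze_y, width, height, gain=0.01, deadzone=0.1):
    """Pan/tilt command that drives the gaze point toward screen center.

    Offsets are normalized to [-1, 1]; offsets smaller than the
    deadzone are ignored so the camera holds still during fixation.
    """
    ex = (gaze_x - width / 2.0) / (width / 2.0)
    ey = (gaze_y - height / 2.0) / (height / 2.0)
    pan = gain * ex if abs(ex) > deadzone else 0.0
    tilt = gain * ey if abs(ey) > deadzone else 0.0
    return pan, tilt
```

Each video frame, the eye tracker's gaze estimate would be fed through a function like this and the resulting command sent to the robot's motion controller.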

  2. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

    Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment, if not designed for it. An example includes implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve forcing inputs, parameters, and state variables needed by the model and to provide state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface can be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80-90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming
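The toolkit described above fills FORTRAN 90 code templates from a model specification. As a minimal analogue of that idea (in Python rather than the paper's Excel/VBA, and with hypothetical model and variable names), the sketch below renders a wrapper subroutine from a list of (variable, intent) pairs.

```python
# Template with placeholders filled from the model specification.
WRAPPER_TEMPLATE = """subroutine {name}_wrapper(n, {args})
  ! auto-generated glue between the framework and the {name} model
  implicit none
  integer, intent(in) :: n
{decls}
  call {name}({args})
end subroutine {name}_wrapper
"""

def generate_wrapper(model_name, variables):
    """Emit a FORTRAN 90 wrapper from a simple specification.

    variables is a list of (name, intent) pairs, e.g. ("swdown", "in");
    each becomes a declared array argument passed through to the model.
    """
    args = ", ".join(name for name, _ in variables)
    decls = "\n".join(f"  real, intent({intent}) :: {name}(n)"
                      for name, intent in variables)
    return WRAPPER_TEMPLATE.format(name=model_name, args=args, decls=decls)
```

Because every wrapper has the same shape, generating it mechanically from the specification is what removes the bulk of the hand-coding described in the abstract.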

  3. Modeling strategic behavior in human-automation interaction: why an "aid" can (and should) go unused.

    PubMed

    Kirlik, A

    1993-06-01

    Task-offload aids (e.g., an autopilot, an "intelligent" assistant) can be selectively engaged by the human operator to dynamically delegate tasks to automation. Introducing such aids eliminates some task demands but creates new ones associated with programming, engaging, and disengaging the aiding device via an interface. The burdens associated with managing automation can sometimes outweigh the potential benefits of automation to improved system performance. Aid design parameters and features of the overall multitask context combine to determine whether or not a task-offload aid will effectively support the operator. A modeling and sensitivity analysis approach is presented that identifies effective strategies for human-automation interaction as a function of three task-context parameters and three aid design parameters. The analysis and modeling approaches provide resources for predicting how a well-adapted operator will use a given task-offload aid, and for specifying aid design features that ensure that automation will provide effective operator support in a multitask environment.

  4. Modeling strategic behavior in human-automation interaction - Why an 'aid' can (and should) go unused

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex

    1993-01-01

    Task-offload aids (e.g., an autopilot, an 'intelligent' assistant) can be selectively engaged by the human operator to dynamically delegate tasks to automation. Introducing such aids eliminates some task demands but creates new ones associated with programming, engaging, and disengaging the aiding device via an interface. The burdens associated with managing automation can sometimes outweigh the potential benefits of automation to improved system performance. Aid design parameters and features of the overall multitask context combine to determine whether or not a task-offload aid will effectively support the operator. A modeling and sensitivity analysis approach is presented that identifies effective strategies for human-automation interaction as a function of three task-context parameters and three aid design parameters. The analysis and modeling approaches provide resources for predicting how a well-adapted operator will use a given task-offload aid, and for specifying aid design features that ensure that automation will provide effective operator support in a multitask environment.
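The core tradeoff in the two records above is that the time an aid saves must be weighed against the programming, engaging, and disengaging burdens it introduces. The papers' six-parameter sensitivity analysis is not reproduced here; this is only the simplest possible decision sketch of why a well-adapted operator can rationally leave an aid unused.

```python
def engage_aid(t_manual, t_delegated, t_program, t_engage, t_disengage):
    """Engage the task-offload aid only if the delegated completion time
    plus the interface-management overhead beats doing the task manually."""
    overhead = t_program + t_engage + t_disengage
    return t_delegated + overhead < t_manual

def break_even_overhead(t_manual, t_delegated):
    """Largest total management overhead at which the aid still pays off."""
    return t_manual - t_delegated
```

When the overhead exceeds the break-even value, the rational strategy is to leave the "aid" idle, which is exactly the behavior the title refers to.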

  5. Automation Marketplace 2010: New Models, Core Systems

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2010-01-01

    In a year when a difficult economy presented fewer opportunities for immediate gains, the major industry players have defined their business strategies with fundamentally different concepts of library automation. This is no longer an industry where companies compete on the basis of the best or the most features in similar products but one where…

  6. Development and Design of a User Interface for a Computer Automated Heating, Ventilation, and Air Conditioning System

    SciTech Connect

    Anderson, B.; /Fermilab

    1999-10-08

    A user interface is created to monitor and operate the heating, ventilation, and air conditioning system. The interface is networked to the system's programmable logic controller. The controller maintains automated control of the system. The user through the interface is able to see the status of the system and override or adjust the automatic control features. The interface is programmed to show digital readouts of system equipment as well as visual cues of system operational statuses. It also provides information for system design and component interaction. The interface is made easier to read by simple designs, color coordination, and graphics. Fermi National Accelerator Laboratory (Fermilab) conducts high energy particle physics research. Part of this research involves collision experiments with protons, and anti-protons. These interactions are contained within one of two massive detectors along Fermilab's largest particle accelerator the Tevatron. The D-Zero Assembly Building houses one of these detectors. At this time detector systems are being upgraded for a second experiment run, titled Run II. Unlike the previous run, systems at D-Zero must be computer automated so operators do not have to continually monitor and adjust these systems during the run. Human intervention should only be necessary for system start up and shut down, and equipment failure. Part of this upgrade includes the heating, ventilation, and air conditioning system (HVAC system). The HVAC system is responsible for controlling two subsystems, the air temperatures of the D-Zero Assembly Building and associated collision hall, as well as six separate water systems used in the heating and cooling of the air and detector components. The HVAC system is automated by a programmable logic controller. In order to provide system monitoring and operator control a user interface is required. This paper will address methods and strategies used to design and implement an effective user interface.

  7. Modeling the Extreme-Pressure Lubricating Interface

    NASA Astrophysics Data System (ADS)

    Kaltchev, Matey; Gao, Feng; Lara-Romero, Javier; Tysoe, Wilfred

    2005-04-01

    Extreme-pressure lubricants are currently widely used in various areas of application. However, despite their common use, the fundamental aspects of the mechanism by which these lubricants reduce the friction coefficient are not yet clear. Earlier macrotribological experiments using chlorinated hydrocarbons have shown remarkable effectiveness. It has been proven that thin films that resemble those formed under tribological conditions can also be synthesized in ultrahigh vacuum when beams of chlorinated hydrocarbons are directed onto a clean iron surface. Here results obtained using X-ray photoelectron spectroscopy, temperature programmed desorption, atomic force microscopy and microtribological measurements of these films are presented. Substantial information about the fundamental properties and structure of this model lubricating interface is revealed. A mechanism of the formation of the interface under tribological conditions is also suggested.

  8. Modeling material interfaces with hybrid adhesion method

    DOE PAGES

    Brown, Nicholas Taylor; Qu, Jianmin; Martinez, Enrique

    2017-01-27

    A molecular dynamics simulation approach is presented to approximate layered material structures using discrete interatomic potentials through classical mechanics and the underlying principles of quantum mechanics. This method isolates the energetic contributions of the system into two pure material layers and an interfacial region used to simulate the adhesive properties of the diffused interface. The strength relationship of the adhesion contribution is calculated through small-scale separation calculations and applied to the molecular surfaces through an inter-layer bond criterion. By segregating the contributions into three regions and accounting for the interfacial excess energies through the adhesive surface bonds, it is possible to model each material with an independent potential while maintaining an acceptable level of accuracy in the calculation of mechanical properties. This method is intended for the atomistic study of the delamination mechanics, typically observed in thin-film applications. Therefore, the work presented in this paper focuses on mechanical tensile behaviors, with observations in the elastic modulus and the delamination failure mode. To introduce the hybrid adhesion method, we apply the approach to an ideal bulk copper sample, where an interface is created by disassociating the force potential in the middle of the structure. Various mechanical behaviors are compared to a standard EAM control model to demonstrate the adequacy of this approach in a simple setting. In addition, we demonstrate the robustness of this approach by applying it on (1) a Cu-Cu2O interface with interactions between two atom types, and (2) an Al-Cu interface with two dissimilar FCC lattices. These additional examples are verified against EAM and COMB control models to demonstrate the accurate simulation of failure through delamination, and the formation and propagation of dislocations under loads. Finally, the results conclude that by modeling the energy

  9. Perspectives of Interfacing People with Technology in the Development of Office Automation.

    ERIC Educational Resources Information Center

    Conroy, Thomas R.; Ewbank, Ray V. K.

    Noting the increasing impact of office automation on the workings of both people and organizations, this paper purposes the need for implementation methodologies, termed "self-actualizing systems," to introduce automation technologies into the office environment with a minimum of trauma to workers. Such methodologies, it contends, allow users to…

  10. Parmodel: a web server for automated comparative modeling of proteins.

    PubMed

    Uchôa, Hugo Brandão; Jorge, Guilherme Eberhart; Freitas Da Silveira, Nelson José; Camera, João Carlos; Canduri, Fernanda; De Azevedo, Walter Filgueira

    2004-12-24

    Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users to perform modeling, assessment, visualization, and optimization of protein models, as well as crystallographers to evaluate structures solved experimentally. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module is Parmodel Modeling, which allows the building of several models for the same protein in a reduced time, through the distribution of modeling processes on a Beowulf cluster. Parmodel automates and integrates the main software packages used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at .

  11. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  12. Modeling and deadlock avoidance of automated manufacturing systems with multiple automated guided vehicles.

    PubMed

    Wu, Naiqi; Zhou, MengChu

    2005-12-01

    An automated manufacturing system (AMS) contains a number of versatile machines (or workstations), buffers, an automated material handling system (MHS), and is computer-controlled. An effective and flexible alternative for implementing MHS is to use an automated guided vehicle (AGV) system. The deadlock issue in AMS is very important in its operation and has been extensively studied. The deadlock problems were separately treated for parts in production and transportation, and many techniques were developed for each problem. However, such treatment does not take advantage of the flexibility offered by multiple AGVs. In general, it is intractable to obtain a maximally permissive control policy for either problem. Instead, this paper investigates these two problems in an integrated way. First we model an AGV system and part processing processes by resource-oriented Petri nets, respectively. Then the two models are integrated by using macro transitions. Based on the combined model, a novel control policy for deadlock avoidance is proposed. It is shown to be maximally permissive with computational complexity of O(n²), where n is the number of machines in the AMS, if the complexity for controlling the part transportation by AGVs is not considered. Thus, the complexity of deadlock avoidance for the whole system is bounded by the complexity in controlling the AGV system. An illustrative example shows its application and power.
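The paper's policy is built on resource-oriented Petri nets and is not reproduced here; but the underlying idea of deadlock avoidance, tentatively granting a resource request and keeping the grant only if every process can still run to completion, is the classic Banker-style safety test sketched below.

```python
def is_safe(available, allocation, need):
    """Banker-style safety test: can all processes finish from this state?

    Repeatedly finds a process whose remaining need fits in the free
    resources, lets it finish, and reclaims its allocation.
    """
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)

def grant_request(available, allocation, need, i, request):
    """Avoidance step: tentatively grant process i's request, then keep
    the grant only if the resulting state is still safe."""
    if any(r > n for r, n in zip(request, need[i])):
        return False  # request exceeds declared need
    if any(r > a for r, a in zip(request, available)):
        return False  # resources not currently free
    avail2 = [a - r for a, r in zip(available, request)]
    alloc2 = [row[:] for row in allocation]
    alloc2[i] = [a + r for a, r in zip(alloc2[i], request)]
    need2 = [row[:] for row in need]
    need2[i] = [n - r for n, r in zip(need2[i], request)]
    return is_safe(avail2, alloc2, need2)
```

The Petri-net policy in the paper achieves the same "never enter an unsafe marking" guarantee while remaining maximally permissive for the integrated production-plus-transportation model.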

  13. A catalog of automated analysis methods for enterprise models.

    PubMed

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

    Enterprise models are created for documenting and communicating the structure and state of Business and Information Technologies elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills and due to the size and complexity of the models, this process can be complicated and omissions or miscalculations are very likely. This situation has fostered the research of automated analysis methods, for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels; then, some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  14. Automated particulate sampler field test model operations guide

    SciTech Connect

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and most predominantly the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve as both a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  15. Kinetic model of membrane extraction with a sorbent interface.

    PubMed

    Yang, M J; Adams, M; Pawliszyn, J

    1996-09-01

    Membrane extraction with a sorbent interface (MESI) is a unique sample preparation alternative for trace organic analysis. The main features of MESI include its solvent-free nature, the rugged and simple design with no moving parts for long-term reliable performance, the fact that it is a single-step process which ensures good precision, its easy automation, and its feasibility for on-site operation. Among the available membrane extraction modules designed for the MESI system, the headspace configuration has continued to show its superior durability and versatility in membrane applications. The headspace membrane extraction configuration effectively eliminates the need for a sampling pump and flow metering and hence prevents the extraction system from plugging and greatly simplifies the extraction process. The module can be used for extraction of VOCs from gaseous, aqueous, or solid samples. A mathematical model has been developed for headspace membrane extraction of an aqueous sample, based on the assumption that the aqueous phase is perfectly stirred. The model is in good agreement with the experimental benzene extraction results obtained with an efficient agitation method such as high-speed magnetic stirring or sonication. The model has also been used to study the effects of various extraction parameters with respect to the sensitivity and response time of the MESI system. Sample agitation facilitates analyte mass transport and hence improves both the system sensitivity and the response time. The sensitivity of the extraction method also increases with an increase of the extraction temperature.
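The published model's equations are not reproduced in the abstract. Under the stated perfectly stirred assumption, the simplest consistent kinetic picture is a first-order approach to equilibrium, sketched below with an illustrative rate constant k that lumps the membrane and agitation parameters; this is a stand-in, not the authors' full model.

```python
import math

def extracted_fraction(t, k):
    """Fraction of the steady-state analyte signal collected by time t,
    assuming first-order approach to equilibrium (perfectly stirred sample)."""
    return 1.0 - math.exp(-k * t)

def response_time(k, fraction=0.95):
    """Time needed to reach a given fraction of the steady-state response.

    Faster mass transport (larger k, e.g. from stirring or sonication)
    shortens the response time, matching the trend in the abstract.
    """
    return -math.log(1.0 - fraction) / k
```

This captures the abstract's qualitative conclusions: agitation (larger k) improves both sensitivity at a fixed sampling time and the system response time.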

  16. Automated data acquisition technology development: Automated modeling and control development

    NASA Technical Reports Server (NTRS)

    Romine, Peter L.

    1995-01-01

    This report documents the completion of, and improvements made to, the software developed for automated data acquisition and automated modeling and control development on the Texas Micro rackmounted PC's. This research was initiated because a need was identified by the Metal Processing Branch of NASA Marshall Space Flight Center for a mobile data acquisition and data analysis system, customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument strictly for data acquisition and data analysis. In addition to the data acquisition functions described in this thesis, WMS also supports many functions associated with process control. The hardware and software requirements for an automated acquisition system for welding process parameters, welding equipment checkout, and welding process modeling were determined in 1992. From these recommendations, NASA purchased the necessary hardware and software. The new welding acquisition system is designed to collect welding parameter data and perform analysis to determine the voltage versus current arc-length relationship for VPPA welding. Once the results of this analysis are obtained, they can then be used to develop a RAIL function to control welding startup and shutdown without torch crashing.

  17. Improving automation standards via semantic modelling: Application to ISA88.

    PubMed

    Dombayci, Canan; Farreres, Javier; Rodríguez, Horacio; Espuña, Antonio; Graells, Moisès

    2017-03-01

    Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software that rely on the efficient modelling of the addressed systems. The work presented here is from the ongoing development of a methodology for semi-automatic ontology construction from technical documents. The main aim of this work is to systematically check the consistency of technical documents and support the improvement of technical document consistency. The formalization of conceptual models and the subsequent writing of technical standards are simultaneously analyzed, and guidelines proposed for application to future technical standards. Three paradigms are discussed for the development of domain ontologies from technical documents, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the suggested paradigm for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects that are worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, along with presenting the systematic consistency checking method.

  18. Computational design of patterned interfaces using reduced order models.

    PubMed

    Vattré, A J; Abdolrahim, N; Kolluri, K; Demkowicz, M J

    2014-08-29

    Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance.

  19. Computational design of patterned interfaces using reduced order models

    PubMed Central

    Vattré, A. J.; Abdolrahim, N.; Kolluri, K.; Demkowicz, M. J.

    2014-01-01

    Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance. PMID:25169868

  20. A new interface element for connecting independently modeled substructures

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.

    1993-01-01

    A new interface element based on the hybrid variational formulation is presented and demonstrated. The element provides a means of connecting independently modeled substructures whose nodes along the common boundary need not be coincident. The interface element extends previous work to include connecting an arbitrary number of substructures, the use of closed and generally curved interfaces, and the use of multiple, possibly nested, interfaces. Several applications of the element are presented and aspects of the implementation are discussed.

  1. Automated Environment Generation for Software Model Checking

    NASA Technical Reports Server (NTRS)

    Tkachuk, Oksana; Dwyer, Matthew B.; Pasareanu, Corina S.

    2003-01-01

    A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open since their behavior is dependent on patterns of invocation of system components and values defined outside the system but referenced within the system. Whether reasoning about the behavior of whole programs or about program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments of Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera Environment Generator (BEG) which we describe along with our experience using BEG to reason about properties of several non-trivial concurrent Java programs.

  2. Automated adaptive inference of phenomenological dynamical models

    NASA Astrophysics Data System (ADS)

    Daniels, Bryan C.; Nemenman, Ilya

    2015-08-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.

  3. Automated adaptive inference of phenomenological dynamical models

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508

  4. Models for Automated Tube Performance Calculations

    SciTech Connect

    C. Brunkhorst

    2002-12-12

    High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance.
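    A model of the kind described, giving tube current for a set of element voltages, can be sketched with the classical three-halves-power law for an idealized triode. The amplification factor and perveance below are illustrative values, not data for any real tube.

```python
def triode_plate_current(v_grid, v_plate, mu=8.0, perveance=2e-6):
    """Idealized triode plate current (A) from the three-halves-power law
    I = K * (Vg + Vp/mu)**1.5, clamped to zero in cutoff."""
    drive = v_grid + v_plate / mu
    if drive <= 0.0:
        return 0.0
    return perveance * drive ** 1.5

# Sweep the grid voltage at a fixed plate voltage, as a subroutine in an
# iterative operating-point solution would.
currents = [triode_plate_current(vg, 300.0) for vg in (-60.0, -20.0, 0.0)]
```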

  5. Qgui: A high-throughput interface for automated setup and analysis of free energy calculations and empirical valence bond simulations in biological systems.

    PubMed

    Isaksen, Geir Villy; Andberg, Tor Arne Heim; Åqvist, Johan; Brandsdal, Bjørn Olav

    2015-07-01

    Structural information and activity data have increased rapidly for many protein targets during the last decades. In this paper, we present a high-throughput interface (Qgui) for automated free energy and empirical valence bond (EVB) calculations that use molecular dynamics (MD) simulations for conformational sampling. Applications to ligand binding using both the linear interaction energy (LIE) method and the free energy perturbation (FEP) technique are given using the estrogen receptor (ERα) as a model system. Examples of free energy profiles obtained using the EVB method for the rate-limiting step of the enzymatic reaction catalyzed by trypsin are also shown. In addition, we present the calculation of high-precision Arrhenius plots with Qgui, obtained by running a large number of EVB simulations, to extract the thermodynamic activation enthalpy and entropy.
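    Once the MD averages are in hand, the LIE estimate mentioned in the record reduces to a one-line formula. The coefficients below are commonly cited literature values used here as assumptions, not Qgui defaults.

```python
def lie_binding_energy(d_vdw, d_el, alpha=0.18, beta=0.5, gamma=0.0):
    """Linear interaction energy (LIE) estimate of binding free energy
    (kcal/mol). d_vdw and d_el are bound-minus-free average van der Waals
    and electrostatic ligand-surrounding interaction energies; alpha,
    beta, gamma are empirical coefficients."""
    return alpha * d_vdw + beta * d_el + gamma

# Illustrative MD averages (kcal/mol), not real ERα data.
dg = lie_binding_energy(d_vdw=-20.0, d_el=-4.0)
```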

  6. A power line data communication interface using spread spectrum technology in home automation

    SciTech Connect

    Shwehdi, M.H.; Khan, A.Z.

    1996-07-01

    Building automation technology is rapidly developing towards more reliable communication systems and devices that control electronic equipment; controlling this equipment leads to efficient energy management and savings on the monthly electricity bill. Power line communication (PLC) has been one of the dreams of the electronics industry for decades, especially for building automation. The purpose of this paper is to demonstrate communication methods among electronic control devices over an AC power line carrier within buildings for more efficient energy control. The paper outlines methods of communication over a power line, namely X-10 and CEBus, and introduces spread spectrum technology, which increases speed to 100--150 times that of the X-10 system. The power line carrier has tremendous applications in the field of building automation. The paper presents an attempt to realize the so-called smart house concept, in which all home electronic devices, from a coffee maker to a water heater, microwave, or household robot, are tied into an intelligent network that can be commanded whenever one wishes. The designed system may be applied very profitably to help in energy management for both customer and utility.

  7. Automation of Cyber Penetration Testing Using the Detect, Identify, Predict, React Intelligence Automation Model

    DTIC Science & Technology

    2013-09-01

    With increased computing power available, intelligent automation is a clear choice for simplifying the lives of both administrators and developers...with manual cyber penetration [1]. ...power intensive, and basic automation has the limitation of only finding the specific vulnerabilities which it is programmed to find.

  8. Automated macromolecular model building for X-ray crystallography using ARP/wARP version 7.

    PubMed

    Langer, Gerrit; Cohen, Serge X; Lamzin, Victor S; Perrakis, Anastassis

    2008-01-01

    ARP/wARP is a software suite to build macromolecular models in X-ray crystallography electron density maps. Structural genomics initiatives and the study of complex macromolecular assemblies and membrane proteins all rely on advanced methods for 3D structure determination. ARP/wARP meets these needs by providing the tools to obtain a macromolecular model automatically, with a reproducible computational procedure. ARP/wARP 7.0 tackles several tasks: iterative protein model building including a high-level decision-making control module; fast construction of the secondary structure of a protein; building flexible loops in alternate conformations; fully automated placement of ligands, including a choice of the best-fitting ligand from a 'cocktail'; and finding ordered water molecules. All protocols are easy to handle by a nonexpert user through a graphical user interface or a command line. The time required is typically a few minutes although iterative model building may take a few hours.

  9. A Rigorous Sharp Interface Limit of a Diffuse Interface Model Related to Tumor Growth

    NASA Astrophysics Data System (ADS)

    Rocca, Elisabetta; Scala, Riccardo

    2016-11-01

    In this paper, we study the rigorous sharp interface limit of a diffuse interface model related to the dynamics of tumor growth, when a parameter ɛ, representing the interface thickness between the tumorous and non-tumorous cells, tends to zero. In particular, we analyze a gradient-flow-type model arising from a modification of the recently introduced model for tumor growth dynamics in Hawkins-Daruud et al. (Int J Numer Math Biomed Eng 28:3-24, 2011) (cf. also Hilhorst et al. Math Models Methods Appl Sci 25:1011-1043, 2015). Exploiting techniques related to both gradient flows and gamma convergence, we recover a condition on the interface Γ relating the chemical and double-well potentials, the mean curvature, and the normal velocity.

  10. Elevator model based on a tiny PLC for teaching automation

    NASA Astrophysics Data System (ADS)

    Kim, Kee Hwan; Lee, Young Dae

    2005-12-01

    The development of control-related applications requires knowledge of different subject matters, such as mechanical components, control equipment, and physics. Understanding the behavior of these heterogeneous applications is not easy, especially for students beginning to study electronic engineering. In order to introduce them to the most common components and skills necessary to put together a functioning automated system, we have designed a simple elevator model controlled by a PLC that was built around a microcontroller.
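    The behavior of such an elevator model can be illustrated as a scan-cycle state machine in the style of a tiny PLC program. The class, floor count, and state names below are invented for illustration.

```python
class ElevatorModel:
    """Minimal elevator controller sketch: one request queue, one floor
    counter, and a PLC-style scan() that advances one step per cycle."""

    def __init__(self, floors=4):
        self.floor = 1
        self.floors = floors
        self.requests = []

    def call(self, floor):
        """Register a floor request, ignoring invalid or duplicate calls."""
        if 1 <= floor <= self.floors and floor not in self.requests:
            self.requests.append(floor)

    def scan(self):
        """One scan cycle: move one floor toward the oldest request."""
        if not self.requests:
            return "idle"
        target = self.requests[0]
        if self.floor < target:
            self.floor += 1
        elif self.floor > target:
            self.floor -= 1
        if self.floor == target:
            self.requests.pop(0)
            return "door_open"
        return "moving"

lift = ElevatorModel()
lift.call(3)
states = [lift.scan(), lift.scan(), lift.scan()]
```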

  11. A Process Model of Trust in Automation: A Signal Detection Theory Based Approach

    DTIC Science & Technology

    2014-01-01

    lead to trust in automation. We also discuss a simple process model, which helps us understand the results. Our experimental paradigm suggests that...participants are agnostic to the automation's behavior; instead, they merely focus on alarm rate. A process model suggests this is the result of a simple reward structure and a non-explicit cost of trusting the automation.
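    One way to read the "simple reward structure" explanation is as a toy trust-update rule driven by alarm outcomes. The rule and its magnitudes below are illustrative assumptions, not taken from the report.

```python
def update_trust(trust, alarm, operator_complied, event_occurred,
                 gain=0.1, loss=0.2):
    """Toy reward-structure update: trust rises when complying with an
    alarm pays off (true alarm) and falls after a false alarm; trust is
    clamped to [0, 1]."""
    if alarm and operator_complied:
        trust += gain if event_occurred else -loss
    return max(0.0, min(1.0, trust))

t = 0.5
t = update_trust(t, alarm=True, operator_complied=True, event_occurred=True)
t = update_trust(t, alarm=True, operator_complied=True, event_occurred=False)
```

    Because the penalty outweighs the gain, a high false-alarm rate drives trust down regardless of the automation's underlying sensitivity, which matches the alarm-rate fixation the record describes.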

  12. A conservative interface-interaction model with insoluble surfactant

    NASA Astrophysics Data System (ADS)

    Schranner, Felix S.; Adams, Nikolaus A.

    2016-12-01

    In this paper we extend the conservative interface-interaction method of Hu et al. (2006) [34], adapted for weakly-compressible flows by Luo et al. (2015) [37], to include the effects of viscous, capillary, and Marangoni stresses consistently as momentum-exchange terms at the sharp interface. The interface-interaction method is coupled with insoluble surfactant transport which employs the underlying sharp-interface representation. Unlike previous methods, we thus achieve discrete global conservation in terms of interface interactions and a consistently sharp interface representation. The interface is reconstructed locally, and a sub-cell correction of the interface curvature improves the evaluation of capillary stresses and surfactant diffusion in particular for marginal mesh resolutions. For a range of numerical test cases we demonstrate accuracy and robustness of the method. In particular, we show that the method is at least as accurate as previous diffuse-interface models while exhibiting throughout the considered test cases improved computational efficiency. We believe that the method is attractive for high-resolution level-set interface-tracking simulations as it straightforwardly incorporates the effects of variable surface tension into the underlying conservative interface-interaction approach.

  13. Automated Environmental Simulation Model for Analyzing Wound Fiber Optic Bobbins

    NASA Astrophysics Data System (ADS)

    Edwards, Eugene; Ruffin, Paul B.

    1987-01-01

    The life of optical fibers under stress for an extended period of time is limited by static fatigue caused by stress corrosion in the presence of moisture. In order to predict the life of wound optical fibers, it is necessary to accelerate the aging process by simulating the storage environment (stress, temperature, and humidity) in a short period of time. Existing environmental test systems have been proven useful in the simulation of the storage environment; however, the data is limited due to the manual mode of operation. An automated environmental simulation model is developed to control, collect, process, and analyze optical loss data while measuring temperature and humidity. The environmental conditions for optical fibers wound for various applications are simulated in order to understand the interrelationships between wound fiber parameters including spool composition/design, winding tension, adhesives, and fiber cable design. Experimental investigations are carried out to expose wound optical fiber to simulated environments while monitoring changes in the optical and mechanical characteristics of the fibers. Based on the preliminary results of the data obtained, the automated simulation system is proven acceptable for performing routine modeling and evaluations. The automated system is a valuable instrument to aid in the characterization of optical fibers.

  14. Automated photogrammetry for three-dimensional models of urban spaces

    NASA Astrophysics Data System (ADS)

    Leberl, Franz; Meixner, Philipp; Wendel, Andreas; Irschara, Arnold

    2012-02-01

    The location-aware Internet is inspiring intensive work addressing the automated assembly of three-dimensional models of urban spaces with their buildings, circulation spaces, vegetation, signs, even their above-ground and underground utility lines. Two-dimensional geographic information systems (GISs) and municipal utility information exist and can serve to guide the creation of models being built with aerial, sometimes satellite imagery, streetside images, indoor imaging, and alternatively with light detection and ranging systems (LiDARs) carried on airplanes, cars, or mounted on tripods. We review the results of current research to automate the information extraction from sensor data. We show that aerial photography at ground sampling distances (GSD) of 1 to 10 cm is well suited to provide geometry data about building facades and roofs, that streetside imagery at 0.5 to 2 cm is particularly interesting when it is collected within community photo collections (CPCs) by the general public, and that the transition to digital imaging has opened the no-cost option of highly overlapping images in support of a more complete and thus more economical automation. LiDAR-systems are a widely used source of three-dimensional data, but they deliver information not really superior to digital photography.
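    The ground sampling distances quoted above follow from the standard pinhole relation GSD = H·p/f. The camera numbers in this sketch are illustrative, not from the paper.

```python
def ground_sampling_distance(flying_height_m, focal_length_mm, pixel_pitch_um):
    """Nadir ground sampling distance (metres per pixel) from the pinhole
    relation GSD = H * p / f, with unit conversions made explicit."""
    focal_m = focal_length_mm / 1000.0
    pixel_m = pixel_pitch_um / 1e6
    return flying_height_m * pixel_m / focal_m

# A survey camera with a 100 mm lens and 6 um pixels flown at 1000 m
# yields a 6 cm GSD, inside the 1-10 cm aerial range discussed above.
gsd = ground_sampling_distance(1000.0, 100.0, 6.0)
```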

  15. Rapid Automated Aircraft Simulation Model Updating from Flight Data

    NASA Technical Reports Server (NTRS)

    Brian, Geoff; Morelli, Eugene A.

    2011-01-01

    Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.

  16. Multibody dynamics model building using graphical interfaces

    NASA Technical Reports Server (NTRS)

    Macala, Glenn A.

    1989-01-01

    In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general-purpose dynamics simulation programs which generate the equations of motion either explicitly or implicitly via computer code. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustration during program setup. Recent progress in a more natural method of data input for dynamics programs, the graphical interface, is described.

  17. The need to consider temporal variability when modelling exchange at the sediment-water interface

    USGS Publications Warehouse

    Rosenberry, Donald O.

    2011-01-01

    Most conceptual or numerical models of flows and processes at the sediment-water interface assume steady-state conditions and do not consider temporal variability. The steady-state assumption is required because temporal variability, if quantified at all, is usually determined on a seasonal or inter-annual scale. In order to design models that can incorporate finer-scale temporal resolution we first need to measure variability at a finer scale. Automated seepage meters that can measure flow across the sediment-water interface with temporal resolution of seconds to minutes were used in a variety of settings to characterize seepage response to rainfall, wind, and evapotranspiration. Results indicate that instantaneous seepage fluxes can be much larger than values commonly reported in the literature, although seepage does not always respond to hydrological processes. Additional study is needed to understand the reasons for the wide range and types of responses to these hydrologic and atmospheric events.

  18. Automated modelling of spatially-distributed glacier ice thickness and volume

    NASA Astrophysics Data System (ADS)

    James, William H. M.; Carrivick, Jonathan L.

    2016-07-01

    Ice thickness distribution and volume are both key parameters for glaciological and hydrological applications. This study presents VOLTA (Volume and Topography Automation), a Python script tool for ArcGIS™ that requires just a digital elevation model (DEM) and glacier outline(s) to model distributed ice thickness, volume and bed topography. Ice thickness is initially estimated at points along an automatically generated centreline network based on the perfect-plasticity rheology assumption, taking into account a valley-side drag component of the force balance equation. Distributed ice thickness is subsequently interpolated using a glaciologically correct algorithm. For five glaciers with independent field-measured bed topography, VOLTA-modelled volumes were between 26.5% below and 16.6% above those derived from field observations. Greatest differences were where an asymmetric valley cross-section shape was present or where significant valley infill had occurred. Compared with other methods of modelling ice thickness and volume, key advantages of VOLTA are: a fully automated approach and a user-friendly graphical user interface (GUI), GIS-consistent geometry, fully automated centreline generation, inclusion of a side drag component in the force balance equation, estimation of basal shear stress for each individual glacier, fully distributed ice thickness output and the ability to process multiple glaciers rapidly. VOLTA is capable of regional-scale ice volume assessment, which is a key parameter for exploring glacier response to climate change. VOLTA also permits subtraction of modelled ice thickness from the input surface elevation to produce an ice-free DEM, which is a key input for reconstruction of former glaciers. VOLTA could assist with prediction of future glacier geometry changes and hence in projection of future meltwater fluxes.
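    The perfect-plasticity centreline estimate with a valley-side drag (shape) factor can be written in a few lines. The yield stress and shape factor below are common textbook values used as assumptions, not VOLTA's internal settings.

```python
import math

RHO_ICE = 917.0   # ice density, kg m^-3
G = 9.81          # gravitational acceleration, m s^-2

def ice_thickness(basal_shear_stress_pa, surface_slope_deg, shape_factor=0.8):
    """Centreline ice thickness (m) under perfect plasticity:
    h = tau / (f * rho * g * sin(alpha)), where the shape factor f < 1
    accounts for valley-side drag (f = 1 would mean no side drag)."""
    alpha = math.radians(surface_slope_deg)
    return basal_shear_stress_pa / (shape_factor * RHO_ICE * G * math.sin(alpha))

# A typical 100 kPa yield stress on a 5-degree surface slope gives a
# thickness on the order of 150-170 m.
h = ice_thickness(1.0e5, 5.0)
```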

  19. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance, these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  20. Developing a Graphical User Interface to Automate the Estimation and Prediction of Risk Values for Flood Protective Structures using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Hasan, M.; Helal, A.; Gabr, M.

    2014-12-01

    In this project, we focus on providing a computer-automated platform for better assessment of the potential failures and retrofit measures of flood-protecting earth structures, e.g., dams and levees. Such structures play an important role during extreme flooding events as well as during normal operating conditions. Furthermore, they are part of other civil infrastructure such as water storage and hydropower generation. Hence, there is a clear need for accurate evaluation of stability and functionality levels during their service lifetime so that rehabilitation and maintenance costs are effectively guided. Among condition assessment approaches based on the factor of safety, the limit states (LS) approach utilizes numerical modeling to quantify the probability of potential failures. The parameters for LS numerical modeling include i) geometry and side slopes of the embankment, ii) loading conditions in terms of the rate of rise and duration of high water levels in the reservoir, and iii) cycles of rising and falling water levels simulating the effect of consecutive storms throughout the service life of the structure. Sample data regarding the correlations of these parameters are available from previous research studies. We have unified these criteria and extended the risk assessment in terms of loss of life through the implementation of a graphical user interface that automates the input parameters, divides the data into training and testing sets, and then feeds them into an Artificial Neural Network (ANN) tool through MATLAB programming. The ANN modeling allows us to predict risk values of flood protective structures based on user feedback quickly and easily. In the future, we expect to fine-tune the software by adding extensive data on variations of parameters.
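    The training/testing split step described above can be sketched as a small deterministic helper. The split fraction, seed, and record fields below are invented for illustration; the original work uses MATLAB rather than Python.

```python
import random

def split_dataset(records, train_fraction=0.7, seed=42):
    """Shuffle records reproducibly and split them into training and
    testing sets before handing them to a learning tool."""
    rows = list(records)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_fraction)
    return rows[:cut], rows[cut:]

# Hypothetical embankment records: a slope parameter and a risk label.
data = [{"slope": s, "risk": s * 0.01} for s in range(100)]
train, test = split_dataset(data)
```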

  1. The Application of the Cumulative Logistic Regression Model to Automated Essay Scoring

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Sinharay, Sandip

    2010-01-01

    Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. Comparison of the performances of the linear regression model and the cumulative logit model was performed on a…
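    A cumulative logit (proportional-odds) model turns one linear predictor into probabilities over ordered score categories, which is the alternative to linear regression the record compares. The thresholds and slope below are invented for illustration, not fitted to any essay data.

```python
import math

def cumulative_logit_probs(x, thresholds, slope):
    """Score-category probabilities under a proportional-odds cumulative
    logit model: P(Y <= k) = 1 / (1 + exp(-(theta_k - slope * x))),
    with category probabilities obtained by differencing the cumulative
    probabilities. thresholds must be strictly increasing."""
    def cdf(theta):
        return 1.0 / (1.0 + math.exp(-(theta - slope * x)))

    cum = [cdf(t) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Four ordered score categories from three thresholds.
probs = cumulative_logit_probs(x=0.5, thresholds=[-1.0, 0.0, 1.5], slope=2.0)
```

    Unlike a linear regression score, the output is a proper probability distribution over the discrete score scale, so predictions never fall outside the valid range.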

  2. COMPASS: A Framework for Automated Performance Modeling and Prediction

    SciTech Connect

    Lee, Seyong; Meredith, Jeremy S; Vetter, Jeffrey S

    2015-01-01

    Flexible, accurate performance predictions offer numerous benefits such as gaining insight into and optimizing applications and architectures. However, the development and evaluation of such performance predictions has been a major research challenge, due to the architectural complexities. To address this challenge, we have designed and implemented a prototype system, named COMPASS, for automated performance model generation and prediction. COMPASS generates a structured performance model from the target application's source code using automated static analysis, and then, it evaluates this model using various performance prediction techniques. As we demonstrate on several applications, the results of these predictions can be used for a variety of purposes, such as design space exploration, identifying performance tradeoffs for applications, and understanding sensitivities of important parameters. COMPASS can generate these predictions across several types of applications from traditional, sequential CPU applications to GPU-based, heterogeneous, parallel applications. Our empirical evaluation demonstrates a maximum overhead of 4%, flexibility to generate models for 9 applications, speed, ease of creation, and very low relative errors across a diverse set of architectures.

  3. Modeling and Control of the Automated Radiator Inspection Device

    NASA Technical Reports Server (NTRS)

    Dawson, Darren

    1991-01-01

    Many of the operations performed at the Kennedy Space Center (KSC) are dangerous and repetitive tasks which make them ideal candidates for robotic applications. For one specific application, KSC is currently in the process of designing and constructing a robot called the Automated Radiator Inspection Device (ARID), to inspect the radiator panels on the orbiter. The following aspects of the ARID project are discussed: modeling of the ARID; design of control algorithms; and nonlinear based simulation of the ARID. Recommendations to assist KSC personnel in the successful completion of the ARID project are given.

  4. An automated, integrated approach to Space Station structural modeling

    NASA Technical Reports Server (NTRS)

    Lindenmoyer, Alan J.; Habermeyer, John A.

    1989-01-01

    NASA and its contractors have developed an integrated, interdisciplinary CAD/analysis system designated IDEAS**2 in order to conduct evaluations of alternative Space Station concepts' performance over the projected course of the Station's evolution in orbit. Attention is presently given to the requirements associated with automated FEM-building methods applicable to Space Station system-level structural dynamic analysis, and the ways in which IDEAS**2 addresses these requirements. Advantage is taken of the interactive capabilities of the SUPERTAB FEM preprocessor system for Space Station model manipulation and modification.

  5. Ray tracing in discontinuous velocity model with implicit Interface

    NASA Astrophysics Data System (ADS)

    Zhang, Jianxing; Yang, Qin; Meng, Xianhai; Li, Jigang

    2016-07-01

    Ray tracing in the velocity model containing complex discontinuities is still facing many challenges. The main difficulty arises from the detection of the spatial relationship between the rays and the interfaces that are usually described in non-linear parametric forms. We propose a novel model representation method that can facilitate the implementation of classical shooting-ray methods. In the representation scheme, each interface is expressed as the zero contour of a signed distance field. A multi-copy strategy is adopted to describe the volumetric properties within blocks. The implicit description of the interface makes it easier to detect the ray-interface intersection. The direct calculation of the intersection point is converted into the problem of judging the signs of a ray segment's endpoints. More importantly, the normal to the interface at the intersection point can be easily acquired according to the signed distance field of the interface. The multiple storage of the velocity property in the proximity of the interface can provide accurate and unambiguous velocity information of the intersection point. Thus, the departing ray path can be determined easily and robustly. In addition, the new representation method can describe velocity models containing very complex geological structures, such as faults, salt domes, intrusions, and pinches, without any simplification. The examples on synthetic and real models validate the robustness and accuracy of the ray tracing based on the proposed model representation scheme.
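    The sign-based ray-interface test described in the record can be sketched with a signed distance field and bisection: a crossing exists only when the SDF changes sign between the segment's endpoints. The sphere interface below stands in for a real velocity-model interface and is purely illustrative.

```python
def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return sum((pi - ci) ** 2 for pi, ci in zip(p, center)) ** 0.5 - radius

def ray_interface_hit(sdf, a, b, iters=50):
    """Detect and locate a ray-segment/interface crossing by checking the
    SDF signs at the endpoints, then bisecting the bracketed interval.
    Returns the intersection point, or None if both endpoints lie on the
    same side of the interface."""
    fa, fb = sdf(a), sdf(b)
    if fa * fb > 0.0:  # no sign change: segment does not cross
        return None
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        p = tuple(ai + mid * (bi - ai) for ai, bi in zip(a, b))
        if fa * sdf(p) <= 0.0:
            hi = mid   # crossing lies in [lo, mid]
        else:
            lo = mid   # crossing lies in [mid, hi]
    t = 0.5 * (lo + hi)
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

# A ray entering the unit sphere from the left hits it at (-1, 0, 0).
hit = ray_interface_hit(sphere_sdf, (-2.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```

    The interface normal at the hit point then follows from the SDF gradient, which is what makes the departing ray direction easy to compute in this representation.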

  6. Automated Physico-Chemical Cell Model Development through Information Theory

    SciTech Connect

    Peter J. Ortoleva

    2005-11-29

    The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is the optimization of environmental remediation and energy-producing biotechnical processes. The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple-module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

  7. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer-aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  8. T:XML: A Tool Supporting User Interface Model Transformation

    NASA Astrophysics Data System (ADS)

    López-Jaquero, Víctor; Montero, Francisco; González, Pascual

    Model-driven development of user interfaces is based on the transformation of an abstract specification into the final user interface the user will interact with. The design of transformation rules to carry out this transformation process is a key issue in any model-driven user interface development approach. In this paper, we introduce T:XML, an integrated development environment for managing, creating and previewing transformation rules. The tool supports the specification of transformation rules by using a graphical notation that works on the basis of the transformation of the input model into a graph-based representation. T:XML allows the design and execution of transformation rules in an integrated development environment. Furthermore, the designer can also preview how the generated user interface looks after the transformations have been applied. These previewing capabilities can be used to quickly create prototypes to discuss with the users in user-centered design methods.

  9. A model for critiquing based on automated medical records.

    PubMed

    van der Lei, J; Musen, M A

    1991-08-01

    We describe the design of a critiquing system, HyperCritic, that relies on automated medical records for its data input. The purpose of the system is to advise general practitioners who are treating patients who have hypertension. HyperCritic has access to the data stored in a primary-care information system that supports a fully automated medical record. HyperCritic relies on data in the automated medical record to critique the management of hypertensive patients, avoiding a consultation-style interaction with the user. The first step in the critiquing process involves the interpretation of the medical record in an attempt to discover the physician's actions and decisions. After detecting the relevant events in the medical record, HyperCritic views the task of critiquing as the assignment of critiquing statements to these patient-specific events. Critiquing statements are defined as recommendations involving one or more suggestions for possible modifications in the actions of the physician. The core of the model underlying HyperCritic is that the process of generating the critiquing statements is viewed as the application of a limited set of abstract critiquing tasks. We distinguish four categories of critiquing tasks: preparation tasks, selection tasks, monitoring tasks, and responding tasks. The execution of these critiquing tasks requires specific medical factual knowledge. This factual knowledge is separated from the critiquing tasks and is stored in a medical fact base. The principal advantage demonstrated by HyperCritic is the adoption of a domain-independent critiquing structure. We show how this domain-independent critiquing structure can be used to facilitate knowledge acquisition and maintenance of the system.

  10. Intelligent Adaptive Systems: Literature Research of Design Guidance for Intelligent Adaptive Automation and Interfaces

    DTIC Science & Technology

    2007-09-01

    'basic', 'business', 'industrial' and 'military'. Table 4: Number of references grouped by level of experimentation, peer review and domain. ...or business target domains; green = military target domain. ...Organizational Model (organizational or business processes); Task Model (high-level tasks and goals of agents in the system); Agent Model (who

  11. Control of a Wheelchair in an Indoor Environment Based on a Brain-Computer Interface and Automated Navigation.

    PubMed

    Zhang, Rui; Li, Yuanqing; Yan, Yongyong; Zhang, Hao; Wu, Shaoyu; Yu, Tianyou; Gu, Zhenghui

    2016-01-01

    The concept of controlling a wheelchair using brain signals is promising. However, the continuous control of a wheelchair based on unstable and noisy electroencephalogram signals is unreliable and generates a significant mental burden for the user. A feasible solution is to integrate a brain-computer interface (BCI) with automated navigation techniques. This paper presents a brain-controlled intelligent wheelchair with the capability of automatic navigation. Using an autonomous navigation system, candidate destinations and waypoints are automatically generated based on the existing environment. The user selects a destination using a motor imagery (MI)-based or P300-based BCI. According to the determined destination, the navigation system plans a short and safe path and navigates the wheelchair to the destination. During the movement of the wheelchair, the user can issue a stop command with the BCI. Using our system, the mental burden of the user can be substantially alleviated. Furthermore, our system can adapt to changes in the environment. Two experiments based on MI and P300 were conducted to demonstrate the effectiveness of our system.
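The navigation step described above, planning a short and safe route through a known indoor map once the BCI has delivered a destination, can be illustrated with a standard shortest-path search over an occupancy grid. The grid, function name, and 4-connected motion model below are illustrative assumptions, not the authors' implementation:

```python
import heapq

def plan_path(grid, start, goal):
    """Dijkstra search over a 4-connected occupancy grid (0 = free,
    1 = obstacle). Returns the list of cells from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            # reconstruct the path by walking predecessors back to start
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue                       # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None
```

A real system would add a safety margin around obstacles and smooth the path, but the destination-selection/path-planning split is the same.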

  12. Automation on the generation of genome-scale metabolic models.

    PubMed

    Reyes, R; Gamermann, D; Montagud, A; Fuente, D; Triana, J; Urchueguía, J F; de Córdoba, P Fernández

    2012-12-01

    Nowadays, the reconstruction of genome-scale metabolic models is a nonautomated, interactive process based on decision making. This lengthy process usually requires a full year of one person's work to satisfactorily collect, analyze, and validate the list of all metabolic reactions present in a specific organism. To write this list, one has to go manually through a huge amount of genomic, metabolomic, and physiological information. Currently, there is no optimal algorithm that allows one to automatically go through all this information and generate the models taking into account probabilistic criteria of uniqueness and completeness that a biologist would consider. This work presents the automation of a methodology for the reconstruction of genome-scale metabolic models for any organism. The methodology is the automated version of the steps implemented manually for the reconstruction of the genome-scale metabolic model of a photosynthetic organism, Synechocystis sp. PCC6803. The steps for the reconstruction are implemented in a computational platform (COPABI) that generates the models from the probabilistic algorithms that have been developed. To validate the robustness of the developed algorithm, the metabolic models of several organisms generated by the platform have been studied together with published models that have been manually curated. Network properties of the models, such as connectivity and average shortest path length, have been compared and analyzed.

  13. An Interactive Tool For Semi-automated Statistical Prediction Using Earth Observations and Models

    NASA Astrophysics Data System (ADS)

    Zaitchik, B. F.; Berhane, F.; Tadesse, T.

    2015-12-01

    We developed a semi-automated statistical prediction tool applicable to concurrent analysis or seasonal prediction of any time series variable in any geographic location. The tool was developed using Shiny, JavaScript, HTML and CSS. A user can extract a predictand by drawing a polygon over a region of interest on the provided user interface (global map). The user can select the Climatic Research Unit (CRU) precipitation or Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) as predictand. They can also upload their own predictand time series. Predictors can be extracted from sea surface temperature, sea level pressure, winds at different pressure levels, air temperature at various pressure levels, and geopotential height at different pressure levels. By default, reanalysis fields are applied as predictors, but the user can also upload their own predictors, including a wide range of compatible satellite-derived datasets. The package generates correlations of the variables selected with the predictand. The user also has the option to generate composites of the variables based on the predictand. Next, the user can extract predictors by drawing polygons over the regions that show strong correlations (composites). Then, the user can select some or all of the statistical prediction models provided. Provided models include Linear Regression models (GLM, SGLM), Tree-based models (bagging, random forest, boosting), Artificial Neural Network, and other non-linear models such as Generalized Additive Model (GAM) and Multivariate Adaptive Regression Splines (MARS). Finally, the user can download the analysis steps they used, such as the region they selected, the time period they specified, the predictand and predictors they chose and preprocessing options they used, and the model results in PDF or HTML format. Key words: Semi-automated prediction, Shiny, R, GLM, ANN, RF, GAM, MARS
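The core of the workflow described above, averaging a predictor field over a user-drawn region and regressing a predictand on it, can be sketched with synthetic data. The actual tool is built with Shiny in R; the NumPy sketch below is a stand-in, and the field names and the box-shaped "polygon" are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, nlat, nlon = 40, 10, 20
sst = rng.normal(size=(n_years, nlat, nlon))      # gridded predictor field

# "Polygon" drawn over a high-correlation region (a lat/lon box here)
x = sst[:, 2:5, 5:10].mean(axis=(1, 2))           # box-averaged predictor
rain = 2.0 * x + rng.normal(0.0, 0.2, n_years)    # synthetic predictand

# Ordinary least squares with an intercept, via lstsq
A = np.column_stack([x, np.ones(n_years)])
coef, *_ = np.linalg.lstsq(A, rain, rcond=None)
r = np.corrcoef(A @ coef, rain)[0, 1]             # skill of the linear fit
```

The tool's other model families (random forests, neural networks, GAM, MARS) slot into the same predictand/predictor pipeline; only the fitting step changes.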

  14. Development of an automated core model for nuclear reactors

    SciTech Connect

    Mosteller, R.D.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop an automated package of computer codes that can model the steady-state behavior of nuclear-reactor cores of various designs. As an added benefit, data produced for steady-state analysis also can be used as input to the TRAC transient-analysis code for subsequent safety analysis of the reactor at any point in its operating lifetime. The basic capability to perform steady-state reactor-core analysis already existed in the combination of the HELIOS lattice-physics code and the NESTLE advanced nodal code. In this project, the automated package was completed by (1) obtaining cross-section libraries for HELIOS, (2) validating HELIOS by comparing its predictions to results from critical experiments and from the MCNP Monte Carlo code, (3) validating NESTLE by comparing its predictions to results from numerical benchmarks and to measured data from operating reactors, and (4) developing a linkage code to transform HELIOS output into NESTLE input.

  15. User interface for ground-water modeling: Arcview extension

    USGS Publications Warehouse

    Tsou, M.-S.; Whittemore, D.O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  16. Ab initio diffuse-interface model for lithiated electrode interface evolution

    NASA Astrophysics Data System (ADS)

    Stournara, Maria E.; Kumar, Ravi; Qi, Yue; Sheldon, Brian W.

    2016-07-01

    The study of chemical segregation at interfaces, and in particular the ability to predict the thickness of segregated layers via analytical expressions or computational modeling, is a fundamentally challenging topic in the design of novel heterostructured materials. This issue is particularly relevant for the phase-field (PF) methodology, which has become a prominent tool for describing phase transitions. These models rely on phenomenological parameters that pertain to the interfacial energy and thickness, quantities that cannot be experimentally measured. Instead of back-calculating these parameters from experimental data, here we combine a set of analytical expressions based on the Cahn-Hilliard approach with ab initio calculations to compute the gradient energy parameter κ and the thickness λ of the segregated Li layer at the LixSi-Cu interface. With this bottom-up approach we calculate the thickness λ of the Li diffuse interface to be on the order of a few nm, in agreement with prior experimental secondary ion mass spectrometry observations. Our analysis indicates that Li segregation is primarily driven by solution thermodynamics, while the strain contribution in this system is relatively small. This combined scheme provides an essential first step in the systematic evaluation of the thermodynamic parameters of the PF methodology, and we believe that it can serve as a framework for the development of quantitative interface models in the field of Li-ion batteries.

  17. Geometrical model for the energy of semicoherent interphase interfaces

    PubMed Central

    Ecob, Roger C.; Ralph, Brian

    1980-01-01

    The basis for the considerations given in this paper is the O-lattice description of crystalline interfaces of Bollmann. In the development of his approach presented here, all possible interfacial planes between two crystal phases having a defined orientation relationship are considered. The energies of these interfaces are then computed in terms of the energies of the primary intrinsic dislocations. A number of modeling interactions are incorporated into this approach, and a better agreement with experimental data is thus obtained. PMID:16592796

  18. Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.

  19. Radiation budget measurement/model interface research

    SciTech Connect

    Vonderhaar, T.H.

    1981-10-01

    The NIMBUS 6 data were analyzed to form an up-to-date climatology of the Earth radiation budget as a basis for numerical model definition studies. Global maps depicting infrared emitted flux, net flux and albedo from processed NIMBUS 6 data for July, 1977, are presented. Zonal averages of net radiation flux for April, May, and June and zonal mean emitted flux and net flux for the December to January period are also presented. The development of two models is reported. The first is a statistical dynamical model with vertical and horizontal resolution. The second model is a two level global linear balance model. The results of time integration of the model up to 120 days, to simulate the January circulation, are discussed. Average zonal wind, meridional wind component, vertical velocity, and moisture budget are among the parameters addressed.

  20. Effective World Modeling: Multisensor Data Fusion Methodology for Automated Driving

    PubMed Central

    Elfring, Jos; Appeldoorn, Rein; van den Dries, Sjoerd; Kwakkernaat, Maurice

    2016-01-01

    The number of perception sensors on automated vehicles increases due to the increasing number of advanced driver assistance system functions and their increasing complexity. Furthermore, fail-safe systems require redundancy, thereby increasing the number of sensors even further. A one-size-fits-all multisensor data fusion architecture is not realistic due to the enormous diversity in vehicles, sensors and applications. As an alternative, this work presents a methodology that can be used to effectively come up with an implementation to build a consistent model of a vehicle’s surroundings. The methodology is accompanied by a software architecture. This combination minimizes the effort required to update the multisensor data fusion system whenever sensors or applications are added or replaced. A series of real-world experiments involving different sensors and algorithms demonstrates the methodology and the software architecture. PMID:27727171

  1. A distributed data component for the open modeling interface

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As the volume of collected data continues to increase in the environmental sciences, so does the need for effective means for accessing those data. We have developed an Open Modeling Interface (OpenMI) data component that retrieves input data for model components from environmental information syste...

  2. Interface tension and interface entropy in the 2+1 flavor Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Ke, Wei-yao; Liu, Yu-xin

    2014-04-01

    We study the QCD phases and their transitions in the 2+1 flavor Nambu-Jona-Lasinio model, with a focus on the interface effects such as the interface tension, the interface entropy, and the critical bubble size in the coexistence region of the first-order phase transitions. Our results show that under the thin-wall approximation, the interface contribution to the total entropy density changes its discontinuity scale in the first-order phase transition. However, the entropy density of the dynamical chiral symmetry (DCS) phase is always greater than that of the dynamical chiral symmetry broken (DCSB) phase in both the heating and hadronization processes. To address this entropy puzzle, the thin-wall approximation is evaluated in the present work. We find that the puzzle can be attributed to an overestimate of the critical bubble size at low temperature in the hadronization process. With an improvement on the thin-wall approximation, the entropy puzzle is well solved, with the total entropy density of the hadron-DCSB phase clearly exceeding that of the DCS-quark phase at low temperature.
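For context, the thin-wall estimate of the critical bubble size referred to above follows the textbook form from classical nucleation theory (not the model-specific expressions of the paper): the free-energy cost of a bubble of radius $R$ balances a negative volume term against a positive surface term, with $\sigma$ the interface tension and $\Delta p$ the pressure difference between the coexisting phases:

```latex
\Delta F(R) = -\frac{4}{3}\pi R^{3}\,\Delta p + 4\pi R^{2}\sigma,
\qquad
R_{c} = \frac{2\sigma}{\Delta p},
\qquad
\Delta F(R_{c}) = \frac{16\pi\sigma^{3}}{3\,\Delta p^{2}}.
```

Because $R_c$ diverges as $\Delta p \to 0$, an overestimate of the critical bubble size at low temperature inflates the interface contribution, which is the origin of the entropy puzzle discussed in the abstract.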

  3. Automated robust generation of compact 3D statistical shape models

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Tomazevic, Dejan; Pernus, Franjo

    2004-05-01

    Ascertaining the detailed shape and spatial arrangement of anatomical structures is important not only within diagnostic settings but also in the areas of planning, simulation, intraoperative navigation, and tracking of pathology. Robust, accurate and efficient automated segmentation of anatomical structures is difficult because of their complexity and inter-patient variability. Furthermore, the position of the patient during image acquisition, the imaging device and protocol, image resolution, and other factors induce additional variations in shape and appearance. Statistical shape models (SSMs) have proven quite successful in capturing structural variability. A possible approach to obtain a 3D SSM is to extract reference voxels by precisely segmenting the structure in one, reference image. The corresponding voxels in other images are determined by registering the reference image to each other image. The SSM obtained in this way describes statistically plausible shape variations over the given population as well as variations due to imperfect registration. In this paper, we present a completely automated method that significantly reduces shape variations induced by imperfect registration, thus allowing a more accurate description of variations. At each iteration, the derived SSM is used for coarse registration, which is further improved by describing finer variations of the structure. The method was tested on 64 lumbar spinal column CT scans, from which 23, 38, 45, 46 and 42 volumes of interest containing vertebra L1, L2, L3, L4 and L5, respectively, were extracted. Separate SSMs were generated for each vertebra. The results show that the method is capable of reducing the variations induced by registration errors.
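The statistical core of an SSM, once landmark correspondences have been established by registration (the hard part addressed in the paper), is a PCA over aligned landmark vectors. A minimal NumPy sketch, with the shapes, landmark count, and function names being illustrative assumptions:

```python
import numpy as np

def build_ssm(shapes):
    """Build a statistical shape model from pre-aligned shapes, each an
    (n_landmarks, 2) array. Returns the mean shape vector, the variation
    modes (rows of Vt), and the variance captured by each mode."""
    X = np.stack([s.ravel() for s in shapes])     # (n_shapes, 2*n_landmarks)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = S ** 2 / (len(shapes) - 1)              # variance per mode
    return mean, Vt, var

def synthesize(mean, modes, var, b):
    """Generate a plausible shape vector from mode coefficients b,
    expressed in standard deviations along each mode."""
    k = len(b)
    return mean + (np.asarray(b) * np.sqrt(var[:k])) @ modes[:k]
```

Restricting the coefficients `b` to a few standard deviations is what constrains a segmentation to statistically plausible shapes.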

  4. Finite driving rates in interface models of Barkhausen noise.

    PubMed

    de Queiroz, S L; Bahiana, M

    2001-12-01

    We consider a single-interface model for the description of Barkhausen noise in soft ferromagnetic materials. Previously, the model was used only in the adiabatic regime of infinitely slow field ramping. We introduce finite driving rates and analyze the scaling of event sizes and durations for different regimes of the driving rate. Coexistence of intermittency, with nontrivial scaling laws, and finite-velocity interface motion is observed for high enough driving rates. Power spectra show a power-law decay ~ω^(-t), with t < 2 for finite driving rates, revealing the influence of the internal structure of avalanches.

  5. Modelling biological invasions: Individual to population scales at interfaces.

    PubMed

    Belmonte-Beitia, J; Woolley, T E; Scott, J G; Maini, P K; Gaffney, E A

    2013-10-07

    Extracting the population level behaviour of biological systems from that of the individual is critical in understanding dynamics across multiple scales and thus has been the subject of numerous investigations. Here, the influence of spatial heterogeneity in such contexts is explored for interfaces with a separation of the length scales characterising the individual and the interface, a situation that can arise in applications involving cellular modelling. As an illustrative example, we consider cell movement between white and grey matter in the brain which may be relevant in considering the invasive dynamics of glioma. We show that while one can safely neglect intrinsic noise, at least when considering glioma cell invasion, profound differences in population behaviours emerge in the presence of interfaces with only subtle alterations in the dynamics at the individual level. Transport driven by local cell sensing generates predictions of cell accumulations along interfaces where cell motility changes. This behaviour is not predicted with the commonly used Fickian diffusion transport model, but can be extracted from preliminary observations of specific cell lines in recent, novel, cryo-imaging. Consequently, these findings suggest a need to consider the impact of individual behaviour, spatial heterogeneity and especially interfaces in experimental and modelling frameworks of cellular dynamics, for instance in the characterisation of glioma cell motility.

  6. Microfluidics on liquid handling stations (μF-on-LHS): an industry compatible chip interface between microfluidics and automated liquid handling stations.

    PubMed

    Waldbaur, Ansgar; Kittelmann, Jörg; Radtke, Carsten P; Hubbuch, Jürgen; Rapp, Bastian E

    2013-06-21

    We describe a generic microfluidic interface design that allows the connection of microfluidic chips to established industrial liquid handling stations (LHS). A molding tool has been designed that allows fabrication of low-cost disposable polydimethylsiloxane (PDMS) chips with interfaces that provide convenient and reversible connection of the microfluidic chip to industrial LHS. The concept allows complete freedom of design for the microfluidic chip itself. In this setup all peripheral fluidic components (such as valves and pumps) usually required for microfluidic experiments are provided by the LHS. Experiments (including readout) can be carried out fully automated using the hardware and software provided by LHS manufacturer. Our approach uses a chip interface that is compatible with widely used and industrially established LHS which is a significant advancement towards near-industrial experimental design in microfluidics and will greatly facilitate the acceptance and translation of microfluidics technology in industry.

  7. Flight deck human-automation mode confusion detection using a generalized fuzzy hidden Markov model

    NASA Astrophysics Data System (ADS)

    Lyu, Hao

    Due to the need for aviation safety, convenience, and efficiency, the autopilot has been introduced into the cockpit. The fast development of the autopilot has brought great benefits to the aviation industry. On the human side, the flight deck has been designed to be a complex, tightly-coupled, and spatially distributed system. The problem of dysfunctional interaction between the pilot and the automation (human-automation interaction issue) has become more and more visible. Thus, detection of a mismatch between the pilot's expectation and automation's behavior in a timely manner is required. In order to solve this challenging problem, separate modeling of the pilot and the automation is necessary. In this thesis, an intent-based framework is introduced to detect the human-automation interaction issue. Under this framework, the pilot's expectation of the aircraft is modeled by pilot intent while the behavior of the automation system is modeled by automation intent. The mode confusion is detected when the automation intent differs from the pilot intent. The pilot intent is inferred by comparing the target value set by the pilot with the aircraft's current state. Meanwhile, the automation intent is inferred through the Generalized Fuzzy Hidden Markov Model (GFHMM), which is an extension of the classical Hidden Markov Model. The stochastic characteristic of the hidden intents is considered by introducing fuzzy logic. Different from the previous approaches of inferring automation intent, GFHMM does not require a probabilistic model for certain flight modes as prior knowledge. The parameters of GFHMM (initial fuzzy density of the intent, fuzzy transition density, and fuzzy emission density) are determined from the flight data by using a machine learning technique, the Fuzzy C-Means clustering algorithm (FCM). Lastly, both the pilot's and automation's intent inference algorithms and the mode confusion detection method are validated through flight data.
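The parameter-learning step mentioned above relies on Fuzzy C-Means clustering. A minimal NumPy sketch of standard FCM follows (this is not the thesis's GFHMM training code; the fuzzifier m, cluster count, and initialization are illustrative):

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain NumPy Fuzzy C-Means. Returns cluster centers and the fuzzy
    membership matrix U (one row per sample, rows sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # valid fuzzy partition
    for _ in range(max_iter):
        Um = U ** m
        # centers are membership-weighted means of the samples
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance of every sample to every center (epsilon avoids /0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard FCM membership update: u_ij ∝ d_ij^(-2/(m-1))
        U_new = d ** (-2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```

Unlike hard k-means, each sample keeps a graded membership in every cluster, which is what makes the resulting densities usable as fuzzy model parameters.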

  8. Modelling interfacial cracking with non-matching cohesive interface elements

    NASA Astrophysics Data System (ADS)

    Nguyen, Vinh Phu; Nguyen, Chi Thanh; Bordas, Stéphane; Heidarpour, Amin

    2016-11-01

    Interfacial cracking occurs in many engineering problems such as delamination in composite laminates, matrix/interface debonding in fibre reinforced composites etc. Computational modelling of these interfacial cracks usually employs compatible or matching cohesive interface elements. In this paper, incompatible or non-matching cohesive interface elements are proposed for interfacial fracture mechanics problems. They allow non-matching finite element discretisations of the opposite crack faces thus lifting the constraint on the compatible discretisation of the domains sharing the interface. The formulation is based on a discontinuous Galerkin method and works with both initially elastic and rigid cohesive laws. The proposed formulation has the following advantages compared to classical interface elements: (i) non-matching discretisations of the domains and (ii) no high dummy stiffness. Two and three dimensional quasi-static fracture simulations are conducted to demonstrate the method. Our method not only simplifies the meshing process but also requires less computational effort than standard interface elements for problems involving materials/solids with a large mismatch in stiffness.

  9. Molecular Modeling of Water Interfaces: From Molecular Spectroscopy to Thermodynamics.

    PubMed

    Nagata, Yuki; Ohto, Tatsuhiko; Backus, Ellen H G; Bonn, Mischa

    2016-04-28

    Understanding aqueous interfaces at the molecular level is not only fundamentally important, but also highly relevant for a variety of disciplines. For instance, electrode-water interfaces are relevant for electrochemistry, as are mineral-water interfaces for geochemistry and air-water interfaces for environmental chemistry; water-lipid interfaces constitute the boundaries of the cell membrane, and are thus relevant for biochemistry. One of the major challenges in these fields is to link macroscopic properties such as interfacial reactivity, solubility, and permeability as well as macroscopic thermodynamic and spectroscopic observables to the structure, structural changes, and dynamics of molecules at these interfaces. Simulations, by themselves, or in conjunction with appropriate experiments, can provide such molecular-level insights into aqueous interfaces. In this contribution, we review the current state-of-the-art of three levels of molecular dynamics (MD) simulation: ab initio, force field, and coarse-grained. We discuss the advantages, the potential, and the limitations of each approach for studying aqueous interfaces, by assessing computations of the sum-frequency generation spectra and surface tension. The comparison of experimental and simulation data provides information on the challenges of future MD simulations, such as improving the force field models and the van der Waals corrections in ab initio MD simulations. Once good agreement between experimental observables and simulation can be established, the simulation can be used to provide insights into the processes at a level of detail that is generally inaccessible to experiments. As an example we discuss the mechanism of the evaporation of water. We finish by presenting an outlook outlining four future challenges for molecular dynamics simulations of aqueous interfacial systems.

  10. NASA: Model development for human factors interfacing

    NASA Technical Reports Server (NTRS)

    Smith, L. L.

    1984-01-01

    The results of an intensive literature review in the general topics of human error analysis, stress and job performance, and accident and safety analysis revealed no usable techniques or approaches for analyzing human error in ground or space operations tasks. A task review model is described and proposed to be developed in order to reduce the degree of labor intensiveness in ground and space operations tasks. An extensive number of annotated references are provided.

  11. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execut...

  12. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parame...

  13. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execu...

  14. Attenuation of numerical artefacts in the modelling of fluid interfaces

    NASA Astrophysics Data System (ADS)

    Evrard, Fabien; van Wachem, Berend G. M.; Denner, Fabian

    2015-11-01

    Numerical artefacts in the modelling of fluid interfaces, such as parasitic currents or spurious capillary waves, present a considerable problem in two-phase flow modelling. Parasitic currents result from an imperfect evaluation of the interface curvature and can severely affect the flow, whereas spatially underresolved (spurious) capillary waves impose strict limits on the time-step and, hence, dictate the required computational resources for surface-tension-dominated flows. By applying an additional shear stress term at the fluid interface, thereby dissipating the surface energy associated with small wavelengths, we have been able to considerably reduce the adverse impact of parasitic currents and mitigate the time-step limit imposed by capillary waves. However, a careful choice of the applied interface viscosity is crucial, since an excess of additional dissipation compromises the accuracy of the solution. We present the derivation of the additional interfacial shear stress term, explain the underlying physical mechanism and discuss the impact on parasitic currents and interface instabilities based on a variety of numerical experiments. We acknowledge financial support from the Engineering and Physical Sciences Research Council (EPSRC) through Grant No. EP/M021556/1 and from PETROBRAS.

  15. Automated optic disk boundary detection by modified active contour model.

    PubMed

    Xu, Juan; Chutatape, Opas; Chew, Paul

    2007-03-01

    This paper presents a novel deformable-model-based algorithm for fully automated detection of optic disk boundary in fundus images. The proposed method improves and extends the original snake (deforming-only technique) in two aspects: clustering and smoothing update. The contour points are first self-separated into edge-point group or uncertain-point group by clustering after each deformation, and these contour points are then updated by different criteria based on different groups. The updating process combines both the local and global information of the contour to achieve the balance of contour stability and accuracy. The modifications make the proposed algorithm more accurate and robust to blood vessel occlusions, noises, ill-defined edges and fuzzy contour shapes. The comparative results show that the proposed method can estimate the disk boundaries of 100 test images closer to the ground truth, as measured by mean distance to closest point (MDCP) < 3 pixels, with a better success rate than those obtained by the gradient vector flow snake (GVF-snake) and modified active shape models (ASM).
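The evaluation metric quoted above, mean distance to closest point (MDCP), is straightforward to compute once both boundaries are sampled as point sets. A sketch of the one-directional variant, assuming each contour is given as an (n, 2) array of pixel coordinates:

```python
import numpy as np

def mean_distance_to_closest_point(contour_a, contour_b):
    """MDCP: for each point on contour_a, find the distance to the
    nearest point on contour_b; return the mean of those distances."""
    a = np.asarray(contour_a, dtype=float)
    b = np.asarray(contour_b, dtype=float)
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

For example, two densely sampled concentric circles of radius 10 and 13 pixels give an MDCP of about 3 pixels. Note the measure is asymmetric; a symmetric variant averages both directions.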

  16. Interface fracture and composite deformation of model laminates

    NASA Astrophysics Data System (ADS)

    Fox, Matthew R.

    Model laminates were studied to improve the understanding of composite mechanical behavior. NiAl/Mo and NiAl/Cr model laminates, with a series of interfaces, were bonded at 1100°C. Reaction layers were present in all laminates, varying in thickness with bonding conditions. Interface fracture strengths and resistances were determined under primarily mode II loading conditions using a novel technique, the asymmetrically-loaded shear (ALS) test, in which one layer of the laminate was loaded in compression, producing a stable interface crack. The NiAl/Mo interface was also fractured in four-point bending. A small amount of plasticity was found to play a role in crack initiation. During steady-state mode II interface fracture of NiAl/Mo model laminates, large-scale slip was observed near the crack tip in the NiAl adjacent to the interface. After testing, the local slope and curvature of the interface were characterized at intervals along the interface and at slip locations to qualitatively describe local stresses present at and just ahead of the crack tip. The greatest percentage of slip occurred where closing forces on the crack tip were below the maximum value and were decreasing with crack growth. A mechanism for crack propagation is presented describing the role of large-scale slip in crack propagation. The mechanical response of structural laminates in 3-D stress states, as would be present in a polycrystalline aggregate composed of lamellar grains, are lacking. In order to understand the response of laminates composed of hard and soft phases, Pb/Zn laminates were prepared and tested in compression with varying lamellar orientation relative to the loading axis. A model describing the mechanical response in a general state assuming elastic-perfectly plastic isotropic layers was developed. For the 90° laminate, a different approach was applied, using the friction hill concepts used in forging analyses. With increasing ratios of cross-sectional radius to layer

  17. Individual Differences in Response to Automation: The Five Factor Model of Personality

    ERIC Educational Resources Information Center

    Szalma, James L.; Taylor, Grant S.

    2011-01-01

    This study examined the relationship of operator personality (Five Factor Model) and characteristics of the task and of adaptive automation (reliability and adaptiveness--whether the automation was well-matched to changes in task demand) to operator performance, workload, stress, and coping. This represents the first investigation of how the Five…

  18. Aviation Safety: Modeling and Analyzing Complex Interactions between Humans and Automated Systems

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Brat, Guillaume; Clancey, William J.; Linde, Charlotte; Raimondi, Franco; Seah, Chin; Shafto, Michael

    2013-01-01

    The on-going transformation from the current US Air Traffic System (ATS) to the Next Generation Air Traffic System (NextGen) will force the introduction of new automated systems and most likely will cause automation to migrate from ground to air. This will yield new function allocations between humans and automation and therefore change the roles and responsibilities in the ATS. Yet, safety in NextGen is required to be at least as good as in the current system. We therefore need techniques to evaluate the safety of the interactions between humans and automation. We think that current human factor studies and simulation-based techniques will fall short in the face of the ATS's complexity, and that we need to add more automated techniques to simulations, such as model checking, which offers exhaustive coverage of the non-deterministic behaviors in nominal and off-nominal scenarios. In this work, we present a verification approach based both on simulations and on model checking for evaluating the roles and responsibilities of humans and automation. Models are created using Brahms (a multi-agent framework) and we show that the traditional Brahms simulations can be integrated with automated exploration techniques based on model checking, thus offering a complete exploration of the behavioral space of the scenario. Our formal analysis supports the notion of beliefs and probabilities to reason about human behavior. We demonstrate the technique with the Ueberlingen accident, since it exemplifies authority problems when receiving conflicting advice from human and automated systems.

  19. Designers' models of the human-computer interface

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Breedin, Sarah D.

    1993-01-01

    Understanding design models of the human-computer interface (HCI) may produce two types of benefits. First, interface development often requires input from two different types of experts: human factors specialists and software developers. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet, they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers do not start with a blank slate; rather they begin with a general model of the object they are designing. The author's approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: human factors specialists with HCI design experience, software developers with HCI design experience, and a baseline group of computer users with no experience in HCI design. The components of the user interface included both display components such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help. The judgments of the three groups were analyzed using hierarchical cluster analysis and Pathfinder. These methods indicated, respectively, how the groups categorized the concepts, and network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.

  20. Atomic Models of Strong Solids Interfaces Viewed as Composite Structures

    NASA Astrophysics Data System (ADS)

    Staffell, I.; Shang, J. L.; Kendall, K.

    2014-02-01

    This paper looks back through the 1960s to the invention of carbon fibres and the theories of Strong Solids. In particular it focuses on the fracture mechanics paradox of strong composites containing weak interfaces. From Griffith theory, it is clear that three parameters must be considered in producing a high strength composite: minimising defects; maximising the elastic modulus; and raising the fracture energy along the crack path. The interface then introduces two further factors: elastic modulus mismatch causing crack stopping; and debonding along a brittle interface due to low interface fracture energy. Consequently, an understanding of the fracture energy of a composite interface is needed. Using an interface model based on atomic interaction forces, it is shown that a single layer of contaminant atoms between the matrix and the reinforcement can reduce the interface fracture energy by an order of magnitude, giving a large delamination effect. The paper also looks to a future in which cars will be made largely from composite materials. Radical improvements in automobile design are necessary because the number of cars worldwide is predicted to double. This paper predicts gains in fuel economy by suggesting a new theory of automobile fuel consumption using an adaptation of Coulomb's friction law. It is demonstrated both by experiment and by theoretical argument that the energy dissipated in standard vehicle tests depends only on weight. Consequently, moving from metal to fibre construction can give a factor 2 improved fuel economy performance, roughly the same as moving from a petrol combustion drive to hydrogen fuel cell propulsion. Using both options together can give a factor 4 improvement, as demonstrated by testing a composite car using the ECE15 protocol.

  1. Critical Interfaces in the Random-Bond Potts Model

    NASA Astrophysics Data System (ADS)

    Jacobsen, Jesper L.; Le Doussal, Pierre; Picco, Marco; Santachiara, Raoul; Wiese, Kay Jörg

    2009-02-01

    We study geometrical properties of interfaces in the random-temperature q-state Potts model as an example of a conformal field theory weakly perturbed by quenched disorder. Using conformal perturbation theory in q-2 we compute the fractal dimension of Fortuin-Kasteleyn (FK) domain walls. We also compute it numerically, both via the Wolff cluster algorithm for q=3 and via transfer-matrix evaluations. We also obtain numerical results for the fractal dimension of spin cluster interfaces for q=3. These are found numerically consistent with the duality κ_spin κ_FK = 16 as expressed in putative SLE parameters.

  2. Critical interfaces in the random-bond Potts model.

    PubMed

    Jacobsen, Jesper L; Le Doussal, Pierre; Picco, Marco; Santachiara, Raoul; Wiese, Kay Jörg

    2009-02-20

    We study geometrical properties of interfaces in the random-temperature q-state Potts model as an example of a conformal field theory weakly perturbed by quenched disorder. Using conformal perturbation theory in q-2 we compute the fractal dimension of Fortuin-Kasteleyn (FK) domain walls. We also compute it numerically, both via the Wolff cluster algorithm for q=3 and via transfer-matrix evaluations. We also obtain numerical results for the fractal dimension of spin cluster interfaces for q=3. These are found numerically consistent with the duality κ_spin κ_FK = 16 as expressed in putative SLE parameters.

  3. Automated forward mechanical modeling of wrinkle ridges on Mars

    NASA Astrophysics Data System (ADS)

    Nahm, Amanda; Peterson, Samuel

    2016-04-01

    One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing for faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometry [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time-consuming, requiring user inputs to change model parameters and to calculate the associated displacements for each model, which limits the number of models and parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites. For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B
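Batch generation of input files over pre-defined parameter ranges, as described above, can be sketched as follows. The ranges and dictionary layout below are illustrative assumptions; the abstract does not give the actual values or the Coulomb input format.

```python
import itertools

# Hypothetical parameter ranges; the authors' actual values for the
# 3770 Suite 1 inputs are not given in the abstract.
displacements = [0.5, 1.0, 1.5]   # D, fault displacement (km)
dips = [20, 30, 40]               # delta, fault dip angle (deg)
upper_tips = [0.1, 0.5]           # t, depth to upper fault tip (km)
lower_tips = [5.0, 10.0]          # B, depth to lower fault tip (km)

def generate_inputs():
    """Enumerate one model definition per parameter combination,
    mimicking GUI-free batch generation of Coulomb input files."""
    inputs = []
    for d, dip, t, b in itertools.product(
            displacements, dips, upper_tips, lower_tips):
        if t >= b:  # the upper fault tip must lie above the lower tip
            continue
        inputs.append({"D": d, "dip": dip, "t": t, "B": b})
    return inputs

models = generate_inputs()
print(len(models))  # 3 * 3 * 2 * 2 = 36 combinations here
```

Each dictionary would then be serialized into one input file and run without further user interaction, which is what removes the per-model GUI overhead.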

  4. Influence of atmospheric stability on model wind turbine wake interface

    NASA Astrophysics Data System (ADS)

    Taylor, Amelia; Gomez, Virgilio; Novoa, Santiago; Pol, Suhas; Westergaard, Carsten; Castillo, Luciano

    2014-11-01

    Differences in wind turbine wake deficit recovery for various atmospheric stability conditions (stratification) have been attributed to turbulence intensity levels at different conditions. It is shown that buoyancy differences at the wind turbine wake interface should be considered in addition to varying turbulence intensity to describe the net momentum transport across the wake interface. Mixing, induced by tip and hub vortices or wake swirl, induces these buoyancy differences. The above hypothesis was tested using field measurements of the wake interface for a 1.17 m model turbine installed at 6.25 m hub height. Atmospheric conditions were characterized using a 10 m meteorological tower upstream of the turbine, while a vertical rake of sonic anemometers clustered around the hub height on a downstream tower measured the wake. Data was collected over the course of seven months, during varying stability conditions, and with five different turbine configurations - including a single turbine at three different positions, two turbines in a column, and three turbines in a column. Presented are results showing the behavior of the wake (particularly the wake interface), for unstable, stable, and neutral conditions. We observed that the swirl in the wake causes mixing of the inflow, leading to a constant density profile in the far wake that causes density jumps at the wake interfaces for stratified inflow.

  5. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  6. A conforming to interface structured adaptive mesh refinement technique for modeling fracture problems

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand

    2016-12-01

    A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.

  7. A conforming to interface structured adaptive mesh refinement technique for modeling fracture problems

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand

    2017-04-01

    A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.

  8. Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.

    PubMed

    Engemann, Denis A; Gramfort, Alexandre

    2015-03-01

    Magnetoencephalography and electroencephalography (M/EEG) measure non-invasively the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain-computer interfaces (BCI) systems, in nearly all source localization methods for spatial whitening as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation all of these models can be tuned and compared based on Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals.
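The core idea above, regularizing the empirical covariance and selecting the regularization strength by held-out Gaussian likelihood, can be sketched generically. This is a minimal diagonal-shrinkage example with synthetic data, not the authors' pipeline; the shrinkage form and parameter grid are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def shrunk_covariance(X, alpha):
    """Shrink the empirical covariance toward its diagonal:
    C(alpha) = (1 - alpha) * EC + alpha * diag(EC)."""
    ec = np.cov(X, rowvar=False, bias=True)
    return (1 - alpha) * ec + alpha * np.diag(np.diag(ec))

def gaussian_loglik(X, cov):
    """Average zero-mean Gaussian log-likelihood of the rows of X."""
    n_features = X.shape[1]
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', X, np.linalg.inv(cov), X)
    return float(np.mean(
        -0.5 * (quad + logdet + n_features * np.log(2 * np.pi))))

# Few samples relative to the dimension: the empirical covariance
# (alpha = 0) is poorly conditioned, so shrinkage can win on held-out data.
true_cov = np.diag([1.0, 2.0, 3.0, 4.0])
train = rng.multivariate_normal(np.zeros(4), true_cov, size=10)
held_out = rng.multivariate_normal(np.zeros(4), true_cov, size=200)

scores = {a: gaussian_loglik(held_out, shrunk_covariance(train, a))
          for a in (0.0, 0.1, 0.5)}
best = max(scores, key=scores.get)  # alpha chosen by held-out likelihood
```

The paper's point is that the winning model (shrinkage, PPCA, FA, etc.) depends on the data, which is why the selection itself is cross-validated rather than hand-tuned.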

  9. Computer modelling of nanoscale diffusion phenomena at epitaxial interfaces

    NASA Astrophysics Data System (ADS)

    Michailov, M.; Ranguelov, B.

    2014-05-01

    The present study outlines an important area in the application of computer modelling to interface phenomena. Being relevant to the fundamental physical problem of competing atomic interactions in systems with reduced dimensionality, these phenomena attract special academic attention. On the other hand, from a technological point of view, detailed knowledge of the fine atomic structure of surfaces and interfaces correlates with a large number of practical problems in materials science. Typical examples are formation of nanoscale surface patterns, two-dimensional superlattices, atomic intermixing at an epitaxial interface, atomic transport phenomena, structure and stability of quantum wires on surfaces. We discuss here a variety of diffusion mechanisms that control surface-confined atomic exchange, formation of alloyed atomic stripes and islands, relaxation of pure and alloyed atomic terraces, diffusion of clusters and their stability in an external field. The computational model refines important details of diffusion of adatoms and clusters accounting for the energy barriers at specific atomic sites: smooth domains, terraces, steps and kinks. The diffusion kinetics, integrity and decomposition of atomic islands in an external field are considered in detail and assigned to specific energy regions depending on the cluster stability in mass transport processes. The presented ensemble of diffusion scenarios opens a way for nanoscale surface design towards regular atomic interface patterns with exotic physical features.

  10. Universality in Sandpiles, Interface Depinning, and Earthquake Models

    SciTech Connect

    Paczuski, M.; Boettcher, S. |

    1996-07-01

    Recent numerical results for a model describing dispersive transport in ricepiles are explained by mapping the model to the depinning transition of an elastic interface that is dragged at one end through a random medium. The average velocity of transport vanishes with system size L as ⟨v⟩ ~ L^(2-D) ~ L^(-0.23), and the avalanche size distribution exponent τ = 2 - 1/D ≈ 1.55, where D ≈ 2.23 from interface depinning. We conjecture that the purely deterministic Burridge-Knopoff "train" model for earthquakes is in the same universality class. © 1996 The American Physical Society.
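The exponents quoted in the abstract are tied together by the scaling relation τ = 2 - 1/D; a quick check with D ≈ 2.23 reproduces both reported values:

```python
D = 2.23  # fractal dimension from interface depinning (from the abstract)

tau = 2 - 1 / D       # avalanche size distribution exponent
velocity_exp = 2 - D  # <v> ~ L^(2 - D)

print(round(tau, 2))           # -> 1.55
print(round(velocity_exp, 2))  # -> -0.23
```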

  11. A Hybrid Tool for User Interface Modeling and Prototyping

    NASA Astrophysics Data System (ADS)

    Trætteberg, Hallvard

    Although many methods have been proposed, model-based development methods have only to some extent been adopted for UI design. In particular, they are not easy to combine with user-centered design methods. In this paper, we present a hybrid UI modeling and GUI prototyping tool, which is designed to fit better with IS development and UI design traditions. The tool includes a diagram editor for domain and UI models and an execution engine that integrates UI behavior, live UI components and sample data. Thus, both model-based user interface design and prototyping-based iterative design are supported.

  12. Developing A Laser Shockwave Model For Characterizing Diffusion Bonded Interfaces

    SciTech Connect

    James A. Smith; Jeffrey M. Lacy; Barry H. Rabin

    2014-07-01

    Session: Other advances in QNDE and related topics (Laser-ultrasonics). 41st Annual Review of Progress in Quantitative Nondestructive Evaluation (QNDE) Conference, July 20-25, 2014, Boise Centre, Boise, Idaho. James A. Smith, Jeffrey M. Lacy, Barry H. Rabin, Idaho National Laboratory, Idaho Falls, ID. ABSTRACT: The US National Nuclear Security Administration has a Global Threat Reduction Initiative (GTRI) charged with reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low-enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU. The new LEU fuel is based on a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to complete the fuel qualification process, the laser shock technique is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. The Laser Shockwave Technique (LST) is being investigated to characterize interface strength in fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large-amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, the deposition of laser energy into the containment layer on the specimen's surface is intractably complex. The shock wave energy is inferred from the velocity on the back face and the depth of the impression left on the surface by the high-pressure plasma pulse created by the shock laser. To help quantify the stresses and strengths at the interface, a finite element model is being developed and validated by comparing numerical and experimental results for back-face velocities and front-face depressions. This paper will report on initial efforts to develop a finite element model for laser

  13. Multiscale modeling of droplet interface bilayer membrane networks

    PubMed Central

    Freeman, Eric C.; Farimani, Amir B.; Aluru, Narayana R.; Philen, Michael K.

    2015-01-01

    Droplet interface bilayer (DIB) networks are considered for the development of stimuli-responsive membrane-based materials inspired by cellular mechanics. These DIB networks are often modeled as combinations of electrical circuit analogues, creating complex networks of capacitors and resistors that mimic the biomolecular structures. These empirical models are capable of replicating data from electrophysiology experiments, but these models do not accurately capture the underlying physical phenomena and consequently do not allow for simulations of material functionalities beyond the voltage-clamp or current-clamp conditions. The work presented here provides a more robust description of DIB network behavior through the development of a hierarchical multiscale model, recognizing that the macroscopic network properties are functions of their underlying molecular structure. The result of this research is a modeling methodology based on controlled exchanges across the interfaces of neighboring droplets. This methodology is validated against experimental data, and an extension case is provided to demonstrate possible future applications of droplet interface bilayer networks. PMID:26594262
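The electrical circuit analogue mentioned above, a membrane resistance and capacitance in parallel for each bilayer, can be illustrated with a minimal current-clamp simulation. The parameter values are illustrative assumptions, not taken from the paper.

```python
# Illustrative parameters (not from the paper): one droplet interface
# bilayer modeled as its usual circuit analogue, a membrane resistance
# R in parallel with a membrane capacitance C.
R = 1e9    # membrane resistance (ohm)
C = 1e-10  # membrane capacitance (farad), so tau = R*C = 0.1 s

def membrane_voltage(i_inj, dt=1e-5, steps=20000):
    """Forward-Euler integration of C dV/dt = I - V/R: the voltage
    response of the RC analogue to a constant injected current."""
    v = 0.0
    for _ in range(steps):
        v += dt * (i_inj - v / R) / C
    return v

# 0.2 s (two time constants) of 0.1 nA injection: the voltage has
# climbed to about 86% of its steady-state value I*R = 0.1 V.
v_final = membrane_voltage(1e-10)
```

The paper's argument is precisely that such lumped RC fits reproduce clamp experiments but not the underlying molecular physics, which motivates the hierarchical multiscale model.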

  14. Symmetric model of compressible granular mixtures with permeable interfaces

    NASA Astrophysics Data System (ADS)

    Saurel, Richard; Le Martelot, Sébastien; Tosello, Robert; Lapébie, Emmanuel

    2014-12-01

    Compressible granular materials are involved in many applications, some of them being related to energetic porous media. Gas permeation effects are important during their compaction stage, as well as their eventual chemical decomposition. Also, many situations involve porous media separated from pure fluids through two-phase interfaces. It is thus important to develop theoretical and numerical formulations to deal with granular materials in the presence of both two-phase interfaces and gas permeation effects. A similar topic was addressed for fluid mixtures and interfaces with the Discrete Equations Method (DEM) [R. Abgrall and R. Saurel, "Discrete equations for physical and numerical compressible multiphase mixtures," J. Comput. Phys. 186(2), 361-396 (2003)] but it seemed impossible to extend this approach to granular media, as intergranular stress [K. K. Kuo, V. Yang, and B. B. Moore, "Intragranular stress, particle-wall friction and speed of sound in granular propellant beds," J. Ballist. 4(1), 697-730 (1980)] and the associated configuration energy [J. B. Bdzil, R. Menikoff, S. F. Son, A. K. Kapila, and D. S. Stewart, "Two-phase modeling of deflagration-to-detonation transition in granular materials: A critical examination of modeling issues," Phys. Fluids 11, 378 (1999)] were present with significant effects. An approach to deal with fluid-porous media interfaces was derived in Saurel et al. ["Modelling dynamic and irreversible powder compaction," J. Fluid Mech. 664, 348-396 (2010)] but its validity was restricted to weak velocity disequilibrium only. Thanks to a deeper analysis, the DEM is successfully extended to granular media modelling in the present paper. It results in an enhanced version of the Baer and Nunziato ["A two-phase mixture theory for the deflagration-to-detonation transition (DDT) in reactive granular materials," Int. J. Multiphase Flow 12(6), 861-889 (1986)] model, as symmetry of the formulation is now preserved. Several computational examples are

  15. Generalized model for solid-on-solid interface growth.

    PubMed

    Richele, M F; Atman, A P F

    2015-05-01

    We present a probabilistic cellular automaton (PCA) model to study solid-on-solid interface growth in which the transition rules depend on the local morphology of the profile obtained from the interface representation of the PCA. We show that the model is able to reproduce a wide range of patterns whose critical roughening exponents are associated with different universality classes, including random deposition, Edwards-Wilkinson, and Kardar-Parisi-Zhang. By means of the growth exponent method, we consider a particular set of the model parameters to build the two-dimensional phase diagram corresponding to a planar cut of the higher-dimensional parameter space. A strong indication of phase transitions between different universality classes can be observed, evincing different regimes of deposition, from layer-by-layer to Volmer-Weber and Stranski-Krastanov-like modes. We expect that this model can be useful to predict the morphological properties of interfaces obtained in different surface deposition problems, since it allows us to simulate several experimental situations by setting the values of the specific transition probabilities in a very simple and direct way.
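The simplest of the universality classes named above, random deposition, makes a compact reference example: particles land on random columns and the interface width grows as the square root of the deposition time. This is a generic textbook sketch, not the authors' PCA model.

```python
import random
import math

def random_deposition(width, n_particles, seed=1):
    """Simplest solid-on-solid growth: each particle lands on a
    uniformly random column and sticks on top (random deposition
    universality class)."""
    random.seed(seed)
    h = [0] * width
    for _ in range(n_particles):
        h[random.randrange(width)] += 1
    return h

def roughness(h):
    """Interface width W = sqrt(<h^2> - <h>^2)."""
    mean = sum(h) / len(h)
    return math.sqrt(sum((x - mean) ** 2 for x in h) / len(h))

# For random deposition W ~ t^(1/2), where t is the mean height;
# quadrupling the deposition time should roughly double the width.
w1 = roughness(random_deposition(1000, 100 * 1000))
w2 = roughness(random_deposition(1000, 400 * 1000))
```

Relaxation or sticking rules added to the landing step would move the growth into the Edwards-Wilkinson or Kardar-Parisi-Zhang classes, which is the kind of distinction the PCA's transition probabilities control.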

  16. Quantitative model studies for interfaces in organic electronic devices

    NASA Astrophysics Data System (ADS)

    Gottfried, J. Michael

    2016-11-01

    In organic light-emitting diodes and similar devices, organic semiconductors are typically contacted by metal electrodes. Because the resulting metal/organic interfaces have a large impact on the performance of these devices, their quantitative understanding is indispensable for the further rational development of organic electronics. A study by Kröger et al (2016 New J. Phys. 18 113022) of an important single-crystal based model interface provides detailed insight into its geometric and electronic structure and delivers valuable benchmark data for computational studies. In view of the differences between typical surface-science model systems and real devices, a ‘materials gap’ is identified that needs to be addressed by future research to make the knowledge obtained from fundamental studies even more beneficial for real-world applications.

  17. Stability of finite difference models containing two boundaries or interfaces

    NASA Technical Reports Server (NTRS)

    Trefethen, L. N.

    1984-01-01

    The stability of finite difference models of hyperbolic initial boundary value problems is connected with the propagation and reflection of parasitic waves. Wave propagation ideas are applied to models containing two boundaries or interfaces, where repeated reflection of trapped wave packets is a potential new source of instability. Various known instability phenomena are accounted for in a unified way. Results show: (1) dissipativity does not ensure stability when three or more formulas are concatenated at a boundary or internal interface; (2) algebraic GKS instabilities can be converted by a second boundary to exponential instabilities only when an infinite numerical reflection coefficient is present; and (3) GKS-stability and P-stability can be established in certain problems by showing that all numerical reflection coefficients have modulus less than 1.

  18. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    NASA Astrophysics Data System (ADS)

    Scott, Erin; Serpetti, Natalia; Steenbeek, Jeroen; Heymans, Johanna Jacomina

The Stepwise Fitting Procedure automates testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observation reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important to evaluate any model that will be used for ecosystem based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting > 1000 specific individual searches to find the statistically 'best fit' model. The novel fitting procedure automates this manual process, producing accurate results and letting the modeller concentrate on investigating the 'best fit' model for ecological accuracy.
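The hypothesis-selection loop at the heart of such a procedure can be sketched generically: fit a family of candidate models of increasing complexity to the reference data and keep the one with the lowest information criterion. This is an illustration only (polynomial models ranked by AIC on synthetic data), not the actual EwE fitting code:

```python
import numpy as np

# Synthetic "observation reference data": a linear trend plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 80)
observed = 2.0 + 0.5 * t + rng.normal(0, 0.3, t.size)

def aic(n_obs, rss, k):
    """Akaike information criterion for a least-squares fit with k parameters."""
    return n_obs * np.log(rss / n_obs) + 2 * k

results = {}
for degree in range(0, 5):                          # candidate hypotheses
    coeffs = np.polyfit(t, observed, degree)
    rss = float(np.sum((np.polyval(coeffs, t) - observed) ** 2))
    results[degree] = aic(t.size, rss, degree + 1)  # penalize extra parameters

best = min(results, key=results.get)                # statistically 'best fit'
print("best-fit hypothesis: degree", best)
```

The AIC penalty is what keeps the automated search from always preferring the most complex hypothesis.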

  19. Wavelet transforms in a critical interface model for Barkhausen noise.

    PubMed

    de Queiroz, S L A

    2008-02-01

We discuss the application of wavelet transforms to a critical interface model which is known to provide a good description of Barkhausen noise in soft ferromagnets. The two-dimensional version of the model (one-dimensional interface) is considered, mainly in the adiabatic limit of very slow driving. On length scales shorter than a crossover length (which grows with the strength of the surface tension), the effective interface roughness exponent zeta is approximately 1.20, close to the expected value for the universality class of the quenched Edwards-Wilkinson model. We find that the waiting times between avalanches are fully uncorrelated, as the wavelet transform of their autocorrelations scales as white noise. Similarly, detrended size-size correlations give a white-noise wavelet transform. Consideration of finite driving rates, still deep within the intermittent regime, shows the wavelet transform of correlations scaling as 1/f^1.5 for intermediate frequencies. This behavior is ascribed to intra-avalanche correlations.
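The white-noise diagnostic mentioned above can be reproduced in miniature: for uncorrelated waiting times, the variance of Haar wavelet coefficients is essentially independent of scale (a flat spectrum), whereas correlated 1/f-type signals would show variance growing with scale. The data here are synthetic exponential variates, not Barkhausen records:

```python
import numpy as np

rng = np.random.default_rng(1)
waiting_times = rng.exponential(1.0, 4096)     # uncorrelated by construction
x = waiting_times - waiting_times.mean()

def haar_detail_variance(x, level):
    """Variance of (normalized) Haar wavelet coefficients at a dyadic level."""
    step = 2 ** level
    n = (len(x) // (2 * step)) * 2 * step
    blocks = x[:n].reshape(-1, 2, step).sum(axis=2)          # paired block sums
    detail = (blocks[:, 0] - blocks[:, 1]) / np.sqrt(2 * step)
    return float(detail.var())

variances = [haar_detail_variance(x, j) for j in range(5)]
print(variances)   # roughly constant across scales for white noise
```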

  20. Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head

    PubMed Central

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-01-01

Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly.
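Agreement between automated and manual segmentations of the kind reported above is commonly quantified with a volume-overlap score such as the Dice coefficient. This is a generic sketch on toy masks, not the study's data or its exact deviation metric:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy "manual" and "automated" segmentations of a 10x10 slice.
manual = np.zeros((10, 10), int); manual[2:8, 2:8] = 1   # 36 voxels
auto = np.zeros((10, 10), int); auto[3:8, 2:8] = 1       # 30 voxels, one row short

print(round(dice(manual, auto), 3))
```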

  1. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. Fully

  2. Processing and review interface for strong motion data (PRISM) software, version 1.0.0—Methodology and automated processing

    USGS Publications Warehouse

    Jones, Jeanne; Kalkan, Erol; Stephens, Christopher

    2017-02-23

A continually increasing number of high-quality digital strong-motion records from stations of the National Strong-Motion Project (NSMP) of the U.S. Geological Survey (USGS), as well as data from regional seismic networks within the United States, call for automated processing of strong-motion records with human review limited to selected significant or flagged records. The NSMP has developed the Processing and Review Interface for Strong Motion data (PRISM) software to meet this need. In combination with the Advanced National Seismic System Quake Monitoring System (AQMS), PRISM automates the processing of strong-motion records. When used without AQMS, PRISM provides batch-processing capabilities. PRISM version 1.0.0 is platform independent (coded in Java), open source, and does not depend on any closed-source or proprietary software. The software consists of two major components: a record processing engine and a review tool that has a graphical user interface (GUI) to manually review, edit, and process records. To facilitate use by non-NSMP earthquake engineers and scientists, PRISM (both its processing engine and review tool) is easy to install and run as a stand-alone system on common operating systems such as Linux, OS X, and Windows. PRISM was designed to be flexible and extensible in order to accommodate new processing techniques. This report provides a thorough description and examples of the record processing features supported by PRISM. All the computing features of PRISM have been thoroughly tested.
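As a flavor of what automated strong-motion processing involves (a generic illustration, not PRISM's actual pipeline), the sketch below removes a baseline offset estimated from the quiet pre-event window of a synthetic acceleration record and integrates the corrected trace to velocity; without the correction, the velocity drifts:

```python
import numpy as np

dt = 0.01                                    # sample interval, s
t = np.arange(0.0, 20.0, dt)
offset = 0.02                                # synthetic sensor baseline error
shaking = np.where(t >= 5.0,
                   np.sin(2 * np.pi * 1.5 * (t - 5.0)) * np.exp(-0.3 * (t - 5.0)),
                   0.0)
accel = offset + shaking                     # event starts at t = 5 s

baseline = accel[t < 4.0].mean()             # estimate from the quiet pre-event window
corrected = accel - baseline

def integrate(a):
    """Trapezoidal integration of acceleration to velocity."""
    v = np.zeros_like(a)
    v[1:] = np.cumsum((a[1:] + a[:-1]) / 2.0) * dt
    return v

v_raw, v_cor = integrate(accel), integrate(corrected)
# The uncorrected velocity drifts linearly with the offset; the corrected one stays bounded.
print(v_raw[-1], v_cor[-1])
```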

  3. Automated finite element modeling of the lumbar spine: Using a statistical shape model to generate a virtual population of models.

    PubMed

    Campbell, J Q; Petrella, A J

    2016-09-06

    Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability, and specificity. The primary shape modes were analyzed visually, quantitatively, and biomechanically. The biomechanical analysis was performed by using the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and the extreme shapes (±3 standard deviations) of all 17 shape modes were created demonstrating the robust nature of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will allow population-based modeling in the future.
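The core of such a statistical shape model can be sketched with principal component analysis: flattened landmark sets are decomposed into a mean shape plus weighted modes, and new members of the virtual population are generated at ±3 standard deviations along a mode, as in the study above. The landmarks here are random toy data, not lumbar spine geometry:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_landmarks = 20, 30
shapes = rng.normal(size=(n_subjects, n_landmarks * 3))   # flattened (x, y, z) landmarks

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# SVD of the centered data yields the shape modes and their standard deviations.
_, s, vt = np.linalg.svd(centered, full_matrices=False)
stdevs = s / np.sqrt(n_subjects - 1)

mode0 = vt[0]
plus3 = mean_shape + 3 * stdevs[0] * mode0    # +3 SD instance of the primary mode
minus3 = mean_shape - 3 * stdevs[0] * mode0   # -3 SD instance of the primary mode
print(plus3.shape)
```

Each generated vector can then be fed to an automated mesh/finite-element generation step, which is where the full pipeline's complexity lives.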

  4. Bacterial Adhesion to Hexadecane (Model NAPL)-Water Interfaces

    NASA Astrophysics Data System (ADS)

    Ghoshal, S.; Zoueki, C. R.; Tufenkji, N.

    2009-05-01

The rates of biodegradation of NAPLs have been shown to be influenced by the adhesion of hydrocarbon-degrading microorganisms as well as their proximity to the NAPL-water interface. Several studies provide evidence for bacterial adhesion or biofilm formation at alkane- or crude oil-water interfaces, but there is a significant knowledge gap in our understanding of the processes that influence initial adhesion of bacteria onto NAPL-water interfaces. In this study, bacterial adhesion to hexadecane and to a series of NAPLs comprised of hexadecane amended with toluene and/or with asphaltenes and resins, the surface-active fractions of crude oils, was examined using a Microbial Adhesion to Hydrocarbons (MATH) assay. The microorganisms employed were Mycobacterium kubicae, Pseudomonas aeruginosa and Pseudomonas putida, which are hydrocarbon degraders or soil microorganisms. MATH assays as well as electrophoretic mobility measurements of the bacterial cells and the NAPL droplet surfaces in aqueous solutions were conducted at three solution pHs (4, 6 and 7). Asphaltenes and resins were shown to generally decrease microbial adhesion. Results of the MATH assay were not in qualitative agreement with theoretical predictions of bacteria-hydrocarbon interactions based on the extended Derjaguin-Landau-Verwey-Overbeek (XDLVO) model of free energy of interaction between the cell and NAPL droplets. In this model the free energy of interaction between two colloidal particles is predicted based on electrical double layer, van der Waals and hydrophobic forces. It is likely that the steric repulsion between bacteria and NAPL surfaces, caused by biopolymers on bacterial surfaces and asphaltenes and resins at the NAPL-water interface, contributed to the decreased adhesion compared to that predicted by the XDLVO model.
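An XDLVO-style calculation of the kind referenced above sums van der Waals, electrical double layer, and short-range acid-base (hydrophobic) contributions as a function of cell-interface separation. The sphere-plate functional forms below are standard, but every constant is illustrative, not fitted to the organisms or NAPLs of this study:

```python
import math

kT = 4.11e-21          # thermal energy at ~298 K, J
R = 0.5e-6             # bacterial cell radius, m
A = 1e-21              # Hamaker constant, J (illustrative)
kappa = 1e8            # inverse Debye length, 1/m
K_edl = 5e-19          # double-layer prefactor, J (illustrative)
dG_ab = -1e-3          # acid-base free energy per unit area at contact, J/m^2
lam = 0.6e-9           # acid-base decay length, m
h0 = 0.157e-9          # minimum separation distance, m

def total_energy(h):
    """XDLVO-style sphere-plate interaction energy at separation h (J)."""
    vdw = -A * R / (6 * h)                                    # van der Waals attraction
    edl = K_edl * math.exp(-kappa * h)                        # screened double-layer repulsion
    ab = 2 * math.pi * R * lam * dG_ab * math.exp((h0 - h) / lam)  # acid-base term
    return vdw + edl + ab

# Energy profile (in kT) as the cell approaches the interface: with these
# numbers there is a repulsive barrier at large h and a minimum near contact.
profile = {h: total_energy(h) / kT for h in (1e-9, 5e-9, 20e-9)}
print(profile)
```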

  5. Industrial Automation Mechanic Model Curriculum Project. Final Report.

    ERIC Educational Resources Information Center

    Toledo Public Schools, OH.

    This document describes a demonstration program that developed secondary level competency-based instructional materials for industrial automation mechanics. Program activities included task list compilation, instructional materials research, learning activity packet (LAP) development, construction of lab elements, system implementation,…

  6. Modelling molecule-surface interactions--an automated quantum-classical approach using a genetic algorithm.

    PubMed

    Herbers, Claudia R; Johnston, Karen; van der Vegt, Nico F A

    2011-06-14

We present an automated and efficient method to develop force fields for molecule-surface interactions. A genetic algorithm (GA) is used to parameterise a classical force field so that the classical adsorption energy landscape of a molecule on a surface matches the corresponding landscape from density functional theory (DFT) calculations. The procedure performs a sophisticated search in the parameter phase space and converges very quickly. The method is capable of fitting a significant number of structures and corresponding adsorption energies. Water on a ZnO(0001) surface was chosen as a benchmark system but the method is implemented in a flexible way and can be applied to any system of interest. In the present case, pairwise Lennard-Jones (LJ) and Coulomb potentials are used to describe the molecule-surface interactions. In the course of the fitting procedure, the LJ parameters are refined in order to reproduce the adsorption energy landscape. The classical model is capable of describing a wide range of energies, which is essential for a realistic description of a fluid-solid interface.
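The fitting idea can be demonstrated with a deliberately small GA: evolve Lennard-Jones parameters (epsilon, sigma) so that classical energies match a set of reference energies. Here the "reference" is generated from known parameters rather than DFT, so the target of the fit is exact; the GA operators (elitism plus Gaussian mutation) are a minimal sketch, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
r = np.linspace(0.9, 2.5, 40)                  # molecule-surface distances (reduced units)

def lj(r, eps, sig):
    """Pairwise Lennard-Jones energy."""
    return 4 * eps * ((sig / r) ** 12 - (sig / r) ** 6)

reference = lj(r, 1.0, 1.0)                    # stand-in for the DFT energy landscape

def fitness(p):
    """Negative squared deviation from the reference landscape."""
    return -np.sum((lj(r, p[0], p[1]) - reference) ** 2)

pop = rng.uniform(0.5, 1.5, size=(40, 2))      # initial (eps, sig) population
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]                          # keep the 10 fittest
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.02, (30, 2))
    pop = np.vstack([parents, children])                             # elitism + mutation

best = max(pop, key=fitness)
print(best)    # should approach the true parameters (1.0, 1.0)
```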

  7. Thermal Edge-Effects Model for Automated Tape Placement of Thermoplastic Composites

    NASA Technical Reports Server (NTRS)

    Costen, Robert C.

    2000-01-01

Two-dimensional thermal models for automated tape placement (ATP) of thermoplastic composites neglect the diffusive heat transport that occurs between the newly placed tape and the cool substrate beside it. Such lateral transport can cool the tape edges prematurely and weaken the bond. The three-dimensional, steady state, thermal transport equation is solved by the Green's function method for a tape of finite width being placed on an infinitely wide substrate. The isotherm for the glass transition temperature on the weld interface is used to determine the distance inward from the tape edge that is prematurely cooled, called the cooling incursion Δa. For the Langley ATP robot, Δa = 0.4 mm for a unidirectional lay-up of PEEK/carbon fiber composite, and Δa = 1.2 mm for an isotropic lay-up. A formula for Δa is developed and applied to a wide range of operating conditions. A surprise finding is that Δa need not decrease as the Peclet number Pe becomes very large, where Pe is the dimensionless ratio of advective to diffusive heat transport. Conformable rollers that increase the consolidation length would also increase Δa, unless other changes are made, such as proportionally increasing the material speed. To compensate for premature edge cooling, the thermal input could be extended past the tape edges by the amount Δa. This method should help achieve uniform weld strength and crystallinity across the width of the tape.
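A much simpler calculation conveys the edge-cooling idea: treat the hot tape edge and the cool material beside it as two half-spaces with equal diffusivity, use the 1D similarity solution for the lateral temperature profile, and locate the glass-transition isotherm by bisection. All numbers below are illustrative, not the paper's operating conditions or its Green's function solution:

```python
import math

T_tape, T_sub, Tg = 200.0, 20.0, 143.0   # deg C (Tg roughly that of PEEK)
alpha = 5e-7                             # thermal diffusivity, m^2/s (illustrative)
t = 0.05                                 # lateral diffusion time, s (illustrative)

def temperature(x):
    """Temperature at lateral position x (x > 0 lies inside the tape)."""
    mid = 0.5 * (T_tape + T_sub)
    return mid + 0.5 * (T_tape - T_sub) * math.erf(x / (2.0 * math.sqrt(alpha * t)))

# Bisection for the isotherm T(x) = Tg: everything at x < delta_a has cooled
# below Tg, so delta_a plays the role of the cooling incursion.
lo, hi = 0.0, 0.01
for _ in range(60):
    mid_x = 0.5 * (lo + hi)
    if temperature(mid_x) < Tg:
        lo = mid_x
    else:
        hi = mid_x
delta_a = lo
print(f"cooling incursion ~ {delta_a * 1e3:.3f} mm")
```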

  8. ShowFlow: A practical interface for groundwater modeling

    SciTech Connect

    Tauxe, J.D.

    1990-12-01

ShowFlow was created to provide a user-friendly, intuitive environment for researchers and students who use computer modeling software. What traditionally has been a workplace available only to those familiar with command-line based computer systems is now within reach of almost anyone interested in the subject of modeling. In the case of this edition of ShowFlow, the user can easily experiment with simulations using the steady-state Gaussian plume groundwater pollutant transport model SSGPLUME, though ShowFlow can be rewritten to provide a similar interface for any computer model. Included in this thesis is all the source code for both the ShowFlow application for Microsoft® Windows™ and the SSGPLUME model, a User's Guide, and a Developer's Guide for converting ShowFlow to run other model programs. 18 refs., 13 figs.

  9. Behavior of asphaltene model compounds at w/o interfaces.

    PubMed

    Nordgård, Erland L; Sørland, Geir; Sjöblom, Johan

    2010-02-16

Asphaltenes, present in significant amounts in heavy crude oil, contain subfractions capable of stabilizing water-in-oil emulsions. Still, the composition of these subfractions is not known in detail, and the actual mechanism behind emulsion stability is dependent on perceived interfacial concentrations and compositions. This study aims at utilizing polyaromatic surfactants which contain an acidic moiety as model compounds for the surface-active subfraction of asphaltenes. A modified pulse-field gradient (PFG) NMR method has been used to study droplet sizes and stability of emulsions prepared with asphaltene model compounds. The method has been compared to the standard microscopy droplet counting method. Arithmetic and volumetric mean droplet sizes as a function of surfactant concentration and water content clearly showed that the interfacial area was dependent on the available surfactant at the emulsion interface. Adsorption of the model compounds onto hydrophilic silica has been investigated by UV depletion, and minor differences in the chemical structure of the model compounds caused significant differences in the affinity toward this highly polar surface. The cross-sectional areas obtained have been compared to areas from the surface-to-volume ratio found by NMR and gave similar results for one of the two model compounds. The mean molecular area for this compound suggested a tilted geometry of the aromatic core with respect to the interface, which has also been proposed for real asphaltenic samples. The film behavior was further investigated using a liquid-liquid Langmuir trough, supporting the ability to form stable interfacial films. This study supports the view that acidic, or strongly hydrogen-bonding, fractions can promote stable water-in-oil emulsions. The use of model compounds opens the way to studying emulsion behavior and demulsifier efficiency based on true interfacial concentrations rather than perceived interfaces.

  10. Language Model Applications to Spelling with Brain-Computer Interfaces

    PubMed Central

    Mora-Cortes, Anderson; Manyakov, Nikolay V.; Chumerin, Nikolay; Van Hulle, Marc M.

    2014-01-01

Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems in conjunction with other AAL technologies. PMID:24675760
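Predictive spelling of the kind surveyed above can be illustrated with the simplest possible language model: rank vocabulary words by unigram frequency to offer completions of the letters selected so far, sparing the user further BCI selections. The tiny corpus is invented for the example; real systems use far richer n-gram or neural models:

```python
from collections import Counter

# Toy corpus standing in for a real language model's training data.
corpus = "the quick brown fox the quiet queen the queen spoke".split()
unigram = Counter(corpus)

def complete(prefix, k=2):
    """Top-k most frequent vocabulary words starting with the typed prefix."""
    candidates = [w for w in unigram if w.startswith(prefix)]
    return sorted(candidates, key=lambda w: -unigram[w])[:k]

# After the user has spelled "qu", the speller offers likely completions.
print(complete("qu"))
```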

  11. A diffuse interface model of grain boundary faceting

    NASA Astrophysics Data System (ADS)

    Abdeljawad, Fadi; Medlin, Douglas; Zimmerman, Jonathan; Hattar, Khalid; Foiles, Stephen

Incorporating anisotropy into thermodynamic treatments of interfaces dates back to over a century ago. For a given orientation of two abutting grains in a pure metal, depressions in the grain boundary (GB) energy may exist as a function of GB inclination, defined by the plane normal. Therefore, an initially flat GB may facet resulting in a hill-and-valley structure. Herein, we present a diffuse interface model of GB faceting that is capable of capturing anisotropic GB energies and mobilities, and accounting for the excess energy due to facet junctions and their non-local interactions. The hallmark of our approach is the ability to independently examine the role of each of the interface properties on the faceting behavior. As a demonstration, we consider the Σ5⟨001⟩ tilt GB in iron, where faceting along the {310} and {210} planes was experimentally observed. Linear stability analysis and numerical examples highlight the role of junction energy and associated non-local interactions on the resulting facet length scales. On the whole, our modeling approach provides a general framework to examine the spatio-temporal evolution of highly anisotropic GBs in polycrystalline metals. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  12. Developing a laser shockwave model for characterizing diffusion bonded interfaces

    SciTech Connect

Lacy, Jeffrey M.; Smith, James A.; Rabin, Barry H.

    2015-03-31

    The US National Nuclear Security Agency has a Global Threat Reduction Initiative (GTRI) with the goal of reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU in high-power research reactors. The new LEU fuel is a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to support the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, because the deposition of laser energy into the containment layer on a specimen's surface is intractably complex, the shock wave energy is inferred from the surface velocity measured on the backside of the fuel plate and the depth of the impression left on the surface by the high pressure plasma pulse created by the shock laser. To help quantify the stresses generated at the interfaces, a finite element method (FEM) model is being utilized. This paper will report on initial efforts to develop and validate the model by comparing numerical and experimental results for back surface velocities and front surface depressions in a single aluminum plate representative of the fuel cladding.

  13. Developing a laser shockwave model for characterizing diffusion bonded interfaces

    NASA Astrophysics Data System (ADS)

    Lacy, Jeffrey M.; Smith, James A.; Rabin, Barry H.

    2015-03-01

    The US National Nuclear Security Agency has a Global Threat Reduction Initiative (GTRI) with the goal of reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU in high-power research reactors. The new LEU fuel is a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to support the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, because the deposition of laser energy into the containment layer on a specimen's surface is intractably complex, the shock wave energy is inferred from the surface velocity measured on the backside of the fuel plate and the depth of the impression left on the surface by the high pressure plasma pulse created by the shock laser. To help quantify the stresses generated at the interfaces, a finite element method (FEM) model is being utilized. This paper will report on initial efforts to develop and validate the model by comparing numerical and experimental results for back surface velocities and front surface depressions in a single aluminum plate representative of the fuel cladding.

  14. A sharp interface evolutionary model for shape memory alloys

    NASA Astrophysics Data System (ADS)

    Knüpfer, Hans; Kružík, Martin

    2016-11-01

We show the existence of an energetic solution to a quasistatic evolutionary model of shape memory alloys. Elastic behavior of each material phase/variant is described by a polyconvex energy density. Additionally, to every phase boundary there is assigned an interface-polyconvex energy, introduced by M. Šilhavý. The model considers internal variables describing the evolving spatial arrangement of the material phases and a deformation mapping with its first-order gradients. It allows for injectivity and orientation-preservation of deformations. Moreover, the resulting material microstructures have finite length scales.

  15. Automated eukaryotic gene structure annotation using EVidenceModeler and the Program to Assemble Spliced Alignments.

    PubMed

    Haas, Brian J; Salzberg, Steven L; Zhu, Wei; Pertea, Mihaela; Allen, Jonathan E; Orvis, Joshua; White, Owen; Buell, C Robin; Wortman, Jennifer R

    2008-01-11

    EVidenceModeler (EVM) is presented as an automated eukaryotic gene structure annotation tool that reports eukaryotic gene structures as a weighted consensus of all available evidence. EVM, when combined with the Program to Assemble Spliced Alignments (PASA), yields a comprehensive, configurable annotation system that predicts protein-coding genes and alternatively spliced isoforms. Our experiments on both rice and human genome sequences demonstrate that EVM produces automated gene structure annotation approaching the quality of manual curation.
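The weighted-consensus idea can be sketched as a voting scheme: each evidence source supports some candidate gene structures with a source-specific weight, and the highest-scoring structure wins. Sources, weights, and structure names are invented for illustration; this is not EVM's actual scoring algorithm:

```python
# Hypothetical evidence sources, each with a weight and a set of supported
# candidate gene structures (names are illustrative only).
evidence = {
    "ab_initio_predictor":  {"weight": 1.0,  "supports": {"isoform_A"}},
    "protein_alignment":    {"weight": 5.0,  "supports": {"isoform_B"}},
    "transcript_alignment": {"weight": 10.0, "supports": {"isoform_B", "isoform_C"}},
}

# Accumulate weighted votes per candidate structure.
scores = {}
for source in evidence.values():
    for structure in source["supports"]:
        scores[structure] = scores.get(structure, 0.0) + source["weight"]

consensus = max(scores, key=scores.get)   # the weighted-consensus structure
print(consensus, scores)
```

The weighting step is what lets high-confidence evidence (e.g. transcript alignments) dominate ab initio predictions without discarding them.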

A Deformable Template Model with Feature Tracking for Automated IVUS Segmentation

    NASA Astrophysics Data System (ADS)

    Manandhar, Prakash; Hau Chen, Chi

    2010-02-01

    Intravascular Ultrasound (IVUS) can be used to create a 3D vascular profile of arteries for preventative prediction of Coronary Heart Disease (CHD). Segmentation of individual B-scan frames is a crucial step for creating profiles. Manual segmentation is too labor intensive to be of routine use. Automated segmentation algorithms are not yet accurate enough. We present a method of tracking features across frames of ultrasound data to increase automated segmentation accuracy using a deformable template model.

  17. Automated Eukaryotic Gene Structure Annotation Using EVidenceModeler and the Program to Assemble Spliced Alignments

    SciTech Connect

    Haas, B J; Salzberg, S L; Zhu, W; Pertea, M; Allen, J E; Orvis, J; White, O; Buell, C R; Wortman, J R

    2007-12-10

    EVidenceModeler (EVM) is presented as an automated eukaryotic gene structure annotation tool that reports eukaryotic gene structures as a weighted consensus of all available evidence. EVM, when combined with the Program to Assemble Spliced Alignments (PASA), yields a comprehensive, configurable annotation system that predicts protein-coding genes and alternatively spliced isoforms. Our experiments on both rice and human genome sequences demonstrate that EVM produces automated gene structure annotation approaching the quality of manual curation.

  18. Interface Management for a NASA Flight Project Using Model-Based Systems Engineering (MBSE)

    NASA Technical Reports Server (NTRS)

    Vipavetz, Kevin; Shull, Thomas A.; Infeld, Samatha; Price, Jim

    2016-01-01

    The goal of interface management is to identify, define, control, and verify interfaces; ensure compatibility; provide an efficient system development; be on time and within budget; while meeting stakeholder requirements. This paper will present a successful seven-step approach to interface management used in several NASA flight projects. The seven-step approach using Model Based Systems Engineering will be illustrated by interface examples from the Materials International Space Station Experiment-X (MISSE-X) project. The MISSE-X was being developed as an International Space Station (ISS) external platform for space environmental studies, designed to advance the technology readiness of materials and devices critical for future space exploration. Emphasis will be given to best practices covering key areas such as interface definition, writing good interface requirements, utilizing interface working groups, developing and controlling interface documents, handling interface agreements, the use of shadow documents, the importance of interface requirement ownership, interface verification, and product transition.

  19. High level modelling and design of asynchronous interface logic

    NASA Astrophysics Data System (ADS)

    Yakovlev, A. V.; Koelmans, A. M.; Lavagno, L.

    1993-11-01

The authors propose a new methodology to design asynchronous circuits that is divided into two stages: abstract synthesis and logic synthesis. The first stage is carried out by refining an abstract model, based on logic predicates describing the correct input-output behavior of the circuit, into a labelled Petri net and then into a formalization of timing diagrams (the Signal Transition Graph). This refinement involves hierarchical decomposition of the initial implementation until its size can be handled by automated logic synthesis tools, as well as replacing symbolic events occurring on the input-output ports of the labelled Petri net with up and down transitions occurring on the input-output wires of a circuit implementation.

  20. A biological model for controlling interface growth and morphology.

    SciTech Connect

    Hoyt, Jeffrey John; Holm, Elizabeth Ann

    2004-01-01

Biological systems create proteins that perform tasks more efficiently and precisely than conventional chemicals. For example, many plants and animals produce proteins to control the freezing of water. Biological antifreeze proteins (AFPs) inhibit the solidification process, even below the freezing point. These molecules bond to specific sites at the ice/water interface and are theorized to suppress solidification chemically or geometrically. In this project, we investigated the theoretical and experimental data on AFPs and performed analyses to understand the unique physics of AFPs. The experimental literature was analyzed to determine chemical mechanisms and effects of protein bonding at ice surfaces, specifically thermodynamic freezing point depression, suppression of ice nucleation, decrease in dendrite growth kinetics, solute drag on the moving solid/liquid interface, and steric pinning of the ice interface. Steric pinning was found to be the most likely candidate to explain experimental results, including freezing point depression, growth morphologies, and thermal hysteresis. A new steric pinning model was developed and applied to AFPs, with excellent quantitative results. Understanding biological antifreeze mechanisms could enable important medical and engineering applications, but considerable future work will be necessary.

  1. A symbolic/subsymbolic interface protocol for cognitive modeling

    PubMed Central

    Simen, Patrick; Polk, Thad

    2009-01-01

    Researchers studying complex cognition have grown increasingly interested in mapping symbolic cognitive architectures onto subsymbolic brain models. Such a mapping seems essential for understanding cognition under all but the most extreme viewpoints (namely, that cognition consists exclusively of digitally implemented rules; or instead, involves no rules whatsoever). Making this mapping reduces to specifying an interface between symbolic and subsymbolic descriptions of brain activity. To that end, we propose parameterization techniques for building cognitive models as programmable, structured, recurrent neural networks. Feedback strength in these models determines whether their components implement classically subsymbolic neural network functions (e.g., pattern recognition), or instead, logical rules and digital memory. These techniques support the implementation of limited production systems. Though inherently sequential and symbolic, these neural production systems can exploit principles of parallel, analog processing from decision-making models in psychology and neuroscience to explain the effects of brain damage on problem solving behavior. PMID:20711520
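The feedback-strength principle described above can be demonstrated with a single sigmoidal unit: with weak self-feedback its activity relaxes back toward rest after an input pulse (a subsymbolic, pattern-like response), while with strong self-feedback it latches, behaving like one bit of digital memory. Parameters are illustrative, not taken from the paper's networks:

```python
import math

def run_unit(w, pulse=5.0, steps=200):
    """Iterate a sigmoidal unit with self-weight w and bias -w/2.

    The unit receives a brief input pulse, then silence; the returned value
    is its activity long after the pulse has ended.
    """
    a = 0.0
    for t in range(steps):
        inp = pulse if t < 20 else 0.0
        a = 1.0 / (1.0 + math.exp(-(w * a + inp - w / 2)))
    return a

weak = run_unit(w=2.0)    # subcritical feedback: activity relaxes to ~0.5
strong = run_unit(w=8.0)  # supercritical feedback: activity stays latched near 1
print(round(weak, 3), round(strong, 3))
```

The transition occurs where the feedback gain at the midpoint (w/4 for this sigmoid) exceeds 1, which is the sense in which feedback strength selects between analog and rule-like, digital behavior.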

  2. Towards an Improved Pilot-Vehicle Interface for Highly Automated Aircraft: Evaluation of the Haptic Flight Control System

    NASA Technical Reports Server (NTRS)

    Schutte, Paul; Goodrich, Kenneth; Williams, Ralph

    2012-01-01

    The control automation and interaction paradigm (e.g., manual, autopilot, flight management system) used on virtually all large highly automated aircraft has long been an exemplar of breakdowns in human factors and human-centered design. An alternative paradigm is the Haptic Flight Control System (HFCS) that is part of NASA Langley Research Center's Naturalistic Flight Deck Concept. The HFCS uses only stick and throttle for easily and intuitively controlling the actual flight of the aircraft without losing any of the efficiency and operational benefits of the current paradigm. Initial prototypes of the HFCS are being evaluated and this paper describes one such evaluation. In this evaluation we examined claims regarding improved situation awareness, appropriate workload, graceful degradation, and improved pilot acceptance. Twenty-four instrument-rated pilots were instructed to plan and fly four different flights in a fictitious airspace using a moderate fidelity desktop simulation. Three different flight control paradigms were tested: Manual control, Full Automation control, and a simplified version of the HFCS. Dependent variables included both subjective (questionnaire) and objective (SAGAT) measures of situation awareness, workload (NASA-TLX), secondary task performance, time to recognize automation failures, and pilot preference (questionnaire). The results showed a statistically significant advantage for the HFCS in a number of measures. Results that were not statistically significant still favored the HFCS. The results suggest that the HFCS does offer an attractive and viable alternative to the tactical components of today's FMS/autopilot control system. The paper describes further studies that are planned to continue to evaluate the HFCS.

  3. A Model of Process-Based Automation: Cost and Quality Implications in the Medication Management Process

    ERIC Educational Resources Information Center

    Spaulding, Trent Joseph

    2011-01-01

    The objective of this research is to understand how a set of systems, as defined by the business process, creates value. The three studies contained in this work develop the model of process-based automation. The model states that complementarities among systems are specified by handoffs in the business process. The model also provides theory to…

  4. Spherical wave reflection in layered media with rough interfaces: Three-dimensional modeling.

    PubMed

    Pinson, Samuel; Cordioli, Julio; Guillon, Laurent

    2016-08-01

    In the context of sediment characterization, layer interface roughnesses may be responsible for sound-speed profile measurement uncertainties. To study the roughness influence, a three-dimensional (3D) modeling of a layered seafloor with rough interfaces is necessary. Although roughness scattering has an abundant literature, 3D modeling of spherical wave reflection on rough interfaces is generally limited to a single interface (using Kirchhoff-Helmholtz integral) or computationally expensive techniques (finite difference or finite element method). In this work, it is demonstrated that the wave reflection over a layered medium with irregular interfaces can be modeled as a sum of integrals over each interface. The main approximations of the method are the tangent-plane approximation, the Born approximation (multiple reflection between interfaces are neglected) and flat-interface approximation for the transmitted waves into the sediment. The integration over layer interfaces results in a method with reasonable computation cost.
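
Under the tangent-plane (Kirchhoff) approximation the abstract invokes, a standard closed-form companion result (not the paper's full 3D integral method) is the attenuation of the coherent reflection coefficient by the Rayleigh roughness parameter. A sketch, assuming Gaussian-distributed roughness heights:

```python
import math

def coherent_reflection(r_flat, freq_hz, c_mps, sigma_m, grazing_deg):
    """Coherent (mean) reflection coefficient for a Gaussian rough interface
    under the Kirchhoff approximation: R_coh = R_flat * exp(-2 * Gamma^2),
    where Gamma = k * sigma * sin(grazing angle) is the Rayleigh parameter."""
    k = 2.0 * math.pi * freq_hz / c_mps  # acoustic wavenumber in the water
    gamma = k * sigma_m * math.sin(math.radians(grazing_deg))
    return r_flat * math.exp(-2.0 * gamma ** 2)
```

A flat interface (sigma = 0) recovers the flat-plate coefficient; increasing rms roughness scatters energy out of the coherent field, which is one way interface roughness feeds into sound-speed measurement uncertainty.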

  5. A numerical model and spreadsheet interface for pumping test analysis.

    PubMed

    Johnson, G S; Cosgrove, D M; Frederick, D B

    2001-01-01

    Curve-matching techniques have been the standard method of aquifer test analysis for several decades. A variety of techniques provide the capability of evaluating test data from confined, unconfined, leaky aquitard, and other conditions. Each technique, however, is accompanied by a set of assumptions, and evaluation of a combination of conditions can be complicated or impossible due to intractable mathematics or nonuniqueness of the solution. Numerical modeling of pumping tests provides two major advantages: (1) the user can choose which properties to calibrate and what assumptions to make; and (2) in the calibration process the user is gaining insights into the conceptual model of the flow system and uncertainties in the analysis. Routine numerical modeling of pumping tests is now practical due to computer hardware and software advances of the last decade. The RADFLOW model and spreadsheet interface presented in this paper is an easy-to-use numerical model for estimation of aquifer properties from pumping test data. Layered conceptual models and their properties are evaluated in a trial-and-error estimation procedure. The RADFLOW model can treat most combinations of confined, unconfined, leaky aquitard, partial penetration, and borehole storage conditions. RADFLOW is especially useful in stratified aquifer systems with no identifiable lateral boundaries. It has been verified against several analytical solutions and has been applied in the Snake River Plain Aquifer to develop and test conceptual models and provide estimates of aquifer properties. Because the model assumes axially symmetrical flow, it is limited to representing multiple aquifer layers that are laterally continuous.
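
The analytical solutions that a numerical model like RADFLOW is verified against are of the Theis type. A sketch of the classical Theis drawdown, with the well function evaluated from its series expansion (this is textbook material, not RADFLOW's numerical scheme):

```python
import math

def well_function(u, terms=50):
    """Theis well function W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    s = -gamma - math.log(u)
    term = 1.0  # running value of u^n / n!
    for n in range(1, terms + 1):
        term *= u / n
        s += ((-1) ** (n + 1)) * term / n
    return s

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s(r, t) for a fully penetrating well in a confined aquifer:
    s = Q / (4 pi T) * W(u), with u = r^2 S / (4 T t)."""
    u = (r ** 2) * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)
```

Tabulated values such as W(1) = 0.2194 and W(0.01) = 4.038 provide a quick correctness check, which is exactly the kind of benchmark a layered numerical model can be compared against.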

  6. First principles modeling of the metal-electrolyte interface: A novel approach to the study of the electrochemical interface

    SciTech Connect

    Fernandez-Serra, Maria Victoria

    2016-09-12

    The research objective of this proposal is the computational modeling of the metal-electrolyte interface purely from first principles. The accurate calculation of the electrostatic potential at electrically biased metal-electrolyte interfaces is a current challenge for periodic “ab-initio” simulations. It is also an essential requisite for predicting the correspondence between the macroscopic voltage and the microscopic interfacial charge distribution in electrochemical fuel cells. This interfacial charge distribution is the result of the chemical bonding between solute and metal atoms, and therefore cannot be accurately calculated with the use of semi-empirical classical force fields. The project aims to study in detail the structure and dynamics of aqueous electrolytes at metallic interfaces taking into account the effect of the electrode potential. Another side of the project is to produce an accurate method to simulate the water/metal interface. While both experimental and theoretical surface scientists have made a lot of progress on the understanding and characterization of both atomistic structures and reactions at the solid/vacuum interface, the theoretical description of electrochemical interfaces is still lagging behind. A reason for this is that a complete and accurate first-principles description of both the liquid and the metal interfaces is still computationally too expensive and complex, since their characteristics are governed by the explicit atomic and electronic structure built at the interface as a response to environmental conditions. This project will characterize in detail how different theoretical levels of modeling describe the metal/water interface. In particular the role of van der Waals interactions will be carefully analyzed and prescriptions to perform accurate simulations will be produced.

  7. Modeling the Energy Use of a Connected and Automated Transportation System (Poster)

    SciTech Connect

    Gonder, J.; Brown, A.

    2014-07-01

    Early research points to large potential impacts of connected and automated vehicles (CAVs) on transportation energy use - dramatic savings, increased use, or anything in between. Due to a lack of suitable data and integrated modeling tools to explore these complex future systems, analyses to date have relied on simple combinations of isolated effects. This poster proposes a framework for modeling the potential energy implications from increasing penetration of CAV technologies and for assessing technology and policy options to steer them toward favorable energy outcomes. Current CAV modeling challenges include estimating behavior change, understanding potential vehicle-to-vehicle interactions, and assessing traffic flow and vehicle use under different automation scenarios. To bridge these gaps and develop a picture of potential future automated systems, NREL is integrating existing modeling capabilities with additional tools and data inputs to create a more fully integrated CAV assessment toolkit.

  8. Analysis of a diffuse interface model of multispecies tumor growth

    NASA Astrophysics Data System (ADS)

    Dai, Mimi; Feireisl, Eduard; Rocca, Elisabetta; Schimperna, Giulio; Schonbek, Maria E.

    2017-04-01

    We consider a diffuse interface model for tumor growth recently proposed in Chen et al (2014 Int. J. Numer. Methods Biomed. Eng. 30 726–54). In this new approach sharp interfaces are replaced by narrow transition layers arising due to adhesive forces among the cell species. Hence, a continuum thermodynamically consistent model is introduced. The resulting PDE system couples four different types of equations: a Cahn–Hilliard type equation for the tumor cells (which include proliferating and dead cells), a Darcy law for the tissue velocity field, whose divergence may be different from 0 and depend on the other variables, a transport equation for the proliferating (viable) tumor cells, and a quasi-static reaction diffusion equation for the nutrient concentration. We establish existence of weak solutions for the PDE system coupled with suitable initial and boundary conditions. In particular, the proliferation function at the boundary is supposed to be nonnegative on the set where the velocity u satisfies u · ν > 0, where ν is the outer normal to the boundary of the domain.

  9. Diffuse-Interface Modelling of Flow in Porous Media

    NASA Astrophysics Data System (ADS)

    Addy, Doug; Pradas, Marc; Schmuck, Marcus; Kalliadasis, Serafim

    2016-11-01

    Multiphase flows are ubiquitous in a wide spectrum of scientific and engineering applications, and their computational modelling often poses many challenges associated with the presence of free boundaries and interfaces. Interfacial flows in porous media encounter additional challenges and complexities due to their inherently multiscale behaviour. Here we investigate the dynamics of interfaces in porous media using an effective convective Cahn-Hilliard (CH) equation recently developed from a Stokes-CH equation for microscopic heterogeneous domains by means of a homogenization methodology, where the microscopic details are taken into account as effective tensor coefficients which are given by a Poisson equation. The equations are decoupled under appropriate assumptions and solved in series using a classic finite-element formulation with the open-source software FEniCS. We investigate the effects of different microscopic geometries, including periodic and non-periodic, on the bulk fluid flow, and find that our model is able to describe the effective macroscopic behaviour without the need to resolve the microscopic details.
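
The homogenized tensor coefficients are specific to the paper, but the underlying Cahn-Hilliard structure is generic. A minimal 1D explicit finite-difference sketch of the standard CH equation with periodic boundaries (illustrative parameters; not the homogenized equation or the paper's FEniCS formulation) shows the conservative, fourth-order structure: the step is a Laplacian of a chemical potential, so total concentration is preserved:

```python
import math

def laplacian(f, dx):
    """Second-difference Laplacian on a periodic 1D grid."""
    n = len(f)
    return [(f[(i - 1) % n] - 2.0 * f[i] + f[(i + 1) % n]) / dx ** 2 for i in range(n)]

def cahn_hilliard_step(c, dx, dt, gamma=0.01):
    """One explicit Euler step of dc/dt = Laplacian(c^3 - c - gamma * Laplacian(c))."""
    mu = [ci ** 3 - ci - gamma * li for ci, li in zip(c, laplacian(c, dx))]
    lap_mu = laplacian(mu, dx)
    return [ci + dt * li for ci, li in zip(c, lap_mu)]

# A smooth perturbation around c = 0; the divergence form conserves sum(c).
n, dx, dt = 64, 1.0, 0.01
c = [0.1 * math.sin(2.0 * math.pi * i / n) for i in range(n)]
m0 = sum(c)
for _ in range(200):
    c = cahn_hilliard_step(c, dx, dt)
m1 = sum(c)
```

On a periodic grid the discrete Laplacian telescopes to zero when summed, so mass conservation holds to floating-point accuracy, which is the property homogenized variants must preserve as well.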

  10. Modeling organohalide perovskites for photovoltaic applications: From materials to interfaces

    NASA Astrophysics Data System (ADS)

    de Angelis, Filippo

    2015-03-01

    The field of hybrid/organic photovoltaics has been revolutionized in 2012 by the first reports of solid-state solar cells based on organohalide perovskites, now topping at 20% efficiency. First-principles modeling has been widely applied to the dye-sensitized solar cells field, and more recently to perovskite-based solar cells. The computational design and screening of new materials has played a major role in advancing the DSCs field. Suitable modeling strategies may also offer a view of the crucial heterointerfaces ruling the device operational mechanism. I will illustrate how simulation tools can be employed in the emerging field of perovskite solar cells. The performance of the proposed simulation toolbox along with the fundamental modeling strategies are presented using selected examples of relevant materials and interfaces. The main issue with hybrid perovskite modeling is to be able to accurately describe their structural, electronic and optical features. These materials show a degree of short range disorder, due to the presence of mobile organic cations embedded within the inorganic matrix, requiring to average their properties over a molecular dynamics trajectory. Due to the presence of heavy atoms (e.g. Sn and Pb) their electronic structure must take into account spin-orbit coupling (SOC) in an effective way, possibly including GW corrections. The proposed SOC-GW method constitutes the basis for tuning the materials electronic and optical properties, rationalizing experimental trends. Modeling charge generation in perovskite-sensitized TiO2 interfaces is then approached based on a SOC-DFT scheme, describing alignment of energy levels in a qualitatively correct fashion. The role of interfacial chemistry on the device performance is finally discussed. The research leading to these results has received funding from the European Union Seventh Framework Programme [FP7/2007-2013] under Grant Agreement No. 604032 of the MESO project.

  11. Electroviscoelasticity of liquid/liquid interfaces: fractional-order model.

    PubMed

    Spasic, Aleksandar M; Lazarevic, Mihailo P

    2005-02-01

    A number of theories that describe the behavior of liquid-liquid interfaces have been developed and applied to various dispersed systems, e.g., Stokes, Reiner-Rivlin, Ericksen, Einstein, Smoluchowski, and Kinch. A new theory of electroviscoelasticity describes the behavior of electrified liquid-liquid interfaces in fine dispersed systems and is based on a new constitutive model of liquids. According to this model liquid-liquid droplet or droplet-film structure (collective of particles) is considered as a macroscopic system with internal structure determined by the way the molecules (ions) are tuned (structured) into the primary components of a cluster configuration. How the tuning/structuring occurs depends on the physical fields involved, both potential (elastic forces) and nonpotential (resistance forces). All these microelements of the primary structure can be considered as electromechanical oscillators assembled into groups, so that excitation by an external physical field may cause oscillations at the resonant/characteristic frequency of the system itself (coupling at the characteristic frequency). Up to now, three possible mathematical formalisms have been discussed related to the theory of electroviscoelasticity. The first is the tension tensor model, where the normal and tangential forces are considered, only in mathematical formalism, regardless of their origin (mechanical and/or electrical). The second is the Van der Pol derivative model, presented by linear and nonlinear differential equations. Finally, the third model presents an effort to generalize the previous Van der Pol equation: the ordinary time derivative and integral are now replaced with the corresponding fractional-order time derivative and integral of order p<1.
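
The third formalism replaces an integer-order derivative in the Van der Pol equation with a fractional one. A schematic form, in my notation rather than the authors', using the Caputo definition of the fractional derivative of order p < 1:

```latex
% Classical Van der Pol oscillator:
%   \ddot{x} - \mu (1 - x^2) \dot{x} + x = 0
% Fractional-order generalization (schematic): the first-order derivative is
% replaced by a Caputo derivative of order p < 1,
\ddot{x}(t) - \mu \left(1 - x^2(t)\right) {}^{C}\!D_t^{\,p}\, x(t) + x(t) = 0,
\qquad
{}^{C}\!D_t^{\,p}\, x(t) = \frac{1}{\Gamma(1-p)} \int_0^t \frac{\dot{x}(\tau)}{(t-\tau)^{p}}\, d\tau .
```

The memory kernel $(t-\tau)^{-p}$ makes the damping term depend on the oscillator's history, which is the mathematical sense in which the fractional model generalizes the ordinary one.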

  12. Automated model-based calibration of imaging spectrographs

    NASA Astrophysics Data System (ADS)

    Kosec, Matjaž; Bürmen, Miran; Tomaževič, Dejan; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Hyper-spectral imaging has gained recognition as an important non-invasive research tool in the field of biomedicine. Among the variety of available hyperspectral imaging systems, systems comprising an imaging spectrograph, lens, wideband illumination source and a corresponding camera stand out for the short acquisition time and good signal-to-noise ratio. The individual images acquired by imaging spectrograph-based systems contain full spectral information along one spatial dimension. Due to the imperfections in the camera lens and in particular the optical components of the imaging spectrograph, the acquired images are subject to spatial and spectral distortions, resulting in scene-dependent nonlinear spectral degradations and spatial misalignments which need to be corrected. However, the existing correction methods require complex calibration setups and tedious manual involvement; therefore, the correction of the distortions is often neglected. Such a simplified approach can lead to significant errors in the analysis of the acquired hyperspectral images. In this paper, we present a novel fully automated method for correction of the geometric and spectral distortions in the acquired images. The method is based on automated non-rigid registration of the reference and acquired images corresponding to the proposed calibration object incorporating standardized spatial and spectral information. The obtained transformation was successfully used for sub-pixel correction of various hyperspectral images, resulting in significant improvement of the spectral and spatial alignment. It was found that the proposed calibration is highly accurate and suitable for routine use in applications involving either diffuse reflectance or transmittance measurement setups.

  13. Individual differences in response to automation: the five factor model of personality.

    PubMed

    Szalma, James L; Taylor, Grant S

    2011-06-01

    This study examined the relationship of operator personality (Five Factor Model) and characteristics of the task and of adaptive automation (reliability and adaptiveness-whether the automation was well-matched to changes in task demand) to operator performance, workload, stress, and coping. This represents the first investigation of how the Five Factors relate to human response to automation. One-hundred-sixty-one college students experienced either 75% or 95% reliable automation provided with task loads of either two or four displays to be monitored. The task required threat detection in a simulated uninhabited ground vehicle (UGV) task. Task demand exerted the strongest influence on outcome variables. Automation characteristics did not directly impact workload or stress, but effects did emerge in the context of trait-task interactions that varied as a function of the dimension of workload and stress. The pattern of relationships of traits to dependent variables was generally moderated by at least one task factor. Neuroticism was related to poorer performance in some conditions, and all five traits were associated with at least one measure of workload and stress. Neuroticism generally predicted increased workload and stress and the other traits predicted decreased levels of these states. However, in the case of the relation of Extraversion and Agreeableness to Worry, Frustration, and avoidant coping, the direction of effects varied across task conditions. The results support incorporation of individual differences into automation design by identifying the relevant person characteristics and using the information to determine what functions to automate and the form and level of automation.

  14. The electrical behavior of GaAs-insulator interfaces - A discrete energy interface state model

    NASA Technical Reports Server (NTRS)

    Kazior, T. E.; Lagowski, J.; Gatos, H. C.

    1983-01-01

    The relationship between the electrical behavior of GaAs Metal Insulator Semiconductor (MIS) structures and the high density discrete energy interface states (0.7 and 0.9 eV below the conduction band) was investigated utilizing photo- and thermal emission from the interface states in conjunction with capacitance measurements. It was found that all essential features of the anomalous behavior of GaAs MIS structures, such as the frequency dispersion and the C-V hysteresis, can be explained on the basis of nonequilibrium charging and discharging of the high density discrete energy interface states.

  15. ORIGAMI -- The Oak Ridge Geometry Analysis and Modeling Interface

    SciTech Connect

    Burns, T.J.

    1996-04-01

    A revised "ray-tracing" package which is a superset of the geometry specifications of the radiation transport codes MORSE, MASH (GIFT Versions 4 and 5), HETC, and TORT has been developed by ORNL. Two additional CAD-based formats are also included as part of the superset: the native format of the BRL-CAD system (MGED), and the solid constructive geometry subset of the IGES specification. As part of this upgrade effort, ORNL has designed an X Windows-based utility (ORIGAMI) to facilitate the construction, manipulation, and display of the geometric models required by the MASH code. Since the primary design criterion for this effort was that the utility "see" the geometric model exactly as the radiation transport code does, ORIGAMI is designed to utilize the same "ray-tracing" package as the revised version of MASH. ORIGAMI incorporates the functionality of two previously developed graphical utilities, CGVIEW and ORGBUG, into a single consistent interface.

  16. Model Based Control Design Using SLPS "Simulink PSpice Interface"

    NASA Astrophysics Data System (ADS)

    Moslehpour, Saeid; Kulcu, Ercan K.; Alnajjar, Hisham

    This paper elaborates on the new integration offered by the PSpice SLPS interface and the MATLAB Simulink products. SLPS links two widely used design products: PSpice and MathWorks' Simulink simulator. The SLPS simulation environment supports the substitution of an actual electronic block with an "ideal model", better known as the mathematical Simulink model, enabling the designer to identify and correct integration issues of electronics within a system. Moreover, a stress audit can be performed using PSpice smoke analysis, which helps verify whether components are operating within the manufacturer's safe operating limits. This is invaluable since many companies design and test the electronics separately from the system level; integration issues are therefore usually not discovered until the prototype stage, causing critical delays in getting a product to market.

  17. A Binary Programming Approach to Automated Test Assembly for Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Finkelman, Matthew D.; Kim, Wonsuk; Roussos, Louis; Verschoor, Angela

    2010-01-01

    Automated test assembly (ATA) has been an area of prolific psychometric research. Although ATA methodology is well developed for unidimensional models, its application alongside cognitive diagnosis models (CDMs) is a burgeoning topic. Two suggested procedures for combining ATA and CDMs are to maximize the cognitive diagnostic index and to use a…

  18. Data for Environmental Modeling (D4EM): Background and Applications of Data Automation

    EPA Science Inventory

    The Data for Environmental Modeling (D4EM) project demonstrates the development of a comprehensive set of open source software tools that overcome obstacles to accessing data needed by automating the process of populating model input data sets with environmental data available fr...

  19. Mathematical modeling of dispersion in single interface flow analysis.

    PubMed

    Rodrigues, S Sofia M; Marques, Karine L; Lopes, João A; Santos, João L M; Lima, José L F C

    2010-03-24

    This work describes the optimization of the recently proposed fluid management methodology single interface flow analysis (SIFA) using chemometrics modelling. The influence of the most important physical and hydrodynamic flow parameters of SIFA systems on the axial dispersion coefficients estimated with the axially dispersed plug-flow model, was evaluated with chemometrics linear (multivariate linear regression) and non-linear (simple multiplicative and feed-forward neural networks) models. A D-optimal experimental design built with three reaction coil properties (length, configuration and internal diameter), flow-cell volume and flow rate, was adopted to generate the experimental data. Bromocresol green was used as the dye solution and the analytical signals were monitored by spectrophotometric detection at 614 nm. Results demonstrate that, independent of the model type, the statistically relevant parameters were the reactor coil length and internal diameter and the flow rate. The linear and non-linear multiplicative models were able to estimate the axial dispersion coefficient with validation r² = 0.86. Artificial neural networks estimated the same parameter with an increased accuracy (r² = 0.93), demonstrating that relations between the physical parameters and the dispersion phenomena are highly non-linear. The analysis of the response surface control charts simulated with the developed models allowed the interpretation of the relationships between the physical parameters and the dispersion processes.

  20. The Interface Between Theory and Data in Structural Equation Models

    USGS Publications Warehouse

    Grace, James B.; Bollen, Kenneth A.

    2006-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite, for representing general concepts. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling general relationships of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially reduced form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influences of suites of variables are often of interest.

  1. A robust and flexible Geospatial Modeling Interface (GMI) for environmental model deployment and evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper provides an overview of the GMI (Geospatial Modeling Interface) simulation framework for environmental model deployment and assessment. GMI currently provides access to multiple environmental models including AgroEcoSystem-Watershed (AgES-W), Nitrate Leaching and Economic Analysis 2 (NLEA...

  2. Modeling and diagnosing interface mix in layered ICF implosions

    NASA Astrophysics Data System (ADS)

    Weber, C. R.; Berzak Hopkins, L. F.; Clark, D. S.; Haan, S. W.; Ho, D. D.; Meezan, N. B.; Milovich, J. L.; Robey, H. F.; Smalyuk, V. A.; Thomas, C. A.

    2015-11-01

    Mixing at the fuel-ablator interface of an inertial confinement fusion (ICF) implosion can arise from an unfavorable in-flight Atwood number between the cryogenic DT fuel and the ablator. High-Z dopant is typically added to the ablator to control the Atwood number, but recent high-density carbon (HDC) capsules have been shot at the National Ignition Facility (NIF) without this added dopant. Highly resolved post-shot modeling of these implosions shows that there was significant mixing of ablator material into the dense DT fuel. This mix lowers the fuel density and results in less overall compression, helping to explain the measured ratio of down-scattered-to-primary neutrons. Future experimental designs will seek to improve this issue through adding dopant and changing the x-ray spectra with a different hohlraum wall material. To test these changes, we are designing an experimental platform to look at the growth of this mixing layer. This technique uses side-on radiography to measure the spatial extent of an embedded high-Z tracer layer near the interface. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.
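
The in-flight Atwood number the abstract refers to is a simple density contrast across the interface. A sketch with illustrative densities (not values from the NIF shots):

```python
def atwood(rho_1, rho_2):
    """Atwood number A = (rho_1 - rho_2) / (rho_1 + rho_2) across an interface,
    taking rho_1 on the side that decelerates into rho_2."""
    return (rho_1 - rho_2) / (rho_1 + rho_2)

# Illustrative in-flight densities (g/cm^3), chosen only to show the trend:
# a larger contrast between ablator and DT fuel gives a larger |A|, and hence
# stronger interface instability growth. Dopant is added to control this contrast.
a_large_contrast = atwood(rho_1=0.9, rho_2=0.6)
a_small_contrast = atwood(rho_1=0.7, rho_2=0.6)
```

The sign and magnitude of A determine whether perturbations at the fuel-ablator interface grow, which is why controlling it with dopant is central to the designs discussed above.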

  3. SOLVE and RESOLVE: automated structure solution, density modification and model building.

    PubMed

    Terwilliger, Thomas

    2004-01-01

    The software SOLVE and RESOLVE can carry out all the steps in macromolecular structure solution, from scaling and heavy-atom location through phasing, density modification and model-building in the MAD, SAD and MIR cases. SOLVE uses a scoring scheme to convert the decision-making in macromolecular structure solution to an optimization problem. RESOLVE carries out the identification of NCS, density modification and automated model-building. The procedure is fully automated and can function at resolutions as low as 3 Å.

  4. The Model and Control Methods of Access to Information and Technology Resources of Automated Control Systems in Water Supply Industry

    NASA Astrophysics Data System (ADS)

    Rytov, M. Yu; Spichyack, S. A.; Fedorov, V. P.; Petreshin, D. I.

    2017-01-01

    The paper describes a formalized model of access control for the information and technological resources of automated control systems at water supply enterprises. The model accounts for the variety of communication links to information systems and technological equipment. Methods of controlling access to the information and technological resources of such systems are also studied. On the basis of the formalized model and the corresponding methods, a software-hardware complex for rapid access to information and technological resources of automated control systems was developed, comprising automated workstations for the administrator and for end users.
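
The paper's formal model is not reproduced here, but its core question, which subjects may perform which operations on which resources, can be sketched as an access matrix. Subject and resource names below are hypothetical, not taken from the paper:

```python
# Minimal access-control matrix sketch: subject -> resource -> permitted operations.
ACL = {
    "administrator": {"scada_plc": {"read", "write", "configure"},
                      "telemetry_db": {"read", "write"}},
    "operator":      {"scada_plc": {"read"},
                      "telemetry_db": {"read"}},
}

def is_allowed(subject, resource, operation):
    """Deny by default: grant only if the (subject, resource, operation)
    triple is explicitly listed in the matrix."""
    return operation in ACL.get(subject, {}).get(resource, set())
```

A deployed system would layer authentication and the communication-link constraints the abstract mentions on top of a check like this; the matrix only captures the authorization decision.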

  5. SN_GUI: a graphical user interface for snowpack modeling

    NASA Astrophysics Data System (ADS)

    Spreitzhofer, G.; Fierz, C.; Lehning, M.

    2004-10-01

    SNOWPACK is a physical snow cover model. The model not only serves as a valuable research tool, but also runs operationally on a network of high Alpine automatic weather and snow measurement sites. In order to facilitate the operation of SNOWPACK and the interpretation of the results obtained by this model, a user-friendly graphical user interface for snowpack modeling, named SN_GUI, was created. This Java-based and thus platform-independent tool can be operated in two modes, one designed to fulfill the requirements of avalanche warning services (e.g. by providing information about critical layers within the snowpack that are closely related to the avalanche activity), and the other one offering a variety of additional options satisfying the needs of researchers. The user of SN_GUI is graphically guided through the entire process of creating snow cover simulations. The starting point is the efficient creation of input parameter files for SNOWPACK, followed by the launching of SNOWPACK with a variety of parameter settings. Finally, after the successful termination of the run, a number of interactive display options may be used to visualize the model output. Among these are vertical profiles and time profiles for many parameters. Besides other features, SN_GUI allows the use of various color, time and coordinate scales, and the comparison of modelled and observed parameters.

  6. Modeling the Electrical Contact Resistance at Steel-Carbon Interfaces

    NASA Astrophysics Data System (ADS)

    Brimmo, Ayoola T.; Hassan, Mohamed I.

    2016-01-01

    In the aluminum smelting industry, electrical contact resistance at the stub-carbon (steel-carbon) interface has recurrently been reported at magnitudes that warrant concern. Mitigating this via finite element modeling has been the focus of a number of investigations, with the pressure- and temperature-dependent contact resistance relation frequently cited as a factor that limits the accuracy of such models. In this study, pressure- and temperature-dependent relations are derived from the most extensively cited works that have experimentally characterized the electrical contact resistance at these contacts. These relations are applied in a validated thermo-electro-mechanical finite element model used to estimate the voltage drop across a steel-carbon laboratory setup. By comparing the model's estimates of the electrical contact resistance with experimental measurements, we deduce the applicability of the different relations over a range of temperatures. The ultimate goal of this study is to apply mathematical modeling in providing pressure- and temperature-dependent relations that best describe the steel-carbon electrical contact resistance and to identify the best-fit relation at specific thermodynamic conditions.
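
The abstract does not reproduce the derived relations, but a pressure- and temperature-dependent contact resistance of the general Holm-style power-law form can be sketched as follows; the coefficients c0, m, alpha, and t0 are purely illustrative placeholders, not values from the study:

```python
def contact_resistance(pressure_pa, temp_k,
                       c0=2.0e-3, m=0.8, alpha=1.5e-3, t0=293.0):
    """Illustrative specific contact resistance: a power-law decrease
    with contact pressure and a linear drift with temperature.
    All coefficients are hypothetical placeholders."""
    return c0 * pressure_pa**(-m) * (1.0 + alpha * (temp_k - t0))
```

In a finite element model, such a relation would be evaluated at each contact element, using the local pressure and temperature, to set the interface conductance.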

  7. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  8. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further impede rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters of the parallel SWAT (P-SWAT), the number of processes and the corresponding percentage of work distributed to the master process, on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (a speedup of 1.74–3.36) using two to five processes with a proper task-distribution scheme between the master and slave processes. Although computation time decreases with an increasing number of processes (from two to five), the gain diminishes because of the accompanying increase in message passing between the master and all slave processes. Our case study demonstrates that P-SWAT may reach its maximum speedup with a five-process run, and its performance can be quite stable (fairly independent of project size). Overall, P-SWAT can substantially reduce computation time for an individual model run, for manual and automatic calibration procedures, and for optimization of best management practices. In particular, the parallelization method and the scheme for deriving the optimal parameters used in this study can be valuable and easily applied to other hydrological or environmental models.
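
As a quick consistency check on the figures quoted above, the speedup implied by a fractional reduction in execution time follows directly from S = t_serial / t_parallel; a minimal sketch:

```python
def speedup_from_reduction(reduction):
    """Speedup implied by a fractional runtime reduction:
    t_parallel = (1 - reduction) * t_serial, so S = 1 / (1 - reduction)."""
    if not 0.0 <= reduction < 1.0:
        raise ValueError("reduction must be in [0, 1)")
    return 1.0 / (1.0 - reduction)
```

A 42% reduction implies a speedup of about 1.72 and a 70% reduction about 3.33, matching the quoted 1.74–3.36 range to within rounding of the reported percentages.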

  9. Automated NMR fragment based screening identified a novel interface blocker to the LARG/RhoA complex.

    PubMed

    Gao, Jia; Ma, Rongsheng; Wang, Wei; Wang, Na; Sasaki, Ryan; Snyderman, David; Wu, Jihui; Ruan, Ke

    2014-01-01

    The small GTPase cycles between the inactive GDP form and the activated GTP form, catalyzed by upstream guanine exchange factors. Modulation of this process by small molecules has proven to be a fruitful route for therapeutic intervention to prevent over-activation of the small GTPase. The fragment-based approach that has emerged in the past decade has demonstrated its potential in the discovery of inhibitors targeting such novel and challenging protein-protein interactions. The details of carrying out NMR fragment screening from scratch have rarely been disclosed comprehensively, which restricts its wider application. To achieve a consistent screening procedure applicable to a number of targets, we developed a highly automated protocol covering as many aspects of NMR fragment screening as possible, including the construction of a small but diverse library, determination of aqueous solubility by NMR, grouping of compounds with mutual dispersity into cocktails, and automated processing and visualization of the ligand-based screening spectra. We exemplified our streamlined screening on RhoA alone and on the complex of the small GTPase RhoA with its upstream guanine exchange factor LARG. Two hits were confirmed from the primary screening in cocktails and the secondary screening over individual hits for the LARG/RhoA complex, one of which was also identified in the screening against RhoA alone. HSQC titration of the two hits against RhoA and LARG alone identified one compound that binds RhoA.GDP with 0.11 mM affinity and perturbs residues in the switch II region of RhoA. This hit blocked formation of the LARG/RhoA complex, as validated by native gel electrophoresis and by titration of RhoA into ¹⁵N-labeled LARG in the absence and presence of the compound. It therefore provides a starting point toward a more potent inhibitor of RhoA activation catalyzed by LARG.

  10. Automated NMR Fragment Based Screening Identified a Novel Interface Blocker to the LARG/RhoA Complex

    PubMed Central

    Gao, Jia; Ma, Rongsheng; Wang, Wei; Wang, Na; Sasaki, Ryan; Snyderman, David; Wu, Jihui; Ruan, Ke

    2014-01-01

    The small GTPase cycles between the inactive GDP form and the activated GTP form, catalyzed by upstream guanine exchange factors. Modulation of this process by small molecules has proven to be a fruitful route for therapeutic intervention to prevent over-activation of the small GTPase. The fragment-based approach that has emerged in the past decade has demonstrated its potential in the discovery of inhibitors targeting such novel and challenging protein-protein interactions. The details of carrying out NMR fragment screening from scratch have rarely been disclosed comprehensively, which restricts its wider application. To achieve a consistent screening procedure applicable to a number of targets, we developed a highly automated protocol covering as many aspects of NMR fragment screening as possible, including the construction of a small but diverse library, determination of aqueous solubility by NMR, grouping of compounds with mutual dispersity into cocktails, and automated processing and visualization of the ligand-based screening spectra. We exemplified our streamlined screening on RhoA alone and on the complex of the small GTPase RhoA with its upstream guanine exchange factor LARG. Two hits were confirmed from the primary screening in cocktails and the secondary screening over individual hits for the LARG/RhoA complex, one of which was also identified in the screening against RhoA alone. HSQC titration of the two hits against RhoA and LARG alone identified one compound that binds RhoA.GDP with 0.11 mM affinity and perturbs residues in the switch II region of RhoA. This hit blocked formation of the LARG/RhoA complex, as validated by native gel electrophoresis and by titration of RhoA into 15N-labeled LARG in the absence and presence of the compound. It therefore provides a starting point toward a more potent inhibitor of RhoA activation catalyzed by LARG. PMID:24505392
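
Titration responses like those described above are commonly ranked by a combined ¹H/¹⁵N chemical shift perturbation, computed as a weighted Euclidean distance; a minimal sketch (the 0.14 nitrogen weighting is a widely used convention, not necessarily the one applied in this study):

```python
import math

def csp(delta_h, delta_n, w_n=0.14):
    """Combined 1H/15N chemical shift perturbation (ppm).
    w_n rescales the larger 15N shift range onto the 1H scale."""
    return math.sqrt(0.5 * (delta_h**2 + (w_n * delta_n)**2))
```

Residues whose CSP exceeds, say, one standard deviation above the mean over all assigned peaks would then be mapped onto the structure to locate the binding site.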

  11. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGICAL MODELING TOOL FOR WATERSHED MANAGEMENT AND LANDSCAPE ASSESSMENT

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment (http://www.epa.gov/nerlesd1/land-sci/agwa/introduction.htm and www.tucson.ars.ag.gov/agwa) tool is a GIS interface jointly developed by the U.S. Environmental Protection Agency, USDA-Agricultural Research Service, and the University ...

  12. Epsilon-Q: An Automated Analyzer Interface for Mass Spectral Library Search and Label-Free Protein Quantification.

    PubMed

    Cho, Jin-Young; Lee, Hyoung-Joo; Jeong, Seul-Ki; Paik, Young-Ki

    2017-04-04

    Mass spectrometry (MS) is a widely used proteome analysis tool in biomedical science. In MS-based bottom-up proteomic approaches to protein identification, sequence database (DB) searching has been used routinely because of its simplicity and convenience. However, searching a sequence DB with multiple variable-modification options can increase processing time and false-positive errors in large and complicated MS data sets. Spectral library searching is an alternative that avoids the limitations of sequence DB searching and allows the detection of more peptides with high sensitivity. Unfortunately, this technique has lower proteome coverage, limiting the detection of novel and complete peptide sequences in biological samples. To solve these problems, we previously developed the "Combo-Spec Search" method, which manually uses multiple reference and simulated spectral libraries to analyze whole proteomes in a biological sample. In this study, we have developed a new analytical interface tool called "Epsilon-Q" to enhance the functions of both the Combo-Spec Search method and label-free protein quantification. Epsilon-Q automatically performs multiple spectral library searches, class-specific false-discovery-rate control, and result integration. It has a user-friendly graphical interface and demonstrates good performance in identifying and quantifying proteins, supporting standard MS data formats and spectrum-to-spectrum matching powered by SpectraST. Furthermore, when the Epsilon-Q interface is combined with the Combo-Spec Search method, the resulting Epsilon-Q system shows synergy, outperforming other sequence DB search engines in identifying and quantifying low-abundance proteins in biological samples. The Epsilon-Q system can be a versatile tool for comparative proteome analysis based on multiple spectral libraries and label-free quantification.

  13. Automated Measurement and Statistical Modeling of Elastic Laminae in Arteries

    PubMed Central

    Xu, Hai; Hu, Jin-Jia; Humphrey, Jay D.; Liu, Jyh-Charn

    2010-01-01

    Structural features of elastic laminae within arteries can provide vital information for both the mechanobiology and the biomechanics of the wall. In this paper, we propose, test, and illustrate a new computer-based scheme for automated analysis of regional distributions of elastic laminae thickness, inter-lamellar distances, and fragmentation (furcation points) from standard histological images. Our scheme eliminates potential artifacts produced by tissue cutting, automatically aligns tissue according to physiologic orientations, and performs cross-sectional measurements along radial directions. A statistical randomized complete block design (RCBD) and F-test were used to assess potential (non)-uniformity of lamellar thicknesses and separations along both radial and circumferential directions. Illustrative results for both normotensive and hypertensive thoracic porcine aorta revealed marked heterogeneity along the radial direction in nearly stress-free samples. Clearly, regional measurements can provide more detailed information about morphologic changes that cannot be gained by globally averaged evaluations alone. We also found that quantifying Furcation Point densities offers new information about potential elastin fragmentation, particularly in response to increased loading due to hypertension. PMID:20221934
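
The study above assessed (non)-uniformity with a randomized complete block design; as a simpler illustration of the underlying F statistic, a one-way ANOVA over regional thickness samples can be computed in plain Python (no statistics library assumed):

```python
def oneway_f(groups):
    """One-way ANOVA F statistic for a list of samples (one per region):
    ratio of between-group to within-group mean squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean)**2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g))**2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom indicates that regional mean thicknesses differ, i.e. the lamellar structure is not uniform across regions.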

  14. Modeling and dynamic simulation of ultraviolet induced growing interfaces

    NASA Astrophysics Data System (ADS)

    Flicstein, J.; Guillonneau, E.; Pata, S.; Kee Chun, L. S.; Palmier, J. F.; Daguet, C.; Courant, J. L.

    1999-01-01

    A solid-on-solid (SOS) model to simulate SiN:H dynamic surface characteristics during ultraviolet chemical vapor deposition (CVD) onto indium phosphide is presented. The nucleation process is taken to occur at UV-induced active charged centers on the surface of the substrate. Photolysis rates are determined using bond dissociation energies of the molecular processes that generate active adsorbed species. The microscopic activation energy of each elementary process depends on the configuration of neighbouring atoms. A Monte Carlo-Metropolis method using these microscopic activation energy barriers is applied to the molecular processes by a three-dimensional algorithm. The model includes lattice coordination and atom-atom interactions out to third-nearest neighbours. Molecular events are chosen with a probability of occurrence that depends on the kinetic rates at each atomic site. Stable incorporation of the main species is enabled. Three-dimensional simulation of a growing interface confirms a thermally activated rough-smooth transition for submicron-thick layers, consistent with the Kardar-Parisi-Zhang model.
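
The Metropolis acceptance rule at the heart of such a kinetic simulation is standard; a minimal sketch (the energy scale here is generic, not the paper's SiN:H parameters):

```python
import math
import random

def metropolis_accept(delta_e, kt, rng=random.random):
    """Accept a proposed surface event with probability min(1, exp(-dE/kT)).
    Events that lower the energy are always accepted; uphill moves are
    accepted with Boltzmann probability."""
    if delta_e <= 0.0:
        return True
    return rng() < math.exp(-delta_e / kt)
```

In an SOS growth loop, delta_e would be computed from the change in neighbour bonding at the chosen lattice site, so the acceptance rate reflects the site-dependent microscopic activation barrier.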

  15. Automated Discovery and Modeling of Sequential Patterns Preceding Events of Interest

    NASA Technical Reports Server (NTRS)

    Rohloff, Kurt

    2010-01-01

    The integration of emerging data manipulation technologies has enabled a paradigm shift in practitioners' ability to understand and anticipate events of interest in complex systems. Example events of interest include outbreaks of socio-political violence in nation-states. Rather than relying on human-centric modeling efforts that are limited by the availability of subject-matter experts (SMEs), automated data processing technologies have enabled the development of innovative automated complex-system modeling and predictive analysis technologies. We introduce one such emerging modeling technology: the sequential pattern methodology. We have applied the sequential pattern methodology to automatically identify patterns of observed behavior that precede outbreaks of socio-political violence such as riots, rebellions, and coups in nation-states. The sequential pattern methodology is a groundbreaking approach to automated complex-system model discovery because it generates easily interpretable patterns based on direct observations of sampled factor data, yielding a deeper understanding of societal behaviors while remaining tolerant of observation noise and missing data. The discovered patterns are simple to interpret and mimic human identification of observed trends in temporal data. Discovered patterns also provide an automated forecasting ability: we discuss an example of using discovered patterns coupled with a rich data environment to forecast various types of socio-political violence in nation-states.
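
The record does not give the pattern representation, but the core test, whether an ordered (possibly gapped) sequence of observations occurs in a window before an event, can be sketched as follows; the observation symbols are hypothetical illustrations:

```python
def pattern_precedes(observations, pattern, event_index, window):
    """True if `pattern` occurs as an ordered, possibly gapped,
    subsequence within the last `window` observations before the
    event at `event_index`."""
    history = iter(observations[max(0, event_index - window):event_index])
    # `symbol in history` consumes the iterator, enforcing order.
    return all(symbol in history for symbol in pattern)
```

Discovery would amount to counting, over many labeled event windows, which candidate patterns precede events far more often than they precede non-events.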

  16. Mental Models, Trust, and Reliance: Exploring the Effect of Human Perceptions on Automation Use

    DTIC Science & Technology

    2009-06-01

    Journal of Industrial Ergonomics, 28, 85. Cannon-Bowers, J. A., Salas, E., & Converse, S. (2001). Chapter 12: Shared mental models in expert team...M., & Seong, Y. (2001). Assessment of operator trust in and utilization of automated decision-aids under different framing conditions. International

  17. Evaluation of automated cell disruptor methods for oomycetous and ascomycetous model organisms

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Two automated cell disruptor-based methods for RNA extraction; disruption of thawed cells submerged in TRIzol Reagent (method QP), and direct disruption of frozen cells on dry ice (method CP), were optimized for a model oomycete, Phytophthora capsici, and compared with grinding in a mortar and pestl...

  18. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  19. Automated volumetric grid generation for finite element modeling of human hand joints

    SciTech Connect

    Hollerbach, K.; Underhill, K.; Rainsberger, R.

    1995-02-01

    We are developing techniques for finite element analysis of human joints. These techniques need to provide high quality results rapidly in order to be useful to a physician. The research presented here increases model quality and decreases user input time by automating the volumetric mesh generation step.

  20. Modeling Multiple Human-Automation Distributed Systems using Network-form Games

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume

    2012-01-01

    The paper describes at a high-level the network-form game framework (based on Bayes net and game theory), which can be used to model and analyze safety issues in large, distributed, mixed human-automation systems such as NextGen.

  1. Advances in automated noise data acquisition and noise source modeling for power reactors

    SciTech Connect

    Clapp, N.E. Jr.; Kryter, R.C.; Sweeney, F.J.; Renier, J.A.

    1981-01-01

    A newly expanded program, directed toward achieving a better appreciation of both the strengths and limitations of on-line, noise-based, long-term surveillance programs for nuclear reactors, is described. Initial results in the complementary experimental (acquisition and automated screening of noise signatures) and theoretical (stochastic modeling of likely noise sources) areas of investigation are given.

  2. Developing Novel Automated Apparatus for Studying Battery of Social Behaviors in Mutant Mouse Models for Autism

    DTIC Science & Technology

    2013-06-01

    the females). Task 2b: Automated behavioral phenotyping of a mouse model for autism using the video - and RFID-based tracking technology Over the...behavioral traits and the relationship between environmental-gene interactions in mouse models for autism . Finally, since our experimental platform poses no...animal research models . 5 Body Task 1: Develop a combined video - and RFID-based experimental system to allow high- throughput standardized

  3. The Automated Geospatial Watershed Assessment Tool (AGWA): Developing Post-Fire Model Parameters Using Precipitation and Runoff Records from Gauged Watersheds

    NASA Astrophysics Data System (ADS)

    Sheppard, B. S.; Goodrich, D. C.; Guertin, D. P.; Burns, I. S.; Canfield, E.; Sidman, G.

    2014-12-01

    New tools and functionality have been incorporated into the Automated Geospatial Watershed Assessment Tool (AGWA) to assess the impacts of wildfire on runoff and erosion. AGWA (see: www.tucson.ars.ag.gov/agwa or http://www.epa.gov/esd/land-sci/agwa/) is a GIS interface jointly developed by the USDA-Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execution of a suite of hydrologic and erosion models (RHEM, WEPP, KINEROS2 and SWAT). Through an intuitive interface the user selects an outlet from which AGWA delineates and discretizes the watershed using a Digital Elevation Model (DEM). The watershed model elements are then intersected with terrain, soils, and land cover data layers to derive the requisite model input parameters. With the addition of a burn severity map, AGWA can be used to model post-wildfire changes to a catchment. By applying the same design storm to burned and unburned conditions, a rapid assessment of the watershed can be made and the areas most prone to flooding can be identified. Post-fire precipitation and runoff records from gauged forested watersheds are now being used to improve post-fire model input parameters. Rainfall and runoff pairs have been selected from these records in order to calibrate parameter values for surface roughness and saturated hydraulic conductivity used in the KINEROS2 model. Several objective functions will be tried in the calibration process. Results will be validated. Currently, Department of the Interior Burned Area Emergency Response (DOI BAER) teams are using the AGWA-KINEROS2 modeling interface to assess hydrologically imposed risk immediately following wildfire. These parameter refinements are being made to further improve the quality of these assessments.
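
The abstract does not name which objective functions are being tried; one standard choice for calibrating models against rainfall-runoff pairs is the Nash-Sutcliffe efficiency, sketched here:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / (variance about the observed mean).
    1.0 is a perfect fit; values <= 0 mean the model predicts no better
    than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s)**2 for o, s in zip(observed, simulated))
    svar = sum((o - mean_obs)**2 for o in observed)
    return 1.0 - sse / svar
```

A calibration loop would adjust roughness and saturated hydraulic conductivity to maximize NSE over the selected post-fire events, then validate on held-out events.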

  4. Graphical User Interface for Simulink Integrated Performance Analysis Model

    NASA Technical Reports Server (NTRS)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X engine (built by Pratt & Whitney Rocketdyne) in the Upper Stage of the Ares I Crew Launch Vehicle will only start within a certain range of temperature and pressure for the Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, to save the maximum amount of time and money, and to show that the J-2X engine will start when required, a graphical user interface (GUI) was created to allow values to be input as parameters to the Simulink model without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink model, and retrieve the output from the Simulink model. The GUI was built using MATLAB and runs the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, from the output values of each test of the simulation so that they may be graphed and compared to other values.

  5. Adaptive Automation Triggered by EEG-Based Mental Workload Index: A Passive Brain-Computer Interface Application in Realistic Air Traffic Control Environment.

    PubMed

    Aricò, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Colosimo, Alfredo; Bonelli, Stefano; Golfetti, Alessia; Pozzi, Simone; Imbert, Jean-Paul; Granger, Géraud; Benhacene, Raïlane; Babiloni, Fabio

    2016-01-01

    Adaptive Automation (AA) is a promising approach to keeping the task workload demand within appropriate levels, avoiding both underload and overload conditions and hence enhancing the overall performance and safety of the human-machine system. The main issue in the use of AA is how to trigger AA solutions without affecting the operative task. In this regard, passive Brain-Computer Interface (pBCI) systems are good candidates for activating automation, since they can gather information about the covert behavior (e.g., mental workload) of a subject by analyzing neurophysiological signals (i.e., brain activity) without interfering with the ongoing operational activity. We proposed a pBCI system able to trigger AA solutions integrated into a realistic Air Traffic Management (ATM) research simulator developed and hosted at ENAC (École Nationale de l'Aviation Civile, Toulouse, France). Twelve Air Traffic Controller (ATCO) students were involved in the experiment and were asked to perform ATM scenarios with and without the support of the AA solutions. Results demonstrated the effectiveness of the proposed pBCI system: it enabled the AA mostly during the high-demanding conditions (i.e., overload situations), reducing the mental workload under which the ATCOs were operating. On the contrary, as desired, the AA was not activated when the workload level was under the threshold, to prevent low-demanding conditions that could drive the operator's workload toward potentially dangerous underload.

  6. Adaptive Automation Triggered by EEG-Based Mental Workload Index: A Passive Brain-Computer Interface Application in Realistic Air Traffic Control Environment

    PubMed Central

    Aricò, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Colosimo, Alfredo; Bonelli, Stefano; Golfetti, Alessia; Pozzi, Simone; Imbert, Jean-Paul; Granger, Géraud; Benhacene, Raïlane; Babiloni, Fabio

    2016-01-01

    Adaptive Automation (AA) is a promising approach to keeping the task workload demand within appropriate levels, avoiding both underload and overload conditions and hence enhancing the overall performance and safety of the human-machine system. The main issue in the use of AA is how to trigger AA solutions without affecting the operative task. In this regard, passive Brain-Computer Interface (pBCI) systems are good candidates for activating automation, since they can gather information about the covert behavior (e.g., mental workload) of a subject by analyzing neurophysiological signals (i.e., brain activity) without interfering with the ongoing operational activity. We proposed a pBCI system able to trigger AA solutions integrated into a realistic Air Traffic Management (ATM) research simulator developed and hosted at ENAC (École Nationale de l'Aviation Civile, Toulouse, France). Twelve Air Traffic Controller (ATCO) students were involved in the experiment and were asked to perform ATM scenarios with and without the support of the AA solutions. Results demonstrated the effectiveness of the proposed pBCI system: it enabled the AA mostly during the high-demanding conditions (i.e., overload situations), reducing the mental workload under which the ATCOs were operating. On the contrary, as desired, the AA was not activated when the workload level was under the threshold, to prevent low-demanding conditions that could drive the operator's workload toward potentially dangerous underload. PMID:27833542
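
In its simplest form, the triggering logic described above reduces to comparing a smoothed workload index against a threshold; a minimal sketch (the window length and threshold are illustrative, not the study's calibrated values):

```python
def should_trigger_aa(index_history, threshold, n=3):
    """Trigger adaptive automation when the mean of the last n
    workload-index samples exceeds the threshold, smoothing out
    momentary spikes in the EEG-derived index."""
    recent = index_history[-n:]
    return sum(recent) / len(recent) > threshold
```

In practice the threshold would be calibrated per operator from baseline low- and high-workload recordings, and hysteresis would be added so automation is not toggled on every sample.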

  7. Does screw-bone interface modelling matter in finite element analyses?

    PubMed

    MacLeod, Alisdair R; Pankaj, Pankaj; Simpson, A Hamish R W

    2012-06-01

    The effect of screw-bone interface modelling strategies was evaluated in the setting of a tibial mid-shaft fracture stabilised using locking plates. Three interface models were examined: fully bonded interface; screw with sliding contact with bone; and screw with sliding contact with bone in an undersized pilot hole. For the simulation of the last interface condition we used a novel thermal expansion approach to generate the pre-stress that the bone would be exposed to during screw insertion. The study finds that the global load-deformation response is not influenced by the interface modelling approach employed; the deformation varied by less than 1% between different interaction models. However, interface modelling is found to have a considerable impact on the local stress-strain environment within the bone in the vicinity of the screws. Frictional and tied representations did not have significantly different peak strain values (<5% difference); the frictional interface had higher peak compressive strains while the tied interface had higher tensile strains. The undersized pilot hole simulation produced the largest strains. The peak minimum principal strains for the frictional interface were 26% of those for the undersized pilot hole simulation at a load of 770 N. It is concluded that the commonly used tie constraint can be used effectively when the only interest is the global load-deformation behaviour. Different contact interface models, however, alter the mechanical response around screw holes leading to different predictions for screw loosening, bone damage and stress shielding.
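
The thermal expansion approach for the undersized pilot hole can be sketched as follows: the screw is modelled at the pilot-hole radius and then "heated" so that its free thermal expansion recovers the nominal interference, generating the insertion pre-stress; the numbers below are illustrative, not the study's geometry:

```python
def equivalent_delta_t(screw_radius, pilot_radius, alpha):
    """Fictitious temperature rise that expands a screw modelled at the
    pilot-hole radius back to its nominal radius:
    alpha * dT = (r_screw - r_pilot) / r_pilot."""
    return (screw_radius - pilot_radius) / (alpha * pilot_radius)
```

Applying this dT to the screw material (with its contact interface active) lets a standard thermo-mechanical solver produce the radial pre-stress without meshing the actual insertion process.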

  8. Challenges in Modeling of the Plasma-Material Interface

    NASA Astrophysics Data System (ADS)

    Krstic, Predrag; Meyer, Fred; Allain, Jean Paul

    2013-09-01

    Plasma-Material Interface mixes materials of the two worlds, creating a new entity, a dynamical surface, which communicates between the two and represent one of the most challenging areas of multidisciplinary science, with many fundamental processes and synergies. How to build an integrated theoretical-experimental approach? Without mutual validation of experiment and theory chances very slim to have believable results? The outreach of the PMI science modeling at the fusion plasma facilities is illustrated by the significant step forward in understanding achieved recently by the quantum-classical modeling of the lithiated carbon surfaces irradiated by deuterium, showing surprisingly large role of oxygen in the deuterium retention and erosion chemistry. The plasma-facing walls of the next-generation fusion reactors will be exposed to high fluxes of neutrons and plasma-particles and will operate at high temperatures for thermodynamic efficiency. To this end we have been studying the evolution dynamics of vacancies and interstitials to the saturated dpa doses of tungsten surfaces bombarded by self-atoms, as well as the plasma-surface interactions of the damaged surfaces (erosion, hydrogen and helium uptake and fuzz formation). PSK and FWM acknowledge support of the ORNL LDRD program.

  9. Driven Interfaces: From Flow to Creep Through Model Reduction

    NASA Astrophysics Data System (ADS)

    Agoritsas, Elisabeth; García-García, Reinaldo; Lecomte, Vivien; Truskinovsky, Lev; Vandembroucq, Damien

    2016-09-01

    The response of spatially extended systems to a force leading their steady state out of equilibrium is strongly affected by the presence of disorder. We focus on the mean velocity induced by a constant force applied on one-dimensional interfaces. In the absence of disorder, the velocity is linear in the force. In the presence of disorder, it is widely admitted, as well as experimentally and numerically verified, that the velocity presents a stretched exponential dependence on the force (the so-called `creep law'), which is out of reach of linear response, or more generically of direct perturbative expansions at small force. In dimension one, there is no exact analytical derivation of such a law, even from a theoretical physical point of view. We propose an effective model with two degrees of freedom, constructed from the full spatially extended model, that captures many aspects of the creep phenomenology. It provides a justification of the creep law form of the velocity-force characteristics, in a quasistatic approximation. Moreover, it captures the non-trivial effects of short-range correlations in the disorder, which govern the low-temperature asymptotics. It enables us to establish a phase diagram where the creep law manifests itself in the vicinity of the origin in the force-system-size-temperature coordinates. Conjointly, we characterise the crossover between the creep regime and a linear-response regime that arises due to finite system size.
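
The stretched-exponential creep law referred to above is conventionally written as

```latex
V(f) \;\sim\; \exp\!\left[-\frac{U_c}{k_B T}\left(\frac{f_c}{f}\right)^{\mu}\right],
```

where f_c is a characteristic depinning force scale, U_c an energy scale set by the disorder, and μ the creep exponent, equal to 1/4 for a one-dimensional interface with random-bond disorder. The singular essential behavior as f → 0 is what places the law beyond any perturbative expansion in the force.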

  10. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

A method is described to improve a linear, nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat the complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, requiring neither eigensolutions nor the inversion of a large matrix.

  11. Modelling the inhomogeneous SiC Schottky interface

    NASA Astrophysics Data System (ADS)

    Gammon, P. M.; Pérez-Tomás, A.; Shah, V. A.; Vavasour, O.; Donchev, E.; Pang, J. S.; Myronov, M.; Fisher, C. A.; Jennings, M. R.; Leadley, D. R.; Mawby, P. A.

    2013-12-01

For the first time, the I-V-T dataset of a Schottky diode has been accurately modelled, parameterised, and fully fit, incorporating the effects of interface inhomogeneity, patch pinch-off and resistance, and ideality factors that are both heavily temperature and voltage dependent. A Ni/SiC Schottky diode is characterised at 2 K intervals from 20 to 320 K, which, at room temperature, displays low ideality factors (n < 1.01) that suggest that these diodes may be homogeneous. However, at cryogenic temperatures, excessively high (n > 8), voltage-dependent ideality factors and evidence of the so-called "thermionic field emission effect" within a T0 plot suggest significant inhomogeneity. Two models are used, each derived from Tung's original interactive parallel conduction treatment of barrier height inhomogeneity, that can reproduce these commonly seen effects in single-temperature I-V traces. The first model incorporates patch pinch-off effects and produces accurate and reliable fits above around 150 K and at current densities lower than 10^-5 A cm^-2. Outside this region, we show that resistive effects within a given patch are responsible for the excessive ideality factors, and a second simplified model incorporating these resistive effects as well as pinch-off accurately reproduces the entire temperature range. Analysis of these fitting parameters reduces confidence in the fits above 230 K, and questions are raised about the physical interpretation of the fitting parameters. Despite this, both methods are shown to be useful tools for accurately reproducing I-V-T data over a large temperature range.
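As a rough illustration of the thermionic-emission analysis underlying such fits, the sketch below generates an ideal forward I-V curve and recovers the ideality factor n from the slope of ln J versus V. The Richardson constant, barrier height, and bias points are assumed placeholder values rather than the paper's fitted parameters, and the model deliberately omits the inhomogeneity and patch-resistance effects that are the paper's actual subject.

```python
import math

K_B = 8.617e-5   # Boltzmann constant in eV/K
A_RICH = 146.0   # assumed Richardson constant for 4H-SiC, A cm^-2 K^-2

def te_current(V, T, phi_b, n):
    """Ideal thermionic-emission J(V) = J0*(exp(V/(n*kT)) - 1).

    V and phi_b are in volts; J0 = A* T^2 exp(-phi_b/kT).
    """
    j0 = A_RICH * T**2 * math.exp(-phi_b / (K_B * T))
    return j0 * (math.exp(V / (n * K_B * T)) - 1.0)

def extract_ideality(V1, V2, T, phi_b, n_true):
    """Recover n from the forward-bias slope of ln J vs V,
    where slope = 1/(n*kT) once exp(V/nkT) >> 1."""
    slope = (math.log(te_current(V2, T, phi_b, n_true)) -
             math.log(te_current(V1, T, phi_b, n_true))) / (V2 - V1)
    return 1.0 / (slope * K_B * T)
```

At cryogenic temperatures the paper's point is precisely that this simple extraction fails: the apparent n becomes large and voltage dependent, signalling inhomogeneity rather than ideal emission.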

  12. Interfacing click chemistry with automated oligonucleotide synthesis for the preparation of fluorescent DNA probes containing internal xanthene and cyanine dyes.

    PubMed

    Astakhova, I Kira; Wengel, Jesper

    2013-01-14

    Double-labeled oligonucleotide probes containing fluorophores interacting by energy-transfer mechanisms are essential for modern bioanalysis, molecular diagnostics, and in vivo imaging techniques. Although bright xanthene and cyanine dyes are gaining increased prominence within these fields, little attention has thus far been paid to probes containing these dyes internally attached, a fact which is mainly due to the quite challenging synthesis of such oligonucleotide probes. Herein, by using 2'-O-propargyl uridine phosphoramidite and a series of xanthenes and cyanine azide derivatives, we have for the first time performed solid-phase copper(I)-catalyzed azide-alkyne cycloaddition (CuAAC) click labeling during the automated phosphoramidite oligonucleotide synthesis followed by postsynthetic click reactions in solution. We demonstrate that our novel strategy is rapid and efficient for the preparation of novel oligonucleotide probes containing internally positioned xanthene and cyanine dye pairs and thus represents a significant step forward for the preparation of advanced fluorescent oligonucleotide probes. Furthermore, we demonstrate that the novel xanthene and cyanine labeled probes display unusual and very promising photophysical properties resulting from energy-transfer interactions between the fluorophores controlled by nucleic acid assembly. Potential benefits of using these novel fluorescent probes within, for example, molecular diagnostics and fluorescence microscopy include: Considerable Stokes shifts (40-110 nm), quenched fluorescence of single-stranded probes accompanied by up to 7.7-fold light-up effect of emission upon target DNA/RNA binding, remarkable sensitivity to single-nucleotide mismatches, generally high fluorescence brightness values (FB up to 26), and hence low limit of target detection values (LOD down to <5 nM).

  13. Modeling growth, coalescence, and stability of helium precipitates on patterned interfaces

    NASA Astrophysics Data System (ADS)

    Yuryev, D. V.; Demkowicz, M. J.

    2017-01-01

    We develop a phase field simulation to model morphology evolution of helium (He) precipitates on solid-state interfaces. Our approach accounts for differences in precipitate contact angles arising from location-dependent interface energies and is capable of describing precipitate growth, coalescence, and de-wetting from the interface. We demonstrate our approach for interfaces with linear chains of wettable patches and find that different wetting energies and patch spacings give rise to four distinct classes of helium precipitate morphologies. Our method may be adapted to other scenarios involving fluids precipitating on non-uniform solid-state interfaces as well as to precipitation on patterned surfaces.

  14. Analytical and numerical modeling of non-collinear shear wave mixing at an imperfect interface

    NASA Astrophysics Data System (ADS)

    Zhang, Ziyin; Nagy, Peter B.; Hassan, Waled

    2016-02-01

    Non-collinear shear wave mixing at an imperfect interface between two solids can be exploited for nonlinear ultrasonic assessment of bond quality. In this study we developed two analytical models for nonlinear imperfect interfaces. The first model uses a finite nonlinear interfacial stiffness representation of an imperfect interface of vanishing thickness, while the second model relies on a thin nonlinear interphase layer to represent an imperfect interface region. The second model is actually a derivative of the first model obtained by calculating the equivalent interfacial stiffness of a thin isotropic nonlinear interphase layer in the quasi-static approximation. The predictions of both analytical models were numerically verified by comparison to COMSOL finite element simulations. These models can accurately predict the excess nonlinearity caused by interface imperfections based on the strength of the reflected and transmitted mixed longitudinal waves produced by them under non-collinear shear wave interrogation.

  15. Nonparametric Bayesian Modeling for Automated Database Schema Matching

    SciTech Connect

    Ferragut, Erik M; Laska, Jason A

    2015-01-01

    The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
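The central comparison, the probability that a single model could have generated both fields, can be sketched as a Bayes factor. The authors use nonparametric Bayesian models; as a simplification this sketch uses a Dirichlet-multinomial marginal likelihood for categorical fields, so the function names and the uniform prior (alpha = 1) are illustrative assumptions rather than the paper's method.

```python
from collections import Counter
from math import lgamma

def log_marginal(counts, vocab, alpha=1.0):
    """Dirichlet-multinomial log marginal likelihood of observed counts."""
    n = sum(counts.values())
    k = len(vocab)
    out = lgamma(k * alpha) - lgamma(k * alpha + n)
    for v in vocab:
        out += lgamma(alpha + counts.get(v, 0)) - lgamma(alpha)
    return out

def match_score(field_a, field_b):
    """Log Bayes factor: one shared model vs. two independent models.
    A positive score favours 'same generator', i.e. a schema match."""
    vocab = set(field_a) | set(field_b)
    ca, cb = Counter(field_a), Counter(field_b)
    return (log_marginal(ca + cb, vocab)
            - log_marginal(ca, vocab)
            - log_marginal(cb, vocab))
```

Two fields drawn from the same value distribution score positively, while fields over disjoint value sets score strongly negatively, which is the ranking behaviour a schema matcher needs.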

  16. An alternative accident prediction model for highway-rail interfaces.

    PubMed

    Austin, Ross D; Carson, Jodi L

    2002-01-01

Safety levels at highway-rail interfaces continue to be of major concern despite an ever-increasing focus on improved design and appurtenance application practices. Despite the encouraging trend towards improved safety, accident frequencies remain high, and many accidents result in fatalities. More than half of these accidents occur at public crossings where active warning devices (i.e. gates, lights, bells, etc.) are in place and functioning properly. This phenomenon speaks directly to the need to re-examine both safety evaluation (i.e. accident prediction) methods and design practices at highway-rail crossings. Of the earlier accident prediction methods, the Peabody Dimmick Formula, the New Hampshire Index and the National Cooperative Highway Research Program (NCHRP) Hazard Index all lack descriptive capability due to their limited number of explanatory variables. Further, each has unique limitations that are detailed in this paper. The US Department of Transportation's (USDOT) Accident Prediction Formula, which is the most widely used, also has limitations related to the complexity of its three-stage formula and the decline in its accuracy over time. This investigation resulted in the development of an alternative highway-rail crossing accident prediction model, based on negative binomial regression, that shows great promise. The benefits to be gained through the application of this alternative model are (1) a greatly simplified, one-step estimation process; (2) comparable supporting data requirements; and (3) interpretation of both the magnitude and direction of the effect of the factors found to significantly influence highway-rail crossing accident frequencies.
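The one-step estimation the authors advocate has the generic log-linear count-model form sketched below. The coefficient values and predictor names are hypothetical placeholders, not the paper's fitted model; the point is the single exp(beta . x) prediction step and the overdispersed negative binomial variance mu + alpha*mu^2 that motivates the choice over Poisson regression.

```python
import math

# Hypothetical coefficients for illustration only -- not the paper's fit.
BETA = {"intercept": -4.2, "log_daily_trains": 0.45,
        "log_daily_traffic": 0.30, "gates_present": -0.9}

def expected_crashes(x, beta=BETA):
    """One-step prediction: mu = exp(beta . x), the mean of the
    negative binomial count model for annual crossing accidents."""
    eta = beta["intercept"]
    eta += beta["log_daily_trains"] * math.log(x["daily_trains"])
    eta += beta["log_daily_traffic"] * math.log(x["daily_traffic"])
    eta += beta["gates_present"] * x["gates_present"]
    return math.exp(eta)

def nb_variance(mu, alpha):
    """Negative binomial variance mu + alpha*mu**2 exceeds the Poisson
    variance mu, capturing the overdispersion typical of accident counts."""
    return mu + alpha * mu * mu
```

A negative coefficient on a warning-device indicator lowers the predicted frequency, and the sign and size of each coefficient are directly interpretable, which is benefit (3) claimed in the abstract.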

  17. Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces

    SciTech Connect

    Lomov, I; Antoun, T; Vorobiev, O

    2009-12-16

Accurate representation of discontinuities such as joints and faults is a key ingredient for high-fidelity modeling of shock propagation in geologic media. The following study was done to improve the treatment of discontinuities (joints) in the Eulerian hydrocode GEODYN (Lomov and Liu 2005). Lagrangian methods with conforming meshes and explicit inclusion of joints in the geologic model are well suited for such an analysis. Unfortunately, current meshing tools are unable to automatically generate adequate hexahedral meshes for large numbers of irregular polyhedra. Another concern is that joint stiffness in such explicit computations requires significantly reduced time steps, with negative implications for both the efficiency and the quality of the numerical solution. An alternative approach is to use non-conforming meshes and embed joint information into regular computational elements. However, once slip displacement on the joints becomes comparable to the zone size, Lagrangian (even non-conforming) meshes can suffer from tangling and time-step reduction. The use of non-conforming meshes in an Eulerian solver may alleviate these difficulties and provide a viable numerical approach for modeling the effects of faults on the dynamic response of geologic materials. We studied shock propagation in jointed/faulted media using a Lagrangian and two Eulerian approaches. To investigate the accuracy of this joint treatment, the GEODYN calculations have been compared with results from the Lagrangian code GEODYN-L, which uses an explicit treatment of joints via common-plane contact. We explore two approaches to joint treatment in the code, one for joints with finite thickness and the other for tight joints. In all cases the sliding interfaces are tracked explicitly, without homogenization or blending of the joint and block response into an average response. In general, rock joints introduce an increase in normal compliance in addition to a reduction in shear strength.

  18. Automated model formulation for time-varying flexible structures

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Hanagud, S.

    1989-01-01

Presented here is an identification technique that uses sensor information to choose a new model from a finite, discrete model space in order to follow observed changes in a given time-varying flexible structure. Boundary-condition sets or other information on model variations are used to organize the set of possible models laterally into a search tree, with levels of abstraction used to order the models vertically within branches. An object-oriented programming approach is used to represent the model set in the search tree. A modified A* best-first search algorithm finds the model whose response best matches the current observations. Several extensions to this methodology are discussed. Methods for integrating rules into the current search algorithm are considered, to give weight to interpreted trends that may be found in a series of observations. This capability might lead, for instance, to identifying a model that incorporates progressive damage rather than one with incorrect parameters such as added mass. Another new direction is the use of noisy time-domain sensor feedback rather than frequency-domain information in the search algorithm, to improve the real-time capability of the developed procedure.
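A minimal sketch of such a best-first search over a model tree is given below. The models-as-strings, the mismatch table, and the tolerance parameter are illustrative stand-ins for the structural models and the sensor-comparison metric described in the abstract.

```python
import heapq

def best_first_model_search(root, children, mismatch, tol=0.0):
    """Best-first (A*-style) search over a tree of candidate models.

    children(m) yields refinements of model m; mismatch(m) scores the
    disagreement between the model's predicted response and the current
    observations (lower is better). Returns the first model whose
    mismatch falls within `tol`, or the best model seen overall.
    """
    counter = 0  # tie-breaker so the heap never compares models directly
    frontier = [(mismatch(root), counter, root)]
    best_cost, best = frontier[0][0], root
    while frontier:
        cost, _, model = heapq.heappop(frontier)
        if cost < best_cost:
            best_cost, best = cost, model
        if cost <= tol:
            return model
        for child in children(model):
            counter += 1
            heapq.heappush(frontier, (mismatch(child), counter, child))
    return best
```

Because the lowest-mismatch node is always expanded first, an acceptable model near the top of the tree is found without exhausting the branches that disagree with the observations.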

  19. Reduced complexity structural modeling for automated airframe synthesis

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1987-01-01

    A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional-cantilever and joined wing configurations.

  20. Driver-centred vehicle automation: using network analysis for agent-based modelling of the driver in highly automated driving systems.

    PubMed

    Banks, Victoria A; Stanton, Neville A

    2016-11-01

To the average driver, the concept of automation in driving implies that they can become completely 'hands and feet free'. This is a common misconception, however, as has been shown through the application of Network Analysis to new Cruise Assist technologies that may feature on our roads by 2020. Through the adoption of a Systems Theoretic approach, this paper introduces the concept of driver-initiated automation, which reflects the role of the driver in highly automated driving systems. Using a combination of traditional task analysis and quantitative network metrics, this agent-based modelling paper shows how the driver remains an integral part of the driving system, implying that designers must ensure drivers are provided with the tools necessary to remain actively in-the-loop despite being given increasing opportunities to delegate control to the automated subsystems. Practitioner Summary: This paper describes and analyses a driver-initiated command-and-control system of automation, using representations afforded by task and social networks to understand how drivers remain actively involved in the task. A network analysis of different driver commands suggests that such a strategy does maintain the driver in the control loop.

  1. Interface controlled plastic flow modelled by strain gradient plasticity theory

    NASA Astrophysics Data System (ADS)

    Pardoen, Thomas; Massart, Thierry J.

    The resistance to plastic flow in metals is often dominated by the presence of interfaces which interfere with dislocation nucleation and motion. Interfaces can be static such as grain and phase boundaries or dynamic such as new boundaries resulting from a phase transformation. The interface can be hard and fully impenetrable to dislocations, or soft and partly or fully transparent. The interactions between dislocations and interfaces constitute the main mechanism controlling the strength and strain hardening capacity of many metallic systems especially in very fine microstructures with a high density of interfaces. A phenomenological strain gradient plasticity theory is used to introduce, within a continuum framework, higher order boundary conditions which empirically represent the effect of interfaces on plastic flow. The strength of the interfaces can evolve during the loading in order to enrich the description of their response. The behaviour of single and dual phase steels, with possible TRIP effect, accounting for the interactions with static and dynamic boundaries, is addressed, with a specific focus on the size dependent strength and ductility balance. The size dependent response of weak precipitate free zones surrounding grain boundaries is treated as an example involving more than one microstructural length scale.

  2. A 2-D Interface Element for Coupled Analysis of Independently Modeled 3-D Finite Element Subdomains

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.

    1998-01-01

Over the past few years, the development of interface technology has provided an analysis framework for embedding detailed finite element models within less refined finite element models. This development has enabled the use of cascading substructure domains without the constraint of coincident nodes along substructure boundaries. The approach used for the interface element is based on an alternate variational principle often used in deriving hybrid finite elements. The resulting system of equations exhibits a high degree of sparsity but gives rise to a non-positive-definite system, which causes difficulties with many of the equation solvers in general-purpose finite element codes. Hence the global system of equations is generally solved using a decomposition procedure with pivoting. The research reported to date for the interface element includes the one-dimensional line interface element and the two-dimensional surface interface element. Several large-scale simulations, including geometrically nonlinear problems, have been reported using the one-dimensional interface element technology; however, only limited applications are available for the surface interface element. In the applications reported to date, the geometries of the interfaced domains exactly match each other even though the spatial discretization within each domain may differ. As such, the spatial modeling of each domain, the interface elements and the assembled system is still laborious. The present research is focused on developing a rapid modeling procedure based on a parametric interface representation of independently defined subdomains which are also independently discretized.

  3. Petri net-based modelling of human-automation conflicts in aviation.

    PubMed

    Pizziol, Sergio; Tessier, Catherine; Dehais, Frédéric

    2014-01-01

Analyses of aviation safety reports reveal that human-machine conflicts induced by poor automation design are remarkable precursors of accidents. A review of different crew-automation conflict scenarios shows that they have a common denominator: the autopilot behaviour interferes with the pilot's goal regarding the flight guidance via 'hidden' mode transitions. Considering both the human operator and the machine (i.e. the autopilot or the decision functions) as agents, we propose a Petri net model of those conflicting interactions, which allows them to be detected as deadlocks in the Petri net. In order to test our Petri net model, we designed an autoflight system that was formally analysed to detect conflicting situations. We identified three conflicting situations that were integrated into an experimental scenario in a flight simulator with 10 general aviation pilots. The results showed that the conflicts we had identified a priori as critical impacted the pilots' performance. Indeed, the first conflict remained unnoticed by eight participants and led to a potential collision with another aircraft. The second conflict was detected by all the participants, but three of them did not manage the situation correctly. The last conflict was also detected by all the participants but provoked a typical automation-surprise situation, as only one declared that he had understood the autopilot behaviour. These behavioural results are discussed in terms of workload and the number of fired 'hidden' transitions. Eventually, this study reveals that formal and experimental approaches are complementary for identifying and assessing the criticality of human-automation conflicts. Practitioner Summary: We propose a Petri net model of human-automation conflicts. An experiment was conducted with general aviation pilots performing a scenario involving three conflicting situations to test the soundness of our formal approach.
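The deadlock-detection idea can be sketched by enumerating the reachability graph of a small Petri net: any reachable marking in which no transition is enabled is a deadlock, which is how conflicts of this kind would surface in a formal analysis. The net below (two transitions competing for a shared token) is an invented toy example, not the autoflight model from the paper.

```python
from collections import deque

def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume input tokens, produce output tokens."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def find_deadlocks(initial, transitions):
    """BFS over the reachability graph; collect markings with no
    enabled transition (the deadlocks)."""
    key = lambda m: tuple(sorted((p, n) for p, n in m.items() if n))
    seen, frontier, deadlocks = {key(initial)}, deque([initial]), []
    while frontier:
        m = frontier.popleft()
        succs = [fire(m, pre, post)
                 for pre, post in transitions if enabled(m, pre)]
        if not succs:
            deadlocks.append(m)
        for s in succs:
            k = key(s)
            if k not in seen:
                seen.add(k)
                frontier.append(s)
    return deadlocks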

  4. Proteomics for Validation of Automated Gene Model Predictions

    SciTech Connect

    Zhou, Kemin; Panisko, Ellen A.; Magnuson, Jon K.; Baker, Scott E.; Grigoriev, Igor V.

    2008-02-14

    High-throughput liquid chromatography mass spectrometry (LC-MS)-based proteomic analysis has emerged as a powerful tool for functional annotation of genome sequences. These analyses complement the bioinformatic and experimental tools used for deriving, verifying, and functionally annotating models of genes and their transcripts. Furthermore, proteomics extends verification and functional annotation to the level of the translation product of the gene model.

  5. AISIM (Automated Interactive Simulation Model) Training Examples Manual.

    DTIC Science & Technology

    1982-02-26

7 Message Traffic Characteristics .................... 21 8 Message Traffic Matrices ........................... 22 9 Example 1 Model Structure ... Figure 7. Message destinations are selected according to the traffic matrices presented in Figure 8. Data messages each have three destinations which ... destination, processed and eliminated. Each of these events can be modeled by procedural operations. ... Many slots circulate on

  6. Web Interface for Modeling Fog Oil Dispersion During Training

    NASA Astrophysics Data System (ADS)

    Lozar, Robert C.

    2002-08-01

Predicting the dispersion of military camouflage training materials - Smokes and Obscurants (SO) - is a rapidly improving science. The Defense Threat Reduction Agency (DTRA) developed the Hazard Prediction and Assessment Capability (HPAC), a software package that allows the modeling of the dispersion of several potentially detrimental materials. ERDC/CERL characterized the most commonly used SO material - fog oil, in HPAC terminology - to predict SO dispersion characteristics in various training scenarios that might affect Threatened and Endangered Species (TES) at DoD installations. To make the configuration more user-friendly, the researchers implemented an initial web-interface version of HPAC with a modifiable fog-oil component that can be applied at any installation in the world. By this method, an installation SO trainer can plan the location and time of fog-oil training activities and predict the degree to which various areas will be affected, which is particularly important in ensuring the appropriate management of TES on a DoD installation.

  7. Automated parametrical antenna modelling for ambient assisted living applications

    NASA Astrophysics Data System (ADS)

    Kazemzadeh, R.; John, W.; Mathis, W.

    2012-09-01

In this paper a parametric modeling technique for the fast polynomial extraction of the physically relevant parameters of inductively coupled RFID/NFC (radio frequency identification/near field communication) antennas is presented. The polynomial model equations are obtained by means of a three-step procedure: first, full Partial Element Equivalent Circuit (PEEC) antenna models are determined by means of a number of parametric simulations within the input parameter range of a certain antenna class. Based on these models, the RLC antenna parameters are extracted in a subsequent model-reduction step. Employing these parameters, polynomial equations describing the antenna parameters over the whole antenna input parameter range are extracted by means of polynomial interpolation and approximation of the variation of the polynomials' coefficients. The described approach is compared to the results of a reference PEEC solver with regard to accuracy and computational effort.
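The final step, turning sampled parameter extractions into polynomial equations valid across the whole input range, can be illustrated with plain polynomial interpolation. The quadratic "simulated" dependence of an antenna parameter on the number of turns used below is a made-up surrogate for the PEEC extraction, not data from the paper.

```python
def lagrange_poly(xs, ys):
    """Return the polynomial p(x) interpolating the sample points (xs, ys)."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Hypothetical surrogate: an extracted inductance-like parameter vs.
# number of turns, standing in for values produced by full PEEC runs.
samples_x = [1.0, 2.0, 3.0, 4.0]
samples_y = [0.8 * n ** 2 + 0.3 * n for n in samples_x]
inductance = lagrange_poly(samples_x, samples_y)
```

Once the polynomial is built, evaluating it is essentially free compared with a full-field simulation, which is the speed-up the parametric modeling approach is after.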

  8. Achieving runtime adaptability through automated model evolution and variant selection

    NASA Astrophysics Data System (ADS)

    Mosincat, Adina; Binder, Walter; Jazayeri, Mehdi

    2014-01-01

    Dynamically adaptive systems propose adaptation by means of variants that are specified in the system model at design time and allow for a fixed set of different runtime configurations. However, in a dynamic environment, unanticipated changes may result in the inability of the system to meet its quality requirements. To allow the system to react to these changes, this article proposes a solution for automatically evolving the system model by integrating new variants and periodically validating the existing ones based on updated quality parameters. To illustrate this approach, the article presents a BPEL-based framework using a service composition model to represent the functional requirements of the system. The framework estimates quality of service (QoS) values based on information provided by a monitoring mechanism, ensuring that changes in QoS are reflected in the system model. The article shows how the evolved model can be used at runtime to increase the system's autonomic capabilities and delivered QoS.

  9. Man power/cost estimation model: Automated planetary projects

    NASA Technical Reports Server (NTRS)

    Kitchen, L. D.

    1975-01-01

    A manpower/cost estimation model is developed which is based on a detailed level of financial analysis of over 30 million raw data points which are then compacted by more than three orders of magnitude to the level at which the model is applicable. The major parameter of expenditure is manpower (specifically direct labor hours) for all spacecraft subsystem and technical support categories. The resultant model is able to provide a mean absolute error of less than fifteen percent for the eight programs comprising the model data base. The model includes cost saving inheritance factors, broken down in four levels, for estimating follow-on type programs where hardware and design inheritance are evident or expected.

  10. Metal oxide-graphene field-effect transistor: interface trap density extraction model

    PubMed Central

    Najam, Faraz; Lau, Kah Cheong; Lim, Cheng Siong; Yu, Yun Seop

    2016-01-01

A simple-to-implement model is presented to extract the interface trap density of graphene field-effect transistors. The presence of interface trap states detrimentally affects the device drain current-gate voltage relationship Ids-Vgs. At the moment, there is no analytical method available to extract the interface trap distribution of metal-oxide-graphene field-effect transistor (MOGFET) devices. The model presented here extracts the interface trap distribution of MOGFET devices making use of available experimental capacitance-gate voltage Ctot-Vgs data and a basic set of equations used to define the device physics of MOGFET devices. The model was used to extract the interface trap distribution of 2 experimental devices. Device parameters calculated using the extracted interface trap distribution from the model, including surface potential, interface trap charge and interface trap capacitance, compared very well with their respective experimental counterparts. The model enables accurate calculation of the surface potential affected by trap charge. Other models ignore the effect of trap charge and only calculate the ideal surface potential. Such an ideal surface potential, when used in a surface-potential-based drain current model, will result in an inaccurate prediction of the drain current. Accurate calculation of the surface potential that can later be used in a drain current model is highlighted as a major advantage of the model. PMID:27826511

  11. Metal oxide-graphene field-effect transistor: interface trap density extraction model.

    PubMed

    Najam, Faraz; Lau, Kah Cheong; Lim, Cheng Siong; Yu, Yun Seop; Tan, Michael Loong Peng

    2016-01-01

A simple-to-implement model is presented to extract the interface trap density of graphene field-effect transistors. The presence of interface trap states detrimentally affects the device drain current-gate voltage relationship Ids-Vgs. At the moment, there is no analytical method available to extract the interface trap distribution of metal-oxide-graphene field-effect transistor (MOGFET) devices. The model presented here extracts the interface trap distribution of MOGFET devices making use of available experimental capacitance-gate voltage Ctot-Vgs data and a basic set of equations used to define the device physics of MOGFET devices. The model was used to extract the interface trap distribution of 2 experimental devices. Device parameters calculated using the extracted interface trap distribution from the model, including surface potential, interface trap charge and interface trap capacitance, compared very well with their respective experimental counterparts. The model enables accurate calculation of the surface potential affected by trap charge. Other models ignore the effect of trap charge and only calculate the ideal surface potential. Such an ideal surface potential, when used in a surface-potential-based drain current model, will result in an inaccurate prediction of the drain current. Accurate calculation of the surface potential that can later be used in a drain current model is highlighted as a major advantage of the model.

  12. Use of noncrystallographic symmetry for automated model building at medium to low resolution.

    PubMed

    Wiegels, Tim; Lamzin, Victor S

    2012-04-01

    A novel method is presented for the automatic detection of noncrystallographic symmetry (NCS) in macromolecular crystal structure determination which does not require the derivation of molecular masks or the segmentation of density. It was found that throughout structure determination the NCS-related parts may be differently pronounced in the electron density. This often results in the modelling of molecular fragments of variable length and accuracy, especially during automated model-building procedures. These fragments were used to identify NCS relations in order to aid automated model building and refinement. In a number of test cases higher completeness and greater accuracy of the obtained structures were achieved, specifically at a crystallographic resolution of 2.3 Å or poorer. In the best case, the method allowed the building of up to 15% more residues automatically and a tripling of the average length of the built fragments.

  13. Automated mask creation from a 3D model using Faethm.

    SciTech Connect

    Schiek, Richard Louis; Schmidt, Rodney Cannon

    2007-11-01

We have developed and implemented a method which, given a three-dimensional object, can infer from its topology the two-dimensional masks needed to produce that object with surface micro-machining. The masks produced by this design tool can be generic, process-independent masks or, if given process constraints, masks specific to a target process. This design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model.

  14. Implementation of the Automated Numerical Model Performance Metrics System

    DTIC Science & Technology

    2011-09-26

    gradient (BLG) are computed forming additional sets of matches, all of which can be statistically analysed. More RP33 parameters are being considered for...terminology for the type of model, the domain over which the model ran, a 10-digit date-time group and, if applicable, the one TAU. The date-time group ...extensions are used in this software package. The root part of the name is a 10-digit date-time group with the same layout as described above

  15. Introducing a new open source GIS user interface for the SWAT model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Soil and Water Assessment Tool (SWAT) model is a robust watershed modelling tool. It typically uses the ArcSWAT interface to create its inputs. ArcSWAT is public domain software which works in the licensed ArcGIS environment. The aim of this paper was to develop an open source user interface ...

  16. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
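The two-step idea (a cheap surrogate model prunes the candidate space, then an expensive nonlinear model ranks only the survivors) can be sketched generically. The part strengths, model forms and target value below are invented for illustration and are not the paper's circuit models:

```python
import itertools

promoters = [1.0, 2.0, 4.0]   # hypothetical part strengths
rbss      = [0.5, 1.0, 3.0]

def coarse_score(p, r):
    """Cheap linear surrogate used for pruning."""
    return p * r

def fine_score(p, r):
    """'Expensive' nonlinear model (Hill-like saturation)."""
    x = p * r
    return x / (1.0 + 0.2 * x)

target = 2.0
candidates = list(itertools.product(promoters, rbss))
# Step 1: prune to the few candidates whose coarse score is nearest the target.
shortlist = sorted(candidates, key=lambda pr: abs(coarse_score(*pr) - target))[:4]
# Step 2: fine-grained search only over the reduced space.
best = min(shortlist, key=lambda pr: abs(fine_score(*pr) - target))
print(best)
```

The speed-up comes from evaluating the nonlinear model on the shortlist only; the surrogate must be cheap and roughly monotone in the fine model for the pruning to be safe.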

  17. A simplified cellular automaton model for city traffic

    SciTech Connect

    Simon, P.M.; Nagel, K.

    1997-12-31

    The authors systematically investigate the effect of blockage sites in a cellular automata model for traffic flow. Different scheduling schemes for the blockage sites are considered. None of them returns a linear relationship between the fraction of green time and the throughput. The authors use this information for a fast implementation of traffic in Dallas.
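A minimal deterministic cellular-automaton sketch in this spirit (not the authors' exact model) places a signalized blockage site on a rule-184-style ring road: cars advance one cell per step if the cell ahead is free, and the blockage cell admits cars only during the green phase of a fixed cycle:

```python
def throughput(length=100, n_cars=30, cycle=10, green=5, steps=2000):
    """Cars passing the blockage site per time step on a ring road."""
    road = [i < n_cars for i in range(length)]   # start with a jam of n_cars
    site = length // 2                           # location of the signalized blockage
    passed = 0
    for t in range(steps):
        green_now = (t % cycle) < green          # fraction of green time = green/cycle
        new = [False] * length
        for i in range(length):
            if not road[i]:
                continue
            j = (i + 1) % length
            blocked = (j == site) and not green_now
            if road[j] or blocked:
                new[i] = True                    # car waits
            else:
                new[j] = True                    # car advances one cell
                if j == site:
                    passed += 1
        road = new
    return passed / steps

print(throughput())
```

Sweeping `green` from 0 to `cycle` reproduces the qualitative finding that throughput is not a linear function of the fraction of green time.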

  18. Automated biowaste sampling system urine subsystem operating model, part 1

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Mangialardi, J. K.; Rosen, F.

    1973-01-01

    The urine subsystem automatically provides for the collection, volume sensing, and sampling of urine from six subjects during space flight. Verification of the subsystem design was a primary objective of the current effort, which was accomplished through the detailed design, fabrication, and verification testing of an operating model of the subsystem.

  19. Automated volumetric breast density derived by shape and appearance modeling

    NASA Astrophysics Data System (ADS)

    Malkov, Serghei; Kerlikowske, Karla; Shepherd, John

    2014-03-01

    The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by a single-energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal Component Analysis (PCA) was applied to extract significant and uncorrelated components of %FGV. To build an appearance model, we transformed the breast images into the mean texture image by piecewise linear image transformation. Using PCA, the image pixel grey-scale values were converted into a reduced set of shape and texture features. Stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV from the shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume, actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient was 0.77, between the first shape PCA component and actual breast volume. The stepwise regression method with ten-fold cross-validation to predict %FGV from the shape and appearance variables and other system outcome parameters generated a model with a correlation of r² = 0.8. In conclusion, a shape and appearance model demonstrated excellent feasibility for extracting variables useful for automatic %FGV estimation. Further exploration and testing of this approach is warranted.
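The scores-from-PCA-plus-regression pipeline can be sketched on synthetic data, with random vectors standing in for mammogram pixels; all dimensions and coefficients below are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 50, 3
basis = rng.normal(size=(k, d))                  # hidden "shape/texture" modes
scores_true = rng.normal(size=(n, k))
images = scores_true @ basis + 0.01 * rng.normal(size=(n, d))
y = scores_true @ np.array([2.0, -1.0, 0.5]) + 5.0   # outcome standing in for %FGV

X = images - images.mean(axis=0)                 # center the pixel vectors
_, _, vt = np.linalg.svd(X, full_matrices=False)
scores = X @ vt[:k].T                            # PCA component scores
A = np.column_stack([scores, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # linear model on the scores
resid = y - A @ coef
r2 = 1 - resid.var() / y.var()
print(round(r2, 3))
```

The real study adds affine alignment and piecewise-linear warping to a mean texture before PCA, and uses stepwise rather than plain least-squares regression; the dimensionality-reduction-then-regression skeleton is the same.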

  20. Drivers' communicative interactions: on-road observations and modelling for integration in future automation systems.

    PubMed

    Portouli, Evangelia; Nathanael, Dimitris; Marmaras, Nicolas

    2014-01-01

    Social interactions with other road users are an essential component of the driving activity and may prove critical in view of future automation systems; still up to now they have received only limited attention in the scientific literature. In this paper, it is argued that drivers base their anticipations about the traffic scene to a large extent on observations of social behaviour of other 'animate human-vehicles'. It is further argued that in cases of uncertainty, drivers seek to establish a mutual situational awareness through deliberate communicative interactions. A linguistic model is proposed for modelling these communicative interactions. Empirical evidence from on-road observations and analysis of concurrent running commentary by 25 experienced drivers support the proposed model. It is suggested that the integration of a social interactions layer based on illocutionary acts in future driving support and automation systems will improve their performance towards matching human driver's expectations. Practitioner Summary: Interactions between drivers on the road may play a significant role in traffic coordination. On-road observations and running commentaries are presented as empirical evidence to support a model of such interactions; incorporation of drivers' interactions in future driving support and automation systems may improve their performance towards matching driver's expectations.

  1. An automated in vitro model for the evaluation of ultrasound modalities measuring myocardial deformation

    PubMed Central

    2010-01-01

    Background Echocardiography is the method of choice when one wishes to examine myocardial function. Qualitative assessment of the 2D grey-scale images obtained is subjective, and objective methods are required. Speckle Tracking Ultrasound is an emerging technology offering an objective means of quantifying left ventricular wall motion. However, before a new ultrasound technology can be adopted in the clinic, its accuracy and reproducibility need to be investigated. Aim It was hypothesized that the collection of ultrasound sample data from an in vitro model could be automated. The aim was to optimize an in vitro model to allow for efficient collection of sample data. Material & Methods A tissue-mimicking phantom was made from water, gelatin powder, psyllium fibers and a preservative. Sonomicrometry crystals were molded into the phantom. The solid phantom was mounted in a stable stand and cyclically compressed. Peak strain was then measured by Speckle Tracking Ultrasound and sonomicrometry. Results We succeeded in automating the acquisition and analysis of sample data. Sample data were collected at a rate of 200 measurement pairs in 30 minutes. We found good agreement between Speckle Tracking Ultrasound and sonomicrometry in the in vitro model. The best agreement was 0.83 ± 0.70%; the worst was -1.13 ± 6.46%. Conclusions It has been shown possible to automate a model that can be used for evaluating the in vitro accuracy and precision of ultrasound modalities measuring deformation. Sonomicrometry and Speckle Tracking Ultrasound had acceptable agreement. PMID:20822532

  2. Singularity-free finite element model of bone through automated voxel-based reconstruction.

    PubMed

    Esposito, L; Bifulco, P; Gargiulo, P; Fraldi, M

    2016-02-01

    Computed tomography (CT) provides both anatomical and density information about tissues. Bone is segmented from the raw images, and a Finite Element Method (FEM) voxel-based mesh is generated by matching each CT voxel to a single finite element (FE). As a consequence of the automated model reconstruction, unstable elements - i.e. elements insufficiently anchored to the whole model and thus potentially involved in partial rigid body motion - can be generated, a crucial problem in obtaining consistent FE models, hindering mechanical analyses. Through the classification of instabilities in terms of topological connections between elements, a numerical procedure is proposed to avoid unconstrained models.

  3. An automated tool for face recognition using visual attention and active shape models analysis.

    PubMed

    Faro, A; Giordano, D; Spampinato, C

    2006-01-01

    An entirely automated approach for recognizing a person's face from his/her images is presented. The approach uses a computational attention module to automatically find the most relevant facial features using the Focus Of Attention (FOA). These features are used to build the model of a face during the learning phase and for recognition during the testing phase. The landmarking of the features is performed by applying the active contour model (ACM) technique, whereas the active shape model (ASM) is adopted for constructing a flexible model of the selected facial features. The advantages of this approach and opportunities for further improvements are discussed.

  4. An Improvement in Thermal Modelling of Automated Tape Placement Process

    NASA Astrophysics Data System (ADS)

    Barasinski, Anaïs; Leygue, Adrien; Soccard, Eric; Poitou, Arnaud

    2011-01-01

    The thermoplastic tape placement process offers the possibility of manufacturing large laminated composite parts with all kinds of geometries (e.g. doubly curved). The process is based on the fusion bonding of a thermoplastic tape onto a substrate. It has received growing interest in recent years because of its out-of-autoclave capability. In order to control and optimize the quality of the manufactured part, the temperature field must be predicted throughout the processing of the laminate. In this work, we focus on a thermal model of the process which takes into account the imperfect bonding between the different layers of the substrate by introducing thermal contact resistances into the model. The study draws on experimental results showing that the value of the thermal resistance evolves with the temperature and pressure applied to the material.

  5. An Improvement in Thermal Modelling of Automated Tape Placement Process

    SciTech Connect

    Barasinski, Anaies; Leygue, Adrien; Poitou, Arnaud; Soccard, Eric

    2011-01-17

    The thermoplastic tape placement process offers the possibility of manufacturing large laminated composite parts with all kinds of geometries (e.g. doubly curved). The process is based on the fusion bonding of a thermoplastic tape onto a substrate. It has received growing interest in recent years because of its out-of-autoclave capability. In order to control and optimize the quality of the manufactured part, the temperature field must be predicted throughout the processing of the laminate. In this work, we focus on a thermal model of the process which takes into account the imperfect bonding between the different layers of the substrate by introducing thermal contact resistances into the model. The study draws on experimental results showing that the value of the thermal resistance evolves with the temperature and pressure applied to the material.

  6. User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework

    USGS Publications Warehouse

    Markstrom, Steven L.; Koczot, Kathryn M.

    2008-01-01

    The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.

  7. Interface-tracking electro-hydrodynamic model for droplet coalescence

    NASA Astrophysics Data System (ADS)

    Crowl Erickson, Lindsay; Noble, David

    2012-11-01

    Many fluid-based technologies rely on electrical fields to control the motion of droplets, e.g. micro-fluidic devices for high-speed droplet sorting, solution separation for chemical detectors, and purification of biodiesel fuel. Precise control over droplets is crucial to these applications. However, electric fields can induce complex and unpredictable fluid dynamics. Recent experiments (Ristenpart et al. 2009) have demonstrated that oppositely charged droplets bounce rather than coalesce in the presence of strong electric fields. Analytic hydrodynamic approximations for interfaces become invalid near coalescence, and therefore detailed numerical simulations are necessary. We present a conformal decomposition finite element (CDFEM) interface-tracking method for two-phase flow to demonstrate electro-coalescence. CDFEM is a sharp interface method that decomposes elements along fluid-fluid boundaries and uses a level set function to represent the interface. The electro-hydrodynamic equations solved allow for convection of charge and charge accumulation at the interface, both of which may be important factors for the pinch-off dynamics in this parameter regime.

  8. Finite Element Modeling of Viscoelastic Behavior and Interface Damage in Adhesively Bonded Joints

    DTIC Science & Technology

    2012-01-01

    Interface Damage in Adhesively Bonded Joints Feifei Cheng, Ö. Özgü Özsoy and J.N. Reddy* Advanced Computational Mechanics Laboratory, Department of...the adhesive and damage analysis of adhesive-adherend interfaces in adhesively bonded joints. First, viscoelastic finite element analysis of a model...nominal stress criterion and mixed-mode energy criterion are used to determine the damage initiation and evolution at the interface, respectively

  9. Automated nonlinear system modeling with multiple fuzzy neural networks and kernel smoothing.

    PubMed

    Yu, Wen; Li, Xiaoou

    2010-10-01

    This paper presents a novel identification approach using fuzzy neural networks. It focuses on structure and parameter uncertainties, which have been widely explored in the literature. The main contribution of this paper is an integrated analytic framework for automated structure selection and parameter identification. A kernel smoothing technique is used to generate a model structure automatically in a fixed time interval. To cope with structural change, a hysteresis strategy is proposed to guarantee a finite number of switches and the desired performance.
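One plausible form of the kernel smoothing used for such automatic structure generation is the Nadaraya-Watson smoother; the Gaussian kernel and the data below are assumptions for illustration, not taken from the paper:

```python
import math

def kernel_smooth(xs, ys, x, h):
    """Gaussian-kernel (Nadaraya-Watson) estimate of y at x, bandwidth h."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

xs = [i / 10 for i in range(21)]          # samples of a noiseless line y = 2x
ys = [2 * xi for xi in xs]
print(round(kernel_smooth(xs, ys, 1.0, 0.2), 2))
```

The bandwidth h plays the role of a structural resolution knob: small h tracks local detail, large h yields a coarser model structure.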

  10. Knowledge Based Simulation: An Artificial Intelligence Approach to System Modeling and Automating the Simulation Life Cycle.

    DTIC Science & Technology

    1988-04-13

    Simulation: An Artificial Intelligence Approach to System Modeling and Automating the Simulation Life Cycle Mark S. Fox, Nizwer Husain, Malcolm...years of research in the application of Artificial Intelligence to Simulation. Our focus has been in two areas: the use of AI knowledge representation...this problem by using Artificial Intelligence (AI) knowledge representation techniques, such as frames, to represent the objects and their

  11. A COMSOL-GEMS interface for modeling coupled reactive-transport geochemical processes

    NASA Astrophysics Data System (ADS)

    Azad, Vahid Jafari; Li, Chang; Verba, Circe; Ideker, Jason H.; Isgor, O. Burkan

    2016-07-01

    An interface was developed between the COMSOL Multiphysics™ finite element analysis software and the (geo)chemical modeling platform GEMS for the reactive-transport modeling of (geo)chemical processes in variably saturated porous media. The two standalone software packages are managed from the interface, which uses a non-iterative operator-splitting technique to couple the transport (COMSOL) and reaction (GEMS) processes. The interface allows modeling media with complex chemistry (e.g. cement) using GEMS thermodynamic database formats. Benchmark comparisons show that the developed interface can predict a variety of reactive-transport processes accurately. The full functionality of the interface was demonstrated by modeling transport processes, governed by the extended Nernst-Planck equation, in Class H Portland cement samples in high-pressure, high-temperature autoclaves simulating systems used to store captured carbon dioxide (CO2) in geological reservoirs.

  12. Modeling the flow in diffuse interface methods of solidification

    NASA Astrophysics Data System (ADS)

    Subhedar, A.; Steinbach, I.; Varnik, F.

    2015-08-01

    Fluid dynamical equations in the presence of a diffuse solid-liquid interface are investigated via a volume averaging approach. The resulting equations exhibit the same structure as the standard Navier-Stokes equation for a Newtonian fluid with a constant viscosity, the effect of the solid phase fraction appearing in the drag force only. This considerably simplifies the use of the lattice Boltzmann method as a fluid dynamics solver in solidification simulations. Galilean invariance is also satisfied within this approach. Further, we investigate deviations between the diffuse and sharp interface flow profiles via both quasiexact numerical integration and lattice Boltzmann simulations. It emerges from these studies that the freedom in choosing the solid-liquid coupling parameter h provides a flexible way of optimizing the diffuse interface-flow simulations. Once h is adapted for a given spatial resolution, the simulated flow profiles reach an accuracy comparable to quasiexact numerical simulations.

  13. Modeling of the water network at protein-RNA interfaces.

    PubMed

    Li, Yiyu; Sutch, Brian T; Bui, Huynh-Hoa; Gallaher, Timothy K; Haworth, Ian S

    2011-06-27

    Water plays an important role in the mediation of biomolecular interactions. Thus, accurate prediction and evaluation of water-mediated interactions is an important element in the computational design of interfaces involving proteins, RNA, and DNA. Here, we use an algorithm (WATGEN) to predict the locations of interfacial water molecules for a data set of 224 protein-RNA interfaces. The accuracy of the prediction is validated against water molecules present in the X-ray structures of 105 of these complexes. The complexity of the water networks is deconvoluted through definition of the characteristics of each water molecule based on its bridging properties between the protein and RNA and on its depth in the interface with respect to the bulk solvent. This approach has the potential for scoring the water network for incorporation into the computational design of protein-RNA complexes.

  14. Models for identification of erroneous atom-to-atom mapping of reactions performed by automated algorithms.

    PubMed

    Muller, Christophe; Marcou, Gilles; Horvath, Dragos; Aires-de-Sousa, João; Varnek, Alexandre

    2012-12-21

    Machine learning (SVM and JRip rule learner) methods have been used in conjunction with the Condensed Graph of Reaction (CGR) approach to identify errors in the atom-to-atom mapping of chemical reactions produced by an automated mapping tool by ChemAxon. The modeling has been performed on the three first enzymatic classes of metabolic reactions from the KEGG database. Each reaction has been converted into a CGR representing a pseudomolecule with conventional (single, double, aromatic, etc.) bonds and dynamic bonds characterizing chemical transformations. The ChemAxon tool was used to automatically detect the matching atom pairs in reagents and products. These automated mappings were analyzed by the human expert and classified as "correct" or "wrong". ISIDA fragment descriptors generated for CGRs for both correct and wrong mappings were used as attributes in machine learning. The learned models have been validated in n-fold cross-validation on the training set followed by a challenge to detect correct and wrong mappings within an external test set of reactions, never used for learning. Results show that both SVM and JRip models detect most of the wrongly mapped reactions. We believe that this approach could be used to identify erroneous atom-to-atom mapping performed by any automated algorithm.
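The descriptor-plus-classifier idea can be caricatured in a few lines: counts of dynamic-bond fragments stand in for ISIDA descriptors of a Condensed Graph of Reaction, and a nearest-centroid rule stands in for the SVM/JRip models. All fragment names and training examples below are invented:

```python
from collections import Counter

def descriptor(fragments):
    """Fragment-count vector (a stand-in for ISIDA descriptors of a CGR)."""
    return Counter(fragments)

def centroid(descs):
    keys = set().union(*descs)
    return {k: sum(d[k] for d in descs) / len(descs) for k in keys}

def dist(d, c):
    """Squared Euclidean distance between sparse count vectors."""
    keys = set(d) | set(c)
    return sum((d.get(k, 0) - c.get(k, 0)) ** 2 for k in keys)

correct = [descriptor(["C-C formed", "C=O broken"]),
           descriptor(["C-C formed", "O-H formed"])]
wrong   = [descriptor(["C-C formed", "C-C formed", "N-H broken", "N-H formed"]),
           descriptor(["C=O broken", "C=O formed", "N-H broken", "N-H formed"])]
cc, cw = centroid(correct), centroid(wrong)

query = descriptor(["C-C formed", "C=O broken", "O-H formed"])
label = "correct" if dist(query, cc) < dist(query, cw) else "wrong"
print(label)
```

In the paper the labels come from expert review of ChemAxon's automated mappings, and the learned models then flag suspect mappings in unseen reactions.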

  15. Piloted Simulation of a Model-Predictive Automated Recovery System

    NASA Technical Reports Server (NTRS)

    Liu, James (Yuan); Litt, Jonathan; Sowers, T. Shane; Owens, A. Karl; Guo, Ten-Huei

    2014-01-01

    This presentation describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.

  16. Modelling of series of types of automated trenchless works tunneling

    NASA Astrophysics Data System (ADS)

    Gendarz, P.; Rzasinski, R.

    2016-08-01

    Microtunneling is the newest method for constructing underground installations. It is the result of experience with, and methods applied in, earlier trenchless underground-works methods. To develop this earthworks method further, it is considered reasonable to elaborate a series of types of tunneling machine constructions. Many machine design solutions exist, but the current goal is to develop a robotized trenchless machine. Erosion machines for tunnels with main dimensions of 1600, 2000, 2500 and 3150 are designed using computer-aided methods. The creation of the series of types of tunneling machine constructions was preceded by an analysis of the current state of the art. The verification of the practical methodology for creating the systematic part series was based on the designed series of types of erosion machines. The following were developed: a method of construction similarity for the erosion machines, algorithmic methods for variant analyses of quantitative construction attributes in the I-DEAS advanced graphical program, and relational and program parameterization. The manufacturing process for the parts will then be created, which allows the technological process to be verified on CNC machines. The models of the designed machines will be modified, and the construction will be consulted with erosion machine users and manufacturers such as Tauber Rohrbau GmbH & Co. KG of Münster and OHL ZS a.s. of Brno. The companies' acceptance will result in practical verification by the JUMARPOL company.

  17. A semi-automated vascular access system for preclinical models

    NASA Astrophysics Data System (ADS)

    Berry-Pusey, B. N.; Chang, Y. C.; Prince, S. W.; Chu, K.; David, J.; Taschereau, R.; Silverman, R. W.; Williams, D.; Ladno, W.; Stout, D.; Tsao, T. C.; Chatziioannou, A.

    2013-08-01

    Murine models are used extensively in biological and translational research. For many of these studies it is necessary to access the vasculature for the injection of biologically active agents. Among the possible methods for accessing the mouse vasculature, tail vein injections are a routine but critical step for many experimental protocols. To perform successful tail vein injections, a high skill set and experience is required, leaving most scientists ill-suited to perform this task. This can lead to a high variability between injections, which can impact experimental results. To allow more scientists to perform tail vein injections and to decrease the variability between injections, a vascular access system (VAS) that semi-automatically inserts a needle into the tail vein of a mouse was developed. The VAS uses near infrared light, image processing techniques, computer controlled motors, and a pressure feedback system to insert the needle and to validate its proper placement within the vein. The VAS was tested by injecting a commonly used radiolabeled probe (FDG) into the tail veins of five mice. These mice were then imaged using micro-positron emission tomography to measure the percentage of the injected probe remaining in the tail. These studies showed that, on average, the VAS leaves 3.4% of the injected probe in the tail. With these preliminary results, the VAS system demonstrates the potential for improving the accuracy of tail vein injections in mice.

  18. Swimming of a model ciliate near an air-liquid interface

    NASA Astrophysics Data System (ADS)

    Wang, S.; Ardekani, A. M.

    2013-06-01

    In this work, the role of the hydrodynamic forces on a swimming microorganism near an air-liquid interface is studied. The lubrication theory is utilized to analyze hydrodynamic effects within the narrow gap between a flat interface and a small swimmer. By using an archetypal low-Reynolds-number swimming model called “squirmer,” we find that the magnitude of the vertical swimming velocity is on the order of O(ε ln ε), where ε is the ratio of the gap width to the swimmer's body size. The reduced swimming velocity near an interface can explain experimental observations of the aggregation of microorganisms near a liquid interface.

  19. Swimming of a model ciliate near an air-liquid interface.

    PubMed

    Wang, S; Ardekani, A M

    2013-06-01

    In this work, the role of the hydrodynamic forces on a swimming microorganism near an air-liquid interface is studied. The lubrication theory is utilized to analyze hydrodynamic effects within the narrow gap between a flat interface and a small swimmer. By using an archetypal low-Reynolds-number swimming model called "squirmer," we find that the magnitude of the vertical swimming velocity is on the order of O(εlnε), where ε is the ratio of the gap width to the swimmer's body size. The reduced swimming velocity near an interface can explain experimental observations of the aggregation of microorganisms near a liquid interface.

  20. A Contextual Model for Identity Management (IdM) Interfaces

    ERIC Educational Resources Information Center

    Fuller, Nathaniel J.

    2014-01-01

    The usability of Identity Management (IdM) systems is highly dependent upon design that simplifies the processes of identification, authentication, and authorization. Recent findings reveal two critical problems that degrade IdM usability: (1) unfeasible techniques for managing various digital identifiers, and (2) ambiguous security interfaces.…

  1. Modeling Interfacial Thermal Boundary Conductance of Engineered Interfaces

    DTIC Science & Technology

    2014-08-31

    Interfaces Award Number: FA9550-09-1-0245 Program Manager: Dr. Jason Marshall / Dr. John Luginsland RTB-5, Plasma and Electroenergetic Physics Air Force...thermal boundary resistance at GaN/substrate interface”, Electronics Letters 40, 81–83 (2004). 22A. Sarua, H. Ji, K. P. Hilton, D. J. Wallis , M. J

  2. Automated calibration of a stream solute transport model: Implications for interpretation of biogeochemical parameters

    USGS Publications Warehouse

    Scott, D.T.; Gooseff, M.N.; Bencala, K.E.; Runkel, R.L.

    2003-01-01

    The hydrologic processes of advection, dispersion, and transient storage are the primary physical mechanisms affecting solute transport in streams. The estimation of parameters for a conservative solute transport model is an essential step to characterize transient storage and other physical features that cannot be directly measured, and often is a preliminary step in the study of reactive solutes. Our study used inverse modeling to estimate parameters of the transient storage model OTIS (One-dimensional Transport with Inflow and Storage). Observations from a tracer injection experiment performed on Uvas Creek, California, USA, are used to illustrate the application of automated solute transport model calibration to conservative and nonconservative stream solute transport. A computer code for universal inverse modeling (UCODE) is used for the calibrations. Results of this procedure are compared with a previous study that used a trial-and-error parameter estimation approach. The results demonstrated: 1) the importance of proper estimation of discharge and lateral inflow within the stream system; 2) that although the fit of the observations is not much better when transient storage is invoked, a more randomly distributed set of residuals resulted (suggesting non-systematic error), indicating that transient storage is occurring; 3) that inclusion of transient storage for a reactive solute (Sr2+) provided a better fit to the observations, highlighting the importance of robust model parameterization; and 4) that applying an automated inverse-modeling calibration approach resulted in a comprehensive understanding of the model results and the limitations of the input data.
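The calibration loop itself (simulate, measure misfit, adjust parameters) can be sketched with a toy forward model. The exponential decay below is only a stand-in for the OTIS transport equations, and the brute-force grid search a stand-in for UCODE's regression-based estimation; the "observations" are synthetic:

```python
import math

def simulate(k, times):
    """Toy forward model: first-order decay of tracer concentration."""
    return [math.exp(-k * t) for t in times]

times = [0.0, 0.5, 1.0, 2.0, 4.0]
observed = simulate(0.7, times)          # synthetic "tracer" data, true k = 0.7

def misfit(k):
    """Sum-of-squares misfit between simulated and observed concentrations."""
    sim = simulate(k, times)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

# Grid search over candidate parameter values, keeping the best fit.
best_k = min((round(i * 0.01, 2) for i in range(1, 201)), key=misfit)
print(best_k)
```

Inspecting the residuals of the best fit, as the study does, is what distinguishes a merely good fit from a physically meaningful parameterization.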

  3. Modeling Auditory-Haptic Interface Cues from an Analog Multi-line Telephone

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Anderson, Mark R.; Bittner, Rachael M.

    2012-01-01

    The Western Electric Company produced a multi-line telephone during the 1940s-1970s using a six-button interface design that provided robust tactile, haptic and auditory cues regarding the "state" of the communication system. This multi-line telephone was used as a model for a trade-study comparison of two interfaces: a touchscreen interface (iPad) versus a pressure-sensitive strain-gauge button interface (Phidget USB interface controllers). The experiment and its results are detailed in the authors' AES 133rd convention paper "Multimodal Information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays". This Engineering Brief describes how the interface logic, visual indications, and auditory cues of the original telephone were synthesized using MAX/MSP, including the logic for line selection, line hold, and priority line activation.

  4. A comparison of automated anatomical–behavioural mapping methods in a rodent model of stroke☆

    PubMed Central

    Crum, William R.; Giampietro, Vincent P.; Smith, Edward J.; Gorenkova, Natalia; Stroemer, R. Paul; Modo, Michel

    2013-01-01

    Neurological damage, due to conditions such as stroke, results in a complex pattern of structural changes and significant behavioural dysfunctions; the automated analysis of magnetic resonance imaging (MRI) and discovery of structural–behavioural correlates associated with these disorders remains challenging. Voxel lesion symptom mapping (VLSM) has been used to associate behaviour with lesion location in MRI, but this analysis requires the definition of lesion masks on each subject and does not exploit the rich structural information in the images. Tensor-based morphometry (TBM) has been used to perform voxel-wise structural analyses over the entire brain; however, a combination of lesion hyper-intensities and subtle structural remodelling away from the lesion might confound the interpretation of TBM. In this study, we compared and contrasted these techniques in a rodent model of stroke (n = 58) to assess the efficacy of these techniques in a challenging pre-clinical application. The results from the automated techniques were compared using manually derived region-of-interest measures of the lesion, cortex, striatum, ventricle and hippocampus, and considered against model power calculations. The automated TBM techniques successfully detect both lesion and non-lesion effects, consistent with manual measurements. These techniques do not require manual segmentation to the same extent as VLSM and should be considered part of the toolkit for the unbiased analysis of pre-clinical imaging-based studies. PMID:23727124

  5. IDEF3 and IDEF4 automation system requirements document and system environment models

    NASA Technical Reports Server (NTRS)

    Blinn, Thomas M.

    1989-01-01

    The requirements specification is provided for the IDEF3 and IDEF4 tools that provide automated support for IDEF3 and IDEF4 modeling. The IDEF3 method is a scenario-driven process flow description capture method intended to be used by domain experts to represent knowledge about how a particular system or process works. The IDEF3 method provides modes to represent both (1) Process Flow Descriptions, to capture the relationships between actions within the context of a specific scenario, and (2) Object State Transitions, to capture the allowable transitions of an object in the domain. The IDEF4 method provides a means for capturing (1) the Class Submodel, or object hierarchy, (2) the Method Submodel, or the procedures associated with each class of objects, and (3) the Dispatch Mapping, or the relationships between the objects and methods in the object-oriented design. The requirements specified describe the capabilities that a fully functional IDEF3 or IDEF4 automated tool should support.

  6. NeuroGPS: automated localization of neurons for brain circuits using L1 minimization model

    NASA Astrophysics Data System (ADS)

    Quan, Tingwei; Zheng, Ting; Yang, Zhongqing; Ding, Wenxiang; Li, Shiwei; Li, Jing; Zhou, Hang; Luo, Qingming; Gong, Hui; Zeng, Shaoqun

    2013-04-01

    Drawing the map of neuronal circuits at microscopic resolution is important for explaining how the brain works. Recent progress in fluorescence labeling and imaging techniques has enabled imaging a whole rodent brain, such as that of a mouse, at submicron resolution. Given the huge volume of such datasets, automatically tracing and reconstructing the neuronal connections from the image stacks is essential for assembling large-scale circuits. However, the first step, automated localization of somata across different brain areas, remains a challenge. Here, we addressed this problem by introducing an L1 minimization model. We developed a fully automated system, NeuronGlobalPositionSystem (NeuroGPS), that is robust to the broad diversity of shape, size and density of the neurons in a mouse brain. This method allows locating neurons across different brain areas without human intervention. We believe this method will facilitate the analysis of neuronal circuits for brain function and disease studies.
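
    The L1 minimization idea can be sketched with a generic sparse-recovery example: iterative soft thresholding (ISTA) selects a few active positions out of many candidates, which is the flavor of model the abstract describes. The problem sizes and data below are invented and are not the NeuroGPS formulation.

```python
import numpy as np

# Minimal ISTA sketch for the L1-regularised problem
#   min_x  0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.5, -2.0, 1.0]   # three "soma" positions
b = A @ x_true

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz constant of the gradient

x = np.zeros(100)
for _ in range(500):
    g = A.T @ (A @ x - b)                 # gradient of the smooth term
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

support = np.flatnonzero(np.abs(x) > 0.1)
print(support)  # should recover indices near [5, 42, 77]
```

    The soft-thresholding step is what enforces sparsity: most candidate positions are driven exactly to zero, leaving only the few active ones.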

  7. Hydro-mechanical regimes of deforming subduction interface: modeling versus observations

    NASA Astrophysics Data System (ADS)

    Zheng, L.; Gerya, T.; May, D.

    2015-12-01

    Multiple lines of evidence, including seismic observations, magnetotelluric imaging, and heat flow modeling, indicate that fluid flow occurs within the subduction interface. Fluid percolation should strongly modify rock deformation through fluid-induced weakening within the subduction interface. Hence, we study fluid-rock interaction along the subduction interface using a visco-plastic hydro-mechanical model in which rock deformation and fluid percolation are self-consistently coupled. Based on a series of 2D numerical experiments, we found two typical hydro-mechanical regimes of the deforming subduction interface: (1) coupled and (2) decoupled. In the coupled regime, the tectonic movement of the subduction interface is divided into blocks; newly generated faults are distributed uniformly in a fault band, and fluid activity concentrates inside the faults. In the decoupled regime, the upper layer of the subduction interface stops moving while the lower layer continues moving along with the subducting slab; a primary fault is generated at the centre of the subduction interface, forming a decoupled interface. Available observations suggest that both coupled and decoupled regimes can be observed in nature at different scales. A systematic parameter study suggests that the transition between the coupled and decoupled subduction interface regimes is controlled mainly by the magnitude of the yield strength of the subducted rocks, which depends on their cohesion and friction coefficient.

  8. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration

    PubMed Central

    Forment, Javier; Gilabert, Francisco; Robles, Antonio; Conejero, Vicente; Nuez, Fernando; Blanca, Jose M

    2008-01-01

    Background Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at . This site also provides detailed instructions for

  9. A New Tool for Inundation Modeling: Community Modeling Interface for Tsunamis (ComMIT)

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Moore, C. W.; Greenslade, D. J. M.; Pattiaratchi, C.; Badal, R.; Synolakis, C. E.; Kânoğlu, U.

    2011-11-01

    Almost 5 years after the 26 December 2004 Indian Ocean tragedy, the 10 August 2009 Andaman tsunami demonstrated that accurate forecasting is possible using the tsunami community modeling tool Community Model Interface for Tsunamis (ComMIT). ComMIT is designed for ease of use, and allows dissemination of results to the community while addressing concerns associated with proprietary issues of bathymetry and topography. It uses initial conditions from a precomputed propagation database, has an easy-to-interpret graphical interface, and requires only portable hardware. ComMIT was initially developed for Indian Ocean countries with support from the United Nations Educational, Scientific, and Cultural Organization (UNESCO), the United States Agency for International Development (USAID), and the National Oceanic and Atmospheric Administration (NOAA). To date, more than 60 scientists from 17 countries in the Indian Ocean have been trained and are using it in operational inundation mapping.

  10. Automated classification of atherosclerotic plaque from magnetic resonance images using predictive models.

    PubMed

    Anderson, Russell W; Stomberg, Christopher; Hahm, Charles W; Mani, Venkatesh; Samber, Daniel D; Itskovich, Vitalii V; Valera-Guallar, Laura; Fallon, John T; Nedanov, Pavel B; Huizenga, Joel; Fayad, Zahi A

    2007-01-01

    The information contained within multicontrast magnetic resonance images (MRI) promises to improve tissue classification accuracy, once appropriately analyzed. Predictive models capture relationships empirically from known outcomes, thereby combining pattern classification with experience. In this study, we examine the applicability of predictive modeling for atherosclerotic plaque component classification of multicontrast ex vivo MR images using stained, histopathological sections as ground truth. Ten multicontrast images from seven human coronary artery specimens were obtained on a 9.4 T imaging system using multicontrast-weighted fast spin-echo (T1-, proton density-, and T2-weighted) imaging with 39-μm isotropic voxel size. Following initial data transformations, predictive modeling focused on automating the identification of the specimen's plaque, lipid, and media. The outputs of these three models were used to calculate statistics such as total plaque burden and the ratio of hard plaque (fibrous tissue) to lipid. Both logistic regression and an artificial neural network model (Relevant Input Processor Network-RIPNet) were used for predictive modeling. When compared against segmentation resulting from cluster analysis, the RIPNet models performed between 25 and 30% better in absolute terms. This translates to a 50% higher true positive rate over given levels of false positives. This work indicates that it is feasible to build an automated system of plaque detection using MRI and data mining.
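
    A minimal logistic-regression sketch in the spirit of the predictive modelling described above (not the RIPNet model or the paper's data), trained by batch gradient descent on synthetic two-feature "tissue" samples; the feature distributions are invented for illustration.

```python
import math
import random

# Synthetic binary classification: "lipid" vs "fibrous" tissue from two
# image-intensity features (T1-like, T2-like). Entirely invented data.
random.seed(1)

def make_sample():
    lipid = random.random() < 0.5
    t1 = random.gauss(2.0 if lipid else 1.0, 0.3)
    t2 = random.gauss(1.0 if lipid else 2.0, 0.3)
    return (t1, t2), 1.0 if lipid else 0.0

data = [make_sample() for _ in range(200)]

w, bias, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(300):                       # batch gradient descent
    gw, gb = [0.0, 0.0], 0.0
    for (t1, t2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * t1 + w[1] * t2 + bias)))
        err = p - y                        # gradient of cross-entropy loss
        gw[0] += err * t1
        gw[1] += err * t2
        gb += err
    n = len(data)
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    bias -= lr * gb / n

correct = sum(
    (1.0 / (1.0 + math.exp(-(w[0] * a + w[1] * b + bias))) > 0.5) == (y == 1.0)
    for (a, b), y in data
)
accuracy = correct / len(data)             # high, since the classes separate well
```

    The learned weights take the expected signs (positive for the T1-like feature, negative for the T2-like feature), mirroring how a predictive model encodes empirical class differences.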

  11. Automation Reliability in Unmanned Aerial Vehicle Control: A Reliance-Compliance Model of Automation Dependence in High Workload

    DTIC Science & Technology

    2006-01-01

    2001; St. John & Manes, 2002; Yeh, Merlo, Wickens, & Brandenburg, 2003), particularly in circumstances when human resources to the unaided task are...evasive actions, and, in the worst-case scenario, to lead to sufficient distrust of the automated system that true alarms are ignored – the "cry wolf"...performance in detecting TOOs, just as such benefits have been observed in other studies (e.g., Maltz & Shinar, 2003; St. John & Manes, 2002; Yaacov et al

  12. An automation of design and modelling tasks in NX Siemens environment with original software - generator module

    NASA Astrophysics Data System (ADS)

    Zbiciak, M.; Grabowik, C.; Janik, W.

    2015-11-01

    Nowadays the constructional design process is almost exclusively aided with CAD/CAE/CAM systems. It is estimated that nearly 80% of design activities have a routine nature. These routine design tasks are highly susceptible to automation. Design automation is usually realized with API tools which allow building original software responsible for aiding different engineering activities. In this paper, original software developed to automate engineering tasks at the stage of product geometrical shape design is presented. The software works exclusively in the NX Siemens CAD/CAM/CAE environment and was prepared in Microsoft Visual Studio with application of the .NET technology and the NX SNAP library. The software functionality allows designing and modelling of spur and helicoidal involute gears. Moreover, it is possible to estimate relative manufacturing costs. With the Generator module it is possible to design and model both standard and non-standard gear wheels. The main advantage of a model generated in this way is its better representation of the involute curve in comparison to those drawn with the specialized standard tools of CAD systems. This comes from the fact that usually in CAD systems an involute curve is drawn through 3 points corresponding to points located on the addendum circle, the reference diameter of the gear and the base circle, respectively. In the Generator module the involute curve is drawn through 11 involute points located on and above the base and addendum circles, so the 3D gear wheel models are highly accurate. Application of the Generator module makes the modelling process very rapid, reducing the gear wheel modelling time to several seconds. During the research, an analysis of the differences between standard 3-point and 11-point involutes was made. The results and conclusions drawn from this analysis are shown in detail.
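
    Sampling an involute of a base circle at n points is easy to sketch directly; the point count of 11 matches the abstract, while the base radius and parameter range below are arbitrary illustrative choices, not NX values.

```python
import math

# Sample n points on the involute of a base circle of radius r_base:
#   x(t) = r (cos t + t sin t),  y(t) = r (sin t - t cos t)
def involute_points(r_base, t_max, n=11):
    pts = []
    for i in range(n):
        t = t_max * i / (n - 1)
        x = r_base * (math.cos(t) + t * math.sin(t))
        y = r_base * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts

pts = involute_points(r_base=20.0, t_max=0.8)

# Every involute point lies at distance r*sqrt(1 + t^2) from the centre,
# i.e. on or outside the base circle, and moves monotonically outward.
radii = [math.hypot(x, y) for x, y in pts]
```

    Interpolating a spline through these 11 points gives a much closer fit to the true involute than a 3-point construction, which is the accuracy argument made in the abstract.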

  13. Simulation of evaporation of a sessile drop using a diffuse interface model

    NASA Astrophysics Data System (ADS)

    Sefiane, Khellil; Ding, Hang; Sahu, Kirti; Matar, Omar

    2008-11-01

    We consider here the evaporation dynamics of a Newtonian sessile drop using an improved diffuse interface model. The governing equations for the drop and the surrounding vapour are both solved, with the phases separated by the order parameter (i.e. volume fraction), following the previous work of Ding et al. (JCP 2007). The diffuse interface model has been shown to be successful in modelling moving contact line problems (Jacqmin 2000; Ding and Spelt 2007, 2008). Here, a pinned contact line of the drop is assumed. The evaporative mass flux at the liquid-vapour interface is constitutively a function of the local temperature and is treated as a source term in the interface evolution equation, i.e. the Cahn-Hilliard equation. The model is validated by comparing its predictions with data available in the literature. The evaporative dynamics are illustrated in terms of drop snapshots, and a quantitative comparison with the results of a free surface model is made.

  14. An advanced distributed automated extraction of drainage network model on high-resolution DEM

    NASA Astrophysics Data System (ADS)

    Mao, Y.; Ye, A.; Xu, J.; Ma, F.; Deng, X.; Miao, C.; Gong, W.; Di, Z.

    2014-07-01

    A high-resolution and high-accuracy drainage network map is a prerequisite for simulating the water cycle in land surface hydrological models. The objective of this study was to develop a new automated drainage network extraction model that can produce a high-precision, continuous drainage network from a high-resolution DEM (Digital Elevation Model). Extracting a drainage network from a high-resolution DEM demands substantial computing resources; the conventional GIS method often cannot complete the calculation on a high-resolution DEM of a large basin because the number of grid cells is too large. To decrease the computation time, an advanced distributed automated drainage network extraction model (Adam) is proposed in this study. The Adam model has two features: (1) it searches upward from the basin outlet instead of performing sink filling, and (2) it divides sub-basins on a low-resolution DEM and then extracts the drainage network on the high-resolution DEM of each sub-basin. The case study used elevation data from the Shuttle Radar Topography Mission (SRTM) at 3 arc-second resolution in the Zhujiang River basin, China. The results show that the Adam model can dramatically reduce computation time. The extracted drainage network was continuous and more accurate than HydroSHEDS (Hydrological data and maps based on Shuttle Elevation Derivatives at multiple Scales).
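
    The "searching upward from the outlet" feature can be sketched as a breadth-first walk against a D8 flow-direction grid: each cell records the neighbour it drains to, and the basin is collected by inverting that mapping and walking upstream, with no sink filling. The tiny grid below is invented for illustration and is not the Adam implementation.

```python
from collections import deque

# D8-style flow directions on a 3x3 grid: cell -> downstream cell.
flow_to = {
    (0, 0): (1, 1), (0, 1): (1, 1), (0, 2): (1, 2),
    (1, 0): (1, 1), (1, 1): (2, 1), (1, 2): (2, 1),
    (2, 0): (2, 1), (2, 2): (2, 1),
}
outlet = (2, 1)

# Invert the drainage mapping once: downstream cell -> upstream cells.
upstream = {}
for cell, down in flow_to.items():
    upstream.setdefault(down, []).append(cell)

# BFS upstream from the outlet collects every cell draining to it.
basin, queue = {outlet}, deque([outlet])
while queue:
    cell = queue.popleft()
    for up in upstream.get(cell, []):
        if up not in basin:
            basin.add(up)
            queue.append(up)
```

    Because the walk only visits cells that actually drain to the outlet, it avoids the whole-grid sink-filling pass that dominates the cost of the conventional method.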

  15. Interface-capturing lattice Boltzmann equation model for two-phase flows

    NASA Astrophysics Data System (ADS)

    Lou, Qin; Guo, Zhaoli

    2015-01-01

    In this work, an interface-capturing lattice Boltzmann equation (LBE) model is proposed for two-phase flows. In the model, a Lax-Wendroff propagation scheme and a properly chosen equilibrium distribution function are employed. The Lax-Wendroff scheme is used to provide an adjustable Courant-Friedrichs-Lewy (CFL) number, and the equilibrium distribution is presented to remove the dependence of the relaxation time on the CFL number. As a result, the interface can be captured accurately by decreasing the CFL number. A theoretical expression is derived for the chemical potential gradient by solving the LBE directly for a two-phase system with a flat interface. The result shows that the gradient of the chemical potential is proportional to the square of the CFL number, which explains why the proposed model is able to capture the interface naturally with a small CFL number, and why large interface error exists in the standard LBE model. Numerical tests, including a one-dimensional flat interface problem, a two-dimensional circular droplet problem, and a three-dimensional spherical droplet problem, demonstrate that the proposed LBE model performs well and can capture a sharp interface with a suitable CFL number.
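
    The role of the CFL number in the Lax-Wendroff propagation step can be illustrated on a scalar 1-D advection problem; this toy is not the lattice Boltzmann model itself, and the grid size and pulse shape are arbitrary.

```python
import numpy as np

# One Lax-Wendroff step for du/dt + u_adv * du/dx = 0 on a periodic grid,
# written in terms of the CFL number c = u_adv * dt / dx.
def lax_wendroff_step(f, c):
    fp = np.roll(f, -1)   # f[i+1], periodic boundary
    fm = np.roll(f, 1)    # f[i-1]
    return f - 0.5 * c * (fp - fm) + 0.5 * c ** 2 * (fp - 2 * f + fm)

n = 200
x = np.arange(n) / n
f = np.exp(-200 * (x - 0.3) ** 2)      # Gaussian pulse centred at x = 0.3
c = 0.5                                 # adjustable CFL number (< 1 for stability)
steps = int(0.2 * n / c)                # advect the pulse by 0.2 domain lengths
g = f.copy()
for _ in range(steps):
    g = lax_wendroff_step(g, c)

peak = x[np.argmax(g)]                  # pulse centre should end up near 0.5
```

    The scheme is conservative (the discrete sum is preserved exactly), and decreasing c sharpens the resolution of the advected profile, which is the mechanism the LBE model exploits to capture the interface.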

  16. Interface-capturing lattice Boltzmann equation model for two-phase flows.

    PubMed

    Lou, Qin; Guo, Zhaoli

    2015-01-01

    In this work, an interface-capturing lattice Boltzmann equation (LBE) model is proposed for two-phase flows. In the model, a Lax-Wendroff propagation scheme and a properly chosen equilibrium distribution function are employed. The Lax-Wendroff scheme is used to provide an adjustable Courant-Friedrichs-Lewy (CFL) number, and the equilibrium distribution is presented to remove the dependence of the relaxation time on the CFL number. As a result, the interface can be captured accurately by decreasing the CFL number. A theoretical expression is derived for the chemical potential gradient by solving the LBE directly for a two-phase system with a flat interface. The result shows that the gradient of the chemical potential is proportional to the square of the CFL number, which explains why the proposed model is able to capture the interface naturally with a small CFL number, and why large interface error exists in the standard LBE model. Numerical tests, including a one-dimensional flat interface problem, a two-dimensional circular droplet problem, and a three-dimensional spherical droplet problem, demonstrate that the proposed LBE model performs well and can capture a sharp interface with a suitable CFL number.

  17. Interface Modeling for Electro-Osmosis in Subgrade Structures

    DTIC Science & Technology

    2004-12-01

    aggregate and different clays (kaolinite, montmorillonite, limestone and quartz sands) created to simulate below-grade structures. A direct current 30...Quartz Sand 100 Sieve Ca Montmorillonite Na Montmorillonite Kaolinite The test setup used a 0.45 water-to-cement ratio concrete cylinder...Kaolinite cell Figure 4. Measured pH for Concrete and Na Montmorillonite cell 4 Scaling occurred at the interface between the anode

  18. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  19. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    PubMed Central

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  20. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

    The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  1. A Framework for Automated Spine and Vertebrae Interpolation-Based Detection and Model-Based Segmentation.

    PubMed

    Korez, Robert; Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2015-08-01

    Automated and semi-automated detection and segmentation of spinal and vertebral structures from computed tomography (CT) images is a challenging task due to a relatively high degree of anatomical complexity, presence of unclear boundaries and articulation of vertebrae with each other, as well as due to insufficient image spatial resolution, partial volume effects, presence of image artifacts, intensity variations and low signal-to-noise ratio. In this paper, we describe a novel framework for automated spine and vertebrae detection and segmentation from 3-D CT images. A novel optimization technique based on interpolation theory is applied to detect the location of the whole spine in the 3-D image and, using the obtained location of the whole spine, to further detect the location of individual vertebrae within the spinal column. The obtained vertebra detection results represent a robust and accurate initialization for the subsequent segmentation of individual vertebrae, which is performed by an improved shape-constrained deformable model approach. The framework was evaluated on two publicly available CT spine image databases of 50 lumbar and 170 thoracolumbar vertebrae. Quantitative comparison against corresponding reference vertebra segmentations yielded an overall mean centroid-to-centroid distance of 1.1 mm and Dice coefficient of 83.6% for vertebra detection, and an overall mean symmetric surface distance of 0.3 mm and Dice coefficient of 94.6% for vertebra segmentation. The results indicate that by applying the proposed automated detection and segmentation framework, vertebrae can be successfully detected and accurately segmented in 3-D from CT spine images.
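
    The Dice coefficient used in the evaluation above is simple to state precisely; a minimal sketch on small synthetic binary masks (the mask shapes are invented):

```python
import numpy as np

def dice(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks a, b."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two overlapping 6x6 squares on a 10x10 grid: 36 voxels each, 25 shared.
seg = np.zeros((10, 10), bool); seg[2:8, 2:8] = True
ref = np.zeros((10, 10), bool); ref[3:9, 3:9] = True
score = dice(seg, ref)   # 2*25 / (36+36) = 50/72 ≈ 0.69
```

    The same function applied to 3-D vertebra masks gives the percentages reported in the abstract; surface-distance metrics additionally weight errors by how far boundary voxels are displaced.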

  2. Statistical modelling of networked human-automation performance using working memory capacity.

    PubMed

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
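
    The first of the three statistical approaches, a linear model in which working-memory capacity moderates the effect of task load via an interaction term, can be sketched with synthetic data; the coefficients, ranges, and sample size below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
load = rng.uniform(1, 5, n)            # task load
wm = rng.uniform(0, 1, n)              # normalised working-memory capacity
# Generating model: higher load hurts performance, higher WM helps, and
# WM flattens the load penalty (positive load x WM interaction).
perf = 80 - 8 * load + 10 * wm + 5 * load * wm + rng.normal(0, 1, n)

# Ordinary least squares with an interaction column.
X = np.column_stack([np.ones(n), load, wm, load * wm])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
# beta approximates [80, -8, 10, 5]: the fitted interaction recovers
# the moderating role of WM capacity.
```

    Gaussian-process and Bayesian-network variants extend the same idea to predictions beyond the sampled conditions and to inference about unobserved traits, respectively.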

  3. A phase-field point-particle model for particle-laden interfaces

    NASA Astrophysics Data System (ADS)

    Gu, Chuan; Botto, Lorenzo

    2014-11-01

    The irreversible attachment of solid particles to fluid interfaces is exploited in a variety of applications, such as froth flotation and Pickering emulsions. Critical in these applications is to predict particle transport in and near the interface, and the two-way coupling between the particles and the interface. While it is now possible to carry out particle-resolved simulations of these systems, simulating relatively large systems with many particles remains challenging. We present validation studies and preliminary results for a hybrid Eulerian-Lagrangian simulation method, in which the dynamics of the interface is fully-resolved by a phase-field approach, while the particles are treated in the ``point-particle'' approximation. With this method, which represents a compromise between the competing needs of resolving particle and interface scale phenomena, we are able to simulate the adsorption of a large number of particles in the interface of drops, and particle-interface interactions during the spinodal coarsening of a multiphase system. While this method models the adsorption phenomenon efficiently and with reasonable accuracy, it still requires understanding subtle issues related to the modelling of hydrodynamic and capillary forces for particles in contact with interface.

  4. A coupled damage-plasticity model for the cyclic behavior of shear-loaded interfaces

    NASA Astrophysics Data System (ADS)

    Carrara, P.; De Lorenzis, L.

    2015-12-01

    The present work proposes a novel thermodynamically consistent model for the behavior of interfaces under shear (i.e. mode-II) cyclic loading conditions. The interface behavior is defined coupling damage and plasticity. The admissible states' domain is formulated restricting the tangential interface stress to non-negative values, which makes the model suitable e.g. for interfaces with thin adherends. Linear softening is assumed so as to reproduce, under monotonic conditions, a bilinear mode-II interface law. Two damage variables govern respectively the loss of strength and of stiffness of the interface. The proposed model needs the evaluation of only four independent parameters, i.e. three defining the monotonic mode-II interface law, and one ruling the fatigue behavior. This limited number of parameters and their clear physical meaning facilitate experimental calibration. Model predictions are compared with experimental results on fiber reinforced polymer sheets externally bonded to concrete involving different load histories, and an excellent agreement is obtained.
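
    The monotonic bilinear mode-II law that the model reduces to can be written down directly: linear elastic up to a slip s0, then linear softening to zero stress at a slip sf. The stiffness and slip parameters below are illustrative choices, not calibrated values from the paper.

```python
def bilinear_tau(s, k=10.0, s0=0.1, sf=1.0):
    """Tangential stress tau(s) for slip s >= 0 under monotonic loading."""
    if s <= s0:
        return k * s                          # elastic branch, peak tau_max = k*s0
    if s < sf:
        return k * s0 * (sf - s) / (sf - s0)  # linear softening branch
    return 0.0                                # fully debonded

tau_max = bilinear_tau(0.1)       # peak tangential stress at s = s0
# Mode-II fracture energy = area under the bilinear curve = 0.5 * tau_max * sf
G_II = 0.5 * tau_max * 1.0
```

    Under cyclic loading the model adds two damage variables that degrade the strength and stiffness of this backbone curve, plus one fatigue parameter, which is why only four parameters need calibration.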

  5. A Psycholinguistic Model for Simultaneous Translation, and Proficiency Assessment by Automated Acoustic Analysis of Discourse.

    NASA Astrophysics Data System (ADS)

    Yaghi, Hussein M.

    Two separate but related issues are addressed: how simultaneous translation (ST) works on a cognitive level and how such translation can be objectively assessed. Both of these issues are discussed in the light of qualitative and quantitative analyses of a large corpus of recordings of ST and shadowing. The proposed ST model utilises knowledge derived from a discourse analysis of the data, many accepted facts in the psychology tradition, and evidence from controlled experiments that are carried out here. This model has three advantages: (i) it is based on analyses of extended spontaneous speech rather than word-, syllable-, or clause -bound stimuli; (ii) it draws equally on linguistic and psychological knowledge; and (iii) it adopts a non-traditional view of language called 'the linguistic construction of reality'. The discourse-based knowledge is also used to develop three computerised systems for the assessment of simultaneous translation: one is a semi-automated system that treats the content of the translation; and two are fully automated, one of which is based on the time structure of the acoustic signals whilst the other is based on their cross-correlation. For each system, several parameters of performance are identified, and they are correlated with assessments rendered by the traditional, subjective, qualitative method. Using signal processing techniques, the acoustic analysis of discourse leads to the conclusion that quality in simultaneous translation can be assessed quantitatively with varying degrees of automation. It identifies as measures of performance (i) three content-based standards; (ii) four time management parameters that reflect the influence of the source on the target language time structure; and (iii) two types of acoustical signal coherence. Proficiency in ST is shown to be directly related to coherence and speech rate but inversely related to omission and delay. High proficiency is associated with a high degree of simultaneity and

  6. Modeling interface roughness scattering in a layered seabed for normal-incident chirp sonar signals.

    PubMed

    Tang, Dajun; Hefner, Brian T

    2012-04-01

    Downward-looking sonar, such as the chirp sonar, is widely used as a sediment survey tool in shallow water environments. Inversion of geo-acoustic parameters from such sonar data requires the availability of forward models. An exact numerical model is developed to initiate the simulation of the acoustic field produced by such a sonar in the presence of multiple rough interfaces. The sediment layers are assumed to be fluid layers with non-intersecting rough interfaces.

  7. Intelligent sensor-model automated control of PMR-15 autoclave processing

    NASA Technical Reports Server (NTRS)

    Hart, S.; Kranbuehl, D.; Loos, A.; Hinds, B.; Koury, J.

    1992-01-01

    An intelligent sensor-model system has been built and used for automated control of the PMR-15 cure process in the autoclave. The system uses frequency-dependent electromagnetic sensing (FDEMS), the Loos processing model, and the Air Force QPAL intelligent software shell. The Loos model is used to predict and optimize the cure process, including the time-temperature dependence of the extent of reaction, flow, and part consolidation. The FDEMS sensing system in turn monitors, in situ, the removal of solvent, changes in the viscosity, reaction advancement, and cure completion in the mold continuously throughout the processing cycle. The sensor information is compared with the optimum processing conditions from the model. The QPAL composite cure control system allows the comparison of sensor monitoring with model predictions to be broken down into a series of discrete steps and provides a language for making decisions on what to do next regarding time-temperature and pressure.

  8. Towards automated 3D finite element modeling of direct fiber reinforced composite dental bridge.

    PubMed

    Li, Wei; Swain, Michael V; Li, Qing; Steven, Grant P

    2005-07-01

    An automated 3D finite element (FE) modeling procedure for a direct fiber-reinforced dental bridge is established on the basis of computer tomography (CT) scan data. The model presented herein represents a two-unit anterior cantilever bridge that includes a maxillary right incisor as an abutment and a maxillary left incisor as a cantilever pontic bonded by adhesive and reinforced fibers. The study aims at gathering fundamental knowledge for design optimization of this type of innovative composite dental bridge. To raise the level of automation in the numerical analysis and computational design of new dental biomaterials, this report pays particular attention to the mathematical modeling, mesh generation, and validation of numerical models. To assess the numerical accuracy and to validate the model established, a convergence test and experimental verification are also presented.
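    A convergence test of the kind mentioned can be automated by comparing a scalar output of interest (e.g., peak stress) across successive mesh refinements. The tolerance and the assumed uniform refinement ratio of 2 below are illustrative choices, not values from the paper.

```python
import math

def convergence_check(coarse, medium, fine, tol=0.01, ratio=2.0):
    """Mesh-convergence sketch for a scalar output of an FE model.

    coarse/medium/fine: the output on three uniformly refined meshes.
    Returns (converged?, estimated order of convergence), using the
    standard Richardson-style estimate
    p = log(|medium - coarse| / |fine - medium|) / log(ratio).
    """
    rel_change = abs(fine - medium) / abs(fine)
    order = math.log(abs(medium - coarse) / abs(fine - medium), ratio)
    return rel_change < tol, order
```

    For a second-order method, each refinement cuts the error by a factor of four, so a sequence like 1.04, 1.01, 1.0025 yields an estimated order of 2.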

  9. The enhanced Software Life Cycle Support Environment (ProSLCSE): Automation for enterprise and process modeling

    NASA Technical Reports Server (NTRS)

    Milligan, James R.; Dutton, James E.

    1993-01-01

    In this paper, we have introduced a comprehensive method for enterprise modeling that addresses the three important aspects of how an organization goes about its business. FirstEP includes infrastructure modeling, information modeling, and process modeling notations that are intended to be easy to learn and use. The notations stress the use of straightforward visual languages that are intuitive, syntactically simple, and semantically rich. ProSLCSE will be developed with automated tools and services to facilitate enterprise modeling and process enactment. In the spirit of FirstEP, ProSLCSE tools will also be seductively easy to use. Achieving fully managed, optimized software development and support processes will be long and arduous for most software organizations, and many serious problems will have to be solved along the way. ProSLCSE will provide the ability to document, communicate, and modify existing processes, which is the necessary first step.

  10. Design Through Manufacturing: The Solid Model-Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2002-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts reflecting every detail of the finished product. Ideally, in the aerospace industry, these models should fulfill two very important functions: (1) provide numerical control information for automated manufacturing of precision parts, and (2) enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in aircraft and space vehicles. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. Presently, the process of preparing CAD models for FEA consumes a great deal of the analyst's time.

  11. Effects of modeling errors on trajectory predictions in air traffic control automation

    NASA Technical Reports Server (NTRS)

    Jackson, Michael R. C.; Zhao, Yiyuan; Slattery, Rhonda

    1996-01-01

    Air traffic control automation synthesizes aircraft trajectories for the generation of advisories. Trajectory computation employs models of aircraft performances and weather conditions. In contrast, actual trajectories are flown in real aircraft under actual conditions. Since synthetic trajectories are used in landing scheduling and conflict probing, it is very important to understand the differences between computed trajectories and actual trajectories. This paper examines the effects of aircraft modeling errors on the accuracy of trajectory predictions in air traffic control automation. Three-dimensional point-mass aircraft equations of motion are assumed to be able to generate actual aircraft flight paths. Modeling errors are described as uncertain parameters or uncertain input functions. Pilot or autopilot feedback actions are expressed as equality constraints to satisfy control objectives. A typical trajectory is defined by a series of flight segments with different control objectives for each flight segment and conditions that define segment transitions. A constrained linearization approach is used to analyze trajectory differences caused by various modeling errors by developing a linear time varying system that describes the trajectory errors, with expressions to transfer the trajectory errors across moving segment transitions. A numerical example is presented for a complete commercial aircraft descent trajectory consisting of several flight segments.
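    The constrained-linearization machinery can be illustrated schematically: within each flight segment the trajectory error evolves under a linear time-varying map, and a transfer matrix carries it across the segment transition. The matrices in the example are made-up placeholders, not derived from aircraft dynamics.

```python
import numpy as np

def propagate_trajectory_error(segments, e0):
    """Propagate an initial state error e0 through flight segments.

    Each segment is (A_steps, T): A_steps is a list of per-step linear
    time-varying error matrices, e_{k+1} = A_k e_k, and T is the
    transfer matrix applied when crossing into the next segment.
    Returns the error after the final segment.
    """
    e = np.asarray(e0, dtype=float)
    for A_steps, T in segments:
        for A in A_steps:
            e = A @ e                 # in-segment LTV propagation
        e = T @ e                     # transfer across the transition
    return e
```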

  12. Streamflow forecasting using the modular modeling system and an object-user interface

    USGS Publications Warehouse

    Jeton, A.E.

    2001-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Bureau of Reclamation (BOR), developed a computer program to provide a general framework needed to couple disparate environmental resource models and to manage the necessary data. The Object-User Interface (OUI) is a map-based interface for models and modeling data. It provides a common interface to run hydrologic models and acquire, browse, organize, and select spatial and temporal data. One application is to assist river managers in utilizing streamflow forecasts generated with the Precipitation-Runoff Modeling System running in the Modular Modeling System (MMS), a distributed-parameter watershed model, and the National Weather Service Extended Streamflow Prediction (ESP) methodology.

  13. A gradient-descent-based approach for transparent linguistic interface generation in fuzzy models.

    PubMed

    Chen, Long; Chen, C L Philip; Pedrycz, Witold

    2010-10-01

    A linguistic interface is a group of linguistic terms or fuzzy descriptions that describe the variables in a system through corresponding membership functions. Its transparency completely or partly determines the interpretability of fuzzy models. This paper proposes a GRadiEnt-descEnt-based Transparent lInguistic iNterface Generation (GREETING) approach to overcome the disadvantage of traditional linguistic interface generation methods, in which consideration of the interpretability aspects of the linguistic interface is limited. In GREETING, the widely used interpretability criteria of linguistic interfaces are considered and optimized. Numeric experiments on data sets from the University of California, Irvine (UCI) machine learning repository demonstrate the feasibility and superiority of the proposed GREETING method. The GREETING method is also applied to fuzzy decision tree generation. It is shown that GREETING generates more transparent fuzzy decision trees in terms of better classification rates and comparable tree sizes.
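    One widely used interpretability criterion, completeness (every input covered by some linguistic term), can be optimized by gradient descent in the spirit of GREETING. The Gaussian terms, the penalty form, and the finite-difference gradient below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership function for a linguistic term centered at c."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def coverage_penalty(centers, sigma, data, eps=0.5):
    """Completeness criterion: penalize data points whose best membership
    over all linguistic terms falls below eps."""
    best = np.max([gaussian_mf(data, c, sigma) for c in centers], axis=0)
    return float(np.sum(np.maximum(eps - best, 0.0) ** 2))

def gradient_step(centers, sigma, data, lr=0.1, h=1e-5):
    """One finite-difference gradient-descent step on the penalty."""
    grad = np.zeros_like(centers)
    for i in range(len(centers)):
        up, down = centers.copy(), centers.copy()
        up[i] += h
        down[i] -= h
        grad[i] = (coverage_penalty(up, sigma, data)
                   - coverage_penalty(down, sigma, data)) / (2 * h)
    return centers - lr * grad

data = np.linspace(0.0, 1.0, 11)   # toy universe of discourse
```

    A single wide term centered in the domain already covers it (zero penalty), while a term centered far outside the data is pulled back toward it by the gradient step.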

  14. Quantification of Pulmonary Fibrosis in a Bleomycin Mouse Model Using Automated Histological Image Analysis

    PubMed Central

    Gilhodes, Jean-Claude; Kreuz, Sebastian; Stierstorfer, Birgit; Stiller, Detlef; Wollin, Lutz

    2017-01-01

    Current literature on pulmonary fibrosis induced in animal models highlights the need for an accurate, reliable and reproducible quantitative histological analysis. One of the major limitations of histological scoring is that it is observer-dependent and consequently subject to variability, which may preclude comparative studies between different laboratories. To achieve a reliable and observer-independent quantification of lung fibrosis, we developed automated histological image analysis software operating on digital images of entire lung sections. This automated analysis was compared to standard evaluation methods with regard to its validation as an end-point measure of fibrosis. Lung fibrosis was induced in mice by intratracheal administration of bleomycin (BLM) at 0.25, 0.5, 0.75 and 1 mg/kg. A detailed characterization of BLM-induced fibrosis was performed 14 days after BLM administration using lung function testing, micro-computed tomography and Ashcroft scoring analysis. Quantification of fibrosis by automated analysis was assessed based on pulmonary tissue density measured from thousands of micro-tiles processed from digital images of entire lung sections. Prior to analysis, large bronchi and vessels were manually excluded from the original images. Measurement of fibrosis was expressed by two indexes: the mean pulmonary tissue density and the high pulmonary tissue density frequency. We showed that the tissue density indexes gave access to a very accurate and reliable quantification of the morphological changes induced by BLM, even for the lowest concentration used (0.25 mg/kg). A reconstructed 2D image of the entire lung section at high resolution (3.6 μm/pixel) was generated from the tissue density values, allowing visualization of their distribution throughout fibrotic and non-fibrotic regions. A significant correlation (p<0.0001) was found between the automated analysis and the above standard evaluation methods. This correlation establishes
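    The two tissue-density indexes can be sketched with a tile-based computation on a binary tissue mask. The tile size and the "high density" threshold below are arbitrary stand-ins for the paper's calibrated values.

```python
import numpy as np

def tile_densities(mask, tile=8):
    """Per-tile tissue density from a binary tissue mask.

    mask: 2D boolean array, True where the pixel is stained tissue.
    The image is cut into tile x tile micro-tiles; each tile's density
    is its fraction of tissue pixels.
    """
    h = (mask.shape[0] // tile) * tile
    w = (mask.shape[1] // tile) * tile
    tiles = mask[:h, :w].reshape(h // tile, tile, w // tile, tile)
    return tiles.mean(axis=(1, 3))

def fibrosis_indexes(mask, tile=8, high=0.6):
    """Return (mean pulmonary tissue density, high-density tile frequency)."""
    d = tile_densities(mask, tile)
    return float(d.mean()), float((d > high).mean())
```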

  15. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
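    A drastically simplified version of the aim-point distribution problem: each heliostat has a set of candidate aim points, each producing a flux profile on the receiver, and we greedily pick the aim that keeps the accumulated peak flux lowest. The real method optimizes against measured mirror quality; this greedy loop is only a sketch with invented data structures.

```python
import numpy as np

def assign_aim_points(flux_maps):
    """Greedy aim-point assignment minimizing receiver peak flux.

    flux_maps[h][a]: 1D flux profile over receiver bins contributed by
    heliostat h when aimed at candidate point a. For each heliostat we
    choose the aim point that minimizes the peak of the running total.
    Returns (chosen aim index per heliostat, total flux profile).
    """
    total = np.zeros_like(np.asarray(flux_maps[0][0], dtype=float))
    choices = []
    for maps in flux_maps:
        best = min(range(len(maps)),
                   key=lambda a: float((total + maps[a]).max()))
        choices.append(best)
        total = total + maps[best]
    return choices, total

# Two heliostats, each with two candidate aims on a two-bin receiver:
flux_maps = [[np.array([1.0, 0.0]), np.array([0.0, 1.0])],
             [np.array([1.0, 0.0]), np.array([0.0, 1.0])]]
choices, total = assign_aim_points(flux_maps)
```

    The second heliostat is steered to the other aim point, spreading the flux instead of stacking both contributions on one spot.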

  16. An automated procedure for material parameter evaluation for viscoplastic constitutive models

    NASA Technical Reports Server (NTRS)

    Imbrie, P. K.; James, G. H.; Hill, P. S.; Allen, D. H.; Haisler, W. E.

    1988-01-01

    An automated procedure is presented for evaluating the material parameters in Walker's exponential viscoplastic constitutive model for metals at elevated temperature. Both physical and numerical approximations are utilized to compute the constants for Inconel 718 at 1100 F. When intermediate results are carefully scrutinized and engineering judgement applied, parameters may be computed which yield stress output histories that are in agreement with experimental results. A qualitative assessment of the theta-plot method for predicting the limiting value of stress is also presented. The procedure may also be used as a basis to develop evaluation schemes for other viscoplastic constitutive theories of this type.

  17. The Murine Femoral Allograft Model and a Semi-automated Histomorphometric Analysis Tool

    PubMed Central

    Dhillon, Robinder S.; Zhang, Longze; Schwarz, Edward M.; Boyce, Brendan F.; Xie, Chao

    2014-01-01

    Preclinical studies on bone repair remain a high priority due to the unresolved clinical problems associated with treating critical segmental defects and complications of fracture healing. Over the last decade the murine femoral allograft model has gained popularity due to its standardized surgery and potential for examining a vast array of radiographic, biomechanical and histological outcome measures. Here, we describe these methods and a novel semi-automated histomorphometric approach to quantify the amount of bone, cartilage and undifferentiated mesenchymal tissue in demineralized paraffin sections of allografted murine femurs using the VisioPharm Image Analysis Software System. PMID:24482164

  18. Automated Translation and Thermal Zoning of Digital Building Models for Energy Analysis

    SciTech Connect

    Jones, Nathaniel L.; McCrone, Colin J.; Walter, Bruce J.; Pratt, Kevin B.; Greenberg, Donald P.

    2013-08-26

    Building energy simulation is valuable during the early stages of design, when decisions can have the greatest impact on energy performance. However, preparing digital design models for building energy simulation typically requires tedious manual alteration. This paper describes a series of five automated steps to translate geometric data from an unzoned CAD model into a multi-zone building energy model. First, CAD input is interpreted as geometric surfaces with materials. Second, surface pairs defining walls of various thicknesses are identified. Third, normal directions of unpaired surfaces are determined. Fourth, space boundaries are defined. Fifth, optionally, settings from previous simulations are applied, and spaces are aggregated into a smaller number of thermal zones. Building energy models created quickly using this method can offer guidance throughout the design process.
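    Step two, identifying surface pairs that form walls of various thicknesses, can be sketched for planar surfaces: two planes whose normals are opposite and whose separation is below a thickness bound are taken as the two sides of one wall. The plane representation and thresholds are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def pair_wall_surfaces(surfaces, max_thickness=0.5, tol=1e-6):
    """Pair facing planar surfaces into walls.

    Each surface is (n, d) for the plane n . x = d with unit normal n.
    Surfaces i and j are paired when n_i = -n_j (they face each other)
    and the distance between the planes, |d_i + d_j|, is at most
    max_thickness. Each surface joins at most one wall.
    """
    pairs, used = [], set()
    for i, (ni, di) in enumerate(surfaces):
        if i in used:
            continue
        for j in range(i + 1, len(surfaces)):
            if j in used:
                continue
            nj, dj = surfaces[j]
            if np.allclose(np.asarray(ni), -np.asarray(nj), atol=tol) \
                    and abs(di + dj) <= max_thickness:
                pairs.append((i, j))
                used.update((i, j))
                break
    return pairs
```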

  19. Automated Optimization of Water–Water Interaction Parameters for a Coarse-Grained Model

    PubMed Central

    2015-01-01

    We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder–Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions, incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, dielectric permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment. PMID:24460506
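    The optimization loop can be sketched end to end: a weighted target function over property residuals, minimized by a compact Nelder-Mead implementation. The target values, weights, and the smooth toy "simulation" below are invented placeholders; in ParOpt each target-function evaluation would run an actual coarse-grained simulation.

```python
import numpy as np

# Hypothetical experimental targets (density, surface tension,
# permittivity, diffusion coefficient) and weights; placeholders only.
TARGETS = np.array([1.0, 72.0, 78.0, 2.3])
WEIGHTS = np.array([1.0, 0.01, 0.01, 0.1])

def simulated_properties(p):
    """Stand-in for running a CG simulation with parameters p and
    measuring the four properties; a smooth toy map for illustration."""
    a, b = p
    return np.array([a * b, 70.0 + a, 80.0 - b, 2.0 + 0.1 * a * b])

def target(p):
    """Weighted sum of squared relative property errors."""
    r = (simulated_properties(p) - TARGETS) / TARGETS
    return float(np.sum(WEIGHTS * r ** 2))

def nelder_mead(f, x0, step=0.5, iters=200):
    """Compact Nelder-Mead minimizer (reflect/expand/contract/shrink)."""
    x0 = np.asarray(x0, dtype=float)
    simplex = [x0] + [x0 + step * e for e in np.eye(len(x0))]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)
        refl = centroid + (centroid - worst)
        if f(refl) < f(best):
            exp = centroid + 2.0 * (centroid - worst)    # expansion
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl                           # reflection
        else:
            contr = centroid + 0.5 * (worst - centroid)  # contraction
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                                        # shrink toward best
                simplex = [best + 0.5 * (v - best) for v in simplex]
    return min(simplex, key=f)
```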

  20. Automated determination of fibrillar structures by simultaneous model building and fiber diffraction refinement.

    PubMed

    Potrzebowski, Wojciech; André, Ingemar

    2015-07-01

    For highly oriented fibrillar molecules, three-dimensional structures can often be determined from X-ray fiber diffraction data. However, because of limited information content, structure determination and validation can be challenging. We demonstrate that automated structure determination of protein fibers can be achieved by guiding the building of macromolecular models with fiber diffraction data. We illustrate the power of our approach by determining the structures of six bacteriophage viruses de novo using fiber diffraction data alone and together with solid-state NMR data. Furthermore, we demonstrate the feasibility of molecular replacement from monomeric and fibrillar templates by solving the structure of a plant virus using homology modeling and protein-protein docking. The generated models explain the experimental data to the same degree as deposited reference structures but with improved structural quality. We also developed a cross-validation method for model selection. The results highlight the power of fiber diffraction data as structural constraints.

  1. Direct analysis of pesticide residues in olive oil by on-line reversed phase liquid chromatography-gas chromatography using an automated through oven transfer adsorption desorption (TOTAD) interface.

    PubMed

    Sanchez, Raquel; Vázquez, Ana; Riquelme, David; Villén, Jesús

    2003-10-08

    A fully automated on-line reversed phase liquid chromatography-gas chromatography system is described. The system uses a prototype of the automated through oven transfer adsorption desorption interface. The system is demonstrated by presenting a new rapid method for the determination of pesticide residue in olive oil, which is injected directly with no sample pretreatment step other than filtration. Methanol:water is used as the eluent in the LC preseparation step, while the LC fraction containing the pesticide is automatically transferred to the gas chromatograph. Detection limits of pesticides varied from 0.18 to 0.44 mg/L when a flame ionization detector was used. As an example, relative standard deviation and linear calibration are presented for terbutryne.

  2. Automation of Scientific Modeling and Visualization Using Model5D and ModelPOV Software

    NASA Astrophysics Data System (ADS)

    Artemov, Yuri; Schwartz, Brian

    2000-03-01

    When scientists try to visualize complex phenomena, they often choose to do the programming on their own. Although powerful packages such as Mathematica or Matlab are convenient for small to medium size simulations, they do not perform well in massive 3D computations, and they have very limited abilities for volume rendering and animation of 3D data. Programs such as Bob and Vis5D are specifically tailored to the visualization of volume data. However, learning the data file formats of these programs is a time-consuming and error-prone task. In this report we present Model5D software, which greatly simplifies the process of calculating scalar multi-variable time-dependent 3D data and preparing it for visualization in Bob or Vis5D. A numerical model of any kind intended for calculation on a regular 3D grid can be implemented as an 'engine' (a dynamic link library that actually performs the calculation) and a 'model' (a collection of parameters, etc.). Engines and models are implemented as small modules, which can be easily exchanged over the Internet. The model functionality is incorporated into the engine by using the templates provided and a C++ compiler. The calculations can be performed in batch mode. ModelPOV, which prepares data for the popular ray tracer POV-Ray, is to Model5D as vector graphics is to bitmapped graphics. ModelPOV is especially useful for the visualization of large numbers of particles. An example of using these tools for the visualization of vortices in superconductors is discussed.

  3. Generating Phenotypical Erroneous Human Behavior to Evaluate Human-automation Interaction Using Model Checking

    PubMed Central

    Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.

    2012-01-01

    Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the state space and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
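    Hollnagel's zero-order phenotypes can be generated mechanically from a normative action sequence. The functions below are a minimal sketch of that generation step only (omission, repetition, and intrusion; jumps can be produced analogously by splicing seq[:i] + seq[j:]); the task-model integration and model checking are of course not shown, and the action names are invented.

```python
def omissions(seq):
    """Every sequence obtainable by omitting one action."""
    return [seq[:i] + seq[i + 1:] for i in range(len(seq))]

def repetitions(seq):
    """Every sequence obtainable by repeating one action in place."""
    return [seq[:i + 1] + seq[i:] for i in range(len(seq))]

def intrusions(seq, foreign):
    """Every sequence obtainable by inserting one foreign action."""
    return [seq[:i] + [a] + seq[i:]
            for i in range(len(seq) + 1) for a in foreign]
```

    Applying phenotypes to already-perturbed sequences yields the higher-order phenotypes mentioned in the abstract.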

  4. A comparison of molecular dynamics and diffuse interface model predictions of Lennard-Jones fluid evaporation

    SciTech Connect

    Barbante, Paolo; Frezzotti, Aldo; Gibelli, Livio

    2014-12-09

    The unsteady evaporation of a thin planar liquid film is studied by molecular dynamics simulations of a Lennard-Jones fluid. The obtained results are compared with the predictions of a diffuse interface model in which capillary Korteweg contributions are added to the hydrodynamic equations, in order to obtain a unified description of the liquid bulk, the liquid-vapor interface, and the vapor region. Particular care has been taken in constructing a diffuse interface model matching the thermodynamic and transport properties of the Lennard-Jones fluid. The comparison of diffuse interface model and molecular dynamics results shows that, although good agreement is obtained in equilibrium conditions, remarkable deviations of the diffuse interface model predictions from the reference molecular dynamics results are observed in the simulation of liquid film evaporation. It is also observed that the molecular dynamics results are in good agreement with preliminary results obtained from a composite model which describes the liquid film by a standard hydrodynamic model and the vapor by the Boltzmann equation. The two mathematical models are connected by kinetic boundary conditions assuming a unit evaporation coefficient.

  5. Bilateral Automated Continuous Distraction Osteogenesis in a Design Model: Proof of Principle

    PubMed Central

    Peacock, Zachary S.; Tricomi, Brad J.; Faquin, William C.; Magill, John C.; Murphy, Brian A.; Kaban, Leonard B.; Troulis, Maria J.

    2015-01-01

    The purpose of this study was to demonstrate that automated, continuous, curvilinear distraction osteogenesis (DO) in a minipig model is effective when performed bilaterally, at rates up to 3 mm/day, to achieve clinically relevant lengthening. A Yucatan minipig in the mixed dentition phase underwent bilateral, continuous DO at a rate of 2 mm/day at the center of rotation, and 1.0 and 3.0 mm/day at the superior and inferior regions, respectively. The distraction period was 13 days with no latency period. The vector and rate of distraction were remotely monitored without radiographs, using the device sensor. After fixation and euthanasia, the mandible and digastric muscles were harvested. The ex-vivo appearance, stability, and radiodensity of the regenerate were evaluated using a semi-quantitative scale. The percent surface area (PSA) occupied by bone, fibrous tissue, cartilage, and hematoma was calculated using histomorphometrics. The effects of DO on the digastric muscles and mandibular condyles were assessed via microscopy, and degenerative changes were quantified. The animal was distracted to 21 mm and 24 mm on the right and left sides, respectively. Clinical appearance, stability, and radiodensity were scored as ‘3’ bilaterally, indicating osseous union. The total PSA occupied by bone (right = 75.53±2.19%; left = 73.11±2.18%) approached that of an unoperated mandible (84.67±0.86%). The digastric muscles and condyles showed negligible degenerative or abnormal histologic changes. This proof-of-principle study is the first report of osseous healing with no ill effect on associated soft tissue and the mandibular condyle using bilateral, automated, continuous, curvilinear DO at rates up to 3 mm/day. The model approximates potential human application of continuous automated distraction with a semiburied device. PMID:26594967

  6. Modeling and preliminary testing socket-residual limb interface stiffness of above-elbow prostheses.

    PubMed

    Sensinger, Jonathon W; Weir, Richard F ff

    2008-04-01

    The interface between the socket and residual limb can have a significant effect on the performance of a prosthesis. Specifically, knowledge of the rotational stiffness of the socket-residual limb (S-RL) interface is extremely useful in designing new prostheses and evaluating new control paradigms, as well as in comparing existing and new socket technologies. No previous studies, however, have examined the rotational stiffness of S-RL interfaces. To address this problem, a mathematical model is compared to a more complex finite element analysis, to see if the mathematical model sufficiently captures the main effects of S-RL interface rotational stiffness. Both of these models are then compared to preliminary empirical testing, in which a series of X-rays (fluoroscopy) is taken to capture the movement of the bone relative to the socket. Force data are recorded simultaneously, and the combination of force and movement data is used to calculate the empirical rotational stiffness of the elbow S-RL interface. The empirical rotational stiffness values are then compared to the models, to see if values of Young's modulus obtained in other studies at localized points may be used to determine the global rotational stiffness of the S-RL interface. Findings include agreement between the models and empirical results, and the ability of persons to significantly modulate the rotational stiffness of their S-RL interface by a little less than one order of magnitude. The floor and ceiling of this range depend significantly on socket length and co-contraction levels, but not on residual limb diameter or bone diameter. Measured trans-humeral S-RL interface rotational stiffness values ranged from 24 to 140 Nm/rad for the four subjects tested in this study.
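    The empirical stiffness computation reduces to fitting torque against interface rotation; a least-squares slope in Nm/rad is a reasonable sketch of that step (the function and variable names are ours, not the paper's).

```python
import numpy as np

def rotational_stiffness(angle_rad, torque_nm):
    """Least-squares slope of torque (Nm) vs. rotation (rad): Nm/rad.

    Fits torque = k * angle + c and returns k, matching the units of
    the 24 to 140 Nm/rad range reported for trans-humeral interfaces.
    """
    A = np.column_stack([angle_rad, np.ones_like(angle_rad)])
    (k, _c), *_ = np.linalg.lstsq(A, torque_nm, rcond=None)
    return float(k)
```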

  7. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.

  8. Revisiting automated G-protein coupled receptor modeling: the benefit of additional template structures for a neurokinin-1 receptor model.

    PubMed

    Kneissl, Benny; Leonhardt, Bettina; Hildebrandt, Andreas; Tautermann, Christofer S

    2009-05-28

    The feasibility of automated procedures for the modeling of G-protein coupled receptors (GPCRs) is investigated using the example of the human neurokinin-1 (NK1) receptor. We use a combined method of homology modeling and molecular docking and analyze the information content of the resulting docking complexes regarding the binding mode for further refinements. Moreover, we explore the impact of different template structures, the bovine rhodopsin structure, the human beta(2) adrenergic receptor, and in particular a combination of both templates to include backbone flexibility in the target conformational space. Our results for NK1 modeling demonstrate that model selection from a set of decoys cannot, in general, rely solely on docking experiments but still requires additional mutagenesis data. However, an enrichment factor of 2.6 in a nearly fully automated approach indicates that reasonable models can be created automatically if both available templates are used for model construction. Thus, the recently resolved GPCR structures open new ways to improve model building fundamentally.

  9. Interface mechanics in lower-limb external prosthetics: a review of finite element models.

    PubMed

    Zachariah, S G; Sanders, J E

    1996-12-01

    The distribution of mechanical stress at the interface between a residual limb and prosthetic socket is an important design consideration in lower-limb prosthetics. Stresses must be distributed so that the amputee is stable and comfortable, while avoiding trauma to the tissues of the residual limb. Numerical estimation of the stresses at the interface through finite element (FE) modeling can potentially provide researchers and prosthetists with a tool to aid in the design of the prosthetic socket. This review addresses FE modeling of interface stresses in lower-limb external prosthetics. The modeling methodologies adopted by analysts are described. Verification of FE estimates of interface stress against experimental data by different analysts is presented and the likely sources of error discussed. While the performance of the models is encouraging, there are definite limitations to all of them, necessitating further improvements. Parametric analysis of the sensitivity of interface stress to model parameters provides a tool to identify model weaknesses and to suggest possible refinements. Parametric analyses by different analysts are also presented and potential refinements discussed. Finally, directions for future work in prosthetic FE modeling are suggested.

  10. Effective automated prediction of vertebral column pathologies based on logistic model tree with SMOTE preprocessing.

    PubMed

    Karabulut, Esra Mahsereci; Ibrikci, Turgay

    2014-05-01

    This study develops a logistic model tree based automation system for accurate recognition of types of vertebral column pathologies. Six biomechanical measures are used for this purpose: pelvic incidence, pelvic tilt, lumbar lordosis angle, sacral slope, pelvic radius and grade of spondylolisthesis. A two-phase classification model is employed in which the first step is preprocessing the data by use of the Synthetic Minority Over-sampling Technique (SMOTE), and the second one is feeding the classifier Logistic Model Tree (LMT) with the preprocessed data. We have achieved an accuracy of 89.73%, and 0.964 Area Under Curve (AUC) in computer-based automatic detection of the pathology. This was validated via a 10-fold cross-validation experiment conducted on clinical records of 310 patients. The study also presents a comparative analysis of the vertebral column data with the use of several machine learning algorithms.
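
    The SMOTE preprocessing step can be sketched with NumPy alone. This is a minimal illustration of the interpolation idea (simplified from Chawla et al.), not the authors' implementation:

```python
import numpy as np

def smote(X_min, n_synthetic, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize minority-class samples by
    interpolating between each sample and one of its k nearest
    minority-class neighbours (simplified from Chawla et al.)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    k = min(k, n - 1)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a sample is not its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    out = np.empty((n_synthetic, X_min.shape[1]))
    for i in range(n_synthetic):
        j = rng.integers(n)                       # pick a minority sample
        nb = X_min[rng.choice(neighbours[j])]     # and one of its neighbours
        out[i] = X_min[j] + rng.random() * (nb - X_min[j])  # interpolate
    return out
```

    The synthetic points lie on line segments between real minority samples, which is why SMOTE rebalances the classes without simply duplicating records.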

  11. Framework for non-coherent interface models at finite displacement jumps and finite strains

    NASA Astrophysics Data System (ADS)

    Ottosen, Niels Saabye; Ristinmaa, Matti; Mosler, Jörn

    2016-05-01

    This paper deals with a novel constitutive framework suitable for non-coherent interfaces, such as cracks, undergoing large deformations in a geometrically exact setting. For this type of interface, the displacement field shows a jump across the interface. Within the engineering community, so-called cohesive zone models are frequently applied in order to describe non-coherent interfaces. However, for existing models to comply with the restrictions imposed by (a) thermodynamical consistency (e.g., the second law of thermodynamics), (b) balance equations (in particular, balance of angular momentum) and (c) material frame indifference, these models are essentially fiber models, i.e. models where the traction vector is collinear with the displacement jump. This constrains the ability to model shear and, in addition, excludes anisotropic effects. A novel, extended constitutive framework which is consistent with the above-mentioned fundamental physical principles is elaborated in this paper. In addition to the classical tractions associated with a cohesive zone model, the main idea is to consider additional tractions related to membrane-like forces and out-of-plane shear forces acting within the interface. For zero displacement jump, i.e. coherent interfaces, this framework degenerates to existing formulations presented in the literature. For hyperelasticity, the Helmholtz energy of the proposed novel framework depends on the displacement jump as well as on the tangent vectors of the interface with respect to the current configuration - or equivalently - the Helmholtz energy depends on the displacement jump and the surface deformation gradient. It turns out that by defining the Helmholtz energy in terms of the invariants of these variables, all above-mentioned fundamental physical principles are automatically fulfilled. Extensions of the novel framework necessary for material degradation (damage) and plasticity are also covered.

  12. Improved automated diagnosis of misfire in internal combustion engines based on simulation models

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Bond Randall, Robert

    2015-12-01

    In this paper, a new advance in the application of Artificial Neural Networks (ANNs) to the automated diagnosis of misfires in Internal Combustion engines (IC engines) is detailed. The automated diagnostic system comprises three stages: fault detection, fault localization and fault severity identification. In particular, in the severity identification stage, separate Multi-Layer Perceptron networks (MLPs) with saturating linear transfer functions were designed for individual speed conditions, so they could achieve finer classification. In order to obtain sufficient data for the network training, numerical simulation was used to simulate different ranges of misfires in the engine. The simulation models need to be updated and evaluated using experimental data, so a series of experiments were first carried out on the engine test rig to capture the vibration signals for both the normal condition and a range of misfires. Two methods were used for the misfire diagnosis: one is based on the torsional vibration signals of the crankshaft and the other on the angular acceleration signals (rotational motion) of the engine block. Following the signal processing of the experimental and simulation signals, the best features were selected as the inputs to the ANN networks. The ANN systems were trained using only the simulated data and tested using real experimental cases, indicating that the simulation model remains valid over a wider range of faults. The final results have shown that the diagnostic system based on simulation can efficiently diagnose misfire, including its location and severity.

  13. Semi-automated calibration method for modelling of mountain permafrost evolution in Switzerland

    NASA Astrophysics Data System (ADS)

    Marmy, Antoine; Rajczak, Jan; Delaloye, Reynald; Hilbich, Christin; Hoelzle, Martin; Kotlarski, Sven; Lambiel, Christophe; Noetzli, Jeannette; Phillips, Marcia; Salzmann, Nadine; Staub, Benno; Hauck, Christian

    2016-11-01

    Permafrost is a widespread phenomenon in mountainous regions of the world such as the European Alps. Many important topics such as the future evolution of permafrost related to climate change and the detection of permafrost at potential natural hazard sites are of major concern to our society. Numerical permafrost models are the only tools which allow for the projection of the future evolution of permafrost. Due to the complexity of the processes involved and the heterogeneity of Alpine terrain, models must be carefully calibrated, and results should be compared with observations at the site (borehole) scale. However, for large-scale applications, a site-specific model calibration for a multitude of grid points would be very time-consuming. To tackle this issue, this study presents a semi-automated calibration method using the Generalized Likelihood Uncertainty Estimation (GLUE) as implemented in a 1-D soil model (CoupModel) and applies it to six permafrost sites in the Swiss Alps. We show that this semi-automated calibration method is able to accurately reproduce the main thermal condition characteristics, with some limitations at sites with unique conditions such as 3-D air or water circulation, which have to be calibrated manually. The calibration obtained was used for global and regional climate model (GCM/RCM)-based long-term climate projections under the A1B climate scenario (EU-ENSEMBLES project) specifically downscaled at each borehole site. The projection shows general permafrost degradation with thawing at 10 m, even partially reaching 20 m depth by the end of the century, but with different timing among the sites and with partly considerable uncertainties due to the spread of the applied climatic forcing.
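
    The GLUE procedure can be sketched in a few lines: sample parameter sets from priors, keep the "behavioural" sets whose likelihood measure exceeds a threshold, and weight them by that measure. This is a generic illustration using a Nash-Sutcliffe likelihood and uniform priors, not the CoupModel implementation:

```python
import numpy as np

def glue_calibrate(model, obs, priors, n_samples=5000, threshold=0.3, rng=None):
    """GLUE sketch: Monte Carlo sample parameters from uniform priors,
    retain 'behavioural' sets whose likelihood measure exceeds a threshold,
    and return them plus a likelihood-weighted parameter estimate.
    `model(params)` must return a simulated series comparable to `obs`."""
    rng = np.random.default_rng(rng)
    lows = np.array([lo for lo, hi in priors])
    highs = np.array([hi for lo, hi in priors])
    params = rng.uniform(lows, highs, size=(n_samples, len(priors)))
    obs = np.asarray(obs, float)
    # Nash-Sutcliffe efficiency as the (informal) likelihood measure
    var = ((obs - obs.mean()) ** 2).sum()
    nse = np.array([1.0 - ((model(p) - obs) ** 2).sum() / var for p in params])
    behavioural = nse > threshold
    w = nse[behavioural] / nse[behavioural].sum()
    return params[behavioural], w @ params[behavioural]
```

    The spread of the behavioural set, rather than a single best fit, is what carries the calibration uncertainty into the climate projections.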

  14. Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter

    NASA Technical Reports Server (NTRS)

    Belknap, Shannon; Zhang, Michael

    2013-01-01

    The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to quickly and accurately perform 3D thermal analyses on damaged lower surface acreage tiles and structures beneath the damaged locations on a Space Shuttle Orbiter. The 3D model builder created both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to quickly and accurately create a 3D model of a damaged lower surface tile on the orbiter. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which takes a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage, the correct tile type, tile thickness, structure thickness, and SIP thickness at the damage site, so that the model builder program can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.

  15. Analytic Element Modeling of Steady Interface Flow in Multilayer Aquifers Using AnAqSim.

    PubMed

    Fitts, Charles R; Godwin, Joshua; Feiner, Kathleen; McLane, Charles; Mullendore, Seth

    2015-01-01

    This paper presents the analytic element modeling approach implemented in the software AnAqSim for simulating steady groundwater flow with a sharp fresh-salt interface in multilayer (three-dimensional) aquifer systems. Compared with numerical methods for variable-density interface modeling, this approach allows quick model construction and can yield useful guidance about the three-dimensional configuration of an interface even at a large scale. The approach employs subdomains and multiple layers as outlined by Fitts (2010) with the addition of discharge potentials for shallow interface flow (Strack 1989). The following simplifying assumptions are made: steady flow, a sharp interface between fresh- and salt water, static salt water, and no resistance to vertical flow and hydrostatic heads within each fresh water layer. A key component of this approach is a transition to a thin fixed minimum fresh water thickness mode when the fresh water thickness approaches zero. This allows the solution to converge and determine the steady interface position without a long transient simulation. The approach is checked against the widely used numerical codes SEAWAT and SWI/MODFLOW and a hypothetical application of the method to a coastal wellfield is presented.
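
    Under the paper's simplifying assumptions (sharp interface, static salt water, hydrostatic fresh-water heads), the classic Ghyben-Herzberg relation gives a first estimate of the interface depth. AnAqSim's discharge-potential formulation (Strack 1989) is considerably more elaborate than this sketch:

```python
def interface_depth(head, rho_fresh=1000.0, rho_salt=1025.0):
    """Ghyben-Herzberg sketch: with a sharp interface and static salt water,
    the interface sits below sea level by a factor rho_f / (rho_s - rho_f)
    times the fresh-water head above sea level (~40x for typical seawater)."""
    return rho_fresh / (rho_salt - rho_fresh) * head

# A fresh-water head of 0.5 m above sea level implies an interface
# roughly 20 m below sea level.
```

    The 40-to-1 amplification is also why a small drawdown near a coastal wellfield can produce a large upconing of the interface, the situation examined in the paper's hypothetical application.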

  16. Control of enterprise interfaces for supply chain enterprise modeling

    SciTech Connect

    Interrante, L.D.; Macfarlane, J.F.

    1995-04-01

    There is a current trend for manufacturing enterprises in a supply chain of a particular industry to join forces in an attempt to promote efficiencies and improve competitive position. Such alliances occur in the context of specific legal and business agreements such that each enterprise retains a majority of its business and manufacturing information as private and shares other information with its trading partners. Shared information may include enterprise demand projections, capacities, finished goods inventories, and aggregate production schedules. Evidence of the trend toward information sharing includes the recent emphases on vendor-managed inventories, quick response, and Electronic Data Interchange (EDI) standards. The increased competition brought on by the global marketplace is driving industries to consider the advantages of trading partner agreements. Aggregate-level forecasts, supply-chain production smoothing, and aggregate-level inventory policies can reduce holding costs, record-keeping overhead, and lead time in product development. The goal of this research is to orchestrate information exchange among trading partners to allow for aggregate-level analysis to enhance supply chain efficiency. The notion of Enterprise Interface Control (EIC) is introduced as a means of accomplishing this end.

  17. Mathematical modeling of the head-disk interface (abstract)

    NASA Astrophysics Data System (ADS)

    Crone, Robert M.; Jhon, Myung S.

    1993-05-01

    State-of-the-art theoretical and numerical techniques required to simulate the head-disk interface (HDI) of future magnetic storage devices are presented. The severity of operating conditions (i.e., attempts to achieve flying heights as low as 40 nm) poses several challenges. Large transient pressure gradients can be established within the air bearing, leading to numerical oscillations as well as to increased program execution times. Enhanced gaseous rarefaction effects must also be incorporated into the analysis. In the present study, accurate nonoscillatory air bearing pressure distributions were obtained using a high resolution finite element algorithm to solve the generalized Reynolds equation. Higher order gaseous rarefaction effects are incorporated into the generalized Reynolds equation using the total mass flow rate coefficient predicted from the linearized Boltzmann equation. The form of the generalized Reynolds equation presented in this paper is an improved version of the continued fraction approximation previously proposed by Crone et al.1 A simple scaling analysis, based upon the results of the linearized Boltzmann equation, is also presented to study the effect of slider miniaturization, as well as to obtain a novel interpretation of accelerated wear and accelerated flyability test results.
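
    For orientation, a standard nondimensional form of the rarefaction-corrected (generalized) Reynolds equation for the air bearing is sketched below; the paper's exact continued-fraction form of the flow-rate coefficient may differ from this generic statement.

```latex
% Hedged sketch: generalized Reynolds equation in nondimensional form,
% with \tilde{Q}_p the Poiseuille flow-rate coefficient obtained from
% the linearized Boltzmann equation.
\frac{\partial}{\partial X}\!\left( \tilde{Q}_p\, P H^{3} \frac{\partial P}{\partial X} \right)
+ \frac{\partial}{\partial Y}\!\left( \tilde{Q}_p\, P H^{3} \frac{\partial P}{\partial Y} \right)
= \Lambda\, \frac{\partial (P H)}{\partial X} + \sigma\, \frac{\partial (P H)}{\partial T}
```

    Here P and H are the nondimensional pressure and spacing, Λ the bearing number, σ the squeeze number, and T time; rarefaction enters only through the coefficient Q̃p, which is why a better flow-rate model can be substituted without changing the solver structure.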

  18. An automated shell for management of parametric dispersion/deposition modeling

    SciTech Connect

    Paddock, R.A.; Absil, M.J.G.; Peerenboom, J.P.; Newsom, D.E.; North, M.J.; Coskey, R.J. Jr.

    1994-03-01

    In 1993, the US Army tasked Argonne National Laboratory to perform a study of chemical agent dispersion and deposition for the Chemical Stockpile Emergency Preparedness Program using an existing Army computer model. The study explored a wide range of situations in terms of six parameters: agent type, quantity released, liquid droplet size, release height, wind speed, and atmospheric stability. A number of discrete values of interest were chosen for each parameter resulting in a total of 18,144 possible different combinations of parameter values. Therefore, the need arose for a systematic method to assemble the large number of input streams for the model, filter out unrealistic combinations of parameter values, run the model, and extract the results of interest from the extensive model output. To meet these needs, we designed an automated shell for the computer model. The shell processed the inputs, ran the model, and reported the results of interest. By doing so, the shell compressed the time needed to perform the study and freed the researchers to focus on the evaluation and interpretation of the model predictions. The results of the study are still under review by the Army and other agencies; therefore, it would be premature to discuss the results in this paper. However, the design of the shell could be applied to other hazards for which multiple-parameter modeling is performed. This paper describes the design and operation of the shell as an example for other hazards and models.
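
    The shell's input-assembly step amounts to enumerating a parameter grid and filtering implausible combinations. The sketch below uses hypothetical value lists and an illustrative plausibility rule, not the study's actual inputs or its 18,144 combinations:

```python
from itertools import product

# Hypothetical parameter values standing in for the study's six dispersion
# parameters; the real value lists come from the Army model inputs.
parameters = {
    "agent": ["GB", "VX"],
    "quantity_kg": [10, 100, 1000],
    "droplet_um": [100, 500],
    "release_height_m": [0, 10, 100],
    "wind_speed_mps": [1, 3, 6],
    "stability_class": ["A", "D", "F"],
}

def realistic(combo):
    """Filter out physically implausible combinations (illustrative rule:
    very stable atmospheres do not occur with strong winds)."""
    return not (combo["stability_class"] == "F" and combo["wind_speed_mps"] >= 6)

keys = list(parameters)
combos = [dict(zip(keys, values)) for values in product(*parameters.values())]
runs = [c for c in combos if realistic(c)]
```

    Each surviving dictionary would then be formatted into one model input stream, run, and its output scraped for the quantities of interest.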

  19. Pilot interaction with cockpit automation 2: An experimental study of pilots' model and awareness of the Flight Management System

    NASA Technical Reports Server (NTRS)

    Sarter, Nadine B.; Woods, David D.

    1994-01-01

    Technological developments have made it possible to automate more and more functions on the commercial aviation flight deck and in other dynamic high-consequence domains. This increase in the degrees of freedom in design has shifted questions away from narrow technological feasibility. Many concerned groups, from designers and operators to regulators and researchers, have begun to ask questions about how we should use the possibilities afforded by technology skillfully to support and expand human performance. In this article, we report on an experimental study that addressed these questions by examining pilot interaction with the current generation of flight deck automation. Previous results on pilot-automation interaction derived from pilot surveys, incident reports, and training observations have produced a corpus of features and contexts in which human-machine coordination is likely to break down (e.g., automation surprises). We used these data to design a simulated flight scenario that contained a variety of probes designed to reveal pilots' mental model of one major component of flight deck automation: the Flight Management System (FMS). The events within the scenario were also designed to probe pilots' ability to apply their knowledge and understanding in specific flight contexts and to examine their ability to track the status and behavior of the automated system (mode awareness). Although pilots were able to 'make the system work' in standard situations, the results reveal a variety of latent problems in pilot-FMS interaction that can affect pilot performance in nonnormal time critical situations.

  20. General MACOS Interface for Modeling and Analysis for Controlled Optical Systems

    NASA Technical Reports Server (NTRS)

    Sigrist, Norbert; Basinger, Scott A.; Redding, David C.

    2012-01-01

    The General MACOS Interface (GMI) for Modeling and Analysis for Controlled Optical Systems (MACOS) enables the use of MATLAB as a front-end for JPL's critical optical modeling package, MACOS. MACOS is JPL's in-house optical modeling software, which has proven to be a superb tool for advanced systems engineering of optical systems. GMI, coupled with MACOS, allows for seamless interfacing with modeling tools from other disciplines to make possible integration of dynamics, structures, and thermal models with the addition of control systems for deformable optics and other actuated optics. This software package is designed as a tool for analysts to quickly and easily use MACOS without needing to be an expert at programming MACOS. The strength of MACOS is its ability to interface with various modeling/development platforms, allowing evaluation of system performance with thermal, mechanical, and optical modeling parameter variations. GMI provides an improved means for accessing selected key MACOS functionalities. The main objective of GMI is to marry the vast mathematical and graphical capabilities of MATLAB with the powerful optical analysis engine of MACOS, thereby providing a useful tool to anyone who can program in MATLAB. GMI also improves modeling efficiency by eliminating the need to write an interface function for each task/project, reducing error sources, speeding up user/modeling tasks, and making MACOS well suited for fast prototyping.

  1. A method of designing smartphone interface based on the extended user's mental model

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Li, Fengmin; Bian, Jiali; Pan, Juchen; Song, Song

    2017-01-01

    The user's mental model is the core guiding theory of product design, especially for practical products. A practical product is essentially a tool that users employ to meet their needs, and the most important feature of a tool is therefore usability. The design method based on the user's mental model provides a series of practical and feasible theoretical guidelines for improving the usability of the product according to the user's awareness of things. In this paper, we propose a method of designing the smartphone interface based on the extended user's mental model, according to further research on user groups. This approach achieves personalized customization of the smartphone application interface and enhances the efficiency of application use.

  2. CHANNEL MORPHOLOGY TOOL (CMT): A GIS-BASED AUTOMATED EXTRACTION MODEL FOR CHANNEL GEOMETRY

    SciTech Connect

    JUDI, DAVID; KALYANAPU, ALFRED; MCPHERSON, TIMOTHY; BERSCHEID, ALAN

    2007-01-17

    This paper describes an automated Channel Morphology Tool (CMT) developed in the ArcGIS 9.1 environment. The CMT creates cross-sections along a stream centerline and uses a digital elevation model (DEM) to create station points with elevations along each of the cross-sections. The generated cross-sections may then be exported into a hydraulic model. Along with rapid cross-section generation, the CMT also eliminates any cross-section overlaps that might occur due to the sinuosity of the channels, using the Cross-section Overlap Correction Algorithm (COCoA). The CMT was tested by extracting cross-sections from a 5-m DEM for a 50-km channel length in Houston, Texas. The extracted cross-sections were compared directly with surveyed cross-sections in terms of the cross-section area. Results indicated that the CMT-generated cross-sections satisfactorily matched the surveyed data.
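
    The core geometric operation, generating station points with elevations along a cross-section perpendicular to a centerline segment, can be sketched as follows. The DEM is abstracted as a callable `dem(x, y)` for illustration, and the overlap-correction algorithm (COCoA) is not reproduced:

```python
import numpy as np

def cross_section_stations(p0, p1, half_width, n_stations, dem):
    """Sketch of the CMT idea: build a cross-section perpendicular to the
    centerline segment p0 -> p1 at its midpoint, and sample station
    elevations from a DEM exposed as a callable dem(x, y)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    mid = (p0 + p1) / 2
    t = (p1 - p0) / np.linalg.norm(p1 - p0)   # unit tangent along the channel
    n = np.array([-t[1], t[0]])               # unit normal (perpendicular)
    offsets = np.linspace(-half_width, half_width, n_stations)
    points = mid + offsets[:, None] * n
    return [(x, y, dem(x, y)) for x, y in points]
```

    In the real tool the station (offset, elevation) pairs are what get exported to the hydraulic model; sinuous channels are where neighbouring cross-sections built this way can overlap, motivating COCoA.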

  3. An Accuracy Assessment of Automated Photogrammetric Techniques for 3d Modeling of Complex Interiors

    NASA Astrophysics Data System (ADS)

    Georgantas, A.; Brédif, M.; Pierrot-Desseilligny, M.

    2012-07-01

    This paper presents a comparison of automatic photogrammetric techniques to terrestrial laser scanning for 3D modelling of complex interior spaces. We evaluate the automated photogrammetric techniques not only in terms of their geometric quality compared to laser scanning but also in terms of monetary cost and acquisition and computational time. For this purpose we chose a modern building's stairway as the test site. APERO/MICMAC (©IGN), an open-source photogrammetric software suite, was used to produce the 3D photogrammetric point cloud, which was compared to the one acquired by a Leica ScanStation 2 laser scanner. After performing various qualitative and quantitative controls, we present the advantages and disadvantages of each 3D modelling method applied to a complex interior of a modern building.

  4. Automated finite element meshing of the lumbar spine: Verification and validation with 18 specimen-specific models.

    PubMed

    Campbell, J Q; Coombs, D J; Rao, M; Rullkoetter, P J; Petrella, A J

    2016-09-06

    The purpose of this study was to seek broad verification and validation of human lumbar spine finite element models created using a previously published automated algorithm. The automated algorithm takes segmented CT scans of lumbar vertebrae, automatically identifies important landmarks and contact surfaces, and creates a finite element model. Mesh convergence was evaluated by examining changes in key output variables in response to mesh density. Semi-direct validation was performed by comparing experimental results for a single specimen to the automated finite element model results for that specimen with calibrated material properties from a prior study. Indirect validation was based on a comparison of results from automated finite element models of 18 individual specimens, all using one set of generalized material properties, to a range of data from the literature. A total of 216 simulations were run and compared to 186 experimental data ranges in all six primary bending modes up to 7.8 Nm with follower loads up to 1000 N. Mesh convergence results showed less than a 5% difference in key variables when the original mesh density was doubled. The semi-direct validation results showed that the automated method produced results comparable to manual finite element modeling methods. The indirect validation results showed a wide range of outcomes due to variations in the geometry alone. The studies showed that the automated models can be used to reliably evaluate lumbar spine biomechanics, specifically within our intended context of use: in pure bending modes, under relatively low non-injurious simulated in vivo loads, to predict torque rotation response, disc pressures, and facet forces.

  5. A methodology for model-based development and automated verification of software for aerospace systems

    NASA Astrophysics Data System (ADS)

    Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.

    Today's software for aerospace systems typically is very complex. This is due to the increasing number of features as well as the high demand for safety, reliability, and quality. This complexity also leads to significantly higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking are becoming more popular. The latter examine the whole state space and, consequently, result in full test coverage. Nevertheless, despite the obvious advantages, this technique is still rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive work flow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use the SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements for our methodology. The SCADE Suite is well established in the avionics and defense, rail transportation, energy and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.

  6. Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments

    NASA Astrophysics Data System (ADS)

    Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin

    Developments in computer technology over the last decade are changing the ways computers are used. The emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments, and especially their user interfaces, a challenging and time-consuming task. We propose a model-based approach, which allows adapting the user interface at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language defining the fundamentals and constraints of the user interface, a runtime architecture exploits the description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting applications to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as a running example to illustrate the approach.

  7. Fullerene film on metal surface: Diffusion of metal atoms and interface model

    SciTech Connect

    Li, Wen-jie; Li, Hai-Yang; Li, Hong-Nian; Wang, Peng; Wang, Xiao-Xiong; Wang, Jia-Ou; Wu, Rui; Qian, Hai-Jie; Ibrahim, Kurash

    2014-05-12

    We try to understand the fact that fullerene film behaves as an n-type semiconductor in electronic devices and establish a model describing the energy level alignment at fullerene/metal interfaces. The C60/Ag(100) system was taken as a prototype and studied with photoemission measurements. The photoemission spectra revealed that the Ag atoms of the substrate diffused far into the C60 film and donated electrons to the molecules. So the C60 film became an n-type semiconductor with the Ag atoms acting as dopants. The C60/Ag(100) interface should be understood as two sub-interfaces on both sides of the molecular layer directly contacting the substrate. One sub-interface is Fermi level alignment, and the other is vacuum level alignment.

  8. Dosimetry Modeling for Predicting Radiolytic Production at the Spent Fuel - Water Interface

    SciTech Connect

    Miller, William H.; Kline, Amanda J.; Hanson, Brady D.

    2006-04-30

    Modeling of the alpha, beta, and gamma dose from spent fuel as a function of particle size and fuel-to-water ratio was examined. These doses will be combined with modeling of G values and interactions to determine the concentrations of the various species formed at the fuel-water interface and their effect on dissolution rates.

  9. Rethinking Design Process: Using 3D Digital Models as an Interface in Collaborative Session

    ERIC Educational Resources Information Center

    Ding, Suining

    2008-01-01

    This paper describes a pilot study for an alternative design process by integrating a designer-user collaborative session with digital models. The collaborative session took place in a 3D AutoCAD class for a real world project. The 3D models served as an interface for designer-user collaboration during the design process. Students not only learned…

  10. Parallelization and High-Performance Computing Enables Automated Statistical Inference of Multi-scale Models.

    PubMed

    Jagiella, Nick; Rickert, Dennis; Theis, Fabian J; Hasenauer, Jan

    2017-02-22

    Mechanistic understanding of multi-scale biological processes, such as cell proliferation in a changing biological tissue, is readily facilitated by computational models. While tools exist to construct and simulate multi-scale models, the statistical inference of the unknown model parameters remains an open problem. Here, we present and benchmark a parallel approximate Bayesian computation sequential Monte Carlo (pABC SMC) algorithm, tailored for high-performance computing clusters. pABC SMC is fully automated and returns reliable parameter estimates and confidence intervals. By running the pABC SMC algorithm for ∼10^6 hr, we parameterize multi-scale models that accurately describe quantitative growth curves and histological data obtained in vivo from individual tumor spheroid growth in media droplets. The models capture the hybrid deterministic-stochastic behaviors of 10^5-10^6 cells growing in a 3D dynamically changing nutrient environment. The pABC SMC algorithm reliably converges to a consistent set of parameters. Our study demonstrates a proof of principle for robust, data-driven modeling of multi-scale biological systems and the feasibility of multi-scale model parameterization through statistical inference.
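
    A heavily simplified, sequential relative of the ABC SMC idea (a shrinking tolerance schedule, but no perturbation kernels, importance weights, or parallelism as in the paper's pABC SMC) can be sketched as:

```python
import numpy as np

def abc_smc(simulate, distance, prior_sample, tolerances, n_particles=200, rng=None):
    """Minimal ABC sketch with a shrinking tolerance schedule. At each
    generation, particles are drawn from the prior (first round) or
    resampled and jittered from the previous population, and accepted
    when the simulated summary lies within the current tolerance."""
    rng = np.random.default_rng(rng)
    particles = None
    for eps in tolerances:
        accepted = []
        while len(accepted) < n_particles:
            if particles is None:
                theta = prior_sample(rng)                    # draw from prior
            else:
                theta = rng.choice(particles) + rng.normal(0, 0.1)  # jitter
            if distance(simulate(theta, rng)) <= eps:
                accepted.append(theta)
        particles = np.array(accepted)
    return particles
```

    For instance, inferring the mean of a normal from an observed sample mean of 3.0 concentrates the final population near 3; the full algorithm adds importance weights so the result is a proper posterior approximation, and the paper's contribution is making the many independent simulations run efficiently on a cluster.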

  11. Atomistic Modeling of Corrosion Events at the Interface between a Metal and Its Environment

    DOE PAGES

    Taylor, Christopher D.

    2012-01-01

    Atomistic simulation is a powerful tool for probing the structure and properties of materials and the nature of chemical reactions. Corrosion is a complex process that involves chemical reactions occurring at the interface between a material and its environment and is, therefore, highly suited to study by atomistic modeling techniques. In this paper, the complex nature of corrosion processes and mechanisms is briefly reviewed. Various atomistic methods for exploring corrosion mechanisms are then described, and recent applications in the literature surveyed. Several instances of the application of atomistic modeling to corrosion science are then reviewed in detail, including studies of the metal-water interface, the reaction of water on electrified metallic interfaces, the dissolution of metal atoms from metallic surfaces, and the role of competitive adsorption in controlling the chemical nature and structure of a metallic surface. Some perspectives are then given concerning the future of atomistic modeling in the field of corrosion science.

  12. Geometric Modeling Applications Interface Program. Schema Manager User’s Manual

    DTIC Science & Technology

    1990-11-01

    Standard, Product Definition Data Interface (PDDI), Project 5601, Contract F33516-82-5036, July 1984. Information Modeling Manual IDEF-Extended ( IDEFIX ...Englewood Cliffs, N.J. Differential Geometry of Curves and Surfaces, M. P. de Carmo, Prentice-Hall, Inc., 1976. IDEFIX Readers Reference, D. Appleton...IDEF Information Modeling. IDEFIX -IDEF Extended Information Modeling. IDEF2 -IDEF Dynamics Modeling. IDSS - Integrated Decision Support System

  13. Diffuse photon remission along unique spiral paths on a cylindrical interface is modeled by photon remission along a straight line on a semi-infinite interface.

    PubMed

    Zhang, Anqi; Piao, Daqing; Yao, Gang; Bunting, Charles F; Jiang, Yuhao

    2011-03-01

    We demonstrate that, for a long cylindrical applicator that interfaces concavely or convexly with a scattering-dominant medium, a unique set of spiral-shaped directions exist on the tissue-applicator interface, along which the diffuse photon remission is essentially modeled by the photon remission along a straight line on a semi-infinite interface. This interesting phenomenon, which is validated in steady state in this work by finite-element and Monte Carlo methods, may be particularly useful for simplifying deeper-tissue sensing in endoscopic imaging geometry.

  14. Numerical Modelling of Subduction Plate Interface, Technical Advances for Outstanding Questions

    NASA Astrophysics Data System (ADS)

    Le Pourhiet, L.; Ruh, J.; Pranger, C. C.; Zheng, L.; van Dinther, Y.; May, D.; Gerya, T.; Burov, E. B.

    2015-12-01

    The subduction zone interface is the site of the largest earthquakes on Earth. Compared to the size of a subduction zone itself, it constitutes a very thin zone (a few kilometers) with effective rheological behaviour that varies as a function of pressure, temperature, loading, the nature of the material locally embedded within the interface, and the amount of water, melts and CO2. Capturing the behaviour of this interface and its evolution in time is crucial, yet modelling it is not an easy task. In the last decade, thermo-mechanical models of subduction zones have flourished in the literature. They mostly focused on the long-term dynamics of subduction, e.g. flat subduction, slab detachment or exhumation. The models were validated against PTt paths of exhumed material as well as topography. The models that could reproduce the data all included a mechanically weak subduction channel made of extremely weak and non-cohesive material. While this subduction channel model is very convenient at large scale and might apply to some real subduction zones, it does not capture the abundant geological field evidence pointing to the exhumation of very large slices of almost pristine oceanic crust along localised shear zones. Moreover, modelling of seismological and geodetic data using short-term tectonic modelling approaches also points out that large localised patches rupture within the subduction interface, which is in accordance with geological data but not with large-scale long-term tectonic models. I will present how high-resolution models permit slicing at the subduction interface and give clues on how the plate coupling and effective location of the plate interface vary over a time scale of a few million years. 
I will then discuss the implications of these new high-resolution long-term models of subduction zones for earthquake generation, and report progress in the development of self-consistent thermomechanical codes which can handle large strain, high resolution

  15. Automated parameter tuning applied to sea ice in a global climate model

    NASA Astrophysics Data System (ADS)

    Roach, Lettie A.; Tett, Simon F. B.; Mineter, Michael J.; Yamazaki, Kuniko; Rae, Cameron D.

    2017-03-01

    This study investigates the hypothesis that a significant portion of spread in climate model projections of sea ice is due to poorly-constrained model parameters. New automated methods for optimization are applied to historical sea ice in a global coupled climate model (HadCM3) in order to calculate the combination of parameters required to reduce the difference between simulation and observations to within the range of model noise. The optimized parameters result in a simulated sea-ice time series which is more consistent with Arctic observations throughout the satellite record (1980-present), particularly in the September minimum, than the standard configuration of HadCM3. Divergence from observed Antarctic trends and mean regional sea ice distribution reflects broader structural uncertainty in the climate model. We also find that the optimized parameters do not cause adverse effects on the model climatology. This simple approach provides evidence for the contribution of parameter uncertainty to spread in sea ice extent trends and could be customized to investigate uncertainties in other climate variables.
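
    As a hedged illustration of the idea only (not HadCM3 or the authors' optimizer), a one-parameter version of such automated tuning can be written as a coordinate search that halves its step whenever no improvement is found, stopping once the simulated series matches the observations to within a noise floor. The toy "sea-ice" model and every name below are invented for the sketch.

```python
def model(albedo_param, years=10):
    """Hypothetical toy model: ice extent declines faster when the
    albedo parameter is lower. Stands in for an expensive GCM run."""
    extent = 8.0
    series = []
    for _ in range(years):
        extent -= 0.3 * (1.0 - albedo_param)
        series.append(extent)
    return series

def cost(param, obs):
    """Mean squared mismatch between simulation and observations."""
    sim = model(param)
    return sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)

obs = model(0.6)  # pretend observations generated with albedo 0.6

# Coordinate search: try a step in each direction; halve the step when
# neither direction improves; stop below the noise floor.
param, step, noise_floor = 0.45, 0.1, 1e-6
while step > 1e-5 and cost(param, obs) > noise_floor:
    improved = False
    for cand in (param - step, param + step):
        if cost(cand, obs) < cost(param, obs):
            param, improved = cand, True
    if not improved:
        step *= 0.5
print(round(param, 3))  # prints 0.6
```

In practice the optimum is only approached to within the model-noise range rather than recovered exactly, which is precisely the stopping criterion the abstract describes.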

  16. Examining Uncertainty in Demand Response Baseline Models and Variability in Automated Response to Dynamic Pricing

    SciTech Connect

    Mathieu, Johanna L.; Callaway, Duncan S.; Kiliccote, Sila

    2011-08-15

    Controlling electric loads to deliver power system services presents a number of interesting challenges. For example, changes in electricity consumption of Commercial and Industrial (C&I) facilities are usually estimated using counterfactual baseline models, and model uncertainty makes it difficult to precisely quantify control responsiveness. Moreover, C&I facilities exhibit variability in their response. This paper seeks to understand baseline model error and demand-side variability in responses to open-loop control signals (i.e. dynamic prices). Using a regression-based baseline model, we define several Demand Response (DR) parameters, which characterize changes in electricity use on DR days, and then present a method for computing the error associated with DR parameter estimates. In addition to analyzing the magnitude of DR parameter error, we develop a metric to determine how much observed DR parameter variability is attributable to real event-to-event variability versus simply baseline model error. Using data from 38 C&I facilities that participated in an automated DR program in California, we find that DR parameter errors are large. For most facilities, observed DR parameter variability is likely explained by baseline model error, not real DR parameter variability; however, a number of facilities exhibit real DR parameter variability. In some cases, the aggregate population of C&I facilities exhibits real DR parameter variability, resulting in implications for the system operator with respect to both resource planning and system stability.
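
    A minimal illustration of the baseline idea follows, using a simple averaging baseline in place of the paper's regression model and wholly hypothetical load numbers: the DR parameter is the event-day deviation from baseline, and the scatter across non-event days gives a rough error bar on that estimate.

```python
import statistics

# Hypothetical hourly loads (kW) for three non-event days and one DR event day.
nonevent = [[100, 102, 98], [101, 99, 100], [99, 103, 101]]
event = [92, 95, 90]

# Baseline = hour-by-hour average over non-event days (stand-in for regression).
baseline = [statistics.mean(day[h] for day in nonevent) for h in range(3)]

# DR parameter: estimated load shed during the event.
shed = [b - e for b, e in zip(baseline, event)]

# Baseline model error, estimated from non-event-day scatter at each hour.
err = [statistics.stdev(day[h] for day in nonevent) for h in range(3)]

print([round(s, 1) for s in shed], [round(e, 1) for e in err])
```

When the error bars are comparable to the hour-to-hour or event-to-event spread in the shed estimates, the observed "variability" may be baseline error rather than real response variability, which is the distinction the paper's metric formalizes.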

  17. Automated Generation of Fault Management Artifacts from a Simple System Model

    NASA Technical Reports Server (NTRS)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.

  18. A model for investigating the behaviour of non-spherical particles at interfaces.

    PubMed

    Morris, G; Neethling, S J; Cilliers, J J

    2011-02-01

    This paper introduces a simple method for modelling non-spherical particles with a fixed contact angle at an interface whilst also providing a method to fix the particle's orientation. It is shown how a wide variety of particle shapes (spherical, ellipsoidal, disc) can be created from a simple initial geometry containing only six vertices. The shapes are made from one continuous surface with edges and corners treated as smooth curves not discontinuities. As such, particles approaching cylindrical and orthorhombic shapes can be simulated but the contact angle crossing the edges will be fixed. Non-spherical particles, when attached to an interface, can cause large distortions in the surface which affect the forces acting on the particle. The model presented is capable of resolving this distortion of the surface around the particle at the interface as well as allowing for the particle's orientation to be controlled. It is shown that, when considering orthorhombic particles with rounded edges, the flatter the particle the more energetically stable it is to sit flat at the interface. However, as the particle becomes more cube-like, the effects of contact angle have a greater effect on the energetically stable orientations. Results for cylindrical particles with rounded edges are also discussed. The model presented allows the user to define the shape, dimensions, contact angle and orientation of the particle at the interface, allowing more in-depth investigation of the complex phenomenon of 3D film distortion around an attached particle and the forces that arise due to it.

  19. Operando X-ray Investigation of Electrode/Electrolyte Interfaces in Model Solid Oxide Fuel Cells.

    PubMed

    Volkov, Sergey; Vonk, Vedran; Khorshidi, Navid; Franz, Dirk; Kubicek, Markus; Kilic, Volkan; Felici, Roberto; Huber, Tobias M; Navickas, Edvinas; Rupp, Ghislain M; Fleig, Jürgen; Stierle, Andreas

    2016-06-14

    We employed operando anomalous surface X-ray diffraction to investigate the buried interface between the cathode and the electrolyte of a model solid oxide fuel cell with atomic resolution. The cell was studied under different oxygen pressures at elevated temperatures and polarizations by external potential control. Making use of anomalous X-ray diffraction effects at the Y and Zr K-edges allowed us to resolve the interfacial structure and chemical composition of a (100)-oriented, 9.5 mol % yttria-stabilized zirconia (YSZ) single crystal electrolyte below a La0.6Sr0.4CoO3-δ (LSC) electrode. We observe yttrium segregation toward the YSZ/LSC electrolyte/electrode interface under reducing conditions. Under oxidizing conditions, the interface becomes Y depleted. The yttrium segregation is corroborated by an enhanced outward relaxation of the YSZ interfacial metal ion layer. At the same time, an increase in point defect concentration in the electrolyte at the interface was observed, as evidenced by reduced YSZ crystallographic site occupancies for the cations as well as the oxygen ions. Such changes in composition are expected to strongly influence the oxygen ion transport through this interface which plays an important role for the performance of solid oxide fuel cells. The structure of the interface is compared to the bare YSZ(100) surface structure near the microelectrode under identical conditions and to the structure of the YSZ(100) surface prepared under ultrahigh vacuum conditions.

  20. Operando X-ray Investigation of Electrode/Electrolyte Interfaces in Model Solid Oxide Fuel Cells

    PubMed Central

    2016-01-01

    We employed operando anomalous surface X-ray diffraction to investigate the buried interface between the cathode and the electrolyte of a model solid oxide fuel cell with atomic resolution. The cell was studied under different oxygen pressures at elevated temperatures and polarizations by external potential control. Making use of anomalous X-ray diffraction effects at the Y and Zr K-edges allowed us to resolve the interfacial structure and chemical composition of a (100)-oriented, 9.5 mol % yttria-stabilized zirconia (YSZ) single crystal electrolyte below a La0.6Sr0.4CoO3−δ (LSC) electrode. We observe yttrium segregation toward the YSZ/LSC electrolyte/electrode interface under reducing conditions. Under oxidizing conditions, the interface becomes Y depleted. The yttrium segregation is corroborated by an enhanced outward relaxation of the YSZ interfacial metal ion layer. At the same time, an increase in point defect concentration in the electrolyte at the interface was observed, as evidenced by reduced YSZ crystallographic site occupancies for the cations as well as the oxygen ions. Such changes in composition are expected to strongly influence the oxygen ion transport through this interface which plays an important role for the performance of solid oxide fuel cells. The structure of the interface is compared to the bare YSZ(100) surface structure near the microelectrode under identical conditions and to the structure of the YSZ(100) surface prepared under ultrahigh vacuum conditions. PMID:27346923

  1. Distributed model predictive control with hierarchical architecture for communication: application in automated irrigation channels

    NASA Astrophysics Data System (ADS)

    Farhadi, Alireza; Khodabandehlou, Ali

    2016-08-01

    This paper is concerned with a distributed model predictive control (DMPC) method that is based on a distributed optimisation method with two-level architecture for communication. Feasibility (constraints satisfaction by the approximated solution), convergence and optimality of this distributed optimisation method are mathematically proved. For an automated irrigation channel, the satisfactory performance of the proposed DMPC method in attenuation of the undesired upstream transient error propagation and amplification phenomenon is illustrated and compared with the performance of another DMPC method that exploits a single-level architecture for communication. It is illustrated that the DMPC that exploits a two-level architecture for communication has a better performance by better managing communication overhead.

  2. An Automated Application Framework to Model Disordered Materials Based on a High Throughput First Principles Approach

    NASA Astrophysics Data System (ADS)

    Oses, Corey; Yang, Kesong; Curtarolo, Stefano; Duke Univ Collaboration; UC San Diego Collaboration

    Predicting material properties of disordered systems remains a long-standing and formidable challenge in rational materials design. To address this issue, we introduce an automated software framework capable of modeling partial occupation within disordered materials using a high-throughput (HT) first principles approach. At the heart of the approach is the construction of supercells containing a virtually equivalent stoichiometry to the disordered material. All unique supercell permutations are enumerated and material properties of each are determined via HT electronic structure calculations. In accordance with a canonical ensemble of supercell states, the framework evaluates ensemble average properties of the system as a function of temperature. As proof of concept, we examine the framework's final calculated properties of a zinc chalcogenide (ZnS1-xSex), a wide-gap oxide semiconductor (MgxZn1-xO), and an iron alloy (Fe1-xCux) at various stoichiometries.

  3. Smart Frameworks and Self-Describing Models: Model Metadata for Automated Coupling of Hydrologic Process Components (Invited)

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2013-12-01

    Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. 
If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework
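
    The standardized interface described above can be sketched as a toy process model exposing both control functions (initialize, update, finalize) and description functions a framework can query. The method names below loosely follow the Basic Model Interface pattern used by CSDMS but are illustrative, not the actual API.

```python
class LinearReservoir:
    """Toy process model (linear storage decay) wrapped in a
    BMI-style interface so a framework can drive it without
    knowing its internals. Method names are illustrative."""

    # --- control functions ---
    def initialize(self, storage=10.0, k=0.1, dt=1.0):
        self.storage, self.k, self.dt, self.time = storage, k, dt, 0.0

    def update(self):
        # Advance the state one time step: dS/dt = -k * S (explicit Euler).
        self.storage -= self.k * self.storage * self.dt
        self.time += self.dt

    def finalize(self):
        pass  # release resources, close files, etc.

    # --- description functions ---
    def get_output_var_names(self):
        return ["water_storage"]

    def get_value(self, name):
        assert name == "water_storage"
        return self.storage

# A caller (e.g. a coupling framework) drives the model generically:
model = LinearReservoir()
model.initialize()
for _ in range(5):
    model.update()
model.finalize()
print(model.get_output_var_names(), round(model.get_value("water_storage"), 3))
# prints ['water_storage'] 5.905
```

Because every wrapped model answers the same queries, a framework can match output variables of one component to input variables of another and schedule their update calls automatically, which is the plug-and-play coupling the abstract describes.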

  4. Modeling Nitrogen Cycle at the Surface-Subsurface Water Interface

    NASA Astrophysics Data System (ADS)

    Marzadri, A.; Tonina, D.; Bellin, A.

    2011-12-01

    Anthropogenic activities, primarily food and energy production, have altered the global nitrogen cycle, increasing reactive dissolved inorganic nitrogen, Nr, chiefly ammonium NH4+ and nitrate NO3-, availability in many streams worldwide. Increased Nr promotes biological activity often with negative consequences such as water body eutrophication and emission of nitrous oxide gas, N2O, an important greenhouse gas as a by-product of denitrification. The hyporheic zone may play an important role in processing Nr and returning it to the atmosphere. Here, we present a process-based three-dimensional semi-analytical model, which couples hyporheic hydraulics with biogeochemical reactions and transport equations. Transport is solved by means of particle tracking with negligible local dispersion, and biogeochemical reactions are modeled by linearized Monod's kinetics with temperature-dependent reaction rate coefficients. Comparison of measured and predicted N2O emissions from seven natural streams shows a good match. We apply our model to gravel bed rivers with alternate bar morphology to investigate the role of hyporheic hydraulics, depth of alluvium, relative availability of stream concentrations of NO3- and NH4+, and water temperature on nitrogen gradients within the sediment. Our model shows complex concentration dynamics within the hyporheic zone, which depend on hyporheic residence time distribution and consequently on streambed morphology. Nitrogen gas emissions from the hyporheic zone increase with alluvium depth in large low-gradient streams but not in small steep streams. On the other hand, hyporheic water temperature influences nitrification/denitrification processes more in small steep streams than in large low-gradient streams, because the long residence times in the latter offset the slow reaction rates induced by low temperatures. The overall conclusion of our analysis is that river morphology has a major impact on biogeochemical processes such as nitrification

  5. Accident prediction model for railway-highway interfaces.

    PubMed

    Oh, Jutaek; Washington, Simon P; Nam, Doohee

    2006-03-01

    Considerable past research has explored relationships between vehicle accidents and geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States and the safety effects of crossing elements obtained using Korea data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad crossing related vehicle crashes.

  6. Model studies of Rayleigh instabilities via microdesigned interfaces

    SciTech Connect

    Glaeser, Andreas M.

    2000-10-17

    The energetic and kinetic properties of surfaces play a critical role in defining the microstructural changes that occur during sintering and high-temperature use of ceramics. Characterization of surface diffusion in ceramics is particularly difficult, and significant variations in reported values of surface diffusivities arise even in well-studied systems. Effects of impurities, surface energy anisotropy, and the onset of surface attachment limited kinetics (SALK) are believed to contribute to this variability. An overview of the use of Rayleigh instabilities as a means of characterizing surface diffusivities is presented. The development of models of morphological evolution that account for effects of surface energy anisotropy is reviewed, and the potential interplay between impurities and surface energy anisotropy is addressed. The status of experimental studies of Rayleigh instabilities in sapphire utilizing lithographically introduced pore channels of controlled geometry and crystallography is summarized. Results of model studies indicate that impurities can significantly influence both the spatial and temporal characteristics of Rayleigh instabilities; this is attributed at least in part to impurity effects on the surface energy anisotropy. Related model experiments indicate that the onset of SALK may also contribute significantly to apparent variations in surface diffusion coefficients.

  7. Modeling and matching of landmarks for automation of Mars Rover localization

    NASA Astrophysics Data System (ADS)

    Wang, Jue

    The Mars Exploration Rover (MER) mission, begun in January 2004, has been extremely successful. However, decision-making for many operation tasks of the current MER mission and the 1997 Mars Pathfinder mission is performed on Earth through a predominantly manual, time-consuming process. Unmanned planetary rover navigation is ideally expected to reduce rover idle time, diminish the need for entering safe-mode, and dynamically handle opportunistic science events without required communication to Earth. Successful automation of rover navigation and localization during extraterrestrial exploration requires that accurate position and attitude information can be received by a rover and that the rover has the support of simultaneous localization and mapping. An integrated approach with Bundle Adjustment (BA) and Visual Odometry (VO) can efficiently refine the rover position. However, during the MER mission, BA is done manually because of the difficulty in automating the cross-site tie point selection. This dissertation proposes an automatic approach to select cross-site tie points from multiple rover sites based on the methods of landmark extraction, landmark modeling, and landmark matching. The first step in this approach is that important landmarks such as craters and rocks are defined. Methods of automatic feature extraction and landmark modeling are then introduced. Complex models with orientation angles and simple models without those angles are compared. The results have shown that simple models can provide reasonably good results. Next, the sensitivity of different modeling parameters is analyzed. Based on this analysis, cross-site rocks are matched through two complementary stages: rock distribution pattern matching and rock model matching. In addition, a preliminary experiment on orbital and ground landmark matching is also briefly introduced. Finally, the reliability of the cross-site tie points selection is validated by fault detection, which

  8. Automated As-Built Model Generation of Subway Tunnels from Mobile LiDAR Data

    PubMed Central

    Arastounia, Mostafa

    2016-01-01

    This study proposes fully-automated methods for as-built model generation of subway tunnels employing mobile Light Detection and Ranging (LiDAR) data. The employed dataset is acquired by a Velodyne HDL 32E and covers 155 m of a subway tunnel containing six million points. First, the tunnel’s main axis and cross sections are extracted. Next, a preliminary model is created by fitting an ellipse to each extracted cross section. The model is refined by employing residual analysis and Baarda’s data snooping method to eliminate outliers. The final model is then generated by applying least squares adjustment to outlier-free data. The obtained results indicate that the tunnel’s main axis and 1551 cross sections at 0.1 m intervals are successfully extracted. Cross sections have an average semi-major axis of 7.8508 m with a standard deviation of 0.2 mm and semi-minor axis of 7.7509 m with a standard deviation of 0.1 mm. The average normal distance of points from the constructed model (average absolute error) is also 0.012 m. The developed algorithm is applicable to tunnels with any horizontal orientation and degree of curvature since it makes no assumptions, nor does it use any a priori knowledge regarding the tunnel’s curvature and horizontal orientation. PMID:27649172
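
    The fit-then-reject-outliers pipeline described above can be sketched with a circle in place of an ellipse (the Kasa linear least-squares fit) and a plain residual threshold in place of Baarda's data snooping. Dimensions, noise levels, and the threshold below are illustrative, not the paper's values.

```python
import math
import random

def fit_circle(points):
    """Kasa circle fit: x^2 + y^2 = a*x + b*y + c, solved by linear
    least squares via the 3x3 normal equations."""
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        row = [x, y, 1.0]
        t = x * x + y * y
        for i in range(3):
            rhs[i] += row[i] * t
            for j in range(3):
                A[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination on the 3x3 system (SPD, so no pivoting needed).
    for i in range(3):
        p = A[i][i]
        for j in range(3):
            A[i][j] /= p
        rhs[i] /= p
        for k in range(3):
            if k != i:
                f = A[k][i]
                for j in range(3):
                    A[k][j] -= f * A[i][j]
                rhs[k] -= f * rhs[i]
    a, b, c = rhs
    cx, cy = a / 2.0, b / 2.0
    r = math.sqrt(c + cx * cx + cy * cy)
    return cx, cy, r

def fit_with_outlier_rejection(points, threshold=0.05):
    """Fit, drop points whose radial residual exceeds the threshold
    (a crude stand-in for Baarda's data snooping), then refit."""
    cx, cy, r = fit_circle(points)
    kept = [(x, y) for x, y in points
            if abs(math.hypot(x - cx, y - cy) - r) <= threshold]
    return fit_circle(kept)

# Synthetic cross section: radius 7.8 m with millimetre-level noise ...
random.seed(1)
pts = [(7.8 * math.cos(t / 100 * 2 * math.pi) + random.gauss(0, 0.005),
        7.8 * math.sin(t / 100 * 2 * math.pi) + random.gauss(0, 0.005))
       for t in range(100)]
pts.append((7.8, 1.0))  # ... plus one gross outlier off the tunnel wall
cx, cy, r = fit_with_outlier_rejection(pts)
print(round(r, 2))
```

An ellipse fit replaces the three linear parameters with five (or six, for a general conic), but the residual-analysis step works the same way.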

  9. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.

    2016-03-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  10. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.

    2015-11-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  11. Automated As-Built Model Generation of Subway Tunnels from Mobile LiDAR Data.

    PubMed

    Arastounia, Mostafa

    2016-09-13

    This study proposes fully-automated methods for as-built model generation of subway tunnels employing mobile Light Detection and Ranging (LiDAR) data. The employed dataset is acquired by a Velodyne HDL 32E and covers 155 m of a subway tunnel containing six million points. First, the tunnel's main axis and cross sections are extracted. Next, a preliminary model is created by fitting an ellipse to each extracted cross section. The model is refined by employing residual analysis and Baarda's data snooping method to eliminate outliers. The final model is then generated by applying least squares adjustment to outlier-free data. The obtained results indicate that the tunnel's main axis and 1551 cross sections at 0.1 m intervals are successfully extracted. Cross sections have an average semi-major axis of 7.8508 m with a standard deviation of 0.2 mm and semi-minor axis of 7.7509 m with a standard deviation of 0.1 mm. The average normal distance of points from the constructed model (average absolute error) is also 0.012 m. The developed algorithm is applicable to tunnels with any horizontal orientation and degree of curvature since it makes no assumptions, nor does it use any a priori knowledge regarding the tunnel's curvature and horizontal orientation.
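
The core fitting step the abstract describes (an ellipse fitted to each extracted cross section) can be sketched with a plain algebraic least-squares conic fit. This is an illustrative sketch only; the paper additionally applies residual analysis and Baarda's data snooping to reject outliers before the final adjustment.

```python
import numpy as np

def fit_ellipse(x, y):
    """Algebraic least-squares conic fit: a*x^2 + b*xy + c*y^2 + d*x + e*y = 1.
    Returns the five conic coefficients."""
    D = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    return coeffs

# synthetic centred, axis-aligned cross section using the paper's mean semi-axes
rng = np.random.default_rng(42)
t = np.linspace(0.0, 2.0 * np.pi, 360)
x = 7.8508 * np.cos(t) + rng.normal(0.0, 0.005, t.size)
y = 7.7509 * np.sin(t) + rng.normal(0.0, 0.005, t.size)
a, b, c, d, e = fit_ellipse(x, y)
# for a centred, axis-aligned ellipse a = 1/A^2 and c = 1/B^2
semi_major, semi_minor = 1.0 / np.sqrt(a), 1.0 / np.sqrt(c)
```

The recovered semi-axes match the generating values closely; on real LiDAR sections, the outlier-rejection steps would run between the preliminary fit and the final adjustment.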

  12. Modelling and simulation of a moving interface problem: freeze drying of black tea extract

    NASA Astrophysics Data System (ADS)

    Aydin, Ebubekir Sıddık; Yucel, Ozgun; Sadikoglu, Hasan

    2017-01-01

    The moving interface separates the material that is subjected to the freeze drying process as dried and frozen. Therefore, the accurate modeling the moving interface reduces the process time and energy consumption by improving the heat and mass transfer predictions during the process. To describe the dynamic behavior of the drying stages of the freeze-drying, a case study of brewed black tea extract in storage trays including moving interface was modeled that the heat and mass transfer equations were solved using orthogonal collocation method based on Jacobian polynomial approximation. Transport parameters and physical properties describing the freeze drying of black tea extract were evaluated by fitting the experimental data using Levenberg-Marquardt algorithm. Experimental results showed good agreement with the theoretical predictions.
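
The parameter-fitting step named in the abstract (Levenberg-Marquardt) can be sketched with a hand-rolled damped Gauss-Newton loop. The moisture-loss curve below is a deliberately simple stand-in, not the paper's coupled heat- and mass-transfer model.

```python
import numpy as np

def model(t, p):
    # illustrative first-order moisture-loss curve m(t) = m_inf + (1 - m_inf)*exp(-k*t)
    m_inf, k = p
    return m_inf + (1.0 - m_inf) * np.exp(-k * t)

def jacobian(t, p):
    m_inf, k = p
    e = np.exp(-k * t)
    return np.column_stack([1.0 - e, -(1.0 - m_inf) * t * e])

def levenberg_marquardt(t, data, p0, n_iter=100, lam=1e-3):
    """Minimal LM loop: damped normal equations with adaptive damping."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = model(t, p) - data
        J = jacobian(t, p)
        A = J.T @ J
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -J.T @ r)
        if np.sum((model(t, p + step) - data) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept: move toward Gauss-Newton
        else:
            lam *= 2.0                      # reject: move toward gradient descent
    return p

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 50)
data = model(t, (0.05, 0.6)) + rng.normal(0.0, 0.005, t.size)
p_fit = levenberg_marquardt(t, data, p0=(0.3, 1.5))
```

In the paper the same role is played by fitting the transport parameters of the freeze-drying model to experimental drying data.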

  13. Diffuse interface models of locally inextensible vesicles in a viscous fluid

    NASA Astrophysics Data System (ADS)

    Aland, Sebastian; Egerer, Sabine; Lowengrub, John; Voigt, Axel

    2014-11-01

    We present a new diffuse interface model for the dynamics of inextensible vesicles in a viscous fluid with inertial forces. A new feature of this work is the implementation of the local inextensibility condition in the diffuse interface context. Local inextensibility is enforced by using a local Lagrange multiplier, which provides the necessary tension force at the interface. We introduce a new equation for the local Lagrange multiplier whose solution essentially provides a harmonic extension of the multiplier off the interface while maintaining the local inextensibility constraint near the interface. We also develop a local relaxation scheme that dynamically corrects local stretching/compression errors thereby preventing their accumulation. Asymptotic analysis is presented that shows that our new system converges to a relaxed version of the inextensible sharp interface model. This is also verified numerically. To solve the equations, we use an adaptive finite element method with implicit coupling between the Navier-Stokes and the diffuse interface inextensibility equations. Numerical simulations of a single vesicle in a shear flow at different Reynolds numbers demonstrate that errors in enforcing local inextensibility may accumulate and lead to large differences in the dynamics in the tumbling regime and smaller differences in the inclination angle of vesicles in the tank-treading regime. The local relaxation algorithm is shown to prevent the accumulation of stretching and compression errors very effectively. Simulations of two vesicles in an extensional flow show that local inextensibility plays an important role when vesicles are in close proximity by inhibiting fluid drainage in the near contact region.

  14. Interface modeling to predict well casing damage for big hill strategic petroleum reserve.

    SciTech Connect

    Ehgartner, Brian L.; Park, Byoung Yoon

    2012-02-01

    Oil leaks were found in well casings of Caverns 105 and 109 at the Big Hill Strategic Petroleum Reserve site. According to the field observations, two instances of casing damage occurred at the depth of the interface between the caprock and top of salt. This damage could be caused by interface movement induced by cavern volume closure due to salt creep. A three-dimensional finite element model, which allows each cavern to be configured individually, was constructed to investigate shear and vertical displacements across each interface. The model contains interfaces between each lithology and a shear zone to examine the interface behavior in a realistic manner. These analysis results indicate that the casings of Caverns 105 and 109 failed by shear stress that exceeded shear strength due to the horizontal movement of the top of salt relative to the caprock, and tensile stress due to the downward movement of the top of salt from the caprock, respectively. The casings of Caverns 101, 110, 111 and 114, located at the far ends of the field, are predicted to fail by shear stress in the near future. The casings of the innermost Caverns 107 and 108 are predicted to fail by tensile stress in the near future.

  15. Damage evolution of bi-body model composed of weakly cemented soft rock and coal considering different interface effect.

    PubMed

    Zhao, Zenghui; Lv, Xianzhou; Wang, Weiming; Tan, Yunliang

    2016-01-01

    Considering the structure effect of tunnel stability in western mining of China, three typical numerical models were built, based on a strain-softening constitutive model and a linear elastic-perfectly plastic model for the soft rock and the interface: R-M, R-C(s)-M and R-C(w)-M. Calculation results revealed that the stress-strain relations and failure characteristics of the three models differ from one another. The combination model without an interface, or with a strong interface, presented continuous failure, while the weak interface exhibited a 'cut-off' effect. Thus, conceptual models of a bi-material model and a bi-body model were established. Then numerical experiments of tri-axial compression were carried out for the two models. The relationships between stress evolution, failure zone and deformation rate fluctuations, as well as the displacement of the interface, were analyzed in detail. Results show that the two breakaway points of the deformation rate mark the initiation and penetration of the main rupture, respectively, and are distinguishable due to the large fluctuation. The bi-material model shows generally continuous failure, while the bi-body model shows a 'V'-type shear zone in the weak body and failure in the strong body near the interface due to the interface effect. With increasing confining pressure, the 'cut-off' effect of the weak interface becomes less pronounced. These conclusions lay the theoretical foundation for further development of constitutive models for the soft rock-coal combination body.

  16. Microcontroller for automation application

    NASA Technical Reports Server (NTRS)

    Cooper, H. W.

    1975-01-01

    The description of a microcontroller currently being developed for automation application was given. It is basically an 8-bit microcomputer with a 40K byte random access memory/read only memory, and can control a maximum of 12 devices through standard 15-line interface ports.

  17. Context based mixture model for cell phase identification in automated fluorescence microscopy

    PubMed Central

    Wang, Meng; Zhou, Xiaobo; King, Randy W; Wong, Stephen TC

    2007-01-01

    Background Automated identification of cell cycle phases of individual live cells in a large population captured via automated fluorescence microscopy technique is important for cancer drug discovery and cell cycle studies. Time-lapse fluorescence microscopy images provide an important method to study the cell cycle process under different conditions of perturbation. Existing methods are limited in dealing with such time-lapse data sets while manual analysis is not feasible. This paper presents statistical data analysis and statistical pattern recognition to perform this task. Results The data is generated from Hela H2B GFP cells imaged during a 2-day period with images acquired 15 minutes apart using an automated time-lapse fluorescence microscopy. The patterns are described with four kinds of features, including twelve general features, Haralick texture features, Zernike moment features, and wavelet features. To generate a new set of features with more discriminate power, the commonly used feature reduction techniques are used, which include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Maximum Margin Criterion (MMC), Stepwise Discriminant Analysis based Feature Selection (SDAFS), and Genetic Algorithm based Feature Selection (GAFS). Then, we propose a Context Based Mixture Model (CBMM) for dealing with the time-series cell sequence information and compare it to other traditional classifiers: Support Vector Machine (SVM), Neural Network (NN), and K-Nearest Neighbor (KNN). Being a standard practice in machine learning, we systematically compare the performance of a number of common feature reduction techniques and classifiers to select an optimal combination of a feature reduction technique and a classifier. A cellular database containing 100 manually labelled subsequences is built for evaluating the performance of the classifiers. The generalization error is estimated using the cross validation technique. The experimental results show
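
The first feature-reduction technique the abstract lists (PCA) can be sketched generically with an SVD-based projection; the other reductions (LDA, MMC, SDAFS, GAFS) would slot into the pipeline the same way before the classifier.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X (samples x features) onto the top-k principal
    components, computed via SVD of the mean-centred data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # scores in the reduced feature space
```

Because SVD returns singular values in descending order, the variance captured by each returned component is non-increasing, which is the property feature reduction relies on.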

  18. Experimental modelling of material interfaces with ultracold atoms

    NASA Astrophysics Data System (ADS)

    Corcovilos, Theodore A.; Brooke, Robert W. A.; Gillis, Julie; Ruggiero, Anthony C.; Tiber, Gage D.; Zaccagnini, Christopher A.

    2014-05-01

    We present a design for a new experimental apparatus for studying the physics of junctions using ultracold potassium atoms (K-39 and K-40). Junctions will be modeled using holographically projected 2D optical potentials. These potentials can be engineered to contain arbitrary features such as junctions between dissimilar lattices or the intentional insertion of defects. Long-term investigation goals include edge states, scattering at defects, and quantum depletion at junctions. In this poster we show our overall apparatus design and our progress in building experimental subsystems including the vacuum system, extended cavity diode lasers, digital temperature and current control circuits for the lasers, and the saturated absorption spectroscopy system. Funding provided by the Bayer School of Natural and Environmental.

  19. The performance of automated case-mix adjustment regression model building methods in a health outcome prediction setting.

    PubMed

    Jen, Min-Hua; Bottle, Alex; Kirkwood, Graham; Johnston, Ron; Aylin, Paul

    2011-09-01

    We have previously described a system for monitoring a number of healthcare outcomes using case-mix adjustment models. It is desirable to automate the model fitting process in such a system if monitoring covers a large number of outcome measures or subgroup analyses. Our aim was to compare the performance of three different variable selection strategies: "manual", "automated" backward elimination and re-categorisation, and including all variables at once, irrespective of their apparent importance, with automated re-categorisation. Logistic regression models for predicting in-hospital mortality and emergency readmission within 28 days were fitted to an administrative database for 78 diagnosis groups and 126 procedures from 1996 to 2006 for National Health Service hospital trusts in England. The performance of models was assessed with Receiver Operating Characteristic (ROC) c statistics (measuring discrimination) and the Brier score (assessing average predictive accuracy). Overall, discrimination was similar for diagnoses and procedures and consistently better for mortality than for emergency readmission. Brier scores were generally low overall (showing higher accuracy) and were lower for procedures than diagnoses, with a few exceptions for emergency readmission within 28 days. Among the three variable selection strategies, the automated procedure had similar performance to the manual method in almost all cases except low-risk groups with few outcome events. For the rapid generation of multiple case-mix models we suggest applying automated modelling to reduce the time required, in particular when examining different outcomes of large numbers of procedures and diseases in routinely collected administrative health data.
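
The two evaluation metrics named in the abstract are standard and can be computed directly; the sketch below shows both, with the ROC c statistic computed by pairwise comparison of predicted risks.

```python
import numpy as np

def c_statistic(y, p):
    """ROC c statistic: probability that a randomly chosen event (y=1) is
    assigned a higher predicted risk than a randomly chosen non-event
    (ties count one half)."""
    pos, neg = p[y == 1], p[y == 0]
    diffs = pos[:, None] - neg[None, :]
    return (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size

def brier_score(y, p):
    """Mean squared difference between predicted risk and observed outcome;
    lower is better."""
    return float(np.mean((p - y) ** 2))
```

A c statistic of 0.5 corresponds to no discrimination and 1.0 to perfect discrimination, which is why the study reports it separately from the Brier score's overall accuracy.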

  20. Automated recognition of the iliac muscle and modeling of muscle fiber direction in torso CT images

    NASA Astrophysics Data System (ADS)

    Kamiya, N.; Zhou, X.; Azuma, K.; Muramatsu, C.; Hara, T.; Fujita, H.

    2016-03-01

    The iliac muscle is an important skeletal muscle related to ambulatory function. The muscles related to ambulatory function are the psoas major and iliac muscles, collectively defined as the iliopsoas muscle. We have proposed an automated recognition method for the iliac muscle. Muscle fibers of the iliac muscle have a characteristic running pattern, so we used 20 cases from a training database to model the course of the muscle fibers. In the recognition process, the position of the iliac muscle was estimated by applying the muscle fiber model, and an approximation mask generated from the model yielded a candidate region of the iliac muscle. Finally, the muscle region was identified by using gray values and boundary information. The experiments used the 20 cases without skeletal muscle abnormalities for modeling; recognition in five cases achieved a 76.9% average concordance rate. In the visual evaluation, overextraction of other organs was not observed in 85% of the cases. Therefore, the proposed method is considered effective in recognizing the initial region of the iliac muscle. In the future, we will integrate the recognition method for the psoas major muscle to develop an analytical technique for the iliopsoas area, and will develop a more sophisticated muscle function analysis method.

  1. A Model of Subdiffusive Interface Dynamics with a Local Conservation of Minimum Height

    NASA Astrophysics Data System (ADS)

    Koduvely, Hari M.; Dhar, Deepak

    1998-01-01

    We define a new model of interface roughening in one dimension which has the property that the minimum of interface height is conserved locally during the evolution. This model corresponds to the limit q → ∞ of the q-color dimer deposition-evaporation model introduced by us earlier [Hari Menon and Dhar, J. Phys. A: Math. Gen. 28:6517 (1995)]. We present numerical evidence from Monte Carlo simulations and the exact diagonalization of the evolution operator on finite rings that growth of correlations in this model is subdiffusive with dynamical exponent z ≈ 2.5. For periodic boundary conditions, the variation of the gap in the relaxation spectrum with system size appears to involve a logarithmic correction term. Some generalizations of the model are briefly discussed.

  2. Automated generation of high-quality training data for appearance-based object models

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Voelker, Arno; Kieritz, Hilke; Hübner, Wolfgang; Arens, Michael

    2013-11-01

    Methods for automated person detection and person tracking are essential core components in modern security and surveillance systems. Most state-of-the-art person detectors follow a statistical approach, where prototypical appearances of persons are learned from training samples with known class labels. Selecting appropriate learning samples has a significant impact on the quality of the generated person detectors. For example, training a classifier on a rigid body model using training samples with strong pose variations is in general not effective, irrespective of the classifier's capabilities. Generation of high-quality training data is, apart from performance issues, a very time consuming process, comprising a significant amount of manual work. Furthermore, due to inevitable limitations of freely available training data, corresponding classifiers are not always transferable to a given sensor and are only applicable in a well-defined narrow variety of scenes and camera setups. Semi-supervised learning methods are a commonly used alternative to supervised training, in general requiring only few labeled samples. However, as a drawback semi-supervised methods always include a generative component, which is known to be difficult to learn. Therefore, automated processes for generating training data sets for supervised methods are needed. Such approaches could either help to better adjust classifiers to respective hardware, or serve as a complement to existing data sets. Towards this end, this paper provides some insights into the quality requirements of automatically generated training data for supervised learning methods. Assuming a static camera, labels are generated based on motion detection by background subtraction with respect to weak constraints on the enclosing bounding box of the motion blobs. Since this labeling method consists of standard components, we illustrate the effectiveness by adapting a person detector to cameras of a sensor network. While varying
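
The labeling mechanism the abstract describes (background subtraction under a static camera, with a bounding box around the motion blob) can be sketched minimally as follows; real pipelines would add blob filtering and the paper's bounding-box constraints.

```python
import numpy as np

def motion_bbox(frame, background, thresh=25):
    """Label candidate training samples by background subtraction: threshold
    the absolute difference image and return the bounding box of the motion
    mask as (row_min, col_min, row_max, col_max), or None if nothing moved."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    if not mask.any():
        return None
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return (rows[0], cols[0], rows[-1], cols[-1])
```

Each returned box becomes an automatically generated positive training sample for the supervised detector.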

  3. Conceptual Design Model of Instructional Interfaces: Courseware for Inclusive Education System (IID4C) Distance Learning

    ERIC Educational Resources Information Center

    Tosho, Abdulrauf; Mutalib, Ariffin Abdul; Abdul-Salam, Sobihatun Nur

    2016-01-01

    This paper describes an ongoing study related to a conceptual design model, which is specific to instructional interface design to enhance courseware usage. It was found that most existing courseware applications focus on the needs of certain target groups, with most of the courseware offering too little to inclusive learners. In addition, the use of…

  4. The integrity of welded interfaces in ultra high molecular weight polyethylene: Part 1-Model.

    PubMed

    Buckley, C Paul; Wu, Junjie; Haughie, David W

    2006-06-01

    The difficulty of eradicating memory of powder-particle interfaces in UHMWPE for bearing surfaces for hip and knee replacements is well-known, and 'fusion defects' have been implicated frequently in joint failures. During processing the polymer is formed into solid directly from the reactor powder, under pressure and at temperatures above the melting point, and two types of inter-particle defect occur: Type 1 (consolidation-deficient) and Type 2 (diffusion-deficient). To gain quantitative information on the extent of the problem, the formation of macroscopic butt welds in this material was studied, by (1) modelling the process and (2) measuring experimentally the resultant evolution of interface toughness. This paper reports on the model. A quantitative measure of interface structural integrity is defined, and related to the "maximum reptated molecular weight" introduced previously. The model assumes an idealised surface topography. It is used to calculate the evolution of interface integrity during welding, for given values of temperature, pressure, and parameters describing the surfaces, and a given molar mass distribution. Only four material properties are needed for the calculation; all of them available for polyethylene. The model shows that, for UHMWPE typically employed in knee transplants, the rate of eradication of Type 1 defects is highly sensitive to surface topography, process temperature and pressure. Also, even if Type 1 defects are prevented, Type 2 defects heal extremely slowly. They must be an intrinsic feature of UHMWPE for all reasonable forming conditions, and products and forming processes should be designed accordingly.

  5. A Monthly Water-Balance Model Driven By a Graphical User Interface

    USGS Publications Warehouse

    McCabe, Gregory J.; Markstrom, Steven L.

    2007-01-01

    This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.
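
The program's underlying method is the Thornthwaite water balance; its potential-evapotranspiration step is a well-documented formula and can be sketched directly. The day-length correction factor is omitted here for brevity, so this is an abbreviated illustration rather than the USGS program's full computation.

```python
def thornthwaite_pet(monthly_temp_c):
    """Monthly potential evapotranspiration (mm) by the standard Thornthwaite
    formula, without the day-length correction. Input: 12 mean monthly
    temperatures in degrees C; months at or below 0 C contribute no PET."""
    # annual heat index from the warm months
    I = sum((t / 5.0) ** 1.514 for t in monthly_temp_c if t > 0)
    if I == 0:
        return [0.0 for _ in monthly_temp_c]
    # empirical exponent, a cubic in the heat index
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return [16.0 * (10.0 * t / I) ** a if t > 0 else 0.0
            for t in monthly_temp_c]
```

The monthly water balance then partitions precipitation against this PET to track soil-moisture storage, actual evapotranspiration, and runoff.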

  6. Numerical simulations of the moving contact line problem using a diffuse-interface model

    NASA Astrophysics Data System (ADS)

    Afzaal, Muhammad; Sibley, David; Duncan, Andrew; Yatsyshin, Petr; Duran-Olivencia, Miguel A.; Nold, Andreas; Savva, Nikos; Schmuck, Markus; Kalliadasis, Serafim

    2015-11-01

    Moving contact lines are a ubiquitous phenomenon both in nature and in many modern technologies. One prevalent way of numerically tackling the problem is with diffuse-interface (phase-field) models, where the classical sharp-interface model of continuum mechanics is relaxed to one with a finite thickness fluid-fluid interface, capturing physics from mesoscopic lengthscales. The present work is devoted to the study of the contact line between two fluids confined by two parallel plates, i.e. a dynamically moving meniscus. Our approach is based on a coupled Navier-Stokes/Cahn-Hilliard model. This system of partial differential equations allows a tractable numerical solution to be computed, capturing diffusive and advective effects in a prototypical case study in a finite-element framework. Particular attention is paid to the static and dynamic contact angle of the meniscus advancing or receding between the plates. The results obtained from our approach are compared to the classical sharp-interface model to elicit the importance of considering diffusion and associated effects. We acknowledge financial support from European Research Council via Advanced Grant No. 247031.
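
The finite-thickness interface the abstract contrasts with the sharp-interface model has a textbook equilibrium shape: for the standard Cahn-Hilliard double-well free energy f(φ) = (φ² − 1)²/4 with gradient energy (ε²/2)|∇φ|², the 1-D profile is φ(x) = tanh(x/(√2 ε)), and the sharp interface is recovered as ε → 0. The sketch below illustrates that limit; it is a standard result, not code from the paper.

```python
import numpy as np

def equilibrium_profile(x, eps):
    """Equilibrium diffuse-interface profile of the standard Cahn-Hilliard
    double-well model; eps sets the interface thickness."""
    return np.tanh(x / (np.sqrt(2.0) * eps))

x = np.linspace(-1.0, 1.0, 2001)
phi_wide = equilibrium_profile(x, 0.1)    # thick, strongly diffuse interface
phi_thin = equilibrium_profile(x, 0.01)   # approaching the sharp-interface limit
```

Shrinking ε narrows the transition region between the two bulk phases (φ = ±1), which is the sense in which the diffuse-interface model converges to the sharp-interface one.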

  7. AgRISTARS: Yield model development/soil moisture. Interface control document

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The interactions and support functions required between the crop Yield Model Development (YMD) Project and Soil Moisture (SM) Project are defined. The requirements for YMD support of SM and vice-versa are outlined. Specific tasks in support of these interfaces are defined for development of support functions.

  8. World-wide distribution automation systems

    SciTech Connect

    Devaney, T.M.

    1994-12-31

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; computer modeling of the system; and distribution management systems.

  9. Facial pressure zones of an oronasal interface for noninvasive ventilation: a computer model analysis* **

    PubMed Central

    Barros, Luana Souto; Talaia, Pedro; Drummond, Marta; Natal-Jorge, Renato

    2014-01-01

    OBJECTIVE: To study the effects of an oronasal interface (OI) for noninvasive ventilation, using a three-dimensional (3D) computational model with the ability to simulate and evaluate the main pressure zones (PZs) of the OI on the human face. METHODS: We used a 3D digital model of the human face, based on a pre-established geometric model. The model simulated soft tissues, skull, and nasal cartilage. The geometric model was obtained by 3D laser scanning and post-processed for use in the model created, with the objective of separating the cushion from the frame. A computer simulation was performed to determine the pressure required in order to create the facial PZs. We obtained descriptive graphical images of the PZs and their intensity. RESULTS: For the graphical analyses of each face-OI model pair and their respective evaluations, we ran 21 simulations. The computer model identified several high-impact PZs in the nasal bridge and paranasal regions. The variation in soft tissue depth had a direct impact on the amount of pressure applied (438-724 cmH2O). CONCLUSIONS: The computer simulation results indicate that, in patients submitted to noninvasive ventilation with an OI, the probability of skin lesion is higher in the nasal bridge and paranasal regions. This methodology could increase the applicability of biomechanical research on noninvasive ventilation interfaces, providing the information needed in order to choose the interface that best minimizes the risk of skin lesion. PMID:25610506

  10. Cryo-EM Data Are Superior to Contact and Interface Information in Integrative Modeling

    PubMed Central

    de Vries, Sjoerd J.; Chauvot de Beauchêne, Isaure; Schindler, Christina E.M.; Zacharias, Martin

    2016-01-01

    Protein-protein interactions carry out a large variety of essential cellular processes. Cryo-electron microscopy (cryo-EM) is a powerful technique for the modeling of protein-protein interactions at a wide range of resolutions, and recent developments have caused a revolution in the field. At low resolution, cryo-EM maps can drive integrative modeling of the interaction, assembling existing structures into the map. Other experimental techniques can provide information on the interface or on the contacts between the monomers in the complex. This inevitably raises the question regarding which type of data is best suited to drive integrative modeling approaches. Systematic comparison of the prediction accuracy and specificity of the different integrative modeling paradigms is unavailable to date. Here, we compare EM-driven, interface-driven, and contact-driven integrative modeling paradigms. Models were generated for the protein docking benchmark using the ATTRACT docking engine and evaluated using the CAPRI two-star criterion. At 20 Å resolution, EM-driven modeling achieved a success rate of 100%, outperforming the other paradigms even with perfect interface and contact information. Therefore, even very low resolution cryo-EM data is superior in predicting heterodimeric and heterotrimeric protein assemblies. Our study demonstrates that a force field is not necessary, cryo-EM data alone is sufficient to accurately guide the monomers into place. The resulting rigid models successfully identify regions of conformational change, opening up perspectives for targeted flexible remodeling. PMID:26846888

  11. Sloan Digital Sky Survey photometric telescope automation and observing software

    SciTech Connect

    Eric H. Neilsen, Jr. et al.

    2002-10-16

    The photometric telescope (PT) provides observations necessary for the photometric calibration of the Sloan Digital Sky Survey (SDSS). Because the attention of the observing staff is occupied by the operation of the 2.5 meter telescope which takes the survey data proper, the PT must reliably take data with little supervision. In this paper we describe the PT's observing program, MOP, which automates most tasks necessary for observing. MOP's automated target selection is closely modeled on the actions a human observer might take, and is built upon a user interface that can be (and has been) used for manual operation. This results in an interface that makes it easy for an observer to track the activities of the automating procedures and intervene with minimum disturbance when necessary. MOP selects targets from the same list of standard star and calibration fields presented to the user, and chooses standard star fields covering ranges of airmass, color, and time necessary to monitor atmospheric extinction and produce a photometric solution. The software determines when additional standard star fields are unnecessary, and selects survey calibration fields according to availability and priority. Other automated features of MOP, such as maintaining the focus and keeping a night log, are also built around still functional manual interfaces, allowing the observer to be as active in observing as desired; MOP's automated features may be used as tools for manual observing, ignored entirely, or allowed to run the telescope with minimal supervision when taking routine data.

  12. A prototype natural language interface to a large complex knowledge base, the Foundational Model of Anatomy.

    PubMed

    Distelhorst, Gregory; Srivastava, Vishrut; Rosse, Cornelius; Brinkley, James F

    2003-01-01

    We describe a constrained natural language interface to a large knowledge base, the Foundational Model of Anatomy (FMA). The interface, called GAPP, handles simple or nested questions that can be parsed to the form, subject-relation-object, where subject or object is unknown. With the aid of domain-specific dictionaries the parsed sentence is converted to queries in the StruQL graph-searching query language, then sent to a server we developed, called OQAFMA, that queries the FMA and returns output as XML. Preliminary evaluation shows that GAPP has the potential to be used in the evaluation of the FMA by domain experts in anatomy.

  13. Modelling and interpreting biologically crusted dryland soil sub-surface structure using automated micropenetrometry

    NASA Astrophysics Data System (ADS)

    Hoon, Stephen R.; Felde, Vincent J. M. N. L.; Drahorad, Sylvie L.; Felix-Henningsen, Peter

    2015-04-01

    Soil penetrometers are used routinely to determine the shear strength of soils and deformable sediments, both at the surface and throughout a depth profile, in disciplines as diverse as soil science, agriculture, geoengineering and alpine avalanche-safety (e.g. Grunwald et al. 2001, Van Herwijnen et al. 2009). Generically, penetrometers comprise two principal components: an advancing probe, and a transducer; the latter measures the pressure or force required to cause the probe to penetrate or advance through the soil or sediment. The force transducer employed to determine the pressure can range, for example, from a simple mechanical spring gauge to an automatically data-logged electronic transducer. Automated computer control of the penetrometer step size and probe advance rate enables precise measurements to be made down to a resolution of tens of microns (e.g. the automated electronic micropenetrometer (EMP) described by Drahorad 2012). Here we discuss the determination, modelling and interpretation of biologically crusted dryland soil sub-surface structures using automated micropenetrometry. We outline a model enabling the interpretation of depth-dependent penetration resistance (PR) profiles and their spatial differentials using the model equations σ(z) = σ_c0 + Σ_1^n [σ_n(z) + a_n z + b_n z²] and dσ/dz = Σ_1^n [dσ_n(z)/dz + Fr_n(z)], where σ_c0 and σ_n are the plastic deformation stresses for the surface and the nth soil structure (e.g. soil crust, layer, horizon or void), respectively, and Fr_n(z) dz is the frictional work done per unit volume by sliding the penetrometer rod an incremental distance, dz, through the nth layer. Both σ_n(z) and Fr_n(z) are related to soil structure, and together they determine the form of σ(z) measured by the EMP transducer. The model enables pores (regions of zero deformation stress) to be distinguished from changes in layer structure or probe friction. We have applied this method to both artificial calibration soils in the
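
A minimal numerical sketch of the layered penetration-resistance model described in the abstract (layer boundaries and coefficient values below are assumed for illustration): each structure n contributes σ_n + a_n·dz + b_n·dz² over its depth interval on top of the surface term σ_c0, and a pore is an interval whose contribution is zero.

```python
import numpy as np

def penetration_resistance(z, layers, sigma_c0=0.02):
    """Evaluate sigma(z) = sigma_c0 + sum over active layers of
    (sigma_n + a_n*dz + b_n*dz^2), where dz is depth below the layer top.
    Each layer is (z_top, z_bot, sigma_n, a_n, b_n)."""
    sigma = np.full_like(z, sigma_c0, dtype=float)
    for z_top, z_bot, s_n, a_n, b_n in layers:
        mask = (z >= z_top) & (z < z_bot)
        dz = z[mask] - z_top
        sigma[mask] += s_n + a_n * dz + b_n * dz ** 2
    return sigma

z = np.linspace(0.0, 3.0, 300)
layers = [
    (0.0, 1.0, 0.10, 0.05, 0.0),   # biological crust
    (1.0, 1.5, 0.00, 0.00, 0.0),   # pore: zero deformation stress
    (1.5, 3.0, 0.30, 0.10, 0.0),   # deeper horizon
]
pr = penetration_resistance(z, layers)
```

In the pore interval the profile drops back to the surface term alone, which is exactly the signature the model uses to distinguish pores from layer changes or probe friction.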

  14. Automated Geometric Model Builder Using Range Image Sensor Data: Final Acquisition

    SciTech Connect

    Diegert, C.; Sackos, J.

    1999-02-01

    This report documents a data collection where we recorded redundant range image data from multiple views of a simple scene, and recorded accurate survey measurements of the same scene. Collecting these data was a focus of the research project Automated Geometric Model Builder Using Range Image Sensor Data (96-0384), supported by Sandia's Laboratory-Directed Research and Development (LDRD) Program during fiscal years 1996, 1997, and 1998. The data described here are available from the authors on CDROM, or electronically over the Internet. Included in this data distribution are Computer-Aided Design (CAD) models we constructed from the survey measurements. The CAD models are compatible with the SolidWorks 98 Plus system, the modern Computer-Aided Design software system that is central to Sandia's DeskTop Engineering Project (DTEP). Integration of our measurements (as built) with the constructive geometry process of the CAD system (as designed) delivers on a vision of the research project. This report on our final data collection will also serve as a final report on the project.

  15. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    SciTech Connect

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
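Of the four calibration methods named above, the "simple output ratio calibration" admits a very small sketch: scale the model's predictions by the measured/simulated ratio. The monthly numbers below are hypothetical illustration values, not BEopt/DOE-2.2 output.

```python
# Minimal sketch of the "simple output ratio calibration" idea: compute one
# scalar ratio aligning total simulated to total measured energy use, then
# apply the same ratio to a retrofit prediction. Numbers are illustrative.

def output_ratio_calibrate(simulated, measured):
    """Return the scalar ratio that aligns total simulated to total measured use."""
    return sum(measured) / sum(simulated)

simulated_monthly = [900, 850, 800, 700, 650, 900, 1100, 1150, 950, 750, 800, 880]
measured_monthly  = [990, 940, 860, 770, 700, 980, 1210, 1260, 1050, 820, 880, 970]

ratio = output_ratio_calibrate(simulated_monthly, measured_monthly)
retrofit_simulated_annual = 9000.0            # uncalibrated post-retrofit prediction
retrofit_calibrated = ratio * retrofit_simulated_annual
```

The appeal of the method is exactly what this sketch shows: trivial cost, full automation and repeatability, at the price of correcting only the overall bias rather than individual inputs.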

  16. Automated Decisional Model for Optimum Economic Order Quantity Determination Using Price Regressive Rates

    NASA Astrophysics Data System (ADS)

    Roşu, M. M.; Tarbă, C. I.; Neagu, C.

    2016-11-01

    Current inventory-management models are complementary; together they offer a broad palette of elements for solving the complex problems companies face when establishing the optimum economic order quantity for unfinished products, raw materials, goods, etc. The main objective of this paper is to elaborate an automated decisional model for calculating the economic order quantity, taking into account regressive price rates on the total order quantity. This model has two main aims: first, to determine the ordering periodicity n or the order quantity q; second, to determine the stock levels: the alert (reorder) level, the safety stock, etc. In this way we can answer two fundamental questions: how much must be ordered, and when? In current practice, a company's business relationships with its suppliers are based on regressive price rates. This means that suppliers may grant discounts above a certain level of quantities ordered. Thus, the unit price of the products is a variable that depends on the order size. The most important element in choosing the optimum economic order quantity is therefore the total ordering cost, which depends on the following elements: the mean price per unit, the stock-holding cost, the ordering cost, etc.
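The mechanism described — a unit price that steps down at quantity breaks, with the optimum chosen by total cost — is the classic EOQ-with-discounts calculation. A hedged sketch, reading "regressive rates" as all-units quantity discounts; demand D, ordering cost S, holding-cost rate i and the price breaks are hypothetical values, not the paper's data:

```python
import math

# EOQ under all-units quantity discounts (a common reading of "price
# regressive rates"). All parameter values below are illustrative.

def total_cost(D, S, i, price, q):
    """Annual cost: purchase + ordering + holding (holding = i * price * q / 2)."""
    return D * price + (D / q) * S + i * price * q / 2

def best_order_quantity(D, S, i, breaks):
    """breaks: (min_qty, unit_price) pairs sorted by min_qty ascending."""
    best = None
    for k, (q_min, price) in enumerate(breaks):
        q_max = breaks[k + 1][0] if k + 1 < len(breaks) else math.inf
        q_star = math.sqrt(2 * D * S / (i * price))   # unconstrained EOQ at this price
        if q_star >= q_max:
            continue                                  # a larger break covers this EOQ
        q = max(q_star, q_min)                        # clamp up to the break's minimum
        cost = total_cost(D, S, i, price, q)
        if best is None or cost < best[1]:
            best = (q, cost)
    return best

q_opt, cost_opt = best_order_quantity(D=1200, S=50, i=0.2,
                                      breaks=[(1, 10.0), (200, 9.5), (500, 9.0)])
```

With these numbers the deepest discount wins: ordering 500 units at 9.0 beats the unconstrained EOQ at 9.5, because the purchase-price saving outweighs the extra holding cost.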

  18. Partially Automated Method for Localizing Standardized Acupuncture Points on the Heads of Digital Human Models

    PubMed Central

    Kim, Jungdae; Kang, Dae-In

    2015-01-01

    Modern imaging tools for the precise positioning of acupuncture points on the human body, where this traditional therapeutic method is applied, are essential. For that reason, we suggest a more systematic positioning method that uses X-ray computed tomographic images to precisely position acupoints. Digital Korean human data were obtained to construct three-dimensional head-skin and skull surface models of six individuals. Depending on the method used to pinpoint its position, every acupoint was classified into one of three types: anatomical points, proportional points, and morphological points. A computational algorithm and procedure were developed for partial automation of the positioning. The anatomical points were selected by using the structural characteristics of the skin surface and skull. The proportional points were calculated from the positions of the anatomical points. The morphological points were also calculated by using some control points related to the connections between the source and the target models. All the acupoints on the heads of the six individuals were displayed on three-dimensional computer graphical image models. This method may be helpful for developing more accurate experimental designs and for providing more quantitative volumetric methods for performing analyses in acupuncture-related research. PMID:26101534
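The "proportional points" idea — a point located a fixed fraction of the way between anatomical landmarks — can be sketched as plain linear interpolation. The landmark coordinates and the 3/5 proportion below are hypothetical; the paper's actual proportions follow standardized acupuncture measurements along the scalp surface, not a straight line.

```python
# Hedged sketch: a proportional acupoint as a fractional position between two
# anatomical landmarks in 3-D. All coordinates and the fraction are invented.

def proportional_point(landmark_a, landmark_b, fraction):
    """Linear interpolation between two 3-D landmarks (fraction in [0, 1])."""
    return tuple(a + fraction * (b - a) for a, b in zip(landmark_a, landmark_b))

front_landmark = (0.0, 95.0, 40.0)    # hypothetical anterior landmark (mm)
back_landmark  = (0.0, -95.0, 55.0)   # hypothetical posterior landmark (mm)
acupoint = proportional_point(front_landmark, back_landmark, 3 / 5)
```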

  19. DockTope: a Web-based tool for automated pMHC-I modelling

    PubMed Central

    Menegatti Rigo, Maurício; Amaral Antunes, Dinler; Vaz de Freitas, Martiela; Fabiano de Almeida Mendes, Marcus; Meira, Lindolfo; Sinigaglia, Marialva; Fioravanti Vieira, Gustavo

    2015-01-01

    The immune system is constantly challenged, being required to protect the organism against a wide variety of infectious pathogens and, at the same time, to avoid autoimmune disorders. One of the most important molecules involved in these events is the Major Histocompatibility Complex class I (MHC-I), responsible for binding and presenting small peptides from the intracellular environment to CD8+ T cells. The study of peptide:MHC-I (pMHC-I) molecules at a structural level is crucial to understand the molecular mechanisms underlying immunologic responses. Unfortunately, there are few pMHC-I structures in the Protein Data Bank (PDB) (especially considering the total number of complexes that could be formed combining different peptides), and pMHC-I modelling tools are scarce. Here, we present DockTope, a free and reliable web-based tool for pMHC-I modelling, based on crystal structures from the PDB. DockTope is fully automated and allows any researcher to construct a pMHC-I complex in an efficient way. We have reproduced a dataset of 135 non-redundant pMHC-I structures from the PDB (Cα RMSD below 1 Å). Modelling of pMHC-I complexes is remarkably important, contributing to the knowledge of important events such as cross-reactivity, autoimmunity, cancer therapy, transplantation and rational vaccine design. PMID:26674250

  20. VoICE: A semi-automated pipeline for standardizing vocal analysis across models

    PubMed Central

    Burkett, Zachary D.; Day, Nancy F.; Peñagarikano, Olga; Geschwind, Daniel H.; White, Stephanie A.

    2015-01-01

    The study of vocal communication in animal models provides key insight to the neurogenetic basis for speech and communication disorders. Current methods for vocal analysis suffer from a lack of standardization, creating ambiguity in cross-laboratory and cross-species comparisons. Here, we present VoICE (Vocal Inventory Clustering Engine), an approach to grouping vocal elements by creating a high dimensionality dataset through scoring spectral similarity between all vocalizations within a recording session. This dataset is then subjected to hierarchical clustering, generating a dendrogram that is pruned into meaningful vocalization “types” by an automated algorithm. When applied to birdsong, a key model for vocal learning, VoICE captures the known deterioration in acoustic properties that follows deafening, including altered sequencing. In a mammalian neurodevelopmental model, we uncover a reduced vocal repertoire of mice lacking the autism susceptibility gene, Cntnap2. VoICE will be useful to the scientific community as it can standardize vocalization analyses across species and laboratories. PMID:26018425
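The pipeline described — pairwise spectral similarity, hierarchical clustering, then pruning into vocalization "types" — can be illustrated with a toy single-linkage clusterer. The 1-D "spectra", the similarity measure and the 0.9 cutoff are all hypothetical stand-ins for VoICE's actual scoring and pruning algorithm.

```python
# Toy sketch of the VoICE idea: pairwise similarity -> agglomerative
# (single-linkage) clustering -> stop merging below a cutoff ("pruning").

def similarity(a, b):
    """Crude spectral similarity: 1 / (1 + mean absolute difference)."""
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)) / len(a))

def single_linkage(items, sim, cutoff):
    """Merge clusters while the best inter-cluster similarity exceeds cutoff."""
    clusters = [[i] for i in range(len(items))]
    while len(clusters) > 1:
        pairs = [(ci, cj) for ci in range(len(clusters))
                 for cj in range(ci + 1, len(clusters))]
        link = lambda p: max(sim(items[a], items[b])
                             for a in clusters[p[0]] for b in clusters[p[1]])
        ci, cj = max(pairs, key=link)
        if link((ci, cj)) < cutoff:
            break                                  # prune: remaining groups are "types"
        clusters[ci] += clusters[cj]
        del clusters[cj]
    return clusters

calls = [[1.0, 2.0, 3.0], [1.1, 2.0, 3.1], [8.0, 9.0, 10.0], [8.2, 9.1, 10.0]]
types = single_linkage(calls, similarity, cutoff=0.9)
```

The two nearly identical call pairs end up in two "types", mirroring how the dendrogram cut groups similar vocal elements across a recording session.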

  1. Open boundary conditions for the Diffuse Interface Model in 1-D

    NASA Astrophysics Data System (ADS)

    Desmarais, J. L.; Kuerten, J. G. M.

    2014-04-01

    New techniques are developed for solving multi-phase flows in unbounded domains using the Diffuse Interface Model (DIM) in 1-D. They extend two open boundary conditions originally designed for the Navier-Stokes equations. The non-dimensional formulation of the DIM generalizes the approach to any fluid. The equations support a steady state whose analytical approximation close to the critical point depends only on temperature. This feature enables the use of detectors at the boundaries switching between conventional boundary conditions in bulk phases and a multi-phase strategy in interfacial regions. Moreover, the latter takes advantage of the steady state approximation to minimize the interface-boundary interactions. The techniques are applied to fluids experiencing a phase transition and where the interface between the phases travels through one of the boundaries. When the interface crossing the boundary is fully developed, the technique greatly improves results relative to cases where conventional boundary conditions can be used. Limitations appear when the interface crossing the boundary is not a stable equilibrium between the two phases: the terms responsible for creating the true balance between the phases perturb the interior solution. Both boundary conditions present good numerical stability properties: the error remains bounded when the initial conditions or the far field values are perturbed. For the perfectly matched layer (PML), the influence of its main parameters on the global error is investigated to make a compromise between computational costs and maximum error. The approach can be extended to multiple spatial dimensions.

  2. Nuclear Reactor/Hydrogen Process Interface Including the HyPEP Model

    SciTech Connect

    Steven R. Sherman

    2007-05-01

    The Nuclear Reactor/Hydrogen Plant interface is the intermediate heat transport loop that will connect a very high temperature gas-cooled nuclear reactor (VHTR) to a thermochemical, high-temperature electrolysis, or hybrid hydrogen production plant. A prototype plant called the Next Generation Nuclear Plant (NGNP) is planned for construction and operation at the Idaho National Laboratory in the 2018-2021 timeframe, and will involve a VHTR, a high-temperature interface, and a hydrogen production plant. The interface is responsible for transporting high-temperature thermal energy from the nuclear reactor to the hydrogen production plant while protecting the nuclear plant from operational disturbances at the hydrogen plant. Development of the interface is occurring under the DOE Nuclear Hydrogen Initiative (NHI) and involves the study, design, and development of high-temperature heat exchangers, heat transport systems, materials, safety, and integrated system models. Research and development work on the system interface began in 2004 and is expected to continue at least until the start of construction of an engineering-scale demonstration plant.

  3. A computational model for stress reduction at the skin-implant interface of osseointegrated prostheses.

    PubMed

    Yerneni, Srinivasu; Dhaher, Yasin; Kuiken, Todd A

    2012-04-01

    Osseointegrated implants (OIs) for transfemoral prosthetic attachment offer amputees an alternative to the traditional socket attachment. Potential benefits include a natural transfer of loads directly to the skeleton via the percutaneous abutment, relief of pain and discomfort of residual limb soft tissues by eliminating sockets, increased sensory feedback, and improved function. Despite the benefits, the skin-implant interface remains a critical limitation, as it is highly prone to bacterial infection. One approach to improve clinical outcomes is to minimize stress concentrations at the skin-implant interface due to shear loading, reducing soft tissue breakdown and subsequent risk of infection. We hypothesized that broadening the bone base at the distal end of the femur would provide added surface area for skin adhesion and reduce stresses at the skin-implant interface. We tested this hypothesis using finite element models of an OI in a residual limb. Results showed a dramatic stress reduction, with up to a ~90% decrease in stresses at the skin-implant interface as cortical bone thickness increased from 2 to 8 mm. The findings of this study suggest that such surgical techniques could stabilize the skin-implant interface, thus enhancing a skin-to-bone seal around the percutaneous device and minimizing infection.

  4. An Agent-Based Interface to Terrestrial Ecological Forecasting

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Nemani, Ramakrishna; Pang, Wan-Lin; Votava, Petr; Etzioni, Oren

    2004-01-01

    This paper describes a flexible agent-based ecological forecasting system that combines multiple distributed data sources and models to provide near-real-time answers to questions about the state of the Earth system. We build on novel techniques in automated constraint-based planning and natural language interfaces to automatically generate data products based on descriptions of the desired data products.

  5. Modeling Complex Cross-Systems Software Interfaces Using SysML

    NASA Technical Reports Server (NTRS)

    Mandutianu, Sanda; Morillo, Ron; Simpson, Kim; Liepack, Otfrid; Bonanne, Kevin

    2013-01-01

    The complex flight and ground systems for NASA human space exploration are designed, built, operated and managed as separate programs and projects. However, each system relies on one or more of the other systems in order to accomplish specific mission objectives, creating a complex, tightly coupled architecture. Thus, there is a fundamental need to understand how each system interacts with the others. To determine if a model-based systems engineering approach could be utilized to assist with understanding the complex system interactions, the NASA Engineering and Safety Center (NESC) sponsored a task to develop an approach for performing cross-system behavior modeling. This paper presents the results of applying Model Based Systems Engineering (MBSE) principles using the Systems Modeling Language (SysML) to define cross-system behaviors and how they map to cross-system software interfaces documented in system-level Interface Control Documents (ICDs).

  6. Developing Human-Machine Interfaces to Support Appropriate Trust and Reliance on Automated Combat Identification Systems (Developpement d’Interfaces Homme-Machine Pour Appuyer la Confiance dans les Systemes Automatises d’Identification au Combat)

    DTIC Science & Technology

    2008-03-31

    …on automation; the 'response bias' approach. This new approach is based on Signal Detection Theory (SDT) (Macmillan & Creelman, 1991; Wickens…). Under SDT, response bias will vary with the expectation of the target probability, whereas sensitivity will stay constant (Macmillan & Creelman…). Of the bias measures, C has the simplest statistical properties (Macmillan & Creelman, 1991, p. 273), and it was also the measure used in Dzindolet et al.'s study…

  7. Modeling of tunneling current in ultrathin MOS structure with interface trap charge and fixed oxide charge

    NASA Astrophysics Data System (ADS)

    Hu, Bo; Huang, Shi-Hua; Wu, Feng-Min

    2013-01-01

    A model based on analysis of the self-consistent Poisson-Schrödinger equation is proposed to investigate the tunneling current of electrons in the inversion layer of a p-type metal-oxide-semiconductor (MOS) structure. In this model, the influences of interface trap charge (ITC) at the Si-SiO2 interface and fixed oxide charge (FOC) in the oxide region are taken into account, and a one-band effective mass approximation is used. The tunneling probability is obtained by employing the transfer matrix method. Further, the effects of in-plane momentum on the quantization of the electron motion perpendicular to the Si-SiO2 interface of a MOS device are investigated. Theoretical simulation results indicate that both ITC and FOC have a great influence on the tunneling current through a MOS structure when their densities are larger than 10^12 cm^-2, which results from the great change of bound electrons near the Si-SiO2 interface and the oxide region. Therefore, for real ultrathin MOS structures with ITC and FOC, this model can give a more accurate description of the tunneling current in the inversion layer.
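The transfer matrix method mentioned above can be illustrated with a generic textbook sketch for a piecewise-constant 1-D barrier (ħ = m = 1 units). This is not the paper's self-consistent Poisson-Schrödinger model; the barrier height and width are hypothetical.

```python
import cmath

# Transfer-matrix tunneling probability through piecewise-constant potential
# segments between two identical V = 0 leads. Generic sketch; values invented.

def mat_mul(a, b):
    """2x2 complex matrix product."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def wavevector(E, V):
    """k = sqrt(2(E - V)); imaginary (evanescent) inside a barrier with V > E."""
    return cmath.sqrt(2 * (E - V))

def transmission(E, segments):
    """segments: (V, width) regions sandwiched between the two leads."""
    k_lead = wavevector(E, 0.0)
    ks = [k_lead] + [wavevector(E, V) for V, _ in segments] + [k_lead]
    widths = [w for _, w in segments]
    M = [[1, 0], [0, 1]]
    for j in range(len(ks) - 1):
        r = ks[j + 1] / ks[j]                      # interface matching matrix
        M = mat_mul(M, [[(1 + r) / 2, (1 - r) / 2], [(1 - r) / 2, (1 + r) / 2]])
        if j + 1 <= len(widths):                   # free propagation across region j+1
            k, d = ks[j + 1], widths[j]
            M = mat_mul(M, [[cmath.exp(-1j * k * d), 0], [0, cmath.exp(1j * k * d)]])
    return abs(1 / M[0][0]) ** 2                   # |t|^2 for equal leads

T = transmission(1.0, [(2.0, 1.0)])                # E = 1 through a V = 2, d = 1 barrier
```

For a single rectangular barrier at E = V/2 this reduces to the analytic result T = 1/cosh²(κd), a useful check on the matrix bookkeeping.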

  8. Acoustic Response of Underwater Munitions near a Sediment Interface: Measurement Model Comparisons and Classification Schemes

    DTIC Science & Technology

    2015-04-23

    Final report for a SERDP project. The retrieved excerpt contains only front matter and the report's list of figures, covering the effect of fish on acoustic color templates during GULFEX12, targets deployed during TREX13 and BAYEX14, ray diagrams for the acoustic ray model, and model-model and data-model comparisons.

  9. Integrating Automated Data into Ecosystem Models: How Can We Drink from a Firehose?

    NASA Astrophysics Data System (ADS)

    Allen, M. F.; Harmon, T. C.

    2014-12-01

    Sensors and imaging are changing the way we are measuring ecosystem behavior. Within short time frames, we are able to capture how organisms behave in response to rapid change, and detect events that alter composition and shift states. To transform these observations into process-level understanding, we need to efficiently interpret signals. One way to do this is to automatically integrate the data into ecosystem models. In our soil carbon cycling studies, we collect continuous time series for meteorological conditions, soil processes, and automated imagery. To characterize the timing and clarity of change behavior in our data, we adopted signal-processing approaches like coupled wavelet/coherency analyses. In situ CO2 measurements allow us to visualize when root/microbial activity results in CO2 being respired from the soil surface, versus when other chemical/physical phenomena may alter gas pathways. While these approaches are interesting in understanding individual phenomena, they fail to get us beyond the study of individual processes. Sensor data are compared with the outputs from ecosystem models to detect the patterns in specific phenomena or to revise model parameters or traits. For instance, we measured unexpected levels of soil CO2 in a tropical ecosystem. By examining small-scale ecosystem model parameters, we were able to pinpoint those parameters that needed to be altered to resemble the data outputs. However, we do not capture the essence of large-scale ecosystem shifts. The time is right to utilize real-time data assimilation as an additional forcing of ecosystem models. Continuous, diurnal soil temperature and moisture, along with hourly hyphal or root growth could feed into well-established ecosystem models such as HYDRUS or DayCENT. This approach would provide instantaneous "measurements" of shifting ecosystem processes as they occur, allowing us to identify critical process connections more efficiently.
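The closing suggestion — feeding continuous sensor streams into an ecosystem model as an additional forcing — can be sketched as a simple nudging update each timestep. The toy relaxation model, the gain, and all readings are illustrative; none of this is HYDRUS or DayCENT.

```python
# Toy "real-time data assimilation" loop: advance a process model, then nudge
# its state toward each new sensor observation. All values are hypothetical.

def step_model(state, air_temp, rate=0.1):
    """Toy process model: soil temperature relaxes toward air temperature."""
    return state + rate * (air_temp - state)

def assimilate(state, observation, gain=0.3):
    """Nudge the model state toward the sensor observation."""
    return state + gain * (observation - state)

state = 15.0                                  # initial soil temperature (deg C)
air_temps = [20.0, 21.0, 19.0, 18.0]          # hourly forcing
observations = [16.0, 17.5, 17.0, 16.5]       # hourly soil-sensor readings
for air, obs in zip(air_temps, observations):
    state = step_model(state, air)
    state = assimilate(state, obs)
```

The gain controls how strongly the sensor stream forces the model: 0 ignores the data, 1 replaces the model state outright.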

  10. Automated de novo phasing and model building of coiled-coil proteins.

    PubMed

    Rämisch, Sebastian; Lizatović, Robert; André, Ingemar

    2015-03-01

    Models generated by de novo structure prediction can be very useful starting points for molecular replacement for systems where suitable structural homologues cannot be readily identified. Protein-protein complexes and de novo-designed proteins are examples of systems that can be challenging to phase. In this study, the potential of de novo models of protein complexes for use as starting points for molecular replacement is investigated. The approach is demonstrated using homomeric coiled-coil proteins, which are excellent model systems for oligomeric systems. Despite the stereotypical fold of coiled coils, initial phase estimation can be difficult and many structures have to be solved with experimental phasing. A method was developed for automatic structure determination of homomeric coiled coils from X-ray diffraction data. In a benchmark set of 24 coiled coils, ranging from dimers to pentamers with resolutions down to 2.5 Å, 22 systems were automatically solved, 11 of which had previously been solved by experimental phasing. The generated models contained 71-103% of the residues present in the deposited structures, had the correct sequence and had free R values that deviated on average by 0.01 from those of the respective reference structures. The electron-density maps were of sufficient quality that only minor manual editing was necessary to produce final structures. The method, named CCsolve, combines methods for de novo structure prediction, initial phase estimation and automated model building into one pipeline. CCsolve is robust against errors in the initial models and can readily be modified to make use of alternative crystallographic software. The results demonstrate the feasibility of de novo phasing of protein-protein complexes, an approach that could also be employed for other small systems beyond coiled coils.

  11. The Design and Implementation of a Visual User Interface for a Structured Model Management System

    DTIC Science & Technology

    1988-03-01

    Marshall McLuhan and Quentin Fiore, The Medium is the Message (1967). We could easily continue that the computer interface is an extension of the user. To… OR) [Ref. 2, p. 1]. Managers may feel overly dependent on these MS/OR practitioners, who more fully understand the underlying concepts of modeling… The program presupposes that the user understands structured modeling concepts, but makes no further assumptions regarding the user's computer…

  12. What determines the take-over time? An integrated model approach of driver take-over after automated driving.

    PubMed

    Zeeb, Kathrin; Buchner, Axel; Schrauf, Michael

    2015-05-01

    In recent years the automation level of driver assistance systems has increased continuously. One of the major challenges for highly automated driving is to ensure a safe driver take-over of the vehicle guidance. This must be ensured especially when the driver is engaged in non-driving-related secondary tasks. For this purpose it is essential to find indicators of the driver's readiness to take over and to gain more knowledge about the take-over process in general. A simulator study was conducted to explore how drivers' allocation of visual attention during highly automated driving influences a take-over action in response to an emergency situation. Therefore we recorded drivers' gaze behavior during automated driving while they simultaneously engaged in a visually demanding secondary task, and measured their reaction times in a take-over situation. According to their gaze behavior the drivers were categorized as "high", "medium" or "low-risk". The gaze parameters were found to be suitable for predicting the readiness to take over the vehicle, in that high-risk drivers reacted late and more often inappropriately in the take-over situation. However, there was no difference among the driver groups in the time required to establish motor readiness to intervene after the take-over request. An integrated model approach of driver behavior in emergency take-over situations during automated driving is presented. It is argued that primarily cognitive and not motor processes determine the take-over time. Given this, insights can be derived for further research and the development of automated systems.
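The gaze-based categorization described above amounts to thresholding a few attention metrics. A hedged sketch: the two metrics and all cutoffs below are hypothetical illustrations of the idea, not the thresholds used in the study.

```python
# Toy classifier for take-over readiness from gaze allocation during
# automated driving. Metrics and thresholds are invented for illustration.

def risk_category(frac_gaze_on_road, mean_off_road_glance_s):
    """Classify a driver as low/medium/high risk from two gaze metrics."""
    if frac_gaze_on_road >= 0.5 and mean_off_road_glance_s <= 2.0:
        return "low-risk"
    if frac_gaze_on_road >= 0.3:
        return "medium-risk"
    return "high-risk"

drivers = [(0.62, 1.4), (0.35, 2.8), (0.15, 4.1)]
categories = [risk_category(f, g) for f, g in drivers]
```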

  13. Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model

    PubMed Central

    Kajić, Vedran; Esmaeelpour, Marieh; Považay, Boris; Marshall, David; Rosin, Paul L.; Drexler, Wolfgang

    2011-01-01

    A two-stage statistical model based on texture and shape for fully automatic choroidal segmentation of normal and pathologic eyes obtained by a 1060 nm optical coherence tomography (OCT) system is developed. A novel dynamic programming approach is implemented to determine the location of the retinal pigment epithelium/Bruch's membrane/choriocapillaris (RBC) boundary. The choroid-sclera interface (CSI) is segmented using a statistical model. The algorithm is robust even in the presence of speckle noise, low signal (thick choroid), retinal pigment epithelium (RPE) detachments and atrophy, drusen, shadowing and other artifacts. Evaluation against a set of 871 manually segmented cross-sectional scans from 12 eyes achieves an average error rate of 13%, computed per tomogram as the ratio of incorrectly classified pixels to the total layer surface. For the first time, a fully automatic choroidal segmentation algorithm is successfully applied to a wide range of clinical volumetric OCT data. PMID:22254171
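The dynamic-programming boundary detection mentioned above can be illustrated in miniature: find the minimum-cost path across image columns, moving at most one row up or down per column. The cost matrix below is a toy stand-in for the paper's texture-based cost image.

```python
# Minimal dynamic-programming boundary tracer: one row index per column,
# smoothness enforced by allowing +/-1 row moves. Cost matrix is invented.

def min_cost_boundary(cost):
    """cost[row][col] -> list of row indices (one per column) of the best path."""
    rows, cols = len(cost), len(cost[0])
    acc = [[cost[r][0]] + [0.0] * (cols - 1) for r in range(rows)]
    back = [[0] * cols for _ in range(rows)]
    for c in range(1, cols):
        for r in range(rows):
            prev_rows = [p for p in (r - 1, r, r + 1) if 0 <= p < rows]
            best = min(prev_rows, key=lambda p: acc[p][c - 1])
            acc[r][c] = cost[r][c] + acc[best][c - 1]
            back[r][c] = best
    r = min(range(rows), key=lambda p: acc[p][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]

cost = [
    [9, 9, 9, 9],
    [1, 9, 9, 9],
    [9, 1, 1, 9],
    [9, 9, 9, 1],
]
boundary = min_cost_boundary(cost)
```

Low-cost pixels (here the 1s) attract the path, just as strong edge responses attract the RBC boundary in a B-scan.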

  14. An elasto-viscoplastic interface model for investigating the constitutive behavior of nacre

    NASA Astrophysics Data System (ADS)

    Tang, H.; Barthelat, F.; Espinosa, H. D.

    2007-07-01

    In order to better understand the strengthening mechanism observed in nacre, we have developed an interface computational model to simulate the behavior of the organic material present at the interface between aragonite tablets. In the model, the single polymer-chain behavior is characterized by the worm-like-chain (WLC) model, which is in turn incorporated into the eight-chain cell model developed by Arruda and Boyce [Arruda, E.M., Boyce, M.C., 1993a. A three-dimensional constitutive model for the large stretches, with application to polymeric glasses. Int. J. Solids Struct. 40, 389-412] to achieve a continuum interface constitutive description. The interface model is formulated within a finite-deformation framework. A fully implicit time-integration algorithm is used for solving the discretized governing equations. Finite element simulations were performed on a representative volume element (RVE) to investigate the tensile response of nacre. The staggered arrangement of tablets and interface waviness obtained experimentally by Barthelat et al. [Barthelat, F., Tang, H., Zavattieri, P.D., Li, C.-M., Espinosa, H.D., 2007. On the mechanics of mother-of-pearl: a key feature in the material hierarchical structure. J. Mech. Phys. Solids 55 (2), 306-337] was included in the RVE simulations. The simulations showed that both the rate-dependence of the tensile response and the hysteresis loops during loading, unloading and reloading cycles were captured by the model. Through a parametric study, the effect of the polymer constitutive response during tablet-climbing and its relation to interface hardening was investigated. It is shown that stiffening of the organic material is not required to achieve the experimentally observed strain hardening of nacre during tension. In fact, when the ratios of contour length to persistence length identified experimentally are employed in the simulations, the predicted stress-strain behavior exhibits a deformation hardening consistent with the one measured
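The worm-like-chain behavior underlying the single polymer chain can be written down directly via the standard Marko-Siggia interpolation formula. The persistence and contour lengths below are illustrative, not the values fitted for nacre's organic layer.

```python
# Worm-like-chain (WLC) force-extension relation, Marko-Siggia interpolation:
# F(x) = (kT/p) * [1/(4(1 - x/L)^2) - 1/4 + x/L]. Parameter values invented.

KB_T = 4.11e-21        # thermal energy at ~298 K, in joules

def wlc_force(x, L, p):
    """Entropic force (N) to extend a WLC to end-to-end distance x < L."""
    t = x / L
    return (KB_T / p) * (0.25 / (1.0 - t) ** 2 - 0.25 + t)

L = 30e-9              # contour length (m)
p = 0.4e-9             # persistence length (m)
forces = [wlc_force(f * L, L, p) for f in (0.1, 0.5, 0.9)]
```

The steep divergence as x approaches L is what gives chain-based interface models their characteristic stiffening at large stretch; the contour/persistence length ratio sets where that upturn occurs.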

  15. A Plugin to Interface openModeller from QGIS for Species' Potential Distribution Modelling

    NASA Astrophysics Data System (ADS)

    Becker, Daniel; Willmes, Christian; Bareth, Georg; Weniger, Gerd-Christian

    2016-06-01

    This contribution describes the development of a plugin for the geographic information system QGIS to interface the openModeller software package. The aim is to use openModeller to generate species' potential distribution models for various archaeological applications (site catchment analysis, for example). Since the usage of openModeller's command-line interface and configuration files can be a bit inconvenient, an extension of the QGIS user interface to handle these tasks, in combination with the management of the geographic data, was required. The implementation was realized in Python using PyQGIS and PyQT. The plugin, in combination with QGIS, handles the tasks of managing geographical data, data conversion, generation of configuration files required by openModeller and compilation of a project folder. The plugin proved to be very helpful with the task of compiling project datasets and configuration files for multiple instances of species occurrence datasets and the overall handling of openModeller. In addition, the plugin is easily extensible to take potential new requirements into account in the future.

  16. Integrated surface and groundwater modelling in the Thames Basin, UK using the Open Modelling Interface

    NASA Astrophysics Data System (ADS)

    Mackay, Jonathan; Abesser, Corinna; Hughes, Andrew; Jackson, Chris; Kingdon, Andrew; Mansour, Majdi; Pachocka, Magdalena; Wang, Lei; Williams, Ann

    2013-04-01

    The River Thames catchment is situated in the south-east of England. It covers approximately 16,000 km² and is the most heavily populated river basin in the UK. It is also one of the driest and has experienced severe drought events in the recent past. With the onset of climate change and human exploitation of our environment, there are now serious concerns over the sustainability of water resources in this basin, with 6 million m³ consumed every day for public water supply alone. Groundwater in the Thames basin is extremely important, providing 40% of water for public supply. The principal aquifer is the Chalk, a dual-permeability limestone, which has been extensively studied to understand its hydraulic properties. The fractured Jurassic limestone in the upper catchment also forms an important aquifer, supporting baseflow downstream during periods of drought. These aquifers are unconnected other than through the River Thames and its tributaries, which provide two-thirds of London's drinking water. Therefore, to manage these water resources sustainably and to make robust projections into the future, surface and groundwater processes must be considered in combination. This necessitates the simulation of the feedbacks and complex interactions between different parts of the water cycle, and the development of integrated environmental models. The Open Modelling Interface (OpenMI) standard provides a method through which environmental models of varying complexity and structure can be linked, allowing them to run simultaneously and exchange data at each timestep. This architecture has allowed us to represent the surface and subsurface flow processes within the Thames basin at an appropriate level of complexity based on our understanding of particular hydrological processes and features. We have developed a hydrological model in OpenMI which integrates a process-driven, gridded finite difference groundwater model of the Chalk with a more simplistic, semi
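The OpenMI pattern described — linked models running simultaneously and exchanging data at every timestep — can be sketched with two toy components. The recharge/baseflow arithmetic is illustrative; real OpenMI components implement a standard interface (initialize/update/exchange items) rather than this simplified one.

```python
# Toy sketch of OpenMI-style model linking: a groundwater and a river model
# advance together, exchanging values each timestep. All numbers are invented.

class GroundwaterModel:
    def __init__(self):
        self.storage = 100.0
    def update(self, recharge):
        self.storage += recharge
        baseflow = 0.05 * self.storage      # storage-proportional discharge
        self.storage -= baseflow
        return baseflow

class RiverModel:
    def __init__(self):
        self.flow = 0.0
    def update(self, rainfall_runoff, baseflow):
        self.flow = rainfall_runoff + baseflow
        return self.flow

gw, river = GroundwaterModel(), RiverModel()
flows = []
for rainfall in [2.0, 0.0, 0.0, 5.0]:
    baseflow = gw.update(recharge=0.3 * rainfall)    # exchanged value, each step
    flows.append(river.update(0.7 * rainfall, baseflow))
```

Even this toy shows the behavior the abstract highlights: after rainfall stops, river flow is sustained by slowly declining groundwater baseflow.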

  17. Distribution automation applications of fiber optics

    NASA Technical Reports Server (NTRS)

    Kirkham, Harold; Johnston, A.; Friend, H.

    1989-01-01

    Motivations for interest and research in distribution automation are discussed. The communication requirements of distribution automation are examined and shown to exceed the capabilities of power line carrier, radio, and telephone systems. A fiber optic based communication system is described that is co-located with the distribution system and that could satisfy the data rate and reliability requirements. A cost comparison shows that it could be constructed at a cost that is similar to that of a power line carrier system. The requirements for fiber optic sensors for distribution automation are discussed. The design of a data link suitable for optically-powered electronic sensing is presented. Empirical results are given. A modeling technique that was used to understand the reflections of guided light from a variety of surfaces is described. An optical position-indicator design is discussed. Systems aspects of distribution automation are discussed, in particular, the lack of interface, communications, and data standards. The economics of distribution automation are examined.

  18. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    NASA Astrophysics Data System (ADS)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  19. Automated fit quantification of tibial nail designs during the insertion using computer three-dimensional modelling.

    PubMed

    Amarathunga, Jayani P; Schuetz, Michael A; Yarlagadda, Prasad Kvd; Schmutz, Beat

    2014-12-01

    Intramedullary nailing is the standard fixation method for displaced diaphyseal fractures of the tibia. An optimal nail design should both facilitate insertion and anatomically fit the bone geometry at its final position in order to reduce the risk of stress fractures and malalignments. Due to the nonexistence of suitable commercial software, we developed a software tool for the automated fit assessment of nail designs. Furthermore, we demonstrated that an optimised nail, which fits better at the final position, is also easier to insert. Three-dimensional models of two nail designs and 20 tibiae were used. The fitting was quantified in terms of surface area, maximum distance, sum of surface areas and sum of maximum distances by which the nail was protruding into the cortex. The software was programmed to insert the nail into the bone model and to quantify the fit at defined increment levels. On average, the misfit during the insertion in terms of the four fitting parameters was smaller for the Expert Tibial Nail Proximal bend (476.3 mm², 1.5 mm, 2029.8 mm², 6.5 mm) than the Expert Tibial Nail (736.7 mm², 2.2 mm, 2491.4 mm², 8.0 mm). The differences were statistically significant (p ≤ 0.05). The software could be used by nail implant manufacturers for the purpose of implant design validation.
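
    The four fitting parameters can be illustrated with a small sketch. The signed protrusion distances, the per-point patch area, and the three "increment levels" below are invented, not taken from the study's 3D models.

```python
# Hedged sketch of the fit metrics described above: at each insertion
# increment we assume a list of signed distances [mm] by which sampled
# nail-surface points protrude into the cortex (positive = protrusion).
# PATCH_AREA_MM2 is a made-up surface-discretisation constant.

PATCH_AREA_MM2 = 0.5   # assumed area represented by one surface sample

def fit_metrics(increments):
    """Return (max protruding area, max distance,
               sum of areas, sum of max distances) over all increments."""
    areas, max_dists = [], []
    for dists in increments:
        protruding = [d for d in dists if d > 0.0]
        areas.append(len(protruding) * PATCH_AREA_MM2)
        max_dists.append(max(protruding, default=0.0))
    return max(areas), max(max_dists), sum(areas), sum(max_dists)

# three insertion increments of a hypothetical nail
levels = [
    [0.2, -0.1, 0.4],      # shallow insertion: slight protrusion
    [1.1, 0.3, -0.2],      # mid insertion: worst misfit
    [-0.3, -0.5, 0.1],     # final position: nearly flush
]
print(tuple(round(v, 2) for v in fit_metrics(levels)))   # (1.0, 1.1, 2.5, 1.6)
```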

  20. Evaluation of a commercial electro-kinetically pumped sheath-flow nanospray interface coupled to an automated capillary zone electrophoresis system.

    PubMed

    Peuchen, Elizabeth H; Zhu, Guijie; Sun, Liangliang; Dovichi, Norman J

    2017-03-01

    Capillary zone electrophoresis-electrospray ionization-mass spectrometry (CZE-ESI-MS) is attracting renewed attention for proteomic and metabolomic analysis. An important reason for this interest is the maturation and commercialization of interfaces for coupling CZE with ESI-MS. One of these interfaces is an electro-kinetically pumped sheath flow nanospray interface developed by the Dovichi group, in which a very low sheath flow is generated based on electroosmosis within a glass emitter. CMP Scientific has commercialized this interface as the EMASS-II ion source. In this work, we compared the performance of the EMASS-II ion source with our in-house system. The performance of the systems is equivalent. We also coupled the EMASS-II ion source with a PrinCE Next|480 capillary electrophoresis autosampler and an Orbitrap mass spectrometer, and analyzed this system's performance in terms of sensitivity, reproducibility, and separation performance for separation of tryptic digests, intact proteins, and amino acids. The system produced reproducible analysis of BSA digest; the RSDs of peptide intensity and migration time across 24 runs were less than 20 and 6%, respectively. The system produced a linear calibration curve of intensity across a 30-fold range of tryptic digest concentration. The combination of a commercial autosampler and electrospray interface efficiently separated amino acids, peptides, and intact proteins, and only required 5 μL of sample for analysis. Graphical Abstract The commercial and locally constructed versions of the interface provide similar numbers of protein identifications from a Xenopus laevis fertilized egg digest.
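
    The reproducibility figures quoted above are relative standard deviations (RSD = standard deviation / mean × 100%). A minimal computation over mock replicate runs (the values below are invented, not the study's 24-run data):

```python
# RSD (%) across replicate runs, as used for the reproducibility
# figures above. The replicate values are synthetic placeholders.
from statistics import mean, stdev

def rsd_percent(values):
    return stdev(values) / mean(values) * 100.0

peak_areas = [1.00e6, 1.12e6, 0.95e6, 1.05e6, 0.98e6]   # 5 mock runs
migration_times = [12.1, 12.3, 12.0, 12.2, 12.1]        # minutes

print(f"peak-area RSD: {rsd_percent(peak_areas):.1f}%")
print(f"migration-time RSD: {rsd_percent(migration_times):.1f}%")
```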

  1. Modeling the Effect of Interface Wear on Fatigue Hysteresis Behavior of Carbon Fiber-Reinforced Ceramic-Matrix Composites

    NASA Astrophysics Data System (ADS)

    Longbiao, Li

    2015-12-01

    An analytical method has been developed to investigate the effect of interface wear on fatigue hysteresis behavior in carbon fiber-reinforced ceramic-matrix composites (CMCs). The damage mechanisms, i.e., matrix multicracking, fiber/matrix interface debonding and interface wear, fiber fracture, slip and pull-out, have been considered. The statistical matrix multicracking model and fracture mechanics interface debonding criterion were used to determine the matrix crack spacing and interface debonded length. Upon first loading to the fatigue peak stress and subsequent cyclic loading, the fiber failure probabilities and fracture locations were determined by combining the interface wear model and the fiber statistical failure model, based on the assumption that the loads carried by broken and intact fibers satisfy the global load sharing criterion. The effects of matrix properties (matrix cracking characteristic strength and matrix Weibull modulus), interface properties (interface shear stress and interface debonded energy), fiber properties (fiber Weibull modulus and fiber characteristic strength), and cycle number on fiber failure, hysteresis loops and interface slip have been investigated. The hysteresis loops under fatigue loading predicted by the present analytical method were in good agreement with experimental data.

  2. Modeling strategic use of human computer interfaces with novel hidden Markov models

    PubMed Central

    Mariano, Laura J.; Poore, Joshua C.; Krum, David M.; Schwartz, Jana L.; Coskren, William D.; Jones, Eric M.

    2015-01-01

    Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM) for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game's functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic task pursuit. PMID
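
    The beta-process prior underlying the BP-HMM governs which features of the global activity library each participant's sequence uses. Its marginal feature-allocation scheme is commonly illustrated with the Indian Buffet Process: customer i takes each existing dish k with probability m_k / i, then samples Poisson(α / i) new dishes. The toy draw below illustrates only that combinatorial structure, not the inference procedure used in the study.

```python
# Toy Indian Buffet Process draw: a binary matrix whose rows are
# "participants" and whose columns are shared library "features".
# Illustrates the beta-process feature allocation, not BP-HMM inference.
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (fine for small lambda)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def ibp_draw(n_customers, alpha, rng):
    dish_counts = []                   # m_k: customers who took dish k
    allocations = []
    for i in range(1, n_customers + 1):
        # take each existing dish with probability m_k / i
        row = [1 if rng.random() < m / i else 0 for m in dish_counts]
        for k, taken in enumerate(row):
            dish_counts[k] += taken
        # then try Poisson(alpha / i) brand-new dishes
        new = poisson(alpha / i, rng)
        row.extend([1] * new)
        dish_counts.extend([1] * new)
        allocations.append(row)
    width = len(dish_counts)           # pad rows to a rectangular matrix
    return [row + [0] * (width - len(row)) for row in allocations]

matrix = ibp_draw(20, alpha=2.0, rng=random.Random(0))
print(len(matrix), "sequences drawn;", len(matrix[0]), "shared features")
```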

  3. Third-generation electrokinetically pumped sheath-flow nanospray interface with improved stability and sensitivity for automated capillary zone electrophoresis-mass spectrometry analysis of complex proteome digests.

    PubMed

    Sun, Liangliang; Zhu, Guijie; Zhang, Zhenbin; Mou, Si; Dovichi, Norman J

    2015-05-01

    We have reported a set of electrokinetically pumped sheath flow nanoelectrospray interfaces to couple capillary zone electrophoresis with mass spectrometry. A separation capillary is threaded through a cross into a glass emitter. A side arm provides fluidic contact with a sheath buffer reservoir that is connected to a power supply. The potential applied to the sheath buffer drives electro-osmosis in the emitter to pump the sheath fluid at nanoliter per minute rates. Our first-generation interface placed a flat-tipped capillary in the emitter. Sensitivity was inversely related to orifice size and to the distance from the capillary tip to the emitter orifice. A second-generation interface used a capillary with an etched tip that allowed the capillary exit to approach within a few hundred micrometers of the emitter orifice, resulting in a significant increase in sensitivity. In both the first- and second-generation interfaces, the emitter diameter was typically 8 μm; these narrow orifices were susceptible to plugging and tended to have limited lifetime. We now report a third-generation interface that employs a larger diameter emitter orifice with very short distance between the capillary tip and the emitter orifice. This modified interface is much more robust and produces much longer lifetime than our previous designs with no loss in sensitivity. We evaluated the third-generation interface for a 5000 min (127 runs, 3.5 days) repetitive analysis of bovine serum albumin digest using an uncoated capillary. We observed a 10% relative standard deviation in peak area, an average of 160,000 theoretical plates, and very low carry-over (much less than 1%). We employed a linear-polyacrylamide (LPA)-coated capillary for single-shot, bottom-up proteomic analysis of 300 ng of Xenopus laevis fertilized egg proteome digest and identified 1249 protein groups and 4038 peptides in a 110 min separation using an LTQ-Orbitrap Velos mass spectrometer; peak capacity was ∼330. The

  4. A DIFFUSE-INTERFACE APPROACH FOR MODELING TRANSPORT, DIFFUSION AND ADSORPTION/DESORPTION OF MATERIAL QUANTITIES ON A DEFORMABLE INTERFACE*

    PubMed Central

    Teigen, Knut Erik; Li, Xiangrong; Lowengrub, John; Wang, Fan; Voigt, Axel

    2010-01-01

    A method is presented to solve two-phase problems involving a material quantity on an interface. The interface can be advected, stretched, and change topology, and material can be adsorbed to or desorbed from it. The method is based on the use of a diffuse interface framework, which allows a simple implementation using standard finite-difference or finite-element techniques. Here, finite-difference methods on a block-structured adaptive grid are used, and the resulting equations are solved using a non-linear multigrid method. Interfacial flow with soluble surfactants is used as an example of the application of the method, and several test cases are presented demonstrating its accuracy and convergence. PMID:21373370

  5. Analytical model for Transient Current Technique (TCT) signal prediction and analysis for thin interface characterization

    NASA Astrophysics Data System (ADS)

    Bronuzzi, J.; Mapelli, A.; Sallese, J. M.

    2016-12-01

    A silicon wafer bonding technique has recently been proposed for the fabrication of monolithic silicon radiation detectors. This new process would enable direct bonding of a read-out electronics wafer onto a highly resistive silicon substrate wafer. Monolithic silicon detectors could thus be fabricated with a free choice of read-out chip and highly resistive silicon bulk, even from different providers. The electrical properties of the bonded interface are critical for this application: mobile charges generated by radiation inside the bonded bulk are expected to transit through the interface to be collected by the read-out electronics. To characterize this interface, the concept of the Transient Current Technique (TCT) has been explored by means of numerical simulations combined with a physics-based analytical model. In this work, the analytical model, which gives insight into the physics behind the dependence of the TCT signal on interface traps, is validated using both TCAD simulations and experimental measurements.
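
    A toy TCT signal can be sketched from the Shockley-Ramo theorem: a charge q drifting with velocity v across a detector of thickness d induces a current i = qv/d on the readout. Treating the bonded interface, purely as an assumption, as partially trapping the drifting charge gives a step in the transient; the geometry, velocity, and transmission factor below are invented.

```python
# Toy transient-current sketch (Shockley-Ramo, uniform weighting field).
# The "transmission" factor crudely models assumed partial trapping at
# the bonded interface; all parameter values are illustrative only.

def tct_signal(q=1.0, d=300e-6, v=5e4, x_interface=150e-6,
               transmission=0.7, dt=1e-10):
    """Times [s] and induced currents [A] as the charge drifts 0 -> d."""
    times, currents = [], []
    x, t, charge = 0.0, 0.0, q
    while x < d:
        times.append(t)
        currents.append(charge * v / d)     # i = q*v/d (Shockley-Ramo)
        prev, x = x, x + v * dt
        if prev < x_interface <= x:         # crossing the bonded interface
            charge *= transmission          # assumed partial trapping
        t += dt
    return times, currents

times, currents = tct_signal()
# the induced current drops by the transmission factor at the interface
print(len(times), currents[0], currents[-1])
```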

  6. Integration Of Heat Transfer Coefficient In Glass Forming Modeling With Special Interface Element

    SciTech Connect

    Moreau, P.; Gregoire, S.; Lochegnies, D.; Cesar de Sa, J.

    2007-05-17

    Numerical modeling of glass forming processes requires accurate knowledge of the heat exchange between the glass and the forming tools. A laboratory test is developed to determine the evolution of the heat transfer coefficient under different glass/mould contact conditions (contact pressure, temperature, lubrication...). In this paper, trials are performed to determine heat transfer coefficient evolutions under experimental conditions close to those of the industrial blow-and-blow process. In parallel with this work, a special interface element is implemented in a commercial Finite Element code to handle heat transfer between glass and mould for non-matching meshes and evolving contact. This special interface element, implemented through user subroutines, makes it possible to introduce the measured heat transfer coefficient evolutions into the numerical models at the glass/mould interface as a function of the local temperatures, contact pressures, contact time and type of lubrication. The blow-and-blow forming simulation of a perfume bottle is finally performed to assess the performance of the special interface element.
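
    The role of the heat transfer coefficient at the glass/mould interface can be sketched with a lumped two-body model. The pressure dependence of h and all material constants below are placeholders, not the measured evolutions from the laboratory trials.

```python
# Minimal lumped sketch of glass/mould heat exchange through a contact
# heat transfer coefficient h. h_contact() is a made-up monotone form;
# real values would come from the laboratory tests described above.

def h_contact(pressure_mpa):
    # assumed rise of conductance with contact pressure [W/m^2/K]
    return 800.0 + 600.0 * pressure_mpa

def simulate(t_glass=1000.0, t_mould=400.0, pressure_mpa=0.5,
             area=1e-3, c_glass=2000.0, c_mould=50000.0,
             dt=0.01, steps=500):
    """Explicit stepping of two lumped bodies coupled through h."""
    h = h_contact(pressure_mpa)
    for _ in range(steps):
        q = h * area * (t_glass - t_mould)   # interface heat flow [W]
        t_glass -= q * dt / c_glass          # c_* are lumped capacities [J/K]
        t_mould += q * dt / c_mould
    return t_glass, t_mould

tg, tm = simulate()
print(round(tg, 2), round(tm, 2))   # glass cools, mould warms slightly
```

In a finite element setting the same flux expression, q = h(p, T)·(T_glass − T_mould), is what an interface element evaluates locally between the two non-matching meshes.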

  7. Integration Of Heat Transfer Coefficient In Glass Forming Modeling With Special Interface Element

    NASA Astrophysics Data System (ADS)

    Moreau, P.; César de Sá, J.; Grégoire, S.; Lochegnies, D.

    2007-05-01

    Numerical modeling of glass forming processes requires accurate knowledge of the heat exchange between the glass and the forming tools. A laboratory test is developed to determine the evolution of the heat transfer coefficient under different glass/mould contact conditions (contact pressure, temperature, lubrication…). In this paper, trials are performed to determine heat transfer coefficient evolutions under experimental conditions close to those of the industrial blow-and-blow process. In parallel with this work, a special interface element is implemented in a commercial Finite Element code to handle heat transfer between glass and mould for non-matching meshes and evolving contact. This special interface element, implemented through user subroutines, makes it possible to introduce the measured heat transfer coefficient evolutions into the numerical models at the glass/mould interface as a function of the local temperatures, contact pressures, contact time and type of lubrication. The blow-and-blow forming simulation of a perfume bottle is finally performed to assess the performance of the special interface element.

  8. Psychovegetative syndrome diagnosis: an automated psychophysiological investigation and mathematical modeling approach.

    PubMed

    Brelidze, Z; Samadashvili, Z; Khachapuridze, G; Kubaneishvili, E; Nozadze, Z; Benidze, I; Tsitskishvili, N

    1995-01-01

    1. INTRODUCTION. The main purpose of our work was to create an informational expert system for psychovegetative syndrome diagnosis by applying clinical data and estimating the functioning of the central and peripheral parts of the regulatory apparatus of the human organism, taking into consideration parallel and consecutive sensory, motor, associative, and emotional drive systems, and the internal body state. We used automated psychophysiological investigation and mathematical models. For this purpose the following principal tasks were undertaken: the creation of a database for quantifiable estimation of the patient's state; the definition and automation of the psychophysiological investigation; mathematical modeling of vegetative functions using a non-invasive sample and its connection with a real psychophysiological experiment; mathematical modeling of the homeostasis of the organism's inner medium; and the creation of an informational expert system for psychovegetative syndrome diagnosis. 2. DATABASE FOR ESTIMATION OF PATIENT STATE. The medical records of the DB "PATIENT" contain data on the patient's psychic and somatoneurological status. 3. AUTOMATED PSYCHOPHYSIOLOGICAL INVESTIGATION. Psychophysiological investigation enables estimation of the functioning of several subsystems of the human organism and establishes interrelationships between them by means of electrophysiological data and performance parameters. Studying the psychophysiological provision of behavior in this way yields information about the adaptational mechanisms of the patient under given environmental loads. By means of special mathematical software, the processing of biosignals into performance parameters has been realized, as have the storage of the derived parameters in the database, the estimation of individual parameters with respect to informativity, and the establishment of diagnostic patterns. 4. MATHEMATICAL MODELS FOR ESTIMATION OF INTERNAL BODY STATE. The proposed

  9. Interfacing MATLAB and Python Optimizers to Black-Box Environmental Simulation Models

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Leung, K.; Tolson, B.

    2009-12-01

    A common approach for utilizing environmental models in a management or policy-analysis context is to incorporate them into a simulation-optimization framework - where an underlying process-based environmental model is linked with an optimization search algorithm. The optimization search algorithm iteratively adjusts various model inputs (i.e. parameters or design variables) in order to minimize an application-specific objective function computed on the basis of model outputs (i.e. response variables). Numerous optimization algorithms have been applied to the simulation-optimization of environmental systems and this research investigated the use of optimization libraries and toolboxes that are readily available in MATLAB and Python - two popular high-level programming languages. Inspired by model-independent calibration codes (e.g. PEST and UCODE), a small piece of interface software (known as PIGEON) was developed. PIGEON allows users to interface Python and MATLAB optimizers with arbitrary black-box environmental models without writing any additional interface code. An initial set of benchmark tests (involving more than 20 MATLAB and Python optimization algorithms) were performed to validate the interface software - results highlight the need to carefully consider such issues as numerical precision in output files and enforcement (or not) of parameter limits. Additional benchmark testing considered the problem of fitting isotherm expressions to laboratory data - with an emphasis on dual-mode expressions combining non-linear isotherms with a linear partitioning component. With respect to the selected isotherm fitting problems, derivative-free search algorithms significantly outperformed gradient-based algorithms. Attempts to improve gradient-based performance, via parameter tuning and also via several alternative multi-start approaches, were largely unsuccessful.
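
    The dual-mode isotherm mentioned above combines a linear partitioning term with a Langmuir term, q(C) = K_d·C + Q_max·b·C / (1 + b·C). A minimal derivative-free fit in the spirit of the pattern-search family is sketched below; the data, starting point, and step schedule are synthetic and this is not PIGEON or any of the benchmarked libraries.

```python
# Derivative-free (compass/pattern search) fit of a dual-mode isotherm
# to synthetic, noise-free "laboratory" data. Illustrative only.

def dual_mode(c, kd, qmax, b):
    return kd * c + qmax * b * c / (1.0 + b * c)

TRUE = (0.5, 10.0, 0.3)                       # parameters used to make data
conc = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]
sorbed = [dual_mode(c, *TRUE) for c in conc]

def sse(params):
    if any(p < 0 for p in params):            # enforce physical parameters
        return float("inf")
    return sum((dual_mode(c, *params) - q) ** 2 for c, q in zip(conc, sorbed))

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=20000):
    """Greedy axis moves; halve the step when no move improves f."""
    x, fx = list(x0), f(x0)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for delta in (+step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
        it += 1
    return x, fx

best, err = compass_search(sse, [1.0, 5.0, 1.0])
print([round(p, 3) for p in best], round(err, 6))
```

This mirrors the abstract's observation only in spirit: a derivative-free method needs no gradients of the (possibly noisy, file-mediated) objective, which is one reason such methods fared better in the benchmarks.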

  10. An evaluation of automated model-building procedures for protein crystallography.

    PubMed

    Badger, John

    2003-05-01

    The computer programs ARP/wARP, MAID and RESOLVE are designed to build protein structures into experimentally phased electron-density maps without any user intervention, requiring only diffraction data and sequence information. However, the MAID and RESOLVE systems, which seek to extend the range of automated model-building to approximately 3 Å resolution, have yet to receive significant testing outside the small numbers of data sets used in their development. Since these two systems employ a large number of scoring functions and decision-making heuristics, additional tests are required to establish their usefulness to the crystallographic community. To independently evaluate these programs, their performance was tested using a database containing 41 experimentally phased maps between 1.3 and 2.9 Å resolution from a diverse set of protein structures. At resolutions higher than 2.3 Å the most successful program was ARP/wARP 6.0, which accurately built an average of 90% of the main chain. This system builds somewhat larger fractions of the model than the previous version ARP/wARP 5.1, which accurately built an average of 87% of the main chain. Although not specifically designed for model building into high-resolution maps, MAID and RESOLVE were also quite successful in this resolution regime, typically building approximately 80% of the main chain. At 2.3-2.7 Å resolution the MAID and RESOLVE programs automatically built approximately 75% of the main-chain atoms in the protein structures used in these tests, which would significantly accelerate the model-building process. Data sets at lower resolution proved more problematic for these programs, although many of the secondary-structure elements were correctly identified and fitted.

  11. Micromechanical modeling of the cement-bone interface: the effect of friction, morphology and material properties on the micromechanical response

    PubMed Central

    Janssen, Dennis; Mann, Kenneth A.; Verdonschot, Nico

    2008-01-01

    In order to gain insight into the micromechanical behavior of the cement-bone interface, the effect of parametric variations of frictional, morphological and material properties on the mechanical response of the cement-bone interface was analyzed using a finite element approach. Finite element models of a cement-bone interface specimen were created from micro-computed tomography data of a physical specimen that was sectioned from an in vitro cemented total hip arthroplasty. In five models the friction coefficient was varied (μ = 0.0, 0.3, 0.7, 1.0 and 3.0), while in one model an ideally bonded interface was assumed. In two models cement interface gaps and an optimal cement penetration were simulated. Finally, the effect of bone cement stiffness variations was simulated (2.0 and 2.5 GPa, relative to the default 3.0 GPa). All models were loaded for a cycle of fully reversible tension-compression. From the simulated stress-displacement curves the interface deformation, stiffness and hysteresis were calculated. The results indicate that in the current model the mechanical properties of the cement-bone interface were caused by frictional phenomena at the shape-closed interlock rather than by adhesive properties of the cement. Our findings furthermore show that in our model maximizing cement penetration improved the micromechanical stiffness response of the cement-bone interface, while interface gaps had a detrimental effect. Relative to the frictional and morphological variations, variations in the cement stiffness had only a modest effect on the micromechanical behavior of the cement-bone interface. The current study provides information that may help to better understand the load transfer mechanisms taking place at the cement-bone interface. PMID:18848699
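
    The hysteresis extracted from such stress-displacement curves is the area enclosed by the loading-unloading loop over one cycle. For a sampled closed loop it can be computed with the shoelace formula; the loop points below are synthetic.

```python
# Hysteresis (dissipated energy per cycle) as the enclosed area of a
# sampled stress-displacement loop, via the shoelace formula.

def loop_area(points):
    """Shoelace area of a closed polygon given as (displacement, stress)."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# a synthetic parallelogram-shaped loading/unloading loop
loop = [(0.0, 0.0), (1.0, 2.0), (1.5, 2.5), (0.5, 0.5)]
print(loop_area(loop))   # 0.5
```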

  12. A theoretical model and phase field simulation on the evolution of interface roughness in the oxidation process

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Fang, Dai-Ning; Liu, Bin

    2012-01-01

    An oxidation kinetics model is developed to account for the effects of the oxidation interface curvature and the oxidation-induced volume change (Pilling-Bedworth ratio). For the oxidation of Fe-Cr-Al-Y alloy fiber, the predictions agree well with experimental results. By considering the influence of the oxidation interface curvature on oxidation rates, the evolution of a fluctuating oxidation interface is predicted. We also developed a phase field method (PFM) to simulate the evolution of the interface roughness. Both the theoretical model and the PFM results show that the interface becomes smooth during high-temperature oxidation. Stress distribution and evolution are calculated by the PFM, which indicates that the stress level decreases as the interface morphology evolves.

  13. Towards the virtual artery: a multiscale model for vascular physiology at the physics-chemistry-biology interface.

    PubMed

    Hoekstra, Alfons G; Alowayyed, Saad; Lorenz, Eric; Melnikova, Natalia; Mountrakis, Lampros; van Rooij, Britt; Svitenkov, Andrew; Závodszky, Gábor; Zun, Pavel

    2016-11-13

    This discussion paper introduces the concept of the Virtual Artery as a multiscale model for arterial physiology and pathologies at the physics-chemistry-biology (PCB) interface. The cellular level is identified as the mesoscopic level, and we argue that by coupling cell-based models with other relevant models on the macro- and microscale, a versatile model of arterial health and disease can be composed. We review the necessary ingredients, both models of arteries at many different scales, as well as generic methods to compose multiscale models. Next, we discuss how this can be combined into the virtual artery. Finally, we argue that the concept of models at the PCB interface could or perhaps should become a powerful paradigm, not only as in our case for studying physiology, but also for many other systems that have such PCB interfaces. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'.

  14. Towards the virtual artery: a multiscale model for vascular physiology at the physics-chemistry-biology interface

    NASA Astrophysics Data System (ADS)

    Hoekstra, Alfons G.; Alowayyed, Saad; Lorenz, Eric; Melnikova, Natalia; Mountrakis, Lampros; van Rooij, Britt; Svitenkov, Andrew; Závodszky, Gábor; Zun, Pavel

    2016-11-01

    This discussion paper introduces the concept of the Virtual Artery as a multiscale model for arterial physiology and pathologies at the physics-chemistry-biology (PCB) interface. The cellular level is identified as the mesoscopic level, and we argue that by coupling cell-based models with other relevant models on the macro- and microscale, a versatile model of arterial health and disease can be composed. We review the necessary ingredients, both models of arteries at many different scales, as well as generic methods to compose multiscale models. Next, we discuss how this can be combined into the virtual artery. Finally, we argue that the concept of models at the PCB interface could or perhaps should become a powerful paradigm, not only as in our case for studying physiology, but also for many other systems that have such PCB interfaces. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'.

  15. Including nonequilibrium interface kinetics in a continuum model for melting nanoscaled particles

    NASA Astrophysics Data System (ADS)

    Back, Julian M.; McCue, Scott W.; Moroney, Timothy J.

    2014-11-01

    The melting temperature of a nanoscaled particle is known to decrease as the curvature of the solid-melt interface increases. This relationship is most often modelled by a Gibbs-Thomson law, with the decrease in melting temperature proposed to be a product of the curvature of the solid-melt interface and the surface tension. Such a law must break down for sufficiently small particles, since the curvature becomes singular in the limit that the particle radius vanishes. Furthermore, the use of this law as a boundary condition for a Stefan-type continuum model is problematic because it leads to a physically unrealistic form of mathematical blow-up at a finite particle radius. By numerical simulation, we show that the inclusion of nonequilibrium interface kinetics in the Gibbs-Thomson law regularises the continuum model, so that the mathematical blow up is suppressed. As a result, the solution continues until complete melting, and the corresponding melting temperature remains finite for all time. The results of the adjusted model are consistent with experimental findings of abrupt melting of nanoscaled particles. This small-particle regime appears to be closely related to the problem of melting a superheated particle.
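
    The regularising effect of interface kinetics can be caricatured with a toy ODE. This is not the paper's Stefan model: the Gibbs-Thomson term alone is represented by a speed that grows like Γ/R, and the kinetic correction by an assumed mobility-like factor 1/(1 + k/R) that bounds the speed as R → 0.

```python
# Toy radius-shrink model for a melting particle (dimensionless).
# Without kinetics the interface speed ~ (u + gamma/R) diverges as
# R -> 0 (the "blow-up"); the assumed kinetic factor keeps it bounded.

def speed(r, u=1.0, gamma=0.1, k=0.05, kinetics=True):
    s = u + gamma / r                 # Gibbs-Thomson-like curvature term
    if kinetics:
        s /= 1.0 + k / r              # assumed kinetic mobility factor
    return s

def melt(r0=1.0, dt=1e-4, kinetics=True):
    """Shrink the radius with explicit Euler; return the peak speed."""
    r, vmax = r0, 0.0
    while r > 1e-3:
        v = speed(r, kinetics=kinetics)
        vmax = max(vmax, v)
        r -= v * dt
    return vmax

print("peak speed with kinetics:", round(melt(kinetics=True), 2))
print("peak speed without kinetics:", round(melt(kinetics=False), 2))
```

With kinetics the speed saturates near γ/k as the particle vanishes, so melting completes with a finite interface velocity, mirroring the qualitative conclusion of the abstract.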

  16. Including nonequilibrium interface kinetics in a continuum model for melting nanoscaled particles.

    PubMed

    Back, Julian M; McCue, Scott W; Moroney, Timothy J

    2014-11-17

    The melting temperature of a nanoscaled particle is known to decrease as the curvature of the solid-melt interface increases. This relationship is most often modelled by a Gibbs-Thomson law, with the decrease in melting temperature proposed to be a product of the curvature of the solid-melt interface and the surface tension. Such a law must break down for sufficiently small particles, since the curvature becomes singular in the limit that the particle radius vanishes. Furthermore, the use of this law as a boundary condition for a Stefan-type continuum model is problematic because it leads to a physically unrealistic form of mathematical blow-up at a finite particle radius. By numerical simulation, we show that the inclusion of nonequilibrium interface kinetics in the Gibbs-Thomson law regularises the continuum model, so that the mathematical blow up is suppressed. As a result, the solution continues until complete melting, and the corresponding melting temperature remains finite for all time. The results of the adjusted model are consistent with experimental findings of abrupt melting of nanoscaled particles. This small-particle regime appears to be closely related to the problem of melting a superheated particle.

  17. An SPH model for multiphase flows with complex interfaces and large density differences

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Zong, Z.; Liu, M. B.; Zou, L.; Li, H. T.; Shu, C.

    2015-02-01

    In this paper, an improved SPH model for multiphase flows with complex interfaces and large density differences is developed. The multiphase SPH model is based on the assumption of pressure continuity over the interfaces and avoids directly using the information of neighboring particles' densities or masses in solving governing equations. In order to improve computational accuracy and to obtain smooth pressure fields, a corrected density re-initialization is applied. A coupled dynamic solid boundary treatment (SBT) is implemented both to reduce numerical oscillations and to prevent unphysical particle penetration in the boundary area. The density correction and coupled dynamic SBT algorithms are modified to adapt to the density discontinuity on fluid interfaces in multiphase simulation. A cut-off value of the particle density is set to avoid negative pressure, which can lead to severe numerical difficulties and may even terminate the simulations. Three representative numerical examples, including a Rayleigh-Taylor instability test, a non-Boussinesq problem and a dam breaking simulation, are presented and compared with analytical results or experimental data. It is demonstrated that the present SPH model is capable of modeling complex multiphase flows with large interfacial deformations and density ratios.
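    Density re-initialization in SPH is commonly implemented as a Shepard-type filter. The 1-D sketch below shows that generic filter only, not the paper's modified multiphase scheme; the Gaussian kernel and all values are illustrative:

    ```python
    import math

    def kernel(r, h):
        """Gaussian-type smoothing kernel; its absolute normalisation does not
        matter here because the Shepard filter renormalises."""
        q = r / h
        return math.exp(-q * q)

    def shepard_density(x, m, rho, h):
        """Shepard-filtered density re-initialisation:
            rho_i = sum_j m_j W_ij / sum_j (m_j / rho_j) W_ij
        x: particle positions, m: masses, rho: current densities, h: smoothing length.
        """
        filtered = []
        for xi in x:
            num = den = 0.0
            for xj, mj, rhoj in zip(x, m, rho):
                w = kernel(abs(xi - xj), h)
                num += mj * w
                den += (mj / rhoj) * w
            filtered.append(num / den)
        return filtered
    ```

    The filtered density is a kernel-weighted harmonic mean of the neighbours' densities, so it smooths oscillations while staying within the range of the input values.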

  18. Multiscale Modeling of Intergranular Fracture in Aluminum: Constitutive Relation For Interface Debonding

    NASA Technical Reports Server (NTRS)

    Yamakov, V.; Saether, E.; Glaessgen, E. H.

    2008-01-01

    Intergranular fracture is a dominant mode of failure in ultrafine grained materials. In the present study, the atomistic mechanisms of grain-boundary debonding during intergranular fracture in aluminum are modeled using a coupled molecular dynamics finite element simulation. Using a statistical mechanics approach, a cohesive-zone law in the form of a traction-displacement constitutive relationship, characterizing the load transfer across the plane of a growing edge crack, is extracted from atomistic simulations and then recast in a form suitable for inclusion within a continuum finite element model. The cohesive-zone law derived by the presented technique is free of finite size effects and is statistically representative for describing the interfacial debonding of a grain boundary (GB) interface examined at atomic length scales. By incorporating the cohesive-zone law in cohesive-zone finite elements, the debonding of a GB interface can be simulated in a coupled continuum-atomistic model, in which a crack starts in the continuum environment, smoothly penetrates the continuum-atomistic interface, and continues its propagation in the atomistic environment. This study is a step towards relating atomistically derived decohesion laws to macroscopic predictions of fracture and constructing multiscale models for nanocrystalline and ultrafine grained materials.
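    Traction-displacement cohesive-zone laws of the kind extracted in this work are often idealised as bilinear in continuum finite element practice. The sketch below is that generic bilinear form, not the atomistically derived law of the paper; the peak traction and displacement values are arbitrary:

    ```python
    def bilinear_traction(delta, delta0, delta_f, t_max):
        """Bilinear cohesive law: traction ramps linearly to the peak t_max at
        opening displacement delta0, then softens linearly to zero at delta_f
        (full debond); zero traction beyond delta_f."""
        if delta <= 0.0:
            return 0.0
        if delta < delta0:
            return t_max * delta / delta0          # elastic loading branch
        if delta < delta_f:
            return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
        return 0.0                                  # fully debonded
    ```

    The area under the curve, 0.5 * t_max * delta_f, is the fracture energy dissipated per unit area of debonded interface.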

  19. Modeling and control of tissue compression and temperature for automation in robot-assisted surgery.

    PubMed

    Sinha, Utkarsh; Li, Baichun; Sankaranarayanan, Ganesh

    2014-01-01

    Robotic surgery is widely used due to its various benefits, which include reduced patient trauma and increased dexterity and ergonomics for the operating surgeon. Making the whole or part of the surgical procedure autonomous increases patient safety and will enable the robotic surgery platform to be used in telesurgery. In this work, an electrosurgery procedure that involves tissue compression and application of heat, such as coaptic vessel closure, has been automated. A MIMO nonlinear model characterizing the tissue stiffness and conductance under compression was feedback linearized, and tuned PID controllers were used to control the system to achieve both the displacement and temperature constraints. The reference inputs for both constraints were chosen as ramp-and-hold trajectories, which reflect the real constraints that exist in an actual surgical procedure. Our simulations showed that the controllers successfully tracked the reference trajectories with minimal deviation and within a finite time horizon. The MIMO system with the controllers developed in this work can be used to drive a surgical robot autonomously and perform electrosurgical procedures such as coaptic vessel closures.
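    The control structure described (a feedback-linearised plant with a tuned PID tracking a ramp-and-hold reference) can be illustrated on a toy SISO first-order plant. Everything below is a hypothetical sketch with made-up gains and time constants, not the paper's tissue model:

    ```python
    def simulate_pid(kp=10.0, ki=20.0, kd=0.0, tau=0.1, dt=0.001, t_end=5.0):
        """PID tracking of a ramp-and-hold reference r(t) = min(t, 1) on the
        first-order plant tau * dx/dt = -x + u, integrated by forward Euler.
        Returns the plant state at t_end."""
        x, integral, e_prev = 0.0, 0.0, 0.0
        for k in range(int(t_end / dt)):
            t = k * dt
            r = min(t, 1.0)                       # ramp for 1 s, then hold
            e = r - x
            integral += e * dt
            u = kp * e + ki * integral + kd * (e - e_prev) / dt
            e_prev = e
            x += dt * (-x + u) / tau
        return x
    ```

    The integral term drives the steady-state tracking error of the hold phase to zero, which is why the final state settles on the reference.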

  20. A resampling-based Markovian model for automated colon cancer diagnosis.

    PubMed

    Ozdemir, Erdem; Sokmensuer, Cenk; Gunduz-Demir, Cigdem

    2012-01-01

    In recent years, there has been a great effort in the research of implementing automated diagnostic systems for tissue images. One major challenge in this implementation is to design systems that are robust to image variations. In order to meet this challenge, it is important to train the systems on a large number of labeled images covering a wide range of variation. However, acquiring labeled images is quite difficult in this domain, and hence the labeled training data are typically very limited. Although the issue of having limited labeled data is acknowledged by many researchers, it has rarely been considered in the system design. This paper successfully addresses this issue, introducing a new resampling framework to simulate variations in tissue images. This framework generates multiple sequences from an image for its representation and models them using a Markov process. Working with colon tissue images, our experiments show that this framework increases the generalization capacity of a learner by increasing the size and variation of the training data and improves the classification performance of a given image by combining the decisions obtained on its sequences.
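    The core idea of representing an image by generated sequences and modeling them with a Markov process can be sketched with a first-order chain over discretised labels. This generic sketch (Laplace-smoothed transition estimates, log-likelihood scoring) is illustrative only and is not the paper's algorithm:

    ```python
    import math

    def transition_matrix(seq, n_states):
        """Estimate a first-order Markov transition matrix from a sequence of
        integer states, with add-one (Laplace) smoothing so that unseen
        transitions keep nonzero probability."""
        counts = [[1.0] * n_states for _ in range(n_states)]
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1.0
        return [[c / sum(row) for c in row] for row in counts]

    def log_likelihood(seq, matrix):
        """Score a sequence under an estimated transition matrix; higher means
        the sequence is more consistent with the learned dynamics."""
        return sum(math.log(matrix[a][b]) for a, b in zip(seq, seq[1:]))
    ```

    Decisions from several sequences of one image can then be combined, e.g. by summing their log-likelihoods per class.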

  1. Parametric surface modeling and registration for comparison of manual and automated segmentation of the hippocampus.

    PubMed

    Shen, Li; Firpi, Hiram A; Saykin, Andrew J; West, John D

    2009-06-01

    Accurate and efficient segmentation of the hippocampus from brain images is a challenging issue. Although experienced anatomic tracers can be reliable, manual segmentation is a time-consuming process and may not be feasible for large-scale neuroimaging studies. In this article, we compare an automated method, FreeSurfer (V4), with a published manual protocol on the determination of hippocampal boundaries from magnetic resonance imaging scans, using data from an existing mild cognitive impairment/Alzheimer's disease cohort. To perform the comparison, we develop an enhanced spherical harmonic processing framework to model and register these hippocampal traces. The framework treats the two hippocampi as a single geometric configuration and extracts the positional, orientation, and shape variables in a multiobject setting. We apply this framework to register the manual tracing and FreeSurfer results together; the two methods show stronger agreement on position and orientation than on shape measures. Work is in progress to examine a refined FreeSurfer segmentation strategy, and improved agreement on shape features is expected.

  2. Automated segmentation and geometrical modeling of the tricuspid aortic valve in 3D echocardiographic images.

    PubMed

    Pouch, Alison M; Wang, Hongzhi; Takabe, Manabu; Jackson, Benjamin M; Sehgal, Chandra M; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2013-01-01

    The aortic valve has been described with variable anatomical definitions, and the consistency of 2D manual measurement of valve dimensions in medical image data has been questionable. Given the importance of image-based morphological assessment in the diagnosis and surgical treatment of aortic valve disease, there is considerable need to develop a standardized framework for 3D valve segmentation and shape representation. Towards this goal, this work integrates template-based medial modeling and multi-atlas label fusion techniques to automatically delineate and quantitatively describe aortic leaflet geometry in 3D echocardiographic (3DE) images, a challenging task that has been explored only to a limited extent. The method makes use of expert knowledge of aortic leaflet image appearance, generates segmentations with consistent topology, and establishes a shape-based coordinate system on the aortic leaflets that enables standardized automated measurements. In this study, the algorithm is evaluated on 11 3DE images of normal human aortic leaflets acquired at mid systole. The clinical relevance of the method is its ability to capture leaflet geometry in 3DE image data with minimal user interaction while producing consistent measurements of 3D aortic leaflet geometry.

  3. A modelling approach demonstrating micromechanical changes in the tibial cemented interface due to in vivo service.

    PubMed

    Srinivasan, Priyanka; Miller, Mark A; Verdonschot, Nico; Mann, Kenneth A; Janssen, Dennis

    2017-02-27

    Post-operative changes in trabecular bone morphology at the cement-bone interface can vary depending on time in service. This study aims to investigate how micromotion and bone strains change at the tibial bone-cement interface before and after cementation. This work discusses whether the morphology of the post-mortem interface can be explained by studying changes in these mechanical quantities. Three post-mortem cement-bone interface specimens showing varying levels of bone resorption (minimal, extensive and intermediate) were selected for this study. Using image segmentation techniques, masks of the post-mortem bone were dilated to fill up the mould spaces in the cement to obtain the immediately post-operative situation. Finite element (FE) models of the post-mortem and post-operative situation were created from these segmentation masks. Subsequent removal of the cement layer resulted in the pre-operative situation. FE micromotion and bone strains were analyzed for the interdigitated trabecular bone. For all specimens, micromotion increased from the post-operative to the post-mortem models (distally, in specimen 1: 0.1 to 0.5µm; specimen 2: 0.2 to 0.8µm; specimen 3: 0.27 to 1.62µm). Similarly, bone strains were shown to increase from post-operative to post-mortem (distally, in specimen 1: -185 to -389µε; specimen 2: -170 to -824µε; specimen 3: -216 to -1024µε). Post-mortem interdigitated bone was found to be strain shielded in comparison with supporting bone, indicating that failure of bone would occur distal to the interface. These results indicate that stress shielding of interdigitated trabeculae is a plausible explanation for resorption patterns observed in post-mortem specimens.

  4. The DaveMLTranslator: An Interface for DAVE-ML Aerodynamic Models

    NASA Technical Reports Server (NTRS)

    Hill, Melissa A.; Jackson, E. Bruce

    2007-01-01

    It can take weeks or months to incorporate a new aerodynamic model into a vehicle simulation and validate the performance of the model. The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) has been proposed as a means to reduce the time required to accomplish this task by defining a standard format for typical components of a flight dynamic model. The purpose of this paper is to describe an object-oriented C++ implementation of a class that interfaces a vehicle subsystem model specified in DAVE-ML and a vehicle simulation. Using the DaveMLTranslator class, aerodynamic or other subsystem models can be automatically imported and verified at run-time, significantly reducing the elapsed time between receipt of a DAVE-ML model and its integration into a simulation environment. The translator performs variable initializations, data table lookups, and mathematical calculations for the aerodynamic build-up, and executes any embedded static check-cases for verification. The implementation is efficient, enabling real-time execution. Simple interface code for the model inputs and outputs is the only requirement to integrate the DaveMLTranslator as a vehicle aerodynamic model. The translator makes use of existing table-lookup utilities from the Langley Standard Real-Time Simulation in C++ (LaSRS++). The design and operation of the translator class is described and comparisons with existing, conventional, C++ aerodynamic models of the same vehicle are given.
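    The data-table lookups mentioned here are the workhorse of aerodynamic build-ups. The sketch below is a minimal clamped 1-D linear interpolation, an illustrative stand-in for the LaSRS++ table-lookup utilities (whose actual API is not shown in this abstract):

    ```python
    def table_lookup(xs, ys, x):
        """Clamped 1-D linear interpolation over a monotonically increasing
        breakpoint vector xs with dependent values ys. Values outside the
        table are clamped to the endpoints, a common convention for
        aerodynamic data tables."""
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for i in range(len(xs) - 1):
            if x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] + t * (ys[i + 1] - ys[i])
    ```

    A typical use is a lift-coefficient table CL(alpha) evaluated at the current angle of attack during each simulation frame.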

  5. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost-function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of
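    The general idea of searching for parameter values that make a model's simulations satisfy observed specifications can be sketched with plain rejection sampling from a prior, a far cruder approach than the Bayesian statistical model checking used in the paper. The decay-rate demo below is entirely hypothetical:

    ```python
    import math
    import random

    def rejection_estimate(simulate, satisfies, prior_sample, n=2000, seed=0):
        """Draw candidate parameters from a prior, simulate the model, and keep
        the parameters whose simulated output satisfies the specification."""
        rng = random.Random(seed)
        return [theta for theta in (prior_sample(rng) for _ in range(n))
                if satisfies(simulate(theta))]

    # Hypothetical demo: recover a decay rate k for x(t) = exp(-k*t) from the
    # specification 0.3 < x(1) < 0.4, which pins k to roughly [0.92, 1.20].
    accepted = rejection_estimate(
        simulate=lambda k: math.exp(-k),
        satisfies=lambda x: 0.3 < x < 0.4,
        prior_sample=lambda rng: rng.uniform(0.0, 2.0),
    )
    ```

    The accepted set approximates the posterior over parameters consistent with the specification; real applications replace the one-line simulate with the full stochastic model.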

  6. Facilitating access to laboratory guidelines by modeling their contents and designing a computerized user interface.

    PubMed

    Yasini, Mobin; Duclos, Catherine; Lamy, Jean-Baptiste; Venot, Alain

    2011-01-01

    Laboratory tests are not always prescribed appropriately. Guidelines for some important laboratory tests have been developed by expert panels in the Parisian region to maximize the appropriateness of laboratory medicine. However, these recommendations are not frequently consulted by physicians and nurses. We developed a system facilitating consultation of these guidelines, to increase their usability. Elements of information contained in these documents were identified and included in recommendations of different categories. UML modeling was used to represent these categories and their relationships to each other in the guidelines. We used the generated model to implement a computerized interface. The prototype interface, based on web-based technology, was found to be rapid and easy to use. Clicking on the provided keywords highlights information about the subject sought while retaining the entire text of the guideline on-screen.

  7. A user interface for the Kansas Geological Survey slug test model.

    PubMed

    Esling, Steven P; Keller, John E

    2009-01-01

    The Kansas Geological Survey (KGS) developed a semianalytical solution for slug tests that incorporates the effects of partial penetration, anisotropy, and the presence of variable conductivity well skins. The solution can simulate either confined or unconfined conditions. The original model, written in FORTRAN, has a text-based interface with rigid input requirements and limited output options. We re-created the main routine for the KGS model as a Visual Basic macro that runs in most versions of Microsoft Excel and built a simple-to-use Excel spreadsheet interface that automatically displays the graphical results of the test. A comparison of the output from the original FORTRAN code to that of the new Excel spreadsheet version for three cases produced identical results.

  8. Computer modelling of the surface tension of the gas-liquid and liquid-liquid interface.

    PubMed

    Ghoufi, Aziz; Malfreyt, Patrice; Tildesley, Dominic J

    2016-03-07

    This review presents the state of the art in molecular simulations of interfacial systems and of the calculation of the surface tension from the underlying intermolecular potential. We provide a short account of different methodological factors (size-effects, truncation procedures, long-range corrections and potential models) that can affect the results of the simulations. Accurate calculations are presented for the calculation of the surface tension as a function of the temperature, pressure and composition by considering the planar gas-liquid interface of a range of molecular fluids. In particular, we consider the challenging problems of reproducing the interfacial tension of salt solutions as a function of the salt molality; the simulations of spherical interfaces including the calculation of the sign and size of the Tolman length for a spherical droplet; the use of coarse-grained models in the calculation of the interfacial tension of liquid-liquid surfaces and the mesoscopic simulations of oil-water-surfactant interfacial systems.

  9. Modeling of ultrasound transmission through a solid-liquid interface comprising a network of gas pockets

    NASA Astrophysics Data System (ADS)

    Paumel, K.; Moysan, J.; Chatain, D.; Corneloup, G.; Baqué, F.

    2011-08-01

    Ultrasonic inspection of sodium-cooled fast reactor requires a good acoustic coupling between the transducer and the liquid sodium. Ultrasonic transmission through a solid surface in contact with liquid sodium can be complex due to the presence of microscopic gas pockets entrapped by the surface roughness. Experiments are run using substrates with controlled roughness consisting of a network of holes and a modeling approach is then developed. In this model, a gas pocket stiffness at a partially solid-liquid interface is defined. This stiffness is then used to calculate the transmission coefficient of ultrasound at the entire interface. The gas pocket stiffness has a static, as well as an inertial component, which depends on the ultrasonic frequency and the radiative mass.
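    Transmission across a partially contacting interface of this kind is often described by the quasi-static spring model, in which the gas-pocket network is lumped into an interfacial stiffness K per unit area. The sketch below is that standard model in its simplest form, not the paper's frequency-dependent stiffness; the numbers in the test are arbitrary:

    ```python
    import math

    def transmission_magnitude(omega, stiffness, z1, z2):
        """Pressure transmission coefficient magnitude |T| for a spring-type
        imperfect interface of stiffness K (Pa/m) between half-spaces of
        acoustic impedance z1 and z2 (kg m^-2 s^-1):
            T = [2*z2/(z1+z2)] / sqrt(1 + (omega*z1*z2 / (K*(z1+z2)))^2)
        As K -> infinity the welded-interface value 2*z2/(z1+z2) is recovered;
        a compliant interface filters out high frequencies."""
        welded = 2.0 * z2 / (z1 + z2)
        damping = omega * z1 * z2 / (stiffness * (z1 + z2))
        return welded / math.sqrt(1.0 + damping * damping)
    ```

    In the paper's setting the stiffness itself carries static and inertial contributions from the gas pockets, so K would be replaced by a frequency-dependent expression.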

  10. Towards Automated Seismic Moment Tensor Inversion in Australia Using 3D Structural Model

    NASA Astrophysics Data System (ADS)

    Hingee, M.; Tkalcic, H.; Fichtner, A.; Sambridge, M.; Kennett, B. L.; Gorbatov, A.

    2009-12-01

    functions. Implementation of this 3D model will improve warning systems, and we present results that are an important step towards automated MT inversion in Australia. [1] Fichtner, A., Kennett, B.L.N., Igel, H., Bunge, H.-P., 2009. Full seismic waveform tomography for upper-mantle structure in the Australasian region using adjoint methods. Geophys. J. Int., in press.

  11. Interfacing Cultured Neurons to Microtransducers Arrays: A Review of the Neuro-Electronic Junction Models.

    PubMed

    Massobrio, Paolo; Massobrio, Giuseppe; Martinoia, Sergio

    2016-01-01

    Microtransducer arrays, both metal microelectrodes and silicon-based devices, are widely used as neural interfaces to measure, extracellularly, the electrophysiological activity of excitable cells. Starting from the pioneering works at the beginning of the 70's, improvements in manufacturing methods, materials, and geometrical shapes have been made. Nowadays, these devices are routinely used in different experimental conditions (both in vivo and in vitro), and for several applications ranging from basic research in neuroscience to more biomedically oriented applications. However, the use of these micro-devices depends strongly on the nature of the interface (coupling) between the cell membrane and the sensitive active surface of the microtransducer. Thus, many efforts have been directed at improving coupling conditions. In particular, in recent years, two innovations have been proposed, related to the use of carbon nanotubes as interface material and to the development of micro-structures which can be engulfed by the cell membrane. In this work, we review what can be simulated by using simple circuital models and what happens at the interface between the sensitive active surface of the microtransducer and the neuronal membrane of in vitro neurons. We finally focus our attention on these two novel technological solutions capable of improving the coupling between neuron and micro-nano transducer.

  12. Interfacing Cultured Neurons to Microtransducers Arrays: A Review of the Neuro-Electronic Junction Models

    PubMed Central

    Massobrio, Paolo; Massobrio, Giuseppe; Martinoia, Sergio

    2016-01-01

    Microtransducer arrays, both metal microelectrodes and silicon-based devices, are widely used as neural interfaces to measure, extracellularly, the electrophysiological activity of excitable cells. Starting from the pioneering works at the beginning of the 70's, improvements in manufacturing methods, materials, and geometrical shapes have been made. Nowadays, these devices are routinely used in different experimental conditions (both in vivo and in vitro), and for several applications ranging from basic research in neuroscience to more biomedically oriented applications. However, the use of these micro-devices depends strongly on the nature of the interface (coupling) between the cell membrane and the sensitive active surface of the microtransducer. Thus, many efforts have been directed at improving coupling conditions. In particular, in recent years, two innovations have been proposed, related to the use of carbon nanotubes as interface material and to the development of micro-structures which can be engulfed by the cell membrane. In this work, we review what can be simulated by using simple circuital models and what happens at the interface between the sensitive active surface of the microtransducer and the neuronal membrane of in vitro neurons. We finally focus our attention on these two novel technological solutions capable of improving the coupling between neuron and micro-nano transducer. PMID:27445657

  13. Modeling Geometry and Progressive Failure of Material Interfaces in Plain Weave Composites

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Cheng, Ron-Bin

    2010-01-01

    A procedure combining a geometrically nonlinear, explicit-dynamics contact analysis, computer aided design techniques, and elasticity-based mesh adjustment is proposed to efficiently generate realistic finite element models for meso-mechanical analysis of progressive failure in textile composites. In the procedure, the geometry of fiber tows is obtained by imposing a fictitious expansion on the tows. Meshes resulting from the procedure are conformal with the computed tow-tow and tow-matrix interfaces but are incongruent at the interfaces. The mesh interfaces are treated as cohesive contact surfaces not only to resolve the incongruence but also to simulate progressive failure. The method is employed to simulate debonding at the material interfaces in a ceramic-matrix plain weave composite with matrix porosity and in a polymeric matrix plain weave composite without matrix porosity, both subject to uniaxial cyclic loading. The numerical results indicate progression of the interfacial damage during every loading and reverse loading event in a constant strain amplitude cyclic process. However, the composites show different patterns of damage advancement.

  14. Characterizing and Modeling Brittle Bi-material Interfaces Subjected to Shear

    NASA Astrophysics Data System (ADS)

    Anyfantis, Konstantinos N.; Berggreen, Christian

    2014-12-01

    This work is based on the investigation, both experimentally and numerically, of the Mode II fracture process and bond strength of bondlines formed in co-cured composite/metal joints. To this end, GFRP-to-steel double strap joints were tested in tension, so that the bi-material interface was subjected to shear with debonding occurring under Mode II conditions. The study of the debonding process and thus failure of the joints was based both on stress and energy considerations. Analytical formulas were utilized for the derivation of the respective shear strength and fracture toughness measures which characterize the bi-material interface, by considering the joint's failure load, geometry and involved materials. The derived stress and toughness magnitudes were further utilized as the parameters of an extrinsic cohesive law, applied in connection with the modeling the bi-material interface in a finite element simulation environment. It was concluded that interfacial fracture in the considered joints was driven by the fracture toughness and not by strength considerations, and that LEFM is well suited to analyze the failure of the joint. Additionally, the double strap joint geometry was identified and utilized as a characterization test for measuring the Mode II fracture toughness of brittle bi-material interfaces.

  15. Influence of thermal fluctuations on the geometry of interfaces of the quenched Ising model.

    PubMed

    Corberi, Federico; Lippiello, Eugenio; Zannetti, Marco

    2008-07-01

    We study the role of the quench temperature Tf in the phase-ordering kinetics of the Ising model with single spin flip in d = 2,3. Equilibrium interfaces are flat at Tf = 0, whereas at Tf > 0 they are curved and rough (above the roughening temperature in d = 3). We show, by means of scaling arguments and numerical simulations, that this geometrical difference is important for the phase-ordering kinetics as well. In particular, while the growth exponent z = 2 of the size of domains L(t) ~ t^(1/z) is unaffected by Tf, other exponents related to the interface geometry take different values at Tf = 0 or Tf > 0. For Tf > 0 a crossover phenomenon is observed from an early stage where interfaces are still flat and the system behaves as at Tf = 0, to the asymptotic regime with curved interfaces characteristic of Tf > 0. Furthermore, it is shown that the roughening length, although subdominant with respect to L(t), produces appreciable corrections to scaling up to very long times in d = 2.
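    The single-spin-flip dynamics studied here can be sketched with a minimal Metropolis quench of the 2-D Ising model: at Tf = 0 only energy-lowering or energy-neutral flips are accepted, so the energy decreases monotonically as domains coarsen. A toy implementation on a small periodic lattice (lattice size, sweep count, and seed are arbitrary choices for illustration):

    ```python
    import math
    import random

    def energy(spins, n):
        """Nearest-neighbour Ising energy E = -sum_<ij> s_i s_j on an n x n
        periodic lattice (each bond counted once via right/down neighbours)."""
        e = 0
        for i in range(n):
            for j in range(n):
                e -= spins[i][j] * (spins[(i + 1) % n][j] + spins[i][(j + 1) % n])
        return e

    def quench(n=16, tf=0.0, sweeps=50, seed=1):
        """Single-spin-flip Metropolis quench from a random configuration to
        temperature tf; at tf = 0 only non-energy-raising flips are accepted."""
        rng = random.Random(seed)
        spins = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
        for _ in range(sweeps * n * n):
            i, j = rng.randrange(n), rng.randrange(n)
            nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
                  + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
            de = 2 * spins[i][j] * nb          # energy change of flipping s_ij
            if de <= 0 or (tf > 0 and rng.random() < math.exp(-de / tf)):
                spins[i][j] *= -1
        return spins
    ```

    Measuring the domain size L(t) over such runs at different Tf is how the growth and interface-geometry exponents discussed in the abstract are extracted.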

  16. Mathematical modeling of planar and spherical vapor-liquid phase interfaces for multicomponent fluids

    NASA Astrophysics Data System (ADS)

    Celný, David; Vinš, Václav; Planková, Barbora; Hrubý, Jan

    2016-03-01

    Development of methods for accurate modeling of phase interfaces is important for understanding various natural processes and for applications in technology such as power production and carbon dioxide separation and storage. In particular, prediction of the course of non-equilibrium phase transition processes requires knowledge of the properties of the strongly curved phase interfaces of microscopic droplets. In our work, we focus on the spherical vapor-liquid phase interfaces of binary mixtures. We developed a robust computational method to determine the density and concentration profiles. The fundamentals of our approach lie in the Cahn-Hilliard gradient theory, which allows the functional formulation to be transcribed into a system of ordinary Euler-Lagrange equations. This system is then split and modified into a shape suitable for iterative computation. For this task, we combine the Newton-Raphson and shooting methods, providing good convergence speed. For the thermodynamic properties, the PC-SAFT equation of state is used. We determine the density and concentration profiles for spherical phase interfaces at various saturation factors for the binary mixture of CO2 and C9H20. The computed concentration profiles allow us to determine the work of formation and other characteristics of the microscopic droplets.
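    For a single-component planar interface with a model double-well free energy, the Euler-Lagrange equation of gradient theory reduces to a first-order ODE whose solution is the classic tanh profile. The simplified illustration below integrates that first integral with forward Euler; the paper itself treats binary mixtures, spherical geometry, and PC-SAFT thermodynamics, none of which appear here:

    ```python
    import math

    def planar_profile(z_max=4.0, dz=1e-4):
        """Integrate the first integral phi' = (1 - phi^2)/sqrt(2) of the
        Euler-Lagrange equation phi'' = phi^3 - phi for the model double-well
        free energy W(phi) = (1 - phi^2)^2 / 4 (planar interface, phi(0) = 0).
        Exact solution for comparison: phi(z) = tanh(z / sqrt(2))."""
        phi, z, out = 0.0, 0.0, [(0.0, 0.0)]
        while z < z_max:
            phi += dz * (1.0 - phi * phi) / math.sqrt(2.0)
            z += dz
            out.append((z, phi))
        return out
    ```

    For curved interfaces and mixtures the first integral is lost, which is why the paper resorts to a coupled Newton-Raphson/shooting scheme instead.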

  17. A comprehensive flexoelectric model for droplet interface bilayers acting as sensors and energy harvesters

    NASA Astrophysics Data System (ADS)

    Kancharala, Ashok; Freeman, Eric; Philen, Michael

    2016-10-01

    Droplet interface bilayers have found applications in the development of biologically-inspired mechanosensors. In this research, a comprehensive flexoelectric framework has been developed to predict the mechanoelectric capabilities of the biological membrane under mechanical excitation for sensing and energy harvesting applications. The dynamic behavior of the droplets has been modeled using nonlinear finite element analysis, coupled with a flexoelectric model for predicting the resulting material polarization. This coupled model allows for the prediction of the mechanoelectrical response of the droplets under excitation. Using the developed framework, the potential for sensing and energy harvesting through lipid membranes is investigated.

  18. Ergonomic Models of Anthropometry, Human Biomechanics and Operator-Equipment Interfaces

    NASA Technical Reports Server (NTRS)

    Kroemer, Karl H. E. (Editor); Snook, Stover H. (Editor); Meadows, Susan K. (Editor); Deutsch, Stanley (Editor)

    1988-01-01

    The Committee on Human Factors was established in October 1980 by the Commission on Behavioral and Social Sciences and Education of the National Research Council. The committee is sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Institute for the Behavioral and Social Sciences, the National Aeronautics and Space Administration, and the National Science Foundation. The workshop discussed the following: anthropometric models; biomechanical models; human-machine interface models; and research recommendations. A 17-page bibliography is included.

  19. Formulation of consumables management models: Mission planning processor payload interface definition

    NASA Technical Reports Server (NTRS)

    Torian, J. G.

    1977-01-01

    Consumables models required for the mission planning and scheduling function are formulated. The relation of the models to prelaunch, onboard, ground support, and postmission functions for the space transportation systems is established. Analytical models, consisting of an orbiter planning processor with a consumables database, are developed. A method of recognizing potential constraint violations in both the planning and flight operations functions is presented, together with a flight data file providing storage/retrieval of information over an extended period, which interfaces with a flight operations processor for monitoring of the actual flights.

  20. Degenerate Ising model for atomistic simulation of crystal-melt interfaces

    SciTech Connect

    Schebarchov, D.; Schulze, T. P.; Hendy, S. C.

    2014-02-21

    One of the simplest microscopic models for a thermally driven first-order phase transition is an Ising-type lattice system with nearest-neighbour interactions, an external field, and a degeneracy parameter. The underlying lattice and the interaction coupling constant control the anisotropic energy of the phase boundary, the field strength represents the bulk latent heat, and the degeneracy quantifies the difference in communal entropy between the two phases. We simulate the (stochastic) evolution of this minimal model by applying rejection-free canonical and microcanonical Monte Carlo algorithms, and we obtain caloric curves and heat capacity plots for square (2D) and face-centred cubic (3D) lattices with periodic boundary conditions. Since the model admits precise adjustment of bulk latent heat and communal entropy, neither of which affect the interface properties, we are able to tune the crystal nucleation barriers at a fixed degree of undercooling and verify a dimension-dependent scaling expected from classical nucleation theory. We also analyse the equilibrium crystal-melt coexistence in the microcanonical ensemble, where we detect negative heat capacities and find that this phenomenon is more pronounced when the interface is the dominant contributor to the total entropy. The negative branch of the heat capacity appears smooth only when the equilibrium interface-area-to-volume ratio is not constant but varies smoothly with the excitation energy. Finally, we simulate microcanonical crystal nucleation and subsequent relaxation to an equilibrium Wulff shape, demonstrating the model's utility in tracking crystal-melt interfaces at the atomistic level.

  1. Molecular simulation of water vapor-liquid phase interfaces using TIP4P/2005 model

    NASA Astrophysics Data System (ADS)

    Planková, Barbora; Vinš, Václav; Hrubý, Jan; Duška, Michal; Němec, Tomáš; Celný, David

    2015-05-01

    Molecular dynamics simulations of water were run using the TIP4P/2005 model for temperatures ranging from 250 K to 600 K. The density profile, the surface tension, and the thickness of the phase interface were calculated as preliminary results. The surface tension values matched the IAPWS correlation nicely over a wide range of temperatures. As a partial result, DL_POLY Classic was successfully used for tests of the new computing cluster in our institute.

  2. Structure and application of an interface program between a geographic-information system and a ground-water flow model

    USGS Publications Warehouse

    Van Metre, P.C.

    1990-01-01

    A computer-program interface between a geographic-information system and a groundwater flow model links two unrelated software systems for use in developing the flow models. The interface program allows the modeler to compile and manage geographic components of a groundwater model within the geographic information system. A significant savings of time and effort is realized in developing, calibrating, and displaying the groundwater flow model. Four major guidelines were followed in developing the interface program: (1) no changes to the groundwater flow model code were to be made; (2) a data structure was to be designed within the geographic information system that follows the same basic data structure as the groundwater flow model; (3) the interface program was to be flexible enough to support all basic data options available within the model; and (4) the interface program was to be as efficient as possible in terms of computer time used and online-storage space needed. Because some programs in the interface are written in control-program language, the interface will run only on a computer with the PRIMOS operating system. (USGS)

  3. Microstructural modeling of cold dwell fatigue/creep in near-alpha titanium alloys using a cellular automata model

    NASA Astrophysics Data System (ADS)

    Boutana, Mohammed Nabil

    The service properties of titanium alloys depend strongly on certain aspects of the microstructures developed during processing. These microstructures can be highly heterogeneous in terms of crystallographic orientation and spatial distribution. Their influence on the behavior of the material and its early damage are questions currently being raised. The present doctoral project seeks to answer these questions and also to present tangible solutions for the safe use of these alloys. A new model, called a cellular automaton, was developed to simulate the mechanical behavior of titanium alloys under cold fatigue/creep. These models have given a better understanding of the correlation between the microstructure and the mechanical behavior of the material, and in particular a detailed analysis of the local behavior of the material. Keywords: cellular automaton, fatigue/creep, titanium alloy, Eshelby inclusion, modeling

  4. U-10Mo/Zr Interface Modeling using a Microstructure-Based FEM Approach

    SciTech Connect

    Soulami, Ayoub; Xu, Zhijie; Joshi, Vineet V.; Burkes, Douglas; Lavender, Curt A.; McGarrah, Eric J.

    2016-04-25

    Low-enriched U-10Mo (LEU) has been identified as the most promising alternative to the highly enriched uranium (HEU) currently used in the United States' fleet of high performance research reactors (USHPRRs). The nominal configuration of the new LEU U-10Mo plate fuel comprises a U-10Mo fuel foil enriched to slightly less than 20% U-235 (0.02" to 0.08" thick), a thin Zr interlayer/diffusion barrier (25 µm thick), and a relatively thick outer can of 6061 aluminum. Currently the Zr interlayer is clad by hot roll bonding. Previous studies and observations revealed a thinning of the zirconium (Zr) layer during this fuel fabrication process, which is not desirable from the fuel performance perspective. Coarse UMo grains, dendritic structures, Mo concentration segregation, carbides, and porosity are present in the as-cast material and can lead to a nonuniform UMo/Zr interface. The purpose of the current work is to investigate the effects of these microstructural parameters on the Zr coating variation. A microstructure-based finite-element method model was used in this work, and a study on the effect of homogenization on the interface between U-10Mo and Zr was conducted. The model uses actual backscattered electron-scanning electron microscopy microstructures, Mo concentrations, and mechanical properties to predict the behavior of a representative volume element under compressive loading during the rolling process. The model successfully predicted the experimentally observed thinning of the Zr layer in the as-cast material. The model also uses results from a homogenization model as an input, and a study on the effect of different levels of homogenization on the interface indicated that homogenization helps decrease this thinning. This model can be considered a predictive tool representing a first step toward model integration and an input into a larger fuel fabrication performance model.

  5. Simplifying the interaction between cognitive models and task environments with the JSON Network Interface.

    PubMed

    Hope, Ryan M; Schoelles, Michael J; Gray, Wayne D

    2014-12-01

    Process models of cognition, written in architectures such as ACT-R and EPIC, should be able to interact with the same software with which human subjects interact. By eliminating the need to simulate the experiment, this approach would simplify the modeler's effort, while ensuring that all steps required of the human are also required by the model. In practice, the difficulties of allowing one software system to interact with another present a significant barrier to any modeler who is not also skilled at this type of programming. The barrier increases if the programming language used by the modeling software differs from that used by the experimental software. The JSON Network Interface simplifies this problem for ACT-R modelers, and potentially, modelers using other systems.
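
    The idea can be sketched as newline-delimited JSON over a local TCP socket. This is an illustrative protocol of our own, not the actual message format of the JSON Network Interface: the task environment streams display events and the model replies with actions.

```python
import json
import socket
import threading

def serve_task(port, events):
    """Minimal task-environment side: send display events as
    newline-delimited JSON, then wait for the model's action."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    for ev in events:
        conn.sendall((json.dumps(ev) + "\n").encode())
    reply = json.loads(conn.makefile().readline())  # blocking read of action
    conn.close()
    srv.close()
    return reply

def model_client(port):
    """Minimal model side: read the first display event and
    respond with a keypress action referencing its stimulus."""
    c = socket.create_connection(("127.0.0.1", port))
    ev = json.loads(c.makefile().readline())
    action = {"action": "keypress", "key": "f", "stimulus": ev["text"]}
    c.sendall((json.dumps(action) + "\n").encode())
    c.close()
```

    Because both sides speak plain JSON over a socket, the model and the experiment software can be written in entirely different languages.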

  6. Experiments and modeling of freshwater lenses in layered aquifers: Steady state interface geometry

    NASA Astrophysics Data System (ADS)

    Dose, Eduardo J.; Stoeckl, Leonard; Houben, Georg J.; Vacher, H. L.; Vassolo, Sara; Dietrich, Jörg; Himmelsbach, Thomas

    2014-02-01

    The interface geometry of freshwater lenses in layered aquifers was investigated by physical 2D laboratory experiments. The resulting steady-state geometries of the lenses were compared to existing analytical expressions from Dupuit-Ghyben-Herzberg (DGH) analysis of strip-island lenses for various cases of heterogeneity. Despite the vertical exaggeration of the physical models, which would seem to vitiate the assumption of vertical equipotentials, the fits with the DGH models were generally satisfactory. Observed deviations between the analytical and physical models can be attributed mainly to outflow zones along the shore line, which are not considered in the analytical models. As unconfined natural lenses have small outflow zones compared to their overall dimensions, and flow is mostly horizontal, the DGH analytical models should perform even better at full scale. Numerical models that do consider the outflow face generally gave a good fit to the physical models.
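
    The homogeneous strip-island case of the DGH analysis can be written in a few lines. The formula below follows the standard Dupuit-Ghyben-Herzberg derivation for a strip island of half-width a under uniform recharge; the variable names and sample values are illustrative, not taken from the paper.

```python
import math

def dgh_lens_depth(x, a, R, K, rho_f=1000.0, rho_s=1025.0):
    """Depth z(x) of the fresh/salt interface below sea level for a
    strip island of half-width a (Dupuit-Ghyben-Herzberg analysis).
    R: recharge rate, K: hydraulic conductivity (consistent units).
    From q(x) = R*x and Dupuit flow through a thickness (1+alpha)*h:
    h(x)^2 = R*(a^2 - x^2) / (K*(1 + alpha)), with z = alpha*h."""
    alpha = rho_f / (rho_s - rho_f)        # Ghyben-Herzberg ratio, ~40
    h2 = R * (a * a - x * x) / (K * (1.0 + alpha))
    return alpha * math.sqrt(max(h2, 0.0))

# maximum lens thickness at the island centre (x = 0), metres and days
z_max = dgh_lens_depth(0.0, a=500.0, R=1e-3, K=10.0)
z_shore = dgh_lens_depth(500.0, a=500.0, R=1e-3, K=10.0)
```

    At the shoreline the analytical interface pinches out to zero thickness, which is exactly the feature the physical models' finite outflow zones violate.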

  7. Finite Element Modeling of Laminated Composite Plates with Locally Delaminated Interface Subjected to Impact Loading

    PubMed Central

    Abo Sabah, Saddam Hussein; Kueh, Ahmad Beng Hong

    2014-01-01

    This paper investigates the effects of localized progressive interface delamination on the behavior of two-layer laminated composite plates subjected to low-velocity impact loading, for various fiber orientations. By means of a finite element approach, the laminae stiffnesses are constructed independently from their interface, where a well-defined virtually zero-thickness interface element is discretely adopted for delamination simulation. The present model has the advantage of simulating a localized interfacial condition at arbitrary locations, for various degeneration areas and intensities, under the influence of numerous boundary conditions, since the interfacial description is expressed discretely. The model shows good agreement with existing results from the literature when modeled in a perfectly bonded state. It is found that as the local delamination area increases, so does the magnitude of the maximum displacement history. Also, as the deviation between top and bottom fiber orientations increases, both the central deflection and the energy absorption increase, although the relative maximum displacement correspondingly decreases in contrast to the laminate's perfectly bonded state. PMID:24696668

  8. Reduction of nonlinear embedded boundary models for problems with evolving interfaces

    NASA Astrophysics Data System (ADS)

    Balajewicz, Maciej; Farhat, Charbel

    2014-10-01

    Embedded boundary methods alleviate many computational challenges, including those associated with meshing complex geometries and solving problems with evolving domains and interfaces. Developing model reduction methods for computational frameworks based on such methods seems however to be challenging. Indeed, most popular model reduction techniques are projection-based, and rely on basis functions obtained from the compression of simulation snapshots. In a traditional interface-fitted computational framework, the computation of such basis functions is straightforward, primarily because the computational domain does not contain in this case a fictitious region. This is not the case however for an embedded computational framework because the computational domain typically contains in this case both real and ghost regions whose definitions complicate the collection and compression of simulation snapshots. The problem is exacerbated when the interface separating both regions evolves in time. This paper addresses this issue by formulating the snapshot compression problem as a weighted low-rank approximation problem where the binary weighting identifies the evolving component of the individual simulation snapshots. The proposed approach is application independent and therefore comprehensive. It is successfully demonstrated for the model reduction of several two-dimensional, vortex-dominated, fluid-structure interaction problems.
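
    One simple way to solve such a binary-weighted low-rank approximation problem is an EM-style iteration that alternates a truncated SVD with re-imputation of the ghost-region entries. This is a generic solver sketch under that assumption, not the authors' algorithm:

```python
import numpy as np

def weighted_low_rank(S, W, r, iters=200):
    """Rank-r approximation of the snapshot matrix S under binary weights W
    (1 = real region, 0 = ghost region): iterate a truncated SVD, then
    refill the ghost entries from the current low-rank fit."""
    X = W * S                                  # start with ghost entries zeroed
    lowrank = X
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        lowrank = (U[:, :r] * s[:r]) @ Vt[:r]  # best rank-r fit to current X
        X = W * S + (1 - W) * lowrank          # keep real data, impute ghosts
    return lowrank

rng = np.random.default_rng(0)
S = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 20))  # exactly rank 3
W = (rng.random(S.shape) > 0.2).astype(float)            # ~20% ghost entries
approx = weighted_low_rank(S, W, r=3)
err = np.linalg.norm(W * (S - approx)) / np.linalg.norm(W * S)
```

    The weights let the compression ignore the fictitious region entirely, so the basis captures only the physically meaningful part of each snapshot.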

  9. A virtual interface for interactions with 3D models of the human body.

    PubMed

    De Paolis, Lucio T; Pulimeno, Marco; Aloisio, Giovanni

    2009-01-01

    The developed system is the first prototype of a virtual interface designed to avoid contact with the computer, so that the surgeon is able to visualize 3D models of the patient's organs more effectively during a surgical procedure or to use them in pre-operative planning. The doctor will be able to rotate, translate, and zoom in on 3D models of the patient's organs simply by moving a finger in free space; in addition, it is possible to visualize all of the organs or only some of them. All of the interactions with the models happen in real time using the virtual interface, which appears as a touch-screen suspended in free space in a position chosen by the user when the application starts up. Finger movements are detected by means of an optical tracking system and are used to simulate touch with the interface and to interact by pressing the buttons present on the virtual screen.

  10. Metrology automation reliability

    NASA Astrophysics Data System (ADS)

    Chain, Elizabeth E.

    1996-09-01

    At Motorola's MOS-12 facility, automated measurements on 200-mm diameter wafers proceed in a hands-off 'load-and-go' mode requiring only wafer loading, measurement recipe loading, and a 'run' command for processing. Upon completion of all sample measurements, the data are uploaded to the factory's data collection software system via a SECS II interface, eliminating the requirement of manual data entry. The scope of in-line measurement automation has been extended to the entire metrology scheme, from job file generation to measurement and data collection. Data analysis and comparison to part specification limits are also carried out automatically. Successful integration of automated metrology into the factory measurement system requires that automated functions, such as autofocus and pattern recognition algorithms, display a high degree of reliability. In the 24-hour factory, reliability data can be collected automatically on every part measured. This reliability data is then uploaded to the factory data collection software system at the same time as the measurement data. Analysis of the metrology reliability data permits improvements to be made as needed, and provides an accurate accounting of automation reliability. This reliability data has so far been collected for the CD-SEM (critical dimension scanning electron microscope) metrology tool, and examples are presented. This analysis method can be applied to such automated in-line measurements as CD, overlay, particle, and film thickness measurements.

  11. An Automated Method to Identify Mesoscale Convective Complexes in the Regional Climate Model Evaluation System

    NASA Astrophysics Data System (ADS)

    Whitehall, K. D.; Jenkins, G. S.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Goodale, C. E.; Hart, A. F.; Ramirez, P.; Whittell, J.; Zimdars, P. A.

    2012-12-01

    Mesoscale convective complexes (MCCs) are large (2-3 × 10^5 km^2) nocturnal convectively-driven weather systems that are generally associated with high precipitation events of short duration (less than 12 hrs) in various locations throughout the tropics and midlatitudes (Maddox 1980). These systems are particularly important for climate in the West Sahel region, where the precipitation associated with them is a principal component of the rainfall season (Laing and Fritsch 1993). These systems occur on weather timescales and are historically identified from weather data analysis via manual and, more recently, automated processes (Miller and Fritsch 1991, Nesbett 2006, Balmey and Reason 2012). The Regional Climate Model Evaluation System (RCMES) is an open source tool designed for easy evaluation of climate and Earth system data through access to standardized datasets and intrinsic tools that perform common analysis and visualization tasks (Hart et al. 2011). The RCMES toolkit also provides the flexibility of user-defined subroutines for further metrics, visualization, and even dataset manipulation. The purpose of this study is to present a methodology for identifying MCCs in observation datasets using the RCMES framework. TRMM 3-hourly datasets will be used to demonstrate the methodology for the 2005 boreal summer. This method promotes the use of open source software for scientific data systems to address a concern of multiple stakeholders in the earth sciences. A historical MCC dataset provides a platform for further studies of the variability of MCC frequency on various timescales, which is important to many, including climate scientists, meteorologists, water resource managers, and agriculturalists.
The methodology of using RCMES for searching and clipping datasets will engender a new realm of studies, as users of the system will no longer be restricted to solely using the datasets as they reside in their own local systems; instead they will be afforded rapid
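
    The core identification step, finding contiguous cold-cloud shields that exceed an area threshold, can be sketched as a flood fill over a brightness-temperature grid. The thresholds here only loosely echo the Maddox (1980) criteria and omit the shape and duration tests; grid and values are illustrative.

```python
from collections import deque

import numpy as np

def mcc_candidates(tb, cell_area_km2, t_thresh=221.0, min_area_km2=50_000.0):
    """Flag contiguous cold-cloud shields in a brightness-temperature
    grid tb [K] whose area meets a minimum threshold. Illustrative
    thresholds only; not the full MCC size/shape/duration criteria."""
    cold = tb <= t_thresh
    seen = np.zeros_like(cold, dtype=bool)
    ny, nx = cold.shape
    areas = []
    for i in range(ny):
        for j in range(nx):
            if cold[i, j] and not seen[i, j]:
                q, cells = deque([(i, j)]), 0
                seen[i, j] = True
                while q:                      # 4-connected flood fill
                    y, x = q.popleft()
                    cells += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        v, u = y + dy, x + dx
                        if 0 <= v < ny and 0 <= u < nx and cold[v, u] and not seen[v, u]:
                            seen[v, u] = True
                            q.append((v, u))
                if cells * cell_area_km2 >= min_area_km2:
                    areas.append(cells * cell_area_km2)
    return areas

tb = np.full((50, 50), 290.0)                 # warm background
tb[10:40, 10:40] = 210.0                      # one large cold shield
areas = mcc_candidates(tb, cell_area_km2=100.0)
```

    In a full implementation the same labeling would be repeated across the 3-hourly snapshots and linked in time to apply the duration criterion.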

  12. Evidence evaluation in fingerprint comparison and automated fingerprint identification systems--Modeling between finger variability.

    PubMed

    Egli Anthonioz, N M; Champod, C

    2014-02-01

    In the context of the investigation of the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study presents investigations into the variability of scores from an AFIS system when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios, which allows the evaluation of mark-to-print comparisons. In particular, this model, through its use of AFIS technology, benefits from the possibility of using a large amount of data, as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores issued from comparisons between impressions from the same source and showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR. We refer to it by the generic term of between-finger variability. The issues addressed in this paper in relation to between-finger variability are the required sample size, the influence of the finger number and general pattern, as well as that of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination as defined by the mark. Furthermore, strategies of obtaining between-finger variability when these elements cannot be conclusively seen on the mark (and its position with respect to other marks for finger number) have been presented. These results immediately allow case-by-case estimation of the between-finger variability in an operational setting.
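
    The score-based LR assignment can be sketched by fitting simple kernel density estimates to the two score samples. The Gaussian KDE, the Silverman bandwidth rule, and the synthetic score distributions below are our illustrative choices, not the paper's model:

```python
import numpy as np

def lr_from_scores(score, same_source, between_finger):
    """Likelihood ratio for an AFIS score: numerator density from
    same-source comparison scores, denominator from between-finger
    (non-match) scores, each modelled with a Gaussian KDE."""
    def kde(x, data):
        bw = 1.06 * data.std() * len(data) ** -0.2   # Silverman's rule
        z = (x - data) / bw
        return np.exp(-0.5 * z * z).sum() / (len(data) * bw * np.sqrt(2 * np.pi))
    return kde(score, same_source) / kde(score, between_finger)

rng = np.random.default_rng(0)
matches = rng.normal(120.0, 15.0, 5_000)       # synthetic same-source scores
non_matches = rng.normal(40.0, 10.0, 10_000)   # synthetic between-finger scores
lr_high = lr_from_scores(110.0, matches, non_matches)   # score near match region
lr_low = lr_from_scores(45.0, matches, non_matches)     # score near non-match region
```

    In an operational setting the 10,000 non-match scores would come from comparisons of the questioned mark against a database matched to the mark's finger number and general pattern, as the paper describes.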

  13. Combat Automation for Airborne Weapon Systems: Man/Machine Interface Trends and Technologies (L’Automatisation du Combat Aerien: Tendances et Technologies pour l’Interface Homme/Machine)

    DTIC Science & Technology

    1993-04-01

    …alternative courses of action. A well-integrated system must reconcile multiple, potentially contradictory sources of data about the situation… comments constitute my personal evaluations of and observations on the content of each presentation. In no sense are they intended to summarize the… Aircraft Combat, Evaluation Aid, Expert System, Man-Machine Interface. 1. INTRODUCTION: In air missions, both real and

  14. A two-dimensional-reference interaction site model theory for solvation structure near solid-liquid interface.

    PubMed

    Iida, Kenji; Sato, Hirofumi

    2011-12-28

    We develop a new equation to describe the solvation structure near a solid-liquid interface at the atomic level. The equation focuses on the anisotropy of the solvation structure near the interface by using a two-dimensional density distribution of the solvent along two directions, one perpendicular to the interface and the other parallel to it. As a first application of the equation, we treat a system in which a solid, modeled by an atomistic wall, is immersed in solvent water. The preferential adsorption positions of water molecules and the change in water orientation upon charging the wall are discussed.

  15. An automated system to simulate the River discharge in Kyushu Island using the H08 model

    NASA Astrophysics Data System (ADS)

    Maji, A.; Jeon, J.; Seto, S.

    2015-12-01

    Kyushu Island is located in the southwestern part of Japan, and it is often affected by typhoons and a Baiu front. Severe water-related disasters have been recorded on Kyushu Island. On the other hand, because of the high population density and the needs of crop growth, water resources are an important issue for Kyushu Island. The simulation of river discharge is important for water resource management and early warning of water-related disasters. This study attempts to apply the H08 model to simulate river discharge in Kyushu Island. Geospatial meteorological and topographical data were obtained from the Japanese Ministry of Land, Infrastructure, Transport and Tourism (MLIT) and the Automated Meteorological Data Acquisition System (AMeDAS) of the Japan Meteorological Agency (JMA). The number of observation stations in AMeDAS is limited and not quite satisfactory for the application of water resources models in Kyushu, so it is necessary to spatially interpolate the point data to produce a grid dataset. The meteorological grid dataset is produced by considering elevation dependence. Solar radiation is estimated from hourly sunshine duration by a conventional formula. We improved the accuracy of the interpolated data simply by considering elevation dependence, and found that the bias is related to geographical location. The rain/snow classification is done by the H08 model and is validated by comparing estimated and observed snow rates; the estimates tend to be larger than the corresponding observed values. A system to automatically produce the daily meteorological grid dataset is being constructed. The geospatial river network data were produced with ArcGIS and used in the H08 model to simulate the river discharge. First, this research compares simulated and measured specific discharge, which is the ratio of discharge to watershed area. Significant errors between simulated and measured data were seen in some rivers.
Second, the outputs by the coupled model including crop growth
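
    The elevation-dependent interpolation mentioned above can be sketched as inverse-distance weighting after reducing each station reading to sea level with a constant lapse rate. The lapse rate and station values below are illustrative and this is not the actual AMeDAS gridding procedure:

```python
LAPSE = -0.0065  # K per metre: standard atmospheric temperature lapse rate

def interp_temperature(x, y, z, stations):
    """Interpolate station temperatures to a grid point at elevation z:
    reduce each reading to sea level with a constant lapse rate,
    apply inverse-distance-squared weighting, then restore the
    target elevation."""
    num = den = 0.0
    for sx, sy, sz, t in stations:           # (x, y, elevation, temperature)
        t0 = t - LAPSE * sz                  # reading reduced to sea level
        w = 1.0 / max((sx - x) ** 2 + (sy - y) ** 2, 1e-12)
        num += w * t0
        den += w
    return num / den + LAPSE * z             # restore target elevation

# two stations at different elevations, target on a high grid cell
stations = [(0.0, 0.0, 10.0, 15.0), (10.0, 0.0, 800.0, 10.0)]
t_summit = interp_temperature(5.0, 0.0, 1200.0, stations)
```

    Without the elevation correction, a plain inverse-distance average would assign the 1200 m grid cell a temperature near the warm low-elevation readings; the correction recovers the expected cooling with altitude.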

  16. Blocking and Blending: Different Assembly Models of Cyclodextrin and Sodium Caseinate at the Oil/Water Interface.

    PubMed

    Xu, Hua-Neng; Liu, Huan-Huan; Zhang, Lianfu

    2015-08-25

    The stability of cyclodextrin (CD)-based emulsions is attributed to the formation of a solid film of oil-CD complexes at the oil/water interface. However, competitive interactions between CDs and other components at the interface still need to be understood. Here we develop two different routes that allow the incorporation of a model protein (sodium caseinate, SC) into emulsions based on β-CD. In one route, the components adsorb simultaneously from a mixed solution to the oil/water interface (route I); in the other, SC is added to a previously established CD-stabilized interface (route II). The adsorption mechanism of β-CD modified by SC at the oil/water interface is investigated by rheological and optical methods. Strong sensitivity of the rheological behavior to the routes is indicated by both steady-state and small-deformation oscillatory experiments. Possible β-CD/SC interaction models at the interface are proposed. In route I, the protein, due to its higher affinity for the interface, adsorbs strongly at the interface, blocking the adsorption of β-CD and the formation of oil-CD complexes. In route II, the protein penetrates and blends into the preadsorbed layer of oil-CD complexes already formed at the interface. The revelation of interfacial assembly is expected to help better understand CD-based emulsions in natural systems and improve their designs in engineering applications.

  17. Modeling the Interface Instability and Mixing Flow During the Process of Liquid Explosion Dissemination

    NASA Astrophysics Data System (ADS)

    Li, L.; Xu, S. L.; Ren, Y. J.; Liu, G. R.; Ren, X. B.; Xie, W. J.; Li, Y. C.; Wang, Z. L.

    The liquid flow during liquid explosion dissemination is a typically complex, high-speed, unsteady motion with multiple scales in space and time. The motion may be partitioned into several stages. The first is the initial liquid expansion under the action of the shock wave and gaseous explosion products. The second is the breakup of the liquid annulus and turbulent mixing, called the near-field flow. The third is the two-phase mixing flow of gas and liquid drops, called the far-field flow. For the first stage, a compressible inviscid liquid model was used, while an elastic-plastic model was used to describe the expansion of the solid shell. A two-dimensional numerical study has been made using the Arbitrary Lagrangian-Eulerian (ALE) method. In the near field, the unstable flow of the liquid annulus is dominated by several factors: (1) the shock action of the gaseous expansion products; (2) the geometric structure of the wave system in the liquid; (3) local bubble and cavitating flow in the annulus, which induces many local unstable interfaces, tears up interfaces, and enhances the instability and breakup of the liquid annulus. In this paper, it is postulated that cavitation in the liquid annulus is induced by the shock wave and that the flow of the liquid annulus is a two-phase flow (liquid and discrete bubble groups). Some experimental results are presented in which the breakup of the interface and the turbulent mixing are visualized qualitatively and measured quantitatively using the shadow photography method. The primary results are flow patterns of the interfaces and transient flow parameters from which the nonlinear character can be obtained, providing experimental support for modeling the unstable interface flow and turbulent mixing.
The two-phase mixing flow between liquid drops and gas in the far field can be studied by numerical methods in which the turbulent motion of the gas phase is represented with a k-ɛ model in an Eulerian frame, and the motion of the particle phase is represented with a particle stochastic

  18. Studying dissolution with a model integrating solid-liquid interface kinetics and diffusion kinetics.

    PubMed

    Gao, Jeff Y

    2012-12-18

    A dissolution model that integrates the solid-liquid interface kinetics and the mass transport kinetics is introduced. Such a model reduces to the Noyes-Whitney equation under special conditions, but offers expanded range of applicability and flexibility fitting dissolution profiles when interfacial kinetics and interfacial concentration deviate from the assumptions implied in the Noyes-Whitney equation. General solutions to the integrated dissolution model derived for noninteractive solutes as well as for solutes participating in ionization equilibrium are discussed. Parameters defining the integrated dissolution model are explained conceptually along with practical ways for their determinations. Conditions under which the model exhibits supersaturation features are elaborated. Simulated dissolution profiles using the integrated dissolution model for published experimental data exhibiting supersaturation features are illustrated.
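
    In the Noyes-Whitney limit the model reduces to a single linear ODE, dC/dt = k(Cs - C), with k lumping the transport constants (D*A/(V*h)) and the interfacial concentration pinned at the solubility Cs. This forward-Euler sketch (parameter values are arbitrary) shows the approach to saturation:

```python
def noyes_whitney(Cs, k, C0=0.0, t_end=60.0, dt=0.01):
    """Integrate dC/dt = k*(Cs - C), the Noyes-Whitney limiting case of
    the integrated dissolution model. Forward Euler, for illustration."""
    C, t, profile = C0, 0.0, []
    while t < t_end:
        C += k * (Cs - C) * dt   # flux proportional to the remaining deficit
        t += dt
        profile.append((t, C))
    return profile

profile = noyes_whitney(Cs=1.0, k=0.1)
C_final = profile[-1][1]         # should approach Cs = 1 - exp(-k*t_end)
```

    The integrated model of the paper generalizes this by letting interfacial kinetics set a surface concentration different from Cs, which is what produces the supersaturation features the abstract describes.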

  19. Simulation models for the electric power requirements in an automated guideway transit system. Final report, Aug 78-Aug 79

    SciTech Connect

    Williams, G.H.

    1980-04-01

    This report describes a computer simulation model developed at the Transportation Systems Center to study the electrical power distribution characteristics of Automated Guideway Transit (AGT) systems. The objective of this simulation effort is to provide a means for determining the power distribution requirements of AGT systems and for evaluating their performances under varied operating conditions. Typical systems which could be modeled include the Morgantown Personal Rapid Transit System, the Dallas-Fort Worth Airtrans System, or one of the proposed Downtown People Movers. This report specifically describes a Fortran computer program which models the electric power requirements of a typical AGT system.
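
    A minimal version of the per-vehicle electrical demand such a simulation must compute is tractive force times speed over drivetrain efficiency. The resistance coefficients and values below are illustrative, not those of the TSC program:

```python
G = 9.81  # m/s^2

def vehicle_power_kw(mass_kg, v_ms, accel_ms2, grade=0.0, eta=0.85):
    """Instantaneous electrical power drawn by one transit vehicle:
    tractive force (inertia + grade + a simple Davis-type resistance)
    times speed, divided by drivetrain efficiency. Illustrative only."""
    a0, a1, a2 = 0.01 * mass_kg * G, 0.0, 3.0   # resistance coefficients (N)
    f_res = a0 + a1 * v_ms + a2 * v_ms ** 2
    f_tract = mass_kg * (accel_ms2 + G * grade) + f_res
    return max(f_tract, 0.0) * v_ms / eta / 1000.0

# one 10-tonne vehicle accelerating at 1 m/s^2 through 10 m/s on level guideway
p_kw = vehicle_power_kw(mass_kg=10_000.0, v_ms=10.0, accel_ms2=1.0)
```

    Summing this quantity over all vehicles on a guideway segment, per simulation time step, gives the substation load profile that such a power distribution study evaluates.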

  20. Automation based on knowledge modeling theory and its applications in engine diagnostic systems using Space Shuttle Main Engine vibrational data. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Jonnathan H.

    1995-01-01

    Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor- and time-intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).

  1. Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS

    NASA Technical Reports Server (NTRS)

    Wheeler, mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.

    2009-01-01

    Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with desired multiple requests submitted in sequence, for the purpose of obtaining useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.

  2. Modeling the Charge Transport in Graphene Nano Ribbon Interfaces for Nano Scale Electronic Devices

    NASA Astrophysics Data System (ADS)

    Kumar, Ravinder; Engles, Derick

    2015-05-01

    In this research work we have modeled, simulated, and compared the electronic charge transport for metal-semiconductor-metal interfaces of graphene nanoribbons (GNRs) with different geometries using first-principles calculations and the Non-Equilibrium Green's Function (NEGF) method. We modeled junctions of an armchair GNR strip sandwiched between two zigzag strips (Z-A-Z) and a zigzag GNR strip sandwiched between two armchair strips (A-Z-A), using semi-empirical Extended Hückel Theory (EHT) within the NEGF framework. I-V characteristics of the interfaces were visualized for various transport parameters. Distinct changes in the conductance and I-V curves are reported as the width across layers and the channel length (central part) were varied, at bias voltages from -1 V to 1 V in steps of 0.25 V. From the simulated results we observed that the conductance through the A-Z-A graphene junction is in the range of 10^-13 siemens, whereas the conductance through the Z-A-Z graphene junction is in the range of 10^-5 siemens. These conductance-controlled mechanisms for charge transport in graphene interfaces with different geometries are important for the design of graphene-based nanoscale electronic devices such as graphene FETs and sensors.
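
    The conductances quoted above follow from the Landauer relation G = (2e^2/h)*T that an NEGF calculation ultimately evaluates. This helper is only a unit conversion; the transmission values are chosen by us to reproduce the reported orders of magnitude:

```python
def landauer_conductance(transmission):
    """Two-terminal Landauer conductance G = (2 e^2 / h) * T,
    where T is the transmission at the Fermi energy."""
    G0 = 7.748091729e-5   # conductance quantum 2e^2/h, in siemens
    return G0 * transmission

# illustrative transmissions matching the reported conductance ranges
g_zaz = landauer_conductance(0.5)     # ~1e-5 S range (Z-A-Z junction)
g_aza = landauer_conductance(1e-9)    # ~1e-13 S range (A-Z-A junction)
```

    The eight-order-of-magnitude gap between the two junction types thus corresponds directly to the transmission suppression across the armchair segment in the A-Z-A geometry.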

  3. A model for the control mode man-computer interface dialogue

    NASA Technical Reports Server (NTRS)

    Chafin, R. L.

    1981-01-01

    A four-stage model is presented for the control mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of the operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues. These issues are human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.

  4. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

    PRay is a graphical user interface for interactively displaying and editing velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray tracing results from other software. The main features are graphical editing of nodes and fast adjustment of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase picking software. PRay is open source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X, and provides a version-controlled source code repository for community development.

  5. Testing of Environmental Satellite Bus-Instrument Interfaces Using Engineering Models

    NASA Technical Reports Server (NTRS)

    Gagnier, Donald; Hayner, Rick; Nosek, Thomas; Roza, Michael; Hendershot, James E.; Razzaghi, Andrea I.

    2004-01-01

    This paper discusses the formulation and execution of a laboratory test of the electrical interfaces between multiple atmospheric scientific instruments and the spacecraft bus that carries them. The testing, performed in 2002, used engineering models of the instruments and the Aura spacecraft bus electronics. Aura is one of NASA's Earth Observing System missions. The test was designed to evaluate the complex interfaces in the command and data handling subsystems prior to integration of the complete flight instruments on the spacecraft. A problem discovered during the flight integration phase of the observatory can cause significant cost and schedule impacts. The tests successfully revealed problems and led to their resolution before the full-up integration phase, saving significant cost and schedule. This approach could be beneficial for future environmental satellite programs involving the integration of multiple, complex scientific instruments onto a spacecraft bus.

  6. Automated modeling of ecosystem CO2 fluxes based on closed chamber measurements: A standardized conceptual and practical approach

    NASA Astrophysics Data System (ADS)

    Hoffmann, Mathias; Jurisch, Nicole; Albiac Borraz, Elisa; Hagemann, Ulrike; Sommer, Michael; Augustin, Jürgen

    2015-04-01

    Closed chamber measurements are widely used for determining the CO2 exchange of small-scale or heterogeneous ecosystems. Alongside chamber design and operational handling, the data processing procedure is a considerable source of uncertainty in the obtained results. We developed a standardized automatic data processing algorithm, based on the language and statistical computing environment R, to (i) calculate measured CO2 flux rates, (ii) parameterize ecosystem respiration (Reco) and gross primary production (GPP) models, (iii) optionally compute an adaptive temperature model, (iv) model Reco, GPP and net ecosystem exchange (NEE), and (v) evaluate model uncertainty (calibration, validation and uncertainty prediction). The algorithm was tested for different manual and automatic chamber measurement systems (e.g., automated NEE chambers and the LI-8100A soil CO2 flux system) and ecosystems. Our study shows that even minor changes within the modelling approach may result in considerable differences in calculated flux rates, derived photosynthetically active radiation and temperature dependencies, and subsequently modelled Reco, GPP and NEE balances of up to 25%. Automated and standardized data processing procedures based on clearly defined criteria, such as statistical parameters and thresholds, are therefore a prerequisite to guarantee the reproducibility and traceability of modelling results and to encourage better comparability between closed chamber based CO2 measurements.
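    Step (i) of the algorithm described above, calculating a measured CO2 flux rate from the chamber concentration increase, can be sketched as follows. The study's implementation is in R; this minimal sketch uses Python, and the chamber geometry, conditions, and concentration series are illustrative assumptions, not values from the study.

```python
# Minimal sketch of a closed-chamber CO2 flux-rate calculation
# (step (i) above). Chamber geometry, conditions, and the synthetic
# concentration series are illustrative assumptions.

R_GAS = 8.314  # universal gas constant, J mol^-1 K^-1

def chamber_flux(times_s, co2_ppm, volume_m3, area_m2, temp_k, pressure_pa):
    """CO2 flux (micromol m^-2 s^-1) from an ordinary least-squares fit
    of the chamber concentration increase over time."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_c = sum(co2_ppm) / n
    num = sum((t - mean_t) * (c - mean_c) for t, c in zip(times_s, co2_ppm))
    den = sum((t - mean_t) ** 2 for t in times_s)
    slope_ppm_per_s = num / den  # concentration change, ppm s^-1
    mol_air = pressure_pa * volume_m3 / (R_GAS * temp_k)  # ideal gas law
    return slope_ppm_per_s * mol_air / area_m2

# Synthetic closure: 0.5 ppm s^-1 rise in a 0.1 m^3 chamber over 0.25 m^2
times = [0, 30, 60, 90, 120]
conc = [400 + 0.5 * t for t in times]
flux = chamber_flux(times, conc, 0.1, 0.25, 293.15, 101325.0)
```

    The slope of the concentration-time fit (ppm s^-1, i.e. micromol CO2 per mol air per second) is converted to a molar flux with the ideal gas law; quality criteria and the model parameterization of steps (ii)-(v) would build on such per-measurement fluxes.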

  7. Automated Detection and Classification of Rockfall Induced Seismic Signals with Hidden-Markov-Models

    NASA Astrophysics Data System (ADS)

    Zeckra, M.; Hovius, N.; Burtin, A.; Hammer, C.

    2015-12-01

    Originally introduced in speech recognition, Hidden Markov Models are applied in many fields of pattern recognition. In seismology, this technique has recently been introduced to improve common detection algorithms, such as STA/LTA ratios or cross-correlation methods. Mainly used for the monitoring of volcanic activity, this study is one of the first applications to seismic signals induced by geomorphologic processes. With an array of eight broadband seismometers deployed around the steep, rapidly eroding Illgraben catchment (Switzerland), we studied a sequence of landslides triggered over a period of several days in winter. A preliminary manual classification led us to identify three main seismic signal classes that were used as a starting point for the HMM automated detection and classification: (1) rockslide signals, including a failure source and debris mobilization along the slope, (2) rockfall signals from the remobilization of debris along the unstable slope, and (3) single cracking signals from the affected cliff observed before the rockslide events. Besides the ability to classify the whole dataset automatically, the HMM approach reflects the origin and interactions of the three signal classes, which helps us to understand this geomorphic crisis and the possible triggering mechanisms for slope processes. The temporal distribution of crack events (duration > 5 s, frequency band 2-8 Hz) follows an inverse Omori law, pointing to the catastrophic behaviour of the failure mechanism and to its interest for warning purposes in rockslide risk assessment. Thanks to a dense seismic array and independent weather observations in the landslide area, this dataset also provides information about the triggering mechanisms, which indicate a tight link between rainfall and freezing level fluctuations.
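    The STA/LTA ratio cited above as a common detection baseline can be sketched as follows; the window lengths, trigger threshold, and synthetic trace are illustrative assumptions, not values from the study.

```python
# Minimal sketch of the classical STA/LTA trigger used as a detection
# baseline. Window lengths, threshold, and the synthetic trace are
# illustrative assumptions.

def sta_lta(signal, n_sta, n_lta):
    """Short-term over long-term average of absolute amplitude, for
    every sample where both trailing windows are fully populated."""
    ratios = []
    for i in range(n_lta, len(signal) + 1):
        sta = sum(abs(x) for x in signal[i - n_sta:i]) / n_sta
        lta = sum(abs(x) for x in signal[i - n_lta:i]) / n_lta
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

def trigger_onsets(ratios, threshold):
    """Indices (into the ratio series) of upward threshold crossings."""
    onsets, armed = [], True
    for i, r in enumerate(ratios):
        if armed and r >= threshold:
            onsets.append(i)
            armed = False
        elif r < threshold:
            armed = True
    return onsets

# Quiet noise followed by a high-amplitude burst starting at sample 200
sig = [0.1] * 200 + [2.0] * 50 + [0.1] * 100
ratios = sta_lta(sig, n_sta=5, n_lta=50)
events = trigger_onsets(ratios, threshold=3.0)
```

    A single onset is reported shortly after the burst begins; an HMM classifier replaces the fixed threshold with per-class probabilistic models of the signal.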

  8. Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model

    NASA Astrophysics Data System (ADS)

    Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.

    2009-05-01

    Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Dynamic susceptibility contrast imaging was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For quantitative analysis, CBF was calculated by deconvolution with an AIF obtained from the 10 voxels with the greatest contrast enhancement. For semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curves. We observed that if the AIFs obtained in three different ROIs (whole brain, hemisphere without lesion, and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from quantitative and semi-quantitative analyses showed a similar trend at the different operative time points; if the AIFs differed, the CBF ratios could also differ. We concluded that using local maxima one can define a proper AIF without knowing the anatomical location of arteries in a stroke rat model.
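    The semi-quantitative estimate described above, the integral of the relaxivity-time curve divided by its first moment, can be sketched as follows; the time curve is synthetic and purely illustrative.

```python
# Minimal sketch of the semi-quantitative relative CBF estimate: area
# under the relaxivity-time curve divided by its first moment, using
# trapezoidal integration. The curve below is synthetic and illustrative.

def relative_cbf(times, curve):
    """rCBF proxy = integral of the curve / first moment of the curve."""
    area = 0.0
    moment = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        area += 0.5 * (curve[i] + curve[i - 1]) * dt
        moment += 0.5 * (times[i] * curve[i] + times[i - 1] * curve[i - 1]) * dt
    return area / moment

# Symmetric triangular bolus centred at t = 5 s: the first moment over
# the area is 5 s, so the proxy evaluates to 1/5
t = list(range(11))
c = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0]
rcbf = relative_cbf(t, c)
```

    The ratio of this proxy between lesion and normal hemispheres gives the semi-quantitative CBF ratio the abstract compares against the deconvolution-based quantitative analysis.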

  9. The local structure factor near an interface; beyond extended capillary-wave models

    NASA Astrophysics Data System (ADS)

    Parry, A. O.; Rascón, C.; Evans, R.

    2016-06-01

    We investigate the local structure factor S(z; q) at a free liquid-gas interface in systems with short-ranged intermolecular forces and determine the corrections to the leading-order, capillary-wave-like, Goldstone mode divergence of S(z; q) known to occur for parallel (i.e. measured along the interface) wavevectors q → 0. We show from explicit solution of the inhomogeneous Ornstein-Zernike equation that for distances z far from the interface, where the profile decays exponentially, S(z; q) splits unambiguously into bulk and interfacial contributions. On each side of the interface, the interfacial contributions can be characterised by distinct liquid and gas wavevector-dependent surface tensions, σ_l(q) and σ_g(q), which are determined solely by the bulk two-body and three-body direct correlation functions. At high temperatures, the wavevector dependence simplifies and is determined almost entirely by the appropriate bulk structure factor, leading to positive rigidity coefficients. Our predictions are confirmed by explicit calculation of S(z; q) within square-gradient theory and the Sullivan model. The results for the latter predict a striking temperature dependence for σ_l(q) and σ_g(q), and have implications for fluctuation effects. Our results account quantitatively for the findings of a recent very extensive simulation study by Höfling and Dietrich of the total structure factor in the interfacial region, in a system with a cut-off Lennard-Jones potential, in sharp contrast to extended capillary-wave models which failed completely to describe the simulation results.

  10. Automating spectral measurements

    NASA Astrophysics Data System (ADS)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.

  11. Interfacing comprehensive rotorcraft analysis with advanced aeromechanics and vortex wake models

    NASA Astrophysics Data System (ADS)

    Liu, Haiying

    This dissertation describes three aspects of comprehensive rotorcraft analysis. First, a physics-based methodology for the modeling of hydraulic devices within multibody-based comprehensive models of rotorcraft systems is developed. This newly proposed approach can predict the fully nonlinear behavior of hydraulic devices, and pressure levels in the hydraulic chambers are coupled with the dynamic response of the system. The proposed hydraulic device models are implemented in a multibody code and calibrated by comparing their predictions with test bench measurements for the UH-60 helicopter lead-lag damper. Predicted peak damping forces were found to be in good agreement with measurements, while the model did not predict the entire time history of damper force to the same level of accuracy. The proposed model evaluates relevant hydraulic quantities such as chamber pressures, orifice flow rates, and pressure relief valve displacements. This model could be used to design lead-lag dampers with desirable force and damping characteristics. The second part of this research is in the area of computational aeroelasticity, in which an interface between computational fluid dynamics (CFD) and computational structural dynamics (CSD) is established. This interface enables data exchange between CFD and CSD with the goal of achieving accurate airloads predictions. In this work, a loose coupling approach based on the delta-airloads method is developed in a finite-element-based multibody dynamics formulation, DYMORE. To validate this aerodynamic interface, a CFD code, OVERFLOW-2, is loosely coupled with the CSD program DYMORE to compute the airloads of the Sikorsky UH-60 aircraft for different flight conditions. This loose coupling approach has good convergence characteristics. The predicted airloads are found to be in good agreement with the experimental data, although not for all flight conditions. 
In addition, the tight coupling interface between the CFD program, OVERFLOW

  12. Monkey models for brain-machine interfaces: the need for maintaining diversity.

    PubMed

    Nuyujukian, Paul; Fan, Joline M; Gilja, Vikash; Kalanithi, Paul S; Chestek, Cindy A; Shenoy, Krishna V

    2011-01-01

    Brain-machine interfaces (BMIs) aim to help disabled patients by translating neural signals from the brain into control signals for guiding prosthetic arms, computer cursors, and other assistive devices. Animal models are central to the development of these systems and have helped enable the successful translation of the first generation of BMIs. As we move toward next-generation systems, we face the question of which animal models will aid broader patient populations and achieve even higher performance, robustness, and functionality. We review here four general types of rhesus monkey models employed in BMI research, and describe two additional, complementary models. Given the physiological diversity of neurological injury and disease, we suggest a need to maintain the current diversity of animal models and to explore additional alternatives, as each mimics different aspects of injury or disease.

  13. Fracture permeability and seismic wave scattering--Poroelastic linear-slip interface model for heterogeneous fractures

    SciTech Connect

    Nakagawa, S.; Myer, L.R.

    2009-06-15

    Schoenberg's Linear-slip Interface (LSI) model for single, compliant, viscoelastic fractures has been extended to poroelastic fractures for predicting seismic wave scattering. However, this extended model results in no impact of the in-plane fracture permeability on the scattering. Recently, we proposed a variant of the LSI model considering the heterogeneity in the in-plane fracture properties. This modified model considers wave-induced, fracture-parallel fluid flow induced by passing seismic waves. The research discussed in this paper applies this new LSI model to heterogeneous fractures to examine when and how the permeability of a fracture is reflected in the scattering of seismic waves. From numerical simulations, we conclude that the heterogeneity in the fracture properties is essential for the scattering of seismic waves to be sensitive to the permeability of a fracture.

  14. Automating the analytical laboratory via the Chemical Analysis Automation paradigm

    SciTech Connect

    Hollen, R.; Rzeszutko, C.

    1997-10-01

    To address the need for standardization within the analytical chemistry laboratories of the nation, the Chemical Analysis Automation (CAA) program within the US Department of Energy, Office of Science and Technology's Robotic Technology Development Program is developing laboratory sample analysis systems that will automate the environmental chemical laboratories. The current laboratory automation paradigm consists of islands-of-automation that do not integrate into a system architecture. Thus, today the chemist must perform most aspects of environmental analysis manually using instrumentation that generally cannot communicate with other devices in the laboratory. CAA is working towards a standardized and modular approach to laboratory automation based upon the Standard Analysis Method (SAM) architecture. Each SAM system automates a complete chemical method. The building block of a SAM is known as the Standard Laboratory Module (SLM). The SLM, either hardware or software, automates a subprotocol of an analysis method and can operate as a standalone or as a unit within a SAM. The CAA concept allows the chemist to easily assemble an automated analysis system, from sample extraction through data interpretation, using standardized SLMs without the worry of hardware or software incompatibility or the necessity of generating complicated control programs. A Task Sequence Controller (TSC) software program schedules and monitors the individual tasks to be performed by each SLM configured within a SAM. The chemist interfaces with the operation of the TSC through the Human Computer Interface (HCI), a logical, icon-driven graphical user interface. The CAA paradigm has successfully been applied in automating EPA SW-846 Methods 3541/3620/8081 for the analysis of PCBs in a soil matrix utilizing commercially available equipment in tandem with SLMs constructed by CAA.
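    The SAM/SLM/TSC decomposition described above can be sketched schematically as follows. All module names, the sample representation, and the status log are invented for illustration and are not part of the CAA software.

```python
# Hypothetical sketch of the TSC/SLM/SAM decomposition: each Standard
# Laboratory Module (SLM) automates one subprotocol, and the Task
# Sequence Controller (TSC) runs them in order to execute a Standard
# Analysis Method (SAM). Module names and the sample representation
# are invented for illustration.

class SLM:
    """One subprotocol of an analysis method (hardware or software)."""
    def __init__(self, name, run):
        self.name = name
        self.run = run  # callable performing the subprotocol

class TaskSequenceController:
    """Schedules and monitors the SLMs configured within a SAM."""
    def __init__(self, modules):
        self.modules = modules
        self.log = []  # status reporting, e.g. for a user interface

    def run_sam(self, sample):
        result = sample
        for slm in self.modules:
            result = slm.run(result)
            self.log.append((slm.name, "done"))
        return result

# Invented three-module SAM: extraction -> cleanup -> analysis
sam = TaskSequenceController([
    SLM("extraction", lambda s: {**s, "extracted": True}),
    SLM("cleanup",    lambda s: {**s, "clean": True}),
    SLM("analysis",   lambda s: {**s, "result": "PCBs quantified"}),
])
report = sam.run_sam({"matrix": "soil"})
```

    The point of the architecture is that any standard-conforming SLM can be slotted into the sequence without bespoke control code; the TSC only sequences and monitors.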

  15. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish a comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, gives good predictions, and can improve design efficiency.

  16. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish a comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, gives good predictions, and can improve design efficiency. PMID:26448740

  17. Organic solar cells: a rigorous model of the donor-acceptor interface for various bulk heterojunction morphologies

    NASA Astrophysics Data System (ADS)

    Raba, Adam; Leroy, Yann; Cordan, Anne-Sophie

    2014-02-01

    Theoretical studies of organic solar cells are mostly based on one-dimensional models. Despite their accuracy in reproducing most experimental trends, they intrinsically cannot correctly integrate the effects of morphology in cells based on a bulk heterojunction structure. Accounting for these effects therefore requires the development of two-dimensional models, in which donor and acceptor domains are explicitly distinct. In this context, we propose an analytical approach which focuses on the description of the interface between the two domains. Assuming pinned charge transfer states, we rigorously derive the corresponding boundary conditions and explore the differences between this model and other existing models in the literature for various morphologies of the active layer. On the one hand, all tested models are equivalent for an ideal interdigitated bulk heterojunction solar cell with a planar donor-acceptor interface, but divergences between the models arise for small sizes of the donor domain. On the other hand, we carried out a comparison on a less ideal cell, with a rough interface between the two domains. Simulations with such cells exhibit distinct behaviors for each model. We conclude that the boundary condition at the interface between the materials is of great importance for the study of solar cells with a non-planar interface; the model must account from the outset for the roughness of the interface.

  18. Easy-to-use interface

    SciTech Connect

    Blattner, M M; Blattner, D O; Tong, Y

    1999-04-01

    Easy-to-use interfaces are a class of interfaces that fall between public access interfaces and graphical user interfaces in usability and cognitive difficulty. We describe characteristics of easy-to-use interfaces by the properties of four dimensions: selection, navigation, direct manipulation, and contextual metaphors. Another constraint we introduced was to include as little text as possible, and what text we have will be in at least four languages. Formative evaluations were conducted to identify and isolate these characteristics. Our application is a visual interface for a home automation system intended for a diverse set of users. The design will be expanded to accommodate the visually disabled in the near future.

  19. Modeling single molecule junction mechanics as a probe of interface bonding

    NASA Astrophysics Data System (ADS)

    Hybertsen, Mark S.

    2017-03-01

    Using the atomic force microscope based break junction approach, applicable to metal point contacts and single molecule junctions, measurements can be repeated thousands of times resulting in rich data sets characterizing the properties of an ensemble of nanoscale junction structures. This paper focuses on the relationship between the measured force extension characteristics including bond rupture and the properties of the interface bonds in the junction. A set of exemplary model junction structures has been analyzed using density functional theory based calculations to simulate the adiabatic potential surface that governs the junction elongation. The junction structures include representative molecules that bond to the electrodes through amine, methylsulfide, and pyridine links. The force extension characteristics are shown to be most effectively analyzed in a scaled form with maximum sustainable force and the distance between the force zero and force maximum as scale factors. Widely used, two parameter models for chemical bond potential energy versus bond length are found to be nearly identical in scaled form. Furthermore, they fit well to the present calculations of N-Au and S-Au donor-acceptor bonds, provided no other degrees of freedom are allowed to relax. Examination of the reduced problem of a single interface, but including relaxation of atoms proximal to the interface bond, shows that a single-bond potential form renormalized by an effective harmonic potential in series fits well to the calculated results. This allows relatively accurate extraction of the interface bond energy. Analysis of full junction models shows cooperative effects that go beyond the mechanical series inclusion of the second bond in the junction, the spectator bond that does not rupture. Calculations for a series of diaminoalkanes as a function of molecule length indicate that the most important cooperative effect is due to the interactions between the dipoles induced by the donor
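    The scaled-form analysis described above can be illustrated with the Morse potential, one of the widely used two-parameter bond models. For this form the maximum sustainable force is D·a/2, reached a distance ln(2)/a beyond the force zero at r0, and these two quantities serve as the scale factors; the parameter values below are illustrative assumptions.

```python
import math

# Sketch of the scaled-force analysis using the Morse potential
# V(r) = D (1 - exp(-a (r - r0)))^2 as an example two-parameter bond
# model. Parameter values are illustrative assumptions.

def morse_force(r, D, a, r0):
    """Restoring force dV/dr of the Morse potential."""
    u = math.exp(-a * (r - r0))
    return 2.0 * D * a * u * (1.0 - u)

D, a, r0 = 1.5, 2.0, 1.0
rs = [r0 + i * 1e-4 for i in range(50000)]
forces = [morse_force(r, D, a, r0) for r in rs]

f_max = max(forces)                   # maximum sustainable force, = D*a/2
r_at_max = rs[forces.index(f_max)]    # located ln(2)/a beyond r0
scale_dist = r_at_max - r0            # zero-to-maximum distance
scaled = [f / f_max for f in forces]  # force curve in scaled form
```

    Fitting such a scaled curve to measured force-extension data, with the maximum force and the zero-to-maximum distance as the two scale factors, is the comparison step the abstract describes.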

  20. Modeling single molecule junction mechanics as a probe of interface bonding

    DOE PAGES

    Hybertsen, Mark S.

    2017-03-07

    Using the atomic force microscope based break junction approach, applicable to metal point contacts and single molecule junctions, measurements can be repeated thousands of times resulting in rich data sets characterizing the properties of an ensemble of nanoscale junction structures. This paper focuses on the relationship between the measured force extension characteristics including bond rupture and the properties of the interface bonds in the junction. We analyzed a set of exemplary model junction structures using density functional theory based calculations to simulate the adiabatic potential surface that governs the junction elongation. The junction structures include representative molecules that bond to the electrodes through amine, methylsulfide, and pyridine links. The force extension characteristics are shown to be most effectively analyzed in a scaled form with maximum sustainable force and the distance between the force zero and force maximum as scale factors. Widely used, two parameter models for chemical bond potential energy versus bond length are found to be nearly identical in scaled form. Furthermore, they fit well to the present calculations of N–Au and S–Au donor-acceptor bonds, provided no other degrees of freedom are allowed to relax. Examination of the reduced problem of a single interface, but including relaxation of atoms proximal to the interface bond, shows that a single-bond potential form renormalized by an effective harmonic potential in series fits well to the calculated results. This, then, allows relatively accurate extraction of the interface bond energy. Analysis of full junction models shows cooperative effects that go beyond the mechanical series inclusion of the second bond in the junction, the spectator bond that does not rupture. Calculations for a series of diaminoalkanes as a function of molecule length indicate that the most important cooperative effect is due to the interactions between the dipoles induced by