Science.gov

Sample records for automated modelling interface

  1. Automated identification and indexing of dislocations in crystal interfaces

    DOE PAGES Beta

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

    Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.
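
The discrete Burgers circuit described in this record can be illustrated with a small numerical toy (this is an assumption-laden sketch, not the authors' algorithm): integrating the elastic displacement field of an ideal screw dislocation, u_z = b·θ/(2π), around a closed circuit accumulates a closure failure equal to the Burgers vector magnitude.

```python
import numpy as np

def burgers_circuit(b_true=2.5, n_steps=64, radius=10.0):
    """Evaluate a discrete Burgers circuit around a screw dislocation
    at the origin, whose displacement field is u_z = b*theta/(2*pi)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    pts = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    total = 0.0
    for i in range(n_steps):
        p = pts[i]
        q = pts[(i + 1) % n_steps]  # wraps around to close the loop
        # small signed angle swept by this circuit segment
        dtheta = np.arctan2(p[0] * q[1] - p[1] * q[0], p @ q)
        # elastic displacement increment along the segment
        total += b_true * dtheta / (2.0 * np.pi)
    return total  # closure failure = Burgers vector magnitude
```

For a circuit enclosing the dislocation line the increments sum to b; a circuit through defect-free crystal would sum to zero.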

  2. Automated identification and indexing of dislocations in crystal interfaces

    SciTech Connect

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

    Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.

  3. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  4. Automation Interfaces of the Orion GNC Executive Architecture

    NASA Technical Reports Server (NTRS)

    Hart, Jeremy

    2009-01-01

    This viewgraph presentation describes the Orion mission's Guidance, Navigation, and Control (GNC) automation architecture and interfaces. The contents include: 1) Orion Background; 2) Shuttle/Orion Automation Comparison; 3) Orion Mission Sequencing; 4) Orion Mission Sequencing Display Concept; and 5) Status and Forward Plans.

  5. Towards automation of user interface design

    NASA Technical Reports Server (NTRS)

    Gastner, Rainer; Kraetzschmar, Gerhard K.; Lutz, Ernst

    1992-01-01

    This paper suggests an approach to automatic software design in the domain of graphical user interfaces. There are still some drawbacks in existing user interface management systems (UIMSs), which basically offer only quantitative layout specifications via direct manipulation. Our approach suggests a convenient way to get a default graphical user interface which may be customized and redesigned easily in further prototyping cycles.

  6. Space station automation and robotics study. Operator-systems interface

    NASA Technical Reports Server (NTRS)

    1984-01-01

    This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), establish the technologies needed to meet these requirements, and to forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.

  7. Automated visual imaging interface for the plant floor

    NASA Astrophysics Data System (ADS)

    Wutke, John R.

    1991-03-01

    The paper provides an overview of the challenges facing a user of automated visual imaging ("AVI") machines and the philosophies that should be employed in designing them. As manufacturing tools and equipment become more sophisticated, it is increasingly difficult to maintain efficient interaction between the operator and the machine. The typical user of an AVI machine in a production environment is technically unsophisticated. Moreover, operator and machine ergonomics are often a neglected or poorly addressed part of an efficient manufacturing process. This paper presents a number of man-machine interface design techniques and philosophies that effectively solve these problems.

  8. SWISS-MODEL: An automated protein homology-modeling server.

    PubMed

    Schwede, Torsten; Kopp, Jürgen; Guex, Nicolas; Peitsch, Manuel C

    2003-07-01

    SWISS-MODEL (http://swissmodel.expasy.org) is a server for automated comparative modeling of three-dimensional (3D) protein structures. It pioneered the field of automated modeling starting in 1993 and is the most widely used free web-based automated modeling facility today. In 2002 the server computed 120 000 user requests for 3D protein models. SWISS-MODEL provides several levels of user interaction through its World Wide Web interface: in the 'first approach mode' only an amino acid sequence of a protein is submitted to build a 3D model. Template selection, alignment and model building are performed fully automatically by the server. In the 'alignment mode', the modeling process is based on a user-defined target-template alignment. Complex modeling tasks can be handled with the 'project mode' using DeepView (Swiss-PdbViewer), an integrated sequence-to-structure workbench. All models are sent back via email with a detailed modeling report. WhatCheck analyses and ANOLEA evaluations are provided optionally. The reliability of SWISS-MODEL is continuously evaluated in the EVA-CM project. The SWISS-MODEL server is under constant development to improve the successful implementation of expert knowledge into an easy-to-use server. PMID:12824332

  9. ATAM - Automated Trade Assessment Modeling

    NASA Technical Reports Server (NTRS)

    Vallone, Antonio; Wu, Mei-Zong; Hogie, Keith

    1989-01-01

    Automated Trade Assessment Modeling program, ATAM, is one of a set of software tools designed to assess candidate architectures for the data-management system of the Space Station. Designed to discriminate among candidates having equally acceptable performance and reliability characteristics. Utilizes a data base, defined by the user, containing information on the candidate architecture. Assesses such trade factors of the system as weight, power consumption, and life-cycle cost. Produces detailed parameter assessments as well as a single figure of merit for each candidate architecture. Written in Microsoft FORTRAN.
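
The record does not specify ATAM's actual scoring scheme, but a single figure of merit over trade factors like weight, power, and cost is commonly computed as a normalized weighted combination. The sketch below is a generic illustration with hypothetical factor names, weights, and values, not ATAM's implementation:

```python
def figure_of_merit(candidate, baseline, weights):
    """Normalized weighted score: each trade factor (lower is better)
    is scored as baseline/candidate, then combined with user weights."""
    total_w = sum(weights.values())
    return sum(w * baseline[f] / candidate[f]
               for f, w in weights.items()) / total_w

# hypothetical trade factors for one candidate architecture
baseline = {"mass_kg": 120.0, "power_w": 450.0, "cost_musd": 30.0}
candidate = {"mass_kg": 100.0, "power_w": 500.0, "cost_musd": 27.0}
weights = {"mass_kg": 0.3, "power_w": 0.3, "cost_musd": 0.4}
```

A candidate identical to the baseline scores exactly 1.0; scores above 1.0 indicate a net improvement under the chosen weights.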

  10. Geographic information system/watershed model interface

    USGS Publications Warehouse

    Fisher, Gary T.

    1989-01-01

    Geographic information systems allow for the interactive analysis of spatial data related to water-resources investigations. A conceptual design for an interface between a geographic information system and a watershed model includes functions for the estimation of model parameter values. Design criteria include ease of use, minimal equipment requirements, a generic data-base management system, and use of a macro language. An application is demonstrated for a 90.1-square-kilometer subbasin of the Patuxent River near Unity, Maryland, that performs automated derivation of watershed parameters for hydrologic modeling.
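
One simple form of the automated parameter derivation this record describes is an area-weighted average of a hydrologic parameter over the GIS polygons falling inside the subbasin. The values and parameter below are hypothetical; the actual interface's estimation functions are not detailed in the abstract:

```python
def area_weighted_parameter(subareas):
    """subareas: (area_km2, parameter_value) pairs, e.g. one pair per
    land-cover polygon intersected with the subbasin boundary."""
    total_area = sum(a for a, _ in subareas)
    return sum(a * v for a, v in subareas) / total_area

# hypothetical polygons: 1 km^2 of forest (CN 70), 3 km^2 of crops (CN 90)
basin_cn = area_weighted_parameter([(1.0, 70.0), (3.0, 90.0)])
```

This yields a basin-average value of 85.0 for the toy inputs above.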

  11. Automated parking garage system model

    NASA Technical Reports Server (NTRS)

    Collins, E. R., Jr.

    1975-01-01

    A one-twenty-fifth scale model of the key components of an automated parking garage system is described. The design of the model required transferring a vehicle from an entry level, vertically (+Z, -Z), to a storage location at any one of four storage positions (+X, -X, +Y, -Y) on the storage levels. There are three primary subsystems: (1) a screw jack to provide the vertical motion of the elevator, (2) a cam-driven track-switching device to provide X to Y motion, and (3) a transfer cart to provide horizontal travel and a small amount of vertical motion for transfer to the storage location. Motive power is provided by dc permanent magnet gear motors, one each for the elevator and track-switching device and two for the transfer cart drive system (one driving the cart horizontally and the other providing the vertical transfer). The control system, through the use of a microprocessor, provides complete automation through a feedback system which utilizes sensing devices.

  12. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Shafto, Michael; Meyer, George; Clancy, Daniel (Technical Monitor)

    2001-01-01

    This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user manuals. The procedure is guided by two criteria: First, the interface must be correct, i.e., with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be as succinct as possible. The report discusses the underlying concepts and the formal methods for this approach. Several examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.
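
The "correct but succinct" criterion can be illustrated, in a much-simplified form, by merging machine states the user cannot distinguish through the observed outputs. The partition-refinement sketch below is a toy stand-in for the report's formal procedure; all state and event names are illustrative:

```python
def reduce_interface_model(states, output, delta, alphabet):
    """Lump states with the same output and the same successor classes;
    the quotient is a smaller model that still lets the user predict
    task-relevant behavior (Moore-machine minimization)."""
    part = {s: output[s] for s in states}  # start: group states by output
    while True:
        sig = {s: (part[s], tuple(part[delta[s][a]] for a in alphabet))
               for s in states}
        labels = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new = {s: labels[sig[s]] for s in states}
        if len(set(new.values())) == len(set(part.values())):
            return new  # refinement converged; classes are stable
        part = new

# toy automation model: s1 and s2 behave identically from the user's view
states = ["s0", "s1", "s2"]
output = {"s0": "idle", "s1": "busy", "s2": "busy"}
delta = {"s0": {"go": "s1", "stop": "s0"},
         "s1": {"go": "s1", "stop": "s0"},
         "s2": {"go": "s2", "stop": "s0"}}
classes = reduce_interface_model(states, output, delta, ("go", "stop"))
```

Here s1 and s2 collapse into one interface state, so the user model needs fewer modes than the underlying machine.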

  13. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, that is, with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be succinct. The report discusses the underlying concepts and the formal methods for this approach. Two examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.

  14. Automated Volumetric Analysis of Interface Fluid in Descemet Stripping Automated Endothelial Keratoplasty Using Intraoperative Optical Coherence Tomography

    PubMed Central

    Xu, David; Dupps, William J.; Srivastava, Sunil K.; Ehlers, Justis P.

    2014-01-01

    Purpose. We demonstrated a novel automated algorithm for segmentation of intraoperative optical coherence tomography (iOCT) imaging of fluid interface gap in Descemet stripping automated endothelial keratoplasty (DSAEK) and evaluated the effect of intraoperative maneuvers to promote graft apposition on interface dimensions. Methods. A total of 30 eyes of 29 patients from the anterior segment arm of the PIONEER study were included in this analysis. The iOCT scans were entered into an automated algorithm that delineated the spatial extent of the fluid interface gap in three dimensions between donor and host cornea during surgery. The algorithm was validated against manual segmentation, and performance was evaluated by absolute accuracy and intraclass correlation coefficient. Patients underwent DSAEK using a standard sequence of maneuvers, including controlled elevation of IOP and compressive corneal sweep to promote graft adhesion. Measurements of interface fluid volume, en face area, and maximal interface height were compared between scans before anterior chamber infusion, after pressure elevation alone, and after corneal sweep with pressure elevation using a dependent-samples t-test. Results. The algorithm achieved 87% absolute accuracy and an intraclass correlation of 0.96. Nine datasets of a total of 84 (11%) required human correction of segmentation errors. Mean interface fluid volume was significantly decreased by corneal sweep (P = 0.021) and by both maneuvers combined (P = 0.046). Mean en face area was significantly decreased by corneal sweep (P = 0.010) and the maneuvers combined (P < 0.001). Maximal interface height was significantly decreased by pressure elevation (P = 0.010), corneal sweep (P = 0.009), and the maneuvers combined (P = 0.010). Conclusions. Quantitative analysis of iOCT volumetric scans shows the significant effect of controlled pressure elevation and corneal sweep on graft apposition in DSAEK. Computerized iOCT analysis yields objective measurements of interface fluid intraoperatively, which provides information on anatomic outcomes and could be used in future trials. PMID:25103262
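
The dependent-samples t-test used for the before/after comparisons in this study has a simple closed form. The sketch below uses entirely hypothetical fluid-volume numbers, not the study's data:

```python
import numpy as np

def paired_t(before, after):
    """Dependent-samples t statistic for paired before/after
    measurements on the same eyes: t = mean(d) / (sd(d) / sqrt(n))."""
    d = np.asarray(before, dtype=float) - np.asarray(after, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# hypothetical interface fluid volumes (arbitrary units) for five eyes,
# before and after a graft-apposition maneuver
before = [2.1, 1.8, 2.5, 1.9, 2.2]
after = [1.0, 0.9, 1.6, 0.8, 1.2]
```

Pairing each eye with itself removes between-eye variability, which is why the paired design is used rather than an independent-samples test.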

  15. Hierarchical interface-enriched finite element method: An automated technique for mesh-independent simulations

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil

    2014-10-01

    A hierarchical interface-enriched finite element method (HIFEM) is introduced for the mesh-independent treatment of problems with complex morphologies. The proposed method provides an automated framework to capture gradient discontinuities associated with multiple materials interfaces that are in close proximity or contact, while using finite element meshes that do not conform to the problem geometry. While yielding optimal precision and convergence rates, other principal advantages of HIFEM include the ease of implementation and the ability to compute enriched solutions for a variety of complex materials interface configurations. Moreover, the construction of enrichment functions in this method is independent of the number and sequence of materials interfaces introduced to the nonconforming mesh. An immediate benefit of this feature is the ability to add new materials phases to already enriched nonconforming elements, without the need to remove/modify existing enrichments or sub-elements. In addition to a detailed convergence study, several example problems are presented to show the application of HIFEM for modeling various engineering problems, including woven composites, heterogeneous materials systems, and actively-cooled microvascular systems.

  16. Designing effective human-automation-plant interfaces: a control-theoretic perspective.

    PubMed

    Jamieson, Greg A; Vicente, Kim J

    2005-01-01

    In this article, we propose the application of a control-theoretic framework to human-automation interaction. The framework consists of a set of conceptual distinctions that should be respected in automation research and design. We demonstrate how existing automation interface designs in some nuclear plants fail to recognize these distinctions. We further show the value of the approach by applying it to modes of automation. The design guidelines that have been proposed in the automation literature are evaluated from the perspective of the framework. This comparison shows that the framework reveals insights that are frequently overlooked in this literature. A new set of design guidelines is introduced that builds upon the contributions of previous research and draws complementary insights from the control-theoretic framework. The result is a coherent and systematic approach to the design of human-automation-plant interfaces that will yield more concrete design criteria and a broader set of design tools. Applications of this research include improving the effectiveness of human-automation interaction design and the relevance of human-automation interaction research. PMID:15960084

  17. Alloy Interface Interdiffusion Modeled

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Garces, Jorge E.; Abel, Phillip B.

    2003-01-01

    With renewed interest in developing nuclear-powered deep space probes, attention will return to improving the metallurgical processing of potential nuclear fuels so that they remain dimensionally stable over the years required for a successful mission. Previous work on fuel alloys at the NASA Glenn Research Center was primarily empirical, with virtually no continuing research. Even when empirical studies are exacting, they often fail to provide enough insight to guide future research efforts. In addition, from a fundamental theoretical standpoint, the actinide metals (which include materials used for nuclear fuels) pose a severe challenge to modern electronic-structure theory. Recent advances in quantum approximate atomistic modeling, coupled with first-principles derivation of needed input parameters, can help researchers develop new alloys for nuclear propulsion.

  18. Cooperative control - The interface challenge for men and automated machines

    NASA Technical Reports Server (NTRS)

    Hankins, W. W., III; Orlando, N. E.

    1984-01-01

    The research issues associated with the increasing autonomy and independence of machines and their evolving relationships to human beings are explored. The research, conducted by Langley Research Center (LaRC), will produce a new social work order in which the complementary attributes of robots and human beings, which include robots' greater strength and precision and humans' greater physical and intellectual dexterity, are necessary for systems of cooperation. Attention is given to the tools for performing the research, including the Intelligent Systems Research Laboratory (ISRL) and industrial manipulators, as well as to the research approaches taken by the Automation Technology Branch (ATB) of LaRC to achieve high automation levels. The ATB is focusing on artificial intelligence research through DAISIE, a system which tends to organize its environment into hierarchical controller/planner abstractions.

  19. Automation and Accountability in Decision Support System Interface Design

    ERIC Educational Resources Information Center

    Cummings, Mary L.

    2006-01-01

    When the human element is introduced into decision support system design, entirely new layers of social and ethical issues emerge but are not always recognized as such. This paper discusses those ethical and social impact issues specific to decision support systems and highlights areas that interface designers should consider during design with an…

  20. Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.

    2000-01-01

    The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi-stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.

  1. Model-Based Design of Air Traffic Controller-Automation Interaction

    NASA Technical Reports Server (NTRS)

    Romahn, Stephan; Callantine, Todd J.; Palmer, Everett A.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    A model of controller and automation activities was used to design the controller-automation interactions necessary to implement a new terminal area air traffic management concept. The model was then used to design a controller interface that provides the requisite information and functionality. Using data from a preliminary study, the Crew Activity Tracking System (CATS) was used to help validate the model as a computational tool for describing controller performance.

  2. Task-focused modeling in automated agriculture

    NASA Astrophysics Data System (ADS)

    Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack

    1993-01-01

    Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.
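
The "detailed only where the task needs it" idea behind task-focused modeling resembles adaptive spatial refinement. The quadtree sketch below (a generic illustration, not the authors' scheme) subdivides a model only in cells that a task-supplied interest test flags:

```python
def build(x, y, size, needs_detail, depth=0, max_depth=4):
    """Refine a square model cell only where the task demands detail."""
    if depth == max_depth or not needs_detail(x, y, size):
        return {"x": x, "y": y, "size": size, "children": None}  # leaf
    h = size / 2.0
    children = [build(x + dx, y + dy, h, needs_detail, depth + 1, max_depth)
                for dx in (0.0, h) for dy in (0.0, h)]
    return {"x": x, "y": y, "size": size, "children": children}

def count_leaves(node):
    if node["children"] is None:
        return 1
    return sum(count_leaves(c) for c in node["children"])

# hypothetical task interest: a single inspection point in the unit square
poi = (0.3, 0.7)
in_cell = lambda x, y, s: x <= poi[0] < x + s and y <= poi[1] < y + s
tree = build(0.0, 0.0, 1.0, in_cell)
```

Cells along the chain containing the point of interest refine to the maximum depth; everywhere else the model stays coarse, which is the computational saving the abstract describes.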

  3. User interface design principles for the SSM/PMAD automated power system

    NASA Technical Reports Server (NTRS)

    Jakstas, Laura M.; Myers, Chris J.

    1991-01-01

    Martin Marietta has developed a user interface for the space station module power management and distribution (SSM/PMAD) automated power system testbed which provides human access to the functionality of the power system, as well as exemplifying current techniques in user interface design. The testbed user interface was designed to enable an engineer to operate the system easily without having significant knowledge of computer systems, as well as provide an environment in which the engineer can monitor and interact with the SSM/PMAD system hardware. The design of the interface supports a global view of the most important data from the various hardware and software components, as well as enabling the user to obtain additional or more detailed data when needed. The components and representations of the SSM/PMAD testbed user interface are examined. An engineer's interactions with the system are also described.

  4. Automated two-dimensional interface for capillary gas chromatography

    DOEpatents

    Strunk, M.R.; Bechtold, W.E.

    1996-02-20

    A multidimensional gas chromatograph (GC) system is disclosed which has wide bore capillary and narrow bore capillary GC columns in series and has a novel system interface. Heart cuts from a high flow rate sample, separated by a wide bore GC column, are collected and directed to a narrow bore GC column with carrier gas injected at a lower flow compatible with a mass spectrometer. A bimodal six-way valve is connected with the wide bore GC column outlet and a bimodal four-way valve is connected with the narrow bore GC column inlet. A trapping and retaining circuit with a cold trap is connected with the six-way valve and a transfer circuit interconnects the two valves. The six-way valve is manipulated between first and second mode positions to collect analyte, and the four-way valve is manipulated between third and fourth mode positions to allow carrier gas to sweep analyte from a deactivated cold trap, through the transfer circuit, and then to the narrow bore GC capillary column for separation and subsequent analysis by a mass spectrometer. Rotary valves have substantially the same bore width as their associated columns to minimize flow irregularities and resulting sample peak deterioration. The rotary valves are heated separately from the GC columns to avoid temperature lag and resulting sample deterioration. 3 figs.

  5. Automated two-dimensional interface for capillary gas chromatography

    DOEpatents

    Strunk, Michael R.; Bechtold, William E.

    1996-02-20

    A multidimensional gas chromatograph (GC) system having wide bore capillary and narrow bore capillary GC columns in series and having a novel system interface. Heart cuts from a high flow rate sample, separated by a wide bore GC column, are collected and directed to a narrow bore GC column with carrier gas injected at a lower flow compatible with a mass spectrometer. A bimodal six-way valve is connected with the wide bore GC column outlet and a bimodal four-way valve is connected with the narrow bore GC column inlet. A trapping and retaining circuit with a cold trap is connected with the six-way valve and a transfer circuit interconnects the two valves. The six-way valve is manipulated between first and second mode positions to collect analyte, and the four-way valve is manipulated between third and fourth mode positions to allow carrier gas to sweep analyte from a deactivated cold trap, through the transfer circuit, and then to the narrow bore GC capillary column for separation and subsequent analysis by a mass spectrometer. Rotary valves have substantially the same bore width as their associated columns to minimize flow irregularities and resulting sample peak deterioration. The rotary valves are heated separately from the GC columns to avoid temperature lag and resulting sample deterioration.

  6. FORCe: Fully Online and Automated Artifact Removal for Brain-Computer Interfacing.

    PubMed

    Daly, Ian; Scherer, Reinhold; Billinger, Martin; Müller-Putz, Gernot

    2015-09-01

    A fully automated and online artifact removal method for the electroencephalogram (EEG) is developed for use in brain-computer interfacing (BCI). The method (FORCe) is based upon a novel combination of wavelet decomposition, independent component analysis, and thresholding. FORCe is able to operate on a small channel set during online EEG acquisition and does not require additional signals (e.g., electrooculogram signals). Evaluation of FORCe is performed offline on EEG recorded from 13 BCI participants with cerebral palsy (CP) and online with three healthy participants. The method outperforms the state-of-the-art automated artifact removal methods Lagged Auto-Mutual Information Clustering (LAMIC) and Fully Automated Statistical Thresholding for EEG artifact Rejection (FASTER), and is able to remove a wide range of artifact types including blink, electromyogram (EMG), and electrooculogram (EOG) artifacts. PMID:25134085
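
A greatly simplified sketch of the decompose-threshold-reconstruct idea behind FORCe is shown below. Plain SVD/PCA stands in for the method's wavelet and ICA steps, and the crest-factor rule and all signal values are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def reject_artifact_components(x, crest_limit=5.0):
    """Split multichannel EEG (samples x channels) into components,
    zero any component whose time course has an implausibly high
    peak-to-RMS ratio, then reconstruct the channels."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    for c in range(s.size):
        ts = u[:, c] * s[c]                       # component time course
        if np.abs(ts).max() > crest_limit * ts.std():
            s[c] = 0.0                            # treat as artifact
    return (u * s) @ vt

# synthetic test: four sinusoidal "EEG" channels plus a large
# blink-like burst common to all channels
t = np.arange(1000) / 250.0
clean = np.column_stack([np.sin(2 * np.pi * f * t) for f in (8, 10, 12, 14)])
contaminated = clean.copy()
contaminated[400:420, :] += 50.0
cleaned = reject_artifact_components(contaminated)
```

The burst dominates one component with a very spiky time course, so it is rejected, while the smooth oscillatory components are retained.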

  7. Design Through Manufacturing: The Solid Model - Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2003-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts which reflect every detail of the finished product. Ideally, these models should fulfill two very important functions: (1) they must provide numerical control information for automated manufacturing of precision parts, and (2) they must enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in space missions. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. The research performed during the fellowship period investigated the transition process from the solid CAD model to the FEA stress analysis model with the final goal of creating an automatic interface between the two. During the period of the fellowship a detailed multi-year program for the development of such an interface was created. The ultimate goal of this program will be the development of a fully parameterized automatic ProE/FEA translator for parts and assemblies, with the incorporation of data base management into the solution, and ultimately including computational fluid dynamics and thermal modeling in the interface.

  8. RCrane: semi-automated RNA model building

    PubMed Central

    Keating, Kevin S.; Pyle, Anna Marie

    2012-01-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems. PMID:22868764

  9. Automating Risk Analysis of Software Design Models

    PubMed Central

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P.

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688
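
An identification tree of the kind described here can be pictured as a predicate tree walked over each design element, collecting threats at matching nodes. The sketch below is a toy with hypothetical predicates and threat names, not AutSEC's actual data structures:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class IdNode:
    """One node of a toy identification tree: the predicate narrows
    which design elements the subtree applies to, and a node may
    report a threat when its predicate matches."""
    test: Callable[[dict], bool]
    threat: Optional[str] = None
    children: List["IdNode"] = field(default_factory=list)

def identify(node: IdNode, element: dict, found: List[str]) -> None:
    if not node.test(element):
        return                      # subtree does not apply to this element
    if node.threat is not None:
        found.append(node.threat)
    for child in node.children:
        identify(child, element, found)

# toy tree: data flows that cross a trust boundary suggest tampering
tree = IdNode(
    test=lambda e: e["kind"] == "data_flow",
    children=[IdNode(test=lambda e: e["crosses_trust_boundary"],
                     threat="tampering in transit")],
)
```

A companion mitigation tree could be walked the same way, mapping each found threat to candidate countermeasures ranked by cost.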

  10. Automating risk analysis of software design models.

    PubMed

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P.

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688

  11. Automated dynamic analytical model improvement

    NASA Technical Reports Server (NTRS)

    Berman, A.

    1981-01-01

    A method is developed and illustrated which finds minimum changes in analytical mass and stiffness matrices to make them consistent with a set of measured normal modes and natural frequencies. The corrected model is an improved base for studies of physical changes, changes in boundary conditions, and for prediction of forced responses. Features of the method are: efficient procedures not requiring solutions of the eigenproblem; the model may have more degrees of freedom than the test data; modal displacements at all the analytical degrees of freedom are obtained; and the frequency dependence of the coordinate transformations is properly treated.
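
    The core idea — a minimum change to the stiffness matrix that reproduces measured modes and frequencies — can be sketched with a simplified, unsymmetric minimum-Frobenius-norm update. This is not Berman's exact formulation (which enforces symmetry and mass-matrix constraints); it only illustrates the "smallest correction consistent with test data" concept.

```python
import numpy as np

# Minimal-change stiffness correction: find the smallest (Frobenius-norm)
# update dK such that the corrected model reproduces the measured modes,
# i.e. (K + dK) @ Phi = M @ Phi @ Lam.  Simplified illustration only.

def correct_stiffness(K, M, Phi, omega):
    Lam = np.diag(omega ** 2)               # eigenvalues from measured frequencies
    residual = M @ Phi @ Lam - K @ Phi      # how far the model misses the test data
    dK = residual @ np.linalg.pinv(Phi)     # minimum-norm solution of dK @ Phi = residual
    return K + dK

# Toy 2-DOF system whose analytical stiffness is slightly wrong.
M = np.diag([1.0, 1.0])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
# Pretend the test measured the exact modes of a target K_true.
K_true = np.array([[2.2, -1.1], [-1.1, 2.2]])
lam, Phi = np.linalg.eigh(K_true)           # M = I, so an ordinary eigenproblem
K_fixed = correct_stiffness(K, M, Phi, np.sqrt(lam))
print(np.allclose(K_fixed @ Phi, M @ Phi @ np.diag(lam)))
```

    With a full set of measured modes the correction is exact; with fewer modes than degrees of freedom (the realistic case the abstract mentions), the pseudoinverse yields the minimum-norm update consistent with the available data.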

  12. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/incursion accidents. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation-related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.

  13. RCrane: semi-automated RNA model building

    SciTech Connect

    Keating, Kevin S.; Pyle, Anna Marie

    2012-08-01

    RCrane is a new tool for the partially automated building of RNA crystallographic models into electron-density maps of low or intermediate resolution. This tool helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  14. Atomistic modeling of dislocation-interface interactions

    SciTech Connect

    Wang, Jian; Valone, Steven M; Beyerlein, Irene J; Misra, Amit; Germann, T. C.

    2011-01-31

    Using atomic scale models and interface defect theory, we first classify interface structures into a few types with respect to geometrical factors, then study the interfacial shear response and further simulate the dislocation-interface interactions using molecular dynamics. The results show that the atomic scale structural characteristics of both heterophase and homophase interfaces play a crucial role in (i) their mechanical responses and (ii) the ability of incoming lattice dislocations to transmit across them.

  15. Automation model of sewerage rehabilitation planning.

    PubMed

    Yang, M D; Su, T C

    2006-01-01

    The major steps of sewerage rehabilitation include inspection of sewerage, assessment of structural conditions, computation of structural condition grades, and determination of rehabilitation methods and materials. Conventionally, sewerage rehabilitation planning relies on experts with professional backgrounds, which is tedious and time-consuming. This paper proposes an automation model for planning optimal sewerage rehabilitation strategies for the sewer system by integrating image processing, clustering technology, optimization, and visualization display. Firstly, image processing techniques, such as wavelet transformation and co-occurrence feature extraction, were employed to extract various characteristics of structural failures from CCTV inspection images. Secondly, a classification neural network was established to automatically interpret the structural conditions by comparing the extracted features with the typical failures in a databank. Then, to achieve optimal rehabilitation efficiency, a genetic algorithm was used to determine appropriate rehabilitation methods and substitution materials for the pipe sections with a risk of malfunction and even collapse. Finally, the result from the automation model can be visualized in a geographic information system in which essential information on the sewer system and sewerage rehabilitation plans is graphically displayed. For demonstration, the automation model of optimal sewerage rehabilitation planning was applied to a sewer system in east Taichung, Chinese Taiwan. PMID:17302324
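
    The genetic-algorithm step can be pictured as evolving one rehabilitation method per pipe section to minimize cost while treating every high-risk section. The sketch below uses invented methods, costs, and a penalty term — none of these numbers come from the paper.

```python
import random

# Toy genetic algorithm in the spirit of the rehabilitation-planning step:
# choose a method per pipe section, minimizing cost subject to treating
# all high-risk sections. Methods, costs, and the penalty are made up.

random.seed(0)
METHODS = {"none": 0, "liner": 5, "replace": 9}    # method -> unit cost
risk = [True, False, True, True, False]            # which sections are high-risk
PENALTY = 100                                      # cost of leaving a risky section untreated

def fitness(plan):
    cost = sum(METHODS[m] for m in plan)
    cost += PENALTY * sum(1 for m, r in zip(plan, risk) if r and m == "none")
    return cost                                    # lower is better

def evolve(pop_size=30, gens=60):
    names = list(METHODS)
    pop = [[random.choice(names) for _ in risk] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(risk))      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # mutation
                child[random.randrange(len(risk))] = random.choice(names)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

    A real instance would derive the risk flags from the neural-network condition grades and use realistic per-method, per-diameter cost tables.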

  16. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Most space mission planning and scheduling systems use a timeline-based representation for operations modeling, most model a core set of state and resource types, and most provide similar capabilities on top of this modeling to enable (semi-)automated schedule generation. In this paper we explore the commonality of representation and services for these timelines; these commonalities offer the potential to be harmonized to enable interoperability and re-use.
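
    The common abstraction the abstract refers to — a state variable whose value changes at discrete times, queryable at any instant — can be sketched minimally as below. The class and field names are illustrative, not taken from any of the surveyed systems.

```python
from dataclasses import dataclass, field
from bisect import bisect_right

# Minimal sketch of a state timeline: a named variable with a sorted list
# of (time, value) change points, plus a point query. Names are invented.

@dataclass
class Timeline:
    name: str
    changes: list = field(default_factory=list)   # sorted (time, value) pairs

    def set_value(self, time, value):
        self.changes.append((time, value))
        self.changes.sort()

    def value_at(self, time):
        # Find the last change at or before `time`; None before any change.
        i = bisect_right(self.changes, (time, chr(0x10FFFF)))
        return self.changes[i - 1][1] if i else None

power = Timeline("power_state")
power.set_value(0, "off")
power.set_value(10, "on")
power.set_value(25, "off")
print(power.value_at(12))
```

    Resource timelines would extend this with numeric levels and constraint checks (e.g. a level never exceeding capacity), which is where the surveyed systems differ most.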

  17. Automating the Modeling of the SEE Cross Section's Angular Dependence

    NASA Technical Reports Server (NTRS)

    Patterson, J. D.; Edmonds, L. D.

    2003-01-01

    An algorithm that automates the application of the alpha law in any SEE analysis is presented. This automation is essential for the widespread acceptance of the sophisticated cross section angular dependence model.

  18. Automated Expert Modeling and Student Evaluation

    Energy Science and Technology Software Center (ESTSC)

    2012-09-12

    AEMASE searches a database of recorded events for combinations of events that are of interest. It compares matching combinations to a statistical model to determine similarity to previous events of interest and alerts the user as new matching examples are found. AEMASE is currently used by weapons tactics instructors to find situations of interest in recorded tactical training scenarios. AEMASE builds on a sub-component, the Relational Blackboard (RBB), which is being released as open-source software. AEMASE builds on RBB by adding interactive expert model construction (automated knowledge capture) and re-evaluation of scenario data.

  19. Automated Expert Modeling and Student Evaluation

    SciTech Connect

    2012-09-12

    AEMASE searches a database of recorded events for combinations of events that are of interest. It compares matching combinations to a statistical model to determine similarity to previous events of interest and alerts the user as new matching examples are found. AEMASE is currently used by weapons tactics instructors to find situations of interest in recorded tactical training scenarios. AEMASE builds on a sub-component, the Relational Blackboard (RBB), which is being released as open-source software. AEMASE builds on RBB adding interactive expert model construction (automated knowledge capture) and re-evaluation of scenario data.

  20. Automated statistical modeling of analytical measurement systems

    SciTech Connect

    Jacobson, J J

    1992-08-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily. The processing of control standards gives us this ability.
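
    The bias/precision idea can be shown with a minimal calculation on blind control-standard results: bias is the mean error against the certified value, precision is the scatter of those errors. The numbers below are invented; a real program would model both quantities as functions of concentration level.

```python
import statistics

# Sketch: control standards with a known certified value are analyzed blind;
# bias = mean measurement error, precision = standard deviation of the errors.

known = 50.0                                  # certified value (assumed units)
measured = [50.8, 49.6, 50.4, 51.0, 49.9, 50.5]

errors = [m - known for m in measured]
bias = statistics.mean(errors)                # systematic offset
precision = statistics.stdev(errors)          # random scatter (1-sigma)
print(round(bias, 3), round(precision, 3))
```

    Control charts in the statistical-process-control step would then flag the measurement system when new control results fall outside limits derived from these two quantities.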

  1. Interfacing materials models with fire field models

    SciTech Connect

    Nicolette, V.F.; Tieszen, S.R.; Moya, J.L.

    1995-12-01

    For flame spread over solid materials, there has traditionally been a large technology gap between fundamental combustion research and the somewhat simplistic approaches used for practical, real-world applications. Recent advances in computational hardware and computational fluid dynamics (CFD)-based software have led to the development of fire field models. These models, when used in conjunction with material burning models, have the potential to bridge the gap between research and application by implementing physics-based engineering models in a transient, multi-dimensional tool. This paper discusses the coupling that is necessary between fire field models and burning material models for the simulation of solid material fires. Fire field models are capable of providing detailed information about the local fire environment. This information serves as an input to the solid material combustion submodel, which subsequently calculates the impact of the fire environment on the material. The response of the solid material (in terms of thermal response, decomposition, charring, and off-gassing) is then fed back into the field model as a source of mass, momentum and energy. The critical parameters which must be passed between the field model and the material burning model have been identified. Many computational issues must be addressed when developing such an interface. Some examples include the ability to track multiple fuels and species, local ignition criteria, and the need to use local grid refinement over the burning material of interest.

  2. Development of a commercial Automated Laser Gas Interface (ALGI) for AMS

    NASA Astrophysics Data System (ADS)

    Daniel, R.; Mores, M.; Kitchen, R.; Sundquist, M.; Hauser, T.; Stodola, M.; Tannenbaum, S.; Skipper, P.; Liberman, R.; Young, G.; Corless, S.; Tucker, M.

    2013-01-01

    National Electrostatics Corporation (NEC), the Massachusetts Institute of Technology (MIT), and GlaxoSmithKline (GSK) have collectively been developing an interface to introduce CO2 produced by the laser combustion of liquid chromatograph eluate deposited on a CuO substrate directly into the ion source of an AMS system, thereby bypassing the customary graphitization process. The Automated Laser Gas Interface (ALGI) converts dried liquid samples to CO2 gas quickly and efficiently, allowing 96 samples to be measured in as little as 16 h. 14C:12C ratios typically stabilize within 2 min of analysis time per sample. Presented is the recent progress of NEC's ALGI, a stand-alone accessory to an NEC gas-enabled multi-cathode source of negative ions by Cs sputtering (MC-SNICS) ion source.

  3. A method for automated detection of usability problems from client user interface events.

    PubMed

    Saadawi, Gilan M; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S

    2005-01-01

    Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and manually computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121
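
    One timing-based criterion of this kind can be sketched as flagging a subgoal whose observed duration exceeds its expected duration scaled by the think-aloud correction factor. The subgoal names, expected times, and factor value below are all invented for illustration.

```python
# Hedged sketch: flag a subgoal as a likely usability problem when its
# logged duration exceeds an expected time scaled by a correction factor
# for think-aloud slowdown. All thresholds here are hypothetical.

CORRECTION = 1.5                    # assumed think-aloud slowdown factor
expected = {"login": 10.0, "select_case": 8.0, "annotate": 20.0}

# (subgoal, start_s, end_s) tuples extracted from the centralized event database
events = [("login", 0.0, 9.0), ("select_case", 9.0, 31.0), ("annotate", 31.0, 55.0)]

def flag_problems(events, expected, correction):
    problems = []
    for subgoal, start, end in events:
        if end - start > expected[subgoal] * correction:
            problems.append(subgoal)
    return problems

print(flag_problems(events, expected, CORRECTION))
```

    The paper's actual algorithm reproduces the full manual coding criteria, of which excessive subgoal time is only one kind.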

  4. A Method for Automated Detection of Usability Problems from Client User Interface Events

    PubMed Central

    Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.

    2005-01-01

    Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and manually computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121

  5. Automated PSF Modeling for Hubble Images

    NASA Astrophysics Data System (ADS)

    Hamilton, Timothy S.

    2014-01-01

    Two techniques have commonly been used to model the Point Spread Function (PSF) of Hubble images: a natural PSF from observed stars, and an artificial PSF from the TinyTim software. PSF models need to be matched in color, subpixel centering, and focus, which takes a good deal of time and effort and slows down work in large surveys. I present the public release of software for automating an artificial PSF fit. This program needs very little user interaction and is designed to be used in anything from one PSF in a single image up to many in a large-scale survey. Applications to active galaxies are shown from the CANDELS survey, where the PSF is subtracted to show the host galaxy.

  6. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Numerous automated and semi-automated planning & scheduling systems have been developed for space applications. Most of these systems are model-based in that they encode the domain knowledge necessary to predict spacecraft state and resources based on initial conditions and a proposed activity plan. The spacecraft state and resources are often modeled as a series of timelines, with a timeline or set of timelines representing each state or resource key to the operations of the spacecraft. In this paper, we first describe a basic timeline representation that can represent a set of state, resource, timing, and transition constraints. We then describe a number of planning and scheduling systems designed for space applications (and in many cases deployed for use on ongoing missions) and describe how they do and do not map onto this timeline model.

  7. A diffuse interface model with immiscibility preservation

    SciTech Connect

    Tiwari, Arpit; Freund, Jonathan B.; Pantano, Carlos

    2013-11-01

    A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical-bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent to a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion leading to qualitatively incorrect results.

  8. A Diffuse Interface Model with Immiscibility Preservation

    PubMed Central

    Tiwari, Arpit; Freund, Jonathan B.; Pantano, Carlos

    2013-01-01

    A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent to a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion leading to qualitatively incorrect results. PMID:24058207

  9. Automation life-cycle cost model

    NASA Technical Reports Server (NTRS)

    Gathmann, Thomas P.; Reeves, Arlinda J.; Cline, Rick; Henrion, Max; Ruokangas, Corinne

    1992-01-01

    The problem domain being addressed by this contractual effort can be summarized by the following list: Automation and Robotics (A&R) technologies appear to be viable alternatives to current, manual operations; life-cycle cost models are typically judged with suspicion due to implicit assumptions and little associated documentation; and uncertainty is a reality for increasingly complex problems, yet few models explicitly account for its effect on the solution space. The objectives for this effort range from the near-term (1-2 years) to far-term (3-5 years). In the near-term, the envisioned capabilities of the modeling tool are annotated. In addition, a framework is defined and developed in the Decision Modelling System (DEMOS) environment. Our approach is summarized as follows: assess desirable capabilities (structured into near- and far-term); identify useful existing models/data; identify parameters for utility analysis; define the tool framework; encode a scenario thread for model validation; and provide a transition path for tool development. This report contains all relevant, technical progress made on this contractual effort.

  10. Generalized model for interface description

    NASA Astrophysics Data System (ADS)

    Barbier, Antoine

    1998-05-01

    In this paper a complete formalism for interface description is presented. The interfaces to be described may be continuous or granular and are characterized by a few very general parameters: the coverable part of the surface; the island generating function, which depends on the crystallographic symmetry of the islands or clusters; the maximal height an island may reach before it touches another; the distribution of island heights; and the concentrations of the evaporated elements. All these parameters may depend on coverage, time, temperature, etc., allowing for the description of many phenomena which affect the island shape. The formalism is versatile and the field of its potential applications is large and may grow. It is well suited to describe almost everything which happens at the surface: the growth of binary systems, alloys, alloying, diffusion, clusters, etc. It is naturally well suited for describing the intensities measured by spectroscopies, and the formalism is explicitly extended to these techniques. A consequence of the generalized description is to break down some of the well-accepted ideas in this field, such as premonolayer breaks, the occurrence of breaks only for layer-by-layer growth, and the ability of such techniques to determine growth modes without any other data. Finally, new applications are suggested for spectroscopies: adsorbate surface roughening and multilayer growth monitoring.

  11. Bayesian Safety Risk Modeling of Human-Flightdeck Automation Interaction

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2015-01-01

    The use of automated systems in airliners has increased fuel efficiency, added extra capabilities, enhanced safety and reliability, and provided improved passenger comfort since their introduction in the late 1980s. However, the original automation benefits, including reduced flight crew workload, human errors, or training requirements, were not achieved as originally expected. Instead, automation introduced new failure modes, redistributed and sometimes increased workload, brought in new cognitive and attention demands, and increased training requirements. Modern airliners have numerous flight modes, providing more flexibility (and inherently more complexity) to the flight crew. However, the price to pay for the increased flexibility is the need for increased mode awareness, as well as the need to supervise, understand, and predict automated system behavior. Also, over-reliance on automation is linked to manual flight skill degradation and complacency in commercial pilots. As a result, recent accidents involving human errors are often caused by the interactions between humans and the automated systems (e.g., the breakdown in man-machine coordination), deteriorated manual flying skills, and/or loss of situational awareness due to heavy dependence on automated systems. This paper describes the development of the increased complexity and reliance on automation baseline model, named FLAP for FLightdeck Automation Problems. The model development process starts with a comprehensive literature review followed by the construction of a framework composed of high-level causal factors leading to an automation-related flight anomaly. The framework was then converted into a Bayesian Belief Network (BBN) using the Hugin Software v7.8. The effects of automation on the flight crew are incorporated into the model, including flight skill degradation, increased cognitive demand, and training requirements, along with their interactions. Besides flight crew deficiencies, automation system failures and anomalies of avionic systems are also incorporated. The resultant model helps simulate the emergence of automation-related issues in today's modern airliners from a top-down, generalized approach, which serves as a platform to evaluate NASA-developed technologies.
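
    The Bayesian Belief Network mechanics can be illustrated with a tiny two-cause fragment: marginalizing an anomaly probability over complacency and system failure by enumeration. All probabilities below are invented placeholders, not values elicited from the paper's subject matter experts.

```python
# Toy BBN fragment in the spirit of the FLAP model: probability of an
# automation-related anomaly given two boolean causes. Numbers are invented.

p_complacency = 0.2
p_sys_failure = 0.05
# P(anomaly | complacency, failure) conditional probability table (assumed)
p_anomaly = {(True, True): 0.9, (True, False): 0.3,
             (False, True): 0.5, (False, False): 0.01}

def prior(p):  # distribution over a boolean cause
    return {True: p, False: 1 - p}

# Marginalize: P(anomaly) = sum over causes of P(c1) P(c2) P(anomaly | c1, c2)
total = 0.0
for comp, pc in prior(p_complacency).items():
    for fail, pf in prior(p_sys_failure).items():
        total += pc * pf * p_anomaly[(comp, fail)]
print(round(total, 4))
```

    A tool like Hugin performs this kind of inference over the full network, which is what lets the model report risk reduction when a node representing a new technology is set.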

  12. Systems Engineering Interfaces: A Model Based Approach

    NASA Technical Reports Server (NTRS)

    Fosse, Elyse; Delp, Christopher

    2013-01-01

    Currently, Ops Rev has developed and maintains a framework that includes interface-specific language, patterns, and Viewpoints, and implements the framework to design MOS 2.0 and its 5 Mission Services; this implementation de-couples interfaces from instances of interaction. In the future, a Mission MOSE will implement the approach and use the model-based artifacts for reviews, and the framework will extend further into the ground data layers and provide a unified methodology.

  13. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. With an implemented example, we illustrate how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
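
    The compilation pipeline — a base model pushed through a sequence of specializing transformations — can be sketched as function composition over a model structure. The model fields and transformation names below are invented placeholders, not the paper's actual compilers.

```python
# Sketch of model compilation: each transformation takes a model and returns
# a more specialized copy; the compiler applies them in sequence.
# Fields and transformations are hypothetical illustrations.

base_model = {
    "components": ["motor", "bearing", "controller"],
    "behavior": "continuous",
    "detail": "full",
}

def abstract_behavior(model):
    # continuous dynamics -> qualitative states
    return {**model, "behavior": "qualitative"}

def prune_for_troubleshooting(model):
    # keep only fault-relevant structure for the diagnostic task
    return {**model, "detail": "fault-paths", "task": "troubleshooting"}

def compile_model(model, transformations):
    for t in transformations:          # each step specializes the model further
        model = t(model)
    return model

diagnostic = compile_model(base_model, [abstract_behavior,
                                        prune_for_troubleshooting])
print(diagnostic["behavior"], diagnostic["task"])
```

    Regenerating a task model after the base model changes is then just re-running the pipeline, which is the maintenance benefit the abstract describes.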

  14. Modeling Europa's Ice-Ocean Interface

    NASA Astrophysics Data System (ADS)

    Elsenousy, A.; Vance, S.; Bills, B. G.

    2014-12-01

    This work focuses on modeling the ice-ocean interface on Jupiter's moon Europa, mainly from the standpoint of the heat and salt transfer relationship, with emphasis on the basal ice growth rate and its implications for Europa's tidal response. Modeling the heat and salt flux at Europa's ice/ocean interface is necessary to understand the dynamics of Europa's ocean and its interaction with the upper ice shell, as well as the history of active turbulence in this area. To achieve this goal, we adapted the McPhee et al. (2008) parameterizations of Earth's ice/ocean interface to Europa's ocean dynamics. We varied one parameter at a time to test its influence on both "h", the basal ice growth rate, and "R", the double-diffusion tendency strength. The double-diffusion tendency "R" was calculated as the ratio of the interface heat exchange coefficient αh to the interface salt exchange coefficient αs. Our preliminary results showed a strong double-diffusion tendency, R ~200, at Europa's ice-ocean interface for plausible changes in the heat flux due to the onset or elimination of hydrothermal activity, suggesting supercooling and a strong tendency for forming frazil ice.
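
    The quantity R is just the ratio of the two exchange coefficients; a minimal check with placeholder coefficient values (chosen here only to reproduce a ratio of the order the abstract reports, not taken from the study):

```python
# R = alpha_h / alpha_s: heat-to-salt exchange-coefficient ratio.
# Coefficient values are illustrative placeholders, not the study's numbers.

alpha_h = 0.011    # interface heat exchange coefficient (assumed)
alpha_s = 5.5e-5   # interface salt exchange coefficient (assumed)

R = alpha_h / alpha_s
print(round(R))    # a large R indicates strong double-diffusive tendency
```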

  15. Automated Extraction of Planetary Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Andre, S. L.; Andre, T. C.; Robinson, M. S.

    2003-12-01

    Digital elevation models (DEMs) are invaluable products for planetary terrain interpretation [e.g., 1,2,3]. Typically, stereo matching programs require a user-selected set of corresponding points in the left and right images (seed points) to initiate automated stereo matching routines, which then find matching points between the two images. User input of seed points for each stereo pair can be a tedious and time-consuming step. An automated stereo matching tool for planetary images is useful in reducing or eliminating the need for human interaction (and potential error) in choosing initial seed points. In our software, we implement an adaptive least squares (ALS) correlation algorithm [4] and a sheet-growing algorithm [5]. The ALS algorithm matches a patch in the left image to a patch in the right image; this algorithm iteratively minimizes the sum of the squares between the patches to determine optimal transformation parameters. Successful matches are then used to predict matches for the locations of surrounding unmatched points (sheet-growing algorithm). Matching is initiated using either automatically generated seed points or manually picked seed points. We are developing strategies to identify and reduce the number of errors produced by the stereo matching software; additional constraints may be applied after the matching process to check the validity of each match. We are currently testing the stereo matcher on image pairs using correlation patch sizes ranging from 9x9 pixels to 25x25 pixels. A rigorous error analysis will be performed to better assess the quality of the results. Initial results of DEMs derived from Mariner 10 images compare well with DEMs generated by another area-based stereo matcher [6]. Our ultimate goal is to produce a user-friendly, robust stereo matcher tool that can be used by the planetary science community across a wide variety of image datasets. [1] Herrick R. and Sharpton V. 2000, JGR 105, 20245-20262. [2] Oberst J. et al. 1997, Eos 78, 445-450. [3] Smith D. et al. 1999, Science 284, 1495-1503. [4] Gruen A. 1985, S. Afr. J. of Photogramm. Rem. Sens. Cart. 14(3), 175-187. [5] Otto G. and Chau T. 1989, Image Vision Comput. 7, 83-94. [6] Cook A. and Robinson M. 2000, JGR 105, 9429-9443.
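
The heart of the ALS step described above, matching a left-image patch against the right image by minimizing squared differences, can be sketched in a translation-only form. This is an illustrative simplification, not the authors' code: the full ALS method also solves for affine warp and radiometric parameters.

```python
import numpy as np

def match_patch(left, right, row, col, size=9, search=5):
    """Find the (row, col) offset in `right` that best matches a patch of
    `left` centered at (row, col), by exhaustive sum-of-squared-differences.

    Translation-only sketch of adaptive least squares correlation; the
    real algorithm iteratively refines affine and radiometric parameters.
    """
    h = size // 2
    template = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    best_ssd, best_off = np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = row + dr, col + dc
            patch = right[r - h:r + h + 1, c - h:c + h + 1].astype(float)
            ssd = np.sum((template - patch) ** 2)  # squared residual
            if ssd < best_ssd:
                best_ssd, best_off = ssd, (dr, dc)
    return best_off
```

A successful offset found here would then seed predictions for neighboring points in a sheet-growing pass.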

  16. Formally verifying human-automation interaction as part of a system model: limitations and tradeoffs

    PubMed Central

    Bass, Ellen J.

    2011-01-01

    Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human-automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human-automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two-phase process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation; and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE. PMID:21572930

  17. An interface tracking model for droplet electrocoalescence.

    SciTech Connect

    Erickson, Lindsay Crowl

    2013-09-01

    This report describes an Early Career Laboratory Directed Research and Development (LDRD) project to develop an interface tracking model for droplet electrocoalescence. Many fluid-based technologies rely on electrical fields to control the motion of droplets, e.g. microfluidic devices for high-speed droplet sorting, solution separation for chemical detectors, and purification of biodiesel fuel. Precise control over droplets is crucial to these applications. However, electric fields can induce complex and unpredictable fluid dynamics. Recent experiments (Ristenpart et al. 2009) have demonstrated that oppositely charged droplets bounce rather than coalesce in the presence of strong electric fields. A transient aqueous bridge forms between approaching drops prior to pinch-off. This observation applies to many types of fluids, but neither theory nor experiments have been able to offer a satisfactory explanation. Analytic hydrodynamic approximations for interfaces become invalid near coalescence, and therefore detailed numerical simulations are necessary. This is a computationally challenging problem that involves tracking a moving interface and solving complex multi-physics and multi-scale dynamics, which are beyond the capabilities of most state-of-the-art simulations. An interface-tracking model for electro-coalescence can provide a new perspective to a variety of applications in which interfacial physics are coupled with electrodynamics, including electro-osmosis, fabrication of microelectronics, fuel atomization, oil dehydration, nuclear waste reprocessing and solution separation for chemical detectors. We present a conformal decomposition finite element (CDFEM) interface-tracking method for the electrohydrodynamics of two-phase flow to demonstrate electro-coalescence. CDFEM is a sharp interface method that decomposes elements along fluid-fluid boundaries and uses a level set function to represent the interface.
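
A minimal sketch of the decomposition test at the heart of a CDFEM-style method: an element is split only where the level set function changes sign across its nodes. The function name and interface here are illustrative, not taken from the report.

```python
def element_is_cut(phi_values):
    """Return True if the fluid-fluid interface (the zero level set)
    passes through an element, i.e. the nodal level set values phi
    change sign. Only such elements would be decomposed in CDFEM.
    """
    return min(phi_values) < 0.0 < max(phi_values)
```

Elements where all nodal values share a sign lie wholly in one fluid and keep their standard formulation.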

  18. A Web Interface for Eco System Modeling

    NASA Astrophysics Data System (ADS)

    McHenry, K.; Kooper, R.; Serbin, S. P.; LeBauer, D. S.; Desai, A. R.; Dietze, M. C.

    2012-12-01

    We have developed the Predictive Ecosystem Analyzer (PEcAn) as an open-source scientific workflow system and ecoinformatics toolbox that manages the flow of information in and out of regional-scale terrestrial biosphere models, facilitates heterogeneous data assimilation, tracks data provenance, and enables more effective feedback between models and field research. The over-arching goal of PEcAn is to make otherwise complex analyses transparent, repeatable, and accessible to a diverse array of researchers, allowing both novice and expert users to focus on using the models to examine complex ecosystems rather than having to deal with complex computer system setup and configuration questions in order to run the models. Through the developed web interface we hide much of the data and model details and allow the user to simply select locations, ecosystem models, and desired data sources as inputs to the model. Novice users are guided by the web interface through setting up a model execution and plotting the results. At the same time, expert users are given enough freedom to modify specific parameters before the model gets executed. This will become more important as more models are added to the PEcAn workflow and as more data become available when NEON comes online. On the backend we support the execution of potentially computationally expensive models on different High Performance Computing (HPC) systems and/or clusters. The system can be configured with a single XML file that gives it the flexibility needed for configuring and running the different models on different systems, using a combination of information stored in a database as well as pointers to files on the hard disk. While the web interface usually creates this configuration file, expert users can still directly edit it to fine-tune the configuration. Once a workflow is finished, the web interface allows for the easy creation of plots over result data while also allowing the user to download the results for further processing. The current workflow in the web interface is a simple linear workflow, but will be expanded to allow for more complex workflows. We are working with Kepler and Cyberintegrator to allow for these more complex workflows as well as collecting provenance of the workflow being executed. This provenance regarding model executions is stored in a database along with the derived results. All of this information is then accessible using the BETY database web frontend.
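
Reading a single-file XML run configuration of the kind described can be sketched as follows. The element names (`site`, `model`, `host`) are hypothetical placeholders; the abstract does not give the actual PEcAn schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration fragment, for illustration only.
CONFIG = """
<pecan>
  <site><id>42</id></site>
  <model><name>ED2</name><binary>/usr/local/bin/ed2</binary></model>
  <host><name>cluster.example.edu</name></host>
</pecan>
"""

def read_config(xml_text):
    """Pull the run settings out of a PEcAn-style XML configuration.

    Uses ElementTree's simple path syntax; a missing element would
    yield None, which a real workflow would treat as a config error.
    """
    root = ET.fromstring(xml_text)
    return {
        "site": root.findtext("site/id"),
        "model": root.findtext("model/name"),
        "host": root.findtext("host/name"),
    }
```

A workflow engine would combine such parsed settings with database records and on-disk file pointers, as the abstract describes.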

  19. Interfacing a robotic station with a gas chromatograph for the full automation of the determination of organochlorine pesticides in vegetables

    SciTech Connect

    Torres, P.; Luque de Castro, M.D.

    1996-12-31

    A fully automated method for the determination of organochlorine pesticides in vegetables is proposed. The overall system acts as an "analytical black box" because a robotic station performs the preliminary operations, from weighing to capping, and places the leached analytes in the autosampler of an automated gas chromatograph with electron capture detection. The method has been applied to the determination of lindane, heptachlor, captan, chlordane and methoxychlor in tea, marjoram, cinnamon, pennyroyal, and mint with good results in most cases. A gas chromatograph has been interfaced to a robotic station for the determination of pesticides in vegetables. 15 refs., 4 figs., 2 tabs.

  20. Transitions in a probabilistic interface growth model

    NASA Astrophysics Data System (ADS)

    Alves, S. G.; Moreira, J. G.

    2011-04-01

    We study a generalization of the Wolf-Villain (WV) interface growth model based on a probabilistic growth rule. In the WV model, particles are randomly deposited onto a substrate and subsequently move to a position nearby where the binding is strongest. We introduce a growth probability which is proportional to a power ν of the number n_i of bindings of the site i: p_i ∝ n_i^ν.
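
A toy 1+1-dimensional implementation of such a probabilistic deposition rule is sketched below. The bond counting (one bond to the site underneath plus lateral neighbors at or above the landing height) is our simplification for illustration; the paper's exact counting may differ.

```python
import numpy as np

def deposit(heights, i, nu, rng):
    """One deposition event under a probabilistic Wolf-Villain-type rule.

    A particle arriving above column i may settle at i-1, i, or i+1
    (periodic boundaries); the choice among candidates j is weighted by
    n_j**nu, where n_j counts the bonds the particle would form.
    Returns the column where the particle finally sticks.
    """
    L = len(heights)
    candidates = [(i - 1) % L, i, (i + 1) % L]
    weights = []
    for j in candidates:
        h_new = heights[j] + 1
        n = 1  # bond to the site underneath
        for k in ((j - 1) % L, (j + 1) % L):
            if heights[k] >= h_new:
                n += 1  # lateral bond to a neighbor at or above landing height
        weights.append(float(n) ** nu)
    w = np.array(weights)
    j = int(rng.choice(candidates, p=w / w.sum()))
    heights[j] += 1
    return j
```

With nu = 0 the rule reduces to random deposition over the three candidates; large nu recovers strongly binding-seeking (WV-like) behavior.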

  1. Automating a human factors evaluation of graphical user interfaces for NASA applications: An update on CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.

    1993-01-01

    Capturing human factors knowledge about the design of graphical user interfaces (GUIs) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.
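
A conversion from quantitative RGB primaries to qualitative color names can be illustrated as a nearest-neighbor lookup against a named palette. The palette and the Euclidean distance metric here are assumptions; the abstract does not describe CHIMES' actual method.

```python
# Hypothetical palette for illustration; CHIMES' real color categories
# are not given in the abstract.
NAMED_COLORS = {
    "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "cyan": (0, 255, 255), "magenta": (255, 0, 255),
    "white": (255, 255, 255), "black": (0, 0, 0), "gray": (128, 128, 128),
}

def qualitative_color(rgb):
    """Map an RGB triple to the nearest qualitative color name,
    using squared Euclidean distance in RGB space."""
    return min(
        NAMED_COLORS,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(rgb, NAMED_COLORS[name])),
    )
```

Guideline checks on color usage could then operate on the qualitative names rather than raw primaries.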

  2. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

    Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment unless designed for it. An example includes implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and the model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve forcing inputs, parameters, and state variables needed by the model and to return state variables and outputs to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface can be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80-90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming interfaces, the general model interface, and five case studies, including a regression model, Noah-MP, FASST, SAC-HTET/SNOW-17, and FLake. These models vary in complexity and software structure. We will also describe how these complexities were overcome using this approach, along with results of model benchmarks within LIS.
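
The general model interface amounts to a fixed contract: pull forcings and parameters in, advance the model one step, return states and outputs. A Python sketch of that contract follows; the real interface is a FORTRAN 90 subroutine, and the bucket model used here is a made-up example, not one of the paper's case studies.

```python
class ModelWrapper:
    """Generic wrapper contract between a framework and a native model.

    The framework calls the same hook for every wrapped model: hand the
    forcings and parameters over, advance one timestep, and collect the
    updated state and the outputs.
    """

    def __init__(self, model_step, initial_state):
        self.model_step = model_step      # the native model's timestep routine
        self.state = dict(initial_state)  # state variables held between calls

    def run_step(self, forcings, params):
        self.state, outputs = self.model_step(self.state, forcings, params)
        return outputs


def bucket_step(state, forcings, params):
    """Toy linear-reservoir 'land model' used only to exercise the wrapper."""
    runoff = params["k"] * state["storage"]
    storage = state["storage"] + forcings["precip"] - runoff
    return {"storage": storage}, {"runoff": runoff}
```

Because every model exposes the same `run_step` signature, the data-transfer code between the framework and any wrapped model can be generated from templates, which is the point of the toolkit described above.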

  3. Automation Marketplace 2010: New Models, Core Systems

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2010-01-01

    In a year when a difficult economy presented fewer opportunities for immediate gains, the major industry players have defined their business strategies with fundamentally different concepts of library automation. This is no longer an industry where companies compete on the basis of the best or the most features in similar products but one where…

  5. XRLSim model specifications and user interfaces

    SciTech Connect

    Young, K.D.; Breitfeller, E.; Woodruff, J.P.

    1989-12-01

    The two chapters in this manual document the engineering development leading to modification of XRLSim -- an Ada-based computer program developed to provide a realistic simulation of an x-ray laser weapon platform. Complete documentation of the FY88 effort to develop XRLSim was published in April 1989, as UCID-21736: XRLSim Model Specifications and User Interfaces, by L. C. Ng, D. T. Gavel, R. M. Shectman, P. L. Sholl, and J. P. Woodruff. The FY89 effort has been primarily to enhance the x-ray laser weapon-platform model fidelity. Chapter 1 of this manual details enhancements made to XRLSim model specifications during FY89. Chapter 2 provides the user with changes in user interfaces brought about by these enhancements. This chapter is offered as a series of deletions, replacements, and insertions to the original document to enable XRLSim users to implement enhancements developed during FY89.

  6. Interface dynamics in planar neural field models

    PubMed Central

    2012-01-01

    Neural field models describe the coarse-grained activity of populations of interacting neurons. Because of the laminar structure of real cortical tissue they are often studied in two spatial dimensions, where they are well known to generate rich patterns of spatiotemporal activity. Such patterns have been interpreted in a variety of contexts ranging from the understanding of visual hallucinations to the generation of electroencephalographic signals. Typical patterns include localized solutions in the form of traveling spots, as well as intricate labyrinthine structures. These patterns are naturally defined by the interface between low and high states of neural activity. Here we derive the equations of motion for such interfaces and show, for a Heaviside firing rate, that the normal velocity of an interface is given in terms of a non-local Biot-Savart type interaction over the boundaries of the high activity regions. This exact, but dimensionally reduced, system of equations is solved numerically and shown to be in excellent agreement with the full nonlinear integral equation defining the neural field. We develop a linear stability analysis for the interface dynamics that allows us to understand the mechanisms of pattern formation that arise from instabilities of spots, rings, stripes and fronts. We further show how to analyze neural field models with linear adaptation currents, and determine the conditions for the dynamic instability of spots that can give rise to breathers and traveling waves. PMID:22655970

  7. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    PubMed

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform. PMID:18391246
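
A gaze-recentering camera controller of the kind described can be sketched as proportional control with a dead zone, so the camera does not chase ordinary saccades. This is an illustrative controller, not Aesop's actual control law.

```python
def camera_velocity(gaze_xy, frame_size, deadzone=0.1, gain=0.5):
    """Pan/tilt velocity command that drives the gaze point back to the
    center of the video frame.

    Errors are normalized to [-1, 1]; inside the central dead zone the
    camera holds still, avoiding jitter from small eye movements.
    """
    w, h = frame_size
    ex = (gaze_xy[0] - w / 2) / (w / 2)  # normalized horizontal error
    ey = (gaze_xy[1] - h / 2) / (h / 2)  # normalized vertical error
    if abs(ex) < deadzone and abs(ey) < deadzone:
        return (0.0, 0.0)
    return (gain * ex, gain * ey)
```

The dead zone radius and gain would be tuned so the laparoscope moves smoothly only when the surgeon's gaze dwells off-center.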

  8. Modeling strategic behavior in human-automation interaction: why an "aid" can (and should) go unused.

    PubMed

    Kirlik, A

    1993-06-01

    Task-offload aids (e.g., an autopilot, an "intelligent" assistant) can be selectively engaged by the human operator to dynamically delegate tasks to automation. Introducing such aids eliminates some task demands but creates new ones associated with programming, engaging, and disengaging the aiding device via an interface. The burdens associated with managing automation can sometimes outweigh the potential benefits of automation to improved system performance. Aid design parameters and features of the overall multitask context combine to determine whether or not a task-offload aid will effectively support the operator. A modeling and sensitivity analysis approach is presented that identifies effective strategies for human-automation interaction as a function of three task-context parameters and three aid design parameters. The analysis and modeling approaches provide resources for predicting how a well-adapted operator will use a given task-offload aid, and for specifying aid design features that ensure that automation will provide effective operator support in a multitask environment. PMID:8349287

  9. Modeling strategic behavior in human-automation interaction - Why an 'aid' can (and should) go unused

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex

    1993-01-01

    Task-offload aids (e.g., an autopilot, an 'intelligent' assistant) can be selectively engaged by the human operator to dynamically delegate tasks to automation. Introducing such aids eliminates some task demands but creates new ones associated with programming, engaging, and disengaging the aiding device via an interface. The burdens associated with managing automation can sometimes outweigh the potential benefits of automation to improved system performance. Aid design parameters and features of the overall multitask context combine to determine whether or not a task-offload aid will effectively support the operator. A modeling and sensitivity analysis approach is presented that identifies effective strategies for human-automation interaction as a function of three task-context parameters and three aid design parameters. The analysis and modeling approaches provide resources for predicting how a well-adapted operator will use a given task-offload aid, and for specifying aid design features that ensure that automation will provide effective operator support in a multitask environment.
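
The engage-or-ignore tradeoff at the center of this work can be caricatured as an expected-value comparison: the aid pays off only when the time it is expected to save exceeds the overhead of programming, engaging, and monitoring it. The parameters below are illustrative stand-ins, not the paper's actual model terms.

```python
def should_engage(time_saved, setup_cost, monitor_cost, reliability):
    """Toy engage/ignore decision for a task-offload aid.

    reliability: probability (0..1) that the aid actually delivers its
    benefit; setup_cost and monitor_cost: management overhead in the
    same time units as time_saved. All quantities are illustrative.
    """
    expected_benefit = reliability * time_saved
    overhead = setup_cost + monitor_cost
    return expected_benefit > overhead
```

Under this caricature a well-adapted operator rationally leaves an aid unused whenever overhead dominates, which is the paper's central observation.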

  10. An Automated Translator for Model Checking Time Constrained Workflow Systems

    NASA Astrophysics Data System (ADS)

    Mashiyat, Ahmed Shah; Rabbi, Fazle; Wang, Hao; Maccaull, Wendy

    Workflows have proven to be a useful conceptualization for the automation of business processes. While formal verification methods (e.g., model checking) can help ensure the reliability of workflow systems, the industrial uptake of such methods has been slow largely due to the effort involved in modeling and the memory required to verify complex systems. Incorporation of time constraints in such systems exacerbates the latter problem. We present an automated translator, YAWL2DVE-t, which takes as input a time constrained workflow model built with the graphical modeling tool YAWL, and outputs the model in DVE, the system specification language for the distributed LTL model checker DiVinE. The automated translator, together with the graphical editor and the distributed model checker, provides a method for rapid design, verification and refactoring of time constrained workflow systems. We present a realistic case study developed through collaboration with the local health authority.

  11. Development and Design of a User Interface for a Computer Automated Heating, Ventilation, and Air Conditioning System

    SciTech Connect

    Anderson, B.; /Fermilab

    1999-10-08

    A user interface is created to monitor and operate the heating, ventilation, and air conditioning system. The interface is networked to the system's programmable logic controller. The controller maintains automated control of the system. Through the interface, the user is able to see the status of the system and override or adjust the automatic control features. The interface is programmed to show digital readouts of system equipment as well as visual cues of system operational statuses. It also provides information for system design and component interaction. The interface is made easier to read by simple designs, color coordination, and graphics. Fermi National Accelerator Laboratory (Fermilab) conducts high energy particle physics research. Part of this research involves collision experiments with protons and anti-protons. These interactions are contained within one of two massive detectors along Fermilab's largest particle accelerator, the Tevatron. The D-Zero Assembly Building houses one of these detectors. At this time detector systems are being upgraded for a second experiment run, titled Run II. Unlike the previous run, systems at D-Zero must be computer automated so operators do not have to continually monitor and adjust these systems during the run. Human intervention should only be necessary for system start up and shut down, and equipment failure. Part of this upgrade includes the heating, ventilation, and air conditioning system (HVAC system). The HVAC system is responsible for controlling two subsystems, the air temperatures of the D-Zero Assembly Building and associated collision hall, as well as six separate water systems used in the heating and cooling of the air and detector components. The HVAC system is automated by a programmable logic controller. In order to provide system monitoring and operator control a user interface is required. This paper will address methods and strategies used to design and implement an effective user interface. Background material pertinent to the HVAC system will cover the separate water and air subsystems and their purposes. In addition, programming and system automation will also be covered.

  12. An automated hydride generation interface to ICPMS for measuring total arsenic in environmental samples.

    PubMed

    Sengupta, Mrinal K; Dasgupta, Purnendu K

    2009-12-01

    An automated hydride generation (AHG) interface to inductively coupled plasma mass spectrometry (ICPMS) was developed for measuring arsenic in environmental samples. This technique provides statistically indistinguishable response slopes (within about 3%) for hydride generation-ICPMS (HG-ICPMS) analysis of all major As species, inorganic As(III), dimethylarsinic acid (DMA), monomethylarsonic acid (MMA), and inorganic As(V); this has not previously been achieved. Previously, sample pretreatment to convert all forms of As into As(V) has been a prerequisite for measuring total arsenic in complex matrices. Under our operating conditions, arsenobetaine (AsB), until now regarded to be inert, also generates a hydride (albeit the response is only approximately 7% of the others). The limit of detection (LOD), based on three times the standard deviation of the blank, with this technique for AsB, DMA, As(III), MMA, and As(V) is 90, 66, 63, 63, and 63 pg As, respectively. This AHG-ICPMS technique was compared with flow injection-UV photolysis-HG-ICPMS (FI-UV-HG-ICPMS) and liquid chromatography-UV-HG-ICPMS analysis of arsenic content in National Institute of Standards & Technology (NIST) standard rice flour (standard reference material: SRM 1568a) and rice samples collected from West Bengal, India. Both oxidative acid digestion and methanol:water (1:1) extraction were used. The analytical results for total As in the SRM 1568a digest were 99.2 +/- 0.6 and 100.2 +/- 0.8% of the certified value (290 +/- 3 microg As/kg) by the AHG-ICPMS and the FI-UV-HG-ICPMS techniques, respectively. For rice extracts and the digests, the two techniques provided results that were correlated with linear r2 values of 0.9988 and 0.9987 with intercepts statistically indistinguishable from zero. Chromatographic analysis indicated that the As in these rice samples was 75-90% inorganic. PMID:19891455
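
The detection limits quoted above follow the standard 3-sigma convention: three times the standard deviation of replicate blank measurements, divided by the calibration slope. A minimal sketch:

```python
import statistics

def limit_of_detection(blank_signals, slope):
    """LOD as 3 * (sample standard deviation of the blank) / calibration
    slope, the 3-sigma convention cited in the abstract. blank_signals
    are replicate blank readings; slope converts signal to analyte mass.
    """
    return 3 * statistics.stdev(blank_signals) / slope
```

The blank readings and slope below are made-up numbers purely to demonstrate the arithmetic.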

  13. A Hierarchical Test Model and Automated Test Framework for RTC

    NASA Astrophysics Data System (ADS)

    Lim, Jae-Hee; Song, Suk-Hoon; Kuc, Tae-Yong; Park, Hong-Seong; Kim, Hong-Seak

    This paper presents a hierarchical test model and automated test framework for robot software components of RTC (Robot Technology Component) combined with hardware modules. The hierarchical test model consists of three levels of testing based on the V-model: unit test, integration test, and system test. The automated test framework incorporates four components: test data generation, test manager, test execution, and test monitoring. The proposed testing model and its automation framework are shown to be efficient for testing of developed robotic software components in terms of time and cost. The feasibility and effectiveness of the proposed architecture for robot component testing are illustrated through an application example along with an embedded robotic testbed equipped with range sensor hardware and its software component modeled as an RTC.
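
The three-level campaign can be sketched as a runner that executes levels in V-model order and stops at the first level with failures. This is illustrative only; the actual framework also generates test data and monitors execution.

```python
def run_hierarchy(tests):
    """Run tests level by level (unit, then integration, then system).

    `tests` maps a level name to a list of (name, callable) pairs, where
    each callable returns True on pass. Returns (failed_level, names) for
    the first level with failures, or (None, []) if everything passes.
    """
    for level in ("unit", "integration", "system"):
        failures = [name for name, fn in tests.get(level, []) if not fn()]
        if failures:
            return level, failures
    return None, []
```

Stopping at the first failing level mirrors the V-model discipline: integration results are meaningless until the units pass, and system tests until integration passes.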

  14. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  15. Design and implementation of a user-friendly interface for DIII-D neutral beam automated operation

    SciTech Connect

    Phillips, J.; Colleraine, A.P.; Hong, R.; Kim, J.; Lee, R.L.; Wight, J.J.

    1989-12-01

    The operational interface to the DIII-D neutral beam system, in use for the past 10 years, consisted of several interactive devices that the operator used to sequence neutral beam conditioning and plasma heating shots. Each of four independent MODCOMP Classic control computers (for four DIII-D beamlines) included a touch screen, rotary knobs, an interactive dual port terminal, and a keyboard to selectively address each of five display screens. Most of the hardware had become obsolete and repair was becoming increasingly expensive. It was clear that the hardware could be replaced with current equipment, while improving the ergonomics of control. Combined with an ongoing effort to increase the degree of automated operation and its reliability, a single microcomputer-based interface for each of the four neutral beam MODCOMP Classic control computers was developed, effectively replacing some twenty pieces of hardware. Macintosh II microcomputers were selected, with 1 megabyte of RAM and "off-the-shelf" input/output (I/O) consisting of a mouse, serial ports, and two monochrome high-resolution video monitors. The software is written in PASCAL and adopts standard Macintosh "window" techniques. From the Macintosh interface to the MODCOMP Classic, the operator can control the power supply setpoints, adjust ion source timing and synchronization, call up waveform displays on the Grinnell color display system, view the sequencing of procedures to ready a neutral beam shot, and add operator comments to an automated shot logging system. 3 refs., 2 figs.

  16. Automated particulate sampler field test model operations guide

    SciTech Connect

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and most predominantly the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve as both a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  17. Automated data acquisition technology development: Automated modeling and control development

    NASA Technical Reports Server (NTRS)

    Romine, Peter L.

    1995-01-01

    This report documents the completion of, and improvements made to, the software developed for automated data acquisition and automated modeling and control development on the Texas Micro rack-mounted PCs. This research was initiated because a need was identified by the Metal Processing Branch of NASA Marshall Space Flight Center for a mobile data acquisition and data analysis system, customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument strictly for data acquisition and data analysis. In addition to the data acquisition functions described in this report, WMS also supports many functions associated with process control. The hardware and software requirements for an automated acquisition system for welding process parameters, welding equipment checkout, and welding process modeling were determined in 1992. From these recommendations, NASA purchased the necessary hardware and software. The new welding acquisition system is designed to collect welding parameter data and perform analysis to determine the voltage versus current arc-length relationship for VPPA welding. Once the results of this analysis are obtained, they can then be used to develop a RAIL function to control welding startup and shutdown without torch crashing.
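
The voltage-versus-current relationship mentioned above is, at a fixed arc length, commonly estimated by fitting a least-squares line to the measured (current, voltage) pairs. A self-contained sketch of that fit follows; the WMS's actual analysis routine is not shown in the abstract, and the data below are made up.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x.

    Returns (a, b): intercept and slope. For welding data, xs would be
    arc currents and ys the measured arc voltages at one arc length.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - b * mx, b
```

Repeating the fit at several arc lengths would characterize the full voltage-current-arc-length relationship used for process control.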

  18. Radiation budget measurement/model interface

    NASA Technical Reports Server (NTRS)

    Vonderhaar, T. H.; Ciesielski, P.; Randel, D.; Stevens, D.

    1983-01-01

    This final report includes research results from the period February, 1981 through November, 1982. Two new results combine to form the final portion of this work. They are the work by Hanna (1982) and Stevens to successfully test and demonstrate a low-order spectral climate model and the work by Ciesielski et al. (1983) to combine and test the new radiation budget results from NIMBUS-7 with earlier satellite measurements. Together, the two related activities set the stage for future research on radiation budget measurement/model interfacing. Such combination of results will lead to new applications of satellite data to climate problems. The objectives of this research under the present contract are therefore satisfied. Additional research reported herein includes the compilation and documentation of the radiation budget data set a Colorado State University and the definition of climate-related experiments suggested after lengthy analysis of the satellite radiation budget experiments.

  19. Modeling interfaces between solids: Application to Li battery materials

    NASA Astrophysics Data System (ADS)

    Lepley, N. D.; Holzwarth, N. A. W.

    2015-12-01

    We present a general scheme for modeling the energy of interfaces between crystalline solids, quantitatively including the effects of varying configurations and lattice strain. This scheme is successfully applied to modeling likely interface geometries of several solid-state battery materials, including Li metal, Li3PO4, Li3PS4, Li2O, and Li2S. Our formalism, together with a partial density of states analysis, allows us to characterize the thickness, stability, and transport properties of these interfaces. We find that all of the interfaces in this study are stable with the exception of Li3PS4/Li. For this chemically unstable interface, the partial density of states helps to identify mechanisms associated with the interface reactions. Our energetic measure of interfaces and our analysis of the band alignment between interface materials indicate multiple factors that may be predictors of interface stability, an important property of solid electrolyte systems.
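    The interface energy underlying such an analysis is commonly defined from a supercell calculation; the convention below is a sketch of the general idea, not necessarily the paper's exact formalism:

```latex
% Interface formation energy per unit area, from a supercell E_{A/B}
% containing two equivalent A/B interfaces of area A, with n_A, n_B
% formula units of each phase and bulk energies \epsilon_A, \epsilon_B:
\gamma_{AB} \;=\; \frac{E_{A/B} \;-\; n_A\,\epsilon_A \;-\; n_B\,\epsilon_B}{2A}
```

    A large positive value of this quantity, relative to competing reaction products, signals a thermodynamically unfavorable contact of the kind the abstract reports for Li3PS4/Li.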

  20. Variational Implicit Solvation with Solute Molecular Mechanics: From Diffuse-Interface to Sharp-Interface Models

    PubMed Central

    Li, Bo; Zhao, Yanxiang

    2013-01-01

    Central in a variational implicit-solvent description of biomolecular solvation is an effective free-energy functional of the solute atomic positions and the solute-solvent interface (i.e., the dielectric boundary). The free-energy functional couples together the solute molecular mechanical interaction energy, the solute-solvent interfacial energy, the solute-solvent van der Waals interaction energy, and the electrostatic energy. In recent years, the sharp-interface version of the variational implicit-solvent model has been developed and used for numerical computations of molecular solvation. In this work, we propose a diffuse-interface version of the variational implicit-solvent model with solute molecular mechanics. We also analyze both the sharp-interface and diffuse-interface models. We prove the existence of free-energy minimizers and obtain their bounds. We also prove the convergence of the diffuse-interface model to the sharp-interface model in the sense of Γ-convergence. We further discuss properties of sharp-interface free-energy minimizers, the boundary conditions and the coupling of the Poisson–Boltzmann equation in the diffuse-interface model, and the convergence of forces from diffuse-interface to sharp-interface descriptions. Our analysis relies on previous work on the problem of minimizing surface areas and on our observations on the coupling of solute molecular mechanical interactions with the continuum solvent. Our studies rigorously justify the self-consistency of the proposed diffuse-interface variational models of implicit solvation. PMID:24058213
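    The four coupled contributions named in the abstract can be written schematically as follows (a sketch of the usual sharp-interface VISM form, not a reproduction of the paper's exact functional):

```latex
% Schematic VISM free energy of solute positions x and interface \Gamma:
% molecular mechanics + pressure-volume + curvature-corrected surface
% energy + solute-solvent van der Waals + electrostatics.
G[\mathbf{x},\Gamma] \;=\; E_{\mathrm{mm}}(\mathbf{x})
  \;+\; P\,\mathrm{Vol}(\Omega_{\mathrm{m}})
  \;+\; \gamma_0 \int_{\Gamma} \bigl(1 - 2\tau H\bigr)\,\mathrm{d}S
  \;+\; \rho_{\mathrm{w}} \int_{\Omega_{\mathrm{w}}} U^{\mathrm{vdW}}(\mathbf{r})\,\mathrm{d}V
  \;+\; G_{\mathrm{elec}}[\mathbf{x},\Gamma]
```

    Here Ω_m and Ω_w denote the solute and solvent regions separated by Γ, H is mean curvature, τ a Tolman-type curvature correction, and G_elec is the electrostatic term coupled to the Poisson–Boltzmann equation; the diffuse-interface version replaces Γ by a phase field.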

  1. Computational design of patterned interfaces using reduced order models

    PubMed Central

    Vattré, A. J.; Abdolrahim, N.; Kolluri, K.; Demkowicz, M. J.

    2014-01-01

    Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance. PMID:25169868

  3. A new interface element for connecting independently modeled substructures

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.

    1993-01-01

    A new interface element based on the hybrid variational formulation is presented and demonstrated. The element provides a means of connecting independently modeled substructures whose nodes along the common boundary need not be coincident. The interface element extends previous work to include connecting an arbitrary number of substructures, the use of closed and generally curved interfaces, and the use of multiple, possibly nested, interfaces. Several applications of the element are presented and aspects of the implementation are discussed.

  4. Hexapods with fieldbus interfaces for automated manufacturing of opto-mechanical components

    NASA Astrophysics Data System (ADS)

    Schreiber, Steffen; Muellerleile, Christian; Frietsch, Markus; Gloess, Rainer

    2013-09-01

    The adjustment of opto-mechanical components in manufacturing processes often requires precise motion in all six degrees of freedom with nanometer range resolution and absence of hysteresis. Parallel kinematic systems are predestined for such tasks due to their compact design, low inertia and high stiffness resulting in rapid settling behavior. To achieve adequate system performance, specialized motion controllers are required to handle the complex kinematic models for the different types of Hexapods and the associated extensive calculations of inverse kinematics. These controllers often rely on proprietary command languages, a fact that demands a high level of familiarization. This paper describes how the integration of fieldbus interfaces into Hexapod controllers simplifies the communication while providing higher flexibility. By using standardized communication protocols with cycle times down to 12.5 µs it is straightforward to control multiple Hexapods and other devices by superordinate PLCs of different manufacturers. The paper also illustrates how to simplify adjustment and alignment processes by combining scanning algorithms with user defined coordinate systems.
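    The inverse-kinematics calculation such controllers perform each cycle reduces, in its simplest form, to computing six strut lengths from the commanded platform pose. A minimal sketch (the anchor geometry and pose values are illustrative assumptions, not any vendor's kinematic model):

```python
import numpy as np

def strut_lengths(t, rpy, platform_pts, base_pts):
    """Inverse kinematics of an idealized hexapod (Stewart platform):
    strut i spans from base anchor b_i to platform anchor a_i moved
    to the commanded pose (translation t, roll/pitch/yaw rpy)."""
    roll, pitch, yaw = rpy
    cx, sx = np.cos(roll), np.sin(roll)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cz, sz = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx  # composed rotation
    # Length of each strut = |R a_i + t - b_i|
    return np.linalg.norm(platform_pts @ R.T + t - base_pts, axis=1)

# Illustrative geometry: anchors on circles of radius 0.2 m (platform)
# and 0.3 m (base), platform lifted 0.25 m with no rotation.
angles = np.arange(6) * np.pi / 3
platform_pts = np.c_[0.2 * np.cos(angles), 0.2 * np.sin(angles), np.zeros(6)]
base_pts = np.c_[0.3 * np.cos(angles), 0.3 * np.sin(angles), np.zeros(6)]
lengths = strut_lengths(np.array([0.0, 0.0, 0.25]), (0.0, 0.0, 0.0),
                        platform_pts, base_pts)
```

    A fieldbus interface would simply transport the pose setpoints; the controller still evaluates this mapping (plus calibration corrections) every cycle.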

  5. Automated adaptive inference of phenomenological dynamical models

    NASA Astrophysics Data System (ADS)

    Daniels, Bryan C.; Nemenman, Ilya

    2015-08-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.
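    The principle of letting model complexity adapt to the available data can be illustrated with a far simpler device than the authors' method: fitting polynomial models of increasing degree and scoring them with an information criterion. Everything below is an illustrative stand-in, not the paper's algorithm:

```python
import numpy as np

def bic(y, y_pred, k):
    """Bayesian information criterion for a Gaussian-noise fit with
    k free parameters: penalizes complexity by k*log(n)."""
    n = len(y)
    rss = np.sum((y - y_pred) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, 0.05, x.size)  # true degree 2

scores = []
for degree in range(1, 7):
    coeffs = np.polyfit(x, y, degree)
    scores.append(bic(y, np.polyval(coeffs, x), degree + 1))
best = int(np.argmin(scores)) + 1  # complexity selected by the data
```

    With scarce or noisy data the penalty term keeps the selected model coarse; with richer data it admits more structure, mirroring the adaptive behavior described above.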

  8. Automated Environment Generation for Software Model Checking

    NASA Technical Reports Server (NTRS)

    Tkachuk, Oksana; Dwyer, Matthew B.; Pasareanu, Corina S.

    2003-01-01

    A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open since their behavior is dependent on patterns of invocation of system components and values defined outside the system but referenced within the system. Whether reasoning about the behavior of whole programs or about program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments of Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera Environment Generator (BEG) which we describe along with our experience using BEG to reason about properties of several non-trivial concurrent Java programs.

  9. Models for Automated Tube Performance Calculations

    SciTech Connect

    C. Brunkhorst

    2002-12-12

    High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance.
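    A model of the kind described, returning tube current for a given set of element voltages, can be sketched with the classical three-halves-power law for an idealized triode. The amplification factor and perveance below are illustrative, not data for any particular tube (the report's models are fitted to digitized operating curves):

```python
def plate_current(v_grid, v_plate, mu=30.0, perveance=2.0e-6):
    """Idealized triode plate current (A): I_p = K * (V_g + V_p/mu)^(3/2)
    when the effective drive is positive, zero when the tube is cut off.
    mu (amplification factor) and perveance K are assumed values."""
    drive = v_grid + v_plate / mu
    if drive <= 0.0:
        return 0.0  # beyond cutoff: no plate current
    return perveance * drive ** 1.5

i_on = plate_current(-5.0, 300.0)    # conducting operating point
i_off = plate_current(-20.0, 300.0)  # grid biased past cutoff
```

    Such a closed-form subroutine can then be iterated against a loading impedance to solve for amplifier operating conditions, as the abstract suggests.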

  10. State modeling and pass automation in spacecraft control

    NASA Technical Reports Server (NTRS)

    Klein, J.; Kulp, D.; Rashkin, R.

    1996-01-01

    The integrated monitoring and control commercial off-the-shelf system (IMACCS), which demonstrates the feasibility of automating spacecraft monitoring and control activities through the use of state modeling, is described together with its use. The use of the system for the control and ground support of the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) spacecraft is considered. A key component of IMACCS is the Altair mission control system which implements finite state modeling as an element of its expert system capability. Using the finite state modeling and state transition capabilities of the Altair mission control system, IMACCS features automated monitoring, routine pass support, anomaly resolution and emergency 'lights on again' response. Automatic orbit determination and the production of typical flight dynamics products exist. These functionalities are described.

  11. Model Search: Formalizing and Automating Constraint Solving in MDE Platforms

    NASA Astrophysics Data System (ADS)

    Kleiner, Mathias; Del Fabro, Marcos Didonet; Albert, Patrick

    Model Driven Engineering (MDE) and constraint programming (CP) have been widely used and combined in different applications. However, existing results are either ad hoc, not fully integrated or manually executed. In this article, we present a formalization and an approach for automating constraint-based solving in an MDE platform. Our approach generalizes existing work by combining known MDE concepts with CP techniques into a single operation called model search. We present the theoretical basis for model search, as well as an automated process that details the involved operations. We validate our approach by comparing two implemented solutions (one based on Alloy/SAT, the other on OPL/CP), and by executing them over an academic use case.

  12. Automated texture registration on 3D models

    NASA Astrophysics Data System (ADS)

    Pelagotti, A.; Uccheddu, F.; Picchioni, F.

    2011-11-01

    3D models often lack a photorealistic appearance, due to the low quality of the acquired texture or to its complete absence. Moreover, especially in the case of reality-based models, it is often of specific interest to texture the model with images other than photos, such as multispectral/multimodal views (infrared, X-ray, UV fluorescence, etc.) or images taken at different moments in time. In this work, a fully automatic approach to texture mapping is proposed. The method relies on the automatic extraction from the model geometry of appropriate depth maps, in the form of images whose pixels maintain an exact correspondence with vertices of the 3D model. A multiresolution greedy method is then proposed to generate the candidate depth maps which could be related to the given texture. In order to select the best match, a suitable similarity measure is computed, based on Maximization of Mutual Information (MMI). 3D texturing is then applied to the portion of the model which is visualized in the texture.
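    The mutual-information similarity measure used to rank candidate depth maps against a texture can be estimated from a joint grayscale histogram. A minimal sketch (histogram plug-in estimator with random stand-in images; bin count and image sizes are illustrative):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (nats) between two equally sized images,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)     # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of img_b
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
view = rng.random((64, 64))                       # stand-in rendered depth map
mi_same = mutual_information(view, view)          # perfectly registered case
mi_unrelated = mutual_information(view, rng.random((64, 64)))
```

    A correctly registered depth-map/texture pair yields a sharp joint histogram and high MI; misaligned or unrelated pairs score near the estimator's noise floor, which is why MMI works across modalities (IR, X-ray, UV) where direct intensity comparison fails.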

  13. Qgui: A high-throughput interface for automated setup and analysis of free energy calculations and empirical valence bond simulations in biological systems.

    PubMed

    Isaksen, Geir Villy; Andberg, Tor Arne Heim; Åqvist, Johan; Brandsdal, Bjørn Olav

    2015-07-01

    Structural information and activity data have increased rapidly for many protein targets during the last decades. In this paper, we present a high-throughput interface (Qgui) for automated free energy and empirical valence bond (EVB) calculations that use molecular dynamics (MD) simulations for conformational sampling. Applications to ligand binding using both the linear interaction energy (LIE) method and the free energy perturbation (FEP) technique are given using the estrogen receptor (ERα) as a model system. Examples of free energy profiles obtained using the EVB method for the rate-limiting step of the enzymatic reaction catalyzed by trypsin are also shown. In addition, we present calculation of high-precision Arrhenius plots to obtain the thermodynamic activation enthalpy and entropy with Qgui from running a large number of EVB simulations. PMID:26080356

  14. A power line data communication interface using spread spectrum technology in home automation

    SciTech Connect

    Shwehdi, M.H.; Khan, A.Z.

    1996-07-01

    Building automation technology is rapidly developing towards more reliable communication systems and devices that control electronic equipment. Controlling this equipment leads to efficient energy management and savings on the monthly electricity bill. Power line communication (PLC) has been one of the dreams of the electronics industry for decades, especially for building automation. It is the purpose of this paper to demonstrate communication methods among electronic control devices through an AC power line carrier within buildings for more efficient energy control. The paper outlines methods of communication over a power line, namely X-10 and CEBus. It also introduces spread spectrum technology to increase speed to 100-150 times faster than the X-10 system. The power line carrier has tremendous applications in the field of building automation. The paper presents an attempt to realize a so-called smart house concept, in which all home electronic devices, from a coffee maker to a water heater, microwave to chaos robots, will be operated by an intelligent network whenever one wishes to do so. The designed system may be applied very profitably to help in energy management for both customer and utility.

  15. Automated refinement and inference of analytical models for metabolic networks.

    PubMed

    Schmidt, Michael D; Vallabhajosyula, Ravishankar R; Jenkins, Jerry W; Hood, Jonathan E; Soni, Abhishek S; Wikswo, John P; Lipson, Hod

    2011-10-01

    The reverse engineering of metabolic networks from experimental data is traditionally a labor-intensive task requiring a priori systems knowledge. Using a proven model as a test system, we demonstrate an automated method to simplify this process by modifying an existing or related model--suggesting nonlinear terms and structural modifications--or even constructing a new model that agrees with the system's time series observations. In certain cases, this method can identify the full dynamical model from scratch without prior knowledge or structural assumptions. The algorithm selects between multiple candidate models by designing experiments to make their predictions disagree. We performed computational experiments to analyze a nonlinear seven-dimensional model of yeast glycolytic oscillations. This approach corrected mistakes reliably in both approximated and overspecified models. The method performed well to high levels of noise for most states, could identify the correct model de novo, and make better predictions than ordinary parametric regression and neural network models. We identified an invariant quantity in the model, which accurately derived kinetics and the numerical sensitivity coefficients of the system. Finally, we compared the system to dynamic flux estimation and discussed the scaling and application of this methodology to automated experiment design and control in biological systems in real time. PMID:21832805
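    The selection principle above, designing experiments so that candidate models disagree, can be sketched in a few lines: evaluate the rival models over the feasible experimental inputs and run the experiment where their predictions diverge most. The two candidate model forms here are illustrative toys, not metabolic models:

```python
import numpy as np

# Two rival candidate models of the same observable (illustrative forms).
def model_a(x):
    return np.sin(x)

def model_b(x):
    return x - x**3 / 6.0  # truncated series: agrees with model_a near 0

# Candidate experimental conditions, and the disagreement at each one.
candidates = np.linspace(0.0, 3.0, 301)
disagreement = np.abs(model_a(candidates) - model_b(candidates))

# The most informative next experiment is where the models differ most.
best_experiment = candidates[np.argmax(disagreement)]
```

    Near x = 0 both candidates fit existing data equally well; only an experiment at large x can falsify one of them, which is exactly the behavior the automated method exploits.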

  18. Automated photogrammetry for three-dimensional models of urban spaces

    NASA Astrophysics Data System (ADS)

    Leberl, Franz; Meixner, Philipp; Wendel, Andreas; Irschara, Arnold

    2012-02-01

    The location-aware Internet is inspiring intensive work addressing the automated assembly of three-dimensional models of urban spaces with their buildings, circulation spaces, vegetation, signs, even their above-ground and underground utility lines. Two-dimensional geographic information systems (GISs) and municipal utility information exist and can serve to guide the creation of models being built with aerial, sometimes satellite imagery, streetside images, indoor imaging, and alternatively with light detection and ranging systems (LiDARs) carried on airplanes, cars, or mounted on tripods. We review the results of current research to automate the information extraction from sensor data. We show that aerial photography at ground sampling distances (GSD) of 1 to 10 cm is well suited to provide geometry data about building facades and roofs, that streetside imagery at 0.5 to 2 cm is particularly interesting when it is collected within community photo collections (CPCs) by the general public, and that the transition to digital imaging has opened the no-cost option of highly overlapping images in support of a more complete and thus more economical automation. LiDAR systems are a widely used source of three-dimensional data, but they deliver information not really superior to digital photography.

  19. Reassembly and interfacing neural models registered on biological model databases.

    PubMed

    Otake, Mihoko; Takagi, Toshihisa

    2005-01-01

    The importance of modeling and simulation of biological processes is growing for further understanding of living systems at all scales, from molecules to cells, organs, and individuals. In the field of neuroscience, there are so-called platform simulators, the de facto standard neural simulators. More than a hundred neural models are registered on the model database. These models are executable in corresponding simulation environments, but the usability of the registered models is not sufficient. In order to make use of a model, users have to identify its input, output and internal state variables and parameters. The roles and units of each variable and parameter are not explicitly defined in the model files; they are suggested only implicitly in the papers where the simulation results are demonstrated. In this study, we propose a novel method of reassembling and interfacing models registered on a biological model database. The method was applied to the neural models registered on a typical biological model database, ModelDB. The results are described in detail with the hippocampal pyramidal neuron model. The model is executable in the NEURON simulator environment, which demonstrates that somatic EPSP amplitude is independent of synapse location. Input and output parameters and variables were identified successfully, and the results of the simulation were recorded in organized form with annotations. PMID:16901091

  20. Rapid Automated Aircraft Simulation Model Updating from Flight Data

    NASA Technical Reports Server (NTRS)

    Brian, Geoff; Morelli, Eugene A.

    2011-01-01

    Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.

  1. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  2. Multibody dynamics model building using graphical interfaces

    NASA Technical Reports Server (NTRS)

    Macala, Glenn A.

    1989-01-01

    In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general purpose dynamics simulation programs which generate the equations of motion either explicitly or implicitly via computer codes. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustrations during program setup. Recent progress on a more natural method of data input for dynamics programs, the graphical interface, is described.

  3. The Application of the Cumulative Logistic Regression Model to Automated Essay Scoring

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Sinharay, Sandip

    2010-01-01

    Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. Comparison of the performances of the linear regression model and the cumulative logit model was performed on a
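    The cumulative logit (proportional-odds) model referenced above maps essay features to probabilities over ordered score categories, rather than to a single continuous prediction. A minimal sketch (the cutpoints and slope are illustrative, not fitted values from the study):

```python
import numpy as np

def cumulative_logit_probs(x, cutpoints, beta):
    """Cumulative logit model: P(score <= j | x) = sigmoid(alpha_j - beta*x)
    for increasing cutpoints alpha_j. Returns the probability of each
    of the len(cutpoints)+1 ordinal score categories."""
    z = np.asarray(cutpoints) - beta * x
    cum = 1.0 / (1.0 + np.exp(-z))           # cumulative probabilities
    cum = np.concatenate(([0.0], cum, [1.0]))
    return np.diff(cum)                       # per-category probabilities

# One essay with combined feature value x = 0.8 and four score levels.
probs = cumulative_logit_probs(x=0.8, cutpoints=[-1.0, 0.5, 2.0], beta=1.5)
```

    Unlike linear regression, this respects the ordinal nature of essay scores: predictions are proper probabilities over categories, and the predicted score can be taken as the most probable category or the expected value.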

  4. Rapid Prototyping of Hydrologic Model Interfaces with IPython

    NASA Astrophysics Data System (ADS)

    Farthing, M. W.; Winters, K. D.; Ahmadia, A. J.; Hesser, T.; Howington, S. E.; Johnson, B. D.; Tate, J.; Kees, C. E.

    2014-12-01

    A significant gulf still exists between the state of practice and state of the art in hydrologic modeling. Part of this gulf is due to the lack of adequate pre- and post-processing tools for newly developed computational models. The development of user interfaces has traditionally lagged several years behind the development of a particular computational model or suite of models. As a result, models with mature interfaces often lack key advancements in model formulation, solution methods, and/or software design and technology. Part of the problem has been a focus on developing monolithic tools to provide comprehensive interfaces for the entire suite of model capabilities. Such efforts require expertise in software libraries and frameworks for creating user interfaces (e.g., Tcl/Tk, Qt, and MFC). These tools are complex and require significant investment in project resources (time and/or money) to use. Moreover, providing the required features for the entire range of possible applications and analyses creates a cumbersome interface. For a particular site or application, the modeling requirements may be simplified or at least narrowed, which can greatly reduce the number and complexity of options that need to be accessible to the user. However, monolithic tools usually are not adept at dynamically exposing specific workflows. Our approach is to deliver highly tailored interfaces to users. These interfaces may be site and/or process specific. As a result, we end up with many, customized interfaces rather than a single, general-use tool. For this approach to be successful, it must be efficient to create these tailored interfaces. We need technology for creating quality user interfaces that is accessible and has a low barrier for integration into model development efforts. Here, we present efforts to leverage IPython notebooks as tools for rapid prototyping of site and application-specific user interfaces. 
We provide specific examples from applications in near-shore environments as well as levee analysis. We discuss our design decisions and methodology for developing customized interfaces, strategies for delivery of the interfaces to users in various computing environments, as well as implications for the design/implementation of simulation models.
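
    The notebook-based approach described above can be sketched in a few lines: a tailored interface is essentially a narrow view onto a model's full parameter set. The schema, site names, and widget mapping below are hypothetical illustrations, not the authors' actual code; in a notebook, the resulting specs would be mapped onto ipywidgets controls.

```python
# Hypothetical sketch: derive a tailored parameter form for one site
# from a model's full parameter schema, exposing only the fields that
# matter for that application (the "many customized interfaces" idea).
# All names and defaults are invented for illustration.

FULL_SCHEMA = {
    "hydraulic_conductivity": {"type": "float", "default": 1e-5},
    "porosity": {"type": "float", "default": 0.3},
    "wave_height": {"type": "float", "default": 1.2},
    "levee_crest_elevation": {"type": "float", "default": 6.0},
    "solver_tolerance": {"type": "float", "default": 1e-8},
}

SITE_PROFILES = {  # each tailored interface exposes a narrow subset
    "near_shore": ["wave_height", "porosity"],
    "levee": ["levee_crest_elevation", "hydraulic_conductivity"],
}

def build_form(site):
    """Return widget specs for just the parameters this site needs."""
    return [
        {"name": name, **FULL_SCHEMA[name], "widget": "FloatText"}
        for name in SITE_PROFILES[site]
    ]

levee_form = build_form("levee")
print([w["name"] for w in levee_form])
```

    Delivering a new site-specific interface then amounts to adding one profile entry rather than rebuilding a monolithic GUI.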

  5. Developing a Graphical User Interface to Automate the Estimation and Prediction of Risk Values for Flood Protective Structures using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Hasan, M.; Helal, A.; Gabr, M.

    2014-12-01

    In this project, we focus on providing a computer-automated platform for a better assessment of the potential failures and retrofit measures of flood-protecting earth structures, e.g., dams and levees. Such structures play an important role during extreme flooding events as well as during normal operating conditions. Furthermore, they are part of other civil infrastructure such as water storage and hydropower generation. Hence, there is a clear need for accurate evaluation of stability and functionality levels during their service lifetime so that rehabilitation and maintenance costs are effectively guided. Among condition assessment approaches based on the factor of safety, the limit states (LS) approach utilizes numerical modeling to quantify the probability of potential failures. The parameters for LS numerical modeling include i) geometry and side slopes of the embankment, ii) loading conditions in terms of the rate of rise and duration of high water levels in the reservoir, and iii) cycles of rising and falling water levels simulating the effect of consecutive storms throughout the service life of the structure. Sample data regarding the correlations of these parameters are available through previous research studies. We have unified these criteria and extended the risk assessment in terms of loss of life through the implementation of a graphical user interface that automates the input parameters, divides the data into training and testing sets, and then feeds them into an Artificial Neural Network (ANN) tool through MATLAB programming. The ANN modeling allows us to predict risk values of flood protective structures quickly and easily based on user input. In the future, we expect to fine-tune the software by adding extensive data on variations of parameters.
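
    As a rough illustration of the workflow described above (split data into training and testing sets, train a network, predict risk), the sketch below substitutes a minimal logistic unit written in pure Python for the MATLAB ANN toolbox; the features, labels, and data are synthetic stand-ins, not the project's data.

```python
import random, math

random.seed(0)

# Hypothetical features: (rate of water rise, high-water duration, storm cycles),
# label 1 = high risk of failure. Synthetic, linearly separable data.
def synth_sample():
    rise, duration, cycles = random.random(), random.random(), random.random()
    label = 1 if rise + 0.5 * duration + 0.3 * cycles > 1.0 else 0
    return [rise, duration, cycles], label

data = [synth_sample() for _ in range(400)]
train, test = data[:300], data[300:]          # training / testing split

w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
sigmoid = lambda z: 1 / (1 + math.exp(-z))

for _ in range(200):                           # stochastic gradient descent
    for x, y in train:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

correct = sum((sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == (y == 1)
              for x, y in test)
accuracy = correct / len(test)
print(f"test accuracy: {accuracy:.2f}")
```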

  6. Modeling and Control of the Automated Radiator Inspection Device

    NASA Technical Reports Server (NTRS)

    Dawson, Darren

    1991-01-01

    Many of the operations performed at the Kennedy Space Center (KSC) are dangerous and repetitive tasks which make them ideal candidates for robotic applications. For one specific application, KSC is currently in the process of designing and constructing a robot called the Automated Radiator Inspection Device (ARID), to inspect the radiator panels on the orbiter. The following aspects of the ARID project are discussed: modeling of the ARID; design of control algorithms; and nonlinear based simulation of the ARID. Recommendations to assist KSC personnel in the successful completion of the ARID project are given.

  7. Automated quantitative gait analysis in animal models of movement disorders

    PubMed Central

    2010-01-01

    Background Accurate and reproducible behavioral tests in animal models are of major importance in the development and evaluation of new therapies for central nervous system disease. In this study we investigated for the first time gait parameters of rat models for Parkinson's disease (PD), Huntington's disease (HD) and stroke using the Catwalk method, a novel automated gait analysis test. Static and dynamic gait parameters were measured in all animal models, and these data were compared to readouts of established behavioral tests, such as the cylinder test in the PD and stroke rats and the rotarod tests for the HD group. Results Hemiparkinsonian rats were generated by unilateral injection of the neurotoxin 6-hydroxydopamine in the striatum or in the medial forebrain bundle. For Huntington's disease, a transgenic rat model expressing a truncated huntingtin fragment with multiple CAG repeats was used. Thirdly, a stroke model was generated by a photothrombotic induced infarct in the right sensorimotor cortex. We found that multiple gait parameters were significantly altered in all three disease models compared to their respective controls. Behavioural deficits could be efficiently measured using the cylinder test in the PD and stroke animals, and in the case of the PD model, the deficits in gait essentially confirmed results obtained by the cylinder test. However, in the HD model and the stroke model the Catwalk analysis proved more sensitive than the rotarod test and also added new and more detailed information on specific gait parameters. Conclusion The automated quantitative gait analysis test may be a useful tool to study both motor impairment and recovery associated with various neurological motor disorders. PMID:20691122

  8. Anomalous interface roughening in porous media: Experiment and model

    NASA Astrophysics Data System (ADS)

    Buldyrev, S. V.; Barabási, A.-L.; Caserta, F.; Havlin, S.; Stanley, H. E.; Vicsek, T.

    1992-06-01

    We report measurements of the interface formed when a wet front propagates in paper by imbibition and we find anomalous roughening with exponent α = 0.63 ± 0.04. We also formulate an imbibition model that agrees with the experimental morphology. The main ingredient of the model is the propagation and pinning of a self-affine interface in the presence of quenched disorder, with erosion of overhangs. By relating our model to directed percolation, we find α ≈ 0.63.
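
    The pinning mechanism described above can be illustrated with a minimal invasion-type simulation in the spirit of the directed-percolation-depinning picture (a loose sketch, not the authors' model): cells are blocked with probability q, the wet front invades unblocked neighbors, and blocked cells directly beneath wet cells are eroded. Above the pinning threshold the front stops at a finite, rough height.

```python
import random
random.seed(1)

W, H, q = 100, 200, 0.60   # width, height, blocking probability (pins the front)
blocked = [[random.random() < q for _ in range(H)] for _ in range(W)]

wet = [[False] * H for _ in range(W)]
for x in range(W):
    wet[x][0] = True        # reservoir: bottom row is wet

changed = True
while changed:              # invade until the front pins
    changed = False
    for x in range(W):
        for y in range(1, H):
            if wet[x][y]:
                continue
            nbr_wet = (wet[x][y - 1] or wet[(x - 1) % W][y] or wet[(x + 1) % W][y]
                       or (y + 1 < H and wet[x][y + 1]))
            # unblocked cells wet next to a wet cell; blocked cells under a
            # wet cell wet too ("erosion of overhangs")
            if (nbr_wet and not blocked[x][y]) or (y + 1 < H and wet[x][y + 1]):
                wet[x][y] = True
                changed = True

h = [max(y for y in range(H) if wet[x][y]) for x in range(W)]
mean = sum(h) / W
width = (sum((hi - mean) ** 2 for hi in h) / W) ** 0.5
print(f"pinned front: mean height {mean:.1f}, width {width:.1f}")
```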

  9. Automated Physico-Chemical Cell Model Development through Information Theory

    SciTech Connect

    Peter J. Ortoleva

    2005-11-29

    The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is optimization of environmental remediation and energy-producing biotechnical processes. The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

  10. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  11. Finite element modeling of frictionally restrained composite interfaces

    NASA Technical Reports Server (NTRS)

    Ballarini, Roberto; Ahmed, Shamim

    1989-01-01

    The use of special interface finite elements to model frictional restraint in composite interfaces is described. These elements simulate Coulomb friction at the interface, and are incorporated into a standard finite element analysis of a two-dimensional isolated fiber pullout test. Various interfacial characteristics, such as the distribution of stresses at the interface, the extent of slip and delamination, load diffusion from fiber to matrix, and the amount of fiber extraction or depression are studied for different friction coefficients. The results are compared to those obtained analytically using a singular integral equation approach, and those obtained by assuming a constant interface shear strength. The usefulness of these elements in micromechanical modeling of fiber-reinforced composite materials is highlighted.

  12. Petri net modelling of buffers in automated manufacturing systems.

    PubMed

    Zhou, M; Dicesare, F

    1996-01-01

    This paper presents Petri net models of buffers and a methodology by which buffers can be included in a system without introducing deadlocks or overflows. The context is automated manufacturing. The buffers and models are classified as random order or order preserved (first-in-first-out or last-in-first-out), single-input-single-output or multiple-input-multiple-output, part type and/or space distinguishable or indistinguishable, and bounded or safe. Theoretical results for the development of Petri net models which include buffer modules are developed. This theory provides the conditions under which the system properties of boundedness, liveness, and reversibility are preserved. The results are illustrated through two manufacturing system examples: a multiple machine and multiple buffer production line and an automatic storage and retrieval system in the context of flexible manufacturing. PMID:18263017
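
    A token-game sketch of the central idea, a bounded buffer place inserted between two machines without breaking boundedness or liveness, might look as follows; the net structure and names are illustrative, not the paper's formal buffer modules.

```python
# Minimal Petri net token game: a 3-slot buffer between two machines.
# The complementary place pair (buf_free, buf_full) enforces the bound.

places = {"M1_idle": 1, "M1_busy": 0,      # upstream machine
          "buf_free": 3, "buf_full": 0,    # 3-slot buffer (complementary places)
          "M2_idle": 1, "M2_busy": 0}      # downstream machine

transitions = {
    "start1":  ({"M1_idle": 1}, {"M1_busy": 1}),
    "finish1": ({"M1_busy": 1, "buf_free": 1}, {"M1_idle": 1, "buf_full": 1}),
    "start2":  ({"M2_idle": 1, "buf_full": 1}, {"M2_busy": 1, "buf_free": 1}),
    "finish2": ({"M2_busy": 1}, {"M2_idle": 1}),
}

def enabled(t):
    ins, _ = transitions[t]
    return all(places[p] >= n for p, n in ins.items())

def fire(t):
    ins, outs = transitions[t]
    assert enabled(t), t
    for p, n in ins.items():
        places[p] -= n
    for p, n in outs.items():
        places[p] += n

# Run a fixed schedule; the buffer can never overflow (boundedness) and
# every cycle returns the net to a state where all transitions can fire.
for _ in range(5):
    for t in ("start1", "finish1", "start2", "finish2"):
        if enabled(t):
            fire(t)
    assert places["buf_full"] + places["buf_free"] == 3   # place invariant

print(places)
```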

  13. Automated geo/ortho registered aerial imagery product generation using the mapping system interface card (MSIC)

    NASA Astrophysics Data System (ADS)

    Bratcher, Tim; Kroutil, Robert; Lanouette, André; Lewis, Paul E.; Miller, David; Shen, Sylvia; Thomas, Mark

    2013-05-01

    The development concept paper for the MSIC system was first introduced in August 2012 by these authors. This paper describes the final assembly, testing, and commercial availability of the Mapping System Interface Card (MSIC). The 2.3 kg MSIC is a self-contained, compact, variable-configuration, low-cost, real-time precision metadata annotator with embedded INS/GPS designed specifically for use in small aircraft. The MSIC was specifically designed to convert commercial-off-the-shelf (COTS) digital cameras and imaging/non-imaging spectrometers with Camera Link standard data streams into mapping systems for airborne emergency response and scientific remote sensing applications. COTS digital cameras and imaging/non-imaging spectrometers covering the ultraviolet through long-wave infrared wavelengths are important tools now readily available and affordable for use by emergency responders and scientists. The MSIC will significantly enhance the capability of emergency responders and scientists by providing a direct transformation of these important COTS sensor tools into low-cost real-time aerial mapping systems.

  14. A new seismically constrained subduction interface model for Central America

    NASA Astrophysics Data System (ADS)

    Kyriakopoulos, C.; Newman, A. V.; Thomas, A. M.; Moore-Driskell, M.; Farmer, G. T.

    2015-08-01

    We provide a detailed, seismically defined three-dimensional model for the subducting plate interface along the Middle America Trench between northern Nicaragua and southern Costa Rica. The model uses data from a weighted catalog of about 30,000 earthquake hypocenters compiled from nine catalogs to constrain the interface through a process we term the "maximum seismicity method." The method determines the average position of the largest cluster of microseismicity beneath an a priori functional surface above the interface. This technique is applied to all seismicity above 40 km depth, the approximate intersection of the hanging wall Mohorovičić discontinuity, where seismicity likely lies along the plate interface. Below this depth, an envelope above 90% of seismicity approximates the slab surface. Because of station proximity to the interface, this model provides highest precision along the interface beneath the Nicoya Peninsula of Costa Rica, an area where marked geometric changes coincide with crustal transitions and topography observed seaward of the trench. The new interface is useful for a number of geophysical studies that aim to understand subduction zone earthquake behavior and geodynamic and tectonic development of convergent plate boundaries.
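
    A toy rendition of the "maximum seismicity method" (assumed here to mean: take the average depth of the densest cluster of hypocenters within each along-trench bin) could look like this, using a purely synthetic catalog:

```python
import random, statistics
random.seed(2)

# Synthetic catalog: 80% of events scatter about a dipping interface,
# 20% are background events spread through the upper plate.
true_interface = lambda x: 10 + 0.5 * x          # slab depth (km) vs. distance x

catalog = []
for _ in range(3000):
    x = random.uniform(0, 40)
    if random.random() < 0.8:                     # interface seismicity
        z = random.gauss(true_interface(x), 1.5)
    else:                                         # scattered upper-plate events
        z = random.uniform(0, 40)
    catalog.append((x, z))

def interface_depth(depths, bin_km=2.0):
    """Average depth of the densest 2-km depth bin (the largest cluster)."""
    bins = {}
    for z in depths:
        bins.setdefault(int(z // bin_km), []).append(z)
    densest = max(bins.values(), key=len)
    return statistics.mean(densest)

# Estimate interface depth in the along-trench bin 18 <= x < 22 km
depths = [z for x, z in catalog if 18 <= x < 22]
est = interface_depth(depths)
print(f"estimated depth near x=20 km: {est:.1f} km (true ~{true_interface(20):.1f})")
```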

  15. T:XML: A Tool Supporting User Interface Model Transformation

    NASA Astrophysics Data System (ADS)

    López-Jaquero, Víctor; Montero, Francisco; González, Pascual

    Model driven development of user interfaces is based on the transformation of an abstract specification into the final user interface the user will interact with. The design of transformation rules to carry out this transformation process is a key issue in any model-driven user interface development approach. In this paper, we introduce T:XML, an integrated development environment for managing, creating and previewing transformation rules. The tool supports the specification of transformation rules by using a graphical notation that works on the basis of the transformation of the input model into a graph-based representation. T:XML allows the design and execution of transformation rules in an integrated development environment. Furthermore, the designer can also preview what the generated user interface looks like after the transformations have been applied. These previewing capabilities can be used to quickly create prototypes to discuss with the users in user-centered design methods.

  16. Molecular modeling of cracks at interfaces in nanoceramic composites

    NASA Astrophysics Data System (ADS)

    Pavia, F.; Curtin, W. A.

    2013-10-01

    Toughness in Ceramic Matrix Composites (CMCs) is achieved if crack deflection can occur at the fiber/matrix interface, preventing crack penetration into the fiber and enabling energy-dissipating fiber pullout. To investigate toughening in nanoscale CMCs, direct atomistic models are used to study how matrix cracks behave as a function of the degree of interfacial bonding/sliding, as controlled by the density of C interstitial atoms, at the interface between carbon nanotubes (CNTs) and a diamond matrix. Under all interface conditions studied, incident matrix cracks do not penetrate into the nanotube. Under increased loading, weaker interfaces fail in shear while stronger interfaces do not fail and, instead, the CNT fails once the stress on the CNT reaches its tensile strength. An analytic shear lag model captures all of the micromechanical details as a function of loading and material parameters. Interface deflection versus fiber penetration is found to depend on the relative bond strengths of the interface and the CNT, with CNT failure occurring well below the prediction of the toughness-based continuum He-Hutchinson model. The shear lag model, in contrast, predicts the CNT failure point and shows that the nanoscale embrittlement transition occurs at an interface shear strength scaling as τs ~ σε rather than the τs ~ σ scaling typically prevailing for micron-scale composites, where ε and σ are the CNT failure strain and stress, respectively. Interface bonding also lowers the effective fracture strength in SWCNTs, due to formation of defects, but does not play a role in DWCNTs having interwall coupling, which are weaker than SWCNTs but less prone to damage in the outer wall.
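
    The constant-shear flavor of a shear lag argument can be sketched as follows; the functions and parameter values are illustrative assumptions, not the paper's CNT/diamond results. Interfacial shear builds fiber stress linearly from the crack plane, and the competition between that transferable stress and the fiber strength selects the failure mode.

```python
# Hedged shear-lag-style sketch: a fiber bridging a matrix crack with a
# constant interfacial shear stress tau (all numbers are made up).

def fiber_stress(z_um, tau_MPa, radius_um):
    """Axial fiber stress a distance z from the crack plane (constant shear)."""
    return 2.0 * tau_MPa * z_um / radius_um

def failure_mode(sigma_applied, sigma_fiber_strength, tau, radius, embed_length):
    """Does the interface slip along the embedded length, or the fiber break?"""
    peak = fiber_stress(embed_length, tau, radius)   # max transferable stress
    if sigma_fiber_strength <= min(peak, sigma_applied):
        return "fiber breaks"
    if sigma_applied >= peak:
        return "interface slips (pullout)"
    return "intact"

# weak interface -> energy-dissipating pullout; strong interface -> fiber break
print(failure_mode(5000, 30000, tau=10, radius=1.0, embed_length=50))
print(failure_mode(35000, 30000, tau=400, radius=1.0, embed_length=50))
```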

  17. Development of an automated core model for nuclear reactors

    SciTech Connect

    Mosteller, R.D.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop an automated package of computer codes that can model the steady-state behavior of nuclear-reactor cores of various designs. As an added benefit, data produced for steady-state analysis also can be used as input to the TRAC transient-analysis code for subsequent safety analysis of the reactor at any point in its operating lifetime. The basic capability to perform steady-state reactor-core analysis already existed in the combination of the HELIOS lattice-physics code and the NESTLE advanced nodal code. In this project, the automated package was completed by (1) obtaining cross-section libraries for HELIOS, (2) validating HELIOS by comparing its predictions to results from critical experiments and from the MCNP Monte Carlo code, (3) validating NESTLE by comparing its predictions to results from numerical benchmarks and to measured data from operating reactors, and (4) developing a linkage code to transform HELIOS output into NESTLE input.

  18. Automated Upscaling of River Networks for Macroscale Hydrological Modeling

    NASA Astrophysics Data System (ADS)

    Wu, H.; Kimball, J. S.; Lettenmaier, D. P.

    2008-12-01

    Regional upscaling of river networks and flow directions to coarse spatial scales commensurate with global climate models (GCMs) is necessary for representing the lateral movement of water, sediment and nutrients in macroscale hydrological modeling studies. Most upscaling methods involve time-intensive and subjective manual corrections of disconnected river segments and flow paths defined at relatively coarse spatial scales. We developed a new approach for automated extraction and spatial upscaling of river networks and flow directions from relatively fine scale DEM information. Model outputs include flow accumulation, flow direction and river network structure. The algorithm determines downstream hierarchical flow paths for each grid cell while preserving predominant flow paths defined from the baseline, fine scale DEM. Downstream flow paths and directions are prioritized according to upstream contributing drainage areas. Additional constraints are defined to minimize the occurrence of broken or false river segments. The algorithm prioritizes river channels by length and selects the longest effective stem river segment for each grid cell to collect water from upstream areas. The algorithm also maintains consistency in basin area calculations by minimizing the growth of bigger basins and boundary areas at coarser spatial scales. We applied the algorithm to produce a series of global river datasets at variable spatial resolutions including 1/16, 1/8, 1/4, 1/2, 1, and 2 degrees. The model results indicate several advantages over other commonly used approaches, including: (1) accurate, automated extraction of river networks and flow paths at any spatial scale without the need for intensive manual correction; (2) consistency of flow path shape, flow path density, and drainage area (basin area); and (3) consistency of flow distance between the upscaled river networks and the baseline fine scale river network/flow direction information.
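
    The fine-scale starting point of such upscaling, routing each DEM cell to its steepest downslope neighbor and accumulating contributing area downstream, can be sketched as a minimal D8 example on a toy DEM (no pits or flats, which a real implementation must also handle):

```python
# Minimal D8 flow routing and flow accumulation on a tiny synthetic DEM.
dem = [
    [9, 8, 7, 6],
    [8, 7, 5, 4],
    [7, 6, 4, 2],
    [6, 5, 3, 1],
]
R, C = len(dem), len(dem[0])

def d8_receiver(r, c):
    """Steepest downslope neighbor of (r, c), or None for an outlet."""
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < R and 0 <= cc < C:
                dist = (dr * dr + dc * dc) ** 0.5   # diagonal = sqrt(2)
                drop = (dem[r][c] - dem[rr][cc]) / dist
                if drop > best_drop:
                    best, best_drop = (rr, cc), drop
    return best

# Accumulate drainage area from highest to lowest cell, so every donor
# is processed before its receiver.
acc = [[1] * C for _ in range(R)]                 # each cell contributes itself
for r, c in sorted(((r, c) for r in range(R) for c in range(C)),
                   key=lambda rc: -dem[rc[0]][rc[1]]):
    rec = d8_receiver(r, c)
    if rec:
        acc[rec[0]][rec[1]] += acc[r][c]

print(acc[3][3])   # the lowest corner collects the whole 4x4 grid
```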

  19. MESA: An Interactive Modeling and Simulation Environment for Intelligent Systems Automation

    NASA Technical Reports Server (NTRS)

    Charest, Leonard

    1994-01-01

    This report describes MESA, a software environment for creating applications that automate NASA mission operations. MESA enables intelligent automation by utilizing model-based reasoning techniques developed in the field of Artificial Intelligence. Model-based reasoning techniques are realized in MESA through native support of causal modeling and discrete event simulation.

  20. Control of a Wheelchair in an Indoor Environment Based on a Brain-Computer Interface and Automated Navigation.

    PubMed

    Zhang, Rui; Li, Yuanqing; Yan, Yongyong; Zhang, Hao; Wu, Shaoyu; Yu, Tianyou; Gu, Zhenghui

    2016-01-01

    The concept of controlling a wheelchair using brain signals is promising. However, the continuous control of a wheelchair based on unstable and noisy electroencephalogram signals is unreliable and generates a significant mental burden for the user. A feasible solution is to integrate a brain-computer interface (BCI) with automated navigation techniques. This paper presents a brain-controlled intelligent wheelchair with the capability of automatic navigation. Using an autonomous navigation system, candidate destinations and waypoints are automatically generated based on the existing environment. The user selects a destination using a motor imagery (MI)-based or P300-based BCI. According to the determined destination, the navigation system plans a short and safe path and navigates the wheelchair to the destination. During the movement of the wheelchair, the user can issue a stop command with the BCI. Using our system, the mental burden of the user can be substantially alleviated. Furthermore, our system can adapt to changes in the environment. Two experiments based on MI and P300 were conducted to demonstrate the effectiveness of our system. PMID:26054072

  1. An Automated 3d Indoor Topological Navigation Network Modelling

    NASA Astrophysics Data System (ADS)

    Jamali, A.; Rahman, A. A.; Boguslawski, P.; Gold, C. M.

    2015-10-01

    Indoor navigation is important for various applications such as disaster management and safety analysis. In the last decade, the indoor environment has been a focus of wide research that includes developing techniques for acquiring indoor data (e.g., terrestrial laser scanning), 3D indoor modelling, and 3D indoor navigation models. In this paper, an automated 3D topological indoor network generated from inaccurate 3D building models is proposed. In a normal scenario, 3D indoor navigation network derivation needs accurate 3D models with no errors (e.g., gaps, intersections), and two cells (e.g., rooms, corridors) must touch each other to build their connections. The presented 3D modelling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. To reduce the time and cost of the indoor building data acquisition process, a Trimble LaserAce 1000 was used as the surveying instrument. The modelling results were validated against an accurate geometry of the indoor building environment acquired using a Trimble M3 total station.
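
    At its core, such a navigation network is a weighted graph over surveyed control points. A minimal sketch with hypothetical rooms, a door, a corridor, and a stairwell (all names and coordinates invented) could be:

```python
import heapq

# Hypothetical 3D indoor network: nodes are surveyed control points
# (room centers, door midpoints, stair landings); z encodes the floor.
nodes = {
    "room101": (2, 3, 0), "door101": (4, 3, 0), "corridor1": (4, 8, 0),
    "stair_g": (4, 12, 0), "stair_1": (4, 12, 3), "room201": (2, 12, 3),
}
edges = [("room101", "door101"), ("door101", "corridor1"),
         ("corridor1", "stair_g"), ("stair_g", "stair_1"), ("stair_1", "room201")]

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(nodes[a], nodes[b])) ** 0.5

graph = {n: [] for n in nodes}
for a, b in edges:                       # undirected connectivity
    graph[a].append((b, dist(a, b)))
    graph[b].append((a, dist(a, b)))

def route(start, goal):
    """Dijkstra shortest path over the topological network."""
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

cost, path = route("room101", "room201")
print(path)
```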

  2. User interface for ground-water modeling: Arcview extension

    USGS Publications Warehouse

    Tsou, M.-S.; Whittemore, D.O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. Arc View is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to Arc View that provides additional specialized functions. An Arc View interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The Arc View interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the Arc View interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  3. Interfaces in the Potts model II: Antonov's rule and rigidity of the order disorder interface

    NASA Astrophysics Data System (ADS)

    Messager, Alain; Miracle-Sole, Salvador; Ruiz, Jean; Shlosman, Senya

    1991-09-01

    Within the ferromagnetic q-state Potts model we discuss the wetting of the interface between two ordered phases a and b by the disordered phase f at the transition temperature. In two or more dimensions and for q large we establish the validity of Antonov's rule, τab = τaf + τfb, where τ denotes the surface tension between the considered phases. We also prove that at this temperature, in three or more dimensions, the interface between any ordered phase and the disordered one is rigid.

  4. Minimal model for charge transfer excitons at the dielectric interface

    NASA Astrophysics Data System (ADS)

    Ono, Shota; Ohno, Kaoru

    2016-03-01

    A theoretical description of the charge transfer (CT) exciton across the donor-acceptor interface without the use of a completely localized hole (or electron) is a challenge in the field of organic solar cells. We calculate the total wave function of the CT exciton by solving an effective two-particle Schrödinger equation for the inhomogeneous dielectric interface. We formulate the magnitude of the CT and construct a minimal model of the CT exciton under the breakdown of inversion symmetry. We demonstrate that both a light hole mass and a hole localization along the normal to the dielectric interface are crucial to yield the CT exciton.

  5. Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.

  6. Stable, reproducible, and automated capillary zone electrophoresis-tandem mass spectrometry system with an electrokinetically pumped sheath-flow nanospray interface.

    PubMed

    Zhu, Guijie; Sun, Liangliang; Yan, Xiaojing; Dovichi, Norman J

    2014-01-31

    A PrinCE autosampler was coupled to a Q-Exactive mass spectrometer by an electrokinetically pumped sheath-flow nanospray interface to perform automated capillary zone electrophoresis-electrospray ionization-tandem mass spectrometry (CZE-ESI-MS/MS). 20 ng aliquots of an Escherichia coli digest were injected to evaluate the system. Eight sequential injections over an 8-h period identified 1115±70 (relative standard deviation, RSD=6%) peptides and 270±8 (RSD=3%) proteins per run. The average RSDs of migration time, peak intensity, and peak area were 3%, 24%, and 19%, respectively, for 340 peptides with high intensity. This is the first report of an automated CZE-ESI-MS/MS system using the electrokinetically pumped sheath-flow nanospray interface. The results demonstrate that this system is capable of reproducibly identifying over 1000 peptides from an E. coli tryptic digest in a 1-h analysis time. PMID:24439510

  7. Empirical rheological model for rough or grooved bonded interfaces.

    PubMed

    Belloncle, Valentina Vlasie; Rousseau, Martine

    2007-12-01

    In the industrial sector, it is common to use metal/adhesive/metal structural bonds. The cohesion of such structures can be improved by preliminary chemical treatments (degreasing with solvents, alkaline, or acid pickling), electrochemical treatments (anodising), or mechanical treatments (abrasion, sandblasting, grooving) of the metallic plates. All these pretreatments create some asperities, ranging from roughnesses to grooves. On the other hand, in damage solid mechanics and in non-destructive testing, rheological models are used to measure the strength of bonded interfaces. However, these models do not take into account the interlocking of the adhesive in the porosities. Here, an empirical rheological model taking into account the interlocking effects is developed. This model depends on a characteristic parameter representing the average porosity along the interface, which considerably simplifies the corresponding stress and displacement jump conditions. The paper deals with the influence of this interface model on the ultrasonic guided modes of the structure. PMID:17659313

  8. Neuroengineering modeling of single neuron and neural interface.

    PubMed

    Hu, X L; Zhang, Y T; Yao, J

    2002-01-01

    The single neuron has attracted widespread attention as an elementary unit for understanding the electrophysiological mechanisms of nervous systems and for exploring the functions of biological neural networks. Over the past decades, much modeling work on neural interfaces has been presented in support of experimental findings in neural engineering. This article reviews the recent research results on modeling electrical activities of the single neuron, electrical synapse, neuromuscular junction, and neural interfaces at the cochlea. Single neuron models vary in form to illustrate how neurons fire and what the firing patterns mean. Focusing on these two questions, recent modeling work on single neurons is discussed. The modeling of neural receptors at inner and outer hair cells is examined to explain the transformation from sounds to electrical signals. The low-pass characteristics of the electrical synapse and neuromuscular junction are also discussed in an attempt to understand the mechanism of electrical transmission across the interfaces. PMID:12739750
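
    Of the single-neuron model families such a review covers, the leaky integrate-and-fire model is the simplest illustration of "how neurons fire"; the sketch below uses generic textbook parameters, not values from the article.

```python
# Leaky integrate-and-fire neuron: tau * dV/dt = -(V - V_rest) + R*I,
# with a spike-and-reset when V crosses threshold (Euler integration).

def lif_spike_times(i_inj, t_end=200.0, dt=0.1, tau=10.0,
                    v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0, r=10.0):
    """Return spike times (ms) for a constant injected current i_inj."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_end:
        v += dt / tau * (-(v - v_rest) + r * i_inj)
        if v >= v_thresh:          # fire and reset
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

# Stronger input current -> higher firing rate (the f-I relationship)
weak, strong = lif_spike_times(2.0), lif_spike_times(4.0)
print(len(weak), len(strong))
```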

  9. Device model for electronic processes at organic/organic interfaces

    NASA Astrophysics Data System (ADS)

    Liu, Feilong; Paul Ruden, P.; Campbell, Ian. H.; Smith, Darryl L.

    2012-05-01

    Interfaces between different organic materials can play a key role in determining organic semiconductor device characteristics. Here, we present a physics-based one-dimensional model with the goal of exploring critical processes at organic/organic interfaces. Specifically, we envision a simple bilayer structure consisting of an electron transport layer (ETL), a hole transport layer (HTL), and the interface between them. The model calculations focus on the following aspects: (1) the microscopic physical processes at the interface, such as exciton formation/dissociation, exciplex formation/dissociation, and geminate/nongeminate recombination; (2) the treatment of the interface parameters and the discretization method; and (3) the application of this model to different devices, such as organic light emitting diodes and photovoltaic cells. At the interface, an electron on an ETL molecule can interact with a hole on an adjacent HTL molecule and form an intermolecular excited state (exciplex). If either the electron or the hole transfers across the interface, an exciton can be formed. The exciton may subsequently diffuse into the relevant layer and relax to the ground state. A strong effective electric field at the interface can cause excitons or exciplexes to dissociate into electrons in the ETL and holes in the HTL. Geminate recombination may occur when the Coulomb interaction between the electron and the hole generated at the interface by exciton dissociation causes the formation of a correlated state that then relaxes to the ground state. The relative impacts of the different processes on measurable macroscopic device characteristics are explored in our calculations by varying the corresponding kinetic coefficients. As it is the aim of this work to investigate effects associated with the organic/organic interface, its treatment in the numerical calculations is of critical importance. 
We model the interface as a continuous but rather sharp transition from the ETL to the HTL. The model is applied to different devices where different microscopic processes dominate. We discuss the results for an organic light emitting device with exciton or exciplex emission and for a photovoltaic device with or without geminate recombination. In the examples, C60 and tetracene parameters are used for the ETL and HTL materials, respectively.
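Interfacial kinetics of this kind are commonly summarized as coupled rate equations for the interfacial species. A deliberately minimal sketch, with two species (exciplex and exciton), made-up rate constants (not the paper's parameters), and a constant supply of interfacial electron-hole pairs:

```python
# Toy rate-equation model of interfacial exciplex/exciton kinetics.
# All rate constants are illustrative, in arbitrary units.
k_form = 1.0   # exciplex formation from an interfacial e-h pair
k_diss = 0.2   # field-assisted exciplex dissociation back to free carriers
k_xfer = 0.5   # exciplex -> exciton conversion (carrier transfer across the interface)
k_rel = 0.1    # exciton relaxation to the ground state
n_eh = 1.0     # constant supply of interfacial e-h pairs (fixed bias)

# Forward-Euler integration to steady state
dt, exciplex, exciton = 1e-3, 0.0, 0.0
for _ in range(200_000):
    d_exciplex = k_form * n_eh - (k_diss + k_xfer) * exciplex
    d_exciton = k_xfer * exciplex - k_rel * exciton
    exciplex += dt * d_exciplex
    exciton += dt * d_exciton

# Analytic steady state for comparison:
#   exciplex* = k_form * n_eh / (k_diss + k_xfer)
#   exciton*  = k_xfer * exciplex* / k_rel
```

Varying one coefficient at a time in such a model, as the abstract describes, shows how each microscopic process shifts the steady-state populations.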

  10. Back to the Future: A Non-Automated Method of Constructing Transfer Models

    ERIC Educational Resources Information Center

    Feng, Mingyu; Beck, Joseph

    2009-01-01

    Representing domain knowledge is important for constructing educational software, and automated approaches have been proposed to construct and refine such models. In this paper, instead of applying automated and computationally intensive approaches, we simply start with existing hand-constructed transfer models at various levels of granularity and

  11. Modeling of interface behavior in carbon nanotube composites.

    SciTech Connect

    Hammerand, Daniel Carl; Awasthi, Amnaya P.; Lagoudas, Dimitris C.

    2006-05-01

    This research focuses on the development of a constitutive model for carbon nanotube polymer composites incorporating nanoscale attributes of the interface between the nanotube and polymer. Carbon nanotube polymer composites exhibit promising properties as structural materials, and the current work will motivate improvement in their load transfer capabilities. Since separation events occur at different length and time scales, the current work also addresses the challenge of multiscale modeling, interpreting inputs across those scales. The nanoscale phase separation phenomena are investigated using molecular dynamics (MD) simulations. The MD simulations provide grounds for developing a cohesive zone model for the interface based on the laws of thermodynamics.
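A cohesive zone model of the kind described is typically expressed as a traction-separation law. As a hedged sketch, here is a generic bilinear law with made-up parameters, not the law calibrated from the MD simulations:

```python
def traction(delta, delta_0=1.0, delta_f=5.0, t_max=2.0):
    """Bilinear cohesive traction-separation law (illustrative parameters).

    Linear loading up to peak traction t_max at separation delta_0,
    then linear softening to zero traction at final separation delta_f.
    """
    if delta <= 0.0:
        return 0.0
    if delta <= delta_0:
        return t_max * delta / delta_0                            # elastic loading branch
    if delta <= delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta_0)    # softening branch
    return 0.0                                                    # fully debonded

# The interface fracture energy is the area under the curve: 0.5 * t_max * delta_f
fracture_energy = 0.5 * 2.0 * 5.0
```

In practice the MD results would be used to fit the peak traction and separations rather than the round numbers used here.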

  12. A distributed data component for the open modeling interface

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As the volume of collected data continues to increase in the environmental sciences, so does the need for effective means for accessing those data. We have developed an Open Modeling Interface (OpenMI) data component that retrieves input data for model components from environmental information syste...

  13. Automated robust generation of compact 3D statistical shape models

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Tomazevic, Dejan; Pernus, Franjo

    2004-05-01

    Ascertaining the detailed shape and spatial arrangement of anatomical structures is important not only within diagnostic settings but also in the areas of planning, simulation, intraoperative navigation, and tracking of pathology. Robust, accurate and efficient automated segmentation of anatomical structures is difficult because of their complexity and inter-patient variability. Furthermore, the position of the patient during image acquisition, the imaging device and protocol, image resolution, and other factors induce additional variations in shape and appearance. Statistical shape models (SSMs) have proven quite successful in capturing structural variability. A possible approach to obtain a 3D SSM is to extract reference voxels by precisely segmenting the structure in one, reference image. The corresponding voxels in other images are determined by registering the reference image to each other image. The SSM obtained in this way describes statistically plausible shape variations over the given population as well as variations due to imperfect registration. In this paper, we present a completely automated method that significantly reduces shape variations induced by imperfect registration, thus allowing a more accurate description of variations. At each iteration, the derived SSM is used for coarse registration, which is further improved by describing finer variations of the structure. The method was tested on 64 lumbar spinal column CT scans, from which 23, 38, 45, 46 and 42 volumes of interest containing vertebra L1, L2, L3, L4 and L5, respectively, were extracted. Separate SSMs were generated for each vertebra. The results show that the method is capable of reducing the variations induced by registration errors.
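The core of a statistical shape model is a principal-component decomposition of corresponding points across the population. A minimal numpy sketch on synthetic landmark data (the toy shapes below, with a single planted mode of variation, stand in for the registered vertebra surfaces; all sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for corresponding surface points across a population:
# n_shapes samples of n_points 3-D landmarks (real input would come from registration)
n_shapes, n_points = 20, 50
mean_shape = rng.normal(size=(n_points, 3))
mode = rng.normal(size=(n_points, 3))            # one underlying mode of variation
weights = rng.normal(scale=2.0, size=n_shapes)
shapes = mean_shape + weights[:, None, None] * mode
X = shapes.reshape(n_shapes, -1)                 # flatten each shape to one row

# SSM: mean shape plus principal components of the centered shape matrix
x_bar = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - x_bar, full_matrices=False)
explained = s**2 / np.sum(s**2)
# A compact model keeps only the modes covering most of the variance
n_modes = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
```

Because the toy data contains exactly one mode, the 95%-variance model keeps a single component; reducing registration error, as the paper does, similarly concentrates variance into fewer, more meaningful modes.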

  14. Integration of finite element modeling with solid modeling through a dynamic interface

    NASA Technical Reports Server (NTRS)

    Shephard, Mark S.

    1987-01-01

    Finite element modeling is dominated by geometric modeling type operations. Therefore, an effective interface to geometric modeling requires access to both the model and the modeling functionality used to create it. The use of a dynamic interface that addresses these needs through the use of boundary data structures and geometric operators is discussed.

  15. Solid phase extraction-liquid chromatography (SPE-LC) interface for automated peptide separation and identification by tandem mass spectrometry

    NASA Astrophysics Data System (ADS)

    Hørning, Ole Bjeld; Theodorsen, Søren; Vorm, Ole; Jensen, Ole Nørregaard

    2007-12-01

    Reversed-phase solid phase extraction (SPE) is a simple and widely used technique for desalting and concentration of peptide and protein samples prior to mass spectrometry analysis. Often, SPE sample preparation is done manually and the samples eluted, dried and reconstituted into 96-well titer plates for subsequent LC-MS/MS analysis. To reduce the number of sample handling stages and increase throughput, we developed a robotic system to interface off-line SPE to LC-ESI-MS/MS. Samples were manually loaded onto disposable SPE tips that subsequently were connected in-line with a capillary chromatography column. Peptides were recovered from the SPE column and separated on the RP-LC column using isocratic elution conditions and analysed by electrospray tandem mass spectrometry. Peptide mixtures eluted within approximately 5 min, with individual peptide peak resolution of ~7 s (FWHM), making the SPE-LC suited for analysis of medium complex samples (3-12 protein components). For optimum performance, the isocratic flow rate was reduced to 30 nL/min, producing nanoelectrospray like conditions which ensure high ionisation efficiency and sensitivity. Using a modified autosampler for mounting and disposing of the SPE tips, the SPE-LC-MS/MS system could analyse six samples per hour, and up to 192 SPE tips in one batch. The relatively high sample throughput, medium separation power and high sensitivity makes the automated SPE-LC-MS/MS setup attractive for proteomics experiments as demonstrated by the identification of the components of simple protein mixtures and of proteins recovered from 2DE gels.

  16. NASA: Model development for human factors interfacing

    NASA Technical Reports Server (NTRS)

    Smith, L. L.

    1984-01-01

    The results of an intensive literature review in the general topics of human error analysis, stress and job performance, and accident and safety analysis revealed no usable techniques or approaches for analyzing human error in ground or space operations tasks. A task review model is described and proposed to be developed in order to reduce the degree of labor intensiveness in ground and space operations tasks. An extensive number of annotated references are provided.

  17. Model annotation for synthetic biology: automating model to nucleotide sequence conversion

    PubMed Central

    Misirli, Goksel; Hallinan, Jennifer S.; Yu, Tommy; Lawson, James R.; Wimalaratne, Sarala M.; Cooling, Michael T.; Wipat, Anil

    2011-01-01

    Motivation: The need for the automated computational design of genetic circuits is becoming increasingly apparent with the advent of ever more complex and ambitious synthetic biology projects. Currently, most circuits are designed through the assembly of models of individual parts such as promoters, ribosome binding sites and coding sequences. These low level models are combined to produce a dynamic model of a larger device that exhibits a desired behaviour. The larger model then acts as a blueprint for physical implementation at the DNA level. However, the conversion of models of complex genetic circuits into DNA sequences is a non-trivial undertaking due to the complexity of mapping the model parts to their physical manifestation. Automating this process is further hampered by the lack of computationally tractable information in most models. Results: We describe a method for automatically generating DNA sequences from dynamic models implemented in CellML and Systems Biology Markup Language (SBML). We also identify the metadata needed to annotate models to facilitate automated conversion, and propose and demonstrate a method for the markup of these models using RDF. Our algorithm has been implemented in a software tool called MoSeC. Availability: The software is available from the authors' web site http://research.ncl.ac.uk/synthetic_biology/downloads.html. Contact: anil.wipat@ncl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21296753
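The conversion step itself amounts to resolving each annotated model component to a sequence part and concatenating the parts in order. A deliberately toy sketch; the part names and sequences below are hypothetical and do not reflect MoSeC's actual data model:

```python
# Hypothetical part library (short placeholder strings, not real biological sequences)
part_sequences = {
    "promoter_P1": "TTGACAT",
    "rbs_R1": "AGGAGG",
    "cds_gfp": "ATGAAATAA",
}

# Ordered part references, as might be recovered from RDF annotations on the model
design = ["promoter_P1", "rbs_R1", "cds_gfp"]
sequence = "".join(part_sequences[name] for name in design)
```

The hard part the paper addresses is recovering `design` reliably from the model's metadata, not the concatenation itself.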

  18. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED MANAGEMENT AND LANDSCAPE ASSESSMENT

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Automated Geospatial Watershed Assessment (AGWA, see: www.tucson.ars.ag.gov/agwa) tool is a GIS interface jointly developed by the USDA-ARS, US-EPA, U. Arizona, and U. Wyoming to automate the parameterization and execution of the Soil and Water Assessment Tool (SWAT) and KINEmatic Runoff and EROSion...

  19. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execut...

  20. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execu...

  1. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parame...

  2. Aviation Safety: Modeling and Analyzing Complex Interactions between Humans and Automated Systems

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Brat, Guillaume; Clancey, William J.; Linde, Charlotte; Raimondi, Franco; Seah, Chin; Shafto, Michael

    2013-01-01

    The on-going transformation from the current US Air Traffic System (ATS) to the Next Generation Air Traffic System (NextGen) will force the introduction of new automated systems and will most likely cause automation to migrate from ground to air. This will yield new function allocations between humans and automation and therefore change the roles and responsibilities in the ATS. Yet, safety in NextGen is required to be at least as good as in the current system. We therefore need techniques to evaluate the safety of the interactions between humans and automation. We think that current human factors studies and simulation-based techniques will fall short given the complexity of the ATS, and that we need to add more automated techniques to simulations, such as model checking, which offers exhaustive coverage of the non-deterministic behaviors in nominal and off-nominal scenarios. In this work, we present a verification approach based both on simulations and on model checking for evaluating the roles and responsibilities of humans and automation. Models are created using Brahms (a multi-agent framework), and we show that the traditional Brahms simulations can be integrated with automated exploration techniques based on model checking, thus offering a complete exploration of the behavioral space of the scenario. Our formal analysis supports the notion of beliefs and probabilities to reason about human behavior. We demonstrate the technique with the Überlingen accident, since it exemplifies the authority problems that arise when conflicting advice is received from human and automated systems.

  3. Individual Differences in Response to Automation: The Five Factor Model of Personality

    ERIC Educational Resources Information Center

    Szalma, James L.; Taylor, Grant S.

    2011-01-01

    This study examined the relationship of operator personality (Five Factor Model) and characteristics of the task and of adaptive automation (reliability and adaptiveness--whether the automation was well-matched to changes in task demand) to operator performance, workload, stress, and coping. This represents the first investigation of how the Five

  4. Automated MRI cerebellar size measurements using active appearance modeling.

    PubMed

    Price, Mathew; Cardenas, Valerie A; Fein, George

    2014-12-01

    Although the human cerebellum has been increasingly identified as an important hub that shows potential for helping in the diagnosis of a large spectrum of disorders, such as alcoholism, autism, and fetal alcohol spectrum disorder, the high costs associated with manual segmentation and the low availability of reliable automated cerebellar segmentation tools have resulted in a limited focus on cerebellar measurement in human neuroimaging studies. We present here the CATK (Cerebellar Analysis Toolkit), which is based on the Bayesian framework implemented in FMRIB's FIRST. This approach involves training Active Appearance Models (AAMs) using hand-delineated examples. CATK can currently delineate the cerebellar hemispheres and three vermal groups (lobules I-V, VI-VII, and VIII-X). Linear registration with the low-resolution MNI152 template is used to provide initial alignment, and Point Distribution Models (PDMs) are parameterized using stellar sampling. The Bayesian approach models the relationship between shape and texture through computation of conditionals in the training set. Our method varies from the FIRST framework in that initial fitting is driven by 1D intensity profile matching, and the conditional likelihood function is subsequently used to refine fitting. The method was developed using T1-weighted images from 63 subjects that were imaged and manually labeled: 43 subjects were scanned once and were used for training models, and 20 subjects were imaged twice (with manual labeling applied to both runs) and used to assess reliability and validity. Intraclass correlation analysis shows that CATK is highly reliable (average test-retest ICCs of 0.96) and offers excellent agreement with the gold standard (average validity ICC of 0.87 against manual labels). 
Comparisons against an alternative atlas-based approach, SUIT (Spatially Unbiased Infratentorial Template), that registers images with a high-resolution template of the cerebellum, show that our AAM approach offers superior reliability and validity. Extensions of CATK to cerebellar hemisphere parcels are envisioned. PMID:25192657

  5. Automated MRI Cerebellar Size Measurements Using Active Appearance Modeling

    PubMed Central

    Price, Mathew; Cardenas, Valerie A.; Fein, George

    2014-01-01

    Although the human cerebellum has been increasingly identified as an important hub that shows potential for helping in the diagnosis of a large spectrum of disorders, such as alcoholism, autism, and fetal alcohol spectrum disorder, the high costs associated with manual segmentation and the low availability of reliable automated cerebellar segmentation tools have resulted in a limited focus on cerebellar measurement in human neuroimaging studies. We present here the CATK (Cerebellar Analysis Toolkit), which is based on the Bayesian framework implemented in FMRIB's FIRST. This approach involves training Active Appearance Models (AAMs) using hand-delineated examples. CATK can currently delineate the cerebellar hemispheres and three vermal groups (lobules I-V, VI-VII, and VIII-X). Linear registration with the low-resolution MNI152 template is used to provide initial alignment, and Point Distribution Models (PDMs) are parameterized using stellar sampling. The Bayesian approach models the relationship between shape and texture through computation of conditionals in the training set. Our method varies from the FIRST framework in that initial fitting is driven by 1D intensity profile matching, and the conditional likelihood function is subsequently used to refine fitting. The method was developed using T1-weighted images from 63 subjects that were imaged and manually labeled: 43 subjects were scanned once and were used for training models, and 20 subjects were imaged twice (with manual labeling applied to both runs) and used to assess reliability and validity. Intraclass correlation analysis shows that CATK is highly reliable (average test-retest ICCs of 0.96) and offers excellent agreement with the gold standard (average validity ICC of 0.87 against manual labels). 
Comparisons against an alternative atlas-based approach, SUIT (Spatially Unbiased Infratentorial Template), that registers images with a high-resolution template of the cerebellum, show that our AAM approach offers superior reliability and validity. Extensions of CATK to cerebellar hemisphere parcels are envisioned. PMID:25192657

  6. Sharp-interface model of electrodeposition and ramified growth.

    PubMed

    Nielsen, Christoffer P; Bruus, Henrik

    2015-10-01

    We present a sharp-interface model of two-dimensional ramified growth during quasisteady electrodeposition. Our model differs from previous modeling methods in that it includes the important effects of extended space-charge regions and nonlinear electrode reactions. The electrokinetics is described by a continuum model, but the discrete nature of the ions is taken into account by adding a random noise term to the electrode current. The model is validated by comparing its behavior in the initial stage with the predictions of a linear stability analysis. The main limitations of the model are the restriction to two dimensions and the assumption of quasisteady transport. PMID:26565235

  7. Designers' models of the human-computer interface

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Breedin, Sarah D.

    1993-01-01

    Understanding design models of the human-computer interface (HCI) may produce two types of benefits. First, interface development often requires input from two different types of experts: human factors specialists and software developers. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet, they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers do not start with a blank slate; rather they begin with a general model of the object they are designing. The authors' approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: human factors specialists with HCI design experience, software developers with HCI design experience, and a baseline group of computer users with no experience in HCI design. The components of the user interface included both display components such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help. The judgments of the three groups were analyzed using hierarchical cluster analysis and Pathfinder. These methods indicated, respectively, how the groups categorized the concepts, and network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.
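The hierarchical-cluster-analysis step can be illustrated with scipy on a hypothetical dissimilarity matrix over four interface concepts; the numbers below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Hypothetical dissimilarity judgments over four interface concepts
concepts = ["window", "text", "command language", "editing"]
D = np.array([
    [0.00, 0.20, 0.90, 0.80],
    [0.20, 0.00, 0.85, 0.90],
    [0.90, 0.85, 0.00, 0.30],
    [0.80, 0.90, 0.30, 0.00],
])

# Average-linkage hierarchical clustering on the condensed distance matrix
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
# With these toy distances, the display concepts (window, text) and the
# interaction concepts (command language, editing) fall into separate clusters
```

Cutting the dendrogram at two clusters recovers exactly the kind of global, categorical grouping the abstract attributes to cluster analysis.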

  8. On the Temkin model of solid liquid interface

    NASA Astrophysics Data System (ADS)

    Mori, Atsushi; Maksimov, Igor L.

    1999-04-01

    The multilayer mean-field model of the solid-liquid interface (SLI) is studied. The nonequilibrium state diagram of the SLI is constructed on the basis of a continuum approach for diffuse SLIs. The kinetics of the SLI propagation in nonequilibrium conditions is considered; the dependence of the SLI velocity and the SLI width on the undercooling is found.

  9. Automated medical diagnosis with fuzzy stochastic models: monitoring chronic diseases.

    PubMed

    Jeanpierre, Laurent; Charpillet, François

    2004-01-01

    As the world population ages, the patients per physician ratio keeps on increasing. This is even more important in the domain of chronic pathologies where people are usually monitored for years and need regular consultations. To address this problem, we propose an automated system to monitor a patient population, detecting anomalies in instantaneous data and in their temporal evolution, so that it could alert physicians. By handling the population of healthy patients autonomously and by drawing the physicians' attention to the patients-at-risk, the system allows physicians to spend comparatively more time with patients who need their services. In such a system, the interaction between the patients, the diagnosis module, and the physicians is very important. We have based this system on a combination of stochastic models, fuzzy filters, and strong medical semantics. We particularly focused on a particular tele-medicine application: the Diatelic Project. Its objective is to monitor chronic kidney-insufficient patients and to detect hydration troubles. During two years, physicians from the ALTIR have conducted a prospective randomized study of the system. This experiment clearly shows that the proposed system is really beneficial to the patients' health. PMID:15520535

  10. Atomic Models of Strong Solids Interfaces Viewed as Composite Structures

    NASA Astrophysics Data System (ADS)

    Staffell, I.; Shang, J. L.; Kendall, K.

    2014-02-01

    This paper looks back through the 1960s to the invention of carbon fibres and the theories of Strong Solids. In particular it focuses on the fracture mechanics paradox of strong composites containing weak interfaces. From Griffith theory, it is clear that three parameters must be considered in producing a high strength composite: minimising defects; maximising the elastic modulus; and raising the fracture energy along the crack path. The interface then introduces two further factors: elastic modulus mismatch causing crack stopping; and debonding along a brittle interface due to low interface fracture energy. Consequently, an understanding of the fracture energy of a composite interface is needed. Using an interface model based on atomic interaction forces, it is shown that a single layer of contaminant atoms between the matrix and the reinforcement can reduce the interface fracture energy by an order of magnitude, giving a large delamination effect. The paper also looks to a future in which cars will be made largely from composite materials. Radical improvements in automobile design are necessary because the number of cars worldwide is predicted to double. This paper predicts gains in fuel economy by suggesting a new theory of automobile fuel consumption using an adaptation of Coulomb's friction law. It is demonstrated both by experiment and by theoretical argument that the energy dissipated in standard vehicle tests depends only on weight. Consequently, moving from metal to fibre construction can give a factor 2 improved fuel economy performance, roughly the same as moving from a petrol combustion drive to hydrogen fuel cell propulsion. Using both options together can give a factor 4 improvement, as demonstrated by testing a composite car using the ECE15 protocol.
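The weight argument can be sketched in a few lines: if, Coulomb-style, the energy dissipated over a test cycle scales linearly with vehicle weight, then halving the mass halves the energy regardless of the other constants. The coefficient, masses, and distance below are illustrative, not the paper's measurements:

```python
def cycle_energy(mass_kg, mu=0.03, g=9.81, distance_m=100_000.0):
    """Energy dissipated (J) over a drive cycle if losses scale with weight,
    Coulomb-friction style: E = mu * m * g * d. All numbers are illustrative."""
    return mu * mass_kg * g * distance_m

steel_car = cycle_energy(1400.0)       # hypothetical metal-bodied car
composite_car = cycle_energy(700.0)    # the same car at half the mass
ratio = steel_car / composite_car      # factor 2, independent of mu, g, and d
```

The factor-2 ratio survives any choice of the friction-like coefficient, which is the point of the scaling argument.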

  11. Structural motifs at protein-protein interfaces: protein cores versus two-state and three-state model complexes.

    PubMed Central

    Tsai, C. J.; Xu, D.; Nussinov, R.

    1997-01-01

    The general similarity in the forces governing protein folding and protein-protein associations has led us to examine the similarity in the architectural motifs between the interfaces and the monomers. We have carried out extensive, all-against-all structural comparisons between the single-chain protein structural dataset and the interface dataset, derived both from all protein-protein complexes in the structural database and from interfaces generated via an automated crystal symmetry operation. We show that despite the absence of chain connections, the global features of the architectural motifs, present in monomers, recur in the interfaces, a reflection of the limited set of the folding patterns. However, although similarity has been observed, the details of the architectural motifs vary. In particular, the extent of the similarity correlates with the consideration of how the interface has been formed. Interfaces derived from two-state model complexes, where the chains fold cooperatively, display a considerable similarity to architectures in protein cores, as judged by the quality of their geometric superposition. On the other hand, the three-state model interfaces, representing binding of already folded molecules, manifest a larger variability and resemble the monomer architecture only in general outline. The origin of the difference between the monomers and the three-state model interfaces can be understood in terms of the different nature of the folding and the binding that are involved. Whereas in the former all degrees of freedom are available to the backbone to maximize favorable interactions, in rigid body, three-state model binding, only six degrees of freedom are allowed. Hence, residue or atom pair-wise potentials derived from protein-protein associations are expected to be less accurate, substantially increasing the number of computationally acceptable alternate binding modes (Finkelstein et al., 1995). PMID:9300480

  12. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.
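The Markov models generated behind such a GUI reduce, in the simplest case, to small continuous-time chains. As a deliberately minimal stand-in for the PMS reliability models the tool produces, consider a single repairable component (rates are illustrative):

```python
# Two-state (up/down) continuous-time Markov model of a repairable component
lam = 1e-4   # failure rate, per hour (illustrative)
mu = 1e-2    # repair rate, per hour (illustrative)

# Steady-state balance: lam * P_up = mu * P_down, with P_up + P_down = 1
availability = mu / (lam + mu)     # long-run fraction of time the component is up
mtbf = 1.0 / lam                   # mean time between failures, in hours
```

Real PMS structures couple many such components through reconfiguration transitions, which is what makes automatic model generation worthwhile.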

  13. Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.

    PubMed

    Engemann, Denis A; Gramfort, Alexandre

    2015-03-01

    Magnetoencephalography and electroencephalography (M/EEG) non-invasively measure the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain-computer interface (BCI) systems, in nearly all source localization methods for spatial whitening, as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as the empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue, the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation, all of these models can be tuned and compared based on the Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset, as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals. PMID:25541187

  14. Universality in Sandpiles, Interface Depinning, and Earthquake Models

    SciTech Connect

    Paczuski, M.; Boettcher, S.

    1996-07-01

    Recent numerical results for a model describing dispersive transport in ricepiles are explained by mapping the model to the depinning transition of an elastic interface that is dragged at one end through a random medium. The average velocity of transport vanishes with system size L as ⟨v⟩ ~ L^(2−D) ~ L^(−0.23), and the avalanche size distribution exponent is τ = 2 − 1/D ≈ 1.55, where D ≈ 2.23 from interface depinning. We conjecture that the purely deterministic Burridge-Knopoff "train" model for earthquakes is in the same universality class. © 1996 The American Physical Society.
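The exponents quoted in this abstract are mutually consistent; as a quick check, with $D \approx 2.23$ from interface depinning:

```latex
\langle v \rangle \sim L^{2-D} = L^{-0.23}, \qquad
\tau = 2 - \frac{1}{D} = 2 - \frac{1}{2.23} \approx 1.55
```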

  15. Multiscale modeling of droplet interface bilayer membrane networks.

    PubMed

    Freeman, Eric C; Farimani, Amir B; Aluru, Narayana R; Philen, Michael K

    2015-11-01

    Droplet interface bilayer (DIB) networks are considered for the development of stimuli-responsive membrane-based materials inspired by cellular mechanics. These DIB networks are often modeled as combinations of electrical circuit analogues, creating complex networks of capacitors and resistors that mimic the biomolecular structures. These empirical models are capable of replicating data from electrophysiology experiments, but these models do not accurately capture the underlying physical phenomena and consequently do not allow for simulations of material functionalities beyond the voltage-clamp or current-clamp conditions. The work presented here provides a more robust description of DIB network behavior through the development of a hierarchical multiscale model, recognizing that the macroscopic network properties are functions of their underlying molecular structure. The result of this research is a modeling methodology based on controlled exchanges across the interfaces of neighboring droplets. This methodology is validated against experimental data, and an extension case is provided to demonstrate possible future applications of droplet interface bilayer networks. PMID:26594262

  16. Numerical modeling of capillary electrophoresis - electrospray mass spectrometry interface design.

    PubMed

    Jarvas, Gabor; Guttman, Andras; Foret, Frantisek

    2015-01-01

    Capillary electrophoresis hyphenated with electrospray mass spectrometry (CE-ESI-MS) has emerged in the past decade as one of the most powerful bioanalytical techniques. As the sensitivity and efficiency of new CE-ESI-MS interface designs are continuously improving, numerical modeling can play an important role during their development. In this review, different aspects of computer modeling and simulation of CE-ESI-MS interfaces are comprehensively discussed. Relevant essentials of hydrodynamics as well as state-of-the-art modeling techniques are critically evaluated. Sheath liquid-, sheathless-, and liquid-junction interfaces are reviewed from the viewpoint of multidisciplinary numerical modeling, along with details of single- and multiphase models together with electric-field-mediated flows, electrohydrodynamics, and free fluid-surface methods. Practical examples are given to help non-specialists understand the basic principles and applications. Finally, alternative approaches like air amplifiers are also included. © 2014 Wiley Periodicals, Inc. Mass Spec Rev 34: 558-569, 2015. PMID:24676884

  17. Developing A Laser Shockwave Model For Characterizing Diffusion Bonded Interfaces

    SciTech Connect

    James A. Smith; Jeffrey M. Lacy; Barry H. Rabin

    2014-07-01

    12. Other advances in QNDE and related topics: Preferred Session Laser-ultrasonics. Developing A Laser Shockwave Model For Characterizing Diffusion Bonded Interfaces. 41st Annual Review of Progress in Quantitative Nondestructive Evaluation (QNDE) Conference, July 20-25, 2014, Boise Centre, 850 West Front Street, Boise, Idaho 83702. James A. Smith, Jeffrey M. Lacy, Barry H. Rabin, Idaho National Laboratory, Idaho Falls, ID. ABSTRACT: The US National Nuclear Security Agency has a Global Threat Reduction Initiative (GTRI) charged with reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low-enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU. The new LEU fuel is based on a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to complete the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large-amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, the deposition of laser energy into the containment layer on a specimen's surface is intractably complex. The shock wave energy is therefore inferred from the velocity on the back face and the depth of the impression left on the surface by the high-pressure plasma pulse created by the shock laser. To help quantify the stresses and strengths at the interface, a finite element model is being developed and validated by comparing numerical and experimental results for back face velocities and front face depressions.
This paper will report on initial efforts to develop a finite element model for laser shock.

  18. Symmetric model of compressible granular mixtures with permeable interfaces

    NASA Astrophysics Data System (ADS)

    Saurel, Richard; Le Martelot, Sébastien; Tosello, Robert; Lapébie, Emmanuel

    2014-12-01

    Compressible granular materials are involved in many applications, some of them related to energetic porous media. Gas permeation effects are important during their compaction stage, as well as their eventual chemical decomposition. Also, many situations involve porous media separated from pure fluids through two-phase interfaces. It is thus important to develop theoretical and numerical formulations to deal with granular materials in the presence of both two-phase interfaces and gas permeation effects. A similar topic was addressed for fluid mixtures and interfaces with the Discrete Equations Method (DEM) [R. Abgrall and R. Saurel, "Discrete equations for physical and numerical compressible multiphase mixtures," J. Comput. Phys. 186(2), 361-396 (2003)], but it seemed impossible to extend this approach to granular media, as intergranular stress [K. K. Kuo, V. Yang, and B. B. Moore, "Intragranular stress, particle-wall friction and speed of sound in granular propellant beds," J. Ballist. 4(1), 697-730 (1980)] and the associated configuration energy [J. B. Bdzil, R. Menikoff, S. F. Son, A. K. Kapila, and D. S. Stewart, "Two-phase modeling of deflagration-to-detonation transition in granular materials: A critical examination of modeling issues," Phys. Fluids 11, 378 (1999)] were present with significant effects. An approach to deal with fluid-porous media interfaces was derived in Saurel et al. ["Modelling dynamic and irreversible powder compaction," J. Fluid Mech. 664, 348-396 (2010)], but its validity was restricted to weak velocity disequilibrium only. Thanks to a deeper analysis, the DEM is successfully extended to granular media modelling in the present paper. It results in an enhanced version of the Baer and Nunziato ["A two-phase mixture theory for the deflagration-to-detonation transition (DDT) in reactive granular materials," Int. J. Multiphase Flow 12(6), 861-889 (1986)] model, as symmetry of the formulation is now preserved.
    Several computational examples are shown to validate and illustrate the method's capabilities.

  19. Generalized model for solid-on-solid interface growth

    NASA Astrophysics Data System (ADS)

    Richele, M. F.; Atman, A. P. F.

    2015-05-01

    We present a probabilistic cellular automaton (PCA) model to study solid-on-solid interface growth in which the transition rules depend on the local morphology of the profile obtained from the interface representation of the PCA. We show that the model is able to reproduce a wide range of patterns whose critical roughening exponents are associated with different universality classes, including random deposition, Edwards-Wilkinson, and Kardar-Parisi-Zhang. By means of the growth exponent method, we consider a particular set of the model parameters to build the two-dimensional phase diagram corresponding to a planar cut of the higher dimensional parameter space. A strong indication of phase transition between different universality classes can be observed, evincing different regimes of deposition, from layer-by-layer to Volmer-Weber and Stranski-Krastanov-like modes. We expect that this model can be useful to predict the morphological properties of interfaces obtained in different surface deposition problems, since it allows us to simulate several experimental situations by setting the values of the specific transition probabilities in a very simple and direct way.
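    Of the universality classes this model spans, random deposition is the simplest to reproduce numerically. The sketch below simulates plain random deposition and recovers its growth exponent β = 1/2 from the interface width W(t) ~ t^β; it is a minimal illustration of the growth-exponent method, not the authors' PCA with morphology-dependent transition rules:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_deposition(L=200, steps=2000, runs=30):
    """Drop one particle per step on a random column; return the interface
    width W(t) (RMS height fluctuation) averaged over independent runs."""
    w2 = np.zeros(steps)
    for _ in range(runs):
        h = np.zeros(L)
        for t in range(steps):
            h[rng.integers(L)] += 1.0
            w2[t] += h.var()
    return np.sqrt(w2 / runs)

W = random_deposition()
t = np.arange(1, len(W) + 1)
# Growth exponent beta from a log-log fit of W ~ t^beta;
# for random deposition the exact value is beta = 1/2.
beta = np.polyfit(np.log(t[100:]), np.log(W[100:]), 1)[0]
print(f"beta ≈ {beta:.2f}")
```

    Other members of the family (Edwards-Wilkinson, KPZ) require correlated deposition rules; here the columns are independent, which is what makes β = 1/2 exact.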

  20. Joint opening nonlinear mechanism: Interface smeared crack model

    NASA Astrophysics Data System (ADS)

    Kuo, J. S. H.

    1982-08-01

    Contraction joint opening behavior is studied. An economical model called the Interface Smeared Crack Model is developed to simulate the joint opening nonlinear mechanism. The model is based on the general smeared crack approach, with a specially introduced pushing-back operation intended to correct the local structural response at the element level. This method dramatically reduces the computational cost compared with that of a standard joint element analysis. It is demonstrated that it would be beneficial to include the joint opening mechanism in the dynamic analysis of arch dams, because joint opening will limit the peak tensile arch stresses and thus improve the seismic resistance of the structure.

  1. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

    Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance.
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  2. A Multiple Agent Model of Human Performance in Automated Air Traffic Control and Flight Management Operations

    NASA Technical Reports Server (NTRS)

    Corker, Kevin; Pisanich, Gregory; Condon, Gregory W. (Technical Monitor)

    1995-01-01

    A predictive model of human operator performance (flight crew and air traffic control (ATC)) has been developed and applied in order to evaluate the impact of automation developments in flight management and air traffic control. The model is used to predict the performance of a two-person flight crew and the ATC operators generating and responding to clearances aided by the Center TRACON Automation System (CTAS). The purpose of the modeling is to support evaluation and design of automated aids for flight management and airspace management, and to predict required changes in both air and ground procedures in response to advancing automation in both domains. Additional information is contained in the original extended abstract.

  3. Industrial Automation Mechanic Model Curriculum Project. Final Report.

    ERIC Educational Resources Information Center

    Toledo Public Schools, OH.

    This document describes a demonstration program that developed secondary level competency-based instructional materials for industrial automation mechanics. Program activities included task list compilation, instructional materials research, learning activity packet (LAP) development, construction of lab elements, system implementation,…

  4. Physical Parameters of Substorms from Automated Forward Modeling (AFM)

    NASA Astrophysics Data System (ADS)

    Connors, M.; McPherron, R. L.; Ponto, J.

    2006-12-01

    Automated Forward Modeling (AFM) inverts magnetic data to give physical parameters for electric current flow in near-Earth space. On a meridian, it gives the total electric current crossing it, and latitudinal boundaries. AFM uses nonlinear optimization of parameters of a forward model. Characteristic behaviors of substorms are seen: the current strengthens rapidly at an onset, with electrojet boundary motion. The current rises for approximately 30 minutes, but poleward border expansion progresses slightly faster. Recovery is accompanied by a current decrease, but not poleward retreat of the auroral oval on up to a two-hour timescale. Average characteristics of the current closely follow those of the AL index, with large variation in individual events. Boundary motion is similar to that deduced for the electron aurora from satellite studies. AFM allows both the current strength and the borders to be determined from ground magnetic data alone, generally available on a continuous basis. In this study, 63 separate onsets in 1997 were characterized using AFM on the CANOPUS Churchill meridian. The provisional AL index was also obtained for the same events. The parametrization of Weimer (1993), JGR 99, 11005 was found to be extremely accurate for both AL and meridian current, which is I(MA) = c_0 + c_1 t e^{pt}, with c_0 = 0.151 MA, c_1 = 1.63 MA/h, and p = -1.98/h. This permits a current/AL relation of I(MA) = -0.0322 - 0.00165*AL, where we stress that I and AL are averages. Further, on average the equatorward border of the electrojet does not change much at onset, while the poleward border's latitude in central dipole coordinates is well represented by 67.5 + 4.21*(1.0 - e^{-5.47*t}), with t the post-onset time in hours. These results agree very well with those of Frey et al. (2004), JGR 109, doi:10.1029/2004JA010607 for electron auroras observed using IMAGE WIC near the onset meridian.
AFM permits quantification of electrojet parameters, facilitating their interpretation and comparison to other quantities measured during substorms.
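    The empirical fits quoted in this abstract are easy to evaluate directly. The sketch below reproduces the reported numbers; all constants are from the abstract, and the peak time follows from setting dI/dt = 0:

```python
import math

# Weimer-type fit to the mean meridian current: I(MA) = c0 + c1*t*exp(p*t),
# with t the post-onset time in hours (constants quoted in the abstract).
c0, c1, p = 0.151, 1.63, -1.98

def current_MA(t):
    return c0 + c1 * t * math.exp(p * t)

# dI/dt = c1*exp(p*t)*(1 + p*t) = 0  =>  the current peaks at t* = -1/p.
t_peak = -1 / p              # ≈ 0.505 h
I_peak = current_MA(t_peak)  # ≈ 0.45 MA

# Mean poleward electrojet border (degrees, central dipole coordinates).
def poleward_border(t):
    return 67.5 + 4.21 * (1.0 - math.exp(-5.47 * t))

print(f"peak at {60 * t_peak:.0f} min after onset, I = {I_peak:.2f} MA")
```

    The ~30 min peak recovered here is consistent with the abstract's statement that the current rises for approximately 30 minutes after onset.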

  5. Thermal Edge-Effects Model for Automated Tape Placement of Thermoplastic Composites

    NASA Technical Reports Server (NTRS)

    Costen, Robert C.

    2000-01-01

    Two-dimensional thermal models for automated tape placement (ATP) of thermoplastic composites neglect the diffusive heat transport that occurs between the newly placed tape and the cool substrate beside it. Such lateral transport can cool the tape edges prematurely and weaken the bond. The three-dimensional, steady state, thermal transport equation is solved by the Green's function method for a tape of finite width being placed on an infinitely wide substrate. The isotherm for the glass transition temperature on the weld interface is used to determine the distance inward from the tape edge that is prematurely cooled, called the cooling incursion Delta a. For the Langley ATP robot, Delta a = 0.4 mm for a unidirectional lay-up of PEEK/carbon fiber composite, and Delta a = 1.2 mm for an isotropic lay-up. A formula for Delta a is developed and applied to a wide range of operating conditions. A surprise finding is that Delta a need not decrease as the Peclet number Pe becomes very large, where Pe is the dimensionless ratio of inertial to diffusive heat transport. Conformable rollers that increase the consolidation length would also increase Delta a, unless other changes are made, such as proportionally increasing the material speed. To compensate for premature edge cooling, the thermal input could be extended past the tape edges by the amount Delta a. This method should help achieve uniform weld strength and crystallinity across the width of the tape.
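    For orientation, the Peclet number Pe referred to above is the dimensionless ratio of heat carried along by material motion to heat spread by diffusion, Pe = vL/α. The snippet below computes it for round illustrative values; the speed, length scale, and diffusivity are assumptions for illustration, not the Langley ATP robot's actual parameters:

```python
# Illustrative Peclet number for tape placement. All three values below
# are assumed round numbers, NOT parameters from the paper.
v = 0.05        # material (tape) speed, m/s                 (assumed)
Lc = 0.01       # consolidation length scale, m              (assumed)
alpha = 4e-7    # thermal diffusivity of the composite, m^2/s (assumed)

Pe = v * Lc / alpha
print(f"Pe = {Pe:.0f}")   # 1250 for these illustrative values
```

    At Pe of this order, heat is swept downstream much faster than it diffuses, which is why the lateral edge cooling the paper analyzes is a genuinely three-dimensional effect.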

  6. Bacterial Adhesion to Hexadecane (Model NAPL)-Water Interfaces

    NASA Astrophysics Data System (ADS)

    Ghoshal, S.; Zoueki, C. R.; Tufenkji, N.

    2009-05-01

    The rates of biodegradation of NAPLs have been shown to be influenced by the adhesion of hydrocarbon-degrading microorganisms as well as their proximity to the NAPL-water interface. Several studies provide evidence for bacterial adhesion or biofilm formation at alkane- or crude oil-water interfaces, but there is a significant knowledge gap in our understanding of the processes that influence initial adhesion of bacteria onto NAPL-water interfaces. In this study, bacterial adhesion to hexadecane, and to a series of NAPLs comprising hexadecane amended with toluene and/or with asphaltenes and resins (the surface-active fractions of crude oils), was examined using a Microbial Adhesion to Hydrocarbons (MATH) assay. The microorganisms employed were Mycobacterium kubicae, Pseudomonas aeruginosa and Pseudomonas putida, which are hydrocarbon degraders or soil microorganisms. MATH assays as well as electrophoretic mobility measurements of the bacterial cells and the NAPL droplet surfaces in aqueous solutions were conducted at three solution pHs (4, 6 and 7). Asphaltenes and resins were shown to generally decrease microbial adhesion. Results of the MATH assay were not in qualitative agreement with theoretical predictions of bacteria-hydrocarbon interactions based on the extended Derjaguin-Landau-Verwey-Overbeek (XDLVO) model of free energy of interaction between the cell and NAPL droplets. In this model the free energy of interaction between two colloidal particles is predicted based on electrical double layer, van der Waals and hydrophobic forces. It is likely that the steric repulsion between bacteria and NAPL surfaces, caused by biopolymers on bacterial surfaces and asphaltenes and resins at the NAPL-water interface, contributed to the decreased adhesion compared to that predicted by the XDLVO model.
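    The XDLVO free energy mentioned above is the sum of three separation-dependent terms. A rough sketch, using textbook sphere-plate expressions for a cell of radius a near a flat interface; every parameter value below is illustrative, not taken from this study:

```python
import math

# Schematic XDLVO profile for a sphere (bacterium) near a flat NAPL surface.
# All parameter values are illustrative placeholders.
a = 0.5e-6           # cell radius, m
A = 1e-21            # Hamaker constant, J
kappa = 1e8          # inverse Debye length, 1/m
eps = 7.08e-10       # permittivity of water, F/m
psi1, psi2 = -0.03, -0.02    # surface potentials, V
dG_ab = -5e-3        # acid-base energy per area at contact, J/m^2
h0, lam = 1.57e-10, 0.6e-9   # contact separation and decay length, m

def U_lw(h):   # Lifshitz-van der Waals attraction
    return -A * a / (6 * h)

def U_el(h):   # electrical double layer (Hogg-Healy-Fuerstenau form)
    e = math.exp(-kappa * h)
    return math.pi * eps * a * (2 * psi1 * psi2 * math.log((1 + e) / (1 - e))
                                + (psi1**2 + psi2**2) * math.log(1 - e * e))

def U_ab(h):   # Lewis acid-base (hydrophobic) term
    return 2 * math.pi * a * lam * dG_ab * math.exp((h0 - h) / lam)

def U_total(h):
    return U_lw(h) + U_el(h) + U_ab(h)

kT = 4.11e-21  # thermal energy at ~298 K, J
print(f"U_total(5 nm) ≈ {U_total(5e-9) / kT:.0f} kT")
```

    The study's point is that a fourth term, steric repulsion from surface biopolymers and asphaltene/resin films, is missing from this balance, which is why the measured adhesion fell below the XDLVO prediction.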

  7. ShowFlow: A practical interface for groundwater modeling

    SciTech Connect

    Tauxe, J.D.

    1990-12-01

    ShowFlow was created to provide a user-friendly, intuitive environment for researchers and students who use computer modeling software. What traditionally has been a workplace available only to those familiar with command-line based computer systems is now within reach of almost anyone interested in the subject of modeling. In the case of this edition of ShowFlow, the user can easily experiment with simulations using the steady-state Gaussian plume groundwater pollutant transport model SSGPLUME, though ShowFlow can be rewritten to provide a similar interface for any computer model. Included in this thesis is all the source code for both the ShowFlow application for Microsoft® Windows™ and the SSGPLUME model, a User's Guide, and a Developer's Guide for converting ShowFlow to run other model programs. 18 refs., 13 figs.

  8. Behavior of asphaltene model compounds at w/o interfaces.

    PubMed

    Nordgård, Erland L; Sørland, Geir; Sjöblom, Johan

    2010-02-16

    Asphaltenes, present in significant amounts in heavy crude oil, contain subfractions capable of stabilizing water-in-oil emulsions. Still, the composition of these subfractions is not known in detail, and the actual mechanism behind emulsion stability is dependent on perceived interfacial concentrations and compositions. This study aims at utilizing polyaromatic surfactants which contain an acidic moiety as model compounds for the surface-active subfraction of asphaltenes. A modified pulse-field gradient (PFG) NMR method has been used to study droplet sizes and stability of emulsions prepared with asphaltene model compounds. The method has been compared to the standard microscopy droplet counting method. Arithmetic and volumetric mean droplet sizes as a function of surfactant concentration and water content clearly showed that the interfacial area was dependent on the available surfactant at the emulsion interface. Adsorption of the model compounds onto hydrophilic silica has been investigated by UV depletion, and minor differences in the chemical structure of the model compounds caused significant differences in the affinity toward this highly polar surface. The cross-sectional areas obtained have been compared to areas from the surface-to-volume ratio found by NMR and gave similar results for one of the two model compounds. The mean molecular area for this compound suggested a tilted geometry of the aromatic core with respect to the interface, which has also been proposed for real asphaltenic samples. The film behavior was further investigated using a liquid-liquid Langmuir trough, supporting the ability to form stable interfacial films. This study supports the conclusion that acidic, or strongly hydrogen-bonding, fractions can promote stable water-in-oil emulsions. The use of model compounds opens the way to studying emulsion behavior and demulsifier efficiency based on true interfacial concentrations rather than perceived interfaces. PMID:19852481

  9. Language Model Applications to Spelling with Brain-Computer Interfaces

    PubMed Central

    Mora-Cortes, Anderson; Manyakov, Nikolay V.; Chumerin, Nikolay; Van Hulle, Marc M.

    2014-01-01

    Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems in conjunction with other AAL technologies. PMID:24675760

  10. Developing a laser shockwave model for characterizing diffusion bonded interfaces

    NASA Astrophysics Data System (ADS)

    Lacy, Jeffrey M.; Smith, James A.; Rabin, Barry H.

    2015-03-01

    The US National Nuclear Security Agency has a Global Threat Reduction Initiative (GTRI) with the goal of reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU in high-power research reactors. The new LEU fuel is a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to support the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, because the deposition of laser energy into the containment layer on a specimen's surface is intractably complex, the shock wave energy is inferred from the surface velocity measured on the backside of the fuel plate and the depth of the impression left on the surface by the high pressure plasma pulse created by the shock laser. To help quantify the stresses generated at the interfaces, a finite element method (FEM) model is being utilized. This paper will report on initial efforts to develop and validate the model by comparing numerical and experimental results for back surface velocities and front surface depressions in a single aluminum plate representative of the fuel cladding.

  11. Developing a laser shockwave model for characterizing diffusion bonded interfaces

    SciTech Connect

    Lacy, Jeffrey M. Smith, James A. Rabin, Barry H.

    2015-03-31

    The US National Nuclear Security Agency has a Global Threat Reduction Initiative (GTRI) with the goal of reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU in high-power research reactors. The new LEU fuel is a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to support the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, because the deposition of laser energy into the containment layer on a specimen's surface is intractably complex, the shock wave energy is inferred from the surface velocity measured on the backside of the fuel plate and the depth of the impression left on the surface by the high pressure plasma pulse created by the shock laser. To help quantify the stresses generated at the interfaces, a finite element method (FEM) model is being utilized. This paper will report on initial efforts to develop and validate the model by comparing numerical and experimental results for back surface velocities and front surface depressions in a single aluminum plate representative of the fuel cladding.

  12. A Deformable Template Model with Feature Tracking for Automated IVUS Segmentation

    NASA Astrophysics Data System (ADS)

    Manandhar, Prakash; Chen, Chi Hau

    2010-02-01

    Intravascular Ultrasound (IVUS) can be used to create a 3D vascular profile of arteries for preventative prediction of Coronary Heart Disease (CHD). Segmentation of individual B-scan frames is a crucial step for creating profiles. Manual segmentation is too labor intensive to be of routine use. Automated segmentation algorithms are not yet accurate enough. We present a method of tracking features across frames of ultrasound data to increase automated segmentation accuracy using a deformable template model.

  13. Automated Eukaryotic Gene Structure Annotation Using EVidenceModeler and the Program to Assemble Spliced Alignments

    SciTech Connect

    Haas, B J; Salzberg, S L; Zhu, W; Pertea, M; Allen, J E; Orvis, J; White, O; Buell, C R; Wortman, J R

    2007-12-10

    EVidenceModeler (EVM) is presented as an automated eukaryotic gene structure annotation tool that reports eukaryotic gene structures as a weighted consensus of all available evidence. EVM, when combined with the Program to Assemble Spliced Alignments (PASA), yields a comprehensive, configurable annotation system that predicts protein-coding genes and alternatively spliced isoforms. Our experiments on both rice and human genome sequences demonstrate that EVM produces automated gene structure annotation approaching the quality of manual curation.
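    EVM's weighted-consensus idea can be caricatured in a few lines: each candidate gene structure is scored by the evidence types supporting its intervals, with each evidence type carrying a user-assigned weight. The weights, evidence names, and scoring function below are invented for illustration; this is not EVM's actual scoring code:

```python
# Toy weighted-consensus scoring in the spirit of EVM. Evidence types and
# weights are hypothetical placeholders.
EVIDENCE_WEIGHTS = {"ab_initio": 1, "protein_alignment": 5,
                    "transcript_alignment": 10}

def score_structure(intervals, support):
    """Score a candidate structure: for each (start, end) exon interval,
    add weight * interval length for every evidence type supporting it."""
    total = 0
    for iv in intervals:
        length = iv[1] - iv[0] + 1
        for ev in support.get(iv, []):
            total += EVIDENCE_WEIGHTS[ev] * length
    return total

candidate = [(100, 199), (300, 449)]
support = {(100, 199): ["ab_initio", "transcript_alignment"],
           (300, 449): ["protein_alignment"]}
print(score_structure(candidate, support))  # (1+10)*100 + 5*150 = 1850
```

    The real tool additionally resolves overlapping candidates along the genome and handles introns, strands, and alternative isoforms via PASA; the point here is only the weighted-evidence scoring.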

  14. Numerical modeling of materials processes with fluid-fluid interfaces

    NASA Astrophysics Data System (ADS)

    Yanke, Jeffrey Michael

    A numerical model has been developed to study material processes that depend on the interaction between fluids with a large discontinuity in thermophysical properties. A base model capable of solving equations of mass, momentum, energy conservation, and solidification has been altered to enable tracking of the interface between two immiscible fluids and correctly predict the interface deformation using a volume of fluid (VOF) method. Two materials processes investigated using this technique are Electroslag Remelting (ESR) and plasma spray deposition. ESR is a secondary melting technique that passes an AC current through an electrically resistive slag to provide the heat necessary to melt the alloy. The simulation tracks the interface between the slag and metal. The model was validated against industrial scale ESR ingots and was able to predict trends in melt rate, sump depth, macrosegregation, and liquid sump depth. In order to better understand the underlying physics of the process, several constant current ESR runs simulated the effects of freezing slag in the model. Including the solidifying slag in the simulations was found to have an effect on the melt rate and sump shape, but there is too much uncertainty in ESR slag property data at this time for quantitative predictions. The second process investigated in this work is the deposition of ceramic coatings via plasma spray deposition. In plasma spray deposition, powderized coating material is injected into a plasma that melts and carries the powder towards the substrate, where it impacts, flattening out and freezing. The impacting droplets pile up to form a porous coating. The model is used to simulate this rain of liquid ceramic particles impacting the substrate and forming a coating. Trends in local solidification time and porosity are calculated for various particle sizes and velocities. The predictions of decreasing porosity with increasing particle velocity match previous experimental results.
Also, a preliminary study was conducted to investigate the effects of substrate surface defects and droplet impact angle on the propensity to form columnar porosity.

  15. Modeling of biomedical interfaces with nonlinear friction properties.

    PubMed

    Mesfar, W; Shirazi-Adl, A; Dammak, M

    2003-01-01

    Proper isotropic and anisotropic friction constitutive equations are developed based on previous friction measurements at cancellous bone-porous coated implant interfaces exhibiting nonlinear load-displacement curves. The simulated friction response is dependent on relative tangential displacements in both orthogonal directions. The interface constitutive matrix contains cross-stiffness terms identical in isotropic friction but different in anisotropic friction. These terms are due mainly to nonlinearity in response and vanish in unidirectional friction along a principal direction and in cases with Coulomb or linear friction. The interface ultimate resistance is evaluated by an elliptic criterion which becomes circular in isotropic cases. These constitutive relations are implemented in a finite element program which is employed to analyze a bone cube sliding on top of a porous-surfaced metallic plate, an experimental model used in our earlier measurements. The results for both isotropic and anisotropic frictions demonstrate the coupling between two orthogonal directions. The direction of resultant displacement under a variable load coincides with that of the load only when the friction is isotropic with coupling terms considered. In anisotropic friction, the resultant displacement occurs in a direction different from that of loading. Our previous bi-directional measurements corroborate well the findings of this study. PMID:12652026
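    The elliptic ultimate-resistance criterion described in this abstract reduces to a one-line check: the interface slips when the shear tractions (t1, t2) along the two principal directions leave an ellipse whose semi-axes are the directional strengths (s1, s2), and with s1 == s2 the ellipse becomes the isotropic circle. A minimal sketch with assumed variable names:

```python
# Elliptic slip criterion for an interface with directional shear
# strengths s1, s2 (variable names assumed for illustration).
def slips(t1, t2, s1, s2):
    """Return True if the traction state lies outside the ellipse,
    i.e. the interface's ultimate resistance is exceeded."""
    return (t1 / s1) ** 2 + (t2 / s2) ** 2 > 1.0

# Isotropic case (circle of radius 5): well inside vs. clearly outside.
print(slips(3.0, 3.0, 5.0, 5.0))   # False
print(slips(4.5, 4.5, 5.0, 5.0))   # True

# Anisotropic case: the same traction can be safe along the strong
# direction but cause slip along the weak one.
print(slips(0.0, 4.2, 5.0, 4.0))   # True
```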

  16. Electrochemical Stability of Model Polymer Electrolyte/Electrode Interfaces

    NASA Astrophysics Data System (ADS)

    Hallinan, Daniel; Yang, Guang

    2015-03-01

    Polymer electrolytes are promising materials for high energy density rechargeable batteries. However, typical polymer electrolytes are not electrochemically stable at the charging voltage of advanced positive electrode materials. Although not yet reported in the literature, decomposition is expected to adversely affect the performance and lifetime of polymer-electrolyte-based batteries. In an attempt to better understand polymer electrolyte oxidation and design stable polymer electrolyte/positive electrode interfaces, we are studying electron transfer across model interfaces comprising gold nanoparticles and organic protecting ligands assembled into monolayer films. Gold nanoparticles provide large interfacial surface area, yielding a measurable electrochemical signal. They are inert and hence non-reactive with most polymer electrolytes and lithium salts. The surface can be easily modified with ligands of different chemistry and molecular weight. In our study, poly(ethylene oxide) (PEO) will serve as the polymer electrolyte and lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) will be the lithium salt. The effect of ligand type and molecular weight on both optical and electrical properties of the gold nanoparticle film will be presented. Finally, the electrochemical stability of the electrode/electrolyte interface and its dependence on interfacial properties will be presented.

  17. A biological model for controlling interface growth and morphology.

    SciTech Connect

    Hoyt, Jeffrey John; Holm, Elizabeth Ann

    2004-01-01

    Biological systems create proteins that perform tasks more efficiently and precisely than conventional chemicals. For example, many plants and animals produce proteins to control the freezing of water. Biological antifreeze proteins (AFPs) inhibit the solidification process, even below the freezing point. These molecules bond to specific sites at the ice/water interface and are theorized to suppress solidification chemically or geometrically. In this project, we investigated the theoretical and experimental data on AFPs and performed analyses to understand the unique physics of AFPs. The experimental literature was analyzed to determine chemical mechanisms and effects of protein bonding at ice surfaces, specifically thermodynamic freezing point depression, suppression of ice nucleation, decrease in dendrite growth kinetics, solute drag on the moving solid/liquid interface, and steric pinning of the ice interface. Steric pinning was found to be the most likely candidate to explain experimental results, including freezing point depression, growth morphologies, and thermal hysteresis. A new steric pinning model was developed and applied to AFPs, with excellent quantitative results. Understanding biological antifreeze mechanisms could enable important medical and engineering applications, but considerable future work will be necessary.

  18. A symbolic/subsymbolic interface protocol for cognitive modeling

    PubMed Central

    Simen, Patrick; Polk, Thad

    2009-01-01

    Researchers studying complex cognition have grown increasingly interested in mapping symbolic cognitive architectures onto subsymbolic brain models. Such a mapping seems essential for understanding cognition under all but the most extreme viewpoints (namely, that cognition consists exclusively of digitally implemented rules; or instead, involves no rules whatsoever). Making this mapping reduces to specifying an interface between symbolic and subsymbolic descriptions of brain activity. To that end, we propose parameterization techniques for building cognitive models as programmable, structured, recurrent neural networks. Feedback strength in these models determines whether their components implement classically subsymbolic neural network functions (e.g., pattern recognition), or instead, logical rules and digital memory. These techniques support the implementation of limited production systems. Though inherently sequential and symbolic, these neural production systems can exploit principles of parallel, analog processing from decision-making models in psychology and neuroscience to explain the effects of brain damage on problem solving behavior. PMID:20711520

  19. A Model of Process-Based Automation: Cost and Quality Implications in the Medication Management Process

    ERIC Educational Resources Information Center

    Spaulding, Trent Joseph

    2011-01-01

    The objective of this research is to understand how a set of systems, as defined by the business process, creates value. The three studies contained in this work develop the model of process-based automation. The model states that complementarities among systems are specified by handoffs in the business process. The model also provides theory to…

  1. Groundwater modeling and remedial optimization design using graphical user interfaces

    SciTech Connect

    Deschaine, L.M.

    1997-05-01

    The ability to accurately predict the behavior of chemicals in groundwater systems under natural flow circumstances or remedial screening and design conditions is the cornerstone of the environmental industry. The ability to do this efficiently, and to communicate the information effectively to the client and regulators, is what differentiates effective consultants from ineffective consultants. Recent advances in groundwater modeling graphical user interfaces (GUIs) are doing for numerical modeling what Windows™ did for DOS™. GUIs facilitate both the modeling process and information exchange. This Test Drive evaluates the performance of two GUIs--Groundwater Vistas and ModIME--on an actual groundwater model calibration and remedial design optimization project. In the early days of numerical modeling, data input consisted of large arrays of numbers that required intensive labor to input and troubleshoot. Model calibration was also manual, as was interpreting the reams of computer output for each of the tens or hundreds of simulations required to calibrate and perform optimal groundwater remedial design. During this period, the majority of the modeler's effort (and budget) was spent just getting the model running, as opposed to solving the environmental challenge at hand. GUIs take the majority of the grunt work out of the modeling process, thereby allowing the modeler to focus on designing optimal solutions.

  2. Towards an Improved Pilot-Vehicle Interface for Highly Automated Aircraft: Evaluation of the Haptic Flight Control System

    NASA Technical Reports Server (NTRS)

    Schutte, Paul; Goodrich, Kenneth; Williams, Ralph

    2012-01-01

    The control automation and interaction paradigm (e.g., manual, autopilot, flight management system) used on virtually all large highly automated aircraft has long been an exemplar of breakdowns in human factors and human-centered design. An alternative paradigm is the Haptic Flight Control System (HFCS) that is part of NASA Langley Research Center's Naturalistic Flight Deck Concept. The HFCS uses only stick and throttle for easily and intuitively controlling the actual flight of the aircraft without losing any of the efficiency and operational benefits of the current paradigm. Initial prototypes of the HFCS are being evaluated and this paper describes one such evaluation. In this evaluation we examined claims regarding improved situation awareness, appropriate workload, graceful degradation, and improved pilot acceptance. Twenty-four instrument-rated pilots were instructed to plan and fly four different flights in a fictitious airspace using a moderate fidelity desktop simulation. Three different flight control paradigms were tested: Manual control, Full Automation control, and a simplified version of the HFCS. Dependent variables included both subjective (questionnaire) and objective (SAGAT) measures of situation awareness, workload (NASA-TLX), secondary task performance, time to recognize automation failures, and pilot preference (questionnaire). The results showed a statistically significant advantage for the HFCS in a number of measures. Results that were not statistically significant still favored the HFCS. The results suggest that the HFCS does offer an attractive and viable alternative to the tactical components of today's FMS/autopilot control system. The paper describes further studies that are planned to continue to evaluate the HFCS.

  3. An Automated Off-Line Liquid Chromatography/Mass Spectrometry Interface Using Solid Phase, Time-of-Flight Secondary Ion Mass Spectrometry.

    NASA Astrophysics Data System (ADS)

    Beavis, Ronald Charles

    The design and construction of an automated off-line interface to couple a microbore HPLC to the Manitoba TOF I SIMS is described. In particular, the details of adapting the electrospray sample deposition method to deposit the eluent of a microbore HPLC column are discussed. Several improvements to the electrospray method were made to improve sensitivity. The spray was focussed and entrained in a nitrogen flow. The use of a hydrophilic boehmite (AlO(OH)) substrate reduced the effect of sodium contamination and increased the relative yield of protonated molecular ions in peptide samples. The design of a microbore HPLC system is described, including details of column plumbing and packing procedures. Comments are made as to the reliability and efficiency, in this application, of several column packing materials. The off-line interface was applied to the separation of peptide mixtures produced by trypsin digestion of larger peptides. The products were detected by mass spectrometry. The fragment ions in the mass spectra were analysed to give sequence information about the parent molecules. The sequence information available was closely correlated with the stability of the protonated molecular ion in the drift region of the spectrometer. The stability of a particular peptide appears to be a function of its residue sequence. The application of the interface to a fully protected peptide and to several steroid hormones is also discussed.

  4. Asynchronous brain computer interface using hidden semi-Markov models.

    PubMed

    Oliver, Gareth; Sunehag, Peter; Gedeon, Tom

    2012-01-01

    Ideal Brain Computer Interfaces need to perform asynchronously and in real time. We propose Hidden Semi-Markov Models (HSMM) to better segment and classify EEG data. The proposed HSMM method was tested against a simple windowed method on standard datasets. We found that our HSMM outperformed the simple windowed method. Furthermore, due to the computational demands of the algorithm, we adapted the HSMM algorithm to an online setting and demonstrate that this faster version of the algorithm can run in real time. PMID:23366489
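
The feature that distinguishes an HSMM from a plain HMM is its explicit state-duration distribution, which is what allows variable-length EEG segments to be modeled directly. The toy generative sketch below illustrates only that idea; the transition, duration, and emission tables are invented, and the paper's actual training and online decoding are not shown:

```python
import random

def sample_hsmm(trans, durations, emit, start, steps, rng):
    """Toy generative hidden semi-Markov model: each visited state
    persists for an explicit duration drawn from durations[state]
    before transitioning, rather than the implicitly geometric
    dwell times of a plain HMM."""
    out, s = [], start
    while len(out) < steps:
        d = rng.choice(durations[s])               # explicit dwell time
        for _ in range(min(d, steps - len(out))):
            out.append((s, rng.choice(emit[s])))   # one observation per tick
        s = rng.choices(list(trans[s]), weights=list(trans[s].values()))[0]
    return out

# invented two-state "rest"/"move" caricature of an EEG segmentation task
rng = random.Random(0)
trans = {"rest": {"move": 1.0}, "move": {"rest": 1.0}}
durations = {"rest": [3, 4], "move": [2]}
emit = {"rest": ["low"], "move": ["high"]}
seq = sample_hsmm(trans, durations, emit, "rest", 10, rng)
print(len(seq))   # -> 10
```

Decoding inverts this picture: given only the observations, the HSMM infers both the state labels and the segment boundaries, which a fixed-width windowed classifier cannot adapt.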

  5. Modeling the Energy Use of a Connected and Automated Transportation System (Poster)

    SciTech Connect

    Gonder, J.; Brown, A.

    2014-07-01

    Early research points to large potential impacts of connected and automated vehicles (CAVs) on transportation energy use - dramatic savings, increased use, or anything in between. Due to a lack of suitable data and integrated modeling tools to explore these complex future systems, analyses to date have relied on simple combinations of isolated effects. This poster proposes a framework for modeling the potential energy implications from increasing penetration of CAV technologies and for assessing technology and policy options to steer them toward favorable energy outcomes. Current CAV modeling challenges include estimating behavior change, understanding potential vehicle-to-vehicle interactions, and assessing traffic flow and vehicle use under different automation scenarios. To bridge these gaps and develop a picture of potential future automated systems, NREL is integrating existing modeling capabilities with additional tools and data inputs to create a more fully integrated CAV assessment toolkit.

  6. Electroviscoelasticity of liquid/liquid interfaces: fractional-order model.

    PubMed

    Spasic, Aleksandar M; Lazarevic, Mihailo P

    2005-02-01

    A number of theories that describe the behavior of liquid-liquid interfaces have been developed and applied to various dispersed systems, e.g., Stokes, Reiner-Rivlin, Ericksen, Einstein, Smoluchowski, and Kinch. A new theory of electroviscoelasticity describes the behavior of electrified liquid-liquid interfaces in fine dispersed systems and is based on a new constitutive model of liquids. According to this model, a liquid-liquid droplet or droplet-film structure (a collective of particles) is considered as a macroscopic system with internal structure determined by the way the molecules (ions) are tuned (structured) into the primary components of a cluster configuration. How the tuning/structuring occurs depends on the physical fields involved, both potential (elastic forces) and nonpotential (resistance forces). All these microelements of the primary structure can be considered as electromechanical oscillators assembled into groups, so that excitation by an external physical field may cause oscillations at the resonant/characteristic frequency of the system itself (coupling at the characteristic frequency). Up to now, three possible mathematical formalisms have been discussed related to the theory of electroviscoelasticity. The first is the tension tensor model, where the normal and tangential forces are considered only in mathematical formalism, regardless of their origin (mechanical and/or electrical). The second is the Van der Pol derivative model, presented by linear and nonlinear differential equations. Finally, the third model presents an effort to generalize the previous Van der Pol equation: the ordinary time derivative and integral are now replaced with the corresponding fractional-order time derivative and integral of order p<1. PMID:15576102
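
The third model's replacement of the ordinary derivative with one of fractional order p < 1 can be made concrete with the Grünwald-Letnikov discretization, a standard numerical scheme for fractional derivatives. This is a generic sketch of that scheme, not the authors' formulation:

```python
def gl_weights(p, n):
    """Grünwald-Letnikov weights w_k = (-1)**k * C(p, k), computed by
    the standard recurrence w_0 = 1, w_k = w_{k-1} * (1 - (p + 1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (p + 1.0) / k))
    return w

def gl_derivative(f, p, h):
    """Approximate the order-p fractional derivative of the uniformly
    sampled signal f (step h) at every grid point:
    D^p f(t_n) ~ h**(-p) * sum_k w_k * f(t_{n-k})."""
    w = gl_weights(p, len(f) - 1)
    return [sum(w[k] * f[n - k] for k in range(n + 1)) / h ** p
            for n in range(len(f))]

# sanity check: for p = 1 the weights collapse to (1, -1, 0, 0, ...),
# i.e. an ordinary backward difference, so the derivative of f(t) = t is 1
h = 0.1
f = [i * h for i in range(20)]
d = gl_derivative(f, 1.0, h)
print(round(d[5], 6))   # -> 1.0
```

For 0 < p < 1 all weights stay nonzero, so the derivative at time t_n depends on the whole history of the signal; that long-memory behavior is exactly what the fractional-order model adds over the ordinary Van der Pol equation.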

  7. Modeling organohalide perovskites for photovoltaic applications: From materials to interfaces

    NASA Astrophysics Data System (ADS)

    de Angelis, Filippo

    2015-03-01

    The field of hybrid/organic photovoltaics was revolutionized in 2012 by the first reports of solid-state solar cells based on organohalide perovskites, now topping 20% efficiency. First-principles modeling has been widely applied to the dye-sensitized solar cells field, and more recently to perovskite-based solar cells. The computational design and screening of new materials has played a major role in advancing the DSCs field. Suitable modeling strategies may also offer a view of the crucial heterointerfaces ruling the device operational mechanism. I will illustrate how simulation tools can be employed in the emerging field of perovskite solar cells. The performance of the proposed simulation toolbox along with the fundamental modeling strategies are presented using selected examples of relevant materials and interfaces. The main issue with hybrid perovskite modeling is to be able to accurately describe their structural, electronic and optical features. These materials show a degree of short range disorder, due to the presence of mobile organic cations embedded within the inorganic matrix, requiring their properties to be averaged over a molecular dynamics trajectory. Due to the presence of heavy atoms (e.g. Sn and Pb) their electronic structure must take into account spin-orbit coupling (SOC) in an effective way, possibly including GW corrections. The proposed SOC-GW method constitutes the basis for tuning the materials electronic and optical properties, rationalizing experimental trends. Modeling charge generation in perovskite-sensitized TiO2 interfaces is then approached based on a SOC-DFT scheme, describing alignment of energy levels in a qualitatively correct fashion. The role of interfacial chemistry on the device performance is finally discussed. The research leading to these results has received funding from the European Union Seventh Framework Programme [FP7/2007-2013] under Grant Agreement No. 604032 of the MESO project.

  8. Wheat stress indicator model, Crop Condition Assessment Division (CCAD) data base interface driver, user's manual

    NASA Technical Reports Server (NTRS)

    Hansen, R. F. (principal investigator)

    1981-01-01

    The use of the wheat stress indicator model CCAD data base interface driver is described. The purpose of this system is to interface the wheat stress indicator model with the CCAD operational data base. The interface driver routine decides what meteorological stations should be processed and calls the proper subroutines to process the stations.
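
The abstract only summarizes the driver's job: decide which meteorological stations to process and dispatch each to the proper subroutine. A hypothetical sketch of that control flow, with invented field names and station identifiers (the real driver's selection criteria are not given):

```python
def run_driver(stations, last_processed, process):
    """Hypothetical interface-driver control flow: select the stations
    whose data are newer than the date each was last processed,
    dispatch each to its processing subroutine, and report which
    stations ran."""
    selected = [s for s in stations if s["updated"] > last_processed[s["id"]]]
    for s in selected:
        process(s)                 # stands in for the model's subroutines
    return [s["id"] for s in selected]

processed = []
stations = [{"id": "STN-01", "updated": 5}, {"id": "STN-02", "updated": 2}]
last = {"STN-01": 3, "STN-02": 2}
print(run_driver(stations, last, processed.append))   # -> ['STN-01']
```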

  9. Data for Environmental Modeling (D4EM): Background and Applications of Data Automation

    EPA Science Inventory

    The Data for Environmental Modeling (D4EM) project demonstrates the development of a comprehensive set of open source software tools that overcome obstacles to accessing data needed by automating the process of populating model input data sets with environmental data available fr...

  10. A Binary Programming Approach to Automated Test Assembly for Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Finkelman, Matthew D.; Kim, Wonsuk; Roussos, Louis; Verschoor, Angela

    2010-01-01

    Automated test assembly (ATA) has been an area of prolific psychometric research. Although ATA methodology is well developed for unidimensional models, its application alongside cognitive diagnosis models (CDMs) is a burgeoning topic. Two suggested procedures for combining ATA and CDMs are to maximize the cognitive diagnostic index and to use a…
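
Binary-programming ATA selects a 0/1 inclusion variable per item to maximize an objective such as a summed cognitive diagnostic index under constraints. The greedy sketch below only conveys the flavor of that trade-off; the item pool, index values, and attribute tags are invented, and a production system would call an integer-programming solver rather than this heuristic:

```python
def assemble(items, length, attrs, min_per_attr=1):
    """Greedy sketch of automated test assembly for a cognitive
    diagnosis model: first guarantee every attribute is measured at
    least min_per_attr times, then fill the remaining slots with the
    items carrying the highest per-item diagnostic index ("cdi")."""
    chosen = []
    counts = {a: 0 for a in attrs}
    pool = sorted(items, key=lambda it: -it["cdi"])
    for a in attrs:                      # pass 1: attribute coverage
        for it in pool:
            if it not in chosen and a in it["attrs"] and counts[a] < min_per_attr:
                chosen.append(it)
                for b in it["attrs"]:
                    counts[b] += 1
    for it in pool:                      # pass 2: maximize summed index
        if len(chosen) >= length:
            break
        if it not in chosen:
            chosen.append(it)
    return [it["id"] for it in chosen[:length]]

# invented four-item pool: a pure top-3 pick would never measure attribute B
items = [{"id": 1, "cdi": 0.9, "attrs": ["A"]},
         {"id": 2, "cdi": 0.8, "attrs": ["A"]},
         {"id": 3, "cdi": 0.3, "attrs": ["B"]},
         {"id": 4, "cdi": 0.7, "attrs": ["A"]}]
print(assemble(items, 3, ["A", "B"]))   # -> [1, 3, 2]
```

The low-index item 3 enters only because the coverage constraint forces it in, which is the essential tension a binary program resolves optimally.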

  11. Model Based Control Design Using SLPS "Simulink PSpice Interface"

    NASA Astrophysics Data System (ADS)

    Moslehpour, Saeid; Kulcu, Ercan K.; Alnajjar, Hisham

    This paper elaborates on the new integration offered by the PSpice SLPS interface and the MATLAB Simulink products. SLPS links two widely used design products: PSpice and MathWorks' Simulink simulator. The SLPS simulation environment supports the substitution of an actual electronic block with an "ideal model", better known as the mathematical Simulink model, enabling the designer to identify and correct integration issues of electronics within a system. Moreover, a stress audit can be performed using PSpice smoke analysis, which helps verify whether components are working within the manufacturer's safe operating limits. This is invaluable, since many companies design and test the electronics separately from the system level; integration issues therefore usually are not discovered until the prototype stage, causing critical time delays in getting a product to market.

  12. Intelligent User Interfaces for Information Analysis: A Cognitive Model

    SciTech Connect

    Schwarting, Irene S.; Nelson, Rob A.; Cowell, Andrew J.

    2006-01-29

    Intelligent user interfaces (IUIs) for information analysis (IA) need to be designed with an intrinsic understanding of the analytical objectives and the dimensions of the information space. These analytical objectives are oriented around the requirement to provide decision makers with courses of action. Most tools available to support analysis barely skim the surface of the dimensions and categories of information used in analysis, and almost none are designed to address the ultimate requirement of decision support. This paper presents a high-level model of the cognitive framework of information analysts in the context of doing their jobs. It is intended that this model will enable the derivation of design requirements for advanced IUIs for IA.

  13. ORIGAMI -- The Oak Ridge Geometry Analysis and Modeling Interface

    SciTech Connect

    Burns, T.J.

    1996-04-01

    A revised "ray-tracing" package which is a superset of the geometry specifications of the radiation transport codes MORSE, MASH (GIFT Versions 4 and 5), HETC, and TORT has been developed by ORNL. Two additional CAD-based formats are also included as part of the superset: the native format of the BRL-CAD system--MGED, and the solid constructive geometry subset of the IGES specification. As part of this upgrade effort, ORNL has designed an X Windows-based utility (ORIGAMI) to facilitate the construction, manipulation, and display of the geometric models required by the MASH code. Since the primary design criterion for this effort was that the utility "see" the geometric model exactly as the radiation transport code does, ORIGAMI is designed to utilize the same "ray-tracing" package as the revised version of MASH. ORIGAMI incorporates the functionality of two previously developed graphical utilities, CGVIEW and ORGBUG, into a single consistent interface.

  14. The electrical behavior of GaAs-insulator interfaces - A discrete energy interface state model

    NASA Technical Reports Server (NTRS)

    Kazior, T. E.; Lagowski, J.; Gatos, H. C.

    1983-01-01

    The relationship between the electrical behavior of GaAs Metal Insulator Semiconductor (MIS) structures and the high density discrete energy interface states (0.7 and 0.9 eV below the conduction band) was investigated utilizing photo- and thermal emission from the interface states in conjunction with capacitance measurements. It was found that all essential features of the anomalous behavior of GaAs MIS structures, such as the frequency dispersion and the C-V hysteresis, can be explained on the basis of nonequilibrium charging and discharging of the high density discrete energy interface states.
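
The frequency dispersion traced to the deep discrete states can be rationalized by their slow thermal emission. The sketch below evaluates the standard emission time constant tau = 1/(sigma * v_th * N_c * exp(-E_T/kT)); the capture cross-section, thermal velocity, and GaAs density of states are textbook-order values assumed for illustration, not parameters from the paper:

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K

def emission_time(e_trap_ev, temp_k, sigma=1e-15, vth=1e7, nc=4.7e17):
    """Thermal emission time constant of a discrete interface state at
    depth e_trap_ev below the conduction band:
    tau = 1 / (sigma * v_th * N_c * exp(-E_T / kT)), with sigma in
    cm^2, v_th in cm/s and N_c in cm^-3 (illustrative values)."""
    return 1.0 / (sigma * vth * nc * math.exp(-e_trap_ev / (K_B * temp_k)))

# at room temperature the 0.9 eV state empties about exp(0.2 eV / kT),
# i.e. roughly 2e3, times more slowly than the 0.7 eV state
print(emission_time(0.9, 300.0) / emission_time(0.7, 300.0))
```

States this deep cannot follow an ac signal of even modest frequency, so the measured capacitance depends strongly on measurement frequency, consistent with the nonequilibrium charging and discharging picture of the abstract.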

  15. Growth/reflectance model interface for wheat and corresponding model

    NASA Technical Reports Server (NTRS)

    Suits, G. H.; Sieron, R.; Odenweller, J.

    1984-01-01

    The use of modeling to explore the possibility of discovering new and useful crop condition indicators which might be available from the Thematic Mapper and to connect these symptoms to the biological causes in the crop is discussed. A crop growth model was used to predict the day to day growth features of the crop as it responds biologically to the various environmental factors. A reflectance model was used to predict the character of the interaction of daylight with the predicted growth features. An atmospheric path radiance was added to the reflected daylight to simulate the radiance appearing at the sensor. Finally, the digitized data sent to a ground station were calculated. The crop under investigation is wheat.

  16. A robust and flexible Geospatial Modeling Interface (GMI) for environmental model deployment and evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper provides an overview of the GMI (Geospatial Modeling Interface) simulation framework for environmental model deployment and assessment. GMI currently provides access to multiple environmental models including AgroEcoSystem-Watershed (AgES-W), Nitrate Leaching and Economic Analysis 2 (NLEA...

  17. Modeling the Electrical Contact Resistance at Steel-Carbon Interfaces

    NASA Astrophysics Data System (ADS)

    Brimmo, Ayoola T.; Hassan, Mohamed I.

    2015-10-01

    In the aluminum smelting industry, electrical contact resistance at the stub-carbon (steel-carbon) interface has been recurrently reported to be of magnitudes that legitimately necessitate concern. Mitigating this via finite element modeling has been the focus of a number of investigations, with the pressure- and temperature-dependent contact resistance relation frequently cited as a factor that limits the accuracy of such models. In this study, pressure- and temperature-dependent relations are derived from the most extensively cited works that have experimentally characterized the electrical contact resistance at these contacts. These relations are applied in a validated thermo-electro-mechanical finite element model used to estimate the voltage drop across a steel-carbon laboratory setup. By comparing the models' estimate of the contact electrical resistance with experimental measurements, we deduce the applicability of the different relations over a range of temperatures. The ultimate goal of this study is to apply mathematical modeling in providing pressure- and temperature-dependent relations that best describe the steel-carbon electrical contact resistance and identify the best fit relation at specific thermodynamic conditions.
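
A finite element model needs the contact-resistance relation in functional form, typically fitted to measurements. The sketch below fits a pressure-dependent power law at fixed temperature by least squares in log space; the functional form and the synthetic data are illustrative assumptions, not the published steel-carbon measurements:

```python
import math

def fit_power_law(pressures, resistances):
    """Ordinary least squares in log space for R_c = a * P**(-b):
    taking logs gives ln R = ln a - b * ln P, a straight line whose
    slope and intercept recover b and a."""
    xs = [math.log(p) for p in pressures]
    ys = [math.log(r) for r in resistances]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -slope   # (a, b)

P = [0.1, 0.5, 1.0, 2.0, 5.0]        # contact pressure values (invented units)
R = [50.0 * p ** -0.6 for p in P]    # noiseless synthetic resistances
a, b = fit_power_law(P, R)
print(round(a, 3), round(b, 3))      # -> 50.0 0.6
```

Repeating such a fit at several temperatures yields the pressure- and temperature-dependent relation the abstract describes as the model input whose uncertainty limits accuracy.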

  19. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With the increasing knowledge about the natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further reduce rapid modeling and analysis. Using the widely-applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programing technology—Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a work station. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time cost becomes lower with an increasing number of processes (from two to five), this enhancement becomes less due to the accompanied increase in demand for message passing procedures between the master and all slave processes. Our case study demonstrates that the P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of a project size). Overall, the P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
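
The two tuning parameters of P-SWAT, the number of processes and the share of work kept on the master, can be explored with a toy cost model. The 0.05 per-process overhead below is an illustrative assumption standing in for message-passing cost, not a SWAT measurement:

```python
def speedup(n, master_frac, overhead=0.05):
    """Toy cost model for a master-slave parallel run: parallel time
    is the slower of the master's share and an even split of the rest
    over the n-1 slaves, plus a message-passing overhead that grows
    with process count.  Speedup is serial time over parallel time."""
    t = max(master_frac, (1.0 - master_frac) / (n - 1)) + overhead * (n - 1)
    return 1.0 / t

# balancing master and slaves (master_frac = 1/n) maximizes this model
for n in (2, 3, 5):
    print(n, round(speedup(n, 1.0 / n), 2))
```

Even this crude model reproduces the qualitative findings: speedups in the low single digits (here 1.82 to 2.5, versus the reported 1.74 to 3.36), diminishing returns as message passing grows with process count, and an optimum that balances the master's share against the slaves'.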

  20. Modeling interface-controlled phase transformation kinetics in thin films

    NASA Astrophysics Data System (ADS)

    Pang, E. L.; Vo, N. Q.; Philippe, T.; Voorhees, P. W.

    2015-05-01

    The Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation is widely used to describe phase transformation kinetics. This description, however, is not valid in finite size domains, in particular, thin films. A new computational model incorporating the level-set method is employed to study phase evolution in thin film systems. For both homogeneous (bulk) and heterogeneous (surface) nucleation, nucleation density and film thickness were systematically adjusted to study finite-thickness effects on the Avrami exponent during the transformation process. Only site-saturated nucleation with isotropic interface-kinetics controlled growth is considered in this paper. We show that the observed Avrami exponent is not constant throughout the phase transformation process in thin films with a value that is not consistent with the dimensionality of the transformation. Finite-thickness effects are shown to result in reduced time-dependent Avrami exponents when bulk nucleation is present, but not necessarily when surface nucleation is present.
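
The Avrami exponent the paper tracks is the slope of a classical JMAK plot, n = d ln(-ln(1-X)) / d ln t. A minimal sketch recovering it for bulk JMAK kinetics (the rate constant below is arbitrary):

```python
import math

def avrami_exponent(t1, x1, t2, x2):
    """Local Avrami exponent between two points (t, X) of a
    transformation curve: the slope of ln(-ln(1 - X)) versus ln t."""
    y1 = math.log(-math.log(1.0 - x1))
    y2 = math.log(-math.log(1.0 - x2))
    return (y2 - y1) / (math.log(t2) - math.log(t1))

# bulk JMAK with site saturation and isotropic 3-D growth:
# X = 1 - exp(-k t^3), so the exponent recovers the dimensionality
k = 1e-3
X = lambda t: 1.0 - math.exp(-k * t ** 3)
print(round(avrami_exponent(2.0, X(2.0), 4.0, X(4.0)), 3))   # -> 3.0
```

The paper's central observation is precisely that in a finite-thickness film this slope is no longer constant and no longer equals the transformation dimensionality, so applying it pointwise over a simulated X(t) curve reveals the finite-thickness reduction.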

  1. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  2. Automating Routine Tasks in AmI Systems by Using Models at Runtime

    NASA Astrophysics Data System (ADS)

    Serral, Estefanía; Valderas, Pedro; Pelechano, Vicente

    One of the most important challenges to be confronted in Ambient Intelligent (AmI) systems is to automate routine tasks on behalf of users. In this work, we confront this challenge presenting a novel approach based on models at runtime. This approach proposes a context-adaptive task model that allows routine tasks to be specified in an understandable way for users, facilitating their participation in the specification. These tasks are described according to context, which is specified in an ontology-based context model. Both the context model and the task model are also used at runtime. The approach provides a software infrastructure capable of automating the routine tasks as they were specified in these models by interpreting them at runtime.
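
Interpreting a task model against a context model at runtime reduces to evaluating each task's context condition and firing its actions when the condition holds. The sketch below is a deliberately minimal stand-in: the dictionary-based context, condition, and action vocabularies are invented, whereas the paper uses an ontology-based context model and a richer task notation:

```python
def automate(context, tasks):
    """Minimal runtime interpretation of a routine-task model: each
    task carries a context condition ("when") and a list of actions;
    whenever the current context satisfies the condition, the task's
    actions fire."""
    fired = []
    for task in tasks:
        if all(context.get(k) == v for k, v in task["when"].items()):
            fired.extend(task["actions"])
    return fired

tasks = [
    {"when": {"location": "home", "time": "night"},
     "actions": ["dim_lights", "lock_doors"]},
    {"when": {"location": "away"}, "actions": ["arm_alarm"]},
]
print(automate({"location": "home", "time": "night"}, tasks))
# -> ['dim_lights', 'lock_doors']
```

Because the models are interpreted rather than compiled in, users can change a routine by editing the task description, and the running system picks the change up without redeployment, which is the point of keeping models alive at runtime.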

  3. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGICAL MODELING TOOL FOR WATERSHED MANAGEMENT AND LANDSCAPE ASSESSMENT

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment (http://www.epa.gov/nerlesd1/land-sci/agwa/introduction.htm and www.tucson.ars.ag.gov/agwa) tool is a GIS interface jointly developed by the U.S. Environmental Protection Agency, USDA-Agricultural Research Service, and the University ...

  4. Evaluation of automated cell disruptor methods for oomycetous and ascomycetous model organisms

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Two automated cell disruptor-based methods for RNA extraction; disruption of thawed cells submerged in TRIzol Reagent (method QP), and direct disruption of frozen cells on dry ice (method CP), were optimized for a model oomycete, Phytophthora capsici, and compared with grinding in a mortar and pestl...

  5. Modeling Multiple Human-Automation Distributed Systems using Network-form Games

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume

    2012-01-01

    The paper describes at a high-level the network-form game framework (based on Bayes net and game theory), which can be used to model and analyze safety issues in large, distributed, mixed human-automation systems such as NextGen.

  6. Advances in automated noise data acquisition and noise source modeling for power reactors

    SciTech Connect

    Clapp, N.E. Jr.; Kryter, R.C.; Sweeney, F.J.; Renier, J.A.

    1981-01-01

    A newly expanded program, directed toward achieving a better appreciation of both the strengths and limitations of on-line, noise-based, long-term surveillance programs for nuclear reactors, is described. Initial results in the complementary experimental (acquisition and automated screening of noise signatures) and theoretical (stochastic modeling of likely noise sources) areas of investigation are given.

  7. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to
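
    The genetic algorithm described in this abstract can be sketched in a few lines. The item bank, attribute-coverage matrix, and fitness function below are invented for illustration and are not the authors' actual formulation:

```python
import random

def fitness(form, coverage):
    # Fitness of a test form: the minimum number of selected items
    # measuring any single attribute (rewards balanced coverage).
    n_attr = len(coverage[0])
    return min(sum(coverage[i][a] for i in form) for a in range(n_attr))

def assemble(coverage, form_size, pop_size=30, generations=100, seed=0):
    """Evolve fixed-size item subsets toward high attribute coverage."""
    rng = random.Random(seed)
    n_items = len(coverage)
    pop = [rng.sample(range(n_items), form_size) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda f: fitness(f, coverage), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            # Crossover: keep half of one parent, fill from the other.
            child = list(dict.fromkeys(p1[: form_size // 2] + p2))[:form_size]
            if rng.random() < 0.3:                # mutation: swap in an unused item
                unused = [i for i in range(n_items) if i not in child]
                child[rng.randrange(form_size)] = rng.choice(unused)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda f: fitness(f, coverage))

# Hypothetical bank: 12 items, 4 attributes (1 = item measures attribute).
coverage = [[1 if (i + a) % 3 == 0 else 0 for a in range(4)] for i in range(12)]
best = assemble(coverage, form_size=6)
```

    Real CDM-based assembly would replace this toy fitness with a psychometric objective and add content constraints, but the select-crossover-mutate loop is the same.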

  8. Automated volumetric grid generation for finite element modeling of human hand joints

    SciTech Connect

    Hollerbach, K.; Underhill, K.; Rainsberger, R.

    1995-02-01

    We are developing techniques for finite element analysis of human joints. These techniques need to provide high quality results rapidly in order to be useful to a physician. The research presented here increases model quality and decreases user input time by automating the volumetric mesh generation step.

  9. Description of waste pretreatment and interfacing systems dynamic simulation model

    SciTech Connect

    Garbrick, D.J.; Zimmerman, B.D.

    1995-05-01

The Waste Pretreatment and Interfacing Systems Dynamic Simulation Model was created to investigate the required pretreatment facility processing rates for both high level and low level waste so that the vitrification of tank waste can be completed according to the milestones defined in the Tri-Party Agreement (TPA). In order to achieve this objective, the processes upstream and downstream of the pretreatment facilities must also be included. The simulation model starts with retrieval of tank waste and ends with vitrification for both low level and high level wastes. This report describes the results of three simulation cases: one based on suggested average facility processing rates, one with facility rates determined so that approximately 6 new DSTs are required, and one with facility rates determined so that approximately no new DSTs are required. It appears, based on the simulation results, that reasonable facility processing rates can be selected so that no new DSTs are required by the TWRS program. However, this conclusion must be viewed with respect to the modeling assumptions, described in detail in the report. Also included in the report, in an appendix, are results of two sensitivity cases: one with glass plant water recycle streams recycled versus not recycled, and one employing the TPA SST retrieval schedule versus a more uniform SST retrieval schedule. Both recycling and retrieval schedule appear to have a significant impact on overall tank usage.

  10. Bayesian inverse modeling at the hydrological surface-subsurface interface

    NASA Astrophysics Data System (ADS)

    Cucchi, K.; Rubin, Y.

    2014-12-01

    In systems where surface and subsurface hydrological domains are highly connected, modeling surface and subsurface flow jointly is essential to accurately represent the physical processes and come up with reliable predictions of flows in river systems or stream-aquifer exchange. The flow quantification at the interface merging the two hydrosystem components is a function of both surface and subsurface spatially distributed parameters. In the present study, we apply inverse modeling techniques to a synthetic catchment with connected surface and subsurface hydrosystems. The model is physically-based and implemented with the Gridded Surface Subsurface Hydrologic Analysis software. On the basis of hydrograph measurement at the catchment outlet, we estimate parameters such as saturated hydraulic conductivity, overland and channel roughness coefficients. We compare maximum likelihood estimates (ML) with the parameter distributions obtained using the Bayesian statistical framework for spatially random fields provided by the Method of Anchored Distributions (MAD). While ML estimates maximize the probability of observing the data and capture the global trend of the target variables, MAD focuses on obtaining a probability distribution for the random unknown parameters and the anchors are designed to capture local features. We check the consistency between the two approaches and evaluate the additional information provided by MAD on parameter distributions. We also assess the contribution of adding new types of measurements such as water table depth or soil conductivity to the reduction of parameter uncertainty.
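
    The ML-versus-Bayesian contrast this abstract draws can be illustrated on a one-parameter toy problem. The forward model below is an invented stand-in for the actual GSSHA simulation, and the grid-based posterior is a simplification of MAD, not the method itself:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy forward model: outlet discharge as a function of a single
# conductivity-like parameter K (stand-in for the real simulator).
def forward(K, t):
    return np.exp(-K * t)  # simple recession curve

t = np.linspace(0.0, 5.0, 40)
K_true = 0.8
obs = forward(K_true, t) + rng.normal(scale=0.02, size=t.size)

K_grid = np.linspace(0.1, 2.0, 400)
sse = np.array([np.sum((obs - forward(K, t)) ** 2) for K in K_grid])

# Maximum likelihood: the single K minimizing the data misfit.
K_ml = K_grid[np.argmin(sse)]

# Bayesian view: a full posterior distribution over K (flat prior),
# from which uncertainty measures follow directly.
log_post = -sse / (2 * 0.02 ** 2)
post = np.exp(log_post - log_post.max())
dK = K_grid[1] - K_grid[0]
post /= post.sum() * dK            # normalize to a density
K_mean = np.sum(K_grid * post) * dK
```

    The point estimate and the posterior mean agree here; the posterior additionally quantifies how much the hydrograph constrains the parameter, which is what motivates adding measurements such as water table depth.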

  11. Graphical User Interface for Simulink Integrated Performance Analysis Model

    NASA Technical Reports Server (NTRS)

    Durham, R. Caitlyn

    2009-01-01

The J-2X Engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible in order to save the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink Model, without opening or altering the contents of the model. The GUI must allow for test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink Model, and get the output from the Simulink Model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI will construct a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.

  12. The Automated Geospatial Watershed Assessment Tool (AGWA): Developing Post-Fire Model Parameters Using Precipitation and Runoff Records from Gauged Watersheds

    NASA Astrophysics Data System (ADS)

    Sheppard, B. S.; Goodrich, D. C.; Guertin, D. P.; Burns, I. S.; Canfield, E.; Sidman, G.

    2014-12-01

    New tools and functionality have been incorporated into the Automated Geospatial Watershed Assessment Tool (AGWA) to assess the impacts of wildfire on runoff and erosion. AGWA (see: www.tucson.ars.ag.gov/agwa or http://www.epa.gov/esd/land-sci/agwa/) is a GIS interface jointly developed by the USDA-Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execution of a suite of hydrologic and erosion models (RHEM, WEPP, KINEROS2 and SWAT). Through an intuitive interface the user selects an outlet from which AGWA delineates and discretizes the watershed using a Digital Elevation Model (DEM). The watershed model elements are then intersected with terrain, soils, and land cover data layers to derive the requisite model input parameters. With the addition of a burn severity map AGWA can be used to model post wildfire changes to a catchment. By applying the same design storm to burned and unburned conditions a rapid assessment of the watershed can be made and areas that are the most prone to flooding can be identified. Post-fire precipitation and runoff records from gauged forested watersheds are now being used to make improvements to post fire model input parameters. Rainfall and runoff pairs have been selected from these records in order to calibrate parameter values for surface roughness and saturated hydraulic conductivity used in the KINEROS2 model. Several objective functions will be tried in the calibration process. Results will be validated. Currently Department of Interior Burn Area Emergency Response (DOI BAER) teams are using the AGWA-KINEROS2 modeling interface to assess hydrologically imposed risk immediately following wild fire. These parameter refinements are being made to further improve the quality of these assessments.

  13. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

A method is described to improve a linear nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency without involving eigensolutions or inversion of a large matrix.

  14. Calibration and application of an automated seepage meter for monitoring water flow across the sediment-water interface.

    PubMed

    Zhu, Tengyi; Fu, Dafang; Jenkinson, Byron; Jafvert, Chad T

    2015-04-01

The advective flow of sediment pore water is an important parameter for understanding natural geochemical processes within lake, river, wetland, and marine sediments and also for properly designing permeable remedial sediment caps placed over contaminated sediments. Automated heat pulse seepage meters can be used to measure the vertical component of sediment pore water flow (i.e., vertical Darcy velocity); however, little information on meter calibration as a function of ambient water temperature exists in the literature. As a result, a method with associated equations for calibrating a heat pulse seepage meter as a function of ambient water temperature is fully described in this paper. Results of meter calibration over the temperature range 7.5 to 21.2 °C indicate that errors in accuracy are significant if proper temperature-dependent calibration is not performed. The proposed calibration method allows for temperature corrections to be made automatically in the field at any ambient water temperature. The significance of these corrections is discussed. PMID:25754860
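
    Applying a temperature-dependent calibration of this kind in software amounts to interpolating a correction coefficient between bench measurements. The coefficient values below are invented for illustration; only the temperature range comes from the abstract:

```python
import numpy as np

# Hypothetical bench calibration: correction factor k (true / indicated
# Darcy velocity) measured at bath temperatures spanning the
# 7.5-21.2 °C range reported in the paper.
cal_temps = np.array([7.5, 12.0, 16.5, 21.2])   # °C
cal_coeff = np.array([1.18, 1.10, 1.04, 1.00])  # dimensionless

def corrected_darcy_velocity(indicated_v, water_temp_c):
    """Automatically correct a field reading at any ambient temperature."""
    k = np.interp(water_temp_c, cal_temps, cal_coeff)
    return k * indicated_v
```

    With the lookup table stored on the instrument, the correction can be applied in the field without post-processing, which is the automation the paper aims for.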

  15. An automated noncontact deposition interface for liquid chromatography matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Ericson, Christer; Phung, Qui T; Horn, David M; Peters, Eric C; Fitchett, Jonathan R; Ficarro, Scott B; Salomon, Arthur R; Brill, Laurence M; Brock, Ansgar

    2003-05-15

    A new multichannel deposition system was developed for off-line liquid chromatography/matrix-assisted laser desorption/ionization mass spectrometry (LC/MALDI-MS). This system employs a pulsed electric field to transfer the eluents from multiple parallel columns directly onto MALDI targets without the column outlets touching the target surface. The deposition device performs well with a wide variety of solvents that have different viscosities, vapor pressures, polarities, and ionic strengths. Surface-modified targets were used to facilitate concentration and precise positioning of samples, allowing for efficient automation of high-throughput MALDI analysis. The operational properties of this system allow the user to prepare samples using MALDI matrixes whose properties range from hydrophilic to hydrophobic. The latter, exemplified by alpha-cyano-4-hydroxycinnamic acid, were typically processed with a multistep deposition method consisting of precoating of individual spots on the target plate, sample deposition, and sample recrystallization steps. Using this method, 50 amol of angiotensin II was detected reproducibly with high signal-to-noise ratio after LC separation. Experimental results show that there is no significant decrease in chromatographic resolution using this device. To assess the behavior of the apparatus for complex mixtures, 5 microg of a tryptic digest of the cytosolic proteins of yeast was analyzed by LC/MALDI-MS and more than 13,500 unique analytes were detected in a single LC/MS analysis. PMID:12918971

  16. Design and Implementation of a Simple Model Interface for Component Based Modeling

    NASA Astrophysics Data System (ADS)

    Castronova, A. M.; Goodall, J. L.

    2008-12-01

    Component based architectures offer an alternative approach for building large, complex hydrologic modeling systems. In contrast to more traditional coding structures (i.e. sequential and modular modeling approaches), component-based modeling allows individuals to construct autonomous computational units that can be linked together through the exchange of shared boundary conditions during run-time. One of the challenges in component-based modeling is designing simple yet robust component interface definitions that allow hydrologic processes to be quickly incorporated into a modeling system. In this study we address this challenge by presenting a new interface design that simplifies the process of implementing the Open Modeling Interface (OpenMI). A component is created by (1) authoring an xml-based configuration file that defines the component's core properties and (2) creating a class that implements the newly defined interface and its three methods: initialize, perform time step, and finish. We will present this approach for creating components and demonstrate how it can be used to create a hydrologic model.
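
    The three-method interface this abstract describes (initialize, perform time step, finish) can be sketched as follows. The component classes and the rainfall-runoff coupling are hypothetical examples, not the authors' implementation:

```python
from abc import ABC, abstractmethod

class SimpleComponent(ABC):
    """Minimal three-method component interface, in the spirit of the
    simplified OpenMI design described above."""
    @abstractmethod
    def initialize(self, config): ...
    @abstractmethod
    def perform_time_step(self, inputs): ...
    @abstractmethod
    def finish(self): ...

class Rainfall(SimpleComponent):
    def initialize(self, config):
        self.series, self.t = config["series"], 0
    def perform_time_step(self, inputs):
        value = self.series[self.t]
        self.t += 1
        return {"precip": value}
    def finish(self):
        return None

class Runoff(SimpleComponent):
    def initialize(self, config):
        self.coeff, self.total = config["runoff_coeff"], 0.0
    def perform_time_step(self, inputs):
        q = self.coeff * inputs["precip"]
        self.total += q
        return {"runoff": q}
    def finish(self):
        return self.total

# Run-time linkage: the rainfall output is the runoff boundary condition.
rain, flow = Rainfall(), Runoff()
rain.initialize({"series": [2.0, 0.0, 5.0]})
flow.initialize({"runoff_coeff": 0.4})
for _ in range(3):
    flow.perform_time_step(rain.perform_time_step({}))
total_runoff = flow.finish()
```

    Each process lives in its own autonomous class and components exchange only boundary-condition dictionaries at run time, which is what lets new processes be dropped into the system without touching existing ones.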

  17. Challenges in Modeling of the Plasma-Material Interface

    NASA Astrophysics Data System (ADS)

    Krstic, Predrag; Meyer, Fred; Allain, Jean Paul

    2013-09-01

The plasma-material interface mixes materials of the two worlds, creating a new entity, a dynamical surface, which communicates between the two and represents one of the most challenging areas of multidisciplinary science, with many fundamental processes and synergies. How can an integrated theoretical-experimental approach be built? Without mutual validation of experiment and theory, the chances of obtaining believable results are very slim. The outreach of PMI science modeling at fusion plasma facilities is illustrated by the significant step forward in understanding achieved recently by quantum-classical modeling of lithiated carbon surfaces irradiated by deuterium, which showed a surprisingly large role of oxygen in deuterium retention and erosion chemistry. The plasma-facing walls of the next-generation fusion reactors will be exposed to high fluxes of neutrons and plasma particles and will operate at high temperatures for thermodynamic efficiency. To this end we have been studying the evolution dynamics of vacancies and interstitials to the saturated dpa doses of tungsten surfaces bombarded by self-atoms, as well as the plasma-surface interactions of the damaged surfaces (erosion, hydrogen and helium uptake, and fuzz formation). PSK and FWM acknowledge support of the ORNL LDRD program.

  18. A bidirectional interface growth model for cranial interosseous suture morphogenesis

    PubMed Central

    Zollikofer, Christoph P E; Weissmann, John David

    2011-01-01

    Interosseous sutures exhibit highly variable patterns of interdigitation and corrugation. Recent research has identified fundamental molecular mechanisms of suture formation, and computer models have been used to simulate suture morphogenesis. However, the role of bone strain in the development of complex sutures is largely unknown, and measuring suture morphologies beyond the evaluation of fractal dimensions remains a challenge. Here we propose a morphogenetic model of suture formation, which is based on the paradigm of Laplacian interface growth. Computer simulations of suture morphogenesis under various boundary conditions generate a wide variety of synthetic sutural forms. Their morphologies are quantified with a combination of Fourier analysis and principal components analysis, and compared with natural morphological variation in an ontogenetic sample of human interparietal suture lines. Morphometric analyses indicate that natural sutural shapes exhibit a complex distribution in morphospace. The distribution of synthetic sutures closely matches the natural distribution. In both natural and synthetic systems, sutural complexity increases during morphogenesis. Exploration of the parameter space of the simulation system indicates that variation in strain and/or morphogen sensitivity and viscosity of sutural tissue may be key factors in generating the large variability of natural suture complexity. PMID:21539540

  19. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    PubMed Central

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on graphics processing unit for high performance scalable parallel model fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for the applications in superresolution localization microscopy and fluorescence lifetime imaging microscopy. PMID:24130785
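
    The core iteration that GPU-LMFit parallelizes per pixel can be shown serially. This is a generic textbook Levenberg-Marquardt loop applied to an invented single-pixel exponential decay, not the paper's GPU code:

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, t, y, iters=50, lam=1e-3):
    """Damped Gauss-Newton (Levenberg-Marquardt) fit of y = f(t, p)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(t, p)                    # residual vector
        J = jac(t, p)                      # Jacobian of the model
        A, g = J.T @ J, J.T @ r
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), g)
        if np.sum((y - f(t, p + step)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                     # reject step, damp harder
    return p

# Single-pixel fluorescence-decay example: y = a * exp(-t / tau).
model = lambda t, p: p[0] * np.exp(-t / p[1])
jacobian = lambda t, p: np.column_stack([
    np.exp(-t / p[1]),
    p[0] * t / p[1] ** 2 * np.exp(-t / p[1]),
])
t = np.linspace(0.0, 5.0, 50)
y = model(t, [2.0, 1.5])
p_fit = levenberg_marquardt(model, jacobian, [1.0, 1.0], t, y)
```

    Since every pixel's fit is independent, running one such loop per GPU thread is what gives the dramatic speed-up reported for parametric imaging.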

  20. Models of Distance Higher Education: Fully Automated or Partially Human?

    ERIC Educational Resources Information Center

    Serdiukov, Peter

    2001-01-01

    There is little doubt that due to major advances in information technology, education will certainly become more technology-based. The purpose of this paper is to: (1) consider the models of contemporary universities offering distance programs; (2) analyze how technology changes the model of learning; and (3) explore how the human dimension will…

  1. Proteomics for Validation of Automated Gene Model Predictions

    SciTech Connect

    Zhou, Kemin; Panisko, Ellen A.; Magnuson, Jon K.; Baker, Scott E.; Grigoriev, Igor V.

    2008-02-14

    High-throughput liquid chromatography mass spectrometry (LC-MS)-based proteomic analysis has emerged as a powerful tool for functional annotation of genome sequences. These analyses complement the bioinformatic and experimental tools used for deriving, verifying, and functionally annotating models of genes and their transcripts. Furthermore, proteomics extends verification and functional annotation to the level of the translation product of the gene model.

  2. Automated parametrical antenna modelling for ambient assisted living applications

    NASA Astrophysics Data System (ADS)

    Kazemzadeh, R.; John, W.; Mathis, W.

    2012-09-01

    In this paper a parametric modeling technique for a fast polynomial extraction of the physically relevant parameters of inductively coupled RFID/NFC (radio frequency identification/near field communication) antennas is presented. The polynomial model equations are obtained by means of a three-step procedure: first, full Partial Element Equivalent Circuit (PEEC) antenna models are determined by means of a number of parametric simulations within the input parameter range of a certain antenna class. Based on these models, the RLC antenna parameters are extracted in a subsequent model reduction step. Employing these parameters, polynomial equations describing the antenna parameter with respect to (w.r.t.) the overall antenna input parameter range are extracted by means of polynomial interpolation and approximation of the change of the polynomials' coefficients. The described approach is compared to the results of a reference PEEC solver with regard to accuracy and computation effort.
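
    The final interpolation step of such a procedure amounts to fitting a low-order polynomial to parameter values extracted from a handful of full simulations. All numbers below are invented for illustration:

```python
import numpy as np

# Hypothetical data: inductance L (in microhenries) extracted from full
# PEEC simulations for several antenna side lengths d (mm) of one class.
d = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
L = np.array([0.55, 1.05, 1.70, 2.50, 3.45])

# A low-order polynomial in the input parameter then replaces the
# expensive field simulation over the whole parameter range.
coeffs = np.polyfit(d, L, deg=2)
L_model = np.poly1d(coeffs)
L_estimate = L_model(35.0)  # fast estimate at an unsimulated size
```

    Evaluating the polynomial costs microseconds, versus a full PEEC run per geometry, which is the trade the paper's accuracy-versus-effort comparison quantifies.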

  3. Manpower/cost estimation model: Automated planetary projects

    NASA Technical Reports Server (NTRS)

    Kitchen, L. D.

    1975-01-01

    A manpower/cost estimation model is developed which is based on a detailed level of financial analysis of over 30 million raw data points which are then compacted by more than three orders of magnitude to the level at which the model is applicable. The major parameter of expenditure is manpower (specifically direct labor hours) for all spacecraft subsystem and technical support categories. The resultant model is able to provide a mean absolute error of less than fifteen percent for the eight programs comprising the model data base. The model includes cost saving inheritance factors, broken down in four levels, for estimating follow-on type programs where hardware and design inheritance are evident or expected.

  4. Petri net-based modelling of human-automation conflicts in aviation.

    PubMed

    Pizziol, Sergio; Tessier, Catherine; Dehais, Frédéric

    2014-01-01

Analyses of aviation safety reports reveal that human-machine conflicts induced by poor automation design are remarkable precursors of accidents. A review of different crew-automation conflicting scenarios shows that they have a common denominator: the autopilot behaviour interferes with the pilot's goal regarding the flight guidance via 'hidden' mode transitions. Considering both the human operator and the machine (i.e. the autopilot or the decision functions) as agents, we propose a Petri net model of those conflicting interactions, which allows them to be detected as deadlocks in the Petri net. In order to test our Petri net model, we designed an autoflight system that was formally analysed to detect conflicting situations. We identified three conflicting situations that were integrated in an experimental scenario in a flight simulator with 10 general aviation pilots. The results showed that the conflicts that we had a priori identified as critical had impacted the pilots' performance. Indeed, the first conflict remained unnoticed by eight participants and led to a potential collision with another aircraft. The second conflict was detected by all the participants but three of them did not manage the situation correctly. The last conflict was also detected by all the participants but provoked a typical automation surprise situation, as only one declared that he had understood the autopilot behaviour. These behavioural results are discussed in terms of workload and number of fired 'hidden' transitions. Eventually, this study reveals that both formal and experimental approaches are complementary to identify and assess the criticality of human-automation conflicts. Practitioner Summary: We propose a Petri net model of human-automation conflicts. An experiment was conducted with general aviation pilots performing a scenario involving three conflicting situations to test the soundness of our formal approach. This study reveals that both formal and experimental approaches are complementary to identify and assess the criticality of human-automation conflicts. PMID:24444329
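
    Detecting conflicts as deadlocks in a Petri net, as the paper proposes, reduces to a reachability search for markings in which no transition is enabled. The toy net below is an invented illustration, not one of the study's scenarios:

```python
def enabled(marking, transitions):
    """Transitions whose input places all hold at least one token."""
    return [t for t, (pre, post) in transitions.items()
            if all(marking[p] > 0 for p in pre)]

def fire(marking, pre, post):
    m = dict(marking)
    for p in pre:
        m[p] -= 1
    for p in post:
        m[p] += 1
    return m

def find_deadlocks(m0, transitions):
    """Exhaustive reachability search; dead markings flag conflicts."""
    seen, stack, deadlocks = set(), [m0], []
    while stack:
        m = stack.pop()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        firable = enabled(m, transitions)
        if not firable:
            deadlocks.append(m)
        for t in firable:
            pre, post = transitions[t]
            stack.append(fire(m, pre, post))
    return deadlocks

# Toy net: the pilot arms a mode (t1), then the autopilot performs a
# 'hidden' transition (t2) into a state from which nothing can fire.
net = {"t1": (["pilot_ready"], ["mode_armed"]),
       "t2": (["mode_armed"], ["mode_hidden"])}
m0 = {"pilot_ready": 1, "mode_armed": 0, "mode_hidden": 0}
dead = find_deadlocks(m0, net)
```

    In the paper's setting each deadlock marking found this way corresponds to a candidate human-automation conflict worth testing in the simulator.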

  5. Analytical and numerical modeling of non-collinear shear wave mixing at an imperfect interface.

    PubMed

    Zhang, Ziyin; Nagy, Peter B; Hassan, Waled

    2016-02-01

    Non-collinear shear wave mixing at an imperfect interface between two solids can be exploited for nonlinear ultrasonic assessment of bond quality. In this study we developed two analytical models for nonlinear imperfect interfaces. The first model uses a finite nonlinear interfacial stiffness representation of an imperfect interface of vanishing thickness, while the second model relies on a thin nonlinear interphase layer to represent an imperfect interface region. The second model is actually a derivative of the first model obtained by calculating the equivalent interfacial stiffness of a thin isotropic nonlinear interphase layer in the quasi-static approximation. The predictions of both analytical models were numerically verified by comparison to COMSOL finite element simulations. These models can accurately predict the additional nonlinearity caused by interface imperfections based on the strength of the reflected and transmitted mixed longitudinal waves produced by them under non-collinear shear wave interrogation. PMID:26482394

  6. Sensitivity analysis of predictive models with an automated adjoint generator

    SciTech Connect

    Pin, F.G.; Oblow, E.M.

    1987-01-01

    The adjoint method is a well established sensitivity analysis methodology that is particularly efficient in large-scale modeling problems. The coefficients of sensitivity of a given response with respect to every parameter involved in the modeling code can be calculated from the solution of a single adjoint run of the code. Sensitivity coefficients provide a quantitative measure of the importance of the model data in calculating the final results. The major drawback of the adjoint method is the requirement for calculations of very large numbers of partial derivatives to set up the adjoint equations of the model. ADGEN is a software system that has been designed to eliminate this drawback and automatically implement the adjoint formulation in computer codes. The ADGEN system will be described and its use for improving performance assessments and predictive simulations will be discussed. 8 refs., 1 fig.
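
    For a linear model the adjoint trick the abstract describes fits in a few lines: one adjoint solve yields the sensitivity of a response to every parameter at once. The 2x2 system below is an invented example, checked against finite differences:

```python
import numpy as np

def sensitivities(A, dA_dp, b, c):
    """dR/dp_k for response R = c.x with model A(p) x = b, via a single
    adjoint solve: A.T lam = c  =>  dR/dp_k = -lam . (dA/dp_k) x."""
    x = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, c)          # the one adjoint run
    return np.array([-lam @ (dA @ x) for dA in dA_dp])

# Invented 2x2 model with two parameters entering the diagonal.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
dA_dp = [np.array([[1.0, 0.0], [0.0, 0.0]]),   # dA/dp1
         np.array([[0.0, 0.0], [0.0, 1.0]])]   # dA/dp2
b = np.array([1.0, 2.0])
c = np.array([1.0, 0.0])
s = sensitivities(A, dA_dp, b, c)
```

    The forward approach would need one perturbed solve per parameter; the adjoint approach needs one extra solve total, which is why it scales to the large models ADGEN targets.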

  7. Automated mask creation from a 3D model using Faethm.

    SciTech Connect

    Schiek, Richard Louis; Schmidt, Rodney Cannon

    2007-11-01

We have developed and implemented a method which, given a three-dimensional object, can infer from its topology the two-dimensional masks needed to produce that object with surface micro-machining. The masks produced by this design tool can be generic, process-independent masks or, if given process constraints, specific masks for a target process. This design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model.

  8. Web Interface for Modeling Fog Oil Dispersion During Training

    NASA Astrophysics Data System (ADS)

    Lozar, Robert C.

    2002-08-01

Predicting the dispersion of military camouflage training materials, Smokes and Obscurants (SO), is a rapidly improving science. The Defense Threat Reduction Agency (DTRA) developed the Hazard Prediction and Assessment Capability (HPAC), a software package that allows the modeling of the dispersion of several potentially detrimental materials. ERDC/CERL characterized the most commonly used SO material, fog oil in HPAC terminology, to predict the SO dispersion characteristics in various training scenarios that might have an effect on Threatened and Endangered Species (TES) at DoD installations. To make the configuration more user friendly, the researchers implemented an initial web-interface version of HPAC with a modifiable fog-oil component that can be applied at any installation in the world. By this method, an installation SO trainer can plan the location and time of fog oil training activities and is able to predict the degree to which various areas will be affected, which is particularly important in ensuring the appropriate management of TES on a DoD installation.

  9. TASSER-Lite: an automated tool for protein comparative modeling.

    PubMed

    Pandit, Shashi Bhushan; Zhang, Yang; Skolnick, Jeffrey

    2006-12-01

    This study involves the development of a rapid comparative modeling tool for homologous sequences by extension of the TASSER methodology, developed for tertiary structure prediction. This comparative modeling procedure was validated on a representative benchmark set of proteins in the Protein Data Bank composed of 901 single domain proteins (41-200 residues) having sequence identities between 35-90% with respect to the template. Using a Monte Carlo search scheme with the length of runs optimized for weakly/nonhomologous proteins, TASSER often provides appreciable improvement in structure quality over the initial template. However, on average, this requires approximately 29 h of CPU time per sequence. Since homologous proteins are unlikely to require the extent of conformational search as weakly/nonhomologous proteins, TASSER's parameters were optimized to reduce the required CPU time to approximately 17 min, while retaining TASSER's ability to improve structure quality. Using this optimized TASSER (TASSER-Lite), we find an average improvement in the aligned region of approximately 10% in root mean-square deviation from native over the initial template. Comparison of TASSER-Lite with the widely used comparative modeling tool MODELLER showed that TASSER-Lite yields final models that are closer to the native. TASSER-Lite is provided on the web at (http://cssb.biology.gatech.edu/skolnick/webservice/tasserlite/index.html). PMID:16963505

  10. A 2-D Interface Element for Coupled Analysis of Independently Modeled 3-D Finite Element Subdomains

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.

    1998-01-01

Over the past few years, the development of the interface technology has provided an analysis framework for embedding detailed finite element models within finite element models which are less refined. This development has enabled the use of cascading substructure domains without the constraint of coincident nodes along substructure boundaries. The approach used for the interface element is based on an alternate variational principle often used in deriving hybrid finite elements. The resulting system of equations exhibits a high degree of sparsity but gives rise to a non-positive definite system which causes difficulties with many of the equation solvers in general-purpose finite element codes. Hence the global system of equations is generally solved using a decomposition procedure with pivoting. The research reported to date for the interface element includes the one-dimensional line interface element and the two-dimensional surface interface element. Several large-scale simulations, including geometrically nonlinear problems, have been reported using the one-dimensional interface element technology; however, only limited applications are available for the surface interface element. In the applications reported to date, the geometry of the interfaced domains exactly matches even though the spatial discretization within each domain may be different. As such, the spatial modeling of each domain, the interface elements, and the assembled system is still laborious. The present research is focused on developing a rapid modeling procedure based on a parametric interface representation of independently defined subdomains which are also independently discretized.

  11. A simplified cellular automaton model for city traffic

    SciTech Connect

Simon, P.M.; Nagel, K.

    1997-12-31

    The authors systematically investigate the effect of blockage sites in a cellular automata model for traffic flow. Different scheduling schemes for the blockage sites are considered. None of them returns a linear relationship between the fraction of green time and the throughput. The authors use this information for a fast implementation of traffic in Dallas.
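
    A minimal version of such a model (a v_max = 1 cellular automaton on a ring with one signalized cell; all parameters invented) already exhibits the non-linear relationship between green time and throughput that the abstract reports:

```python
import random

def simulate(green_fraction, density=0.3, length=100, steps=500,
             signal_site=50, period=20, seed=1):
    """Parallel-update traffic CA on a ring with one traffic light;
    returns the mean number of car moves per time step (throughput)."""
    rng = random.Random(seed)
    road = [False] * length
    for i in rng.sample(range(length), int(density * length)):
        road[i] = True
    moved = 0
    for t in range(steps):
        green = (t % period) < green_fraction * period
        new = [False] * length
        for i in range(length):
            if not road[i]:
                continue
            nxt = (i + 1) % length
            if road[nxt] or (nxt == signal_site and not green):
                new[i] = True              # blocked: car stays put
            else:
                new[nxt] = True            # parallel update: car advances
                moved += 1
        road = new
    return moved / steps

throughput_full = simulate(green_fraction=1.0)
throughput_quarter = simulate(green_fraction=0.25)
```

    Halving the green time does not simply halve the throughput: jams form behind the blockage site and relax nonlinearly, which is why scheduling schemes matter.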

  12. Automated biowaste sampling system urine subsystem operating model, part 1

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Mangialardi, J. K.; Rosen, F.

    1973-01-01

The urine subsystem automatically provides for the collection, volume sensing, and sampling of urine from six subjects during space flight. Verification of the subsystem design was a primary objective of the current effort, which was accomplished through the detailed design, fabrication, and verification testing of an operating model of the subsystem.

  13. Automated Volumetric Breast Density derived by Shape and Appearance Modeling.

    PubMed

    Malkov, Serghei; Kerlikowske, Karla; Shepherd, John

    2014-03-22

The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by the single energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal Component Analysis (PCA) was applied to extract significant and uncorrelated components of %FGV. To build an appearance model, we transformed the breast images into the mean texture image by piecewise linear image transformation. Using PCA, the image pixel grey-scale values were converted into a reduced set of shape and texture features. The stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV with shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume and actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient was 0.77, for the first shape PCA component and actual breast volume. The stepwise regression method with ten-fold cross-validation to predict %FGV from shape and appearance variables and other system outcome parameters generated a model with a correlation of r2 = 0.8. In conclusion, the shape and appearance model demonstrated excellent feasibility for extracting variables useful for automatic %FGV estimation. Further exploration and testing of this approach are warranted. PMID:25083119
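
    The pipeline of PCA feature extraction followed by regression against %FGV can be sketched on synthetic stand-in data (no mammograms involved; the dimensions, the two-component latent structure, and the linear relationship are all invented, and plain least squares stands in for stepwise selection):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 200 "images" flattened to 50 pixels each, with %FGV
# linearly related to two latent shape components plus noise.
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 50))
images = latent @ basis + 0.1 * rng.normal(size=(200, 50))
fgv = 30 + 5 * latent[:, 0] - 3 * latent[:, 1] + rng.normal(scale=0.5, size=200)

# PCA: project the pixel data onto its leading uncorrelated components.
X = images - images.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:2].T              # first two principal-component scores

# Regress %FGV on the PCA scores (a simplification of stepwise selection).
A = np.column_stack([np.ones(len(scores)), scores])
coef, *_ = np.linalg.lstsq(A, fgv, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((fgv - pred) ** 2) / np.sum((fgv - fgv.mean()) ** 2)
```

    When the outcome really is driven by a few shape components, as assumed here, the reduced PCA scores recover almost all of the predictable variance, mirroring the r2 = 0.8 model the study reports.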

  14. Automated volumetric breast density derived by shape and appearance modeling

    NASA Astrophysics Data System (ADS)

    Malkov, Serghei; Kerlikowske, Karla; Shepherd, John

    2014-03-01

    The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by a single-energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal Component Analysis (PCA) was applied to extract significant and uncorrelated components of %FGV. To build an appearance model, we transformed the breast images into the mean texture image by piecewise linear image transformation. Using PCA, the image pixel grey-scale values were converted into a reduced set of shape and texture features. Stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV from the shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume, actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient was 0.77, for the first shape PCA component and actual breast volume. The stepwise regression method with ten-fold cross-validation to predict %FGV from shape and appearance variables and other system outcome parameters generated a model with a correlation of r2 = 0.8. In conclusion, a shape and appearance model demonstrated excellent feasibility to extract variables useful for automatic %FGV estimation. Further exploration and testing of this approach are warranted.
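    The pipeline this record describes — PCA on shape/appearance features followed by regression against %FGV — can be sketched in outline. Everything below is a synthetic stand-in (latent factors playing the role of shape/texture components), not the authors' stepwise-selection code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: latent "shape/appearance" factors generate the raw
# per-image feature vectors, and the target %FGV depends on those factors.
Z = rng.normal(size=(200, 5))                   # hidden factors
Ld = rng.normal(size=(5, 50))                   # loadings onto 50 raw features
X = Z @ Ld + 0.1 * rng.normal(size=(200, 50))   # raw features (one row/image)
y = 2.0 * Z[:, 0] - Z[:, 1] + 0.1 * rng.normal(size=200)  # illustrative "%FGV"

# PCA via SVD of the centered data: uncorrelated components ranked by variance.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T                          # reduced shape/appearance scores

# Ordinary least squares of %FGV on the component scores (the record's
# stepwise forward/backward selection is omitted for brevity).
A = np.column_stack([np.ones(len(y)), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

    Because the raw features are low-rank plus noise, the leading principal components span the latent space and the regression recovers the target to within the noise level.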

  15. Model-based metrics of human-automation function allocation in complex work environments

    NASA Astrophysics Data System (ADS)

    Kim, So Young

    Function allocation is the design decision which assigns work functions to all agents in a team, both human and automated. Efforts to guide function allocation systematically have been studied in many fields such as engineering, human factors, team and organization design, management science, and cognitive systems engineering. Each field focuses on certain aspects of function allocation, but not all; thus, an independent discussion of each does not address all necessary issues with function allocation. Four distinctive perspectives emerged from a review of these fields: technology-centered, human-centered, team-oriented, and work-oriented. Each perspective focuses on different aspects of function allocation: capabilities and characteristics of agents (automation or human), team structure and processes, and work structure and the work environment. Together, these perspectives identify the following eight issues with function allocation: 1) Workload, 2) Incoherency in function allocations, 3) Mismatches between responsibility and authority, 4) Interruptive automation, 5) Automation boundary conditions, 6) Function allocation preventing human adaptation to context, 7) Function allocation destabilizing the humans' work environment, and 8) Mission performance. Addressing these issues systematically requires formal models and simulations that include all necessary aspects of human-automation function allocation: the work environment, the dynamics inherent to the work, agents, and relationships among them. Also, addressing these issues requires not only a (static) model, but also a (dynamic) simulation that captures temporal aspects of work such as the timing of actions and their impact on the agent's work. 
Therefore, with properly modeled work as described by the work environment, the dynamics inherent to the work, agents, and relationships among them, a modeling framework developed by this thesis, which includes static work models and dynamic simulation, can capture the issues with function allocation. Then, based on the eight issues, eight types of metrics are established. The purpose of these metrics is to assess the extent to which each issue exists with a given function allocation. Specifically, the eight types of metrics assess workload, coherency of a function allocation, mismatches between responsibility and authority, interruptive automation, automation boundary conditions, human adaptation to context, stability of the human's work environment, and mission performance. Finally, to validate the modeling framework and the metrics, a case study was conducted modeling four different function allocations between a pilot and flight deck automation during the arrival and approach phases of flight. A range of pilot cognitive control modes and maximum human taskload limits were also included in the model. The metrics were assessed for these four function allocations and analyzed to validate the capability of the metrics to identify important issues in given function allocations. In addition, the design insights provided by the metrics are highlighted. This thesis concludes with a discussion of mechanisms for further validating the modeling framework and function allocation metrics developed here, and highlights where these developments can be applied in research and in the design of function allocations in complex work environments such as aviation operations.

  16. An Improvement in Thermal Modelling of Automated Tape Placement Process

    NASA Astrophysics Data System (ADS)

    Barasinski, Anas; Leygue, Adrien; Soccard, Eric; Poitou, Arnaud

    2011-01-01

    The thermoplastic tape placement process offers the possibility of manufacturing large laminated composite parts with all kinds of geometries (e.g. doubly curved). This process is based on the fusion bonding of a thermoplastic tape on a substrate. It has received growing interest in recent years because of its out-of-autoclave capability. In order to control and optimize the quality of the manufactured part, we need to predict the temperature field throughout the processing of the laminate. In this work, we focus on a thermal modeling of this process which takes into account the imperfect bonding existing between the different layers of the substrate by introducing a thermal contact resistance into the model. This study is supported by experimental results which show that the value of the thermal resistance evolves with the temperature and pressure applied to the material.

  17. Morphology Based Cohesive Zone Modeling of the Cement-Bone Interface from Postmortem Retrievals

    PubMed Central

    Waanders, Daan; Janssen, Dennis; Mann, Kenneth A.; Verdonschot, Nico

    2011-01-01

    In cemented total hip arthroplasty, the cement-bone interface can be considerably degenerated after less than one year of in-vivo service; this makes the interface much weaker relative to the direct post-operative situation. It is, however, still unknown how these degenerated interfaces behave under mixed-mode loading and how this is related to the morphology of the interface. In this study, we used a finite element approach to analyze the mixed-mode response of the cement-bone interface taken from postmortem retrievals and we investigated whether it was feasible to generate a fully elastic and a failure cohesive model based on only morphological input parameters. Computed tomography-based finite element analysis models of the postmortem cement-bone interface were generated and the interface morphology was determined. The models were loaded until failure in multiple directions by allowing cracking of the bone and cement components and including periodic boundary conditions. The resulting stiffness was related to the interface morphology. A closed-form mixed-mode cohesive model that included failure was determined and related to the interface morphology. The responses of the finite element simulations compare satisfactorily with experimental observations, although the magnitudes of the strength and stiffness are somewhat overestimated. Surprisingly, the finite element simulations predict no failure under shear loading, and a considerable normal compression is generated which prevents dilation of the interface. The obtained mixed-mode stiffness response could then be related to the interface morphology and formulated into an elastic cohesive zone model. Finally, the acquired data could be used as an input for a cohesive model that also includes interface failure. PMID:21783159
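    The morphology-based mixed-mode law in this record is specific to the retrieval data, but the general idea of a cohesive zone model — traction as a function of interface opening, with an elastic rise, softening, and full failure — can be illustrated with a standard bilinear law (an assumed generic form, not the authors' model):

```python
def bilinear_traction(delta, delta0, delta_f, t_max):
    """Bilinear cohesive law: linear elastic rise to peak traction t_max at
    opening delta0, then linear softening to zero traction at delta_f,
    where the interface is fully debonded."""
    if delta <= 0.0:
        return 0.0                                           # closed interface
    if delta < delta0:
        return t_max * delta / delta0                        # elastic branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                               # failed
```

    The area under the curve, t_max * delta_f / 2, is the fracture energy dissipated in debonding a unit of interface area.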

  18. A continuously growing web-based interface structure databank

    NASA Astrophysics Data System (ADS)

    Erwin, N. A.; Wang, E. I.; Osysko, A.; Warner, D. H.

    2012-07-01

    The macroscopic properties of materials can be significantly influenced by the presence of microscopic interfaces. The complexity of these interfaces coupled with the vast configurational space in which they reside has been a long-standing obstacle to the advancement of true bottom-up material behavior predictions. In this vein, atomistic simulations have proven to be a valuable tool for investigating interface behavior. However, before atomistic simulations can be utilized to model interface behavior, meaningful interface atomic structures must be generated. The generation of structures has historically been carried out disjointly by individual research groups, and thus, has constituted an overlap in effort across the broad research community. To address this overlap and to lower the barrier for new researchers to explore interface modeling, we introduce a web-based interface structure databank (www.isdb.cee.cornell.edu) where users can search, download and share interface structures. The databank is intended to grow via two mechanisms: (1) interface structure donations from individual research groups and (2) an automated structure generation algorithm which continuously creates equilibrium interface structures. In this paper, we describe the databank, the automated interface generation algorithm, and compare a subset of the autonomously generated structures to structures currently available in the literature. To date, the automated generation algorithm has been directed toward aluminum grain boundary structures, which can be compared with experimentally measured population densities of aluminum polycrystals.

  19. Disturbed state model for sand-geosynthetic interfaces and application to pull-out tests

    NASA Astrophysics Data System (ADS)

    Pal, Surajit; Wije Wathugala, G.

    1999-12-01

    Successful numerical simulation of geosynthetic-reinforced earth structures depends on selecting proper constitutive models for soils, geosynthetics and soil-geosynthetic interfaces. Many constitutive models are available for modelling soils and geosynthetics. However, constitutive models for soil-geosynthetic interfaces which can capture most of the important characteristics of interface response are not readily available. In this paper, an elasto-plastic constitutive model based on the disturbed state concept (DSC) for geosynthetic-soil interfaces has been presented. The proposed model is capable of capturing most of the important characteristics of interface response, such as dilation, hardening and softening. The behaviour of interfaces under the direct shear test has been predicted by the model. The present model has been implemented in the finite element procedure in association with the thin-layer element. Five pull-out tests with two different geogrids have been simulated numerically using FEM. For the calibration of the constitutive models used in FEM, the standard laboratory tests used are: (1) triaxial tests for the sand, (2) direct shear tests for the interfaces and (3) axial tension tests for the geogrids. The results of the finite element simulations of pull-out tests agree well with the test data. The proposed model can be used for the stress-deformation study of geosynthetic-reinforced embankments through numerical simulation.

  20. Piloted Simulation of a Model-Predictive Automated Recovery System

    NASA Technical Reports Server (NTRS)

    Liu, James (Yuan); Litt, Jonathan; Sowers, T. Shane; Owens, A. Karl; Guo, Ten-Huei

    2014-01-01

    This presentation describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.

  1. An Energy Approach to a Micromechanics Model Accounting for Nonlinear Interface Debonding.

    SciTech Connect

    Tan, H.; Huang, Y.; Geubelle, P. H.; Liu, C.; Breitenfeld, M. S.

    2005-01-01

    We developed a micromechanics model to study the effect of nonlinear interface debonding on the constitutive behavior of composite materials. While implementing this micromechanics model into a large simulation code for solid rockets, we are challenged by problems such as tension/shear coupling and the nonuniform distribution of the displacement jump at the particle/matrix interfaces. We therefore propose an energy approach to solve these problems. This energy approach calculates the potential energy of the representative volume element, including the contribution from the interface debonding. By minimizing the potential energy with respect to the variation of the interface displacement jump, the traction-balanced interface debonding can be found and the macroscopic constitutive relations established. This energy approach has the ability to treat different load conditions in a unified way, and the interface cohesive law can take any arbitrary form. In this paper, the energy approach is verified to give the same constitutive behaviors as reported before.
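    A minimal one-dimensional sketch of the energy approach, under assumed forms: a bulk stiffness k in series with an interface obeying an exponential cohesive law. Minimizing the potential energy over the displacement jump recovers the traction balance condition k (u - delta) = phi'(delta). All parameter values are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

k = 10.0          # bulk (particle/matrix) stiffness, illustrative
Gc, d0 = 2.0, 1.0  # cohesive energy and characteristic opening (assumed law)

def phi(delta):
    # Exponential cohesive law (an assumed form; the approach admits any law)
    return Gc * (1.0 - np.exp(-delta / d0))

def potential(delta, u):
    # Elastic energy of the bulk plus the interface debonding energy
    return 0.5 * k * (u - delta) ** 2 + phi(delta)

def equilibrium_jump(u):
    # Minimizing the potential w.r.t. the displacement jump enforces
    # traction balance across the interface: k (u - delta) = phi'(delta)
    res = minimize_scalar(potential, args=(u,), bounds=(0.0, u),
                          method='bounded')
    return res.x

u = 0.5                                   # total applied displacement
delta = equilibrium_jump(u)
traction_bulk = k * (u - delta)           # traction carried by the bulk
traction_interface = (Gc / d0) * np.exp(-delta / d0)  # cohesive traction
```

    With these parameters the potential is convex, so the interior minimum is unique and the two tractions coincide at equilibrium.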

  2. User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework

    USGS Publications Warehouse

    Markstrom, Steven L.; Koczot, Kathryn M.

    2008-01-01

    The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.

  3. Automated drusen detection in retinal images using analytical modelling algorithms

    PubMed Central

    2011-01-01

    Background Drusen are common features in the ageing macula associated with exudative Age-Related Macular Degeneration (ARMD). They are visible in retinal images and their quantitative analysis is important in the follow-up of ARMD. However, their evaluation is tedious and difficult to reproduce when performed manually. Methods This article proposes a methodology for Automatic Drusen Deposits Detection and quantification in Retinal Images (AD3RI) by using digital image processing techniques. It includes an image pre-processing method to correct the uneven illumination and to normalize the intensity contrast with smoothing splines. The drusen detection uses a gradient-based segmentation algorithm that isolates drusen and provides basic drusen characterization to the modelling stage. The detected drusen are then fitted by Modified Gaussian functions, producing a model of the image that is used to evaluate the affected area. Twenty-two images were graded by eight experts, with the aid of custom-made software, and compared with AD3RI. This comparison was based both on the total area and on a pixel-to-pixel analysis. The coefficient of variation, the intraclass correlation coefficient, the sensitivity, the specificity and the kappa coefficient were calculated. Results The ground truth used in this study was the experts' average grading. In order to evaluate the proposed methodology three indicators were defined: AD3RI compared to the ground truth (A2G); each expert compared to the other experts (E2E) and a standard Global Threshold method compared to the ground truth (T2G). The results obtained for the three indicators, A2G, E2E and T2G, were: coefficient of variation 28.8 %, 22.5 % and 41.1 %, intraclass correlation coefficient 0.92, 0.88 and 0.67, sensitivity 0.68, 0.67 and 0.74, specificity 0.96, 0.97 and 0.94, and kappa coefficient 0.58, 0.60 and 0.49, respectively. 
Conclusions The gradings produced by AD3RI agreed with the ground truth to a degree similar to the experts' (with higher reproducibility) and significantly better than the Threshold Method. Despite the higher sensitivity of the Threshold method, explained by its over-segmentation bias, it has lower specificity and a lower kappa coefficient. Therefore, it can be concluded that AD3RI accurately quantifies drusen, using a reproducible method with benefits for ARMD evaluation and follow-up. PMID:21749717
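    As a contrast to AD3RI's model-fitting approach, the Global Threshold baseline it is compared against can be sketched in a few lines: binarize, label connected components, and report per-lesion and total affected area. The image below is synthetic and purely illustrative:

```python
import numpy as np
from scipy import ndimage

# Synthetic grey-scale patch with two bright drusen-like blobs
# (illustrative only; AD3RI itself uses illumination correction and
# Modified Gaussian fitting, not a plain global threshold).
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
for cy, cx, sigma in [(20, 20, 4.0), (45, 40, 6.0)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

# Global-threshold baseline: binarize, label connected components,
# then measure per-drusen and total affected area in pixels.
mask = img > 0.5
labels, n_drusen = ndimage.label(mask)
areas = ndimage.sum(mask, labels, index=range(1, n_drusen + 1))
total_area = int(mask.sum())
```

    The over-segmentation bias the record attributes to thresholding shows up here as sensitivity to the cut-off value: lowering the threshold inflates every measured area.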

  4. A semi-automated vascular access system for preclinical models

    NASA Astrophysics Data System (ADS)

    Berry-Pusey, B. N.; Chang, Y. C.; Prince, S. W.; Chu, K.; David, J.; Taschereau, R.; Silverman, R. W.; Williams, D.; Ladno, W.; Stout, D.; Tsao, T. C.; Chatziioannou, A.

    2013-08-01

    Murine models are used extensively in biological and translational research. For many of these studies it is necessary to access the vasculature for the injection of biologically active agents. Among the possible methods for accessing the mouse vasculature, tail vein injections are a routine but critical step for many experimental protocols. To perform successful tail vein injections, a high skill set and experience are required, leaving most scientists ill-suited to perform this task. This can lead to high variability between injections, which can impact experimental results. To allow more scientists to perform tail vein injections and to decrease the variability between injections, a vascular access system (VAS) that semi-automatically inserts a needle into the tail vein of a mouse was developed. The VAS uses near-infrared light, image processing techniques, computer-controlled motors, and a pressure feedback system to insert the needle and to validate its proper placement within the vein. The VAS was tested by injecting a commonly used radiolabeled probe (FDG) into the tail veins of five mice. These mice were then imaged using micro-positron emission tomography to measure the percentage of the injected probe remaining in the tail. These studies showed that, on average, the VAS leaves 3.4% of the injected probe in the tail. With these preliminary results, the VAS system demonstrates the potential for improving the accuracy of tail vein injections in mice.

  5. Automated Finite Element Modeling of Wing Structures for Shape Optimization

    NASA Technical Reports Server (NTRS)

    Harvey, Michael Stephen

    1993-01-01

    The displacement formulation of the finite element method is the most general and most widely used technique for structural analysis of airplane configurations. Modern structural synthesis techniques based on the finite element method have reached a certain maturity in recent years, and large airplane structures can now be optimized with respect to sizing-type design variables for many load cases subject to a rich variety of constraints including stress, buckling, frequency, stiffness and aeroelastic constraints (Refs. 1-3). These structural synthesis capabilities use gradient-based nonlinear programming techniques to search for improved designs. For these techniques to be practical, a major improvement was required in the computational cost of finite element analyses (needed repeatedly in the optimization process). Thus, associated with the progress in structural optimization, a new perspective of structural analysis has emerged, namely, structural analysis specialized for design optimization application, or what is known as "design oriented structural analysis" (Ref. 4). This discipline includes approximation concepts and methods for obtaining behavior sensitivity information (Ref. 1), all needed to make the optimization of large structural systems (modeled by thousands of degrees of freedom and thousands of design variables) practical and cost effective.

  6. Modeling the flow in diffuse interface methods of solidification

    NASA Astrophysics Data System (ADS)

    Subhedar, A.; Steinbach, I.; Varnik, F.

    2015-08-01

    Fluid dynamical equations in the presence of a diffuse solid-liquid interface are investigated via a volume averaging approach. The resulting equations exhibit the same structure as the standard Navier-Stokes equation for a Newtonian fluid with a constant viscosity, the effect of the solid phase fraction appearing in the drag force only. This considerably simplifies the use of the lattice Boltzmann method as a fluid dynamics solver in solidification simulations. Galilean invariance is also satisfied within this approach. Further, we investigate deviations between the diffuse and sharp interface flow profiles via both quasiexact numerical integration and lattice Boltzmann simulations. It emerges from these studies that the freedom in choosing the solid-liquid coupling parameter h provides a flexible way of optimizing the diffuse interface-flow simulations. Once h is adapted for a given spatial resolution, the simulated flow profiles reach an accuracy comparable to quasiexact numerical simulations.

  7. Automation based on knowledge modeling theory and its applications in engine diagnostic systems using Space Shuttle Main Engine vibrational data

    NASA Astrophysics Data System (ADS)

    Kim, Jonnathan H.

    1995-04-01

    Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor and time intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).

  8. Automated Contour Mapping With a Regional Deformable Model

    SciTech Connect

    Chao Ming; Li Tianfang; Schreibmann, Eduard; Koong, Albert; Xing Lei

    2008-02-01

    Purpose: To develop a regional narrow-band algorithm to auto-propagate the contour surface of a region of interest (ROI) from one phase to other phases of four-dimensional computed tomography (4D-CT). Methods and Materials: The ROI contours were manually delineated on a selected phase of 4D-CT. A narrow band encompassing the ROI boundary was created on the image and used as a compact representation of the ROI surface. A B-spline deformable registration was performed to map the band to other phases. Mattes mutual information was used as the metric function, and the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm was used to optimize the function. After registration the deformation field was extracted and used to transform the manual contours to other phases. Bidirectional contour mapping was introduced to evaluate the proposed technique. The new algorithm was tested on synthetic images and applied to 4D-CT images of 4 thoracic patients and a head-and-neck cone-beam CT case. Results: Application of the algorithm to synthetic images and cone-beam CT images indicates that an accuracy of 1.0 mm is achievable, and 4D-CT images show a spatial accuracy better than 1.5 mm for ROI mappings between adjacent phases, and 3 mm in opposite-phase mapping. Compared with whole-image-based calculations, the computation was an order of magnitude more efficient, in addition to the much-reduced computer memory consumption. Conclusions: A narrow-band model is an efficient way for contour mapping and should find widespread application in future 4D treatment planning.
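    Once the registration has produced a dense deformation field, transforming the manual contours to another phase is an interpolation step. A sketch with a synthetic (uniform) field — the B-spline/Mattes-mutual-information registration itself is not shown, and the field here is a stand-in:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Assume the deformable registration has produced a dense deformation
# field (dy, dx) on the image grid; a synthetic uniform shift stands in.
shape = (32, 32)
dy = np.full(shape, 2.0)   # displacement in rows
dx = np.full(shape, -1.0)  # displacement in columns

# Manually delineated contour on the reference phase, as (row, col) points
contour = np.array([[10.0, 10.0], [10.0, 20.0],
                    [20.0, 20.0], [20.0, 10.0]])

# Sample the deformation field at each contour point (linear interpolation)
rows, cols = contour[:, 0], contour[:, 1]
disp_y = map_coordinates(dy, [rows, cols], order=1)
disp_x = map_coordinates(dx, [rows, cols], order=1)

# Transform the contour to the target phase
mapped = contour + np.column_stack([disp_y, disp_x])
```

    Mapping the result back with the inverse field and measuring the round-trip error is one way to realize the bidirectional evaluation the record describes.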

  9. A New Modified Gaussian Model (MGM) Using a Bayesian Estimation Approach: Toward Automated Analysis of Planetary Spectra

    NASA Astrophysics Data System (ADS)

    Sugita, S.; Nagata, K.; Tsuboi, N.; Hiroi, T.; Okada, M.

    2011-03-01

    A new modified Gaussian model (MGM) that determines the optimum number of Gaussians and has little dependence on initial parameter selection is proposed, enabling automated analyses of currently available large volume of lunar reflectance spectra.
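    The core of an MGM-style fit — a sum of Gaussian absorption bands, with the band count chosen automatically — can be sketched as follows. The Bayesian estimation of the abstract is replaced here by a simple BIC score, and the "spectrum" is synthetic; all values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def gaussians(x, *params):
    """Sum of Gaussian bands: params is (amp, center, width) triples."""
    y = np.zeros_like(x)
    for i in range(0, len(params), 3):
        a, c, w = params[i:i + 3]
        y += a * np.exp(-((x - c) ** 2) / (2 * w ** 2))
    return y

# Synthetic "reflectance" residual with two absorption bands plus noise
x = np.linspace(0, 10, 200)
y = gaussians(x, -0.5, 3.0, 0.6, -0.3, 6.5, 0.8) + 0.01 * rng.normal(size=x.size)

def bic(n_bands):
    # Fit n_bands Gaussians, then score with the Bayesian information
    # criterion; the count minimizing BIC is the "optimum number".
    p0 = []
    for c in np.linspace(2.5, 7.0, n_bands):
        p0 += [-0.2, c, 1.0]
    popt, _ = curve_fit(gaussians, x, y, p0=p0, maxfev=20000)
    rss = np.sum((y - gaussians(x, *popt)) ** 2)
    k = 3 * n_bands
    return x.size * np.log(rss / x.size) + k * np.log(x.size)

best = min([1, 2, 3], key=bic)
```

    One band underfits and three bands pay the BIC complexity penalty without reducing the residual, so the criterion selects two — the true count here.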

  10. Automated calibration of a stream solute transport model: Implications for interpretation of biogeochemical parameters

    USGS Publications Warehouse

    Scott, D.T.; Gooseff, M.N.; Bencala, K.E.; Runkel, R.L.

    2003-01-01

    The hydrologic processes of advection, dispersion, and transient storage are the primary physical mechanisms affecting solute transport in streams. The estimation of parameters for a conservative solute transport model is an essential step to characterize transient storage and other physical features that cannot be directly measured, and often is a preliminary step in the study of reactive solutes. Our study used inverse modeling to estimate parameters of the transient storage model OTIS (One-dimensional Transport with Inflow and Storage). Observations from a tracer injection experiment performed on Uvas Creek, California, USA, are used to illustrate the application of automated solute transport model calibration to conservative and nonconservative stream solute transport. A computer code for universal inverse modeling (UCODE) is used for the calibrations. Results of this procedure are compared with a previous study that used a trial-and-error parameter estimation approach. The results demonstrated: 1) the importance of proper estimation of discharge and lateral inflow within the stream system; 2) that although the fit of the observations is not much better when transient storage is invoked, a more randomly distributed set of residuals resulted (suggesting non-systematic error), indicating that transient storage is occurring; 3) that inclusion of transient storage for a reactive solute (Sr2+) provided a better fit to the observations, highlighting the importance of robust model parameterization; and 4) that applying an automated inverse-modeling calibration approach resulted in a comprehensive understanding of the model results and the limitations of the input data.
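    The inverse-modeling step can be illustrated with a much simpler forward model than OTIS: an analytical 1-D advection-dispersion breakthrough curve whose velocity and dispersion coefficient are recovered by nonlinear least squares. The data and parameter values below are synthetic stand-ins, not Uvas Creek observations:

```python
import numpy as np
from scipy.optimize import least_squares

def conc(t, v, D, M=1.0, x=100.0):
    # Analytical 1-D advection-dispersion solution for an instantaneous
    # injection (a toy stand-in for OTIS; no transient storage term).
    return M / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - v * t) ** 2 / (4 * D * t))

# Synthetic breakthrough curve "observed" at x = 100 m
t_obs = np.linspace(10, 300, 40)
v_true, D_true = 0.6, 2.5
rng = np.random.default_rng(2)
c_obs = conc(t_obs, v_true, D_true) * (1 + 0.02 * rng.normal(size=t_obs.size))

# Inverse modeling: adjust (v, D) to minimize the residuals, analogous to
# what UCODE does for the OTIS parameters.
res = least_squares(lambda p: conc(t_obs, *p) - c_obs, x0=[1.0, 1.0],
                    bounds=([1e-3, 1e-3], [10.0, 100.0]))
v_fit, D_fit = res.x
```

    Inspecting the residual vector `res.fun` for structure, as the study does, is what distinguishes systematic model error from random noise.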

  11. A comparison of automated anatomical-behavioural mapping methods in a rodent model of stroke

    PubMed Central

    Crum, William R.; Giampietro, Vincent P.; Smith, Edward J.; Gorenkova, Natalia; Stroemer, R. Paul; Modo, Michel

    2013-01-01

    Neurological damage, due to conditions such as stroke, results in a complex pattern of structural changes and significant behavioural dysfunctions; the automated analysis of magnetic resonance imaging (MRI) and discovery of structural-behavioural correlates associated with these disorders remains challenging. Voxel lesion symptom mapping (VLSM) has been used to associate behaviour with lesion location in MRI, but this analysis requires the definition of lesion masks on each subject and does not exploit the rich structural information in the images. Tensor-based morphometry (TBM) has been used to perform voxel-wise structural analyses over the entire brain; however, a combination of lesion hyper-intensities and subtle structural remodelling away from the lesion might confound the interpretation of TBM. In this study, we compared and contrasted these techniques in a rodent model of stroke (n=58) to assess the efficacy of these techniques in a challenging pre-clinical application. The results from the automated techniques were compared using manually derived region-of-interest measures of the lesion, cortex, striatum, ventricle and hippocampus, and considered against model power calculations. The automated TBM techniques successfully detect both lesion and non-lesion effects, consistent with manual measurements. These techniques do not require manual segmentation to the same extent as VLSM and should be considered part of the toolkit for the unbiased analysis of pre-clinical imaging-based studies. PMID:23727124

  12. IDEF3 and IDEF4 automation system requirements document and system environment models

    NASA Technical Reports Server (NTRS)

    Blinn, Thomas M.

    1989-01-01

    The requirements specification is provided for the IDEF3 and IDEF4 tools that provide automated support for IDEF3 and IDEF4 modeling. The IDEF3 method is a scenario-driven process flow description capture method intended to be used by domain experts to represent knowledge about how a particular system or process works. The IDEF3 method provides modes to represent both (1) Process Flow Description, to capture the relationships between actions within the context of a specific scenario, and (2) Object State Transition, to capture the allowable transitions of an object in the domain. The IDEF4 method provides a method for capturing (1) the Class Submodel, or object hierarchy, (2) the Method Submodel, or the procedures associated with each class of objects, and (3) the Dispatch Matching, or the relationships between the objects and methods in the object-oriented design. The requirements specified describe the capabilities that a fully functional IDEF3 or IDEF4 automated tool should support.

  13. NeuroGPS: automated localization of neurons for brain circuits using L1 minimization model

    PubMed Central

    Quan, Tingwei; Zheng, Ting; Yang, Zhongqing; Ding, Wenxiang; Li, Shiwei; Li, Jing; Zhou, Hang; Luo, Qingming; Gong, Hui; Zeng, Shaoqun

    2013-01-01

    Drawing the map of neuronal circuits at microscopic resolution is important to explain how the brain works. Recent progress in fluorescence labeling and imaging techniques has enabled measuring the whole brain of a rodent such as a mouse at submicron resolution. Considering the huge volume of such datasets, automatically tracing and reconstructing the neuronal connections from the image stacks is essential to map the large-scale circuits. However, the first step, automated localization of somata across different brain areas, remains a challenge. Here, we addressed this problem by introducing an L1 minimization model. We developed a fully automated system, NeuronGlobalPositionSystem (NeuroGPS), that is robust to the broad diversity of shape, size and density of the neurons in a mouse brain. This method allows locating neurons across different brain areas without human intervention. We believe this method will facilitate the analysis of neuronal circuits for brain function and disease studies. PMID:23546385
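    The role of the L1 model can be illustrated with a generic sparse-recovery sketch: candidate soma positions correspond to the few nonzero coefficients of a sparse vector, recovered here by iterative soft-thresholding (ISTA). This is a minimal stand-in for the idea, not the NeuroGPS implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse recovery sketch: columns of A play the role of candidate soma
# templates; x is nonzero only at the true soma locations.
n_pix, n_cand = 80, 120
A = rng.normal(size=(n_pix, n_cand)) / np.sqrt(n_pix)
x_true = np.zeros(n_cand)
x_true[[5, 40, 77]] = [1.0, 0.8, 1.2]
y = A @ x_true

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5||Ax - y||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)            # gradient of the smooth term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

x_hat = ista(A, y)
support = np.flatnonzero(np.abs(x_hat) > 0.1)  # recovered soma candidates
```

    The L1 penalty drives all but the few genuine candidates to exactly zero, which is what makes such models robust to densely packed, variably sized somata.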

  14. Structure of liquid-vapor interfaces in the Ising model

    SciTech Connect

    Moseley, L.L.

    1997-06-01

    The asymptotic behavior of the density profile of the fluid-fluid interface is investigated by computer simulation and is found to be better described by the error function than by the hyperbolic tangent in three dimensions. For higher dimensions the hyperbolic tangent is a better approximation.
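    The comparison of the two candidate profiles can be reproduced in miniature: generate an erf-shaped density profile with a little noise and check which functional form fits it better. The data are synthetic, not the Ising simulation itself:

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

# Two candidate interface profiles between the coexisting phases,
# normalized to rise from 0 to 1 across the interface:
def profile_erf(z, w):
    return 0.5 * (1.0 + erf(z / w))       # error-function profile

def profile_tanh(z, w):
    return 0.5 * (1.0 + np.tanh(z / w))   # hyperbolic-tangent profile

# Illustrative "measured" profile: erf-shaped with slight noise, as the
# three-dimensional simulations reportedly favor.
z = np.linspace(-5, 5, 101)
rng = np.random.default_rng(4)
data = profile_erf(z, 1.5) + 0.002 * rng.normal(size=z.size)

(w_erf,), _ = curve_fit(profile_erf, z, data, p0=[1.0])
(w_tanh,), _ = curve_fit(profile_tanh, z, data, p0=[1.0])
sse_erf = np.sum((data - profile_erf(z, w_erf)) ** 2)
sse_tanh = np.sum((data - profile_tanh(z, w_tanh)) ** 2)
```

    The two shapes differ mainly in the tails (Gaussian versus exponential decay), so the residual sum of squares discriminates between them even when the profiles look similar near the midpoint.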

  15. A New Tool for Inundation Modeling: Community Modeling Interface for Tsunamis (ComMIT)

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Moore, C. W.; Greenslade, D. J. M.; Pattiaratchi, C.; Badal, R.; Synolakis, C. E.; Kânoğlu, U.

    2011-11-01

    Almost 5 years after the 26 December 2004 Indian Ocean tragedy, the 10 August 2009 Andaman tsunami demonstrated that accurate forecasting is possible using the tsunami community modeling tool Community Model Interface for Tsunamis (ComMIT). ComMIT is designed for ease of use, and allows dissemination of results to the community while addressing concerns associated with proprietary issues of bathymetry and topography. It uses initial conditions from a precomputed propagation database, has an easy-to-interpret graphical interface, and requires only portable hardware. ComMIT was initially developed for Indian Ocean countries with support from the United Nations Educational, Scientific, and Cultural Organization (UNESCO), the United States Agency for International Development (USAID), and the National Oceanic and Atmospheric Administration (NOAA). To date, more than 60 scientists from 17 countries in the Indian Ocean have been trained and are using it in operational inundation mapping.

  16. An automation of design and modelling tasks in NX Siemens environment with original software - generator module

    NASA Astrophysics Data System (ADS)

    Zbiciak, M.; Grabowik, C.; Janik, W.

    2015-11-01

    Nowadays the constructional design process is almost exclusively aided by CAD/CAE/CAM systems. It is estimated that nearly 80% of design activities have a routine nature. These routine design tasks are highly susceptible to automation. Design automation is usually achieved with API tools which allow building original software to support different engineering activities. In this paper, original software worked out to automate engineering tasks at the stage of a product's geometrical shape design is presented. The elaborated software works exclusively in the NX Siemens CAD/CAM/CAE environment and was prepared in Microsoft Visual Studio with application of the .NET technology and the NX SNAP library. The software functionality allows designing and modelling of spur and helicoidal involute gears. Moreover, it is possible to estimate relative manufacturing costs. With the Generator module it is possible to design and model both standard and non-standard gear wheels. The main advantage of a model generated in this way is its better representation of the involute curve in comparison to those drawn with the specialized standard tools of CAD systems. This comes from the fact that in CAD systems an involute curve is usually drawn through 3 points, corresponding to points located on the addendum circle, the reference diameter of the gear and the base circle, respectively. In the Generator module the involute curve is drawn through 11 points located on and above the base and addendum circles, so the 3D gear wheel models are highly accurate. Application of the Generator module makes the modelling process very rapid, reducing the gear wheel modelling time to several seconds. During the conducted research, an analysis of the differences between the standard 3-point and the 11-point involutes was made. The results and conclusions drawn from this analysis are presented in detail.
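
    The multi-point involute construction described above can be sketched with the standard parametric equations of an involute of a circle (a minimal illustration; the gear radii below are arbitrary examples):

```python
import math

def involute_points(r_base, r_addendum, n=11):
    """Sample n points on an involute curve, from the base circle out to
    the addendum circle, using x = r_b(cos t + t sin t),
    y = r_b(sin t - t cos t); the radius at parameter t is r_b*sqrt(1+t^2)."""
    t_max = math.sqrt((r_addendum / r_base) ** 2 - 1.0)
    pts = []
    for i in range(n):
        t = t_max * i / (n - 1)
        x = r_base * (math.cos(t) + t * math.sin(t))
        y = r_base * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts
```

Sampling 11 points instead of 3 gives the spline through them far less freedom to deviate from the true involute between the sampled radii.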

  17. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration

    PubMed Central

    Forment, Javier; Gilabert, Francisco; Robles, Antonio; Conejero, Vicente; Nuez, Fernando; Blanca, Jose M

    2008-01-01

    Background Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at . 
This site also provides detailed instructions for installation and configuration of the software package. The code is under active development to incorporate new analyses, methods, and algorithms as they are released by the bioinformatics community. PMID:18179701

  18. A User-Oriented Interface for Generalised Informetric Analysis Based on Applying Advanced Data Modelling Techniques.

    ERIC Educational Resources Information Center

    Jarvelin, Kalervo; Ingwersen, Peter; Niemi, Timo

    2000-01-01

    Presents a user-oriented interface for generalized informetric analysis and demonstrates how informetric calculations can be specified through advanced data modeling techniques. Topics include bibliographic data; online information retrieval systems; citation networks; query interface; impact factors; data restructuring; and multi-level…

  19. Modeling Auditory-Haptic Interface Cues from an Analog Multi-line Telephone

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Anderson, Mark R.; Bittner, Rachael M.

    2012-01-01

    The Western Electric Company produced a multi-line telephone during the 1940s-1970s using a six-button interface design that provided robust tactile, haptic and auditory cues regarding the "state" of the communication system. This multi-line telephone was used as a model for a trade study comparison of two interfaces: a touchscreen interface (iPad) versus a pressure-sensitive strain gauge button interface (Phidget USB interface controllers). The experiment and its results are detailed in the authors' AES 133rd convention paper "Multimodal Information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays". This Engineering Brief describes how the interface logic, visual indications, and auditory cues of the original telephone were synthesized using MAX/MSP, including the logic for line selection, line hold, and priority line activation.

  20. An advanced distributed automated extraction of drainage network model on high-resolution DEM

    NASA Astrophysics Data System (ADS)

    Mao, Y.; Ye, A.; Xu, J.; Ma, F.; Deng, X.; Miao, C.; Gong, W.; Di, Z.

    2014-07-01

    A high-resolution, high-accuracy drainage network map is a prerequisite for simulating the water cycle in land surface hydrological models. The objective of this study was to develop a new automated drainage network extraction model that can produce a high-precision, continuous drainage network from a high-resolution DEM (Digital Elevation Model). Extracting a drainage network from a high-resolution DEM demands substantial computing resources, and the conventional GIS method often cannot complete the calculation on high-resolution DEMs of large basins because the number of grid cells is too large. In order to decrease the computation time, an advanced distributed automated extraction of drainage network model (Adam) is proposed in this study. The Adam model has two features: (1) searching upward from the basin outlet instead of sink filling, and (2) dividing sub-basins on a low-resolution DEM, then extracting the drainage network on the sub-basins of the high-resolution DEM. The case study used elevation data of the Shuttle Radar Topography Mission (SRTM) at 3 arc-second resolution in the Zhujiang River basin, China. The results show the Adam model can dramatically reduce the computation time. The extracted drainage network was continuous and more accurate than HydroSHEDS (Hydrological data and maps based on Shuttle Elevation Derivatives at multiple Scales).
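
    The "searching upward from the outlet" idea can be sketched as a reverse breadth-first search over a flow-direction grid (a minimal illustration under an assumed grid encoding, not the Adam code):

```python
from collections import deque

def upstream_cells(flow_dir, outlet):
    """Collect every cell draining to `outlet` by following flow-direction
    arrows in reverse, so no prior sink filling is needed.
    flow_dir[r][c] is the (dr, dc) step toward the downstream neighbour,
    or None at the basin outlet (an assumed D8-style encoding)."""
    rows, cols = len(flow_dir), len(flow_dir[0])
    # Build reverse adjacency: downstream cell -> list of upstream cells.
    upstream = {}
    for r in range(rows):
        for c in range(cols):
            d = flow_dir[r][c]
            if d is not None:
                upstream.setdefault((r + d[0], c + d[1]), []).append((r, c))
    seen = {outlet}
    queue = deque([outlet])
    while queue:
        cell = queue.popleft()
        for up in upstream.get(cell, []):
            if up not in seen:
                seen.add(up)
                queue.append(up)
    return seen
```

Only cells that actually drain to the outlet are ever visited, which is what keeps the per-basin work bounded on large grids.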

  1. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

    The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  2. A Framework for Automated Spine and Vertebrae Interpolation-Based Detection and Model-Based Segmentation.

    PubMed

    Korez, Robert; Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2015-08-01

    Automated and semi-automated detection and segmentation of spinal and vertebral structures from computed tomography (CT) images is a challenging task due to a relatively high degree of anatomical complexity, presence of unclear boundaries and articulation of vertebrae with each other, as well as due to insufficient image spatial resolution, partial volume effects, presence of image artifacts, intensity variations and low signal-to-noise ratio. In this paper, we describe a novel framework for automated spine and vertebrae detection and segmentation from 3-D CT images. A novel optimization technique based on interpolation theory is applied to detect the location of the whole spine in the 3-D image and, using the obtained location of the whole spine, to further detect the location of individual vertebrae within the spinal column. The obtained vertebra detection results represent a robust and accurate initialization for the subsequent segmentation of individual vertebrae, which is performed by an improved shape-constrained deformable model approach. The framework was evaluated on two publicly available CT spine image databases of 50 lumbar and 170 thoracolumbar vertebrae. Quantitative comparison against corresponding reference vertebra segmentations yielded an overall mean centroid-to-centroid distance of 1.1 mm and Dice coefficient of 83.6% for vertebra detection, and an overall mean symmetric surface distance of 0.3 mm and Dice coefficient of 94.6% for vertebra segmentation. The results indicate that by applying the proposed automated detection and segmentation framework, vertebrae can be successfully detected and accurately segmented in 3-D from CT spine images. PMID:25585415

  3. Simulation of evaporation of a sessile drop using a diffuse interface model

    NASA Astrophysics Data System (ADS)

    Sefiane, Khellil; Ding, Hang; Sahu, Kirti; Matar, Omar

    2008-11-01

    We consider here the evaporation dynamics of a Newtonian sessile liquid drop using an improved diffuse interface model. The governing equations for the drop and the surrounding vapour are both solved, separated by the order parameter (i.e. volume fraction), based on the previous work of Ding et al. (JCP 2007). The diffuse interface model has been shown to be successful in modelling moving contact line problems (Jacqmin 2000; Ding and Spelt 2007, 2008). Here, a pinned contact line of the drop is assumed. The evaporative mass flux at the liquid-vapour interface is constitutively a function of local temperature and is treated as a source term in the interface evolution equation, i.e. the Cahn-Hilliard equation. The model is validated by comparing its predictions with data available in the literature. The evaporative dynamics are illustrated in terms of drop snapshots, and a quantitative comparison with results from a free surface model is made.

  4. Mathematical analysis of a sharp-diffuse interfaces model for seawater intrusion

    NASA Astrophysics Data System (ADS)

    Choquet, C.; Diédhiou, M. M.; Rosier, C.

    2015-10-01

    We consider a new model mixing sharp and diffuse interface approaches for seawater intrusion phenomena in free aquifers. More precisely, a phase field model is introduced in the boundary conditions on the virtual sharp interfaces. We thus include in the model the existence of diffuse transition zones but we preserve the simplified structure allowing front tracking. The three-dimensional problem then reduces to a two-dimensional model involving a strongly coupled system of partial differential equations of parabolic type describing the evolution of the depths of the two free surfaces, that is the interface between salt- and freshwater and the water table. We prove the existence of a weak solution for the model completed with initial and boundary conditions. We also prove that the depths of the two interfaces satisfy a coupled maximum principle.

  5. Sharp interface model of creep deformation in crystalline solids

    NASA Astrophysics Data System (ADS)

    Mishin, Y.; McFadden, G. B.; Sekerka, R. F.; Boettinger, W. J.

    2015-08-01

    We present a rigorous irreversible thermodynamics treatment of creep deformation of solid materials with interfaces described as geometric surfaces capable of vacancy generation and absorption and moving under the influence of local thermodynamic forces. The free energy dissipation rate derived in this work permits clear identification of thermodynamic driving forces for all stages of the creep process and formulation of kinetic equations of creep deformation and microstructure evolution. The theory incorporates capillary effects and reveals the different roles played by the interface free energy and interface stress. To describe the interaction of grain boundaries with stresses, we classify grain boundaries into coherent, incoherent and semicoherent, depending on their mechanical response to the stress. To prepare for future applications, we specialize the general equations to a particular case of a linear-elastic solid with a small concentration of vacancies. The proposed theory creates a thermodynamic framework for addressing more complex cases, such as creep in multicomponent alloys and cross-effects among vacancy generation/absorption and grain boundary motion and sliding.

  6. Statistical modelling of networked human-automation performance using working memory capacity.

    PubMed

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models. PMID:24308716
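
    The first of the three modelling approaches, classical linear regression with a WM-by-load interaction, can be fitted by ordinary least squares on synthetic data (all coefficients and variable names below are hypothetical illustrations, not the study's data):

```python
import numpy as np

# Synthetic data linking task load, message quality and a working-memory
# (WM) covariate to a performance score (hypothetical generative model).
rng = np.random.default_rng(1)
n = 200
task_load = rng.uniform(1, 10, n)
msg_quality = rng.uniform(0, 1, n)
wm_capacity = rng.normal(0, 1, n)
# A WM x load interaction echoes the finding that individual differences
# modulate the effect of task load on performance.
perf = (5.0 - 0.3 * task_load + 2.0 * msg_quality
        + 0.2 * wm_capacity * task_load + rng.normal(0, 0.1, n))

# Design matrix: intercept, load, quality, WM x load interaction.
X = np.column_stack([np.ones(n), task_load, msg_quality,
                     wm_capacity * task_load])
coef, *_ = np.linalg.lstsq(X, perf, rcond=None)
```

With the interaction column included, the fitted coefficients recover the generative values, which is the sense in which WM capacity "links" operating conditions to predicted performance.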

  7. Interface-capturing lattice Boltzmann equation model for two-phase flows

    NASA Astrophysics Data System (ADS)

    Lou, Qin; Guo, Zhaoli

    2015-01-01

    In this work, an interface-capturing lattice Boltzmann equation (LBE) model is proposed for two-phase flows. In the model, a Lax-Wendroff propagation scheme and a properly chosen equilibrium distribution function are employed. The Lax-Wendroff scheme is used to provide an adjustable Courant-Friedrichs-Lewy (CFL) number, and the equilibrium distribution is presented to remove the dependence of the relaxation time on the CFL number. As a result, the interface can be captured accurately by decreasing the CFL number. A theoretical expression is derived for the chemical potential gradient by solving the LBE directly for a two-phase system with a flat interface. The result shows that the gradient of the chemical potential is proportional to the square of the CFL number, which explains why the proposed model is able to capture the interface naturally with a small CFL number, and why large interface error exists in the standard LBE model. Numerical tests, including a one-dimensional flat interface problem, a two-dimensional circular droplet problem, and a three-dimensional spherical droplet problem, demonstrate that the proposed LBE model performs well and can capture a sharp interface with a suitable CFL number.
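
    The adjustable CFL number central to the abstract can be illustrated with the scalar building block the propagation scheme is named after: a one-dimensional Lax-Wendroff step for linear advection on a periodic grid (a minimal sketch, not the proposed LBE model):

```python
import numpy as np

def lax_wendroff_step(u, cfl):
    """One Lax-Wendroff update for u_t + a*u_x = 0 on a periodic grid;
    cfl = a*dt/dx is the adjustable Courant-Friedrichs-Lewy number."""
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    return u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2.0 * u + um)
```

The scheme's leading errors scale with powers of the CFL number, so decreasing it sharpens the transported profile, mirroring the paper's observation that the interface is captured more accurately at small CFL.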

  8. Automated alignment-based curation of gene models in filamentous fungi

    PubMed Central

    2014-01-01

    Background Automated gene-calling is still an error-prone process, particularly for the highly plastic genomes of fungal species. Improvement through quality control and manual curation of gene models is a time-consuming process that requires skilled biologists and is only marginally performed. The wealth of available fungal genomes has not yet been exploited by an automated method that applies quality control of gene models in order to obtain more accurate genome annotations. Results We provide a novel method named alignment-based fungal gene prediction (ABFGP) that is particularly suitable for plastic genomes like those of fungi. It can assess gene models on a gene-by-gene basis making use of informant gene loci. Its performance was benchmarked on 6,965 gene models confirmed by full-length unigenes from ten different fungi. 79.4% of all gene models were correctly predicted by ABFGP. It improves the output of ab initio gene prediction software due to a higher sensitivity and precision for all gene model components. Applicability of the method was shown by revisiting the annotations of six different fungi, using gene loci from up to 29 fungal genomes as informants. Between 7,231 and 8,337 genes were assessed by ABFGP and for each genome between 1,724 and 3,505 gene model revisions were proposed. The reliability of the proposed gene models is assessed by an a posteriori introspection procedure of each intron and exon in the multiple gene model alignment. The total number and type of proposed gene model revisions in the six fungal genomes is correlated to the quality of the genome assembly, and to sequencing strategies used in the sequencing centre, highlighting different types of errors in different annotation pipelines. The ABFGP method is particularly successful in discovering sequence errors and/or disruptive mutations causing truncated and erroneous gene models. 
Conclusions The ABFGP method is an accurate and fully automated quality control method for fungal gene catalogues that can be easily implemented into existing annotation pipelines. With the exponential release of new genomes, the ABFGP method will help decreasing the number of gene models that require additional manual curation. PMID:24433567

  9. The enhanced Software Life Cycle Support Environment (ProSLCSE): Automation for enterprise and process modeling

    NASA Technical Reports Server (NTRS)

    Milligan, James R.; Dutton, James E.

    1993-01-01

    In this paper, we have introduced a comprehensive method for enterprise modeling that addresses the three important aspects of how an organization goes about its business. FirstEP includes infrastructure modeling, information modeling, and process modeling notations that are intended to be easy to learn and use. The notations stress the use of straightforward visual languages that are intuitive, syntactically simple, and semantically rich. ProSLCSE will be developed with automated tools and services to facilitate enterprise modeling and process enactment. In the spirit of FirstEP, ProSLCSE tools will also be seductively easy to use. Achieving fully managed, optimized software development and support processes will be long and arduous for most software organizations, and many serious problems will have to be solved along the way. ProSLCSE will provide the ability to document, communicate, and modify existing processes, which is the necessary first step.

  10. Towards automated 3D finite element modeling of direct fiber reinforced composite dental bridge.

    PubMed

    Li, Wei; Swain, Michael V; Li, Qing; Steven, Grant P

    2005-07-01

    An automated 3D finite element (FE) modeling procedure for direct fiber reinforced dental bridge is established on the basis of computer tomography (CT) scan data. The model presented herein represents a two-unit anterior cantilever bridge that includes a maxillary right incisor as an abutment and a maxillary left incisor as a cantilever pontic bonded by adhesive and reinforced fibers. The study aims at gathering fundamental knowledge for design optimization of this type of innovative composite dental bridges. To promote the automatic level of numerical analysis and computational design of new dental biomaterials, this report pays particular attention to the mathematical modeling, mesh generation, and validation of numerical models. To assess the numerical accuracy and to validate the model established, a convergence test and experimental verification are also presented. PMID:15912531

  11. Intelligent sensor-model automated control of PMR-15 autoclave processing

    NASA Technical Reports Server (NTRS)

    Hart, S.; Kranbuehl, D.; Loos, A.; Hinds, B.; Koury, J.

    1992-01-01

    An intelligent sensor model system has been built and used for automated control of the PMR-15 cure process in the autoclave. The system uses frequency-dependent electromagnetic sensing (FDEMS), the Loos processing model, and the Air Force QPAL intelligent software shell. The Loos model is used to predict and optimize the cure process including the time-temperature dependence of the extent of reaction, flow, and part consolidation. The FDEMS sensing system in turn monitors, in situ, the removal of solvent, changes in the viscosity, reaction advancement and cure completion in the mold continuously throughout the processing cycle. The sensor information is compared with the optimum processing conditions from the model. The QPAL composite cure control system allows comparison of the sensor monitoring with the model predictions to be broken down into a series of discrete steps and provides a language for making decisions on what to do next regarding time-temperature and pressure.

  12. Ab-initio molecular modeling of interfaces in tantalum-carbon system

    SciTech Connect

    Balani, Kantesh; Mungole, Tarang; Bakshi, Srinivasa Rao; Agarwal, Arvind

    2012-03-15

    Processing of ultrahigh temperature TaC ceramic material with sintering additives of B{sub 4}C and reinforcement of carbon nanotubes (CNTs) gives rise to possible formation of several interfaces (Ta{sub 2}C-TaC, TaC-CNT, Ta{sub 2}C-CNT, TaB{sub 2}-TaC, and TaB{sub 2}-CNT) that could influence the resultant properties. Current work focuses on interfaces developed during spark plasma sintering of TaC-system and performing ab initio molecular modeling of the interfaces generated during processing of TaC-B{sub 4}C and TaC-CNT composites. The energy of the various interfaces has been evaluated and compared with TaC-Ta{sub 2}C interface. The iso-surface electronic contours are extracted from the calculations eliciting the enhanced stability of TaC-CNT interface by 72.2%. CNTs form stable interfaces with Ta{sub 2}C and TaB{sub 2} phases with a reduction in the energy by 35.8% and 40.4%, respectively. The computed Ta-C-B interfaces are also compared with experimentally observed interfaces in high resolution TEM images.

  13. Effects of modeling errors on trajectory predictions in air traffic control automation

    NASA Technical Reports Server (NTRS)

    Jackson, Michael R. C.; Zhao, Yiyuan; Slattery, Rhonda

    1996-01-01

    Air traffic control automation synthesizes aircraft trajectories for the generation of advisories. Trajectory computation employs models of aircraft performances and weather conditions. In contrast, actual trajectories are flown in real aircraft under actual conditions. Since synthetic trajectories are used in landing scheduling and conflict probing, it is very important to understand the differences between computed trajectories and actual trajectories. This paper examines the effects of aircraft modeling errors on the accuracy of trajectory predictions in air traffic control automation. Three-dimensional point-mass aircraft equations of motion are assumed to be able to generate actual aircraft flight paths. Modeling errors are described as uncertain parameters or uncertain input functions. Pilot or autopilot feedback actions are expressed as equality constraints to satisfy control objectives. A typical trajectory is defined by a series of flight segments with different control objectives for each flight segment and conditions that define segment transitions. A constrained linearization approach is used to analyze trajectory differences caused by various modeling errors by developing a linear time varying system that describes the trajectory errors, with expressions to transfer the trajectory errors across moving segment transitions. A numerical example is presented for a complete commercial aircraft descent trajectory consisting of several flight segments.

  14. AIDE, A SYSTEM FOR DEVELOPING INTERACTIVE USER INTERFACES FOR ENVIRONMENTAL MODELS

    EPA Science Inventory

    Recent progress in environmental science and engineering has seen increasing use of interactive interfaces for computer models. Initial applications centered on the use of interactive software to assist in building complicated input sequences required by batch programs. From these ...

  15. Design Through Manufacturing: The Solid Model-Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2002-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts reflecting every detail of the finished product. Ideally, in the aerospace industry, these models should fulfill two very important functions: (1) provide numerical control information for automated manufacturing of precision parts, and (2) enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in aircraft and space vehicles. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. Presently, the process of preparing CAD models for FEA consumes a great deal of the analyst's time.

  16. A coupled damage-plasticity model for the cyclic behavior of shear-loaded interfaces

    NASA Astrophysics Data System (ADS)

    Carrara, P.; De Lorenzis, L.

    2015-12-01

    The present work proposes a novel thermodynamically consistent model for the behavior of interfaces under shear (i.e. mode-II) cyclic loading conditions. The interface behavior is defined coupling damage and plasticity. The admissible states' domain is formulated restricting the tangential interface stress to non-negative values, which makes the model suitable e.g. for interfaces with thin adherends. Linear softening is assumed so as to reproduce, under monotonic conditions, a bilinear mode-II interface law. Two damage variables govern respectively the loss of strength and of stiffness of the interface. The proposed model needs the evaluation of only four independent parameters, i.e. three defining the monotonic mode-II interface law, and one ruling the fatigue behavior. This limited number of parameters and their clear physical meaning facilitate experimental calibration. Model predictions are compared with experimental results on fiber reinforced polymer sheets externally bonded to concrete involving different load histories, and an excellent agreement is obtained.
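
    The bilinear mode-II interface law reproduced under monotonic conditions can be sketched as a piecewise-linear traction-slip function (an assumed parameterization for illustration, not the paper's coupled damage-plasticity formulation):

```python
def bilinear_mode2_law(slip, tau_max, s0, sf):
    """Bilinear mode-II interface law (monotonic envelope): a linear
    ascending branch up to peak shear stress tau_max at slip s0, then
    linear softening down to zero stress at the ultimate slip sf.
    Tangential stress is restricted to non-negative values."""
    if slip <= 0.0:
        return 0.0
    if slip <= s0:
        return tau_max * slip / s0          # elastic ascending branch
    if slip < sf:
        return tau_max * (sf - slip) / (sf - s0)  # linear softening
    return 0.0                              # fully debonded
```

In the full model, damage variables would degrade tau_max and the stiffness under cyclic loading; here only the monotonic envelope defined by the three mode-II parameters is shown.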

  17. Modeling interface roughness scattering in a layered seabed for normal-incident chirp sonar signals.

    PubMed

    Tang, Dajun; Hefner, Brian T

    2012-04-01

    Downward looking sonar, such as the chirp sonar, is widely used as a sediment survey tool in shallow water environments. Inversion of geo-acoustic parameters from such sonar data precedes the availability of forward models. An exact numerical model is developed to initiate the simulation of the acoustic field produced by such a sonar in the presence of multiple rough interfaces. The sediment layers are assumed to be fluid layers with non-intercepting rough interfaces. PMID:22502485

  18. An automated procedure for material parameter evaluation for viscoplastic constitutive models

    NASA Technical Reports Server (NTRS)

    Imbrie, P. K.; James, G. H.; Hill, P. S.; Allen, D. H.; Haisler, W. E.

    1988-01-01

    An automated procedure is presented for evaluating the material parameters in Walker's exponential viscoplastic constitutive model for metals at elevated temperature. Both physical and numerical approximations are utilized to compute the constants for Inconel 718 at 1100 F. When intermediate results are carefully scrutinized and engineering judgement applied, parameters may be computed which yield stress output histories that are in agreement with experimental results. A qualitative assessment of the theta-plot method for predicting the limiting value of stress is also presented. The procedure may also be used as a basis to develop evaluation schemes for other viscoplastic constitutive theories of this type.

  19. Automated Optimization of Water-Water Interaction Parameters for a Coarse-Grained Model

    PubMed Central

    2015-01-01

    We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder-Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, electric field permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment. PMID:24460506
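
    The optimization loop described above, a scalar target function built from deviations of simulated observables from experimental targets and minimized with the Nelder-Mead simplex method, can be sketched as follows. The toy `simulate()` stand-in and target values are placeholders, not the actual ParOpt framework or the coarse-grained water model:

```python
# Sketch of Nelder-Mead parameter optimization against experimental
# targets. The "simulate" function is a cheap stand-in for an expensive
# MD run; all names and target values are hypothetical.
from scipy.optimize import minimize

TARGETS = {"density": 997.0, "surface_tension": 72.0}   # placeholder values

def simulate(params):
    # Stand-in for a simulation returning observables for given parameters.
    a, b = params
    return {"density": 900.0 + 50.0 * a, "surface_tension": 60.0 + 10.0 * b}

def target_function(params):
    # Sum of squared relative deviations from the experimental targets.
    obs = simulate(params)
    return sum(((obs[k] - v) / v) ** 2 for k, v in TARGETS.items())

result = minimize(target_function, x0=[1.0, 1.0], method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-10})
```

    In a real workflow each evaluation of the target function would launch a simulation, so the simplex method's modest number of function evaluations per iteration is the main attraction.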

  20. PDB_REDO: automated re-refinement of X-ray structure models in the PDB

    PubMed Central

    Joosten, Robbie P.; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L.; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J.; Vriend, Gert

    2009-01-01

    Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour. PMID:22477769

  1. Streamflow forecasting using the modular modeling system and an object-user interface

    USGS Publications Warehouse

    Jeton, A.E.

    2001-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Bureau of Reclamation (BOR), developed a computer program to provide a general framework needed to couple disparate environmental resource models and to manage the necessary data. The Object-User Interface (OUI) is a map-based interface for models and modeling data. It provides a common interface to run hydrologic models and acquire, browse, organize, and select spatial and temporal data. One application is to assist river managers in utilizing streamflow forecasts generated with the Precipitation-Runoff Modeling System running in the Modular Modeling System (MMS), a distributed-parameter watershed model, and the National Weather Service Extended Streamflow Prediction (ESP) methodology.

  2. Generating Phenotypical Erroneous Human Behavior to Evaluate Human-automation Interaction Using Model Checking

    PubMed Central

    Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.

    2012-01-01

    Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel's zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the state space and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
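
    The zero-order phenotypes named above (omissions, repetitions, jumps, and intrusions) can be illustrated with a simple generator over a normative action sequence. This is only a hedged sketch, not the paper's formal task-model transformation, and the action names are hypothetical:

```python
# Illustrative generators for Hollnagel's zero-order erroneous-action
# phenotypes over a normative action sequence. A sketch only; the real
# method works on task analytic models inside a model checker.

def omissions(seq):
    # each variant drops one action
    return [seq[:i] + seq[i + 1:] for i in range(len(seq))]

def repetitions(seq):
    # each variant performs one action twice in a row
    return [seq[:i + 1] + seq[i:] for i in range(len(seq))]

def intrusions(seq, foreign):
    # each variant inserts a foreign action at some position
    return [seq[:i] + [a] + seq[i:]
            for i in range(len(seq) + 1) for a in foreign]

def jumps(seq):
    # each variant performs the next action out of order (adjacent swap)
    return [seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            for i in range(len(seq) - 1)]

normative = ["select_mode", "enter_dose", "confirm", "fire_beam"]
erroneous = omissions(normative) + repetitions(normative) + jumps(normative)
```

    Composing these generators over their own output yields the higher-order phenotypes mentioned in the abstract; in the paper the composition happens inside the formal model rather than over explicit sequences.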

  3. Measurement and Modeling of Thermal Contact Resistance at a Plastic-Metal Interface

    NASA Astrophysics Data System (ADS)

    Sridhar, L.; Narh, Kwabena A.

    1996-03-01

    Thermal contact resistance (TCR) is a resistance to the flow of heat across the interface of two surfaces due to imperfect contact. The TCR at the metal-plastic interface has been shown to affect the modeling of injection molding processes. Its value is strongly dependent on a number of interface parameters, including pressure, temperature, the nature of the plastic material, the metal surface characteristics, and the presence of an interstitial medium such as the mold releasing agents used in injection molding. This study focuses on identifying the inter-relationships among these parameters by measuring the TCR values for various plastic-metal interfaces under different experimental conditions, in order to establish a model that could be used in process modeling and analysis.

  4. An Analytical Model for Solute Segregation at Liquid Metal/Solid Substrate Interface

    NASA Astrophysics Data System (ADS)

    Men, Hua; Fan, Zhongyun

    2014-11-01

    In this paper, we present an analytical model for describing the equilibrium solute segregation at the interface between metallic liquid (an A-B solution, where A is solvent and B is solute) and a solid substrate (S) using approaches of thermodynamics and statistical mechanics. This analytical model suggests that the interfacial solute segregation is governed by the difference in interfacial energies between the pure B/S and pure A/S interfaces, the heat of mixing of the A-B solution and the difference in entropies of fusion between pure solute and solvent. The calculated solute segregations at the interface in the liquid Al-Ti/TiB2 and liquid Sn-Al/Al2O3 systems are in qualitative agreement with the experimental observations. It is demonstrated that the present analytical model can be used to predict the solute segregation at the liquid/substrate interface, at least qualitatively.

  5. A comparison of molecular dynamics and diffuse interface model predictions of Lennard-Jones fluid evaporation

    SciTech Connect

    Barbante, Paolo; Frezzotti, Aldo; Gibelli, Livio

    2014-12-09

    The unsteady evaporation of a thin planar liquid film is studied by molecular dynamics simulations of a Lennard-Jones fluid. The obtained results are compared with the predictions of a diffuse interface model in which capillary Korteweg contributions are added to the hydrodynamic equations, in order to obtain a unified description of the liquid bulk, liquid-vapor interface and vapor region. Particular care has been taken in constructing a diffuse interface model matching the thermodynamic and transport properties of the Lennard-Jones fluid. The comparison of diffuse interface model and molecular dynamics results shows that, although good agreement is obtained in equilibrium conditions, remarkable deviations of diffuse interface model predictions from the reference molecular dynamics results are observed in the simulation of liquid film evaporation. It is also observed that molecular dynamics results are in good agreement with preliminary results obtained from a composite model which describes the liquid film by a standard hydrodynamic model and the vapor by the Boltzmann equation. The two mathematical models are connected by kinetic boundary conditions assuming a unit evaporation coefficient.

  6. Automated High-Throughput Characterization of Single Neurons by Means of Simplified Spiking Models

    PubMed Central

    Hagens, Olivier; Naud, Richard; Koch, Christof; Gerstner, Wulfram

    2015-01-01

    Single-neuron models are useful not only for studying the emergent properties of neural circuits in large-scale simulations, but also for extracting and summarizing in a principled way the information contained in electrophysiological recordings. Here we demonstrate that, using a convex optimization procedure we previously introduced, a Generalized Integrate-and-Fire model can be accurately fitted with a limited amount of data. The model is capable of predicting both the spiking activity and the subthreshold dynamics of different cell types, and can be used for online characterization of neuronal properties. A protocol is proposed that, combined with emergent technologies for automatic patch-clamp recordings, permits automated, in vitro high-throughput characterization of single neurons. PMID:26083597

  7. Automation, Control and Modeling of Compound Semiconductor Thin-Film Growth

    SciTech Connect

    Breiland, W.G.; Coltrin, M.E.; Drummond, T.J.; Horn, K.M.; Hou, H.Q.; Klem, J.F.; Tsao, J.Y.

    1999-02-01

    This report documents the results of a laboratory-directed research and development (LDRD) project on control and agile manufacturing in the critical metalorganic chemical vapor deposition (MOCVD) and molecular beam epitaxy (MBE) materials growth processes essential to high-speed microelectronics and optoelectronic components. This effort is founded on a modular and configurable process automation system that serves as a backbone allowing integration of process-specific models and sensors. We have developed and integrated MOCVD- and MBE-specific models in this system, and demonstrated the effectiveness of sensor-based feedback control in improving the accuracy and reproducibility of semiconductor heterostructures. In addition, within this framework we have constructed "virtual reactor" models for growth processes, with the goal of greatly shortening the epitaxial growth process development cycle.

  8. Automated High-Throughput Characterization of Single Neurons by Means of Simplified Spiking Models.

    PubMed

    Pozzorini, Christian; Mensi, Skander; Hagens, Olivier; Naud, Richard; Koch, Christof; Gerstner, Wulfram

    2015-06-01

    Single-neuron models are useful not only for studying the emergent properties of neural circuits in large-scale simulations, but also for extracting and summarizing in a principled way the information contained in electrophysiological recordings. Here we demonstrate that, using a convex optimization procedure we previously introduced, a Generalized Integrate-and-Fire model can be accurately fitted with a limited amount of data. The model is capable of predicting both the spiking activity and the subthreshold dynamics of different cell types, and can be used for online characterization of neuronal properties. A protocol is proposed that, combined with emergent technologies for automatic patch-clamp recordings, permits automated, in vitro high-throughput characterization of single neurons. PMID:26083597

  9. Improved automated diagnosis of misfire in internal combustion engines based on simulation models

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Bond Randall, Robert

    2015-12-01

    In this paper, a new advance in the application of Artificial Neural Networks (ANNs) to the automated diagnosis of misfires in Internal Combustion engines(IC engines) is detailed. The automated diagnostic system comprises three stages: fault detection, fault localization and fault severity identification. Particularly, in the severity identification stage, separate Multi-Layer Perceptron networks (MLPs) with saturating linear transfer functions were designed for individual speed conditions, so they could achieve finer classification. In order to obtain sufficient data for the network training, numerical simulation was used to simulate different ranges of misfires in the engine. The simulation models need to be updated and evaluated using experimental data, so a series of experiments were first carried out on the engine test rig to capture the vibration signals for both normal condition and with a range of misfires. Two methods were used for the misfire diagnosis: one is based on the torsional vibration signals of the crankshaft and the other on the angular acceleration signals (rotational motion) of the engine block. Following the signal processing of the experimental and simulation signals, the best features were selected as the inputs to ANN networks. The ANN systems were trained using only the simulated data and tested using real experimental cases, indicating that the simulation model can be used for a wider range of faults for which it can still be considered valid. The final results have shown that the diagnostic system based on simulation can efficiently diagnose misfire, including location and severity.

  10. An efficient approach to automate the manual trial and error calibration of activated sludge models.

    PubMed

    Sin, Gürkan; De Pauw, Dirk J W; Weijers, Stefan; Vanrolleghem, Peter A

    2008-06-15

    An efficient approach is introduced to help automate the rather tedious manual trial-and-error model calibration currently used in activated sludge modeling practice. To this end, we have evaluated a Monte Carlo based calibration approach consisting of four steps: (i) parameter subset selection, (ii) defining the parameter space, (iii) parameter sampling for Monte Carlo simulations and (iv) selecting the best Monte Carlo simulation, thereby providing the calibrated parameter values. The approach was evaluated on a formerly calibrated full-scale ASM2d model for a domestic plant (located in The Netherlands), using in total 3 months of dynamic oxygen, ammonia and nitrate sensor data. The Monte Carlo calibrated model was validated successfully using ammonia, oxygen and nitrate data collected at high measurement frequency. Statistical analysis of the residuals using mean absolute error (MAE), root mean square error (RMSE) and the Janus coefficient showed that the calibrated model was able to provide statistically accurate and valid predictions for ammonium, oxygen and nitrate. This shows that this pragmatic approach can perform the task of model calibration and can therefore be used in practice to save modelers the valuable time spent on this step of activated sludge modeling. The high computational demand is a downside of this approach, but this can be overcome by using distributed computing. Overall we expect that the use of such systems analysis tools in the application of activated sludge models will improve the quality of model predictions and their use in decision making. PMID:18098316
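
    The four-step Monte Carlo loop described above can be sketched with a toy one-output model; the parameter names, ranges, and synthetic data below are placeholders, not the ASM2d model or plant data from the study:

```python
# Sketch of Monte Carlo calibration: sample the parameter space uniformly,
# score each sample against measurements (RMSE), keep the best sample.
# The "model", parameter names, and ranges are hypothetical stand-ins.
import random

random.seed(1)

# (i) parameter subset and (ii) parameter space (sampling ranges)
PARAM_RANGES = {"mu_max": (0.5, 6.0), "K_O2": (0.1, 1.0)}

def model(params, t):
    # toy saturation-type output, standing in for an activated sludge model
    return params["mu_max"] * t / (params["K_O2"] + t)

def rmse(params, data):
    errors = [(model(params, t) - y) ** 2 for t, y in data]
    return (sum(errors) / len(errors)) ** 0.5

# synthetic "measurements" generated from known true parameters
true = {"mu_max": 3.0, "K_O2": 0.4}
data = [(t / 10.0, model(true, t / 10.0)) for t in range(1, 21)]

# (iii) Monte Carlo sampling and (iv) selection of the best simulation
samples = [{k: random.uniform(*r) for k, r in PARAM_RANGES.items()}
           for _ in range(5000)]
best = min(samples, key=lambda p: rmse(p, data))
```

    As the abstract notes, the cost is the many model runs; since the samples are independent, the loop parallelizes trivially across machines.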

  11. Mathematical Modeling Research to Support the Development of Automated Insulin-Delivery Systems

    PubMed Central

    Steil, Garry M.; Reifman, Jaques

    2009-01-01

    The world leaders in glycemia modeling convened during the Eighth Annual Diabetes Technology Meeting in Bethesda, Maryland, on 14 November 2008, to discuss the current practices in mathematical modeling and make recommendations for its use in developing automated insulin-delivery systems. This report summarizes the collective views of the 25 participating experts in addressing the following four topics: current practices in modeling efforts for closed-loop control; framework for exchange of information and collaboration among research centers; major barriers for the development of accurate models; and key tasks for developing algorithms to build closed-loop control systems. Among the participants, the following main conclusions and recommendations were widely supported: Physiologic variance represents the single largest technical challenge to creating accurate simulation models.A Web site describing different models and the data supporting them should be made publically available, with funding agencies and journals requiring investigators to provide open access to both models and data.Existing simulation models should be compared and contrasted, using the same evaluation and validation criteria, to better assess the state of the art, understand any inherent limitations in the models, and identify gaps in data and/or model capability. PMID:20144371

  12. Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter

    NASA Technical Reports Server (NTRS)

    Belknap, Shannon; Zhang, Michael

    2013-01-01

    The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to quickly and accurately perform 3D thermal analyses on damaged lower surface acreage tiles and structures beneath the damaged locations on a Space Shuttle Orbiter. The 3D model builder created both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to quickly and accurately create a 3D model of a damaged lower surface tile on the orbiter. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which would take a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage, the correct tile type, tile thickness, structure thickness, and SIP thickness of the damage, so that the model builder program can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.

  13. TOBAGO a semi-automated approach for the generation of 3-D building models

    NASA Astrophysics Data System (ADS)

    Gruen, Armin

    3-D city models are in increasing demand for a great number of applications. Photogrammetry is a relevant technology that can provide an abundance of geometric, topologic and semantic information concerning these models. The pressure to generate a large amount of data with a high degree of accuracy and completeness poses a great challenge to photogrammetry. The development of automated and semi-automated methods for the generation of those data sets is therefore a key issue in photogrammetric research. We present in this article a strategy and methodology for an efficient generation of even fairly complex building models. Within this concept we request the operator to measure the house roofs from a stereomodel in the form of an unstructured point cloud. According to our experience this can be done very quickly. Even a non-experienced operator can measure several hundred roofs or roof units per day. In a second step we fit generic building models fully automatically to these point clouds. The structure information is inherently included in these building models. In such a way geometric, topologic and even semantic data can be handed over to a CAD system, in our case AutoCad, for further visualization and manipulation. The structuring is achieved in three steps. In a first step a classifier is initiated which recognizes the class of houses a particular roof point cloud belongs to. This recognition step is primarily based on the analysis of the number of ridge points. In the second and third steps the concrete topological relations between roof points are investigated and generic building models are fitted to the point clouds. Based on the technique of constraint-based reasoning, two geometrical parsers solve this problem. We have tested the methodology under a variety of different conditions in several pilot projects. The results indicate the good performance of our approach. In addition we demonstrate how the results can be used for visualization (texture mapping) and animation (walk-throughs and fly-overs).

  14. Finite element analysis of thermal residual stresses at graded ceramic-metal interfaces. I - Model description and geometrical effects. II- Interface optimization for residual stress reduction

    NASA Astrophysics Data System (ADS)

    Williamson, R. L.; Rabin, B. H.; Drake, J. T.

    1993-07-01

    An elastic-plastic FEM numerical model for simulating residual stresses at graded ceramic-metal interfaces during cooling, which accounts for the effect of plasticity, was developed and used to investigate residual stresses at graded and nongraded ceramic-metal interfaces in the Al2O3-Ni system. Specimen geometries were designed to provide information related to joining, coating, and thick-film applications. The results demonstrate the importance of accounting for plasticity when comparing graded and nongraded interfaces. It is shown that, in some cases, optimization of the microstructure is required to achieve reductions in certain critical stress components believed to be important for controlling interface failure. Using the new model, interface conditions favorable for achieving residual stress reduction were identified by investigating the effects of different interlayer thicknesses and nonlinear composition profiles on the strain and stress distributions established during cooling from an assumed elevated bonding temperature.

  15. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.

  16. Study and characterization of interfaces in a two-dimensional generalized voter model

    NASA Astrophysics Data System (ADS)

    Bordogna, Clelia M.; Albano, Ezequiel V.

    2011-04-01

    We propose and study, by means of numerical simulations, the time evolution of interfaces in a generalized voter model in d = 2 dimensions. In this model, a randomly selected voter can change his or her opinion (state) with a certain probability that is an algebraic function of the average opinion of his or her nearest neighbors. By starting with well-defined (sharp) interfaces between two different states of opinion, we measure the time dependence of the interface width (w), which behaves as a power law, i.e., w ∝ t^α. In this way we characterized three different types of interfaces: (i) between an ordered phase (consensus) and a disordered one (α = 1/2); (ii) between ordered phases having different states of opinion (α = 1/2), which corresponds to interface coarsening without surface tension; and (iii) as in (ii) but considering surface tension. Here, we observe a finite-size induced crossover with exponents α = 1/4 and α = 1/2 for early and longer times, respectively. So, our study allows for the characterization of interfaces of quite different nature in a unified fashion, providing insight into the understanding of interface coarsening with and without surface tension.

  17. FE Modeling of Guided Wave Propagation in Structures with Weak Interfaces

    NASA Astrophysics Data System (ADS)

    Hosten, Bernard; Castaings, Michel

    2005-04-01

    This paper describes the use of a Finite Element code for modeling the effects of weak interfaces on the propagation of low-order Lamb modes. The variable properties of the interface are modeled by uniform distributions of compression and shear springs that ensure the continuity of the stresses and impose a discontinuity in the displacement field. The method is tested by comparison with measurements that were presented in a previous QNDE conference (B. W. Drinkwater, M. Castaings, and B. Hosten, "The interaction of Lamb waves with solid-solid interfaces", Q.N.D.E. Vol. 22 (2003) 1064-1071). The interface was the contact between a rough elastomer with high internal damping loaded against one surface of a glass plate. Both normal and shear stiffnesses of the interface were quantified from the attenuation of A0 and S0 Lamb waves caused by leakage of energy from the plate into the elastomer and measured at each step of a compressive loading. The FE model is made in the frequency domain, thus allowing the viscoelastic properties of the elastomer to be modeled by using complex moduli as input data. By introducing the interface stiffnesses in the code, the predicted guided wave attenuations are compared to the experimental results to validate the numerical FE method.

  18. Analytical model for thermal boundary conductance and equilibrium thermal accommodation coefficient at solid/gas interfaces

    NASA Astrophysics Data System (ADS)

    Giri, Ashutosh; Hopkins, Patrick E.

    2016-02-01

    We develop an analytical model for the thermal boundary conductance between a solid and a gas. By considering the thermal fluxes in the solid and the gas, we describe the transmission of energy across the solid/gas interface with diffuse mismatch theory. From the predicted thermal boundary conductances across solid/gas interfaces, the equilibrium thermal accommodation coefficient is determined and compared to predictions from molecular dynamics simulations on the model solid-gas systems. We show that our model is applicable for modeling the thermal accommodation of gases on solid surfaces at non-cryogenic temperatures and relatively strong solid-gas interactions (ε_sf ≳ k_B T).

  19. Pilot interaction with cockpit automation 2: An experimental study of pilots' model and awareness of the Flight Management System

    NASA Technical Reports Server (NTRS)

    Sarter, Nadine B.; Woods, David D.

    1994-01-01

    Technological developments have made it possible to automate more and more functions on the commercial aviation flight deck and in other dynamic high-consequence domains. This increase in the degrees of freedom in design has shifted questions away from narrow technological feasibility. Many concerned groups, from designers and operators to regulators and researchers, have begun to ask questions about how we should use the possibilities afforded by technology skillfully to support and expand human performance. In this article, we report on an experimental study that addressed these questions by examining pilot interaction with the current generation of flight deck automation. Previous results on pilot-automation interaction derived from pilot surveys, incident reports, and training observations have produced a corpus of features and contexts in which human-machine coordination is likely to break down (e.g., automation surprises). We used these data to design a simulated flight scenario that contained a variety of probes designed to reveal pilots' mental model of one major component of flight deck automation: the Flight Management System (FMS). The events within the scenario were also designed to probe pilots' ability to apply their knowledge and understanding in specific flight contexts and to examine their ability to track the status and behavior of the automated system (mode awareness). Although pilots were able to 'make the system work' in standard situations, the results reveal a variety of latent problems in pilot-FMS interaction that can affect pilot performance in nonnormal time critical situations.

  20. A multilayered sharp interface model of coupled freshwater and saltwater flow in coastal systems: model development and application

    USGS Publications Warehouse

    Essaid, H.I.

    1990-01-01

    The model allows for regional simulation of coastal groundwater conditions, including the effects of saltwater dynamics on the freshwater system. Vertically integrated freshwater and saltwater flow equations incorporating the interface boundary condition are solved within each aquifer. Leakage through confining layers is calculated by Darcy's law, accounting for density differences across the layer. The locations of the interface tip and toe, within grid blocks, are tracked by linearly extrapolating the position of the interface. The model has been verified using available analytical solutions and experimental results and applied to the Soquel-Aptos basin, Santa Cruz County, California. -from Author

  1. CHANNEL MORPHOLOGY TOOL (CMT): A GIS-BASED AUTOMATED EXTRACTION MODEL FOR CHANNEL GEOMETRY

    SciTech Connect

    JUDI, DAVID; KALYANAPU, ALFRED; MCPHERSON, TIMOTHY; BERSCHEID, ALAN

    2007-01-17

    This paper describes an automated Channel Morphology Tool (CMT) developed in ArcGIS 9.1 environment. The CMT creates cross-sections along a stream centerline and uses a digital elevation model (DEM) to create station points with elevations along each of the cross-sections. The generated cross-sections may then be exported into a hydraulic model. Along with the rapid cross-section generation the CMT also eliminates any cross-section overlaps that might occur due to the sinuosity of the channels using the Cross-section Overlap Correction Algorithm (COCoA). The CMT was tested by extracting cross-sections from a 5-m DEM for a 50-km channel length in Houston, Texas. The extracted cross-sections were compared directly with surveyed cross-sections in terms of the cross-section area. Results indicated that the CMT-generated cross-sections satisfactorily matched the surveyed data.
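
    The cross-section generation step can be illustrated geometrically: station the centerline at fixed intervals and construct a perpendicular section line of a given half-width at each station. A real CMT-style tool would then sample DEM elevations along these lines and apply overlap correction; this sketch handles the geometry only, and the function names are hypothetical:

```python
# Illustrative sketch of stationing a polyline centerline and building
# perpendicular cross-section lines at fixed spacing. Geometry only; a
# real tool would next sample DEM elevations along each section.
import math

def cross_sections(centerline, spacing, half_width):
    """centerline: list of (x, y) vertices. Returns a list of
    ((x1, y1), (x2, y2)) endpoint pairs of perpendicular sections."""
    sections = []
    for (x0, y0), (x1, y1) in zip(centerline, centerline[1:]):
        seg_len = math.hypot(x1 - x0, y1 - y0)
        ux, uy = (x1 - x0) / seg_len, (y1 - y0) / seg_len   # unit tangent
        nx, ny = -uy, ux                                     # unit normal
        d = 0.0
        while d < seg_len:                   # stations along this segment
            cx, cy = x0 + ux * d, y0 + uy * d
            sections.append(((cx - nx * half_width, cy - ny * half_width),
                             (cx + nx * half_width, cy + ny * half_width)))
            d += spacing
    return sections

# straight 100 m reach, sections every 25 m, 10 m half-width
secs = cross_sections([(0.0, 0.0), (100.0, 0.0)],
                      spacing=25.0, half_width=10.0)
```

    On a sinuous channel, adjacent sections built this way can overlap, which is the situation the COCoA algorithm mentioned above is designed to correct.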

  2. An Accuracy Assessment of Automated Photogrammetric Techniques for 3d Modeling of Complex Interiors

    NASA Astrophysics Data System (ADS)

    Georgantas, A.; Brédif, M.; Pierrot-Deseilligny, M.

    2012-07-01

    This paper presents a comparison of automatic photogrammetric techniques to terrestrial laser scanning for 3D modelling of complex interior spaces. We evaluate the automated photogrammetric techniques not only in terms of their geometric quality compared to laser scanning but also in terms of monetary cost and acquisition and computational time. For this purpose we chose a modern building's stairway as the test site. APERO/MICMAC (IGN), an open-source photogrammetric software suite, was used for the production of the 3D photogrammetric point cloud, which was compared to the one acquired by a Leica ScanStation 2 laser scanner. After performing various qualitative and quantitative controls we present the advantages and disadvantages of each 3D modelling method applied in a complex interior of a modern building.

  3. Analytic Element Modeling of Steady Interface Flow in Multilayer Aquifers Using AnAqSim.

    PubMed

    Fitts, Charles R; Godwin, Joshua; Feiner, Kathleen; McLane, Charles; Mullendore, Seth

    2015-01-01

This paper presents the analytic element modeling approach implemented in the software AnAqSim for simulating steady groundwater flow with a sharp fresh-salt interface in multilayer (three-dimensional) aquifer systems. Compared with numerical methods for variable-density interface modeling, this approach allows quick model construction and can yield useful guidance about the three-dimensional configuration of an interface even at a large scale. The approach employs subdomains and multiple layers as outlined by Fitts (2010), with the addition of discharge potentials for shallow interface flow (Strack 1989). The following simplifying assumptions are made: steady flow, a sharp interface between fresh and salt water, static salt water, and no resistance to vertical flow (hydrostatic heads) within each fresh water layer. A key component of this approach is a transition to a thin, fixed minimum fresh water thickness mode when the fresh water thickness approaches zero. This allows the solution to converge and determine the steady interface position without a long transient simulation. The approach is checked against the widely used numerical codes SEAWAT and SWI/MODFLOW, and a hypothetical application of the method to a coastal wellfield is presented. PMID:24942663
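Under these assumptions (sharp interface, static salt water, hydrostatic fresh water), the interface depth follows the classic Ghyben-Herzberg relation. The sketch below illustrates that relation plus a clamped minimum thickness in the spirit of AnAqSim's fixed-minimum-thickness mode; it is an illustration of the concept, not the actual implementation:

```python
def interface_depth(head, rho_f=1000.0, rho_s=1025.0):
    """Ghyben-Herzberg sharp-interface depth below sea level (m) for
    a fresh water head `head` (m above sea level). For the default
    densities this is 40 * head."""
    return rho_f / (rho_s - rho_f) * head

def fresh_thickness(head, aquifer_base_depth, min_thickness=0.1):
    """Fresh water column thickness above the interface. The interface
    cannot drop below the aquifer base, and the thickness is clamped
    to a small fixed minimum (illustrating, not reproducing, the mode
    that lets the steady solution converge as thickness approaches
    zero). `min_thickness` here is an assumed value."""
    zeta = min(interface_depth(head), aquifer_base_depth)
    return max(head + zeta, min_thickness)
```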

  4. Interfaces with internal structures in generalized rock-paper-scissors models

    NASA Astrophysics Data System (ADS)

    Avelino, P. P.; Bazeia, D.; Losano, L.; Menezes, J.; de Oliveira, B. F.

    2014-04-01

In this work we investigate the development of stable dynamical structures along interfaces separating domains belonging to enemy partnerships in the context of cyclic predator-prey models with an even number of species N ≥ 8. We use both stochastic and field theory simulations in one and two spatial dimensions, as well as analytical arguments, to describe the association at the interfaces of mutually neutral individuals belonging to enemy partnerships and to probe their role in the development of the dynamical structures at the interfaces. We identify an interesting behavior associated with the symmetric or asymmetric evolution of the interface profiles depending on whether N/2 is odd or even, respectively. We also show that the macroscopic evolution of the interface network is not very sensitive to the internal structure of the interfaces. Although this work focuses on cyclic predator-prey models with an even number of species, we argue that the results are expected to be quite generic in the context of spatial stochastic May-Leonard models.

  5. The use of automated parameter searches to improve ion channel kinetics for neural modeling.

    PubMed

    Hendrickson, Eric B; Edgerton, Jeremy R; Jaeger, Dieter

    2011-10-01

    The voltage and time dependence of ion channels can be regulated, notably by phosphorylation, interaction with phospholipids, and binding to auxiliary subunits. Many parameter variation studies have set conductance densities free while leaving kinetic channel properties fixed as the experimental constraints on the latter are usually better than on the former. Because individual cells can tightly regulate their ion channel properties, we suggest that kinetic parameters may be profitably set free during model optimization in order to both improve matches to data and refine kinetic parameters. To this end, we analyzed the parameter optimization of reduced models of three electrophysiologically characterized and morphologically reconstructed globus pallidus neurons. We performed two automated searches with different types of free parameters. First, conductance density parameters were set free. Even the best resulting models exhibited unavoidable problems which were due to limitations in our channel kinetics. We next set channel kinetics free for the optimized density matches and obtained significantly improved model performance. Some kinetic parameters consistently shifted to similar new values in multiple runs across three models, suggesting the possibility for tailored improvements to channel models. These results suggest that optimized channel kinetics can improve model matches to experimental voltage traces, particularly for channels characterized under different experimental conditions than recorded data to be matched by a model. The resulting shifts in channel kinetics from the original template provide valuable guidance for future experimental efforts to determine the detailed kinetics of channel isoforms and possible modulated states in particular types of neurons. PMID:21243419
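The idea of "setting kinetic parameters free" can be shown in miniature by fitting the half-activation voltage and slope of a Boltzmann steady-state activation curve to target data. Real studies optimize full conductance-based neuron models with evolutionary or simplex optimizers; this toy random search is only a schematic of the concept:

```python
import math
import random

def m_inf(v, v_half, k):
    # Boltzmann steady-state activation of a voltage-gated channel
    return 1.0 / (1.0 + math.exp((v_half - v) / k))

def fit_kinetics(volts, targets, n_iter=20000, seed=0):
    """Random search over the kinetic parameters (v_half, k),
    minimizing squared error against target activation values.
    Returns the best (v_half, k) pair and its error."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_iter):
        vh = rng.uniform(-80.0, 0.0)
        kk = rng.uniform(1.0, 20.0)
        err = sum((m_inf(v, vh, kk) - t) ** 2
                  for v, t in zip(volts, targets))
        if err < best_err:
            best, best_err = (vh, kk), err
    return best, best_err
```

In the spirit of the paper, such a search would be run after the conductance densities are optimized, so that systematic residuals can be absorbed by shifted kinetics.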

  6. A methodology for model-based development and automated verification of software for aerospace systems

    NASA Astrophysics Data System (ADS)

    Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.

Today's software for aerospace systems is typically very complex. This is due to the increasing number of features as well as the high demands for safety, reliability, and quality. This complexity also leads to significantly higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking have become more popular. The latter examine the whole state space and, consequently, achieve full test coverage. Nevertheless, despite the obvious advantages, this technique is as yet rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive work flow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements of our methodology. The SCADE Suite is well established in the avionics and defense, rail transportation, energy, and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.

  7. Thermal modeling of roll and strip interfaces in rolling processes. Part 2: Simulation

    SciTech Connect

    Tseng, A.A.

    1999-02-12

Part 1 of this paper reviewed the modeling approaches and correlations used to study the interface heat transfer phenomena of the roll-strip contact region in rolling processes. The thermal contact conductance approach was recommended for modeling the interface phenomena. To illustrate, the recommended approach and selected correlations are adopted in the present study to model the roll-strip interface region. The specific values of the parameters used to correlate the corresponding thermal contact conductance for typical cold and hot rolling of steels are first estimated. The influence of thermal contact resistance on the temperature distributions of the roll and strip is then studied. Comparing the present simulation results with previously published experimental and analytical results shows that the thermal contact conductance approach and numerical models used can reliably simulate the heat transfer behavior of the rolling process.
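The thermal contact conductance approach replaces the assumption of perfect roll-strip contact with a finite conductance h_c (the reciprocal of the contact resistance). The interface flux, and a series combination with a conducting surface layer such as oxide scale, can be sketched as follows (illustrative formulas only, with assumed example values, not the paper's correlations):

```python
def interface_heat_flux(h_c, t_strip, t_roll):
    """Heat flux (W/m^2) across the roll-strip contact, modeled with
    a thermal contact conductance h_c (W/m^2.K): q = h_c * dT."""
    return h_c * (t_strip - t_roll)

def series_conductance(h_c, k_layer, thickness):
    """Effective conductance when the contact resistance acts in
    series with a surface layer (conductivity k_layer, W/m.K) of the
    given thickness (m): resistances 1/h_c and thickness/k_layer add."""
    return 1.0 / (1.0 / h_c + thickness / k_layer)
```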

  8. General MACOS Interface for Modeling and Analysis for Controlled Optical Systems

    NASA Technical Reports Server (NTRS)

    Sigrist, Norbert; Basinger, Scott A.; Redding, David C.

    2012-01-01

The General MACOS Interface (GMI) for Modeling and Analysis for Controlled Optical Systems (MACOS) enables the use of MATLAB as a front-end for JPL's critical optical modeling package, MACOS. MACOS is JPL's in-house optical modeling software, which has proven to be a superb tool for advanced systems engineering of optical systems. GMI, coupled with MACOS, allows for seamless interfacing with modeling tools from other disciplines to make possible integration of dynamics, structures, and thermal models with the addition of control systems for deformable optics and other actuated optics. This software package is designed as a tool for analysts to quickly and easily use MACOS without needing to be an expert at programming MACOS. The strength of MACOS is its ability to interface with various modeling/development platforms, allowing evaluation of system performance with thermal, mechanical, and optical modeling parameter variations. GMI provides an improved means for accessing selected key MACOS functionalities. The main objective of GMI is to marry the vast mathematical and graphical capabilities of MATLAB with the powerful optical analysis engine of MACOS, thereby providing a useful tool to anyone who can program in MATLAB. GMI also improves modeling efficiency by eliminating the need to write an interface function for each task/project, reducing error sources, speeding up user/modeling tasks, and making MACOS well suited for fast prototyping.

  9. Toward automated model building from video in computer-assisted diagnoses in colonoscopy

    NASA Astrophysics Data System (ADS)

    Koppel, Dan; Chen, Chao-I.; Wang, Yuan-Fang; Lee, Hua; Gu, Jia; Poirson, Allen; Wolters, Rolf

    2007-03-01

A 3D colon model is an essential component of a computer-aided diagnosis (CAD) system in colonoscopy to assist surgeons in visualization, and surgical planning and training. This research is thus aimed at developing the ability to construct a 3D colon model from endoscopic videos (or images). This paper summarizes our ongoing research in automated model building in colonoscopy. We have developed the mathematical formulations and algorithms for modeling static, localized 3D anatomic structures within a colon that can be rendered from multiple novel viewpoints for close scrutiny and precise dimensioning. This ability is useful for the scenario when a surgeon notices some abnormal tissue growth and wants a close inspection and precise dimensioning. Our modeling system uses only video images and follows a well-established computer-vision paradigm for image-based modeling. We extract prominent features from images and establish their correspondences across multiple images by continuous tracking and discrete matching. We then use these feature correspondences to infer the camera's movement. The camera motion parameters allow us to rectify images into a standard stereo configuration and calculate pixel movements (disparity) in these images. The inferred disparity is then used to recover 3D surface depth. The inferred 3D depth, together with texture information recorded in images, allows us to construct a 3D model with both structure and appearance information that can be rendered from multiple novel viewpoints.
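The final step of such a pipeline, recovering depth from disparity after rectification into a standard stereo configuration, follows the triangulation relation Z = f·B/d. A minimal sketch (illustrative only, not the paper's code):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulated depth for a rectified stereo pair: Z = f * B / d.
    Units here are assumed for illustration: focal length in pixels,
    baseline in mm; the returned depth is then in mm. For endoscopic
    video the 'baseline' is the inferred camera translation between
    frames rather than a fixed rig separation."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```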

  10. A Translational Animal Model for Scar Compression Therapy Using an Automated Pressure Delivery System

    PubMed Central

    Alkhalil, A.; Tejiram, S.; Travis, T.E.; Prindeze, N.J.; Carney, B.C.; Moffatt, L.T.; Johnson, L.S.; Ramella-Roman, J.

    2015-01-01

Background: Pressure therapy has been used to prevent and treat hypertrophic scars following cutaneous injury despite the limited understanding of its mechanism of action and the lack of an established animal model to optimize its usage. Objectives: The aim of this work was to test and characterize a novel automated pressure delivery system designed to deliver steady and controllable pressure in a red Duroc swine hypertrophic scar model. Methods: Excisional wounds were created by dermatome on 6 red Duroc pigs and allowed to scar while assessed weekly via gross visual inspection, laser Doppler imaging, and biopsy. A portable novel automated pressure delivery system was mounted on developing scars (n = 6) for 2 weeks. Results: The device maintained a pressure range of 30 ± 4 mmHg for more than 90% of the 2-week treatment period. Pressure readings outside this designated range were attributed to normal animal behavior and responses to healing progression. Gross scar examination by the Vancouver Scar Scale showed significant and sustained (>4 weeks) improvement in pressure-treated scars (P < .05). Histological examination of pressure-treated scars showed a significant decrease in dermal thickness compared with other groups (P < .05). Pressure-treated scars also showed increased perfusion by laser Doppler imaging during the treatment period compared with sham-treated and untreated scars (P < .05). Cellular quantification showed differential changes among treatment groups. Conclusion: These results illustrate the applications of this technology in the Duroc swine hypertrophic scar model and the evaluation and optimization of pressure therapy in wound-healing and hypertrophic scar management. PMID:26171101

  11. Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments

    NASA Astrophysics Data System (ADS)

    Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin

Developments in computer technology over the last decade have changed the ways computers are used. The emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments, and especially of their user interfaces, a challenging and time-consuming task. We propose a model-based approach, which allows adapting the user interface at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language defining the fundamentals and constraints of the user interface, a runtime architecture exploits the description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting the applications also to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as a running example to illustrate the approach.

  12. A correction for Dupuit-Forchheimer interface flow models of seawater intrusion in unconfined coastal aquifers

    NASA Astrophysics Data System (ADS)

    Koussis, Antonis D.; Mazi, Katerina; Riou, Fabien; Destouni, Georgia

    2015-06-01

Interface flow models that use the Dupuit-Forchheimer (DF) approximation for assessing the freshwater lens and the seawater intrusion in coastal aquifers lack representation of the gap through which fresh groundwater discharges to the sea. In these models, the interface outcrops unrealistically at the same point as the free surface, is too shallow and intersects the aquifer base too far inland, thus overestimating an intruding seawater front. To correct this shortcoming of DF-type interface solutions for unconfined aquifers, we here adapt the outflow gap estimate of an analytical 2-D interface solution for infinitely thick aquifers to fit the 50%-salinity contour of variable-density solutions for finite-depth aquifers. We further improve the accuracy of the interface toe location predicted with depth-integrated DF interface solutions by ~20% (relative to the 50%-salinity contour of variable-density solutions) by combining the outflow-gap adjusted aquifer depth at the sea with a transverse-dispersion adjusted density ratio (Pool and Carrera, 2011), appropriately modified for unconfined flow. The effectiveness of the combined correction is exemplified for two regional Mediterranean aquifers, the Israel Coastal and Nile Delta aquifers.

  13. Fullerene film on metal surface: Diffusion of metal atoms and interface model

    SciTech Connect

    Li, Wen-jie; Li, Hai-Yang; Li, Hong-Nian; Wang, Peng; Wang, Xiao-Xiong; Wang, Jia-Ou; Wu, Rui; Qian, Hai-Jie; Ibrahim, Kurash

    2014-05-12

We try to understand the fact that fullerene films behave as n-type semiconductors in electronic devices and to establish a model describing the energy level alignment at fullerene/metal interfaces. The C60/Ag(100) system was taken as a prototype and studied with photoemission measurements. The photoemission spectra revealed that Ag atoms of the substrate diffused far into the C60 film and donated electrons to the molecules. The C60 film thus became an n-type semiconductor with the Ag atoms acting as dopants. The C60/Ag(100) interface should be understood as two sub-interfaces on both sides of the molecular layer directly contacting the substrate. One sub-interface exhibits Fermi level alignment, and the other vacuum level alignment.

  14. Developing a User-process Model for Designing Menu-based Interfaces: An Exploratory Study.

    ERIC Educational Resources Information Center

    Ju, Boryung; Gluck, Myke

    2003-01-01

    The purpose of this study was to organize menu items based on a user-process model and implement a new version of current software for enhancing usability of interfaces. A user-process model was developed, drawn from actual users' understanding of their goals and strategies to solve their information needs by using Dervin's Sense-Making Theory

  15. Rethinking Design Process: Using 3D Digital Models as an Interface in Collaborative Session

    ERIC Educational Resources Information Center

    Ding, Suining

    2008-01-01

    This paper describes a pilot study for an alternative design process by integrating a designer-user collaborative session with digital models. The collaborative session took place in a 3D AutoCAD class for a real world project. The 3D models served as an interface for designer-user collaboration during the design process. Students not only learned

  16. Automated home cage assessment shows behavioral changes in a transgenic mouse model of spinocerebellar ataxia type 17.

    PubMed

    Portal, Esteban; Riess, Olaf; Nguyen, Huu Phuc

    2013-08-01

Spinocerebellar Ataxia type 17 (SCA17) is an autosomal dominantly inherited, neurodegenerative disease characterized by ataxia, involuntary movements, and dementia. A novel SCA17 mouse model carrying a 71-polyglutamine-repeat expansion in the TATA-binding protein (TBP) has shown an age-related motor deficit using a classic motor test, yet the concomitant weight increase might be a confounding factor for this measurement. In this study we used an automated home cage system to test several motor readouts for this same model, to confirm pathological behavior results and evaluate the benefits of automated home cages in behavior phenotyping. Our results confirm motor deficits in the Tbp/Q71 mice and present previously unrecognized behavioral characteristics obtained from the automated home cage, indicating its usefulness for high-throughput screening and testing, e.g. of therapeutic compounds. PMID:23665119

  17. Automated Generation of Fault Management Artifacts from a Simple System Model

    NASA Technical Reports Server (NTRS)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and restructured it into an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
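The core traversal, walking components and following fault-propagation links to fill FMEA rows, can be caricatured with a plain dictionary-based model. The names and structure below are hypothetical stand-ins, not the SMAP SysML schema or the actual query software:

```python
def build_fmea(components, feeds):
    """Emit FMEA rows (component, failure mode, downstream effects).
    `components` maps component name -> list of failure modes;
    `feeds` maps component name -> list of directly affected
    components. Effects are found by following `feeds` transitively,
    i.e. simple fault propagation through the model graph."""
    def downstream(name, seen=None):
        seen = seen or set()
        for nxt in feeds.get(name, []):
            if nxt not in seen:
                seen.add(nxt)
                downstream(nxt, seen)
        return seen

    rows = []
    for comp, modes in components.items():
        for mode in modes:
            rows.append((comp, mode, sorted(downstream(comp))))
    return rows
```

Against a real SysML model, the same pattern would query model elements and their relationships through the modeling tool's API instead of dictionaries.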

  18. Examining Uncertainty in Demand Response Baseline Models and Variability in Automated Response to Dynamic Pricing

    SciTech Connect

    Mathieu, Johanna L.; Callaway, Duncan S.; Kiliccote, Sila

    2011-08-15

    Controlling electric loads to deliver power system services presents a number of interesting challenges. For example, changes in electricity consumption of Commercial and Industrial (C&I) facilities are usually estimated using counterfactual baseline models, and model uncertainty makes it difficult to precisely quantify control responsiveness. Moreover, C&I facilities exhibit variability in their response. This paper seeks to understand baseline model error and demand-side variability in responses to open-loop control signals (i.e. dynamic prices). Using a regression-based baseline model, we define several Demand Response (DR) parameters, which characterize changes in electricity use on DR days, and then present a method for computing the error associated with DR parameter estimates. In addition to analyzing the magnitude of DR parameter error, we develop a metric to determine how much observed DR parameter variability is attributable to real event-to-event variability versus simply baseline model error. Using data from 38 C&I facilities that participated in an automated DR program in California, we find that DR parameter errors are large. For most facilities, observed DR parameter variability is likely explained by baseline model error, not real DR parameter variability; however, a number of facilities exhibit real DR parameter variability. In some cases, the aggregate population of C&I facilities exhibits real DR parameter variability, resulting in implications for the system operator with respect to both resource planning and system stability.
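The basic DR-parameter computation, a demand shed measured against a counterfactual baseline prediction, can be reduced to a few lines. The study's actual baseline is a regression with temperature and time-of-week terms plus explicit error bounds, so this is only the skeleton of the idea:

```python
import numpy as np

def dr_parameters(baseline_pred, observed):
    """Average demand shed over a DR event window, absolute and as a
    fraction of predicted baseline load. `baseline_pred` is the
    counterfactual (what the facility would have consumed without the
    event); `observed` is metered consumption. Both are 1-D arrays
    over the event intervals."""
    shed = baseline_pred - observed
    return shed.mean(), shed.mean() / baseline_pred.mean()
```

The paper's key point is that the uncertainty of `baseline_pred` itself must be propagated into these parameters before event-to-event variability can be interpreted as real.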

  19. Electronic structure of the SiNx/TiN interface: A model system for superhard nanocomposites

    NASA Astrophysics Data System (ADS)

Patscheider, Jörg; Hellgren, Niklas; Haasch, Richard T.; Petrov, Ivan; Greene, J. E.

    2011-03-01

Nanostructured materials such as nanocomposites and nanolaminates, subjects of intense interest in modern materials research, are defined by internal interfaces, the nature of which is generally unknown. Nevertheless, the interfaces often determine the bulk properties. An example of this is superhard nanocomposites with hardness approaching that of diamond. TiN/Si3N4 nanocomposites (TiN nanocrystals encapsulated in a fully percolated SiNx tissue phase) and nanolaminates, in particular, have attracted much attention as model systems for the synthesis of such superhard materials. Here, we use in situ angle-resolved x-ray photoelectron spectroscopy to probe the electronic structure of Si3N4/TiN(001), Si/TiN(001), and Ti/TiN(001) bilayer interfaces, in which 4-ML-thick overlayers are grown in an ultrahigh vacuum system by reactive magnetron sputter deposition onto epitaxial TiN layers on MgO(001). The thickness of the Si3N4, Si, and Ti overlayers is chosen to be thin enough to ensure sufficient electron transparency to probe the interfaces, while being close to values reported in typical nanocomposites and nanolaminates. The results show that these overlayer/TiN(001) interfaces have distinctly different bonding characteristics. Si3N4 exhibits interface polarization through the formation of an interlayer, in which the N concentration is enhanced at higher substrate bias values during Si3N4 deposition. The increased number of Ti-N bonds at the interface, together with the resulting polarization, strengthens interfacial bonding. In contrast, overlayers of Si and, even more so, metallic Ti weaken the interface by minimizing the valence band energy difference between the two phases. A model is proposed that provides a semiquantitative explanation of the interfacial bond strength in nitrogen-saturated and nitrogen-deficient Ti-Si-N nanocomposites.

  20. Conservative phase-field lattice Boltzmann model for interface tracking equation.

    PubMed

    Geier, Martin; Fakhari, Abbas; Lee, Taehun

    2015-06-01

    Based on the phase-field theory, we propose a conservative lattice Boltzmann method to track the interface between two different fluids. The presented model recovers the conservative phase-field equation and conserves mass locally and globally. Two entirely different approaches are used to calculate the gradient of the phase field, which is needed in computation of the normal to the interface. One approach uses finite-difference stencils similar to many existing lattice Boltzmann models for tracking the two-phase interface, while the other one invokes central moments to calculate the gradient of the phase field without any finite differences involved. The former approach suffers from the nonlocality of the collision operator while the latter is entirely local making it highly suitable for massive parallel implementation. Several benchmark problems are carried out to assess the accuracy and stability of the proposed model. PMID:26172824
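The finite-difference variant of the gradient computation (needed to obtain the interface normal from the phase field) can be sketched with second-order central differences. As the abstract notes, it is exactly this kind of stencil that makes the collision operator nonlocal, unlike the central-moments variant; the sketch below assumes periodic boundaries for simplicity:

```python
import numpy as np

def interface_normal(phi, dx=1.0):
    """Unit normal of a 2-D phase field `phi` from second-order
    central differences, with periodic boundaries via np.roll.
    Returns the (nx, ny) components at every grid node."""
    gx = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2 * dx)
    gy = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2 * dx)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-12  # avoid division by zero
    return gx / mag, gy / mag
```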

  1. A Sketching Interface for Freeform 3D Modeling

    NASA Astrophysics Data System (ADS)

    Igarashi, Takeo

    This chapter introduces Teddy, a sketch-based modeling system to quickly and easily design freeform models such as stuffed animals and other rotund objects. The user draws several 2D freeform strokes interactively on the screen and the system automatically constructs plausible 3D polygonal surfaces. Our system supports several modeling operations, including the operation to construct a 3D polygonal surface from a 2D silhouette drawn by the user: it inflates the region surrounded by the silhouette making a wide area fat, and a narrow area thin. Teddy, our prototype system, is implemented as a Java program, and the mesh construction is done in real-time on a standard PC. Our informal user study showed that a first-time user masters the operations within 10 minutes, and can construct interesting 3D models within minutes. We also report the result of a case study where a high school teacher taught various 3D concepts in geography using the system.

  2. Modeling and matching of landmarks for automation of Mars Rover localization

    NASA Astrophysics Data System (ADS)

    Wang, Jue

The Mars Exploration Rover (MER) mission, begun in January 2004, has been extremely successful. However, decision-making for many operation tasks of the current MER mission and the 1997 Mars Pathfinder mission is performed on Earth through a predominantly manual, time-consuming process. Unmanned planetary rover navigation is ideally expected to reduce rover idle time, diminish the need for entering safe-mode, and dynamically handle opportunistic science events without requiring communication with Earth. Successful automation of rover navigation and localization during extraterrestrial exploration requires that accurate position and attitude information can be received by a rover and that the rover has the support of simultaneous localization and mapping. An integrated approach with Bundle Adjustment (BA) and Visual Odometry (VO) can efficiently refine the rover position. However, during the MER mission, BA is done manually because of the difficulty of automating cross-site tie point selection. This dissertation proposes an automatic approach to select cross-site tie points from multiple rover sites based on the methods of landmark extraction, landmark modeling, and landmark matching. The first step in this approach is that important landmarks such as craters and rocks are defined. Methods of automatic feature extraction and landmark modeling are then introduced. Complex models with orientation angles and simple models without those angles are compared. The results have shown that simple models can provide reasonably good results. Next, the sensitivity of different modeling parameters is analyzed. Based on this analysis, cross-site rocks are matched through two complementary stages: rock distribution pattern matching and rock model matching. In addition, a preliminary experiment on orbital and ground landmark matching is also briefly introduced.
Finally, the reliability of the cross-site tie point selection is validated by fault detection, which considers the mapping capability of the MER cameras and the reasons for mismatches. Fault detection strategies are applied at each step of the cross-site tie point selection to automatically verify the accuracy. Mismatches are excluded and localization errors are minimized. The method proposed in this dissertation is demonstrated with datasets from the 2004 MER mission (traverse of 318 m) as well as simulated test data at Silver Lake, California (traverse of 5.5 km). The accuracy analysis demonstrates that the algorithm is efficient at automatically selecting a sufficient number of well-distributed, high-quality tie points to link the ground images into an image network for BA. The method worked successfully along a continuous 1.1 km stretch. With BA performed, highly accurate maps can be created to help the rover navigate precisely and automatically. The method also enables autonomous long-range Mars rover localization.
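The rock-matching stage can be caricatured as greedy nearest-neighbour pairing of rock positions within a distance threshold. The dissertation's two-stage distribution-pattern plus model matching is considerably richer, so treat this as a conceptual stand-in only:

```python
import math

def match_rocks(site_a, site_b, max_dist):
    """Pair each rock in site A (list of (x, y) positions) with the
    nearest still-unmatched rock in site B, accepting a pair only if
    the distance is below max_dist. Returns (index_a, index_b) pairs;
    unpaired rocks are the candidates for fault detection."""
    unmatched = list(range(len(site_b)))
    pairs = []
    for i, (xa, ya) in enumerate(site_a):
        best, best_d = None, max_dist
        for j in unmatched:
            xb, yb = site_b[j]
            d = math.hypot(xa - xb, ya - yb)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            unmatched.remove(best)
    return pairs
```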

  3. Progress and challenges in the automated construction of Markov state models for full protein systems

    PubMed Central

    Bowman, Gregory R.; Beauchamp, Kyle A.; Boxer, George; Pande, Vijay S.

    2009-01-01

Markov state models (MSMs) are a powerful tool for modeling both the thermodynamics and kinetics of molecular systems. In addition, they provide a rigorous means to combine information from multiple sources into a single model and to direct future simulations/experiments to minimize uncertainties in the model. However, constructing MSMs is challenging because doing so requires decomposing the extremely high dimensional and rugged free energy landscape of a molecular system into long-lived states, also called metastable states. Thus, their application has generally required significant chemical intuition and hand-tuning. To address this limitation we have developed a toolkit for automating the construction of MSMs called MSMBUILDER (available at https://simtk.org/home/msmbuilder). In this work we demonstrate the application of MSMBUILDER to the villin headpiece (HP-35 NleNle), one of the smallest and fastest folding proteins. We show that the resulting MSM captures both the thermodynamics and kinetics of the original molecular dynamics of the system. As a first step toward experimental validation of our methodology we show that our model provides accurate structure prediction and that the longest timescale events correspond to folding. PMID:19791846

  4. Progress and challenges in the automated construction of Markov state models for full protein systems

    NASA Astrophysics Data System (ADS)

    Bowman, Gregory R.; Beauchamp, Kyle A.; Boxer, George; Pande, Vijay S.

    2009-09-01

    Markov state models (MSMs) are a powerful tool for modeling both the thermodynamics and kinetics of molecular systems. In addition, they provide a rigorous means to combine information from multiple sources into a single model and to direct future simulations/experiments to minimize uncertainties in the model. However, constructing MSMs is challenging because doing so requires decomposing the extremely high dimensional and rugged free energy landscape of a molecular system into long-lived states, also called metastable states. Thus, their application has generally required significant chemical intuition and hand-tuning. To address this limitation we have developed a toolkit for automating the construction of MSMs called MSMBUILDER (available at https://simtk.org/home/msmbuilder). In this work we demonstrate the application of MSMBUILDER to the villin headpiece (HP-35 NleNle), one of the smallest and fastest folding proteins. We show that the resulting MSM captures both the thermodynamics and kinetics of the original molecular dynamics of the system. As a first step toward experimental validation of our methodology we show that our model provides accurate structure prediction and that the longest timescale events correspond to folding.
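The core construction step the abstract describes, mapping a trajectory onto metastable states and estimating transition probabilities between them, can be sketched in a few lines. This is a minimal illustration in Python, not MSMBUILDER's actual API; the discrete state trajectory is assumed to come from a prior clustering step.

```python
import numpy as np

def estimate_msm(state_traj, n_states, lag=1):
    """Estimate an MSM transition matrix from a discrete state
    trajectory by counting transitions at a fixed lag time."""
    counts = np.zeros((n_states, n_states))
    for i in range(len(state_traj) - lag):
        counts[state_traj[i], state_traj[i + lag]] += 1
    # Row-normalize counts into transition probabilities; a state that
    # was never visited keeps a self-transition from the identity.
    row_sums = counts.sum(axis=1, keepdims=True)
    T = np.divide(counts, row_sums, out=np.eye(n_states), where=row_sums > 0)
    return T

# Toy two-state trajectory: mostly metastable, with occasional hops.
traj = [0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1]
T = estimate_msm(traj, n_states=2)
```

Each row of `T` is then a probability distribution over destination states; the slowest relaxation timescales follow from its eigenvalues.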

  5. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.

    2015-11-01

We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential-fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on GitHub, including documentation and tutorial examples, and we encourage contributions to this project.
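The encapsulation of kinematic experiments in high-level class definitions can be illustrated with a toy sketch. This is not pynoddy's actual API; the event classes and the 1-D stratigraphic column are hypothetical stand-ins for the kinematic events (tilting, faulting) that such an experiment composes.

```python
import numpy as np

class KinematicEvent:
    """Base class: an event transforms a 1-D stratigraphic column."""
    def apply(self, depths):
        raise NotImplementedError

class Tilt(KinematicEvent):
    def __init__(self, gradient):
        self.gradient = gradient
    def apply(self, depths):
        # Tilt layer depths linearly along the section.
        x = np.linspace(0.0, 1.0, len(depths))
        return depths + self.gradient * x

class FaultOffset(KinematicEvent):
    def __init__(self, position, throw):
        self.position, self.throw = position, throw
    def apply(self, depths):
        # Vertical offset applied to one side of the fault plane.
        x = np.linspace(0.0, 1.0, len(depths))
        return np.where(x > self.position, depths + self.throw, depths)

def run_history(events, depths):
    """Apply kinematic events in chronological order."""
    for event in events:
        depths = event.apply(depths)
    return depths

base = np.full(50, 100.0)                        # flat layer at 100 m depth
history = [Tilt(gradient=20.0), FaultOffset(position=0.5, throw=-15.0)]
final = run_history(history, base)
```

Because event timing is just list order, experiments on event interaction or timing uncertainty reduce to re-running `run_history` with permuted or perturbed event lists.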

  6. Progress and challenges in the automated construction of Markov state models for full protein systems.

    PubMed

    Bowman, Gregory R; Beauchamp, Kyle A; Boxer, George; Pande, Vijay S

    2009-09-28

    Markov state models (MSMs) are a powerful tool for modeling both the thermodynamics and kinetics of molecular systems. In addition, they provide a rigorous means to combine information from multiple sources into a single model and to direct future simulations/experiments to minimize uncertainties in the model. However, constructing MSMs is challenging because doing so requires decomposing the extremely high dimensional and rugged free energy landscape of a molecular system into long-lived states, also called metastable states. Thus, their application has generally required significant chemical intuition and hand-tuning. To address this limitation we have developed a toolkit for automating the construction of MSMs called MSMBUILDER (available at https://simtk.org/home/msmbuilder). In this work we demonstrate the application of MSMBUILDER to the villin headpiece (HP-35 NleNle), one of the smallest and fastest folding proteins. We show that the resulting MSM captures both the thermodynamics and kinetics of the original molecular dynamics of the system. As a first step toward experimental validation of our methodology we show that our model provides accurate structure prediction and that the longest timescale events correspond to folding. PMID:19791846

  7. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.

    2016-03-01

We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on GitHub, including documentation and tutorial examples, and we encourage contributions to this project.

  8. Smart Frameworks and Self-Describing Models: Model Metadata for Automated Coupling of Hydrologic Process Components (Invited)

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2013-12-01

    Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. 
If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
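A minimal sketch of the standardized-interface idea, assuming a toy linear-reservoir component: the control functions (initialize/update/finalize) and the description and getter functions below follow the spirit of BMI, but the class, its parameters, and the CSDMS-style variable names are illustrative, not the normative BMI specification.

```python
class LinearReservoirBMI:
    """Toy hydrologic component exposing a BMI-style interface, so a
    framework can initialize it, step it through time, and exchange
    variables without knowing its internals."""

    # --- control functions ---
    def initialize(self, config=None):
        config = config or {}
        self.storage = config.get("initial_storage", 10.0)  # mm
        self.k = config.get("recession_k", 0.1)             # 1/day
        self.dt = 1.0                                       # days
        self.time = 0.0
        self.precip = 0.0

    def update(self):
        # Linear reservoir: outflow proportional to storage.
        outflow = self.k * self.storage
        self.storage += (self.precip - outflow) * self.dt
        self.time += self.dt

    def finalize(self):
        pass

    # --- model description functions ---
    def get_input_var_names(self):
        return ("atmosphere_water__precipitation_rate",)

    def get_output_var_names(self):
        return ("channel_water__runoff_rate",)

    def get_current_time(self):
        return self.time

    # --- getters / setters used for coupling ---
    def set_value(self, name, value):
        if name == "atmosphere_water__precipitation_rate":
            self.precip = value

    def get_value(self, name):
        if name == "channel_water__runoff_rate":
            return self.k * self.storage

# A caller (e.g. a framework) drives the component generically:
model = LinearReservoirBMI()
model.initialize()
model.set_value("atmosphere_water__precipitation_rate", 5.0)
model.update()
runoff = model.get_value("channel_water__runoff_rate")
```

A framework can query `get_input_var_names`/`get_output_var_names` from two such components and wire matching standard names together automatically, which is the semantic-matching idea the talk describes.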

  9. Tape-Drop Transient Model for In-Situ Automated Tape Placement of Thermoplastic Composites

    NASA Technical Reports Server (NTRS)

    Costen, Robert C.; Marchello, Joseph M.

    1998-01-01

    Composite parts of nonuniform thickness can be fabricated by in-situ automated tape placement (ATP) if the tape can be started and stopped at interior points of the part instead of always at its edges. This technique is termed start/stop-on-the-part, or, alternatively, tape-add/tape-drop. The resulting thermal transients need to be managed in order to achieve net shape and maintain uniform interlaminar weld strength and crystallinity. Starting-on-the-part has been treated previously. This paper continues the study with a thermal analysis of stopping-on-the-part. The thermal source is switched off when the trailing end of the tape enters the nip region of the laydown/consolidation head. The thermal transient is determined by a Fourier-Laplace transform solution of the two-dimensional, time-dependent thermal transport equation. This solution requires that the Peclet number Pe (the dimensionless ratio of inertial to diffusive heat transport) be independent of time and much greater than 1. Plotted isotherms show that the trailing tape-end cools more rapidly than the downstream portions of tape. This cooling can weaken the bond near the tape end; however the length of the affected region is found to be less than 2 mm. To achieve net shape, the consolidation head must continue to move after cut-off until the temperature on the weld interface decreases to the glass transition temperature. The time and elapsed distance for this condition to occur are computed for the Langley ATP robot applying PEEK/carbon fiber composite tape and for two upgrades in robot performance. The elapsed distance after cut-off ranges from about 1 mm for the present robot to about 1 cm for the second upgrade.

10. Modeling and Measurements for Mitigating Interference from Skyshine

    SciTech Connect

    Kernan, Warnick J.; Mace, Emily K.; Siciliano, Edward R.; Conlin, Kenneth E.; Flumerfelt, Eric L.; Kouzes, Richard T.; Woodring, Mitchell L.

    2009-12-21

Skyshine, the radiation scattered in the air above a high-activity gamma-ray source, can produce interference with radiation portal monitor (RPM) systems at distances of up to several hundred meters. Pacific Northwest National Laboratory (PNNL) has been engaged in a campaign of measurements, design work and modeling that explores methods of mitigating the effects of skyshine on outdoor measurements with sensitive instruments. An overview of our work on shielding of skyshine is reported in another paper at this conference. This paper concentrates on two topics: measurements and modeling with Monte Carlo transport calculations to characterize skyshine from an iridium-192 source, and testing of a prototype louver system, designed and fabricated at PNNL, as a shielding approach to limit the impact of skyshine interference on RPM systems.

  11. Modeling Nitrogen Cycle at the Surface-Subsurface Water Interface

    NASA Astrophysics Data System (ADS)

    Marzadri, A.; Tonina, D.; Bellin, A.

    2011-12-01

Anthropogenic activities, primarily food and energy production, have altered the global nitrogen cycle, increasing the availability of reactive dissolved inorganic nitrogen, Nr, chiefly ammonium NH4+ and nitrate NO3-, in many streams worldwide. Increased Nr promotes biological activity, often with negative consequences such as water body eutrophication and emission of nitrous oxide gas, N2O, an important greenhouse gas, as a by-product of denitrification. The hyporheic zone may play an important role in processing Nr and returning it to the atmosphere. Here, we present a process-based three-dimensional semi-analytical model, which couples hyporheic hydraulics with biogeochemical reactions and transport equations. Transport is solved by means of particle tracking with negligible local dispersion, and biogeochemical reactions are modeled by linearized Monod kinetics with temperature-dependent reaction rate coefficients. Comparison of measured and predicted N2O emissions from seven natural streams shows a good match. We apply our model to gravel bed rivers with alternate bar morphology to investigate the role of hyporheic hydraulics, depth of alluvium, relative availability of stream concentrations of NO3- and NH4+, and water temperature on nitrogen gradients within the sediment. Our model shows complex concentration dynamics within the hyporheic zone, which depend on the hyporheic residence time distribution and consequently on streambed morphology. Nitrogen gas emissions from the hyporheic zone increase with alluvium depth in large low-gradient streams but not in small steep streams. On the other hand, hyporheic water temperature influences nitrification/denitrification processes more in small steep streams than in large low-gradient streams, because in the latter the long residence times offset the slow reaction rates induced by low temperatures.
The overall conclusion of our analysis is that river morphology has a major impact on biogeochemical processes such as nitrification and denitrification with a direct impact on the stream nutrient removal and transport.

  12. Development of numerical models of interfaces for multiscale simulation of heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Astafurov, S. V.; Shilko, E. V.; Dimaki, A. V.; Psakhie, S. G.

    2015-10-01

The paper is devoted to the development of a "third body" model in the framework of the movable cellular automaton method to take account of interfaces in heterogeneous interfacial materials. The main feature of the developed approach is its ability to directly account for the width and rheology of interphase/grain boundaries as well as their non-equilibrium state. Verification results show that the developed model can be effectively used to study the response of interfacial materials for which "classical" approaches of accounting for interfaces implicitly or explicitly in the framework of discrete element methods are difficult to apply.

13. Model for Deformation, Stress and Contact at Interfaces: Implications for Ultrasonic Measurements

    NASA Astrophysics Data System (ADS)

Hopkins, Deborah; Reverdy, Frédéric

    2004-02-01

    An analytical model is presented for calculating deformation, contact area, stiffness, and the distribution of stress across joints and interfaces that can be described as two rough surfaces in partial contact. Analytical, modeling, and experimental results are presented that demonstrate the role that deformation of the bulk material surrounding the joint and mechanical interaction between contact points play in joint properties and the propagation of acoustic waves across the interface. Results are applied to help interpret C-scans of spot-welded joints.

  14. Delft FEWS: an open interface that connects models and data streams for operational forecasting systems

    NASA Astrophysics Data System (ADS)

    de Rooij, Erik; Werner, Micha

    2010-05-01

Many of the operational forecasting systems that are in use today are centred around a single modelling suite. Over the years these systems and the required data streams have been tailored to provide a closed-knit interaction with their underlying modelling components. However, as time progresses it becomes a challenge to integrate new technologies into these model-centric operational systems. Often the software used to develop these systems is out of date, or the original designers of these systems are no longer available. Additionally, changing the underlying models may require the complete system to be changed. This then becomes an extensive effort, not only from a software engineering point of view, but also from a training point of view, due to the significant time and resources committed to re-training the forecasting teams that interact with the system on a daily basis. One approach to reducing the effort required in integrating new models and data is through an open interface architecture, and through the use of defined interfaces and standards in data exchange. This approach is taken by the Delft-FEWS operational forecasting shell, which has now been applied in some 40 operational forecasting centres across the world. The Delft-FEWS framework provides several interfaces that allow models and data in differing formats to be flexibly integrated with the system. The most common approach to the integration of models is through the Delft-FEWS Published Interface. This is an XML based data exchange format that supports the exchange of time series data, as well as vector and gridded data formats. The Published Interface supports standardised data formats such as GRIB and the NetCDF-CF standard. A wide range of models has been integrated with the system through this approach, and these are used operationally across the forecasting centres using Delft FEWS. Models can communicate directly with the interface of Delft-FEWS, or through a SOAP service.
This gives the flexibility required for a state-of-the-art operational forecasting service. While Delft-FEWS comes with a user-friendly GIS based interface, a time series viewer and editor, and a wide range of tools for visualization, analysis, validation and data conversion, the available graphical display can be extended. New graphical components can be seamlessly integrated with the system through the SOAP service. Thanks to this open infrastructure, new models can easily be incorporated into an operational system without having to change the operational process. This allows the forecaster to focus on the science instead of having to worry about model details and data formats. Furthermore all model formats introduced to the Delft-FEWS framework will in principle become available to the Delft-FEWS community (in some cases subject to the licence conditions of the model supplier). Currently a wide range of models has been integrated and is being used operationally: Mike 11, HEC-RAS & HEC-RESSIM, HBV, MODFLOW, SOBEK and more. In this way Delft-FEWS not only provides a modelling interface but also a platform for model inter-comparison or multi-model ensembles, as well as a knowledge interface that allows forecasters throughout the world to exchange their views and ideas on operational forecasting. Keywords: FEWS; forecasting; modelling; timeseries; data; XML; NetCDF; interface; SOAP
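The idea of exchanging time series through a defined XML format can be sketched as follows. The element and attribute names below are illustrative only, not the actual Delft-FEWS Published Interface schema:

```python
import xml.etree.ElementTree as ET

def series_to_xml(location, parameter, values):
    """Serialize a time series to a simple XML exchange document
    (illustrative layout, not the real PI schema)."""
    root = ET.Element("TimeSeries")
    header = ET.SubElement(root, "header")
    ET.SubElement(header, "locationId").text = location
    ET.SubElement(header, "parameterId").text = parameter
    for t, v in values:
        ET.SubElement(root, "event", time=t, value=str(v))
    return ET.tostring(root, encoding="unicode")

def xml_to_series(doc):
    """Parse the document back into (location, parameter, values)."""
    root = ET.fromstring(doc)
    loc = root.findtext("header/locationId")
    par = root.findtext("header/parameterId")
    values = [(e.get("time"), float(e.get("value")))
              for e in root.findall("event")]
    return loc, par, values

doc = series_to_xml("gauge_01", "discharge", [("2010-05-01T00:00", 12.5),
                                              ("2010-05-01T01:00", 13.1)])
loc, par, values = xml_to_series(doc)
```

Because both sides agree only on the document layout, either the model adapter or the forecasting shell can be replaced independently, which is the decoupling benefit the abstract describes.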

  15. Model studies of Rayleigh instabilities via microdesigned interfaces

    SciTech Connect

    Glaeser, Andreas M.

    2000-10-17

    The energetic and kinetic properties of surfaces play a critical role in defining the microstructural changes that occur during sintering and high-temperature use of ceramics. Characterization of surface diffusion in ceramics is particularly difficult, and significant variations in reported values of surface diffusivities arise even in well-studied systems. Effects of impurities, surface energy anisotropy, and the onset of surface attachment limited kinetics (SALK) are believed to contribute to this variability. An overview of the use of Rayleigh instabilities as a means of characterizing surface diffusivities is presented. The development of models of morphological evolution that account for effects of surface energy anisotropy is reviewed, and the potential interplay between impurities and surface energy anisotropy is addressed. The status of experimental studies of Rayleigh instabilities in sapphire utilizing lithographically introduced pore channels of controlled geometry and crystallography is summarized. Results of model studies indicate that impurities can significantly influence both the spatial and temporal characteristics of Rayleigh instabilities; this is attributed at least in part to impurity effects on the surface energy anisotropy. Related model experiments indicate that the onset of SALK may also contribute significantly to apparent variations in surface diffusion coefficients.

  16. Accident prediction model for railway-highway interfaces.

    PubMed

    Oh, Jutaek; Washington, Simon P; Nam, Doohee

    2006-03-01

Considerable past research has explored relationships between vehicle accidents and geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States and the safety effects of crossing elements obtained using Korean data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad crossing related vehicle crashes. PMID:16297846

  17. A model for investigating the behaviour of non-spherical particles at interfaces.

    PubMed

    Morris, G; Neethling, S J; Cilliers, J J

    2011-02-01

This paper introduces a simple method for modelling non-spherical particles with a fixed contact angle at an interface whilst also providing a method to fix the particle's orientation. It is shown how a wide variety of particle shapes (spherical, ellipsoidal, disc) can be created from a simple initial geometry containing only six vertices. The shapes are made from one continuous surface with edges and corners treated as smooth curves, not discontinuities. As such, particles approaching cylindrical and orthorhombic shapes can be simulated but the contact angle crossing the edges will be fixed. Non-spherical particles, when attached to an interface, can cause large distortions in the surface which affect the forces acting on the particle. The model presented is capable of resolving this distortion of the surface around the particle at the interface as well as allowing for the particle's orientation to be controlled. It is shown that, when considering orthorhombic particles with rounded edges, the flatter the particle the more energetically stable it is to sit flat at the interface. However, as the particle becomes more cube-like, the effects of contact angle have a greater effect on the energetically stable orientations. Results for cylindrical particles with rounded edges are also discussed. The model presented allows the user to define the shape, dimensions, contact angle and orientation of the particle at the interface, allowing more in-depth investigation of the complex phenomenon of 3D film distortion around an attached particle and the forces that arise due to it. PMID:21067767

  18. A graphical user interface for numerical modeling of acclimation responses of vegetation to climate change

    NASA Astrophysics Data System (ADS)

    Le, Phong V. V.; Kumar, Praveen; Drewry, Darren T.; Quijano, Juan C.

    2012-12-01

    Ecophysiological models that vertically resolve vegetation canopy states are becoming a powerful tool for studying the exchange of mass, energy, and momentum between the land surface and the atmosphere. A mechanistic multilayer canopy-soil-root system model (MLCan) developed by Drewry et al. (2010a) has been used to capture the emergent vegetation responses to elevated atmospheric CO2 for both C3 and C4 plants under various climate conditions. However, processing input data and setting up such a model can be time-consuming and error-prone. In this paper, a graphical user interface that has been developed for MLCan is presented. The design of this interface aims to provide visualization capabilities and interactive support for processing input meteorological forcing data and vegetation parameter values to facilitate the use of this model. In addition, the interface also provides graphical tools for analyzing the forcing data and simulated numerical results. The model and its interface are both written in the MATLAB programming language. Finally, an application of this model package for capturing the ecohydrological responses of three bioenergy crops (maize, miscanthus, and switchgrass) to local environmental drivers at two different sites in the Midwestern United States is presented.

  19. Modeling the Assembly of Polymer-Grafted Nanoparticles at Oil-Water Interfaces.

    PubMed

    Yong, Xin

    2015-10-27

    Using dissipative particle dynamics (DPD), I model the interfacial adsorption and self-assembly of polymer-grafted nanoparticles at a planar oil-water interface. The amphiphilic core-shell nanoparticles irreversibly adsorb to the interface and create a monolayer covering the interface. The polymer chains of the adsorbed nanoparticles are significantly deformed by surface tension to conform to the interface. I quantitatively characterize the properties of the particle-laden interface and the structure of the monolayer in detail at different surface coverages. I observe that the monolayer of particles grafted with long polymer chains undergoes an intriguing liquid-crystalline-amorphous phase transition in which the relationship between the monolayer structure and the surface tension/pressure of the interface is elucidated. Moreover, my results indicate that the amorphous state at high surface coverage is induced by the anisotropic distribution of the randomly grafted chains on each particle core, which leads to noncircular in-plane morphology formed under excluded volume effects. These studies provide a fundamental understanding of the interfacial behavior of polymer-grafted nanoparticles for achieving complete control of the adsorption and subsequent self-assembly. PMID:26439456

  20. Context based mixture model for cell phase identification in automated fluorescence microscopy

    PubMed Central

    Wang, Meng; Zhou, Xiaobo; King, Randy W; Wong, Stephen TC

    2007-01-01

Background Automated identification of cell cycle phases of individual live cells in a large population captured via automated fluorescence microscopy technique is important for cancer drug discovery and cell cycle studies. Time-lapse fluorescence microscopy images provide an important method to study the cell cycle process under different conditions of perturbation. Existing methods are limited in dealing with such time-lapse data sets while manual analysis is not feasible. This paper presents statistical data analysis and statistical pattern recognition to perform this task. Results The data is generated from HeLa H2B GFP cells imaged during a 2-day period with images acquired 15 minutes apart using an automated time-lapse fluorescence microscopy. The patterns are described with four kinds of features, including twelve general features, Haralick texture features, Zernike moment features, and wavelet features. To generate a new set of features with more discriminative power, the commonly used feature reduction techniques are used, which include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Maximum Margin Criterion (MMC), Stepwise Discriminant Analysis based Feature Selection (SDAFS), and Genetic Algorithm based Feature Selection (GAFS). Then, we propose a Context Based Mixture Model (CBMM) for dealing with the time-series cell sequence information and compare it to other traditional classifiers: Support Vector Machine (SVM), Neural Network (NN), and K-Nearest Neighbor (KNN). Being a standard practice in machine learning, we systematically compare the performance of a number of common feature reduction techniques and classifiers to select an optimal combination of a feature reduction technique and a classifier. A cellular database containing 100 manually labelled subsequences is built for evaluating the performance of the classifiers. The generalization error is estimated using the cross validation technique.
The experimental results show that CBMM outperforms all other classifiers in identifying prophase and has the best overall performance. Conclusion The application of feature reduction techniques can improve the prediction accuracy significantly. CBMM can effectively utilize the contextual information and has the best overall performance when combined with any of the previously mentioned feature reduction techniques. PMID:17263881
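One of the feature reduction steps the study compares, Principal Component Analysis, can be sketched in a few lines of plain NumPy. The synthetic feature matrix below is a stand-in for the cell-image features:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto their top principal components
    (a minimal PCA via singular value decomposition)."""
    Xc = X - X.mean(axis=0)              # center each feature
    # Rows of Vt are the principal axes, ordered by singular value.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
# 100 "cells" x 12 correlated features (rank-3 synthetic stand-in).
X = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 12))
Z = pca_reduce(X, n_components=3)
```

The reduced matrix `Z` then feeds any of the downstream classifiers; since the synthetic data has rank 3, three components capture essentially all of its variance.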

  1. Modeling and Analysis Generic Interface for eXternal numerical codes (MAGIX)

    NASA Astrophysics Data System (ADS)

    Möller, T.; Bernst, I.; Panoglou, D.; Muders, D.; Ossenkopf, V.; Röllig, M.; Schilke, P.

    2013-01-01

    The Modeling and Analysis Generic Interface for eXternal numerical codes (MAGIX) is a model optimizer developed under the framework of the coherent set of astrophysical tools for spectroscopy (CATS) project. The MAGIX package provides a framework of an easy interface between existing codes and an iterating engine that attempts to minimize deviations of the model results from available observational data, constraining the values of the model parameters and providing corresponding error estimates. Many models (and, in principle, not only astrophysical models) can be plugged into MAGIX to explore their parameter space and find the set of parameter values that best fits observational/experimental data. MAGIX complies with the data structures and reduction tools of Atacama Large Millimeter Array (ALMA), but can be used with other astronomical and with non-astronomical data. http://www.astro.uni-koeln.de/projects/schilke/MAGIX
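The core loop of such a fitting engine, exploring a model's parameter space and keeping the parameter set that minimizes deviation from the data, can be sketched with a simple random search. This illustrates the idea only, not MAGIX's actual algorithms; the line model and bounds are hypothetical:

```python
import numpy as np

def fit_model(model, bounds, x, y, n_iter=2000, seed=0):
    """Minimal random-search optimizer: sample parameters within
    bounds and keep the set minimizing squared deviation from data."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    best_p, best_err = None, np.inf
    for _ in range(n_iter):
        p = rng.uniform(lo, hi)
        err = np.sum((model(x, *p) - y) ** 2)
        if err < best_err:
            best_p, best_err = p, err
    return best_p, best_err

# Hypothetical "external code": a straight line with two parameters.
line = lambda x, a, b: a * x + b
x = np.linspace(0, 1, 20)
y = line(x, 2.0, -1.0)
params, err = fit_model(line, bounds=[(0, 5), (-3, 3)], x=x, y=y)
```

In this scheme any callable model can be plugged in unchanged, which mirrors the generic-interface idea: the optimizer needs only the model's parameter bounds and its outputs, not its internals.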

  2. Automated method for modeling seven-helix transmembrane receptors from experimental data.

    PubMed Central

    Herzyk, P; Hubbard, R E

    1995-01-01

    A rule-based automated method is presented for modeling the structures of the seven transmembrane helices of G-protein-coupled receptors. The structures are generated by using a simulated annealing Monte Carlo procedure that positions and orients rigid helices to satisfy structural restraints. The restraints are derived from analysis of experimental information from biophysical studies on native and mutant proteins, from analysis of the sequences of related proteins, and from theoretical considerations of protein structure. Calculations are presented for two systems. The method was validated through calculations using appropriate experimental information for bacteriorhodopsin, which produced a model structure with a root mean square (rms) deviation of 1.87 Å from the structure determined by electron microscopy. Calculations are also presented using experimental and theoretical information available for bovine rhodopsin to assign the helices to a projection density map and to produce a model of bovine rhodopsin that can be used as a template for modeling other G-protein-coupled receptors. PMID:8599649
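
The simulated annealing Monte Carlo idea can be sketched in deliberately reduced form: hypothetical 1-D "helix positions" are annealed to satisfy distance restraints, standing in for the rigid-body moves and restraint terms of the actual method.

```python
import math
import random

random.seed(1)

# Illustrative restraint set: target separations between three helix
# axis positions on a line (the paper positions rigid helices in 3-D).
restraints = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)]

def penalty(pos):
    """Sum of squared restraint violations."""
    return sum((abs(pos[i] - pos[j]) - d) ** 2 for i, j, d in restraints)

pos = [random.uniform(-3, 3) for _ in range(3)]
temp = 1.0
while temp > 1e-4:
    i = random.randrange(3)
    trial = pos[:]
    trial[i] += random.gauss(0, 0.3)       # random move of one "helix"
    delta = penalty(trial) - penalty(pos)
    # Metropolis acceptance: always accept improvements, sometimes accept
    # worse states while the temperature is high.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        pos = trial
    temp *= 0.999                          # geometric cooling schedule

print("final restraint penalty:", round(penalty(pos), 4))
```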

  3. Analytical solutions in a hydraulic model of seepage with sharp interfaces

    NASA Astrophysics Data System (ADS)

    Kacimov, A. R.

    2002-02-01

    Flows in horizontal homogeneous porous layers are studied in terms of a hydraulic model with an abrupt interface between two incompressible Darcian fluids of contrasting density driven by an imposed gradient along the layer. The flow of one fluid moving above a resting finger-type pool of another is studied. A straight interface between two moving fluids is shown to slump, rotate and propagate deeper under periodic drive conditions than in a constant-rate regime. Superpropagation of the interface is related to Philip's superelevation in tidal dynamics and acceleration of the front in vertical infiltration in terms of the Green-Ampt model with an oscillating ponding water level. All solutions studied are based on reduction of the governing PDE to nonlinear ODEs and further analytical and numerical integration by computer algebra routines.

  4. Time integration for diffuse interface models for two-phase flow

    SciTech Connect

    Aland, Sebastian

    2014-04-01

    We propose a variant of the θ-scheme for diffuse interface models for two-phase flow, together with three new linearization techniques for the surface tension. These involve either additional stabilizing force terms or a fully implicit coupling of the Navier–Stokes and Cahn–Hilliard equations. In the common case that the equations for interface and flow are coupled explicitly, we find a time step restriction which is very different from that of other two-phase flow models and, in particular, is independent of the grid size. We also show that the proposed stabilization techniques can lift this time step restriction. Even more pronounced is the performance of the proposed fully implicit scheme, which is stable for arbitrarily large time steps. We demonstrate in a Taylor-flow application that this superior coupling between flow and interface equation can decrease the computation time by several orders of magnitude.
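
The stability advantage of implicit coupling can be illustrated on the scalar model problem u' = -λu, a toy analogue rather than the paper's Navier–Stokes/Cahn–Hilliard system: the θ-scheme with θ = 1 stays stable at time steps where the explicit variant (θ = 0) blows up.

```python
# θ-scheme for u' = -λu:
#   (u_new - u) / dt = -λ * (θ*u_new + (1-θ)*u)
def theta_step(u, lam, dt, theta):
    return u * (1 - dt * lam * (1 - theta)) / (1 + dt * lam * theta)

lam, dt = 100.0, 0.1      # dt far above the explicit stability limit 2/λ
u_exp = u_imp = 1.0
for _ in range(50):
    u_exp = theta_step(u_exp, lam, dt, theta=0.0)  # explicit: diverges
    u_imp = theta_step(u_imp, lam, dt, theta=1.0)  # implicit: decays

print(f"explicit |u| = {abs(u_exp):.3e}, implicit |u| = {abs(u_imp):.3e}")
```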

  5. Laboratory measurements and theoretical modeling of seismoelectric interface response and coseismic wave fields

    SciTech Connect

    Schakel, M. D.; Slob, E. C.; Heller, H. K. J.; Smeulders, D. M. J.

    2011-04-01

    A full-waveform seismoelectric numerical model incorporating the directivity pattern of a pressure source is developed. This model provides predictions of coseismic electric fields and the electromagnetic waves that originate from a fluid/porous-medium interface. An experimental setup in which coseismic electric fields and interface responses are measured is constructed. The seismoelectric origin of the signals is confirmed. The numerically predicted polarity reversal of the interfacial signal and seismoelectric effects due to multiple scattering are detected in the measurements. Both the simulated coseismic electric fields and the electromagnetic waves originating from interfaces agree with the measurements in terms of travel times, waveform, polarity, amplitude, and spatial amplitude decay, demonstrating that seismoelectric effects are comprehensively described by theory.

  6. The Slow Bow Shock Model of the Heliospheric Interface

    NASA Astrophysics Data System (ADS)

    Zieger, B.; Opher, M.

    2013-05-01

    Recent IBEX observations indicate that the pristine interstellar wind is most likely subfast and sub-Alfvénic, which means that no regular fast magnetosonic bow shock can form upstream of the heliosphere. Nevertheless, a slow magnetosonic bow shock can still exist in the local interstellar medium, provided that the angle between the interstellar magnetic field and the interstellar plasma flow velocity (alpha_Bv) is sufficiently small. The latter is supported by a number of kinetic-gasdynamic and multi-fluid MHD simulations that used the Voyager termination shock crossings to constrain the magnitude (3 to 4 microG) and direction (alpha_Bv = 15 to 30 degrees) of the interstellar magnetic field. We propose a quasi-parallel slow bow shock model as a likely alternative to the currently prevailing no-bow-shock model. The theoretically expected slow bow shock is self-consistently reproduced in our multi-fluid MHD simulations. Since slow-mode information can propagate mainly along the magnetic field, the slow bow shock is significantly shifted from the nose of the heliosphere toward the flank in the direction of the interstellar magnetic field. Such a displaced slow bow shock results in a dense and highly asymmetric hydrogen wall that is expected to produce detectable extra Lyman alpha absorption not only around the nose direction but also in some preferential tailward directions. This could explain, among other things, the puzzling blue shift observed in the Lyman alpha absorption profile of Sirius. The slow bow shock model could also readily explain the hotter and slower secondary interstellar hydrogen population observed by IBEX, which is thought to originate from the outer heliosheath. Thus both Lyman alpha and IBEX observations seem to be more consistent with a slow bow shock than with a shock-free fast bow wave. 
Voyager 1 is most likely heading towards the slow bow shock, while Voyager 2 is not, which means that the two spacecraft are expected to encounter fundamentally different interstellar plasma populations beyond the heliopause.

  7. Semi-automated calibration method for modelling of mountain permafrost evolution in Switzerland

    NASA Astrophysics Data System (ADS)

    Marmy, A.; Rajczak, J.; Delaloye, R.; Hilbich, C.; Hoelzle, M.; Kotlarski, S.; Lambiel, C.; Noetzli, J.; Phillips, M.; Salzmann, N.; Staub, B.; Hauck, C.

    2015-09-01

    Permafrost is a widespread phenomenon in the European Alps. Many important topics, such as the future evolution of permafrost under climate change and the detection of permafrost at potential natural-hazard sites, are of major concern to our society. Numerical permafrost models are the only tools that facilitate projection of the future evolution of permafrost. Due to the complexity of the processes involved and the heterogeneity of Alpine terrain, models must be carefully calibrated, and results should be compared with observations at the site (borehole) scale. However, a large number of local point data are necessary to obtain a broad overview of the thermal evolution of mountain permafrost over a larger area, such as the Swiss Alps, and site-specific model calibration of each point would be time-consuming. To address this issue, this paper presents a semi-automated calibration method using the Generalized Likelihood Uncertainty Estimation (GLUE) as implemented in a 1-D soil model (CoupModel) and applies it to six permafrost sites in the Swiss Alps prior to long-term permafrost evolution simulations. We show that this automated calibration method is able to accurately reproduce the main thermal characteristics, with some limitations at sites with unique conditions such as 3-D air or water circulation, which have to be calibrated manually. The calibration obtained was used for RCM-based long-term simulations under the A1B climate scenario, specifically downscaled at each borehole site. The projections show general permafrost degradation, with thawing at 10 m depth, partially even reaching 20 m, by the end of the century, but with different timing among the sites. Degradation is more rapid at bedrock sites, whereas ice-rich sites with a blocky surface cover show a reduced sensitivity to climate change. The snow cover duration is expected to be reduced drastically (by 20 to 37%), impacting the ground thermal regime. 
However, the uncertainty range of permafrost projections is large, resulting mainly from the broad range of input climate data from the different GCM-RCM chains of the ENSEMBLES data set.
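
A GLUE-style calibration loop of the kind described can be sketched as follows: sample parameter sets from prior ranges, score each against observations, and keep the behavioural ones. The toy "soil model", parameter ranges and acceptance threshold are invented for illustration and are unrelated to CoupModel.

```python
import numpy as np

rng = np.random.default_rng(42)

obs = 2.0  # e.g. an observed mean ground temperature at a borehole

def soil_model(conductivity, snow_factor):
    """Toy surrogate for a 1-D soil model run."""
    return 3.0 * conductivity - snow_factor

# Monte Carlo sampling from uniform priors: conductivity in [0.5, 2.0],
# snow factor in [0.0, 3.0].
samples = rng.uniform([0.5, 0.0], [2.0, 3.0], size=(5000, 2))
errors = np.abs(soil_model(samples[:, 0], samples[:, 1]) - obs)
behavioural = samples[errors < 0.1]        # GLUE acceptance threshold

lo, hi = np.percentile(behavioural, [5, 95], axis=0)
print("behavioural parameter sets:", len(behavioural))
print("90% band, conductivity:", (lo[0], hi[0]))
print("90% band, snow factor: ", (lo[1], hi[1]))
```

The retained behavioural ensemble, rather than a single best fit, is then carried into the projection runs, which is how GLUE expresses calibration uncertainty.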

  8. Diffuse interface models of locally inextensible vesicles in a viscous fluid

    NASA Astrophysics Data System (ADS)

    Aland, Sebastian; Egerer, Sabine; Lowengrub, John; Voigt, Axel

    2014-11-01

    We present a new diffuse interface model for the dynamics of inextensible vesicles in a viscous fluid with inertial forces. A new feature of this work is the implementation of the local inextensibility condition in the diffuse interface context. Local inextensibility is enforced by using a local Lagrange multiplier, which provides the necessary tension force at the interface. We introduce a new equation for the local Lagrange multiplier whose solution essentially provides a harmonic extension of the multiplier off the interface while maintaining the local inextensibility constraint near the interface. We also develop a local relaxation scheme that dynamically corrects local stretching/compression errors thereby preventing their accumulation. Asymptotic analysis is presented that shows that our new system converges to a relaxed version of the inextensible sharp interface model. This is also verified numerically. To solve the equations, we use an adaptive finite element method with implicit coupling between the Navier-Stokes and the diffuse interface inextensibility equations. Numerical simulations of a single vesicle in a shear flow at different Reynolds numbers demonstrate that errors in enforcing local inextensibility may accumulate and lead to large differences in the dynamics in the tumbling regime and smaller differences in the inclination angle of vesicles in the tank-treading regime. The local relaxation algorithm is shown to prevent the accumulation of stretching and compression errors very effectively. Simulations of two vesicles in an extensional flow show that local inextensibility plays an important role when vesicles are in close proximity by inhibiting fluid drainage in the near contact region.

  9. Diffuse interface models of locally inextensible vesicles in a viscous fluid

    PubMed Central

    Aland, Sebastian; Egerer, Sabine; Lowengrub, John; Voigt, Axel

    2014-01-01

    We present a new diffuse interface model for the dynamics of inextensible vesicles in a viscous fluid with inertial forces. A new feature of this work is the implementation of the local inextensibility condition in the diffuse interface context. Local inextensibility is enforced by using a local Lagrange multiplier, which provides the necessary tension force at the interface. We introduce a new equation for the local Lagrange multiplier whose solution essentially provides a harmonic extension of the multiplier off the interface while maintaining the local inextensibility constraint near the interface. We also develop a local relaxation scheme that dynamically corrects local stretching/compression errors thereby preventing their accumulation. Asymptotic analysis is presented that shows that our new system converges to a relaxed version of the inextensible sharp interface model. This is also verified numerically. To solve the equations, we use an adaptive finite element method with implicit coupling between the Navier-Stokes and the diffuse interface inextensibility equations. Numerical simulations of a single vesicle in a shear flow at different Reynolds numbers demonstrate that errors in enforcing local inextensibility may accumulate and lead to large differences in the dynamics in the tumbling regime and smaller differences in the inclination angle of vesicles in the tank-treading regime. The local relaxation algorithm is shown to prevent the accumulation of stretching and compression errors very effectively. Simulations of two vesicles in an extensional flow show that local inextensibility plays an important role when vesicles are in close proximity by inhibiting fluid drainage in the near contact region. PMID:25246712

  10. Automated estimation of fetal cardiac timing events from Doppler ultrasound signal using hybrid models.

    PubMed

    Marzbanrad, Faezeh; Kimura, Yoshitaka; Funamoto, Kiyoe; Sugibayashi, Rika; Endo, Miyuki; Ito, Takuya; Palaniswami, Marimuthu; Khandoker, Ahsan H

    2014-07-01

    In this paper, a new noninvasive method is proposed for automated estimation of fetal cardiac intervals from Doppler Ultrasound (DUS) signal. This method is based on a novel combination of empirical mode decomposition (EMD) and hybrid support vector machines-hidden Markov models (SVM/HMM). EMD was used for feature extraction by decomposing the DUS signal into different components (IMFs), one of which is linked to the cardiac valve motions, i.e. opening (o) and closing (c) of the Aortic (A) and Mitral (M) valves. The noninvasive fetal electrocardiogram (fECG) was used as a reference for the segmentation of the IMF into cardiac cycles. The hybrid SVM/HMM was then applied to identify the cardiac events, based on the amplitude and timing of the IMF peaks as well as the sequence of the events. The estimated timings were verified using pulsed doppler images. Results show that this automated method can continuously evaluate beat-to-beat valve motion timings and identify more than 91% of total events which is higher than previous methods. Moreover, the changes of the cardiac intervals were analyzed for three fetal age groups: 16-29, 30-35, and 36-41 weeks. The time intervals from Q-wave of fECG to Ac (Systolic Time Interval, STI), Ac to Mo (Isovolumic Relaxation Time, IRT), Q-wave to Ao (Preejection Period, PEP) and Ao to Ac (Ventricular Ejection Time, VET) were found to change significantly ( ) across these age groups. In particular, STI, IRT, and PEP of the fetuses with 36-41 week were significantly ( ) different from other age groups. These findings can be used as sensitive markers for evaluating the fetal cardiac performance. PMID:24144677

  11. Conserved dynamics and interface roughening in spontaneous imbibition: A phase field model

    NASA Astrophysics Data System (ADS)

    Dubé, M.; Rost, M.; Elder, K. R.; Alava, M.; Majaniemi, S.; Ala-Nissila, T.

    2000-06-01

    The propagation and roughening of a fluid-gas interface through a disordered medium in the case of capillary-driven spontaneous imbibition is considered. The system is described by a conserved (model B) phase-field model, with the structure of the disordered medium appearing as a quenched random field α(x). The flow of liquid into the medium is obtained by imposing a non-equilibrium boundary condition on the chemical potential, which reproduces Washburn's equation H ~ t^{1/2} for the slowing-down motion of the average interface position H. The interface is found to be superrough, with global roughness exponent χ ≈ 1.25, indicating anomalous scaling. The spatial extent of the roughness is determined by a length scale ξ_× ~ H^{1/2} arising from the conservation law. The interface advances by avalanche motion, which causes temporal multiscaling and qualitatively reproduces the experimental results of Horváth and Stanley [Phys. Rev. E 52, 5166 (1995)] on the temporal scaling of the interface.
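
The Washburn slowing-down quoted above, H ~ t^{1/2}, follows from the interface equation dH/dt = C/H, whose exact solution is H(t) = sqrt(2Ct + H0²). A quick numerical check with forward Euler (constants chosen arbitrarily for illustration):

```python
import math

C, H, dt = 1.0, 0.1, 1e-4   # capillary constant, initial height, step
t = 0.0
while t < 10.0:
    H += dt * C / H          # dH/dt = C/H: advance slows as H grows
    t += dt

exact = math.sqrt(2 * C * 10.0 + 0.1 ** 2)
print(f"numeric H = {H:.4f}, exact H = {exact:.4f}")
```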

  12. Interface modeling to predict well casing damage for big hill strategic petroleum reserve.

    SciTech Connect

    Ehgartner, Brian L.; Park, Byoung Yoon

    2012-02-01

    Oil leaks were found in well casings of Caverns 105 and 109 at the Big Hill Strategic Petroleum Reserve site. According to the field observations, two instances of casing damage occurred at the depth of the interface between the caprock and the top of salt. This damage could be caused by interface movement induced by cavern volume closure due to salt creep. A three-dimensional finite element model, which allows each cavern to be configured individually, was constructed to investigate shear and vertical displacements across each interface. The model contains interfaces between each lithology and a shear zone to examine the interface behavior in a realistic manner. The analysis results indicate that the casings of Caverns 105 and 109 failed, respectively, by shear stress that exceeded shear strength due to the horizontal movement of the top of salt relative to the caprock, and by tensile stress due to the downward movement of the top of salt away from the caprock. The casings of Caverns 101, 110, 111 and 114, located at the far ends of the field, are predicted to fail by shear stress in the near future. The casings of the innermost Caverns 107 and 108 are predicted to fail by tensile stress in the near future.

  13. Automated Geospatial Watershed Assessment

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment (AGWA) tool is a Geographic Information Systems (GIS) interface jointly developed by the U.S. Environmental Protection Agency, the U.S. Department of Agriculture (USDA) Agricultural Research Service, and the University of Arizona to a...

  14. Microcontroller for automation application

    NASA Technical Reports Server (NTRS)

    Cooper, H. W.

    1975-01-01

    The description of a microcontroller currently being developed for automation application was given. It is basically an 8-bit microcomputer with a 40K byte random access memory/read only memory, and can control a maximum of 12 devices through standard 15-line interface ports.

  15. Modelling and interpreting biologically crusted dryland soil sub-surface structure using automated micropenetrometry

    NASA Astrophysics Data System (ADS)

    Hoon, Stephen R.; Felde, Vincent J. M. N. L.; Drahorad, Sylvie L.; Felix-Henningsen, Peter

    2015-04-01

    Soil penetrometers are used routinely to determine the shear strength of soils and deformable sediments, both at the surface and throughout a depth profile, in disciplines as diverse as soil science, agriculture, geoengineering and alpine avalanche safety (e.g. Grunwald et al. 2001, Van Herwijnen et al. 2009). Generically, penetrometers comprise two principal components: an advancing probe, and a transducer to measure the pressure or force required to cause the probe to penetrate or advance through the soil or sediment. The force transducer employed to determine the pressure can range, for example, from a simple mechanical spring gauge to an automatically data-logged electronic transducer. Automated computer control of the penetrometer step size and probe advance rate enables precise measurements to be made down to a resolution of tens of microns (e.g. the automated electronic micropenetrometer (EMP) described by Drahorad 2012). Here we discuss the determination, modelling and interpretation of biologically crusted dryland soil sub-surface structures using automated micropenetrometry. We outline a model enabling the interpretation of depth-dependent penetration resistance (PR) profiles and their spatial differentials using the model equations σ(z) = σ_c0 + Σ_n [σ_n(z) + a_n z + b_n z²] and dσ/dz = Σ_n [dσ_n(z)/dz + Fr_n(z)], where σ_c0 and σ_n are the plastic deformation stresses for the surface and the nth soil structure (e.g. soil crust, layer, horizon or void) respectively, and Fr_n(z) dz is the frictional work done per unit volume by sliding the penetrometer rod an incremental distance dz through the nth layer. Both σ_n(z) and Fr_n(z) are related to soil structure. They determine the form of σ(z) measured by the EMP transducer. The model enables pores (regions of zero deformation stress) to be distinguished from changes in layer structure or probe friction. 
We have applied this method both to artificial calibration soils in the laboratory and to in-situ field studies. In particular, we discuss the nature and detection of surface and buried (fossil) subsurface Biological Soil Crusts (BSCs), voids, macroscopic particles and compositional layers. The strength of surface BSCs and the occurrence of buried BSCs and layers has been detected at sub-millimetre scales to depths of 40 mm. Our measurements and field observations of PR show the importance of morphological layering to overall BSC functions (Felde et al. 2015). We also discuss the effect of penetrometer shaft and probe-tip profiles upon the theoretical and experimental curves, and EMP resolution and reproducibility, demonstrating how the model enables voids, buried biological soil crusts, exotic particles, soil horizons and layers to be distinguished from one another. This represents a potentially important contribution to advancing understanding of the relationship between BSCs and dryland soil structure. References: Drahorad S.L., Felix-Henningsen P. (2012) An electronic micropenetrometer (EMP) for field measurements of biological soil crust stability, J. Plant Nutr. Soil Sci., 175, 519-520; Felde V.J.M.N.L., Drahorad S.L., Felix-Henningsen P., Hoon S.R. (2015) Ongoing oversanding induces biological soil crust layering - a new approach for BSC structure elucidation determined from high resolution penetration resistance data (submitted); Grunwald S., Rooney D.J., McSweeney K., Lowery B. (2001) Development of pedotransfer functions for a profile cone penetrometer, Geoderma, 100, 25-47; Van Herwijnen A., Bellaire S., Schweizer J. (2009) Comparison of micro-structural snowpack parameters derived from penetration resistance measurements with fracture character observations from compression tests, Cold Regions Sci. & Technol., 59, 193-201
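
A minimal sketch of the layered penetration-resistance model described above, with invented layer parameters: each structure contributes a quadratic stress term over its depth interval, and a pore shows up as an interval where only the constant surface term survives.

```python
import numpy as np

z = np.linspace(0, 40, 401)  # depth in mm, 0.1 mm steps

def layer(z, z0, z1, s, a, b):
    """Quadratic stress contribution s + a(z-z0) + b(z-z0)^2 on [z0, z1)."""
    mask = (z >= z0) & (z < z1)
    return np.where(mask, s + a * (z - z0) + b * (z - z0) ** 2, 0.0)

# sigma(z) = sigma_c0 + sum of layer terms; all numbers are illustrative.
sigma = (0.05                                  # surface term sigma_c0
         + layer(z, 0, 5, 0.4, 0.02, 0.0)       # surface crust
         + layer(z, 5, 12, 0.1, 0.01, 0.001)    # subsurface horizon
         + layer(z, 15, 41, 0.3, 0.005, 0.0))   # deeper layer; 12-15 mm gap

# A pore is a region of zero deformation stress: only sigma_c0 remains.
pores = z[np.isclose(sigma, 0.05)]
print(f"pore detected between {pores.min():.1f} and {pores.max():.1f} mm")
```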

  16. A Model of Subdiffusive Interface Dynamics with a Local Conservation of Minimum Height

    NASA Astrophysics Data System (ADS)

    Koduvely, Hari M.; Dhar, Deepak

    1998-01-01

    We define a new model of interface roughening in one dimension which has the property that the minimum of interface height is conserved locally during the evolution. This model corresponds to the limit q → ∞ of the q-color dimer deposition-evaporation model introduced by us earlier [Hari Menon and Dhar, J. Phys. A: Math. Gen. 28:6517 (1995)]. We present numerical evidence from Monte Carlo simulations and the exact diagonalization of the evolution operator on finite rings that growth of correlations in this model is subdiffusive with dynamical exponent z ≈ 2.5. For periodic boundary conditions, the variation of the gap in the relaxation spectrum with system size appears to involve a logarithmic correction term. Some generalizations of the model are briefly discussed.

  17. Modelling electrified interfaces in quantum chemistry: constant charge vs. constant potential.

    PubMed

    Benedikt, Udo; Schneider, Wolfgang B; Auer, Alexander A

    2013-02-28

    The proper description of electrified metal/solution interfaces, as they occur in electrochemical systems, is a key component for simulating the unique features of electrocatalytic reactions using electronic structure calculations. While in standard solid state (plane wave, periodic boundary conditions) density functional theory (DFT) calculations several models for describing electrochemical environments exist, for cluster models in a quantum chemistry approach (atomic orbital basis, finite system) this is not straightforward. In this work, two different approaches for the theoretical description of electrified interfaces of nanoparticles, the constant charge and the constant potential model, are discussed. Different schemes for describing electrochemical reactions including solvation models are tested for a consistent description of the electrochemical potential and the local chemical behavior for finite structures. The different schemes and models are investigated for the oxygen reduction reaction (ORR) on a hemispherical cuboctahedral platinum nanoparticle. PMID:23329171

  18. Partially Automated Method for Localizing Standardized Acupuncture Points on the Heads of Digital Human Models

    PubMed Central

    Kim, Jungdae; Kang, Dae-In

    2015-01-01

    Modern imaging tools for the precise positioning of acupuncture points on the human body, where this traditional therapeutic method is applied, are essential. For that reason, we suggest a more systematic positioning method that uses X-ray computed tomographic images to precisely position acupoints. Digital Korean human data were obtained to construct three-dimensional head-skin and skull surface models of six individuals. Depending on the method used to pinpoint the positions of the acupoints, every acupoint was classified into one of three types: anatomical points, proportional points, and morphological points. A computational algorithm and procedure were developed for partial automation of the positioning. The anatomical points were selected by using the structural characteristics of the skin surface and skull. The proportional points were calculated from the positions of the anatomical points. The morphological points were also calculated by using some control points related to the connections between the source and the target models. All the acupoints on the heads of the six individuals were displayed on three-dimensional computer graphical image models. This method may be helpful for developing more accurate experimental designs and for providing more quantitative volumetric methods for performing analyses in acupuncture-related research. PMID:26101534
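
The proportional-point calculation described above amounts to linear interpolation between anatomical landmarks. A minimal sketch, with invented landmark coordinates and an illustrative 1/3 proportion:

```python
import numpy as np

# Hypothetical landmark coordinates (mm) on a head-skin surface model.
landmark_a = np.array([0.0, 0.0, 0.0])    # first anatomical point
landmark_b = np.array([0.0, 12.0, 3.0])   # second anatomical point

def proportional_point(a, b, fraction):
    """Point at the given fraction along the landmark-to-landmark segment."""
    return a + fraction * (b - a)

p = proportional_point(landmark_a, landmark_b, 1.0 / 3.0)
print("proportional acupoint position:", p.round(2))
```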

  19. VoICE: A semi-automated pipeline for standardizing vocal analysis across models

    PubMed Central

    Burkett, Zachary D.; Day, Nancy F.; Peñagarikano, Olga; Geschwind, Daniel H.; White, Stephanie A.

    2015-01-01

    The study of vocal communication in animal models provides key insight to the neurogenetic basis for speech and communication disorders. Current methods for vocal analysis suffer from a lack of standardization, creating ambiguity in cross-laboratory and cross-species comparisons. Here, we present VoICE (Vocal Inventory Clustering Engine), an approach to grouping vocal elements by creating a high dimensionality dataset through scoring spectral similarity between all vocalizations within a recording session. This dataset is then subjected to hierarchical clustering, generating a dendrogram that is pruned into meaningful vocalization “types” by an automated algorithm. When applied to birdsong, a key model for vocal learning, VoICE captures the known deterioration in acoustic properties that follows deafening, including altered sequencing. In a mammalian neurodevelopmental model, we uncover a reduced vocal repertoire of mice lacking the autism susceptibility gene, Cntnap2. VoICE will be useful to the scientific community as it can standardize vocalization analyses across species and laboratories. PMID:26018425
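
The cluster-then-prune step of VoICE can be sketched with a basic single-linkage agglomerative grouping of pairwise spectral distances. The random "spectra" and the clustering code below are illustrative stand-ins, not the VoICE implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "spectral feature vectors" for 20 vocalisations forming
# two clearly separated vocalisation types.
spectra = np.vstack([rng.normal(0, 0.1, (10, 16)),
                     rng.normal(2, 0.1, (10, 16))])

def single_linkage(X, n_clusters):
    """Merge closest clusters (by closest member pair) until n remain."""
    clusters = [[i] for i in range(len(X))]
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    while len(clusters) > n_clusters:
        a, b = min(((a, b) for a in range(len(clusters))
                            for b in range(a + 1, len(clusters))),
                   key=lambda ab: d[np.ix_(clusters[ab[0]],
                                           clusters[ab[1]])].min())
        clusters[a] += clusters.pop(b)   # b > a, so index a stays valid
    return clusters

types = single_linkage(spectra, 2)
print("vocalization type sizes:", sorted(len(c) for c in types))
```

VoICE additionally prunes the full dendrogram into meaningful types automatically; here the number of types is simply fixed at two.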

  20. Automated Probing and Inference of Analytical Models for Metabolic Network Dynamics

    NASA Astrophysics Data System (ADS)

    Wikswo, John; Schmidt, Michael; Jenkins, Jerry; Hood, Jonathan; Lipson, Hod

    2010-03-01

    We introduce a method to automatically construct mathematical models of a biological system, and apply this technique to infer a seven-dimensional nonlinear model of glycolytic oscillations in yeast -- based only on noisy observational data obtained from in silico experiments. Graph-based symbolic encoding, fitness prediction, and estimation-exploration can for the first time provide the level of symbolic regression required for biological applications. With no a priori knowledge of the system, the Cornell algorithm in several hours of computation correctly identified all seven nonlinear ordinary differential equations, the most complicated of which was dA3/dt = -1.12·A3 - 192.24·A3·S1 + 12.50·A3^4 + 124.92·S3 + 31.69·A3·S3, where A3 = [ATP], S1 = [glucose], and S3 = [cytosolic pyruvate and acetaldehyde pool]. Errors on the 26 parameters ranged from 0 to 14.5%. The algorithm also automatically identified new and potentially useful chemical constants of the motion, e.g. -k1·N2 + K2·v1 + k2·S1·A3 - (k4 - k5·v1)·A3^4 + k6 ≈ 0. This approach may enable automated design, control and analysis of wet-lab experiments for model identification/refinement.

  1. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    SciTech Connect

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al. 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960s-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
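
The simplest of the four methods listed, output ratio calibration, can be sketched in a few lines: scale the model's energy predictions by the ratio of observed to simulated totals. The monthly figures below are invented for illustration.

```python
simulated = [900, 850, 700, 500, 400, 350]   # kWh per month (model output)
observed  = [990, 940, 760, 560, 430, 390]   # kWh per month (utility bills)

# Single scaling factor matching the simulated total to the billed total.
ratio = sum(observed) / sum(simulated)
calibrated = [ratio * s for s in simulated]

print(f"calibration ratio = {ratio:.3f}")
print("calibrated monthly use:", [round(c) for c in calibrated])
```

The study's other three methods instead adjust the physical model inputs themselves, which is why they can predict retrofit savings more credibly than a pure output scaling.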

  2. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    SciTech Connect

    Robertson, Joseph; Polly, Ben; Collis, Jon

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  3. DockTope: a Web-based tool for automated pMHC-I modelling.

    PubMed

    Menegatti Rigo, Maurício; Amaral Antunes, Dinler; Vaz de Freitas, Martiela; Fabiano de Almeida Mendes, Marcus; Meira, Lindolfo; Sinigaglia, Marialva; Fioravanti Vieira, Gustavo

    2015-01-01

    The immune system is constantly challenged, being required to protect the organism against a wide variety of infectious pathogens and, at the same time, to avoid autoimmune disorders. One of the most important molecules involved in these events is the Major Histocompatibility Complex class I (MHC-I), responsible for binding and presenting small peptides from the intracellular environment to CD8(+) T cells. The study of peptide:MHC-I (pMHC-I) molecules at a structural level is crucial to understand the molecular mechanisms underlying immunologic responses. Unfortunately, there are few pMHC-I structures in the Protein Data Bank (PDB) (especially considering the total number of complexes that could be formed combining different peptides), and pMHC-I modelling tools are scarce. Here, we present DockTope, a free and reliable web-based tool for pMHC-I modelling, based on crystal structures from the PDB. DockTope is fully automated and allows any researcher to construct a pMHC-I complex in an efficient way. We have reproduced a dataset of 135 non-redundant pMHC-I structures from the PDB (Cα RMSD below 1 Å). Modelling of pMHC-I complexes is remarkably important, contributing to the knowledge of important events such as cross-reactivity, autoimmunity, cancer therapy, transplantation and rational vaccine design. PMID:26674250

  4. Automated Geometric Model Builder Using Range Image Sensor Data: Final Acquisition

    SciTech Connect

    Diegert, C.; Sackos, J.

    1999-02-01

    This report documents a data collection where we recorded redundant range image data from multiple views of a simple scene, and recorded accurate survey measurements of the same scene. Collecting these data was a focus of the research project Automated Geometric Model Builder Using Range Image Sensor Data (96-0384), supported by Sandia's Laboratory-Directed Research and Development (LDRD) Program during fiscal years 1996, 1997, and 1998. The data described here are available from the authors on CDROM, or electronically over the Internet. Included in this data distribution are Computer-Aided Design (CAD) models we constructed from the survey measurements. The CAD models are compatible with the SolidWorks 98 Plus system, the modern Computer-Aided Design software system that is central to Sandia's DeskTop Engineering Project (DTEP). Integration of our measurements (as built) with the constructive geometry process of the CAD system (as designed) delivers on a vision of the research project. This report on our final data collection will also serve as a final report on the project.

  5. DockTope: a Web-based tool for automated pMHC-I modelling

    PubMed Central

    Menegatti Rigo, Maurício; Amaral Antunes, Dinler; Vaz de Freitas, Martiela; Fabiano de Almeida Mendes, Marcus; Meira, Lindolfo; Sinigaglia, Marialva; Fioravanti Vieira, Gustavo

    2015-01-01

    The immune system is constantly challenged, being required to protect the organism against a wide variety of infectious pathogens and, at the same time, to avoid autoimmune disorders. One of the most important molecules involved in these events is the Major Histocompatibility Complex class I (MHC-I), responsible for binding and presenting small peptides from the intracellular environment to CD8+ T cells. The study of peptide:MHC-I (pMHC-I) molecules at a structural level is crucial to understand the molecular mechanisms underlying immunologic responses. Unfortunately, there are few pMHC-I structures in the Protein Data Bank (PDB) (especially considering the total number of complexes that could be formed combining different peptides), and pMHC-I modelling tools are scarce. Here, we present DockTope, a free and reliable web-based tool for pMHC-I modelling, based on crystal structures from the PDB. DockTope is fully automated and allows any researcher to construct a pMHC-I complex in an efficient way. We have reproduced a dataset of 135 non-redundant pMHC-I structures from the PDB (Cα RMSD below 1 Å). Modelling of pMHC-I complexes is remarkably important, contributing to the knowledge of important events such as cross-reactivity, autoimmunity, cancer therapy, transplantation and rational vaccine design. PMID:26674250

  6. Towards Automated Bargaining in Electronic Markets: A Partially Two-Sided Competition Model

    NASA Astrophysics Data System (ADS)

    Gatti, Nicola; Lazaric, Alessandro; Restelli, Marcello

    This paper focuses on the prominent issue of automating bargaining agents within electronic markets. Bargaining models in the literature deal with settings in which there are only two agents; no model satisfactorily captures settings with competition among multiple buyers and, analogously, among multiple sellers. In this paper, we extend the principal bargaining protocol, i.e. the alternating-offers protocol, to capture bargaining in markets. The model we propose is such that, in the presence of a unique buyer and a unique seller, the agents' equilibrium strategies are those of the original protocol. Moreover, we study the resulting game game-theoretically and provide the following results: in the presence of one-sided competition (multiple buyers and one seller, or vice versa) we derive the agents' equilibrium strategies for all values of the parameters; in the presence of two-sided competition (multiple buyers and multiple sellers) we provide an algorithm that produces the agents' equilibrium strategies for a large set of parameters, and we experimentally evaluate its effectiveness.
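
    The extension starts from the two-agent alternating-offers protocol, whose subgame-perfect split has a well-known closed form (Rubinstein 1982). A sketch of that baseline case, which the market model reduces to with one buyer and one seller:

```python
# Equilibrium split of the classic two-agent alternating-offers protocol
# (Rubinstein 1982), the baseline the market model reduces to with one
# buyer and one seller. delta1, delta2 are the agents' discount factors.

def rubinstein_split(delta1: float, delta2: float) -> tuple[float, float]:
    """Share of the surplus each agent gets when agent 1 makes the first offer."""
    x1 = (1 - delta2) / (1 - delta1 * delta2)
    return x1, 1 - x1

# A more patient responder (higher delta2) extracts a larger share.
s_impatient = rubinstein_split(0.9, 0.5)
s_patient = rubinstein_split(0.9, 0.95)
print(s_impatient, s_patient)
```

    Competition among buyers or sellers perturbs exactly these shares, which is what the paper's extended protocol analyzes.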

  7. Microwave landing system modeling with application to air traffic control automation

    NASA Technical Reports Server (NTRS)

    Poulose, M. M.

    1992-01-01

    Compared to the current instrument landing system, the microwave landing system (MLS), which is in the advanced stage of implementation, can potentially provide significant fuel and time savings as well as more flexibility in approach and landing functions. However, the expanded coverage and increased accuracy requirements of the MLS make it more susceptible to the features of the site in which it is located. An analytical approach is presented for evaluating the multipath effects of scatterers that are commonly found in airport environments. The approach combines a multiplane model with a ray-tracing technique and a formulation for estimating the electromagnetic fields caused by the antenna array in the presence of scatterers. The model is applied to several airport scenarios. The reduced computational burden enables the scattering effects on MLS position information to be evaluated in near real time. Evaluation in near real time would permit the incorporation of the modeling scheme into air traffic control automation; it would adaptively delineate zones of reduced accuracy within the MLS coverage volume, and help establish safe approach and takeoff trajectories in the presence of uneven terrain and other scatterers.

  8. Automated multiscale segmentation of volumetric biomedical imagery based on a Markov random field model

    NASA Astrophysics Data System (ADS)

    Montgomery, David W. G.; Amira, Abbes; Murtagh, Fionn

    2005-06-01

    A fully automated volumetric image segmentation algorithm is proposed which uses Bayesian inference to assess the appropriate number of image segments. The segmentation is performed exclusively within the wavelet domain, after the application of the redundant à trous wavelet transform employing four decomposition levels. This type of analysis allows for the evaluation of spatial relationships between objects in an image at multiple scales, exploiting the image characteristics matched to a particular scale. These could possibly go undetected in other analysis techniques. The Bayes Information Criterion (BIC) is calculated for a range of segment numbers with a relative maximum determining optimal segment number selection. The fundamental idea of the BIC is to approximate the integrated likelihood in the Bayes factor and then ignore terms which do not increase quickly with N, where N is the cardinality of the data. Gaussian Mixture Modelling (GMM) is then applied to an individual mid-level wavelet scale to achieve a baseline scene estimate considering only voxel intensities. This estimate is then refined using a series of wavelet scales in a multiband manner to reflect spatial and multiresolution correlations within the image, by means of a Markov Random Field Model (MRFM). This approach delivers promising results for a number of volumetric brain MR and PET images, with inherent image features being identified. Results achieved largely correspond with those obtained by researchers in biomedical imaging utilising manually defined parameters for image modelling.
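
    The BIC step can be illustrated independently of the wavelet pipeline. In this 1D sketch (invented intensities and a hard two-class split rather than the paper's GMM), the criterion -2 ln L + k ln N correctly prefers two segments over one:

```python
# Toy illustration of Bayes Information Criterion (BIC) model selection, the
# criterion used to pick the number of segments. This is a 1D sketch with
# invented data, not the paper's wavelet-domain pipeline.
import math
import random

random.seed(0)
# Intensities drawn from two well-separated "tissue classes".
data = [random.gauss(20, 2) for _ in range(300)] + \
       [random.gauss(60, 2) for _ in range(300)]

def gauss_loglik(xs):
    """Log-likelihood of xs under a single fitted Gaussian."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def bic(segments):
    """BIC = -2 ln L + k ln N, with one mean and variance per segment."""
    n_total = sum(len(s) for s in segments)
    loglik = sum(gauss_loglik(s) for s in segments)
    k = 2 * len(segments)  # free parameters
    return -2 * loglik + k * math.log(n_total)

one_segment = [data]
two_segments = [[x for x in data if x < 40], [x for x in data if x >= 40]]
print(bic(one_segment), bic(two_segments))  # the two-segment model scores lower
```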

  9. VoICE: A semi-automated pipeline for standardizing vocal analysis across models.

    PubMed

    Burkett, Zachary D; Day, Nancy F; Peñagarikano, Olga; Geschwind, Daniel H; White, Stephanie A

    2015-01-01

    The study of vocal communication in animal models provides key insight to the neurogenetic basis for speech and communication disorders. Current methods for vocal analysis suffer from a lack of standardization, creating ambiguity in cross-laboratory and cross-species comparisons. Here, we present VoICE (Vocal Inventory Clustering Engine), an approach to grouping vocal elements by creating a high dimensionality dataset through scoring spectral similarity between all vocalizations within a recording session. This dataset is then subjected to hierarchical clustering, generating a dendrogram that is pruned into meaningful vocalization "types" by an automated algorithm. When applied to birdsong, a key model for vocal learning, VoICE captures the known deterioration in acoustic properties that follows deafening, including altered sequencing. In a mammalian neurodevelopmental model, we uncover a reduced vocal repertoire of mice lacking the autism susceptibility gene, Cntnap2. VoICE will be useful to the scientific community as it can standardize vocalization analyses across species and laboratories. PMID:26018425
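
    The grouping step can be sketched as plain agglomerative clustering over a pairwise similarity matrix. The similarity values below are invented; VoICE derives its scores from spectral comparisons of recorded vocalizations.

```python
# Sketch of the clustering step: agglomerative (single-linkage) grouping of
# vocal elements from a pairwise similarity matrix, stopping at a cutoff
# that plays the role of pruning the dendrogram into vocalization "types".

def cluster(sim, cutoff):
    """Merge the most similar pair of clusters until no pair exceeds cutoff."""
    clusters = [[i] for i in range(len(sim))]
    while True:
        best, pair = cutoff, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: similarity of the closest members
                s = max(sim[i][j] for i in clusters[a] for j in clusters[b])
                if s > best:
                    best, pair = s, (a, b)
        if pair is None:
            return clusters
        a, b = pair
        clusters[a] += clusters.pop(b)

# Elements 0-1 and 2-3 are mutually similar "syllable types".
sim = [[1.0, 0.9, 0.1, 0.2],
       [0.9, 1.0, 0.2, 0.1],
       [0.1, 0.2, 1.0, 0.8],
       [0.2, 0.1, 0.8, 1.0]]
print(cluster(sim, cutoff=0.5))  # → [[0, 1], [2, 3]]
```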

  10. Self-Observation Model Employing an Instinctive Interface for Classroom Active Learning

    ERIC Educational Resources Information Center

    Chen, Gwo-Dong; Nurkhamid; Wang, Chin-Yeh; Yang, Shu-Han; Chao, Po-Yao

    2014-01-01

    In a classroom, obtaining active, whole-focused, and engaging learning results from a design is often difficult. In this study, we propose a self-observation model that employs an instinctive interface for classroom active learning. Students can communicate with virtual avatars in the vertical screen and can react naturally according to the

  11. Development of a GIS interface for WEPP model application to Great Lakes forested watersheds

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This presentation will highlight efforts on development of a new WEPP GIS interface, targeted toward application in forested regions bordering the Great Lakes. The key components and algorithms of the online GIS system will be outlined. The general procedures used to provide input to the WEPP model ...

  12. Importance of interfaces in governing thermal transport in composite materials: modeling and experimental perspectives.

    PubMed

    Roy, Ajit K; Farmer, Barry L; Varshney, Vikas; Sihn, Sangwook; Lee, Jonghoon; Ganguli, Sabyasachi

    2012-02-01

    Thermal management in polymeric composite materials has become increasingly critical in the air-vehicle industry because of the increasing thermal load in small-scale composite devices extensively used in electronics and aerospace systems. The thermal transport phenomenon in these small-scale heterogeneous systems is essentially controlled by the interface thermal resistance because of the large surface-to-volume ratio. In this review article, several modeling strategies are discussed for different length scales, complemented by our experimental efforts to tailor the thermal transport properties of polymeric composite materials. Progress in the molecular modeling of thermal transport in thermosets is reviewed along with a discussion on the interface thermal resistance between functionalized carbon nanotube and epoxy resin systems. For the thermal transport in fiber-reinforced composites, various micromechanics-based analytical and numerical modeling schemes are reviewed in predicting the transverse thermal conductivity. Numerical schemes used to realize and scale the interface thermal resistance and the finite mean free path of the energy carrier in the mesoscale are discussed in the frame of the lattice Boltzmann-Peierls-Callaway equation. Finally, guided by modeling, complementary experimental efforts are discussed for exfoliated graphite and vertically aligned nanotubes based composites toward improving their effective thermal conductivity by tailoring interface thermal resistance. PMID:22295993
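
    The dominance of interfaces at small scales already follows from a one-dimensional series-resistance picture: each boundary contributes a Kapitza-type interface resistance in series with the layers. A sketch with illustrative numbers, not values from the review:

```python
# Series-resistance sketch of why interfaces control heat flow at small
# scales: effective conductivity of a layered stack where each internal
# boundary adds an interface thermal (Kapitza) resistance r_int.
# All numbers are illustrative.

def effective_k(layers, r_int):
    """layers: list of (thickness_m, conductivity_W_per_mK); r_int in m^2K/W."""
    total_thickness = sum(t for t, _ in layers)
    r_layers = sum(t / k for t, k in layers)
    r_interfaces = r_int * (len(layers) - 1)
    return total_thickness / (r_layers + r_interfaces)

# Three micron-thick, highly conductive (CNT-like) layers in series.
stack = [(1e-6, 3000.0)] * 3
k_ideal = effective_k(stack, r_int=0.0)   # no interface resistance
k_real = effective_k(stack, r_int=1e-7)   # a plausible-order interface resistance
print(k_ideal, k_real)  # interfaces collapse the effective conductivity
```

    Here the intrinsic layer resistance is negligible next to the two interface terms, which is the large surface-to-volume effect the review describes.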

  13. AgRISTARS: Yield model development/soil moisture. Interface control document

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The interactions and support functions required between the crop Yield Model Development (YMD) Project and Soil Moisture (SM) Project are defined. The requirements for YMD support of SM, and vice versa, are outlined. Specific tasks in support of these interfaces are defined for development of support functions.

  14. A Monthly Water-Balance Model Driven By a Graphical User Interface

    USGS Publications Warehouse

    McCabe, Gregory J.; Markstrom, Steven L.

    2007-01-01

    This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.
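
    The Thornthwaite method at the core of such a program can be sketched compactly. This version omits the program's day-length correction and snow components, and all input values are invented:

```python
# Core computations of a Thornthwaite-type monthly water balance -- a sketch
# of the method class the program implements, without its day-length
# correction, snow storage, or runoff routing. Inputs are invented.

def thornthwaite_pet(monthly_temps_c):
    """Uncorrected monthly potential evapotranspiration (mm/month)."""
    heat_index = sum((t / 5) ** 1.514 for t in monthly_temps_c if t > 0)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16 * (10 * max(t, 0.0) / heat_index) ** a for t in monthly_temps_c]

def soil_storage(precip_mm, pet_mm, capacity_mm=150.0):
    """Monthly soil-moisture storage, clamped between empty and capacity."""
    storage, trace = capacity_mm, []
    for p, e in zip(precip_mm, pet_mm):
        storage = min(max(storage + p - e, 0.0), capacity_mm)
        trace.append(storage)
    return trace

temps = [-2, 0, 5, 10, 16, 21, 24, 23, 18, 12, 5, 0]       # deg C
precip = [80, 70, 60, 50, 40, 30, 20, 30, 40, 60, 80, 90]  # mm
pet = thornthwaite_pet(temps)
trace = soil_storage(precip, pet)
print([round(x) for x in pet])  # PET peaks in the warmest month
```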

  15. The integrity of welded interfaces in ultra high molecular weight polyethylene: Part 1-Model.

    PubMed

    Buckley, C Paul; Wu, Junjie; Haughie, David W

    2006-06-01

    The difficulty of eradicating memory of powder-particle interfaces in UHMWPE for bearing surfaces for hip and knee replacements is well-known, and 'fusion defects' have been implicated frequently in joint failures. During processing the polymer is formed into solid directly from the reactor powder, under pressure and at temperatures above the melting point, and two types of inter-particle defect occur: Type 1 (consolidation-deficient) and Type 2 (diffusion-deficient). To gain quantitative information on the extent of the problem, the formation of macroscopic butt welds in this material was studied, by (1) modelling the process and (2) measuring experimentally the resultant evolution of interface toughness. This paper reports on the model. A quantitative measure of interface structural integrity is defined, and related to the "maximum reptated molecular weight" introduced previously. The model assumes an idealised surface topography. It is used to calculate the evolution of interface integrity during welding, for given values of temperature, pressure, and parameters describing the surfaces, and a given molar mass distribution. Only four material properties are needed for the calculation; all of them available for polyethylene. The model shows that, for UHMWPE typically employed in knee transplants, the rate of eradication of Type 1 defects is highly sensitive to surface topography, process temperature and pressure. Also, even if Type 1 defects are prevented, Type 2 defects heal extremely slowly. They must be an intrinsic feature of UHMWPE for all reasonable forming conditions, and products and forming processes should be designed accordingly. PMID:16490249

  16. Cryo-EM Data Are Superior to Contact and Interface Information in Integrative Modeling.

    PubMed

    de Vries, Sjoerd J; Chauvot de Beauchêne, Isaure; Schindler, Christina E M; Zacharias, Martin

    2016-02-23

    Protein-protein interactions carry out a large variety of essential cellular processes. Cryo-electron microscopy (cryo-EM) is a powerful technique for the modeling of protein-protein interactions at a wide range of resolutions, and recent developments have caused a revolution in the field. At low resolution, cryo-EM maps can drive integrative modeling of the interaction, assembling existing structures into the map. Other experimental techniques can provide information on the interface or on the contacts between the monomers in the complex. This inevitably raises the question regarding which type of data is best suited to drive integrative modeling approaches. Systematic comparison of the prediction accuracy and specificity of the different integrative modeling paradigms is unavailable to date. Here, we compare EM-driven, interface-driven, and contact-driven integrative modeling paradigms. Models were generated for the protein docking benchmark using the ATTRACT docking engine and evaluated using the CAPRI two-star criterion. At 20 Å resolution, EM-driven modeling achieved a success rate of 100%, outperforming the other paradigms even with perfect interface and contact information. Therefore, even very low resolution cryo-EM data are superior in predicting heterodimeric and heterotrimeric protein assemblies. Our study demonstrates that a force field is not necessary; cryo-EM data alone are sufficient to accurately guide the monomers into place. The resulting rigid models successfully identify regions of conformational change, opening up perspectives for targeted flexible remodeling. PMID:26846888

  17. Facial pressure zones of an oronasal interface for noninvasive ventilation: a computer model analysis

    PubMed Central

    Barros, Luana Souto; Talaia, Pedro; Drummond, Marta; Natal-Jorge, Renato

    2014-01-01

    OBJECTIVE: To study the effects of an oronasal interface (OI) for noninvasive ventilation, using a three-dimensional (3D) computational model with the ability to simulate and evaluate the main pressure zones (PZs) of the OI on the human face. METHODS: We used a 3D digital model of the human face, based on a pre-established geometric model. The model simulated soft tissues, skull, and nasal cartilage. The geometric model was obtained by 3D laser scanning and post-processed for use in the model created, with the objective of separating the cushion from the frame. A computer simulation was performed to determine the pressure required in order to create the facial PZs. We obtained descriptive graphical images of the PZs and their intensity. RESULTS: For the graphical analyses of each face-OI model pair and their respective evaluations, we ran 21 simulations. The computer model identified several high-impact PZs in the nasal bridge and paranasal regions. The variation in soft tissue depth had a direct impact on the amount of pressure applied (438-724 cmH2O). CONCLUSIONS: The computer simulation results indicate that, in patients submitted to noninvasive ventilation with an OI, the probability of skin lesion is higher in the nasal bridge and paranasal regions. This methodology could increase the applicability of biomechanical research on noninvasive ventilation interfaces, providing the information needed in order to choose the interface that best minimizes the risk of skin lesion. PMID:25610506

  18. Automated geography

    SciTech Connect

    Dobson, J.E.

    1983-05-01

    Analytical methods and computer technology for spatial analysis have advanced rapidly. Geographers can now consider a general form of automated geography which integrates all of the new techniques into an analytical whole. Computer cartography, computer graphics, digital remote sensing, geographic information systems, spatial statistics, and quantitative spatial modeling can be combined eclectically with traditional manual techniques to address geographic problems that are too large and complex for manual treatment alone. Small systems are widely available to facilitate small, less complex problems. Automation can assist in all forms of geography - scientific and humanistic, nomothetic and idiographic, basic and applied - but its adoption is likely to be highest among applied scientists. The immediate challenge is to prepare for a major shift toward computer instruction and automated geography in the late 1980s. Long term effects will include improved contributions by geographers to national and international policy analyses, a greater emphasis on team-work and sharing, stronger ties with other disciplines, and a generally more viable discipline. 27 references.

  19. A Laboratory Seismoelectric Measurement for the Permafrost Model with a Frozen-unfrozen Interface

    NASA Astrophysics Data System (ADS)

    Liu, Z.

    2007-12-01

    For the Qing-Cang railway line located in the permafrost region, freeze-thaw cycling with the seasons and the spring thaw of the permafrost are the main factors weakening the railway bed. Determining the depth of the frozen-unfrozen interface below the railway bed is therefore important for railway operation and, moreover, can contribute to evaluating the permafrost environment affected by the railway. Since the frozen-unfrozen interface is a contact between two media of different porosity and saturation, an electric double-layer can form at the interface through the adsorption of electric charge. When a seismic wave is incident on the interface, the relative motion of the charges in the electric double-layer induces an electromagnetic (EM) wave, or seismoelectric conversion signal, which can be measured remotely and has potential for determining the frost depth. A simple permafrost model with a frozen-unfrozen interface was built in two parts: the upper part was a frozen sand block 7 cm thick, and the lower part, of the same material, was unfrozen and saturated with water. The contact between the two parts simulated the frozen-unfrozen interface. The model was placed in a freezer and heated from the bottom by a heating sheet of electric heating wires laid under the unfrozen part. A P-wave source transducer with a 48 kHz narrow-band frequency, driven by a square electric pulse, was set on top of the frozen part. Six electrodes at an even 1 cm spacing were fixed inside the frozen part at a vertical distance of 1 cm from the interface. In the experiment, all analog signals acquired from the temperature sensors, acoustic transducers, and electrodes were passed through preamplifiers and recorded digitally by computer-based virtual instruments (VIs). At the beginning of the experiment, the first arrivals of the seismoelectric signals observed at the six electrodes, with the minimum offset set to 7 cm, were proportional to the distances from the acoustic source to the electrodes; these EM signals thus originated from the stationary electromagnetic field that moves along with the acoustic waves. After eight hours, we recognized two new EM-wave events by their identical arrival times at all six electrodes. Event A, with an identical arrival time close to zero, is the EM interference of the high-voltage pulse exciting the acoustic source transducer. The identical arrival time of 23-25 microseconds for event B roughly equals the acoustic travel time from the source to the interface; it is evidently the conversion EM signal originating from the electric double-layer at the interface. With a minimum offset of 14 cm, event A arrived at the same time but with greatly reduced amplitude, and event B could not be detected because of its weak amplitude. Another event, B', with an identical arrival time of about 50 microseconds could, however, be recognized; it should be a conversion EM wave from the interface excited by the second, higher-amplitude vibration cycle of the acoustic source, as its arrival time equals the travel time of the second cycle of the narrow-band acoustic wave to the interface. These laboratory measurements show that the electric double-layer formed at the frozen-unfrozen interface can be polarized to generate EM waves by both an EM pulse and a vibration source, implying that the frozen-unfrozen interface of permafrost could be surveyed by both EM and seismoelectric methods. The results also show that the electric double-layer needs several hours to form in a laboratory experiment at low temperature.

  20. Modelling discontinuous metal matrix composite behavior under creep conditions: Effect of interface diffusional matter transport and interface sliding

    SciTech Connect

    Vala, J.; Svoboda, J.; Kozak, V.; Cadek, J. (Inst. of Physical Metallurgy)

    1994-05-01

    The continuum mechanics analysis of creep behavior of metal matrix composites is presented in this paper. The analysis includes elasticity, power law creep and diffusional matter transport and sliding along the reinforcement/matrix interfaces. The solution of the problem requires development of original mathematical and numerical methods. The present results indicate a significant influence of reinforcement/matrix interface properties on macroscopic behavior of the composite as well as on stress distribution in the composite under creep conditions.

  1. A Demonstration of Automated DNA Sequencing.

    ERIC Educational Resources Information Center

    Latourelle, Sandra; Seidel-Rogol, Bonnie

    1998-01-01

    Details a simulation that employs a paper-and-pencil model to demonstrate the principles behind automated DNA sequencing. Discusses the advantages of automated sequencing as well as the chemistry of automated DNA sequencing. (DDR)

  2. Integrating Automated Data into Ecosystem Models: How Can We Drink from a Firehose?

    NASA Astrophysics Data System (ADS)

    Allen, M. F.; Harmon, T. C.

    2014-12-01

    Sensors and imaging are changing the way we are measuring ecosystem behavior. Within short time frames, we are able to capture how organisms behave in response to rapid change, and detect events that alter composition and shift states. To transform these observations into process-level understanding, we need to efficiently interpret signals. One way to do this is to automatically integrate the data into ecosystem models. In our soil carbon cycling studies, we collect continuous time series for meteorological conditions, soil processes, and automated imagery. To characterize the timing and clarity of change behavior in our data, we adopted signal-processing approaches like coupled wavelet/coherency analyses. In situ CO2 measurements allow us to visualize when root/microbial activity results in CO2 being respired from the soil surface, versus when other chemical/physical phenomena may alter gas pathways. While these approaches are interesting in understanding individual phenomena, they fail to get us beyond the study of individual processes. Sensor data are compared with the outputs from ecosystem models to detect the patterns in specific phenomena or to revise model parameters or traits. For instance, we measured unexpected levels of soil CO2 in a tropical ecosystem. By examining small-scale ecosystem model parameters, we were able to pinpoint those parameters that needed to be altered to resemble the data outputs. However, we do not capture the essence of large-scale ecosystem shifts. The time is right to utilize real-time data assimilation as an additional forcing of ecosystem models. Continuous, diurnal soil temperature and moisture, along with hourly hyphal or root growth could feed into well-established ecosystem models such as HYDRUS or DayCENT. This approach would provide instantaneous "measurements" of shifting ecosystem processes as they occur, allowing us to identify critical process connections more efficiently.

  3. Coarse Grained Modeling of the Interface Between Water and Heterogeneous Surfaces

    SciTech Connect

    Willard, Adam; Chandler, David

    2008-06-23

    Using coarse grained models we investigate the behavior of water adjacent to an extended hydrophobic surface peppered with various fractions of hydrophilic patches of different sizes. We study the spatial dependence of the mean interface height, the solvent density fluctuations related to drying the patchy substrate, and the spatial dependence of interfacial fluctuations. We find that adding small uniform attractive interactions between the substrate and solvent causes the mean position of the interface to be very close to the substrate. Nevertheless, the interfacial fluctuations are large and spatially heterogeneous in response to the underlying patchy substrate. We discuss the implications of these findings for the assembly of heterogeneous surfaces.

  4. Dynamic lattice Monte Carlo simulation of a model protein at an oil/water interface

    NASA Astrophysics Data System (ADS)

    Anderson, Rebeccah E.; Pande, Vijay S.; Radke, Clayton J.

    2000-05-01

    Adsorption of a proteinlike heteropolymer is modeled at an oil/water interface by dynamic lattice Monte Carlo simulation. The heteropolymer is a designed sequence of 27 amino-acid-type lattice sites and has been used as a model for short (50-70 residue) proteins. Oil is represented by a characteristic hydrophobic amino acid monomer, and water is represented by a characteristic hydrophilic amino acid monomer. The model protein is initially placed slightly away from the oil/water interface and is then allowed to undergo Verdier-Stockmeyer moves as amino acid sites interact with each other and with the oil and water. Local mixing of the oil and water is permitted over the length scale of the protein. Our lattice representation displays correct behavior in bulk water in that the model protein folds rapidly from an extended rod into a globular like state. In addition, there is a phase transition between the globular (folded) state and the denatured (unfolded) state at a particular temperature, Tm*. By examining the free-energy landscape at 0.94 Tm*, we identify four configurational states in the adsorbing system: unfolded in the bulk water, folded in the bulk water, unfolded at the interface, and folded at the interface. The most probable state of the four is the adsorbed unfolded state at the interface, with a large free-energy barrier to desorption (20 kBTm*). We find that it is the unfavorable interaction between the oil and the water that drives the protein to the interface. Adsorption of a single protein molecule reduces the oil and water energies by 175 kBTm*. A typical conformation of the adsorbed, unfolded protein has the majority of protein segments remaining in the water but lying directly adjacent to the interface, with about 30% loops penetrating into the water phase and only a few segments (10%) penetrating into the oil. This work provides a picture of single-molecule protein adsorption at the oil/water interface in which the protein unfolds into an extended train structure and thereafter is essentially irreversibly bound.
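
    The dynamic Monte Carlo scheme rests on the standard Metropolis acceptance rule. A sketch of that rule alone (the actual simulation uses the 27-mer's Verdier-Stockmeyer move set and contact energies):

```python
# The acceptance step behind a dynamic lattice Monte Carlo simulation:
# a trial move changing the energy by delta_e is kept with probability
# min(1, exp(-delta_e / kT)). Only the rule is sketched here.
import math
import random

def metropolis_accept(delta_e, temperature, rng=random.random):
    """Accept or reject a trial move that changes the energy by delta_e."""
    if delta_e <= 0:
        return True                      # downhill moves are always accepted
    return rng() < math.exp(-delta_e / temperature)

random.seed(1)
# At low temperature almost no uphill moves pass; at high temperature many do.
low_t = sum(metropolis_accept(2.0, 0.2) for _ in range(10000))
high_t = sum(metropolis_accept(2.0, 5.0) for _ in range(10000))
print(low_t, high_t)
```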

  5. Effective impedance model for analysis of reflection at the interfaces of photonic crystals.

    PubMed

    Momeni, Babak; Eftekhar, Ali Asghar; Adibi, Ali

    2007-04-01

    We present an alternative definition of impedance to describe the reflection at the interfaces of photonic crystals. We show that this effective impedance can be defined only by the properties of the photonic crystal modes and is independent of the properties of the incident region. This approximate model successfully explains the main features in the reflection spectra of various interface terminations of photonic crystals. In particular, we show an impedance matching condition at which reflectionless transmission of power to a low-group-velocity photonic crystal mode is possible, a property that is attractive for various dispersion-based applications of photonic crystals. PMID:17339934
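
    Once an effective impedance is defined, reflection at the interface can be estimated with the familiar two-impedance formula r = (Z2 - Z1)/(Z2 + Z1). The impedance values below are hypothetical placeholders, not values from the paper:

```python
# The payoff of an effective-impedance description: interface reflection can
# be estimated with the standard transmission-line mismatch formula.

def reflectance(z_in, z_out):
    """Power reflection coefficient between two media of given impedance."""
    r = (z_out - z_in) / (z_out + z_in)
    return r * r

print(reflectance(1.0, 1.0))   # matched impedances: no reflection
print(reflectance(1.0, 3.0))   # mismatched interface reflects 25% of the power
```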

  6. Interface localization in the 2D Ising model with a driven line

    NASA Astrophysics Data System (ADS)

    Cohen, O.; Mukamel, D.

    2016-04-01

    We study the effect of a one-dimensional driving field on the interface between two coexisting phases in a two-dimensional model. This is done by considering an Ising model on a cylinder with Glauber dynamics at all sites and additional biased Kawasaki dynamics in the central ring. Based on the exact solution of the two-dimensional Ising model, we are able to compute the phase diagram of the driven model within a special limit of fast drive and slow spin flips in the central ring. The model is found to exhibit two phases where the interface is pinned to the central ring: one in which it fluctuates symmetrically around the central ring and another where it fluctuates asymmetrically. In addition, we find a phase where the interface is centered in the bulk of the system, either below or above the central ring of the cylinder. In the latter case, the symmetry breaking is ‘stronger’ than that found in equilibrium when considering a repulsive potential on the central ring. This equilibrium model is analyzed here by using a restricted solid-on-solid model.
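
    A minimal sketch of the mixed dynamics described above, under loudly labelled assumptions: a small L x L cylinder (periodic around the ring, open at the ends), standard Glauber flips at temperature T, and the fast-drive limit idealized as energy-blind biased Kawasaki exchanges on the central ring. Lattice size, temperature, and bias value are all illustrative.

```python
import math
import random

L, T, BIAS = 16, 2.0, 0.9   # cylinder size, temperature, drive bias (illustrative)
J = 1.0
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
CENTER = L // 2

def local_field(i, j):
    # Periodic in j (around the cylinder), open at the top/bottom rows.
    h = spins[i][(j - 1) % L] + spins[i][(j + 1) % L]
    if i > 0:
        h += spins[i - 1][j]
    if i < L - 1:
        h += spins[i + 1][j]
    return J * h

def glauber_step():
    # Heat-bath (Glauber) single-spin flip at a random site.
    i, j = random.randrange(L), random.randrange(L)
    dE = 2 * spins[i][j] * local_field(i, j)
    if random.random() < 1.0 / (1.0 + math.exp(dE / T)):
        spins[i][j] *= -1

def driven_kawasaki_step():
    # Biased pair exchange on the central ring: +1 spins are pushed in the
    # +j direction. The fast-drive limit is idealized here as an
    # energy-independent asymmetric exchange rate.
    j = random.randrange(L)
    k = (j + 1) % L
    s1, s2 = spins[CENTER][j], spins[CENTER][k]
    if s1 == s2:
        return
    p = BIAS if (s1, s2) == (1, -1) else 1.0 - BIAS
    if random.random() < p:
        spins[CENTER][j], spins[CENTER][k] = s2, s1

for sweep in range(200):
    for _ in range(L * L):
        glauber_step()
    for _ in range(L):
        driven_kawasaki_step()
```

    Tracking the row at which the magnetization changes sign, as a function of the bias and temperature, would be the natural next step for locating the pinned and bulk interface phases in this toy version.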

  7. Continuity-based model interfacing for plant-wide simulation: a general approach.

    PubMed

    Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A

    2006-08-01

    In plant-wide simulation studies of wastewater treatment facilities, existing models of different origins often need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework to construct model interfaces for models of wastewater systems, taking conservation principles into account. In this contribution, the CBIM approach is applied to study the effect of sludge digestion reject water treatment with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark simulation model no. 2 (BSM2) is used to simulate the behaviour of the complete WWTP including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach was further refined, and particular issues that arise when coupling models in which pH is considered a state variable are pointed out. PMID:16846629

  8. Dynamics modeling for parallel haptic interfaces with force sensing and control.

    PubMed

    Bernstein, Nicholas; Lawrence, Dale; Pao, Lucy

    2013-01-01

    Closed-loop force control can be used on haptic interfaces (HIs) to mitigate the effects of mechanism dynamics. A single multidimensional force-torque sensor is often employed to measure the interaction force between the haptic device and the user's hand. The parallel haptic interface at the University of Colorado (CU) instead employs smaller 1D force sensors oriented along each of the five actuating rods to build up a 5D force vector. This paper shows that a particular manipulandum/hand partition in the system dynamics is induced by the placement and type of force sensing, and discusses the implications on force and impedance control for parallel haptic interfaces. The details of a "squaring down" process are also discussed, showing how to obtain reduced degree-of-freedom models from the general six degree-of-freedom dynamics formulation. PMID:24808395
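
    The idea of building a force estimate from per-rod 1D sensors can be sketched as below. This is a hedged illustration, not the CU device's actual model: only the translational part of the force vector is assembled, and the rod direction vectors and sensor readings are invented for the example.

```python
import math

def resultant_force(directions, readings):
    """Project each rod's scalar force reading along its (normalized)
    direction vector and sum into a Cartesian resultant force."""
    f = [0.0, 0.0, 0.0]
    for u, s in zip(directions, readings):
        norm = math.sqrt(sum(c * c for c in u))
        for k in range(3):
            f[k] += s * u[k] / norm   # project rod force into x, y, z
    return f

# Five hypothetical rod directions and scalar readings (N).
dirs = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1)]
force = resultant_force(dirs, [0.5, 0.2, 0.1, 0.3, 0.4])
```

    In the real device the five readings also carry torque information about the attachment point, which is why the paper speaks of a 5D force vector and why the manipulandum/hand partition depends on where and how the sensing is done.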

  9. Modeling Complex Cross-Systems Software Interfaces Using SysML

    NASA Technical Reports Server (NTRS)

    Mandutianu, Sanda; Morillo, Ron; Simpson, Kim; Liepack, Otfrid; Bonanne, Kevin

    2013-01-01

    The complex flight and ground systems for NASA human space exploration are designed, built, operated and managed as separate programs and projects. However, each system relies on one or more of the other systems in order to accomplish specific mission objectives, creating a complex, tightly coupled architecture. Thus, there is a fundamental need to understand how each system interacts with the other. To determine if a model-based system engineering approach could be utilized to assist with understanding the complex system interactions, the NASA Engineering and Safety Center (NESC) sponsored a task to develop an approach for performing cross-system behavior modeling. This paper presents the results of applying Model Based Systems Engineering (MBSE) principles using the System Modeling Language (SysML) to define cross-system behaviors and how they map to cross-system software interfaces documented in system-level Interface Control Documents (ICDs).

  10. Nuclear Reactor/Hydrogen Process Interface Including the HyPEP Model

    SciTech Connect

    Steven R. Sherman

    2007-05-01

    The Nuclear Reactor/Hydrogen Plant interface is the intermediate heat transport loop that will connect a very high temperature gas-cooled nuclear reactor (VHTR) to a thermochemical, high-temperature electrolysis, or hybrid hydrogen production plant. A prototype plant called the Next Generation Nuclear Plant (NGNP) is planned for construction and operation at the Idaho National Laboratory in the 2018-2021 timeframe, and will involve a VHTR, a high-temperature interface, and a hydrogen production plant. The interface is responsible for transporting high-temperature thermal energy from the nuclear reactor to the hydrogen production plant while protecting the nuclear plant from operational disturbances at the hydrogen plant. Development of the interface is occurring under the DOE Nuclear Hydrogen Initiative (NHI) and involves the study, design, and development of high-temperature heat exchangers, heat transport systems, materials, safety, and integrated system models. Research and development work on the system interface began in 2004 and is expected to continue at least until the start of construction of an engineering-scale demonstration plant.

  11. Non-Redundant Unique Interface Structures as Templates for Modeling Protein Interactions

    PubMed Central

    Cukuroglu, Engin; Gursoy, Attila; Nussinov, Ruth; Keskin, Ozlem

    2014-01-01

    Improvements in experimental techniques increasingly provide structural data relating to protein-protein interactions. Classification of structural details of protein-protein interactions can provide valuable insights for modeling and abstracting design principles. Here, we aim to cluster protein-protein interactions by their interface structures, and to exploit these clusters to obtain and study shared and distinct protein binding sites. We find that there are 22604 unique interface structures in the PDB. These unique interfaces, which provide a rich resource of structural data of protein-protein interactions, can be used for template-based docking. We test the specificity of these non-redundant unique interface structures by finding protein pairs which have multiple binding sites. We suggest that residues with more than 40% relative accessible surface area should be considered as surface residues in template-based docking studies. This comprehensive study of protein interface structures can serve as a resource for the community. The dataset can be accessed at http://prism.ccbb.ku.edu.tr/piface. PMID:24475173

  12. Open boundary conditions for the Diffuse Interface Model in 1-D

    NASA Astrophysics Data System (ADS)

    Desmarais, J. L.; Kuerten, J. G. M.

    2014-04-01

    New techniques are developed for solving multi-phase flows in unbounded domains using the Diffuse Interface Model (DIM) in 1-D. They extend two open boundary conditions originally designed for the Navier-Stokes equations. The non-dimensional formulation of the DIM generalizes the approach to any fluid. The equations support a steady state whose analytical approximation close to the critical point depends only on temperature. This feature enables the use of detectors at the boundaries switching between conventional boundary conditions in bulk phases and a multi-phase strategy in interfacial regions. Moreover, the latter takes advantage of the steady state approximation to minimize the interface-boundary interactions. The techniques are applied to fluids experiencing a phase transition and where the interface between the phases travels through one of the boundaries. When the interface crossing the boundary is fully developed, the technique greatly improves results relative to cases where conventional boundary conditions can be used. Limitations appear when the interface crossing the boundary is not a stable equilibrium between the two phases: the terms responsible for creating the true balance between the phases perturb the interior solution. Both boundary conditions present good numerical stability properties: the error remains bounded when the initial conditions or the far field values are perturbed. For the perfectly matched layer (PML), the influence of its main parameters on the global error is investigated to strike a compromise between computational cost and maximum error. The approach can be extended to multiple spatial dimensions.

  13. Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model

    PubMed Central

    Kajić, Vedran; Esmaeelpour, Marieh; Považay, Boris; Marshall, David; Rosin, Paul L.; Drexler, Wolfgang

    2011-01-01

    A two-stage statistical model based on texture and shape for fully automatic choroidal segmentation of normal and pathologic eyes obtained by a 1060 nm optical coherence tomography (OCT) system is developed. A novel dynamic programming approach is implemented to determine the location of the retinal pigment epithelium/Bruch's membrane/choriocapillaris (RBC) boundary. The choroid-sclera interface (CSI) is segmented using a statistical model. The algorithm is robust even in the presence of speckle noise, low signal (thick choroid), retinal pigment epithelium (RPE) detachments and atrophy, drusen, shadowing and other artifacts. Evaluation against a set of 871 manually segmented cross-sectional scans from 12 eyes achieves an average error rate of 13%, computed per tomogram as the ratio of incorrectly classified pixels to the total layer surface. For the first time a fully automatic choroidal segmentation algorithm is successfully applied to a wide range of clinical volumetric OCT data. PMID:22254171
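
    Dynamic-programming boundary search of the kind used for the RBC boundary above can be sketched generically: given a per-pixel cost image, find the left-to-right path (one row per column) minimising total cost, with a smoothness constraint of at most one row per column step. The cost image here is synthetic; the paper's actual costs come from texture and shape statistics.

```python
def dp_boundary(cost):
    """Minimum-cost monotone path across the columns of a 2-D cost grid."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    acc = [row[:] for row in cost]           # accumulated cost
    back = [[0] * cols for _ in range(rows)]  # backpointers
    for c in range(1, cols):
        for r in range(rows):
            best, arg = INF, r
            for dr in (-1, 0, 1):             # smoothness: step <= 1 row
                pr = r + dr
                if 0 <= pr < rows and acc[pr][c - 1] < best:
                    best, arg = acc[pr][c - 1], pr
            acc[r][c] += best
            back[r][c] = arg
    # Trace back from the cheapest endpoint in the last column.
    r = min(range(rows), key=lambda r: acc[r][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]

# Synthetic cost image: a cheap horizontal band at row 2.
cost = [[0.0 if r == 2 else 5.0 for _ in range(8)] for r in range(6)]
print(dp_boundary(cost))   # -> [2, 2, 2, 2, 2, 2, 2, 2]
```

    The same recurrence scales to OCT B-scans directly: the cost grid is the per-pixel boundary likelihood and the recovered path is the layer boundary.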

  14. Modeling of tunneling current in ultrathin MOS structure with interface trap charge and fixed oxide charge

    NASA Astrophysics Data System (ADS)

    Hu, Bo; Huang, Shi-Hua; Wu, Feng-Min

    2013-01-01

    A model based on analysis of the self-consistent Poisson-Schrödinger equation is proposed to investigate the tunneling current of electrons in the inversion layer of a p-type metal-oxide-semiconductor (MOS) structure. In this model, the influences of interface trap charge (ITC) at the Si-SiO2 interface and fixed oxide charge (FOC) in the oxide region are taken into account, and a one-band effective mass approximation is used. The tunneling probability is obtained by employing the transfer matrix method. Further, the effects of in-plane momentum on the quantization of the electron motion perpendicular to the Si-SiO2 interface of a MOS device are investigated. Theoretical simulation results indicate that both ITC and FOC have a great influence on the tunneling current through a MOS structure when their densities are larger than 10^12 cm^-2, which results from the large change in the density of bound electrons near the Si-SiO2 interface and in the oxide region. Therefore, for real ultrathin MOS structures with ITC and FOC, this model can give a more accurate description of the tunneling current in the inversion layer.
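
    The transfer matrix method mentioned above can be sketched for a 1-D piecewise-constant potential. This is a generic textbook construction, not the paper's self-consistent model: there is no Poisson coupling, the dispersion is parabolic one-band with the free-electron mass, and the barrier parameters are illustrative.

```python
import cmath

HBAR2_2M = 3.81  # hbar^2 / (2 m_e) in eV*A^2, free-electron mass (illustrative)

def matmul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def interface(k_prev, k):
    # Match psi and psi' at a potential step (maps right-side coefficients
    # to left-side coefficients).
    r = k / k_prev
    return [[(1 + r) / 2, (1 - r) / 2], [(1 - r) / 2, (1 + r) / 2]]

def propagate(k, w):
    # Free propagation across a layer of width w.
    return [[cmath.exp(-1j * k * w), 0], [0, cmath.exp(1j * k * w)]]

def transmission(E, layers):
    """Transfer-matrix tunneling probability through piecewise-constant
    potential layers [(V_eV, width_A), ...] between outer regions at V = 0."""
    k0 = cmath.sqrt(complex(E) / HBAR2_2M)
    M = [[1, 0], [0, 1]]
    k_prev = k0
    for V, w in layers:
        k = cmath.sqrt(complex(E - V) / HBAR2_2M)  # imaginary inside a barrier
        M = matmul(M, interface(k_prev, k))
        M = matmul(M, propagate(k, w))
        k_prev = k
    M = matmul(M, interface(k_prev, k0))
    return abs(1 / M[0][0]) ** 2

# A 3 eV barrier seen by a 1 eV electron: a thicker oxide gives a smaller T.
T_thin = transmission(1.0, [(3.0, 10.0)])
T_thick = transmission(1.0, [(3.0, 20.0)])
```

    Stacking several layers in the list is all it takes to model a non-rectangular barrier, which is how a transfer-matrix solver handles the band bending produced by ITC and FOC in the full model.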

  15. NURBS- and T-spline-based isogeometric cohesive zone modeling of interface debonding

    NASA Astrophysics Data System (ADS)

    Dimitri, R.; De Lorenzis, L.; Wriggers, P.; Zavarise, G.

    2014-08-01

    Cohesive zone (CZ) models have long been used by the scientific community to analyze the progressive damage of materials and interfaces. In these models, non-linear relationships between tractions and relative displacements are assumed, which dictate both the work of separation per unit fracture surface and the peak stress that has to be reached for crack formation. This contribution deals with isogeometric CZ modeling of interface debonding. The interface is discretized with generalized contact elements which account for both contact and cohesive debonding within a unified framework. The formulation is suitable for non-matching discretizations of the interacting surfaces in the presence of large deformations and large relative displacements. The isogeometric discretizations are based on non-uniform rational B-splines (NURBS) as well as analysis-suitable T-splines enabling local refinement. Conventional Lagrange polynomial discretizations are also used for comparison purposes. Numerical examples demonstrate that the proposed formulation based on isogeometric analysis is a computationally accurate and efficient technology for solving challenging interface debonding problems in 2D and 3D.

  16. An automated method for generating analogic signals that embody the Markov kinetics of model ionic channels.

    PubMed

    Luchian, Tudor

    2005-08-30

    In this work we present an automated method for generating electrical signals which reflect the kinetics of ionic channels that have custom-tailored intermediate sub-states and intermediate reaction constants. The concept of our virtual single-channel waveform generator makes use of two software platforms, one for the numerical generation of single-channel traces stemming from a pre-defined model and another for the digital-to-analog conversion of such numerically generated single-channel traces. This technique of continuous generation and recording of the activity of a model ionic channel provides an efficient protocol for teaching neophytes in the field of single-channel electrophysiology about its major phenomenological facets. Random analogic signals generated by using our technique can be successfully employed in a number of applications, such as: assisted learning of single-molecule kinetic investigation via electrical recordings, impedance spectroscopy, the evaluation of the linear frequency response of neurons, and the study of stochastic resonance of ion channels. PMID:16054511
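
    The numerical half of such a generator can be sketched with the simplest possible kinetic scheme: a two-state (closed/open) Markov channel with user-defined rate constants, sampled at a fixed period and emitted as a current level per sample. All rates, current levels, and sample counts below are illustrative, and a real generator would support arbitrary multi-state schemes.

```python
import random

def simulate_channel(k_open=50.0, k_close=200.0, i_open=2.0e-12,
                     dt=1e-4, n=20000, seed=1):
    """Return current samples (A) from a two-state Markov channel.
    k_open/k_close are transition rates in s^-1; dt is the sample period."""
    rng = random.Random(seed)
    state = 0                           # 0 = closed, 1 = open
    trace = []
    for _ in range(n):
        rate = k_open if state == 0 else k_close
        if rng.random() < rate * dt:    # first-order transition probability
            state = 1 - state
        trace.append(i_open if state else 0.0)
    return trace

trace = simulate_channel()
p_open = sum(1 for s in trace if s > 0) / len(trace)
# Equilibrium open probability should be near k_open / (k_open + k_close) = 0.2
```

    Feeding such a numerically generated trace to a digital-to-analog converter then yields the analogic training signal described in the abstract.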

  17. Effect of fluid circulation on subduction interface tectonic processes: Insights from thermo-mechanical numerical modelling

    NASA Astrophysics Data System (ADS)

    Angiboust, S.; Wolf, S.; Burov, E.; Agard, P.; Yamato, P.

    2012-12-01

    Both geophysical and petrological data suggest that large amounts of water are released in subduction zones during the burial of oceanic lithosphere through metamorphic dehydration reactions. These fluids are generally considered to be responsible for mantle wedge hydration and mechanical weakening of the plate interface, and to affect slab-interface seismicity. In order to bridge the gap between subduction dynamics and the wealth of field, petrological and experimental data documenting small-scale fluid circulation at mantle depths, we designed a bi-phase model in which fluid migration is driven by rock fluid concentrations, non-lithostatic pressure gradients and deformation. Oceanic subduction is modelled using a forward visco-elasto-plastic thermo-mechanically and thermodynamically coupled code (FLAMAR) following the previous work by Yamato et al. (2007). After 16.5 Myr of convergence, deformation is accommodated along the subduction interface by a low-strength shear zone characterised by a weak (10-25% of serpentinite) and relatively narrow (5-10 km) serpentinized front in the reference experiment. Dehydration associated with eclogitization of the oceanic crust (60-75 km depth) and serpentinite breakdown (110-130 km depth) significantly decreases the mechanical strength of the mantle at these depths, thereby favouring the detachment of large slices of oceanic crust along the plate interface. The geometries obtained are in good agreement with reconstructions derived from field evidence from the Alpine eclogite-facies ophiolitic belt (i.e., coherent fragments of oceanic crust detached at ca. 80 km depth in the Alpine subduction zone and exhumed along the subduction interface). Through a parametric study, we further investigate the role of various parameters, such as fluid circulation, oceanic crustal structure and rheology, on the formation of such large tectonic slices.
We conclude that the detachment of oceanic crust slices is largely promoted by fluid circulation along the subduction interface and by the subduction of a strong and originally discontinuous mafic crust.

  18. Integrated surface and groundwater modelling in the Thames Basin, UK using the Open Modelling Interface

    NASA Astrophysics Data System (ADS)

    Mackay, Jonathan; Abesser, Corinna; Hughes, Andrew; Jackson, Chris; Kingdon, Andrew; Mansour, Majdi; Pachocka, Magdalena; Wang, Lei; Williams, Ann

    2013-04-01

    The River Thames catchment is situated in the south-east of England. It covers approximately 16,000 km2 and is the most heavily populated river basin in the UK. It is also one of the driest and has experienced severe drought events in the recent past. With the onset of climate change and human exploitation of our environment, there are now serious concerns over the sustainability of water resources in this basin with 6 million m3 consumed every day for public water supply alone. Groundwater in the Thames basin is extremely important, providing 40% of water for public supply. The principal aquifer is the Chalk, a dual permeability limestone, which has been extensively studied to understand its hydraulic properties. The fractured Jurassic limestone in the upper catchment also forms an important aquifer, supporting baseflow downstream during periods of drought. These aquifers are unconnected other than through the River Thames and its tributaries, which provide two-thirds of London's drinking water. Therefore, to manage these water resources sustainably and to make robust projections into the future, surface and groundwater processes must be considered in combination. This necessitates the simulation of the feedbacks and complex interactions between different parts of the water cycle, and the development of integrated environmental models. The Open Modelling Interface (OpenMI) standard provides a method through which environmental models of varying complexity and structure can be linked, allowing them to run simultaneously and exchange data at each timestep. This architecture has allowed us to represent the surface and subsurface flow processes within the Thames basin at an appropriate level of complexity based on our understanding of particular hydrological processes and features. 
We have developed a hydrological model in OpenMI which integrates a process-driven, gridded finite difference groundwater model of the Chalk with a more simplistic, semi-distributed conceptual model of the Jurassic limestone. A distributed river routing model of the Thames has also been integrated to connect the surface and subsurface hydrological processes. This application demonstrates the potential benefits and issues associated with implementing this approach.

  19. MaxMod: a hidden Markov model based novel interface to MODELLER for improved prediction of protein 3D models.

    PubMed

    Parida, Bikram K; Panda, Prasanna K; Misra, Namrata; Mishra, Barada K

    2015-02-01

    Modeling the three-dimensional (3D) structures of proteins assumes great significance because of its manifold applications in biomolecular research. Toward this goal, we present MaxMod, a graphical user interface (GUI) of the MODELLER program that combines the profile hidden Markov model (profile HMM) method with the Clustal Omega program to significantly improve the selection of homologous templates and target-template alignment for construction of accurate 3D protein models. MaxMod distinguishes itself from other existing GUIs of MODELLER software by implementing effortless modeling of proteins using templates that bear modified residues. Additionally, it provides various features such as loop optimization, express modeling (a feature where a protein model can be generated directly from its sequence, without any further user intervention) and automatic update of the PDB database, thus enhancing the user-friendly control of computational tasks. We find that HMM-based MaxMod performs better than other modeling packages in terms of execution time and model quality. MaxMod is freely available as a downloadable standalone tool for academic and non-commercial purposes at http://www.immt.res.in/maxmod/. PMID:25636267

  20. Modeling strategic use of human computer interfaces with novel hidden Markov models

    PubMed Central

    Mariano, Laura J.; Poore, Joshua C.; Krum, David M.; Schwartz, Jana L.; Coskren, William D.; Jones, Eric M.

    2015-01-01

    Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM) for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game's functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic task pursuit. 
PMID:26191026

  2. An Agent-Based Interface to Terrestrial Ecological Forecasting

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Nemani, Ramakrishna; Pang, Wan-Lin; Votava, Petr; Etzioni, Oren

    2004-01-01

    This paper describes a flexible agent-based ecological forecasting system that combines multiple distributed data sources and models to provide near-real-time answers to questions about the state of the Earth system. We build on novel techniques in automated constraint-based planning and natural language interfaces to automatically generate data products based on descriptions of the desired data products.

  3. Visualization: A Mind-Machine Interface for Discovery.

    PubMed

    Nielsen, Cydney B

    2016-02-01

    Computation is critical for enabling us to process data volumes and model data complexities that are unthinkable by manual means. However, we are far from automating the sense-making process. Human knowledge and reasoning are critical for discovery. Visualization offers a powerful interface between mind and machine that should be further exploited in future genome analysis tools. PMID:26739384

  4. Fully automated segmentation of oncological PET volumes using a combined multiscale and statistical model

    SciTech Connect

    Montgomery, David W. G.; Amira, Abbes; Zaidi, Habib

    2007-02-15

    The widespread application of positron emission tomography (PET) in clinical oncology has driven this imaging technology into a number of new research and clinical arenas. Increasing numbers of patient scans have led to an urgent need for efficient data handling and the development of new image analysis techniques to aid clinicians in the diagnosis of disease and planning of treatment. Automatic quantitative assessment of metabolic PET data is attractive and will certainly revolutionize the practice of functional imaging since it can lower variability across institutions and may enhance the consistency of image interpretation independent of reader experience. In this paper, a novel automated system for the segmentation of oncological PET data aiming at providing an accurate quantitative analysis tool is proposed. The initial step involves expectation maximization (EM)-based mixture modeling using a k-means clustering procedure, which varies voxel order for initialization. A multiscale Markov model is then used to refine this segmentation by modeling spatial correlations between neighboring image voxels. An experimental study using an anthropomorphic thorax phantom was conducted for quantitative evaluation of the performance of the proposed segmentation algorithm. The comparison of actual tumor volumes to the volumes calculated using different segmentation methodologies including standard k-means, spatial domain Markov Random Field Model (MRFM), and the new multiscale MRFM proposed in this paper showed that the latter dramatically reduces the relative error to less than 8% for small lesions (7 mm radii) and less than 3.5% for larger lesions (9 mm radii). The analysis of the resulting segmentations of clinical oncologic PET data seems to confirm that this methodology shows promise and can successfully segment patient lesions. For problematic images, this technique enables the identification of tumors situated very close to nearby high normal physiologic uptake. 
The use of this technique to estimate tumor volumes for assessment of response to therapy and to delineate treatment volumes for the purpose of combined PET/CT-based radiation therapy treatment planning is also discussed.
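
    The first stage described above (k-means initialisation followed by expectation maximization for a mixture model) can be sketched in a 1-D toy form. The data are synthetic stand-ins for voxel intensities, the mixture has just two Gaussian components, and the multiscale Markov refinement stage is omitted entirely.

```python
import math
import random

random.seed(0)
data = ([random.gauss(1.0, 0.3) for _ in range(300)] +   # "background" voxels
        [random.gauss(4.0, 0.5) for _ in range(100)])    # "lesion" voxels

# k-means (k = 2) on intensities to initialise the mixture parameters.
c = [min(data), max(data)]
for _ in range(10):
    groups = ([x for x in data if abs(x - c[0]) <= abs(x - c[1])],
              [x for x in data if abs(x - c[0]) > abs(x - c[1])])
    c = [sum(g) / len(g) for g in groups]

mu, sigma, w = c[:], [0.5, 0.5], [0.5, 0.5]

def pdf(x, m, s):
    return math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

for _ in range(30):                                      # EM iterations
    # E-step: responsibility of component 1 for each point.
    r1 = [w[1] * pdf(x, mu[1], sigma[1]) /
          (w[0] * pdf(x, mu[0], sigma[0]) + w[1] * pdf(x, mu[1], sigma[1]))
          for x in data]
    # M-step: re-estimate weights, means, and standard deviations.
    n1 = sum(r1)
    n0 = len(data) - n1
    w = [n0 / len(data), n1 / len(data)]
    mu = [sum((1 - r) * x for r, x in zip(r1, data)) / n0,
          sum(r * x for r, x in zip(r1, data)) / n1]
    sigma = [max(1e-3, math.sqrt(sum((1 - r) * (x - mu[0]) ** 2
                                     for r, x in zip(r1, data)) / n0)),
             max(1e-3, math.sqrt(sum(r * (x - mu[1]) ** 2
                                     for r, x in zip(r1, data)) / n1))]
```

    In the full method the responsibilities play the role of soft voxel labels, which the multiscale Markov model then regularises using spatial correlations between neighbouring voxels.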

  6. The location of the thermodynamic atmosphere-ice interface in fully-coupled models

    NASA Astrophysics Data System (ADS)

    West, A. E.; McLaren, A. J.; Hewitt, H. T.; Best, M. J.

    2015-11-01

    In fully-coupled climate models, it is now normal to include a sea ice component with multiple layers, each with its own temperature. When coupling this component to an atmosphere model, it is more common for surface variables to be calculated in the sea ice component of the model, the equivalent of placing an interface immediately above the surface. This study uses a one-dimensional (1-D) version of the Los Alamos sea ice model (CICE) thermodynamic solver and the Met Office atmospheric surface exchange solver (JULES) to compare this method with that of allowing the surface variables to be calculated instead in the atmosphere, the equivalent of placing an interface immediately below the surface. The model is forced with a sensible heat flux derived from a sinusoidally varying near-surface air temperature. The two coupling methods are tested first with a 1-h coupling frequency, and then a 3-h coupling frequency, both commonly used. With an above-surface interface, the resulting surface temperature and flux cycles contain large phase and amplitude errors, as well as having a very "blocky" shape. The simulation of both quantities is greatly improved when the interface is instead placed within the top ice layer, allowing surface variables to be calculated on the shorter timescale of the atmosphere. There is also an unexpected slight improvement in the simulation of the top-layer ice temperature by the ice model. The study concludes with a discussion of the implications of these results for three-dimensional modelling. An appendix examines the stability of the alternative method of coupling under various physically realistic scenarios.

  7. Distribution automation applications of fiber optics

    NASA Technical Reports Server (NTRS)

    Kirkham, Harold; Johnston, A.; Friend, H.

    1989-01-01

    Motivations for interest and research in distribution automation are discussed. The communication requirements of distribution automation are examined and shown to exceed the capabilities of power line carrier, radio, and telephone systems. A fiber optic based communication system is described that is co-located with the distribution system and that could satisfy the data rate and reliability requirements. A cost comparison shows that it could be constructed at a cost that is similar to that of a power line carrier system. The requirements for fiber optic sensors for distribution automation are discussed. The design of a data link suitable for optically-powered electronic sensing is presented. Empirical results are given. A modeling technique that was used to understand the reflections of guided light from a variety of surfaces is described. An optical position-indicator design is discussed. Systems aspects of distribution automation are discussed, in particular, the lack of interface, communications, and data standards. The economics of distribution automation are examined.

  8. Contaminant analysis automation demonstration proposal

    SciTech Connect

    Dodson, M.G.; Schur, A.; Heubach, J.G.

    1993-10-01

The nation-wide and global need for environmental restoration and waste remediation (ER&WR) presents significant challenges to the analytical chemistry laboratory. The expansion of ER&WR programs forces an increase in the volume of samples processed and the demand for analysis data. To handle this expanding volume, productivity must be increased. However, the need for significantly increased productivity confronts a contaminant analysis process that is costly in time, labor, equipment, and safety protection. Laboratory automation offers a cost-effective approach to meeting current and future contaminant analytical laboratory needs. The proposed demonstration will present a proof-of-concept automated laboratory conducting varied sample preparations. This automated process also highlights a graphical user interface that provides supervisory control and monitoring of the automated process. The demonstration provides affirming answers to the following questions about laboratory automation: Can preparation of contaminants be successfully automated? Can a full-scale working proof-of-concept automated laboratory be developed that is capable of preparing contaminant and hazardous chemical samples? Can the automated processes be seamlessly integrated and controlled? Can the automated laboratory be customized through readily convertible design? Can automated sample preparation concepts be extended to the other phases of the sample analysis process? To fully reap the benefits of automation, four human factors areas should be studied and the outputs used to increase the efficiency of laboratory automation. These areas are: (1) laboratory configuration, (2) procedures, (3) receptacles and fixtures, and (4) the human-computer interface for the full automated system and complex laboratory information management systems.

  9. Third-generation electrokinetically pumped sheath-flow nanospray interface with improved stability and sensitivity for automated capillary zone electrophoresis-mass spectrometry analysis of complex proteome digests.

    PubMed

    Sun, Liangliang; Zhu, Guijie; Zhang, Zhenbin; Mou, Si; Dovichi, Norman J

    2015-05-01

We have reported a set of electrokinetically pumped sheath flow nanoelectrospray interfaces to couple capillary zone electrophoresis with mass spectrometry. A separation capillary is threaded through a cross into a glass emitter. A side arm provides fluidic contact with a sheath buffer reservoir that is connected to a power supply. The potential applied to the sheath buffer drives electro-osmosis in the emitter to pump the sheath fluid at nanoliter per minute rates. Our first-generation interface placed a flat-tipped capillary in the emitter. Sensitivity was inversely related to orifice size and to the distance from the capillary tip to the emitter orifice. A second-generation interface used a capillary with an etched tip that allowed the capillary exit to approach within a few hundred micrometers of the emitter orifice, resulting in a significant increase in sensitivity. In both the first- and second-generation interfaces, the emitter diameter was typically 8 μm; these narrow orifices were susceptible to plugging and tended to have limited lifetimes. We now report a third-generation interface that employs a larger-diameter emitter orifice with a very short distance between the capillary tip and the emitter orifice. This modified interface is much more robust and has a much longer lifetime than our previous designs, with no loss in sensitivity. We evaluated the third-generation interface for a 5000 min (127 runs, 3.5 days) repetitive analysis of bovine serum albumin digest using an uncoated capillary. We observed a 10% relative standard deviation in peak area, an average of 160,000 theoretical plates, and very low carry-over (much less than 1%). We employed a linear-polyacrylamide (LPA)-coated capillary for single-shot, bottom-up proteomic analysis of 300 ng of Xenopus laevis fertilized egg proteome digest and identified 1249 protein groups and 4038 peptides in a 110 min separation using an LTQ-Orbitrap Velos mass spectrometer; peak capacity was ~330. The proteome data set generated using this third-generation interface-based CZE-MS/MS is similar in size to that generated using a commercial ultraperformance liquid chromatographic analysis of the same sample with the same mass spectrometer and similar analysis time. PMID:25786131

  10. A DIFFUSE-INTERFACE APPROACH FOR MODELING TRANSPORT, DIFFUSION AND ADSORPTION/DESORPTION OF MATERIAL QUANTITIES ON A DEFORMABLE INTERFACE*

    PubMed Central

    Teigen, Knut Erik; Li, Xiangrong; Lowengrub, John; Wang, Fan; Voigt, Axel

    2010-01-01

    A method is presented to solve two-phase problems involving a material quantity on an interface. The interface can be advected, stretched, and change topology, and material can be adsorbed to or desorbed from it. The method is based on the use of a diffuse interface framework, which allows a simple implementation using standard finite-difference or finite-element techniques. Here, finite-difference methods on a block-structured adaptive grid are used, and the resulting equations are solved using a non-linear multigrid method. Interfacial flow with soluble surfactants is used as an example of the application of the method, and several test cases are presented demonstrating its accuracy and convergence. PMID:21373370

  11. Thermal modeling of roll and strip interface in rolling processes. Part 1: Review

    SciTech Connect

    Tseng, A.A.

    1999-02-12

    In rolling, the roll is used as a tool to deform the strip, resulting in rolling contact between the roll and strip and creating thermal resistance along the interface. A wide range of modeling approaches or correlations has been developed to study the interface heat transfer phenomena during rolling. In this paper the rationale and formulation of the most frequently adopted approaches in modeling of the roll-strip contact region are reviewed. The associated advantages and disadvantages of three major approaches are specifically discussed. The parameters involved in each approach are identified. The present paper is intended to provide the necessary information to assist investigations in understanding the intricacy of the contact problem encountered and the heat transfer parameters involved. In an accompanying paper (Part 2), an illustration is given to show the appropriate way to model the contact region in the metal rolling process.
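As a minimal illustration of the interface problem the review surveys, the imperfect roll-strip contact can be lumped into a contact conductance h_c in series with the conduction resistances of the strip and the roll surface layer. All numbers below are illustrative, not taken from the paper.

```python
def interface_heat_flux(t_strip, t_roll, k_strip, l_strip, k_roll, l_roll, h_c):
    """Steady 1-D heat flux (W/m^2) from strip to roll, treating the
    imperfect contact as a conductance h_c (W/m^2/K) in series with the
    conduction resistances of the strip and roll surface layers."""
    r_total = l_strip / k_strip + 1.0 / h_c + l_roll / k_roll  # m^2 K / W
    return (t_strip - t_roll) / r_total

# Illustrative values only: hot strip at 900 C, roll surface at 100 C,
# good versus poor interface contact.
q_good = interface_heat_flux(900.0, 100.0, 30.0, 0.01, 40.0, 0.05, h_c=50e3)
q_poor = interface_heat_flux(900.0, 100.0, 30.0, 0.01, 40.0, 0.05, h_c=5e3)
```

The series-resistance form makes the modeling choices explicit: the smaller h_c, the more the interface itself controls the heat extracted from the strip.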

  12. Scaling of fluctuations in one-dimensional interface and hopping models

    NASA Astrophysics Data System (ADS)

    Binder, P.-M.; Paczuski, M.; Barma, Mustansir

    1994-02-01

We study time-dependent correlation functions in a family of one-dimensional biased stochastic lattice-gas models in which particles can move up to k lattice spacings. In terms of equivalent interface models, the family interpolates between the low-noise Ising (k=1) and Toom (k=∞) interfaces on a square lattice. Since the continuum description of density (or height) fluctuations in these models involves at most (k+1)th-order terms in a gradient expansion, we can test specific renormalization-group predictions using Monte Carlo methods to probe scaling behavior. In particular we confirm the existence of multiplicative logarithms in the temporal behavior of mean-squared height fluctuations [~t^{1/2}(ln t)^{1/4}], induced by a marginal cubic gradient term. Analogs of redundant operators, familiar in the context of equilibrium systems, also appear to occur in these nonequilibrium systems.
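A quick way to probe this kind of scaling numerically is a Monte Carlo growth simulation. The sketch below uses a generic restricted solid-on-solid deposition rule (a stand-in for the paper's k-family of models, not its exact dynamics) and measures the squared interface width, which grows as the surface roughens.

```python
import random

def simulate(n_sites=256, attempts=50000, seed=1):
    """Restricted solid-on-solid growth: deposit on a randomly chosen
    interior site only if it is not higher than either neighbour, then
    return the squared interface width W^2 (height variance)."""
    rng = random.Random(seed)
    h = [0] * n_sites
    for _ in range(attempts):
        i = rng.randrange(1, n_sites - 1)
        if h[i] <= h[i - 1] and h[i] <= h[i + 1]:
            h[i] += 1          # fill a local minimum
    mean = sum(h) / n_sites
    return sum((x - mean) ** 2 for x in h) / n_sites

w2_early = simulate(attempts=500)
w2_late = simulate(attempts=50000)   # width grows with time
```

Measuring W^2 at a sequence of times and fitting the growth law is exactly the kind of numerical probe the study uses to detect the multiplicative logarithmic corrections.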

  13. Modeling the Effect of Interface Wear on Fatigue Hysteresis Behavior of Carbon Fiber-Reinforced Ceramic-Matrix Composites

    NASA Astrophysics Data System (ADS)

    Longbiao, Li

    2015-12-01

An analytical method has been developed to investigate the effect of interface wear on fatigue hysteresis behavior in carbon fiber-reinforced ceramic-matrix composites (CMCs). The damage mechanisms, i.e., matrix multicracking, fiber/matrix interface debonding and interface wear, fiber fracture, slip and pull-out, have been considered. The statistical matrix multicracking model and the fracture mechanics interface debonding criterion were used to determine the matrix crack spacing and interface debonded length. Upon first loading to the fatigue peak stress and subsequent cyclic loading, the fiber failure probabilities and fracture locations were determined by combining the interface wear model and the fiber statistical failure model, based on the assumption that the loads carried by broken and intact fibers satisfy the global load sharing criterion. The effects of matrix properties, i.e., matrix cracking characteristic strength and matrix Weibull modulus, interface properties, i.e., interface shear stress and interface debonded energy, fiber properties, i.e., fiber Weibull modulus and fiber characteristic strength, and cycle number on fiber failure, hysteresis loops and interface slip have been investigated. The hysteresis loops under fatigue loading predicted by the present analytical method were in good agreement with experimental data.

  14. Liquid-vapor interface of water-methanol mixture. II. A simple lattice-gas model

    NASA Astrophysics Data System (ADS)

    Matsumoto, Mitsuhiro; Mizukuchi, Hiroshi; Kataoka, Yosuke

    1993-01-01

A simple lattice-gas model with a mean-field approximation is presented to investigate qualitative features of the liquid-vapor interface of water-methanol mixtures. The hydrophobicity of methanol molecules is incorporated by introducing anisotropic interactions. A rigorous framework to treat such anisotropy in a lattice-gas mixture model is described. The model is mathematically equivalent to an interfacial system of a dilute antiferromagnetic Ising spin system. Results for density profiles, orientational ordering near the surface, and surface excess thermodynamic quantities are compared with results of computer simulation based on a more realistic model.

  15. Thermo-Mechanical Modeling of Foil-Supported Carbon Nanotube Array Interface Materials

    NASA Astrophysics Data System (ADS)

    Pour Shahid Saeed Abadi, Parisa; Cola, Baratunde; Graham, Samuel

    2010-03-01

A thin metal foil with vertically aligned carbon nanotube (CNT) arrays synthesized on both sides is a new class of thermal interface material that has demonstrated thermal resistances less than 0.1 cm^2 K/W under moderate pressures. Such interface materials achieve these low resistances due to their unique combination of high thermal conductivity and high conformability to surface roughness. For such structures, the contact resistances between the CNT arrays and the adjacent surfaces are the major constituents of the total resistance. Here we integrate a recently developed contact mechanics model for CNT arrays with a finite element code that captures the nonlinear mechanical behavior of the interface material and the effects of interface topography on the thermal performance. The developed model elucidates the relative effects of metal foil and CNT array deformation on the compliance of the composite structure. The results support previous experimental observations that the combination of foil and CNT array deformation significantly enhances interfacial contact and thermal conductance.

  16. Integration Of Heat Transfer Coefficient In Glass Forming Modeling With Special Interface Element

    NASA Astrophysics Data System (ADS)

    Moreau, P.; César de Sá, J.; Grégoire, S.; Lochegnies, D.

    2007-05-01

Numerical modeling of glass forming processes requires accurate knowledge of the heat exchange between the glass and the forming tools. A laboratory test is developed to determine the evolution of the heat transfer coefficient under different glass/mould contact conditions (contact pressure, temperature, lubrication...). In this paper, trials are performed to determine heat transfer coefficient evolutions in experimental conditions close to those of the industrial blow-and-blow process. In parallel with this work, a special interface element is implemented in a commercial Finite Element code to handle heat transfer between glass and mould for non-matching meshes and evolving contact. This special interface element, implemented using user subroutines, makes it possible to introduce the measured heat transfer coefficient evolutions into the numerical models at the glass/mould interface as a function of the local temperatures, contact pressures, contact time and type of lubrication. The blow-and-blow forming simulation of a perfume bottle is finally performed to assess the performance of the special interface element.
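A minimal sketch of what such an interface element computes, assuming a hypothetical tabulated h(p) (the table values are invented for illustration, not the paper's measurements): interpolate the heat transfer coefficient from the measured table, then apply the flux q = h (T_glass - T_mould) across the interface.

```python
import bisect

# Hypothetical measured table: contact pressure (Pa) versus heat transfer
# coefficient h (W/m^2/K). Values invented for illustration.
P_TAB = [0.0, 1e5, 5e5, 1e6]
H_TAB = [300.0, 900.0, 2200.0, 3000.0]

def h_contact(p):
    """Piecewise-linear interpolation of h(p), clamped at the table ends."""
    if p <= P_TAB[0]:
        return H_TAB[0]
    if p >= P_TAB[-1]:
        return H_TAB[-1]
    j = bisect.bisect_right(P_TAB, p)
    w = (p - P_TAB[j - 1]) / (P_TAB[j] - P_TAB[j - 1])
    return H_TAB[j - 1] * (1.0 - w) + H_TAB[j] * w

def interface_flux(t_glass, t_mould, p):
    """Heat flux (W/m^2) exchanged across the glass/mould interface."""
    return h_contact(p) * (t_glass - t_mould)

q = interface_flux(t_glass=900.0, t_mould=400.0, p=3e5)
```

In the actual element, the same lookup would also depend on local temperature, contact time and lubrication, and the flux would be assembled into the residuals of both the glass and mould meshes.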

  17. Integration Of Heat Transfer Coefficient In Glass Forming Modeling With Special Interface Element

    SciTech Connect

    Moreau, P.; Gregoire, S.; Lochegnies, D.; Cesar de Sa, J.

    2007-05-17

Numerical modeling of glass forming processes requires accurate knowledge of the heat exchange between the glass and the forming tools. A laboratory test is developed to determine the evolution of the heat transfer coefficient under different glass/mould contact conditions (contact pressure, temperature, lubrication...). In this paper, trials are performed to determine heat transfer coefficient evolutions in experimental conditions close to those of the industrial blow-and-blow process. In parallel with this work, a special interface element is implemented in a commercial Finite Element code to handle heat transfer between glass and mould for non-matching meshes and evolving contact. This special interface element, implemented using user subroutines, makes it possible to introduce the measured heat transfer coefficient evolutions into the numerical models at the glass/mould interface as a function of the local temperatures, contact pressures, contact time and type of lubrication. The blow-and-blow forming simulation of a perfume bottle is finally performed to assess the performance of the special interface element.

  18. Interfacing MATLAB and Python Optimizers to Black-Box Environmental Simulation Models

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Leung, K.; Tolson, B.

    2009-12-01

    A common approach for utilizing environmental models in a management or policy-analysis context is to incorporate them into a simulation-optimization framework - where an underlying process-based environmental model is linked with an optimization search algorithm. The optimization search algorithm iteratively adjusts various model inputs (i.e. parameters or design variables) in order to minimize an application-specific objective function computed on the basis of model outputs (i.e. response variables). Numerous optimization algorithms have been applied to the simulation-optimization of environmental systems and this research investigated the use of optimization libraries and toolboxes that are readily available in MATLAB and Python - two popular high-level programming languages. Inspired by model-independent calibration codes (e.g. PEST and UCODE), a small piece of interface software (known as PIGEON) was developed. PIGEON allows users to interface Python and MATLAB optimizers with arbitrary black-box environmental models without writing any additional interface code. An initial set of benchmark tests (involving more than 20 MATLAB and Python optimization algorithms) were performed to validate the interface software - results highlight the need to carefully consider such issues as numerical precision in output files and enforcement (or not) of parameter limits. Additional benchmark testing considered the problem of fitting isotherm expressions to laboratory data - with an emphasis on dual-mode expressions combining non-linear isotherms with a linear partitioning component. With respect to the selected isotherm fitting problems, derivative-free search algorithms significantly outperformed gradient-based algorithms. Attempts to improve gradient-based performance, via parameter tuning and also via several alternative multi-start approaches, were largely unsuccessful.
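The PIGEON idea of a model-independent interface can be sketched as template-based input writing plus output scanning. The @NAME@ marker syntax and file formats below are invented for illustration and are not PIGEON's actual conventions; the black-box executable is replaced here by an in-process stub.

```python
import re

def fill_template(template_text, params):
    """Substitute @NAME@ markers in a model input template (this marker
    convention is assumed for the sketch, not PIGEON's actual syntax)."""
    return re.sub(r"@(\w+)@", lambda m: f"{params[m.group(1)]:.6g}", template_text)

def read_response(output_text, name):
    """Scan model output for a line of the form '<name> <value>'."""
    for line in output_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0] == name:
            return float(parts[1])
    raise KeyError(name)

def run_black_box(input_text):
    """In-process stand-in for launching the external executable: parses
    two parameters and reports a sum-of-squares misfit as its 'output'."""
    vals = dict(re.findall(r"(\w+)\s*=\s*([-+.\deE]+)", input_text))
    misfit = (float(vals["K"]) - 2.0) ** 2 + (float(vals["S"]) + 1.0) ** 2
    return f"objective {misfit}\n"

template = "K = @K@\nS = @S@\n"
objective = read_response(
    run_black_box(fill_template(template, {"K": 2.0, "S": -1.0})), "objective")
```

An optimizer (a MATLAB toolbox routine or a Python library such as scipy.optimize) would then minimize a function performing exactly this fill-run-read cycle per evaluation; note how the text round-trip motivates the paper's caution about numerical precision in output files.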

  19. A hybrid geometric-statistical deformable model for automated 3-D segmentation in brain MRI.

    PubMed

    Huang, Albert; Abugharbieh, Rafeef; Tam, Roger

    2009-07-01

We present a novel 3-D deformable model-based approach for accurate, robust, and automated tissue segmentation of brain MRI data of single as well as multiple magnetic resonance sequences. The main contribution of this study is that we employ an edge-based geodesic active contour for the segmentation task by integrating both image edge geometry and voxel statistical homogeneity into a novel hybrid geometric-statistical feature to regularize contour convergence and extract complex anatomical structures. We validate the accuracy of the segmentation results on simulated brain MRI scans of both single T1-weighted and multiple T1/T2/PD-weighted sequences. We also demonstrate the robustness of the proposed method when applied to clinical brain MRI scans. When compared to a current state-of-the-art region-based level-set segmentation formulation, our white matter and gray matter segmentation resulted in significantly higher accuracy levels with a mean improvement in Dice similarity indexes of 8.55% (p < 0.0001) and 10.18% (p < 0.0001), respectively. PMID:19336280
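The Dice similarity index used in this validation is straightforward to compute; a minimal sketch for binary label masks:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks, given as flat
    sequences of 0/1: 2|A∩B| / (|A| + |B|)."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

# Toy automated vs reference segmentations (flattened voxel labels)
seg_auto = [0, 1, 1, 1, 0, 0, 1, 0]
seg_ref  = [0, 1, 1, 0, 0, 1, 1, 0]
score = dice(seg_auto, seg_ref)
```

A mean improvement of 8-10% in this index, as reported above, is substantial because Dice saturates quickly as two masks approach full overlap.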

  20. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
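Step (1), trace feature point grouping, can be sketched as transitive proximity clustering of the detected feature points. The union-find below is a simplified stand-in; the actual method's grouping criteria (normal-tensor feature classes, mesh connectivity) are more elaborate.

```python
def group_feature_points(points, radius):
    """Group 3-D feature points: two points belong to the same candidate
    trace group if they lie within `radius` of each other, transitively.
    A simple union-find over all point pairs."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    r2 = radius * radius
    for i, (xi, yi, zi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj, zj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2 <= r2:
                parent[find(i)] = find(j)   # merge the two groups

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

pts = [(0, 0, 0), (0.4, 0, 0), (0.8, 0, 0), (5, 5, 0), (5.3, 5, 0)]
clusters = group_feature_points(pts, radius=0.5)
```

The distance threshold plays the same role as the paper's recommendation that it scale with the mean triangular mesh element size.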

  1. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that are consistent with existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost function that measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model.
Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of biological systems given behavioral specifications written in a formal mathematical logic. Our algorithm uses Bayesian model checking, sequential hypothesis testing, and stochastic optimization to automatically synthesize parameters of probabilistic biological models. PMID:26679759
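The overall loop, estimating the probability that a parameterized stochastic model satisfies a behavioral specification and then searching for parameters that push that probability above a threshold, can be sketched with a toy one-parameter model. The model, specification window, and threshold below are invented for illustration; the paper's algorithm uses Bayesian model checking and sequential hypothesis testing rather than this plain fixed-sample Monte Carlo.

```python
import random

def satisfies_spec(dose, rng):
    """Toy stochastic 'model': a noisy dose response; the behavioral
    specification is that the response lands in a target window."""
    response = 0.8 * dose + rng.gauss(0.0, 0.5)
    return 4.0 <= response <= 8.0

def estimated_probability(dose, n=2000, seed=0):
    """Plain Monte Carlo estimate of the probability the spec holds."""
    rng = random.Random(seed)
    return sum(satisfies_spec(dose, rng) for _ in range(n)) / n

# Search for the smallest candidate dose whose estimated probability of
# satisfying the specification exceeds a threshold.
candidates = [0.5 * i for i in range(21)]          # doses 0.0 .. 10.0
dose_found = next(d for d in candidates if estimated_probability(d) >= 0.9)
```

Replacing the fixed-sample estimator with a sequential test lets the checker stop early when the probability is clearly above or below the threshold, which is the efficiency the paper's approach exploits.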

  2. Design of a photoionization detector for high-performance liquid chromatography using an automated liquid-to-vapor phase interface and application to phenobarbital in an animal feed and to amantadine.

    PubMed

    Schmermund, J T; Locke, D C

    1990-05-01

    An automated liquid-to-vapor phase interface system forms the basis for a new high-performance liquid chromatography (HPLC)-photoionization detection (PID) system. The system incorporates a six-valve interface enabling peak trapping, solvent switching and thermal desorption of the solute of interest into a vapor phase PID. For reversed-phase HPLC, the eluted solute peak is isolated on a Tenax trap after dilution of the effluent with water; the water is then evaporated, following which the trapped solute is flash-evaporated into the PID system. For normal-phase HPLC, the column effluent is diluted with hexane, the solute peak is concentrated on a short column packed with a propyl-amino/cyano bonded phase and the solvent is evaporated. The solute is then eluted with water onto the Tenax trap, and the above procedure for reversed-phase HPLC followed. All operations are controlled with a microcomputer. The advantages of the new detector system include completely automated operation, fast sample preparation, high sensitivity, and inherent selectivity. The system was applied to phenobarbital, which was extracted with acetonitrile from spiked laboratory animal feed, and to amantadine. The phenobarbital assay used a normal-phase separation with hexane-methyl tert.-butyl ether-methanol eluent. The manual sample preparation time was 5 min and the limit of detection was 2 ng phenobarbital injected; a conventional HPLC assay with UV detection required a longer sample preparation time and had a detection limit of 700 ng. Amantadine was assayed using a reversed-phase HPLC system with a water-methanol-triethylamine-orthophosphoric acid mobile phase. The detection limit was 25 ng injected. PMID:2355063

  3. Similarities of coherent tunneling spectroscopy of ferromagnet/ferromagnet junction within two interface models: Delta potential and finite width model

    NASA Astrophysics Data System (ADS)

    Pasanai, K.

    2016-03-01

The tunneling conductance spectra of a ferromagnet/ferromagnet junction were theoretically studied using a scattering approach with two models of the interface in a one-dimensional system: a delta potential and a finite-width model. In the first model, the interface between the materials was characterized by a delta potential that has infinite height but no width. In the other model, the interface was modeled by an insulator with a finite thickness and potential barrier height. It was found that the potential strength in the delta potential model suppressed the conductance spectra, as expected. In the finite-width model, the insulating layer can give rise to oscillatory behavior when the layer is thick. This oscillation occurs in the energy region above the potential barrier. Moreover, the conductance spectra were suppressed as the insulating thickness was varied, depending also on the height of the potential barrier. When the results from the two models were compared, they gave the same result when the insulating layer was thin and the potential barrier was slightly larger than the energy of the bottom of the minority band of the ferromagnet.
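The two interface models can be compared directly in the spinless textbook limit (ignoring the exchange splitting of the ferromagnets, so this is only a sketch of the barrier physics, in units with hbar = m = 1): a delta barrier of strength alpha versus a rectangular barrier of height v0 and width a. A thin, tall rectangular barrier with v0*a = alpha approaches the delta-barrier transmission, mirroring the thin-layer agreement reported above.

```python
import math

def t_delta(e, alpha):
    """Transmission through a delta barrier of strength alpha (hbar = m = 1)."""
    return 1.0 / (1.0 + alpha ** 2 / (2.0 * e))

def t_rect(e, v0, a):
    """Transmission through a rectangular barrier of height v0 and width a."""
    if e > v0:
        k2 = math.sqrt(2.0 * (e - v0))          # propagating inside the barrier
        return 1.0 / (1.0 + v0 ** 2 * math.sin(k2 * a) ** 2
                      / (4.0 * e * (e - v0)))
    kappa = math.sqrt(2.0 * (v0 - e))           # evanescent inside the barrier
    return 1.0 / (1.0 + v0 ** 2 * math.sinh(kappa * a) ** 2
                  / (4.0 * e * (v0 - e)))

thin = t_rect(e=1.0, v0=100.0, a=0.01)   # thin, tall barrier: v0 * a = 1
delta = t_delta(e=1.0, alpha=1.0)
```

The sin^2 term in the over-barrier branch is the source of the oscillatory conductance the paper finds for thick insulating layers.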

  4. Mixed-level optical-system simulation incorporating component-level modeling of interface elements

    NASA Astrophysics Data System (ADS)

    Mena, Pablo V.; Stone, Bryan; Heller, Evan; Herrmann, Dan; Ghillino, Enrico; Scarmozzino, Rob

    2014-03-01

While system-level simulation can allow designers to assess optical system performance via measures such as signal waveforms, spectra, eye diagrams, and BER calculations, component-level modeling can provide a more accurate description of coupling into and out of individual devices, as well as their detailed signal propagation characteristics. In particular, the system-level simulation of interface components used in optical systems, including splitters, combiners, grating couplers, waveguides, spot-size converters, and lens assemblies, can benefit from more detailed component-level modeling. Depending upon the nature of the device and the scale of the problem, simulation of optical transmission through these components can be carried out using either electromagnetic device-level simulation, such as the beam-propagation method, or ray-based approaches. In either case, system-level simulation can interface to such component-level modeling via a suitable exchange of optical signal data. This paper presents the use of a mixed-level simulation flow in which both electromagnetic device-level and ray-based tools are integrated with a system-level simulation environment in order to model the use of various interface components in optical systems for a range of purposes, including, for example, coupling to and from optical transmission media such as single- and multimode optical fiber. This approach enables case studies on the impact of physical and geometric component variations on system performance, and the sensitivity of system behavior to misalignment between components.

  5. An SPH model for multiphase flows with complex interfaces and large density differences

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Zong, Z.; Liu, M. B.; Zou, L.; Li, H. T.; Shu, C.

    2015-02-01

In this paper, an improved SPH model for multiphase flows with complex interfaces and large density differences is developed. The multiphase SPH model is based on the assumption of pressure continuity over the interfaces and avoids directly using the information of neighboring particles' densities or masses in solving governing equations. In order to improve computational accuracy and to obtain smooth pressure fields, a corrected density re-initialization is applied. A coupled dynamic solid boundary treatment (SBT) is implemented both to reduce numerical oscillations and to prevent unphysical particle penetration in the boundary area. The density correction and coupled dynamic SBT algorithms are modified to adapt to the density discontinuity on fluid interfaces in multiphase simulation. A cut-off value of the particle density is set to avoid negative pressure, which can lead to severe numerical difficulties and may even terminate the simulations. Three representative numerical examples, including a Rayleigh-Taylor instability test, a non-Boussinesq problem and a dam breaking simulation, are presented and compared with analytical results or experimental data. It is demonstrated that the present SPH model is capable of modeling complex multiphase flows with large interfacial deformations and density ratios.
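The density cut-off mentioned above can be illustrated with the Tait equation of state commonly used in weakly-compressible SPH (the paper's exact equation of state may differ; parameter values are illustrative): clamping the particle density at its reference value guarantees the returned pressure is never negative.

```python
def tait_pressure(rho, rho0=1000.0, c0=20.0, gamma=7.0, cutoff=True):
    """Tait-form SPH equation of state. With the cutoff enabled, densities
    below the reference value rho0 are clamped, so the returned pressure
    can never be negative."""
    b = rho0 * c0 * c0 / gamma          # stiffness from the sound speed c0
    if cutoff and rho < rho0:
        rho = rho0
    return b * ((rho / rho0) ** gamma - 1.0)

p_compressed = tait_pressure(1010.0)                # slight compression
p_stretched  = tait_pressure(990.0)                 # clamped to zero
p_raw        = tait_pressure(990.0, cutoff=False)   # would be negative
```

Negative pressures from slightly under-dense particles are a classic source of tensile instability; the clamp trades a little physical fidelity for robustness.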

  6. Multiscale Modeling of Intergranular Fracture in Aluminum: Constitutive Relation For Interface Debonding

    NASA Technical Reports Server (NTRS)

    Yamakov, V.; Saether, E.; Glaessgen, E. H.

    2008-01-01

    Intergranular fracture is a dominant mode of failure in ultrafine grained materials. In the present study, the atomistic mechanisms of grain-boundary debonding during intergranular fracture in aluminum are modeled using a coupled molecular dynamics finite element simulation. Using a statistical mechanics approach, a cohesive-zone law in the form of a traction-displacement constitutive relationship, characterizing the load transfer across the plane of a growing edge crack, is extracted from atomistic simulations and then recast in a form suitable for inclusion within a continuum finite element model. The cohesive-zone law derived by the presented technique is free of finite size effects and is statistically representative for describing the interfacial debonding of a grain boundary (GB) interface examined at atomic length scales. By incorporating the cohesive-zone law in cohesive-zone finite elements, the debonding of a GB interface can be simulated in a coupled continuum-atomistic model, in which a crack starts in the continuum environment, smoothly penetrates the continuum-atomistic interface, and continues its propagation in the atomistic environment. This study is a step towards relating atomistically derived decohesion laws to macroscopic predictions of fracture and constructing multiscale models for nanocrystalline and ultrafine grained materials.
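A cohesive-zone law of the traction-displacement type can be represented, for illustration, by the common bilinear form. The paper extracts its law from atomistic statistics; the shape and parameter values below are generic placeholders, not the derived grain-boundary law.

```python
def cohesive_traction(delta, delta0=0.5e-9, delta_f=4.0e-9, t_max=3.0e9):
    """Bilinear traction-displacement (cohesive-zone) law: traction rises
    linearly to t_max at opening delta0, then softens linearly to zero at
    the failure opening delta_f (openings in m, traction in Pa)."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return t_max * delta / delta0              # elastic loading branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                     # fully debonded

peak = cohesive_traction(0.5e-9)
softening = cohesive_traction(2.25e-9)
```

In a cohesive finite element, this function supplies the interface traction at each integration point; the area under the curve is the work of separation that the atomistic extraction calibrates.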

  7. The DaveMLTranslator: An Interface for DAVE-ML Aerodynamic Models

    NASA Technical Reports Server (NTRS)

    Hill, Melissa A.; Jackson, E. Bruce

    2007-01-01

    It can take weeks or months to incorporate a new aerodynamic model into a vehicle simulation and validate the performance of the model. The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) has been proposed as a means to reduce the time required to accomplish this task by defining a standard format for typical components of a flight dynamic model. The purpose of this paper is to describe an object-oriented C++ implementation of a class that interfaces a vehicle subsystem model specified in DAVE-ML and a vehicle simulation. Using the DaveMLTranslator class, aerodynamic or other subsystem models can be automatically imported and verified at run-time, significantly reducing the elapsed time between receipt of a DAVE-ML model and its integration into a simulation environment. The translator performs variable initializations, data table lookups, and mathematical calculations for the aerodynamic build-up, and executes any embedded static check-cases for verification. The implementation is efficient, enabling real-time execution. Simple interface code for the model inputs and outputs is the only requirement to integrate the DaveMLTranslator as a vehicle aerodynamic model. The translator makes use of existing table-lookup utilities from the Langley Standard Real-Time Simulation in C++ (LaSRS++). The design and operation of the translator class is described and comparisons with existing, conventional, C++ aerodynamic models of the same vehicle are given.
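The core operations the translator performs, table lookups with clamped linear interpolation and verification of embedded static check-cases, can be sketched as follows. The table values and check-cases are invented for illustration, and real DAVE-ML models also support higher-dimensional tables and composed functions.

```python
def interp1(xs, ys, x):
    """Clamped 1-D linear table lookup, the basic building block of an
    aerodynamic build-up (xs must be strictly increasing)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    j = 1
    while xs[j] < x:
        j += 1
    w = (x - xs[j - 1]) / (xs[j] - xs[j - 1])
    return ys[j - 1] * (1.0 - w) + ys[j] * w

# Hypothetical model fragment: lift coefficient vs angle of attack (deg),
# plus embedded static check-cases verified at load time, in the spirit of
# DAVE-ML's check-case data.
ALPHA = [-4.0, 0.0, 4.0, 8.0, 12.0]
CL    = [-0.2, 0.1, 0.4, 0.7, 0.9]
CHECK_CASES = [(-4.0, -0.2), (2.0, 0.25), (10.0, 0.8)]

def verify_check_cases(tol=1e-9):
    """Run every embedded check-case through the lookup at load time."""
    return all(abs(interp1(ALPHA, CL, a) - expect) <= tol
               for a, expect in CHECK_CASES)

ok = verify_check_cases()
```

Verifying check-cases at run-time is what lets a DAVE-ML model be imported and trusted without weeks of manual validation, the gain described above.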

  8. Microstructural modeling of cold fatigue/creep in near-alpha titanium alloys using the cellular automaton model

    NASA Astrophysics Data System (ADS)

    Boutana, Mohammed Nabil

    The in-service properties of titanium alloys are extremely dependent on certain aspects of the microstructures developed during processing. These microstructures can be strongly heterogeneous with respect to crystallographic orientation and spatial distribution, and their influence on the material's behavior and its early damage are questions currently being raised. The present doctoral project seeks to answer this question and also to present tangible solutions for the safe use of these alloys. A new model, called a cellular automaton, has been developed to simulate the mechanical behavior of titanium alloys under cold fatigue/creep. These models have provided a better understanding of the correlation between microstructure and the mechanical behavior of the material, and in particular a detailed analysis of its local behavior. Keywords: cellular automaton, fatigue/creep, titanium alloy, Eshelby inclusion, modeling

  9. Designing geo-spatial interfaces to scale process models: the GeoWEPP approach

    NASA Astrophysics Data System (ADS)

    Renschler, Chris S.

    2003-04-01

    Practical decision making in spatially distributed environmental assessment and management is increasingly based on environmental process models linked to geographical information systems. Powerful personal computers and Internet-accessible assessment tools are providing much greater public access to, and use of, environmental models and geo-spatial data. However, traditional process models, such as the water erosion prediction project (WEPP), were not typically developed with a flexible graphical user interface (GUI) for applications across a wide range of spatial and temporal scales, utilizing readily available geo-spatial data of highly variable precision and accuracy, and communicating with a diverse spectrum of users with different levels of expertise. As the development of the geo-spatial interface for WEPP (GeoWEPP) demonstrates, the GUI plays a key role in facilitating effective communication between the tool developer and user about data and model scales. The GeoWEPP approach illustrates that it is critical to develop a scientific and functional framework for the design, implementation, and use of such geo-spatial model assessment tools. The way that GeoWEPP was developed and implemented suggests a framework and scaling theory leading to a practical approach for developing geo-spatial interfaces for process models. GeoWEPP accounts for fundamental water erosion processes, model, and user needs, but most importantly it also matches realistic data availability and environmental settings by enabling even non-GIS-literate users to assemble the available geo-spatial data quickly to start soil and water conservation planning. In general, it is potential users' spatial and temporal scales of interest, and the scales of readily available data, that should drive model design or selection, as opposed to using or designing the most sophisticated process model as the starting point and then determining data needs and result scales.

  10. Mesoscale Fluctuation-Relaxation Model for Velocity Slip and Temperature Jump on Fluid-Solid Interface

    NASA Astrophysics Data System (ADS)

    Czerwinska, Justyna

    2006-11-01

    Many micro- and bio-engineering applications require correct prediction of fluid flow in complex geometries, and the fluid-solid contact interface prominently influences flow behavior. Velocity slip and temperature jump at the fluid-solid boundary result from non-equilibrium intermolecular energy transport. For gases this phenomenon is well described by the Maxwell-Smoluchowski equations. For liquids, simulations are at present conducted with hybrid approaches (continuum with molecular dynamics, or the lattice Boltzmann method with molecular dynamics). The Fluctuation-Relaxation model presented here, a consequence of the non-equilibrium fluctuation theorem, provides a coarse-grained relation for intermolecular solid-fluid energy transport. It is implemented and verified using Voronoi Particle Dynamics. Velocity slip and temperature jump then emerge from the relaxation to equilibrium of the fluid particles near the boundary. The model correctly reproduces other theoretical and computational results, and it extends the understanding of fluid-solid interface behavior at the mesoscale.
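    The gas-phase boundary conditions that the abstract contrasts with can be made concrete. Below is a minimal sketch of the first-order Maxwell slip and Smoluchowski temperature-jump relations in their standard textbook forms (not the paper's fluctuation-relaxation model); the accommodation coefficients, mean free path, and wall gradients are illustrative numbers only:

```python
def maxwell_slip(sigma_v, mfp, dudy):
    # First-order Maxwell slip velocity at the wall:
    # u_s = ((2 - sigma_v) / sigma_v) * lambda * du/dy
    return (2.0 - sigma_v) / sigma_v * mfp * dudy

def smoluchowski_jump(sigma_t, gamma, Pr, mfp, dTdy):
    # Smoluchowski temperature jump at the wall:
    # dT = ((2 - sigma_t) / sigma_t) * (2*gamma / (gamma + 1)) * (lambda / Pr) * dT/dy
    return (2.0 - sigma_t) / sigma_t * (2.0 * gamma / (gamma + 1.0)) * mfp / Pr * dTdy

# Air-like numbers (illustrative): full accommodation, 68 nm mean free path.
u_slip = maxwell_slip(1.0, 68e-9, 1.0e6)          # shear rate 1e6 1/s -> 0.068 m/s
dT = smoluchowski_jump(1.0, 1.4, 0.71, 68e-9, 1.0e5)
print(u_slip, dT)
```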

  11. Modeling of ultrasound transmission through a solid-liquid interface comprising a network of gas pockets

    SciTech Connect

    Paumel, K.; Baque, F.; Moysan, J.; Corneloup, G.; Chatain, D.

    2011-08-15

    Ultrasonic inspection of sodium-cooled fast reactor requires a good acoustic coupling between the transducer and the liquid sodium. Ultrasonic transmission through a solid surface in contact with liquid sodium can be complex due to the presence of microscopic gas pockets entrapped by the surface roughness. Experiments are run using substrates with controlled roughness consisting of a network of holes and a modeling approach is then developed. In this model, a gas pocket stiffness at a partially solid-liquid interface is defined. This stiffness is then used to calculate the transmission coefficient of ultrasound at the entire interface. The gas pocket stiffness has a static, as well as an inertial component, which depends on the ultrasonic frequency and the radiative mass.
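    How an interfacial stiffness reduces transmission can be illustrated with the classic quasi-static spring model: for two identical half-spaces of acoustic impedance Z joined by a stiffness K per unit area, |T| = 1/sqrt(1 + (omega*Z/2K)^2). This is a textbook simplification, not the paper's full model with its inertial and radiative-mass terms, and the impedance and stiffness values below are illustrative:

```python
import math

def transmission(freq_hz, Z, K):
    """|T| across a spring-like interface of stiffness K (Pa/m) joining two
    identical half-spaces of acoustic impedance Z (Pa*s/m); quasi-static model."""
    w = 2.0 * math.pi * freq_hz
    return 1.0 / math.sqrt(1.0 + (w * Z / (2.0 * K)) ** 2)

# Illustrative numbers: steel-like impedance, 2 MHz probe.
Z = 46e6                        # Pa*s/m
for K in (1e13, 1e14, 1e15):    # interfacial stiffness, Pa/m
    print(K, round(transmission(2e6, Z, K), 3))
```

    The stiffer the interface (fewer or smaller gas pockets), the closer |T| gets to the perfectly bonded value of 1.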

  12. Formulation of consumables management models: Mission planning processor payload interface definition

    NASA Technical Reports Server (NTRS)

    Torian, J. G.

    1977-01-01

    Consumables models required for the mission planning and scheduling function are formulated. The relation of the models to prelaunch, onboard, ground support, and postmission functions for the space transportation systems is established. Analytical models consisting of an orbiter planning processor with consumables data base is developed. A method of recognizing potential constraint violations in both the planning and flight operations functions, and a flight data file storage/retrieval of information over an extended period which interfaces with a flight operations processor for monitoring of the actual flights is presented.

  13. Ergonomic Models of Anthropometry, Human Biomechanics and Operator-Equipment Interfaces

    NASA Technical Reports Server (NTRS)

    Kroemer, Karl H. E. (Editor); Snook, Stover H. (Editor); Meadows, Susan K. (Editor); Deutsch, Stanley (Editor)

    1988-01-01

    The Committee on Human Factors was established in October 1980 by the Commission on Behavioral and Social Sciences and Education of the National Research Council. The committee is sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Institute for the Behavioral and Social Sciences, the National Aeronautics and Space Administration, and the National Science Foundation. The workshop discussed the following: anthropometric models; biomechanical models; human-machine interface models; and research recommendations. A 17-page bibliography is included.

  14. Evaluation of automated statistical shape model based knee kinematics from biplane fluoroscopy

    PubMed Central

    Baka, Nora; Kaptein, Bart L.; Giphart, J. Erik; Staring, Marius; de Bruijne, Marleen; Lelieveldt, Boudewijn P.F.; Valstar, Edward

    2014-01-01

    State-of-the-art fluoroscopic knee kinematic analysis methods require patient-specific bone shapes segmented from CT or MRI. Substituting the patient-specific bone shapes with personalizable models, such as statistical shape models (SSMs), could eliminate the CT/MRI acquisitions, and thereby decrease costs and radiation dose (when eliminating CT). SSM-based kinematics, however, have not yet been evaluated on clinically relevant joint motion parameters. Therefore, in this work the applicability of SSMs for computing knee kinematics from biplane fluoroscopic sequences was explored. Kinematic precision with an edge-based automated bone tracking method using SSMs was evaluated on 6 cadaver and 10 in-vivo fluoroscopic sequences. The SSMs of the femur and the tibia-fibula were created using 61 training datasets. Kinematic precision was determined for medial-lateral tibial shift, anterior-posterior tibial drawer, joint distraction-contraction, flexion, tibial rotation and adduction. The relationship between kinematic precision and bone shape accuracy was also investigated. The SSM-based kinematics resulted in sub-millimeter (0.48–0.81 mm) and approximately one degree (0.69–0.99°) median precision on the cadaveric knees compared to bone-marker-based kinematics. The precision on the in-vivo datasets was comparable to the cadaveric sequences when evaluated with a semi-automatic reference method. These results are promising, though further work is necessary to reach the accuracy of CT-based kinematics. We also demonstrated that better shape reconstruction accuracy does not automatically imply better kinematic precision. This result suggests that the ability to accurately fit the edges in the fluoroscopic sequences plays a larger role in determining kinematic precision than the overall 3D shape accuracy. PMID:24207131
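    At its core, a statistical shape model is a mean shape plus a few PCA modes scaled by per-mode weights. A toy sketch of that construction, with random data standing in for the 61 training shapes (the landmark count and number of retained modes are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training shapes": 61 shapes, each a flattened vector of landmark
# coordinates (here 10 landmarks x 3 coordinates = 30 numbers).
shapes = rng.normal(size=(61, 30))
mean = shapes.mean(axis=0)

# PCA via SVD of the centred data; keep the first 5 modes.
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
modes = Vt[:5]
sd = s[:5] / np.sqrt(shapes.shape[0] - 1)   # per-mode standard deviations

def instance(b):
    """Generate a shape from mode weights b (in units of standard deviations)."""
    return mean + (np.asarray(b) * sd) @ modes

x = instance([1.0, -0.5, 0.0, 0.0, 0.0])
print(x.shape)  # (30,)
```

    Fitting such a model to a fluoroscopic sequence means searching over the pose parameters and the mode weights b until projected bone edges match the images.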

  15. Characterizing and Modeling Brittle Bi-material Interfaces Subjected to Shear

    NASA Astrophysics Data System (ADS)

    Anyfantis, Konstantinos N.; Berggreen, Christian

    2014-12-01

    This work is based on the investigation, both experimentally and numerically, of the Mode II fracture process and bond strength of bondlines formed in co-cured composite/metal joints. To this end, GFRP-to-steel double strap joints were tested in tension, so that the bi-material interface was subjected to shear with debonding occurring under Mode II conditions. The study of the debonding process and thus failure of the joints was based both on stress and energy considerations. Analytical formulas were utilized for the derivation of the respective shear strength and fracture toughness measures which characterize the bi-material interface, by considering the joint's failure load, geometry and involved materials. The derived stress and toughness magnitudes were further utilized as the parameters of an extrinsic cohesive law, applied in connection with the modeling the bi-material interface in a finite element simulation environment. It was concluded that interfacial fracture in the considered joints was driven by the fracture toughness and not by strength considerations, and that LEFM is well suited to analyze the failure of the joint. Additionally, the double strap joint geometry was identified and utilized as a characterization test for measuring the Mode II fracture toughness of brittle bi-material interfaces.

  16. Rigorous interpolation near tilted interfaces in 3-D finite-difference EM modelling

    NASA Astrophysics Data System (ADS)

    Shantsev, Daniil V.; Maaø, Frank A.

    2015-02-01

    We present a rigorous method for interpolation of electric and magnetic fields close to an interface with a conductivity contrast. The method takes into account not only a well-known discontinuity in the normal electric field, but also discontinuity in all the normal derivatives of electric and magnetic tangential fields. The proposed method is applied to marine 3-D controlled-source electromagnetic modelling (CSEM) where sources and receivers are located close to the seafloor separating conductive seawater and resistive formation. For the finite-difference scheme based on the Yee grid, the new interpolation is demonstrated to be much more accurate than alternative methods (interpolation using nodes on one side of the interface or interpolation using nodes on both sides, but ignoring the derivative jumps). The rigorous interpolation can handle arbitrary orientation of interface with respect to the grid, which is demonstrated on a marine CSEM example with a dipping seafloor. The interpolation coefficients are computed by minimizing a misfit between values at the nearest nodes and linear expansions of the continuous field components in the coordinate system aligned with the interface. The proposed interpolation operators can handle either uniform or non-uniform grids and can be applied to interpolation for both sources and receivers.
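    The coefficient-fitting idea, minimizing a misfit between values at the nearest nodes and a linear expansion of the field, can be sketched in miniature. This toy version fits a plain linear expansion in a homogeneous region; the paper's operators additionally work in interface-aligned coordinates and carry the derivative-jump terms across the conductivity contrast:

```python
import numpy as np

# Nodes near the receiver (local coordinates) and field values at them.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([1.0, 2.0, 3.0, 4.0])   # f(x, y) = 1 + x + 2y sampled on the nodes

# Linear expansion f ~ a0 + a1*x + a2*y; coefficients by least-squares misfit.
A = np.column_stack([np.ones(len(nodes)), nodes])
coef, *_ = np.linalg.lstsq(A, values, rcond=None)

# Evaluate the fitted expansion at the receiver position.
receiver = np.array([0.3, 0.6])
f_interp = coef[0] + coef[1:] @ receiver
print(f_interp)  # 1 + 0.3 + 2*0.6 = 2.5
```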

  17. Modeling Geometry and Progressive Failure of Material Interfaces in Plain Weave Composites

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Cheng, Ron-Bin

    2010-01-01

    A procedure combining a geometrically nonlinear, explicit-dynamics contact analysis, computer aided design techniques, and elasticity-based mesh adjustment is proposed to efficiently generate realistic finite element models for meso-mechanical analysis of progressive failure in textile composites. In the procedure, the geometry of fiber tows is obtained by imposing a fictitious expansion on the tows. Meshes resulting from the procedure are conformal with the computed tow-tow and tow-matrix interfaces but are incongruent at the interfaces. The mesh interfaces are treated as cohesive contact surfaces not only to resolve the incongruence but also to simulate progressive failure. The method is employed to simulate debonding at the material interfaces in a ceramic-matrix plain weave composite with matrix porosity and in a polymeric matrix plain weave composite without matrix porosity, both subject to uniaxial cyclic loading. The numerical results indicate progression of the interfacial damage during every loading and reverse loading event in a constant strain amplitude cyclic process. However, the composites show different patterns of damage advancement.

  18. Designing of Multi-Interface Diverging Experiments to Model Rayleigh-Taylor Growth in Supernovae

    NASA Astrophysics Data System (ADS)

    Grosskopf, Michael; Drake, R.; Kuranz, C.; Plewa, T.; Hearn, N.; Meakin, C.; Arnett, D.; Miles, A.; Robey, H.; Hansen, J.; Hsing, W.; Edwards, M.

    2008-05-01

    In previous experiments on the Omega Laser, researchers studying blast-wave-driven instabilities have observed the growth of Rayleigh-Taylor instabilities under conditions scaled to the He/H interface of SN1987A. Most of these experiments have been planar experiments, as the energy available proved unable to accelerate enough mass in a diverging geometry. With the advent of the NIF laser, which can deliver hundreds of kJ to an experiment, it is possible to produce 3D, blast-wave-driven, multiple-interface explosions and to study the mixing that develops. We report scaling simulations to model the interface dynamics of a multilayered, diverging Rayleigh-Taylor experiment for NIF using CALE, a hybrid adaptive Lagrangian-Eulerian code developed at LLNL. Specifically, we looked both qualitatively and quantitatively at the Rayleigh-Taylor growth and multi-interface interactions in mass-scaled, spherically divergent systems using different materials. The simulations will assist in the target design process and help choose diagnostics to maximize the information we receive in a particular shot. Simulations are critical for experimental planning, especially for experiments on large-scale facilities. *This research was sponsored by LLNL through contract LLNL B56128 and by the NNSA through DOE Research Grant DE-FG52-04NA00064.

  19. A surfactantless emulsion as a model for the liquid-liquid interface

    NASA Astrophysics Data System (ADS)

    Knight, Katherine Mary

    An electrochemically polarised liquid-liquid interface in the form of a surfactantless oil-in-water emulsion has been developed, and its creation, stabilisation and use as a model liquid-liquid system for structural characterisation using Small Angle Neutron Scattering (SANS) are described. The emulsion, composed of 1,2-dichloroethane (DCE)-in-D2O, was created using a condensation method, and the two main destabilisation processes, sedimentation and coalescence, were minimised using density matching and electrochemistry. The stabilised emulsion interface was then studied with SANS, using the D11 and D22 diffractometers at the ILL and LOQ at ISIS, to determine structural information on a layer of Bovine Serum Albumin (BSA) protein adsorbed at the interface with and without stabilising salts; the only analysable results were obtained using D11, owing to the lower Q-range accessible. The BSA layer thickness was determined to be 40 and 48 Å for emulsions with and without salts respectively, comparable with the literature thickness of 40 Å. Another use for the surfactantless emulsion would be electrodeless electrodeposition of metals at the interface, utilising the interfacial potential, and preliminary experiments were carried out using both oil-in-water and water-in-oil emulsions.

  20. Molecular simulation of water vapor-liquid phase interfaces using TIP4P/2005 model

    NASA Astrophysics Data System (ADS)

    Planková, Barbora; Vinš, Václav; Hrubý, Jan; Duška, Michal; Němec, Tomáš; Celný, David

    2015-05-01

    Molecular dynamics simulations of water were run using the TIP4P/2005 model for temperatures ranging from 250 K to 600 K. The density profile, the surface tension and the thickness of the phase interface were calculated as preliminary results. The surface tension values matched the IAPWS correlation well over a wide range of temperatures. As a side result, DL_POLY Classic was successfully used to test the new computing cluster in our institute.
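    Surface tension is typically extracted from such slab simulations via the mechanical route, from the anisotropy of the time-averaged pressure tensor: gamma = (Lz/2) * (<Pzz> - (<Pxx> + <Pyy>)/2), the factor 1/2 accounting for the slab's two interfaces. A sketch with illustrative numbers (not values from the paper):

```python
def surface_tension(Lz, pxx, pyy, pzz):
    """Mechanical-route surface tension of a liquid slab with two planar
    interfaces normal to z: gamma = (Lz/2) * (<Pzz> - (<Pxx> + <Pyy>)/2)."""
    return 0.5 * Lz * (pzz - 0.5 * (pxx + pyy))

# Illustrative numbers in SI units: 6 nm box, time-averaged pressure
# tensor components in Pa (tangential components negative in the slab).
gamma = surface_tension(6e-9, -2.0e7, -2.0e7, 4.0e6)
print(gamma)  # 0.5 * 6e-9 * 2.4e7 = 0.072 N/m
```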

  1. Degenerate Ising model for atomistic simulation of crystal-melt interfaces

    SciTech Connect

    Schebarchov, D.; Schulze, T. P.; Hendy, S. C.

    2014-02-21

    One of the simplest microscopic models for a thermally driven first-order phase transition is an Ising-type lattice system with nearest-neighbour interactions, an external field, and a degeneracy parameter. The underlying lattice and the interaction coupling constant control the anisotropic energy of the phase boundary, the field strength represents the bulk latent heat, and the degeneracy quantifies the difference in communal entropy between the two phases. We simulate the (stochastic) evolution of this minimal model by applying rejection-free canonical and microcanonical Monte Carlo algorithms, and we obtain caloric curves and heat capacity plots for square (2D) and face-centred cubic (3D) lattices with periodic boundary conditions. Since the model admits precise adjustment of bulk latent heat and communal entropy, neither of which affect the interface properties, we are able to tune the crystal nucleation barriers at a fixed degree of undercooling and verify a dimension-dependent scaling expected from classical nucleation theory. We also analyse the equilibrium crystal-melt coexistence in the microcanonical ensemble, where we detect negative heat capacities and find that this phenomenon is more pronounced when the interface is the dominant contributor to the total entropy. The negative branch of the heat capacity appears smooth only when the equilibrium interface-area-to-volume ratio is not constant but varies smoothly with the excitation energy. Finally, we simulate microcanonical crystal nucleation and subsequent relaxation to an equilibrium Wulff shape, demonstrating the model's utility in tracking crystal-melt interfaces at the atomistic level.
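    Where the degeneracy parameter enters the sampling can be shown with a plain Metropolis sampler: the degeneracy g of one phase multiplies the Boltzmann factor as an entropic weight. The paper uses rejection-free canonical and microcanonical algorithms; this minimal canonical sketch (all parameters illustrative) only demonstrates the acceptance rule:

```python
import math, random

random.seed(1)
L, J, h, g, beta = 16, 1.0, 0.1, 2.0, 0.5
spins = [[1 for _ in range(L)] for _ in range(L)]   # start in the +1 phase

def neighbors_sum(x, y):
    # Periodic boundary conditions on a square lattice.
    return (spins[(x + 1) % L][y] + spins[(x - 1) % L][y]
            + spins[x][(y + 1) % L] + spins[x][(y - 1) % L])

def sweep():
    for _ in range(L * L):
        x, y = random.randrange(L), random.randrange(L)
        s = spins[x][y]
        # Energy cost of flipping s -> -s for H = -J sum s_i s_j - h sum s_i.
        dE = 2.0 * s * (J * neighbors_sum(x, y) + h)
        # Degeneracy g of the -1 ("liquid") phase enters as an entropic weight.
        dn_minus = 1 if s == 1 else -1
        if random.random() < min(1.0, (g ** dn_minus) * math.exp(-beta * dE)):
            spins[x][y] = -s

for _ in range(50):
    sweep()
m = sum(sum(row) for row in spins) / (L * L)
print(m)
```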

  2. Raise and peel models of fluctuating interfaces and combinatorics of Pascal's hexagon

    NASA Astrophysics Data System (ADS)

    Pyatov, P.

    2004-09-01

    The raise and peel model of a one-dimensional fluctuating interface (model A) is extended by considering one source (model B) or two sources (model C) at the boundaries. The Hamiltonians describing the three processes have, in the thermodynamic limit, spectra given by conformal field theory. The probabilities of the different configurations in the stationary states of the three models are not only related but have interesting combinatorial properties. We show that by extending Pascal's triangle (which gives solutions to linear relations in terms of integer numbers), to an hexagon, one obtains integer solutions of bilinear relations. These solutions not only give the weights of the various configurations in the three models but also give an insight into the connections between the probability distributions in the stationary states of the three models. Interestingly enough, Pascal's hexagon also gives solutions to a Hirota's difference equation.

  3. Simplifying the interaction between cognitive models and task environments with the JSON Network Interface.

    PubMed

    Hope, Ryan M; Schoelles, Michael J; Gray, Wayne D

    2014-12-01

    Process models of cognition, written in architectures such as ACT-R and EPIC, should be able to interact with the same software with which human subjects interact. By eliminating the need to simulate the experiment, this approach would simplify the modeler's effort, while ensuring that all steps required of the human are also required by the model. In practice, the difficulties of allowing one software system to interact with another present a significant barrier to any modeler who is not also skilled at this type of programming. The barrier increases if the programming language used by the modeling software differs from that used by the experimental software. The JSON Network Interface simplifies this problem for ACT-R modelers, and potentially, modelers using other systems. PMID:24338626
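    The general pattern, a model process exchanging structured messages with a task environment over a socket, can be sketched with newline-delimited JSON. The message fields and framing here are invented for the illustration; the actual JSON Network Interface defines its own message schema:

```python
import json, socket, threading

def send_msg(sock, obj):
    # One JSON object per line (framing is illustrative, not the JNI's).
    sock.sendall((json.dumps(obj) + "\n").encode())

def recv_msg(f):
    return json.loads(f.readline())

# A toy "task environment" and "model" talking over a local socket pair.
env_sock, model_sock = socket.socketpair()

def environment():
    f = env_sock.makefile("r")
    msg = recv_msg(f)                       # the model reports a keypress
    send_msg(env_sock, {"display": "ok", "echo": msg["key"]})

t = threading.Thread(target=environment)
t.start()
send_msg(model_sock, {"action": "keypress", "key": "j"})
reply = recv_msg(model_sock.makefile("r"))
t.join()
print(reply)  # {'display': 'ok', 'echo': 'j'}
```

    The point of such an interface is exactly this decoupling: the experiment software neither knows nor cares whether the keypress came from a human or from a cognitive model.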

  4. An automated image analysis method to measure regularity in biological patterns: a case study in a Drosophila neurodegenerative model.

    PubMed

    Diez-Hermano, Sergio; Valero, Jorge; Rueda, Cristina; Ganfornina, Maria D; Sanchez, Diego

    2015-01-01

    The fruitfly compound eye has been broadly used as a model for neurodegenerative diseases. Classical quantitative techniques to estimate the degeneration level of an eye under certain experimental conditions rely either on time consuming histological techniques to measure retinal thickness, or pseudopupil visualization and manual counting. Alternatively, visual examination of the eye surface appearance gives only a qualitative approximation provided the observer is well-trained. Therefore, there is a need for a simplified and standardized analysis of fruitfly eye degeneration extent for both routine laboratory use and for automated high-throughput analysis. We have designed the freely available ImageJ plugin FLEYE, a novel and user-friendly method for quantitative unbiased evaluation of neurodegeneration levels based on the acquisition of fly eye surface pictures. The incorporation of automated image analysis tools and a classification algorithm sustained on a built-in statistical model allow the user to quickly analyze large sample size data with reliability and robustness. Pharmacological screenings or genetic studies using the Drosophila retina as a model system may benefit from our method, because it can be easily implemented in a fully automated environment. In addition, FLEYE can be trained to optimize the image detection capabilities, resulting in a versatile approach to evaluate the pattern regularity of other biological or non-biological samples and their experimental or pathological disruption. PMID:25887846
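    A generic regularity score of the kind such a classifier might build on is the coefficient of variation of nearest-neighbour spacings among detected ommatidia centres: zero for a perfect lattice, growing with disorder. This is a stand-in metric for illustration, not FLEYE's actual statistical model:

```python
import math

def regularity(points):
    """Coefficient of variation of nearest-neighbour spacing: lower means a
    more regular lattice (a generic score, not FLEYE's metric)."""
    d = []
    for i, p in enumerate(points):
        nn = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        d.append(nn)
    mean = sum(d) / len(d)
    var = sum((x - mean) ** 2 for x in d) / len(d)
    return math.sqrt(var) / mean

grid = [(x, y) for x in range(5) for y in range(5)]   # perfect 5x5 lattice
# The same lattice with deterministic horizontal jitter, mimicking degeneration.
jitter = [(x + 0.3 * ((x * 7 + y * 3) % 5 - 2) / 2, y) for x, y in grid]
print(regularity(grid), regularity(jitter))   # 0.0 for the perfect grid
```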

  5. Deterministic contact mechanics model applied to electrode interfaces in polymer electrolyte fuel cells and interfacial water accumulation

    NASA Astrophysics Data System (ADS)

    Zenyuk, I. V.; Kumbur, E. C.; Litster, S.

    2013-11-01

    An elastic deterministic contact mechanics model is applied to the compressed micro-porous layer (MPL) and catalyst layer (CL) interfaces in polymer electrolyte fuel cells (PEFCs) to elucidate the interfacial morphology. The model employs measured two-dimensional surface profiles and computes local surface deformation and interfacial gap, average contact resistance, and percent contact area as a function of compression pressure. Here, we apply the model to one interface having an MPL with cracks and one with a crack-free MPL. The void size distributions and water retention curves for the two sets of CL|MPL interfaces under compression are also computed. The CL|MPL interfaces with cracks are observed to have higher roughness, resulting in twice the interfacial average gap compared to the non-cracked interface at a given level of compression. The results indicate that the interfacial contact resistance is roughly the same for cracked and non-cracked interfaces because cracks occupy a low percentage of the overall area. However, the cracked CL|MPL interface yields higher liquid saturation levels at all capillary pressures, resulting in an order of magnitude higher water storage capacity compared to the smooth interface. The van Genuchten water retention curve correlation for log-normal void size distributions is found to fit non-cracked CL|MPL interfaces well.
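    The geometric core of such a model, computing the local gap and percent contact area from measured surface heights at a given approach, can be sketched in one dimension. This toy version is purely rigid overlap with synthetic roughness; the paper's deterministic model additionally solves for the elastic deformation of the asperities:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two measured-like 1-D roughness profiles (heights in micrometres).
z1 = rng.normal(0.0, 1.0, 500)
z2 = rng.normal(0.0, 1.5, 500)
combined = z1 + z2                 # composite roughness of the surface pair
sep0 = combined.max()              # separation at first touch

def contact_stats(approach):
    """Percent contact area and mean interfacial gap after a rigid approach
    (geometric overlap only; an elastic model would deform the contacts)."""
    gap = np.maximum(sep0 - approach - combined, 0.0)
    percent = 100.0 * np.mean(gap == 0.0)
    return percent, gap.mean()

for a in (0.5, 2.0, 4.0):          # approach in micrometres
    print(a, contact_stats(a))
```

    Pushing the surfaces together (larger approach, i.e., higher compression) monotonically increases contact area and shrinks the average gap, the trend the paper quantifies for cracked versus crack-free interfaces.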

  6. Automating ground-fixed target modeling with the smart target model generator

    NASA Astrophysics Data System (ADS)

    Verner, D.; Dukes, R.

    2007-04-01

    The Smart Target Model Generator (STMG) is an AFRL/MNAL sponsored tool for generating 3D building models for use in various weapon effectiveness tools. These tools include tri-service approved tools such as Modular Effectiveness/Vulnerability Assessment (MEVA), Building Analysis Module in Joint Weaponeering System (JWS), PENCRV3D, and WinBlast. It also supports internal dispersion modeling of chemical contaminants. STMG also has capabilities to generate infrared or other sensor images. Unlike most CAD-models, STMG provides physics-based component properties such as strength, density, reinforcement, and material type. Interior components such as electrical and mechanical equipment, rooms, and ducts are also modeled. Buildings can be manually created with a graphical editor or automatically generated using rule-bases which size and place the structural components using rules based on structural engineering principles. In addition to its primary purposes of supporting conventional kinetic munitions, it can also be used to support sensor modeling and automatic target recognition.

  7. Finite element modeling of laminated composite plates with locally delaminated interface subjected to impact loading.

    PubMed

    Abo Sabah, Saddam Hussein; Kueh, Ahmad Beng Hong

    2014-01-01

    This paper investigates the effects of localized interface progressive delamination on the behavior of two-layer laminated composite plates subjected to low velocity impact loading for various fiber orientations. By means of a finite element approach, the laminae stiffnesses are constructed independently of their interface, where a well-defined virtually zero-thickness interface element is discretely adopted for delamination simulation. The present model has the advantage of simulating a localized interfacial condition at arbitrary locations, for various degeneration areas and intensities, under numerous boundary conditions, since the interfacial description is expressed discretely. The model shows good agreement with existing results from the literature when modeled in a perfectly bonded state. It is found that as the local delamination area increases, so does the magnitude of the maximum displacement history. Also, as the deviation between top and bottom fiber orientations increases, both central deflection and energy absorption increase, although the relative maximum displacement correspondingly decreases compared with the laminate's perfectly bonded state. PMID:24696668

  8. A coupled cohesive zone model for transient analysis of thermoelastic interface debonding

    NASA Astrophysics Data System (ADS)

    Sapora, Alberto; Paggi, Marco

    2014-04-01

    A coupled cohesive zone model based on an analogy between fracture and contact mechanics is proposed to investigate debonding phenomena at imperfect interfaces due to thermomechanical loading and thermal fields in bodies with cohesive cracks. Traction-displacement and heat flux-temperature relations are theoretically derived and numerically implemented in the finite element method. In the proposed formulation, the interface conductivity is a function of the normal gap, generalizing the Kapitza constant resistance model to partial decohesion effects. The case of a centered interface in a bimaterial component subjected to thermal loads is used as a test problem. The analysis focuses on the time evolution of the displacement and temperature fields during the transient regime before debonding, an issue not yet investigated in the literature. The solution of the nonlinear numerical problem is gained via an implicit scheme both in space and in time. The proposed model is finally applied to a case study in photovoltaics where the evolution of the thermoelastic fields inside a defective solar cell is predicted.

  9. Finite Element Modeling of Laminated Composite Plates with Locally Delaminated Interface Subjected to Impact Loading

    PubMed Central

    Abo Sabah, Saddam Hussein; Kueh, Ahmad Beng Hong

    2014-01-01

    This paper investigates the effects of localized interface progressive delamination on the behavior of two-layer laminated composite plates subjected to low velocity impact loading for various fiber orientations. By means of a finite element approach, the laminae stiffnesses are constructed independently of their interface, where a well-defined virtually zero-thickness interface element is discretely adopted for delamination simulation. The present model has the advantage of simulating a localized interfacial condition at arbitrary locations, for various degeneration areas and intensities, under numerous boundary conditions, since the interfacial description is expressed discretely. The model shows good agreement with existing results from the literature when modeled in a perfectly bonded state. It is found that as the local delamination area increases, so does the magnitude of the maximum displacement history. Also, as the deviation between top and bottom fiber orientations increases, both central deflection and energy absorption increase, although the relative maximum displacement correspondingly decreases compared with the laminate's perfectly bonded state. PMID:24696668

  10. EzGal: A Flexible Interface for Stellar Population Synthesis Models

    NASA Astrophysics Data System (ADS)

    Mancone, Conor L.; Gonzalez, Anthony H.

    2012-06-01

    We present EzGal, a flexible Python program designed to easily generate observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models. As has been demonstrated by various authors, for many applications the choice of input SPS models can be a significant source of systematic uncertainty. A key strength of EzGal is that it enables simple, direct comparison of different model sets so that the uncertainty introduced by choice of model set can be quantified. Its ability to work with new models will allow EzGal to remain useful as SPS modeling evolves to keep up with the latest research (such as varying IMFs). EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and it can be used to interpolate between metallicities for a given model set. To facilitate use, we have created an online interface to run EzGal and quickly generate magnitude and mass-to-light ratio predictions for a variety of star-formation histories and model sets. We make many commonly used SPS models available from the online interface, including the canonical Bruzual & Charlot models, an updated version of these models, the Maraston models, the BaSTI models, and the Flexible Stellar Population Synthesis (FSPS) models. We use EzGal to compare magnitude predictions for the model sets as a function of wavelength, age, metallicity, and star-formation history. From this comparison we quickly recover the well-known result that the models agree best in the optical for old solar-metallicity models, with differences at the 0.1 level. Similarly, the most problematic regime for SPS modeling is for young ages (?2 Gyr) and long wavelengths (??7500 ), where thermally pulsating AGB stars are important and scatter between models can vary from 0.3 mag (Sloan i) to 0.7 mag (Ks). 
We find that these differences are not caused by one discrepant model set and should therefore be interpreted as general uncertainties in SPS modeling. Finally, we connect our results to a more physically motivated example by generating CSPs with a star-formation history matching the global star-formation history of the universe. We demonstrate that the wavelength and age dependence of SPS model uncertainty translates into a redshift-dependent model uncertainty, highlighting the importance of a quantitative understanding of model differences when comparing observations with models as a function of redshift.
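Magnitude predictions of this kind ultimately reduce to synthetic photometry: integrating a model SED against a filter response curve. A minimal sketch of that calculation under the AB system (illustrative only — this is not EzGal's actual code, and the function names are invented):

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def ab_magnitude(wave_aa, flux_nu, filt_wave_aa, filt_resp):
    """AB magnitude of an SED f_nu (erg/s/cm^2/Hz) on wavelength grid
    wave_aa (Angstrom) through a filter response curve, using the
    photon-counting convention: m = -2.5*log10(<f_nu>_filter / 3631 Jy)."""
    resp = np.interp(wave_aa, filt_wave_aa, filt_resp, left=0.0, right=0.0)
    num = _trapezoid(flux_nu * resp / wave_aa, wave_aa)
    den = _trapezoid(resp / wave_aa, wave_aa)
    return -2.5 * np.log10(num / den / 3.631e-20)
```

A flat-spectrum source with f_nu = 3631 Jy returns m ≈ 0 in any band, a convenient sanity check; comparing SPS model sets then amounts to differencing such magnitudes computed from each library's SEDs.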

  11. Object-Based Integration of Photogrammetric and LiDAR Data for Automated Generation of Complex Polyhedral Building Models.

    PubMed

    Kim, Changjae; Habib, Ayman

    2009-01-01

    This research is concerned with a methodology for automated generation of polyhedral building models for complex structures, whose rooftops are bounded by straight lines. The process starts by utilizing LiDAR data for building hypothesis generation and derivation of individual planar patches constituting building rooftops. Initial boundaries of these patches are then refined through the integration of LiDAR and photogrammetric data and hierarchical processing of the planar patches. Building models for complex structures are finally produced using the refined boundaries. The performance of the developed methodology is evaluated through qualitative and quantitative analysis of the generated building models from real data. PMID:22346722

  12. Automated quantification of carotid artery stenosis on contrast-enhanced MRA data using a deformable vascular tube model.

    PubMed

    Suinesiaputra, Avan; de Koning, Patrick J H; Zudilova-Seinstra, Elena; Reiber, Johan H C; van der Geest, Rob J

    2012-08-01

The purpose of this study was to develop and validate a method for automated segmentation of the carotid artery lumen from volumetric MR Angiographic (MRA) images using a deformable tubular 3D Non-Uniform Rational B-Splines (NURBS) model. A flexible 3D tubular NURBS model was designed to delineate the carotid arterial lumen. User interaction was allowed to guide the model by placement of forbidden areas. Contrast-enhanced MRA (CE-MRA) data from 21 patients with carotid atherosclerotic disease were included in this study. The validation was performed against expert-drawn contours on multi-planar reformatted image slices perpendicular to the artery. Excellent linear correlations were found for cross-sectional area measurement (r = 0.98, P < 0.05) and luminal diameter (r = 0.98, P < 0.05). Strong matches in terms of the Dice similarity index were achieved: 0.95 ± 0.02 (common carotid artery), 0.90 ± 0.07 (internal carotid artery), 0.87 ± 0.07 (external carotid artery), 0.88 ± 0.09 (carotid bifurcation) and 0.75 ± 0.20 (stenosed segments). Slight overestimation of stenosis grading by the automated method was observed. The mean difference was 7.20% (SD = 21.00%) and 5.2% (SD = 21.96%) when validated against two observers. Reproducibility in stenosis grade calculation by the automated method was high; the mean difference between two repeated analyses was 1.9 ± 7.3%. In conclusion, the automated method shows high potential for clinical application in the analysis of CE-MRA of carotid arteries. PMID:22160666
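The Dice similarity index quoted above measures voxel overlap between the automated and expert segmentations. A minimal sketch (assuming binary masks on a common voxel grid):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean segmentation
    masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Identical masks give 1.0 and disjoint masks 0.0, matching the convention behind the reported values.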

  13. Automation based on knowledge modeling theory and its applications in engine diagnostic systems using Space Shuttle Main Engine vibrational data. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Jonnathan H.

    1995-01-01

Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor- and time-intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).

  14. Automated Bayesian model development for frequency detection in biological time series

    PubMed Central

    2011-01-01

Background A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. Results In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlight the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Conclusions Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. 
However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure. PMID:21702910
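The core of Bayesian frequency detection is scoring candidate frequencies by marginal likelihood rather than reading peaks off a Fourier spectrum. A heavily simplified sketch (a single-sinusoid model scored by a rough log marginal likelihood after least-squares amplitude fitting; priors and noise marginalization are compressed compared with full Bayesian Spectrum Analysis):

```python
import numpy as np

def map_frequency(t, y, freqs):
    """Scan candidate frequencies; for each, least-squares fit
    y ≈ a*cos(2πft) + b*sin(2πft) and score it by a rough log marginal
    likelihood ∝ -(N/2)*log(RSS).  Returns the best-scoring frequency.
    Works for irregularly sampled t as well."""
    y = y - y.mean()
    n = len(y)
    best_f, best_score = freqs[0], -np.inf
    for f in freqs:
        X = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = rss[0] if len(rss) else np.sum((y - X @ coef) ** 2)
        score = -0.5 * n * np.log(rss)
        if score > best_score:
            best_score, best_f = score, f
    return best_f
```

Because the score is a monotone function of the residual sum of squares, this reduces to a least-squares periodogram; the full Bayesian treatment additionally yields posterior widths (parameter precision) and supports direct model comparison.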

  15. Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model

    NASA Astrophysics Data System (ADS)

    Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.

    2009-05-01

Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging using the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar-image sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting from 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For quantitative analysis, CBF was calculated by deconvolution with an AIF obtained from the 10 voxels showing the greatest contrast enhancement. For semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curves. We observed that when the AIFs obtained in three different ROIs (whole brain, hemisphere without lesion, and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend across operative time points; when the AIFs differed, the CBF ratios differed as well. We conclude that, using local maxima, a proper AIF can be defined without knowing the anatomical location of the arteries in a stroke rat model.
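The automated AIF selection described above — keeping the voxels with the greatest contrast enhancement and averaging them — can be sketched as follows (the array layout and the enhancement measure are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def detect_aif(conc, n_voxels=10):
    """conc: array of shape (n_total_voxels, n_timepoints) holding
    concentration-time curves.  Select the n_voxels curves with the
    greatest peak enhancement over baseline and average them as the
    AIF estimate."""
    peaks = conc.max(axis=1) - conc[:, 0]      # enhancement over baseline
    idx = np.argsort(peaks)[-n_voxels:]        # strongest enhancers
    return conc[idx].mean(axis=0)
```

The selected voxels act as a surrogate arterial signal, so no anatomical localization of the arteries is needed.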

  16. Automated modeling of ecosystem CO2 fluxes based on closed chamber measurements: A standardized conceptual and practical approach

    NASA Astrophysics Data System (ADS)

Hoffmann, Mathias; Jurisch, Nicole; Albiac Borraz, Elisa; Hagemann, Ulrike; Sommer, Michael; Augustin, Jürgen

    2015-04-01

Closed chamber measurements are widely used for determining the CO2 exchange of small-scale or heterogeneous ecosystems. Besides chamber design and operational handling, the data processing procedure is a considerable source of uncertainty in the obtained results. We developed a standardized automatic data processing algorithm, based on the language and statistical computing environment R, to (i) calculate measured CO2 flux rates, (ii) parameterize ecosystem respiration (Reco) and gross primary production (GPP) models, (iii) optionally compute an adaptive temperature model, (iv) model Reco, GPP and net ecosystem exchange (NEE), and (v) evaluate model uncertainty (calibration, validation and uncertainty prediction). The algorithm was tested for different manual and automatic chamber measurement systems (such as automated NEE chambers and the LI-8100A soil CO2 flux system) and ecosystems. Our study shows that even minor changes within the modelling approach may result in considerable differences in calculated flux rates, derived photosynthetically active radiation and temperature dependencies, and subsequently modeled Reco, GPP and NEE balances of up to 25%. Automated and standardized data processing procedures based on clearly defined criteria, such as statistical parameters and thresholds, are therefore a prerequisite for guaranteeing the reproducibility and traceability of modelling results and for encouraging better comparability between closed chamber based CO2 measurements.
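Step (ii), parameterizing a Reco temperature dependence, can be illustrated with a simple Q10-type model fitted by nonlinear least squares. The paper's algorithm is written in R and supports several model variants; this Python sketch assumes one common parameterization:

```python
import numpy as np
from scipy.optimize import curve_fit

def reco_q10(temp_c, r_ref, q10, t_ref=10.0):
    """Q10-type ecosystem respiration model: respiration at reference
    temperature t_ref scaled by q10 per 10 °C (one common choice)."""
    return r_ref * q10 ** ((temp_c - t_ref) / 10.0)

def fit_reco(temp_c, flux):
    """Fit (r_ref, q10) to measured chamber fluxes."""
    (r_ref, q10), _ = curve_fit(reco_q10, temp_c, flux, p0=(1.0, 2.0))
    return r_ref, q10
```

The fitted parameters then let nighttime (Reco) fluxes be gap-filled and extrapolated from continuous temperature records, which is exactly the step where small methodological choices propagate into the NEE balance.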

  17. Device models for bilayer organic solar cells using interface rate equations

    NASA Astrophysics Data System (ADS)

    Thongprong, Non; Duxbury, Phillip

    2014-03-01

Although the generalized Shockley diode equation is often used to fit the electrical response of organic photovoltaic devices, developing device models that relate its parameters to atomistic processes is more difficult yet essential to fundamental understanding. A useful device model for organic heterojunctions was developed by Giebink et al., in which the heterointerface is treated using a rate equation approach and the electric field in the donor and acceptor regions is assumed to be constant. We have developed models and computational tools combining the bilayer interface model of Giebink et al. with methods to include non-uniform electric fields in the donor and acceptor regions of the material. Injection barriers and trap effects in the donor and acceptor regions are also incorporated in our computational tools. Here the model and computational methods will be briefly outlined and results for the effects of low mobility in the donor or acceptor regions will be summarized. In these models, the series resistance in the generalized Shockley equation is interpreted as the sum of the total bulk resistivity of the materials and the barriers at each layer's contact, while the parallel resistance mainly stems from the dissociation efficiency of charge-transfer states at the donor/acceptor interface.
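The generalized Shockley equation is implicit in the current because of the series resistance, so extracting an I-V curve requires a numerical solve at each voltage. A minimal sketch (sign conventions and parameter values are illustrative assumptions, not from the paper):

```python
import numpy as np
from scipy.optimize import brentq

def diode_current(v, i_ph, i_0, n, r_s, r_p, v_t=0.02585):
    """Output current from the generalized Shockley equation
        I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rp,
    solved implicitly for I by bracketed root finding.  g(I) below is
    strictly decreasing in I, so the bracketed root is unique."""
    def g(i):
        return (i_ph - i_0 * (np.exp((v + i * r_s) / (n * v_t)) - 1.0)
                - (v + i * r_s) / r_p - i)
    return brentq(g, -10.0, i_ph + 1.0)
```

At V = 0 the solution approaches the photocurrent (short-circuit condition), and near the ideal open-circuit voltage it approaches zero, which serves as a quick consistency check of the solver.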

  18. Blocking and Blending: Different Assembly Models of Cyclodextrin and Sodium Caseinate at the Oil/Water Interface.

    PubMed

    Xu, Hua-Neng; Liu, Huan-Huan; Zhang, Lianfu

    2015-08-25

The stability of cyclodextrin (CD)-based emulsions is attributed to the formation of a solid film of oil-CD complexes at the oil/water interface. However, competitive interactions between CDs and other components at the interface still need to be understood. Here we develop two different routes that allow the incorporation of a model protein (sodium caseinate, SC) into emulsions based on β-CD. In one route, the components adsorb simultaneously from a mixed solution to the oil/water interface (route I); in the other, SC is added to a previously established CD-stabilized interface (route II). The adsorption mechanism of β-CD modified by SC at the oil/water interface is investigated by rheological and optical methods. Strong sensitivity of the rheological behavior to the routes is indicated by both steady-state and small-deformation oscillatory experiments. Possible β-CD/SC interaction models at the interface are proposed. In route I, the protein, due to its higher affinity for the interface, adsorbs strongly at the interface, blocking the adsorption of β-CD and the formation of oil-CD complexes. In route II, the protein penetrates and blends into the preadsorbed layer of oil-CD complexes already formed at the interface. The revelation of interfacial assembly is expected to help better understand CD-based emulsions in natural systems and improve their designs in engineering applications. PMID:26228663

  19. A model for low temperature interface passivation between amorphous and crystalline silicon

    NASA Astrophysics Data System (ADS)

    Mitchell, J.

    2013-11-01

Excellent passivation of the crystalline surface is known to occur following post-deposition thermal annealing of intrinsic hydrogenated amorphous silicon thin-film layers deposited by plasma-enhanced chemical vapour deposition. The hydrogen primarily responsible for passivating dangling bonds at the crystalline silicon surface has often been singularly linked to a bulk diffusion mechanism within the thin-film layer. In this work, the origins and the mechanism by which hydrogen passivation occurs are more accurately identified by way of an interface-diffusion model, which operates independently of the a-Si:H bulk. This first-principles approach achieved good agreement with experimental results, describing a linear relationship between the average diffusion lengths and annealing temperature. Similarly, the time hydrogen spends between shallow-trap states is shown to decrease rapidly with increasing temperature, circuitously related to probabilistic displacement distances. The interface reconfiguration model proposed in this work demonstrates the importance of interface states and identifies the misconception surrounding hydrogen passivation of the c-Si surface.

  20. Modeling the Charge Transport in Graphene Nano Ribbon Interfaces for Nano Scale Electronic Devices

    NASA Astrophysics Data System (ADS)

    Kumar, Ravinder; Engles, Derick

    2015-05-01

In this research work we have modeled, simulated and compared the electronic charge transport for metal-semiconductor-metal interfaces of graphene nanoribbons (GNRs) with different geometries using first-principles calculations and the Non-Equilibrium Green's Function (NEGF) method. We modeled junctions of an armchair GNR strip sandwiched between two zigzag strips (Z-A-Z) and a zigzag GNR strip sandwiched between two armchair strips (A-Z-A) using semi-empirical Extended Hückel Theory (EHT) within the NEGF framework. I-V characteristics of the interfaces were visualized for various transport parameters. Distinct changes in the conductance and I-V curves are reported as the width across layers and the channel length (central part) were varied, at bias voltages from -1 V to 1 V in steps of 0.25 V. From the simulated results we observed that the conductance through the A-Z-A graphene junction is in the range of 10⁻¹³ S, whereas the conductance through the Z-A-Z graphene junction is in the range of 10⁻⁵ S. These conductance-controlled mechanisms for charge transport in graphene interfaces with different geometries are important for the design of graphene-based nanoscale electronic devices such as graphene FETs and sensors.

  1. Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS

    NASA Technical Reports Server (NTRS)

    Wheeler, mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.

    2009-01-01

Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers for Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with multiple requests submitted in sequence, for the purpose of obtaining useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into the local atmosphere. To help manage local HYSPLIT runs for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model, Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.

  2. A model for the control mode man-computer interface dialogue

    NASA Technical Reports Server (NTRS)

    Chafin, R. L.

    1981-01-01

A four-stage model is presented for the control mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues, namely human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.

  3. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

PRay is a graphical user interface for interactive display and editing of velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray-tracing results from other software. The main features are graphical editing of nodes and fast adjustment of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase-picking software. PRay is open source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X, and provides a version-controlled source code repository for community development.

  4. Definition of common support equipment and space station interface requirements for IOC model technology experiments

    NASA Technical Reports Server (NTRS)

    Russell, Richard A.; Waiss, Richard D.

    1988-01-01

    A study was conducted to identify the common support equipment and Space Station interface requirements for the IOC (initial operating capabilities) model technology experiments. In particular, each principal investigator for the proposed model technology experiment was contacted and visited for technical understanding and support for the generation of the detailed technical backup data required for completion of this study. Based on the data generated, a strong case can be made for a dedicated technology experiment command and control work station consisting of a command keyboard, cathode ray tube, data processing and storage, and an alert/annunciator panel located in the pressurized laboratory.

  5. Testing of Environmental Satellite Bus-Instrument Interfaces Using Engineering Models

    NASA Technical Reports Server (NTRS)

    Gagnier, Donald; Hayner, Rick; Nosek, Thomas; Roza, Michael; Hendershot, James E.; Razzaghi, Andrea I.

    2004-01-01

This paper discusses the formulation and execution of a laboratory test of the electrical interfaces between multiple atmospheric scientific instruments and the spacecraft bus that carries them. The testing, performed in 2002, used engineering models of the instruments and the Aura spacecraft bus electronics. Aura is one of NASA's Earth Observing System missions. The test was designed to evaluate the complex interfaces in the command and data handling subsystems prior to integration of the complete flight instruments on the spacecraft. A problem discovered during the flight integration phase of the observatory can cause significant cost and schedule impacts. The tests successfully revealed problems and led to their resolution before the full-up integration phase, saving significant cost and schedule. This approach could be beneficial for future environmental satellite programs involving the integration of multiple, complex scientific instruments onto a spacecraft bus.

  6. Automated characterization of bending and expansion of a lattice of a Si substrate near a SiGe/Si interface by using split HOLZ line patterns.

    PubMed

    Saitoh, Koh; Yasuda, Yoshifumi; Hamabe, Maiko; Tanaka, Nobuo

    2010-01-01

A method is proposed to determine lattice parameters and parameters characterizing the bending strain of the lattice (the direction and magnitude of the displacement field of the bending strain) using higher-order Laue zone (HOLZ) reflection lines observed in convergent-beam electron diffraction patterns. In this method, all of the parameters are simultaneously determined by a fit between the Hough transforms of experimental and kinematically simulated HOLZ line patterns. This method has been used to obtain two-dimensional maps of the lattice parameter a and of the direction and relative magnitude of the displacement field in a Si substrate near a SiGe/Si interface. PMID:20484750
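The Hough-transform step — mapping each point of a HOLZ line pattern into votes over line parameters (θ, ρ) so that straight lines become accumulator peaks — can be sketched as follows (a toy accumulator; grid sizes and the peak-picking rule are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_max=64):
    """Hough transform of a set of (x, y) points: each point votes for
    all lines through it, parameterized by rho = x*cos(theta) + y*sin(theta).
    Returns the (theta, rho) of the dominant line."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * rho_max + 1), dtype=int)
    for x, y in points:
        rho = np.rint(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rho + rho_max] += 1   # one vote per theta bin
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[i], j - rho_max
```

Fitting in Hough space rather than image space is what lets all line parameters be adjusted simultaneously against the simulated pattern.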

  7. A Cognitive System Model for Human/Automation Dynamics in Airspace Management

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Pisanich, Gregory; Lebacqz, J. Victor (Technical Monitor)

    1997-01-01

    NASA has initiated a significant thrust of research and development focused on providing the flight crew and air traffic managers automation aids to increase capacity in en route and terminal area operations through the use of flexible, more fuel-efficient routing, while improving the level of safety in commercial carrier operations. In that system development, definition of cognitive requirements for integrated multi-operator dynamic aiding systems is fundamental. In order to support that cognitive function definition, we have extended the Man Machine Integrated Design and Analysis System (MIDAS) to include representation of multiple cognitive agents (both human operators and intelligent aiding systems) operating aircraft, airline operations centers and air traffic control centers in the evolving airspace. The demands of this application require representation of many intelligent agents sharing world-models, and coordinating action/intention with cooperative scheduling of goals and actions in a potentially unpredictable world of operations. The MIDAS operator models have undergone significant development in order to understand the requirements for operator aiding and the impact of that aiding in the complex nondeterminate system of national airspace operations. The operator model's structure has been modified to include attention functions, action priority, and situation assessment. The cognitive function model has been expanded to include working memory operations including retrieval from long-term store, interference, visual-motor and verbal articulatory loop functions, and time-based losses. The operator's activity structures have been developed to include prioritization and interruption of multiple parallel activities among multiple operators, to provide for anticipation (knowledge of the intention and action of remote operators), and to respond to failures of the system and other operators in the system in situation-specific paradigms. 
The model's internal representation has been modified so that multiple, autonomous sets of equipment function in a scenario as single equipment sets do now. In order to support the analysis requirements with multiple items of equipment, it is necessary for equipment to access the state of other equipment objects at initialization time (a radar object may need to access the position and speed of aircraft in its area, for example), and as a function of perception and sensor system interaction. The model has been improved to include multiple world-states as a function of equipment and operator interaction. The model has been used to predict the impact of warning and alert zones in aircraft operation and, more critically, the interaction of flight-deck-based warning mechanisms and air traffic controller action in response to ground-based conflict prediction and alerting systems. In this operation, two operating systems provide alerting to two autonomous but linked sets of operators, whose views of the system and whose dynamics in response are radically different. System stability and operator action were predicted using the MIDAS model.

  8. Prediction of hot spots in protein interfaces using a random forest model with hybrid features.

    PubMed

    Wang, Lin; Liu, Zhi-Ping; Zhang, Xiang-Sun; Chen, Luonan

    2012-03-01

Prediction of hot spots in protein interfaces provides crucial information for the research on protein-protein interaction and drug design. Existing machine learning methods generally judge whether a given residue is likely to be a hot spot by extracting features only from the target residue. However, hot spots usually form a small cluster of residues which are tightly packed together at the center of protein interface. With this in mind, we present a novel method to extract hybrid features which incorporate a wide range of information of the target residue and its spatially neighboring residues, i.e. the nearest contact residue in the other face (mirror-contact residue) and the nearest contact residue in the same face (intra-contact residue). We provide a novel random forest (RF) model to effectively integrate these hybrid features for predicting hot spots in protein interfaces. Our method can achieve accuracy (ACC) of 82.4% and Matthews correlation coefficient (MCC) of 0.482 in Alanine Scanning Energetics Database, and ACC of 77.6% and MCC of 0.429 in Binding Interface Database. In a comparison study, performance of our RF model exceeds other existing methods, such as Robetta, FOLDEF, KFC, KFC2, MINERVA and HotPoint. Of our hybrid features, three physicochemical features of target residues (mass, polarizability and isoelectric point), the relative side-chain accessible surface area and the average depth index of mirror-contact residues are found to be the main discriminative features in hot spots prediction. We also confirm that hot spots tend to form large contact surface areas between two interacting proteins. Source data and code are available at: http://www.aporc.org/doc/wiki/HotSpot. PMID:22258275
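The modeling setup can be sketched with synthetic feature vectors standing in for the paper's hybrid features, and scikit-learn's RandomForestClassifier as a generic RF implementation (not the authors' exact configuration or data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 400
# Hypothetical hybrid feature columns (e.g. target-residue mass,
# polarizability, isoelectric point, mirror-contact relative ASA,
# mirror-contact depth index) -- synthetic stand-ins, not real data.
X_hot = rng.normal(1.0, 0.5, size=(n // 2, 5))    # "hot spot" residues
X_cold = rng.normal(-1.0, 0.5, size=(n // 2, 5))  # non-hot-spot residues
X = np.vstack([X_hot, X_cold])
y = np.array([1] * (n // 2) + [0] * (n // 2))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
acc = clf.score(X, y)   # training accuracy on separable synthetic data
```

A random forest also exposes `feature_importances_`, which is the kind of ranking the authors use to single out mass, polarizability, and isoelectric point as the main discriminative features.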

  9. Modeling interface exchange coupling: Effect on switching of granular FePt films

    NASA Astrophysics Data System (ADS)

    Abugri, Joseph B.; Visscher, P. B.; Su, Hao; Gupta, Subhadra

    2015-07-01

To raise the areal density of magnetic recording to 1 Tbit/in2, there has been much recent work on the use of FePt granular films, because their high perpendicular anisotropy allows small grains to be stable. However, their coercivity may be higher than available write-head fields. One approach to reduce the coercivity is to heat the grain (heat assisted magnetic recording). Another strategy is to add a soft capping layer to help nucleate switching via exchange coupling with the hard FePt grains. We have simulated a model of such a capped medium and have studied the effect of the strength of the interface exchange and thickness of hard layer and soft layer on the overall coercivity. Although the magnetization variation within such boundary layers may be complex, the net effect of the boundary can often be modeled as an infinitely thin interface characterized by an interface exchange energy density; we show how to do this consistently in a micromagnetic simulation. Although the switching behavior in the presence of exchange, magnetostatic, and external fields is quite complex, we show that by adding these fields one at a time, the main features of the M-H loop can be understood. In particular, we find that even without hard-soft interface exchange, magnetostatic coupling eliminates the zero-field kink in the loop, so that the absence of the kink does not (as has sometimes been assumed) imply exchange coupling. The computations have been done with a public-domain micromagnetics simulator that has been adapted to easily simulate arrays of grains.
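The qualitative effect of a soft capping layer on coercivity can be illustrated with a two-macrospin toy model — a drastic simplification of the micromagnetic simulations described above, with all parameters in reduced units chosen purely for illustration:

```python
import numpy as np

def mh_coercivity(kh, ks, j_int, h_max=3.0, dh=0.05, lr=0.2, iters=1500):
    """Descending-branch coercivity of a hard/soft macrospin pair.

    At each field step, relaxes the reduced energy (field along easy axis)
        E = kh*sin^2(th) + ks*sin^2(ts) - j_int*cos(th - ts)
            - h*(cos(th) + cos(ts))
    by gradient descent, and returns |h| where the net moment
    m = (cos(th) + cos(ts))/2 first drops below -1/2."""
    th = ts = 0.01
    for h in np.arange(h_max, -h_max - dh / 2, -dh):
        th += 0.01
        ts += 0.01                       # small seed to break symmetry
        for _ in range(iters):
            gth = kh * np.sin(2 * th) + j_int * np.sin(th - ts) + h * np.sin(th)
            gts = ks * np.sin(2 * ts) - j_int * np.sin(th - ts) + h * np.sin(ts)
            th -= lr * gth
            ts -= lr * gts
        if 0.5 * (np.cos(th) + np.cos(ts)) < -0.5:
            return float(-h)
    return float("nan")                  # no switching in the swept range
```

With the interface exchange switched on, the soft layer reverses first and helps nucleate switching of the hard grain, lowering the coercive field relative to the uncoupled case — the mechanism exploited by capped media.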

  10. Fracture permeability and seismic wave scattering--Poroelastic linear-slip interface model for heterogeneous fractures

    SciTech Connect

    Nakagawa, S.; Myer, L.R.

    2009-06-15

    Schoenberg's Linear-slip Interface (LSI) model for single, compliant, viscoelastic fractures has been extended to poroelastic fractures for predicting seismic wave scattering. However, this extended model results in no impact of the in-plane fracture permeability on the scattering. Recently, we proposed a variant of the LSI model considering the heterogeneity in the in-plane fracture properties. This modified model considers wave-induced, fracture-parallel fluid flow induced by passing seismic waves. The research discussed in this paper applies this new LSI model to heterogeneous fractures to examine when and how the permeability of a fracture is reflected in the scattering of seismic waves. From numerical simulations, we conclude that the heterogeneity in the fracture properties is essential for the scattering of seismic waves to be sensitive to the permeability of a fracture.
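For the original elastic LSI model, the scattering from a single compliant interface at normal incidence has a closed form. A sketch (identical half-spaces with acoustic impedance Z and interface compliance η; the poroelastic and heterogeneous extensions discussed in the paper modify this frequency-dependent baseline):

```python
import numpy as np

def lsi_reflection(freq_hz, z_imp, eta):
    """|R| and |T| for a normally incident plane wave on a linear-slip
    interface of compliance eta between identical half-spaces of
    acoustic impedance z_imp.  With x = omega*Z*eta/2:
        |R| = x/sqrt(1+x^2),  |T| = 1/sqrt(1+x^2)."""
    x = np.pi * freq_hz * z_imp * eta    # = omega*Z*eta/2
    r = x / np.sqrt(1.0 + x ** 2)
    t = 1.0 / np.sqrt(1.0 + x ** 2)
    return r, t
```

Energy is conserved (|R|² + |T|² = 1) and reflectivity grows with frequency and compliance, which is why a single homogeneous LSI carries no imprint of the in-plane permeability — the heterogeneity is what introduces it.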

  11. Interfacing comprehensive rotorcraft analysis with advanced aeromechanics and vortex wake models

    NASA Astrophysics Data System (ADS)

    Liu, Haiying

    This dissertation describes three aspects of the comprehensive rotorcraft analysis. First, a physics-based methodology for the modeling of hydraulic devices within multibody-based comprehensive models of rotorcraft systems is developed. This newly proposed approach can predict the fully nonlinear behavior of hydraulic devices, and pressure levels in the hydraulic chambers are coupled with the dynamic response of the system. The proposed hydraulic device models are implemented in a multibody code and calibrated by comparing their predictions with test bench measurements for the UH-60 helicopter lead-lag damper. Predicted peak damping forces were found to be in good agreement with measurements, while the model did not predict the entire time history of damper force to the same level of accuracy. The proposed model evaluates relevant hydraulic quantities such as chamber pressures, orifice flow rates, and pressure relief valve displacements. This model could be used to design lead-lag dampers with desirable force and damping characteristics. The second part of this research is in the area of computational aeroelasticity, in which an interface between computational fluid dynamics (CFD) and computational structural dynamics (CSD) is established. This interface enables data exchange between CFD and CSD with the goal of achieving accurate airloads predictions. In this work, a loose coupling approach based on the delta-airloads method is developed in a finite-element method based multibody dynamics formulation, DYMORE. To validate this aerodynamic interface, a CFD code, OVERFLOW-2, is loosely coupled with a CSD program, DYMORE, to compute the airloads of different flight conditions for Sikorsky UH-60 aircraft. This loose coupling approach has good convergence characteristics. The predicted airloads are found to be in good agreement with the experimental data, although not for all flight conditions. 
In addition, the tight coupling interface between the CFD program, OVERFLOW-2, and the CSD program, DYMORE, is also established. The ability to accurately capture the wake structure around a helicopter rotor is crucial for rotorcraft performance analysis. In the third part of this thesis, a new representation of the wake vortex structure based on Non-Uniform Rational B-Spline (NURBS) curves and surfaces is proposed to develop an efficient model for prescribed and free wakes. NURBS curves and surfaces are able to represent complex shapes with remarkably little data. The proposed formulation has the potential to reduce the computational cost associated with the use of Helmholtz's law and the Biot-Savart law when calculating the induced flow field around the rotor. An efficient free-wake analysis will considerably decrease the computational cost of comprehensive rotorcraft analysis, making the approach more attractive to routine use in industrial settings.
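Free-wake analyses like the NURBS-based one described above ultimately evaluate the induced flow field via the Biot-Savart law applied to vortex filaments. As an illustration of that building block (a sketch, not code from the dissertation; geometry and circulation values are arbitrary), the standard straight-segment Biot-Savart formula can be written as:

```python
import math

def biot_savart_segment(p, a, b, gamma, eps=1e-9):
    """Induced velocity at point p from a straight vortex segment a->b
    with circulation gamma (classic Biot-Savart segment formula)."""
    r1 = [p[i] - a[i] for i in range(3)]
    r2 = [p[i] - b[i] for i in range(3)]
    cr = [r1[1] * r2[2] - r1[2] * r2[1],      # r1 x r2
          r1[2] * r2[0] - r1[0] * r2[2],
          r1[0] * r2[1] - r1[1] * r2[0]]
    n1 = math.sqrt(sum(x * x for x in r1))
    n2 = math.sqrt(sum(x * x for x in r2))
    dot = sum(r1[i] * r2[i] for i in range(3))
    denom = n1 * n2 * (n1 * n2 + dot)
    if denom < eps:                            # point on (or near) the filament axis
        return [0.0, 0.0, 0.0]
    k = gamma * (n1 + n2) / (4.0 * math.pi * denom)
    return [k * c for c in cr]

# A very long segment approximates an infinite line vortex, whose induced
# speed at distance h is gamma / (2*pi*h).
v = biot_savart_segment([0.0, 1.0, 0.0], [-1e4, 0.0, 0.0], [1e4, 0.0, 0.0], 1.0)
speed = math.sqrt(sum(x * x for x in v))
```

A free-wake code would sum such contributions over every filament segment of the wake at every collocation point, which is exactly the cost the NURBS representation aims to reduce.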

  12. Automating spectral measurements

    NASA Astrophysics Data System (ADS)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is a mounting need to automate data acquisition (DAQ) from spectrophotometers. This need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) that may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.
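To make the scripted-automation idea concrete, here is a minimal interpreter for a hypothetical scan-scripting language in the spirit of the spectro-BASIC described above. The command names (SET, SCAN, PRINT) and the mock instrument are invented for this sketch and are not FTG's actual syntax; a real system would drive the spectrophotometer instead of synthetic data:

```python
class MockSpectrometer:
    """Stand-in for a real instrument; returns purely synthetic data
    with a transmission peak at 550 nm."""
    def read(self, wavelength_nm):
        return max(0.0, 1.0 - abs(wavelength_nm - 550) / 500.0)

def run_script(script, instrument):
    """Interpret a tiny, invented scan language line by line."""
    env, log = {}, []
    for line in script.strip().splitlines():
        parts = line.split()
        cmd = parts[0].upper()
        if cmd == "SET":                     # SET name value
            env[parts[1]] = float(parts[2])
        elif cmd == "SCAN":                  # SCAN start stop step
            start, stop, step = (float(p) for p in parts[1:4])
            wl = start
            while wl <= stop:
                log.append((wl, instrument.read(wl)))
                wl += step
        elif cmd == "PRINT":                 # PRINT message...
            log.append(("msg", " ".join(parts[1:])))
        else:
            raise ValueError("unknown command: " + cmd)
    return env, log

env, log = run_script("""
SET integration 0.1
SCAN 500 600 50
PRINT done
""", MockSpectrometer())
```

The point of such a layer is the one the paper makes: once DAQ is driven by a script rather than mouse clicks, ancillary steps (barcode reads, stage moves, filter changes) become just more commands in the sequence.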

  13. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem in human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as the independent variables of a comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort, so that operating comfort can be predicted quantitatively. The operating comfort predictions for the human-machine interface layout of a driller control room show that the GEP-based model is fast and efficient, has good prediction accuracy, and can improve design efficiency. PMID:26448740
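GEP evolves fixed-length gene strings (Karva notation) that decode breadth-first into expression trees. A minimal sketch of that decoding-and-evaluation step follows; the gene and terminal values are invented for illustration and are not the paper's comfort model:

```python
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
ARITY = {'+': 2, '-': 2, '*': 2}

def eval_kexpr(gene, terminals):
    """Decode a K-expression level by level (breadth-first) into a tree,
    then evaluate it with the given terminal values."""
    nodes = [[sym, []] for sym in gene]   # [symbol, children]
    queue_idx, next_child = 0, 1
    while next_child < len(nodes) and queue_idx < len(nodes):
        sym = nodes[queue_idx][0]
        for _ in range(ARITY.get(sym, 0)):     # terminals take no children
            if next_child >= len(nodes):
                break
            nodes[queue_idx][1].append(nodes[next_child])
            next_child += 1
        queue_idx += 1

    def ev(node):
        sym, children = node
        if sym in OPS:
            return OPS[sym](ev(children[0]), ev(children[1]))
        return terminals[sym]

    return ev(nodes[0])

# Gene "+*aba" decodes to (b*a) + a: with a=2, b=3 this gives 8.
result = eval_kexpr("+*aba", {'a': 2, 'b': 3})
```

In a full GEP run, a population of such genes would be mutated and recombined, with fitness measured against the 22 comfort evaluations; the decoding step above is what turns each linear chromosome into a candidate prediction function.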

  14. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem in human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as the independent variables of a comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort, so that operating comfort can be predicted quantitatively. The operating comfort predictions for the human-machine interface layout of a driller control room show that the GEP-based model is fast and efficient, has good prediction accuracy, and can improve design efficiency. PMID:26448740

  15. BOADICEA breast cancer risk prediction model: updates to cancer incidences, tumour pathology and web interface

    PubMed Central

    Lee, A J; Cunningham, A P; Kuchenbaecker, K B; Mavaddat, N; Easton, D F; Antoniou, A C

    2014-01-01

    Background: The Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm (BOADICEA) is a risk prediction model that is used to compute probabilities of carrying mutations in the high-risk breast and ovarian cancer susceptibility genes BRCA1 and BRCA2, and to estimate the future risks of developing breast or ovarian cancer. In this paper, we describe updates to the BOADICEA model that extend its capabilities, make it easier to use in a clinical setting and yield more accurate predictions. Methods: We describe: (1) updates to the statistical model to include cancer incidences from multiple populations; (2) updates to the distributions of tumour pathology characteristics using new data on BRCA1 and BRCA2 mutation carriers and women with breast cancer from the general population; (3) improvements to the computational efficiency of the algorithm so that risk calculations now run substantially faster; and (4) updates to the model's web interface to accommodate these new features and to make it easier to use in a clinical setting. Results: We present results derived using the updated model, and demonstrate that the changes have a significant impact on risk predictions. Conclusion: All updates have been implemented in a new version of the BOADICEA web interface that is now available for general use: http://ccge.medschl.cam.ac.uk/boadicea/. PMID:24346285

  16. AUTOMATED GIS WATERSHED ANALYSIS TOOLS FOR RUSLE/SEDMOD SOIL EROSION AND SEDIMENTATION MODELING

    EPA Science Inventory

    A comprehensive procedure for computing soil erosion and sediment delivery metrics has been developed using a suite of automated Arc Macro Language (AML ) scripts and a pair of processing- intensive ANSI C++ executable programs operating on an ESRI ArcGIS 8.x Workstation platform...

  17. Methods of modeling open mine workings and decision making in automated planning and control

    SciTech Connect

    Tabakman, I.B.

    1987-11-01

    This article describes methods for the computerized simulation of surface mines and mining operations, based on the mapping and graphing of reserve and exploitation levels and on state-of-the-art pattern recognition algorithms implementable on microprocessors. The aim is to automate control and planning scenarios and to create an information system that assists in operational decision making.

  18. Automating the analytical laboratory via the Chemical Analysis Automation paradigm

    SciTech Connect

    Hollen, R.; Rzeszutko, C.

    1997-10-01

    To address the need for standardization within the analytical chemistry laboratories of the nation, the Chemical Analysis Automation (CAA) program within the US Department of Energy, Office of Science and Technology's Robotic Technology Development Program is developing laboratory sample analysis systems that will automate the environmental chemical laboratories. The current laboratory automation paradigm consists of islands-of-automation that do not integrate into a system architecture. Thus, today the chemist must perform most aspects of environmental analysis manually using instrumentation that generally cannot communicate with other devices in the laboratory. CAA is working towards a standardized and modular approach to laboratory automation based upon the Standard Analysis Method (SAM) architecture. Each SAM system automates a complete chemical method. The building block of a SAM is known as the Standard Laboratory Module (SLM). The SLM, either hardware or software, automates a subprotocol of an analysis method and can operate as a standalone or as a unit within a SAM. The CAA concept allows the chemist to easily assemble an automated analysis system, from sample extraction through data interpretation, using standardized SLMs without the worry of hardware or software incompatibility or the necessity of generating complicated control programs. A Task Sequence Controller (TSC) software program schedules and monitors the individual tasks to be performed by each SLM configured within a SAM. The chemist interfaces with the operation of the TSC through the Human Computer Interface (HCI), a logical, icon-driven graphical user interface. The CAA paradigm has successfully been applied in automating EPA SW-846 Methods 3541/3620/8081 for the analysis of PCBs in a soil matrix utilizing commercially available equipment in tandem with SLMs constructed by CAA.
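The SAM/SLM/TSC decomposition described above can be sketched as a controller that sequences module subprotocols, passing each module's output to the next. The module names and sample data below are invented for illustration, not CAA's actual interfaces:

```python
class SLM:
    """A Standard Laboratory Module: one subprotocol of a method."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def run(self, sample):
        return self.fn(sample)

def task_sequence_controller(modules, sample):
    """Schedule and monitor each SLM configured within the SAM,
    recording what each step produced."""
    history = []
    for slm in modules:
        sample = slm.run(sample)
        history.append((slm.name, sample))
    return sample, history

# A toy SAM: extraction -> cleanup -> interpretation.
sam = [
    SLM("extract",   lambda s: dict(s, extracted=True)),
    SLM("cleanup",   lambda s: dict(s, cleaned=True)),
    SLM("interpret", lambda s: dict(s, result="below detection limit")),
]
final, history = task_sequence_controller(sam, {"id": "soil-001"})
```

The design point is the one the abstract makes: because every SLM exposes the same `run` interface, a chemist can recompose a method by reordering or swapping modules without writing new control code.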

  19. Dual-Phase-Lag Model of Wave Propagation at the Interface Between Elastic and Thermoelastic Diffusion Media

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Gupta, V.

    2015-01-01

    A dual-phase-lag diffusion model is proposed that augments Fick's law with the delay times of the mass flow and of the potential gradient at the interface between two media. The effects of reflection and refraction of plane waves at the interface between elastic and thermoelastic diffusion media were investigated using this model. It was established that the ratios between the amplitudes and energies of the waves reflected and refracted at this interface are determined by the angle of incidence, the frequency of the incident wave, and the thermoelastic and diffusion properties of the media. Expressions for the ratios between the energies of the various reflected and refracted waves and the energy of the incident wave were derived. The variation of these ratios with the angle of incidence was calculated numerically and presented graphically.

  20. A System for Automated Extraction of Metadata from Scanned Documents using Layout Recognition and String Pattern Search Models

    PubMed Central

    Misra, Dharitri; Chen, Siyuan; Thoma, George R.

    2010-01-01

    One of the most expensive aspects of archiving digital documents is the manual acquisition of context-sensitive metadata useful for the subsequent discovery of, and access to, the archived items. For certain types of textual documents, such as journal articles, pamphlets, official government records, etc., where the metadata is contained within the body of the documents, a cost effective method is to identify and extract the metadata in an automated way, applying machine learning and string pattern search techniques. At the U. S. National Library of Medicine (NLM) we have developed an automated metadata extraction (AME) system that employs layout classification and recognition models with a metadata pattern search model for a text corpus with structured or semi-structured information. A combination of Support Vector Machine and Hidden Markov Model is used to create the layout recognition models from a training set of the corpus, following which a rule-based metadata search model is used to extract the embedded metadata by analyzing the string patterns within and surrounding each field in the recognized layouts. In this paper, we describe the design of our AME system, with focus on the metadata search model. We present the extraction results for a historic collection from the Food and Drug Administration, and outline how the system may be adapted for similar collections. Finally, we discuss some ongoing enhancements to our AME system. PMID:21179386
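A rule-based string-pattern search of the kind the AME system applies after layout recognition can be sketched with regular expressions keyed by field name. The patterns and the sample text below are invented for illustration and are not NLM's actual rules:

```python
import re

# Hypothetical field rules: each regex either captures a group or the
# whole match is taken as the field value.
RULES = {
    "title": re.compile(r"^Title:\s*(.+)$", re.MULTILINE),
    "date":  re.compile(r"\b(\d{4}-\d{2}-\d{2})\b"),
    "docid": re.compile(r"\bFDA-\d{4}-\d+\b"),
}

def extract_metadata(text, rules=RULES):
    """Apply each field's pattern to the recognized text."""
    found = {}
    for field, pattern in rules.items():
        m = pattern.search(text)
        if m:
            found[field] = m.group(1) if m.groups() else m.group(0)
    return found

page = """Title: Notice of Hearing
Docket FDA-1977-0042, published 1977-03-15."""
meta = extract_metadata(page)
```

In the full system this step runs only within the layout zones that the SVM/HMM classifiers have already labeled, which is what keeps simple patterns like these from misfiring across an entire page.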

  1. ModelMuse - A Graphical User Interface for MODFLOW-2005 and PHAST

    USGS Publications Warehouse

    Winston, Richard B.

    2009-01-01

    ModelMuse is a graphical user interface (GUI) for the U.S. Geological Survey (USGS) models MODFLOW-2005 and PHAST. This software package provides a GUI for creating the flow and transport input file for PHAST and the input files for MODFLOW-2005. In ModelMuse, the spatial data for the model is independent of the grid, and the temporal data is independent of the stress periods. Being able to input these data independently allows the user to redefine the spatial and temporal discretization at will. This report describes the basic concepts required to work with ModelMuse. These basic concepts include the model grid, data sets, formulas, objects, the method used to assign values to data sets, and model features. The ModelMuse main window has a top, front, and side view of the model that can be used for editing the model, and a 3-D view of the model that can be used to display properties of the model. ModelMuse has tools to generate and edit the model grid. It also has a variety of interpolation methods and geographic functions that can be used to help define the spatial variability of the model. ModelMuse can be used to execute both MODFLOW-2005 and PHAST and can also display the results of MODFLOW-2005 models. An example of using ModelMuse with MODFLOW-2005 is included in this report. Several additional examples are described in the help system for ModelMuse, which can be accessed from the Help menu.
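One common interpolation method for defining spatial variability from scattered observations (e.g., hydraulic properties measured at wells) is inverse-distance weighting. The sketch below illustrates the idea only; it is not ModelMuse's actual implementation:

```python
import math

def idw(points, query, power=2.0):
    """Inverse-distance-weighted interpolation of scattered (x, y, value)
    data at a query point."""
    num = den = 0.0
    for (x, y, value) in points:
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return value              # exactly on a data point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Two hypothetical wells with measured values 10 and 20:
wells = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0)]
mid = idw(wells, (0.5, 0.0))          # midpoint gets equal weights
```

Because the interpolated field is defined everywhere, it can be sampled onto any grid, which is the property that lets a GUI keep spatial data independent of the discretization.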

  2. Work Practice Simulation of Complex Human-Automation Systems in Safety Critical Situations: The Brahms Generalized Überlingen Model

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Linde, Charlotte; Seah, Chin; Shafto, Michael

    2013-01-01

    The transition from the current air traffic system to the next generation air traffic system will require the introduction of new automated systems, including transferring some functions from air traffic controllers to on-board automation. This report describes a new design verification and validation (V&V) methodology for assessing aviation safety. The approach involves a detailed computer simulation of work practices that includes people interacting with flight-critical systems. The research is part of an effort to develop new modeling and verification methodologies that can assess the safety of flight-critical systems, system configurations, and operational concepts. The 2002 Überlingen mid-air collision was chosen for analysis and modeling because one of the main causes of the accident was one crew's response to a conflict between the instructions of the air traffic controller and the instructions of TCAS, the automated on-board Traffic Alert and Collision Avoidance System. It thus furnishes an example of the problem of authority versus autonomy. It provides a starting point for exploring authority/autonomy conflict in the larger system of organization, tools, and practices in which the participants' moment-by-moment actions take place. We have developed a general air traffic system model (not a specific simulation of Überlingen events), called the Brahms Generalized Ueberlingen Model (Brahms-GUeM). Brahms is a multi-agent simulation system that models people, tools, facilities/vehicles, and geography to simulate the current air transportation system as a collection of distributed, interactive subsystems (e.g., airports, air-traffic control towers and personnel, aircraft, automated flight systems and air-traffic tools, instruments, crew). 
Brahms-GUeM can be configured in different ways, called scenarios, such that anomalous events that contributed to the Überlingen accident can be modeled as functioning according to requirements or in an anomalous condition, as occurred during the accident. Brahms-GUeM thus implicitly defines a class of scenarios, which includes as an instance what occurred at Überlingen. Brahms-GUeM is a modeling framework enabling "what if" analysis of alternative work system configurations, thus facilitating the design of alternative operations concepts. It enables subsequent adaptation (reusing simulation components) for modeling and simulating NextGen scenarios. This project demonstrates that Brahms provides the capacity to model the complexity of air transportation systems, going beyond idealized and simple flights to include, for example, the interaction of pilots and ATCOs. The research shows clearly that verification and validation must include the entire work system: on the one hand, to check that mechanisms exist to handle failures of communication and alerting subsystems and/or failures of people to notice, comprehend, or communicate problematic (unsafe) situations; but also to understand how people must use their own judgment in relating fallible systems like TCAS to other sources of information, and thus to evaluate how the unreliability of automation affects system safety. The simulation shows in particular that distributed agents (people and automated systems) acting without knowledge of each other's actions can create a complex, dynamic system whose interactive behavior is unexpected and changing too quickly to comprehend and control.

  3. In-situ Studies of Structures and Processes at Model Battery Electrode/Electrolyte Interfaces

    NASA Astrophysics Data System (ADS)

    Fenter, Paul

    2015-03-01

    The ability to understand and control materials properties within electrochemical energy storage systems is a significant scientific and technical challenge. This is due, at least in part, to the extreme conditions present within these systems, and the significant structural and chemical changes that take place as lithium ions are incorporated in the active electrode material. In particular, the behavior of interfaces in such systems is poorly understood, notably the solid-liquid interface that separates the electrode and the liquid electrolyte. I will review our recent work in which we seek to isolate and understand the role of interfacial reactivity in these systems through in-situ, real-time observations of electrochemically driven lithiation/delithiation reactions. This is achieved by observing well-defined model electrode-electrolyte interfaces using X-ray reflectivity. These results yield new insights into interfacial reactivity in conversion reactions (e.g., Si, SixCr, Ge, NiO) that can be used to control the complex lithiation reaction pathway through the use of thin-film and multilayer electrode structures. This work was supported by the Center for Electrochemical Energy Science, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, in collaboration with T. Fister, A. Gewirth, M.J. Bedzyk and others.

  4. Improved modeling of electrified interfaces using the effective screening medium method

    NASA Astrophysics Data System (ADS)

    Hamada, Ikutaro; Sugino, Osamu; Bonnet, Nicéphore; Otani, Minoru

    2014-03-01

    The effective screening medium (ESM) method has been developed as a way to simulate electrified interfaces within a first-principles framework using periodic boundary conditions. Given a slab geometry standing for the interface, the ESM method allows filling the region away from the slab with a dielectric screening medium (the ESM per se) as a simple way to include the electrostatic screening effect of the environment. In the original version of the ESM method, the relative permittivity changes discontinuously from ε = 1 to ε > 1 at the boundary located between the molecular system and the ESM, which causes numerical instability when the electron density of the molecular system touches the boundary. Here we improve upon the description of the screening medium by imposing a smooth transition of the dielectric permittivity between the molecular system and the ESM (smooth ESM), thus precluding numerical instabilities when molecules come in contact with the ESM. Moreover, at short distances, the smooth ESM acts as a repulsive wall, and thus the simulation cell can serve as a natural container for molecules in molecular dynamics simulations. Consequently, the smooth ESM method is a substantial advancement in modeling solid-liquid interfaces under electric bias.
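The idea of a smooth permittivity transition can be illustrated with, for example, an error-function profile rising from the molecular region (ε = 1) to the bulk medium (ε > 1). The functional form and the parameter values below are assumptions for the sketch, not necessarily the paper's exact smoothing function:

```python
import math

def smooth_eps(z, z0, width, eps_bulk):
    """Permittivity rising smoothly from 1 to eps_bulk around z = z0,
    over a transition region of the given width."""
    return 1.0 + (eps_bulk - 1.0) * 0.5 * (1.0 + math.erf((z - z0) / width))

# Far inside the molecular region eps -> 1; deep in the medium -> eps_bulk
# (80 is a water-like value, chosen only for illustration).
left   = smooth_eps(-10.0, 0.0, 1.0, 80.0)   # ~1
middle = smooth_eps(  0.0, 0.0, 1.0, 80.0)   # midpoint: (1 + 80) / 2
right  = smooth_eps( 10.0, 0.0, 1.0, 80.0)   # ~80
```

Any such continuous profile removes the discontinuity that made the original ESM unstable when the electron density reached the boundary, which is the essential point of the smooth-ESM modification.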

  5. Development and implementation of (Q)SAR modeling within the CHARMMing Web-user interface

    PubMed Central

    Weidlich, Iwona E.; Pevzner, Yuri; Miller, Benjamin T.; Filippov, Igor V.; Woodcock, H. Lee; Brooks, Bernard R.

    2014-01-01

    Recent availability of large publicly accessible databases of chemical compounds and their biological activities (PubChem, ChEMBL) has inspired us to develop a Web-based tool for SAR and QSAR modeling to add to the services provided by CHARMMing (www.charmming.org). This new module implements some of the most recent advances in modern machine learning algorithms – Random Forest, Support Vector Machine (SVM), Stochastic Gradient Descent, Gradient Tree Boosting etc. A user can import training data from Pubchem Bioassay data collections directly from our interface or upload his or her own SD files which contain structures and activity information to create new models (either categorical or numerical). A user can then track the model generation process and run models on new data to predict activity. PMID:25362883

  6. A model of blind zone for in situ monitoring the solid/liquid interface using ultrasonic wave.

    PubMed

    Peng, Song; Ouyang, Qi; Zhu, Z Z; Zhang, X L

    2015-07-01

    To monitor a solid/liquid interface in situ for the control of metal quality, this paper analyzes blind zones in ultrasonic propagation through solidifying molten metal with a solid/liquid interface in a Bridgman-type furnace, and builds a mathematical model of the blind zone for different source locations and surface concavities. The study shows that blind zone I is caused by ray bending at the interface edge, while blind zone II is caused by total reflection, which depends on the initial ray angle and the critical refraction angle of the solid/liquid media. A series of simulation experiments based on the model show that the numerical results agree well with the model calculations. The receiver should therefore be located beyond these blind zones on the right boundary to obtain the time-of-flight data used to reconstruct the solid/liquid interface. PMID:25783779
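The total-reflection condition behind blind zone II follows Snell's law: a ray travelling from the slower medium toward the faster one is totally reflected once its incidence angle exceeds the critical angle arcsin(v1/v2). A sketch with rough, illustrative sound speeds (not the paper's material data):

```python
import math

def critical_angle_deg(v_slow, v_fast):
    """Critical incidence angle (degrees) for a ray going from the slower
    medium into the faster one; beyond it, refraction is impossible and
    the ray is totally reflected."""
    if v_slow >= v_fast:
        return None          # no total reflection in this direction
    return math.degrees(math.asin(v_slow / v_fast))

# Illustrative values: a melt at ~4700 m/s against a solid at ~6300 m/s.
theta_c = critical_angle_deg(4700.0, 6300.0)
```

Rays steeper than `theta_c` never reach the far side of the interface, so a receiver placed in the corresponding angular sector records no transmitted arrival; this is why the paper's receiver placement must avoid the computed blind zones.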

  7. Particles at fluid-fluid interfaces: A new Navier-Stokes-Cahn-Hilliard surface-phase-field-crystal model

    NASA Astrophysics Data System (ADS)

    Aland, Sebastian; Lowengrub, John; Voigt, Axel

    2012-10-01

    Colloid particles that are partially wetted by two immiscible fluids can become confined to fluid-fluid interfaces. At sufficiently high volume fractions, the colloids may jam and the interface may crystallize. The fluids together with the interfacial colloids form an emulsion with interesting material properties and offer an important route to new soft materials. A promising approach to simulate these emulsions was presented in Aland et al. [Phys. Fluids 23, 062103 (2011)], where a Navier-Stokes-Cahn-Hilliard model for the macroscopic two-phase fluid system was combined with a surface phase-field-crystal model for the microscopic colloidal particles along the interface. Unfortunately this model leads to spurious velocities which require very fine spatial and temporal resolutions to accurately and stably simulate. In this paper we develop an improved Navier-Stokes-Cahn-Hilliard-surface phase-field-crystal model based on the principles of mass conservation and thermodynamic consistency. To validate our approach, we derive a sharp interface model and show agreement with the improved diffuse interface model. Using simple flow configurations, we show that the new model has much better properties and does not lead to spurious velocities. Finally, we demonstrate the solid-like behavior of the crystallized interface by simulating the fall of a solid ball through a colloid-laden multiphase fluid.

  8. Validation of a digital mammographic unit model for an objective and highly automated clinical image quality assessment.

    PubMed

    Perez-Ponce, Hector; Daul, Christian; Wolf, Didier; Noel, Alain

    2013-08-01

    In mammography, image quality assessment has to be directly related to breast cancer indicator (e.g. microcalcifications) detectability. Recently, we proposed an X-ray source/digital detector (XRS/DD) model leading to such an assessment. This model simulates very realistic contrast-detail phantom (CDMAM) images leading to gold disc (representing microcalcifications) detectability thresholds that are very close to those of real images taken under the simulated acquisition conditions. The detection step was performed with a mathematical observer. The aim of this contribution is to include human observers into the disc detection process in real and virtual images to validate the simulation framework based on the XRS/DD model. Mathematical criteria (contrast-detail curves, image quality factor, etc.) are used to assess and to compare, from the statistical point of view, the cancer indicator detectability in real and virtual images. The quantitative results given in this paper show that the images simulated by the XRS/DD model are useful for image quality assessment in the case of all studied exposure conditions using either human or automated scoring. Also, this paper confirms that with the XRS/DD model the image quality assessment can be automated and the whole time of the procedure can be drastically reduced. Compared to standard quality assessment methods, the number of images to be acquired is divided by a factor of eight. PMID:23207102

  9. Cockpit automation - In need of a philosophy

    NASA Technical Reports Server (NTRS)

    Wiener, E. L.

    1985-01-01

    Concern has been expressed over the rapid development and deployment of automatic devices in transport aircraft, due mainly to the human interface and particularly the role of automation in inducing human error. The paper discusses the need for coherent philosophies of automation, and proposes several approaches: (1) flight management by exception, which states that as long as a crew stays within the bounds of regulations, air traffic control and flight safety, it may fly as it sees fit; (2) exceptions by forecasting, where the use of forecasting models would predict boundary penetration, rather than waiting for it to happen; (3) goal-sharing, where a computer is informed of overall goals, and subsequently has the capability of checking inputs and aircraft position for consistency with the overall goal or intentions; and (4) artificial intelligence and expert systems, where intelligent machines could mimic human reason.
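The "exceptions by forecasting" idea can be sketched as projecting the aircraft state forward and raising an exception before a boundary is penetrated, rather than after. The quantities and limits below are toy values chosen only to illustrate the logic:

```python
def forecast_exception(altitude_ft, climb_rate_fpm, floor_ft, horizon_min):
    """Return minutes until a minimum-altitude floor would be penetrated,
    or None if the forecast stays above the floor over the look-ahead
    horizon (toy model: constant climb/descent rate)."""
    if climb_rate_fpm >= 0:
        return None                    # level or climbing: no penetration
    minutes = (altitude_ft - floor_ft) / (-climb_rate_fpm)
    return minutes if minutes <= horizon_min else None

# Descending at 1000 ft/min from 8000 ft toward a 5000 ft floor:
warn = forecast_exception(8000, -1000, 5000, horizon_min=5)   # flags in 3.0 min
ok   = forecast_exception(8000, -200, 5000, horizon_min=5)    # 15 min away: None
```

A real implementation would of course use full trajectory prediction rather than a constant rate; the point of the sketch is the philosophy the paper argues for, namely alerting on forecast boundary penetration instead of waiting for it to happen.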

  10. Modulation depth estimation and variable selection in state-space models for neural interfaces.

    PubMed

    Malik, Wasim Q; Hochberg, Leigh R; Donoghue, John P; Brown, Emery N

    2015-02-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. 
PMID:25265627
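The channel-ranking idea can be illustrated with a simplified stand-in: score each neural channel by the strength of its linear tuning to a behavioral covariate (here a plain R², not the paper's state-space modulation depth) and keep the top k. The data below are synthetic:

```python
import random

def r_squared(x, y):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return 0.0 if sxx == 0 or syy == 0 else (sxy * sxy) / (sxx * syy)

def rank_channels(channels, covariate, k):
    """Rank channels by tuning strength and keep the top k."""
    scores = {name: r_squared(covariate, sig) for name, sig in channels.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Synthetic example: one strongly tuned, one weakly tuned, one untuned channel.
rng = random.Random(0)
velocity = [rng.uniform(-1, 1) for _ in range(200)]
channels = {
    "tuned":   [2.0 * v + 0.1 * rng.gauss(0, 1) for v in velocity],
    "weak":    [0.2 * v + 1.0 * rng.gauss(0, 1) for v in velocity],
    "untuned": [rng.gauss(0, 1) for _ in range(200)],
}
best = rank_channels(channels, velocity, k=2)
```

The paper's contribution is doing this ranking within a dynamical state-space model and choosing k via model order selection criteria, rather than the fixed k and static correlation used in this sketch.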

  11. Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces

    PubMed Central

    Malik, Wasim Q.; Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.

    2015-01-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. 
PMID:25265627
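    A channel-selection step of this kind can be sketched directly. The snippet below uses a simple linear-regression proxy for modulation depth (the paper's measure is defined in a state-space framework) and ranks channels before keeping a top subset; all function names and data shapes are illustrative.

```python
import numpy as np

def modulation_depth(spike_rates, kinematics):
    """Proxy modulation depth per channel: fraction of the channel's variance
    explained by a linear fit to the behavioral covariate. A simplified
    stand-in for the paper's state-space modulation depth measure."""
    X = np.column_stack([kinematics, np.ones(len(kinematics))])
    depths = []
    for ch in range(spike_rates.shape[1]):
        y = spike_rates[:, ch]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        depths.append(1.0 - resid.var() / y.var())
    return np.array(depths)

def select_channels(depths, k):
    """Rank channels by modulation depth and keep the k most informative."""
    return np.argsort(depths)[::-1][:k]
```

    In the paper the subset size itself is chosen by model order selection criteria rather than a fixed k.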

  12. Automation for System Safety Analysis

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
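    The graph analysis in task 3 boils down to enumerating directed paths from hazard sources to vulnerable entities in a component-connection graph. A minimal sketch (component names are illustrative, not drawn from the Orion models):

```python
from collections import deque

def hazard_paths(edges, source, target):
    """Enumerate all simple directed paths from a hazard source to a
    vulnerable entity in a component-connection graph."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:  # simple paths only; skip cycles
                queue.append(path + [nxt])
    return paths
```

    Each returned path is a candidate propagation scenario to evaluate under nominal and anomalous configurations, and hence a candidate for software integration testing.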

  13. High-frequency surface waves at a plasma-metal interface: I. Linear model

    SciTech Connect

    Dvinin, S. A.; Vologirov, A. G.; Mikheev, V. V.; Sviridkina, V. S.

    2008-08-15

    A study is made of the dispersion properties of surface waves at a plasma-metal interface under thermodynamically nonequilibrium conditions such that a space charge sheath forms at the plasma boundary. In the simplest model, the sheath is described as a dielectric with a given permittivity. The wave parameters in a highly collisional plasma are discussed. The effect of interaction between waves propagating near the opposite plasma boundaries is considered, in particular, for space charge sheaths of different thicknesses. Conditions are determined under which the parameters of surface waves are substantially altered by the plasma-sheath geometric resonance.
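    For orientation only, the textbook dispersion relation for an electromagnetic surface wave at a single plasma-dielectric boundary (with the sheath treated as the dielectric) is the simplest limit of such a model; this background relation, not the paper's sheath-inclusive result, reads:

```latex
% Surface-wave dispersion at a plasma-dielectric boundary (single-interface limit).
% \varepsilon_d is the sheath/dielectric permittivity; \nu is the electron
% collision frequency, relevant to the highly collisional regime discussed here.
k_x \;=\; \frac{\omega}{c}\,
  \sqrt{\frac{\varepsilon_p(\omega)\,\varepsilon_d}{\varepsilon_p(\omega)+\varepsilon_d}},
\qquad
\varepsilon_p(\omega) \;=\; 1 - \frac{\omega_p^{2}}{\omega\,(\omega + i\nu)} .
```

    The finite sheath thickness and the metal wall modify this picture, which is what produces the plasma-sheath geometric resonance noted in the abstract.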

  14. Molecules to modeling: Toxoplasma gondii oocysts at the human–animal–environment interface

    PubMed Central

    VanWormer, Elizabeth; Fritz, Heather; Shapiro, Karen; Mazet, Jonna A.K.; Conrad, Patricia A.

    2013-01-01

    Environmental transmission of extremely resistant Toxoplasma gondii oocysts has resulted in infection of diverse species around the world, leading to severe disease and deaths in human and animal populations. This review explores T. gondii oocyst shedding, survival, and transmission, emphasizing the importance of linking laboratory and landscape from molecular characterization of oocysts to watershed-level models of oocyst loading and transport in terrestrial and aquatic systems. Building on discipline-specific studies, a One Health approach incorporating tools and perspectives from diverse fields and stakeholders has contributed to an advanced understanding of T. gondii and is addressing transmission at the rapidly changing human–animal–environment interface. PMID:23218130

  15. Modelling the Bioelectronic Interface in Engineered Tethered Membranes: From Biosensing to Electroporation.

    PubMed

    Hoiles, William; Krishnamurthy, Vikram; Cornell, Bruce

    2015-06-01

    This paper studies the construction and predictive models of three novel measurement platforms: (i) a Pore Formation Measurement Platform (PFMP) for detecting the presence of pore forming proteins and peptides, (ii) the Ion Channel Switch (ICS) biosensor for detecting the presence of analyte molecules in a fluid chamber, and (iii) an Electroporation Measurement Platform (EMP) that provides reliable measurements of the electroporation phenomenon. Common to all three measurement platforms is that they are composed of an engineered tethered membrane that is formed via a rapid solvent exchange technique, allowing the platform to have a lifetime of several months. The membrane is tethered to a gold electrode bioelectronic interface that includes an ionic reservoir separating the membrane and gold surface, allowing the membrane to mimic the physiological response of natural cell membranes. The electrical response of the PFMP, ICS, and EMP is predicted using continuum theories for electrodiffusive flow coupled with boundary conditions for modelling chemical reactions and electrical double layers present at the bioelectronic interface. Experimental measurements are used to validate the predictive accuracy of the dynamic models. These include using the PFMP for measuring the pore formation dynamics of the antimicrobial peptide PGLa and the protein toxin Staphylococcal α-Hemolysin; the ICS biosensor for measuring nano-molar concentrations of streptavidin, ferritin, thyroid stimulating hormone (TSH), and human chorionic gonadotropin (pregnancy hormone hCG); and the EMP for measuring electroporation of membranes with different tethering densities and membrane compositions. PMID:25373111
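    To first order, the electrical response of such a tethered-membrane cell is often summarized by a lumped equivalent circuit before resorting to the full electrodiffusive model: electrolyte resistance in series with the membrane (conductance in parallel with capacitance) and the double-layer capacitance at the gold surface. A hedged sketch; all element values are illustrative, not fitted to these platforms.

```python
import numpy as np

def membrane_impedance(freq_hz, R_e=1e3, C_m=10e-9, G_m=1e-6, C_dl=100e-9):
    """Impedance of a lumped tethered-membrane model at frequency freq_hz:
    R_e (electrolyte) in series with G_m || C_m (membrane) and C_dl
    (double layer at the bioelectronic interface)."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    Z_m = 1.0 / (G_m + 1j * w * C_m)   # membrane: conductance || capacitance
    Z_dl = 1.0 / (1j * w * C_dl)       # electrical double layer at the gold
    return R_e + Z_m + Z_dl
```

    Pore formation and electroporation both show up in such a model as increases in the membrane conductance term G_m.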

  16. Development of a semi-automated method for mitral valve modeling with medial axis representation using 3D ultrasound

    PubMed Central

    Pouch, Alison M.; Yushkevich, Paul A.; Jackson, Benjamin M.; Jassar, Arminder S.; Vergnat, Mathieu; Gorman, Joseph H.; Gorman, Robert C.; Sehgal, Chandra M.

    2012-01-01

    Purpose: Precise 3D modeling of the mitral valve has the potential to improve our understanding of valve morphology, particularly in the setting of mitral regurgitation (MR). Toward this goal, the authors have developed a user-initialized algorithm for reconstructing valve geometry from transesophageal 3D ultrasound (3D US) image data. Methods: Semi-automated image analysis was performed on transesophageal 3D US images obtained from 14 subjects with MR ranging from trace to severe. Image analysis of the mitral valve at midsystole had two stages: user-initialized segmentation and 3D deformable modeling with continuous medial representation (cm-rep). Semi-automated segmentation began with user identification of valve location in 2D projection images generated from 3D US data. The mitral leaflets were then automatically segmented in 3D using the level set method. Second, a bileaflet deformable medial model was fitted to the binary valve segmentation by Bayesian optimization. The resulting cm-rep provided a visual reconstruction of the mitral valve, from which localized measurements of valve morphology were automatically derived. The features extracted from the fitted cm-rep included annular area, annular circumference, annular height, intercommissural width, septolateral length, total tenting volume, and percent anterior tenting volume. These measurements were compared to those obtained by expert manual tracing. Regurgitant orifice area (ROA) measurements were compared to qualitative assessments of MR severity. The accuracy of valve shape representation with cm-rep was evaluated in terms of the Dice overlap between the fitted cm-rep and its target segmentation. Results: The morphological features and anatomic ROA derived from semi-automated image analysis were consistent with manual tracing of 3D US image data and with qualitative assessments of MR severity made by clinical radiologists. 
The fitted cm-reps accurately captured valve shape and demonstrated patient-specific differences in valve morphology among subjects with varying degrees of MR severity. Minimal variation in the Dice overlap and morphological measurements was observed when different cm-rep templates were used to initialize model fitting. Conclusions: This study demonstrates the use of deformable medial modeling for semi-automated 3D reconstruction of mitral valve geometry using transesophageal 3D US. The proposed algorithm provides a parametric geometrical representation of the mitral leaflets, which can be used to evaluate valve morphology in clinical ultrasound images. PMID:22320803
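    The Dice overlap used to score the fitted cm-rep against its target segmentation is simple to compute from two binary masks:

```python
import numpy as np

def dice_overlap(seg_a, seg_b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary segmentations."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree trivially
    return 2.0 * np.logical_and(a, b).sum() / denom
```

    A Dice value of 1 indicates perfect overlap between the fitted model and the level-set segmentation; 0 indicates no overlap at all.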

  17. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler's expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. 
Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.

  18. A remote sensing change detection system based on neighborhood/object correlation image analysis, expert systems, and automated calibration model

    NASA Astrophysics Data System (ADS)

    Im, Jungho

    Changes in biophysical materials and human-made features on Earth (e.g., land cover change) are important environmental characteristics that affect human life as well as ecosystems. Many scientists have been working on algorithms to extract accurate change information through time at local to global scales. Remote sensing and GIS-assisted change detection methods have been used to identify change information for dynamic biophysical materials and man-made features. This dissertation research designed and developed new change detection algorithms and integrated them into a remote sensing change detection system (RSCDS). The system was implemented as a dynamic linked library (DLL) in ESRI ArcMap 9.1 using Visual Basic. A variety of modules were developed within the system. Three key modules of the system included neighborhood/object correlation image analysis, an expert decision tree inference engine, and an automated binary change detection model using a threshold-based calibration approach. The three modules were evaluated using three case studies: (1) a change detection model based on neighborhood correlation image analysis and decision tree classification, (2) object-based change detection using correlation image analysis and image segmentation techniques, and (3) an automated binary change detection model using a threshold-based calibration approach. Seven research hypotheses were tested using these case studies. An important feature of the RSCDS is flexibility. Since the system was designed for general change detection purposes, it can be applied to any change detection study area. Another feature of the system is transportability. Since a number of different spatial/aspatial data, modules, and software packages were integrated into the single system (i.e., RSCDS), it is straightforward to use and/or implement the data and modules. Cost-effectiveness is a major advantage of the RSCDS. 
A number of processes necessary for change detection were automated within the system, allowing considerable processing time and labor to be saved.
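    The neighborhood correlation idea can be illustrated with a direct (unoptimized) sliding-window Pearson correlation between two co-registered image dates; low correlation flags likely change. This is a simplified sketch, not the RSCDS module itself.

```python
import numpy as np

def neighborhood_correlation(img1, img2, radius=1):
    """Per-pixel Pearson correlation over a (2*radius+1)^2 neighborhood
    between two co-registered image dates; border pixels are left NaN."""
    h, w = img1.shape
    corr = np.full((h, w), np.nan)
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            a = img1[i-radius:i+radius+1, j-radius:j+radius+1].ravel()
            b = img2[i-radius:i+radius+1, j-radius:j+radius+1].ravel()
            if a.std() > 0 and b.std() > 0:  # avoid division by zero variance
                corr[i, j] = np.corrcoef(a, b)[0, 1]
    return corr
```

    Thresholding the correlation image, as in the automated threshold-based calibration step, then yields a binary change mask.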

  19. Diffusion-controlled interface kinetics-inclusive system-theoretic propagation models for molecular communication systems

    NASA Astrophysics Data System (ADS)

    Chude-Okonkwo, Uche A. K.; Malekian, Reza; Maharaj, B. T.

    2015-12-01

    Inspired by biological systems, molecular communication has been proposed as a new communication paradigm that uses biochemical signals to transfer information from one nano device to another over a short distance. The biochemical nature of the information transfer process implies that for molecular communication purposes, the development of molecular channel models should take into consideration the diffusion phenomenon as well as the physical/biochemical kinetic possibilities of the process. The physical and biochemical kinetics arise at the interfaces between the diffusion channel and the transmitter/receiver units. These interfaces are herein termed molecular antennas. In this paper, we present the deterministic propagation model of the molecular communication between an immobilized nanotransmitter and nanoreceiver, where the emission and reception kinetics are taken into consideration. Specifically, we derived closed-form system-theoretic models and expressions for configurations that represent different communication systems based on the type of molecular antennas used. The antennas considered are the nanopores at the transmitter and the surface receptor proteins/enzymes at the receiver. The developed models are simulated to show the influence of parameters such as the receiver radius, surface receptor protein/enzyme concentration, and various reaction rate constants. Results show that the effective receiver surface area and the rate constants are important to the system's output performance. Assuming a high rate of catalysis, the analysis of the frequency behavior of the developed propagation channels in the form of transfer functions shows significant differences introduced by the inclusion of the molecular antennas into the diffusion-only model. 
It is also shown that for t ≫ 0, and with the information molecules' concentration greater than the Michaelis-Menten kinetic constant of the systems, the inclusion of surface receptor proteins and enzymes in the models makes the system act like a band-stop filter over an infinite frequency range.
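    The diffusion-only part of such a channel has a closed-form impulse response: the free-space Green's function of the diffusion equation. A sketch with an illustrative diffusion coefficient; the paper's contribution concerns what happens once the antenna (emission/reception) kinetics are added on top of this.

```python
import numpy as np

def diffusion_impulse_response(t, r, D=1e-10):
    """Concentration at distance r (m) and time t (s) after an impulsive
    point release of a unit amount of molecules diffusing freely in 3-D:
    c(r, t) = (4*pi*D*t)^(-3/2) * exp(-r^2 / (4*D*t))."""
    t = np.asarray(t, dtype=float)
    return (4 * np.pi * D * t) ** -1.5 * np.exp(-r ** 2 / (4 * D * t))
```

    At fixed r the response rises to a peak at t = r^2 / (6D) and then decays, which is the low-pass character that the molecular antennas reshape.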

  20. Modeling and simulation of electronic structure, material interface and random doping in nano electronic devices

    PubMed Central

    Chen, Duan; Wei, Guo-Wei

    2010-01-01

    The miniaturization of nano-scale electronic devices, such as metal oxide semiconductor field effect transistors (MOSFETs), has given rise to a pressing demand for new theoretical understanding and practical tactics for dealing with quantum mechanical effects in integrated circuits. Modeling and simulation of this class of problems have emerged as an important topic in applied and computational mathematics. This work presents mathematical models and computational algorithms for the simulation of nano-scale MOSFETs. We introduce a unified two-scale energy functional to describe the electrons and the continuum electrostatic potential of the nano-electronic device. This framework enables us to put microscopic and macroscopic descriptions on an equal footing at the nano scale. By optimization of the energy functional, we derive consistently coupled Poisson-Kohn-Sham equations. Additionally, layered structures are crucial to the electrostatic and transport properties of nano transistors. A material interface model is proposed for more accurate description of the electrostatics governed by the Poisson equation. Finally, a new individual dopant model that utilizes the Dirac delta function is proposed to understand the random doping effect in nano electronic devices. Two mathematical algorithms, the matched interface and boundary (MIB) method and the Dirichlet-to-Neumann mapping (DNM) technique, are introduced to improve the computational efficiency of nano-device simulations. Electronic structures are computed via subband decomposition, and the transport properties, such as the I-V curves and electron density, are evaluated via the non-equilibrium Green's function (NEGF) formalism. Two distinct device configurations, a double-gate MOSFET and a four-gate MOSFET, are considered in our three-dimensional numerical simulations. For these devices, the current fluctuation and voltage threshold lowering effect induced by the discrete dopant model are explored. 
Numerical convergence and model well-posedness are also investigated in the present work. PMID:20396650
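    The interface difficulty that the MIB method addresses can be seen in even a 1-D flux-conservative Poisson solve across a permittivity jump. A minimal sketch, not the paper's scheme (MIB enforces the interface jump conditions to much higher accuracy than this harmonic-mean face averaging):

```python
import numpy as np

def solve_poisson_1d(eps, rho, dx, phi_left=0.0, phi_right=0.0):
    """Solve d/dx(eps * dphi/dx) = -rho on a uniform 1-D grid with
    Dirichlet boundary values, using harmonic-mean permittivities on
    cell faces so the electric flux stays continuous across interfaces."""
    eps = np.asarray(eps, dtype=float)
    rho = np.asarray(rho, dtype=float)
    n = len(rho)
    # harmonic mean of adjacent cell permittivities on each interior face
    eps_face = 2 * eps[:-1] * eps[1:] / (eps[:-1] + eps[1:])
    A = np.zeros((n, n))
    b = -rho * dx ** 2
    A[0, 0] = A[-1, -1] = 1.0          # Dirichlet rows
    b[0], b[-1] = phi_left, phi_right
    for i in range(1, n - 1):
        A[i, i - 1] = eps_face[i - 1]
        A[i, i] = -(eps_face[i - 1] + eps_face[i])
        A[i, i + 1] = eps_face[i]
    return np.linalg.solve(A, b)
```

    With uniform permittivity and no charge this recovers the linear potential profile; a permittivity jump kinks the profile while keeping the flux eps * dphi/dx continuous, which is the interface behavior the paper's material interface model targets.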