Science.gov

Sample records for automated modelling interface

  1. Automated Fluid Interface System (AFIS)

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Automated remote fluid servicing will be necessary for future space missions, as future satellites will be designed for on-orbit consumable replenishment. In order to develop an on-orbit remote servicing capability, a standard interface between a tanker and the receiving satellite is needed. The objective of the Automated Fluid Interface System (AFIS) program is to design, fabricate, and functionally demonstrate compliance with all design requirements for an automated fluid interface system. A description and documentation of the Fairchild AFIS design is provided.

  2. Automated identification and indexing of dislocations in crystal interfaces

    SciTech Connect

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

    Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.
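
The discrete Burgers circuit described in this abstract can be sketched in a few lines. This is an illustrative toy for a simple cubic lattice, not the authors' algorithm: each (possibly elastically distorted) bond is snapped to its nearest ideal lattice vector, and the closure failure of the mapped circuit gives the Burgers vector of any enclosed dislocation. All names and the mapping rule are assumptions for illustration.

```python
# Toy discrete Burgers circuit on a simple cubic lattice (illustrative only).
IDEAL_BONDS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def nearest_ideal(bond):
    """Snap a distorted bond vector to the closest ideal lattice vector."""
    dist2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(IDEAL_BONDS, key=lambda v: dist2(v, bond))

def burgers_vector(circuit_bonds):
    """Closure failure of the mapped circuit; (0, 0, 0) means no dislocation enclosed."""
    total = (0, 0, 0)
    for b in circuit_bonds:
        total = tuple(t + c for t, c in zip(total, nearest_ideal(b)))
    return total

# A perfect unit-square circuit closes; a slightly distorted one still snaps shut.
perfect = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
distorted = [(0.95, 0.05, 0), (0.02, 1.01, 0), (-1.03, 0.01, 0), (0.04, -0.98, 0)]
print(burgers_vector(perfect), burgers_vector(distorted))  # (0, 0, 0) (0, 0, 0)
```

A nonzero return value would indicate that the circuit encloses a dislocation line, which is the detection criterion the abstract describes.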

  3. Automated identification and indexing of dislocations in crystal interfaces

    DOE PAGESBeta

    Stukowski, Alexander; Bulatov, Vasily V.; Arsenlis, Athanasios

    2012-10-31

Here, we present a computational method for identifying partial and interfacial dislocations in atomistic models of crystals with defects. Our automated algorithm is based on a discrete Burgers circuit integral over the elastic displacement field and is not limited to specific lattices or dislocation types. Dislocations in grain boundaries and other interfaces are identified by mapping atomic bonds from the dislocated interface to an ideal template configuration of the coherent interface to reveal incompatible displacements induced by dislocations and to determine their Burgers vectors. Additionally, the algorithm generates a continuous line representation of each dislocation segment in the crystal and also identifies dislocation junctions.

  4. Automated Student Model Improvement

    ERIC Educational Resources Information Center

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  5. Automation Interfaces of the Orion GNC Executive Architecture

    NASA Technical Reports Server (NTRS)

    Hart, Jeremy

    2009-01-01

    This viewgraph presentation describes Orion mission's automation Guidance, Navigation and Control (GNC) architecture and interfaces. The contents include: 1) Orion Background; 2) Shuttle/Orion Automation Comparison; 3) Orion Mission Sequencing; 4) Orion Mission Sequencing Display Concept; and 5) Status and Forward Plans.

  6. Towards automation of user interface design

    NASA Technical Reports Server (NTRS)

    Gastner, Rainer; Kraetzschmar, Gerhard K.; Lutz, Ernst

    1992-01-01

    This paper suggests an approach to automatic software design in the domain of graphical user interfaces. There are still some drawbacks in existing user interface management systems (UIMS's) which basically offer only quantitative layout specifications via direct manipulation. Our approach suggests a convenient way to get a default graphical user interface which may be customized and redesigned easily in further prototyping cycles.

  7. Control Interface and Tracking Control System for Automated Poultry Inspection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A new visible/near-infrared inspection system interface was developed in order to conduct research to test and implement an automated chicken inspection system for online operation on commercial chicken processing lines. The spectroscopic system demonstrated effective spectral acquisition and data ...

  8. Space station automation and robotics study. Operator-systems interface

    NASA Technical Reports Server (NTRS)

    1984-01-01

    This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), establish the technologies needed to meet these requirements, and to forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.

  9. Automation in teleoperation from a man-machine interface viewpoint

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.; Corker, K.

    1984-01-01

Teleoperation can be defined as the use of robotic devices having mobility, manipulative and some sensing capabilities, and remotely controlled by a human operator. The purpose of this paper is to discuss and exemplify technology issues related to the use of robots as man-extension or teleoperator systems in space. The main thrust of the paper is focused on research and development in the area of sensing- and computer-based automation from the viewpoint of man-machine interface devices and techniques. The objective of this R&D effort is to render space teleoperation efficient and safe through the use of devices and techniques which will permit integrated and task-level ('intelligent') two-way control communication between human operator and teleoperator machine in earth orbit.

  10. Geographic information system/watershed model interface

    USGS Publications Warehouse

    Fisher, Gary T.

    1989-01-01

    Geographic information systems allow for the interactive analysis of spatial data related to water-resources investigations. A conceptual design for an interface between a geographic information system and a watershed model includes functions for the estimation of model parameter values. Design criteria include ease of use, minimal equipment requirements, a generic data-base management system, and use of a macro language. An application is demonstrated for a 90.1-square-kilometer subbasin of the Patuxent River near Unity, Maryland, that performs automated derivation of watershed parameters for hydrologic modeling.

  11. Database-driven web interface automating gyrokinetic simulations for validation

    NASA Astrophysics Data System (ADS)

    Ernst, D. R.

    2010-11-01

We are developing a web interface to connect plasma microturbulence simulation codes with experimental data. The website automates the preparation of gyrokinetic simulations utilizing plasma profile and magnetic equilibrium data from TRANSP analysis of experiments, read from MDSPLUS over the internet. This database-driven tool saves user sessions, allowing searches of previous simulations, which can be restored to repeat the same analysis for a new discharge. The website includes a multi-tab, multi-frame, publication-quality Java plotter, Webgraph, developed as part of this project. Input files can be uploaded as templates and edited with context-sensitive help. The website creates inputs for GS2 and GYRO using a well-tested and verified back-end, in use for several years for the GS2 code [D. R. Ernst et al., Phys. Plasmas 11(5) 2637 (2004)]. A centralized web site has the advantage that users receive bug fixes instantaneously, while avoiding the duplicated effort of local compilations. Possible extensions to the database to manage run outputs, toward prototyping for the Fusion Simulation Project, are envisioned. Much of the web development utilized support from the DoE National Undergraduate Fellowship program [e.g., A. Suarez and D. R. Ernst, http://meetings.aps.org/link/BAPS.2005.DPP.GP1.57].

  12. Sharing control between humans and automation using haptic interface: primary and secondary task performance benefits.

    PubMed

    Griffiths, Paul G; Gillespie, R Brent

    2005-01-01

This paper describes a paradigm for human/automation control sharing in which the automation acts through a motor coupled to a machine's manual control interface. The manual interface becomes a haptic display, continually informing the human about automation actions. While monitoring by feel, users may choose either to conform to the automation or override it and express their own control intentions. This paper's objective is to demonstrate that adding automation through haptic display can be used not only to improve performance on a primary task but also to reduce perceptual demands or free attention for a secondary task. Results are presented from three experiments in which 11 participants completed a lane-following task using a motorized steering wheel on a fixed-base driving simulator. The automation behaved like a copilot, assisting with lane following by applying torques to the steering wheel. Results indicate that haptic assist improves lane following by at least 30%, p < .0001, while reducing visual demand by 29%, p < .0001, or improving reaction time in a secondary tone localization task by 18 ms, p = .0009. Potential applications of this research include the design of automation interfaces based on haptics that support human/automation control sharing better than traditional push-button automation interfaces. PMID:16435698
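
The control-sharing scheme in this abstract can be sketched as a corrective torque, applied to the motorized wheel, that sums with whatever the driver applies. The gains, saturation limit, and function names below are illustrative assumptions, not values from the paper; the saturation is what lets the human always override the automation.

```python
# Illustrative sketch of haptic shared control: the automation applies a
# PD-style corrective torque through the motorized steering wheel, and the
# net torque the driver feels is their own input plus the assist.
def assist_torque(lane_error, heading_error, kp=2.0, kd=0.5, t_max=1.5):
    """Corrective torque from lane-keeping errors, saturated at t_max
    so the driver can always overpower it (gains are made up)."""
    t = -(kp * lane_error + kd * heading_error)
    return max(-t_max, min(t_max, t))

def wheel_torque(driver_torque, lane_error, heading_error):
    """Net torque on the shared interface: human input plus haptic assist."""
    return driver_torque + assist_torque(lane_error, heading_error)

# Drifting 0.3 units right with hands off: the wheel pulls back toward the lane.
print(wheel_torque(0.0, 0.3, 0.0))  # -0.6
```

Because the assist is felt continuously at the wheel, the driver monitors the automation by feel, which is the mechanism the paper credits for the reduced visual demand.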

  13. Automated parking garage system model

    NASA Technical Reports Server (NTRS)

    Collins, E. R., Jr.

    1975-01-01

A one-twenty-fifth scale model of the key components of an automated parking garage system is described. The design of the model required transferring a vehicle from an entry level, vertically (+Z, -Z), to a storage location at any one of four storage positions (+X, -X, +Y, -Y) on the storage levels. There are three primary subsystems: (1) a screw jack to provide the vertical motion of the elevator, (2) a cam-driven track-switching device to provide X to Y motion, and (3) a transfer cart to provide horizontal travel and a small amount of vertical motion for transfer to the storage location. Motive power is provided by dc permanent magnet gear motors, one each for the elevator and track-switching device and two for the transfer cart drive system (one driving the cart horizontally and the other providing the vertical transfer). The control system, through the use of a microprocessor, provides complete automation through a feedback system which utilizes sensing devices.

  14. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, that is, with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be succinct. The report discusses the underlying concepts and the formal methods for this approach. Two examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.

  15. On Abstractions and Simplifications in the Design of Human-Automation Interfaces

    NASA Technical Reports Server (NTRS)

    Heymann, Michael; Degani, Asaf; Shafto, Michael; Meyer, George; Clancy, Daniel (Technical Monitor)

    2001-01-01

This report addresses the design of human-automation interaction from a formal perspective that focuses on the information content of the interface, rather than the design of the graphical user interface. It also addresses the issue of the information provided to the user (e.g., user-manuals, training material, and all other resources). In this report, we propose a formal procedure for generating interfaces and user-manuals. The procedure is guided by two criteria: First, the interface must be correct, i.e., with the given interface the user will be able to perform the specified tasks correctly. Second, the interface should be as succinct as possible. The report discusses the underlying concepts and the formal methods for this approach. Several examples are used to illustrate the procedure. The algorithm for constructing interfaces can be automated, and a preliminary software system for its implementation has been developed.

  16. Spherical model of growing interfaces

    NASA Astrophysics Data System (ADS)

    Henkel, Malte; Durang, Xavier

    2015-05-01

Building on an analogy between the ageing behaviour of magnetic systems and growing interfaces, the Arcetri model, a new exactly solvable model for growing interfaces, is introduced, which shares many properties with the kinetic spherical model. The long-time behaviour of the interface width and of the two-time correlators and responses is analysed. For all dimensions d ≠ 2, universal characteristics distinguish the Arcetri model from the Edwards-Wilkinson model, although for d > 2 all stationary and non-equilibrium exponents are the same. For d = 1, the Arcetri model is equivalent to the p = 2 spherical spin glass. For 2 < d < 4, its relaxation properties are related to those of a particle-reaction model, namely a bosonic variant of the diffusive pair-contact process. The global persistence exponent is also derived.

  17. Automated Volumetric Analysis of Interface Fluid in Descemet Stripping Automated Endothelial Keratoplasty Using Intraoperative Optical Coherence Tomography

    PubMed Central

    Xu, David; Dupps, William J.; Srivastava, Sunil K.; Ehlers, Justis P.

    2014-01-01

Purpose. We demonstrated a novel automated algorithm for segmentation of intraoperative optical coherence tomography (iOCT) imaging of fluid interface gap in Descemet stripping automated endothelial keratoplasty (DSAEK) and evaluated the effect of intraoperative maneuvers to promote graft apposition on interface dimensions. Methods. A total of 30 eyes of 29 patients from the anterior segment arm of the PIONEER study were included in this analysis. The iOCT scans were entered into an automated algorithm that delineated the spatial extent of the fluid interface gap in three dimensions between donor and host cornea during surgery. The algorithm was validated against manual segmentation, and performance was evaluated by absolute accuracy and intraclass correlation coefficient. Patients underwent DSAEK using a standard sequence of maneuvers, including controlled elevation of IOP and compressive corneal sweep to promote graft adhesion. Measurements of interface fluid volume, en face area, and maximal interface height were compared between scans before anterior chamber infusion, after pressure elevation alone, and after corneal sweep with pressure elevation using dependent-samples t-test. Results. The algorithm achieved 87% absolute accuracy and an intraclass correlation of 0.96. Nine datasets of a total of 84 (11%) required human correction of segmentation errors. Mean interface fluid volume was significantly decreased by corneal sweep (P = 0.021) and by both maneuvers combined (P = 0.046). Mean en face area was significantly decreased by corneal sweep (P = 0.010) and the maneuvers combined (P < 0.001). Maximal interface height was significantly decreased by pressure elevation (P = 0.010), corneal sweep (P = 0.009), and the maneuvers combined (P = 0.010). Conclusions. Quantitative analysis of iOCT volumetric scans shows the significant effect of controlled pressure elevation and corneal sweep on graft apposition in DSAEK.
Computerized iOCT analysis yields objective measurements of interface fluid intraoperatively, which provides information on anatomic outcomes and could be used in future trials. PMID:25103262

  18. Hierarchical interface-enriched finite element method: An automated technique for mesh-independent simulations

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil

    2014-10-01

A hierarchical interface-enriched finite element method (HIFEM) is introduced for the mesh-independent treatment of problems with complex morphologies. The proposed method provides an automated framework to capture gradient discontinuities associated with multiple materials interfaces that are in close proximity or contact, while using finite element meshes that do not conform to the problem geometry. While yielding optimal precision and convergence rate, other principal advantages of HIFEM include the ease of implementation and the ability to compute enriched solutions for a variety of complex materials interface configurations. Moreover, the construction of enrichment functions in this method is independent of the number and sequence of materials interfaces introduced to the nonconforming mesh. An immediate benefit of this feature is the ability to add new materials phases to already enriched nonconforming elements, without the need to remove/modify existing enrichments or sub-elements. In addition to a detailed convergence study, several example problems are presented to show the application of HIFEM for modeling various engineering problems, including woven composites, heterogeneous materials systems, and actively-cooled microvascular systems.

  19. Theoretical considerations in designing operator interfaces for automated systems

    NASA Technical Reports Server (NTRS)

    Norman, Susan D.

    1987-01-01

    The domains most amenable to techniques based on artificial intelligence (AI) are those that are systematic or for which a systematic domain can be generated. In aerospace systems, many operational tasks are systematic owing to the highly procedural nature of the applications. However, aerospace applications can also be nonprocedural, particularly in the event of a failure or an unexpected event. Several techniques are discussed for designing automated systems for real-time, dynamic environments, particularly when a 'breakdown' occurs. A breakdown is defined as operation of an automated system outside its predetermined, conceptual domain.

  20. Cooperative control - The interface challenge for men and automated machines

    NASA Technical Reports Server (NTRS)

    Hankins, W. W., III; Orlando, N. E.

    1984-01-01

    The research issues associated with the increasing autonomy and independence of machines and their evolving relationships to human beings are explored. The research, conducted by Langley Research Center (LaRC), will produce a new social work order in which the complementary attributes of robots and human beings, which include robots' greater strength and precision and humans' greater physical and intellectual dexterity, are necessary for systems of cooperation. Attention is given to the tools for performing the research, including the Intelligent Systems Research Laboratory (ISRL) and industrial manipulators, as well as to the research approaches taken by the Automation Technology Branch (ATB) of LaRC to achieve high automation levels. The ATB is focusing on artificial intelligence research through DAISIE, a system which tends to organize its environment into hierarchical controller/planner abstractions.

  1. Automation and Accountability in Decision Support System Interface Design

    ERIC Educational Resources Information Center

    Cummings, Mary L.

    2006-01-01

    When the human element is introduced into decision support system design, entirely new layers of social and ethical issues emerge but are not always recognized as such. This paper discusses those ethical and social impact issues specific to decision support systems and highlights areas that interface designers should consider during design with an…

  2. Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.

    2000-01-01

The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi-stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.

  3. Model-Based Design of Air Traffic Controller-Automation Interaction

    NASA Technical Reports Server (NTRS)

    Romahn, Stephan; Callantine, Todd J.; Palmer, Everett A.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    A model of controller and automation activities was used to design the controller-automation interactions necessary to implement a new terminal area air traffic management concept. The model was then used to design a controller interface that provides the requisite information and functionality. Using data from a preliminary study, the Crew Activity Tracking System (CATS) was used to help validate the model as a computational tool for describing controller performance.

  4. User interface design principles for the SSM/PMAD automated power system

    NASA Technical Reports Server (NTRS)

    Jakstas, Laura M.; Myers, Chris J.

    1991-01-01

    Martin Marietta has developed a user interface for the space station module power management and distribution (SSM/PMAD) automated power system testbed which provides human access to the functionality of the power system, as well as exemplifying current techniques in user interface design. The testbed user interface was designed to enable an engineer to operate the system easily without having significant knowledge of computer systems, as well as provide an environment in which the engineer can monitor and interact with the SSM/PMAD system hardware. The design of the interface supports a global view of the most important data from the various hardware and software components, as well as enabling the user to obtain additional or more detailed data when needed. The components and representations of the SSM/PMAD testbed user interface are examined. An engineer's interactions with the system are also described.

  5. Task-focused modeling in automated agriculture

    NASA Astrophysics Data System (ADS)

    Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack

    1993-01-01

    Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.

  6. Reinventing the energy modelling-policy interface

    NASA Astrophysics Data System (ADS)

    Strachan, Neil; Fais, Birgit; Daly, Hannah

    2016-03-01

    Energy modelling has a crucial underpinning role for policy making, but the modelling-policy interface faces several limitations. A reinvention of this interface would better provide timely, targeted, tested, transparent and iterated insights from such complex multidisciplinary tools.

  7. Automated two-dimensional interface for capillary gas chromatography

    DOEpatents

    Strunk, M.R.; Bechtold, W.E.

    1996-02-20

    A multidimensional gas chromatograph (GC) system is disclosed which has wide bore capillary and narrow bore capillary GC columns in series and has a novel system interface. Heart cuts from a high flow rate sample, separated by a wide bore GC column, are collected and directed to a narrow bore GC column with carrier gas injected at a lower flow compatible with a mass spectrometer. A bimodal six-way valve is connected with the wide bore GC column outlet and a bimodal four-way valve is connected with the narrow bore GC column inlet. A trapping and retaining circuit with a cold trap is connected with the six-way valve and a transfer circuit interconnects the two valves. The six-way valve is manipulated between first and second mode positions to collect analyte, and the four-way valve is manipulated between third and fourth mode positions to allow carrier gas to sweep analyte from a deactivated cold trap, through the transfer circuit, and then to the narrow bore GC capillary column for separation and subsequent analysis by a mass spectrometer. Rotary valves have substantially the same bore width as their associated columns to minimize flow irregularities and resulting sample peak deterioration. The rotary valves are heated separately from the GC columns to avoid temperature lag and resulting sample deterioration. 3 figs.
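
The valve sequencing in this patent abstract amounts to a small state machine: collect a heart cut into the cold trap, then switch both valves and deactivate the trap to sweep analyte to the narrow-bore column. The sketch below is a hypothetical rendering of that sequence; the state names and fields are invented for illustration and are not from the patent.

```python
# Hypothetical state machine for the two-valve GC interface described above.
from dataclasses import dataclass

@dataclass
class InterfaceState:
    six_way: str = "collect"   # six-way valve: route heart cut into the cold trap
    four_way: str = "bypass"   # four-way valve: carrier gas bypasses the transfer circuit
    trap_cold: bool = True     # cold trap active, retaining the analyte

    def start_transfer(self):
        """Deactivate the trap and switch both valves so carrier gas sweeps
        the analyte through the transfer circuit to the narrow-bore column."""
        self.trap_cold = False
        self.six_way = "inject"
        self.four_way = "transfer"

state = InterfaceState()   # collecting a heart cut from the wide-bore column
state.start_transfer()     # analyte now heads to the narrow-bore column / MS
print(state)
```

The two-mode discipline for each valve mirrors the patent's first/second and third/fourth mode positions.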

  8. Automated two-dimensional interface for capillary gas chromatography

    DOEpatents

    Strunk, Michael R.; Bechtold, William E.

    1996-02-20

    A multidimensional gas chromatograph (GC) system having wide bore capillary and narrow bore capillary GC columns in series and having a novel system interface. Heart cuts from a high flow rate sample, separated by a wide bore GC column, are collected and directed to a narrow bore GC column with carrier gas injected at a lower flow compatible with a mass spectrometer. A bimodal six-way valve is connected with the wide bore GC column outlet and a bimodal four-way valve is connected with the narrow bore GC column inlet. A trapping and retaining circuit with a cold trap is connected with the six-way valve and a transfer circuit interconnects the two valves. The six-way valve is manipulated between first and second mode positions to collect analyte, and the four-way valve is manipulated between third and fourth mode positions to allow carrier gas to sweep analyte from a deactivated cold trap, through the transfer circuit, and then to the narrow bore GC capillary column for separation and subsequent analysis by a mass spectrometer. Rotary valves have substantially the same bore width as their associated columns to minimize flow irregularities and resulting sample peak deterioration. The rotary valves are heated separately from the GC columns to avoid temperature lag and resulting sample deterioration.

  9. Design Through Manufacturing: The Solid Model - Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2003-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts which reflect every detail of the finished product. Ideally, these models should fulfill two very important functions: (1) they must provide numerical control information for automated manufacturing of precision parts, and (2) they must enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in space missions. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. The research performed during the fellowship period investigated the transition process from the solid CAD model to the FEA stress analysis model with the final goal of creating an automatic interface between the two. During the period of the fellowship a detailed multi-year program for the development of such an interface was created. The ultimate goal of this program will be the development of a fully parameterized automatic ProE/FEA translator for parts and assemblies, with the incorporation of data base management into the solution, and ultimately including computational fluid dynamics and thermal modeling in the interface.

  10. RCrane: semi-automated RNA model building

    PubMed Central

    Keating, Kevin S.; Pyle, Anna Marie

    2012-01-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems. PMID:22868764

  11. Automating Risk Analysis of Software Design Models

    PubMed Central

    Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P.

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688
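
    The identification-tree idea can be sketched as a small recursive structure. AutSEC's actual data structures are not given here, so the node fields, element attributes, and matching rules below are illustrative assumptions.

```python
# Hypothetical sketch of an "identification tree": a path of conditions over
# a data-flow-diagram element; every threat encountered along a matching
# path is reported. Node fields and rules are invented for illustration.

class Node:
    def __init__(self, condition, threat=None, children=()):
        self.condition = condition      # predicate over a DFD element
        self.threat = threat            # threat reported if this node matches
        self.children = list(children)

def identify(node, element, found):
    """Depth-first walk: descend only while conditions hold."""
    if not node.condition(element):
        return
    if node.threat:
        found.append((element["name"], node.threat))
    for child in node.children:
        identify(child, element, found)

# Toy tree: data flows crossing a trust boundary risk disclosure, and
# unencrypted ones additionally risk tampering in transit.
tree = Node(
    condition=lambda e: e["kind"] == "data_flow",
    children=[
        Node(condition=lambda e: e["crosses_trust_boundary"],
             threat="information disclosure",
             children=[Node(condition=lambda e: not e["encrypted"],
                            threat="tampering in transit")])
    ],
)

elements = [
    {"name": "login", "kind": "data_flow",
     "crosses_trust_boundary": True, "encrypted": False},
    {"name": "cache", "kind": "data_store",
     "crosses_trust_boundary": False, "encrypted": False},
]

threats = []
for e in elements:
    identify(tree, e, threats)
# "login" matches both threats; "cache" is not a data flow and matches none.
```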

  12. Automating risk analysis of software design models.

    PubMed

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688

  13. Automated dynamic analytical model improvement

    NASA Technical Reports Server (NTRS)

    Berman, A.

    1981-01-01

    A method is developed and illustrated which finds minimum changes in analytical mass and stiffness matrices to make them consistent with a set of measured normal modes and natural frequencies. The corrected model is an improved base for studies of physical changes, changes in boundary conditions, and for prediction of forced responses. Features of the method are: efficient procedures not requiring solutions of the eigenproblem; the model may have more degrees of freedom than the test data; modal displacements at all the analytical degrees of freedom are obtained; the frequency dependence of the coordinate transformations is properly treated.

  14. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.
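
    The Bayesian Belief Network mechanics can be illustrated by enumeration over a two-parent toy network. The variables and probabilities below are invented, not the SME-elicited values of the FLAP model.

```python
import itertools

# Toy BBN: two causal factors (complacency, system failure) feed one child
# node (automation-related anomaly). All numbers are illustrative.
p_complacency = 0.2
p_sys_failure = 0.05

def p_anomaly(c, f):
    # CPT: P(anomaly | complacency=c, system failure=f)
    return {(0, 0): 0.01, (0, 1): 0.30, (1, 0): 0.10, (1, 1): 0.60}[(c, f)]

def prior(var, val):
    p = {"c": p_complacency, "f": p_sys_failure}[var]
    return p if val else 1.0 - p

# Marginal P(anomaly) by full enumeration over the parent states.
marginal = sum(prior("c", c) * prior("f", f) * p_anomaly(c, f)
               for c, f in itertools.product((0, 1), repeat=2))

# Inserting a mitigation "technology" that halves complacency lets the
# model quantify the resulting risk reduction, as described above.
p_complacency = 0.1
mitigated = sum(prior("c", c) * prior("f", f) * p_anomaly(c, f)
                for c, f in itertools.product((0, 1), repeat=2))
```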

  15. Atomistic modeling of dislocation-interface interactions

    SciTech Connect

    Wang, Jian; Valone, Steven M; Beyerlein, Irene J; Misra, Amit; Germann, T. C.

    2011-01-31

    Using atomic scale models and interface defect theory, we first classify interface structures into a few types with respect to geometrical factors, then study the interfacial shear response and further simulate the dislocation-interface interactions using molecular dynamics. The results show that the atomic scale structural characteristics of both heterophase and homophase interfaces play a crucial role in (i) their mechanical responses and (ii) the ability of incoming lattice dislocations to transmit across them.

  16. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Most space mission planning and scheduling systems use a timeline-based representation for operations modeling, most model a core set of state and resource types, and most provide similar capabilities on top of this modeling to enable (semi-)automated schedule generation. In this paper we explore the commonality of representation and services for these timelines. These commonalities offer the potential to be harmonized to enable interoperability and re-use.

  17. Automation model of sewerage rehabilitation planning.

    PubMed

    Yang, M D; Su, T C

    2006-01-01

    The major steps of sewerage rehabilitation include inspection of sewerage, assessment of structural conditions, computation of structural condition grades, and determination of rehabilitation methods and materials. Conventionally, sewerage rehabilitation planning relies on experts with a professional background, a process that is tedious and time-consuming. This paper proposes an automation model that plans optimal sewerage rehabilitation strategies for a sewer system by integrating image processing, clustering technology, optimization, and visualization display. Firstly, image processing techniques, such as wavelet transformation and co-occurrence feature extraction, were employed to extract various characteristics of structural failures from CCTV inspection images. Secondly, a classification neural network was established to automatically interpret the structural conditions by comparing the extracted features with the typical failures in a databank. Then, to achieve optimal rehabilitation efficiency, a genetic algorithm was used to determine appropriate rehabilitation methods and substitution materials for the pipe sections with a risk of malfunction and even collapse. Finally, the result from the automation model can be visualized in a geographic information system in which essential information of the sewer system and sewerage rehabilitation plans are graphically displayed. For demonstration, the automation model of optimal sewerage rehabilitation planning was applied to a sewer system in east Taichung, Taiwan. PMID:17302324
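
    A toy genetic algorithm, in the spirit of the optimization step above, can illustrate choosing a rehabilitation method per pipe section. The methods, costs, effectiveness figures, and risk values below are invented for illustration, not taken from the paper.

```python
import random

# GA sketch: a plan assigns one rehabilitation method to each pipe section;
# fitness trades rehabilitation cost against residual failure risk.
random.seed(0)

METHODS = [("none", 0, 0.0), ("liner", 40, 0.8), ("replace", 100, 1.0)]
RISKS = [0.9, 0.2, 0.7, 0.05]        # failure risk per pipe section (invented)

def fitness(plan):
    cost = sum(METHODS[m][1] for m in plan)
    residual = sum(r * (1 - METHODS[m][2]) for r, m in zip(RISKS, plan))
    return cost + 500 * residual     # penalize leftover failure risk

def evolve(pop_size=30, generations=60):
    pop = [[random.randrange(3) for _ in RISKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(RISKS)) # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # occasional mutation
                child[random.randrange(len(RISKS))] = random.randrange(3)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
# The high-risk sections (0.9, 0.7) should end up with substantive
# rehabilitation rather than "none".
```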

  18. Automated Expert Modeling and Student Evaluation

    SciTech Connect

    2012-09-12

    AEMASE searches a database of recorded events for combinations of events that are of interest. It compares matching combinations to a statistical model to determine similarity to previous events of interest and alerts the user as new matching examples are found. AEMASE is currently used by weapons tactics instructors to find situations of interest in recorded tactical training scenarios. AEMASE builds on a sub-component, the Relational Blackboard (RBB), which is being released as open-source software. AEMASE builds on RBB adding interactive expert model construction (automated knowledge capture) and re-evaluation of scenario data.

  19. Automated Expert Modeling and Student Evaluation

    Energy Science and Technology Software Center (ESTSC)

    2012-09-12

    AEMASE searches a database of recorded events for combinations of events that are of interest. It compares matching combinations to a statistical model to determine similarity to previous events of interest and alerts the user as new matching examples are found. AEMASE is currently used by weapons tactics instructors to find situations of interest in recorded tactical training scenarios. AEMASE builds on a sub-component, the Relational Blackboard (RBB), which is being released as open-source software. AEMASE builds on RBB adding interactive expert model construction (automated knowledge capture) and re-evaluation of scenario data.

  20. Automated statistical modeling of analytical measurement systems

    SciTech Connect

    Jacobson, J J

    1992-08-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily. The processing of control standards gives us this ability.
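
    The bias/precision idea can be illustrated with a short sketch. The control-standard values and the two-standard-error control rule below are illustrative assumptions, not the RAL's actual model equations.

```python
import statistics

# Repeated analyses of a control standard with a known reference value
# (numbers invented): bias is the systematic offset, precision the scatter.
known_value = 50.0                     # certified level of the standard
measurements = [50.8, 49.6, 51.1, 50.3, 49.9, 50.7, 50.2, 50.5]

bias = statistics.mean(measurements) - known_value   # systematic error
precision = statistics.stdev(measurements)           # random error
relative_bias_pct = 100.0 * bias / known_value

# A simple (assumed) control rule: flag the measurement system when the
# observed bias exceeds two standard errors of the mean.
sem = precision / len(measurements) ** 0.5
in_control = abs(bias) <= 2 * sem
```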

  1. Development of a commercial Automated Laser Gas Interface (ALGI) for AMS

    NASA Astrophysics Data System (ADS)

    Daniel, R.; Mores, M.; Kitchen, R.; Sundquist, M.; Hauser, T.; Stodola, M.; Tannenbaum, S.; Skipper, P.; Liberman, R.; Young, G.; Corless, S.; Tucker, M.

    2013-01-01

    National Electrostatics Corporation (NEC), Massachusetts Institute of Technology (MIT), and GlaxoSmithKline (GSK) collectively have been developing an interface to introduce CO2 produced by the laser combustion of liquid chromatograph eluate deposited on a CuO substrate directly into the ion source of an AMS system, thereby bypassing the customary graphitization process. The Automated Laser Gas Interface (ALGI) converts dried liquid samples to CO2 gas quickly and efficiently, allowing 96 samples to be measured in as little as 16 h. 14C:12C ratios typically stabilize within 2 min of analysis time per sample. Presented is the recent progress of NEC's ALGI, a stand-alone accessory to an NEC gas-enabled multi-cathode source of negative ions by Cs sputtering (MC-SNICS) ion source.

  2. A method for automated detection of usability problems from client user interface events.

    PubMed

    Saadawi, Gilan M; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S

    2005-01-01

    Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and manually computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121
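
    A minimal sketch of the time-threshold idea described above: subgoal durations recovered from logged interface events, compared against expert times scaled by a think-aloud correction factor. The events, expert times, and correction factor below are invented assumptions, not values from the paper.

```python
# Detect "slow subgoal" usability problems from an event log.
CORRECTION = 1.4          # assumed think-aloud slowdown factor
EXPECTED = {"select_slide": 5.0, "annotate": 12.0}   # expert seconds/subgoal

events = [                # (timestamp_s, subgoal, event kind)
    (0.0,  "select_slide", "start"),
    (4.0,  "select_slide", "end"),
    (4.0,  "annotate",     "start"),
    (30.0, "annotate",     "end"),
]

def usability_problems(events):
    starts, problems = {}, []
    for t, subgoal, kind in events:
        if kind == "start":
            starts[subgoal] = t
        elif kind == "end":
            elapsed = t - starts.pop(subgoal)
            # Flag subgoals that exceed the corrected expert time.
            if elapsed > CORRECTION * EXPECTED[subgoal]:
                problems.append((subgoal, elapsed))
    return problems

problems = usability_problems(events)
# "annotate" took 26 s against a corrected threshold of 16.8 s.
```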

  3. A diffuse interface model with immiscibility preservation

    NASA Astrophysics Data System (ADS)

    Tiwari, Arpit; Freund, Jonathan B.; Pantano, Carlos

    2013-11-01

    A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical-bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion leading to qualitatively incorrect results.

  4. A Diffuse Interface Model with Immiscibility Preservation.

    PubMed

    Tiwari, Arpit; Freund, Jonathan B; Pantano, Carlos

    2013-11-01

    A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion leading to qualitatively incorrect results. PMID:24058207

  5. A Diffuse Interface Model with Immiscibility Preservation

    PubMed Central

    Tiwari, Arpit; Freund, Jonathan B.; Pantano, Carlos

    2013-01-01

    A new, simple, and computationally efficient interface capturing scheme based on a diffuse interface approach is presented for simulation of compressible multiphase flows. Multi-fluid interfaces are represented using field variables (interface functions) with associated transport equations that are augmented, with respect to an established formulation, to enforce a selected interface thickness. The resulting interface region can be set just thick enough to be resolved by the underlying mesh and numerical method, yet thin enough to provide an efficient model for dynamics of well-resolved scales. A key advance in the present method is that the interface regularization is asymptotically compatible with the thermodynamic mixture laws of the mixture model upon which it is constructed. It incorporates first-order pressure and velocity non-equilibrium effects while preserving interface conditions for equilibrium flows, even within the thin diffused mixture region. We first quantify the improved convergence of this formulation in some widely used one-dimensional configurations, then show that it enables fundamentally better simulations of bubble dynamics. Demonstrations include both a spherical bubble collapse, which is shown to maintain excellent symmetry despite the Cartesian mesh, and a jetting bubble collapse adjacent a wall. Comparisons show that without the new formulation the jet is suppressed by numerical diffusion leading to qualitatively incorrect results. PMID:24058207

  6. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Numerous automated and semi-automated planning & scheduling systems have been developed for space applications. Most of these systems are model-based in that they encode domain knowledge necessary to predict spacecraft state and resources based on initial conditions and a proposed activity plan. The spacecraft state and resources are often modeled as a series of timelines, with a timeline or set of timelines representing a state or resource key to the operations of the spacecraft. In this paper, we first describe a basic timeline representation that can represent a set of state, resource, timing, and transition constraints. We describe a number of planning and scheduling systems designed for space applications (and in many cases deployed for use on ongoing missions) and describe how they do and do not map onto this timeline model.
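
    The timeline idea can be sketched minimally as a resource timeline that checks a capacity constraint by replaying usage changes in time order. The class and method names below are our own illustrative inventions, not any particular planner's API.

```python
# A resource timeline: activities reserve units of a shared resource over
# an interval; consistency means usage never exceeds capacity.

class ResourceTimeline:
    def __init__(self, capacity):
        self.capacity = capacity
        self.events = []          # (time, delta) resource usage changes

    def add_activity(self, start, end, usage):
        self.events.append((start, +usage))
        self.events.append((end, -usage))

    def is_consistent(self):
        """Replay events in time order; usage must never exceed capacity."""
        level = 0
        for _, delta in sorted(self.events):
            level += delta
            if level > self.capacity:
                return False
        return True

downlink = ResourceTimeline(capacity=2)   # e.g. two ground antennas
downlink.add_activity(0, 10, 1)
downlink.add_activity(5, 15, 1)
ok_before = downlink.is_consistent()      # peak usage 2 <= capacity 2
downlink.add_activity(7, 9, 1)
ok_after = downlink.is_consistent()       # peak usage 3 > capacity 2
```

    Sorting the (time, delta) pairs also puts releases before acquisitions at equal timestamps, so back-to-back activities on the same resource do not spuriously conflict.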

  7. Automation life-cycle cost model

    NASA Technical Reports Server (NTRS)

    Gathmann, Thomas P.; Reeves, Arlinda J.; Cline, Rick; Henrion, Max; Ruokangas, Corinne

    1992-01-01

    The problem domain being addressed by this contractual effort can be summarized by the following list: Automation and Robotics (A&R) technologies appear to be viable alternatives to current, manual operations; life-cycle cost models are typically judged with suspicion due to implicit assumptions and little associated documentation; and uncertainty is a reality for increasingly complex problems, yet few models explicitly account for its effect on the solution space. The objectives for this effort range from the near-term (1-2 years) to far-term (3-5 years). In the near-term, the envisioned capabilities of the modeling tool are annotated. In addition, a framework is defined and developed in the Decision Modelling System (DEMOS) environment. Our approach is summarized as follows: assess desirable capabilities (structured into near- and far-term); identify useful existing models/data; identify parameters for utility analysis; define the tool framework; encode a scenario thread for model validation; and provide a transition path for tool development. This report contains all relevant, technical progress made on this contractual effort.
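
    A Monte Carlo sketch can illustrate how a life-cycle cost model makes uncertainty explicit rather than implicit. All cost distributions below are invented placeholders, not values from this effort.

```python
import random

# Compare manual operations against an automated alternative by sampling
# uncertain cost components over a 10-year life cycle (units arbitrary).
random.seed(1)
N = 20_000

def manual_cost():
    # recurring crew labor: uncertain yearly cost
    return sum(random.gauss(1.0, 0.2) for _ in range(10))

def automated_cost():
    # high up-front development cost, low uncertain maintenance
    development = random.gauss(6.0, 1.0)
    maintenance = sum(random.gauss(0.2, 0.1) for _ in range(10))
    return development + maintenance

savings = [manual_cost() - automated_cost() for _ in range(N)]
mean_savings = sum(savings) / N
p_automation_cheaper = sum(s > 0 for s in savings) / N
```

    Reporting the full distribution (here via `p_automation_cheaper`), rather than a single point estimate, directly addresses the suspicion about hidden assumptions mentioned above.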

  8. Parallel computing for automated model calibration

    SciTech Connect

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.; Vail, Lance W.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed computing, cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
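
    The embarrassingly parallel structure described above (independent runs, small input and output per run) can be sketched with a thread pool. The stand-in "model" and objective below are invented; a real calibration would launch the actual natural-resources model in place of the one-liner.

```python
from concurrent.futures import ThreadPoolExecutor

# Evaluate many candidate parameter sets concurrently and keep the best fit.
OBSERVED = [2.0, 4.1, 6.2, 7.9]                 # observed stream flows

def model(slope):
    return [slope * t for t in (1, 2, 3, 4)]    # trivially fast stand-in

def objective(slope):
    # Sum of squared errors against observations (one "calibration run").
    return sum((m - o) ** 2 for m, o in zip(model(slope), OBSERVED))

candidates = [i / 100 for i in range(100, 301)]  # slopes 1.00 .. 3.00

# Each run is independent (no inter-process communication needed), so the
# evaluations can be farmed out freely to workers.
with ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(objective, candidates))

best = candidates[min(range(len(candidates)), key=lambda i: scores[i])]
```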

  9. Automated measurement of transplantable solid tumors using digital electronic calipers interfaced to a microcomputer.

    PubMed

    Worzalla, J F; Bewley, J R; Grindey, G B

    1990-08-01

    Data collection for transplantable solid tumors has been automated with electronic digital calipers and a balance which have been coupled through an RS-232 interface to a microcomputer. BASIC programs handle data entry, calculations and data storage. A "PROTOCOL" program accepts keyboard input of sample name, notebook number, submitter and dose along with necessary information on tumor system, and then initial animal weights for treatment groups are sent from balance to computer. Data is stored as an ASCII file on floppy disks, and protocol reports are printed. When the test is to be measured, a "MEASURE" program prompts the user for keyboard entry of toxic deaths in each group. Then the computer requests input of width and length of tumors for each animal. These tumor dimensions are sent to microcomputer by pressing a button on the calipers. When a group is completed, final animal weights are sent from balance to microcomputer. Then tumor weights and percent inhibition as compared to appropriate control groups are calculated, and the data is appended to the file for that test. A hard copy is generated as tumors are measured, and reports including percent inhibition can be printed immediately after a test is measured. The data as an ASCII file is transferred via modem to mainframe computer, where another program transfers the information to a database management program. These automated procedures for tumor measurement save time and lessen the chance for error by eliminating manual recording of solid tumor dimensions and subsequent reentry of this data for calculation. PMID:2272765
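
    A sketch of the calculation stage: the ellipsoid approximation, tumor weight (mg) = width² × length / 2 with dimensions in mm, is a common convention for transplantable tumors and is assumed here, since the abstract does not quote the exact formula. All measurements are invented.

```python
# Compute tumor weights from caliper dimensions and percent inhibition
# versus a control group (assumed standard formula; invented data).

def tumor_weight(width_mm, length_mm):
    return (width_mm ** 2) * length_mm / 2.0

control = [tumor_weight(8.0, 12.0), tumor_weight(9.0, 11.0)]
treated = [tumor_weight(4.0, 6.0), tumor_weight(5.0, 7.0)]

mean_control = sum(control) / len(control)
mean_treated = sum(treated) / len(treated)
percent_inhibition = 100.0 * (1 - mean_treated / mean_control)
```

    Automating exactly this arithmetic at measurement time is what removes the manual recording and re-entry steps the abstract identifies as error-prone.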

  10. Bayesian Safety Risk Modeling of Human-Flightdeck Automation Interaction

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2015-01-01

    Usage of automatic systems in airliners has increased fuel efficiency, added extra capabilities, and enhanced safety and reliability, as well as improved passenger comfort, since their introduction in the late 1980s. However, the original automation benefits, including reduced flight crew workload, human error, and training requirements, were not achieved as originally expected. Instead, automation introduced new failure modes, redistributed and sometimes increased workload, brought in new cognitive and attention demands, and increased training requirements. Modern airliners have numerous flight modes, providing more flexibility (and inherently more complexity) to the flight crew. However, the price to pay for the increased flexibility is the need for increased mode awareness, as well as the need to supervise, understand, and predict automated system behavior. Also, over-reliance on automation is linked to manual flight skill degradation and complacency in commercial pilots. As a result, recent accidents involving human errors are often caused by the interactions between humans and the automated systems (e.g., the breakdown in man-machine coordination), deteriorated manual flying skills, and/or loss of situational awareness due to heavy dependence on automated systems. This paper describes the development of the increased complexity and reliance on automation baseline model, named FLAP for FLightdeck Automation Problems. The model development process starts with a comprehensive literature review followed by the construction of a framework composed of high-level causal factors leading to an automation-related flight anomaly. The framework was then converted into a Bayesian Belief Network (BBN) using the Hugin Software v7.8. The effects of automation on flight crew are incorporated into the model, including flight skill degradation and increased cognitive demand and training requirements, along with their interactions. Besides flight crew deficiencies, automation system failures and anomalies of avionic systems are also incorporated. The resultant model helps simulate the emergence of automation-related issues in today's modern airliners from a top-down, generalized approach, which serves as a platform to evaluate NASA-developed technologies.

  11. Systems Engineering Interfaces: A Model Based Approach

    NASA Technical Reports Server (NTRS)

    Fosse, Elyse; Delp, Christopher

    2013-01-01

    Currently: Ops Rev has developed and maintains a framework that includes interface-specific language, patterns, and viewpoints. Ops Rev implements the framework to design MOS 2.0 and its five Mission Services. The implementation de-couples interfaces from instances of interaction. Future: a Mission MOSE implements the approach and uses the model-based artifacts for reviews. The framework extends further into the ground data layers and provides a unified methodology.

  12. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, how this approach can be used to derive models of different precision and abstraction is illustrated, and models are tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
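
    The transformation-sequence idea can be sketched as function composition: a base model pushed through successive transformations, each yielding a more specialized model. The toy device model and transformations below are stand-ins, not the actual Reaction Wheel Assembly models.

```python
from functools import reduce

# A "base model" of a simple device, and two compiler-style transformations
# that specialize it for a troubleshooting task (all names invented).
base_model = {
    "components": {"motor": {"state": "spinning", "current_A": 2.0},
                   "bearing": {"state": "nominal", "temp_C": 35.0}},
    "detail": "full",
}

def abstract_quantities(model):
    # Troubleshooting may not need numeric detail: keep only state labels.
    comps = {name: {"state": c["state"]}
             for name, c in model["components"].items()}
    return {"components": comps, "detail": "qualitative"}

def focus(component):
    # Specialize further: restrict the model to one suspect component.
    def transform(model):
        return {"components": {component: model["components"][component]},
                "detail": model["detail"] + "+focused"}
    return transform

# "Compile" the base model by applying the transformation sequence in order;
# each step produces an increasingly more specialized model.
troubleshooting_model = reduce(lambda m, t: t(m),
                               [abstract_quantities, focus("motor")],
                               base_model)
```

    Because the specialized model is regenerated from the base model on demand, a change to the base model propagates automatically, which is the consistency-maintenance benefit the abstract describes.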

  13. Microcanonical model for interface formation

    SciTech Connect

    Rucklidge, A.; Zaleski, S.

    1988-04-01

    We describe a new cellular automaton model which allows us to simulate separation of phases. The model is an extension of existing cellular automata for the Ising model, such as Q2R. It conserves particle number and presents the qualitative features of spinodal decomposition. The dynamics is deterministic and does not require random number generators. The spins exchange energy with small local reservoirs or demons. The rate of relaxation to equilibrium is investigated, and the results are compared to the Lifshitz-Slyozov theory.
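
    The demon bookkeeping can be sketched with a particle-conserving Kawasaki spin exchange on a small 1D Ising ring. The paper's automaton is 2D and fully deterministic, whereas this randomized 1D version only illustrates how a demon reservoir keeps total energy exactly conserved while the spin exchange conserves particle number.

```python
import random

# Microcanonical Kawasaki dynamics with a Creutz-style demon: a swap of
# unlike neighboring spins is accepted only if the demon can pay (or bank)
# the energy difference, so system + demon energy is exactly conserved.
random.seed(3)

N = 20
spins = [random.choice((-1, 1)) for _ in range(N)]
demon = 8                                   # demon energy, must stay >= 0

def energy():
    return -sum(spins[i] * spins[(i + 1) % N] for i in range(N))

total_before = energy() + demon
particles_before = spins.count(1)           # up-spins play the role of particles

for _ in range(2000):
    i = random.randrange(N)
    j = (i + 1) % N
    if spins[i] == spins[j]:
        continue                            # exchange would change nothing
    e_old = energy()
    spins[i], spins[j] = spins[j], spins[i]
    dE = energy() - e_old
    if demon - dE >= 0:
        demon -= dE                         # demon absorbs/supplies the cost
    else:
        spins[i], spins[j] = spins[j], spins[i]   # reject: undo the swap
```

    Recomputing the full energy each step is wasteful but keeps the sketch short; a real automaton would use the local energy change of the exchanged pair.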

  14. Modeling Europa's Ice-Ocean Interface

    NASA Astrophysics Data System (ADS)

    Elsenousy, A.; Vance, S.; Bills, B. G.

    2014-12-01

    This work focuses on modeling the ice-ocean interface of Jupiter's moon Europa, mainly from the standpoint of the heat and salt transfer relationship, with emphasis on the basal ice growth rate and its implications for Europa's tidal response. Modeling the heat and salt flux at Europa's ice/ocean interface is necessary to understand the dynamics of Europa's ocean and its interaction with the upper ice shell, as well as the history of active turbulence in this area. To achieve this goal, we adapted the parameterizations of McPhee et al. (2008) for Earth's ice/ocean interface to Europa's ocean dynamics. We varied one parameter at a time to test its influence on both "h", the basal ice growth rate, and "R", the double-diffusion tendency strength. The double-diffusion tendency "R" was calculated as the ratio between the interface heat exchange coefficient αh and the interface salt exchange coefficient αs. Our preliminary results showed a strong double-diffusion tendency, R ~ 200, at Europa's ice-ocean interface for plausible changes in the heat flux due to the onset or elimination of hydrothermal activity, suggesting supercooling and a strong tendency for forming frazil ice.
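
    The quantities discussed above reduce to simple ratios and flux balances. The coefficient values and heat fluxes below are invented placeholders (not McPhee's fitted values), chosen only to reproduce an R of about 200 and a positive basal growth rate.

```python
# Double-diffusion tendency R = alpha_h / alpha_s (placeholder coefficients).
alpha_h = 0.011      # interface heat exchange coefficient (assumed)
alpha_s = 5.5e-5     # interface salt exchange coefficient (assumed)

R = alpha_h / alpha_s
# R ~ 200: heat equilibrates far faster than salt across the interface.

# A crude basal-growth indicator: ice accretes when conductive heat loss
# through the shell exceeds the ocean heat flux delivered to the interface.
q_ice = 20e-3        # conductive flux through ice shell, W m^-2 (assumed)
q_ocean = 5e-3       # ocean-to-interface heat flux, W m^-2 (assumed)
latent = 3.34e5      # latent heat of fusion of ice, J kg^-1
rho_ice = 917.0      # ice density, kg m^-3

growth_m_per_s = (q_ice - q_ocean) / (latent * rho_ice)
freezing = growth_m_per_s > 0
```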

  15. Automated Extraction of Planetary Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Andre, S. L.; Andre, T. C.; Robinson, M. S.

    2003-12-01

Digital elevation models (DEMs) are invaluable products for planetary terrain interpretation [e.g., 1,2,3]. Typically, stereo matching programs require a user-selected set of corresponding points in the left and right images (seed points) to initiate automated stereo matching routines, which then find matching points between the two images. User input of seed points for each stereo pair can be a tedious and time-consuming step. An automated stereo matching tool for planetary images is useful in reducing or eliminating the need for human interaction (and potential error) in choosing initial seed points. In our software, we implement an adaptive least squares (ALS) correlation algorithm [4] and a sheet-growing algorithm [5]. The ALS algorithm matches a patch in the left image to a patch in the right image; it iteratively minimizes the sum of squared differences between the patches to determine optimal transformation parameters. Successful matches are then used to predict matches for the locations of surrounding unmatched points (the sheet-growing algorithm). Matching is initiated using either automatically generated or manually picked seed points. We are developing strategies to identify and reduce the number of errors produced by the stereo matching software; additional constraints may be applied after the matching process to check the validity of each match. We are currently testing the stereo matcher on image pairs using correlation patch sizes ranging from 9 x 9 to 25 x 25 pixels. A rigorous error analysis will be performed to better assess the quality of the results. Initial results of DEMs derived from Mariner 10 images compare well with DEMs generated by another area-based stereo matcher [6]. Our ultimate goal is to produce a user-friendly, robust stereo matcher tool that can be used by the planetary science community across a wide variety of image datasets. [1] Herrick R. and Sharpton V. 2000, JGR 105, 20245-20262. [2] Oberst J. et al. 1997, Eos 78, 445-450. [3] Smith D. et al. 1999, Science 284, 1495-1503. [4] Gruen A. 1985, S. Afr. J. of Photogramm. Rem. Sens. Cart. 14(3), 175-187. [5] Otto G. and Chau T. 1989, Image Vision Comput. 7, 83-94. [6] Cook A. and Robinson M. 2000, JGR 105, 9429-9443.
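The patch-matching step described above can be illustrated with a simplified sketch: a brute-force, integer-pixel sum-of-squared-differences (SSD) search stands in for the full ALS correlation, which additionally solves iteratively for sub-pixel geometric and radiometric parameters. The images, patch size, and search radius here are illustrative only.

```python
import numpy as np

def match_patch(left, right, center, patch=9, search=5):
    """Find the integer-pixel shift that best matches a left-image patch
    in the right image by minimizing the sum of squared differences.
    A simplified stand-in for ALS correlation (no sub-pixel refinement)."""
    r, c = center
    h = patch // 2
    template = left[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    best_ssd, best_shift = np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = right[r + dr - h:r + dr + h + 1,
                         c + dc - h:c + dc + h + 1].astype(float)
            ssd = np.sum((template - cand) ** 2)
            if ssd < best_ssd:
                best_ssd, best_shift = ssd, (dr, dc)
    return best_shift, best_ssd

# Demo: the right image is the left image shifted by (2, -1)
rng = np.random.default_rng(0)
left = rng.random((40, 40))
right = np.roll(left, (2, -1), axis=(0, 1))
shift, ssd = match_patch(left, right, center=(20, 20))
```

In a sheet-growing scheme, each successful `shift` would seed the predicted disparity for neighboring unmatched points.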

  16. An interface tracking model for droplet electrocoalescence.

    SciTech Connect

    Erickson, Lindsay Crowl

    2013-09-01

This report describes an Early Career Laboratory Directed Research and Development (LDRD) project to develop an interface tracking model for droplet electrocoalescence. Many fluid-based technologies rely on electrical fields to control the motion of droplets, e.g. microfluidic devices for high-speed droplet sorting, solution separation for chemical detectors, and purification of biodiesel fuel. Precise control over droplets is crucial to these applications. However, electric fields can induce complex and unpredictable fluid dynamics. Recent experiments (Ristenpart et al. 2009) have demonstrated that oppositely charged droplets bounce rather than coalesce in the presence of strong electric fields. A transient aqueous bridge forms between approaching drops prior to pinch-off. This observation applies to many types of fluids, but neither theory nor experiments have been able to offer a satisfactory explanation. Analytic hydrodynamic approximations for interfaces become invalid near coalescence, and therefore detailed numerical simulations are necessary. This is a computationally challenging problem that involves tracking a moving interface and solving complex multi-physics and multi-scale dynamics, which are beyond the capabilities of most state-of-the-art simulations. An interface-tracking model for electrocoalescence can provide a new perspective on a variety of applications in which interfacial physics are coupled with electrodynamics, including electro-osmosis, fabrication of microelectronics, fuel atomization, oil dehydration, nuclear waste reprocessing, and solution separation for chemical detectors. We present a conformal decomposition finite element (CDFEM) interface-tracking method for the electrohydrodynamics of two-phase flow to demonstrate electrocoalescence. CDFEM is a sharp interface method that decomposes elements along fluid-fluid boundaries and uses a level set function to represent the interface.
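The level set representation mentioned above can be sketched minimally: a signed distance function whose zero contour is the fluid-fluid interface. The grid resolution and droplet geometry below are arbitrary illustrations, not taken from the report.

```python
import numpy as np

# Signed-distance level set for a circular droplet of radius 0.3
# centered at the origin: phi < 0 inside the drop, phi > 0 outside,
# and the interface is the zero contour phi == 0.
x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.3

inside = phi < 0            # droplet phase
area = inside.mean() * 4.0  # domain area is 2 x 2 = 4
# The discrete area approximates pi * r^2 ~ 0.2827
```

A sharp-interface method such as CDFEM would cut each element crossed by the zero contour of `phi` so that the two fluid phases never share an element.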

  17. Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs

    PubMed Central

    Bass, Ellen J.

    2011-01-01

Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human–automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient-controlled analgesia pump in a two-phase process where the models in each phase were verified against a common set of specifications. The first phase focused on the mission, human-device interface, and device automation, and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE. PMID:21572930

  18. A Web Interface for Eco System Modeling

    NASA Astrophysics Data System (ADS)

    McHenry, K.; Kooper, R.; Serbin, S. P.; LeBauer, D. S.; Desai, A. R.; Dietze, M. C.

    2012-12-01

We have developed the Predictive Ecosystem Analyzer (PEcAn) as an open-source scientific workflow system and ecoinformatics toolbox that manages the flow of information in and out of regional-scale terrestrial biosphere models, facilitates heterogeneous data assimilation, tracks data provenance, and enables more effective feedback between models and field research. The over-arching goal of PEcAn is to make otherwise complex analyses transparent, repeatable, and accessible to a diverse array of researchers, allowing both novice and expert users to focus on using the models to examine complex ecosystems rather than on computer system setup and configuration. Through the developed web interface we hide much of the data and model details and allow the user to simply select locations, ecosystem models, and desired data sources as inputs to the model. Novice users are guided by the web interface through setting up a model execution and plotting the results, while expert users are given enough freedom to modify specific parameters before the model is executed. This will become more important as more models are added to the PEcAn workflow and as more data become available when NEON comes online. On the backend we support the execution of potentially computationally expensive models on different high-performance computing (HPC) systems and/or clusters. The system can be configured with a single XML file, which gives it the flexibility needed for configuring and running the different models on different systems using a combination of information stored in a database and pointers to files on disk. While the web interface usually creates this configuration file, expert users can still edit it directly to fine-tune the configuration. Once a workflow is finished, the web interface allows the easy creation of plots over the result data and lets the user download the results for further processing. The current workflow in the web interface is a simple linear workflow, but it will be expanded to allow for more complex workflows. We are working with Kepler and Cyberintegrator to support these more complex workflows as well as to collect provenance of the workflow being executed. This provenance regarding model executions is stored in a database along with the derived results. All of this information is then accessible using the BETY database web frontend.
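The single-XML-file configuration described above can be illustrated with a short parsing sketch. The element and attribute names below are invented for illustration and do not reflect PEcAn's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical workflow configuration; tag and attribute names are
# illustrative only, not PEcAn's real schema.
config_xml = """
<workflow>
  <site id="US-WCr" lat="45.81" lon="-90.08"/>
  <model name="ED2" binary="/usr/local/bin/ed2"/>
  <run start="2004-01-01" end="2004-12-31"/>
</workflow>
"""

root = ET.fromstring(config_xml)
model = root.find("model").get("name")   # which ecosystem model to run
site = root.find("site").get("id")       # where to run it
```

Because the configuration is plain XML, the web interface can generate it automatically while an expert user can still open and edit the same file by hand.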

  19. Interfacing a robotic station with a gas chromatograph for the full automation of the determination of organochlorine pesticides in vegetables

    SciTech Connect

    Torres, P.; Luque de Castro, M.D.

    1996-12-31

A fully automated method for the determination of organochlorine pesticides in vegetables is proposed. The overall system acts as an "analytical black box" because a robotic station performs the preliminary operations, from weighing to capping the leached analytes and placing them in the autosampler of an automated gas chromatograph with electron capture detection. The method has been applied to the determination of lindane, heptachlor, captan, chlordane, and methoxychlor in tea, marjoram, cinnamon, pennyroyal, and mint with good results in most cases. A gas chromatograph has thus been interfaced to a robotic station for the determination of pesticides in vegetables. 15 refs., 4 figs., 2 tabs.

  20. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE PAGESBeta

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat; Cheng, Danling; Broadwater, Robert P.; Scirbona, Charlie; Cocks, George; Hamilton, Stephanie; Wang, Xiaoyu

    2016-02-26

A series of analyses, along with field validations, that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.
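The storm-condition Monte Carlo evaluation can be sketched in miniature: sections of a feeder system fail randomly during a storm, and automated switching restores them faster than manual crew dispatch. Every number below (failure probability, section count, restoration times) is an invented illustration, not a figure from the study.

```python
import random

def simulate_storm_outages(n_trials, fail_prob, n_sections,
                           manual_hours, automated_hours, seed=0):
    """Monte Carlo estimate of average outage-hours per storm.

    Each of n_sections feeder sections fails independently with
    probability fail_prob; automated reconfiguration restores a failed
    section faster than manual dispatch. Parameters are illustrative.
    """
    rng = random.Random(seed)
    manual_total = automated_total = 0.0
    for _ in range(n_trials):
        failures = sum(rng.random() < fail_prob for _ in range(n_sections))
        manual_total += failures * manual_hours
        automated_total += failures * automated_hours
    return manual_total / n_trials, automated_total / n_trials

manual, automated = simulate_storm_outages(
    n_trials=10_000, fail_prob=0.05, n_sections=14,
    manual_hours=4.0, automated_hours=0.5)
```

With enough trials, the averages converge to the analytic expectation (sections x failure probability x restoration time), which is a useful sanity check on the simulation.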

  1. Automating a human factors evaluation of graphical user interfaces for NASA applications: An update on CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jian-Ping; Murphy, Elizabeth D.; Bailin, Sidney C.; Truszkowski, Walter F.

    1993-01-01

Capturing human factors knowledge about the design of graphical user interfaces (GUIs) and applying this knowledge on-line are the primary objectives of the Computer-Human Interaction Models (CHIMES) project. The current CHIMES prototype is designed to check a GUI's compliance with industry-standard guidelines, general human factors guidelines, and human factors recommendations on color usage. Following the evaluation, CHIMES presents human factors feedback and advice to the GUI designer. The paper describes the approach to modeling human factors guidelines, the system architecture, a new method developed to convert quantitative RGB primaries into qualitative color representations, and the potential for integrating CHIMES with user interface management systems (UIMS). Both the conceptual approach and its implementation are discussed. This paper updates the presentation on CHIMES at the first International Symposium on Ground Data Systems for Spacecraft Control.
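One simple way to convert quantitative RGB primaries into a qualitative color representation is a nearest-reference-color lookup. The palette and distance metric below are a simplified illustration, not the method CHIMES actually uses.

```python
import math

# Map an RGB triple to a qualitative color name via the nearest
# reference color (Euclidean distance in RGB space). The palette is
# illustrative only.
REFERENCE = {
    "red":    (255, 0, 0),
    "green":  (0, 255, 0),
    "blue":   (0, 0, 255),
    "yellow": (255, 255, 0),
    "black":  (0, 0, 0),
    "white":  (255, 255, 255),
    "gray":   (128, 128, 128),
}

def qualitative_color(rgb):
    """Return the name of the reference color closest to rgb."""
    return min(REFERENCE,
               key=lambda name: math.dist(rgb, REFERENCE[name]))
```

A guideline checker could then reason about names like "red" (e.g., reserved for alarms) instead of raw pixel values.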

  2. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment if they were not designed for it. An example is implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and the model software structure. Debugging and testing of the model implementation are also time-consuming without a full understanding of LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. A general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model and to provide state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development requires only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models, so code templates defined for this general model interface can be re-used with any specific model, and the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It accepts model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and the development load is reduced by about 80-90%. In this presentation, the automated model implementation approach is described along with the LIS programming interfaces, the general model interface, and five case studies, including a regression model, Noah-MP, FASST, SAC-HTET/SNOW-17, and FLake. These models vary in complexity and software structure. We also describe how these complexities were overcome using this approach, along with results of model benchmarks within LIS.
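The template-driven generation of FORTRAN 90 wrapper code can be sketched as follows. The specification fields, variable names, and wrapper template are hypothetical stand-ins for the Excel/VBA toolkit's actual worksheets and templates.

```python
# Sketch: generate a FORTRAN 90 wrapper subroutine from a model
# specification dict (fields and template are hypothetical).
SPEC = {
    "model": "mymodel",
    "forcings": ["tair", "precip"],
    "states": ["soil_moisture"],
}

def generate_wrapper(spec):
    """Emit a wrapper that receives forcings, updates states, and
    delegates to the underlying model subroutine."""
    args = ", ".join(spec["forcings"] + spec["states"])
    lines = [
        f"subroutine {spec['model']}_wrapper({args})",
        *[f"  real, intent(in)    :: {v}" for v in spec["forcings"]],
        *[f"  real, intent(inout) :: {v}" for v in spec["states"]],
        f"  call {spec['model']}({args})",
        f"end subroutine {spec['model']}_wrapper",
    ]
    return "\n".join(lines)

code = generate_wrapper(SPEC)
```

Because every wrapper follows the same interface, the framework-side calling logic never changes from model to model; only the generated glue differs.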

  3. Automation Marketplace 2010: New Models, Core Systems

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2010-01-01

    In a year when a difficult economy presented fewer opportunities for immediate gains, the major industry players have defined their business strategies with fundamentally different concepts of library automation. This is no longer an industry where companies compete on the basis of the best or the most features in similar products but one where…

  5. Modeling strategic behavior in human-automation interaction - Why an 'aid' can (and should) go unused

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex

    1993-01-01

Task-offload aids (e.g., an autopilot, an 'intelligent' assistant) can be selectively engaged by the human operator to dynamically delegate tasks to automation. Introducing such aids eliminates some task demands but creates new ones associated with programming, engaging, and disengaging the aiding device via an interface. The burdens associated with managing automation can sometimes outweigh the benefits the automation brings to overall system performance. Aid design parameters and features of the overall multitask context combine to determine whether or not a task-offload aid will effectively support the operator. A modeling and sensitivity analysis approach is presented that identifies effective strategies for human-automation interaction as a function of three task-context parameters and three aid design parameters. The analysis and modeling approaches provide resources for predicting how a well-adapted operator will use a given task-offload aid, and for specifying aid design features that ensure that automation will provide effective operator support in a multitask environment.
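The core trade-off above can be captured in a toy decision rule: engage the aid only when the time it saves exceeds the overhead of managing it. This is a hedged illustration of the general idea, not Kirlik's actual model or parameterization.

```python
def should_engage_aid(task_time_manual, task_time_aided,
                      setup_cost, monitoring_cost):
    """Toy expected-time model: a task-offload aid is worth engaging
    only when the time saved on the task exceeds the overhead of
    programming, engaging, and monitoring the aid.
    The cost structure is illustrative only."""
    saved = task_time_manual - task_time_aided
    overhead = setup_cost + monitoring_cost
    return saved > overhead
```

Under this rule a well-adapted operator rationally leaves the aid unused whenever its management overhead dominates, which is exactly why an "aid" can (and should) go unused.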

  6. Exactly solvable models of growing interfaces: the Arcetri models

    NASA Astrophysics Data System (ADS)

    Durang, Xavier; Henkel, Malte

Motivated by an analogy with the spherical model of a ferromagnet, the Arcetri models provide new universality classes for the growth of interfaces, distinct from the common Edwards-Wilkinson (EW) and Kardar-Parisi-Zhang (KPZ) universality classes. These models are obtained by replacing the non-linear term in the noisy Burgers equation or the KPZ equation with a mean spherical condition. We study the consequences of this constraint on the EW interface.

  7. Development and Design of a User Interface for a Computer Automated Heating, Ventilation, and Air Conditioning System

    SciTech Connect

    Anderson, B.; /Fermilab

    1999-10-08

A user interface is created to monitor and operate the heating, ventilation, and air conditioning system. The interface is networked to the system's programmable logic controller. The controller maintains automated control of the system. Through the interface, the user is able to see the status of the system and override or adjust the automatic control features. The interface is programmed to show digital readouts of system equipment as well as visual cues of system operational statuses. It also provides information for system design and component interaction. The interface is made easier to read by simple designs, color coordination, and graphics. Fermi National Accelerator Laboratory (Fermilab) conducts high energy particle physics research. Part of this research involves collision experiments with protons and anti-protons. These interactions are contained within one of two massive detectors along Fermilab's largest particle accelerator, the Tevatron. The D-Zero Assembly Building houses one of these detectors. At this time detector systems are being upgraded for a second experiment run, titled Run II. Unlike the previous run, systems at D-Zero must be computer automated so operators do not have to continually monitor and adjust these systems during the run. Human intervention should only be necessary for system start up and shut down, and equipment failure. Part of this upgrade includes the heating, ventilation, and air conditioning system (HVAC system). The HVAC system is responsible for controlling two subsystems: the air temperatures of the D-Zero Assembly Building and associated collision hall, as well as six separate water systems used in the heating and cooling of the air and detector components. The HVAC system is automated by a programmable logic controller. In order to provide system monitoring and operator control, a user interface is required. This paper addresses methods and strategies used to design and implement an effective user interface. Background material pertinent to the HVAC system covers the separate water and air subsystems and their purposes. In addition, programming and system automation are also covered.

  8. A Hierarchical Test Model and Automated Test Framework for RTC

    NASA Astrophysics Data System (ADS)

    Lim, Jae-Hee; Song, Suk-Hoon; Kuc, Tae-Yong; Park, Hong-Seong; Kim, Hong-Seak

This paper presents a hierarchical test model and automated test framework for robot software components of RTC (Robot Technology Component) combined with hardware modules. The hierarchical test model consists of three levels of testing based on the V-model: unit test, integration test, and system test. The automated test framework incorporates four components: test data generation, test management, test execution, and test monitoring. The proposed testing model and its automation framework are shown to be efficient for testing developed robotic software components in terms of time and cost. The feasibility and effectiveness of the proposed architecture for robot component testing are illustrated through an application example on an embedded robotic testbed equipped with range sensor hardware and its software component modeled as an RTC.

  9. Aspects of a Theory for Automated Student Modelling.

    ERIC Educational Resources Information Center

    Brown, John Seely; And Others

    Automated Student Modelling explicates reasoning strategies, the representation of procedural skills, and underlying misconceptions as manifested in errors. A diagnostic model, based on a "procedural network" as opposed to a "semantic network," is presented which provides a technique both for modelling the underlying cognitive processes of a…

  10. Modelling Safe Interface Interactions in Web Applications

    NASA Astrophysics Data System (ADS)

    Brambilla, Marco; Cabot, Jordi; Grossniklaus, Michael

Current Web applications embed sophisticated user interfaces and business logic. The original interaction paradigm of the Web, based on static content pages that are browsed by hyperlinks, is therefore no longer valid. In this paper, we advocate a paradigm shift for browsers and Web applications that improves the management of user interaction and browsing history. Pages are replaced by States as basic navigation nodes, and Back/Forward navigation along the browsing history is replaced by a full-fledged interactive application paradigm, supporting transactions at the interface level and featuring Undo/Redo capabilities. This new paradigm offers a safer and more precise interaction model, protecting the user from unexpected behaviours of the applications and the browser.
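The State-based navigation with Undo/Redo described above can be sketched with two stacks, the classic structure behind undo histories. This is a minimal illustration of the paradigm, not the authors' implementation.

```python
class InteractionHistory:
    """Navigation over application States with Undo/Redo, replacing
    plain Back/Forward page history (minimal sketch of the paradigm)."""

    def __init__(self, initial):
        self._undo = [initial]   # past states; top is the current one
        self._redo = []          # states undone and available for redo

    @property
    def current(self):
        return self._undo[-1]

    def visit(self, state):
        self._undo.append(state)
        self._redo.clear()       # a new action invalidates the redo branch

    def undo(self):
        if len(self._undo) > 1:
            self._redo.append(self._undo.pop())
        return self.current

    def redo(self):
        if self._redo:
            self._undo.append(self._redo.pop())
        return self.current

# Demo: navigate through three States, then step back
h = InteractionHistory("home")
h.visit("cart")
h.visit("checkout")
```

Unlike a browser's Back button, `undo` here restores a full application State, so the user never lands on a stale page with inconsistent business data.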

  11. Modeling of concrete, soil, and interfaces

    NASA Astrophysics Data System (ADS)

    Schreyer, H. L.

    1985-06-01

    The ability to predict the response of structures subjected to large, abrupt bursts of energy depends on a number of factors. These include load definition, the characteristics of the geological material in which the structure is embedded, the mechanism by which forces are transferred from the geological material to the structure, and the characteristics of the structure. The basic objective of the research project discussed here is to provide an improved understanding of the response of common geological and structural materials and of structure-media (concrete-soil) interfaces (SMI), to abrupt loading. For this purpose, engineering constitutive models have been developed, and appropriate experimental data are being used to validate the models. An improved relation for predicting the limit surface of concrete is given. The need for new devices and measurement techniques for rate effects in concrete and the response of unsaturated clay and salt is discussed. Preliminary data from existing experimental equipment are shown. The implications of strain softening are displayed with the use of a model problem. Proposed work includes the use of a strain-softening model to represent interfaces, the extension of the viscoplastic model to represent anisotropic media, and the determination of rate effects in concrete for one and two-dimensional experimental specimens.

  12. ModelMate - A graphical user interface for model analysis

    USGS Publications Warehouse

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  13. Modeling of metal-ferroelectric-insulator-semiconductor structure considering the effects of interface traps

    NASA Astrophysics Data System (ADS)

    Sun, Jing; Shi, Xiao Rong; Zheng, Xue Jun; Tian, Li; Zhu, Zhe

    2015-06-01

An improved model that accounts for interface-trap effects is developed by combining a quantum mechanical model, dipole switching theory, and the silicon physics of the metal-oxide-semiconductor structure to describe the electrical properties of the metal-ferroelectric-insulator-semiconductor (MFIS) structure. Using the model, the effects of the interface traps on the surface potential (ϕSi) of the semiconductor, the low-frequency (LF) capacitance-voltage (C-V) characteristics, and the memory window of the MFIS structure are simulated. The results show that the ϕSi-V and LF C-V curves shift toward the positive-voltage direction and the memory window degrades as the density of interface trap states increases. This paper is expected to provide some guidance for the design and performance improvement of MFIS structure devices. In addition, the improved model can be integrated into electronic design automation (EDA) software for circuit simulation.

  14. Modeling and deadlock avoidance of automated manufacturing systems with multiple automated guided vehicles.

    PubMed

    Wu, Naiqi; Zhou, MengChu

    2005-12-01

An automated manufacturing system (AMS) contains a number of versatile machines (or workstations), buffers, and an automated material handling system (MHS), and is computer-controlled. An effective and flexible alternative for implementing the MHS is to use an automated guided vehicle (AGV) system. The deadlock issue is very important in AMS operation and has been studied extensively. Deadlock problems have traditionally been treated separately for parts in production and parts in transportation, and many techniques were developed for each problem. However, such treatment does not take advantage of the flexibility offered by multiple AGVs. In general, it is intractable to obtain a maximally permissive control policy for either problem. Instead, this paper investigates the two problems in an integrated way. First, we model the AGV system and the part processing processes by resource-oriented Petri nets. Then the two models are integrated by using macro transitions. Based on the combined model, a novel control policy for deadlock avoidance is proposed. It is shown to be maximally permissive with computational complexity of O(n^2), where n is the number of machines in the AMS, if the complexity of controlling part transportation by AGVs is not considered. Thus, the complexity of deadlock avoidance for the whole system is bounded by the complexity of controlling the AGV system. An illustrative example shows its application and power. PMID:16366245
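The general idea of deadlock avoidance — admit only resource states from which every job can still run to completion — can be illustrated with the classic banker's-algorithm safety check. This is a textbook stand-in for the concept, not the Petri-net policy proposed in the paper.

```python
def is_safe(available, allocation, need):
    """Banker's-algorithm safety check: a resource state is safe if
    some ordering lets every job finish (each finishing job releases
    its allocation back to the pool). Illustrates deadlock avoidance
    in general, not the paper's Petri-net control policy."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # job i can finish; it releases its allocation
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)
```

A deadlock-avoidance controller would call such a check before granting a machine, buffer slot, or AGV, and withhold the grant whenever the resulting state would be unsafe.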

  15. A catalog of automated analysis methods for enterprise models.

    PubMed

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated, making omissions or miscalculations very likely. This situation has fostered research on automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool. PMID:27047732

  16. Automated particulate sampler field test model operations guide

    SciTech Connect

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and most predominantly the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve as both a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  17. Development of an automated speech recognition interface for personal emergency response systems

    PubMed Central

    Hamill, Melinda; Young, Vicky; Boger, Jennifer; Mihailidis, Alex

    2009-01-01

    Background Demands on long-term-care facilities are predicted to increase at an unprecedented rate as the baby boomer generation reaches retirement age. Aging-in-place (i.e. aging at home) is the desire of most seniors and is also a good option to reduce the burden on an over-stretched long-term-care system. Personal Emergency Response Systems (PERSs) help enable older adults to age-in-place by providing them with immediate access to emergency assistance. Traditionally they operate with push-button activators that connect the occupant via speaker-phone to a live emergency call-centre operator. If occupants do not wear the push button or cannot access the button, then the system is useless in the event of a fall or emergency. Additionally, a false alarm or failure to check-in at a regular interval will trigger a connection to a live operator, which can be unwanted and intrusive to the occupant. This paper describes the development and testing of an automated, hands-free, dialogue-based PERS prototype. Methods The prototype system was built using a ceiling mounted microphone array, an open-source automatic speech recognition engine, and a 'yes' and 'no' response dialog modelled after an existing call-centre protocol. Testing compared a single microphone versus a microphone array with nine adults in both noisy and quiet conditions. Dialogue testing was completed with four adults. Results and discussion The microphone array demonstrated improvement over the single microphone. In all cases, dialog testing resulted in the system reaching the correct decision about the kind of assistance the user was requesting. Further testing is required with elderly voices and under different noise conditions to ensure the appropriateness of the technology. Future developments include integration of the system with an emergency detection method as well as communication enhancement using features such as barge-in capability. 
Conclusion The use of an automated dialog-based PERS has the potential to provide users with more autonomy in decisions regarding their own health and more privacy in their own home. PMID:19583876
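The yes/no dialogue protocol described above can be sketched as a tiny decision tree over recognized responses. The prompts and the resulting actions are illustrative inventions, not the call-centre protocol used in the study.

```python
# Minimal 'yes'/'no' emergency dialogue in the spirit of the PERS
# prototype; prompts and decision logic are illustrative only.
def run_dialogue(answers):
    """answers: sequence of recognized words ('yes'/'no'), in the
    order the system asks its questions. Returns the chosen action."""
    answers = iter(answers)
    if next(answers) == "yes":          # "Do you need help?"
        if next(answers) == "yes":      # "Is it an emergency?"
            return "call_911"
        return "call_contact"           # non-urgent assistance
    return "no_action"
```

Restricting recognition to two words keeps the automatic speech recognition task tractable even with a distant ceiling microphone and background noise.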

  18. Radiation budget measurement/model interface

    NASA Technical Reports Server (NTRS)

    Vonderhaar, T. H.; Ciesielski, P.; Randel, D.; Stevens, D.

    1983-01-01

This final report includes research results from the period February 1981 through November 1982. Two new results combine to form the final portion of this work: the work by Hanna (1982) and Stevens to successfully test and demonstrate a low-order spectral climate model, and the work by Ciesielski et al. (1983) to combine and test the new radiation budget results from NIMBUS-7 with earlier satellite measurements. Together, the two related activities set the stage for future research on radiation budget measurement/model interfacing. Such combination of results will lead to new applications of satellite data to climate problems. The objectives of this research under the present contract are therefore satisfied. Additional research reported herein includes the compilation and documentation of the radiation budget data set at Colorado State University and the definition of climate-related experiments suggested after lengthy analysis of the satellite radiation budget experiments.

  18. Automated data acquisition technology development: Automated modeling and control development

    NASA Technical Reports Server (NTRS)

    Romine, Peter L.

    1995-01-01

    This report documents the completion of, and improvements made to, the software developed for automated data acquisition and automated modeling and control development on the Texas Micro rack-mounted PCs. This research was initiated because a need was identified by the Metal Processing Branch of NASA Marshall Space Flight Center for a mobile data acquisition and data analysis system, customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument strictly for data acquisition and data analysis. In addition to the data acquisition functions described in this report, WMS also supports many functions associated with process control. The hardware and software requirements for an automated acquisition system for welding process parameters, welding equipment checkout, and welding process modeling were determined in 1992. From these recommendations, NASA purchased the necessary hardware and software. The new welding acquisition system is designed to collect welding parameter data and perform analysis to determine the voltage versus current arc-length relationship for VPPA welding. Once the results of this analysis are obtained, they can then be used to develop a RAIL function to control welding startup and shutdown without torch crashing.

  20. Modeling interfaces between solids: Application to Li battery materials

    NASA Astrophysics Data System (ADS)

    Lepley, N. D.; Holzwarth, N. A. W.

    2015-12-01

    We present a general scheme for modeling the energy of interfaces between crystalline solids, quantitatively including the effects of varying configurations and lattice strain. This scheme is successfully applied to the modeling of likely interface geometries of several solid-state battery materials including Li metal, Li3PO4, Li3PS4, Li2O, and Li2S. Our formalism, together with a partial density of states analysis, allows us to characterize the thickness, stability, and transport properties of these interfaces. We find that all of the interfaces in this study are stable with the exception of Li3PS4/Li. For this chemically unstable interface, the partial density of states helps to identify mechanisms associated with the interface reactions. Our energetic measure of interfaces and our analysis of the band alignment between interface materials indicate multiple factors that may be predictors of interface stability, an important property of solid electrolyte systems.
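A common way to quantify such interface energetics is the formation energy per unit area of a supercell containing the interface, referenced to the isolated slabs. This is a standard construction, not necessarily the exact formalism of this paper, and the numbers below are purely illustrative:

```python
def interface_energy(e_supercell, e_slab_a, e_slab_b, area, n_interfaces=2):
    """Interface formation energy per unit area:

        gamma = (E_supercell - E_slab_A - E_slab_B) / (n_interfaces * area)

    A periodic supercell typically contains two equivalent interfaces,
    hence the default n_interfaces=2. Energies in eV, area in Angstrom^2,
    so gamma comes out in eV/Angstrom^2.
    """
    return (e_supercell - e_slab_a - e_slab_b) / (n_interfaces * area)

# Illustrative numbers only (not taken from the paper):
gamma = interface_energy(-812.4, -400.0, -410.0, area=25.0)
```

A negative gamma signals that the combined cell is lower in energy than the separated slabs, i.e. the interface is energetically favorable.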

  1. Variational Implicit Solvation with Solute Molecular Mechanics: From Diffuse-Interface to Sharp-Interface Models

    PubMed Central

    Li, Bo; Zhao, Yanxiang

    2013-01-01

    Central in a variational implicit-solvent description of biomolecular solvation is an effective free-energy functional of the solute atomic positions and the solute-solvent interface (i.e., the dielectric boundary). The free-energy functional couples together the solute molecular mechanical interaction energy, the solute-solvent interfacial energy, the solute-solvent van der Waals interaction energy, and the electrostatic energy. In recent years, the sharp-interface version of the variational implicit-solvent model has been developed and used for numerical computations of molecular solvation. In this work, we propose a diffuse-interface version of the variational implicit-solvent model with solute molecular mechanics. We also analyze both the sharp-interface and diffuse-interface models. We prove the existence of free-energy minimizers and obtain their bounds. We also prove the convergence of the diffuse-interface model to the sharp-interface model in the sense of Γ-convergence. We further discuss properties of sharp-interface free-energy minimizers, the boundary conditions and the coupling of the Poisson–Boltzmann equation in the diffuse-interface model, and the convergence of forces from diffuse-interface to sharp-interface descriptions. Our analysis relies on previous works on the problem of minimizing surface areas and on our observations on the coupling between solute molecular mechanical interactions and the continuum solvent. Our studies rigorously justify the self-consistency of the proposed diffuse-interface variational models of implicit solvation. PMID:24058213

  2. Automating sensitivity analysis of computer models using computer calculus

    SciTech Connect

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with ''direct'' and ''adjoint'' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.
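The computer-calculus idea behind GRESS, propagating derivatives alongside values so no sensitivity equations need to be hand-coded, can be illustrated with forward-mode dual numbers. This Python sketch is an analogy for the technique, not the GRESS implementation itself:

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers:
    carry (value, derivative) pairs through arithmetic, so every
    computed quantity arrives with its sensitivity attached."""

    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sensitivity(f, x):
    """df/dx at x, with no hand-derived derivative expressions."""
    return f(Dual(x, 1.0)).der
```

For example, `sensitivity(lambda x: 3*x*x + 2*x, 2.0)` returns 14.0, the exact derivative of 3x² + 2x at x = 2.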

  3. Computational design of patterned interfaces using reduced order models

    PubMed Central

    Vattré, A. J.; Abdolrahim, N.; Kolluri, K.; Demkowicz, M. J.

    2014-01-01

    Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance. PMID:25169868

  4. Automating the identification of structural model parameters

    SciTech Connect

    Allen, J.J.; Martinez, D.R.

    1989-01-01

    System identification methods are analytical techniques for resolving the correct model form and parametric values. However, for system identification to become a practical tool for engineering analysis, the estimation techniques/codes must communicate with finite element software packages, without intensive analyst intervention and supervision. This paper presents a technique used to integrate commercial software packages for finite element modeling (MSC/NASTRAN), mathematical programming techniques (ADS), and general linear system analysis (PRO-MATLAB). The parameter estimation techniques and the software for controlling the overall system were programmed in PRO-MATLAB. Two examples of application of this software are presented. The examples consist of a truss structure in which the model form is well defined and an electronics package whose model form is ill-defined since it is difficult to model with finite elements. A comparison of the resulting updated models with the experimental data showed significant improvement. 19 refs., 8 figs., 2 tabs.

  5. A nonlinear interface model applied to masonry structures

    NASA Astrophysics Data System (ADS)

    Lebon, Frédéric; Raffa, Maria Letizia; Rizzoni, Raffaella

    2015-12-01

    In this paper, a new imperfect interface model is presented. The model includes finite strains, micro-cracks and smooth roughness. The model is consistently derived by coupling a homogenization approach for micro-cracked media and arguments of asymptotic analysis. The model is applied to brick/mortar interfaces. Numerical results are presented.

  6. A new interface element for connecting independently modeled substructures

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.

    1993-01-01

    A new interface element based on the hybrid variational formulation is presented and demonstrated. The element provides a means of connecting independently modeled substructures whose nodes along the common boundary need not be coincident. The interface element extends previous work to include connecting an arbitrary number of substructures, the use of closed and generally curved interfaces, and the use of multiple, possibly nested, interfaces. Several applications of the element are presented and aspects of the implementation are discussed.

  7. Hexapods with fieldbus interfaces for automated manufacturing of opto-mechanical components

    NASA Astrophysics Data System (ADS)

    Schreiber, Steffen; Muellerleile, Christian; Frietsch, Markus; Gloess, Rainer

    2013-09-01

    The adjustment of opto-mechanical components in manufacturing processes often requires precise motion in all six degrees of freedom with nanometer-range resolution and absence of hysteresis. Parallel kinematic systems are predestined for such tasks due to their compact design, low inertia and high stiffness resulting in rapid settling behavior. To achieve adequate system performance, specialized motion controllers are required to handle the complex kinematic models for the different types of Hexapods and the associated extensive calculations of inverse kinematics. These controllers often rely on proprietary command languages, a fact that demands a high level of familiarization. This paper describes how the integration of fieldbus interfaces into Hexapod controllers simplifies the communication while providing higher flexibility. By using standardized communication protocols with cycle times down to 12.5 µs it is straightforward to control multiple Hexapods and other devices by superordinate PLCs of different manufacturers. The paper also illustrates how to simplify adjustment and alignment processes by combining scanning algorithms with user-defined coordinate systems.
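The inverse-kinematics calculation such controllers perform maps a commanded platform pose to six strut lengths. A minimal sketch of the idea, restricted to yaw-only rotation for brevity (a real controller handles all six degrees of freedom) and using hypothetical anchor coordinates:

```python
import math

def hexapod_leg_lengths(base_pts, plat_pts, translation, yaw):
    """Inverse kinematics of a hexapod (Stewart platform): each strut
    length is |T + Rz(yaw) * p_i - b_i|, where b_i are base anchors and
    p_i are platform anchors in the platform frame."""
    c, s = math.cos(yaw), math.sin(yaw)
    tx, ty, tz = translation
    lengths = []
    for (bx, by, bz), (px, py, pz) in zip(base_pts, plat_pts):
        # rotate the platform anchor about z, then translate
        rx, ry, rz = c * px - s * py, s * px + c * py, pz
        lengths.append(math.dist((tx + rx, ty + ry, tz + rz),
                                 (bx, by, bz)))
    return lengths
```

The inverse problem (pose to lengths) is closed-form and cheap, as shown; it is the forward problem and the hard real-time cycle rates that make dedicated controllers necessary.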

  8. Automated adaptive inference of phenomenological dynamical models

    NASA Astrophysics Data System (ADS)

    Daniels, Bryan C.; Nemenman, Ilya

    2015-08-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.
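The experiment-design step described above, selecting the measurement where candidate models disagree most, can be sketched in a few lines. The two rival models and the candidate inputs are toy assumptions for illustration:

```python
def most_informative_input(models, candidates):
    """Pick the input where the candidate models' predictions spread
    the widest; measuring there discriminates between them fastest."""
    def spread(x):
        preds = [m(x) for m in models]
        return max(preds) - min(preds)
    return max(candidates, key=spread)

# Two rival phenomenological fits that agree on existing data near x=1:
linear = lambda x: 2.0 * x
saturating = lambda x: 4.0 * x / (1.0 + x)

# They diverge for large x, so the next experiment should probe there.
best_x = most_informative_input([linear, saturating],
                                [0.5, 1.0, 2.0, 4.0])
```

Here both models predict 2.0 at x = 1, so measuring there is uninformative; the largest candidate input maximizes the disagreement and is selected.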

  9. Automated Environment Generation for Software Model Checking

    NASA Technical Reports Server (NTRS)

    Tkachuk, Oksana; Dwyer, Matthew B.; Pasareanu, Corina S.

    2003-01-01

    A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open since their behavior is dependent on patterns of invocation of system components and values defined outside the system but referenced within the system. Whether reasoning about the behavior of whole programs or about program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments of Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera Environment Generator (BEG) which we describe along with our experience using BEG to reason about properties of several non-trivial concurrent Java programs.

  10. Automated adaptive inference of phenomenological dynamical models

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508

  11. Models for Automated Tube Performance Calculations

    SciTech Connect

    C. Brunkhorst

    2002-12-12

    High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance.
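The modeling task, reading tube currents off digitized operating curves for a given set of element voltages, can be approximated by simple piecewise-linear interpolation over the digitized points. The curve data below are illustrative, not from any real tube datasheet:

```python
from bisect import bisect_left

def tube_current(curve, grid_voltage):
    """Interpolate plate current from digitized operating-curve points
    (grid_voltage, current), sorted by voltage. Values outside the
    digitized range are clamped to the end points."""
    xs = [v for v, _ in curve]
    ys = [i for _, i in curve]
    if grid_voltage <= xs[0]:
        return ys[0]
    if grid_voltage >= xs[-1]:
        return ys[-1]
    j = bisect_left(xs, grid_voltage)
    x0, x1, y0, y1 = xs[j - 1], xs[j], ys[j - 1], ys[j]
    t = (grid_voltage - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# Hypothetical digitized curve: (grid voltage V, plate current A)
curve = [(-100.0, 0.0), (-50.0, 2.0), (0.0, 6.0)]
```

Wrapped in a subroutine like this, the curve lookup can sit inside an iterative solver for amplifier operating conditions, as the abstract describes.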

  12. State modeling and pass automation in spacecraft control

    NASA Technical Reports Server (NTRS)

    Klein, J.; Kulp, D.; Rashkin, R.

    1996-01-01

    The integrated monitoring and control commercial off-the-shelf system (IMACCS), which demonstrates the feasibility of automating spacecraft monitoring and control activities through the use of state modeling, is described together with its use. The use of the system for the control and ground support of the solar, anomalous and magnetic particle explorer (SAMPEX) spacecraft is considered. A key component of IMACCS is the Altair mission control system, which implements finite state modeling as an element of its expert system capability. Using the finite state modeling and state transition capabilities of the Altair mission control system, IMACCS features automated monitoring, routine pass support, anomaly resolution and emergency 'lights on again' response. Automatic orbit determination and the production of typical flight dynamics products exist. These functionalities are described.

  13. Automated sample plan selection for OPC modeling

    NASA Astrophysics Data System (ADS)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models and also to maintain or improve the quality of the data collected with regard to how well that data represents the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Forming the pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models with quality equivalent to the traditional plan of record (POR) set but in less time.
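One simple way to cast pattern selection as an optimization is greedy coverage of feature clusters. This is a single-objective toy, far simpler than the multi-objective co-optimization the paper describes, and the pattern names and cluster ids are invented:

```python
def greedy_sample_plan(patterns, budget):
    """Greedily pick the pattern covering the most not-yet-covered
    feature clusters; `patterns` maps pattern name -> set of cluster
    ids it exercises, `budget` caps the metrology sample count."""
    covered, plan = set(), []
    for _ in range(budget):
        best = max(patterns, key=lambda p: len(patterns[p] - covered))
        if not patterns[best] - covered:
            break  # nothing new left to cover
        plan.append(best)
        covered |= patterns[best]
    return plan, covered

# Hypothetical pattern library:
patterns = {"dense_lines": {1, 2, 3}, "contacts": {3, 4}, "isolated": {5}}
plan, covered = greedy_sample_plan(patterns, budget=2)
```

Trading a fixed measurement budget against cluster coverage is the essential tension; real sample-plan optimizers add objectives such as measurability and model sensitivity.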

  14. Grcarma: A fully automated task-oriented interface for the analysis of molecular dynamics trajectories.

    PubMed

    Koukos, Panagiotis I; Glykos, Nicholas M

    2013-10-01

    We report the availability of grcarma, a program encoding for a fully automated set of tasks aiming to simplify the analysis of molecular dynamics trajectories of biological macromolecules. It is a cross-platform, Perl/Tk-based front-end to the program carma and is designed to facilitate the needs of the novice as well as those of the expert user, while at the same time maintaining a user-friendly and intuitive design. Particular emphasis was given to the automation of several tedious tasks, such as extraction of clusters of structures based on dihedral and Cartesian principal component analysis, secondary structure analysis, calculation and display of root-mean-square deviation (RMSD) matrices, calculation of entropy, calculation and analysis of variance–covariance matrices, calculation of the fraction of native contacts, etc. The program is free, open-source software available immediately for download. PMID:24159629
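The RMSD-matrix task that grcarma automates reduces to a pairwise computation like the sketch below. For brevity this omits the Kabsch superposition that carma-family tools would normally apply before measuring deviation:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two conformations given as
    lists of (x, y, z) atom positions; assumes the structures are
    already superimposed (no alignment step in this sketch)."""
    n = len(coords_a)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / n)

def rmsd_matrix(trajectory):
    """Pairwise RMSD matrix over all trajectory frames; the diagonal
    is zero and the matrix is symmetric."""
    return [[rmsd(a, b) for b in trajectory] for a in trajectory]
```

For a trajectory of N frames this is O(N²) RMSD evaluations, which is why batch automation of the task is worthwhile.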

  15. Qgui: A high-throughput interface for automated setup and analysis of free energy calculations and empirical valence bond simulations in biological systems.

    PubMed

    Isaksen, Geir Villy; Andberg, Tor Arne Heim; Åqvist, Johan; Brandsdal, Bjørn Olav

    2015-07-01

    Structural information and activity data have increased rapidly for many protein targets during the last decades. In this paper, we present a high-throughput interface (Qgui) for automated free energy and empirical valence bond (EVB) calculations that use molecular dynamics (MD) simulations for conformational sampling. Applications to ligand binding using both the linear interaction energy (LIE) method and the free energy perturbation (FEP) technique are given using the estrogen receptor (ERα) as a model system. Examples of free energy profiles obtained using the EVB method for the rate-limiting step of the enzymatic reaction catalyzed by trypsin are also shown. In addition, we present the calculation of high-precision Arrhenius plots to obtain the thermodynamic activation enthalpy and entropy with Qgui from running a large number of EVB simulations. PMID:26080356
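The LIE estimate mentioned above combines MD-averaged ligand-surrounding interaction energies linearly. A sketch of the standard formula; the default coefficients are common literature values assumed here for illustration, not Qgui's calibrated parameters:

```python
def lie_binding_energy(vdw_bound, vdw_free, el_bound, el_free,
                       alpha=0.18, beta=0.5, gamma=0.0):
    """Linear interaction energy (LIE) estimate of binding free energy:

        dG_bind = alpha * (<V_vdw>_bound - <V_vdw>_free)
                + beta  * (<V_el>_bound  - <V_el>_free) + gamma

    The angle-bracket terms are ensemble averages of the ligand's
    van der Waals and electrostatic interaction energies, sampled by
    MD with the ligand bound to the protein and free in solvent.
    """
    return (alpha * (vdw_bound - vdw_free)
            + beta * (el_bound - el_free) + gamma)
```

Automating this in a high-throughput interface amounts to launching the bound and free MD runs, averaging the two energy components, and applying this one-liner per ligand.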

  16. A power line data communication interface using spread spectrum technology in home automation

    SciTech Connect

    Shwehdi, M.H.; Khan, A.Z.

    1996-07-01

    Building automation technology is rapidly developing towards more reliable communication systems and devices that control electronic equipment. Controlling this equipment leads to efficient energy management and savings on the monthly electricity bill. Power line communication (PLC) has been one of the dreams of the electronics industry for decades, especially for building automation. It is the purpose of this paper to demonstrate communication methods among electronic control devices through an AC power line carrier within buildings for more efficient energy control. The paper outlines methods of communication over a power line, namely the X-10 and CEBus. It also introduces spread spectrum technology, which increases speed to 100--150 times that of the X-10 system. The power line carrier has tremendous applications in the field of building automation. The paper presents an attempt to realize a so-called smart house concept, in which all home electronic devices, from a coffee maker to a water heater, microwave to chaos robots, will be tied into an intelligent network whenever one wishes to do so. The designed system may be applied very profitably to help in energy management for both customer and utility.
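The core of direct-sequence spread spectrum signalling, spreading each data bit over a pseudo-noise chip sequence and recovering it by correlation, can be sketched as follows. The chip sequence is an arbitrary example, not the code used by any actual powerline modem:

```python
def dsss_spread(bits, chips):
    """Direct-sequence spread spectrum: XOR each data bit with a
    pseudo-noise chip sequence, widening the transmitted bandwidth."""
    return [b ^ c for b in bits for c in chips]

def dsss_despread(signal, chips):
    """Correlate chip-sized blocks against the same sequence; a
    majority vote per block recovers each bit, giving some tolerance
    to the impulse noise typical of powerline channels."""
    n = len(chips)
    bits = []
    for i in range(0, len(signal), n):
        votes = sum(s ^ c for s, c in zip(signal[i:i + n], chips))
        bits.append(1 if votes > n // 2 else 0)
    return bits
```

Because each bit is spread over several chips, flipping a single chip in transit still decodes correctly, which is the robustness (and speed, via higher chip rates) advantage over single-pulse schemes like X-10.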

  17. Elevator model based on a tiny PLC for teaching automation

    NASA Astrophysics Data System (ADS)

    Kim, Kee Hwan; Lee, Young Dae

    2005-12-01

    The development of control-related applications requires knowledge of different subject matters such as mechanical components, control equipment and physics. Understanding the behavior of these heterogeneous applications is not easy, especially for students beginning to study electronic engineering. In order to introduce them to the most common components and skills necessary to put together a functioning automated system, we have designed a simple elevator model controlled by a PLC, which was designed around a microcontroller.

  18. Automated refinement and inference of analytical models for metabolic networks

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael D.; Vallabhajosyula, Ravishankar R.; Jenkins, Jerry W.; Hood, Jonathan E.; Soni, Abhishek S.; Wikswo, John P.; Lipson, Hod

    2011-10-01

    The reverse engineering of metabolic networks from experimental data is traditionally a labor-intensive task requiring a priori systems knowledge. Using a proven model as a test system, we demonstrate an automated method to simplify this process by modifying an existing or related model--suggesting nonlinear terms and structural modifications--or even constructing a new model that agrees with the system's time series observations. In certain cases, this method can identify the full dynamical model from scratch without prior knowledge or structural assumptions. The algorithm selects between multiple candidate models by designing experiments to make their predictions disagree. We performed computational experiments to analyze a nonlinear seven-dimensional model of yeast glycolytic oscillations. This approach corrected mistakes reliably in both approximated and overspecified models. The method performed well to high levels of noise for most states, could identify the correct model de novo, and make better predictions than ordinary parametric regression and neural network models. We identified an invariant quantity in the model, which accurately derived kinetics and the numerical sensitivity coefficients of the system. Finally, we compared the system to dynamic flux estimation and discussed the scaling and application of this methodology to automated experiment design and control in biological systems in real time.
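The search over candidate nonlinear terms can be illustrated in miniature: fit each candidate term to the observed derivative data and keep the one with the smallest residual. This toy stand-in omits the paper's symbolic search and automated experiment design; the candidate terms and data are invented:

```python
def best_candidate_term(xs, dxdt, candidates):
    """Score each candidate term f(x) by a least-squares fit of
    dx/dt ~ k * f(x) and return the name of the best-fitting term.

    `candidates` maps term name -> callable f(x); the optimal scale
    k has the closed form sum(f*d) / sum(f*f).
    """
    best_name, best_res = None, float("inf")
    for name, f in candidates.items():
        fx = [f(x) for x in xs]
        denom = sum(v * v for v in fx)
        k = sum(v * d for v, d in zip(fx, dxdt)) / denom if denom else 0.0
        res = sum((d - k * v) ** 2 for v, d in zip(fx, dxdt))
        if res < best_res:
            best_name, best_res = name, res
    return best_name
```

A full system would propose terms automatically, refit all model parameters jointly, and design new experiments to break ties between candidates whose residuals are comparable.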

  19. Automated refinement and inference of analytical models for metabolic networks.

    PubMed

    Schmidt, Michael D; Vallabhajosyula, Ravishankar R; Jenkins, Jerry W; Hood, Jonathan E; Soni, Abhishek S; Wikswo, John P; Lipson, Hod

    2011-10-01

    The reverse engineering of metabolic networks from experimental data is traditionally a labor-intensive task requiring a priori systems knowledge. Using a proven model as a test system, we demonstrate an automated method to simplify this process by modifying an existing or related model--suggesting nonlinear terms and structural modifications--or even constructing a new model that agrees with the system's time series observations. In certain cases, this method can identify the full dynamical model from scratch without prior knowledge or structural assumptions. The algorithm selects between multiple candidate models by designing experiments to make their predictions disagree. We performed computational experiments to analyze a nonlinear seven-dimensional model of yeast glycolytic oscillations. This approach corrected mistakes reliably in both approximated and overspecified models. The method performed well to high levels of noise for most states, could identify the correct model de novo, and make better predictions than ordinary parametric regression and neural network models. We identified an invariant quantity in the model, which accurately derived kinetics and the numerical sensitivity coefficients of the system. Finally, we compared the system to dynamic flux estimation and discussed the scaling and application of this methodology to automated experiment design and control in biological systems in real time. PMID:21832805

  20. Automated refinement and inference of analytical models for metabolic networks

    PubMed Central

    Schmidt, Michael D; Vallabhajosyula, Ravishankar R; Jenkins, Jerry W; Hood, Jonathan E; Soni, Abhishek S; Wikswo, John P; Lipson, Hod

    2013-01-01

    The reverse engineering of metabolic networks from experimental data is traditionally a labor-intensive task requiring a priori systems knowledge. Using a proven model as a test system, we demonstrate an automated method to simplify this process by modifying an existing or related model – suggesting nonlinear terms and structural modifications – or even constructing a new model that agrees with the system’s time-series observations. In certain cases, this method can identify the full dynamical model from scratch without prior knowledge or structural assumptions. The algorithm selects between multiple candidate models by designing experiments to make their predictions disagree. We performed computational experiments to analyze a nonlinear seven-dimensional model of yeast glycolytic oscillations. This approach corrected mistakes reliably in both approximated and overspecified models. The method performed well to high levels of noise for most states, could identify the correct model de novo, and make better predictions than ordinary parametric regression and neural network models. We identified an invariant quantity in the model, which accurately derived kinetics and the numerical sensitivity coefficients of the system. Finally, we compared the system to dynamic flux estimation and discussed the scaling and application of this methodology to automated experiment design and control in biological systems in real-time. PMID:21832805

  1. Automated photogrammetry for three-dimensional models of urban spaces

    NASA Astrophysics Data System (ADS)

    Leberl, Franz; Meixner, Philipp; Wendel, Andreas; Irschara, Arnold

    2012-02-01

    The location-aware Internet is inspiring intensive work addressing the automated assembly of three-dimensional models of urban spaces with their buildings, circulation spaces, vegetation, signs, even their above-ground and underground utility lines. Two-dimensional geographic information systems (GISs) and municipal utility information exist and can serve to guide the creation of models being built with aerial, sometimes satellite imagery, streetside images, indoor imaging, and alternatively with light detection and ranging systems (LiDARs) carried on airplanes, cars, or mounted on tripods. We review the results of current research to automate the information extraction from sensor data. We show that aerial photography at ground sampling distances (GSD) of 1 to 10 cm is well suited to provide geometry data about building facades and roofs, that streetside imagery at 0.5 to 2 cm is particularly interesting when it is collected within community photo collections (CPCs) by the general public, and that the transition to digital imaging has opened the no-cost option of highly overlapping images in support of a more complete and thus more economical automation. LiDAR-systems are a widely used source of three-dimensional data, but they deliver information not really superior to digital photography.
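The ground sampling distances quoted above follow from the standard pinhole relation GSD = pixel size × altitude / focal length. A small sketch; the camera parameters are illustrative:

```python
def ground_sampling_distance(pixel_size_m, focal_length_m, altitude_m):
    """Ground sampling distance of a nadir aerial image: the ground
    footprint of one sensor pixel, by similar triangles:

        GSD = pixel_size * altitude / focal_length
    """
    return pixel_size_m * altitude_m / focal_length_m

# e.g. 6 micrometre pixels, 100 mm lens, 1000 m flying height -> 6 cm GSD
gsd = ground_sampling_distance(6e-6, 0.100, 1000.0)
```

Halving the flying height or doubling the focal length halves the GSD, which is the lever that puts aerial photography into the 1 to 10 cm range the abstract cites.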

  2. Reassembly and interfacing neural models registered on biological model databases.

    PubMed

    Otake, Mihoko; Takagi, Toshihisa

    2005-01-01

    The importance of modeling and simulation of biological processes is growing for further understanding of living systems at all scales, from molecular to cellular, organic, and individual. In the field of neuroscience, there are so-called platform simulators, the de facto standard neural simulators. More than a hundred neural models are registered on the model database. These models are executable in corresponding simulation environments, but the usability of the registered models is not sufficient. In order to make use of a model, users have to identify its input, output and internal state variables and parameters. The roles and units of each variable and parameter are not explicitly defined in the model files; they are suggested implicitly in the papers where the simulation results are demonstrated. In this study, we propose a novel method of reassembling and interfacing models registered on a biological model database. The method was applied to the neural models registered on one of the typical biological model databases, ModelDB. The results are described in detail with the hippocampal pyramidal neuron model. The model is executable in the NEURON simulator environment, and demonstrates that somatic EPSP amplitude is independent of synapse location. Input and output parameters and variables were identified successfully, and the results of the simulation were recorded in an organized form with annotations. PMID:16901091

  3. Rapid Automated Aircraft Simulation Model Updating from Flight Data

    NASA Technical Reports Server (NTRS)

    Brian, Geoff; Morelli, Eugene A.

    2011-01-01

    Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.

  4. Multibody dynamics model building using graphical interfaces

    NASA Technical Reports Server (NTRS)

    Macala, Glenn A.

    1989-01-01

    In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general-purpose dynamics simulation programs which generate the equations of motion either explicitly or implicitly via computer codes. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustration during program setup. Recent progress in a more natural method of data input for dynamics programs, the graphical interface, is described.

  5. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  6. Rapid Prototyping of Hydrologic Model Interfaces with IPython

    NASA Astrophysics Data System (ADS)

    Farthing, M. W.; Winters, K. D.; Ahmadia, A. J.; Hesser, T.; Howington, S. E.; Johnson, B. D.; Tate, J.; Kees, C. E.

    2014-12-01

    A significant gulf still exists between the state of practice and state of the art in hydrologic modeling. Part of this gulf is due to the lack of adequate pre- and post-processing tools for newly developed computational models. The development of user interfaces has traditionally lagged several years behind the development of a particular computational model or suite of models. As a result, models with mature interfaces often lack key advancements in model formulation, solution methods, and/or software design and technology. Part of the problem has been a focus on developing monolithic tools to provide comprehensive interfaces for the entire suite of model capabilities. Such efforts require expertise in software libraries and frameworks for creating user interfaces (e.g., Tcl/Tk, Qt, and MFC). These tools are complex and require significant investment in project resources (time and/or money) to use. Moreover, providing the required features for the entire range of possible applications and analyses creates a cumbersome interface. For a particular site or application, the modeling requirements may be simplified or at least narrowed, which can greatly reduce the number and complexity of options that need to be accessible to the user. However, monolithic tools usually are not adept at dynamically exposing specific workflows. Our approach is to deliver highly tailored interfaces to users. These interfaces may be site and/or process specific. As a result, we end up with many, customized interfaces rather than a single, general-use tool. For this approach to be successful, it must be efficient to create these tailored interfaces. We need technology for creating quality user interfaces that is accessible and has a low barrier for integration into model development efforts. Here, we present efforts to leverage IPython notebooks as tools for rapid prototyping of site and application-specific user interfaces. 
We provide specific examples from applications in near-shore environments as well as levee analysis. We discuss our design decisions and methodology for developing customized interfaces, strategies for delivery of the interfaces to users in various computing environments, as well as implications for the design/implementation of simulation models.

  7. The Application of the Cumulative Logistic Regression Model to Automated Essay Scoring

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Sinharay, Sandip

    2010-01-01

    Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. Comparison of the performances of the linear regression model and the cumulative logit model was performed on a

  8. The Application of the Cumulative Logistic Regression Model to Automated Essay Scoring

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Sinharay, Sandip

    2010-01-01

    Most automated essay scoring programs use a linear regression model to predict an essay score from several essay features. This article applied a cumulative logit model instead of the linear regression model to automated essay scoring. Comparison of the performances of the linear regression model and the cumulative logit model was performed on a…

  9. Developing a Graphical User Interface to Automate the Estimation and Prediction of Risk Values for Flood Protective Structures using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Hasan, M.; Helal, A.; Gabr, M.

    2014-12-01

    In this project, we focus on providing a computer-automated platform for a better assessment of the potential failures and retrofit measures of flood-protecting earth structures, e.g., dams and levees. Such structures play an important role during extreme flooding events as well as during normal operating conditions. Furthermore, they are part of other civil infrastructures such as water storage and hydropower generation. Hence, there is a clear need for accurate evaluation of stability and functionality levels during their service lifetime so that the rehabilitation and maintenance costs are effectively guided. Among condition assessment approaches based on the factor of safety, the limit states (LS) approach utilizes numerical modeling to quantify the probability of potential failures. The parameters for LS numerical modeling include i) geometry and side slopes of the embankment, ii) loading conditions in terms of rate of rising and duration of high water levels in the reservoir, and iii) cycles of rising and falling water levels simulating the effect of consecutive storms throughout the service life of the structure. Sample data regarding the correlations of these parameters are available through previous research studies. We have unified these criteria and extended the risk assessment in term of loss of life through the implementation of a graphical user interface to automate input parameters that divides data into training and testing sets, and then feeds them into Artificial Neural Network (ANN) tool through MATLAB programming. The ANN modeling allows us to predict risk values of flood protective structures based on user feedback quickly and easily. In future, we expect to fine-tune the software by adding extensive data on variations of parameters.

  10. Modeling and Control of the Automated Radiator Inspection Device

    NASA Technical Reports Server (NTRS)

    Dawson, Darren

    1991-01-01

    Many of the operations performed at the Kennedy Space Center (KSC) are dangerous and repetitive tasks which make them ideal candidates for robotic applications. For one specific application, KSC is currently in the process of designing and constructing a robot called the Automated Radiator Inspection Device (ARID), to inspect the radiator panels on the orbiter. The following aspects of the ARID project are discussed: modeling of the ARID; design of control algorithms; and nonlinear based simulation of the ARID. Recommendations to assist KSC personnel in the successful completion of the ARID project are given.

  11. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques as well as internal database of component descriptions to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  12. Automated Physico-Chemical Cell Model Development through Information Theory

    SciTech Connect

    Peter J. Ortoleva

    2005-11-29

    The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is optimization of environmental remediation and energy-producing biotechnical processes.The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

  13. Diffuse Interface Methods for Modeling Drug-Eluting Stent Coatings.

    PubMed

    Saylor, David M; Forrey, Christopher; Kim, Chang-Soo; Warren, James A

    2016-02-01

    An overview of diffuse interface models specific to drug-eluting stent coatings is presented. Microscale heterogeneities, both in the coating and use environment, dictate the performance of these coatings. Using diffuse interface methods, these heterogeneities can be explicitly incorporated into the model equations with relative ease. This enables one to predict the complex microstructures that evolve during coating fabrication and subsequent impact on drug release. Examples are provided that illustrate the wide range of phenomena that can be addressed with diffuse interface models including: crystallization, constrained phase separation, hydrolytic degradation, and heterogeneous binding. Challenges associated with the lack of material property data and numerical solution of the model equations are also highlighted. Finally, in light of these potential drawbacks, the potential to utilize diffuse interface models to help guide product and process development is discussed. PMID:26183961

  14. Finite element modeling of frictionally restrained composite interfaces

    NASA Technical Reports Server (NTRS)

    Ballarini, Roberto; Ahmed, Shamim

    1989-01-01

    The use of special interface finite elements to model frictional restraint in composite interfaces is described. These elements simulate Coulomb friction at the interface, and are incorporated into a standard finite element analysis of a two-dimensional isolated fiber pullout test. Various interfacial characteristics, such as the distribution of stresses at the interface, the extent of slip and delamination, load diffusion from fiber to matrix, and the amount of fiber extraction or depression are studied for different friction coefficients. The results are compared to those obtained analytically using a singular integral equation approach, and those obtained by assuming a constant interface shear strength. The usefulness of these elements in micromechanical modeling of fiber-reinforced composite materials is highlighted.

  15. A nonlinear rainfall runoff model embedded with an automated calibration method Part 2: The automated calibration method

    NASA Astrophysics Data System (ADS)

    Lin, Gwo-Fong; Wang, Chun-Ming

    2007-08-01

    SummaryThe purpose of this paper is to develop an automated calibration method for the nonlinear computational units cascaded (NCUC) model. The simple genetic algorithm (SGA), a popular and robust optimization technique, is introduced in this paper as the basis of the automated calibration method. Therefore, the way to transform the model calibration problem into the optimization problem is first proposed. The general scheme to appropriately arrange the parameters of the NCUC model is then developed, so that the chromosomes of the SGA can be properly constructed. Two performance criterion functions, which are frequently used to evaluate the performance of the rainfall-runoff modeling, are adopted in this paper as the objective function to calibrate the NCUC model. Since the SGA imposes two restrictions on the fitness values, the key of the proposed automated calibration method is the evaluation of the fitness values. The methods to evaluate the fitness values according to the two objective functions are both given in this paper. With the proposed automated calibration method, high-quality parameters of the NCUC model can be obtained without modelers' subjective interventions.

  16. Modeling electronic transport mechanisms in metal-manganite memristive interfaces

    NASA Astrophysics Data System (ADS)

    Gomez-Marlasca, F.; Ghenzi, N.; Leyva, A. G.; Albornoz, C.; Rubi, D.; Stoliar, P.; Levy, P.

    2013-04-01

    We studied La0.325Pr0.300Ca0.375MnO3-Ag memristive interfaces. We present a pulsing/measuring protocol capable of registering both quasi-static i-v data and non-volatile remnant resistance. This protocol allowed distinguishing two different electronic transport mechanisms coexisting at the memristive interface, namely space charge limited current and thermionic emission limited current. We introduce a 2-element electric model that accounts for the obtained results and allows predicting the quasi-static i-v relation of the interface by means of a simple function of both the applied voltage and the remnant resistance value. Each element of the electric model is associated to one of the electronic transport mechanisms found. This electric model could result useful for developing time-domain simulation models of metal-manganite memristive interfaces.

  17. Modeling of interface roughness in thermoelectric composite materials.

    PubMed

    Gather, F; Heiliger, C; Klar, P J

    2011-08-24

    We use a network model to calculate the influence of the mesoscopic interface structure on the thermoelectric properties of superlattice structures consisting of alternating layers of materials A and B. The thermoelectric figure of merit of such a composite material depends on the layer thickness, if interface resistances are accounted for, and can be increased by proper interface design. In general, interface roughness reduces the figure of merit, again compared to the case of ideal interfaces. However, the strength of this reduction depends strongly on the type of interface roughness. Smooth atomic surface diffusion leading to alloying of materials A and B causes the largest reduction of the figure of merit. Consequently, in real structures, it is important not only to minimize interface roughness, but also to control the type of roughness. Although the microscopic effects of interfaces are only empirically accounted for, using a network model can yield useful information about the dependence of the macroscopic transport coefficients on the mesoscopic disorder in structured thermoelectric materials. PMID:21811010

  18. Automated geo/ortho registered aerial imagery product generation using the mapping system interface card (MSIC)

    NASA Astrophysics Data System (ADS)

    Bratcher, Tim; Kroutil, Robert; Lanouette, André; Lewis, Paul E.; Miller, David; Shen, Sylvia; Thomas, Mark

    2013-05-01

    The development concept paper for the MSIC system was first introduced in August 2012 by these authors. This paper describes the final assembly, testing, and commercial availability of the Mapping System Interface Card (MSIC). The 2.3kg MSIC is a self-contained, compact variable configuration, low cost real-time precision metadata annotator with embedded INS/GPS designed specifically for use in small aircraft. The MSIC was specifically designed to convert commercial-off-the-shelf (COTS) digital cameras and imaging/non-imaging spectrometers with Camera Link standard data streams into mapping systems for airborne emergency response and scientific remote sensing applications. COTS digital cameras and imaging/non-imaging spectrometers covering the ultraviolet through long-wave infrared wavelengths are important tools now readily available and affordable for use by emergency responders and scientists. The MSIC will significantly enhance the capability of emergency responders and scientists by providing a direct transformation of these important COTS sensor tools into low-cost real-time aerial mapping systems.

  19. A generalized mechanical model for suture interfaces of arbitrary geometry

    NASA Astrophysics Data System (ADS)

    Li, Yaning; Ortiz, Christine; Boyce, Mary C.

    2013-04-01

    Suture interfaces with a triangular wave form commonly found in nature have recently been shown to exhibit exceptional mechanical behavior, where geometric parameters such as amplitude, frequency, and hierarchy can be used to nonlinearly tailor and amplify mechanical properties. In this study, using the principle of complementary virtual work, we formulate a generalized, composite mechanical model for arbitrarily-shaped interdigitating suture interfaces in order to more broadly investigate the influence of wave-form geometry on load transmission, deformation mechanisms, anisotropy, and stiffness, strength, and toughness of the suture interface for tensile and shear loading conditions. The application of this suture interface model is exemplified for the case of the general trapezoidal wave-form. Expressions for the in-plane stiffness, strength and fracture toughness and failure mechanisms are derived as nonlinear functions of shape factor β (which characterizes the general trapezoidal shape as triangular, trapezoidal, rectangular or anti-trapezoidal), the wavelength/amplitude ratio, the interface width/wavelength ratio, and the stiffness and strength ratios of the skeletal/interfacial phases. These results provide guidelines for choosing and tailoring interface geometry to optimize the mechanical performance in resisting different loads. The presented model provides insights into the relation between the mechanical function and the morphological diversity of suture interface geometries observed in natural systems.

  20. A new seismically constrained subduction interface model for Central America

    NASA Astrophysics Data System (ADS)

    Kyriakopoulos, C.; Newman, A. V.; Thomas, A. M.; Moore-Driskell, M.; Farmer, G. T.

    2015-08-01

    We provide a detailed, seismically defined three-dimensional model for the subducting plate interface along the Middle America Trench between northern Nicaragua and southern Costa Rica. The model uses data from a weighted catalog of about 30,000 earthquake hypocenters compiled from nine catalogs to constrain the interface through a process we term the "maximum seismicity method." The method determines the average position of the largest cluster of microseismicity beneath an a priori functional surface above the interface. This technique is applied to all seismicity above 40 km depth, the approximate intersection of the hanging wall Mohorovičić discontinuity, where seismicity likely lies along the plate interface. Below this depth, an envelope above 90% of seismicity approximates the slab surface. Because of station proximity to the interface, this model provides highest precision along the interface beneath the Nicoya Peninsula of Costa Rica, an area where marked geometric changes coincide with crustal transitions and topography observed seaward of the trench. The new interface is useful for a number of geophysical studies that aim to understand subduction zone earthquake behavior and geodynamic and tectonic development of convergent plate boundaries.

  1. T:XML: A Tool Supporting User Interface Model Transformation

    NASA Astrophysics Data System (ADS)

    López-Jaquero, Víctor; Montero, Francisco; González, Pascual

    Model driven development of user interfaces is based on the transformation of an abstract specification into the final user interface the user will interact with. The design of transformation rules to carry out this transformation process is a key issue in any model-driven user interface development approach. In this paper, we introduce T:XML, an integrated development environment for managing, creating and previewing transformation rules. The tool supports the specification of transformation rules by using a graphical notation that works on the basis of the transformation of the input model into a graph-based representation. T:XML allows the design and execution of transformation rules in an integrated development environment. Furthermore, the designer can also preview how the generated user interface looks like after the transformations have been applied. These previewing capabilities can be used to quickly create prototypes to discuss with the users in user-centered design methods.

  2. Combining biometric and symbolic models for customized, automated prosthesis design.

    PubMed

    Modgil, S; Hutton, T J; Hammond, P; Davenport, J C

    2002-07-01

    In a previous paper [Artif. Intell. Med. 5 (1993) 431] we described RaPiD, a knowledge-based system for designing dental prostheses. The present paper discusses how RaPiD has been extended using techniques from computer vision and logic grammars. The first employs point distribution and active shape models (ASMs) to determine dentition from images of casts of patient's jaws. This enables a design to be customized to, and visualised against, an image of a patient's dentition. The second is based on the notion of a path grammar, a form of logic grammar, to generate a path linking an ordered sequence of subcomponents. The shape of an important and complex prosthesis component can be automatically seeded in this fashion. Combining these models now substantially automates the design process, beginning with a photograph of a dental cast and ending with an annotated and validated design diagram ready to guide manufacture. PMID:12069761

  3. Control of a Wheelchair in an Indoor Environment Based on a Brain-Computer Interface and Automated Navigation.

    PubMed

    Zhang, Rui; Li, Yuanqing; Yan, Yongyong; Zhang, Hao; Wu, Shaoyu; Yu, Tianyou; Gu, Zhenghui

    2016-01-01

    The concept of controlling a wheelchair using brain signals is promising. However, the continuous control of a wheelchair based on unstable and noisy electroencephalogram signals is unreliable and generates a significant mental burden for the user. A feasible solution is to integrate a brain-computer interface (BCI) with automated navigation techniques. This paper presents a brain-controlled intelligent wheelchair with the capability of automatic navigation. Using an autonomous navigation system, candidate destinations and waypoints are automatically generated based on the existing environment. The user selects a destination using a motor imagery (MI)-based or P300-based BCI. According to the determined destination, the navigation system plans a short and safe path and navigates the wheelchair to the destination. During the movement of the wheelchair, the user can issue a stop command with the BCI. Using our system, the mental burden of the user can be substantially alleviated. Furthermore, our system can adapt to changes in the environment. Two experiments based on MI and P300 were conducted to demonstrate the effectiveness of our system. PMID:26054072

  4. Development of an automated core model for nuclear reactors

    SciTech Connect

    Mosteller, R.D.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop an automated package of computer codes that can model the steady-state behavior of nuclear-reactor cores of various designs. As an added benefit, data produced for steady-state analysis also can be used as input to the TRAC transient-analysis code for subsequent safety analysis of the reactor at any point in its operating lifetime. The basic capability to perform steady-state reactor-core analysis already existed in the combination of the HELIOS lattice-physics code and the NESTLE advanced nodal code. In this project, the automated package was completed by (1) obtaining cross-section libraries for HELIOS, (2) validating HELIOS by comparing its predictions to results from critical experiments and from the MCNP Monte Carlo code, (3) validating NESTLE by comparing its predictions to results from numerical benchmarks and to measured data from operating reactors, and (4) developing a linkage code to transform HELIOS output into NESTLE input.

  5. An Automated 3d Indoor Topological Navigation Network Modelling

    NASA Astrophysics Data System (ADS)

    Jamali, A.; Rahman, A. A.; Boguslawski, P.; Gold, C. M.

    2015-10-01

    Indoor navigation is important for various applications such as disaster management and safety analysis. In the last decade, indoor environment has been a focus of wide research; that includes developing techniques for acquiring indoor data (e.g. Terrestrial laser scanning), 3D indoor modelling and 3D indoor navigation models. In this paper, an automated 3D topological indoor network generated from inaccurate 3D building models is proposed. In a normal scenario, 3D indoor navigation network derivation needs accurate 3D models with no errors (e.g. gap, intersect) and two cells (e.g. rooms, corridors) should touch each other to build their connections. The presented 3D modeling of indoor navigation network is based on surveying control points and it is less dependent on the 3D geometrical building model. For reducing time and cost of indoor building data acquisition process, Trimble LaserAce 1000 as surveying instrument is used. The modelling results were validated against an accurate geometry of indoor building environment which was acquired using Trimble M3 total station.

  6. Interfaces in the Potts model II: Antonov's rule and rigidity of the order disorder interface

    NASA Astrophysics Data System (ADS)

    Messager, Alain; Miracle-Sole, Salvador; Ruiz, Jean; Shlosman, Senya

    1991-09-01

    Within the ferromagnetic q-state Potts model we discuss the wetting of the interface between two ordered phases a and b by the disordered phase f at the transition temperature. In two or more dimensions and for q large we establish the validity of the Antonov's rule, ? ab = ? af + ? fb , where ? denotes the surface tension between the considered phases. We also prove that at this temperature, in three or more dimensions the interface between any ordered phase and the disordered one is rigid.

  7. User interface for ground-water modeling: Arcview extension

    USGS Publications Warehouse

    Tsou, M.-S.; Whittemore, D.O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. Arc View is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to Arc View that provides additional specialized functions. An Arc View interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The Arc View interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the Arc View interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  8. Minimal model for charge transfer excitons at the dielectric interface

    NASA Astrophysics Data System (ADS)

    Ono, Shota; Ohno, Kaoru

    2016-03-01

    A theoretical description of the charge transfer (CT) exciton across the donor-acceptor interface without the use of a completely localized hole (or electron) is a challenge in the field of organic solar cells. We calculate the total wave function of the CT exciton by solving an effective two-particle Schrödinger equation for the inhomogeneous dielectric interface. We formulate the magnitude of the CT and construct a minimal model of the CT exciton under the breakdown of inversion symmetry. We demonstrate that both a light hole mass and a hole localization along the normal to the dielectric interface are crucial to yield the CT exciton.

  9. Neuroengineering modeling of single neuron and neural interface.

    PubMed

    Hu, X L; Zhang, Y T; Yao, J

    2002-01-01

    The single neuron has attracted widespread attention as an elementary unit for understanding the electrophysiological mechanisms of nervous systems and for exploring the functions of biological neural networks. Over the past decades, much modeling work on neural interface has been presented in support of experimental findings in neural engineering. This article reviews the recent research results on modeling electrical activities of the single neuron, electrical synapse, neuromuscular junction, and neural interfaces at cochlea. Single neuron models vary form to illustrate how neurons fire and what the firing patterns mean. Focusing on these two questions, recent modeling work on single neurons is discussed. The modeling of neural receptors at inner and outer hair cells is examined to explain the transforming procedure from sounds to electrical signals. The low-pass characteristics of electrical synapse and neuromuscular junction are also discussed in an attempt to understand the mechanism of electrical transmission across the interfaces. PMID:12739750

  10. Stable, reproducible, and automated capillary zone electrophoresis-tandem mass spectrometry system with an electrokinetically pumped sheath-flow nanospray interface.

    PubMed

    Zhu, Guijie; Sun, Liangliang; Yan, Xiaojing; Dovichi, Norman J

    2014-01-31

    A PrinCE autosampler was coupled to a Q-Exactive mass spectrometer by an electrokinetically pumped sheath-flow nanospray interface to perform automated capillary zone electrophoresis-electrospray ionization-tandem mass spectrometry (CZE-ESI-MS/MS). 20ng aliquots of an Escherichia coli digest were injected to evaluate the system. Eight sequential injections over an 8-h period identified 1115±70 (relative standard deviation, RSD=6%) peptides and 270±8 (RSD=3%) proteins per run. The average RSDs of migration time, peak intensity, and peak area were 3%, 24% and 19%, respectively, for 340 peptides with high intensity. This is the first report of an automated CZE-ESI-MS/MS system using the electrokinetically pumped sheath-flow nanospray interface. The results demonstrate that this system is capable of reproducibly identifying over 1000 peptides from an E. coli tryptic digest in a 1-h analysis time. PMID:24439510

  11. Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.

  12. Device model for electronic processes at organic/organic interfaces

    NASA Astrophysics Data System (ADS)

    Liu, Feilong; Paul Ruden, P.; Campbell, Ian H.; Smith, Darryl L.

    2012-05-01

    Interfaces between different organic materials can play a key role in determining organic semiconductor device characteristics. Here, we present a physics-based one-dimensional model with the goal of exploring critical processes at organic/organic interfaces. Specifically, we envision a simple bilayer structure consisting of an electron transport layer (ETL), a hole transport layer (HTL), and the interface between them. The model calculations focus on the following aspects: (1) the microscopic physical processes at the interface, such as exciton formation/dissociation, exciplex formation/dissociation, and geminate/nongeminate recombination; (2) the treatment of the interface parameters and the discretization method; and (3) the application of this model to different devices, such as organic light emitting diodes and photovoltaic cells. At the interface, an electron on an ETL molecule can interact with a hole on an adjacent HTL molecule and form an intermolecular excited state (exciplex). If either the electron or the hole transfers across the interface, an exciton can be formed. The exciton may subsequently diffuse into the relevant layer and relax to the ground state. A strong effective electric field at the interface can cause excitons or exciplexes to dissociate into electrons in the ETL and holes in the HTL. Geminate recombination may occur when the Coulomb interaction between the electron and the hole generated at the interface by exciton dissociation causes the formation of a correlated state that then relaxes to the ground state. The relative impacts of the different processes on measurable macroscopic device characteristics are explored in our calculations by varying the corresponding kinetic coefficients. As it is the aim of this work to investigate effects associated with the organic/organic interface, its treatment in the numerical calculations is of critical importance. 
We model the interface as a continuous but rather sharp transition from the ETL to the HTL. The model is applied to different devices where different microscopic processes dominate. We discuss the results for an organic light emitting device with exciton or exciplex emission and for a photovoltaic device with or without geminate recombination. In the examples, C60 and tetracene parameters are used for the ETL and HTL materials, respectively.
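
The competition among these interfacial processes can be summarized in a rate-equation sketch. The form below is illustrative only; the symbols and coefficients are assumptions for exposition, not the authors' published equations:

```latex
\frac{dN_{\mathrm{ex}}}{dt} = k_{f}\, n\, p \;-\; \bigl(k_{d}(F) + k_{r}\bigr)\, N_{\mathrm{ex}}
```

where $N_{\mathrm{ex}}$ is the interfacial exciplex density, $n$ and $p$ are the electron and hole densities on adjacent ETL/HTL molecules, $k_f$ is a formation coefficient, $k_d(F)$ is a field-dependent dissociation rate, and $k_r$ is the rate of relaxation to the ground state. Varying such kinetic coefficients is exactly how the paper probes the relative impact of each process on device characteristics.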

  13. Radiation budget measurement/model interface research

    NASA Technical Reports Server (NTRS)

    Vonderhaar, T. H.

    1981-01-01

    The NIMBUS 6 data were analyzed to form an up-to-date climatology of the Earth radiation budget as a basis for numerical model definition studies. Global maps depicting infrared emitted flux, net flux and albedo from processed NIMBUS 6 data for July 1977 are presented. Zonal averages of net radiation flux for April, May, and June and zonal mean emitted flux and net flux for the December to January period are also presented. The development of two models is reported. The first is a statistical dynamical model with vertical and horizontal resolution. The second is a two-level global linear balance model. The results of time integration of the model up to 120 days, to simulate the January circulation, are discussed. Average zonal wind, meridional wind component, vertical velocity, and moisture budget are among the parameters addressed.

  14. Back to the Future: A Non-Automated Method of Constructing Transfer Models

    ERIC Educational Resources Information Center

    Feng, Mingyu; Beck, Joseph

    2009-01-01

    Representing domain knowledge is important for constructing educational software, and automated approaches have been proposed to construct and refine such models. In this paper, instead of applying automated and computationally intensive approaches, we simply start with existing hand-constructed transfer models at various levels of granularity and…

  15. Modeling nonspecific interactions at biological interfaces

    NASA Astrophysics Data System (ADS)

    White, Andrew D.

    Difficulties in applied biomaterials often arise from the complexities of interactions in biological environments. These interactions can be broadly broken into two categories: those which are important to function (strong binding to a single target) and those which are detrimental to function (weak binding to many targets). These will be referred to as specific and nonspecific interactions, respectively. Nonspecific interactions have been central to failures of biomaterials, sensors, and surface coatings in harsh biological environments. There is little modeling work on studying nonspecific interactions. Modeling all possible nonspecific interactions within a biological system is difficult, yet there are ways to both indirectly model nonspecific interactions and directly model many interactions using machine-learning. This research utilizes bioinformatics, phenomenological modeling, molecular simulations, experiments, and stochastic modeling to study nonspecific interactions. These techniques are used to study the hydration molecules which resist nonspecific interactions, the formation of salt bridges, the chemistry of protein surfaces, nonspecific stabilization of proteins in molecular chaperones, and analysis of high-throughput screening experiments. The common aspect for these systems is that nonspecific interactions are more important than specific interactions. Studying these disparate systems has created a set of principles for resisting nonspecific interactions which have been experimentally demonstrated with the creation and testing of novel materials which resist nonspecific interactions.

  16. A distributed data component for the open modeling interface

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As the volume of collected data continues to increase in the environmental sciences, so does the need for effective means for accessing those data. We have developed an Open Modeling Interface (OpenMI) data component that retrieves input data for model components from environmental information syste...

  17. Integration of finite element modeling with solid modeling through a dynamic interface

    NASA Technical Reports Server (NTRS)

    Shephard, Mark S.

    1987-01-01

    Finite element modeling is dominated by geometric modeling type operations. Therefore, an effective interface to geometric modeling requires access to both the model and the modeling functionality used to create it. The use of a dynamic interface that addresses these needs through the use of boundary data structures and geometric operators is discussed.

  18. Automated robust generation of compact 3D statistical shape models

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Tomazevic, Dejan; Pernus, Franjo

    2004-05-01

    Ascertaining the detailed shape and spatial arrangement of anatomical structures is important not only within diagnostic settings but also in the areas of planning, simulation, intraoperative navigation, and tracking of pathology. Robust, accurate and efficient automated segmentation of anatomical structures is difficult because of their complexity and inter-patient variability. Furthermore, the position of the patient during image acquisition, the imaging device and protocol, image resolution, and other factors induce additional variations in shape and appearance. Statistical shape models (SSMs) have proven quite successful in capturing structural variability. A possible approach to obtain a 3D SSM is to extract reference voxels by precisely segmenting the structure in a single reference image. The corresponding voxels in other images are determined by registering the reference image to each other image. The SSM obtained in this way describes statistically plausible shape variations over the given population as well as variations due to imperfect registration. In this paper, we present a completely automated method that significantly reduces shape variations induced by imperfect registration, thus allowing a more accurate description of variations. At each iteration, the derived SSM is used for coarse registration, which is further improved by describing finer variations of the structure. The method was tested on 64 lumbar spinal column CT scans, from which 23, 38, 45, 46 and 42 volumes of interest containing vertebra L1, L2, L3, L4 and L5, respectively, were extracted. Separate SSMs were generated for each vertebra. The results show that the method is capable of reducing the variations induced by registration errors.
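
At its core, an SSM of this kind is a principal-component decomposition over corresponding point coordinates. A minimal sketch under that assumption (the shapes below are synthetic; the paper's registration and vertebra data are not reproduced here):

```python
import numpy as np

def build_ssm(shapes):
    """Build a statistical shape model: mean shape plus principal modes.

    shapes: (n_samples, n_points * dim) array of corresponding coordinates.
    Returns the mean shape, the modes of variation, and their variances.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal modes of variation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s**2 / (len(shapes) - 1)
    return mean, vt, variances

def synthesize(mean, modes, b):
    """Generate a statistically plausible shape from mode weights b."""
    return mean + b @ modes[: len(b)]

rng = np.random.default_rng(0)
# Hypothetical training set: 20 shapes, 5 landmarks in 2-D (10 coordinates).
base = rng.normal(size=10)
shapes = base + rng.normal(scale=0.1, size=(20, 10))
mean, modes, var = build_ssm(shapes)
new_shape = synthesize(mean, modes, np.array([0.5 * np.sqrt(var[0])]))
```

Reducing registration error, as the paper does, directly shrinks the spurious tail of these variances so the leading modes describe anatomy rather than misalignment.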

  19. Solid phase extraction-liquid chromatography (SPE-LC) interface for automated peptide separation and identification by tandem mass spectrometry

    NASA Astrophysics Data System (ADS)

    Hørning, Ole Bjeld; Theodorsen, Søren; Vorm, Ole; Jensen, Ole Nørregaard

    2007-12-01

    Reversed-phase solid phase extraction (SPE) is a simple and widely used technique for desalting and concentration of peptide and protein samples prior to mass spectrometry analysis. Often, SPE sample preparation is done manually and the samples eluted, dried and reconstituted into 96-well titer plates for subsequent LC-MS/MS analysis. To reduce the number of sample handling stages and increase throughput, we developed a robotic system to interface off-line SPE to LC-ESI-MS/MS. Samples were manually loaded onto disposable SPE tips that were subsequently connected in-line with a capillary chromatography column. Peptides were recovered from the SPE column and separated on the RP-LC column using isocratic elution conditions and analysed by electrospray tandem mass spectrometry. Peptide mixtures eluted within approximately 5 min, with individual peptide peak resolution of ~7 s (FWHM), making the SPE-LC suited for analysis of samples of medium complexity (3-12 protein components). For optimum performance, the isocratic flow rate was reduced to 30 nL/min, producing nanoelectrospray-like conditions that ensure high ionisation efficiency and sensitivity. Using a modified autosampler for mounting and disposing of the SPE tips, the SPE-LC-MS/MS system could analyse six samples per hour, and up to 192 SPE tips in one batch. The relatively high sample throughput, medium separation power and high sensitivity make the automated SPE-LC-MS/MS setup attractive for proteomics experiments, as demonstrated by the identification of the components of simple protein mixtures and of proteins recovered from 2DE gels.

  20. An interface model for dosage adjustment connects hematotoxicity to pharmacokinetics.

    PubMed

    Meille, C; Iliadis, A; Barbolosi, D; Frances, N; Freyer, G

    2008-12-01

    When modeling is required to describe pharmacokinetics and pharmacodynamics simultaneously, it is difficult to link time-concentration profiles and drug effects. When patients are under chemotherapy, despite the huge number of blood cell counts collected during monitoring, there is a lack of exposure variables to describe hematotoxicity linked with the circulating drug blood levels. We developed an interface model that transforms circulating pharmacokinetic concentrations into adequate exposures, intended as inputs to the pharmacodynamic process. The model takes the form of a nonlinear differential equation involving three parameters. The relevance of the interface model for dosage adjustment is illustrated by numerous simulations. In particular, the interface model is incorporated into a complex system including pharmacokinetics and neutropenia induced by docetaxel and by cisplatin. Emphasis is placed on the sensitivity of neutropenia with respect to variations of the drug amount. This complex system including pharmacokinetic, interface, and pharmacodynamic hematotoxicity models is an interesting tool for analysis of hematotoxicity induced by anticancer agents. The model could be a new basis for further improvements aimed at incorporating new experimental features. PMID:19107581
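
The abstract does not reproduce the published equation; a purely hypothetical nonlinear form with three parameters ($\alpha$, $\beta$, $\gamma$) illustrates the idea of mapping a circulating concentration $c(t)$ to an effective exposure $E(t)$:

```latex
\frac{dE}{dt} = \alpha\, c(t) \;-\; \beta\, E(t) \;-\; \gamma\, E(t)\, c(t)
```

The exposure $E(t)$, rather than $c(t)$ itself, then drives the pharmacodynamic (hematotoxicity) model downstream.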

  1. NASA: Model development for human factors interfacing

    NASA Technical Reports Server (NTRS)

    Smith, L. L.

    1984-01-01

    The results of an intensive literature review in the general topics of human error analysis, stress and job performance, and accident and safety analysis revealed no usable techniques or approaches for analyzing human error in ground or space operations tasks. A task review model is described and proposed to be developed in order to reduce the degree of labor intensiveness in ground and space operations tasks. An extensive number of annotated references are provided.

  2. Molecular Modeling of Water Interfaces: From Molecular Spectroscopy to Thermodynamics.

    PubMed

    Nagata, Yuki; Ohto, Tatsuhiko; Backus, Ellen H G; Bonn, Mischa

    2016-04-28

    Understanding aqueous interfaces at the molecular level is not only fundamentally important, but also highly relevant for a variety of disciplines. For instance, electrode-water interfaces are relevant for electrochemistry, as are mineral-water interfaces for geochemistry and air-water interfaces for environmental chemistry; water-lipid interfaces constitute the boundaries of the cell membrane, and are thus relevant for biochemistry. One of the major challenges in these fields is to link macroscopic properties such as interfacial reactivity, solubility, and permeability as well as macroscopic thermodynamic and spectroscopic observables to the structure, structural changes, and dynamics of molecules at these interfaces. Simulations, by themselves, or in conjunction with appropriate experiments, can provide such molecular-level insights into aqueous interfaces. In this contribution, we review the current state-of-the-art of three levels of molecular dynamics (MD) simulation: ab initio, force field, and coarse-grained. We discuss the advantages, the potential, and the limitations of each approach for studying aqueous interfaces, by assessing computations of the sum-frequency generation spectra and surface tension. The comparison of experimental and simulation data provides information on the challenges of future MD simulations, such as improving the force field models and the van der Waals corrections in ab initio MD simulations. Once good agreement between experimental observables and simulation can be established, the simulation can be used to provide insights into the processes at a level of detail that is generally inaccessible to experiments. As an example we discuss the mechanism of the evaporation of water. We finish by presenting an outlook outlining four future challenges for molecular dynamics simulations of aqueous interfacial systems. PMID:27010817

  3. Critical interfaces and duality in the Ashkin-Teller model

    SciTech Connect

    Picco, Marco; Santachiara, Raoul

    2011-06-15

    We report numerical measurements of different spin interfaces and Fortuin-Kasteleyn (FK) cluster boundaries in the Ashkin-Teller (AT) model. For a general point on the AT critical line, we find that the fractal dimension of a generic spin cluster interface can take one of four different values. In particular, we find spin interfaces whose fractal dimension is d{sub f}=3/2 all along the critical line. Furthermore, the fractal dimension of the boundaries of FK clusters satisfies, all along the AT critical line, a duality relation with the fractal dimension of their outer boundaries. This result provides clear numerical evidence that such duality, which is well known in the case of the O(n) model, exists in an extended conformal field theory.
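
In SLE language, the duality referred to here can be stated compactly (a standard result, quoted for context rather than taken from this paper): an SLE$_\kappa$ curve has fractal dimension $d_f = 1 + \kappa/8$, and its outer boundary behaves as SLE$_{16/\kappa}$, so

```latex
d_f^{\mathrm{ext}} = 1 + \frac{2}{\kappa},
\qquad
\bigl(d_f - 1\bigr)\bigl(d_f^{\mathrm{ext}} - 1\bigr)
= \frac{\kappa}{8}\cdot\frac{2}{\kappa} = \frac{1}{4}.
```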

  4. Automated macromolecular model building for X-ray crystallography using ARP/wARP version 7

    PubMed Central

    Langer, Gerrit G; Cohen, Serge X; Lamzin, Victor S; Perrakis, Anastassis

    2008-01-01

    ARP/wARP is a software suite to build macromolecular models in X-ray crystallography electron density maps. Structural genomics initiatives and the study of complex macromolecular assemblies and membrane proteins all rely on advanced methods for 3D structure determination. ARP/wARP meets these needs by providing the tools to obtain a macromolecular model automatically, with a reproducible computational procedure. ARP/wARP 7.0 tackles several tasks: iterative protein model building including a high-level decision-making control module; fast construction of the secondary structure of a protein; building flexible loops in alternate conformations; fully automated placement of ligands, including a choice of the best fitting ligand from a “cocktail”; and finding ordered water molecules. All protocols are easy to handle by a non-expert user through a graphical user interface or a command line. The time required is typically a few minutes although iterative model building may take a few hours. PMID:18600222

  5. Attenuation of numerical artefacts in the modelling of fluid interfaces

    NASA Astrophysics Data System (ADS)

    Evrard, Fabien; van Wachem, Berend G. M.; Denner, Fabian

    2015-11-01

    Numerical artefacts in the modelling of fluid interfaces, such as parasitic currents or spurious capillary waves, present a considerable problem in two-phase flow modelling. Parasitic currents result from an imperfect evaluation of the interface curvature and can severely affect the flow, whereas spatially underresolved (spurious) capillary waves impose strict limits on the time-step and, hence, dictate the required computational resources for surface-tension-dominated flows. By applying an additional shear stress term at the fluid interface, thereby dissipating the surface energy associated with small wavelengths, we have been able to considerably reduce the adverse impact of parasitic currents and mitigate the time-step limit imposed by capillary waves. However, a careful choice of the applied interface viscosity is crucial, since an excess of additional dissipation compromises the accuracy of the solution. We present the derivation of the additional interfacial shear stress term, explain the underlying physical mechanism and discuss the impact on parasitic currents and interface instabilities based on a variety of numerical experiments. We acknowledge financial support from the Engineering and Physical Sciences Research Council (EPSRC) through Grant No. EP/M021556/1 and from PETROBRAS.
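
The capillary time-step limit mentioned above is commonly written in the form given by Brackbill, Kothe and Zemach for mesh spacing $\Delta x$, fluid densities $\rho_1$, $\rho_2$ and surface tension coefficient $\sigma$ (quoted for context; the paper's own criterion may differ):

```latex
\Delta t \;\le\; \sqrt{\frac{(\rho_1 + \rho_2)\,\Delta x^{3}}{4\pi\,\sigma}}
```

Dissipating the energy of the shortest, spatially under-resolved capillary waves is what allows the authors to relax this constraint.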

  6. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR WATERSHED ASSESSMENT AND ANALYSIS

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parame...

  9. Individual Differences in Response to Automation: The Five Factor Model of Personality

    ERIC Educational Resources Information Center

    Szalma, James L.; Taylor, Grant S.

    2011-01-01

    This study examined the relationship of operator personality (Five Factor Model) and characteristics of the task and of adaptive automation (reliability and adaptiveness--whether the automation was well-matched to changes in task demand) to operator performance, workload, stress, and coping. This represents the first investigation of how the Five

  11. Aviation Safety: Modeling and Analyzing Complex Interactions between Humans and Automated Systems

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Brat, Guillaume; Clancey, William J.; Linde, Charlotte; Raimondi, Franco; Seah, Chin; Shafto, Michael

    2013-01-01

    The on-going transformation from the current US Air Traffic System (ATS) to the Next Generation Air Traffic System (NextGen) will force the introduction of new automated systems and will most likely cause automation to migrate from ground to air. This will yield new function allocations between humans and automation and therefore change the roles and responsibilities in the ATS. Yet, safety in NextGen is required to be at least as good as in the current system. We therefore need techniques to evaluate the safety of the interactions between humans and automation. We think that current human-factors studies and simulation-based techniques will fall short in the face of ATS complexity, and that we need to add more automated techniques to simulations, such as model checking, which offers exhaustive coverage of the non-deterministic behaviors in nominal and off-nominal scenarios. In this work, we present a verification approach based both on simulations and on model checking for evaluating the roles and responsibilities of humans and automation. Models are created using Brahms (a multi-agent framework) and we show that the traditional Brahms simulations can be integrated with automated exploration techniques based on model checking, thus offering a complete exploration of the behavioral space of the scenario. Our formal analysis supports the notion of beliefs and probabilities to reason about human behavior. We demonstrate the technique with the Überlingen accident, since it exemplifies the authority problems that arise when conflicting advice is received from human and automated systems.

  12. Automated MRI Cerebellar Size Measurements Using Active Appearance Modeling

    PubMed Central

    Price, Mathew; Cardenas, Valerie A.; Fein, George

    2014-01-01

    Although the human cerebellum has been increasingly identified as an important hub that shows potential for helping in the diagnosis of a large spectrum of disorders, such as alcoholism, autism, and fetal alcohol spectrum disorder, the high costs associated with manual segmentation, and low availability of reliable automated cerebellar segmentation tools, has resulted in a limited focus on cerebellar measurement in human neuroimaging studies. We present here the CATK (Cerebellar Analysis Toolkit), which is based on the Bayesian framework implemented in FMRIB’s FIRST. This approach involves training Active Appearance Models (AAM) using hand-delineated examples. CATK can currently delineate the cerebellar hemispheres and three vermal groups (lobules I–V, VI–VII, and VIII–X). Linear registration with the low-resolution MNI152 template is used to provide initial alignment, and Point Distribution Models (PDM) are parameterized using stellar sampling. The Bayesian approach models the relationship between shape and texture through computation of conditionals in the training set. Our method varies from the FIRST framework in that initial fitting is driven by 1D intensity profile matching, and the conditional likelihood function is subsequently used to refine fitting. The method was developed using T1-weighted images from 63 subjects that were imaged and manually labeled: 43 subjects were scanned once and were used for training models, and 20 subjects were imaged twice (with manual labeling applied to both runs) and used to assess reliability and validity. Intraclass correlation analysis shows that CATK is highly reliable (average test-retest ICCs of 0.96), and offers excellent agreement with the gold standard (average validity ICC of 0.87 against manual labels). 
Comparisons against an alternative atlas-based approach, SUIT (Spatially Unbiased Infratentorial Template), which registers images with a high-resolution template of the cerebellum, show that our AAM approach offers superior reliability and validity. Extension of CATK to cerebellar hemisphere parcels is envisioned. PMID:25192657
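
Test-retest reliability above is reported as an intraclass correlation coefficient. A minimal one-way ICC(1,1) sketch for two measurements per subject (not necessarily the ICC variant the authors used; the volumes below are hypothetical):

```python
def icc_1_1(run1, run2):
    """One-way random-effects ICC(1,1) for two measurements per subject."""
    n, k = len(run1), 2
    pairs = list(zip(run1, run2))
    grand = sum(run1 + run2) / (n * k)
    subj_means = [(a + b) / k for a, b in pairs]
    # Between-subject and within-subject mean squares from one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2 for (a, b), m in zip(pairs, subj_means)
              for x in (a, b)) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical cerebellar volumes (cm^3) from two scan sessions.
scan1 = [120.0, 135.5, 128.2, 142.1, 118.7]
scan2 = [121.3, 134.8, 129.0, 141.5, 119.2]
print(f"ICC = {icc_1_1(scan1, scan2):.3f}")
```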

  13. Designers' models of the human-computer interface

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Breedin, Sarah D.

    1993-01-01

    Understanding design models of the human-computer interface (HCI) may produce two types of benefits. First, interface development often requires input from two different types of experts: human factors specialists and software developers. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet, they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers do not start with a blank slate; rather they begin with a general model of the object they are designing. The authors' approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: human factors specialists with HCI design experience, software developers with HCI design experience, and a baseline group of computer users with no experience in HCI design. The components of the user interface included both display components, such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help. The judgments of the three groups were analyzed using hierarchical cluster analysis and Pathfinder. These methods indicated, respectively, how the groups categorized the concepts, and network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.
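
Hierarchical clustering over similarity judgments amounts to repeatedly merging the most-similar pair of clusters. A minimal average-linkage sketch (the item names and similarity ratings are hypothetical stand-ins for the study's judgment data):

```python
def cluster(labels, sim):
    """Agglomerative average-linkage clustering on a similarity matrix.

    labels: item names; sim: dict mapping frozenset({a, b}) -> similarity.
    Returns the merge history, most-similar merges first.
    """
    clusters = [frozenset([l]) for l in labels]
    history = []
    while len(clusters) > 1:
        # Average pairwise similarity between the members of two clusters.
        def link(c1, c2):
            vals = [sim[frozenset({a, b})] for a in c1 for b in c2]
            return sum(vals) / len(vals)
        c1, c2 = max(((a, b) for i, a in enumerate(clusters)
                      for b in clusters[i + 1:]), key=lambda p: link(*p))
        clusters = [c for c in clusters if c not in (c1, c2)] + [c1 | c2]
        history.append((set(c1), set(c2)))
    return history

# Hypothetical pairwise similarity ratings of interface concepts (0-1).
items = ["window", "text", "command", "help"]
sim = {frozenset({"window", "text"}): 0.9,
       frozenset({"window", "command"}): 0.2,
       frozenset({"window", "help"}): 0.3,
       frozenset({"text", "command"}): 0.25,
       frozenset({"text", "help"}): 0.35,
       frozenset({"command", "help"}): 0.8}
merges = cluster(items, sim)
```

Here the display concepts merge first and the interaction concepts second, giving exactly the kind of categorical grouping the cluster analysis reveals.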

  14. Atomic Models of Strong Solids Interfaces Viewed as Composite Structures

    NASA Astrophysics Data System (ADS)

    Staffell, I.; Shang, J. L.; Kendall, K.

    2014-02-01

    This paper looks back through the 1960s to the invention of carbon fibres and the theories of Strong Solids. In particular it focuses on the fracture mechanics paradox of strong composites containing weak interfaces. From Griffith theory, it is clear that three parameters must be considered in producing a high-strength composite: minimising defects; maximising the elastic modulus; and raising the fracture energy along the crack path. The interface then introduces two further factors: elastic modulus mismatch causing crack stopping, and debonding along a brittle interface due to low interface fracture energy. Consequently, an understanding of the fracture energy of a composite interface is needed. Using an interface model based on atomic interaction forces, it is shown that a single layer of contaminant atoms between the matrix and the reinforcement can reduce the interface fracture energy by an order of magnitude, giving a large delamination effect. The paper also looks to a future in which cars will be made largely from composite materials. Radical improvements in automobile design are necessary because the number of cars worldwide is predicted to double. This paper predicts gains in fuel economy by suggesting a new theory of automobile fuel consumption using an adaptation of Coulomb's friction law. It is demonstrated both by experiment and by theoretical argument that the energy dissipated in standard vehicle tests depends only on weight. Consequently, moving from metal to fibre construction can give a factor-of-2 improvement in fuel economy, roughly the same as moving from a petrol combustion drive to hydrogen fuel-cell propulsion. Using both options together can give a factor-of-4 improvement, as demonstrated by testing a composite car using the ECE15 protocol.
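
The three Griffith parameters named above enter through the classical criterion for the fracture stress of a body containing a flaw of size $a$ (plane-stress form):

```latex
\sigma_f = \sqrt{\frac{2\,E\,\gamma}{\pi a}}
```

so strength rises with the elastic modulus $E$ and the fracture energy $\gamma$ along the crack path, and falls with the defect size $a$.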

  15. A general graphical user interface for automatic reliability modeling

    NASA Technical Reports Server (NTRS)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have text fields, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.
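
The Markov reliability models generated from such PMS descriptions reduce, in the simplest possible case, to a two-state (up/down) chain with failure and repair rates. A minimal sketch far simpler than the generated models, with hypothetical rates:

```python
def steady_state_availability(failure_rate, repair_rate):
    """Steady-state availability of a two-state (up/down) Markov model.

    Balance equation: failure_rate * P_up = repair_rate * P_down,
    with P_up + P_down = 1, giving P_up = mu / (lam + mu).
    """
    return repair_rate / (failure_rate + repair_rate)

# Hypothetical rates: one failure per 1000 h; repairs take 10 h on average.
lam, mu = 1 / 1000, 1 / 10
print(f"Availability = {steady_state_availability(lam, mu):.4f}")
```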

  16. Automation of sample plan creation for process model calibration

    NASA Astrophysics Data System (ADS)

    Oberschmidt, James; Abdo, Amr; Desouky, Tamer; Al-Imam, Mohamed; Krasnoperova, Azalia; Viswanathan, Ramya

    2010-04-01

    The process of preparing a sample plan for optical and resist model calibration has always been tedious, not only because the plan must accurately represent full-chip designs with countless combinations of widths, spaces and environments, but also because of the constraints imposed by metrology, which may limit the number of structures that can be measured. There are also limits on the types of these structures, mainly due to the variation of measurement accuracy across different geometries; for instance, pitch measurements are normally more accurate than corner-rounding measurements, so only certain geometrical shapes are typically considered for a sample plan. In addition, the time factor is becoming crucial as we migrate from one technology node to another, owing to the increase in the number of development and production nodes, and the process is even more complicated if process-window-aware models are to be developed in a reasonable time frame. There is therefore a need for reliable methods of choosing sample plans that also help reduce cycle time. In this context, an automated flow is proposed for sample plan creation. Once the illumination and film stack are defined, all errors in the input data are fixed and sites are centered. Then, bad sites are excluded. Afterwards, the clean data are reduced based on geometrical resemblance. An editable database of measurement-reliable and critical structures is also provided, and their percentage in the final sample plan, as well as the total number of 1D/2D samples, can be predefined. The flow has the advantage of eliminating manual selection or filtering techniques, it provides powerful tools for customizing the final plan, and the time needed to generate these plans is greatly reduced.
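
The reduction step ("the clean data are reduced based on geometrical resemblance") can be sketched as deduplication on snapped gauge parameters. The criterion below is a hypothetical stand-in for the paper's actual resemblance metric, with made-up 1D gauge dimensions:

```python
def reduce_sample_plan(sites, tol=2.0):
    """Drop geometrically resembling sites: keep one site per
    (width, space) cell after snapping to a tolerance grid (nm)."""
    seen, kept = set(), []
    for site in sites:
        key = (site["width"] // tol, site["space"] // tol)
        if key not in seen:
            seen.add(key)
            kept.append(site)
    return kept

# Hypothetical 1D gauge sites (dimensions in nm).
sites = [{"width": 45.0, "space": 45.0},
         {"width": 45.5, "space": 44.8},   # resembles the first site
         {"width": 45.2, "space": 90.0},
         {"width": 90.0, "space": 90.0}]
plan = reduce_sample_plan(sites)
```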

  17. A polarizable continuum model for molecules at spherical diffuse interfaces

    NASA Astrophysics Data System (ADS)

    Di Remigio, Roberto; Mozgawa, Krzysztof; Cao, Hui; Weijo, Ville; Frediani, Luca

    2016-03-01

We present an extension of the Polarizable Continuum Model (PCM) to simulate solvent effects at diffuse interfaces with spherical symmetry, such as nanodroplets and micelles. We derive the form of the Green's function for a spatially varying dielectric permittivity with spherical symmetry and exploit the integral equation formalism of the PCM for general dielectric environments to recast the solvation problem into a continuum solvation framework. This allows the investigation of the solvation of ions and molecules in nonuniform dielectric environments, such as liquid droplets, micelles or membranes, while maintaining the computationally appealing characteristics of continuum solvation models. We describe our implementation in detail, both for the calculation of the Green's function and for its subsequent use in the PCM electrostatic problem. The model is then applied to a few test systems, mainly to analyze the effect of interface curvature on solvation energetics.

  18. A Hybrid Tool for User Interface Modeling and Prototyping

    NASA Astrophysics Data System (ADS)

    Trætteberg, Hallvard

Although many methods have been proposed, model-based development methods have been adopted for UI design only to a limited extent. In particular, they are not easy to combine with user-centered design methods. In this paper, we present a hybrid UI modeling and GUI prototyping tool, which is designed to fit better with IS development and UI design traditions. The tool includes a diagram editor for domain and UI models and an execution engine that integrates UI behavior, live UI components and sample data. Thus, both model-based user interface design and prototyping-based iterative design are supported.

  19. Computer modelling of nanoscale diffusion phenomena at epitaxial interfaces

    NASA Astrophysics Data System (ADS)

    Michailov, M.; Ranguelov, B.

    2014-05-01

    The present study outlines an important area in the application of computer modelling to interface phenomena. Being relevant to the fundamental physical problem of competing atomic interactions in systems with reduced dimensionality, these phenomena attract special academic attention. On the other hand, from a technological point of view, detailed knowledge of the fine atomic structure of surfaces and interfaces correlates with a large number of practical problems in materials science. Typical examples are formation of nanoscale surface patterns, two-dimensional superlattices, atomic intermixing at an epitaxial interface, atomic transport phenomena, structure and stability of quantum wires on surfaces. We discuss here a variety of diffusion mechanisms that control surface-confined atomic exchange, formation of alloyed atomic stripes and islands, relaxation of pure and alloyed atomic terraces, diffusion of clusters and their stability in an external field. The computational model refines important details of diffusion of adatoms and clusters accounting for the energy barriers at specific atomic sites: smooth domains, terraces, steps and kinks. The diffusion kinetics, integrity and decomposition of atomic islands in an external field are considered in detail and assigned to specific energy regions depending on the cluster stability in mass transport processes. The presented ensemble of diffusion scenarios opens a way for nanoscale surface design towards regular atomic interface patterns with exotic physical features.

  20. Numerical modeling of capillary electrophoresis - electrospray mass spectrometry interface design.

    PubMed

    Jarvas, Gabor; Guttman, Andras; Foret, Frantisek

    2015-01-01

Capillary electrophoresis hyphenated with electrospray mass spectrometry (CE-ESI-MS) has emerged in the past decade as one of the most powerful bioanalytical techniques. As the sensitivity and efficiency of new CE-ESI-MS interface designs are continuously improving, numerical modeling can play an important role during their development. In this review, different aspects of computer modeling and simulation of CE-ESI-MS interfaces are comprehensively discussed. Relevant essentials of hydrodynamics as well as state-of-the-art modeling techniques are critically evaluated. Sheath-liquid, sheathless, and liquid-junction interfaces are reviewed from the viewpoint of multidisciplinary numerical modeling, along with details of single- and multiphase models together with electric-field-mediated flows, electrohydrodynamics, and free fluid-surface methods. Practical examples are given to help non-specialists understand the basic principles and applications. Finally, alternative approaches like air amplifiers are also included. © 2014 Wiley Periodicals, Inc. Mass Spec Rev 34: 558-569, 2015. PMID:24676884

  1. Multiscale modeling of droplet interface bilayer membrane networks.

    PubMed

    Freeman, Eric C; Farimani, Amir B; Aluru, Narayana R; Philen, Michael K

    2015-11-01

    Droplet interface bilayer (DIB) networks are considered for the development of stimuli-responsive membrane-based materials inspired by cellular mechanics. These DIB networks are often modeled as combinations of electrical circuit analogues, creating complex networks of capacitors and resistors that mimic the biomolecular structures. These empirical models are capable of replicating data from electrophysiology experiments, but these models do not accurately capture the underlying physical phenomena and consequently do not allow for simulations of material functionalities beyond the voltage-clamp or current-clamp conditions. The work presented here provides a more robust description of DIB network behavior through the development of a hierarchical multiscale model, recognizing that the macroscopic network properties are functions of their underlying molecular structure. The result of this research is a modeling methodology based on controlled exchanges across the interfaces of neighboring droplets. This methodology is validated against experimental data, and an extension case is provided to demonstrate possible future applications of droplet interface bilayer networks. PMID:26594262

  2. Developing A Laser Shockwave Model For Characterizing Diffusion Bonded Interfaces

    SciTech Connect

    James A. Smith; Jeffrey M. Lacy; Barry H. Rabin

    2014-07-01

Presented in the Laser-ultrasonics session at the 41st Annual Review of Progress in Quantitative Nondestructive Evaluation (QNDE) Conference, July 20-25, 2014, Boise Centre, Boise, Idaho. James A. Smith, Jeffrey M. Lacy, Barry H. Rabin, Idaho National Laboratory, Idaho Falls, ID. ABSTRACT: The US National Nuclear Security Agency has a Global Threat Reduction Initiative (GTRI) charged with reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low-enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU: the new LEU fuel is based on a monolithic U-Mo alloy foil encapsulated in Al-6061 cladding. To complete the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large-amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, the deposition of laser energy into the containment layer on the specimen's surface is intractably complex. The shock-wave energy is inferred from the velocity on the back face and the depth of the impression left on the surface by the high-pressure plasma pulse created by the shock laser. To help quantify the stresses and strengths at the interface, a finite element model is being developed and validated by comparing numerical and experimental results for back-face velocities and front-face depressions. This paper will report on initial efforts to develop a finite element model for laser shock.

  3. A visual interface for the SUPERFLEX hydrological modelling framework

    NASA Astrophysics Data System (ADS)

    Gao, H.; Fenicia, F.; Kavetski, D.; Savenije, H. H. G.

    2012-04-01

The SUPERFLEX framework is a modular modelling system for conceptual hydrological modelling at the catchment scale. This work reports the development of a visual interface for the SUPERFLEX model, which aims to enhance communication between hydrologic experimentalists and modelers, and in particular to further bridge the gap between field soft data and the modeler's knowledge. In collaboration with field experimentalists, modelers can visually and intuitively hypothesize different model architectures and combinations of reservoirs, select from a library of constitutive functions describing the relationship between reservoir storage and discharge, specify the shape of lag functions and, finally, set parameter values. The software helps hydrologists take advantage of any existing insight into the study site, translate it into a conceptual hydrological model and implement it within a computationally robust algorithm. The tool also helps challenge and contrast competing paradigms such as "uniqueness of place" vs "one model fits all". Using this interface, hydrologists can test different hypotheses and model representations, and progressively build a deeper understanding of the watershed of interest.

  4. Symmetric model of compressible granular mixtures with permeable interfaces

    NASA Astrophysics Data System (ADS)

    Saurel, Richard; Le Martelot, Sébastien; Tosello, Robert; Lapébie, Emmanuel

    2014-12-01

Compressible granular materials are involved in many applications, some related to energetic porous media. Gas permeation effects are important during their compaction stage, as is their eventual chemical decomposition. Many situations also involve porous media separated from pure fluids by two-phase interfaces. It is thus important to develop theoretical and numerical formulations that deal with granular materials in the presence of both two-phase interfaces and gas permeation effects. A similar topic was addressed for fluid mixtures and interfaces with the Discrete Equations Method (DEM) [R. Abgrall and R. Saurel, "Discrete equations for physical and numerical compressible multiphase mixtures," J. Comput. Phys. 186(2), 361-396 (2003)], but it seemed impossible to extend this approach to granular media, as intergranular stress [K. K. Kuo, V. Yang, and B. B. Moore, "Intragranular stress, particle-wall friction and speed of sound in granular propellant beds," J. Ballist. 4(1), 697-730 (1980)] and the associated configuration energy [J. B. Bdzil, R. Menikoff, S. F. Son, A. K. Kapila, and D. S. Stewart, "Two-phase modeling of deflagration-to-detonation transition in granular materials: A critical examination of modeling issues," Phys. Fluids 11, 378 (1999)] were present with significant effects. An approach to dealing with fluid-porous media interfaces was derived in Saurel et al. ["Modelling dynamic and irreversible powder compaction," J. Fluid Mech. 664, 348-396 (2010)], but its validity was restricted to weak velocity disequilibrium. Thanks to a deeper analysis, the DEM is successfully extended to granular media modelling in the present paper. The result is an enhanced version of the Baer and Nunziato ["A two-phase mixture theory for the deflagration-to-detonation transition (DDT) in reactive granular materials," Int. J. Multiphase Flow 12(6), 861-889 (1986)] model, as the symmetry of the formulation is now preserved. Several computational examples are shown to validate and illustrate the method's capabilities.

  5. Modeling the Photoionized Interface in Blister H II Regions

    NASA Astrophysics Data System (ADS)

    Sankrit, Ravi; Hester, J. Jeff

    2000-06-01

We present a grid of photoionization models for the emission from photoevaporative interfaces between the ionized gas and the molecular cloud in blister H II regions. For the density profiles of the emitting gas in the models, we use a general power-law form calculated for photoionized, photoevaporative flows by Bertoldi. We find that the spatial emission-line profiles depend on the incident flux, the shape of the ionizing continuum, and the elemental abundances. In particular, we find that the peak emissivities of the [S II] and [N II] lines are more sensitive to the elemental abundances than are the total line intensities. The diagnostics obtained from the grid of models can be used in conjunction with high spatial resolution data to infer the properties of ionized interfaces in blister H II regions. As an example, we consider a location at the tip of an ``elephant trunk'' structure in M16 (the Eagle Nebula) and show how narrowband Hubble Space Telescope Wide Field Planetary Camera 2 (HST WFPC2) images constrain the H II region properties. We present a photoionization model that explains the ionization structure and emission from the interface seen in these high spatial resolution data.

  6. Generalized model for solid-on-solid interface growth

    NASA Astrophysics Data System (ADS)

    Richele, M. F.; Atman, A. P. F.

    2015-05-01

We present a probabilistic cellular automaton (PCA) model to study solid-on-solid interface growth in which the transition rules depend on the local morphology of the profile obtained from the interface representation of the PCA. We show that the model is able to reproduce a wide range of patterns whose critical roughening exponents are associated with different universality classes, including random deposition, Edwards-Wilkinson, and Kardar-Parisi-Zhang. By means of the growth exponent method, we consider a particular set of the model parameters to build the two-dimensional phase diagram corresponding to a planar cut of the higher-dimensional parameter space. A strong indication of phase transitions between different universality classes can be observed, evincing different regimes of deposition, from layer-by-layer to Volmer-Weber and Stranski-Krastanov-like modes. We expect this model to be useful for predicting the morphological properties of interfaces obtained in various surface deposition problems, since it allows several experimental situations to be simulated by setting the values of the specific transition probabilities in a simple and direct way.
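The growth exponent method mentioned above is easy to demonstrate on the simplest universality class in the list, random deposition, where the interface width grows as W ~ t^beta with beta = 1/2. The sketch below is a generic illustration of estimating beta from a simulation, not the authors' PCA model:

```python
import math
import random

def random_deposition(n_sites=200, n_steps=20000, seed=1):
    """Random deposition: each particle lands on a uniformly chosen column.
    Returns (t, W) pairs, with t in deposited monolayers and W the interface
    width (standard deviation of the column heights)."""
    rng = random.Random(seed)
    h = [0] * n_sites
    samples = []
    for t in range(1, n_steps + 1):
        h[rng.randrange(n_sites)] += 1
        if t % 1000 == 0:
            mean = sum(h) / n_sites
            w = math.sqrt(sum((x - mean) ** 2 for x in h) / n_sites)
            samples.append((t / n_sites, w))
    return samples

# Growth exponent from the slope of log W vs log t; expect beta ~ 0.5
# for random deposition (no lateral correlations, so no saturation).
data = random_deposition()
(t1, w1), (t2, w2) = data[0], data[-1]
beta = math.log(w2 / w1) / math.log(t2 / t1)
```

For Edwards-Wilkinson or KPZ rules the same measurement would give beta of roughly 1/4 or 1/3 before the width saturates at the system-size-dependent crossover.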

  7. Stability of finite difference models containing two boundaries or interfaces

    NASA Technical Reports Server (NTRS)

    Trefethen, L. N.

    1984-01-01

The stability of finite difference models of hyperbolic initial boundary value problems is connected with the propagation and reflection of parasitic waves. Wave propagation ideas are applied to models containing two boundaries or interfaces, where repeated reflection of trapped wave packets is a potential new source of instability. Various known instability phenomena are accounted for in a unified way. Results show: (1) dissipativity does not ensure stability when three or more formulas are concatenated at a boundary or internal interface; (2) algebraic GKS instabilities can be converted by a second boundary to exponential instabilities only when an infinite numerical reflection coefficient is present; and (3) GKS-stability and P-stability can be established in certain problems by showing that all numerical reflection coefficients have modulus less than 1.

  8. Diffuse interface modeling of a radial vapor bubble collapse

    NASA Astrophysics Data System (ADS)

    Magaletti, Francesco; Marino, Luca; Massimo Casciola, Carlo

    2015-12-01

A diffuse interface model is exploited to study in detail the dynamics of a cavitation vapor bubble, including phase change, transition to supercritical conditions, shock wave propagation and thermal conduction. The numerical experiments show that the actual dynamics are a sequence of collapses and rebounds, demonstrating the importance of nonequilibrium phase changes. In particular, the transition to supercritical conditions avoids full condensation and leads to shock-wave emission after the collapse and to successive bubble rebounds.

  9. Automated MRI segmentation for individualized modeling of current flow in the human head

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-12-01

Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process of building such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Accurate placement of many high-density electrodes on an individual scalp is also a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in this modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria included segmentation accuracy, the difference in current flow distributions in the resulting HD-tDCS models, and the optimized current flow intensities on cortical targets. Main results. The segmentation tool segments not just the brain but also provides accurate results for CSF, skull and other soft tissues, with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29%, respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  10. Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head

    PubMed Central

    Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.

    2013-01-01

Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process of building such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Accurate placement of many high-density electrodes on an individual scalp is also a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in this modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria included segmentation accuracy, the difference in current flow distributions in the resulting HD-tDCS models, and the optimized current flow intensities on cortical targets. Main results The segmentation tool segments not just the brain but also provides accurate results for CSF, skull and other soft tissues, with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29%, respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977

  11. A Multiple Agent Model of Human Performance in Automated Air Traffic Control and Flight Management Operations

    NASA Technical Reports Server (NTRS)

    Corker, Kevin; Pisanich, Gregory; Condon, Gregory W. (Technical Monitor)

    1995-01-01

A predictive model of human operator performance (flight crew and air traffic control (ATC)) has been developed and applied to evaluate the impact of automation developments in flight management and air traffic control. The model is used to predict the performance of a two-person flight crew and the ATC operators generating and responding to clearances, aided by the Center TRACON Automation System (CTAS). The purpose of the modeling is to support the evaluation and design of automated aids for flight management and airspace management, and to predict the required changes in procedures, both air and ground, in response to advancing automation in both domains. Additional information is contained in the original extended abstract.

  12. Industrial Automation Mechanic Model Curriculum Project. Final Report.

    ERIC Educational Resources Information Center

    Toledo Public Schools, OH.

    This document describes a demonstration program that developed secondary level competency-based instructional materials for industrial automation mechanics. Program activities included task list compilation, instructional materials research, learning activity packet (LAP) development, construction of lab elements, system implementation,…

  13. Physical Parameters of Substorms from Automated Forward Modeling (AFM)

    NASA Astrophysics Data System (ADS)

    Connors, M.; McPherron, R. L.; Ponto, J.

    2006-12-01

Automated Forward Modeling (AFM) inverts magnetic data to give physical parameters for electric current flow in near-Earth space. On a meridian, it gives the total electric current crossing it and the latitudinal boundaries of that current. AFM uses nonlinear optimization of the parameters of a forward model. Characteristic substorm behaviors are seen: the current strengthens rapidly at onset, accompanied by electrojet boundary motion. The current rises for approximately 30 minutes, but the poleward border expands slightly faster. Recovery is accompanied by a decrease in current, but not by poleward retreat of the auroral oval, on timescales of up to two hours. Average characteristics of the current closely follow those of the AL index, with large variation in individual events. Boundary motion is similar to that deduced for the electron aurora from satellite studies. AFM allows both the current strength and the borders to be determined from ground magnetic data alone, which are generally available on a continuous basis. In this study, 63 separate onsets in 1997 were characterized using AFM on the CANOPUS Churchill meridian. The provisional AL index was also obtained for the same events. The parametrization of Weimer (1993), JGR 99, 11005, which is I(MA) = c0 + c1*t*e^{pt}, was found to be extremely accurate for both AL and meridian current, with c0 = 0.151 MA, c1 = 1.63 MA/h, and p = -1.98/h. This permits a current/AL relation of I(MA) = -0.0322 - 0.00165*AL, where we stress that I and AL are averages. Further, on average the equatorward border of the electrojet does not change much at onset, while the poleward border's latitude in central dipole coordinates is well represented by 67.5 + 4.21*(1.0 - e^{-5.47*t}), with t the post-onset time in hours. These results agree very well with those of Frey et al. (2004), JGR 109, doi:10.1029/2004JA010607, for electron auroras observed using Image WIC near the onset meridian. AFM permits quantification of electrojet parameters, facilitating their interpretation and comparison to other quantities measured during substorms.
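The empirical fits quoted in this abstract are straightforward to evaluate; the sketch below simply codes the stated coefficients (function and variable names are mine):

```python
import math

def average_current_MA(t_hours, c0=0.151, c1=1.63, p=-1.98):
    """Average post-onset electrojet current, I(MA) = c0 + c1*t*e^{p*t}
    (Weimer 1993 parametrization with the coefficients fitted here)."""
    return c0 + c1 * t_hours * math.exp(p * t_hours)

def current_from_AL(al_nT):
    """Average current inferred from the AL index: I(MA) = -0.0322 - 0.00165*AL."""
    return -0.0322 - 0.00165 * al_nT

def poleward_border_deg(t_hours):
    """Average poleward electrojet border (central dipole latitude), post-onset."""
    return 67.5 + 4.21 * (1.0 - math.exp(-5.47 * t_hours))

# Setting dI/dt = 0 shows the fitted current peaks at t = -1/p ~ 0.51 h
# after onset, consistent with the ~30-minute rise quoted in the text.
```

Note that, as the abstract stresses, these relations describe event-averaged behavior, not individual substorms.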

  14. Computational modeling on molybdenum sulfide grain boundary interfaces

    NASA Astrophysics Data System (ADS)

    Ramos, Manuel

    2008-10-01

Since the discovery of the CoMoS phase in catalysis, many investigations have been conducted to understand its intrinsic physico-chemical properties. The CoMoS phase is important because it possesses strong catalytic properties. Several models exist to explain the mechanism by which this phase is created on a catalyst: the ``cherry model'' assumes that Co atoms start nucleating on MoS2 at the edges, while the ``decoration'' model states that Co atoms are added onto the MoS2 edges in a decorating fashion. However, even though both models are based on density functional theory (DFT) calculations, they fail to explain how two different interfaces meet (MoS2/Co9S8 or MoS2/Ni3Co6) in a bulk catalyst. This lack of information could be attributed to the fact that the models were built to explain specific cases, such as small particles (hexagonal truncated, nano-octahedral and triangular prism) containing at most around 10^5 atoms. This work presents computational calculations using the Cerius2 molecular software to model the CoMoS and NiMoS interfaces with DFT-based methods. The information obtained will be compared with results from HRTEM.

  15. Interface transfer coefficient in second-phase-growth models

    NASA Astrophysics Data System (ADS)

    Maugis, P.; Martin, G.

    1994-05-01

In order to derive an atomistic expression for the transfer coefficient across an interface, we extend the Gibbs dividing-surface scheme to kinetic problems. In equilibrium thermodynamics, this scheme consists in replacing the continuous concentration profile between two coherent phases by a stepped profile with a discontinuity at the dividing surface: the Gibbsian excess free energy (interfacial energy) is the difference between the free energies associated with the true continuous profile and with the artificial stepped one. Close to equilibrium, the diffusion flux along the actual continuous concentration profile is equal to minus the gradient of the chemical potential multiplied by a mobility: the latter is a continuous function of the local equilibrium concentration, which can be evaluated in a mean-field approximation. Gibbs' dividing-surface scheme introduces a transfer coefficient across the (artificial) dividing interface. Equating the exact expression of the flux along the actual concentration profile to that predicted in Gibbs' scheme for the same difference in chemical potential across the system yields the expression for the transfer coefficient. In the simplest mean-field description of chemical diffusion, the transfer coefficient is found to be negative. The reason for that is that the mobility increases as the concentration goes to 1/2, at least in the simplest case. Assuming the concentration to be uniform up to the interface underestimates the flux and must be compensated by a negative ``contact resistance'' between the two phases. Neglecting the transfer coefficient results in underestimating the flux: the error can be large for small samples, in particular in the case of the nucleation and growth of a phase with low diffusivity inside a high-diffusivity matrix. The range of validity of the model is shown to coincide with that of linear diffusion theory. In this range, the transfer coefficient at the interface does not depend on the velocity of the interface.

  16. Bacterial Adhesion to Hexadecane (Model NAPL)-Water Interfaces

    NASA Astrophysics Data System (ADS)

    Ghoshal, S.; Zoueki, C. R.; Tufenkji, N.

    2009-05-01

The rates of biodegradation of NAPLs have been shown to be influenced by the adhesion of hydrocarbon-degrading microorganisms as well as their proximity to the NAPL-water interface. Several studies provide evidence for bacterial adhesion or biofilm formation at alkane- or crude oil-water interfaces, but there is a significant knowledge gap in our understanding of the processes that influence the initial adhesion of bacteria onto NAPL-water interfaces. In this study, bacterial adhesion to hexadecane, and to a series of NAPLs comprised of hexadecane amended with toluene and/or with asphaltenes and resins (the surface-active fractions of crude oils), was examined using a Microbial Adhesion to Hydrocarbons (MATH) assay. The microorganisms employed were Mycobacterium kubicae, Pseudomonas aeruginosa and Pseudomonas putida, which are hydrocarbon degraders or soil microorganisms. MATH assays as well as electrophoretic mobility measurements of the bacterial cells and the NAPL droplet surfaces in aqueous solutions were conducted at three solution pHs (4, 6 and 7). Asphaltenes and resins were shown to generally decrease microbial adhesion. Results of the MATH assay were not in qualitative agreement with theoretical predictions of bacteria-hydrocarbon interactions based on the extended Derjaguin-Landau-Verwey-Overbeek (XDLVO) model of the free energy of interaction between the cell and NAPL droplets. In this model the free energy of interaction between two colloidal particles is predicted from electrical double layer, van der Waals and hydrophobic forces. It is likely that steric repulsion between bacteria and NAPL surfaces, caused by biopolymers on the bacterial surfaces and by asphaltenes and resins at the NAPL-water interface, contributed to the decreased adhesion relative to that predicted by the XDLVO model.
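For readers unfamiliar with XDLVO, the total interaction energy is the sum of a van der Waals term, an electrical double layer term and a hydrophobic (acid-base) term. Below is a minimal sphere-plate sketch using standard textbook expressions (unretarded Hamaker van der Waals, Hogg-Healy-Fuerstenau double layer, exponentially decaying acid-base term); all parameter values are illustrative assumptions, not taken from this study:

```python
import math

EPS0 = 8.854e-12       # vacuum permittivity, F/m
EPS_W = 78.5 * EPS0    # permittivity of water

def u_vdw(h, a, hamaker=6.6e-21):
    """Unretarded van der Waals energy, sphere (radius a) vs plate: U = -A*a/(6h), in J."""
    return -hamaker * a / (6.0 * h)

def u_edl(h, a, psi1, psi2, kappa):
    """Hogg-Healy-Fuerstenau constant-potential double layer energy, sphere-plate, in J.
    psi1, psi2 are surface potentials (V); kappa is the inverse Debye length (1/m)."""
    term1 = 2 * psi1 * psi2 * math.log((1 + math.exp(-kappa * h)) / (1 - math.exp(-kappa * h)))
    term2 = (psi1 ** 2 + psi2 ** 2) * math.log(1 - math.exp(-2 * kappa * h))
    return math.pi * EPS_W * a * (term1 + term2)

def u_ab(h, a, dg_h0=-10e-3, h0=0.157e-9, lam=0.6e-9):
    """Hydrophobic (acid-base) term: exponential decay from the contact energy
    dg_h0 (J/m^2, negative = attractive) with decay length lam."""
    return 2 * math.pi * a * lam * dg_h0 * math.exp((h0 - h) / lam)

def u_xdlvo(h, a=0.5e-6, psi1=-0.03, psi2=-0.02, kappa=1 / 9.6e-9):
    """Total XDLVO interaction energy at separation h (m)."""
    return u_vdw(h, a) + u_edl(h, a, psi1, psi2, kappa) + u_ab(h, a)
```

The point made in the abstract is precisely that this three-term balance can miss steric contributions from surface biopolymers and adsorbed asphaltene/resin layers, which is why the MATH results deviated from the prediction.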

  17. ShowFlow: A practical interface for groundwater modeling

    SciTech Connect

    Tauxe, J.D.

    1990-12-01

ShowFlow was created to provide a user-friendly, intuitive environment for researchers and students who use computer modeling software. What has traditionally been a workplace available only to those familiar with command-line based computer systems is now within reach of almost anyone interested in the subject of modeling. In this edition of ShowFlow, the user can easily experiment with simulations using the steady-state Gaussian plume groundwater pollutant transport model SSGPLUME, though ShowFlow can be rewritten to provide a similar interface for any computer model. Included in this thesis is all the source code for both the ShowFlow application for Microsoft® Windows™ and the SSGPLUME model, a User's Guide, and a Developer's Guide for converting ShowFlow to run other model programs. 18 refs., 13 figs.

  18. Thermal Edge-Effects Model for Automated Tape Placement of Thermoplastic Composites

    NASA Technical Reports Server (NTRS)

    Costen, Robert C.

    2000-01-01

    Two-dimensional thermal models for automated tape placement (ATP) of thermoplastic composites neglect the diffusive heat transport that occurs between the newly placed tape and the cool substrate beside it. Such lateral transport can cool the tape edges prematurely and weaken the bond. The three-dimensional, steady state, thermal transport equation is solved by the Green's function method for a tape of finite width being placed on an infinitely wide substrate. The isotherm for the glass transition temperature on the weld interface is used to determine the distance inward from the tape edge that is prematurely cooled, called the cooling incursion Delta a. For the Langley ATP robot, Delta a = 0.4 mm for a unidirectional lay-up of PEEK/carbon fiber composite, and Delta a = 1.2 mm for an isotropic lay-up. A formula for Delta a is developed and applied to a wide range of operating conditions. A surprise finding is that Delta a need not decrease as the Peclet number Pe becomes very large, where Pe is the dimensionless ratio of inertial to diffusive heat transport. Conformable rollers that increase the consolidation length would also increase Delta a, unless other changes are made, such as proportionally increasing the material speed. To compensate for premature edge cooling, the thermal input could be extended past the tape edges by the amount Delta a. This method should help achieve uniform weld strength and crystallinity across the width of the tape.
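For reference, the Peclet number Pe mentioned above compares advective ("inertial") to diffusive heat transport, commonly written Pe = v*L/alpha for material speed v, a characteristic length L, and thermal diffusivity alpha. The numbers below are illustrative assumptions, not the paper's operating conditions:

```python
def peclet(speed, length, thermal_diffusivity):
    """Pe = v*L/alpha: ratio of advective to diffusive heat transport."""
    return speed * length / thermal_diffusivity

# Illustrative ATP-like values (assumed): 50 mm/s head speed, 10 mm
# consolidation length, alpha ~ 5e-7 m^2/s for a PEEK/carbon laminate.
pe = peclet(0.05, 0.01, 5e-7)  # ~1000: strongly advection-dominated
```

Large Pe means heat is swept along with the moving tape much faster than it diffuses, which is why the finding that the cooling incursion need not shrink as Pe grows is counterintuitive.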

  19. Behavior of asphaltene model compounds at w/o interfaces.

    PubMed

    Nordgård, Erland L; Sørland, Geir; Sjöblom, Johan

    2010-02-16

    Asphaltenes, present in significant amounts in heavy crude oil, contain subfractions capable of stabilizing water-in-oil emulsions. Still, the composition of these subfractions is not known in detail, and the actual mechanism behind emulsion stability depends on perceived interfacial concentrations and compositions. This study aims at utilizing polyaromatic surfactants which contain an acidic moiety as model compounds for the surface-active subfraction of asphaltenes. A modified pulsed-field gradient (PFG) NMR method has been used to study droplet sizes and stability of emulsions prepared with asphaltene model compounds. The method has been compared to the standard microscopy droplet counting method. Arithmetic and volumetric mean droplet sizes as a function of surfactant concentration and water content clearly showed that the interfacial area was dependent on the available surfactant at the emulsion interface. Adsorption of the model compounds onto hydrophilic silica has been investigated by UV depletion, and minor differences in the chemical structure of the model compounds caused significant differences in the affinity toward this highly polar surface. The cross-sectional areas obtained have been compared to areas from the surface-to-volume ratio found by NMR and gave similar results for one of the two model compounds. The mean molecular area for this compound suggested a tilted geometry of the aromatic core with respect to the interface, which has also been proposed for real asphaltenic samples. The film behavior was further investigated using a liquid-liquid Langmuir trough, supporting the ability to form stable interfacial films. This study supports the view that acidic, or strongly hydrogen-bonding, fractions can promote stable water-in-oil emulsions. The use of model compounds opens the way to studying emulsion behavior and demulsifier efficiency based on true interfacial concentrations rather than perceived interfaces. PMID:19852481

  20. Language Model Applications to Spelling with Brain-Computer Interfaces

    PubMed Central

    Mora-Cortes, Anderson; Manyakov, Nikolay V.; Chumerin, Nikolay; Van Hulle, Marc M.

    2014-01-01

    Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems, particularly in conjunction with other AAL technologies. PMID:24675760

  1. A diffuse interface model of grain boundary faceting

    NASA Astrophysics Data System (ADS)

    Abdeljawad, Fadi; Medlin, Douglas; Zimmerman, Jonathan; Hattar, Khalid; Foiles, Stephen

    Incorporating anisotropy into thermodynamic treatments of interfaces dates back to over a century ago. For a given orientation of two abutting grains in a pure metal, depressions in the grain boundary (GB) energy may exist as a function of GB inclination, defined by the plane normal. Therefore, an initially flat GB may facet, resulting in a hill-and-valley structure. Herein, we present a diffuse interface model of GB faceting that is capable of capturing anisotropic GB energies and mobilities, and accounting for the excess energy due to facet junctions and their non-local interactions. The hallmark of our approach is the ability to independently examine the role of each of the interface properties on the faceting behavior. As a demonstration, we consider the Σ5 ⟨001⟩ tilt GB in iron, where faceting along the {310} and {210} planes was experimentally observed. Linear stability analysis and numerical examples highlight the role of junction energy and associated non-local interactions on the resulting facet length scales. On the whole, our modeling approach provides a general framework to examine the spatio-temporal evolution of highly anisotropic GBs in polycrystalline metals. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  2. Developing a laser shockwave model for characterizing diffusion bonded interfaces

    SciTech Connect

    Lacy, Jeffrey M.; Smith, James A.; Rabin, Barry H.

    2015-03-31

    The US National Nuclear Security Administration (NNSA) has a Global Threat Reduction Initiative (GTRI) with the goal of reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU in high-power research reactors. The new LEU fuel is a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to support the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, because the deposition of laser energy into the containment layer on a specimen's surface is intractably complex, the shock wave energy is inferred from the surface velocity measured on the backside of the fuel plate and the depth of the impression left on the surface by the high-pressure plasma pulse created by the shock laser. To help quantify the stresses generated at the interfaces, a finite element method (FEM) model is being utilized. This paper will report on initial efforts to develop and validate the model by comparing numerical and experimental results for back surface velocities and front surface depressions in a single aluminum plate representative of the fuel cladding.

  3. Developing a laser shockwave model for characterizing diffusion bonded interfaces

    NASA Astrophysics Data System (ADS)

    Lacy, Jeffrey M.; Smith, James A.; Rabin, Barry H.

    2015-03-01

    The US National Nuclear Security Administration (NNSA) has a Global Threat Reduction Initiative (GTRI) with the goal of reducing the worldwide use of high-enriched uranium (HEU). A salient component of that initiative is the conversion of research reactors from HEU to low enriched uranium (LEU) fuels. An innovative fuel is being developed to replace HEU in high-power research reactors. The new LEU fuel is a monolithic fuel made from a U-Mo alloy foil encapsulated in Al-6061 cladding. In order to support the fuel qualification process, the Laser Shockwave Technique (LST) is being developed to characterize the clad-clad and fuel-clad interface strengths in fresh and irradiated fuel plates. LST is a non-contact method that uses lasers for the generation and detection of large amplitude acoustic waves to characterize interfaces in nuclear fuel plates. However, because the deposition of laser energy into the containment layer on a specimen's surface is intractably complex, the shock wave energy is inferred from the surface velocity measured on the backside of the fuel plate and the depth of the impression left on the surface by the high-pressure plasma pulse created by the shock laser. To help quantify the stresses generated at the interfaces, a finite element method (FEM) model is being utilized. This paper will report on initial efforts to develop and validate the model by comparing numerical and experimental results for back surface velocities and front surface depressions in a single aluminum plate representative of the fuel cladding.

  4. Modeling the interaction of biological cells with a solidifying interface

    NASA Astrophysics Data System (ADS)

    Chang, Anthony; Dantzig, Jonathan A.; Darr, Brian T.; Hubel, Allison

    2007-10-01

    In this article, we develop a modified level set method for modeling the interaction of particles with a solidifying interface. The dynamic computation of the van der Waals and drag forces between the particles and the solidification front leads to a problem of multiple length scales, which we resolve using adaptive grid techniques. We present a variety of example problems to demonstrate the accuracy and utility of the method. We also use the model to interpret experimental results obtained using directional solidification in a cryomicroscope.
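The level-set idea at the core of the method can be illustrated in one dimension: the interface is represented implicitly as the zero crossing of a signed-distance function that is advected with the front speed. The following sketch is an assumed minimal setup (first-order upwinding on a uniform grid), not the authors' adaptive-grid code:

```python
# Toy 1-D level-set front tracking (illustrative, not the paper's method):
# the interface is the zero crossing of a signed-distance function phi,
# advected with constant speed V by first-order upwind differencing.
nx, dx, dt, V = 100, 1.0, 0.5, 1.0          # CFL number = V*dt/dx = 0.5
phi = [i * dx - 20.0 for i in range(nx)]    # front initially at x = 20

def step(phi):
    new = phi[:]
    for i in range(1, nx):
        new[i] = phi[i] - V * dt / dx * (phi[i] - phi[i - 1])  # upwind (V > 0)
    return new

for _ in range(40):                         # advance to t = 40 * dt = 20
    phi = step(phi)

# Locate the interface as the grid point where |phi| is smallest.
front = min(range(nx), key=lambda i: abs(phi[i]))
print(front)  # front has moved from x = 20 to x = 40
```

The real model couples this interface representation to van der Waals and drag forces on the particles, which is where the multiple length scales and the need for adaptive grids arise.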

  5. Diffuse-interface modeling of three-phase interactions

    NASA Astrophysics Data System (ADS)

    Park, Jang Min; Anderson, Patrick D.

    2016-05-01

    In this work, a numerical model is developed to study the three-phase interactions which take place when two immiscible drops suspended in a third immiscible liquid are brought together. The diffuse-interface model coupled with the hydrodynamic equations is solved by a standard finite element method. Partial and complete engulfing between two immiscible drops is studied, and the effects of several parameters are discussed. In the partial-engulfing case, two stages of wetting and pulling are identified, which qualitatively agrees with the experiment. In the complete-engulfing case, three stages of wetting and/or penetration, pulling, and spreading are identified.

  6. Frictional dissipation at a model heterogeneous sliding interface

    NASA Astrophysics Data System (ADS)

    Hammerberg, J. E.; Mikulla, R. P.; Holian, B. L.

    2000-03-01

    We have studied frictional dissipation at a flat interface in a two-dimensional Lennard-Jones system consisting of two triangular lattices rotated with respect to each other by 90 degrees. Molecular dynamics simulations with constant tangential velocity boundary conditions resulted, for low velocities, in a steady state for which the tangential force is an increasing function of the relative sliding velocity. In these simulations the fundamental mechanism for this increase appears to be the movement of surface atoms across the interface into potential minima which are separated by time-dependent barriers. We present a simple cage model consisting of a particle in a time-dependent double-well potential with damping and external noise which reproduces some of the behavior seen in the large scale simulations.
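The cage model described above can be sketched as a Langevin simulation of a particle in a double well whose barrier oscillates in time. The potential form, parameter values, and integration scheme below are assumptions for illustration, not the authors' model:

```python
import math
import random

# Minimal sketch of a cage model (assumed form, not the authors' code):
# a particle in a time-dependent double-well potential
# V(x, t) = a(t) * (x^2 - 1)^2, with damping gamma and thermal noise,
# integrated with the Euler-Maruyama scheme.
def simulate(steps=10000, dt=1e-3, gamma=1.0, kT=0.1, omega=2.0, seed=0):
    rng = random.Random(seed)
    x, v = 1.0, 0.0                           # start in the right-hand well
    for n in range(steps):
        t = n * dt
        a = 1.0 + 0.5 * math.sin(omega * t)   # time-dependent barrier height
        force = -4.0 * a * x * (x * x - 1.0)  # -dV/dx
        noise = math.sqrt(2.0 * gamma * kT * dt) * rng.gauss(0.0, 1.0)
        v += (force - gamma * v) * dt + noise
        x += v * dt
    return x

x_final = simulate()
print(x_final)  # position stays bounded near the wells at x = +/-1
```

Hops between the wells, driven jointly by the noise and the oscillating barrier, are the analogue of surface atoms moving between potential minima across the sliding interface.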

  7. Numerical modeling of materials processes with fluid-fluid interfaces

    NASA Astrophysics Data System (ADS)

    Yanke, Jeffrey Michael

    A numerical model has been developed to study material processes that depend on the interaction between fluids with a large discontinuity in thermophysical properties. A base model capable of solving equations of mass, momentum, energy conservation, and solidification has been altered to enable tracking of the interface between two immiscible fluids and correctly predict the interface deformation using a volume of fluid (VOF) method. Two materials processes investigated using this technique are Electroslag Remelting (ESR) and plasma spray deposition. ESR is a secondary melting technique that passes an AC current through an electrically resistive slag to provide the heat necessary to melt the alloy. The simulation tracks the interface between the slag and metal. The model was validated against industrial scale ESR ingots and was able to predict trends in melt rate, sump depth, macrosegregation, and liquid sump depth. In order to better understand the underlying physics of the process, several constant-current ESR runs simulated the effects of freezing slag in the model. Including the solidifying slag in the simulations was found to have an effect on the melt rate and sump shape, but there is too much uncertainty in ESR slag property data at this time for quantitative predictions. The second process investigated in this work is the deposition of ceramic coatings via plasma spray deposition. In plasma spray deposition, powderized coating material is injected into a plasma that melts and carries the powder toward the substrate, where it impacts, flattening out and freezing. The impacting droplets pile up to form a porous coating. The model is used to simulate this rain of liquid ceramic particles impacting the substrate and forming a coating. Trends in local solidification time and porosity are calculated for various particle sizes and velocities. The prediction of decreasing porosity with increasing particle velocity matches previous experimental results. Also, a preliminary study was conducted to investigate the effects of substrate surface defects and droplet impact angle on the propensity to form columnar porosity.

  8. Automated Eukaryotic Gene Structure Annotation Using EVidenceModeler and the Program to Assemble Spliced Alignments

    SciTech Connect

    Haas, B J; Salzberg, S L; Zhu, W; Pertea, M; Allen, J E; Orvis, J; White, O; Buell, C R; Wortman, J R

    2007-12-10

    EVidenceModeler (EVM) is presented as an automated eukaryotic gene structure annotation tool that reports eukaryotic gene structures as a weighted consensus of all available evidence. EVM, when combined with the Program to Assemble Spliced Alignments (PASA), yields a comprehensive, configurable annotation system that predicts protein-coding genes and alternatively spliced isoforms. Our experiments on both rice and human genome sequences demonstrate that EVM produces automated gene structure annotation approaching the quality of manual curation.
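The weighted-consensus idea can be illustrated with a toy scorer that ranks candidate gene structures by the weighted amount of evidence covering them. The evidence types, weights, and scoring rule below are assumptions chosen for illustration, not EVM's actual configuration or algorithm:

```python
# Toy evidence-weighted consensus in the spirit of EVM (assumed weights and
# evidence types, not EVM's actual scoring): each candidate gene structure is
# scored by the weighted coverage of evidence supporting its exon positions.
weights = {"ab_initio": 1.0, "protein_alignment": 5.0, "transcript": 10.0}

def score(structure, evidence):
    """structure: list of (start, end) exons; evidence: {type: set of positions}."""
    total = 0.0
    for start, end in structure:
        for pos in range(start, end + 1):
            for etype, positions in evidence.items():
                if pos in positions:
                    total += weights[etype]
    return total

evidence = {
    "ab_initio": set(range(100, 200)),          # gene predictor support
    "protein_alignment": set(range(120, 180)),  # homology support
    "transcript": set(range(150, 160)),         # strongest evidence tier
}
candidates = [[(100, 199)], [(120, 179)], [(150, 159)]]
best = max(candidates, key=lambda s: score(s, evidence))
print(best)  # [(100, 199)]
```

EVM itself resolves overlapping, conflicting predictions along the whole genome with a dynamic-programming consensus; the sketch only conveys the weighting principle.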

  9. A Deformable Template Model with Feature Tracking for Automated IVUS Segmentation

    NASA Astrophysics Data System (ADS)

    Manandhar, Prakash; Hau Chen, Chi

    2010-02-01

    Intravascular Ultrasound (IVUS) can be used to create a 3D vascular profile of arteries for preventative prediction of Coronary Heart Disease (CHD). Segmentation of individual B-scan frames is a crucial step for creating profiles. Manual segmentation is too labor-intensive to be of routine use. Automated segmentation algorithms are not yet accurate enough. We present a method of tracking features across frames of ultrasound data to increase automated segmentation accuracy using a deformable template model.

  10. A biological model for controlling interface growth and morphology.

    SciTech Connect

    Hoyt, Jeffrey John; Holm, Elizabeth Ann

    2004-01-01

    Biological systems create proteins that perform tasks more efficiently and precisely than conventional chemicals. For example, many plants and animals produce proteins to control the freezing of water. Biological antifreeze proteins (AFPs) inhibit the solidification process, even below the freezing point. These molecules bond to specific sites at the ice/water interface and are theorized to suppress solidification chemically or geometrically. In this project, we investigated the theoretical and experimental data on AFPs and performed analyses to understand the unique physics of AFPs. The experimental literature was analyzed to determine chemical mechanisms and effects of protein bonding at ice surfaces, specifically thermodynamic freezing point depression, suppression of ice nucleation, decrease in dendrite growth kinetics, solute drag on the moving solid/liquid interface, and steric pinning of the ice interface. Steric pinning was found to be the most likely candidate to explain experimental results, including freezing point depression, growth morphologies, and thermal hysteresis. A new steric pinning model was developed and applied to AFPs, with excellent quantitative results. Understanding biological antifreeze mechanisms could enable important medical and engineering applications, but considerable future work will be necessary.

  11. A symbolic/subsymbolic interface protocol for cognitive modeling

    PubMed Central

    Simen, Patrick; Polk, Thad

    2009-01-01

    Researchers studying complex cognition have grown increasingly interested in mapping symbolic cognitive architectures onto subsymbolic brain models. Such a mapping seems essential for understanding cognition under all but the most extreme viewpoints (namely, that cognition consists exclusively of digitally implemented rules; or instead, involves no rules whatsoever). Making this mapping reduces to specifying an interface between symbolic and subsymbolic descriptions of brain activity. To that end, we propose parameterization techniques for building cognitive models as programmable, structured, recurrent neural networks. Feedback strength in these models determines whether their components implement classically subsymbolic neural network functions (e.g., pattern recognition), or instead, logical rules and digital memory. These techniques support the implementation of limited production systems. Though inherently sequential and symbolic, these neural production systems can exploit principles of parallel, analog processing from decision-making models in psychology and neuroscience to explain the effects of brain damage on problem solving behavior. PMID:20711520
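The claim that feedback strength determines whether a component behaves subsymbolically (graded pattern processing) or symbolically (digital memory) can be illustrated with a single recurrent unit. The dynamics and parameter values below are assumed for illustration, not the authors' parameterization:

```python
import math

# Toy illustration of the feedback-strength idea (assumed dynamics, not the
# authors' model): a sigmoidal unit with self-feedback weight w. With weak
# feedback the unit's activity decays (analog, input-driven behavior); with
# strong feedback it latches into a stable "on" state (digital memory).
def settle(w, bias=-0.5, x0=0.9, steps=200):
    x = x0
    for _ in range(steps):
        x = 1.0 / (1.0 + math.exp(-10.0 * (w * x + bias)))  # gain-10 sigmoid
    return x

weak = settle(w=0.3)    # weak feedback: activity decays toward "off"
strong = settle(w=1.0)  # strong feedback: activity latches near "on"
print(weak, strong)
```

With w = 1.0 the unit is bistable (it would also hold a low state if started low), which is the essential property of a digital memory element built from an analog neuron.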

  12. A Model of Process-Based Automation: Cost and Quality Implications in the Medication Management Process

    ERIC Educational Resources Information Center

    Spaulding, Trent Joseph

    2011-01-01

    The objective of this research is to understand how a set of systems, as defined by the business process, creates value. The three studies contained in this work develop the model of process-based automation. The model states that complementarities among systems are specified by handoffs in the business process. The model also provides theory to…

  14. Groundwater modeling and remedial optimization design using graphical user interfaces

    SciTech Connect

    Deschaine, L.M.

    1997-05-01

    The ability to accurately predict the behavior of chemicals in groundwater systems under natural flow circumstances or remedial screening and design conditions is the cornerstone of the environmental industry. The ability to do this efficiently, and to effectively communicate the information to the client and regulators, is what differentiates effective consultants from ineffective ones. Recent advances in groundwater modeling graphical user interfaces (GUIs) are doing for numerical modeling what Windows™ did for DOS. GUIs facilitate both the modeling process and the information exchange. This Test Drive evaluates the performance of two GUIs--Groundwater Vistas and ModIME--on an actual groundwater model calibration and remedial design optimization project. In the early days of numerical modeling, data input consisted of large arrays of numbers that required intensive labor to input and troubleshoot. Model calibration was also manual, as was interpreting the reams of computer output for each of the tens or hundreds of simulations required to calibrate and perform optimal groundwater remedial design. During this period, the majority of the modeler's effort (and budget) was spent just getting the model running, as opposed to solving the environmental challenge at hand. GUIs take the majority of the grunt work out of the modeling process, thereby allowing the modeler to focus on designing optimal solutions.

  15. Towards an Improved Pilot-Vehicle Interface for Highly Automated Aircraft: Evaluation of the Haptic Flight Control System

    NASA Technical Reports Server (NTRS)

    Schutte, Paul; Goodrich, Kenneth; Williams, Ralph

    2012-01-01

    The control automation and interaction paradigm (e.g., manual, autopilot, flight management system) used on virtually all large highly automated aircraft has long been an exemplar of breakdowns in human factors and human-centered design. An alternative paradigm is the Haptic Flight Control System (HFCS) that is part of NASA Langley Research Center's Naturalistic Flight Deck Concept. The HFCS uses only stick and throttle for easily and intuitively controlling the actual flight of the aircraft without losing any of the efficiency and operational benefits of the current paradigm. Initial prototypes of the HFCS are being evaluated and this paper describes one such evaluation. In this evaluation we examined claims regarding improved situation awareness, appropriate workload, graceful degradation, and improved pilot acceptance. Twenty-four instrument-rated pilots were instructed to plan and fly four different flights in a fictitious airspace using a moderate fidelity desktop simulation. Three different flight control paradigms were tested: Manual control, Full Automation control, and a simplified version of the HFCS. Dependent variables included both subjective (questionnaire) and objective (SAGAT) measures of situation awareness, workload (NASA-TLX), secondary task performance, time to recognize automation failures, and pilot preference (questionnaire). The results showed a statistically significant advantage for the HFCS in a number of measures. Results that were not statistically significant still favored the HFCS. The results suggest that the HFCS does offer an attractive and viable alternative to the tactical components of today's FMS/autopilot control system. The paper describes further studies that are planned to continue to evaluate the HFCS.

  16. A user interface model for navigation in virtual environments.

    PubMed

    Serolli Pinho, Márcio; Dias, Leandro Luís; Antunes Moreira, Carlos G; González Khodjaoghlanian, Emmanuel; Pizzini Becker, Gustavo; Duarte, Lúcio Mauro

    2002-10-01

    One of the most complicated tasks when working with three-dimensional virtual worlds is the navigation process. Usually, this process requires the use of buttons and key-sequences and the development of interaction metaphors that frequently make the interaction process artificial and inefficient. In these environments, very simple tasks, such as looking upward and downward, can become extremely complicated. To overcome these obstacles, this work presents an interaction model for three-dimensional virtual worlds, based on the interpretation of the natural gestures of a real user while he/she is walking in a real world. This model is an example of a non-WIMP (Window, Icon, Menu, Pointer) interface. To test this model, we created a device named "virtual bike." With this device, the user can navigate through the virtual environment exactly as if he/she were riding a real bike. PMID:12448781

  17. Modeling the Energy Use of a Connected and Automated Transportation System (Poster)

    SciTech Connect

    Gonder, J.; Brown, A.

    2014-07-01

    Early research points to large potential impacts of connected and automated vehicles (CAVs) on transportation energy use - dramatic savings, increased use, or anything in between. Due to a lack of suitable data and integrated modeling tools to explore these complex future systems, analyses to date have relied on simple combinations of isolated effects. This poster proposes a framework for modeling the potential energy implications from increasing penetration of CAV technologies and for assessing technology and policy options to steer them toward favorable energy outcomes. Current CAV modeling challenges include estimating behavior change, understanding potential vehicle-to-vehicle interactions, and assessing traffic flow and vehicle use under different automation scenarios. To bridge these gaps and develop a picture of potential future automated systems, NREL is integrating existing modeling capabilities with additional tools and data inputs to create a more fully integrated CAV assessment toolkit.

  18. Automated model-based calibration of imaging spectrographs

    NASA Astrophysics Data System (ADS)

    Kosec, Matjaž; Bürmen, Miran; Tomaževič, Dejan; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Hyperspectral imaging has gained recognition as an important non-invasive research tool in the field of biomedicine. Among the variety of available hyperspectral imaging systems, systems comprising an imaging spectrograph, lens, wideband illumination source, and a corresponding camera stand out for their short acquisition time and good signal-to-noise ratio. The individual images acquired by imaging spectrograph-based systems contain full spectral information along one spatial dimension. Due to the imperfections in the camera lens and in particular the optical components of the imaging spectrograph, the acquired images are subject to spatial and spectral distortions, resulting in scene-dependent nonlinear spectral degradations and spatial misalignments which need to be corrected. However, the existing correction methods require complex calibration setups and tedious manual involvement; therefore, the correction of the distortions is often neglected. Such a simplified approach can lead to significant errors in the analysis of the acquired hyperspectral images. In this paper, we present a novel fully automated method for correction of the geometric and spectral distortions in the acquired images. The method is based on automated non-rigid registration of the reference and acquired images corresponding to the proposed calibration object incorporating standardized spatial and spectral information. The obtained transformation was successfully used for sub-pixel correction of various hyperspectral images, resulting in significant improvement of the spectral and spatial alignment. It was found that the proposed calibration is highly accurate and suitable for routine use in applications involving either diffuse reflectance or transmittance measurement setups.

  19. Electroviscoelasticity of liquid/liquid interfaces: fractional-order model.

    PubMed

    Spasic, Aleksandar M; Lazarevic, Mihailo P

    2005-02-01

    A number of theories that describe the behavior of liquid-liquid interfaces have been developed and applied to various dispersed systems, e.g., Stokes, Reiner-Rivlin, Ericksen, Einstein, Smoluchowski, and Kinch. A new theory of electroviscoelasticity describes the behavior of electrified liquid-liquid interfaces in fine dispersed systems and is based on a new constitutive model of liquids. According to this model, a liquid-liquid droplet or droplet-film structure (a collective of particles) is considered as a macroscopic system with internal structure determined by the way the molecules (ions) are tuned (structured) into the primary components of a cluster configuration. How the tuning/structuring occurs depends on the physical fields involved, both potential (elastic forces) and nonpotential (resistance forces). All these microelements of the primary structure can be considered as electromechanical oscillators assembled into groups, so that excitation by an external physical field may cause oscillations at the resonant/characteristic frequency of the system itself (coupling at the characteristic frequency). Up to now, three possible mathematical formalisms have been discussed related to the theory of electroviscoelasticity. The first is the tension tensor model, where the normal and tangential forces are considered only in mathematical formalism, regardless of their origin (mechanical and/or electrical). The second is the Van der Pol derivative model, presented by linear and nonlinear differential equations. Finally, the third model presents an effort to generalize the previous Van der Pol equation: the ordinary time derivative and integral are now replaced with the corresponding fractional-order time derivative and integral of order p<1. PMID:15576102
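The second formalism refers to the classical Van der Pol oscillator, whose integer-order form can be integrated in a few lines. The sketch below uses illustrative parameters and a simple explicit Euler scheme; the fractional-order generalization (derivative of order p < 1) described above is not implemented here:

```python
# Classical (integer-order) Van der Pol oscillator underlying the second
# formalism: x'' - mu * (1 - x^2) * x' + x = 0. Parameters are illustrative;
# the fractional-order model replaces the time derivative with one of
# order p < 1 (not shown).
def van_der_pol(mu=1.0, x0=1.0, v0=0.0, dt=1e-3, steps=50000):
    x, v = x0, v0
    for _ in range(steps):
        a = mu * (1.0 - x * x) * v - x  # acceleration from the ODE
        v += a * dt                     # explicit Euler update
        x += v * dt
    return x, v

x, v = van_der_pol()
print(x, v)  # state on the bounded limit cycle after transients decay
```

Whatever the initial condition, trajectories settle onto a self-sustained limit cycle, which is the oscillatory behavior the electroviscoelastic droplet model appeals to.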

  20. Modeling organohalide perovskites for photovoltaic applications: From materials to interfaces

    NASA Astrophysics Data System (ADS)

    de Angelis, Filippo

    2015-03-01

    The field of hybrid/organic photovoltaics was revolutionized in 2012 by the first reports of solid-state solar cells based on organohalide perovskites, now exceeding 20% efficiency. First-principles modeling has been widely applied to the dye-sensitized solar cell (DSC) field, and more recently to perovskite-based solar cells. The computational design and screening of new materials has played a major role in advancing the DSC field. Suitable modeling strategies may also offer a view of the crucial heterointerfaces ruling the device operational mechanism. I will illustrate how simulation tools can be employed in the emerging field of perovskite solar cells. The performance of the proposed simulation toolbox along with the fundamental modeling strategies are presented using selected examples of relevant materials and interfaces. The main issue with hybrid perovskite modeling is to be able to accurately describe their structural, electronic and optical features. These materials show a degree of short range disorder, due to the presence of mobile organic cations embedded within the inorganic matrix, requiring their properties to be averaged over a molecular dynamics trajectory. Due to the presence of heavy atoms (e.g. Sn and Pb), their electronic structure must take into account spin-orbit coupling (SOC) in an effective way, possibly including GW corrections. The proposed SOC-GW method constitutes the basis for tuning the materials' electronic and optical properties, rationalizing experimental trends. Modeling charge generation in perovskite-sensitized TiO2 interfaces is then approached based on a SOC-DFT scheme, describing alignment of energy levels in a qualitatively correct fashion. The role of interfacial chemistry on the device performance is finally discussed. The research leading to these results has received funding from the European Union Seventh Framework Programme [FP7/2007-2013] under Grant Agreement No. 604032 of the MESO project.

  1. Time-domain matched interface and boundary (MIB) modeling of Debye dispersive media with curved interfaces

    NASA Astrophysics Data System (ADS)

    Nguyen, Duc Duy; Zhao, Shan

    2014-12-01

    A new finite-difference time-domain (FDTD) method is introduced for solving transverse magnetic Maxwell's equations in Debye dispersive media with complex interfaces and discontinuous wave solutions. Based on the auxiliary differential equation approach, a hybrid Maxwell-Debye system is constructed, which couples the wave equation for the electric component with Maxwell's equations for the magnetic components. This hybrid formulation enables the calculation of the time dependent parts of the interface jump conditions, so that one can track the transient changes in the regularities of the electromagnetic fields across a dispersive interface. Effective matched interface and boundary (MIB) treatments are proposed to rigorously impose the physical jump conditions, which are not only time dependent, but also couple both Cartesian directions and both magnetic field components. Based on a staggered Yee lattice, the proposed MIB scheme can deal with arbitrarily curved interfaces and nonsmooth interfaces with sharp edges. Second-order convergence is numerically achieved in solving dispersive interface problems with constant curvatures, general curvatures, and nonsmooth corners.
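The auxiliary-differential-equation (ADE) coupling referred to above can be sketched in one dimension. The following is an assumed minimal ADE-FDTD update for a single-pole Debye medium in normalized units, without the MIB interface treatment that is the paper's actual contribution; all parameter values are illustrative:

```python
import math

# Minimal 1-D ADE-FDTD sketch for a single-pole Debye medium (illustrative,
# normalized units, no interface treatment). Debye dispersion:
# eps(w) = eps_inf + (eps_s - eps_inf) / (1 + j*w*tau), handled by the ADE
# tau * dP/dt + P = (eps_s - eps_inf) * E for the polarization P.
nz, steps = 200, 300
dt, dz = 0.5, 1.0                       # Courant number dt/dz = 0.5 < 1
eps_inf, eps_s, tau = 1.0, 3.0, 10.0    # assumed Debye parameters

E = [0.0] * nz
D = [0.0] * nz
P = [0.0] * nz
H = [0.0] * (nz - 1)

for n in range(steps):
    for k in range(nz - 1):             # H update (staggered Yee grid)
        H[k] += dt / dz * (E[k + 1] - E[k])
    for k in range(1, nz - 1):          # D update from the curl of H
        D[k] += dt / dz * (H[k] - H[k - 1])
    D[nz // 4] += math.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
    for k in range(nz):                 # semi-implicit ADE step, then recover E
        P[k] = (P[k] + dt / tau * (eps_s - eps_inf) * E[k]) / (1.0 + dt / tau)
        E[k] = (D[k] - P[k]) / eps_inf

print(max(abs(e) for e in E))           # pulse propagates with bounded amplitude
```

The paper's hybrid formulation goes further by evolving a wave equation for the electric component and imposing time-dependent jump conditions at curved material interfaces, which this uniform-medium sketch omits.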

  2. Intelligent User Interfaces for Information Analysis: A Cognitive Model

    SciTech Connect

    Schwarting, Irene S.; Nelson, Rob A.; Cowell, Andrew J.

    2006-01-29

    Intelligent user interfaces (IUIs) for information analysis (IA) need to be designed with an intrinsic understanding of the analytical objectives and the dimensions of the information space. These analytical objectives are oriented around the requirement to provide decision makers with courses of action. Most tools available to support analysis barely skim the surface of the dimensions and categories of information used in analysis, and almost none are designed to address the ultimate requirement of decision support. This paper presents a high-level model of the cognitive framework of information analysts in the context of doing their jobs. It is intended that this model will enable the derivation of design requirements for advanced IUIs for IA.

  3. A possible model for understanding the personality--intelligence interface.

    PubMed

    Chamorro-Premuzic, Tomas; Furnham, Adrian

    2004-05-01

    Despite the recent increase in the number of studies examining empirical links between personality and intelligence (see Hofstee, 2001; Zeidner & Matthews, 2000), a theoretical integration of ability and nonability traits remains largely unaddressed. This paper presents a possible conceptual framework for understanding the personality-intelligence interface. In doing so, it conceptualizes three different levels of intelligence, namely, intellectual ability (which comprises both Gf and Gc), IQ test performance and subjectively assessed intelligence (a mediator between personality, intellectual ability and IQ test performance). Although the model draws heavily upon correlation evidence, each of its paths may be tested independently. The presented model may, therefore, be used to explore causation and further develop theoretical approaches to understanding the relation between ability and nonability traits underlying human performance. PMID:15142305

  4. Model Based Control Design Using SLPS "Simulink PSpice Interface"

    NASA Astrophysics Data System (ADS)

    Moslehpour, Saeid; Kulcu, Ercan K.; Alnajjar, Hisham

    This paper elaborates on the new integration offered with the PSpice SLPS interface and the MATLAB simulink products. SLPS links the two widely used design products, PSpice and Mathwork's Simulink simulator. The SLPS simulation environment supports the substitution of an actual electronic block with an "ideal model", better known as the mathematical simulink model. Thus enabling the designer to identify and correct integration issues of electronics within a system. Moreover, stress audit can be performed by using the PSpice smoke analysis which helps to verify whether the components are working within the manufacturer's safe operating limits. It is invaluable since many companies design and test the electronics separately from the system level. Therefore, integrations usually are not discovered until the prototype level, causing critical time delays in getting a product to the market.

  5. PyGSM: Python interface to the Global Sky Model

    NASA Astrophysics Data System (ADS)

    Price, Danny C.

    2016-03-01

    PyGSM is a Python interface for the Global Sky Model (GSM, ascl:1011.010). The GSM is a model of diffuse galactic radio emission, constructed from a variety of all-sky surveys spanning the radio band (e.g. Haslam and WMAP). PyGSM uses the GSM to generate all-sky maps in Healpix format of diffuse Galactic radio emission from 10 MHz to 94 GHz. The PyGSM module provides visualization utilities, file output in FITS format, and the ability to generate observed skies for a given location and date. PyGSM requires Healpy, PyEphem (ascl:1112.014), and AstroPy (ascl:1304.002).
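
    The core GSM idea that PyGSM wraps, building a sky map at an arbitrary frequency as a frequency-weighted sum of a few component templates, can be sketched as follows. The templates, anchor frequencies, and weights below are toy numbers (not GSM data), and `generate_sky` is a hypothetical stand-in for the real interface.

```python
import math

# Toy "principal component" sky templates (12 pixels each, illustrative
# values only -- the real GSM uses all-sky Healpix maps from surveys).
COMPONENTS = [
    [1.0, 0.8, 0.9, 1.2, 1.1, 0.7, 0.9, 1.0, 1.3, 0.8, 0.6, 1.1],
    [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3, 0.1, 0.2, -0.1, 0.0],
]

# Component weights tabulated at a few anchor frequencies (MHz); a
# GSM-like model interpolates these smoothly in log-frequency.
ANCHORS_MHZ = [10.0, 100.0, 1000.0, 10000.0]
WEIGHTS = [
    [5000.0, 400.0, 30.0, 2.0],  # overall amplitude falls as a power law
    [100.0, 20.0, 5.0, 1.0],
]

def generate_sky(freq_mhz):
    """Return a toy sky map at freq_mhz by log-log interpolating the
    tabulated component weights and summing the weighted templates."""
    x = math.log10(freq_mhz)
    xs = [math.log10(f) for f in ANCHORS_MHZ]
    j = max(i for i in range(len(xs) - 1) if xs[i] <= x)  # bracketing anchors
    t = (x - xs[j]) / (xs[j + 1] - xs[j])
    sky = [0.0] * len(COMPONENTS[0])
    for comp, w in zip(COMPONENTS, WEIGHTS):
        lw = (1 - t) * math.log10(w[j]) + t * math.log10(w[j + 1])
        weight = 10.0 ** lw
        for p in range(len(sky)):
            sky[p] += weight * comp[p]
    return sky

map_70 = generate_sky(70.0)
print(len(map_70), max(map_70))
```

    The diffuse Galactic emission is brighter at low radio frequencies, which is why the toy weights fall steeply with frequency.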

  6. Data for Environmental Modeling (D4EM): Background and Applications of Data Automation

    EPA Science Inventory

    The Data for Environmental Modeling (D4EM) project demonstrates the development of a comprehensive set of open source software tools that overcome obstacles to accessing data needed by automating the process of populating model input data sets with environmental data available fr...

  7. General Models for Automated Essay Scoring: Exploring an Alternative to the Status Quo

    ERIC Educational Resources Information Center

    Kelly, P. Adam

    2005-01-01

    Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same…

  8. A Binary Programming Approach to Automated Test Assembly for Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Finkelman, Matthew D.; Kim, Wonsuk; Roussos, Louis; Verschoor, Angela

    2010-01-01

    Automated test assembly (ATA) has been an area of prolific psychometric research. Although ATA methodology is well developed for unidimensional models, its application alongside cognitive diagnosis models (CDMs) is a burgeoning topic. Two suggested procedures for combining ATA and CDMs are to maximize the cognitive diagnostic index and to use a…
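
    The 0-1 (binary) selection problem at the heart of ATA can be sketched with a brute-force search over a toy item bank; a production system would hand the same objective and constraints to an integer-programming solver. The item values, attribute sets, and constraints below are invented for illustration, not taken from the article.

```python
from itertools import combinations

# Hypothetical item bank: (discrimination index, attributes measured).
ITEMS = [
    (0.9, {0}), (0.7, {1}), (0.8, {0, 1}), (0.4, {2}),
    (0.6, {1, 2}), (0.5, {0, 2}), (0.3, {2}), (0.2, {0}),
]
TEST_LENGTH = 3
ATTRIBUTES = {0, 1, 2}

def assemble():
    """Exhaustive 0-1 search: pick TEST_LENGTH items maximizing total
    discrimination while covering every attribute at least once."""
    best, best_val = None, -1.0
    for combo in combinations(range(len(ITEMS)), TEST_LENGTH):
        covered = set().union(*(ITEMS[i][1] for i in combo))
        if covered != ATTRIBUTES:
            continue  # infeasible: some attribute is never measured
        val = sum(ITEMS[i][0] for i in combo)
        if val > best_val:
            best, best_val = combo, val
    return best, best_val

best, val = assemble()
print(best, val)
```

    Note that the three most discriminating items alone are infeasible here (they never measure attribute 2), which is exactly the tension a binary-programming formulation resolves.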

  9. The electrical behavior of GaAs-insulator interfaces - A discrete energy interface state model

    NASA Technical Reports Server (NTRS)

    Kazior, T. E.; Lagowski, J.; Gatos, H. C.

    1983-01-01

    The relationship between the electrical behavior of GaAs Metal Insulator Semiconductor (MIS) structures and the high density discrete energy interface states (0.7 and 0.9 eV below the conduction band) was investigated utilizing photo- and thermal emission from the interface states in conjunction with capacitance measurements. It was found that all essential features of the anomalous behavior of GaAs MIS structures, such as the frequency dispersion and the C-V hysteresis, can be explained on the basis of nonequilibrium charging and discharging of the high density discrete energy interface states.

  10. Growth/reflectance model interface for wheat and corresponding model

    NASA Technical Reports Server (NTRS)

    Suits, G. H.; Sieron, R.; Odenweller, J.

    1984-01-01

    The use of modeling to explore the possibility of discovering new and useful crop condition indicators which might be available from the Thematic Mapper and to connect these symptoms to the biological causes in the crop is discussed. A crop growth model was used to predict the day to day growth features of the crop as it responds biologically to the various environmental factors. A reflectance model was used to predict the character of the interaction of daylight with the predicted growth features. An atmospheric path radiance was added to the reflected daylight to simulate the radiance appearing at the sensor. Finally, the digitized data sent to a ground station were calculated. The crop under investigation is wheat.

  11. Model a Discourse and Transform It to Your User Interface

    NASA Astrophysics Data System (ADS)

    Kaindl, Hermann

    Every interactive system needs a user interface, today possibly even several, adapted for different devices (PCs, PDAs, mobile phones). Developing a user interface is difficult and takes a lot of effort, since it normally requires design and implementation. This is also expensive, and even more so when several user interfaces are needed for different devices.

  12. A robust and flexible Geospatial Modeling Interface (GMI) for environmental model deployment and evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper provides an overview of the GMI (Geospatial Modeling Interface) simulation framework for environmental model deployment and assessment. GMI currently provides access to multiple environmental models including AgroEcoSystem-Watershed (AgES-W), Nitrate Leaching and Economic Analysis 2 (NLEA...

  13. Modeling the Electrical Contact Resistance at Steel-Carbon Interfaces

    NASA Astrophysics Data System (ADS)

    Brimmo, Ayoola T.; Hassan, Mohamed I.

    2016-01-01

    In the aluminum smelting industry, the electrical contact resistance at the stub-carbon (steel-carbon) interface has been recurrently reported to be of magnitudes that warrant concern. Addressing this via finite element modeling has been the focus of a number of investigations, with the pressure- and temperature-dependent contact resistance relation frequently cited as a factor limiting the accuracy of such models. In this study, pressure- and temperature-dependent relations are derived from the most extensively cited works that have experimentally characterized the electrical contact resistance at these contacts. These relations are applied in a validated thermo-electro-mechanical finite element model used to estimate the voltage drop across a steel-carbon laboratory setup. By comparing the model's estimate of the contact electrical resistance with experimental measurements, we deduce the applicability of the different relations over a range of temperatures. The ultimate goal of this study is to apply mathematical modeling to provide pressure- and temperature-dependent relations that best describe the steel-carbon electrical contact resistance and to identify the best-fit relation at specific thermodynamic conditions.

  14. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters of the parallel SWAT (P-SWAT), the number of processes and the corresponding percentage of work distributed to the master process, on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme between the master and slave processes. Although computation time decreases with an increasing number of processes (from two to five), this gain diminishes because of the accompanying increase in message passing between the master and all slave processes. Our case study demonstrates that P-SWAT may reach its maximum speedup with a five-process run, and that the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can substantially reduce computation time for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
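
    The trade-off the authors tune, where more processes shrink compute time but inflate message-passing overhead, can be sketched with a toy timing model. The serial fraction and communication constants below are illustrative assumptions, not values measured from SWAT.

```python
def run_time(n_procs, serial_frac=0.05, comm_cost=0.04, t_serial=1.0):
    """Toy timing model for a master/worker decomposition: the serial
    fraction stays on the master, the rest divides among n processes,
    and message-passing overhead grows with the number of workers."""
    if n_procs == 1:
        return t_serial
    compute = t_serial * (serial_frac + (1.0 - serial_frac) / n_procs)
    return compute + comm_cost * (n_procs - 1)

def best_process_count(max_procs=8):
    """Find the process count minimizing modeled run time."""
    times = {n: run_time(n) for n in range(1, max_procs + 1)}
    n_best = min(times, key=times.get)
    return n_best, round(1.0 / times[n_best], 2)  # speedup vs. serial

n_best, speedup = best_process_count()
print(n_best, speedup)
```

    With these constants the modeled optimum lands at five processes, mirroring the qualitative behavior the abstract reports: speedup first grows, then saturates and reverses as communication dominates.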

  15. Modeling interface-controlled phase transformation kinetics in thin films

    NASA Astrophysics Data System (ADS)

    Pang, E. L.; Vo, N. Q.; Philippe, T.; Voorhees, P. W.

    2015-05-01

    The Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation is widely used to describe phase transformation kinetics. This description, however, is not valid in finite size domains, in particular, thin films. A new computational model incorporating the level-set method is employed to study phase evolution in thin film systems. For both homogeneous (bulk) and heterogeneous (surface) nucleation, nucleation density and film thickness were systematically adjusted to study finite-thickness effects on the Avrami exponent during the transformation process. Only site-saturated nucleation with isotropic interface-kinetics controlled growth is considered in this paper. We show that the observed Avrami exponent is not constant throughout the phase transformation process in thin films with a value that is not consistent with the dimensionality of the transformation. Finite-thickness effects are shown to result in reduced time-dependent Avrami exponents when bulk nucleation is present, but not necessarily when surface nucleation is present.
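
    The time-dependent Avrami exponent discussed above is conventionally read off a double-log JMAK plot. A minimal sketch of that extraction on synthetic bulk data (the rate constant k and exponent n below are arbitrary choices for illustration):

```python
import math

def avrami_exponent(times, fractions):
    """Estimate the Avrami exponent n from X(t) = 1 - exp(-k t^n) via a
    least-squares slope of ln(-ln(1-X)) against ln(t), the classical
    JMAK double-log plot."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - X)) for X in fractions]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic bulk JMAK data with k = 0.02, n = 2.5.
times = [1, 2, 3, 4, 5, 6, 8, 10]
fractions = [1.0 - math.exp(-0.02 * t ** 2.5) for t in times]
print(round(avrami_exponent(times, fractions), 3))  # → 2.5
```

    In a thin film the same fit applied over successive time windows would yield the reduced, time-dependent exponents the paper describes, rather than a single constant n.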

  16. Automated NMR fragment based screening identified a novel interface blocker to the LARG/RhoA complex.

    PubMed

    Gao, Jia; Ma, Rongsheng; Wang, Wei; Wang, Na; Sasaki, Ryan; Snyderman, David; Wu, Jihui; Ruan, Ke

    2014-01-01

    The small GTPase cycles between the inactive GDP form and the activated GTP form, catalyzed by upstream guanine exchange factors. The modulation of this process by small molecules has proven to be a fruitful route for therapeutic intervention to prevent the over-activation of the small GTPase. The fragment-based approach that emerged in the past decade has demonstrated its potential in the discovery of inhibitors targeting such novel and challenging protein-protein interactions. The details of an NMR fragment screening procedure built from scratch have rarely been disclosed comprehensively, which restricts its wider application. To achieve a consistent screening protocol applicable to a number of targets, we developed a highly automated protocol covering every aspect of NMR fragment screening: the construction of a small but diverse library, determination of aqueous solubility by NMR, grouping of compounds with mutual dispersity into cocktails, and automated processing and visualization of the ligand-based screening spectra. We exemplified our streamlined screening on RhoA alone and on the complex of the small GTPase RhoA and its upstream guanine exchange factor LARG. Two hits were confirmed from the primary screening in cocktails and secondary screening over individual hits for the LARG/RhoA complex, one of which was also identified in the screening against RhoA alone. HSQC titration of the two hits against RhoA and LARG alone identified one compound that binds RhoA.GDP with 0.11 mM affinity and perturbs residues in the switch II region of RhoA. This hit blocked the formation of the LARG/RhoA complex, as validated by native gel electrophoresis and by titration of RhoA into ¹⁵N-labeled LARG in the absence and presence of the compound. It therefore provides a starting point toward a more potent inhibitor of RhoA activation catalyzed by LARG. PMID:24505392

  17. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  18. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGICAL MODELING TOOL FOR WATERSHED MANAGEMENT AND LANDSCAPE ASSESSMENT

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment (http://www.epa.gov/nerlesd1/land-sci/agwa/introduction.htm and www.tucson.ars.ag.gov/agwa) tool is a GIS interface jointly developed by the U.S. Environmental Protection Agency, USDA-Agricultural Research Service, and the University ...

  19. Bayesian inverse modeling at the hydrological surface-subsurface interface

    NASA Astrophysics Data System (ADS)

    Cucchi, K.; Rubin, Y.

    2014-12-01

    In systems where surface and subsurface hydrological domains are highly connected, modeling surface and subsurface flow jointly is essential to accurately represent the physical processes and come up with reliable predictions of flows in river systems or stream-aquifer exchange. The flow quantification at the interface merging the two hydrosystem components is a function of both surface and subsurface spatially distributed parameters. In the present study, we apply inverse modeling techniques to a synthetic catchment with connected surface and subsurface hydrosystems. The model is physically-based and implemented with the Gridded Surface Subsurface Hydrologic Analysis software. On the basis of hydrograph measurement at the catchment outlet, we estimate parameters such as saturated hydraulic conductivity, overland and channel roughness coefficients. We compare maximum likelihood estimates (ML) with the parameter distributions obtained using the Bayesian statistical framework for spatially random fields provided by the Method of Anchored Distributions (MAD). While ML estimates maximize the probability of observing the data and capture the global trend of the target variables, MAD focuses on obtaining a probability distribution for the random unknown parameters and the anchors are designed to capture local features. We check the consistency between the two approaches and evaluate the additional information provided by MAD on parameter distributions. We also assess the contribution of adding new types of measurements such as water table depth or soil conductivity to the reduction of parameter uncertainty.
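
    The contrast between a maximum likelihood point estimate and a Bayesian posterior can be sketched on a one-parameter toy inversion. The forward model, noise level, and grid below are invented for illustration; MAD itself handles spatially random fields and anchors, which this sketch does not attempt.

```python
import math
import random

random.seed(0)

# Synthetic "hydrograph" data: observed discharge depends on an unknown
# log-conductivity theta through a toy exponential-recession forward model.
def forward(theta, t):
    return 10.0 * math.exp(-math.exp(theta) * t)

true_theta, sigma = -1.0, 0.3
obs = [(t, forward(true_theta, t) + random.gauss(0.0, sigma))
       for t in [0.5, 1.0, 2.0, 4.0, 8.0]]

def log_likelihood(theta):
    return sum(-0.5 * ((q - forward(theta, t)) / sigma) ** 2
               for t, q in obs)

# Grid over theta: ML keeps only the argmax; the Bayesian route (flat
# prior here) keeps the whole normalized posterior, so parameter
# uncertainty survives the inversion.
grid = [-3.0 + 0.01 * i for i in range(401)]
ll = [log_likelihood(th) for th in grid]
ml_theta = grid[ll.index(max(ll))]
w = [math.exp(v - max(ll)) for v in ll]
post_mean = sum(th * p for th, p in zip(grid, w)) / sum(w)
print(round(ml_theta, 2), round(post_mean, 2))
```

    The posterior weights `w` could equally well be summarized by credible intervals, which is where the two approaches diverge in practice: ML captures the global trend, the posterior quantifies what remains unknown.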

  20. Automated volumetric grid generation for finite element modeling of human hand joints

    SciTech Connect

    Hollerbach, K.; Underhill, K.; Rainsberger, R.

    1995-02-01

    We are developing techniques for finite element analysis of human joints. These techniques need to provide high quality results rapidly in order to be useful to a physician. The research presented here increases model quality and decreases user input time by automating the volumetric mesh generation step.

  1. Modeling Multiple Human-Automation Distributed Systems using Network-form Games

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume

    2012-01-01

    The paper describes at a high-level the network-form game framework (based on Bayes net and game theory), which can be used to model and analyze safety issues in large, distributed, mixed human-automation systems such as NextGen.

  2. Advances in automated noise data acquisition and noise source modeling for power reactors

    SciTech Connect

    Clapp, N.E. Jr.; Kryter, R.C.; Sweeney, F.J.; Renier, J.A.

    1981-01-01

    A newly expanded program, directed toward achieving a better appreciation of both the strengths and limitations of on-line, noise-based, long-term surveillance programs for nuclear reactors, is described. Initial results in the complementary experimental (acquisition and automated screening of noise signatures) and theoretical (stochastic modeling of likely noise sources) areas of investigation are given.

  3. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…
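
    A genetic algorithm for test assembly can be sketched in a few lines: binary chromosomes encode item selection, a penalty term steers the search back to the length constraint, and elitist selection with crossover and mutation improves the pool. Everything below (item values, penalty weight, GA settings) is illustrative, not the article's algorithm.

```python
import random

random.seed(42)

# Hypothetical bank of 20 items, each with a "diagnostic value".
VALUES = [round(random.uniform(0.1, 1.0), 2) for _ in range(20)]
TEST_LEN = 5

def fitness(mask):
    """Total value of selected items; tests of the wrong length are
    penalized so the search is steered back to the constraint."""
    chosen = [v for v, m in zip(VALUES, mask) if m]
    return sum(chosen) - 2.0 * abs(len(chosen) - TEST_LEN)

def mutate(mask, rate=0.1):
    return [1 - m if random.random() < rate else m for m in mask]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def genetic_assembly(pop_size=30, generations=60):
    pop = [[1 if random.random() < 0.25 else 0 for _ in VALUES]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]        # elitism: keep the best third
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = genetic_assembly()
print(sum(best), round(fitness(best), 2))
```

    Real CDM-oriented objectives (e.g., attribute-level discrimination) would replace the scalar item values, but the chromosome encoding and penalty mechanics carry over unchanged.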

  4. Evaluation of automated cell disruptor methods for oomycetous and ascomycetous model organisms

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Two automated cell disruptor-based methods for RNA extraction: disruption of thawed cells submerged in TRIzol Reagent (method QP), and direct disruption of frozen cells on dry ice (method CP), were optimized for a model oomycete, Phytophthora capsici, and compared with grinding in a mortar and pestl...

  5. Workstation Modelling and Development: Clinical Definition of a Picture Archiving and Communications System (PACS) User Interface

    NASA Astrophysics Data System (ADS)

    Braudes, Robert E.; Mun, Seong K.; Sibert, John L.; Schnizlein, John; Horii, Steven C.

    1989-05-01

    A PACS must provide a user interface which is acceptable to all potential users of the system. Observations and interviews have been conducted with six radiology services at the Georgetown University Medical Center, Department of Radiology, in order to evaluate user interface requirements for a PACS system. Based on these observations, a conceptual model of radiology has been developed. These discussions have also revealed some significant differences in the user interface requirements between the various services. Several underlying factors have been identified which may be used as initial predictors of individual user interface styles. A user model has been developed which incorporates these factors into the specification of a tailored PACS user interface.

  6. Graphical User Interface for Simulink Integrated Performance Analysis Model

    NASA Technical Reports Server (NTRS)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X engine (built by Pratt & Whitney Rocketdyne) in the Upper Stage of the Ares I Crew Launch Vehicle will only start within a certain range of temperature and pressure for the liquid hydrogen and liquid oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that under all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, saving time and money, and to show that the J-2X engine will start when required, a graphical user interface (GUI) was created to allow input values to be used as parameters in the Simulink model without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink model, and get the output from the Simulink model. The GUI was built using MATLAB and runs the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, from the output values of each test so that they may be graphed and compared to other values.

  7. Calibration and application of an automated seepage meter for monitoring water flow across the sediment-water interface.

    PubMed

    Zhu, Tengyi; Fu, Dafang; Jenkinson, Byron; Jafvert, Chad T

    2015-04-01

    The advective flow of sediment pore water is an important parameter for understanding natural geochemical processes within lake, river, wetland, and marine sediments and also for properly designing permeable remedial sediment caps placed over contaminated sediments. Automated heat pulse seepage meters can be used to measure the vertical component of sediment pore water flow (i.e., vertical Darcy velocity); however, little information on meter calibration as a function of ambient water temperature exists in the literature. As a result, a method with associated equations for calibrating a heat pulse seepage meter as a function of ambient water temperature is fully described in this paper. Results of meter calibration over the temperature range 7.5 to 21.2 °C indicate that errors in accuracy are significant if proper temperature-dependence calibration is not performed. The proposed calibration method allows for temperature corrections to be made automatically in the field at any ambient water temperature. The significance of these corrections is discussed. PMID:25754860
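
    The temperature-dependent correction the paper advocates can be sketched as interpolation over a calibration table. The gains and temperatures below are invented placeholders, not the paper's calibration data or equations.

```python
# Hypothetical calibration table: at each bath temperature (deg C) the
# meter's raw reading of a known Darcy velocity showed a gain error.
CAL = [(7.5, 0.82), (12.0, 0.90), (16.0, 0.96), (21.2, 1.00)]

def gain(temp_c):
    """Linearly interpolate the calibration gain at the ambient water
    temperature, clamping outside the calibrated range."""
    if temp_c <= CAL[0][0]:
        return CAL[0][1]
    if temp_c >= CAL[-1][0]:
        return CAL[-1][1]
    for (t0, g0), (t1, g1) in zip(CAL, CAL[1:]):
        if t0 <= temp_c <= t1:
            return g0 + (g1 - g0) * (temp_c - t0) / (t1 - t0)

def corrected_velocity(raw_velocity, temp_c):
    """Apply the temperature-dependent correction to a raw reading
    (e.g., a vertical Darcy velocity in cm/day)."""
    return raw_velocity / gain(temp_c)

print(round(corrected_velocity(8.2, 10.0), 3))
```

    Encoding the table in the instrument is what lets the correction happen "automatically in the field at any ambient water temperature," as the abstract puts it.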

  8. The Automated Geospatial Watershed Assessment Tool (AGWA): Developing Post-Fire Model Parameters Using Precipitation and Runoff Records from Gauged Watersheds

    NASA Astrophysics Data System (ADS)

    Sheppard, B. S.; Goodrich, D. C.; Guertin, D. P.; Burns, I. S.; Canfield, E.; Sidman, G.

    2014-12-01

    New tools and functionality have been incorporated into the Automated Geospatial Watershed Assessment Tool (AGWA) to assess the impacts of wildfire on runoff and erosion. AGWA (see: www.tucson.ars.ag.gov/agwa or http://www.epa.gov/esd/land-sci/agwa/) is a GIS interface jointly developed by the USDA-Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execution of a suite of hydrologic and erosion models (RHEM, WEPP, KINEROS2 and SWAT). Through an intuitive interface the user selects an outlet from which AGWA delineates and discretizes the watershed using a Digital Elevation Model (DEM). The watershed model elements are then intersected with terrain, soils, and land cover data layers to derive the requisite model input parameters. With the addition of a burn severity map AGWA can be used to model post wildfire changes to a catchment. By applying the same design storm to burned and unburned conditions a rapid assessment of the watershed can be made and areas that are the most prone to flooding can be identified. Post-fire precipitation and runoff records from gauged forested watersheds are now being used to make improvements to post fire model input parameters. Rainfall and runoff pairs have been selected from these records in order to calibrate parameter values for surface roughness and saturated hydraulic conductivity used in the KINEROS2 model. Several objective functions will be tried in the calibration process. Results will be validated. Currently Department of Interior Burn Area Emergency Response (DOI BAER) teams are using the AGWA-KINEROS2 modeling interface to assess hydrologically imposed risk immediately following wild fire. These parameter refinements are being made to further improve the quality of these assessments.
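
    One common choice for the objective functions mentioned above is the Nash-Sutcliffe efficiency; a minimal sketch with invented post-fire runoff numbers (not gauge data from the study):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than predicting the observed mean. One of several
    objective functions that could drive a rainfall-runoff calibration."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Toy runoff peaks (mm): observed vs. two candidate parameter sets.
obs = [2.0, 15.0, 7.5, 30.0, 4.0]
sim_a = [2.5, 13.0, 8.0, 28.0, 5.0]
sim_b = [10.0, 10.0, 10.0, 10.0, 10.0]
print(round(nse(obs, sim_a), 3), round(nse(obs, sim_b), 3))
```

    A calibration loop would adjust roughness and saturated hydraulic conductivity to maximize such a score over the selected rainfall-runoff pairs.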

  9. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

    A method is described to improve a linear nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) ability to properly treat complex valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computationally efficiency without involving eigensolutions and inversion of a large matrix.

  10. Challenges in Modeling of the Plasma-Material Interface

    NASA Astrophysics Data System (ADS)

    Krstic, Predrag; Meyer, Fred; Allain, Jean Paul

    2013-09-01

    The plasma-material interface mixes materials of two worlds, creating a new entity, a dynamical surface, that communicates between the two and represents one of the most challenging areas of multidisciplinary science, with many fundamental processes and synergies. How does one build an integrated theoretical-experimental approach? Without mutual validation of experiment and theory, the chances of obtaining believable results are slim. The outreach of PMI science modeling at fusion plasma facilities is illustrated by the significant step forward in understanding achieved recently by quantum-classical modeling of lithiated carbon surfaces irradiated by deuterium, showing a surprisingly large role of oxygen in deuterium retention and erosion chemistry. The plasma-facing walls of next-generation fusion reactors will be exposed to high fluxes of neutrons and plasma particles and will operate at high temperatures for thermodynamic efficiency. To this end, we have been studying the evolution dynamics of vacancies and interstitials up to saturated dpa doses in tungsten surfaces bombarded by self-atoms, as well as the plasma-surface interactions of the damaged surfaces (erosion, hydrogen and helium uptake, and fuzz formation). PSK and FWM acknowledge support of the ORNL LDRD program.

  11. A bidirectional interface growth model for cranial interosseous suture morphogenesis

    PubMed Central

    Zollikofer, Christoph P E; Weissmann, John David

    2011-01-01

    Interosseous sutures exhibit highly variable patterns of interdigitation and corrugation. Recent research has identified fundamental molecular mechanisms of suture formation, and computer models have been used to simulate suture morphogenesis. However, the role of bone strain in the development of complex sutures is largely unknown, and measuring suture morphologies beyond the evaluation of fractal dimensions remains a challenge. Here we propose a morphogenetic model of suture formation, which is based on the paradigm of Laplacian interface growth. Computer simulations of suture morphogenesis under various boundary conditions generate a wide variety of synthetic sutural forms. Their morphologies are quantified with a combination of Fourier analysis and principal components analysis, and compared with natural morphological variation in an ontogenetic sample of human interparietal suture lines. Morphometric analyses indicate that natural sutural shapes exhibit a complex distribution in morphospace. The distribution of synthetic sutures closely matches the natural distribution. In both natural and synthetic systems, sutural complexity increases during morphogenesis. Exploration of the parameter space of the simulation system indicates that variation in strain and/or morphogen sensitivity and viscosity of sutural tissue may be key factors in generating the large variability of natural suture complexity. PMID:21539540

  12. Modelling the inhomogeneous SiC Schottky interface

    NASA Astrophysics Data System (ADS)

    Gammon, P. M.; Pérez-Tomás, A.; Shah, V. A.; Vavasour, O.; Donchev, E.; Pang, J. S.; Myronov, M.; Fisher, C. A.; Jennings, M. R.; Leadley, D. R.; Mawby, P. A.

    2013-12-01

    For the first time, the I-V-T dataset of a Schottky diode has been accurately modelled, parameterised, and fully fit, incorporating the effects of interface inhomogeneity, patch pinch-off and resistance, and ideality factors that are both heavily temperature and voltage dependent. A Ni/SiC Schottky diode is characterised at 2 K intervals from 20 to 320 K, which, at room temperature, displays low ideality factors (n < 1.01) that suggest that these diodes may be homogeneous. However, at cryogenic temperatures, excessively high (n > 8), voltage dependent ideality factors and evidence of the so-called "thermionic field emission effect" within a T0-plot, suggest significant inhomogeneity. Two models are used, each derived from Tung's original interactive parallel conduction treatment of barrier height inhomogeneity that can reproduce these commonly seen effects in single temperature I-V traces. The first model incorporates patch pinch-off effects and produces accurate and reliable fits above around 150 K, and at current densities lower than 10-5 A cm-2. Outside this region, we show that resistive effects within a given patch are responsible for the excessive ideality factors, and a second simplified model incorporating these resistive effects as well as pinch-off accurately reproduces the entire temperature range. Analysis of these fitting parameters reduces confidence in those fits above 230 K, and questions are raised about the physical interpretation of the fitting parameters. Despite this, both methods used are shown to be useful tools for accurately reproducing I-V-T data over a large temperature range.
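
    A single, homogeneous thermionic-emission barrier is the baseline that the inhomogeneity models above generalize; in that baseline the ideality factor follows from the slope of ln(I) against V in the exponential region. The diode parameters below (ideality factor, saturation current, bias points) are assumed values for illustration, not the Ni/SiC measurements.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K (charge folded into eV units)

def diode_current(v, temp, n, i_s):
    """Ideal thermionic-emission diode law with ideality factor n."""
    return i_s * (math.exp(v / (n * K_B * temp)) - 1.0)

def fit_ideality(vs, currents, temp):
    """Least-squares slope of ln(I) vs V gives q/(n k T); invert it for n."""
    xs, ys = vs, [math.log(i) for i in currents]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 / (slope * K_B * temp)

temp, n_true, i_s = 300.0, 1.05, 1e-12
vs = [0.20, 0.25, 0.30, 0.35, 0.40]
currents = [diode_current(v, temp, n_true, i_s) for v in vs]
print(round(fit_ideality(vs, currents, temp), 3))
```

    Barrier inhomogeneity shows up precisely as the failure of this fit: the apparent n extracted this way becomes strongly temperature and voltage dependent, as the abstract reports at cryogenic temperatures.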

  13. Phase field modeling of a glide dislocation transmission across a coherent sliding interface

    NASA Astrophysics Data System (ADS)

    Zheng, Songlin; Ni, Yong; He, Linghui

    2015-04-01

    Three-dimensional phase field microelasticity modeling and simulation, capable of representing the core structure and elastic interactions of dislocations, are used to study glide dislocation transmission across a coherent sliding interface in face-centered cubic metals. We investigate the role of the interface sliding process, described as the reversible motion of interface dislocations, in setting the interfacial barrier strength to transmission. Numerical results show that a wider transient interface sliding zone develops on an interface with a lower interfacial unstable stacking fault energy, trapping the glide dislocation and leading to a stronger barrier to transmission. The interface sliding zone shrinks in the case of high applied stress and low mobility of the interfacial dislocation, indicating that the interfacial barrier strength might be rate dependent. We discuss the calculated interfacial barrier strength for the Cu/Ni interface, including the contribution of interface sliding, which is comparable to previous atomistic simulations.

  14. Model-based automated segmentation of kinetochore microtubule from electron tomography.

    PubMed

    Jiang, Ming; Ji, Qiang; McEwen, Bruce

    2004-01-01

    The segmentation of kinetochore microtubules from electron tomography is challenging due to the poor quality of the acquired data and the cluttered cellular surroundings. We propose to automate the microtubule segmentation by extending the active shape model (ASM) in two aspects. First, we develop a higher-order boundary model, obtained by 3-D local surface estimation, that characterizes the microtubule boundary better than the gray-level appearance model of the 2-D microtubule cross section. We then incorporate this model into the weight matrix of the fitting error measurement to increase the influence of salient features. Second, we integrate the ASM with Kalman filtering to utilize the shape information along the longitudinal direction of the microtubules. The ASM modified in this way is robust against the missing data and outliers frequently present in kinetochore tomography volumes. Experimental results demonstrate that our automated method outperforms the manual process while using only a fraction of the time. PMID:17272020
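The idea of propagating shape information along the microtubule axis with Kalman filtering can be illustrated with a 1-D constant-velocity filter that bridges slices where detection failed. All parameters and data here are illustrative, not the paper's model:

```python
import numpy as np

def kalman_track(observations, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter along the longitudinal axis.

    observations: per-slice centre coordinates, with None for slices
    where detection failed (missing data); q/r: process/measurement
    noise. Returns a filtered centre estimate for every slice.
    """
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # position + velocity model
    H = np.array([[1.0, 0.0]])
    x = np.array([observations[0], 0.0])      # assumes first slice observed
    P = np.eye(2)
    track = []
    for z in observations:
        x = F @ x                             # predict to the next slice
        P = F @ P @ F.T + q * np.eye(2)
        if z is not None:                     # update only when measured
            y = z - H @ x
            S = H @ P @ H.T + r
            K = (P @ H.T) / S
            x = x + (K * y).ravel()
            P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return track

# A drifting centreline with two missing slices is still tracked smoothly.
obs = [0.0, 1.1, 2.0, None, None, 5.1, 6.0, 7.2]
centres = kalman_track(obs)
```

The prediction step carries the estimated drift across the unobserved slices, which is exactly the robustness to missing data that the abstract attributes to the ASM/Kalman combination.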

  15. Interfacing click chemistry with automated oligonucleotide synthesis for the preparation of fluorescent DNA probes containing internal xanthene and cyanine dyes.

    PubMed

    Astakhova, I Kira; Wengel, Jesper

    2013-01-14

    Double-labeled oligonucleotide probes containing fluorophores interacting by energy-transfer mechanisms are essential for modern bioanalysis, molecular diagnostics, and in vivo imaging techniques. Although bright xanthene and cyanine dyes are gaining increased prominence within these fields, little attention has thus far been paid to probes containing these dyes internally attached, a fact which is mainly due to the quite challenging synthesis of such oligonucleotide probes. Herein, by using 2'-O-propargyl uridine phosphoramidite and a series of xanthene and cyanine azide derivatives, we have for the first time performed solid-phase copper(I)-catalyzed azide-alkyne cycloaddition (CuAAC) click labeling during the automated phosphoramidite oligonucleotide synthesis followed by postsynthetic click reactions in solution. We demonstrate that our novel strategy is rapid and efficient for the preparation of novel oligonucleotide probes containing internally positioned xanthene and cyanine dye pairs and thus represents a significant step forward for the preparation of advanced fluorescent oligonucleotide probes. Furthermore, we demonstrate that the novel xanthene- and cyanine-labeled probes display unusual and very promising photophysical properties resulting from energy-transfer interactions between the fluorophores controlled by nucleic acid assembly. Potential benefits of using these novel fluorescent probes within, for example, molecular diagnostics and fluorescence microscopy include: considerable Stokes shifts (40-110 nm), quenched fluorescence of single-stranded probes accompanied by up to 7.7-fold light-up effect of emission upon target DNA/RNA binding, remarkable sensitivity to single-nucleotide mismatches, generally high fluorescence brightness values (FB up to 26), and hence low limit of target detection values (LOD down to <5 nM). PMID:23180379

  16. A method for modeling contact dynamics for automated capture mechanisms

    NASA Technical Reports Server (NTRS)

    Williams, Philip J.

    1991-01-01

    Logicon Control Dynamics develops contact dynamics models for space-based docking and berthing vehicles. The models compute contact forces for the physical contact between mating capture mechanism surfaces. Realistic simulation requires the proportionality constants used to calculate contact forces to approximate the surface stiffness of the contacting bodies. For rigid metallic bodies this proportionality becomes quite large, so even small penetrations of surface boundaries can produce large contact forces.
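The penalty-style contact force the abstract alludes to can be sketched as follows; the stiffness and damping values are hypothetical, not taken from the report:

```python
def contact_force(penetration, velocity, k=1.0e7, c=5.0e3):
    """Penalty-type normal contact force (N) between mating surfaces.

    k: contact stiffness (N/m), large for stiff metallic surfaces;
    c: damping coefficient (N s/m) dissipating energy at impact.
    Force is applied only while the surfaces overlap.
    """
    if penetration <= 0.0:         # surfaces separated: no force
        return 0.0
    return k * penetration + c * velocity

# A 0.1 mm penetration of a stiff metallic surface already yields ~1 kN,
# which is why large stiffness constants force small integration steps.
```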

  17. Proteomics for Validation of Automated Gene Model Predictions

    SciTech Connect

    Zhou, Kemin; Panisko, Ellen A.; Magnuson, Jon K.; Baker, Scott E.; Grigoriev, Igor V.

    2008-02-14

    High-throughput liquid chromatography mass spectrometry (LC-MS)-based proteomic analysis has emerged as a powerful tool for functional annotation of genome sequences. These analyses complement the bioinformatic and experimental tools used for deriving, verifying, and functionally annotating models of genes and their transcripts. Furthermore, proteomics extends verification and functional annotation to the level of the translation product of the gene model.

  18. Models of Distance Higher Education: Fully Automated or Partially Human?

    ERIC Educational Resources Information Center

    Serdiukov, Peter

    2001-01-01

    There is little doubt that, due to major advances in information technology, education will become more technology-based. The purpose of this paper is to: (1) consider the models of contemporary universities offering distance programs; (2) analyze how technology changes the model of learning; and (3) explore how the human dimension will…

  19. Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces

    SciTech Connect

    Lomov, I; Antoun, T; Vorobiev, O

    2009-12-16

    Accurate representation of discontinuities such as joints and faults is a key ingredient for high fidelity modeling of shock propagation in geologic media. The following study was done to improve treatment of discontinuities (joints) in the Eulerian hydrocode GEODYN (Lomov and Liu 2005). Lagrangian methods with conforming meshes and explicit inclusion of joints in the geologic model are well suited for such an analysis. Unfortunately, current meshing tools are unable to automatically generate adequate hexahedral meshes for large numbers of irregular polyhedra. Another concern is that joint stiffness in such explicit computations requires significantly reduced time steps, with negative implications for both the efficiency and quality of the numerical solution. An alternative approach is to use non-conforming meshes and embed joint information into regular computational elements. However, once slip displacement on the joints becomes comparable to the zone size, Lagrangian (even non-conforming) meshes can suffer from tangling and decreased time step problems. The use of non-conforming meshes in an Eulerian solver may alleviate these difficulties and provide a viable numerical approach for modeling the effects of faults on the dynamic response of geologic materials. We studied shock propagation in jointed/faulted media using a Lagrangian and two Eulerian approaches. To investigate the accuracy of this joint treatment, the GEODYN calculations have been compared with results from the Lagrangian code GEODYN-L, which uses an explicit treatment of joints via common-plane contact. We explore two approaches to joint treatment in the code, one for joints with finite thickness and the other for tight joints. In all cases the sliding interfaces are tracked explicitly without homogenization or blending of the joint and block response into an average response. In general, rock joints will introduce an increase in normal compliance in addition to a reduction in shear strength. In the present work we consider the limiting case of stiff discontinuities that affect only the shear strength of the material.

  20. Manpower/cost estimation model: Automated planetary projects

    NASA Technical Reports Server (NTRS)

    Kitchen, L. D.

    1975-01-01

    A manpower/cost estimation model is developed based on a detailed financial analysis of over 30 million raw data points, which are compacted by more than three orders of magnitude to the level at which the model is applicable. The major parameter of expenditure is manpower (specifically direct labor hours) for all spacecraft subsystem and technical support categories. The resultant model provides a mean absolute error of less than fifteen percent for the eight programs comprising the model data base. The model includes cost-saving inheritance factors, broken down into four levels, for estimating follow-on type programs where hardware and design inheritance are evident or expected.

  1. Automated parametrical antenna modelling for ambient assisted living applications

    NASA Astrophysics Data System (ADS)

    Kazemzadeh, R.; John, W.; Mathis, W.

    2012-09-01

    In this paper a parametric modeling technique for fast polynomial extraction of the physically relevant parameters of inductively coupled RFID/NFC (radio frequency identification/near field communication) antennas is presented. The polynomial model equations are obtained by means of a three-step procedure: first, full Partial Element Equivalent Circuit (PEEC) antenna models are determined by means of a number of parametric simulations within the input parameter range of a certain antenna class. Based on these models, the RLC antenna parameters are extracted in a subsequent model reduction step. Employing these parameters, polynomial equations describing the antenna parameters with respect to the overall antenna input parameter range are extracted by means of polynomial interpolation and approximation of the change of the polynomials' coefficients. The described approach is compared to the results of a reference PEEC solver with regard to accuracy and computation effort.
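The idea of replacing repeated field simulations with polynomial model equations can be sketched as below; the inductance values are hypothetical stand-ins for extracted RLC parameters, not data from the paper:

```python
import numpy as np

# Hypothetical extracted inductance values (nH) for a loop antenna as a
# function of side length (mm) -- stand-ins for PEEC extraction results.
side_mm = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
L_nH = np.array([18.0, 52.0, 98.0, 155.0, 224.0])

# Fit a low-order polynomial L(side) over the input-parameter range...
coeffs = np.polyfit(side_mm, L_nH, deg=2)
L_model = np.poly1d(coeffs)

# ...and evaluate the cheap polynomial instead of re-running a full
# PEEC simulation for each new antenna geometry.
L_at_35mm = L_model(35.0)
```

Once fitted, evaluating the polynomial costs microseconds, which is the speed-up motivating the paper's parametric modeling approach.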

  2. Petri net-based modelling of human-automation conflicts in aviation.

    PubMed

    Pizziol, Sergio; Tessier, Catherine; Dehais, Frédéric

    2014-01-01

    Analyses of aviation safety reports reveal that human-machine conflicts induced by poor automation design are remarkable precursors of accidents. A review of different crew-automation conflicting scenarios shows that they have a common denominator: the autopilot behaviour interferes with the pilot's goal regarding the flight guidance via 'hidden' mode transitions. Considering both the human operator and the machine (i.e. the autopilot or the decision functions) as agents, we propose a Petri net model of those conflicting interactions, which allows them to be detected as deadlocks in the Petri net. In order to test our Petri net model, we designed an autoflight system that was formally analysed to detect conflicting situations. We identified three conflicting situations that were integrated in an experimental scenario in a flight simulator with 10 general aviation pilots. The results showed that the conflicts we had identified a priori as critical impacted the pilots' performance. Indeed, the first conflict remained unnoticed by eight participants and led to a potential collision with another aircraft. The second conflict was detected by all the participants, but three of them did not manage the situation correctly. The last conflict was also detected by all the participants but provoked a typical automation surprise, as only one pilot declared that he had understood the autopilot behaviour. These behavioural results are discussed in terms of workload and number of fired 'hidden' transitions. Eventually, this study reveals that formal and experimental approaches are complementary for identifying and assessing the criticality of human-automation conflicts. Practitioner Summary: We propose a Petri net model of human-automation conflicts. An experiment was conducted with general aviation pilots performing a scenario involving three conflicting situations to test the soundness of our formal approach. This study reveals that formal and experimental approaches are complementary for identifying and assessing the criticality of human-automation conflicts. PMID:24444329
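Detecting conflicts as deadlocks in a Petri net reduces to a reachability search for markings in which no transition is enabled. A minimal sketch (the toy net below is illustrative, not the paper's autoflight model):

```python
from collections import deque

def fire(marking, pre, post):
    """Yield markings reachable by firing each enabled transition once.

    marking: tuple of token counts per place; pre/post: per-transition
    token requirements and productions (lists of tuples).
    """
    for need, produce in zip(pre, post):
        if all(m >= n for m, n in zip(marking, need)):
            yield tuple(m - n + p for m, n, p in zip(marking, need, produce))

def find_deadlocks(initial, pre, post):
    """Breadth-first reachability search; a deadlock is a reachable
    marking in which no transition is enabled."""
    seen, frontier, deadlocks = {initial}, deque([initial]), []
    while frontier:
        m = frontier.popleft()
        successors = list(fire(m, pre, post))
        if not successors:
            deadlocks.append(m)
        for s in successors:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return deadlocks

# Toy net with 3 places: t0 moves a token P0->P1, t1 moves P1->P2.
# The marking (0, 0, 1) enables nothing: a deadlock state.
pre  = [(1, 0, 0), (0, 1, 0)]
post = [(0, 1, 0), (0, 0, 1)]
dead = find_deadlocks((1, 0, 0), pre, post)
```

In the paper's setting, places would encode pilot and autopilot states, so a deadlock marking corresponds to an interaction from which neither agent can make progress toward the flight-guidance goal.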

  3. Analytical and numerical modeling of non-collinear shear wave mixing at an imperfect interface.

    PubMed

    Zhang, Ziyin; Nagy, Peter B; Hassan, Waled

    2016-02-01

    Non-collinear shear wave mixing at an imperfect interface between two solids can be exploited for nonlinear ultrasonic assessment of bond quality. In this study we developed two analytical models for nonlinear imperfect interfaces. The first model uses a finite nonlinear interfacial stiffness representation of an imperfect interface of vanishing thickness, while the second model relies on a thin nonlinear interphase layer to represent an imperfect interface region. The second model is actually a derivative of the first model obtained by calculating the equivalent interfacial stiffness of a thin isotropic nonlinear interphase layer in the quasi-static approximation. The predictions of both analytical models were numerically verified by comparison to COMSOL finite element simulations. These models can accurately predict the additional nonlinearity caused by interface imperfections based on the strength of the reflected and transmitted mixed longitudinal waves produced by them under non-collinear shear wave interrogation. PMID:26482394

  4. Automated mask creation from a 3D model using Faethm.

    SciTech Connect

    Schiek, Richard Louis; Schmidt, Rodney Cannon

    2007-11-01

    We have developed and implemented a method which, given a three-dimensional object, can infer from its topology the two-dimensional masks needed to produce that object with surface micro-machining. The masks produced by this design tool can be generic, process-independent masks or, if given process constraints, masks specific to a target process. This design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model.

  5. Continental hydrosystem modelling: the concept of nested stream-aquifer interfaces

    NASA Astrophysics Data System (ADS)

    Flipo, N.; Mouhri, A.; Labarthe, B.; Biancamaria, S.

    2014-01-01

    Recent developments in hydrological modelling are based on a view of the interface being a single continuum through which water flows. These coupled hydrological-hydrogeological models, emphasising the importance of the stream-aquifer interface, are more and more used in hydrological sciences for pluri-disciplinary studies aiming at investigating environmental issues. This notion of a single continuum, which is accepted by hydrological modellers, originates in the historical modelling of hydrosystems based on the hypothesis of a homogeneous medium that led to the Darcy law. There is then a need, first, to bridge the gap between hydrological and eco-hydrological views of the stream-aquifer interfaces, and, second, to rationalise the modelling of the stream-aquifer interface within a consistent framework that fully takes into account the multi-dimensionality of the stream-aquifer interfaces. We first define the concept of nested stream-aquifer interfaces as a key transitional component of the continental hydrosystem. Based on a literature review, we then demonstrate the usefulness of the concept for the multi-dimensional study of the stream-aquifer interface, with a special emphasis on the stream network, which is identified as the key component for scaling hydrological processes occurring at the interface. Finally we focus on stream-aquifer interface modelling at different scales, with up-to-date methodologies, and give some guidance for the multi-dimensional modelling of the interface using the innovative MIM (Measurements-Interpolation-Modelling) methodology, which graphically organises, across spatial scales, the three pools of methods needed to fully understand stream-aquifer interfaces at various scales. The outcome of MIM is the localisation in space of the stream-aquifer interface types that can be studied by a given approach. The efficiency of the method is demonstrated with two approaches from the local (~1 m) to the continental (<10 M km^2) scale.

  6. A 2-D Interface Element for Coupled Analysis of Independently Modeled 3-D Finite Element Subdomains

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.

    1998-01-01

    Over the past few years, the development of the interface technology has provided an analysis framework for embedding detailed finite element models within finite element models which are less refined. This development has enabled the use of cascading substructure domains without the constraint of coincident nodes along substructure boundaries. The approach used for the interface element is based on an alternate variational principle often used in deriving hybrid finite elements. The resulting system of equations exhibits a high degree of sparsity but gives rise to a non-positive definite system which causes difficulties with many of the equation solvers in general-purpose finite element codes. Hence the global system of equations is generally solved using a decomposition procedure with pivoting. The research reported to date for the interface element includes the one-dimensional line interface element and the two-dimensional surface interface element. Several large-scale simulations, including geometrically nonlinear problems, have been reported using the one-dimensional interface element technology; however, only limited applications are available for the surface interface element. In the applications reported to date, the geometries of the interfaced domains exactly match each other even though the spatial discretization within each domain may be different. As such, the spatial modeling of each domain, the interface elements and the assembled system is still laborious. The present research is focused on developing a rapid modeling procedure based on a parametric interface representation of independently defined subdomains which are also independently discretized.

  7. TASSER-Lite: an automated tool for protein comparative modeling.

    PubMed

    Pandit, Shashi Bhushan; Zhang, Yang; Skolnick, Jeffrey

    2006-12-01

    This study involves the development of a rapid comparative modeling tool for homologous sequences by extension of the TASSER methodology, developed for tertiary structure prediction. This comparative modeling procedure was validated on a representative benchmark set of proteins in the Protein Data Bank composed of 901 single-domain proteins (41-200 residues) having sequence identities between 35% and 90% with respect to the template. Using a Monte Carlo search scheme with the length of runs optimized for weakly/nonhomologous proteins, TASSER often provides appreciable improvement in structure quality over the initial template. However, on average, this requires approximately 29 h of CPU time per sequence. Since homologous proteins are unlikely to require the extent of conformational search needed for weakly/nonhomologous proteins, TASSER's parameters were optimized to reduce the required CPU time to approximately 17 min, while retaining TASSER's ability to improve structure quality. Using this optimized TASSER (TASSER-Lite), we find an average improvement in the aligned region of approximately 10% in root mean-square deviation from native over the initial template. Comparison of TASSER-Lite with the widely used comparative modeling tool MODELLER showed that TASSER-Lite yields final models that are closer to the native structure. TASSER-Lite is provided on the web at http://cssb.biology.gatech.edu/skolnick/webservice/tasserlite/index.html. PMID:16963505
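The comparison metric used above, root mean-square deviation (RMSD) over aligned regions, can be sketched as follows. Real pipelines first superpose the two structures (e.g. with the Kabsch algorithm); that alignment step is omitted here:

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation (same units as input) between two
    already-aligned coordinate sets of shape (n_atoms, 3)."""
    diff = np.asarray(coords_a) - np.asarray(coords_b)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Model vs. "native": each atom displaced by 1 unit along x.
native = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
model = native + np.array([1.0, 0.0, 0.0])
# rmsd(model, native) == 1.0
```

A lower RMSD to the native structure over the aligned region is the sense in which TASSER-Lite "improves structure quality" relative to the starting template.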

  8. Investigating automated depth modelling of archaeo-magnetic datasets

    NASA Astrophysics Data System (ADS)

    Cheyney, Samuel; Hill, Ian; Linford, Neil; Leech, Christopher

    2010-05-01

    Magnetic surveying is a commonly used tool for first-pass non-invasive archaeological surveying, and is often used to target areas for more detailed geophysical investigation or excavation. Quick and routine processing of magnetic datasets means survey results are typically viewed as 2D greyscale maps, and the shapes of anomalies are interpreted in terms of likely archaeological structures. This technique is simple but ignores some of the information content of the data. Data collected using dense spatial sampling with modern precise instrumentation are capable of yielding numerical estimates of the depths to buried structures and their physical properties. The magnetic field measured at the surface is a superposition of the responses to all anomalous magnetic susceptibilities in the subsurface, and is therefore capable of revealing a 3D model of the magnetic properties. The application of mathematical modelling techniques to very-near-surface surveys such as those for archaeology is quite rare, although similar methods are routinely used in regional-scale mineral exploration surveys. Inverse modelling techniques have inherent ambiguity due to the nature of the mathematical "inverse problem": often, although a good fit to the recorded values can be obtained, the final model will be non-unique and may be heavily biased by the starting model provided. The run time and computer resources required can also be restrictive. Our approach is to derive as much information as possible from the data directly, and to use this to define a starting model for inversion. This addresses both the ambiguity of the inverse problem and reduces the task for the inversion computation. A number of alternative methods exist for obtaining parameters of source bodies in potential field data. Here, methods involving the derivatives of the total magnetic field are used in association with advanced image processing techniques to outline the edges of anomalous bodies more accurately. When combined with methods such as downward continuation, Euler deconvolution and pseudo-gravity transformations, which can reveal information concerning depth and susceptibility, a rapidly obtained initial model can be devised, allowing subsequent inversion of the data to be achieved more efficiently and with increased confidence in the final result. The long-term objective is to devise a procedure which will lead to models of the 3D subsurface with minimal user control and short computing time, while retaining confidence in the final result. Such methods would be applicable to a variety of other near-surface magnetic data, such as brownfield sites.

  9. A Voyage to Arcturus: A model for automated management of a WLCG Tier-2 facility

    NASA Astrophysics Data System (ADS)

    Roy, Gareth; Crooks, David; Mertens, Lena; Mitchell, Mark; Purdie, Stuart; Cadellin Skipsey, Samuel; Britton, David

    2014-06-01

    With the current trend towards "On Demand Computing" in big data environments it is crucial that the deployment of services and resources becomes increasingly automated. Deployment based on cloud platforms is available for large-scale data centre environments, but these solutions can be too complex and heavyweight for smaller, resource-constrained WLCG Tier-2 sites. Along with a greater desire for bespoke monitoring and collection of Grid-related metrics, a more lightweight and modular approach is desired. In this paper we present a model for a lightweight automated framework which can be used to build WLCG grid sites, based on "off the shelf" software components. As part of the research into an automation framework, the use of both IPMI and SNMP for physical device management is included, as well as the use of SNMP as a monitoring/data-sampling layer such that more comprehensive decision making can take place and potentially be automated. This could lead to reduced downtime and better performance as services are recognised to be in a non-functional state by autonomous systems.

  10. A simplified cellular automaton model for city traffic

    SciTech Connect

    Simon, P.M.; Nagel, K.

    1997-12-31

    The authors systematically investigate the effect of blockage sites in a cellular automata model for traffic flow. Different scheduling schemes for the blockage sites are considered. None of them returns a linear relationship between the fraction of green time and the throughput. The authors use this information for a fast implementation of traffic in Dallas.
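The blockage-site experiments build on cellular automata traffic rules of the Nagel-Schreckenberg type. Below is a minimal sketch with a single blocked cell acting as a red light; the parameters are illustrative, not the paper's configuration:

```python
import random

def nasch_step(road, vmax=5, p_slow=0.3, blocked=None, rng=random):
    """One parallel update of the Nagel-Schreckenberg automaton on a ring.

    road: list where -1 is an empty cell and >=0 a car's speed.
    blocked: set of cell indices acting as a red-light blockage site.
    """
    n = len(road)
    blocked = blocked or set()
    new_road = [-1] * n
    for i, v in enumerate(road):
        if v < 0:
            continue
        gap = 0                          # free cells ahead of this car
        while gap < vmax and road[(i + gap + 1) % n] < 0 \
                and (i + gap + 1) % n not in blocked:
            gap += 1
        v = min(v + 1, vmax, gap)        # accelerate, then brake to the gap
        if v > 0 and rng.random() < p_slow:
            v -= 1                       # random slowdown
        new_road[(i + v) % n] = v
    return new_road

# One car on a 50-cell ring; cell 25 is permanently blocked (red light).
road = [-1] * 50
road[0] = 0
rng = random.Random(1)
for _ in range(20):
    road = nasch_step(road, blocked={25}, rng=rng)
# The car queues behind the blockage and never enters the blocked cell.
```

Scheduling schemes for the blockage sites, as studied in the paper, amount to making the `blocked` set a function of the time step (green/red phases).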

  11. Automated biowaste sampling system urine subsystem operating model, part 1

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Mangialardi, J. K.; Rosen, F.

    1973-01-01

    The urine subsystem automatically provides for the collection, volume sensing, and sampling of urine from six subjects during space flight. Verification of the subsystem design was a primary objective of the current effort, which was accomplished through the detailed design, fabrication, and verification testing of an operating model of the subsystem.

  12. Automated volumetric breast density derived by shape and appearance modeling

    NASA Astrophysics Data System (ADS)

    Malkov, Serghei; Kerlikowske, Karla; Shepherd, John

    2014-03-01

    The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by a single-energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal Component Analysis (PCA) was applied to extract significant and uncorrelated components of %FGV. To build an appearance model, we transformed the breast images into the mean texture image by piecewise linear image transformation. Using PCA, the image pixel grey-scale values were converted into a reduced set of shape and texture features. Stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV from the shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume, actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient was 0.77, between the first shape PCA component and actual breast volume. The stepwise regression method with ten-fold cross-validation to predict %FGV from shape and appearance variables and other system outcome parameters generated a model with a correlation of r^2 = 0.8. In conclusion, a shape and appearance model demonstrated excellent feasibility to extract variables useful for automatic %FGV estimation. Further exploring and testing of this approach is warranted.
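The pipeline of PCA appearance scores feeding a regression for %FGV can be sketched on synthetic data. Everything below (data, component count, plain least squares in place of stepwise regression) is illustrative, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in data: 200 "images" flattened to 50 pixels, with a latent
# density factor that drives both pixel intensity and the outcome %FGV.
latent = rng.normal(size=(200, 1))
pixels = latent @ rng.normal(size=(1, 50)) + 0.1 * rng.normal(size=(200, 50))
fgv = 30.0 + 10.0 * latent[:, 0] + rng.normal(scale=1.0, size=200)

# PCA: reduce pixels to a few uncorrelated appearance scores.
X = pixels - pixels.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:3].T

# Linear regression of %FGV on the appearance scores (least squares).
A = np.column_stack([np.ones(len(scores)), scores])
beta, *_ = np.linalg.lstsq(A, fgv, rcond=None)
pred = A @ beta
r2 = 1.0 - ((fgv - pred) ** 2).sum() / ((fgv - fgv.mean()) ** 2).sum()
```

The key point is dimensionality reduction: the regression sees a handful of uncorrelated scores rather than thousands of raw pixel values.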

  13. Automated Volumetric Breast Density derived by Shape and Appearance Modeling.

    PubMed

    Malkov, Serghei; Kerlikowske, Karla; Shepherd, John

    2014-03-22

    The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by a single-energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal Component Analysis (PCA) was applied to extract significant and uncorrelated components of %FGV. To build an appearance model, we transformed the breast images into the mean texture image by piecewise linear image transformation. Using PCA, the image pixel grey-scale values were converted into a reduced set of shape and texture features. Stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV from the shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume, actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient was 0.77, between the first shape PCA component and actual breast volume. The stepwise regression method with ten-fold cross-validation to predict %FGV from shape and appearance variables and other system outcome parameters generated a model with a correlation of r^2 = 0.8. In conclusion, a shape and appearance model demonstrated excellent feasibility to extract variables useful for automatic %FGV estimation. Further exploring and testing of this approach is warranted. PMID:25083119

  14. Morphology Based Cohesive Zone Modeling of the Cement-Bone Interface from Postmortem Retrievals

    PubMed Central

    Waanders, Daan; Janssen, Dennis; Mann, Kenneth A.; Verdonschot, Nico

    2011-01-01

    In cemented total hip arthroplasty, the cement-bone interface can be considerably degenerated after less than one year of in-vivo service, which makes the interface much weaker than in the direct post-operative situation. It is, however, still unknown how these degenerated interfaces behave under mixed-mode loading and how this is related to the morphology of the interface. In this study, we used a finite element approach to analyze the mixed-mode response of the cement-bone interface taken from postmortem retrievals, and we investigated whether it was feasible to generate a fully elastic and a failure cohesive model based only on morphological input parameters. Computed tomography-based finite element models of the postmortem cement-bone interface were generated and the interface morphology was determined. The models were loaded until failure in multiple directions, allowing cracking of the bone and cement components and including periodic boundary conditions. The resulting stiffness was related to the interface morphology. A closed-form mixed-mode cohesive model that included failure was determined and related to the interface morphology. The responses of the finite element simulations compare satisfactorily with experimental observations, although the magnitudes of the strength and stiffness are somewhat overestimated. Surprisingly, the finite element simulations predict no failure under shear loading, and a considerable normal compression is generated which prevents dilation of the interface. The obtained mixed-mode stiffness response could then be related to the interface morphology and formulated into an elastic cohesive zone model. Finally, the acquired data could be used as input for a cohesive model that also includes interface failure. PMID:21783159
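A common closed-form choice for a failure-capable cohesive model is a bilinear traction-separation law. The sketch below is generic, with hypothetical parameters, not the paper's fitted morphology-based law:

```python
def bilinear_traction(delta, delta0=0.01, delta_f=0.1, k=1000.0):
    """Bilinear cohesive traction-separation law for interface failure.

    Linear elastic up to the onset opening delta0 (peak traction
    k*delta0), then linear softening until complete failure at delta_f.
    Units are arbitrary for this illustration.
    """
    if delta <= 0.0:
        # Compression is handled by the contact side, not cohesion.
        return 0.0
    if delta <= delta0:
        return k * delta                       # intact, elastic branch
    if delta < delta_f:
        # Damage reduces the traction linearly to zero at delta_f.
        t_peak = k * delta0
        return t_peak * (delta_f - delta) / (delta_f - delta0)
    return 0.0                                 # fully failed

# Traction peaks at delta0 and vanishes once delta reaches delta_f.
```

In a morphology-based version such as the paper's, the stiffness and strength parameters would be functions of the measured interface morphology rather than constants.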

  15. Experiments using automated sample plan selection for OPC modeling

    NASA Astrophysics Data System (ADS)

    Viswanathan, Ramya; Jaiswal, Om; Casati, Nathalie; Abdo, Amr; Oberschmidt, James; Watts, Josef; Gabrani, Maria

    2015-03-01

    OPC models have become critical in the manufacturing of integrated circuits (ICs) by allowing correction of complex designs as we approach the physical limits of scaling in IC chip design. The accuracy of these models depends upon the ability of the calibration set to sufficiently cover the design space while remaining manageable enough to address metrology constraints. We show that the proposed method provides results of at least similar quality, and in some cases superior quality, compared with both the traditional method and larger sample plan sets. The main advantage of our method over existing ones is that it generates a calibration set much faster from a large initial set and, even more importantly, automatically selects its minimum optimal size.
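The abstract does not disclose the selection algorithm itself, but the general idea of automatically picking a small calibration set that still covers a design space can be illustrated with a generic greedy max-min (farthest-point) selection over pattern feature vectors. Everything below (the feature vectors, the set size, the starting pattern) is an invented toy example, not the authors' method:

```python
import math

def farthest_point_selection(features, k):
    """Greedy max-min selection: start from the first candidate pattern and
    repeatedly add the candidate farthest from everything already selected."""
    selected = [0]
    while len(selected) < k:
        best_i, best_d = -1, -1.0
        for i, p in enumerate(features):
            if i in selected:
                continue
            d = min(math.dist(p, features[j]) for j in selected)  # coverage gap
            if d > best_d:
                best_i, best_d = i, d
        selected.append(best_i)
    return selected

# Toy 2D "pattern descriptors"; a real flow would use higher-dimensional features.
patterns = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (5.0, 5.0), (1.0, 1.0)]
picked = farthest_point_selection(patterns, 3)
```

The greedy rule favors mutually distant patterns, so near-duplicates like (1, 1) are skipped, which is the intuition behind shrinking a large initial set without losing design-space coverage.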

  16. An Improvement in Thermal Modelling of Automated Tape Placement Process

    SciTech Connect

    Barasinski, Anaies; Leygue, Adrien; Poitou, Arnaud; Soccard, Eric

    2011-01-17

    The thermoplastic tape placement process offers the possibility of manufacturing large laminated composite parts with all kinds of geometries (e.g., doubly curved). This process is based on the fusion bonding of a thermoplastic tape onto a substrate. It has received growing interest in recent years because it does not require an autoclave. In order to control and optimize the quality of the manufactured part, we need to predict the temperature field throughout the processing of the laminate. In this work, we focus on a thermal model of this process that takes into account the imperfect bonding between the different layers of the substrate by introducing thermal contact resistances into the model. This study draws on experimental results showing that the value of the thermal resistance evolves with the temperature and pressure applied to the material.

  17. An Improvement in Thermal Modelling of Automated Tape Placement Process

    NASA Astrophysics Data System (ADS)

    Barasinski, Anaïs; Leygue, Adrien; Soccard, Eric; Poitou, Arnaud

    2011-01-01

    The thermoplastic tape placement process offers the possibility of manufacturing large laminated composite parts with all kinds of geometries (e.g., doubly curved). This process is based on the fusion bonding of a thermoplastic tape onto a substrate. It has received growing interest in recent years because it does not require an autoclave. In order to control and optimize the quality of the manufactured part, we need to predict the temperature field throughout the processing of the laminate. In this work, we focus on a thermal model of this process that takes into account the imperfect bonding between the different layers of the substrate by introducing thermal contact resistances into the model. This study draws on experimental results showing that the value of the thermal resistance evolves with the temperature and pressure applied to the material.
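The role of the interfacial contact resistances can be sketched with a minimal 1D steady-state series-resistance model of the laminate: each ply contributes a conduction resistance and each imperfect inter-ply bond contributes a contact resistance, and the contact resistances produce temperature jumps across the interfaces. All values below are illustrative placeholders, not measured tape-placement data (in the paper the contact resistance additionally evolves with temperature and pressure):

```python
# 1D series thermal resistance network for a 5-ply laminate with
# imperfect bonding modeled as contact resistances (illustrative values).
n_plies = 5
ply_thickness = 0.2e-3      # m, per ply (assumed)
k_through = 0.5             # W/(m K), through-thickness conductivity (assumed)
r_contact = 1.0e-3          # m^2 K/W, per imperfect interface (assumed)
delta_t = 100.0             # K, temperature difference across the stack

r_plies = n_plies * ply_thickness / k_through   # conduction resistances in series
r_interfaces = (n_plies - 1) * r_contact        # one contact resistance per bond line
r_total = r_plies + r_interfaces                # m^2 K/W
heat_flux = delta_t / r_total                   # W/m^2, steady-state flux
jump_per_interface = heat_flux * r_contact      # K, temperature jump at each bond
```

With these numbers the four imperfect interfaces carry two thirds of the total resistance, which is why neglecting them would substantially misestimate the through-thickness temperature field.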

  18. Structural model for the GeO2/Ge interface: A first-principles study

    NASA Astrophysics Data System (ADS)

    Saito, Shoichiro; Ono, Tomoya

    2011-08-01

    First-principles modeling of a GeO2/Ge(001) interface reveals that sixfold GeO2, which is derived from cristobalite and is different from rutile, dramatically reduces the lattice mismatch at the interface and is much more stable than the conventional fourfold interface. Since the grain boundary between fourfold and sixfold GeO2 is unstable, sixfold GeO2 forms a large grain at the interface. In contrast, a comparative study with SiO2 demonstrates that SiO2 maintains a fourfold structure. The sixfold GeO2/Ge interface is shown to be a consequence of the ground-state phase of GeO2. In addition, the electronic structure calculation reveals that sixfold GeO2 at the interface shifts the valence band maximum far from the interface toward the conduction band.

  19. Automated target recognition using passive radar and coordinated flight models

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2003-09-01

    Rather than emitting pulses, passive radar systems rely on illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. These systems are particularly attractive since they allow receivers to operate without emitting energy, rendering them covert. Many existing passive radar systems estimate the locations and velocities of targets. This paper focuses on adding an automatic target recognition (ATR) component to such systems. Our approach to ATR compares the Radar Cross Section (RCS) of targets detected by a passive radar system to the simulated RCS of known targets. To make the comparison as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. The estimated positions become inputs for an algorithm that uses a coordinated flight model to compute probable aircraft orientation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of several potential target classes as they execute the estimated maneuvers. The RCS is then scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. The Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, so that the RCS can be further scaled. The Rician model compares the RCS of the illuminated aircraft with those of the potential targets. This comparison results in target identification.

  20. Using a sharp interface to model the capillary fringe: a model comparison

    NASA Astrophysics Data System (ADS)

    Bandilla, K.; Celia, M. A.; Nordbotten, J. M.; Court, B.; Elliot, T. J.

    2010-12-01

    Geologic carbon sequestration is seen as one option to reduce anthropogenic carbon emissions in the short term. The largest storage capacity is generally ascribed to deep saline aquifers, where supercritical CO2 is trapped beneath a sufficiently impermeable cap rock. One approach to better understand the processes involved in CO2 sequestration in deep saline aquifers is the use of mathematical models. These models span the full spectrum of complexity, from highly simplified models such as a single-well, single-aquifer Theis solution, to highly complex 3D multi-phase flow numerical simulators like TOUGH2 and ECLIPSE. While numerical simulators allow a suite of subsurface processes to be modeled, their high computational cost makes Monte Carlo type risk analysis studies problematic. One approach to reduce computational cost is model simplification by dimensionality reduction and/or other assumptions, such as a sharp interface separating the native brine and the injected CO2. The demand for computationally efficient models has also led to a renewed interest in analytical and semi-analytical models. As analytical and semi-analytical models become more sophisticated, the impact of capillary forces and relative permeability effects is becoming an active research area. This presentation explores the use and validity of sharp interface semi-analytical and numerical solutions with regard to their ability to model a capillary fringe and the resulting non-linear saturation profile and relative permeability distribution. In particular, the impact of using different capillary pressure-saturation and relative permeability-saturation relationships in different model types (i.e., semi-analytical, numerical) is discussed. For perspective on previously published numerical results, these models are similarly applied to a single-layer, single-well system. The saturation profiles and pressure perturbations produced by our models are evaluated against results from the numerical reservoir simulator ECLIPSE as a benchmark for predicting CO2 plume behavior with sharp interface models in cases where saturation profiles are affected by a capillary fringe.

  1. Disturbed state model for sand-geosynthetic interfaces and application to pull-out tests

    NASA Astrophysics Data System (ADS)

    Pal, Surajit; Wije Wathugala, G.

    1999-12-01

    Successful numerical simulation of geosynthetic-reinforced earth structures depends on selecting proper constitutive models for the soils, geosynthetics and soil-geosynthetic interfaces. Many constitutive models are available for modelling soils and geosynthetics. However, constitutive models for soil-geosynthetic interfaces that can capture most of the important characteristics of interface response are not readily available. In this paper, an elasto-plastic constitutive model based on the disturbed state concept (DSC) for geosynthetic-soil interfaces is presented. The proposed model is capable of capturing most of the important characteristics of interface response, such as dilation, hardening and softening. The behaviour of interfaces under the direct shear test is predicted by the model. The model has been implemented in a finite element procedure in association with the thin-layer element. Five pull-out tests with two different geogrids have been simulated numerically using FEM. For the calibration of the constitutive models used in FEM, the standard laboratory tests used are: (1) triaxial tests for the sand, (2) direct shear tests for the interfaces and (3) axial tension tests for the geogrids. The results of the finite element simulations of pull-out tests agree well with the test data. The proposed model can be used for stress-deformation studies of geosynthetic-reinforced embankments through numerical simulation.

  2. A continuously growing web-based interface structure databank

    NASA Astrophysics Data System (ADS)

    Erwin, N. A.; Wang, E. I.; Osysko, A.; Warner, D. H.

    2012-07-01

    The macroscopic properties of materials can be significantly influenced by the presence of microscopic interfaces. The complexity of these interfaces coupled with the vast configurational space in which they reside has been a long-standing obstacle to the advancement of true bottom-up material behavior predictions. In this vein, atomistic simulations have proven to be a valuable tool for investigating interface behavior. However, before atomistic simulations can be utilized to model interface behavior, meaningful interface atomic structures must be generated. The generation of structures has historically been carried out disjointly by individual research groups, and thus, has constituted an overlap in effort across the broad research community. To address this overlap and to lower the barrier for new researchers to explore interface modeling, we introduce a web-based interface structure databank (www.isdb.cee.cornell.edu) where users can search, download and share interface structures. The databank is intended to grow via two mechanisms: (1) interface structure donations from individual research groups and (2) an automated structure generation algorithm which continuously creates equilibrium interface structures. In this paper, we describe the databank, the automated interface generation algorithm, and compare a subset of the autonomously generated structures to structures currently available in the literature. To date, the automated generation algorithm has been directed toward aluminum grain boundary structures, which can be compared with experimentally measured population densities of aluminum polycrystals.

  3. Piloted Simulation of a Model-Predictive Automated Recovery System

    NASA Technical Reports Server (NTRS)

    Liu, James (Yuan); Litt, Jonathan; Sowers, T. Shane; Owens, A. Karl; Guo, Ten-Huei

    2014-01-01

    This presentation describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.
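The trigger condition described above (estimate the altitude a go-around would consume, intervene when the projected floor violates a minimum threshold) can be sketched as follows. The loss model (sink rate times engine spool-up time plus a flare margin) and all numbers are illustrative assumptions, not the NASA system's actual model:

```python
# Hypothetical go-around trigger logic: intervene when the altitude that
# would be lost during a go-around maneuver brings the aircraft below a
# minimum altitude threshold. All constants are invented for illustration.

def projected_altitude_loss(sink_rate_fps, spool_time_s=6.0):
    """Altitude lost while engine thrust builds up, plus a flare margin (ft)."""
    flare_margin_ft = 20.0
    return sink_rate_fps * spool_time_s + flare_margin_ft

def should_trigger_go_around(altitude_ft, sink_rate_fps, min_altitude_ft=50.0):
    """True when the projected post-maneuver floor violates the threshold."""
    floor = altitude_ft - projected_altitude_loss(sink_rate_fps)
    return floor < min_altitude_ft
```

A steep, low approach (100 ft at 10 ft/s sink) trips the trigger, while the same sink rate at 500 ft does not; the real system would evaluate the loss estimate continuously against the current flight condition.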

  4. An Energy Approach to a Micromechanics Model Accounting for Nonlinear Interface Debonding.

    SciTech Connect

    Tan, H.; Huang, Y.; Geubelle, P. H.; Liu, C.; Breitenfeld, M. S.

    2005-01-01

    We developed a micromechanics model to study the effect of nonlinear interface debonding on the constitutive behavior of composite materials. While implementing this micromechanics model in a large simulation code for solid rockets, we were challenged by problems such as tension/shear coupling and the nonuniform distribution of the displacement jump at the particle/matrix interfaces. We therefore propose an energy approach to solve these problems. This energy approach calculates the potential energy of the representative volume element, including the contribution from the interface debonding. By minimizing the potential energy with respect to the variation of the interface displacement jump, the traction-balanced interface debonding can be found and the macroscopic constitutive relations established. This energy approach has the ability to treat different load conditions in a unified way, and the interface cohesive law can take any arbitrary form. In this paper, the energy approach is verified to give the same constitutive behaviors as reported previously.
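A one-dimensional caricature of the energy approach: a bulk "spring" of stiffness k in series with a cohesive interface, under a prescribed total opening u. Minimizing the potential energy over the displacement jump d recovers the traction-balanced state, where the bulk traction k(u - d) equals the cohesive traction t(d). The exponential cohesive law and all parameter values below are illustrative choices, not the paper's model:

```python
import math

k = 100.0        # bulk stiffness (assumed)
u = 0.05         # prescribed total opening (assumed)
s_max = 1.0      # cohesive strength (assumed)
d0 = 0.01        # jump at peak traction (assumed)

def traction(d):
    """Exponential (Needleman-type) cohesive traction-separation law."""
    return s_max * (d / d0) * math.exp(1.0 - d / d0)

def cohesive_energy(d):
    """Antiderivative of traction with Phi(0) = 0."""
    x = d / d0
    return s_max * math.e * d0 * (1.0 - (1.0 + x) * math.exp(-x))

def potential_energy(d):
    """Bulk spring energy plus interface cohesive energy."""
    return 0.5 * k * (u - d) ** 2 + cohesive_energy(d)

# Brute-force minimization over candidate jumps; a real implementation
# would use a variational/Newton scheme over the interface field.
grid = [i * 1.0e-6 for i in range(0, 100001)]   # d in [0, 0.1]
d_star = min(grid, key=potential_energy)
residual = k * (u - d_star) - traction(d_star)  # traction balance check
```

At the energy minimizer the traction residual vanishes (here the interface sits far on the softening branch, i.e. largely debonded), which is exactly the balance condition the energy approach enforces without imposing it directly.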

  5. User's Manual for the Object User Interface (OUI): An Environmental Resource Modeling Framework

    USGS Publications Warehouse

    Markstrom, Steven L.; Koczot, Kathryn M.

    2008-01-01

    The Object User Interface is a computer application that provides a framework for coupling environmental-resource models and for managing associated temporal and spatial data. The Object User Interface is designed to be easily extensible to incorporate models and data interfaces defined by the user. Additionally, the Object User Interface is highly configurable through the use of a user-modifiable, text-based control file that is written in the eXtensible Markup Language. The Object User Interface user's manual provides (1) installation instructions, (2) an overview of the graphical user interface, (3) a description of the software tools, (4) a project example, and (5) specifications for user configuration and extension.
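Since the coupling is driven by a user-modifiable XML control file, its use can be sketched with a minimal parse. The element and attribute names below are invented for illustration only; the actual OUI schema is specified in the user's manual:

```python
# Hypothetical OUI-style control file: a project coupling two models
# and a data interface. Tag/attribute names are illustrative, not the
# real OUI schema.
import xml.etree.ElementTree as ET

control_xml = """
<oui_project name="example_basin">
  <model name="PRMS" executable="prms.exe"/>
  <model name="MODFLOW" executable="mf2005.exe"/>
  <data_interface name="climate" format="csv" path="data/climate.csv"/>
</oui_project>
"""

root = ET.fromstring(control_xml)
models = [m.get("name") for m in root.findall("model")]
interfaces = [d.get("name") for d in root.findall("data_interface")]
```

Because the framework reads its configuration from plain XML, users can extend a project (add a model, point at new data) by editing text rather than code, which is the extensibility the manual emphasizes.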

  6. A formalism for modeling solid electrolyte/electrode interfaces using first principles methods

    NASA Astrophysics Data System (ADS)

    Lepley, Nicholas; Holzwarth, Natalie

    We describe a scheme based on the interface energy for analyzing interfaces between crystalline solids, quantitatively including the effect of lattice strain. This scheme is applied to the modeling of likely interface geometries of several solid state battery materials including Li metal, Li3PO4, Li3PS4, Li2O, and Li2S. We find that all of the interfaces in this study are stable with the exception of Li3PS4/Li. For this chemically unstable interface, the partial density of states helps to identify mechanisms associated with the interface reactions. We also consider the case of charged defects at the interface, and show that accurately modeling them requires a careful treatment of the resulting electric fields. Our energetic measure of interfaces and our analysis of the band alignment between interface materials indicate multiple factors which may be predictors of interface stability, an important property of solid electrolyte systems. Supported by NSF Grant DMR-1105485 and DMR-1507942.

  7. Modeling the flow in diffuse interface methods of solidification.

    PubMed

    Subhedar, A; Steinbach, I; Varnik, F

    2015-08-01

    Fluid dynamical equations in the presence of a diffuse solid-liquid interface are investigated via a volume averaging approach. The resulting equations exhibit the same structure as the standard Navier-Stokes equation for a Newtonian fluid with a constant viscosity, the effect of the solid phase fraction appearing in the drag force only. This considerably simplifies the use of the lattice Boltzmann method as a fluid dynamics solver in solidification simulations. Galilean invariance is also satisfied within this approach. Further, we investigate deviations between the diffuse and sharp interface flow profiles via both quasiexact numerical integration and lattice Boltzmann simulations. It emerges from these studies that the freedom in choosing the solid-liquid coupling parameter h provides a flexible way of optimizing the diffuse interface-flow simulations. Once h is adapted for a given spatial resolution, the simulated flow profiles reach an accuracy comparable to quasiexact numerical simulations. PMID:26382542

  8. Modeling the flow in diffuse interface methods of solidification

    NASA Astrophysics Data System (ADS)

    Subhedar, A.; Steinbach, I.; Varnik, F.

    2015-08-01

    Fluid dynamical equations in the presence of a diffuse solid-liquid interface are investigated via a volume averaging approach. The resulting equations exhibit the same structure as the standard Navier-Stokes equation for a Newtonian fluid with a constant viscosity, the effect of the solid phase fraction appearing in the drag force only. This considerably simplifies the use of the lattice Boltzmann method as a fluid dynamics solver in solidification simulations. Galilean invariance is also satisfied within this approach. Further, we investigate deviations between the diffuse and sharp interface flow profiles via both quasiexact numerical integration and lattice Boltzmann simulations. It emerges from these studies that the freedom in choosing the solid-liquid coupling parameter h provides a flexible way of optimizing the diffuse interface-flow simulations. Once h is adapted for a given spatial resolution, the simulated flow profiles reach an accuracy comparable to quasiexact numerical simulations.

  9. A semi-automated vascular access system for preclinical models

    NASA Astrophysics Data System (ADS)

    Berry-Pusey, B. N.; Chang, Y. C.; Prince, S. W.; Chu, K.; David, J.; Taschereau, R.; Silverman, R. W.; Williams, D.; Ladno, W.; Stout, D.; Tsao, T. C.; Chatziioannou, A.

    2013-08-01

    Murine models are used extensively in biological and translational research. For many of these studies it is necessary to access the vasculature for the injection of biologically active agents. Among the possible methods for accessing the mouse vasculature, tail vein injections are a routine but critical step for many experimental protocols. To perform successful tail vein injections, a high level of skill and experience is required, leaving most scientists ill-suited to perform this task. This can lead to high variability between injections, which can impact experimental results. To allow more scientists to perform tail vein injections and to decrease the variability between injections, a vascular access system (VAS) that semi-automatically inserts a needle into the tail vein of a mouse was developed. The VAS uses near-infrared light, image processing techniques, computer-controlled motors, and a pressure feedback system to insert the needle and to validate its proper placement within the vein. The VAS was tested by injecting a commonly used radiolabeled probe (FDG) into the tail veins of five mice. These mice were then imaged using micro-positron emission tomography to measure the percentage of the injected probe remaining in the tail. These studies showed that, on average, the VAS leaves 3.4% of the injected probe in the tail. With these preliminary results, the VAS demonstrates the potential to improve the accuracy of tail vein injections in mice.

  10. Automated Finite Element Modeling of Wing Structures for Shape Optimization

    NASA Technical Reports Server (NTRS)

    Harvey, Michael Stephen

    1993-01-01

    The displacement formulation of the finite element method is the most general and most widely used technique for structural analysis of airplane configurations. Modern structural synthesis techniques based on the finite element method have reached a certain maturity in recent years, and large airplane structures can now be optimized with respect to sizing-type design variables for many load cases subject to a rich variety of constraints including stress, buckling, frequency, stiffness and aeroelastic constraints (Refs. 1-3). These structural synthesis capabilities use gradient-based nonlinear programming techniques to search for improved designs. For these techniques to be practical, a major improvement was required in the computational cost of finite element analyses (needed repeatedly in the optimization process). Thus, associated with the progress in structural optimization, a new perspective of structural analysis has emerged, namely, structural analysis specialized for design optimization applications, or what is known as "design oriented structural analysis" (Ref. 4). This discipline includes approximation concepts and methods for obtaining behavior sensitivity information (Ref. 1), all needed to make the optimization of large structural systems (modeled by thousands of degrees of freedom and thousands of design variables) practical and cost effective.

  11. Automated model-based organ delineation for radiotherapy planning in prostatic region

    SciTech Connect

    Pekar, Vladimir . E-mail: vladimir.pekar@philips.com; McNutt, Todd R.; Kaus, Michael R.

    2004-11-01

    Purpose: Organ delineation is one of the most tedious and time-consuming parts of radiotherapy planning. It is usually performed by manual contouring in two-dimensional slices using simple drawing tools, and it may take several hours to delineate all structures of interest in a three-dimensional (3D) data set used for planning. In this paper, a 3D model-based approach to automated organ delineation is introduced that allows for a significant reduction of the time required for contouring. Methods and materials: The presented method is based on an adaptation of 3D deformable surface models to the boundaries of the anatomic structures of interest. The adaptation is based on a tradeoff between deformations of the model induced by its attraction to certain image features and the shape integrity of the model. To make the concept clinically feasible, interactive tools are introduced that allow quick correction in problematic areas in which the automated model adaptation may fail. A feasibility study with 40 clinical data sets was done for the male pelvic area, in which the risk organs (bladder, rectum, and femoral heads) were segmented by automatically adapting the corresponding organ models. Results: In several cases of the validation study, minor user interaction was required. Nevertheless, a statistically significant reduction in the time required compared with manual organ contouring was achieved. The results of the validation study showed that the presented model-based approach is accurate (1.0-1.7 mm mean error) for the tested anatomic structures. Conclusion: A framework for organ delineation in radiotherapy planning is presented, including automated 3D model-based segmentation, as well as tools for interactive corrections. We demonstrated that the proposed approach is significantly more efficient than manual contouring in two-dimensional slices.

  12. Automated calibration of a stream solute transport model: Implications for interpretation of biogeochemical parameters

    USGS Publications Warehouse

    Scott, D.T.; Gooseff, M.N.; Bencala, K.E.; Runkel, R.L.

    2003-01-01

    The hydrologic processes of advection, dispersion, and transient storage are the primary physical mechanisms affecting solute transport in streams. The estimation of parameters for a conservative solute transport model is an essential step to characterize transient storage and other physical features that cannot be directly measured, and often is a preliminary step in the study of reactive solutes. Our study used inverse modeling to estimate parameters of the transient storage model OTIS (One-dimensional Transport with Inflow and Storage). Observations from a tracer injection experiment performed on Uvas Creek, California, USA, are used to illustrate the application of automated solute transport model calibration to conservative and nonconservative stream solute transport. A computer code for universal inverse modeling (UCODE) is used for the calibrations. Results of this procedure are compared with a previous study that used a trial-and-error parameter estimation approach. The results demonstrated: 1) the importance of proper estimation of discharge and lateral inflow within the stream system; 2) that although the fit to the observations is not much better when transient storage is invoked, a more randomly distributed set of residuals results (suggesting non-systematic error), indicating that transient storage is occurring; 3) that inclusion of transient storage for a reactive solute (Sr2+) provided a better fit to the observations, highlighting the importance of robust model parameterization; and 4) that applying an automated inverse-modeling calibration approach resulted in a more comprehensive understanding of the model results and the limitations of the input data.
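The inverse-modeling idea (adjust transport parameters to minimize the misfit between observed and simulated concentrations) can be shown with a toy stand-in for UCODE driving OTIS: a grid search over an analytic 1D advection-dispersion solution against synthetic "observations". The model, parameter values and observation times are all invented for illustration, and the real calibration additionally handles transient storage and lateral inflow:

```python
import numpy as np

def conc(t, v, D, x=100.0, mass=1.0):
    """1D advection-dispersion solution for an instantaneous release
    (velocity v, dispersion D), evaluated at downstream distance x."""
    return mass / np.sqrt(4.0 * np.pi * D * t) * np.exp(
        -(x - v * t) ** 2 / (4.0 * D * t))

t_obs = np.linspace(50.0, 400.0, 36)
c_obs = conc(t_obs, v=0.5, D=2.0)        # synthetic "observations"

# Automated calibration as misfit minimization: sweep candidate (v, D)
# pairs and keep the pair with the smallest sum of squared residuals.
best = min(
    ((v, D) for v in np.linspace(0.3, 0.7, 41)
            for D in np.linspace(1.0, 3.0, 21)),
    key=lambda p: float(np.sum((conc(t_obs, *p) - c_obs) ** 2)),
)
v_hat, D_hat = best
```

The search recovers the generating parameters; with field data the residual pattern itself carries information, which is why the abstract emphasizes randomly distributed residuals as evidence for transient storage.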

  13. Structure of liquid-vapor interfaces in the Ising model

    SciTech Connect

    Moseley, L.L.

    1997-06-01

    The asymptotic behavior of the density profile of the fluid-fluid interface is investigated by computer simulation and is found to be better described by the error function than by the hyperbolic tangent in three dimensions. For higher dimensions the hyperbolic tangent is a better approximation.
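The comparison the abstract draws can be reproduced numerically in a simple way: generate a synthetic erf-shaped density profile and fit both an error-function and a hyperbolic-tangent profile family to it by least squares over a grid of interface widths. The profile width and grids are arbitrary choices for illustration:

```python
import math

# Synthetic interface profile of the erf form rho(z) = 0.5*(1 + erf(z/w0)).
z = [-5.0 + 0.1 * i for i in range(101)]
rho = [0.5 * (1.0 + math.erf(s / 1.0)) for s in z]

def best_sse(profile_fn):
    """Least-squares misfit of 0.5*(1 + profile_fn(z/w)), minimized over
    a grid of candidate interface widths w."""
    best = float("inf")
    for j in range(1, 301):
        w = 0.01 * j
        err = sum((0.5 * (1.0 + profile_fn(s / w)) - r) ** 2
                  for s, r in zip(z, rho))
        best = min(best, err)
    return best

sse_erf = best_sse(math.erf)
sse_tanh = best_sse(math.tanh)
```

By construction the erf family fits exactly while the best tanh fit leaves a residual, mirroring the 3D result; in the simulations of the paper the distinction is of course made against measured profiles, not synthetic ones.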

  14. A comparison of automated anatomical–behavioural mapping methods in a rodent model of stroke☆

    PubMed Central

    Crum, William R.; Giampietro, Vincent P.; Smith, Edward J.; Gorenkova, Natalia; Stroemer, R. Paul; Modo, Michel

    2013-01-01

    Neurological damage, due to conditions such as stroke, results in a complex pattern of structural changes and significant behavioural dysfunctions; the automated analysis of magnetic resonance imaging (MRI) and discovery of structural–behavioural correlates associated with these disorders remains challenging. Voxel lesion symptom mapping (VLSM) has been used to associate behaviour with lesion location in MRI, but this analysis requires the definition of lesion masks on each subject and does not exploit the rich structural information in the images. Tensor-based morphometry (TBM) has been used to perform voxel-wise structural analyses over the entire brain; however, a combination of lesion hyper-intensities and subtle structural remodelling away from the lesion might confound the interpretation of TBM. In this study, we compared and contrasted these techniques in a rodent model of stroke (n = 58) to assess the efficacy of these techniques in a challenging pre-clinical application. The results from the automated techniques were compared using manually derived region-of-interest measures of the lesion, cortex, striatum, ventricle and hippocampus, and considered against model power calculations. The automated TBM techniques successfully detect both lesion and non-lesion effects, consistent with manual measurements. These techniques do not require manual segmentation to the same extent as VLSM and should be considered part of the toolkit for the unbiased analysis of pre-clinical imaging-based studies. PMID:23727124

  15. NeuroGPS: automated localization of neurons for brain circuits using L1 minimization model.

    PubMed

    Quan, Tingwei; Zheng, Ting; Yang, Zhongqing; Ding, Wenxiang; Li, Shiwei; Li, Jing; Zhou, Hang; Luo, Qingming; Gong, Hui; Zeng, Shaoqun

    2013-01-01

    Mapping neuronal circuits at microscopic resolution is important for explaining how the brain works. Recent progress in fluorescence labeling and imaging techniques has enabled imaging the whole brain of a rodent such as a mouse at submicron resolution. Considering the huge volume of such datasets, automatically tracing and reconstructing the neuronal connections from the image stacks is essential to assemble large-scale circuits. However, the first step, automated localization of somata across different brain areas, remains a challenge. Here, we addressed this problem by introducing an L1 minimization model. We developed a fully automated system, NeuronGlobalPositionSystem (NeuroGPS), that is robust to the broad diversity of shape, size and density of the neurons in a mouse brain. This method allows locating neurons across different brain areas without human intervention. We believe this method will facilitate the analysis of neuronal circuits for brain function and disease studies. PMID:23546385
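The L1 idea behind such localization (not the NeuroGPS algorithm itself, whose details are in the paper) is sparse recovery: explain the observed signal as a combination of as few sources as possible by solving min 0.5*||Ax - b||^2 + lam*||x||_1. A minimal sketch using iterative soft-thresholding (ISTA) on toy data:

```python
import numpy as np

# Toy linear measurement model: columns of A are "source signatures",
# x_true is a sparse source vector (two active sources). All values invented.
A = np.array([[1.0, 0.2, 0.0, 0.5],
              [0.0, 1.0, 0.3, 0.1],
              [0.4, 0.0, 1.0, 0.0],
              [0.1, 0.3, 0.0, 1.0],
              [0.2, 0.1, 0.4, 0.3]])
x_true = np.array([0.0, 1.5, 0.0, -1.0])
b = A @ x_true
lam = 0.01                                   # L1 penalty weight

step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz const of grad
x = np.zeros(4)
for _ in range(5000):
    g = x - step * A.T @ (A @ x - b)         # gradient step on the quadratic
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
```

The L1 term drives inactive entries to (near) zero, so the recovered support identifies the active sources; this sparsity-promoting behavior is what makes L1 models robust to densely packed, variably sized somata.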

  16. IDEF3 and IDEF4 automation system requirements document and system environment models

    NASA Technical Reports Server (NTRS)

    Blinn, Thomas M.

    1989-01-01

    The requirements specification is provided for the IDEF3 and IDEF4 tools that provide automated support for IDEF3 and IDEF4 modeling. The IDEF3 method is a scenario-driven process flow description capture method intended to be used by domain experts to represent knowledge about how a particular system or process works. The IDEF3 method provides modes to represent both (1) Process Flow Description, to capture the relationships between actions within the context of a specific scenario, and (2) Object State Transition, to capture the allowable transitions of an object in the domain. The IDEF4 method provides a means for capturing (1) the Class Submodel, or object hierarchy, (2) the Method Submodel, or the procedures associated with each class of objects, and (3) the Dispatch Matching, or the relationships between the objects and methods in the object-oriented design. The requirements specified describe the capabilities that a fully functional IDEF3 or IDEF4 automated tool should support.

  17. A New Tool for Inundation Modeling: Community Modeling Interface for Tsunamis (ComMIT)

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Moore, C. W.; Greenslade, D. J. M.; Pattiaratchi, C.; Badal, R.; Synolakis, C. E.; Kânoğlu, U.

    2011-11-01

    Almost 5 years after the 26 December 2004 Indian Ocean tragedy, the 10 August 2009 Andaman tsunami demonstrated that accurate forecasting is possible using the tsunami community modeling tool Community Model Interface for Tsunamis (ComMIT). ComMIT is designed for ease of use, and allows dissemination of results to the community while addressing concerns associated with proprietary issues of bathymetry and topography. It uses initial conditions from a precomputed propagation database, has an easy-to-interpret graphical interface, and requires only portable hardware. ComMIT was initially developed for Indian Ocean countries with support from the United Nations Educational, Scientific, and Cultural Organization (UNESCO), the United States Agency for International Development (USAID), and the National Oceanic and Atmospheric Administration (NOAA). To date, more than 60 scientists from 17 countries in the Indian Ocean have been trained and are using it in operational inundation mapping.

  18. A User-Oriented Interface for Generalised Informetric Analysis Based on Applying Advanced Data Modelling Techniques.

    ERIC Educational Resources Information Center

    Jarvelin, Kalervo; Ingwersen, Peter; Niemi, Timo

    2000-01-01

    Presents a user-oriented interface for generalized informetric analysis and demonstrates how informetric calculations can be specified through advanced data modeling techniques. Topics include bibliographic data; online information retrieval systems; citation networks; query interface; impact factors; data restructuring; and multi-level…

  19. Modeling Auditory-Haptic Interface Cues from an Analog Multi-line Telephone

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Anderson, Mark R.; Bittner, Rachael M.

    2012-01-01

    The Western Electric Company produced a multi-line telephone during the 1940s-1970s using a six-button interface design that provided robust tactile, haptic and auditory cues regarding the "state" of the communication system. This multi-line telephone was used as a model for a trade study comparison of two interfaces: a touchscreen interface (iPad) versus a pressure-sensitive strain gauge button interface (Phidget USB interface controllers). The experiment and its results are detailed in the authors' AES 133rd convention paper "Multimodal Information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays". This Engineering Brief describes how the interface logic, visual indications, and auditory cues of the original telephone were synthesized using MAX/MSP, including the logic for line selection, line hold, and priority line activation.

  20. An automation of design and modelling tasks in NX Siemens environment with original software - generator module

    NASA Astrophysics Data System (ADS)

    Zbiciak, M.; Grabowik, C.; Janik, W.

    2015-11-01

    Nowadays the constructional design process is almost exclusively aided with CAD/CAE/CAM systems. It is estimated that nearly 80% of design activities have a routine nature. These routine design tasks are highly susceptible to automation. Design automation is usually achieved with API tools, which allow building original software to aid various engineering activities. In this paper, original software developed to automate engineering tasks at the stage of a product's geometrical shape design is presented. The software works exclusively in the NX Siemens CAD/CAM/CAE environment and was prepared in Microsoft Visual Studio with application of the .NET technology and the NX SNAP library. The software allows designing and modelling of spur and helicoidal involute gears. Moreover, it is possible to estimate relative manufacturing costs. With the Generator module it is possible to design and model both standard and non-standard gear wheels. The main advantage of a model generated in this way is its better representation of the involute curve in comparison to those drawn with specialized standard CAD system tools. This stems from the fact that in CAD systems an involute curve is usually drawn through three points, corresponding to points located on the addendum circle, the reference diameter of the gear, and the base circle, respectively. In the Generator module the involute curve is drawn through 11 involute points located on and above the base and addendum circles, so the 3D gear wheel models are highly accurate. Application of the Generator module makes the modelling process very rapid, reducing the gear wheel modelling time to several seconds. During the conducted research, an analysis of the differences between the standard 3-point and the 11-point involutes was made. The results and conclusions drawn from this analysis are presented in detail.
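    The multi-point sampling described above can be sketched with the standard parametrization of a circle involute. This is an illustrative reconstruction, not the Generator module's API; function and parameter names are our own:

    ```python
    import math

    def involute_points(r_base, r_tip, n=11):
        """Sample n points on the involute of a base circle, from the base
        circle out to the tip (addendum) circle.

        Involute parametrization: x = rb*(cos t + t*sin t),
                                  y = rb*(sin t - t*cos t),
        where the radius at roll angle t is rb*sqrt(1 + t^2).
        """
        # Roll angle at which the involute reaches the tip radius.
        t_max = math.sqrt((r_tip / r_base) ** 2 - 1.0)
        pts = []
        for i in range(n):
            t = t_max * i / (n - 1)
            x = r_base * (math.cos(t) + t * math.sin(t))
            y = r_base * (math.sin(t) - t * math.cos(t))
            pts.append((x, y))
        return pts

    # First point lies on the base circle, last on the tip circle.
    pts = involute_points(r_base=40.0, r_tip=50.0, n=11)
    ```

    Sampling 11 points rather than 3 gives the spline fit through these points a much closer match to the true involute flank.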

  1. Reconciling lattice and continuum models for polymers at interfaces.

    PubMed

    Fleer, G J; Skvortsov, A M

    2012-04-01

    It is well known that lattice and continuum descriptions for polymers at interfaces are, in principle, equivalent. In order to compare the two models quantitatively, one needs a relation between the inverse extrapolation length c as used in continuum theories and the lattice adsorption parameter Δχ(s) (defined with respect to the critical point). So far, this has been done only for ideal chains with zero segment volume in extremely dilute solutions. The relation Δχ(s)(c) is obtained by matching the boundary conditions in the two models. For depletion (positive c and Δχ(s)) the result is very simple: Δχ(s) = ln(1 + c/5). For adsorption (negative c and Δχ(s)) the ideal-chain treatment leads to an unrealistic divergence for strong adsorption: c decreases without bounds and the train volume fraction exceeds unity. This is due to the fact that for ideal chains the volume filling cannot be accounted for. We extend the treatment to real chains with finite segment volume at finite concentrations, for both good and theta solvents. For depletion the volume filling is not important and the ideal-chain result Δχ(s) = ln(1 + c/5) is generally valid also for non-ideal chains, at any concentration, chain length, or solvency. Depletion profiles can be accurately described in terms of two length scales: ρ = tanh(2)[(z + p)/δ], where the depletion thickness (distal length) δ is a known function of chain length and polymer concentration, and the proximal length p is a known function of c (or Δχ(s)) and δ. For strong repulsion p = 1/c (then the proximal length equals the extrapolation length), for weaker repulsion p depends also on chain length and polymer concentration (then p is smaller than 1/c). In very dilute solutions we find quantitative agreement with previous analytical results for ideal chains, for any chain length, down to oligomers.
In more concentrated solutions there is excellent agreement with numerical self-consistent depletion profiles, for both weak and strong repulsion, for any chain length, and for any solvency. For adsorption the volume filling dominates. As a result c now reaches a lower limit c ≈ -0.5 (depending slightly on solvency). This limit follows immediately from the condition of a fully occupied train layer. Comparison with numerical SCF calculations corroborates that our analytical result is a good approximation. We suggest some simple methods to determine the interaction parameter (either c or Δχ(s)) from experiments. The relation Δχ(s)(c) provides a quantitative connection between continuum and lattice theories, and enables the use of analytical continuum results to describe the adsorption (and stretching) of lattice chains of any chain length. For example, a fully analytical treatment of mechanical desorption of a polymer chain (including the temperature dependence and the phase transitions) is now feasible. PMID:22482580
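    The two closed-form results quoted in the abstract, the depletion-branch relation Δχ(s) = ln(1 + c/5) and the profile ρ = tanh²[(z + p)/δ] with p ≈ 1/c for strong repulsion, can be evaluated directly. A minimal sketch (function names are ours; the crossover of p away from 1/c at weaker repulsion is not modelled here):

    ```python
    import math

    def delta_chi_s(c):
        # Lattice adsorption parameter from the inverse extrapolation
        # length c, depletion branch: delta_chi_s = ln(1 + c/5).
        return math.log(1.0 + c / 5.0)

    def depletion_profile(z, delta, c):
        # Normalized volume-fraction profile rho(z) = tanh^2((z + p)/delta),
        # with proximal length p = 1/c (strong-repulsion limit only).
        p = 1.0 / c
        return math.tanh((z + p) / delta) ** 2
    ```

    At the critical point (c = 0) the adsorption parameter vanishes, and far from the wall the profile approaches the bulk value ρ = 1, as the two length scales require.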

  2. An advanced distributed automated extraction of drainage network model on high-resolution DEM

    NASA Astrophysics Data System (ADS)

    Mao, Y.; Ye, A.; Xu, J.; Ma, F.; Deng, X.; Miao, C.; Gong, W.; Di, Z.

    2014-07-01

    A high-resolution and high-accuracy drainage network map is a prerequisite for simulating the water cycle in land surface hydrological models. The objective of this study was to develop a new automated drainage network extraction model that can produce a high-precision, continuous drainage network from a high-resolution DEM (Digital Elevation Model). Extracting a drainage network from a high-resolution DEM demands substantial computing resources. Conventional GIS methods often cannot complete the calculation on high-resolution DEMs of large basins because the number of grid cells is too large. In order to decrease the computation time, an advanced distributed automated extraction of drainage network model (Adam) is proposed in this study. The Adam model has two features: (1) it searches upward from the basin outlet instead of performing sink filling, and (2) it delineates sub-basins on a low-resolution DEM and then extracts the drainage network within each sub-basin on the high-resolution DEM. The case study used elevation data of the Shuttle Radar Topography Mission (SRTM) at 3 arc-second resolution in the Zhujiang River basin, China. The results show that the Adam model can dramatically reduce the computation time. The extracted drainage network was continuous and more accurate than HydroSHEDS (Hydrological data and maps based on Shuttle Elevation Derivatives at multiple Scales).
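    The "searching upward from the outlet" idea can be illustrated on a D8 flow-direction grid: starting from the outlet cell, collect every cell whose flow direction drains into the basin found so far. This is our own minimal reconstruction for illustration, not the Adam code:

    ```python
    from collections import deque

    # Standard D8 codes: (row, col) offset of the downstream neighbour
    # that a cell with that code drains to.
    D8 = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
          16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

    def upstream_cells(flow_dir, outlet):
        """Collect every cell draining to `outlet` by breadth-first search
        upward from the outlet (no prior sink-filling pass needed)."""
        rows, cols = len(flow_dir), len(flow_dir[0])
        basin, queue = {outlet}, deque([outlet])
        while queue:
            r, c = queue.popleft()
            for code, (dr, dc) in D8.items():
                nr, nc = r - dr, c - dc   # candidate upstream neighbour
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in basin:
                    # Neighbour belongs to the basin iff it drains to (r, c).
                    if flow_dir[nr][nc] == code:
                        basin.add((nr, nc))
                        queue.append((nr, nc))
        return basin
    ```

    Because the search only ever touches cells that actually drain to the outlet, the work scales with the basin of interest rather than with the whole DEM, which is one reason outlet-upward search is attractive on very large grids.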

  3. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    Purpose: Significant dosimetric benefits have previously been demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the measured distances with the corresponding model predictions. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery, and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively.
The reduced accuracy of gantry-to-phantom measurements was attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.
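    One way such site-specific buffers might be derived from measured model-vs-machine discrepancies is to fit a distribution to the errors and take a one-sided tail quantile at the target collision probability. A hedged sketch assuming roughly Gaussian discrepancies (the paper's exact statistical procedure is not stated in the abstract; names are illustrative):

    ```python
    from statistics import NormalDist

    def safety_buffer(discrepancies, p_collision):
        """Buffer distance that the model-vs-machine discrepancy exceeds
        with probability ~p_collision, under a Gaussian error assumption."""
        n = len(discrepancies)
        mean = sum(discrepancies) / n
        sigma = (sum((d - mean) ** 2 for d in discrepancies) / (n - 1)) ** 0.5
        z = NormalDist().inv_cdf(1.0 - p_collision)  # one-sided tail quantile
        return mean + z * sigma
    ```

    Lowering the acceptable collision probability (e.g. from 0.1% to 0.001%) pushes the quantile further into the tail and thus enlarges the buffer, consistent with the larger site-specific buffers reported for the 0.001% level.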

  4. Mathematical analysis of a sharp-diffuse interfaces model for seawater intrusion

    NASA Astrophysics Data System (ADS)

    Choquet, C.; Diédhiou, M. M.; Rosier, C.

    2015-10-01

    We consider a new model mixing sharp and diffuse interface approaches for seawater intrusion phenomena in free aquifers. More precisely, a phase field model is introduced in the boundary conditions on the virtual sharp interfaces. We thus include in the model the existence of diffuse transition zones but we preserve the simplified structure allowing front tracking. The three-dimensional problem then reduces to a two-dimensional model involving a strongly coupled system of partial differential equations of parabolic type describing the evolution of the depths of the two free surfaces, that is the interface between salt- and freshwater and the water table. We prove the existence of a weak solution for the model completed with initial and boundary conditions. We also prove that the depths of the two interfaces satisfy a coupled maximum principle.

  5. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

    The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  6. Simulation of evaporation of a sessile drop using a diffuse interface model

    NASA Astrophysics Data System (ADS)

    Sefiane, Khellil; Ding, Hang; Sahu, Kirti; Matar, Omar

    2008-11-01

    We consider here the evaporation dynamics of a Newtonian liquid sessile drop using an improved diffuse interface model. The governing equations for the drop and surrounding vapour are both solved, and separated by the order parameter (i.e. volume fraction), based on the previous work of Ding et al. JCP 2007. The diffuse interface model has been shown to be successful in modelling moving contact line problems (Jacqmin 2000; Ding and Spelt 2007, 2008). Here, a pinned contact line of the drop is assumed. The evaporative mass flux at the liquid-vapour interface is constitutively a function of the local temperature and is treated as a source term in the interface evolution equation, i.e. the Cahn-Hilliard equation. The model is validated by comparing its predictions with data available in the literature. The evaporative dynamics are illustrated in terms of drop snapshots, and a quantitative comparison with results using a free surface model is made.

  7. Sharp interface model of creep deformation in crystalline solids

    NASA Astrophysics Data System (ADS)

    Mishin, Y.; McFadden, G. B.; Sekerka, R. F.; Boettinger, W. J.

    2015-08-01

    We present a rigorous irreversible thermodynamics treatment of creep deformation of solid materials with interfaces described as geometric surfaces capable of vacancy generation and absorption and moving under the influence of local thermodynamic forces. The free energy dissipation rate derived in this work permits clear identification of thermodynamic driving forces for all stages of the creep process and formulation of kinetic equations of creep deformation and microstructure evolution. The theory incorporates capillary effects and reveals the different roles played by the interface free energy and interface stress. To describe the interaction of grain boundaries with stresses, we classify grain boundaries into coherent, incoherent and semicoherent, depending on their mechanical response to the stress. To prepare for future applications, we specialize the general equations to a particular case of a linear-elastic solid with a small concentration of vacancies. The proposed theory creates a thermodynamic framework for addressing more complex cases, such as creep in multicomponent alloys and cross-effects among vacancy generation/absorption and grain boundary motion and sliding.

  8. Coherent description of transport across the water interface: From nanodroplets to climate models

    NASA Astrophysics Data System (ADS)

    Wilhelmsen, Øivind; Trinh, Thuat T.; Lervik, Anders; Badam, Vijay Kumar; Kjelstrup, Signe; Bedeaux, Dick

    2016-03-01

    Transport of mass and energy across the vapor-liquid interface of water is of central importance in a variety of contexts such as climate models, weather forecasts, and power plants. We provide a complete description of the transport properties of the vapor-liquid interface of water within the framework of nonequilibrium thermodynamics. Transport across the planar interface is then described by three interface transfer coefficients; nine more coefficients extend the description to curved interfaces. We obtain all coefficients in the range 260-560 K by taking advantage of water evaporation experiments at low temperatures, nonequilibrium molecular dynamics with the TIP4P/2005 rigid-water-molecule model at high temperatures, and square gradient theory to represent the whole range. Square gradient theory is used to link the region where experiments are possible (low vapor pressures) to the region where nonequilibrium molecular dynamics can be done (high vapor pressures). This enables a description of transport across the planar water interface, interfaces of bubbles, and droplets, as well as interfaces of water structures with complex geometries. The results are likely to improve the description of evaporation and condensation of water at widely different scales; they open a route to improve the understanding of nanodroplets on a small scale and the precision of climate models on a large scale.

  9. Interface-capturing lattice Boltzmann equation model for two-phase flows

    NASA Astrophysics Data System (ADS)

    Lou, Qin; Guo, Zhaoli

    2015-01-01

    In this work, an interface-capturing lattice Boltzmann equation (LBE) model is proposed for two-phase flows. In the model, a Lax-Wendroff propagation scheme and a properly chosen equilibrium distribution function are employed. The Lax-Wendroff scheme is used to provide an adjustable Courant-Friedrichs-Lewy (CFL) number, and the equilibrium distribution is presented to remove the dependence of the relaxation time on the CFL number. As a result, the interface can be captured accurately by decreasing the CFL number. A theoretical expression is derived for the chemical potential gradient by solving the LBE directly for a two-phase system with a flat interface. The result shows that the gradient of the chemical potential is proportional to the square of the CFL number, which explains why the proposed model is able to capture the interface naturally with a small CFL number, and why large interface error exists in the standard LBE model. Numerical tests, including a one-dimensional flat interface problem, a two-dimensional circular droplet problem, and a three-dimensional spherical droplet problem, demonstrate that the proposed LBE model performs well and can capture a sharp interface with a suitable CFL number.
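    The Lax-Wendroff propagation idea at the heart of the model can be illustrated in the simplest setting, 1D linear advection on a periodic grid. This is a sketch of the classical scheme itself, not of the full LBE model:

    ```python
    def lax_wendroff_step(u, cfl):
        """One Lax-Wendroff update for 1D linear advection u_t + a*u_x = 0
        on a periodic grid, with CFL number cfl = a*dt/dx."""
        n = len(u)
        return [u[i]
                - 0.5 * cfl * (u[(i + 1) % n] - u[i - 1])
                + 0.5 * cfl ** 2 * (u[(i + 1) % n] - 2.0 * u[i] + u[i - 1])
                for i in range(n)]
    ```

    The scheme is conservative on a periodic grid (the flux differences telescope), and at cfl = 1 it reduces to an exact one-cell shift; decreasing the CFL number below 1, as the abstract describes, is what sharpens the captured interface at the cost of smaller time steps.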

  10. Interface-capturing lattice Boltzmann equation model for two-phase flows.

    PubMed

    Lou, Qin; Guo, Zhaoli

    2015-01-01

    In this work, an interface-capturing lattice Boltzmann equation (LBE) model is proposed for two-phase flows. In the model, a Lax-Wendroff propagation scheme and a properly chosen equilibrium distribution function are employed. The Lax-Wendroff scheme is used to provide an adjustable Courant-Friedrichs-Lewy (CFL) number, and the equilibrium distribution is presented to remove the dependence of the relaxation time on the CFL number. As a result, the interface can be captured accurately by decreasing the CFL number. A theoretical expression is derived for the chemical potential gradient by solving the LBE directly for a two-phase system with a flat interface. The result shows that the gradient of the chemical potential is proportional to the square of the CFL number, which explains why the proposed model is able to capture the interface naturally with a small CFL number, and why large interface error exists in the standard LBE model. Numerical tests, including a one-dimensional flat interface problem, a two-dimensional circular droplet problem, and a three-dimensional spherical droplet problem, demonstrate that the proposed LBE model performs well and can capture a sharp interface with a suitable CFL number. PMID:25679734

  11. Statistical modelling of networked human-automation performance using working memory capacity.

    PubMed

    Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja

    2014-01-01

    This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models. PMID:24308716
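    The simplest of the three statistical approaches compared above, classical linear regression, can be sketched in a few lines of ordinary least squares, e.g. regressing a performance score on working-memory capacity. Illustrative only; variable names and data are ours, not the study's:

    ```python
    def fit_linear(x, y):
        """Ordinary least-squares fit y ~ b0 + b1*x, e.g. predicting a
        supervisory-task performance score from WM capacity."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
              / sum((xi - mx) ** 2 for xi in x))
        b0 = my - b1 * mx
        return b0, b1
    ```

    As the summary notes, a linear model of this kind is adequate for prediction near the experimental conditions, while Gaussian processes and Bayesian networks are better suited to extrapolation and to inference about unobserved task conditions.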

  12. A Psycholinguistic Model for Simultaneous Translation, and Proficiency Assessment by Automated Acoustic Analysis of Discourse.

    NASA Astrophysics Data System (ADS)

    Yaghi, Hussein M.

    Two separate but related issues are addressed: how simultaneous translation (ST) works on a cognitive level and how such translation can be objectively assessed. Both of these issues are discussed in the light of qualitative and quantitative analyses of a large corpus of recordings of ST and shadowing. The proposed ST model utilises knowledge derived from a discourse analysis of the data, many accepted facts in the psychology tradition, and evidence from controlled experiments that are carried out here. This model has three advantages: (i) it is based on analyses of extended spontaneous speech rather than word-, syllable-, or clause -bound stimuli; (ii) it draws equally on linguistic and psychological knowledge; and (iii) it adopts a non-traditional view of language called 'the linguistic construction of reality'. The discourse-based knowledge is also used to develop three computerised systems for the assessment of simultaneous translation: one is a semi-automated system that treats the content of the translation; and two are fully automated, one of which is based on the time structure of the acoustic signals whilst the other is based on their cross-correlation. For each system, several parameters of performance are identified, and they are correlated with assessments rendered by the traditional, subjective, qualitative method. Using signal processing techniques, the acoustic analysis of discourse leads to the conclusion that quality in simultaneous translation can be assessed quantitatively with varying degrees of automation. It identifies as measures of performance (i) three content-based standards; (ii) four time management parameters that reflect the influence of the source on the target language time structure; and (iii) two types of acoustical signal coherence. Proficiency in ST is shown to be directly related to coherence and speech rate but inversely related to omission and delay. 
High proficiency is associated with a high degree of simultaneity and prudence, but a low degree of time dissipation and inactivity. All these automated systems are shown to be independently capable of identifying quality in translation, and their performance parameters capable of yielding congruent results regardless of whether the method used is fully or partly automated, and regardless of whether it is based on content, or on acoustical signals. This means that translation assessment does not need to be solely qualitative any more, and that these quantitative systems can be used to complement the traditional subjective method.

  13. Ab-initio molecular modeling of interfaces in tantalum-carbon system

    SciTech Connect

    Balani, Kantesh; Mungole, Tarang; Bakshi, Srinivasa Rao; Agarwal, Arvind

    2012-03-15

    Processing of ultrahigh temperature TaC ceramic material with sintering additives of B₄C and reinforcement of carbon nanotubes (CNTs) gives rise to possible formation of several interfaces (Ta₂C-TaC, TaC-CNT, Ta₂C-CNT, TaB₂-TaC, and TaB₂-CNT) that could influence the resultant properties. Current work focuses on interfaces developed during spark plasma sintering of the TaC system and performing ab initio molecular modeling of the interfaces generated during processing of TaC-B₄C and TaC-CNT composites. The energy of the various interfaces has been evaluated and compared with the TaC-Ta₂C interface. The iso-surface electronic contours are extracted from the calculations eliciting the enhanced stability of the TaC-CNT interface by 72.2%. CNTs form stable interfaces with Ta₂C and TaB₂ phases with a reduction in the energy by 35.8% and 40.4%, respectively. The computed Ta-C-B interfaces are also compared with experimentally observed interfaces in high resolution TEM images.

  14. Automated alignment-based curation of gene models in filamentous fungi

    PubMed Central

    2014-01-01

    Background Automated gene-calling is still an error-prone process, particularly for the highly plastic genomes of fungal species. Improvement through quality control and manual curation of gene models is a time-consuming process that requires skilled biologists and is only marginally performed. The wealth of available fungal genomes has not yet been exploited by an automated method that applies quality control of gene models in order to obtain more accurate genome annotations. Results We provide a novel method named alignment-based fungal gene prediction (ABFGP) that is particularly suitable for plastic genomes like those of fungi. It can assess gene models on a gene-by-gene basis making use of informant gene loci. Its performance was benchmarked on 6,965 gene models confirmed by full-length unigenes from ten different fungi. 79.4% of all gene models were correctly predicted by ABFGP. It improves the output of ab initio gene prediction software due to a higher sensitivity and precision for all gene model components. Applicability of the method was shown by revisiting the annotations of six different fungi, using gene loci from up to 29 fungal genomes as informants. Between 7,231 and 8,337 genes were assessed by ABFGP and for each genome between 1,724 and 3,505 gene model revisions were proposed. The reliability of the proposed gene models is assessed by an a posteriori introspection procedure of each intron and exon in the multiple gene model alignment. The total number and type of proposed gene model revisions in the six fungal genomes is correlated to the quality of the genome assembly, and to sequencing strategies used in the sequencing centre, highlighting different types of errors in different annotation pipelines. The ABFGP method is particularly successful in discovering sequence errors and/or disruptive mutations causing truncated and erroneous gene models. 
    Conclusions The ABFGP method is an accurate and fully automated quality control method for fungal gene catalogues that can be easily implemented into existing annotation pipelines. With the exponential release of new genomes, the ABFGP method will help decrease the number of gene models that require additional manual curation. PMID:24433567

  15. The enhanced Software Life Cycle Support Environment (ProSLCSE): Automation for enterprise and process modeling

    NASA Technical Reports Server (NTRS)

    Milligan, James R.; Dutton, James E.

    1993-01-01

    In this paper, we have introduced a comprehensive method for enterprise modeling that addresses the three important aspects of how an organization goes about its business. FirstEP includes infrastructure modeling, information modeling, and process modeling notations that are intended to be easy to learn and use. The notations stress the use of straightforward visual languages that are intuitive, syntactically simple, and semantically rich. ProSLCSE will be developed with automated tools and services to facilitate enterprise modeling and process enactment. In the spirit of FirstEP, ProSLCSE tools will also be seductively easy to use. Achieving fully managed, optimized software development and support processes will be long and arduous for most software organizations, and many serious problems will have to be solved along the way. ProSLCSE will provide the ability to document, communicate, and modify existing processes, which is the necessary first step.

  16. Intelligent sensor-model automated control of PMR-15 autoclave processing

    NASA Technical Reports Server (NTRS)

    Hart, S.; Kranbuehl, D.; Loos, A.; Hinds, B.; Koury, J.

    1992-01-01

    An intelligent sensor model system has been built and used for automated control of the PMR-15 cure process in the autoclave. The system uses frequency-dependent electromagnetic sensing (FDEMS), the Loos processing model, and the Air Force QPAL intelligent software shell. The Loos model is used to predict and optimize the cure process including the time-temperature dependence of the extent of reaction, flow, and part consolidation. The FDEMS sensing system in turn monitors, in situ, the removal of solvent, changes in the viscosity, reaction advancement and cure completion in the mold continuously throughout the processing cycle. The sensor information is compared with the optimum processing conditions from the model. The QPAL composite cure control system allows comparison of the sensor monitoring with the model predictions to be broken down into a series of discrete steps and provides a language for making decisions on what to do next regarding time-temperature and pressure.

  17. AIDE, A SYSTEM FOR DEVELOPING INTERACTIVE USER INTERFACES FOR ENVIRONMENTAL MODELS

    EPA Science Inventory

    Recent progress in environmental science and engineering has seen increasing use of interactive interfaces for computer models. Initial applications centered on the use of interactive software to assist in building complicated input sequences required by batch programs. From these ...

  18. A coupled damage-plasticity model for the cyclic behavior of shear-loaded interfaces

    NASA Astrophysics Data System (ADS)

    Carrara, P.; De Lorenzis, L.

    2015-12-01

    The present work proposes a novel thermodynamically consistent model for the behavior of interfaces under shear (i.e. mode-II) cyclic loading conditions. The interface behavior is defined coupling damage and plasticity. The admissible states' domain is formulated restricting the tangential interface stress to non-negative values, which makes the model suitable e.g. for interfaces with thin adherends. Linear softening is assumed so as to reproduce, under monotonic conditions, a bilinear mode-II interface law. Two damage variables govern respectively the loss of strength and of stiffness of the interface. The proposed model needs the evaluation of only four independent parameters, i.e. three defining the monotonic mode-II interface law, and one ruling the fatigue behavior. This limited number of parameters and their clear physical meaning facilitate experimental calibration. Model predictions are compared with experimental results on fiber reinforced polymer sheets externally bonded to concrete involving different load histories, and an excellent agreement is obtained.
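    The monotonic bilinear mode-II law that the cyclic model reduces to can be sketched as a simple traction-slip function: a linear elastic branch up to the peak, then linear softening down to full debonding. Parameter names are illustrative, not the paper's notation:

    ```python
    def bilinear_mode2_law(s, K, s0, sf):
        """Monotonic bilinear mode-II interface law: tangential stress as a
        function of slip s, with elastic stiffness K, slip at peak stress
        s0, and slip at full debonding sf (non-negative stress throughout,
        as the model's admissible-states domain requires)."""
        s = abs(s)
        if s <= s0:
            return K * s                              # elastic branch
        if s < sf:
            tau_max = K * s0
            return tau_max * (sf - s) / (sf - s0)     # linear softening
        return 0.0                                    # fully debonded
    ```

    Three of the model's four parameters define this monotonic envelope (here K, s0 and sf); the fourth, governing fatigue, controls how damage accumulates under cycling and does not appear in the monotonic law.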

  19. A phase-field point-particle model for particle-laden interfaces

    NASA Astrophysics Data System (ADS)

    Gu, Chuan; Botto, Lorenzo

    2014-11-01

    The irreversible attachment of solid particles to fluid interfaces is exploited in a variety of applications, such as froth flotation and Pickering emulsions. Critical in these applications is to predict particle transport in and near the interface, and the two-way coupling between the particles and the interface. While it is now possible to carry out particle-resolved simulations of these systems, simulating relatively large systems with many particles remains challenging. We present validation studies and preliminary results for a hybrid Eulerian-Lagrangian simulation method, in which the dynamics of the interface is fully resolved by a phase-field approach, while the particles are treated in the ``point-particle'' approximation. With this method, which represents a compromise between the competing needs of resolving particle- and interface-scale phenomena, we are able to simulate the adsorption of a large number of particles into the interface of drops, and particle-interface interactions during the spinodal coarsening of a multiphase system. While this method captures the adsorption phenomenon efficiently and with reasonable accuracy, subtle issues remain in the modelling of hydrodynamic and capillary forces for particles in contact with the interface.

  20. Effects of modeling errors on trajectory predictions in air traffic control automation

    NASA Technical Reports Server (NTRS)

    Jackson, Michael R. C.; Zhao, Yiyuan; Slattery, Rhonda

    1996-01-01

    Air traffic control automation synthesizes aircraft trajectories for the generation of advisories. Trajectory computation employs models of aircraft performance and weather conditions. In contrast, actual trajectories are flown in real aircraft under actual conditions. Since synthetic trajectories are used in landing scheduling and conflict probing, it is very important to understand the differences between computed trajectories and actual trajectories. This paper examines the effects of aircraft modeling errors on the accuracy of trajectory predictions in air traffic control automation. Three-dimensional point-mass aircraft equations of motion are assumed to be able to generate actual aircraft flight paths. Modeling errors are described as uncertain parameters or uncertain input functions. Pilot or autopilot feedback actions are expressed as equality constraints to satisfy control objectives. A typical trajectory is defined by a series of flight segments with different control objectives for each flight segment and conditions that define segment transitions. A constrained linearization approach is used to analyze trajectory differences caused by various modeling errors by developing a linear time-varying system that describes the trajectory errors, with expressions to transfer the trajectory errors across moving segment transitions. A numerical example is presented for a complete commercial aircraft descent trajectory consisting of several flight segments.

  1. Design Through Manufacturing: The Solid Model-Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2002-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts reflecting every detail of the finished product. Ideally, in the aerospace industry, these models should fulfill two very important functions: (1) provide numerical control information for automated manufacturing of precision parts, and (2) enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in aircraft and space vehicles. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. Presently, the process of preparing CAD models for FEA consumes a great deal of the analyst's time.

  2. Modeling interface roughness scattering in a layered seabed for normal-incident chirp sonar signals.

    PubMed

    Tang, Dajun; Hefner, Brian T

    2012-04-01

    Downward-looking sonar, such as the chirp sonar, is widely used as a sediment survey tool in shallow water environments. Inversion of geo-acoustic parameters from such sonar data requires the availability of forward models. An exact numerical model is developed to initiate the simulation of the acoustic field produced by such a sonar in the presence of multiple rough interfaces. The sediment layers are assumed to be fluid layers with non-intersecting rough interfaces. PMID:22502485

  3. An automated procedure for material parameter evaluation for viscoplastic constitutive models

    NASA Technical Reports Server (NTRS)

    Imbrie, P. K.; James, G. H.; Hill, P. S.; Allen, D. H.; Haisler, W. E.

    1988-01-01

    An automated procedure is presented for evaluating the material parameters in Walker's exponential viscoplastic constitutive model for metals at elevated temperature. Both physical and numerical approximations are utilized to compute the constants for Inconel 718 at 1100 F. When intermediate results are carefully scrutinized and engineering judgement applied, parameters may be computed which yield stress output histories that are in agreement with experimental results. A qualitative assessment of the theta-plot method for predicting the limiting value of stress is also presented. The procedure may also be used as a basis to develop evaluation schemes for other viscoplastic constitutive theories of this type.

  4. Streamflow forecasting using the modular modeling system and an object-user interface

    USGS Publications Warehouse

    Jeton, A.E.

    2001-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Bureau of Reclamation (BOR), developed a computer program to provide a general framework needed to couple disparate environmental resource models and to manage the necessary data. The Object-User Interface (OUI) is a map-based interface for models and modeling data. It provides a common interface to run hydrologic models and acquire, browse, organize, and select spatial and temporal data. One application is to assist river managers in utilizing streamflow forecasts generated with the Precipitation-Runoff Modeling System running in the Modular Modeling System (MMS), a distributed-parameter watershed model, and the National Weather Service Extended Streamflow Prediction (ESP) methodology.

  5. Automated Optimization of Water–Water Interaction Parameters for a Coarse-Grained Model

    PubMed Central

    2015-01-01

    We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder–Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, dielectric permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment. PMID:24460506
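
    The ParOpt framework itself is not reproduced in the abstract, but the Nelder–Mead simplex step it builds on is straightforward to sketch. Below is a generic, self-contained implementation (reflection, expansion, contraction, shrink) applied to a toy two-parameter objective standing in for the real target function of density, surface tension, permittivity, and diffusion; it is an illustration, not the authors' code.

```python
# Generic Nelder-Mead simplex minimizer, sketched as a stand-in for the
# optimizer the paper uses. Coefficients are the standard choices
# (reflect 1, expand 2, contract 0.5, shrink 0.5).

def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=500):
    n = len(x0)
    # initial simplex: x0 plus one vertex perturbed along each axis
    simplex = [list(x0)] + [
        [x + (step if i == j else 0.0) for j, x in enumerate(x0)]
        for i in range(n)
    ]
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if f(worst) - f(best) < tol:
            break
        # centroid of all vertices except the worst
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        reflect = [c + (c - w) for c, w in zip(centroid, worst)]
        if f(reflect) < f(best):
            expand = [c + 2.0 * (c - w) for c, w in zip(centroid, worst)]
            simplex[-1] = expand if f(expand) < f(reflect) else reflect
        elif f(reflect) < f(simplex[-2]):
            simplex[-1] = reflect
        else:
            contract = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
            if f(contract) < f(worst):
                simplex[-1] = contract
            else:  # shrink every vertex toward the best one
                simplex = [best] + [
                    [b + 0.5 * (x - b) for b, x in zip(best, p)]
                    for p in simplex[1:]
                ]
    return min(simplex, key=f)
```

    On a smooth two-parameter objective this converges without derivatives, which is the property that makes the simplex method attractive when each function evaluation is a full simulation.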

  6. PDB_REDO: automated re-refinement of X-ray structure models in the PDB

    PubMed Central

    Joosten, Robbie P.; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L.; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J.; Vriend, Gert

    2009-01-01

    Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour. PMID:22477769

  7. PDB_REDO: automated re-refinement of X-ray structure models in the PDB.

    PubMed

    Joosten, Robbie P; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J; Vriend, Gert

    2009-06-01

    Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour. PMID:22477769

  8. Automated determination of fibrillar structures by simultaneous model building and fiber diffraction refinement.

    PubMed

    Potrzebowski, Wojciech; André, Ingemar

    2015-07-01

    For highly oriented fibrillar molecules, three-dimensional structures can often be determined from X-ray fiber diffraction data. However, because of limited information content, structure determination and validation can be challenging. We demonstrate that automated structure determination of protein fibers can be achieved by guiding the building of macromolecular models with fiber diffraction data. We illustrate the power of our approach by determining the structures of six bacteriophage viruses de novo using fiber diffraction data alone and together with solid-state NMR data. Furthermore, we demonstrate the feasibility of molecular replacement from monomeric and fibrillar templates by solving the structure of a plant virus using homology modeling and protein-protein docking. The generated models explain the experimental data to the same degree as deposited reference structures but with improved structural quality. We also developed a cross-validation method for model selection. The results highlight the power of fiber diffraction data as structural constraints. PMID:25961412

  9. A gradient-descent-based approach for transparent linguistic interface generation in fuzzy models.

    PubMed

    Chen, Long; Chen, C L Philip; Pedrycz, Witold

    2010-10-01

    Linguistic interface is a group of linguistic terms or fuzzy descriptions that describe variables in a system utilizing corresponding membership functions. Its transparency completely or partly decides the interpretability of fuzzy models. This paper proposes a GRadiEnt-descEnt-based Transparent lInguistic iNterface Generation (GREETING) approach to overcome the disadvantage of traditional linguistic interface generation methods where the consideration of the interpretability aspects of linguistic interface is limited. In GREETING, the widely used interpretability criteria of linguistic interface are considered and optimized. The numeric experiments on the data sets from University of California, Irvine (UCI) machine learning databases demonstrate the feasibility and superiority of the proposed GREETING method. The GREETING method is also applied to fuzzy decision tree generation. It is shown that GREETING generates better transparent fuzzy decision trees in terms of better classification rates and comparable tree sizes. PMID:19963699

  10. Automated behavioral phenotyping reveals presymptomatic alterations in a SCA3 genetrap mouse model.

    PubMed

    Hübener, Jeannette; Casadei, Nicolas; Teismann, Peter; Seeliger, Mathias W; Björkqvist, Maria; von Hörsten, Stephan; Riess, Olaf; Nguyen, Huu Phuc

    2012-06-20

    Characterization of disease models of neurodegenerative disorders requires systematic and comprehensive phenotyping in a highly standardized manner. Therefore, automated high-resolution behavior test systems such as the homecage-based LabMaster system are of particular interest. We demonstrate the power of the automated LabMaster system by discovering previously unrecognized features of a recently characterized atxn3 mutant mouse model. This model shows neurological symptoms including gait ataxia, tremor, weight loss, and premature death at about 12 months of age, usually detectable only 2 weeks before the mice died. Moreover, using the LabMaster system we were able to detect hypoactivity in presymptomatic mutant mice in both the dark and the light phase. Additionally, we analyzed inflammation, immunological and hematological parameters, which indicated a reduced immune defense in phenotypic mice. Here we demonstrate that a detailed characterization even of organ systems that are usually not affected in SCA3 is important for further studies of pathogenesis and required for preclinical therapeutic studies. PMID:22749017

  11. Generating Phenotypical Erroneous Human Behavior to Evaluate Human-automation Interaction Using Model Checking

    PubMed Central

    Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.

    2012-01-01

    Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the statespace and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
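
    Hollnagel's zero-order phenotypes are simple transformations of a normative action sequence. The sketch below illustrates them on a hypothetical radiation-therapy action list; the function and action names are invented for illustration, not taken from the paper's formal models.

```python
# Illustrative sketch of zero-order phenotypes of erroneous action
# applied to a normative task sequence. Action names are hypothetical.

def omission(seq, i):
    """Skip the action at position i."""
    return seq[:i] + seq[i + 1:]

def repetition(seq, i):
    """Perform the action at position i twice."""
    return seq[:i + 1] + [seq[i]] + seq[i + 1:]

def jump(seq, i, j):
    """After position i, jump directly to position j, skipping actions."""
    return seq[:i + 1] + seq[j:]

def intrusion(seq, i, action):
    """Insert an action foreign to the task at position i."""
    return seq[:i] + [action] + seq[i:]

# hypothetical normative sequence for a radiation therapy task
normative = ["select_mode", "enter_dose", "confirm", "fire_beam"]
```

    Composing several of these single transformations yields the higher-order phenotypes the paper mentions.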

  12. A conceptual model of the automated credibility assessment of the volunteered geographic information

    NASA Astrophysics Data System (ADS)

    Idris, N. H.; Jackson, M. J.; Ishak, M. H. I.

    2014-02-01

    The use of Volunteered Geographic Information (VGI) in collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information has been seen to complement the maintenance of authoritative mapping data sources and to support the development of Digital Earth. The main barriers to the use of these data in supporting this bottom-up approach are the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assessing these data is to rely on an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches to automatically assess the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced applications. The conceptual model proposes two main components to be assessed - metadata and data. The metadata component comprises indicators for the hosting websites and the sources of the data/information. The data component comprises indicators to assess absolute and relative data positioning, attribute, thematic, temporal and geometric correctness and consistency. This paper suggests approaches to assess the components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess the correctness and consistency in the data component, we suggest a matching validation approach using emerging technologies from Linked Data infrastructures and third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality issues of data contributed by citizen providers on the Web.

  13. A comparison of molecular dynamics and diffuse interface model predictions of Lennard-Jones fluid evaporation

    SciTech Connect

    Barbante, Paolo; Frezzotti, Aldo; Gibelli, Livio

    2014-12-09

    The unsteady evaporation of a thin planar liquid film is studied by molecular dynamics simulations of a Lennard-Jones fluid. The obtained results are compared with the predictions of a diffuse interface model in which capillary Korteweg contributions are added to the hydrodynamic equations, in order to obtain a unified description of the liquid bulk, liquid-vapor interface and vapor region. Particular care has been taken in constructing a diffuse interface model matching the thermodynamic and transport properties of the Lennard-Jones fluid. The comparison of diffuse interface model and molecular dynamics results shows that, although good agreement is obtained in equilibrium conditions, remarkable deviations of diffuse interface model predictions from the reference molecular dynamics results are observed in the simulation of liquid film evaporation. It is also observed that molecular dynamics results are in good agreement with preliminary results obtained from a composite model which describes the liquid film by a standard hydrodynamic model and the vapor by the Boltzmann equation. The two mathematical models are connected by kinetic boundary conditions assuming a unit evaporation coefficient.

  14. An automated method to build groundwater model hydrostratigraphy from airborne electromagnetic data and lithological borehole logs

    NASA Astrophysics Data System (ADS)

    Marker, P. A.; Foged, N.; He, X.; Christiansen, A. V.; Refsgaard, J. C.; Auken, E.; Bauer-Gottwein, P.

    2015-02-01

    Large-scale integrated hydrological models are important decision support tools in water resources management. The largest source of uncertainty in such models is the hydrostratigraphic model. Geometry and configuration of hydrogeological units are often poorly determined from hydrogeological data alone. Due to sparse sampling in space, lithological borehole logs may overlook structures that are important for groundwater flow at larger scales. Good spatial coverage along with high spatial resolution makes airborne time-domain electromagnetic (AEM) data valuable for the structural input to large-scale groundwater models. We present a novel method to automatically integrate large AEM data-sets and lithological information into large-scale hydrological models. Clay-fraction maps are produced by translating geophysical resistivity into clay-fraction values using lithological borehole information. Voxel models of electrical resistivity and clay fraction are classified into hydrostratigraphic zones using k-means clustering. Hydraulic conductivity values of the zones are estimated by hydrological calibration using hydraulic head and stream discharge observations. The method is applied to a Danish case study. Benchmarked by comparison of simulated hydrological state variables, the cluster model performed competitively. Calibrations of 11 hydrostratigraphic cluster models with 1-11 hydraulic conductivity zones showed improved hydrological performance with increasing number of clusters. Beyond the 5-cluster model, hydrological performance did not improve. Because the method is reproducible and can be standardized and automated, we believe that hydrostratigraphic model generation with the proposed method has important prospects for groundwater models used in water resources management.
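
    The classification step can be illustrated with plain k-means over (resistivity, clay-fraction) voxel pairs. This is a generic sketch with invented values, not the authors' implementation:

```python
# Generic k-means sketch: cluster (resistivity, clay_fraction) voxels
# into k hydrostratigraphic zones. Data values are invented.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)       # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each voxel to its nearest centroid
            i = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:                      # recompute zone centroids
                centroids[i] = tuple(
                    sum(x) / len(members) for x in zip(*members))
    return centroids, clusters
```

    In the paper's workflow the resulting zones would then each receive a hydraulic conductivity value through hydrological calibration.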

  15. Automation, Control and Modeling of Compound Semiconductor Thin-Film Growth

    SciTech Connect

    Breiland, W.G.; Coltrin, M.E.; Drummond, T.J.; Horn, K.M.; Hou, H.Q.; Klem, J.F.; Tsao, J.Y.

    1999-02-01

    This report documents the results of a laboratory-directed research and development (LDRD) project on control and agile manufacturing in the critical metalorganic chemical vapor deposition (MOCVD) and molecular beam epitaxy (MBE) materials growth processes essential to high-speed microelectronics and optoelectronic components. This effort is founded on a modular and configurable process automation system that serves as a backbone allowing integration of process-specific models and sensors. We have developed and integrated MOCVD- and MBE-specific models in this system, and demonstrated the effectiveness of sensor-based feedback control in improving the accuracy and reproducibility of semiconductor heterostructures. In addition, within this framework we have constructed ''virtual reactor'' models for growth processes, with the goal of greatly shortening the epitaxial growth process development cycle.

  16. Automated High-Throughput Characterization of Single Neurons by Means of Simplified Spiking Models.

    PubMed

    Pozzorini, Christian; Mensi, Skander; Hagens, Olivier; Naud, Richard; Koch, Christof; Gerstner, Wulfram

    2015-06-01

    Single-neuron models are useful not only for studying the emergent properties of neural circuits in large-scale simulations, but also for extracting and summarizing in a principled way the information contained in electrophysiological recordings. Here we demonstrate that, using a convex optimization procedure we previously introduced, a Generalized Integrate-and-Fire model can be accurately fitted with a limited amount of data. The model is capable of predicting both the spiking activity and the subthreshold dynamics of different cell types, and can be used for online characterization of neuronal properties. A protocol is proposed that, combined with emergent technologies for automatic patch-clamp recordings, permits automated, in vitro high-throughput characterization of single neurons. PMID:26083597
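
    For a concrete flavor of this model class, a plain leaky integrate-and-fire neuron (a simplification of the Generalized Integrate-and-Fire model the abstract refers to) can be simulated in a few lines. All parameter values below are illustrative, not fitted to any recording:

```python
# Minimal leaky integrate-and-fire sketch; a simplification of the GIF
# model the abstract describes. Units: ms, mV. Values are illustrative.

def simulate_lif(current, dt=0.1, tau=20.0, r=10.0, v_rest=-70.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Integrate dV/dt = (-(V - v_rest) + r*I) / tau; return spike times."""
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(current):
        v += dt * (-(v - v_rest) + r * i_ext) / tau
        if v >= v_thresh:            # threshold crossing -> spike + reset
            spikes.append(step * dt)
            v = v_reset
    return spikes
```

    Fitting such a model to data then amounts to choosing the parameters that best reproduce recorded spike times and subthreshold voltage, which is the convex optimization step the paper automates.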

  17. Automated parameter optimization in modeling absorption spectra and resonance Raman excitation profiles.

    PubMed

    Shorr, Eric; Myers Kelley, Anne

    2007-09-14

    An automated method is described for optimizing the molecular parameters in simultaneous modeling of optical absorption spectra and resonance Raman excitation profiles. The method utilizes a previously developed Fortran routine that calculates absorption spectra and Raman excitation profiles for polyatomic molecules in solution from a model for the potential energy surfaces and spectral broadening mechanisms. It is combined here with an optimization routine from the commercial MATLAB package that iteratively adjusts the parameters of the molecular model to minimize the least-squared error between calculated and experimental spectra. Optimizations that typically require days to weeks of human time when performed interactively can be accomplished automatically in less than an hour of computer time. The method can handle large molecules (we show results for as many as 23 Raman-active modes) and mixtures of spectral broadening mechanisms (lifetime, Brownian oscillator, and inhomogeneous), and is robust toward noise or missing data points. PMID:17712457

  18. Automated High-Throughput Characterization of Single Neurons by Means of Simplified Spiking Models

    PubMed Central

    Hagens, Olivier; Naud, Richard; Koch, Christof; Gerstner, Wulfram

    2015-01-01

    Single-neuron models are useful not only for studying the emergent properties of neural circuits in large-scale simulations, but also for extracting and summarizing in a principled way the information contained in electrophysiological recordings. Here we demonstrate that, using a convex optimization procedure we previously introduced, a Generalized Integrate-and-Fire model can be accurately fitted with a limited amount of data. The model is capable of predicting both the spiking activity and the subthreshold dynamics of different cell types, and can be used for online characterization of neuronal properties. A protocol is proposed that, combined with emergent technologies for automatic patch-clamp recordings, permits automated, in vitro high-throughput characterization of single neurons. PMID:26083597

  19. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
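
    For a concrete example of this model family, the Keystroke-Level Model (a simple member of the GOMS family mentioned above) predicts task time as a sum of standard operator times. The sketch below uses the classic Card, Moran and Newell operator estimates (K = keystroke, P = point with mouse, H = home hands on device, M = mental preparation); treat the exact figures as approximate textbook values.

```python
# Keystroke-Level Model sketch: predicted task time is the sum of
# standard operator times (seconds). K uses the average-typist estimate.
KLM_TIMES = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}

def predict_time(ops):
    """Predict execution time for a string of KLM operators, e.g. 'MPK'."""
    return sum(KLM_TIMES[op] for op in ops)
```

    A designer can compare two candidate interfaces by encoding the same task as operator strings for each and comparing the predicted times.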

  20. Modeling and Preliminary Testing Socket-Residual Limb Interface Stiffness of Above-Elbow Prostheses

    PubMed Central

    Sensinger, Jonathon W.; Weir, Richard F. ff.

    2011-01-01

    The interface between the socket and residual limb can have a significant effect on the performance of a prosthesis. Specifically, knowledge of the rotational stiffness of the socket-residual limb (S-RL) interface is extremely useful in designing new prostheses and evaluating new control paradigms, as well as in comparing existing and new socket technologies. No previous studies, however, have examined the rotational stiffness of S-RL interfaces. To address this problem, a math model is compared to a more complex finite element analysis, to see if the math model sufficiently captures the main effects of S-RL interface rotational stiffness. Both of these models are then compared to preliminary empirical testing, in which a series of X-rays, called fluoroscopy, is taken to obtain the movement of the bone relative to the socket. Force data are simultaneously recorded, and the combination of force and movement data are used to calculate the empirical rotational stiffness of elbow S-RL interface. The empirical rotational stiffness values are then compared to the models, to see if values of Young’s modulus obtained in other studies at localized points may be used to determine the global rotational stiffness of the S-RL interface. Findings include agreement between the models and empirical results and the ability of persons to significantly modulate the rotational stiffness of their S-RL interface a little less than one order of magnitude. The floor and ceiling of this range depend significantly on socket length and co-contraction levels, but not on residual limb diameter or bone diameter. Measured trans-humeral S-RL interface rotational stiffness values ranged from 24–140 Nm/rad for the four subjects tested in this study. PMID:18403287
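
    Combining angle and torque measurements into a single stiffness value reduces, in the simplest case, to a least-squares slope of torque against angular displacement. The sketch below illustrates that reduction with invented data; it is not the authors' fluoroscopy pipeline.

```python
# Sketch: estimate rotational stiffness (Nm/rad) as the least-squares
# slope of torque vs. angular displacement. Data values are invented.

def rotational_stiffness(angles_rad, torques_nm):
    n = len(angles_rad)
    mean_a = sum(angles_rad) / n
    mean_t = sum(torques_nm) / n
    num = sum((a - mean_a) * (t - mean_t)
              for a, t in zip(angles_rad, torques_nm))
    den = sum((a - mean_a) ** 2 for a in angles_rad)
    return num / den
```

    A stiffness estimated this way from synchronized fluoroscopy and force data would fall, per the abstract, in the 24-140 Nm/rad range for trans-humeral interfaces.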

  1. Improved automated diagnosis of misfire in internal combustion engines based on simulation models

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Bond Randall, Robert

    2015-12-01

    In this paper, a new advance in the application of Artificial Neural Networks (ANNs) to the automated diagnosis of misfires in internal combustion engines (IC engines) is detailed. The automated diagnostic system comprises three stages: fault detection, fault localization and fault severity identification. In particular, in the severity identification stage, separate Multi-Layer Perceptron networks (MLPs) with saturating linear transfer functions were designed for individual speed conditions, so they could achieve finer classification. In order to obtain sufficient data for the network training, numerical simulation was used to simulate different ranges of misfires in the engine. The simulation models need to be updated and evaluated using experimental data, so a series of experiments was first carried out on the engine test rig to capture the vibration signals for both the normal condition and a range of misfires. Two methods were used for the misfire diagnosis: one is based on the torsional vibration signals of the crankshaft and the other on the angular acceleration signals (rotational motion) of the engine block. Following the signal processing of the experimental and simulation signals, the best features were selected as the inputs to the ANN networks. The ANN systems were trained using only the simulated data and tested using real experimental cases, indicating that the simulation model can be considered valid over a wider range of faults. The final results have shown that the diagnostic system based on simulation can efficiently diagnose misfire, including location and severity.
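
    The train-on-simulation, test-on-experiment idea can be illustrated with a toy nearest-centroid classifier over invented vibration features; this is a didactic stand-in for the MLP networks the paper actually uses, and the feature values and fault labels are hypothetical.

```python
# Toy illustration of training on simulated fault features and testing
# on a "measured" feature vector. Stand-in for the paper's MLPs.

def train_centroids(simulated):
    """simulated: dict mapping fault label -> list of feature vectors."""
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in simulated.items()}

def classify(centroids, vec):
    """Assign vec the label of its nearest class centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], vec))
```

    Validity of this scheme rests, as in the paper, on the simulated features matching the experimental ones closely enough.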

  2. FE Modeling of Guided Wave Propagation in Structures with Weak Interfaces

    NASA Astrophysics Data System (ADS)

    Hosten, Bernard; Castaings, Michel

    2005-04-01

    This paper describes the use of a Finite Element code for modeling the effects of weak interfaces on the propagation of low-order Lamb modes. The variable properties of the interface are modeled by uniform distributions of compression and shear springs that ensure the continuity of the stresses and impose a discontinuity in the displacement field. The method is tested by comparison with measurements that were presented at a previous QNDE conference (B. W. Drinkwater, M. Castaings, and B. Hosten, "The interaction of Lamb waves with solid-solid interfaces", Q.N.D.E. Vol. 22, (2003) 1064-1071). The interface was the contact between a rough elastomer with high internal damping loaded against one surface of a glass plate. Both normal and shear stiffnesses of the interface were quantified from the attenuation of the A0 and S0 Lamb waves caused by leakage of energy from the plate into the elastomer, measured at each step of a compressive loading. The FE model is formulated in the frequency domain, thus allowing the viscoelastic properties of the elastomer to be modeled by using complex moduli as input data. By introducing the interface stiffnesses into the code, the predicted guided-wave attenuations are compared to the experimental results to validate the numerical FE method.
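
    The spring-layer boundary conditions described here can be written out explicitly. In a common notation (ours, not the paper's), with normal and tangential stiffnesses K_n and K_t per unit area, the tractions are continuous across the interface while the displacements jump:

```latex
\sigma_{zz}\big|_1 = \sigma_{zz}\big|_2 = K_n \,\llbracket u_z \rrbracket , \qquad
\sigma_{xz}\big|_1 = \sigma_{xz}\big|_2 = K_t \,\llbracket u_x \rrbracket ,
```

    where \llbracket \cdot \rrbracket denotes the jump across the interface; a perfect interface is recovered in the limit K_n, K_t → ∞ and full debonding as K_n, K_t → 0.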

  3. Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter

    NASA Technical Reports Server (NTRS)

    Belknap, Shannon; Zhang, Michael

    2013-01-01

    The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to quickly and accurately perform 3D thermal analyses on damaged lower surface acreage tiles and the structures beneath the damaged locations on a Space Shuttle Orbiter. The 3D model builder created both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to quickly and accurately create a 3D model of a damaged lower surface tile on the orbiter. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which takes a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage and the correct tile type, tile thickness, structure thickness, and SIP thickness at the damage site, so that the model builder program can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.
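
    The source does not specify the configuration file format; as an illustration, a builder of this kind might read simple key-value pairs like the following (all field names and values are hypothetical):

```python
def read_damage_config(text):
    """Parse a simple key = value configuration, ignoring comments and blanks.
    The format and field names here are hypothetical, not from the source."""
    cfg = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments
        if not line:
            continue
        key, _, value = line.partition('=')
        cfg[key.strip()] = value.strip()
    return cfg

sample = """
# damaged-tile model inputs (illustrative values)
location_x = 120.5        # inches, orbiter coordinates
tile_type = LI-900
tile_thickness = 2.0      # inches
structure_thickness = 0.1
sip_thickness = 0.16
"""
cfg = read_damage_config(sample)
```

    The builder would then look up material properties for the named tile type and mesh the cavity using the parsed thicknesses.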

  4. Mathematical Modeling Research to Support the Development of Automated Insulin-Delivery Systems

    PubMed Central

    Steil, Garry M.; Reifman, Jaques

    2009-01-01

    The world leaders in glycemia modeling convened during the Eighth Annual Diabetes Technology Meeting in Bethesda, Maryland, on 14 November 2008, to discuss the current practices in mathematical modeling and make recommendations for its use in developing automated insulin-delivery systems. This report summarizes the collective views of the 25 participating experts in addressing the following four topics: current practices in modeling efforts for closed-loop control; framework for exchange of information and collaboration among research centers; major barriers for the development of accurate models; and key tasks for developing algorithms to build closed-loop control systems. Among the participants, the following main conclusions and recommendations were widely supported: Physiologic variance represents the single largest technical challenge to creating accurate simulation models. A Web site describing different models and the data supporting them should be made publicly available, with funding agencies and journals requiring investigators to provide open access to both models and data. Existing simulation models should be compared and contrasted, using the same evaluation and validation criteria, to better assess the state of the art, understand any inherent limitations in the models, and identify gaps in data and/or model capability. PMID:20144371

  5. TOBAGO — a semi-automated approach for the generation of 3-D building models

    NASA Astrophysics Data System (ADS)

    Gruen, Armin

    3-D city models are in increasing demand for a great number of applications. Photogrammetry is a relevant technology that can provide an abundance of geometric, topologic and semantic information concerning these models. The pressure to generate a large amount of data with a high degree of accuracy and completeness poses a great challenge to photogrammetry. The development of automated and semi-automated methods for the generation of those data sets is therefore a key issue in photogrammetric research. We present in this article a strategy and methodology for the efficient generation of even fairly complex building models. Within this concept we request the operator to measure the house roofs from a stereomodel in the form of an unstructured point cloud. According to our experience this can be done very quickly: even a non-experienced operator can measure several hundred roofs or roof units per day. In a second step we fit generic building models fully automatically to these point clouds. The structure information is inherently included in these building models. In this way geometric, topologic and even semantic data can be handed over to a CAD system, in our case AutoCAD, for further visualization and manipulation. The structuring is achieved in three steps. In the first step a classifier is initiated which recognizes the class of houses a particular roof point cloud belongs to. This recognition step is primarily based on the analysis of the number of ridge points. In the second and third steps the concrete topological relations between roof points are investigated and generic building models are fitted to the point clouds. Based on the technique of constraint-based reasoning, two geometric parsers solve this problem. We have tested the methodology under a variety of different conditions in several pilot projects. The results indicate the good performance of our approach. In addition we demonstrate how the results can be used for visualization (texture mapping) and animation (walk-throughs and fly-overs).

  6. Analytical model for thermal boundary conductance and equilibrium thermal accommodation coefficient at solid/gas interfaces

    NASA Astrophysics Data System (ADS)

    Giri, Ashutosh; Hopkins, Patrick E.

    2016-02-01

    We develop an analytical model for the thermal boundary conductance between a solid and a gas. By considering the thermal fluxes in the solid and the gas, we describe the transmission of energy across the solid/gas interface with diffuse mismatch theory. From the predicted thermal boundary conductances across solid/gas interfaces, the equilibrium thermal accommodation coefficient is determined and compared to predictions from molecular dynamics simulations on the model solid-gas systems. We show that our model is applicable for modeling the thermal accommodation of gases on solid surfaces at non-cryogenic temperatures and relatively strong solid-gas interactions (ε_sf ≳ k_B T).

  7. Framework for non-coherent interface models at finite displacement jumps and finite strains

    NASA Astrophysics Data System (ADS)

    Ottosen, Niels Saabye; Ristinmaa, Matti; Mosler, Jörn

    2016-05-01

    This paper deals with a novel constitutive framework suitable for non-coherent interfaces, such as cracks, undergoing large deformations in a geometrically exact setting. For this type of interface, the displacement field exhibits a jump across the interface. Within the engineering community, so-called cohesive zone models are frequently applied in order to describe non-coherent interfaces. However, for existing models to comply with the restrictions imposed by (a) thermodynamic consistency (e.g., the second law of thermodynamics), (b) balance equations (in particular, balance of angular momentum) and (c) material frame indifference, these models are essentially fiber models, i.e. models where the traction vector is collinear with the displacement jump. This constrains the ability to model shear and, in addition, excludes anisotropic effects. A novel, extended constitutive framework which is consistent with the above-mentioned fundamental physical principles is elaborated in this paper. In addition to the classical tractions associated with a cohesive zone model, the main idea is to consider additional tractions related to membrane-like forces and out-of-plane shear forces acting within the interface. For zero displacement jump, i.e. coherent interfaces, this framework degenerates to existing formulations presented in the literature. For hyperelasticity, the Helmholtz energy of the proposed framework depends on the displacement jump as well as on the tangent vectors of the interface with respect to the current configuration or, equivalently, on the displacement jump and the surface deformation gradient. It turns out that by defining the Helmholtz energy in terms of the invariants of these variables, all of the above-mentioned fundamental physical principles are automatically fulfilled. Extensions of the framework necessary for material degradation (damage) and plasticity are also covered.
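
    The "fiber model" restriction can be stated compactly: frame indifference and balance of angular momentum force the traction of a classical cohesive zone model to be collinear with the displacement jump, i.e. (in our notation, not the paper's)

```latex
\boldsymbol{t} \;=\; g\big(\lVert \llbracket \boldsymbol{u} \rrbracket \rVert\big)\,
\frac{\llbracket \boldsymbol{u} \rrbracket}{\lVert \llbracket \boldsymbol{u} \rrbracket \rVert},
```

    so no independent shear or anisotropic response can be prescribed; the extended framework lifts this restriction by adding membrane and out-of-plane shear tractions acting within the interface.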

  8. An automated shell for management of parametric dispersion/deposition modeling

    SciTech Connect

    Paddock, R.A.; Absil, M.J.G.; Peerenboom, J.P.; Newsom, D.E.; North, M.J.; Coskey, R.J. Jr.

    1994-03-01

    In 1993, the US Army tasked Argonne National Laboratory to perform a study of chemical agent dispersion and deposition for the Chemical Stockpile Emergency Preparedness Program using an existing Army computer model. The study explored a wide range of situations in terms of six parameters: agent type, quantity released, liquid droplet size, release height, wind speed, and atmospheric stability. A number of discrete values of interest were chosen for each parameter resulting in a total of 18,144 possible different combinations of parameter values. Therefore, the need arose for a systematic method to assemble the large number of input streams for the model, filter out unrealistic combinations of parameter values, run the model, and extract the results of interest from the extensive model output. To meet these needs, we designed an automated shell for the computer model. The shell processed the inputs, ran the model, and reported the results of interest. By doing so, the shell compressed the time needed to perform the study and freed the researchers to focus on the evaluation and interpretation of the model predictions. The results of the study are still under review by the Army and other agencies; therefore, it would be premature to discuss the results in this paper. However, the design of the shell could be applied to other hazards for which multiple-parameter modeling is performed. This paper describes the design and operation of the shell as an example for other hazards and models.
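
    The core of such a shell is enumerating parameter combinations, filtering out unrealistic ones, and queueing model runs. A minimal sketch (the parameter values and the screening rule below are illustrative, not the study's; the study's own discrete values yield 18,144 combinations):

```python
import itertools

# Illustrative discrete values for the six study parameters.
params = {
    "agent": ["GB", "VX", "HD"],
    "quantity_kg": [10, 100, 1000],
    "droplet_um": [100, 500, 1000],
    "release_height_m": [0, 10, 100],
    "wind_mps": [1, 3, 5, 10],
    "stability": ["A", "D", "F"],
}

def realistic(combo):
    # Example screening rule (invented): a very stable atmosphere (class F)
    # is incompatible with strong winds, so drop those combinations.
    return not (combo["stability"] == "F" and combo["wind_mps"] >= 5)

names = list(params)
combos = [dict(zip(names, values)) for values in itertools.product(*params.values())]
runs = [c for c in combos if realistic(c)]  # input streams to hand to the model
```

    Each surviving dictionary would then be rendered into one model input stream, run, and have its outputs of interest extracted.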

  9. A multilayered sharp interface model of coupled freshwater and saltwater flow in coastal systems: model development and application

    USGS Publications Warehouse

    Essaid, H.I.

    1990-01-01

    The model allows for regional simulation of coastal groundwater conditions, including the effects of saltwater dynamics on the freshwater system. Vertically integrated freshwater and saltwater flow equations incorporating the interface boundary condition are solved within each aquifer. Leakage through confining layers is calculated by Darcy's law, accounting for density differences across the layer. The locations of the interface tip and toe, within grid blocks, are tracked by linearly extrapolating the position of the interface. The model has been verified using available analytical solutions and experimental results and applied to the Soquel-Aptos basin, Santa Cruz County, California. -from Author

  10. Pilot interaction with cockpit automation 2: An experimental study of pilots' model and awareness of the Flight Management System

    NASA Technical Reports Server (NTRS)

    Sarter, Nadine B.; Woods, David D.

    1994-01-01

    Technological developments have made it possible to automate more and more functions on the commercial aviation flight deck and in other dynamic high-consequence domains. This increase in the degrees of freedom in design has shifted questions away from narrow technological feasibility. Many concerned groups, from designers and operators to regulators and researchers, have begun to ask questions about how we should use the possibilities afforded by technology skillfully to support and expand human performance. In this article, we report on an experimental study that addressed these questions by examining pilot interaction with the current generation of flight deck automation. Previous results on pilot-automation interaction derived from pilot surveys, incident reports, and training observations have produced a corpus of features and contexts in which human-machine coordination is likely to break down (e.g., automation surprises). We used these data to design a simulated flight scenario that contained a variety of probes designed to reveal pilots' mental model of one major component of flight deck automation: the Flight Management System (FMS). The events within the scenario were also designed to probe pilots' ability to apply their knowledge and understanding in specific flight contexts and to examine their ability to track the status and behavior of the automated system (mode awareness). Although pilots were able to 'make the system work' in standard situations, the results reveal a variety of latent problems in pilot-FMS interaction that can affect pilot performance in nonnormal time critical situations.

  11. An Improved Cochlea Model with a General User Interface

    NASA Astrophysics Data System (ADS)

    Duifhuis, H.; Kruseman, J. M.; van Hengel, P. W. J.

    2003-02-01

    We have developed a flexible 1D cochlea model to test hypotheses and data against physical and mathematical constraints. The model is flexible in the sense that several linear and nonlinear model characteristics can be selected, and different boundary conditions can be tested. The software model runs at a reasonable speed on a modern PC. As an example, we show the results of the model in comparison with the systematic study of the phase behavior (group delay) of distortion product otoacoustic emissions (DPOAEs) in the guinea pig (S. Schneider, V. Prijs and R. Schoonhoven, [9]). We also demonstrate the effects of some common non-physical boundary conditions. Finally, we briefly indicate that this model of the auditory periphery provides a superior front end for an ASR (automatic speech recognition) system.

  12. Analytic Element Modeling of Steady Interface Flow in Multilayer Aquifers Using AnAqSim.

    PubMed

    Fitts, Charles R; Godwin, Joshua; Feiner, Kathleen; McLane, Charles; Mullendore, Seth

    2015-01-01

    This paper presents the analytic element modeling approach implemented in the software AnAqSim for simulating steady groundwater flow with a sharp fresh-salt interface in multilayer (three-dimensional) aquifer systems. Compared with numerical methods for variable-density interface modeling, this approach allows quick model construction and can yield useful guidance about the three-dimensional configuration of an interface even at a large scale. The approach employs subdomains and multiple layers as outlined by Fitts (2010) with the addition of discharge potentials for shallow interface flow (Strack 1989). The following simplifying assumptions are made: steady flow, a sharp interface between fresh- and salt water, static salt water, and no resistance to vertical flow and hydrostatic heads within each fresh water layer. A key component of this approach is a transition to a thin fixed minimum fresh water thickness mode when the fresh water thickness approaches zero. This allows the solution to converge and determine the steady interface position without a long transient simulation. The approach is checked against the widely used numerical codes SEAWAT and SWI/MODFLOW and a hypothetical application of the method to a coastal wellfield is presented. PMID:24942663

  13. Interfaces with internal structures in generalized rock-paper-scissors models.

    PubMed

    Avelino, P P; Bazeia, D; Losano, L; Menezes, J; de Oliveira, B F

    2014-04-01

    In this work we investigate the development of stable dynamical structures along interfaces separating domains belonging to enemy partnerships in the context of cyclic predator-prey models with an even number of species N≥8. We use both stochastic and field theory simulations in one and two spatial dimensions, as well as analytical arguments, to describe the association at the interfaces of mutually neutral individuals belonging to enemy partnerships and to probe their role in the development of the dynamical structures at the interfaces. We identify an interesting behavior associated with the symmetric or asymmetric evolution of the interface profiles depending on whether N/2 is odd or even, respectively. We also show that the macroscopic evolution of the interface network is not very sensitive to the internal structure of the interfaces. Although this work focuses on cyclic predator-prey models with an even number of species, we argue that the results are expected to be quite generic in the context of spatial stochastic May-Leonard models. PMID:24827281

  14. CHANNEL MORPHOLOGY TOOL (CMT): A GIS-BASED AUTOMATED EXTRACTION MODEL FOR CHANNEL GEOMETRY

    SciTech Connect

    JUDI, DAVID; KALYANAPU, ALFRED; MCPHERSON, TIMOTHY; BERSCHEID, ALAN

    2007-01-17

    This paper describes an automated Channel Morphology Tool (CMT) developed in ArcGIS 9.1 environment. The CMT creates cross-sections along a stream centerline and uses a digital elevation model (DEM) to create station points with elevations along each of the cross-sections. The generated cross-sections may then be exported into a hydraulic model. Along with the rapid cross-section generation the CMT also eliminates any cross-section overlaps that might occur due to the sinuosity of the channels using the Cross-section Overlap Correction Algorithm (COCoA). The CMT was tested by extracting cross-sections from a 5-m DEM for a 50-km channel length in Houston, Texas. The extracted cross-sections were compared directly with surveyed cross-sections in terms of the cross-section area. Results indicated that the CMT-generated cross-sections satisfactorily matched the surveyed data.
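
    The heart of such a tool is sampling DEM elevations at station points along cross-sections perpendicular to the channel centerline. A minimal sketch with nearest-neighbour sampling on a synthetic grid (the actual CMT uses ArcGIS geoprocessing and a 5-m DEM):

```python
def cross_section(dem, dx, center, normal, half_width, step):
    """Station points (distance, elevation) along a line through `center`
    perpendicular to the channel; `normal` is a unit vector, `dx` the cell size."""
    stations = []
    d = -half_width
    while d <= half_width:
        x = center[0] + d * normal[0]
        y = center[1] + d * normal[1]
        i = int(round(y / dx))   # row index (nearest cell)
        j = int(round(x / dx))   # column index
        if 0 <= i < len(dem) and 0 <= j < len(dem[0]):
            stations.append((d, dem[i][j]))
        d += step
    return stations

# Tiny synthetic valley: elevation rises away from the channel at column 2.
dem = [[abs(j - 2) * 1.5 for j in range(5)] for _ in range(5)]
xs = cross_section(dem, dx=1.0, center=(2.0, 2.0), normal=(1.0, 0.0),
                   half_width=2.0, step=1.0)
```

    A production version would also need the overlap-correction step (COCoA) when neighbouring cross-sections on a sinuous reach intersect.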

  15. An Accuracy Assessment of Automated Photogrammetric Techniques for 3D Modeling of Complex Interiors

    NASA Astrophysics Data System (ADS)

    Georgantas, A.; Brédif, M.; Pierrot-Deseilligny, M.

    2012-07-01

    This paper presents a comparison of automatic photogrammetric techniques to terrestrial laser scanning for 3D modelling of complex interior spaces. We evaluate the automated photogrammetric techniques not only in terms of their geometric quality compared to laser scanning but also in terms of monetary cost and acquisition and computational time. For this purpose we chose as test site a modern building's stairway. APERO/MICMAC (IGN), an open-source photogrammetric software suite, was used for the production of the 3D photogrammetric point cloud, which was compared to the one acquired by a Leica ScanStation 2 laser scanner. After performing various qualitative and quantitative controls we present the advantages and disadvantages of each 3D modelling method applied in a complex interior of a modern building.

  16. The use of automated parameter searches to improve ion channel kinetics for neural modeling.

    PubMed

    Hendrickson, Eric B; Edgerton, Jeremy R; Jaeger, Dieter

    2011-10-01

    The voltage and time dependence of ion channels can be regulated, notably by phosphorylation, interaction with phospholipids, and binding to auxiliary subunits. Many parameter variation studies have set conductance densities free while leaving kinetic channel properties fixed as the experimental constraints on the latter are usually better than on the former. Because individual cells can tightly regulate their ion channel properties, we suggest that kinetic parameters may be profitably set free during model optimization in order to both improve matches to data and refine kinetic parameters. To this end, we analyzed the parameter optimization of reduced models of three electrophysiologically characterized and morphologically reconstructed globus pallidus neurons. We performed two automated searches with different types of free parameters. First, conductance density parameters were set free. Even the best resulting models exhibited unavoidable problems which were due to limitations in our channel kinetics. We next set channel kinetics free for the optimized density matches and obtained significantly improved model performance. Some kinetic parameters consistently shifted to similar new values in multiple runs across three models, suggesting the possibility for tailored improvements to channel models. These results suggest that optimized channel kinetics can improve model matches to experimental voltage traces, particularly for channels characterized under different experimental conditions than recorded data to be matched by a model. The resulting shifts in channel kinetics from the original template provide valuable guidance for future experimental efforts to determine the detailed kinetics of channel isoforms and possible modulated states in particular types of neurons. PMID:21243419
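
    As a toy stand-in for this kind of automated search, the sketch below fits a single kinetic parameter, the activation time constant of a first-order gating variable, to a target trace by random search; real optimizations adjust many channel parameters against recorded voltage traces:

```python
import math
import random

def gate_trace(tau, t_end=10.0, dt=0.1):
    """Activation of a first-order gating variable, 1 - exp(-t/tau)."""
    return [1.0 - math.exp(-t * dt / tau) for t in range(int(t_end / dt))]

def error(trace, target):
    """Sum-of-squares mismatch between model and target traces."""
    return sum((a - b) ** 2 for a, b in zip(trace, target))

target = gate_trace(2.5)            # "experimental" data with tau = 2.5 ms
random.seed(0)
best_tau, best_err = None, float("inf")
for _ in range(200):                # crude random search over tau in (0.5, 5)
    tau = random.uniform(0.5, 5.0)
    e = error(gate_trace(tau), target)
    if e < best_err:
        best_tau, best_err = tau, e
```

    Freeing kinetic parameters in this way is what allowed the paper's optimized models to outperform runs where only conductance densities were varied.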

  17. Modelling the rheology of anisotropic particles adsorbed on a two-dimensional fluid interface.

    PubMed

    Luo, Alan M; Sagis, Leonard M C; Öttinger, Hans Christian; De Michele, Cristiano; Ilg, Patrick

    2015-06-14

    We present a general approach based on nonequilibrium thermodynamics for bridging the gap between a well-defined microscopic model and the macroscopic rheology of particle-stabilised interfaces. Our approach is illustrated by starting with a microscopic model of hard ellipsoids confined to a planar surface, which is intended to simply represent a particle-stabilised fluid-fluid interface. More complex microscopic models can be readily handled using the methods outlined in this paper. From the aforementioned microscopic starting point, we obtain the macroscopic, constitutive equations using a combination of systematic coarse-graining, computer experiments and Hamiltonian dynamics. Exemplary numerical solutions of the constitutive equations are given for a variety of experimentally relevant flow situations to explore the rheological behaviour of our model. In particular, we calculate the shear and dilatational moduli of the interface over a wide range of surface coverages, ranging from the dilute isotropic regime, to the concentrated nematic regime. PMID:25921915

  18. Automated Feature-Based TLS Data Registration for 3D Building Modeling

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Kochi, N.; Kaneko, S.

    2012-07-01

    In this paper we present a novel method for the registration of point cloud data obtained using a terrestrial laser scanner (TLS). The final goal of our investigation is the automated reconstruction of CAD drawings and the 3D modeling of objects surveyed by TLS. Because objects are scanned from multiple positions, the individual point clouds need to be registered to the same coordinate system. We propose in this paper an automated feature-based registration procedure. Our proposed method does not require the definition of initial values or the placement of targets, and is robust against noise and background elements. A feature extraction procedure is performed for each point cloud as pre-processing. The registration of the point clouds from different viewpoints is then performed by utilizing the extracted features. The feature extraction method which we had developed previously (Kitamura, 2010) is used: planes and edges are extracted from the point cloud. By utilizing these features, the amount of information to process is reduced and the efficiency of the whole registration procedure is increased. In this paper, we describe the proposed algorithm and, in order to demonstrate its effectiveness, we show the results obtained by using real data.
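
    The registration step solves for the rigid transform aligning matched features from two scans. A minimal 2-D sketch using point correspondences (the paper matches extracted planes and edges in 3-D):

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rotation angle and translation mapping src points to dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x -= csx; y -= csy; u -= cdx; v -= cdy   # center both clouds
        sxx += x * u + y * v                      # cos component
        sxy += x * v - y * u                      # sin component
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)                # t = centroid_dst - R * centroid_src
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

# Points rotated by 30 degrees and shifted by (1, 2) should be recovered.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 1.0)]
ang = math.radians(30)
dst = [(math.cos(ang) * x - math.sin(ang) * y + 1.0,
        math.sin(ang) * x + math.cos(ang) * y + 2.0) for x, y in src]
theta, t = fit_rigid_2d(src, dst)
```

    In 3-D the same idea is usually solved with an SVD (Kabsch algorithm) over matched feature points or plane normals.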

  19. A methodology for model-based development and automated verification of software for aerospace systems

    NASA Astrophysics Data System (ADS)

    Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.

    Today's software for aerospace systems is typically very complex. This is due to the increasing number of features as well as the high demands on safety, reliability, and quality. This complexity also leads to significantly higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking are becoming more popular. The latter examine the whole state space and, consequently, result in full test coverage. Nevertheless, despite the obvious advantages, this technique is still rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive work flow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements of our methodology. The SCADE Suite is well established in the avionics and defense, rail transportation, energy and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.

  20. General MACOS Interface for Modeling and Analysis for Controlled Optical Systems

    NASA Technical Reports Server (NTRS)

    Sigrist, Norbert; Basinger, Scott A.; Redding, David C.

    2012-01-01

    The General MACOS Interface (GMI) for Modeling and Analysis for Controlled Optical Systems (MACOS) enables the use of MATLAB as a front-end for JPL's critical optical modeling package, MACOS. MACOS is JPL's in-house optical modeling software, which has proven to be a superb tool for advanced systems engineering of optical systems. GMI, coupled with MACOS, allows for seamless interfacing with modeling tools from other disciplines, making possible the integration of dynamics, structures, and thermal models with the addition of control systems for deformable optics and other actuated optics. This software package is designed as a tool for analysts to quickly and easily use MACOS without needing to be an expert at programming MACOS. The strength of MACOS is its ability to interface with various modeling/development platforms, allowing evaluation of system performance with thermal, mechanical, and optical modeling parameter variations. GMI provides an improved means for accessing selected key MACOS functionalities. The main objective of GMI is to marry the vast mathematical and graphical capabilities of MATLAB with the powerful optical analysis engine of MACOS, thereby providing a useful tool to anyone who can program in MATLAB. GMI also improves modeling efficiency by eliminating the need to write an interface function for each task/project, reducing error sources, speeding up user/modeling tasks, and making MACOS well suited for fast prototyping.

  1. Toward automated model building from video in computer-assisted diagnoses in colonoscopy

    NASA Astrophysics Data System (ADS)

    Koppel, Dan; Chen, Chao-I.; Wang, Yuan-Fang; Lee, Hua; Gu, Jia; Poirson, Allen; Wolters, Rolf

    2007-03-01

    A 3D colon model is an essential component of a computer-aided diagnosis (CAD) system in colonoscopy to assist surgeons in visualization, and surgical planning and training. This research is thus aimed at developing the ability to construct a 3D colon model from endoscopic videos (or images). This paper summarizes our ongoing research in automated model building in colonoscopy. We have developed the mathematical formulations and algorithms for modeling static, localized 3D anatomic structures within a colon that can be rendered from multiple novel view points for close scrutiny and precise dimensioning. This ability is useful for the scenario when a surgeon notices some abnormal tissue growth and wants a close inspection and precise dimensioning. Our modeling system uses only video images and follows a well-established computer-vision paradigm for image-based modeling. We extract prominent features from images and establish their correspondences across multiple images by continuous tracking and discrete matching. We then use these feature correspondences to infer the camera's movement. The camera motion parameters allow us to rectify images into a standard stereo configuration and calculate pixel movements (disparity) in these images. The inferred disparity is then used to recover 3D surface depth. The inferred 3D depth, together with texture information recorded in images, allow us to construct a 3D model with both structure and appearance information that can be rendered from multiple novel view points.
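
    Once images are rectified into a standard stereo configuration, depth recovery from disparity reduces to a one-line relation, z = f·B/d. A minimal sketch with illustrative numbers (not from the paper):

```python
def depth_from_disparity(f_px, baseline, disparity_px):
    """Depth for a rectified stereo pair: z = f * B / d, with f in pixels,
    B the baseline between viewpoints, and d the pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline / disparity_px

# Illustrative values: 800 px focal length, 4 mm camera motion, 16 px disparity.
z = depth_from_disparity(f_px=800.0, baseline=0.004, disparity_px=16.0)
```

    In the endoscopic setting the "baseline" is the inferred camera motion between frames, which is why accurate motion estimation precedes the depth step in the pipeline above.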

  2. Atomistic Cohesive Zone Models for Interface Decohesion in Metals

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin I.; Saether, Erik; Glaessgen, Edward H.

    2009-01-01

    Using a statistical mechanics approach, a cohesive-zone law in the form of a traction-displacement constitutive relationship characterizing the load transfer across the plane of a growing edge crack is extracted from atomistic simulations for use within a continuum finite element model. The methodology for the atomistic derivation of a cohesive-zone law is presented. This procedure can be implemented to build cohesive-zone finite element models for simulating fracture in nanocrystalline or ultrafine grained materials.
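
    For context, a widely used analytic traction-separation form (exponential, Xu-Needleman type) illustrates what a cohesive-zone law looks like; the paper instead extracts the curve numerically from atomistic simulations, so the expression and numbers below are illustrative only:

```python
import math

def traction(delta, sigma_max, delta0):
    """Normal traction vs. opening displacement for an exponential cohesive
    law, t = e * sigma_max * (delta/delta0) * exp(-delta/delta0).
    Peaks at delta = delta0 with value sigma_max."""
    return math.e * sigma_max * (delta / delta0) * math.exp(-delta / delta0)

# Illustrative parameters: peak cohesive strength 5 GPa at 0.1 nm opening.
t_peak = traction(1.0e-10, sigma_max=5.0e9, delta0=1.0e-10)
```

    A finite element implementation tabulates such a curve (analytic or atomistically derived) as the constitutive response of interface elements along the prospective crack path.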

  3. Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments

    NASA Astrophysics Data System (ADS)

    Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin

    Developments in computer technology over the last decade have changed the ways computers are used. Emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments, and especially their user interfaces, a challenging and time-consuming task. We propose a model-based approach which allows the user interface to be adapted at runtime to numerous (including unknown) contexts-of-use. Based on a user interface modelling language defining the fundamentals and constraints of the user interface, a runtime architecture exploits the description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting the applications also to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but rather adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as a running example to illustrate the approach.

  4. Interface characteristics of carbon nanotube reinforced polymer composites using an advanced pull-out model

    NASA Astrophysics Data System (ADS)

    Ahmed, Khondaker Sakil; Keng, Ang Kok

    2014-02-01

    An advanced pull-out model is presented to obtain the interface characteristics of a carbon nanotube (CNT) in a polymer composite. Since part of the CNT/matrix interface near the crack tip is considered to be debonded, an adhesive van der Waals (vdW) interaction must be present there, generally described in the form of a Lennard-Jones potential. A separate analytical model is also proposed to account for the normal cohesive stress caused by the vdW interaction along the debonded CNT/polymer interface. Analytical solutions for the axial and interfacial shear stress components are derived in closed form. The analytical results show that the contribution of the vdW interaction is significant and enhances the stress transfer potential of the CNT in the polymer composite. Parametric studies are also conducted to obtain the influence of key composite factors on the bonded and debonded interface. The results reveal that the parameter dependency of interfacial stress transfer is significantly higher for the perfectly bonded interface than for the debonded interface.

  5. A correction for Dupuit-Forchheimer interface flow models of seawater intrusion in unconfined coastal aquifers

    NASA Astrophysics Data System (ADS)

    Koussis, Antonis D.; Mazi, Katerina; Riou, Fabien; Destouni, Georgia

    2015-06-01

    Interface flow models that use the Dupuit-Forchheimer (DF) approximation for assessing the freshwater lens and the seawater intrusion in coastal aquifers lack representation of the gap through which fresh groundwater discharges to the sea. In these models, the interface outcrops unrealistically at the same point as the free surface, is too shallow and intersects the aquifer base too far inland, thus overestimating an intruding seawater front. To correct this shortcoming of DF-type interface solutions for unconfined aquifers, we here adapt the outflow gap estimate of an analytical 2-D interface solution for infinitely thick aquifers to fit the 50%-salinity contour of variable-density solutions for finite-depth aquifers. We further improve the accuracy of the interface toe location predicted with depth-integrated DF interface solutions by ∼20% (relative to the 50%-salinity contour of variable-density solutions) by combining the outflow-gap adjusted aquifer depth at the sea with a transverse-dispersion adjusted density ratio (Pool and Carrera, 2011), appropriately modified for unconfined flow. The effectiveness of the combined correction is exemplified for two regional Mediterranean aquifers, the Israel Coastal and Nile Delta aquifers.
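    A rough numerical illustration of the kind of correction discussed above: a sharp-interface (Ghyben-Herzberg) depth estimate computed first with the raw density ratio and then with a transverse-dispersion-adjusted ratio in the spirit of Pool and Carrera (2011). The parameter values and the exact form of the correction factor are assumptions for illustration, not values from this study.

```python
def interface_depth(h, eps):
    """Ghyben-Herzberg estimate: depth of the freshwater-seawater
    interface below sea level for freshwater head h and density ratio eps."""
    return h / eps

eps = 0.025                 # (rho_sea - rho_fresh) / rho_fresh, typical value
alpha_T, b = 0.1, 50.0      # transverse dispersivity and aquifer thickness (m), assumed
# Dispersion-adjusted density ratio (form assumed from Pool and Carrera, 2011)
eps_eff = eps * (1.0 - (alpha_T / b) ** (1.0 / 6.0))

h = 0.5                     # freshwater head above sea level (m), assumed
print(interface_depth(h, eps))      # ~20 m with the uncorrected ratio
print(interface_depth(h, eps_eff))  # deeper interface with the reduced ratio
```

The reduced effective density ratio moves the computed interface, which is how the adjusted sharp-interface solution is brought closer to the 50%-salinity contour of variable-density models.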

  6. Dosimetry Modeling for Predicting Radiolytic Production at the Spent Fuel - Water Interface

    SciTech Connect

    Miller, William H.; Kline, Amanda J.; Hanson, Brady D.

    2006-04-30

    Modeling of the alpha, beta, and gamma dose from spent fuel as a function of particle size and fuel-to-water ratio was examined. These doses will be combined with modeling of G values and interactions to determine the concentration of various species formed at the fuel-water interface and their effect on dissolution rates.

  7. Developing a User-process Model for Designing Menu-based Interfaces: An Exploratory Study.

    ERIC Educational Resources Information Center

    Ju, Boryung; Gluck, Myke

    2003-01-01

    The purpose of this study was to organize menu items based on a user-process model and implement a new version of current software for enhancing usability of interfaces. A user-process model was developed, drawn from actual users' understanding of their goals and strategies to solve their information needs by using Dervin's Sense-Making Theory…

  8. Rethinking Design Process: Using 3D Digital Models as an Interface in Collaborative Session

    ERIC Educational Resources Information Center

    Ding, Suining

    2008-01-01

    This paper describes a pilot study for an alternative design process by integrating a designer-user collaborative session with digital models. The collaborative session took place in a 3D AutoCAD class for a real world project. The 3D models served as an interface for designer-user collaboration during the design process. Students not only learned…

  10. A Translational Animal Model for Scar Compression Therapy Using an Automated Pressure Delivery System

    PubMed Central

    Alkhalil, A.; Tejiram, S.; Travis, T. E.; Prindeze, N. J.; Carney, B. C.; Moffatt, L. T.; Johnson, L. S.; Ramella-Roman, J.

    2015-01-01

    Background: Pressure therapy has been used to prevent and treat hypertrophic scars following cutaneous injury despite the limited understanding of its mechanism of action and the lack of an established animal model for optimizing its usage. Objectives: The aim of this work was to test and characterize a novel automated pressure delivery system designed to deliver steady and controllable pressure in a red Duroc swine hypertrophic scar model. Methods: Excisional wounds were created by dermatome on 6 red Duroc pigs and allowed to scar while assessed weekly via gross visual inspection, laser Doppler imaging, and biopsy. A portable novel automated pressure delivery system was mounted on developing scars (n = 6) for 2 weeks. Results: The device maintained a pressure range of 30 ± 4 mm Hg for more than 90% of the 2-week treatment period. Pressure readings outside this designated range were attributed to normal animal behavior and responses to healing progression. Gross scar examination by the Vancouver Scar Scale showed significant and sustained (>4 weeks) improvement in pressure-treated scars (P < .05). Histological examination of pressure-treated scars showed a significant decrease in dermal thickness compared with other groups (P < .05). Pressure-treated scars also showed increased perfusion by laser Doppler imaging during the treatment period compared with sham-treated and untreated scars (P < .05). Cellular quantification showed differential changes among treatment groups. Conclusion: These results illustrate the application of this technology in a Duroc swine hypertrophic scar model and its use in the evaluation and optimization of pressure therapy for wound healing and hypertrophic scar management. PMID:26171101

  11. Automated Generation of Fault Management Artifacts from a Simple System Model

    NASA Technical Reports Server (NTRS)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA), through querying a representation of the system in a SysML model. This work builds on the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and restructured it into an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that traverses the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to portray system behavior efficiently and to depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
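    The traversal idea can be sketched in a few lines. The component names, failure modes, and one-level "affects" propagation below are invented for illustration and are far simpler than a SysML model query, but they show how FMEA rows can be generated mechanically from a system representation rather than by hand.

```python
# Hypothetical sketch: emit FMEA-style rows by walking a small
# component graph. Names and relationships are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    failure_modes: list
    affects: list = field(default_factory=list)  # downstream components

def generate_fmea(components):
    """For each component and failure mode, produce a row:
    (component, mode, local effect, downstream components affected)."""
    rows = []
    for comp in components:
        for mode in comp.failure_modes:
            downstream = [c.name for c in comp.affects]
            rows.append((comp.name, mode,
                         f"loss of {comp.name} function", downstream))
    return rows

battery = Component("battery", ["cell short", "open circuit"])
radio = Component("radio", ["transmitter failure"])
battery.affects.append(radio)  # battery failures propagate to the radio

for row in generate_fmea([battery, radio]):
    print(row)
```

A real implementation would recurse through the control-system hierarchy rather than stopping one level down, but the output structure (one spreadsheet row per component/mode pair) is the same.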

  12. Examining Uncertainty in Demand Response Baseline Models and Variability in Automated Response to Dynamic Pricing

    SciTech Connect

    Mathieu, Johanna L.; Callaway, Duncan S.; Kiliccote, Sila

    2011-08-15

    Controlling electric loads to deliver power system services presents a number of interesting challenges. For example, changes in electricity consumption of Commercial and Industrial (C&I) facilities are usually estimated using counterfactual baseline models, and model uncertainty makes it difficult to precisely quantify control responsiveness. Moreover, C&I facilities exhibit variability in their response. This paper seeks to understand baseline model error and demand-side variability in responses to open-loop control signals (i.e. dynamic prices). Using a regression-based baseline model, we define several Demand Response (DR) parameters, which characterize changes in electricity use on DR days, and then present a method for computing the error associated with DR parameter estimates. In addition to analyzing the magnitude of DR parameter error, we develop a metric to determine how much observed DR parameter variability is attributable to real event-to-event variability versus simply baseline model error. Using data from 38 C&I facilities that participated in an automated DR program in California, we find that DR parameter errors are large. For most facilities, observed DR parameter variability is likely explained by baseline model error, not real DR parameter variability; however, a number of facilities exhibit real DR parameter variability. In some cases, the aggregate population of C&I facilities exhibits real DR parameter variability, resulting in implications for the system operator with respect to both resource planning and system stability.
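    The baseline-error problem described above can be made concrete with a toy calculation. The sketch below is not the paper's model: it fits a simple least-squares baseline (load versus temperature) on non-event days, estimates a DR "shed" parameter as the mean shortfall of observed event-day load below the baseline prediction, and attaches a standard error derived from the baseline's residual noise. All numbers are synthetic.

```python
# Illustrative regression-baseline sketch with invented data.
import numpy as np

rng = np.random.default_rng(0)
temp = rng.uniform(20, 35, 200)                  # non-event-day temperatures (degC)
load = 50 + 2.0 * temp + rng.normal(0, 3, 200)   # non-event-day load (kW)

# Ordinary least squares baseline: load ~ a + b * temp
A = np.column_stack([np.ones_like(temp), temp])
coef, *_ = np.linalg.lstsq(A, load, rcond=None)
resid_sd = np.std(load - A @ coef, ddof=2)       # baseline noise level

# Synthetic DR event day: the facility actually sheds 10 kW
temp_event = np.array([30.0, 31.0, 32.0])
load_event = 50 + 2.0 * temp_event - 10.0
baseline_pred = coef[0] + coef[1] * temp_event

dr_shed = np.mean(baseline_pred - load_event)    # estimated DR parameter
dr_se = resid_sd / np.sqrt(len(temp_event))      # error from baseline noise alone
print(f"estimated shed: {dr_shed:.1f} kW +/- {dr_se:.1f}")
```

Even in this idealized case the standard error is a substantial fraction of the shed, which is the paper's point: separating real event-to-event variability from baseline model error requires explicitly quantifying the latter.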

  13. Conservative phase-field lattice Boltzmann model for interface tracking equation.

    PubMed

    Geier, Martin; Fakhari, Abbas; Lee, Taehun

    2015-06-01

    Based on the phase-field theory, we propose a conservative lattice Boltzmann method to track the interface between two different fluids. The presented model recovers the conservative phase-field equation and conserves mass locally and globally. Two entirely different approaches are used to calculate the gradient of the phase field, which is needed in computation of the normal to the interface. One approach uses finite-difference stencils similar to many existing lattice Boltzmann models for tracking the two-phase interface, while the other one invokes central moments to calculate the gradient of the phase field without any finite differences involved. The former approach suffers from the nonlocality of the collision operator while the latter is entirely local making it highly suitable for massive parallel implementation. Several benchmark problems are carried out to assess the accuracy and stability of the proposed model. PMID:26172824
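    The finite-difference route mentioned above (one of the two gradient approaches) can be illustrated with a standard isotropic stencil over the D2Q9 lattice directions. This is a generic lattice-stencil sketch with the usual D2Q9 weights, not the paper's exact discretization.

```python
# Isotropic finite-difference gradient of a periodic 2-D phase field,
# summed over D2Q9 lattice directions (rest direction contributes nothing).
import numpy as np

e = np.array([[1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([1/9] * 4 + [1/36] * 4)   # standard D2Q9 weights

def phase_gradient(phi):
    """grad(phi) ~ (1/cs^2) * sum_k w_k e_k phi(x + e_k), cs^2 = 1/3 for D2Q9."""
    gx = np.zeros_like(phi)
    gy = np.zeros_like(phi)
    for (ex, ey), wk in zip(e, w):
        shifted = np.roll(np.roll(phi, -ex, axis=0), -ey, axis=1)
        gx += 3.0 * wk * ex * shifted
        gy += 3.0 * wk * ey * shifted
    return gx, gy

# Sanity check on a linear ramp phi = 0.1 * x (exact away from the wrap)
nx = 32
x = np.arange(nx)
phi = 0.1 * x[:, None] * np.ones((1, nx))
gx, gy = phase_gradient(phi)
print(gx[nx // 2, nx // 2], gy[nx // 2, nx // 2])  # interior: (0.1, 0.0)
```

The normal to the interface then follows as grad(phi)/|grad(phi)|. The stencil's reliance on neighboring lattice sites is exactly the nonlocality the abstract contrasts with the fully local central-moments formulation.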

  14. An Automated Application Framework to Model Disordered Materials Based on a High Throughput First Principles Approach

    NASA Astrophysics Data System (ADS)

    Oses, Corey; Yang, Kesong; Curtarolo, Stefano; Duke Univ Collaboration; UC San Diego Collaboration

    Predicting material properties of disordered systems remains a long-standing and formidable challenge in rational materials design. To address this issue, we introduce an automated software framework capable of modeling partial occupation within disordered materials using a high-throughput (HT) first principles approach. At the heart of the approach is the construction of supercells containing a virtually equivalent stoichiometry to the disordered material. All unique supercell permutations are enumerated and material properties of each are determined via HT electronic structure calculations. In accordance with a canonical ensemble of supercell states, the framework evaluates ensemble average properties of the system as a function of temperature. As proof of concept, we examine the framework's final calculated properties of a zinc chalcogenide (ZnS1-xSex), a wide-gap oxide semiconductor (MgxZn1-xO), and an iron alloy (Fe1-xCux) at various stoichiometries.
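    The ensemble-averaging step described above reduces to a Boltzmann-weighted sum once the unique supercells have been enumerated and their properties computed. The sketch below shows that step with invented energies, degeneracies, and band gaps; it is not the framework's code.

```python
# Canonical-ensemble average over unique supercell configurations.
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def ensemble_average(energies, degeneracies, properties, T):
    """<P>(T) = sum_i g_i P_i exp(-E_i / kT) / sum_i g_i exp(-E_i / kT)."""
    e = np.asarray(energies, dtype=float)
    e -= e.min()                                # shift for numerical stability
    w = np.asarray(degeneracies) * np.exp(-e / (K_B * T))
    return float(np.sum(w * np.asarray(properties)) / np.sum(w))

# Three hypothetical unique supercells: energy (eV), degeneracy, band gap (eV)
E = [0.00, 0.05, 0.12]
g = [1, 4, 2]
gap = [2.1, 2.3, 2.6]

for T in (300, 1000):
    print(T, round(ensemble_average(E, g, gap, T), 3))
```

At low temperature the average is dominated by the ground-state configuration; as temperature rises, higher-energy permutations contribute and the averaged property shifts accordingly, which is exactly the temperature dependence the framework reports.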

  15. ITER physics-safety interface: models and assessments

    SciTech Connect

    Uckan, N.A.; Putvinski, S.; Wesley, J.; Bartels, H-W.; Honda, T.; Amano, T.; Boucher, D.; Fujisawa, N.; Post, D.; Rosenbluth, M.

    1996-10-01

    Plasma operation conditions and physics requirements to be used as a basis for safety analysis studies are developed, and physics results motivated by safety considerations are presented for the ITER design. Physics guidelines and specifications for enveloping plasma dynamic events are characterized for Category I (operational events), Category II (likely events), and Category III (unlikely events). The safety-related physics areas considered are: (i) the effect of the plasma on machine safety (disruptions, runaway electrons, fast plasma shutdown) and (ii) the plasma response to an ex-vessel LOCA in the first wall, providing a potential passive plasma shutdown due to Be evaporation. The physics models and expressions developed are implemented in the safety analysis code SAFALY, which couples a 0-D dynamic plasma model to the thermal response of the in-vessel components. Results from SAFALY are presented.

  16. Modeling and Measurements for Mitigating Interference from Skyshine

    SciTech Connect

    Kernan, Warnick J; Mace, Emily K; Siciliano, Edward R; Conlin, Kenneth E; Flumerfelt, Eric L; Kouzes, Richard T; Woodring, Mitchell L

    2009-12-21

    Skyshine, the radiation scattered in the air above a high-activity gamma-ray source, can interfere with radiation portal monitor (RPM) systems at distances of up to several hundred meters. Pacific Northwest National Laboratory (PNNL) has been engaged in a campaign of measurements, design work, and modeling that explores methods of mitigating the effects of skyshine on outdoor measurements with sensitive instruments. An overview of our work on shielding against skyshine is reported in another paper at this conference. This paper concentrates on two topics: measurements and Monte Carlo transport modeling to characterize skyshine from an iridium-192 source, and testing of a prototype louver system, designed and fabricated at PNNL, as a shielding approach to limit the impact of skyshine interference on RPM systems.

  17. Modeling Nitrogen Cycle at the Surface-Subsurface Water Interface

    NASA Astrophysics Data System (ADS)

    Marzadri, A.; Tonina, D.; Bellin, A.

    2011-12-01

    Anthropogenic activities, primarily food and energy production, have altered the global nitrogen cycle, increasing the availability of reactive dissolved inorganic nitrogen, Nr, chiefly ammonium NH4+ and nitrate NO3-, in many streams worldwide. Increased Nr promotes biological activity, often with negative consequences such as water body eutrophication and emission of nitrous oxide gas, N2O, an important greenhouse gas, as a by-product of denitrification. The hyporheic zone may play an important role in processing Nr and returning it to the atmosphere. Here, we present a process-based three-dimensional semi-analytical model, which couples hyporheic hydraulics with biogeochemical reactions and transport equations. Transport is solved by means of particle tracking with negligible local dispersion, and biogeochemical reactions are modeled by linearized Monod kinetics with temperature-dependent reaction rate coefficients. Comparison of measured and predicted N2O emissions from 7 natural streams shows a good match. We apply our model to gravel bed rivers with alternate bar morphology to investigate the role of hyporheic hydraulics, depth of alluvium, relative availability of stream concentrations of NO3- and NH4+, and water temperature on nitrogen gradients within the sediment. Our model shows complex concentration dynamics within the hyporheic zone, which depend on the hyporheic residence time distribution and consequently on streambed morphology. Nitrogen gas emissions from the hyporheic zone increase with alluvium depth in large low-gradient streams but not in small steep streams. On the other hand, hyporheic water temperature influences nitrification/denitrification processes more in small steep streams than in large low-gradient streams, where long residence times offset the slow reaction rates induced by low temperatures.
The overall conclusion of our analysis is that river morphology has a major impact on biogeochemical processes such as nitrification and denitrification with a direct impact on the stream nutrient removal and transport.
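    One ingredient of such a model can be illustrated simply: the fraction of nitrate removed when a first-order (linearized Monod) denitrification rate acts over a distribution of hyporheic residence times. The rate constant, temperature adjustment, and lognormal residence-time distributions below are assumptions for illustration, not the paper's calibrated values.

```python
# Residence-time-weighted first-order removal, with a common
# theta-model temperature adjustment of the rate constant.
import numpy as np

rng = np.random.default_rng(1)

def k_arrhenius(k20, theta, T):
    """Temperature-adjusted first-order rate: k(T) = k20 * theta**(T - 20)."""
    return k20 * theta ** (T - 20.0)

def removal_fraction(k, residence_times):
    """Mean fraction removed, 1 - exp(-k t), over sampled residence times."""
    return float(np.mean(1.0 - np.exp(-k * residence_times)))

# Lognormal residence times (hours): short for a small steep stream,
# long for a large low-gradient stream (illustrative parameters).
t_small = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)
t_large = rng.lognormal(mean=3.0, sigma=0.8, size=100_000)

k = k_arrhenius(k20=0.05, theta=1.07, T=10.0)  # 1/h at 10 degC (assumed)
print("small steep stream:", round(removal_fraction(k, t_small), 3))
print("large low-gradient stream:", round(removal_fraction(k, t_large), 3))
```

The longer residence times of the large stream compensate for the slow low-temperature rate, mirroring the abstract's observation that temperature matters less where residence times are long.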

  18. Development of numerical models of interfaces for multiscale simulation of heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Astafurov, S. V.; Shilko, E. V.; Dimaki, A. V.; Psakhie, S. G.

    2015-10-01

    The paper is devoted to the development of a "third body" model in the framework of the movable cellular automaton method to take account of interfaces in heterogeneous interfacial materials. The main feature of the developed approach is the ability to directly account for the width and rheology of interphase/grain boundaries as well as their non-equilibrium state. Verification results show that the developed model can be effectively used to study the response of interfacial materials for which the "classical" implicit and explicit approaches to representing interfaces within discrete element methods are difficult to apply.

  19. Smart Frameworks and Self-Describing Models: Model Metadata for Automated Coupling of Hydrologic Process Components (Invited)

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2013-12-01

    Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. 
If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
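    The two ideas at the core of this entry, standardized control functions (initialize/update/finalize) and self-description via standard variable names, can be sketched in miniature. The components, variable names, and coupling loop below are invented toy examples in the spirit of BMI, not the actual BMI specification or CSDMS code.

```python
# BMI-style sketch: two toy components expose control and description
# functions, and a minimal "framework" wires them together by matching
# output variable names to input variable names.

class Rainfall:
    def initialize(self): self.rate = 0.0
    def update(self): self.rate = 5.0            # mm/h, constant forcing
    def finalize(self): pass
    def get_input_var_names(self): return []
    def get_output_var_names(self): return ["atmosphere_water__precipitation_rate"]
    def get_value(self, name): return self.rate

class Runoff:
    def initialize(self): self.rain, self.q = 0.0, 0.0
    def update(self): self.q = 0.6 * self.rain   # illustrative runoff coefficient
    def finalize(self): pass
    def get_input_var_names(self): return ["atmosphere_water__precipitation_rate"]
    def get_output_var_names(self): return ["land_surface_water__runoff_rate"]
    def get_value(self, name): return self.q
    def set_value(self, name, value): self.rain = value

def run_coupled(components, steps):
    """Match outputs to inputs by name, then advance all models in order."""
    for c in components:
        c.initialize()
    for _ in range(steps):
        for c in components:
            for name in c.get_input_var_names():
                for provider in components:
                    if name in provider.get_output_var_names():
                        c.set_value(name, provider.get_value(name))
            c.update()
    for c in components:
        c.finalize()

rain, runoff = Rainfall(), Runoff()
run_coupled([rain, runoff], steps=3)
print(runoff.q)
```

Because the framework never inspects component internals, only the standardized interface, a new BMI-enabled component can be swapped in without touching the coupling code, which is precisely the noninvasive interoperability the abstract argues for.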

  20. Model studies of Rayleigh instabilities via microdesigned interfaces

    SciTech Connect

    Glaeser, Andreas M.

    2000-10-17

    The energetic and kinetic properties of surfaces play a critical role in defining the microstructural changes that occur during sintering and high-temperature use of ceramics. Characterization of surface diffusion in ceramics is particularly difficult, and significant variations in reported values of surface diffusivities arise even in well-studied systems. Effects of impurities, surface energy anisotropy, and the onset of surface attachment limited kinetics (SALK) are believed to contribute to this variability. An overview of the use of Rayleigh instabilities as a means of characterizing surface diffusivities is presented. The development of models of morphological evolution that account for effects of surface energy anisotropy is reviewed, and the potential interplay between impurities and surface energy anisotropy is addressed. The status of experimental studies of Rayleigh instabilities in sapphire utilizing lithographically introduced pore channels of controlled geometry and crystallography is summarized. Results of model studies indicate that impurities can significantly influence both the spatial and temporal characteristics of Rayleigh instabilities; this is attributed at least in part to impurity effects on the surface energy anisotropy. Related model experiments indicate that the onset of SALK may also contribute significantly to apparent variations in surface diffusion coefficients.

  1. Accident prediction model for railway-highway interfaces.

    PubMed

    Oh, Jutaek; Washington, Simon P; Nam, Doohee

    2006-03-01

    Considerable past research has explored relationships between vehicle accidents and geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States and the safety effects of crossing elements obtained using Korea data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad crossing related vehicle crashes. PMID:16297846

  2. Modeling and matching of landmarks for automation of Mars Rover localization

    NASA Astrophysics Data System (ADS)

    Wang, Jue

    The Mars Exploration Rover (MER) mission, begun in January 2004, has been extremely successful. However, decision-making for many operation tasks of the current MER mission and the 1997 Mars Pathfinder mission is performed on Earth through a predominantly manual, time-consuming process. Unmanned planetary rover navigation is ideally expected to reduce rover idle time, diminish the need for entering safe-mode, and dynamically handle opportunistic science events without required communication to Earth. Successful automation of rover navigation and localization during extraterrestrial exploration requires that accurate position and attitude information can be received by a rover and that the rover has the support of simultaneous localization and mapping. An integrated approach with Bundle Adjustment (BA) and Visual Odometry (VO) can efficiently refine the rover position. However, during the MER mission, BA is done manually because of the difficulty of automating the selection of cross-site tie points. This dissertation proposes an automatic approach to select cross-site tie points from multiple rover sites based on the methods of landmark extraction, landmark modeling, and landmark matching. The first step in this approach is that important landmarks such as craters and rocks are defined. Methods of automatic feature extraction and landmark modeling are then introduced. Complex models with orientation angles and simple models without those angles are compared. The results have shown that simple models can provide reasonably good results. Next, the sensitivity of different modeling parameters is analyzed. Based on this analysis, cross-site rocks are matched through two complementary stages: rock distribution pattern matching and rock model matching. In addition, a preliminary experiment on orbital and ground landmark matching is also briefly introduced.
Finally, the reliability of the cross-site tie point selection is validated by fault detection, which considers the mapping capability of the MER cameras and the causes of mismatches. Fault detection strategies are applied in each step of the cross-site tie point selection to automatically verify the accuracy. Mismatches are excluded and localization errors are minimized. The method proposed in this dissertation is demonstrated with datasets from the 2004 MER mission (traverse of 318 m) as well as simulated test data at Silver Lake, California (traverse of 5.5 km). The accuracy analysis demonstrates that the algorithm is efficient at automatically selecting a sufficient number of well-distributed high-quality tie points to link the ground images into an image network for BA. The method worked successfully along a continuous 1.1 km stretch. With BA performed, highly accurate maps can be created to help the rover navigate precisely and automatically. The method also enables autonomous long-range Mars rover localization.
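    The flavor of the rock-matching stage can be conveyed with a toy nearest-neighbor matcher: rocks observed from site A are shifted into site B's frame by a trial inter-site translation, and each is paired with the closest site-B rock within a tolerance. The coordinates, shift, and tolerance are invented; the dissertation's pattern-matching and model-matching stages are considerably more elaborate.

```python
# Toy cross-site rock matcher (illustrative, 2-D, translation only).
import numpy as np

def match_rocks(site_a, site_b, shift, tol=0.5):
    """Pair each site-A rock with its nearest site-B rock within tol
    metres after shifting A into B's frame; returns (i_a, i_b) pairs."""
    a = np.asarray(site_a, dtype=float) + np.asarray(shift, dtype=float)
    b = np.asarray(site_b, dtype=float)
    pairs = []
    for i, pt in enumerate(a):
        d = np.linalg.norm(b - pt, axis=1)
        j = int(np.argmin(d))
        if d[j] < tol:
            pairs.append((i, j))
    return pairs

site_a = [(0.0, 0.0), (2.0, 1.0), (5.0, 3.0)]       # rock positions, site A frame
site_b = [(10.1, 0.0), (12.0, 1.1), (30.0, 30.0)]   # rock positions, site B frame
print(match_rocks(site_a, site_b, shift=(10.0, 0.0)))
```

Only the rocks whose shifted positions land close to a site-B rock are matched; unmatched rocks (like the third in each list here) are the candidates that fault detection would flag or discard.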

  3. Tape-Drop Transient Model for In-Situ Automated Tape Placement of Thermoplastic Composites

    NASA Technical Reports Server (NTRS)

    Costen, Robert C.; Marchello, Joseph M.

    1998-01-01

    Composite parts of nonuniform thickness can be fabricated by in-situ automated tape placement (ATP) if the tape can be started and stopped at interior points of the part instead of always at its edges. This technique is termed start/stop-on-the-part or, alternatively, tape-add/tape-drop. The resulting thermal transients need to be managed in order to achieve net shape and maintain uniform interlaminar weld strength and crystallinity. Starting-on-the-part has been treated previously. This paper continues the study with a thermal analysis of stopping-on-the-part. The thermal source is switched off when the trailing end of the tape enters the nip region of the laydown/consolidation head. The thermal transient is determined by a Fourier-Laplace transform solution of the two-dimensional, time-dependent thermal transport equation. This solution requires that the Peclet number Pe (the dimensionless ratio of advective to diffusive heat transport) be independent of time and much greater than 1. Plotted isotherms show that the trailing tape-end cools more rapidly than the downstream portions of tape. This cooling can weaken the bond near the tape end; however, the length of the affected region is found to be less than 2 mm. To achieve net shape, the consolidation head must continue to move after cut-off until the temperature on the weld interface decreases to the glass transition temperature. The time and elapsed distance for this condition to occur are computed for the Langley ATP robot applying PEEK/carbon fiber composite tape and for two upgrades in robot performance. The elapsed distance after cut-off ranges from about 1 mm for the present robot to about 1 cm for the second upgrade.
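    The Pe >> 1 requirement is easy to check for plausible operating conditions. The head speed, length scale, and thermal diffusivity below are assumed round numbers for illustration, not the paper's values.

```python
# Quick plausibility check of the Pe >> 1 assumption: Pe = v * L / alpha.
v = 0.05      # consolidation head speed, m/s (assumed)
L = 0.01      # nip-region length scale, m (assumed)
alpha = 4e-7  # through-thickness thermal diffusivity, m^2/s (typical order
              # of magnitude for a carbon-fiber/thermoplastic laminate)

Pe = v * L / alpha
print(Pe)
```

With these values Pe is on the order of a thousand, so advection along the tape dominates diffusion and the transform solution's assumption is comfortably satisfied.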

  4. Progress and challenges in the automated construction of Markov state models for full protein systems.

    PubMed

    Bowman, Gregory R; Beauchamp, Kyle A; Boxer, George; Pande, Vijay S

    2009-09-28

    Markov state models (MSMs) are a powerful tool for modeling both the thermodynamics and kinetics of molecular systems. In addition, they provide a rigorous means to combine information from multiple sources into a single model and to direct future simulations/experiments to minimize uncertainties in the model. However, constructing MSMs is challenging because doing so requires decomposing the extremely high dimensional and rugged free energy landscape of a molecular system into long-lived states, also called metastable states. Thus, their application has generally required significant chemical intuition and hand-tuning. To address this limitation we have developed a toolkit for automating the construction of MSMs called MSMBUILDER (available at https://simtk.org/home/msmbuilder). In this work we demonstrate the application of MSMBUILDER to the villin headpiece (HP-35 NleNle), one of the smallest and fastest folding proteins. We show that the resulting MSM captures both the thermodynamics and kinetics of the original molecular dynamics of the system. As a first step toward experimental validation of our methodology we show that our model provides accurate structure prediction and that the longest timescale events correspond to folding. PMID:19791846
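    The core numerical step behind MSM construction (after the hard part, state decomposition, is done) is counting transitions at a fixed lag time, row-normalizing to a transition probability matrix, and reading implied timescales off the eigenvalues. The sketch below is a generic illustration of that step on a synthetic two-state trajectory, not MSMBuilder's API.

```python
# Generic MSM construction from a discrete state trajectory.
import numpy as np

def build_msm(traj, n_states, lag):
    """Maximum-likelihood transition matrix from transition counts at lag."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-lag], traj[lag:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows == 0, 1, rows)

def implied_timescales(T, lag):
    """t_i = -lag / ln(lambda_i) for eigenvalues below the stationary one."""
    lam = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return [-lag / np.log(l) for l in lam[1:] if 0 < l < 1]

# Synthetic two-state trajectory with rare switches (metastability)
rng = np.random.default_rng(2)
traj, state = [], 0
for _ in range(5000):
    if rng.random() < 0.02:
        state = 1 - state
    traj.append(state)

T = build_msm(traj, n_states=2, lag=1)
print(np.round(T, 3))                 # diagonal near 0.98: two metastable states
print(implied_timescales(T, lag=1))   # slow relaxation timescale, ~tens of steps
```

The long implied timescale recovered here corresponds to the rare switching process, just as the longest timescale in the villin MSM corresponds to folding.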

  5. Progress and challenges in the automated construction of Markov state models for full protein systems

    PubMed Central

    Bowman, Gregory R.; Beauchamp, Kyle A.; Boxer, George; Pande, Vijay S.

    2009-01-01

    Markov state models (MSMs) are a powerful tool for modeling both the thermodynamics and kinetics of molecular systems. In addition, they provide a rigorous means to combine information from multiple sources into a single model and to direct future simulations/experiments to minimize uncertainties in the model. However, constructing MSMs is challenging because doing so requires decomposing the extremely high dimensional and rugged free energy landscape of a molecular system into long-lived states, also called metastable states. Thus, their application has generally required significant chemical intuition and hand-tuning. To address this limitation we have developed a toolkit for automating the construction of MSMs called MSMBUILDER (available at https://simtk.org/home/msmbuilder). In this work we demonstrate the application of MSMBUILDER to the villin headpiece (HP-35 NleNle), one of the smallest and fastest folding proteins. We show that the resulting MSM captures both the thermodynamics and kinetics of the original molecular dynamics of the system. As a first step toward experimental validation of our methodology we show that our model provides accurate structure prediction and that the longest timescale events correspond to folding. PMID:19791846

  6. Progress and challenges in the automated construction of Markov state models for full protein systems

    NASA Astrophysics Data System (ADS)

    Bowman, Gregory R.; Beauchamp, Kyle A.; Boxer, George; Pande, Vijay S.

    2009-09-01

    Markov state models (MSMs) are a powerful tool for modeling both the thermodynamics and kinetics of molecular systems. In addition, they provide a rigorous means to combine information from multiple sources into a single model and to direct future simulations/experiments to minimize uncertainties in the model. However, constructing MSMs is challenging because doing so requires decomposing the extremely high dimensional and rugged free energy landscape of a molecular system into long-lived states, also called metastable states. Thus, their application has generally required significant chemical intuition and hand-tuning. To address this limitation we have developed a toolkit for automating the construction of MSMs called MSMBUILDER (available at https://simtk.org/home/msmbuilder). In this work we demonstrate the application of MSMBUILDER to the villin headpiece (HP-35 NleNle), one of the smallest and fastest folding proteins. We show that the resulting MSM captures both the thermodynamics and kinetics of the original molecular dynamics of the system. As a first step toward experimental validation of our methodology we show that our model provides accurate structure prediction and that the longest timescale events correspond to folding.

  7. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.

    2016-03-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  8. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.

    2015-11-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a~link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential-fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  9. A graphical user interface for numerical modeling of acclimation responses of vegetation to climate change

    NASA Astrophysics Data System (ADS)

    Le, Phong V. V.; Kumar, Praveen; Drewry, Darren T.; Quijano, Juan C.

    2012-12-01

    Ecophysiological models that vertically resolve vegetation canopy states are becoming a powerful tool for studying the exchange of mass, energy, and momentum between the land surface and the atmosphere. A mechanistic multilayer canopy-soil-root system model (MLCan) developed by Drewry et al. (2010a) has been used to capture the emergent vegetation responses to elevated atmospheric CO2 for both C3 and C4 plants under various climate conditions. However, processing input data and setting up such a model can be time-consuming and error-prone. In this paper, a graphical user interface that has been developed for MLCan is presented. The design of this interface aims to provide visualization capabilities and interactive support for processing input meteorological forcing data and vegetation parameter values to facilitate the use of this model. In addition, the interface also provides graphical tools for analyzing the forcing data and simulated numerical results. The model and its interface are both written in the MATLAB programming language. Finally, an application of this model package for capturing the ecohydrological responses of three bioenergy crops (maize, miscanthus, and switchgrass) to local environmental drivers at two different sites in the Midwestern United States is presented.

  10. Modeling the Assembly of Polymer-Grafted Nanoparticles at Oil-Water Interfaces.

    PubMed

    Yong, Xin

    2015-10-27

    Using dissipative particle dynamics (DPD), I model the interfacial adsorption and self-assembly of polymer-grafted nanoparticles at a planar oil-water interface. The amphiphilic core-shell nanoparticles irreversibly adsorb to the interface and create a monolayer covering the interface. The polymer chains of the adsorbed nanoparticles are significantly deformed by surface tension to conform to the interface. I quantitatively characterize the properties of the particle-laden interface and the structure of the monolayer in detail at different surface coverages. I observe that the monolayer of particles grafted with long polymer chains undergoes an intriguing liquid-crystalline-amorphous phase transition in which the relationship between the monolayer structure and the surface tension/pressure of the interface is elucidated. Moreover, my results indicate that the amorphous state at high surface coverage is induced by the anisotropic distribution of the randomly grafted chains on each particle core, which leads to noncircular in-plane morphology formed under excluded volume effects. These studies provide a fundamental understanding of the interfacial behavior of polymer-grafted nanoparticles for achieving complete control of the adsorption and subsequent self-assembly. PMID:26439456

  11. Library Automation: A Year on.

    ERIC Educational Resources Information Center

    Electronic Library, 1997

    1997-01-01

    A follow-up interview with librarians from Hong Kong, Mexico, Australia, Canada, and New Zealand about library automation systems in their libraries and their plans for the future. Discusses system performance, upgrades, services, resources, intranets, trends in automation, Web interfaces, full-text image/document systems, document delivery, OPACs…

  12. Photometric model of diffuse surfaces described as a distribution of interfaced Lambertian facets.

    PubMed

    Simonot, Lionel

    2009-10-20

    The Lambertian model for diffuse reflection is widely used for the sake of its simplicity. Nevertheless, this model is known to be inaccurate in describing a lot of real-world objects, including those that present a matte surface. To overcome this difficulty, we propose a photometric model where the surfaces are described as a distribution of facets where each facet consists of a flat interface on a Lambertian background. Compared to the Lambertian model, it includes two additional physical parameters: an interface roughness parameter and the ratio between the refractive indices of the background binder and of the upper medium. The Torrance-Sparrow model--distribution of strictly specular facets--and the Oren-Nayar model--distribution of strictly Lambertian facets--appear as special cases. PMID:19844317

  13. Modeling and Analysis Generic Interface for eXternal numerical codes (MAGIX)

    NASA Astrophysics Data System (ADS)

    Möller, T.; Bernst, I.; Panoglou, D.; Muders, D.; Ossenkopf, V.; Röllig, M.; Schilke, P.

    2013-01-01

    The Modeling and Analysis Generic Interface for eXternal numerical codes (MAGIX) is a model optimizer developed under the framework of the coherent set of astrophysical tools for spectroscopy (CATS) project. The MAGIX package provides a framework of an easy interface between existing codes and an iterating engine that attempts to minimize deviations of the model results from available observational data, constraining the values of the model parameters and providing corresponding error estimates. Many models (and, in principle, not only astrophysical models) can be plugged into MAGIX to explore their parameter space and find the set of parameter values that best fits observational/experimental data. MAGIX complies with the data structures and reduction tools of Atacama Large Millimeter Array (ALMA), but can be used with other astronomical and with non-astronomical data. http://www.astro.uni-koeln.de/projects/schilke/MAGIX

  14. Analytical solutions in a hydraulic model of seepage with sharp interfaces

    NASA Astrophysics Data System (ADS)

    Kacimov, A. R.

    2002-02-01

    Flows in horizontal homogeneous porous layers are studied in terms of a hydraulic model with an abrupt interface between two incompressible Darcian fluids of contrasting density driven by an imposed gradient along the layer. The flow of one fluid moving above a resting finger-type pool of another is studied. A straight interface between two moving fluids is shown to slump, rotate and propagate deeper under periodic drive conditions than in a constant-rate regime. Superpropagation of the interface is related to Philip's superelevation in tidal dynamics and acceleration of the front in vertical infiltration in terms of the Green-Ampt model with an oscillating ponding water level. All solutions studied are based on reduction of the governing PDE to nonlinear ODEs and further analytical and numerical integration by computer algebra routines.

  15. Laboratory measurements and theoretical modeling of seismoelectric interface response and coseismic wave fields

    SciTech Connect

    Schakel, M. D.; Slob, E. C.; Heller, H. K. J.; Smeulders, D. M. J.

    2011-04-01

    A full-waveform seismoelectric numerical model incorporating the directivity pattern of a pressure source is developed. This model provides predictions of coseismic electric fields and the electromagnetic waves that originate from a fluid/porous-medium interface. An experimental setup in which coseismic electric fields and interface responses are measured is constructed. The seismo-electric origin of the signals is confirmed. The numerically predicted polarity reversal of the interfacial signal and seismoelectric effects due to multiple scattering are detected in the measurements. Both the simulated coseismic electric fields and the electromagnetic waves originating from interfaces agree with the measurements in terms of travel times, waveform, polarity, amplitude, and spatial amplitude decay, demonstrating that seismoelectric effects are comprehensively described by theory.

  16. Time integration for diffuse interface models for two-phase flow

    SciTech Connect

    Aland, Sebastian

    2014-04-01

    We propose a variant of the θ-scheme for diffuse interface models for two-phase flow, together with three new linearization techniques for the surface tension. These involve either additional stabilizing force terms, or a fully implicit coupling of the Navier–Stokes and Cahn–Hilliard equation. In the common case that the equations for interface and flow are coupled explicitly, we find a time step restriction which is very different to other two-phase flow models and in particular is independent of the grid size. We also show that the proposed stabilization techniques can lift this time step restriction. Even more pronounced is the performance of the proposed fully implicit scheme which is stable for arbitrarily large time steps. We demonstrate in a Taylor-flow application that this superior coupling between flow and interface equation can decrease the computation time by several orders of magnitude.

  17. Experimental modelling of material interfaces with ultracold atoms

    NASA Astrophysics Data System (ADS)

    Corcovilos, Theodore A.; Brooke, Robert W. A.; Gillis, Julie; Ruggiero, Anthony C.; Tiber, Gage D.; Zaccagnini, Christopher A.

    2014-05-01

    We present a design for a new experimental apparatus for studying the physics of junctions using ultracold potassium atoms (K-39 and K-40). Junctions will be modeled using holographically projected 2D optical potentials. These potentials can be engineered to contain arbitrary features such as junctions between dissimilar lattices or the intentional insertion of defects. Long-term investigation goals include edge states, scattering at defects, and quantum depletion at junctions. In this poster we show our overall apparatus design and our progress in building experimental subsystems including the vacuum system, extended cavity diode lasers, digital temperature and current control circuits for the lasers, and the saturated absorption spectroscopy system. Funding provided by the Bayer School of Natural and Environmental.

  18. Automated identification of anatomical landmarks on 3D bone models reconstructed from CT scan images.

    PubMed

    Subburaj, K; Ravi, B; Agarwal, Manish

    2009-07-01

    Identification of anatomical landmarks on skeletal tissue reconstructed from CT/MR images is indispensable in patient-specific preoperative planning (tumour referencing, deformity evaluation, resection planning, and implant alignment and anchoring) as well as intra-operative navigation (bone registration and instruments referencing). Interactive localisation of landmarks on patient-specific anatomical models is time-consuming and may lack in repeatability and accuracy. We present a computer graphics-based method for automatic localisation and identification (labelling) of anatomical landmarks on a 3D model of bone reconstructed from CT images of a patient. The model surface is segmented into different landmark regions (peak, ridge, pit and ravine) based on surface curvature. These regions are labelled automatically by an iterative process using a spatial adjacency relationship matrix between the landmarks. The methodology has been implemented in a software program and its results (automatically identified landmarks) are compared with those manually palpated by three experienced orthopaedic surgeons, on three 3D reconstructed bone models. The variability in location of landmarks was found to be in the range of 2.15-5.98 mm by manual method (inter surgeon) and 1.92-4.88 mm by our program. Both methods performed well in identifying sharp features. Overall, the performance of the automated methodology was better or similar to the manual method and its results were reproducible. It is expected to have a variety of applications in surgery planning and intra-operative navigation. PMID:19345065

  19. Fast Model Adaptation for Automated Section Classification in Electronic Medical Records.

    PubMed

    Ni, Jian; Delaney, Brian; Florian, Radu

    2015-01-01

    Medical information extraction is the automatic extraction of structured information from electronic medical records, where such information can be used for improving healthcare processes and medical decision making. In this paper, we study one important medical information extraction task called section classification. The objective of section classification is to automatically identify sections in a medical document and classify them into one of the pre-defined section types. Training section classification models typically requires large amounts of human labeled training data to achieve high accuracy. Annotating institution-specific data, however, can be both expensive and time-consuming; which poses a big hurdle for adapting a section classification model to new medical institutions. In this paper, we apply two advanced machine learning techniques, active learning and distant supervision, to reduce annotation cost and achieve fast model adaptation for automated section classification in electronic medical records. Our experiment results show that active learning reduces the annotation cost and time by more than 50%, and distant supervision can achieve good model accuracy using weakly labeled training data only. PMID:26262005

  20. Diffuse interface models of locally inextensible vesicles in a viscous fluid.

    PubMed

    Aland, Sebastian; Egerer, Sabine; Lowengrub, John; Voigt, Axel

    2014-11-15

    We present a new diffuse interface model for the dynamics of inextensible vesicles in a viscous fluid with inertial forces. A new feature of this work is the implementation of the local inextensibility condition in the diffuse interface context. Local inextensibility is enforced by using a local Lagrange multiplier, which provides the necessary tension force at the interface. We introduce a new equation for the local Lagrange multiplier whose solution essentially provides a harmonic extension of the multiplier off the interface while maintaining the local inextensibility constraint near the interface. We also develop a local relaxation scheme that dynamically corrects local stretching/compression errors thereby preventing their accumulation. Asymptotic analysis is presented that shows that our new system converges to a relaxed version of the inextensible sharp interface model. This is also verified numerically. To solve the equations, we use an adaptive finite element method with implicit coupling between the Navier-Stokes and the diffuse interface inextensibility equations. Numerical simulations of a single vesicle in a shear flow at different Reynolds numbers demonstrate that errors in enforcing local inextensibility may accumulate and lead to large differences in the dynamics in the tumbling regime and smaller differences in the inclination angle of vesicles in the tank-treading regime. The local relaxation algorithm is shown to prevent the accumulation of stretching and compression errors very effectively. Simulations of two vesicles in an extensional flow show that local inextensibility plays an important role when vesicles are in close proximity by inhibiting fluid drainage in the near contact region. PMID:25246712

  1. Diffuse interface models of locally inextensible vesicles in a viscous fluid

    PubMed Central

    Aland, Sebastian; Egerer, Sabine; Lowengrub, John; Voigt, Axel

    2014-01-01

    We present a new diffuse interface model for the dynamics of inextensible vesicles in a viscous fluid with inertial forces. A new feature of this work is the implementation of the local inextensibility condition in the diffuse interface context. Local inextensibility is enforced by using a local Lagrange multiplier, which provides the necessary tension force at the interface. We introduce a new equation for the local Lagrange multiplier whose solution essentially provides a harmonic extension of the multiplier off the interface while maintaining the local inextensibility constraint near the interface. We also develop a local relaxation scheme that dynamically corrects local stretching/compression errors thereby preventing their accumulation. Asymptotic analysis is presented that shows that our new system converges to a relaxed version of the inextensible sharp interface model. This is also verified numerically. To solve the equations, we use an adaptive finite element method with implicit coupling between the Navier-Stokes and the diffuse interface inextensibility equations. Numerical simulations of a single vesicle in a shear flow at different Reynolds numbers demonstrate that errors in enforcing local inextensibility may accumulate and lead to large differences in the dynamics in the tumbling regime and smaller differences in the inclination angle of vesicles in the tank-treading regime. The local relaxation algorithm is shown to prevent the accumulation of stretching and compression errors very effectively. Simulations of two vesicles in an extensional flow show that local inextensibility plays an important role when vesicles are in close proximity by inhibiting fluid drainage in the near contact region. PMID:25246712

  2. Semi-automated calibration method for modelling of mountain permafrost evolution in Switzerland

    NASA Astrophysics Data System (ADS)

    Marmy, A.; Rajczak, J.; Delaloye, R.; Hilbich, C.; Hoelzle, M.; Kotlarski, S.; Lambiel, C.; Noetzli, J.; Phillips, M.; Salzmann, N.; Staub, B.; Hauck, C.

    2015-09-01

    Permafrost is a widespread phenomenon in the European Alps. Many important topics such as the future evolution of permafrost related to climate change and the detection of permafrost related to potential natural hazards sites are of major concern to our society. Numerical permafrost models are the only tools which facilitate the projection of the future evolution of permafrost. Due to the complexity of the processes involved and the heterogeneity of Alpine terrain, models must be carefully calibrated and results should be compared with observations at the site (borehole) scale. However, a large number of local point data are necessary to obtain a broad overview of the thermal evolution of mountain permafrost over a larger area, such as the Swiss Alps, and the site-specific model calibration of each point would be time-consuming. To face this issue, this paper presents a semi-automated calibration method using the Generalized Likelihood Uncertainty Estimation (GLUE) as implemented in a 1-D soil model (CoupModel) and applies it to six permafrost sites in the Swiss Alps prior to long-term permafrost evolution simulations. We show that this automated calibration method is able to accurately reproduce the main thermal condition characteristics with some limitations at sites with unique conditions such as 3-D air or water circulation, which have to be calibrated manually. The calibration obtained was used for RCM-based long-term simulations under the A1B climate scenario specifically downscaled at each borehole site. The projection shows general permafrost degradation with thawing at 10 m, even partially reaching 20 m depths until the end of the century, but with different timing among the sites. The degradation is more rapid at bedrock sites whereas ice-rich sites with a blocky surface cover showed a reduced sensitivity to climate change. The snow cover duration is expected to be reduced drastically (between -20 to -37 %) impacting the ground thermal regime. 
However, the uncertainty range of permafrost projections is large, resulting mainly from the broad range of input climate data from the different GCM-RCM chains of the ENSEMBLES data set.

  3. An Interface Stretching-Diffusion Model for Mixing-Limited Reactions During Convective Mixing

    NASA Astrophysics Data System (ADS)

    Hidalgo, J. J.; Dentz, M.; Cabeza, Y.; Carrera, J.

    2014-12-01

    We study the behavior of mixing-limited dissolution reactions under the unstable flow conditions caused by a Rayleigh-Bénard convective instability in a two fluids system. The reactions produce a dissolution pattern that follows the ascending fluids's interface where the largest concentration gradients and maximum mixing are found. Contrary to other chemical systems, the mixing history engraved by the dissolution does not map out the fingering geometry of the unstable flow. The temporal scaling of the mixing Χ and the reaction rate r are explained by a stretching-diffusion model of the interface between the fluids. The model accurately reproduces the three observed regimes: a diffusive regime at which Χ, r ~ t-1/2; a convective regime of at which the interface contracts to the Batchelor scale resulting in a constant Χf and r independent of the Rayleigh number; and an attenuated convection regime in which Χ and r decay faster than diffusion as t-3/2 and t-1, respectevely, because of the decompression of the interface and weakened reactions caused by the accumulation of dissolved fluid below the interface.

  4. Interface modeling to predict well casing damage for big hill strategic petroleum reserve.

    SciTech Connect

    Ehgartner, Brian L.; Park, Byoung Yoon

    2012-02-01

    Oil leaks were found in well casings of Caverns 105 and 109 at the Big Hill Strategic Petroleum Reserve site. According to the field observations, two instances of casing damage occurred at the depth of the interface between the caprock and top of salt. This damage could be caused by interface movement induced by cavern volume closure due to salt creep. A three dimensional finite element model, which allows each cavern to be configured individually, was constructed to investigate shear and vertical displacements across each interface. The model contains interfaces between each lithology and a shear zone to examine the interface behavior in a realistic manner. This analysis results indicate that the casings of Caverns 105 and 109 failed by shear stress that exceeded shear strength due to the horizontal movement of the top of salt relative to the caprock, and tensile stress due to the downward movement of the top of salt from the caprock, respectively. The casings of Caverns 101, 110, 111 and 114, located at the far ends of the field, are predicted to be failed by shear stress in the near future. The casings of inmost Caverns 107 and 108 are predicted to be failed by tensile stress in the near future.

  5. Automated generation of high-quality training data for appearance-based object models

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Voelker, Arno; Kieritz, Hilke; Hübner, Wolfgang; Arens, Michael

    2013-11-01

    Methods for automated person detection and person tracking are essential core components in modern security and surveillance systems. Most state-of-the-art person detectors follow a statistical approach, where prototypical appearances of persons are learned from training samples with known class labels. Selecting appropriate learning samples has a significant impact on the quality of the generated person detectors. For example, training a classifier on a rigid body model using training samples with strong pose variations is in general not effective, irrespective of the classifiers capabilities. Generation of high-quality training data is, apart from performance issues, a very time consuming process, comprising a significant amount of manual work. Furthermore, due to inevitable limitations of freely available training data, corresponding classifiers are not always transferable to a given sensor and are only applicable in a well-defined narrow variety of scenes and camera setups. Semi-supervised learning methods are a commonly used alternative to supervised training, in general requiring only few labeled samples. However, as a drawback semi-supervised methods always include a generative component, which is known to be difficult to learn. Therefore, automated processes for generating training data sets for supervised methods are needed. Such approaches could either help to better adjust classifiers to respective hardware, or serve as a complement to existing data sets. Towards this end, this paper provides some insights into the quality requirements of automatically generated training data for supervised learning methods. Assuming a static camera, labels are generated based on motion detection by background subtraction with respect to weak constraints on the enclosing bounding box of the motion blobs. Since this labeling method consists of standard components, we illustrate the effectiveness by adapting a person detector to cameras of a sensor network. 
While varying the training data and keeping the detection framework identical, we derive statements about the sample quality.

  6. Damage evolution of bi-body model composed of weakly cemented soft rock and coal considering different interface effect.

    PubMed

    Zhao, Zenghui; Lv, Xianzhou; Wang, Weiming; Tan, Yunliang

    2016-01-01

    Considering the structure effect of tunnel stability in western mining of China, three typical kinds of numerical model were respectively built as follows based on the strain softening constitutive model and linear elastic-perfectly plastic model for soft rock and interface: R-M, R-C(s)-M and R-C(w)-M. Calculation results revealed that the stress-strain relation and failure characteristics of the three models vary between each other. The combination model without interface or with a strong interface presented continuous failure, while weak interface exhibited 'cut off' effect. Thus, conceptual models of bi-material model and bi-body model were established. Then numerical experiments of tri-axial compression were carried out for the two models. The relationships between stress evolution, failure zone and deformation rate fluctuations as well as the displacement of interface were detailed analyzed. Results show that two breakaway points of deformation rate actually demonstrate the starting and penetration of the main rupture, respectively. It is distinguishable due to the large fluctuation. The bi-material model shows general continuous failure while bi-body model shows 'V' type shear zone in weak body and failure in strong body near the interface due to the interface effect. With the increasing of confining pressure, the 'cut off' effect of weak interface is not obvious. These conclusions lay the theoretical foundation for further development of constitutive model for soft rock-coal combination body. PMID:27066329

  7. Development and Implementation of an Extensible Interface-Based Spatiotemporal Geoprocessing and Modeling Toolbox

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Ames, D. P.

    2011-12-01

    This poster presents an object oriented and interface-based spatiotemporal data processing and modeling toolbox that can be extended by third parties to include complete suites of new tools through the implementation of simple interfaces. The resulting software implementation includes both a toolbox and workflow designer or "model builder" constructed using the underlying open source DotSpatial library and MapWindow desktop GIS. The unique contribution of this research and software development activity is in the creation and use of an extensibility architecture for both specific tools (through a so-called "ITool" interface) and batches of tools (through a so-called "IToolProvider" interface.) This concept is introduced to allow for seamless integration of geoprocessing tools from various sources (e.g. distinct libraries of spatiotemporal processing code) - including online sources - within a single user environment. In this way, the IToolProvider interface allows developers to wrap large existing collections of data analysis code without having to re-write it for interoperability. Additionally, developers do not need to design the user interfaces for loading, displaying or interacting with their specific tools, but rather can simply implement the provided interfaces and have their tools and tool collections appear in the toolbox alongside other tools. The demonstration software presented here is based on an implementation of the interfaces and sample tool libraries using the C# .NET programming language. This poster will include a summary of the interfaces as well as a demonstration of the system using the Whitebox Geospatial Analysis Tools (GAT) as an example case of a large number of existing tools that can be exposed to users through this new system. Vector analysis tools which are native in DotSpatial are linked to the Whitebox raster analysis tools in the model builder environment for ease of execution and consistent/repeatable use. 
We expect that this approach to development of spatiotemporal analysis and geoprocessing software can be extended to many areas including basic GIS analysis, hydrological and terrain analysis, or processing images and working with LiDAR data.

  8. Microcontroller for automation application

    NASA Technical Reports Server (NTRS)

    Cooper, H. W.

    1975-01-01

    A description is given of a microcontroller currently being developed for automation applications. It is basically an 8-bit microcomputer with a 40K byte random access memory/read only memory, and can control a maximum of 12 devices through standard 15-line interface ports.

  9. Automated Geospatial Watershed Assessment

    EPA Science Inventory

    The Automated Geospatial Watershed Assessment (AGWA) tool is a Geographic Information Systems (GIS) interface jointly developed by the U.S. Environmental Protection Agency, the U.S. Department of Agriculture (USDA) Agricultural Research Service, and the University of Arizona to a...

  10. Semi-automated DIRSIG scene modeling from three-dimensional lidar and passive imagery

    NASA Astrophysics Data System (ADS)

    Lach, Stephen R.

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to long wave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (lidar) models have also been incorporated into the software, providing an extremely powerful tool for multi-sensor algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG's ability to generate scenes "on demand." To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects and background maps have to be created and attributed manually. To shorten the time required for this process, this research developed an approach to reduce the man-in-the-loop requirements for several aspects of synthetic scene construction. Through a fusion of 3D lidar data with passive imagery, we were able to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, many of the remaining tasks realized a shortened implementation time through this application of multi-modal imagery. Lidar data is exploited to identify ground and object features as well as to define initial tree location and building parameter estimates. These estimates are then refined by analyzing high-resolution frame array imagery using the concepts of projective geometry in lieu of the more common Euclidean approach found in most traditional photogrammetric references. Spectral imagery is also used to assign material characteristics to the modeled geometric objects. This is achieved through a modified atmospheric compensation applied to raw hyperspectral imagery. These techniques have been successfully applied to imagery collected over the RIT campus and the greater Rochester area. 
The data used include multiple-return point information provided by an Optech lidar linescanning sensor, multispectral frame array imagery from the Wildfire Airborne Sensor Program (WASP) and WASP-lite sensors, and hyperspectral data from the Modular Imaging Spectrometer Instrument (MISI) and the COMPact Airborne Spectral Sensor (COMPASS). Information from these image sources was fused and processed using the semi-automated approach to provide the DIRSIG input files used to define a synthetic scene. When compared to the standard manual process for creating these files, we achieved approximately a tenfold increase in speed, as well as a significant increase in geometric accuracy.

  11. Modelling and interpreting biologically crusted dryland soil sub-surface structure using automated micropenetrometry

    NASA Astrophysics Data System (ADS)

    Hoon, Stephen R.; Felde, Vincent J. M. N. L.; Drahorad, Sylvie L.; Felix-Henningsen, Peter

    2015-04-01

    Soil penetrometers are used routinely to determine the shear strength of soils and deformable sediments, both at the surface and throughout a depth profile, in disciplines as diverse as soil science, agriculture, geoengineering and alpine avalanche-safety (e.g. Grunwald et al. 2001, Van Herwijnen et al. 2009). Generically, penetrometers comprise two principal components: an advancing probe, and a transducer to measure the pressure or force required to cause the probe to penetrate or advance through the soil or sediment. The force transducer employed to determine the pressure can range, for example, from a simple mechanical spring gauge to an automatically data-logged electronic transducer. Automated computer control of the penetrometer step size and probe advance rate enables precise measurements to be made down to a resolution of tens of microns (e.g. the automated electronic micropenetrometer (EMP) described by Drahorad 2012). Here we discuss the determination, modelling and interpretation of biologically crusted dryland soil sub-surface structures using automated micropenetrometry. We outline a model enabling the interpretation of depth-dependent penetration resistance (PR) profiles and their spatial differentials using the model equations σ(z) = σ_c0 + Σ_{n=1..N} [σ_n(z) + a_n·z + b_n·z²] and dσ/dz = Σ_{n=1..N} [dσ_n(z)/dz + Fr_n(z)], where σ_c0 and σ_n are the plastic deformation stresses for the surface and the nth soil structure (e.g. soil crust, layer, horizon or void) respectively, and Fr_n(z)·dz is the frictional work done per unit volume by sliding the penetrometer rod an incremental distance dz through the nth layer. Both σ_n(z) and Fr_n(z) are related to soil structure. They determine the form of σ(z) measured by the EMP transducer. The model enables pores (regions of zero deformation stress) to be distinguished from changes in layer structure or probe friction.
    We have applied this method to both artificial calibration soils in the laboratory and in-situ field studies. In particular, we discuss the nature and detection of surface and buried (fossil) subsurface Biological Soil Crusts (BSCs), voids, macroscopic particles and compositional layers. The strength of surface BSCs and the occurrence of buried BSCs and layers have been detected at sub-millimetre scales to depths of 40 mm. Our measurements and field observations of PR show the importance of morphological layering to overall BSC functions (Felde et al. 2015). We also discuss the effect of penetrometer shaft and probe-tip profiles upon the theoretical and experimental curves, EMP resolution and reproducibility, demonstrating how the model enables voids, buried biological soil crusts, exotic particles, soil horizons and layers to be distinguished one from another. This represents a potentially important contribution to advancing understanding of the relationship between BSCs and dryland soil structure. References: Drahorad S.L., Felix-Henningsen P. (2012) An electronic micropenetrometer (EMP) for field measurements of biological soil crust stability, J. Plant Nutr. Soil Sci., 175, 519-520; Felde V.J.M.N.L., Drahorad S.L., Felix-Henningsen P., Hoon S.R. (2015) Ongoing oversanding induces biological soil crust layering - a new approach for BSC structure elucidation determined from high resolution penetration resistance data (submitted); Grunwald S., Rooney D.J., McSweeney K., Lowery B. (2001) Development of pedotransfer functions for a profile cone penetrometer, Geoderma, 100, 25-47; Van Herwijnen A., Bellaire S., Schweizer J. (2009) Comparison of micro-structural snowpack parameters derived from penetration resistance measurements with fracture character observations from compression tests, Cold Regions Sci. & Technol., 59, 193-201
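    A minimal numerical sketch of the layered PR model, σ(z) = σ_c0 + Σ_n [σ_n(z) + a_n·z + b_n·z²], where each structure contributes only while the probe tip is inside it and a pore is simply a layer of zero deformation stress. All layer boundaries, stresses and a_n, b_n coefficients below are invented calibration values, not data from the study:

    ```python
    def layer_term(z, top, bottom, sigma_n, a_n=0.0, b_n=0.0):
        """Contribution of one layer/void at depth z (zero outside the layer)."""
        if top <= z < bottom:
            return sigma_n + a_n * z + b_n * z * z
        return 0.0

    def penetration_resistance(z, sigma_c0, layers):
        """sigma(z): surface term plus the sum over all layer contributions."""
        return sigma_c0 + sum(layer_term(z, *layer) for layer in layers)

    # Illustrative profile: a strong surface crust, a pore, a deeper layer
    # with a linear depth term a_n (depths in mm, stresses in arbitrary units).
    layers = [
        (0.0, 1.0, 400.0),          # biological soil crust, 0-1 mm
        (1.0, 2.5, 0.0),            # pore/void: zero deformation stress
        (2.5, 10.0, 150.0, 20.0),   # compacted layer with depth-dependent term
    ]
    profile = [penetration_resistance(z / 2.0, 50.0, layers) for z in range(21)]
    ```

    Scanning the profile for depth intervals where σ(z) falls back to the surface value σ_c0 is how the model distinguishes pores from changes in layer structure.
    
    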

  12. Numerical simulations of the moving contact line problem using a diffuse-interface model

    NASA Astrophysics Data System (ADS)

    Afzaal, Muhammad; Sibley, David; Duncan, Andrew; Yatsyshin, Petr; Duran-Olivencia, Miguel A.; Nold, Andreas; Savva, Nikos; Schmuck, Markus; Kalliadasis, Serafim

    2015-11-01

    Moving contact lines are a ubiquitous phenomenon both in nature and in many modern technologies. One prevalent way of numerically tackling the problem is with diffuse-interface (phase-field) models, where the classical sharp-interface model of continuum mechanics is relaxed to one with a finite-thickness fluid-fluid interface, capturing physics from mesoscopic lengthscales. The present work is devoted to the study of the contact line between two fluids confined by two parallel plates, i.e. a dynamically moving meniscus. Our approach is based on a coupled Navier-Stokes/Cahn-Hilliard model. This system of partial differential equations allows a tractable numerical solution to be computed, capturing diffusive and advective effects in a prototypical case study in a finite-element framework. Particular attention is paid to the static and dynamic contact angle of the meniscus advancing or receding between the plates. The results obtained from our approach are compared to the classical sharp-interface model to elicit the importance of considering diffusion and associated effects. We acknowledge financial support from the European Research Council via Advanced Grant No. 247031.
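    The diffuse-interface idea can be illustrated with a bare-bones 1-D Cahn-Hilliard integrator: the order parameter phi varies smoothly across a finite-thickness interface instead of jumping sharply, and the H^{-1} gradient-flow structure conserves total "mass". This is an explicit-Euler, periodic-boundary sketch with illustrative parameters, not the coupled Navier-Stokes/Cahn-Hilliard finite-element scheme of the study:

    ```python
    def laplacian(f, dx):
        """Second-difference Laplacian with periodic boundaries."""
        n = len(f)
        return [(f[(i - 1) % n] - 2.0 * f[i] + f[(i + 1) % n]) / dx**2
                for i in range(n)]

    def cahn_hilliard_step(phi, dt, dx, eps):
        """One explicit Euler step of dphi/dt = lap(mu), mu = phi^3 - phi - eps^2 lap(phi)."""
        lap_phi = laplacian(phi, dx)
        mu = [p**3 - p - eps**2 * l for p, l in zip(phi, lap_phi)]  # chemical potential
        lap_mu = laplacian(mu, dx)
        return [p + dt * l for p, l in zip(phi, lap_mu)]

    n, dx, dt, eps = 64, 1.0, 0.01, 1.0
    # Two bulk phases (phi = +1 and -1); the sharp jump relaxes to a smooth
    # diffuse interface of thickness ~eps.
    phi = [1.0 if n // 4 <= i < 3 * n // 4 else -1.0 for i in range(n)]
    for _ in range(100):
        phi = cahn_hilliard_step(phi, dt, dx, eps)
    ```

    Because phi is updated by the Laplacian of mu, the sum of phi over the periodic domain is invariant, which is the discrete analogue of mass conservation in the phase-field model.
    
    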

  13. Importance of interfaces in governing thermal transport in composite materials: modeling and experimental perspectives.

    PubMed

    Roy, Ajit K; Farmer, Barry L; Varshney, Vikas; Sihn, Sangwook; Lee, Jonghoon; Ganguli, Sabyasachi

    2012-02-01

    Thermal management in polymeric composite materials has become increasingly critical in the air-vehicle industry because of the increasing thermal load in small-scale composite devices extensively used in electronics and aerospace systems. The thermal transport phenomenon in these small-scale heterogeneous systems is essentially controlled by the interface thermal resistance because of the large surface-to-volume ratio. In this review article, several modeling strategies are discussed for different length scales, complemented by our experimental efforts to tailor the thermal transport properties of polymeric composite materials. Progress in the molecular modeling of thermal transport in thermosets is reviewed along with a discussion on the interface thermal resistance between functionalized carbon nanotube and epoxy resin systems. For the thermal transport in fiber-reinforced composites, various micromechanics-based analytical and numerical modeling schemes are reviewed in predicting the transverse thermal conductivity. Numerical schemes used to realize and scale the interface thermal resistance and the finite mean free path of the energy carrier in the mesoscale are discussed in the frame of the lattice Boltzmann-Peierls-Callaway equation. Finally, guided by modeling, complementary experimental efforts are discussed for exfoliated graphite and vertically aligned nanotubes based composites toward improving their effective thermal conductivity by tailoring interface thermal resistance. PMID:22295993

  14. AgRISTARS: Yield model development/soil moisture. Interface control document

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The interactions and support functions required between the crop Yield Model Development (YMD) Project and Soil Moisture (SM) Project are defined. The requirements for YMD support of SM and vice versa are outlined. Specific tasks in support of these interfaces are defined for development of support functions.

  15. The integrity of welded interfaces in ultra high molecular weight polyethylene: Part 1-Model.

    PubMed

    Buckley, C Paul; Wu, Junjie; Haughie, David W

    2006-06-01

    The difficulty of eradicating memory of powder-particle interfaces in UHMWPE for bearing surfaces for hip and knee replacements is well-known, and 'fusion defects' have been implicated frequently in joint failures. During processing the polymer is formed into solid directly from the reactor powder, under pressure and at temperatures above the melting point, and two types of inter-particle defect occur: Type 1 (consolidation-deficient) and Type 2 (diffusion-deficient). To gain quantitative information on the extent of the problem, the formation of macroscopic butt welds in this material was studied, by (1) modelling the process and (2) measuring experimentally the resultant evolution of interface toughness. This paper reports on the model. A quantitative measure of interface structural integrity is defined, and related to the "maximum reptated molecular weight" introduced previously. The model assumes an idealised surface topography. It is used to calculate the evolution of interface integrity during welding, for given values of temperature, pressure, and parameters describing the surfaces, and a given molar mass distribution. Only four material properties are needed for the calculation; all of them available for polyethylene. The model shows that, for UHMWPE typically employed in knee transplants, the rate of eradication of Type 1 defects is highly sensitive to surface topography, process temperature and pressure. Also, even if Type 1 defects are prevented, Type 2 defects heal extremely slowly. They must be an intrinsic feature of UHMWPE for all reasonable forming conditions, and products and forming processes should be designed accordingly. PMID:16490249

  16. A Monthly Water-Balance Model Driven By a Graphical User Interface

    USGS Publications Warehouse

    McCabe, Gregory J.; Markstrom, Steven L.

    2007-01-01

    This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.
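    The core computation behind a Thornthwaite-type monthly water balance is the potential evapotranspiration (PET) estimate from mean monthly temperature. A hedged sketch of the classical Thornthwaite formulation follows (the unadjusted, 30-day/12-hour form; the example temperatures are invented, and the USGS program adds day-length corrections and further water-balance components not shown here):

    ```python
    def heat_index(monthly_temps_c):
        """Annual heat index I from 12 mean monthly temperatures (deg C)."""
        return sum((t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0)

    def thornthwaite_pet(t_c, I):
        """Unadjusted potential evapotranspiration (mm/month) for one month."""
        if t_c <= 0:
            return 0.0
        a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
        return 16.0 * (10.0 * t_c / I) ** a

    temps = [1, 2, 5, 9, 14, 18, 21, 20, 16, 10, 5, 2]  # illustrative mid-latitude site
    I = heat_index(temps)
    pet = [thornthwaite_pet(t, I) for t in temps]
    ```

    Monthly precipitation minus PET then drives the soil-moisture storage, actual evapotranspiration, and runoff terms of the water balance.
    
    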

  17. Self-Observation Model Employing an Instinctive Interface for Classroom Active Learning

    ERIC Educational Resources Information Center

    Chen, Gwo-Dong; Nurkhamid; Wang, Chin-Yeh; Yang, Shu-Han; Chao, Po-Yao

    2014-01-01

    In a classroom, obtaining active, whole-focused, and engaging learning results from a design is often difficult. In this study, we propose a self-observation model that employs an instinctive interface for classroom active learning. Students can communicate with virtual avatars in the vertical screen and can react naturally according to the…

  18. Development of a GIS interface for WEPP model application to Great Lakes forested watersheds

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This presentation will highlight efforts on development of a new WEPP GIS interface, targeted toward application in forested regions bordering the Great Lakes. The key components and algorithms of the online GIS system will be outlined. The general procedures used to provide input to the WEPP model ...

  19. VoICE: A semi-automated pipeline for standardizing vocal analysis across models.

    PubMed

    Burkett, Zachary D; Day, Nancy F; Peñagarikano, Olga; Geschwind, Daniel H; White, Stephanie A

    2015-01-01

    The study of vocal communication in animal models provides key insight to the neurogenetic basis for speech and communication disorders. Current methods for vocal analysis suffer from a lack of standardization, creating ambiguity in cross-laboratory and cross-species comparisons. Here, we present VoICE (Vocal Inventory Clustering Engine), an approach to grouping vocal elements by creating a high dimensionality dataset through scoring spectral similarity between all vocalizations within a recording session. This dataset is then subjected to hierarchical clustering, generating a dendrogram that is pruned into meaningful vocalization "types" by an automated algorithm. When applied to birdsong, a key model for vocal learning, VoICE captures the known deterioration in acoustic properties that follows deafening, including altered sequencing. In a mammalian neurodevelopmental model, we uncover a reduced vocal repertoire of mice lacking the autism susceptibility gene, Cntnap2. VoICE will be useful to the scientific community as it can standardize vocalization analyses across species and laboratories. PMID:26018425
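    The pipeline's clustering step can be sketched as follows: score pairwise spectral similarity between all vocalizations, agglomerate, and prune automatically at a similarity threshold to obtain vocalization "types". This toy version uses single-linkage merging on an invented similarity matrix; real VoICE scores spectrogram similarity and prunes a full dendrogram:

    ```python
    def cluster_by_similarity(sim, threshold):
        """Merge items while the best inter-cluster similarity exceeds threshold."""
        clusters = [{i} for i in range(len(sim))]

        def link(a, b):  # single linkage: best similarity between any two members
            return max(sim[i][j] for i in a for j in b)

        while len(clusters) > 1:
            s, i, j = max(
                (link(clusters[i], clusters[j]), i, j)
                for i in range(len(clusters))
                for j in range(i + 1, len(clusters))
            )
            if s < threshold:   # automated pruning: stop merging here
                break
            clusters[i] |= clusters.pop(j)
        return clusters

    # Four syllables: 0/1 acoustically similar, 2/3 similar, groups dissimilar.
    sim = [
        [1.0, 0.9, 0.2, 0.1],
        [0.9, 1.0, 0.3, 0.2],
        [0.2, 0.3, 1.0, 0.8],
        [0.1, 0.2, 0.8, 1.0],
    ]
    types = cluster_by_similarity(sim, threshold=0.5)
    ```

    With the threshold at 0.5 the four syllables collapse into two vocalization types, {0, 1} and {2, 3}.
    
    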

  20. Partially Automated Method for Localizing Standardized Acupuncture Points on the Heads of Digital Human Models

    PubMed Central

    Kim, Jungdae; Kang, Dae-In

    2015-01-01

    Modernized imaging tools for precisely positioning acupuncture points on the human body, where this traditional therapeutic method is applied, are essential. For that reason, we suggest a more systematic positioning method that uses X-ray computer tomographic images to precisely position acupoints. Digital Korean human data were obtained to construct three-dimensional head-skin and skull surface models of six individuals. Depending on the method used to pinpoint the positions of the acupoints, every acupoint was classified into one of three types: anatomical points, proportional points, and morphological points. A computational algorithm and procedure were developed for partial automation of the positioning. The anatomical points were selected by using the structural characteristics of the skin surface and skull. The proportional points were calculated from the positions of the anatomical points. The morphological points were also calculated by using some control points related to the connections between the source and the target models. All the acupoints on the heads of the six individuals were displayed on three-dimensional computer graphical image models. This method may be helpful for developing more accurate experimental designs and for providing more quantitative volumetric methods for performing analyses in acupuncture-related research. PMID:26101534
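    The "proportional point" type lends itself to a simple sketch: once anatomical landmarks are fixed on the 3-D head model, a proportional acupoint lies a given fraction of the way between two of them. The landmark coordinates and the 1/3 fraction below are purely illustrative, not values from the paper:

    ```python
    def proportional_point(p, q, fraction):
        """Point a given fraction of the way from landmark p to landmark q (3-D)."""
        return tuple(a + fraction * (b - a) for a, b in zip(p, q))

    glabella = (0.0, 95.0, 60.0)   # anatomical landmark (made-up coordinates, mm)
    vertex   = (0.0, 80.0, 180.0)  # another landmark
    acupoint = proportional_point(glabella, vertex, 1.0 / 3.0)
    ```

    In the paper's scheme such points are computed automatically from the already-selected anatomical points, which is what makes the positioning only partially manual.
    
    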

  1. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    SciTech Connect

    Robertson, Joseph; Polly, Ben; Collis, Jon

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960s-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
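    The simplest of the four methods, output-ratio calibration, can be sketched directly: scale simulated monthly energy use by the ratio of measured to simulated totals, then apply the same factor to the retrofit run. All monthly numbers below are invented stand-ins for BEopt/DOE-2.2 outputs and utility bills:

    ```python
    def output_ratio_calibrate(simulated, measured):
        """Scale simulated monthly series so its total matches the measured total."""
        ratio = sum(measured) / sum(simulated)
        return ratio, [s * ratio for s in simulated]

    sim_monthly = [900, 850, 700, 500, 400, 600, 950, 1000, 800, 550, 600, 800]
    bills       = [990, 935, 770, 550, 440, 660, 1045, 1100, 880, 605, 660, 880]
    ratio, calibrated = output_ratio_calibrate(sim_monthly, bills)

    # Predicted retrofit savings inherit the same scale factor:
    sim_retrofit = [s * 0.85 for s in sim_monthly]   # model predicts 15% savings
    predicted_savings = sum(calibrated) - ratio * sum(sim_retrofit)
    ```

    Its weakness, and one reason the study compares it against optimization-based methods, is that a single multiplicative factor cannot correct input errors that distort the monthly shape or the savings fraction itself.
    
    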

  3. Automated Geometric Model Builder Using Range Image Sensor Data: Final Acquisition

    SciTech Connect

    Diegert, C.; Sackos, J.

    1999-02-01

    This report documents a data collection where we recorded redundant range image data from multiple views of a simple scene, and recorded accurate survey measurements of the same scene. Collecting these data was a focus of the research project Automated Geometric Model Builder Using Range Image Sensor Data (96-0384), supported by Sandia's Laboratory-Directed Research and Development (LDRD) Program during fiscal years 1996, 1997, and 1998. The data described here are available from the authors on CD-ROM, or electronically over the Internet. Included in this data distribution are Computer-Aided Design (CAD) models we constructed from the survey measurements. The CAD models are compatible with the SolidWorks 98 Plus system, the modern Computer-Aided Design software system that is central to Sandia's DeskTop Engineering Project (DTEP). Integration of our measurements (as built) with the constructive geometry process of the CAD system (as designed) delivers on a vision of the research project. This report on our final data collection will also serve as a final report on the project.

  4. Microwave landing system modeling with application to air traffic control automation

    NASA Technical Reports Server (NTRS)

    Poulose, M. M.

    1992-01-01

    Compared to the current instrument landing system, the microwave landing system (MLS), which is in the advanced stage of implementation, can potentially provide significant fuel and time savings as well as more flexibility in approach and landing functions. However, the expanded coverage and increased accuracy requirements of the MLS make it more susceptible to the features of the site in which it is located. An analytical approach is presented for evaluating the multipath effects of scatterers that are commonly found in airport environments. The approach combines a multiplane model with a ray-tracing technique and a formulation for estimating the electromagnetic fields caused by the antenna array in the presence of scatterers. The model is applied to several airport scenarios. The reduced computational burden enables the scattering effects on MLS position information to be evaluated in near real time. Evaluation in near real time would permit the incorporation of the modeling scheme into air traffic control automation; it would adaptively delineate zones of reduced accuracy within the MLS coverage volume, and help establish safe approach and takeoff trajectories in the presence of uneven terrain and other scatterers.

  5. DockTope: a Web-based tool for automated pMHC-I modelling

    PubMed Central

    Menegatti Rigo, Maurício; Amaral Antunes, Dinler; Vaz de Freitas, Martiela; Fabiano de Almeida Mendes, Marcus; Meira, Lindolfo; Sinigaglia, Marialva; Fioravanti Vieira, Gustavo

    2015-01-01

    The immune system is constantly challenged, being required to protect the organism against a wide variety of infectious pathogens and, at the same time, to avoid autoimmune disorders. One of the most important molecules involved in these events is the Major Histocompatibility Complex class I (MHC-I), responsible for binding and presenting small peptides from the intracellular environment to CD8+ T cells. The study of peptide:MHC-I (pMHC-I) molecules at a structural level is crucial to understand the molecular mechanisms underlying immunologic responses. Unfortunately, there are few pMHC-I structures in the Protein Data Bank (PDB) (especially considering the total number of complexes that could be formed combining different peptides), and pMHC-I modelling tools are scarce. Here, we present DockTope, a free and reliable web-based tool for pMHC-I modelling, based on crystal structures from the PDB. DockTope is fully automated and allows any researcher to construct a pMHC-I complex in an efficient way. We have reproduced a dataset of 135 non-redundant pMHC-I structures from the PDB (Cα RMSD below 1 Å). Modelling of pMHC-I complexes is remarkably important, contributing to the knowledge of important events such as cross-reactivity, autoimmunity, cancer therapy, transplantation and rational vaccine design. PMID:26674250

  7. Automated Probing and Inference of Analytical Models for Metabolic Network Dynamics

    NASA Astrophysics Data System (ADS)

    Wikswo, John; Schmidt, Michael; Jenkins, Jerry; Hood, Jonathan; Lipson, Hod

    2010-03-01

    We introduce a method to automatically construct mathematical models of a biological system, and apply this technique to infer a seven-dimensional nonlinear model of glycolytic oscillations in yeast -- based only on noisy observational data obtained from in silico experiments. Graph-based symbolic encoding, fitness prediction, and estimation-exploration can for the first time provide the level of symbolic regression required for biological applications. With no a priori knowledge of the system, the Cornell algorithm in several hours of computation correctly identified all seven ordinary nonlinear differential equations, the most complicated of which was dA3/dt = -1.12·A3 - 192.24·A3·S1/(1 + 12.50·A3^4) + 124.92·S3 + 31.69·A3·S3, where A3 = [ATP], S1 = [glucose], and S3 = [cytosolic pyruvate and acetaldehyde pool]. Errors on the 26 parameters ranged from 0 to 14.5%. The algorithm also automatically identified new and potentially useful chemical constants of the motion, e.g. -k1·N2 + K2·v1 + k2·S1·A3 - (k4 - k5·v1)·A3^4 + k6 ≈ 0. This approach may enable automated design, control and analysis of wet-lab experiments for model identification/refinement.
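    The inferred rate law can be written out directly as code. Note one caveat: the coefficients are as reported in the abstract, but the grouping of the middle term as a Hill-type inhibition fraction, 192.24·A3·S1/(1 + 12.50·A3^4), is our reading of a garbled source and should be treated as illustrative:

    ```python
    def dA3_dt(A3, S1, S3):
        """Candidate inferred ODE for ATP concentration A3, given glucose S1 and
        the cytosolic pyruvate/acetaldehyde pool S3 (term grouping is assumed)."""
        return (-1.12 * A3
                - 192.24 * A3 * S1 / (1.0 + 12.50 * A3**4)  # Hill-type inhibition (assumed)
                + 124.92 * S3
                + 31.69 * A3 * S3)

    def euler_step(A3, S1, S3, dt):
        """One forward-Euler step, as a model-simulation building block."""
        return A3 + dt * dA3_dt(A3, S1, S3)
    ```

    Once each candidate right-hand side is executable like this, fitness prediction reduces to integrating it against the noisy observational data and scoring the mismatch.
    
    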

  9. Cryo-EM Data Are Superior to Contact and Interface Information in Integrative Modeling.

    PubMed

    de Vries, Sjoerd J; Chauvot de Beauchêne, Isaure; Schindler, Christina E M; Zacharias, Martin

    2016-02-23

    Protein-protein interactions carry out a large variety of essential cellular processes. Cryo-electron microscopy (cryo-EM) is a powerful technique for the modeling of protein-protein interactions at a wide range of resolutions, and recent developments have caused a revolution in the field. At low resolution, cryo-EM maps can drive integrative modeling of the interaction, assembling existing structures into the map. Other experimental techniques can provide information on the interface or on the contacts between the monomers in the complex. This inevitably raises the question regarding which type of data is best suited to drive integrative modeling approaches. Systematic comparison of the prediction accuracy and specificity of the different integrative modeling paradigms is unavailable to date. Here, we compare EM-driven, interface-driven, and contact-driven integrative modeling paradigms. Models were generated for the protein docking benchmark using the ATTRACT docking engine and evaluated using the CAPRI two-star criterion. At 20 Å resolution, EM-driven modeling achieved a success rate of 100%, outperforming the other paradigms even with perfect interface and contact information. Therefore, even very low-resolution cryo-EM data is superior in predicting heterodimeric and heterotrimeric protein assemblies. Our study demonstrates that a force field is not necessary; cryo-EM data alone is sufficient to accurately guide the monomers into place. The resulting rigid models successfully identify regions of conformational change, opening up perspectives for targeted flexible remodeling. PMID:26846888

  10. Facial pressure zones of an oronasal interface for noninvasive ventilation: a computer model analysis

    PubMed Central

    Barros, Luana Souto; Talaia, Pedro; Drummond, Marta; Natal-Jorge, Renato

    2014-01-01

    OBJECTIVE: To study the effects of an oronasal interface (OI) for noninvasive ventilation, using a three-dimensional (3D) computational model with the ability to simulate and evaluate the main pressure zones (PZs) of the OI on the human face. METHODS: We used a 3D digital model of the human face, based on a pre-established geometric model. The model simulated soft tissues, skull, and nasal cartilage. The geometric model was obtained by 3D laser scanning and post-processed for use in the model created, with the objective of separating the cushion from the frame. A computer simulation was performed to determine the pressure required in order to create the facial PZs. We obtained descriptive graphical images of the PZs and their intensity. RESULTS: For the graphical analyses of each face-OI model pair and their respective evaluations, we ran 21 simulations. The computer model identified several high-impact PZs in the nasal bridge and paranasal regions. The variation in soft tissue depth had a direct impact on the amount of pressure applied (438-724 cmH2O). CONCLUSIONS: The computer simulation results indicate that, in patients submitted to noninvasive ventilation with an OI, the probability of skin lesion is higher in the nasal bridge and paranasal regions. This methodology could increase the applicability of biomechanical research on noninvasive ventilation interfaces, providing the information needed in order to choose the interface that best minimizes the risk of skin lesion. PMID:25610506

  11. An ASM/ADM model interface for dynamic plant-wide simulation.

    PubMed

    Nopens, Ingmar; Batstone, Damien J; Copp, John B; Jeppsson, Ulf; Volcke, Eveline; Alex, Jens; Vanrolleghem, Peter A

    2009-04-01

    Mathematical modelling has proven to be very useful in process design, operation and optimisation. A recent trend in WWTP modelling is to include the different subunits in so-called plant-wide models rather than focusing on parts of the entire process. One example of a typical plant-wide model is the coupling of an upstream activated sludge plant (including primary settler and secondary clarifier) to an anaerobic digester for sludge digestion. One of the key challenges when coupling these processes has been the definition of an interface between the well-accepted activated sludge model (ASM1) and the anaerobic digestion model (ADM1). Current characterisation and interface models have key limitations, the most critical of which is the over-use of the X(c) (lumped complex) variable as a main input to the ADM1. Over-use of X(c) does not allow for variation of degradability, carbon oxidation state or nitrogen content. In addition, achieving a target influent pH through the proper definition of the ionic system can be difficult. In this paper, we define an interface and characterisation model that maps degradable components directly to carbohydrates, proteins and lipids (and their soluble analogues), as well as organic acids, rather than using X(c). While this interface has been designed for use with the Benchmark Simulation Model No. 2 (BSM2), it is widely applicable to ADM1 input characterisation in general. We have demonstrated the model both hypothetically (BSM2) and practically on a full-scale anaerobic digester treating sewage sludge. PMID:19232670

  12. Automated System Marketplace 1987: Maturity and Competition.

    ERIC Educational Resources Information Center

    Walton, Robert A.; Bridge, Frank R.

    1988-01-01

    This annual review of the library automation marketplace presents profiles of 15 major library automation firms and looks at emerging trends. Seventeen charts and tables provide data on market shares, number and size of installations, hardware availability, operating systems, and interfaces. A directory of 49 automation sources is included. (MES)

  13. A Laboratory Seismoelectric Measurement for the Permafrost Model with a Frozen-unfrozen Interface

    NASA Astrophysics Data System (ADS)

    Liu, Z.

    2007-12-01

    For the Qing-Cang railway line, located in a permafrost region, seasonal freeze-thaw cycling and the spring thaw of the permafrost are the main factors weakening the railway bed. Determining the depth of the frozen-unfrozen interface below the railway bed is therefore important for railway operation; moreover, it can contribute to evaluating how the railway affects the permafrost environment. Since the frozen-unfrozen interface is a contact between two media of different porosity and saturation, an electric double layer can form at the interface through the adsorption of electrical charge. When a seismic wave is incident on the interface, the relative motion of the charges in the electric double layer induces an electromagnetic (EM) wave, or seismoelectric conversion signal, that can be measured remotely, which offers a potential means of determining the frost depth. A simple permafrost model with a frozen-unfrozen interface was built in two parts: the upper part was a frozen sand block 7 cm thick, and the lower part, of the same material, was unfrozen and saturated with water. The contact between the two parts simulated the frozen-unfrozen interface. The model was placed in a freezer and heated from the bottom by a heating sheet made of electric heating wires laid under the unfrozen part. A P-wave source transducer with a 48 kHz narrow-band frequency was set on top of the frozen part and driven by a square electric pulse. Six electrodes with an even 1 cm spacing were fixed inside the frozen part at a 1 cm vertical distance from the interface. In the experiment, all the analog signals acquired from the temperature sensors, acoustic transducers, and electrodes were sent through preamplifiers and recorded digitally by computer-based virtual instruments (VIs). 
At the beginning of the experiment, with the minimum offset set to 7 cm, the first arrivals of the seismoelectric signals observed at the six electrodes were proportional to the distances between the acoustic source and the electrodes; these EM signals therefore originate from the stationary electromagnetic field that travels along with the acoustic waves. After eight hours, we recognized two new EM-wave events by their identical arrival times at the six electrodes. Event A, with an identical arrival time close to zero, is the EM interference of the high-voltage pulse exciting the acoustic source transducer. The identical arrival time of 23-25 microseconds for event B roughly equals the travel time of the acoustic wave from the source to the interface, so it is evidently the conversion EM signal originating from the electric double layer at the interface. At a minimum offset of 14 cm, event A arrived at the same time but with greatly reduced amplitude, and event B could not be detected because of its weak amplitude. Another event, B', with an identical arrival time of about 50 microseconds could, however, be recognized; it should be a conversion EM wave from the interface excited by the second, higher-amplitude vibration cycle of the acoustic source wave, since its arrival time equals the travel time of the second cycle of the narrow-band acoustic wave to the interface. These laboratory measurements show that the electric double layer formed at the frozen-unfrozen interface can be polarized to generate EM waves by both an EM pulse and a vibration source, which implies that the frozen-unfrozen interface of permafrost could be surveyed by both EM and seismoelectric methods. The results also show that the electric double layer needs several hours to form in a laboratory experiment at low temperature.
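As a quick consistency check on the reported arrival times (illustrative arithmetic, not from the paper), the 23-25 microsecond arrival of event B over the 7 cm source-to-interface path implies a P-wave velocity in the range typical of frozen sand:

```python
# Illustrative check: event B's 23-25 us arrival over the 7 cm frozen
# block implies the P-wave velocity of the frozen sand.
depth = 0.07                      # m, thickness of the frozen block
velocities = []
for t_us in (23.0, 25.0):
    v = depth / (t_us * 1e-6)     # m/s
    velocities.append(v)
    print(f"{t_us} us -> {v:.0f} m/s")   # roughly 2800-3000 m/s
```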

  14. Coarse Grained Modeling of The Interface Between Water and Heterogeneous Surfaces

    SciTech Connect

    Willard, Adam; Chandler, David

    2008-06-23

    Using coarse grained models we investigate the behavior of water adjacent to an extended hydrophobic surface peppered with various fractions of hydrophilic patches of different sizes. We study the spatial dependence of the mean interface height, the solvent density fluctuations related to drying the patchy substrate, and the spatial dependence of interfacial fluctuations. We find that adding small uniform attractive interactions between the substrate and solvent cause the mean position of the interface to be very close to the substrate. Nevertheless, the interfacial fluctuations are large and spatially heterogeneous in response to the underlying patchy substrate. We discuss the implications of these findings to the assembly of heterogeneous surfaces.

  15. Molecular modeling of the green leaf volatile methyl salicylate on atmospheric air/water interfaces.

    PubMed

    Liyana-Arachchi, Thilanga P; Hansel, Amie K; Stevens, Christopher; Ehrenhauser, Franz S; Valsaraj, Kalliat T; Hung, Francisco R

    2013-05-30

    Methyl salicylate (MeSA) is a green leaf volatile (GLV) compound that is emitted in significant amounts by plants, especially when they are under stress conditions. GLVs can then undergo chemical reactions with atmospheric oxidants, yielding compounds that contribute to the formation of secondary organic aerosols (SOAs). We investigated the adsorption of MeSA on atmospheric air/water interfaces at 298 K using thermodynamic integration (TI), potential of mean force (PMF) calculations, and classical molecular dynamics (MD) simulations. Our molecular models can reproduce experimental results of the 1-octanol/water partition coefficient of MeSA. A deep free energy minimum was found for MeSA at the air/water interface, which is mainly driven by energetic interactions between MeSA and water. At the interface, the oxygenated groups in MeSA tend to point toward the water side of the interface, with the aromatic group of MeSA lying farther away from water. Increases in the concentrations of MeSA lead to reductions in the height of the peaks in the MeSA-MeSA g(r) functions, a slowing down of the dynamics of both MeSA and water at the interface, and a reduction in the interfacial surface tension. Our results indicate that MeSA has a strong thermodynamic preference to remain at the air/water interface, and thus chemical reactions with atmospheric oxidants are more likely to take place at this interface, rather than in the water phase of atmospheric water droplets or in the gas phase. PMID:23668770
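The thermodynamic integration step mentioned above amounts to integrating the ensemble-averaged derivative of the potential energy with respect to the coupling parameter. The sketch below uses made-up values, not the paper's data, just to show the mechanics:

```python
# Schematic thermodynamic-integration sketch (illustrative values): the
# free energy change is the integral of <dU/dlambda> over the coupling
# parameter lambda, evaluated here with the trapezoid rule.

def trapezoid(xs, ys):
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

lambdas = [0.0, 0.25, 0.5, 0.75, 1.0]
mean_dU_dlambda = [0.0, -8.0, -12.0, -8.0, 0.0]   # kJ/mol, hypothetical

delta_F = trapezoid(lambdas, mean_dU_dlambda)
print(delta_F)  # -7.0: a negative delta_F marks a free energy minimum
```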

  16. Integrating Automated Data into Ecosystem Models: How Can We Drink from a Firehose?

    NASA Astrophysics Data System (ADS)

    Allen, M. F.; Harmon, T. C.

    2014-12-01

    Sensors and imaging are changing the way we are measuring ecosystem behavior. Within short time frames, we are able to capture how organisms behave in response to rapid change, and detect events that alter composition and shift states. To transform these observations into process-level understanding, we need to efficiently interpret signals. One way to do this is to automatically integrate the data into ecosystem models. In our soil carbon cycling studies, we collect continuous time series for meteorological conditions, soil processes, and automated imagery. To characterize the timing and clarity of change behavior in our data, we adopted signal-processing approaches like coupled wavelet/coherency analyses. In situ CO2 measurements allow us to visualize when root/microbial activity results in CO2 being respired from the soil surface, versus when other chemical/physical phenomena may alter gas pathways. While these approaches are interesting in understanding individual phenomena, they fail to get us beyond the study of individual processes. Sensor data are compared with the outputs from ecosystem models to detect the patterns in specific phenomena or to revise model parameters or traits. For instance, we measured unexpected levels of soil CO2 in a tropical ecosystem. By examining small-scale ecosystem model parameters, we were able to pinpoint those parameters that needed to be altered to resemble the data outputs. However, we do not capture the essence of large-scale ecosystem shifts. The time is right to utilize real-time data assimilation as an additional forcing of ecosystem models. Continuous, diurnal soil temperature and moisture, along with hourly hyphal or root growth could feed into well-established ecosystem models such as HYDRUS or DayCENT. This approach would provide instantaneous "measurements" of shifting ecosystem processes as they occur, allowing us to identify critical process connections more efficiently.
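A much-simplified version of the signal-timing question (when does soil CO2 respond to temperature?) can be posed as finding the lag of maximum cross-correlation between two series. This toy sketch with synthetic data stands in for the wavelet/coherency analyses, which resolve the same relationship per frequency and time:

```python
# Minimal sketch of the timing analysis (not wavelet coherence itself):
# find the lag at which a synthetic soil-CO2 series best correlates with
# soil temperature.
import math

temp = [math.sin(2 * math.pi * h / 24.0) for h in range(96)]  # diurnal cycle
co2 = [temp[(h - 3) % 96] for h in range(96)]                 # lags by 3 h

def corr_at_lag(x, y, k):
    pairs = [(x[t], y[t + k]) for t in range(len(x) - k)]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    num = sum((a - mx) * (b - my) for a, b in pairs)
    den = math.sqrt(sum((a - mx) ** 2 for a, _ in pairs) *
                    sum((b - my) ** 2 for _, b in pairs))
    return num / den

best_lag = max(range(12), key=lambda k: corr_at_lag(temp, co2, k))
print(best_lag)  # 3: CO2 efflux tracks temperature with a 3 h delay
```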

  17. A Demonstration of Automated DNA Sequencing.

    ERIC Educational Resources Information Center

    Latourelle, Sandra; Seidel-Rogol, Bonnie

    1998-01-01

    Details a simulation that employs a paper-and-pencil model to demonstrate the principles behind automated DNA sequencing. Discusses the advantages of automated sequencing as well as the chemistry of automated DNA sequencing. (DDR)

  18. Continuity-based model interfacing for plant-wide simulation: a general approach.

    PubMed

    Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A

    2006-08-01

    In plant-wide simulation studies of wastewater treatment facilities, often existing models from different origin need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework to construct model interfaces for models of wastewater systems, taking into account conservation principles. In this contribution, the CBIM approach is applied to study the effect of sludge digestion reject water treatment with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark simulation model no. 2 (BSM2) is used to simulate the behaviour of the complete WWTP including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach was further refined and particular issues when coupling models in which pH is considered as a state variable, are pointed out. PMID:16846629

  19. Interface localization in the 2D Ising model with a driven line

    NASA Astrophysics Data System (ADS)

    Cohen, O.; Mukamel, D.

    2016-04-01

    We study the effect of a one-dimensional driving field on the interface between two coexisting phases in a two-dimensional model. This is done by considering an Ising model on a cylinder with Glauber dynamics at all sites and additional biased Kawasaki dynamics in the central ring. Based on the exact solution of the two-dimensional Ising model, we are able to compute the phase diagram of the driven model within a special limit of fast drive and slow spin flips in the central ring. The model is found to exhibit two phases where the interface is pinned to the central ring: one in which it fluctuates symmetrically around the central ring and another where it fluctuates asymmetrically. In addition, we find a phase where the interface is centered in the bulk of the system, either below or above the central ring of the cylinder. In the latter case, the symmetry breaking is ‘stronger’ than that found in equilibrium when considering a repulsive potential on the central ring. This equilibrium model is analyzed here by using a restricted solid-on-solid model.
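The equilibrium ingredient of the model, Glauber single-spin-flip dynamics on a cylindrical Ising lattice, can be sketched as follows. The driven Kawasaki exchange in the central ring is omitted, and all parameters are illustrative:

```python
# Minimal equilibrium sketch: Glauber spin-flip dynamics on a 2D Ising
# lattice that is periodic in one direction (a cylinder). The biased
# Kawasaki dynamics of the central ring are not included.
import math
import random

random.seed(0)
L, T, J = 16, 1.5, 1.0              # lattice size, temperature, coupling
spins = [[1] * L for _ in range(L)]

def neighbors_sum(i, j):
    # periodic around the cylinder in j, open ends in i
    s = spins[i][(j - 1) % L] + spins[i][(j + 1) % L]
    if i > 0:
        s += spins[i - 1][j]
    if i < L - 1:
        s += spins[i + 1][j]
    return s

for _ in range(200 * L * L):        # roughly 200 Glauber sweeps
    i, j = random.randrange(L), random.randrange(L)
    dE = 2.0 * J * spins[i][j] * neighbors_sum(i, j)
    if random.random() < 1.0 / (1.0 + math.exp(dE / T)):  # Glauber rate
        spins[i][j] *= -1

m = sum(sum(row) for row in spins) / (L * L)
print(abs(m) > 0.5)  # True: ordered below the critical temperature
```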

  20. Dynamics modeling for parallel haptic interfaces with force sensing and control.

    PubMed

    Bernstein, Nicholas; Lawrence, Dale; Pao, Lucy

    2013-01-01

    Closed-loop force control can be used on haptic interfaces (HIs) to mitigate the effects of mechanism dynamics. A single multidimensional force-torque sensor is often employed to measure the interaction force between the haptic device and the user's hand. The parallel haptic interface at the University of Colorado (CU) instead employs smaller 1D force sensors oriented along each of the five actuating rods to build up a 5D force vector. This paper shows that a particular manipulandum/hand partition in the system dynamics is induced by the placement and type of force sensing, and discusses the implications on force and impedance control for parallel haptic interfaces. The details of a "squaring down" process are also discussed, showing how to obtain reduced degree-of-freedom models from the general six degree-of-freedom dynamics formulation. PMID:24808395
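The composition of a net force from several 1-D rod sensors can be sketched with a purely illustrative geometry (not the CU device's actual rod layout): each sensor reads a scalar force along its rod axis, and stacking the unit axis vectors maps those scalars to a Cartesian force:

```python
# Illustrative geometry (not the CU haptic interface): five 1-D rod
# sensors each measure a scalar force along their axis; summing each
# scalar times its rod's unit vector gives the net 3-D force component of
# the 5-D force vector (the other two dimensions would be torques).

def unit(v):
    n = sum(c * c for c in v) ** 0.5
    return [c / n for c in v]

rod_axes = [unit(v) for v in
            [(1, 0, 1), (-1, 0, 1), (0, 1, 1), (0, -1, 1), (0, 0, 1)]]
rod_forces = [2.0, 2.0, 1.0, 1.0, 3.0]      # N, along each rod

net = [sum(f * axis[k] for f, axis in zip(rod_forces, rod_axes))
       for k in range(3)]
print(net)  # lateral components cancel by symmetry; thrust adds along z
```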

  1. Modeling Complex Cross-Systems Software Interfaces Using SysML

    NASA Technical Reports Server (NTRS)

    Mandutianu, Sanda; Morillo, Ron; Simpson, Kim; Liepack, Otfrid; Bonanne, Kevin

    2013-01-01

    The complex flight and ground systems for NASA human space exploration are designed, built, operated and managed as separate programs and projects. However, each system relies on one or more of the other systems in order to accomplish specific mission objectives, creating a complex, tightly coupled architecture. Thus, there is a fundamental need to understand how each system interacts with the other. To determine if a model-based system engineering approach could be utilized to assist with understanding the complex system interactions, the NASA Engineering and Safety Center (NESC) sponsored a task to develop an approach for performing cross-system behavior modeling. This paper presents the results of applying Model Based Systems Engineering (MBSE) principles using the System Modeling Language (SysML) to define cross-system behaviors and how they map to cross-system software interfaces documented in system-level Interface Control Documents (ICDs).

  2. Nuclear Reactor/Hydrogen Process Interface Including the HyPEP Model

    SciTech Connect

    Steven R. Sherman

    2007-05-01

    The Nuclear Reactor/Hydrogen Plant interface is the intermediate heat transport loop that will connect a very high temperature gas-cooled nuclear reactor (VHTR) to a thermochemical, high-temperature electrolysis, or hybrid hydrogen production plant. A prototype plant called the Next Generation Nuclear Plant (NGNP) is planned for construction and operation at the Idaho National Laboratory in the 2018-2021 timeframe, and will involve a VHTR, a high-temperature interface, and a hydrogen production plant. The interface is responsible for transporting high-temperature thermal energy from the nuclear reactor to the hydrogen production plant while protecting the nuclear plant from operational disturbances at the hydrogen plant. Development of the interface is occurring under the DOE Nuclear Hydrogen Initiative (NHI) and involves the study, design, and development of high-temperature heat exchangers, heat transport systems, materials, safety, and integrated system models. Research and development work on the system interface began in 2004 and is expected to continue at least until the start of construction of an engineering-scale demonstration plant.

  3. Open boundary conditions for the Diffuse Interface Model in 1-D

    NASA Astrophysics Data System (ADS)

    Desmarais, J. L.; Kuerten, J. G. M.

    2014-04-01

    New techniques are developed for solving multi-phase flows in unbounded domains using the Diffuse Interface Model in 1-D. They extend two open boundary conditions originally designed for the Navier-Stokes equations. The non-dimensional formulation of the DIM generalizes the approach to any fluid. The equations support a steady state whose analytical approximation close to the critical point depends only on temperature. This feature enables the use of detectors at the boundaries switching between conventional boundary conditions in bulk phases and a multi-phase strategy in interfacial regions. Moreover, the latter takes advantage of the steady state approximation to minimize the interface-boundary interactions. The techniques are applied to fluids experiencing a phase transition and where the interface between the phases travels through one of the boundaries. When the interface crossing the boundary is fully developed, the technique greatly improves results relative to cases where conventional boundary conditions can be used. Limitations appear when the interface crossing the boundary is not a stable equilibrium between the two phases: the terms responsible for creating the true balance between the phases perturb the interior solution. Both boundary conditions present good numerical stability properties: the error remains bounded when the initial conditions or the far-field values are perturbed. For the PML (perfectly matched layer), the influence of its main parameters on the global error is investigated to make a compromise between computational costs and maximum error. The approach can be extended to multiple spatial dimensions.

  4. NURBS- and T-spline-based isogeometric cohesive zone modeling of interface debonding

    NASA Astrophysics Data System (ADS)

    Dimitri, R.; De Lorenzis, L.; Wriggers, P.; Zavarise, G.

    2014-08-01

    Cohesive zone (CZ) models have long been used by the scientific community to analyze the progressive damage of materials and interfaces. In these models, non-linear relationships between tractions and relative displacements are assumed, which dictate both the work of separation per unit fracture surface and the peak stress that has to be reached for the crack formation. This contribution deals with isogeometric CZ modeling of interface debonding. The interface is discretized with generalized contact elements which account for both contact and cohesive debonding within a unified framework. The formulation is suitable for non-matching discretizations of the interacting surfaces in the presence of large deformations and large relative displacements. The isogeometric discretizations are based on non-uniform rational B-splines (NURBS) as well as analysis-suitable T-splines enabling local refinement. Conventional Lagrange polynomial discretizations are also used for comparison purposes. Some numerical examples demonstrate that the proposed formulation based on isogeometric analysis is a computationally accurate and efficient technology to solve challenging interface debonding problems in 2D and 3D.
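The simplest traction-separation relationship of the kind described is the bilinear law: traction rises linearly to the peak stress, then softens linearly to zero at full decohesion, and the area under the curve is the work of separation. A sketch with illustrative parameters:

```python
# Bilinear cohesive traction-separation law (illustrative parameters):
# traction peaks at T_MAX when the separation reaches D0, then softens
# to zero at DF (full decohesion).

T_MAX = 10.0       # peak traction (MPa)
D0, DF = 1.0, 4.0  # separation at peak and at full decohesion (mm)

def traction(delta):
    if delta <= 0.0:
        return 0.0
    if delta <= D0:
        return T_MAX * delta / D0          # elastic rise
    if delta <= DF:
        return T_MAX * (DF - delta) / (DF - D0)  # linear softening
    return 0.0                             # fully debonded

# Work of separation = area of the triangle under the law.
work = 0.5 * T_MAX * DF
print(traction(0.5), traction(2.5), work)  # 5.0 5.0 20.0
```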

  5. Sloan Digital Sky Survey photometric telescope automation and observing software

    SciTech Connect

    Eric H. Neilsen, Jr. et al.

    2002-10-16

    The photometric telescope (PT) provides observations necessary for the photometric calibration of the Sloan Digital Sky Survey (SDSS). Because the attention of the observing staff is occupied by the operation of the 2.5 meter telescope which takes the survey data proper, the PT must reliably take data with little supervision. In this paper we describe the PT's observing program, MOP, which automates most tasks necessary for observing. MOP's automated target selection is closely modeled on the actions a human observer might take, and is built upon a user interface that can be (and has been) used for manual operation. This results in an interface that makes it easy for an observer to track the activities of the automating procedures and intervene with minimum disturbance when necessary. MOP selects targets from the same list of standard star and calibration fields presented to the user, and chooses standard star fields covering ranges of airmass, color, and time necessary to monitor atmospheric extinction and produce a photometric solution. The software determines when additional standard star fields are unnecessary, and selects survey calibration fields according to availability and priority. Other automated features of MOP, such as maintaining the focus and keeping a night log, are also built around still functional manual interfaces, allowing the observer to be as active in observing as desired; MOP's automated features may be used as tools for manual observing, ignored entirely, or allowed to run the telescope with minimal supervision when taking routine data.
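The automated selection of standard star fields spanning a range of airmass can be sketched as a simple coverage problem. This is a hypothetical illustration of the idea, not MOP's actual selection logic, and the field names and airmass values are placeholders:

```python
# Hypothetical sketch of airmass-coverage target selection (not MOP's
# implementation): greedily pick one standard-star field per airmass bin
# so an extinction solution can be fit across the full range.

fields = [                      # (name, current airmass), illustrative
    ("SA92", 1.05), ("SA95", 1.30), ("SA98", 1.60),
    ("SA101", 2.00), ("SA104", 1.15),
]

def pick_fields(fields, bins=((1.0, 1.2), (1.2, 1.5), (1.5, 2.1))):
    chosen = []
    for lo, hi in bins:         # one field per airmass bin suffices here
        for name, x in fields:
            if lo <= x < hi:
                chosen.append(name)
                break
    return chosen

print(pick_fields(fields))  # ['SA92', 'SA95', 'SA98']
```

A real scheduler would also weigh color coverage, time since last observation, and field priority.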

  6. Sloan Digital Sky Survey photometric telescope automation and observing software

    NASA Astrophysics Data System (ADS)

    Neilsen, Eric H., Jr.; Uomoto, Alan; Kent, Steven M.; Annis, James T.

    2002-12-01

    The photometric telescope (PT) provides observations necessary for the photometric calibration of the Sloan Digital Sky Survey (SDSS). Because the attention of the observing staff is occupied by the operation of the 2.5 meter telescope which takes the survey data proper, the PT must reliably take data with little supervision. In this paper we describe the PT's observing program, MOP, which automates most tasks necessary for observing. MOP's automated target selection is closely modeled on the actions a human observer might take, and is built upon a user interface that can be (and has been) used for manual operation. This results in an interface that makes it easy for an observer to track the activities of the automating procedures and intervene with minimum disturbance when necessary. MOP selects targets from the same list of standard star and calibration fields presented to the user, and chooses standard star fields covering ranges of airmass, color, and time necessary to monitor atmospheric extinction and produce a photometric solution. The software determines when additional standard star fields are unnecessary, and selects survey calibration fields according to availability and priority. Other automated features of MOP, such as maintaining the focus and keeping a night log, are also built around still functional manual interfaces, allowing the observer to be as active in observing as desired; MOP's automated features may be used as tools for manual observing, ignored entirely, or allowed to run the telescope with minimal supervision when taking routine data.

  7. Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model

    PubMed Central

    Kajić, Vedran; Esmaeelpour, Marieh; Považay, Boris; Marshall, David; Rosin, Paul L.; Drexler, Wolfgang

    2011-01-01

    A two-stage statistical model based on texture and shape for fully automatic choroidal segmentation of normal and pathologic eyes obtained by a 1060 nm optical coherence tomography (OCT) system is developed. A novel dynamic programming approach is implemented to determine the location of the retinal pigment epithelium/Bruch's membrane/choriocapillaris (RBC) boundary. The choroid–sclera interface (CSI) is segmented using a statistical model. The algorithm is robust even in the presence of speckle noise, low signal (thick choroid), retinal pigment epithelium (RPE) detachments and atrophy, drusen, shadowing and other artifacts. Evaluation against a set of 871 manually segmented cross-sectional scans from 12 eyes achieves an average error rate of 13%, computed per tomogram as a ratio of incorrectly classified pixels and the total layer surface. For the first time a fully automatic choroidal segmentation algorithm is successfully applied to a wide range of clinical volumetric OCT data. PMID:22254171
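Dynamic programming boundary detection of this kind typically finds, in each image column, the row that minimizes an accumulated cost while constraining how far the boundary can jump between columns. A minimal sketch on a synthetic cost grid (not the paper's texture/shape cost model):

```python
# Minimal dynamic-programming boundary extraction (synthetic cost grid,
# not the paper's cost model): pick the minimum-cost row per column while
# letting the boundary move at most one row between adjacent columns.

cost = [  # rows x columns; low values mark the true boundary
    [9, 9, 1, 9],
    [1, 1, 9, 1],
    [9, 9, 9, 9],
]
R, C = len(cost), len(cost[0])
INF = float("inf")
acc = [[cost[r][0] if c == 0 else INF for c in range(C)] for r in range(R)]
back = [[0] * C for _ in range(R)]

for c in range(1, C):
    for r in range(R):
        for dr in (-1, 0, 1):                  # smoothness constraint
            pr = r + dr
            if 0 <= pr < R and acc[pr][c - 1] + cost[r][c] < acc[r][c]:
                acc[r][c] = acc[pr][c - 1] + cost[r][c]
                back[r][c] = pr

r = min(range(R), key=lambda rr: acc[rr][C - 1])  # best final row
path = [r]
for c in range(C - 1, 0, -1):                     # backtrack
    r = back[r][c]
    path.append(r)
path.reverse()
print(path)  # [1, 1, 0, 1]: the low-cost boundary row in each column
```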

  8. Integrated surface and groundwater modelling in the Thames Basin, UK using the Open Modelling Interface

    NASA Astrophysics Data System (ADS)

    Mackay, Jonathan; Abesser, Corinna; Hughes, Andrew; Jackson, Chris; Kingdon, Andrew; Mansour, Majdi; Pachocka, Magdalena; Wang, Lei; Williams, Ann

    2013-04-01

    The River Thames catchment is situated in the south-east of England. It covers approximately 16,000 km2 and is the most heavily populated river basin in the UK. It is also one of the driest and has experienced severe drought events in the recent past. With the onset of climate change and human exploitation of our environment, there are now serious concerns over the sustainability of water resources in this basin with 6 million m3 consumed every day for public water supply alone. Groundwater in the Thames basin is extremely important, providing 40% of water for public supply. The principal aquifer is the Chalk, a dual permeability limestone, which has been extensively studied to understand its hydraulic properties. The fractured Jurassic limestone in the upper catchment also forms an important aquifer, supporting baseflow downstream during periods of drought. These aquifers are unconnected other than through the River Thames and its tributaries, which provide two-thirds of London's drinking water. Therefore, to manage these water resources sustainably and to make robust projections into the future, surface and groundwater processes must be considered in combination. This necessitates the simulation of the feedbacks and complex interactions between different parts of the water cycle, and the development of integrated environmental models. The Open Modelling Interface (OpenMI) standard provides a method through which environmental models of varying complexity and structure can be linked, allowing them to run simultaneously and exchange data at each timestep. This architecture has allowed us to represent the surface and subsurface flow processes within the Thames basin at an appropriate level of complexity based on our understanding of particular hydrological processes and features. 
We have developed a hydrological model in OpenMI which integrates a process-driven, gridded finite difference groundwater model of the Chalk with a more simplistic, semi-distributed conceptual model of the Jurassic limestone. A distributed river routing model of the Thames has also been integrated to connect the surface and subsurface hydrological processes. This application demonstrates the potential benefits and issues associated with implementing this approach.
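The core OpenMI idea, components advancing one timestep at a time and exchanging values at each step, can be illustrated with a toy coupling (no actual OpenMI API; the linear-reservoir numbers are made up):

```python
# Toy sketch of timestep-wise model linking in the OpenMI style (not the
# OpenMI API): a groundwater store and a river component run in lockstep,
# exchanging recharge and baseflow at every step.

class Groundwater:
    def __init__(self, storage):
        self.storage = storage
    def step(self, recharge):
        self.storage += recharge
        baseflow = 0.1 * self.storage    # linear-reservoir outflow
        self.storage -= baseflow
        return baseflow

class River:
    def __init__(self):
        self.flow = 0.0
    def step(self, baseflow, runoff):
        self.flow = baseflow + runoff
        return self.flow

gw, river = Groundwater(storage=100.0), River()
for t in range(3):                        # linked simulation loop
    q = gw.step(recharge=2.0)             # exchange 1: baseflow out of GW
    river.step(baseflow=q, runoff=5.0)    # exchange 2: baseflow into river
    print(f"t={t}: baseflow={q:.2f}, river flow={river.flow:.2f}")
```

The OpenMI standard generalizes this pattern so that independently developed models of different structure and complexity can be linked without recoding.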

  9. MaxMod: a hidden Markov model based novel interface to MODELLER for improved prediction of protein 3D models.

    PubMed

    Parida, Bikram K; Panda, Prasanna K; Misra, Namrata; Mishra, Barada K

    2015-02-01

    Modeling the three-dimensional (3D) structures of proteins assumes great significance because of its manifold applications in biomolecular research. Toward this goal, we present MaxMod, a graphical user interface (GUI) of the MODELLER program that combines profile hidden Markov model (profile HMM) method with Clustal Omega program to significantly improve the selection of homologous templates and target-template alignment for construction of accurate 3D protein models. MaxMod distinguishes itself from other existing GUIs of MODELLER software by implementing effortless modeling of proteins using templates that bear modified residues. Additionally, it provides various features such as loop optimization, express modeling (a feature where protein model can be generated directly from its sequence, without any further user intervention) and automatic update of PDB database, thus enhancing the user-friendly control of computational tasks. We find that HMM-based MaxMod performs better than other modeling packages in terms of execution time and model quality. MaxMod is freely available as a downloadable standalone tool for academic and non-commercial purpose at http://www.immt.res.in/maxmod/. PMID:25636267

  10. Modeling strategic use of human computer interfaces with novel hidden Markov models.

    PubMed

    Mariano, Laura J; Poore, Joshua C; Krum, David M; Schwartz, Jana L; Coskren, William D; Jones, Eric M

    2015-01-01

    Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM) for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game's functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic task pursuit. 
PMID:26191026
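The BP-HMM itself is beyond a short sketch, but its building block, decoding a hidden state sequence from an observed activity log, is standard Viterbi. Below, a two-state toy model ("explore" vs "exploit" phases emitting interface events) with entirely made-up probabilities:

```python
# Plain Viterbi decoding of a toy two-state HMM over an activity log
# (illustrative probabilities, not the BP-HMM of the paper).
import math

states = ["explore", "exploit"]
start = {"explore": 0.8, "exploit": 0.2}
trans = {"explore": {"explore": 0.7, "exploit": 0.3},
         "exploit": {"explore": 0.2, "exploit": 0.8}}
emit = {"explore": {"menu": 0.7, "move": 0.3},
        "exploit": {"menu": 0.1, "move": 0.9}}

def viterbi(obs):
    v = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[-1][p] + math.log(trans[p][s]))
            col[s] = (v[-1][prev] + math.log(trans[prev][s])
                      + math.log(emit[s][o]))
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    s = max(states, key=lambda st: v[-1][st])
    path = [s]
    for ptr in reversed(back):    # backtrack the most likely state path
        s = ptr[s]
        path.append(s)
    return path[::-1]

log = ["menu", "menu", "move", "move", "move"]
print(viterbi(log))  # ['explore', 'explore', 'exploit', 'exploit', 'exploit']
```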

  11. Modeling strategic use of human computer interfaces with novel hidden Markov models

    PubMed Central

    Mariano, Laura J.; Poore, Joshua C.; Krum, David M.; Schwartz, Jana L.; Coskren, William D.; Jones, Eric M.

    2015-01-01

    Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM) for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game's functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic task pursuit. 
PMID:26191026
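The BP-HMM in this record shares a library of latent activity states across users, which is well beyond a short sketch; but the core decoding step common to all HMM variants can be illustrated with a standard Viterbi decode over a discrete action alphabet. The code below is a generic illustration, not the paper's implementation, and every number in the toy example is invented.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most-likely hidden state path for a discrete-emission HMM.
    obs: observation indices; pi: initial state probabilities;
    A: state transition matrix; B: emission probability matrix."""
    n_states = len(pi)
    T = len(obs)
    logd = np.full((T, n_states), -np.inf)   # log-probability of best path
    back = np.zeros((T, n_states), dtype=int)  # backpointers
    logd[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = logd[t - 1] + np.log(A[:, j])
            back[t, j] = int(np.argmax(scores))
            logd[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])
    path = [int(np.argmax(logd[-1]))]
    for t in range(T - 1, 0, -1):            # trace backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: two latent activity patterns ("explore" vs. "exploit")
# over a two-action interface log; all parameters are made up.
pi = np.array([0.9, 0.1])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
print(viterbi([0, 0, 1, 1], pi, A, B))  # → [0, 0, 1, 1]
```

In the paper's setting, each decoded state sequence would segment a user's activity log into reusable behavioral patterns; the beta-process prior additionally decides which patterns each user draws from the shared library.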

  12. An Agent-Based Interface to Terrestrial Ecological Forecasting

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Nemani, Ramakrishna; Pang, Wan-Lin; Votava, Petr; Etzioni, Oren

    2004-01-01

    This paper describes a flexible agent-based ecological forecasting system that combines multiple distributed data sources and models to provide near-real-time answers to questions about the state of the Earth system. We build on novel techniques in automated constraint-based planning and natural language interfaces to automatically generate data products based on descriptions of the desired products.

  13. Visualization: A Mind-Machine Interface for Discovery.

    PubMed

    Nielsen, Cydney B

    2016-02-01

    Computation is critical for enabling us to process data volumes and model data complexities that are unthinkable by manual means. However, we are far from automating the sense-making process. Human knowledge and reasoning are critical for discovery. Visualization offers a powerful interface between mind and machine that should be further exploited in future genome analysis tools. PMID:26739384

  14. The location of the thermodynamic atmosphere-ice interface in fully-coupled models

    NASA Astrophysics Data System (ADS)

    West, A. E.; McLaren, A. J.; Hewitt, H. T.; Best, M. J.

    2015-11-01

    In fully-coupled climate models, it is now normal to include a sea ice component with multiple layers, each having their own temperature. When coupling this component to an atmosphere model, it is more common for surface variables to be calculated in the sea ice component of the model, the equivalent of placing an interface immediately above the surface. This study uses a one-dimensional (1-D) version of the Los Alamos sea ice model (CICE) thermodynamic solver and the Met Office atmospheric surface exchange solver (JULES) to compare this method with that of allowing the surface variables to be calculated instead in the atmosphere, the equivalent of placing an interface immediately below the surface. The model is forced with a sensible heat flux derived from a sinusoidally varying near-surface air temperature. The two coupling methods are tested first with a 1-h coupling frequency, and then a 3-h coupling frequency, both commonly-used. With an above-surface interface, the resulting surface temperature and flux cycles contain large phase and amplitude errors, as well as having a very "blocky" shape. The simulation of both quantities is greatly improved when the interface is instead placed within the top ice layer, allowing surface variables to be calculated on the shorter timescale of the atmosphere. There is also an unexpected slight improvement in the simulation of the top-layer ice temperature by the ice model. The study concludes with a discussion of the implications of these results to three-dimensional modelling. An appendix examines the stability of the alternative method of coupling under various physically realistic scenarios.

  15. Numerical modeling of flow in a differential chamber of the gas-dynamic interface of a portable mass-spectrometer

    NASA Astrophysics Data System (ADS)

    Pivovarova, E. A.; Smirnovsky, A. A.; Schmidt, A. A.

    2013-11-01

    Mathematical modeling of flow in the differential chamber of the gas-dynamic interface of a portable mass spectrometer was carried out to comprehensively study the flow structure and to make recommendations for optimization of the gas-dynamic interface. The modeling was performed using the open-source OpenFOAM computational platform. Conditions for an optimal operating mode of the differential chamber were determined.

  16. Third-generation electrokinetically pumped sheath-flow nanospray interface with improved stability and sensitivity for automated capillary zone electrophoresis-mass spectrometry analysis of complex proteome digests.

    PubMed

    Sun, Liangliang; Zhu, Guijie; Zhang, Zhenbin; Mou, Si; Dovichi, Norman J

    2015-05-01

    We have reported a set of electrokinetically pumped sheath-flow nanoelectrospray interfaces to couple capillary zone electrophoresis with mass spectrometry. A separation capillary is threaded through a cross into a glass emitter. A side arm provides fluidic contact with a sheath buffer reservoir that is connected to a power supply. The potential applied to the sheath buffer drives electro-osmosis in the emitter to pump the sheath fluid at nanoliter-per-minute rates. Our first-generation interface placed a flat-tipped capillary in the emitter. Sensitivity was inversely related to orifice size and to the distance from the capillary tip to the emitter orifice. A second-generation interface used a capillary with an etched tip that allowed the capillary exit to approach within a few hundred micrometers of the emitter orifice, resulting in a significant increase in sensitivity. In both the first- and second-generation interfaces, the emitter diameter was typically 8 μm; these narrow orifices were susceptible to plugging and tended to have limited lifetimes. We now report a third-generation interface that employs a larger-diameter emitter orifice with a very short distance between the capillary tip and the emitter orifice. This modified interface is much more robust and has a much longer lifetime than our previous designs, with no loss in sensitivity. We evaluated the third-generation interface in a 5000 min (127 runs, 3.5 days) repetitive analysis of a bovine serum albumin digest using an uncoated capillary. We observed a 10% relative standard deviation in peak area, an average of 160,000 theoretical plates, and very low carry-over (much less than 1%). We employed a linear-polyacrylamide (LPA)-coated capillary for single-shot, bottom-up proteomic analysis of 300 ng of Xenopus laevis fertilized-egg proteome digest and identified 1249 protein groups and 4038 peptides in a 110 min separation using an LTQ-Orbitrap Velos mass spectrometer; peak capacity was ∼330. The proteome data set generated with this third-generation CZE-MS/MS interface is similar in size to that obtained from a commercial ultraperformance liquid chromatographic analysis of the same sample with the same mass spectrometer and a similar analysis time. PMID:25786131

  17. Fully automated segmentation of oncological PET volumes using a combined multiscale and statistical model

    SciTech Connect

    Montgomery, David W. G.; Amira, Abbes; Zaidi, Habib

    2007-02-15

    The widespread application of positron emission tomography (PET) in clinical oncology has driven this imaging technology into a number of new research and clinical arenas. Increasing numbers of patient scans have led to an urgent need for efficient data handling and the development of new image analysis techniques to aid clinicians in the diagnosis of disease and planning of treatment. Automatic quantitative assessment of metabolic PET data is attractive and will certainly revolutionize the practice of functional imaging, since it can lower variability across institutions and may enhance the consistency of image interpretation independent of reader experience. In this paper, a novel automated system for the segmentation of oncological PET data, aimed at providing an accurate quantitative analysis tool, is proposed. The initial step involves expectation-maximization (EM)-based mixture modeling using a k-means clustering procedure, which varies voxel order for initialization. A multiscale Markov model is then used to refine this segmentation by modeling spatial correlations between neighboring image voxels. An experimental study using an anthropomorphic thorax phantom was conducted for quantitative evaluation of the performance of the proposed segmentation algorithm. The comparison of actual tumor volumes to the volumes calculated using different segmentation methodologies, including standard k-means, the spatial-domain Markov random field model (MRFM), and the new multiscale MRFM proposed in this paper, showed that the latter dramatically reduces the relative error to less than 8% for small lesions (7 mm radii) and less than 3.5% for larger lesions (9 mm radii). The analysis of the resulting segmentations of clinical oncologic PET data seems to confirm that this methodology shows promise and can successfully segment patient lesions. For problematic images, this technique enables the identification of tumors situated very close to regions of high normal physiologic uptake. The use of this technique to estimate tumor volumes for assessment of response to therapy and to delineate treatment volumes for combined PET/CT-based radiation therapy treatment planning is also discussed.
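The segmentation pipeline above initializes an EM mixture model with a k-means clustering of voxel intensities before the multiscale Markov refinement. As a much-simplified illustration of that initialization step only (1-D intensities, Lloyd's algorithm, with none of the paper's voxel-ordering or Markov machinery), one might write:

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Lloyd's algorithm on scalar intensities, as is commonly used
    to initialize the means of an EM mixture model."""
    rng = np.random.default_rng(seed)
    # Start from k distinct observed values.
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute means.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    centers = np.sort(centers)
    labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
    return centers, labels

# Invented two-class intensity data standing in for PET voxel values.
voxels = np.array([0.9, 1.0, 1.1, 9.9, 10.0, 10.1])
centers, labels = kmeans_1d(voxels, k=2)  # centers ≈ [1.0, 10.0]
```

In the full method these cluster means would seed the EM iterations, and the multiscale Markov random field would then enforce spatial coherence between neighboring voxels.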

  18. A DIFFUSE-INTERFACE APPROACH FOR MODELING TRANSPORT, DIFFUSION AND ADSORPTION/DESORPTION OF MATERIAL QUANTITIES ON A DEFORMABLE INTERFACE*

    PubMed Central

    Teigen, Knut Erik; Li, Xiangrong; Lowengrub, John; Wang, Fan; Voigt, Axel

    2010-01-01

    A method is presented to solve two-phase problems involving a material quantity on an interface. The interface can be advected, stretched, and change topology, and material can be adsorbed to or desorbed from it. The method is based on the use of a diffuse interface framework, which allows a simple implementation using standard finite-difference or finite-element techniques. Here, finite-difference methods on a block-structured adaptive grid are used, and the resulting equations are solved using a non-linear multigrid method. Interfacial flow with soluble surfactants is used as an example of the application of the method, and several test cases are presented demonstrating its accuracy and convergence. PMID:21373370

  19. Modeling the Effect of Interface Wear on Fatigue Hysteresis Behavior of Carbon Fiber-Reinforced Ceramic-Matrix Composites

    NASA Astrophysics Data System (ADS)

    Longbiao, Li

    2015-12-01

    An analytical method has been developed to investigate the effect of interface wear on the fatigue hysteresis behavior of carbon fiber-reinforced ceramic-matrix composites (CMCs). The damage mechanisms, i.e., matrix multicracking, fiber/matrix interface debonding and interface wear, fiber fracture, slip and pull-out, have been considered. The statistical matrix multicracking model and the fracture-mechanics interface debonding criterion were used to determine the matrix crack spacing and interface debonded length. Upon first loading to the fatigue peak stress and during subsequent cyclic loading, the fiber failure probabilities and fracture locations were determined by combining the interface wear model and the fiber statistical failure model, based on the assumption that the loads carried by broken and intact fibers satisfy the global load sharing criterion. The effects of matrix properties (matrix cracking characteristic strength and matrix Weibull modulus), interface properties (interface shear stress and interface debonded energy), fiber properties (fiber Weibull modulus and fiber characteristic strength), and cycle number on fiber failure, hysteresis loops and interface slip have been investigated. The hysteresis loops under fatigue loading predicted by the present analytical method were in good agreement with experimental data.

  20. Lattice-gas models of phase separation: interfaces, phase transitions, and multiphase flow

    SciTech Connect

    Rothman, D.H.; Zaleski, S.

    1994-10-01

    Momentum-conserving lattice gases are simple, discrete, microscopic models of fluids. This review describes their hydrodynamics, with particular attention given to the derivation of macroscopic constitutive equations from microscopic dynamics. Lattice-gas models of phase separation receive special emphasis. The current understanding of phase transitions in these momentum-conserving models is reviewed; included in this discussion is a summary of the dynamical properties of interfaces. Because the phase-separation models are microscopically time irreversible, interesting questions are raised about their relationship to real fluid mixtures. Simulation of certain complex-fluid problems, such as multiphase flow through porous media and the interaction of phase transitions with hydrodynamics, is illustrated.

  1. MARTINI Coarse-Grained Model of Triton TX-100 in Pure DPPC Monolayer and Bilayer Interfaces.

    PubMed

    Pizzirusso, Antonio; De Nicola, Antonio; Milano, Giuseppe

    2016-04-28

    The coarse-grained MARTINI model of Triton TX-100 has been validated by direct comparison of the experimental and calculated area increase in pure DPPC lipid bilayers and monolayers at water/air interfaces in the presence of surfactant and by comparison of electron density profiles calculated with more detailed atomistic models based on the CHARMM force field. Bilayer simulations have been performed and compared with monolayers and with atomistic models. The validated CG model has been employed to study the phase separation of TX-100 molecules in lipid bilayers and the effect of the lipid bilayer curvature. PMID:27042862

  2. Liquid-vapor interface of water-methanol mixture. II. A simple lattice-gas model

    NASA Astrophysics Data System (ADS)

    Matsumoto, Mitsuhiro; Mizukuchi, Hiroshi; Kataoka, Yosuke

    1993-01-01

    A simple lattice-gas model with a mean field approximation is presented to investigate qualitative features of liquid-vapor interface of water-methanol mixtures. The hydrophobicity of methanol molecules is incorporated by introducing anisotropic interactions. A rigorous framework to treat such anisotropy in a lattice-gas mixture model is described. The model is mathematically equivalent to an interfacial system of a diluted antiferro Ising spin system. Results of density profiles, orientational ordering near the surface, and surface excess thermodynamic quantities are compared with results of computer simulation based on a more realistic model.

  3. Thermo-Mechanical Modeling of Foil-Supported Carbon Nanotube Array Interface Materials

    NASA Astrophysics Data System (ADS)

    Pour Shahid Saeed Abadi, Parisa; Cola, Baratunde; Graham, Samuel

    2010-03-01

    A thin metal foil with vertically aligned carbon nanotube (CNT) arrays synthesized on both sides is a new class of thermal interface material that has demonstrated thermal resistances of less than 0.1 cm^2 K/W under moderate pressures. Such interface materials achieve these low resistances through their unique combination of high thermal conductivity and high conformability to surface roughness. For such structures, the contact resistances between the CNT arrays and the adjacent surfaces are the major constituents of the total resistance. Here we integrate a recently developed contact mechanics model for CNT arrays with a finite element code that captures the nonlinear mechanical behavior of the interface material and the effects of interface topography on the thermal performance. The developed model elucidates the relative effects of metal foil and CNT array deformation on the compliance of the composite structure. The results support previous experimental observations that the combination of foil and CNT array deformation significantly enhances interfacial contact and thermal conductance.

  4. Integration Of Heat Transfer Coefficient In Glass Forming Modeling With Special Interface Element

    SciTech Connect

    Moreau, P.; Gregoire, S.; Lochegnies, D.; Cesar de Sa, J.

    2007-05-17

    Numerical modeling of glass forming processes requires accurate knowledge of the heat exchange between the glass and the forming tools. A laboratory test was developed to determine the evolution of the heat transfer coefficient under different glass/mould contact conditions (contact pressure, temperature, lubrication...). In this paper, trials are performed to determine heat transfer coefficient evolutions under experimental conditions close to those of the industrial blow-and-blow process. In parallel with this work, a special interface element was implemented in a commercial finite element code to handle heat transfer between glass and mould for non-matching meshes and evolving contact. This special interface element, implemented through user subroutines, makes it possible to introduce the measured heat transfer coefficient evolutions into the numerical models at the glass/mould interface as a function of the local temperatures, contact pressures, contact time and type of lubrication. The blow-and-blow forming simulation of a perfume bottle is finally performed to assess the performance of the special interface element.

  5. Integration Of Heat Transfer Coefficient In Glass Forming Modeling With Special Interface Element

    NASA Astrophysics Data System (ADS)

    Moreau, P.; César de Sá, J.; Grégoire, S.; Lochegnies, D.

    2007-05-01

    Numerical modeling of glass forming processes requires accurate knowledge of the heat exchange between the glass and the forming tools. A laboratory test was developed to determine the evolution of the heat transfer coefficient under different glass/mould contact conditions (contact pressure, temperature, lubrication…). In this paper, trials are performed to determine heat transfer coefficient evolutions under experimental conditions close to those of the industrial blow-and-blow process. In parallel with this work, a special interface element was implemented in a commercial finite element code to handle heat transfer between glass and mould for non-matching meshes and evolving contact. This special interface element, implemented through user subroutines, makes it possible to introduce the measured heat transfer coefficient evolutions into the numerical models at the glass/mould interface as a function of the local temperatures, contact pressures, contact time and type of lubrication. The blow-and-blow forming simulation of a perfume bottle is finally performed to assess the performance of the special interface element.

  6. Distribution automation applications of fiber optics

    NASA Technical Reports Server (NTRS)

    Kirkham, Harold; Johnston, A.; Friend, H.

    1989-01-01

    Motivations for interest and research in distribution automation are discussed. The communication requirements of distribution automation are examined and shown to exceed the capabilities of power line carrier, radio, and telephone systems. A fiber optic based communication system is described that is co-located with the distribution system and that could satisfy the data rate and reliability requirements. A cost comparison shows that it could be constructed at a cost that is similar to that of a power line carrier system. The requirements for fiber optic sensors for distribution automation are discussed. The design of a data link suitable for optically-powered electronic sensing is presented. Empirical results are given. A modeling technique that was used to understand the reflections of guided light from a variety of surfaces is described. An optical position-indicator design is discussed. Systems aspects of distribution automation are discussed, in particular, the lack of interface, communications, and data standards. The economics of distribution automation are examined.

  7. Interfacing MATLAB and Python Optimizers to Black-Box Environmental Simulation Models

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Leung, K.; Tolson, B.

    2009-12-01

    A common approach for utilizing environmental models in a management or policy-analysis context is to incorporate them into a simulation-optimization framework - where an underlying process-based environmental model is linked with an optimization search algorithm. The optimization search algorithm iteratively adjusts various model inputs (i.e. parameters or design variables) in order to minimize an application-specific objective function computed on the basis of model outputs (i.e. response variables). Numerous optimization algorithms have been applied to the simulation-optimization of environmental systems and this research investigated the use of optimization libraries and toolboxes that are readily available in MATLAB and Python - two popular high-level programming languages. Inspired by model-independent calibration codes (e.g. PEST and UCODE), a small piece of interface software (known as PIGEON) was developed. PIGEON allows users to interface Python and MATLAB optimizers with arbitrary black-box environmental models without writing any additional interface code. An initial set of benchmark tests (involving more than 20 MATLAB and Python optimization algorithms) were performed to validate the interface software - results highlight the need to carefully consider such issues as numerical precision in output files and enforcement (or not) of parameter limits. Additional benchmark testing considered the problem of fitting isotherm expressions to laboratory data - with an emphasis on dual-mode expressions combining non-linear isotherms with a linear partitioning component. With respect to the selected isotherm fitting problems, derivative-free search algorithms significantly outperformed gradient-based algorithms. Attempts to improve gradient-based performance, via parameter tuning and also via several alternative multi-start approaches, were largely unsuccessful.
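PIGEON's internals are not given in the abstract, but the general pattern of a model-independent, file-based interface (in the spirit of PEST and UCODE) is straightforward: write parameter values to an input file, invoke the black-box model executable, and parse the scalar objective from its output file. The sketch below is a self-contained stand-in, with an invented toy "model" script so it can actually run; all file names are hypothetical.

```python
import pathlib
import subprocess
import sys
import tempfile

# Stand-in for an external simulator: reads one parameter per line
# from the input file, writes the sum-of-squares response to the
# output file. A real model would be an arbitrary executable.
MODEL_SRC = """
import sys
params = [float(x) for x in open(sys.argv[1])]
open(sys.argv[2], 'w').write(str(sum(p * p for p in params)))
"""

def run_black_box(params, workdir):
    """File-based handshake with an external model: write inputs,
    invoke the executable, parse the scalar objective back."""
    workdir = pathlib.Path(workdir)
    model = workdir / "model.py"
    model.write_text(MODEL_SRC)
    inp, out = workdir / "in.txt", workdir / "out.txt"
    inp.write_text("\n".join(str(p) for p in params))
    subprocess.run([sys.executable, str(model), str(inp), str(out)],
                   check=True)
    return float(out.read_text())

with tempfile.TemporaryDirectory() as d:
    print(run_black_box([3.0, 4.0], d))  # → 25.0
```

An optimizer (MATLAB, Python, or otherwise) only ever sees `run_black_box` as its objective function, which is exactly what makes the coupling model-independent; the benchmark issues noted above (numerical precision in output files, parameter-limit enforcement) all live inside this wrapper.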

  8. Interaction of IAPP and Insulin with Model Interfaces Studied Using Neutron Reflectometry

    PubMed Central

    Jeworrek, Christoph; Hollmann, Oliver; Steitz, Roland; Winter, Roland; Czeslik, Claus

    2009-01-01

    The islet amyloid polypeptide (IAPP) and insulin are coproduced by the β-cells of the pancreatic islets of Langerhans. Both peptides can interact with negatively charged lipid membranes. The positively charged islet amyloid polypeptide partially inserts into these membranes and subsequently forms amyloid fibrils. The amyloid fibril formation of insulin is also accelerated by the presence of negatively charged lipids, although insulin has a negative net charge at neutral pH-values. We used water-polymer model interfaces to differentiate between the hydrophobic and electrostatic interactions that can drive these peptides to adsorb at an interface. By applying neutron reflectometry, the scattering-length density profiles of IAPP and insulin, as adsorbed at three different water-polymer interfaces, were determined. The islet amyloid polypeptide most strongly adsorbed at a hydrophobic poly-(styrene) surface, whereas at a hydrophilic, negatively charged poly-(styrene sulfonate) interface, the degree of adsorption was reduced by 50%. Almost no IAPP adsorption was evident at this negatively charged interface when we added 100 mM NaCl. On the other hand, negatively charged insulin was most strongly attracted to a hydrophilic, negatively charged interface. Our results suggest that IAPP is strongly attracted to a hydrophobic surface, whereas the few positive charges of IAPP cannot warrant a permanent immobilization of IAPP at a hydrophilic, negatively charged surface at an ionic strength of 100 mM. Furthermore, the interfacial accumulation of insulin at a hydrophilic, negatively charged surface may represent a favorable precondition for nucleus formation and fibril formation. PMID:19186147

  9. Phononic band structures and stability analysis using radial basis function method with consideration of different interface models

    NASA Astrophysics Data System (ADS)

    Yan, Zhi-zhong; Wei, Chun-qiu; Zheng, Hui; Zhang, Chuanzeng

    2016-05-01

    In this paper, a meshless radial basis function (RBF) collocation method is developed to calculate the phononic band structures taking account of different interface models. The present method is validated by using the analytical results in the case of perfect interfaces. The stability is fully discussed based on the types of RBFs, the shape parameters and the node numbers. And the advantages of the proposed RBF method compared to the finite element method (FEM) are also illustrated. In addition, the influences of the spring-interface model and the three-phase model on the wave band gaps are investigated by comparing with the perfect interfaces. For different interface models, the effects of various interface conditions, length ratios and density ratios on the band gap width are analyzed. The comparison results of the two models show that the weakly bonded interface has a significant effect on the properties of phononic crystals. Besides, the band structures of the spring-interface model have certain similarities and differences with those of the three-phase model.

  10. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of biological systems given behavioral specifications written in a formal mathematical logic. Our algorithm uses Bayesian model checking, sequential hypothesis testing, and stochastic optimization to automatically synthesize parameters of probabilistic biological models. PMID:26679759
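The sequential hypothesis testing mentioned in the conclusions is, in its classical form, Wald's sequential probability ratio test: keep drawing simulation outcomes until the accumulated log-likelihood ratio crosses an acceptance threshold. The sketch below shows only that generic test for Bernoulli outcomes, not the paper's full algorithm, which couples it with Bayesian model checking and stochastic optimization.

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for Bernoulli outcomes:
    H0: success probability = p0 vs. H1: success probability = p1 (p1 > p0).
    alpha/beta bound the type-I/type-II error rates."""
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Accumulate the log-likelihood ratio of this outcome.
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# With clearly separated hypotheses, a verdict arrives after few samples.
print(sprt([1, 1, 1, 1, 1], p0=0.2, p1=0.8))  # → ('H1', 3)
```

In a model-checking setting, each "sample" would be one stochastic simulation of the parameterized model scored against the behavioral specification, and the early-stopping property is what keeps parameter synthesis tractable.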

  11. Design of a photoionization detector for high-performance liquid chromatography using an automated liquid-to-vapor phase interface and application to phenobarbital in an animal feed and to amantadine.

    PubMed

    Schmermund, J T; Locke, D C

    1990-05-01

    An automated liquid-to-vapor phase interface system forms the basis for a new high-performance liquid chromatography (HPLC)-photoionization detection (PID) system. The system incorporates a six-valve interface enabling peak trapping, solvent switching and thermal desorption of the solute of interest into a vapor-phase PID. For reversed-phase HPLC, the eluted solute peak is isolated on a Tenax trap after dilution of the effluent with water; the water is then evaporated, following which the trapped solute is flash-evaporated into the PID system. For normal-phase HPLC, the column effluent is diluted with hexane, the solute peak is concentrated on a short column packed with a propyl-amino/cyano bonded phase and the solvent is evaporated. The solute is then eluted with water onto the Tenax trap, and the above procedure for reversed-phase HPLC is followed. All operations are controlled with a microcomputer. The advantages of the new detector system include completely automated operation, fast sample preparation, high sensitivity, and inherent selectivity. The system was applied to phenobarbital, which was extracted with acetonitrile from spiked laboratory animal feed, and to amantadine. The phenobarbital assay used a normal-phase separation with hexane-methyl tert.-butyl ether-methanol eluent. The manual sample preparation time was 5 min and the limit of detection was 2 ng of phenobarbital injected; a conventional HPLC assay with UV detection required a longer sample preparation time and had a detection limit of 700 ng. Amantadine was assayed using a reversed-phase HPLC system with a water-methanol-triethylamine-orthophosphoric acid mobile phase. The detection limit was 25 ng injected. PMID:2355063

  12. A new method for automated discontinuity trace mapping on rock mass 3D surface model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Chen, Jianqin; Zhu, Hehua

    2016-04-01

    This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.

  13. A model-free, fully automated baseline-removal method for Raman spectra.

    PubMed

    Schulze, H Georg; Foist, Rod B; Okuda, Kadek; Ivanov, André; Turner, Robin F B

    2011-01-01

    We present here a fully automated spectral baseline-removal procedure. The method uses a large-window moving average to estimate the baseline, making it a model-free approach, together with a peak-stripping method to remove spectral peaks. After processing, the baseline-corrected spectrum should be flat, an endpoint that can be verified with the χ²-statistic. The approach allows multiple passes or iterations, based on a given χ²-statistic for convergence. If the baseline is acceptably flat by this criterion after the first pass at correction, the problem is solved. If not, the residual non-flat baseline indicates where the first pass subtracted too much or too little, and the second pass compensates for those errors. One can therefore use a very large window so as to avoid affecting spectral peaks, even if the window is so large that the baseline is inaccurately removed, because baseline-correction errors can be assessed and compensated for on subsequent passes. We start with the largest possible window and gradually reduce it until acceptable baseline correction, judged by the χ²-statistic, is achieved. Results obtained on both simulated and measured Raman data are presented and discussed. PMID:21211157
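The core loop of the procedure, a large-window moving-average baseline estimate combined with peak stripping over repeated passes, might be sketched as follows. The window size, pass count, and synthetic spectrum are illustrative choices; a faithful implementation would add the paper's χ²-based flatness test and the gradual window reduction.

```python
import statistics

def moving_average(y, w):
    """Centered moving average with half-width w (edge-clamped windows)."""
    n = len(y)
    return [statistics.fmean(y[max(0, i - w):min(n, i + w + 1)])
            for i in range(n)]

def strip_peaks(y, base):
    """Peak stripping: clip the signal down to the baseline estimate."""
    return [min(yi, bi) for yi, bi in zip(y, base)]

def remove_baseline(y, w=40, passes=3):
    """Moving-average baseline estimate with iterative peak stripping.
    A sketch of the general idea only, not the published algorithm."""
    baseline = y[:]
    for _ in range(passes):
        baseline = strip_peaks(baseline, moving_average(baseline, w))
    return [yi - bi for yi, bi in zip(y, baseline)], baseline

# Synthetic spectrum: sloping baseline plus one narrow peak.
y = [0.01 * i + (5.0 if 45 <= i <= 55 else 0.0) for i in range(100)]
corrected, base = remove_baseline(y)
print(max(corrected[45:56]))   # the peak survives baseline removal
```

Because the stripping step only ever lowers the baseline estimate toward the troughs between peaks, a window much wider than any peak leaves the peaks essentially intact while the sloping background is removed.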

  14. A hybrid geometric-statistical deformable model for automated 3-D segmentation in brain MRI.

    PubMed

    Huang, Albert; Abugharbieh, Rafeef; Tam, Roger

    2009-07-01

    We present a novel 3-D deformable model-based approach for accurate, robust, and automated tissue segmentation of brain MRI data of single as well as multiple magnetic resonance sequences. The main contribution of this study is that we employ an edge-based geodesic active contour for the segmentation task by integrating both image edge geometry and voxel statistical homogeneity into a novel hybrid geometric-statistical feature to regularize contour convergence and extract complex anatomical structures. We validate the accuracy of the segmentation results on simulated brain MRI scans of both single T1-weighted and multiple T1/T2/PD-weighted sequences. We also demonstrate the robustness of the proposed method when applied to clinical brain MRI scans. When compared to a current state-of-the-art region-based level-set segmentation formulation, our white matter and gray matter segmentation resulted in significantly higher accuracy levels with a mean improvement in Dice similarity indexes of 8.55% (p < 0.0001) and 10.18% (p < 0.0001), respectively. PMID:19336280

  15. Multi-fractal analysis for vehicle distribution based on cellular automaton model

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Li, Shi-Gao

    2015-09-01

    It is well known that traffic flow presents multi-fractal characteristics at time scales. The aim of this study is to test its multi-fractality at spatial scales. A vehicular cellular automaton (CA) model is chosen as a tool to generate vehicle positions on a single-lane road. First, the multi-fractality of the vehicle distribution is verified, and multi-fractal spectra are plotted. Second, the analysis shows that the width of a multi-fractal spectrum expresses the ratio of the maximum to minimum densities, and the height difference between the left and right vertexes represents the relative size between the numbers of sections with the maximum and minimum densities. Finally, the effects of the random deceleration probability and the average density on the homogeneity of the vehicle distribution are analyzed. The results show that random deceleration increases the ratio of the maximum to minimum densities and decreases the relative size between the numbers of sections with the maximum and minimum densities when the global density is limited to a specific range. Therefore, the multi-fractal spectrum can be used to quantify the homogeneity of the spatial distribution of traffic flow.
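The abstract does not state the CA's update rules; a Nagel-Schreckenberg-style single-lane model with a random-deceleration probability is the standard choice for this kind of study and serves here as a hedged stand-in for generating vehicle positions.

```python
import random

def nasch_step(road, vmax=5, p_slow=0.3, rng=random):
    """One parallel update of a Nagel-Schreckenberg-style CA on a
    circular single-lane road; road[i] is a vehicle speed or None."""
    L = len(road)
    new = [None] * L
    for i, v in enumerate(road):
        if v is None:
            continue
        # Gap (empty cells) to the next vehicle ahead; L - 1 if alone.
        gap = next((d for d in range(1, L)
                    if road[(i + d) % L] is not None), L) - 1
        v = min(v + 1, vmax, gap)               # accelerate, then brake
        if v > 0 and rng.random() < p_slow:     # random deceleration
            v -= 1
        new[(i + v) % L] = v
    return new

rng = random.Random(42)
road = [0 if rng.random() < 0.2 else None for _ in range(100)]
n0 = sum(v is not None for v in road)
for _ in range(50):
    road = nasch_step(road, rng=rng)
print(sum(v is not None for v in road) == n0)   # vehicle count conserved
```

The resulting occupancy pattern on the ring can then be box-counted at different section sizes to build the multi-fractal spectrum the paper analyzes.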

  16. Wall modeling for implicit large-eddy simulation and immersed-interface methods

    NASA Astrophysics Data System (ADS)

    Chen, Zhen Li; Hickel, Stefan; Devesa, Antoine; Berland, Julien; Adams, Nikolaus A.

    2014-02-01

    We propose and analyze a wall model based on the turbulent boundary layer equations (TBLE) for implicit large-eddy simulation (LES) of high-Reynolds-number wall-bounded flows, in conjunction with a conservative immersed-interface method for mapping complex boundaries onto Cartesian meshes. Both the implicit subgrid-scale model and the immersed-interface treatment of boundaries offer high computational efficiency for complex flow configurations. The wall model operates directly on the Cartesian computational mesh without the need for a dual boundary-conforming mesh. The combination of wall model and implicit LES is investigated in detail for turbulent channel flow at friction Reynolds numbers from Reτ = 395 up to Reτ = 100,000 on very coarse meshes. The TBLE wall model with implicit LES gives results of better quality than current explicit LES based on eddy-viscosity subgrid-scale models with similar wall models. A straightforward formulation of the wall model performs well at moderately large Reynolds numbers. A logarithmic-layer mismatch, observed only at very large Reynolds numbers, is removed by introducing a new structure-based damping function. The performance of the overall approach is assessed for two generic configurations with flow separation: the backward-facing step at Reh = 5,000 and the periodic hill at ReH = 10,595 and ReH = 37,000 on very coarse meshes. The results confirm the observations made for the channel flow with respect to the good prediction quality and indicate that the combination of implicit LES, immersed-interface method, and TBLE-based wall modeling is a viable approach for simulating complex aerodynamic flows at high Reynolds numbers. They also reflect the limitations of TBLE-based wall models.

  17. Similarities of coherent tunneling spectroscopy of ferromagnet/ferromagnet junction within two interface models: Delta potential and finite width model

    NASA Astrophysics Data System (ADS)

    Pasanai, K.

    2016-03-01

    The tunneling conductance spectra of a ferromagnet/ferromagnet junction were theoretically studied with a scattering approach using two models of the interface in a one-dimensional system: a delta potential and a finite-width model. In the first model, the interface between the materials was characterized by a delta potential of infinite height and zero width. In the other model, the interface was modeled as an insulator with a finite thickness and potential barrier height. It was found that the potential strength in the delta potential model suppressed the conductance spectra, as expected. In the finite-width model, the insulating layer can give rise to oscillatory behavior when the layer is thick. This oscillation occurs in the energy region above the potential barrier. Moreover, the conductance spectra were suppressed as the insulating thickness was varied, to a degree that depended on the height of the potential barrier. When the results from the two models were compared, they agreed when the insulating layer was thin and the potential barrier was slightly larger than the energy of the bottom of the minority band of the ferromagnet.

  18. Micromechanical modeling of the cement-bone interface: the effect of friction, morphology and material properties on the micromechanical response

    PubMed Central

    Janssen, Dennis; Mann, Kenneth A.; Verdonschot, Nico

    2008-01-01

    In order to gain insight into the micromechanical behavior of the cement-bone interface, the effect of parametric variations of frictional, morphological and material properties on the mechanical response of the cement-bone interface was analyzed using a finite element approach. Finite element models of a cement-bone interface specimen were created from micro-computed tomography data of a physical specimen that was sectioned from an in vitro cemented total hip arthroplasty. In five models the friction coefficient was varied (μ = 0.0, 0.3, 0.7, 1.0 and 3.0), while in one model an ideally bonded interface was assumed. In two models cement interface gaps and an optimal cement penetration were simulated. Finally, the effect of bone cement stiffness variations was simulated (2.0 and 2.5 GPa, relative to the default 3.0 GPa). All models were loaded for a cycle of fully reversible tension-compression. From the simulated stress-displacement curves the interface deformation, stiffness and hysteresis were calculated. The results indicate that in the current model the mechanical properties of the cement-bone interface were caused by frictional phenomena at the shape-closed interlock rather than by adhesive properties of the cement. Our findings furthermore show that in our model maximizing cement penetration improved the micromechanical response of the cement-bone interface stiffness, while interface gaps had a detrimental effect. Relative to the frictional and morphological variations, variations in the cement stiffness had only a modest effect on the micromechanical behavior of the cement-bone interface. The current study provides information that may help to better understand the load transfer mechanisms taking place at the cement-bone interface. PMID:18848699

  19. Towards Automated Seismic Moment Tensor Inversion in Australia Using 3D Structural Model

    NASA Astrophysics Data System (ADS)

    Hingee, M.; Tkalcic, H.; Fichtner, A.; Sambridge, M.; Kennett, B. L.; Gorbatov, A.

    2009-12-01

    There is significant seismic activity in the region around Australia, largely due to the plate boundaries to the north and to the east of the mainland. This seismicity poses serious seismic and tsunamigenic hazard in a wider region, and risk to coastal areas of Australia, and is monitored by Geoscience Australia (GA) using a network of permanent broadband seismometers within Australia. Earthquake and tsunami warning systems were established by the Australian Government and have been using the waveforms from the GA seismological network. The permanent instruments are augmented by non-GA seismic stations based both within and outside of Australia. In particular, seismic moment tensor (MT) solutions for events around Australia as well as local distances are useful for both warning systems and geophysical studies in general. These monitoring systems, however, currently use only one dimensional, spherically-symmetric models of the Earth for source parameter determination. Recently, a novel 3D model of Australia and the surrounding area has been developed from spectral element simulations [1], taking into account not only velocity heterogeneities, but also radial anisotropy and seismic attenuation. This development, inter alia, introduces the potential of providing significant improvements in MT solution accuracy. Allowing reliable MT solutions with reduced dependence on non-GA stations is a secondary advantage. We studied the feasibility of using 1D versus 3D structural models. The accuracy of the 3D model has been investigated, confirming that these models are in most cases superior to the 1D models. A full MT inversion method using a point source approximation was developed as the first step, keeping in mind that for more complex source time functions, a finite source inversion will be needed. 
Synthetic experiments have been performed with random noise added to the signal to test the code in both the 1D and 3D settings, using a precomputed library of structural Green's functions. Implementation of this 3D model will improve warning systems, and we present results that are an important step towards automated MT inversion in Australia. [1] Fichtner, A., Kennett, B.L.N., Igel, H., Bunge, H.-P., 2009. Full seismic waveform tomography for upper-mantle structure in the Australasian region using adjoint methods. Geophys. J. Int., in press.

  20. Multiscale Modeling of Intergranular Fracture in Aluminum: Constitutive Relation For Interface Debonding

    NASA Technical Reports Server (NTRS)

    Yamakov, V.; Saether, E.; Glaessgen, E. H.

    2008-01-01

    Intergranular fracture is a dominant mode of failure in ultrafine grained materials. In the present study, the atomistic mechanisms of grain-boundary debonding during intergranular fracture in aluminum are modeled using a coupled molecular dynamics finite element simulation. Using a statistical mechanics approach, a cohesive-zone law in the form of a traction-displacement constitutive relationship, characterizing the load transfer across the plane of a growing edge crack, is extracted from atomistic simulations and then recast in a form suitable for inclusion within a continuum finite element model. The cohesive-zone law derived by the presented technique is free of finite size effects and is statistically representative for describing the interfacial debonding of a grain boundary (GB) interface examined at atomic length scales. By incorporating the cohesive-zone law in cohesive-zone finite elements, the debonding of a GB interface can be simulated in a coupled continuum-atomistic model, in which a crack starts in the continuum environment, smoothly penetrates the continuum-atomistic interface, and continues its propagation in the atomistic environment. This study is a step towards relating atomistically derived decohesion laws to macroscopic predictions of fracture and constructing multiscale models for nanocrystalline and ultrafine grained materials.
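In a continuum finite element code, a cohesive-zone law of this kind is simply a traction-displacement function evaluated at the interface. A generic bilinear form is shown below as a purely illustrative shape and parameter set; the paper's actual law is extracted from the atomistic statistics, not assumed.

```python
def bilinear_traction(delta, delta0=0.01, delta_f=0.1, t_max=2.0):
    """Bilinear traction-displacement cohesive law (illustrative shape
    and parameters, arbitrary units): linear elastic loading up to the
    peak traction, then linear softening until full debonding."""
    if delta <= 0.0:
        return 0.0
    if delta < delta0:                 # linear elastic loading
        return t_max * delta / delta0
    if delta < delta_f:                # linear softening (damage growth)
        return t_max * (delta_f - delta) / (delta_f - delta0)
    return 0.0                         # fully debonded

print(bilinear_traction(0.01))   # peak traction: 2.0
```

The area under the curve is the work of separation, which is the quantity a cohesive finite element needs to reproduce the atomistically computed decohesion energy.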

  1. An SPH model for multiphase flows with complex interfaces and large density differences

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Zong, Z.; Liu, M. B.; Zou, L.; Li, H. T.; Shu, C.

    2015-02-01

    In this paper, an improved SPH model for multiphase flows with complex interfaces and large density differences is developed. The multiphase SPH model is based on the assumption of pressure continuity over the interfaces and avoids directly using the information of neighboring particles' densities or masses in solving the governing equations. In order to improve computational accuracy and to obtain smooth pressure fields, a corrected density re-initialization is applied. A coupled dynamic solid boundary treatment (SBT) is implemented both to reduce numerical oscillations and to prevent unphysical particle penetration in the boundary area. The density correction and coupled dynamic SBT algorithms are modified to adapt to the density discontinuity at fluid interfaces in multiphase simulation. A cut-off value of the particle density is set to avoid negative pressure, which can lead to severe numerical difficulties and may even terminate the simulations. Three representative numerical examples, including a Rayleigh-Taylor instability test, a non-Boussinesq problem and a dam-breaking simulation, are presented and compared with analytical results or experimental data. It is demonstrated that the present SPH model is capable of modeling complex multiphase flows with large interfacial deformations and density ratios.
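The density cut-off that prevents negative pressure can be illustrated with the weakly-compressible (Tait) equation of state commonly paired with SPH. The parameter values and the choice to clamp at the reference density are illustrative assumptions, not the paper's exact scheme.

```python
def tait_pressure(rho, rho0=1000.0, c0=20.0, gamma=7):
    """Tait equation of state as commonly used in weakly-compressible
    SPH (illustrative parameters). Flooring the particle density at the
    reference value rho0 mimics a cut-off that keeps pressure >= 0."""
    rho = max(rho, rho0)               # density cut-off
    B = rho0 * c0**2 / gamma
    return B * ((rho / rho0)**gamma - 1.0)

print(tait_pressure(990.0))   # 0.0: clamped rather than negative
```

Without the clamp, a slightly rarefied particle (rho < rho0) would acquire a negative pressure, which tends to trigger the tensile clumping instability the abstract alludes to.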

  2. Mixed-level optical-system simulation incorporating component-level modeling of interface elements

    NASA Astrophysics Data System (ADS)

    Mena, Pablo V.; Stone, Bryan; Heller, Evan; Herrmann, Dan; Ghillino, Enrico; Scarmozzino, Rob

    2014-03-01

    While system-level simulation can allow designers to assess optical system performance via measures such as signal waveforms, spectra, eye diagrams, and BER calculations, component-level modeling can provide a more accurate description of coupling into and out of individual devices, as well as their detailed signal propagation characteristics. In particular, the system-level simulation of interface components used in optical systems, including splitters, combiners, grating couplers, waveguides, spot-size converters, and lens assemblies, can benefit from more detailed component-level modeling. Depending upon the nature of the device and the scale of the problem, simulation of optical transmission through these components can be carried out using either electromagnetic device-level simulation, such as the beam-propagation method, or ray-based approaches. In either case, system-level simulation can interface to such component-level modeling via a suitable exchange of optical signal data. This paper presents the use of a mixed-level simulation flow in which both electromagnetic device-level and ray-based tools are integrated with a system-level simulation environment in order to model the use of various interface components in optical systems for a range of purposes, including, for example, coupling to and from optical transmission media such as single- and multimode optical fiber. This approach enables case studies on the impact of physical and geometric component variations on system performance, and the sensitivity of system behavior to misalignment between components.

  3. The DaveMLTranslator: An Interface for DAVE-ML Aerodynamic Models

    NASA Technical Reports Server (NTRS)

    Hill, Melissa A.; Jackson, E. Bruce

    2007-01-01

    It can take weeks or months to incorporate a new aerodynamic model into a vehicle simulation and validate the performance of the model. The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) has been proposed as a means to reduce the time required to accomplish this task by defining a standard format for typical components of a flight dynamic model. The purpose of this paper is to describe an object-oriented C++ implementation of a class that interfaces a vehicle subsystem model specified in DAVE-ML and a vehicle simulation. Using the DaveMLTranslator class, aerodynamic or other subsystem models can be automatically imported and verified at run-time, significantly reducing the elapsed time between receipt of a DAVE-ML model and its integration into a simulation environment. The translator performs variable initializations, data table lookups, and mathematical calculations for the aerodynamic build-up, and executes any embedded static check-cases for verification. The implementation is efficient, enabling real-time execution. Simple interface code for the model inputs and outputs is the only requirement to integrate the DaveMLTranslator as a vehicle aerodynamic model. The translator makes use of existing table-lookup utilities from the Langley Standard Real-Time Simulation in C++ (LaSRS++). The design and operation of the translator class is described and comparisons with existing, conventional, C++ aerodynamic models of the same vehicle are given.

  4. Designing geo-spatial interfaces to scale process models: the GeoWEPP approach

    NASA Astrophysics Data System (ADS)

    Renschler, Chris S.

    2003-04-01

    Practical decision making in spatially distributed environmental assessment and management is increasingly based on environmental process models linked to geographical information systems. Powerful personal computers and Internet-accessible assessment tools are providing much greater public access to, and use of, environmental models and geo-spatial data. However, traditional process models, such as the Water Erosion Prediction Project (WEPP), were not typically developed with a flexible graphical user interface (GUI) for applications across a wide range of spatial and temporal scales, utilizing readily available geo-spatial data of highly variable precision and accuracy, and communicating with a diverse spectrum of users with different levels of expertise. As the development of the geo-spatial interface for WEPP (GeoWEPP) demonstrates, the GUI plays a key role in facilitating effective communication between the tool developer and user about data and model scales. The GeoWEPP approach illustrates that it is critical to develop a scientific and functional framework for the design, implementation, and use of such geo-spatial model assessment tools. The way that GeoWEPP was developed and implemented suggests a framework and scaling theory leading to a practical approach for developing geo-spatial interfaces for process models. GeoWEPP accounts for fundamental water erosion processes, the model, and users' needs, but most importantly it also matches realistic data availability and environmental settings by enabling even non-GIS-literate users to assemble the available geo-spatial data quickly to start soil and water conservation planning. In general, it is potential users' spatial and temporal scales of interest, and the scales of readily available data, that should drive model design or selection, as opposed to using or designing the most sophisticated process model as the starting point and then determining data needs and result scales.

  5. Continental hydrosystem modelling: the concept of nested stream-aquifer interfaces

    NASA Astrophysics Data System (ADS)

    Flipo, N.; Mouhri, A.; Labarthe, B.; Biancamaria, S.; Rivière, A.; Weill, P.

    2014-08-01

    Coupled hydrological-hydrogeological models, emphasising the importance of the stream-aquifer interface, are increasingly used in hydrological sciences for pluri-disciplinary studies aiming at investigating environmental issues. Based on an extensive literature review, stream-aquifer interfaces are described at five different scales: local [10 cm-~10 m], intermediate [~10 m-~1 km], watershed [10 km2-~1000 km2], regional [10 000 km2-~1 M km2] and continental scales [>10 M km2]. This led us to develop the concept of nested stream-aquifer interfaces, which extends the well-known vision of nested groundwater pathways towards the surface, where the mixing of low-frequency and high-frequency processes, coupled with the complexity of geomorphological features and heterogeneities, creates hydrological spiralling. This conceptual framework allows the identification of a hierarchical order of the multi-scale control factors of stream-aquifer hydrological exchanges, from the larger scale to the finer scale. The hyporheic corridor, which couples the river to its 3-D hyporheic zone, is then identified as the key component for scaling hydrological processes occurring at the interface. The identification of the hyporheic corridor as the support of the hydrological processes scaling is an important step for the development of regional studies, which is one of the main concerns for water practitioners and resources managers. In a second part, the modelling of the stream-aquifer interface at various scales is investigated with the help of the conductance model. Although the use of temperature as a tracer of flow is a robust method for the assessment of stream-aquifer exchanges at the local scale, there is a crucial need to develop innovative methodologies for assessing stream-aquifer exchanges at the regional scale. 
After formulating the conductance model at the regional and intermediate scales, we address this challenging issue with the development of an iterative modelling methodology, which ensures the consistency of stream-aquifer exchanges between the intermediate and regional scales. Finally, practical recommendations are provided for the study of the interface using the innovative MIM methodology (Measurements-Interpolation-Modelling), which graphically organises, across spatial scales, the three pools of methods needed to fully understand stream-aquifer interfaces. In the MIM space, stream-aquifer interfaces that can be studied by a given approach are localised. The efficiency of the method is demonstrated with two examples. The first proposes an upscaling framework, structured around river reaches of ~10-100 m, from the local to the watershed scale. The second example highlights the usefulness of spaceborne data to improve the assessment of stream-aquifer exchanges at the regional and continental scales. We conclude that further developments in modelling and field measurements have to be undertaken at the regional scale to enable a proper modelling of stream-aquifer exchanges from the local to the continental scale.

  6. A user interface for the Kansas Geological Survey slug test model.

    PubMed

    Esling, Steven P; Keller, John E

    2009-01-01

    The Kansas Geological Survey (KGS) developed a semianalytical solution for slug tests that incorporates the effects of partial penetration, anisotropy, and the presence of variable conductivity well skins. The solution can simulate either confined or unconfined conditions. The original model, written in FORTRAN, has a text-based interface with rigid input requirements and limited output options. We re-created the main routine for the KGS model as a Visual Basic macro that runs in most versions of Microsoft Excel and built a simple-to-use Excel spreadsheet interface that automatically displays the graphical results of the test. A comparison of the output from the original FORTRAN code to that of the new Excel spreadsheet version for three cases produced identical results. PMID:19583592

  7. Microstructural modeling of cold dwell fatigue/creep in near-alpha titanium alloys using the cellular automaton model

    NASA Astrophysics Data System (ADS)

    Boutana, Mohammed Nabil

    The in-service properties of titanium alloys are extremely dependent on certain aspects of the microstructures developed during their processing. These microstructures can be highly heterogeneous with respect to crystallographic orientation and spatial distribution. Their influence on the behavior of the material and its early damage are questions that are currently being raised. This doctoral project seeks to answer these questions and also to present tangible solutions for the safe use of these alloys. A new model, called a cellular automaton, was developed to simulate the mechanical behavior of titanium alloys under cold dwell fatigue/creep. These models have provided a better understanding of the correlation between the microstructure and the mechanical behavior of the material and, above all, a detailed analysis of the local behavior of the material. Keywords: cellular automaton, fatigue/creep, titanium alloy, Eshelby inclusion, modeling

  8. Ergonomic Models of Anthropometry, Human Biomechanics and Operator-Equipment Interfaces

    NASA Technical Reports Server (NTRS)

    Kroemer, Karl H. E. (Editor); Snook, Stover H. (Editor); Meadows, Susan K. (Editor); Deutsch, Stanley (Editor)

    1988-01-01

    The Committee on Human Factors was established in October 1980 by the Commission on Behavioral and Social Sciences and Education of the National Research Council. The committee is sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Institute for the Behavioral and Social Sciences, the National Aeronautics and Space Administration, and the National Science Foundation. The workshop discussed the following: anthropometric models; biomechanical models; human-machine interface models; and research recommendations. A 17-page bibliography is included.

  9. Formulation of consumables management models: Mission planning processor payload interface definition

    NASA Technical Reports Server (NTRS)

    Torian, J. G.

    1977-01-01

    Consumables models required for the mission planning and scheduling function are formulated. The relation of the models to prelaunch, onboard, ground-support, and postmission functions for the space transportation systems is established. An analytical model consisting of an orbiter planning processor with a consumables database is developed. Also presented are a method of recognizing potential constraint violations in both the planning and flight operations functions, and a flight data file that stores and retrieves information over an extended period and interfaces with a flight operations processor for monitoring of the actual flights.
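Recognizing a potential constraint violation amounts to scanning a planned usage timeline against the available quantity. A minimal sketch follows; the processor's real data model, units, and redline logic are not given in the abstract, so everything here is illustrative.

```python
def check_consumable(schedule, initial, redline=0.0):
    """Scan a planned per-step usage timeline and return the index of
    the first step at which the consumable would fall below its
    redline, or None if the plan is feasible (illustrative sketch)."""
    level = initial
    for step, use in enumerate(schedule):
        level -= use
        if level < redline:
            return step          # potential constraint violation here
    return None                  # plan is feasible

print(check_consumable([10, 20, 30, 50], initial=100))   # 3
```

A planning processor would run such a scan for every consumable against every candidate schedule and flag the violating activities for replanning.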

  10. Designing of Multi-Interface Diverging Experiments to Model Rayleigh-Taylor Growth in Supernovae

    NASA Astrophysics Data System (ADS)

    Grosskopf, Michael; Drake, R.; Kuranz, C.; Plewa, T.; Hearn, N.; Meakin, C.; Arnett, D.; Miles, A.; Robey, H.; Hansen, J.; Hsing, W.; Edwards, M.

    2008-05-01

    In previous experiments on the Omega Laser, researchers studying blast-wave-driven instabilities have observed the growth of Rayleigh-Taylor instabilities under conditions scaled to the He/H interface of SN1987A. Most of these experiments have been planar experiments, as the energy available proved unable to accelerate enough mass in a diverging geometry. With the advent of the NIF laser, which can deliver hundreds of kJ to an experiment, it is possible to produce 3D, blast-wave-driven, multiple-interface explosions and to study the mixing that develops. We report scaling simulations to model the interface dynamics of a multilayered, diverging Rayleigh-Taylor experiment for NIF using CALE, a hybrid adaptive Lagrangian-Eulerian code developed at LLNL. Specifically, we looked both qualitatively and quantitatively at the Rayleigh-Taylor growth and multi-interface interactions in mass-scaled, spherically divergent systems using different materials. The simulations will assist in the target design process and help choose diagnostics to maximize the information we receive in a particular shot. Simulations are critical for experimental planning, especially for experiments on large-scale facilities. *This research was sponsored by LLNL through contract LLNL B56128 and by the NNSA through DOE Research Grant DE-FG52-04NA00064.

  11. Rigorous interpolation near tilted interfaces in 3-D finite-difference EM modelling

    NASA Astrophysics Data System (ADS)

    Shantsev, Daniil V.; Maaø, Frank A.

    2015-02-01

    We present a rigorous method for interpolation of electric and magnetic fields close to an interface with a conductivity contrast. The method takes into account not only a well-known discontinuity in the normal electric field, but also discontinuity in all the normal derivatives of electric and magnetic tangential fields. The proposed method is applied to marine 3-D controlled-source electromagnetic modelling (CSEM) where sources and receivers are located close to the seafloor separating conductive seawater and resistive formation. For the finite-difference scheme based on the Yee grid, the new interpolation is demonstrated to be much more accurate than alternative methods (interpolation using nodes on one side of the interface or interpolation using nodes on both sides, but ignoring the derivative jumps). The rigorous interpolation can handle arbitrary orientation of interface with respect to the grid, which is demonstrated on a marine CSEM example with a dipping seafloor. The interpolation coefficients are computed by minimizing a misfit between values at the nearest nodes and linear expansions of the continuous field components in the coordinate system aligned with the interface. The proposed interpolation operators can handle either uniform or non-uniform grids and can be applied to interpolation for both sources and receivers.
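The coefficient construction, minimizing a misfit between nearest-node values and a linear expansion that honours the interface discontinuity, can be illustrated in 1-D with a step basis function. This is a toy analogue only: the paper works with 3-D vector fields and jumps in the normal derivatives, and all names below are illustrative.

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(3):
        piv = max(range(k, 3), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, 3):
            f = M[r][k] / M[k][k]
            for c in range(k, 4):
                M[r][c] -= f * M[k][c]
    x = [0.0] * 3
    for k in (2, 1, 0):
        x[k] = (M[k][3] - sum(M[k][c] * x[c]
                              for c in range(k + 1, 3))) / M[k][k]
    return x

def jump_fit(xs, ys):
    """Least-squares fit of f(x) = a + b*x + c*H(x), where H is a step
    at the interface (x = 0). The step term lets the fit honour a field
    jump across the interface instead of smearing it."""
    rows = [[1.0, x, 1.0 if x > 0 else 0.0] for x in xs]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    Aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    return solve3(AtA, Aty)

# A field with slope 2 and a jump of 3 across x = 0 is recovered exactly.
xs = [-2.0, -1.0, 1.0, 2.0]
ys = [2 * x + (3.0 if x > 0 else 0.0) for x in xs]
a, b, c = jump_fit(xs, ys)
print([round(v, 6) + 0.0 for v in (a, b, c)])
```

Interpolation that ignores the step term (fitting `a + b*x` alone) would average across the jump, which is exactly the error the rigorous scheme avoids for receivers near the seafloor.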

  12. Modeling Geometry and Progressive Failure of Material Interfaces in Plain Weave Composites

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Cheng, Ron-Bin

    2010-01-01

    A procedure combining a geometrically nonlinear, explicit-dynamics contact analysis, computer aided design techniques, and elasticity-based mesh adjustment is proposed to efficiently generate realistic finite element models for meso-mechanical analysis of progressive failure in textile composites. In the procedure, the geometry of fiber tows is obtained by imposing a fictitious expansion on the tows. Meshes resulting from the procedure are conformal with the computed tow-tow and tow-matrix interfaces but are incongruent at the interfaces. The mesh interfaces are treated as cohesive contact surfaces not only to resolve the incongruence but also to simulate progressive failure. The method is employed to simulate debonding at the material interfaces in a ceramic-matrix plain weave composite with matrix porosity and in a polymeric matrix plain weave composite without matrix porosity, both subject to uniaxial cyclic loading. The numerical results indicate progression of the interfacial damage during every loading and reverse loading event in a constant strain amplitude cyclic process. However, the composites show different patterns of damage advancement.

  13. Mathematical modeling of planar and spherical vapor-liquid phase interfaces for multicomponent fluids

    NASA Astrophysics Data System (ADS)

    Celný, David; Vinš, Václav; Planková, Barbora; Hrubý, Jan

    2016-03-01

    Development of methods for accurate modeling of phase interfaces is important for understanding various natural processes and for applications in technology such as power production and carbon dioxide separation and storage. In particular, prediction of the course of non-equilibrium phase transition processes requires knowledge of the properties of the strongly curved phase interfaces of microscopic droplets. In our work, we focus on spherical vapor-liquid phase interfaces for binary mixtures. We developed a robust computational method to determine the density and concentration profiles. The fundamentals of our approach lie in the Cahn-Hilliard gradient theory, which allows us to transcribe the functional formulation into a system of ordinary Euler-Lagrange equations. This system is then split and modified into a shape suitable for iterative computation. For this task, we combine the Newton-Raphson and the shooting methods, providing a good convergence speed. For the thermodynamic properties, the PC-SAFT equation of state is used. We determine the density and concentration profiles for spherical phase interfaces at various saturation factors for the binary mixture of CO2 and C9H20. The computed concentration profiles allow us to determine the work of formation and other characteristics of the microscopic droplets.
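    For a single-component planar interface, gradient theory admits a first integral of the Euler-Lagrange equation, so the density profile reduces to a quadrature. The sketch below uses an illustrative double-well free energy dw(rho) = A·rho²·(1 − rho)² in reduced units as a stand-in for PC-SAFT; binary mixtures lack this first integral, which is exactly where the paper's Newton-Raphson/shooting scheme is needed. All parameter values here are assumptions.

```python
import numpy as np

def density_profile(A=1.0, c=1.0, n=2000, dz=0.01):
    """Planar vapor-liquid density profile from gradient theory with a
    model free energy dw(rho) = A*rho^2*(1-rho)^2 (reduced units).
    The first integral gives drho/dz = sqrt(2*dw(rho)/c), integrated
    here with RK4 starting from the midpoint rho(0) = 1/2.  The exact
    solution is the logistic profile rho(z) = 1/(1 + exp(-sqrt(2A/c) z))."""
    dw = lambda r: A * r**2 * (1.0 - r)**2
    f = lambda r: np.sqrt(max(2.0 * dw(r) / c, 0.0))
    rho = [0.5]
    for _ in range(n):
        r = rho[-1]
        k1 = f(r); k2 = f(r + 0.5 * dz * k1)
        k3 = f(r + 0.5 * dz * k2); k4 = f(r + dz * k3)
        rho.append(r + dz / 6.0 * (k1 + 2*k2 + 2*k3 + k4))
    z = np.arange(n + 1) * dz
    return z, np.array(rho)
```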

  14. Automated phantom assay system

    SciTech Connect

    Sisk, D.R.; Nichols, L.L.; Olsen, P.C.

    1991-11-01

    This paper describes an automated phantom assay system developed for assaying phantoms spiked with minute quantities of radionuclides. The system includes a computer-controlled linear-translation table that positions the phantom at exact distances from a spectrometer. A multichannel analyzer (MCA) interfaces with a computer to collect gamma spectral data. Signals transmitted between the controller and MCA synchronize data collection and phantom positioning. Measured data are then stored on disk for subsequent analysis. The automated system allows continuous unattended operation and ensures reproducible results.

  15. Degenerate Ising model for atomistic simulation of crystal-melt interfaces

    SciTech Connect

    Schebarchov, D.; Schulze, T. P.; Hendy, S. C.; Department of Physics, University of Auckland, Auckland 1010

    2014-02-21

    One of the simplest microscopic models for a thermally driven first-order phase transition is an Ising-type lattice system with nearest-neighbour interactions, an external field, and a degeneracy parameter. The underlying lattice and the interaction coupling constant control the anisotropic energy of the phase boundary, the field strength represents the bulk latent heat, and the degeneracy quantifies the difference in communal entropy between the two phases. We simulate the (stochastic) evolution of this minimal model by applying rejection-free canonical and microcanonical Monte Carlo algorithms, and we obtain caloric curves and heat capacity plots for square (2D) and face-centred cubic (3D) lattices with periodic boundary conditions. Since the model admits precise adjustment of bulk latent heat and communal entropy, neither of which affect the interface properties, we are able to tune the crystal nucleation barriers at a fixed degree of undercooling and verify a dimension-dependent scaling expected from classical nucleation theory. We also analyse the equilibrium crystal-melt coexistence in the microcanonical ensemble, where we detect negative heat capacities and find that this phenomenon is more pronounced when the interface is the dominant contributor to the total entropy. The negative branch of the heat capacity appears smooth only when the equilibrium interface-area-to-volume ratio is not constant but varies smoothly with the excitation energy. Finally, we simulate microcanonical crystal nucleation and subsequent relaxation to an equilibrium Wulff shape, demonstrating the model's utility in tracking crystal-melt interfaces at the atomistic level.
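    A minimal canonical Monte Carlo move for such a degenerate Ising model can be sketched as follows: the degeneracy g enters the acceptance ratio as a communal-entropy weight for sites converted into the degenerate (melt) phase. This is plain Metropolis on a 2-D square lattice with periodic boundaries, not the rejection-free algorithms the paper uses, and all parameter values are illustrative.

```python
import math, random

def metropolis_step(spins, L, J, h, g, T, rng):
    """One Metropolis update of an L x L degenerate Ising lattice:
    s = +1 is 'crystal', s = -1 is 'melt' with degeneracy g.  Flipping
    into the melt phase gains a factor g in the acceptance weight
    (communal entropy); flipping back loses it."""
    i, j = rng.randrange(L), rng.randrange(L)
    s = spins[i][j]
    nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
          + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    dE = 2.0 * s * (J * nb + h)          # energy cost of flipping s -> -s
    weight = g if s == 1 else 1.0 / g    # entropy gain/loss of the flip
    if rng.random() < min(1.0, weight * math.exp(-dE / T)):
        spins[i][j] = -s
        return True
    return False
```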

  16. An approximate model and empirical energy function for solute interactions with a water-phosphatidylcholine interface.

    PubMed Central

    Sanders, C R; Schwonek, J P

    1993-01-01

    An empirical model of a liquid crystalline (L alpha phase) phosphatidylcholine (PC) bilayer interface is presented along with a function which calculates the position-dependent energy of associated solutes. The model approximates the interface as a gradual two-step transition, the first step being from an aqueous phase to a phase of reduced polarity, but which maintains a high enough concentration of water and/or polar head group moieties to satisfy the hydrogen bond-forming potential of the solute. The second transition is from the hydrogen bonding/low polarity region to an effectively anhydrous hydrocarbon phase. The "interfacial energies" of solutes within this variable medium are calculated based upon atomic positions and atomic parameters describing general polarity and hydrogen bond donor/acceptor propensities. This function was tested for its ability to reproduce experimental water-solvent partitioning energies and water-bilayer partitioning data. In both cases, the experimental data were reproduced fairly well. Energy minimizations carried out on beta-hexyl glucopyranoside led to identification of a global minimum for the interface-associated glycolipid which exhibited glycosidic torsion angles in agreement with prior results (Hare, B.J., K.P. Howard, and J.H. Prestegard. 1993. Biophys. J. 64:392-398). Molecular dynamics simulations carried out upon this same molecule within the simulated interface led to results which were consistent with a number of experimentally based conclusions from previous work, but failed to quantitatively reproduce an available NMR quadrupolar/dipolar coupling data set (Sanders, C.R., and J.H. Prestegard. 1991. J. Am. Chem. Soc. 113:1987-1996). The proposed model and functions are readily incorporated into computational energy modeling algorithms and may prove useful in future studies of membrane-associated molecules. PMID:8241401
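    The two-step transition described above can be caricatured as a sum of two smooth sigmoidal steps in a scalar "polarity" coordinate. The positions, widths, and the intermediate 0.5 plateau in the sketch below are hypothetical placeholders, not the paper's fitted parameters, and the function is only a shape illustration of the model's idea.

```python
import math

def interfacial_polarity(z, z1=0.0, z2=8.0, w=2.0):
    """Hypothetical two-step polarity profile across a water-PC bilayer
    interface: step 1 (around depth z1, arbitrary units) drops from bulk
    water (1.0) to a hydrogen-bonding, reduced-polarity region (0.5);
    step 2 (around z2) drops to anhydrous hydrocarbon (0.0)."""
    step = lambda z0: 0.5 * (1.0 - math.tanh((z - z0) / w))
    return 0.5 * step(z1) + 0.5 * step(z2)
```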

  17. Evidence evaluation in fingerprint comparison and automated fingerprint identification systems--Modeling between finger variability.

    PubMed

    Egli Anthonioz, N M; Champod, C

    2014-02-01

    In the context of the investigation of the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study presents investigations into the variability of scores from an AFIS system when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios, which allows the evaluation of mark-to-print comparisons. In particular, this model, through its use of AFIS technology, benefits from the possibility of using a large amount of data, as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores issued from comparisons between impressions from the same source and showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR. We refer to it by the generic term of between-finger variability. The issues addressed in this paper in relation to between-finger variability are the required sample size, the influence of the finger number and general pattern, as well as that of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination as defined by the mark. Furthermore, strategies of obtaining between-finger variability when these elements cannot be conclusively seen on the mark (and its position with respect to other marks for finger number) have been presented. These results immediately allow case-by-case estimation of the between-finger variability in an operational setting. PMID:24447455
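    The likelihood-ratio construction described above (numerator from same-source scores, denominator from between-finger variability) can be sketched with toy score distributions. The normal-density choice below is purely illustrative; the study models the score distributions far more carefully, and the sample values are invented.

```python
import statistics
from math import exp, pi, sqrt

def likelihood_ratio(score, same_source_scores, different_source_scores):
    """Toy LR for an AFIS comparison score: the numerator density is
    fitted to same-source scores (same minutia configuration), the
    denominator to between-finger (different-source) scores.  Both are
    modeled here as normal densities for illustration only."""
    def normal_pdf(x, sample):
        mu = statistics.fmean(sample)
        sd = statistics.stdev(sample)
        return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))
    return (normal_pdf(score, same_source_scores)
            / normal_pdf(score, different_source_scores))
```

A score typical of the same-source sample yields LR > 1 (support for common source); a score typical of the between-finger sample yields LR < 1.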

  18. Evaluation of automated statistical shape model based knee kinematics from biplane fluoroscopy

    PubMed Central

    Baka, Nora; Kaptein, Bart L.; Giphart, J. Erik; Staring, Marius; de Bruijne, Marleen; Lelieveldt, Boudewijn P.F.; Valstar, Edward

    2014-01-01

    State-of-the-art fluoroscopic knee kinematic analysis methods require the patient-specific bone shapes segmented from CT or MRI. Substituting the patient-specific bone shapes with personalizable models, such as statistical shape models (SSMs), could eliminate the CT/MRI acquisitions, and thereby decrease costs and radiation dose (when eliminating CT). SSM-based kinematics, however, have not yet been evaluated on clinically relevant joint motion parameters. Therefore, in this work the applicability of SSMs for computing knee kinematics from biplane fluoroscopic sequences was explored. Kinematic precision with an edge-based automated bone tracking method using SSMs was evaluated on 6 cadaver and 10 in-vivo fluoroscopic sequences. The SSMs of the femur and the tibia-fibula were created using 61 training datasets. Kinematic precision was determined for medial-lateral tibial shift, anterior-posterior tibial drawer, joint distraction-contraction, flexion, tibial rotation and adduction. The relationship between kinematic precision and bone shape accuracy was also investigated. The SSM-based kinematics resulted in sub-millimeter (0.48–0.81 mm) and approximately one degree (0.69–0.99°) median precision on the cadaveric knees compared to bone-marker-based kinematics. The precision on the in-vivo datasets was comparable to the cadaveric sequences when evaluated with a semi-automatic reference method. These results are promising, though further work is necessary to reach the accuracy of CT-based kinematics. We also demonstrated that a better shape reconstruction accuracy does not automatically imply a better kinematic precision. This result suggests that the ability to accurately fit the edges in the fluoroscopic sequences plays a larger role in determining the kinematic precision than the overall 3D shape accuracy. PMID:24207131

  19. Simplifying the interaction between cognitive models and task environments with the JSON Network Interface.

    PubMed

    Hope, Ryan M; Schoelles, Michael J; Gray, Wayne D

    2014-12-01

    Process models of cognition, written in architectures such as ACT-R and EPIC, should be able to interact with the same software with which human subjects interact. By eliminating the need to simulate the experiment, this approach would simplify the modeler's effort, while ensuring that all steps required of the human are also required by the model. In practice, the difficulties of allowing one software system to interact with another present a significant barrier to any modeler who is not also skilled at this type of programming. The barrier increases if the programming language used by the modeling software differs from that used by the experimental software. The JSON Network Interface simplifies this problem for ACT-R modelers, and potentially, modelers using other systems. PMID:24338626
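    The core idea — letting a cognitive model and a task environment talk over a socket using JSON messages — can be sketched in a few lines. The message fields below ("model", "ack") are invented for illustration and do not reproduce the actual JSON Network Interface protocol; only the newline-delimited-JSON-over-TCP pattern is the point.

```python
import json, socket, threading

def serve_one(srv):
    """Environment side: accept one connection, read one JSON line,
    acknowledge it.  A stand-in for a task environment; the real
    protocol's message schema is not reproduced here."""
    conn, _ = srv.accept()
    msg = json.loads(conn.makefile().readline())
    conn.sendall((json.dumps({"ack": msg["model"]}) + "\n").encode())
    conn.close(); srv.close()

def model_client():
    """Model side: connect, announce itself, read the environment's reply."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0)); srv.listen(1)   # OS-assigned free port
    port = srv.getsockname()[1]
    t = threading.Thread(target=serve_one, args=(srv,)); t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall((json.dumps({"model": "ACT-R"}) + "\n").encode())
    reply = json.loads(cli.makefile().readline())
    cli.close(); t.join()
    return reply
```

One JSON object per line keeps message framing trivial on both sides, which is what makes this pattern attractive for modelers who are not network programmers.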

  20. Deterministic contact mechanics model applied to electrode interfaces in polymer electrolyte fuel cells and interfacial water accumulation

    NASA Astrophysics Data System (ADS)

    Zenyuk, I. V.; Kumbur, E. C.; Litster, S.

    2013-11-01

    An elastic deterministic contact mechanics model is applied to the compressed micro-porous layer (MPL) and catalyst layer (CL) interfaces in polymer electrolyte fuel cells (PEFCs) to elucidate the interfacial morphology. The model employs measured two-dimensional surface profiles and computes local surface deformation and interfacial gap, average contact resistance, and percent contact area as a function of compression pressure. Here, we apply the model to one interface having an MPL with cracks and one with a crack-free MPL. The void size distributions and water retention curves for the two sets of CL|MPL interfaces under compression are also computed. The CL|MPL interfaces with cracks are observed to have higher roughness, resulting in twice the interfacial average gap compared to the non-cracked interface at a given level of compression. The results indicate that the interfacial contact resistance is roughly the same for cracked and non-cracked interfaces because the cracks occupy a low percentage of the overall area. However, the cracked CL|MPL interface yields higher liquid saturation levels at all capillary pressures, resulting in an order of magnitude higher water storage capacity compared to the smooth interface. The van Genuchten water retention curve correlation for log-normal void size distributions is found to fit non-cracked CL|MPL interfaces well.
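    The van Genuchten retention correlation mentioned at the end of the abstract has a simple closed form. The sketch below implements the standard effective-saturation expression; the parameter values alpha and n are placeholders, not the values fitted to the CL|MPL data.

```python
def van_genuchten(psi, alpha=0.5, n=2.0):
    """van Genuchten effective saturation for capillary pressure psi >= 0:
    Se(psi) = [1 + (alpha*psi)^n]^(-m) with m = 1 - 1/n.  Se runs from 1
    (fully saturated) at psi = 0 toward 0 at large capillary pressure."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * psi) ** n) ** (-m)
```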

  1. Structure and application of an interface program between a geographic-information system and a ground-water flow model

    USGS Publications Warehouse

    Van Metre, P.C.

    1990-01-01

    A computer-program interface between a geographic-information system and a groundwater flow model links two unrelated software systems for use in developing the flow models. The interface program allows the modeler to compile and manage geographic components of a groundwater model within the geographic information system. A significant savings of time and effort is realized in developing, calibrating, and displaying the groundwater flow model. Four major guidelines were followed in developing the interface program: (1) no changes to the groundwater flow model code were to be made; (2) a data structure was to be designed within the geographic information system that follows the same basic data structure as the groundwater flow model; (3) the interface program was to be flexible enough to support all basic data options available within the model; and (4) the interface program was to be as efficient as possible in terms of computer time used and online-storage space needed. Because some programs in the interface are written in control-program language, the interface will run only on a computer with the PRIMOS operating system. (USGS)

  2. Finite Element Modeling of Laminated Composite Plates with Locally Delaminated Interface Subjected to Impact Loading

    PubMed Central

    Abo Sabah, Saddam Hussein; Kueh, Ahmad Beng Hong

    2014-01-01

    This paper investigates the effects of localized progressive interface delamination on the behavior of two-layer laminated composite plates subjected to low velocity impact loading for various fiber orientations. By means of a finite element approach, the laminae stiffnesses are constructed independently from their interface, where a well-defined virtually zero-thickness interface element is discretely adopted for delamination simulation. The present model has the advantage of simulating a localized interfacial condition at arbitrary locations, for various degeneration areas and intensities, under the influence of numerous boundary conditions, since the interfacial description is expressed discretely. In comparison, the model shows good agreement with existing results from the literature when modeled in a perfectly bonded state. It is found that as the local delamination area increases, so does the magnitude of the maximum displacement history. Also, as the deviation between top and bottom fiber orientations increases, both central deflection and energy absorption increase, although the relative maximum displacement correspondingly decreases in contrast to the laminates' perfectly bonded state. PMID:24696668

  3. Finite element modeling of laminated composite plates with locally delaminated interface subjected to impact loading.

    PubMed

    Abo Sabah, Saddam Hussein; Kueh, Ahmad Beng Hong

    2014-01-01

    This paper investigates the effects of localized progressive interface delamination on the behavior of two-layer laminated composite plates subjected to low velocity impact loading for various fiber orientations. By means of a finite element approach, the laminae stiffnesses are constructed independently from their interface, where a well-defined virtually zero-thickness interface element is discretely adopted for delamination simulation. The present model has the advantage of simulating a localized interfacial condition at arbitrary locations, for various degeneration areas and intensities, under the influence of numerous boundary conditions, since the interfacial description is expressed discretely. In comparison, the model shows good agreement with existing results from the literature when modeled in a perfectly bonded state. It is found that as the local delamination area increases, so does the magnitude of the maximum displacement history. Also, as the deviation between top and bottom fiber orientations increases, both central deflection and energy absorption increase, although the relative maximum displacement correspondingly decreases in contrast to the laminates' perfectly bonded state. PMID:24696668
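    Zero-thickness interface elements of the kind used in the two records above are typically driven by a traction-separation law. The bilinear form below is a common textbook choice, sketched here with arbitrary illustrative parameters; it is not claimed to be the specific law or the values used in the paper.

```python
def cohesive_traction(delta, delta0=0.01, deltaf=0.1, tmax=30.0):
    """Bilinear traction-separation law for a zero-thickness interface
    element: linear loading up to peak traction tmax at opening delta0,
    then linear softening to complete decohesion at deltaf.  Units are
    arbitrary; parameter values are illustrative placeholders."""
    if delta <= 0.0:
        return 0.0                                       # closed interface
    if delta < delta0:
        return tmax * delta / delta0                     # intact, elastic branch
    if delta < deltaf:
        return tmax * (deltaf - delta) / (deltaf - delta0)  # damage/softening
    return 0.0                                           # fully delaminated
```

The area under the curve is the fracture energy dissipated per unit interface area, which is what governs progressive delamination growth in such models.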

  4. Reduction of nonlinear embedded boundary models for problems with evolving interfaces

    NASA Astrophysics Data System (ADS)

    Balajewicz, Maciej; Farhat, Charbel

    2014-10-01

    Embedded boundary methods alleviate many computational challenges, including those associated with meshing complex geometries and solving problems with evolving domains and interfaces. Developing model reduction methods for computational frameworks based on such methods is, however, challenging. Indeed, most popular model reduction techniques are projection-based, and rely on basis functions obtained from the compression of simulation snapshots. In a traditional interface-fitted computational framework, the computation of such basis functions is straightforward, primarily because the computational domain does not contain a fictitious region. This is not the case for an embedded computational framework, whose computational domain typically contains both real and ghost regions whose definitions complicate the collection and compression of simulation snapshots. The problem is exacerbated when the interface separating both regions evolves in time. This paper addresses this issue by formulating the snapshot compression problem as a weighted low-rank approximation problem where the binary weighting identifies the evolving component of the individual simulation snapshots. The proposed approach is application independent and therefore comprehensive. It is successfully demonstrated for the model reduction of several two-dimensional, vortex-dominated, fluid-structure interaction problems.
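    The weighted low-rank approximation min ||W ⊙ (X − UV)||_F with a binary mask W can be approached with a simple alternating scheme: impute the masked (ghost-region) entries from the current low-rank reconstruction, then re-truncate with an SVD. This sketch illustrates the optimization problem only; it is not the algorithm used in the paper.

```python
import numpy as np

def weighted_low_rank(X, W, rank, iters=200):
    """Binary-weighted low-rank approximation of the snapshot matrix X
    (columns = snapshots) by iterated imputation + truncated SVD:
    entries with W = 0 (e.g. ghost-region samples) are repeatedly
    replaced by the current low-rank reconstruction, so only the W = 1
    entries constrain the fit."""
    Y = np.where(W > 0, X, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        Y_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        Y = np.where(W > 0, X, Y_lr)   # keep real data, impute the rest
    return Y_lr
```

For a matrix that is exactly low rank on its observed entries, the masked entries are recovered, which mimics how the reduction can ignore the fictitious region when building basis functions.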

  5. A coupled cohesive zone model for transient analysis of thermoelastic interface debonding

    NASA Astrophysics Data System (ADS)

    Sapora, Alberto; Paggi, Marco

    2014-04-01

    A coupled cohesive zone model based on an analogy between fracture and contact mechanics is proposed to investigate debonding phenomena at imperfect interfaces due to thermomechanical loading and thermal fields in bodies with cohesive cracks. Traction-displacement and heat flux-temperature relations are theoretically derived and numerically implemented in the finite element method. In the proposed formulation, the interface conductivity is a function of the normal gap, generalizing the Kapitza constant resistance model to partial decohesion effects. The case of a centered interface in a bimaterial component subjected to thermal loads is used as a test problem. The analysis focuses on the time evolution of the displacement and temperature fields during the transient regime before debonding, an issue not yet investigated in the literature. The solution of the nonlinear numerical problem is gained via an implicit scheme both in space and in time. The proposed model is finally applied to a case study in photovoltaics where the evolution of the thermoelastic fields inside a defective solar cell is predicted.

  6. Automating ground-fixed target modeling with the smart target model generator

    NASA Astrophysics Data System (ADS)

    Verner, D.; Dukes, R.

    2007-04-01

    The Smart Target Model Generator (STMG) is an AFRL/MNAL-sponsored tool for generating 3D building models for use in various weapon effectiveness tools. These tools include tri-service approved tools such as Modular Effectiveness/Vulnerability Assessment (MEVA), the Building Analysis Module in the Joint Weaponeering System (JWS), PENCRV3D, and WinBlast. It also supports internal dispersion modeling of chemical contaminants. STMG also has capabilities to generate infrared or other sensor images. Unlike most CAD models, STMG provides physics-based component properties such as strength, density, reinforcement, and material type. Interior components such as electrical and mechanical equipment, rooms, and ducts are also modeled. Buildings can be manually created with a graphical editor or automatically generated using rule bases that size and place the structural components according to structural engineering principles. In addition to its primary purpose of supporting conventional kinetic munitions, it can also be used to support sensor modeling and automatic target recognition.

  7. An automated image analysis method to measure regularity in biological patterns: a case study in a Drosophila neurodegenerative model.

    PubMed

    Diez-Hermano, Sergio; Valero, Jorge; Rueda, Cristina; Ganfornina, Maria D; Sanchez, Diego

    2015-01-01

    The fruitfly compound eye has been broadly used as a model for neurodegenerative diseases. Classical quantitative techniques to estimate the degeneration level of an eye under certain experimental conditions rely either on time-consuming histological techniques to measure retinal thickness, or on pseudopupil visualization and manual counting. Alternatively, visual examination of the eye surface appearance gives only a qualitative approximation, provided the observer is well-trained. Therefore, there is a need for a simplified and standardized analysis of the extent of fruitfly eye degeneration for both routine laboratory use and automated high-throughput analysis. We have designed the freely available ImageJ plugin FLEYE, a novel and user-friendly method for quantitative unbiased evaluation of neurodegeneration levels based on the acquisition of fly eye surface pictures. The incorporation of automated image analysis tools and a classification algorithm based on a built-in statistical model allows the user to quickly analyze large sample-size data with reliability and robustness. Pharmacological screenings or genetic studies using the Drosophila retina as a model system may benefit from our method, because it can be easily implemented in a fully automated environment. In addition, FLEYE can be trained to optimize the image detection capabilities, resulting in a versatile approach to evaluate the pattern regularity of other biological or non-biological samples and their experimental or pathological disruption. PMID:25887846

  8. Object-Based Integration of Photogrammetric and LiDAR Data for Automated Generation of Complex Polyhedral Building Models.

    PubMed

    Kim, Changjae; Habib, Ayman

    2009-01-01

    This research is concerned with a methodology for automated generation of polyhedral building models for complex structures, whose rooftops are bounded by straight lines. The process starts by utilizing LiDAR data for building hypothesis generation and derivation of individual planar patches constituting building rooftops. Initial boundaries of these patches are then refined through the integration of LiDAR and photogrammetric data and hierarchical processing of the planar patches. Building models for complex structures are finally produced using the refined boundaries. The performance of the developed methodology is evaluated through qualitative and quantitative analysis of the generated building models from real data. PMID:22346722

  9. Object-Based Integration of Photogrammetric and LiDAR Data for Automated Generation of Complex Polyhedral Building Models

    PubMed Central

    Kim, Changjae; Habib, Ayman

    2009-01-01

    This research is concerned with a methodology for automated generation of polyhedral building models for complex structures, whose rooftops are bounded by straight lines. The process starts by utilizing LiDAR data for building hypothesis generation and derivation of individual planar patches constituting building rooftops. Initial boundaries of these patches are then refined through the integration of LiDAR and photogrammetric data and hierarchical processing of the planar patches. Building models for complex structures are finally produced using the refined boundaries. The performance of the developed methodology is evaluated through qualitative and quantitative analysis of the generated building models from real data. PMID:22346722

  10. Automated quantification of carotid artery stenosis on contrast-enhanced MRA data using a deformable vascular tube model.

    PubMed

    Suinesiaputra, Avan; de Koning, Patrick J H; Zudilova-Seinstra, Elena; Reiber, Johan H C; van der Geest, Rob J

    2012-08-01

    The purpose of this study was to develop and validate a method for automated segmentation of the carotid artery lumen from volumetric MR Angiographic (MRA) images using a deformable tubular 3D Non-Uniform Rational B-Splines (NURBS) model. A flexible 3D tubular NURBS model was designed to delineate the carotid arterial lumen. User interaction was allowed to guide the model by placement of forbidden areas. Contrast-enhanced MRA (CE-MRA) datasets from 21 patients with carotid atherosclerotic disease were included in this study. The validation was performed against expert-drawn contours on multi-planar reformatted image slices perpendicular to the artery. Excellent linear correlations were found for cross-sectional area measurement (r = 0.98, P < 0.05) and for luminal diameter (r = 0.98, P < 0.05). Strong matches in terms of the Dice similarity index were achieved: 0.95 ± 0.02 (common carotid artery), 0.90 ± 0.07 (internal carotid artery), 0.87 ± 0.07 (external carotid artery), 0.88 ± 0.09 (carotid bifurcation) and 0.75 ± 0.20 (stenosed segments). Slight overestimation of stenosis grading by the automated method was observed. The mean differences were 7.20% (SD = 21.00%) and 5.2% (SD = 21.96%) when validated against two observers. Reproducibility in stenosis grade calculation by the automated method was high; the mean difference between two repeated analyses was 1.9 ± 7.3%. In conclusion, the automated method shows high potential for clinical application in the analysis of CE-MRA of carotid arteries. PMID:22160666
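    The Dice similarity index reported per carotid segment above has a one-line definition. The sketch below computes it for segmentations given as collections of voxel coordinates; this is the standard index, independent of the study's software.

```python
def dice(a, b):
    """Dice similarity index between two segmentations given as
    iterables of voxel coordinates: 2*|A ∩ B| / (|A| + |B|).
    Returns 1.0 for two empty segmentations by convention."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```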

  11. Automated Bayesian model development for frequency detection in biological time series

    PubMed Central

    2011-01-01

    Background A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. Results In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlights the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Conclusions Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure. PMID:21702910
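    For a single sinusoid of unknown frequency in noise, Bayesian Spectrum Analysis (in the Bretthorst form) yields a posterior over frequency of the form p(f|y) ∝ [1 − 2C(f)/(N ȳ²)]^((2−N)/2), where C(f) is a periodogram-like statistic and the amplitudes and noise level have been marginalized out. The sketch below is a minimal single-frequency version of that idea, not the full model-comparison machinery of the paper.

```python
import numpy as np

def frequency_posterior(t, y, freqs):
    """Bretthorst-style posterior over a single frequency for data
    y(t) = A cos(2*pi*f*t) + B sin(2*pi*f*t) + noise, with amplitudes
    and noise level marginalized.  C(f) is the (Lomb-style) power of
    the projection of y onto cos/sin at frequency f.  Works for
    non-uniformly sampled t as well."""
    y = y - y.mean()
    N = len(y)
    ybar = np.mean(y ** 2)
    logp = []
    for f in freqs:
        c = np.cos(2 * np.pi * f * t)
        s = np.sin(2 * np.pi * f * t)
        C = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
        x = max(1.0 - 2.0 * C / (N * ybar), 1e-12)   # guard against roundoff
        logp.append((2.0 - N) / 2.0 * np.log(x))
    logp = np.array(logp)
    post = np.exp(logp - logp.max())                 # stable normalization
    return post / post.sum()
```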

  12. Automation based on knowledge modeling theory and its applications in engine diagnostic systems using Space Shuttle Main Engine vibrational data. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Jonnathan H.

    1995-01-01

    Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate the labor- and time-intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).

  13. Blocking and Blending: Different Assembly Models of Cyclodextrin and Sodium Caseinate at the Oil/Water Interface.

    PubMed

    Xu, Hua-Neng; Liu, Huan-Huan; Zhang, Lianfu

    2015-08-25

    The stability of cyclodextrin (CD)-based emulsions is attributed to the formation of a solid film of oil-CD complexes at the oil/water interface. However, competitive interactions between CDs and other components at the interface still need to be understood. Here we develop two different routes that allow the incorporation of a model protein (sodium caseinate, SC) into emulsions based on β-CD. In one route, the components adsorb simultaneously from a mixed solution to the oil/water interface (route I); in the other, SC is added to a previously established CD-stabilized interface (route II). The adsorption mechanism of β-CD modified by SC at the oil/water interface is investigated by rheological and optical methods. Strong sensitivity of the rheological behavior to the routes is indicated by both steady-state and small-deformation oscillatory experiments. Possible β-CD/SC interaction models at the interface are proposed. In route I, the protein, owing to its higher affinity for the interface, adsorbs strongly at the interface, blocking the adsorption of β-CD and the formation of oil-CD complexes. In route II, the protein penetrates and blends into the preadsorbed layer of oil-CD complexes already formed at the interface. This picture of interfacial assembly is expected to help better understand CD-based emulsions in natural systems and improve their designs in engineering applications. PMID:26228663

  14. Automated identification of potential snow avalanche release areas based on digital elevation models

    NASA Astrophysics Data System (ADS)

    Bühler, Y.; Kumar, S.; Veitinger, J.; Christen, M.; Stoffel, A.; Snehmani

    2013-05-01

    The identification of snow avalanche release areas is a very difficult task. The release mechanism of snow avalanches depends on many different terrain, meteorological, snowpack and triggering parameters and their interactions, which are very difficult to assess. In many alpine regions such as the Indian Himalaya, nearly no information on avalanche release areas exists mainly due to the very rough and poorly accessible terrain, the vast size of the region and the lack of avalanche records. However avalanche release information is urgently required for numerical simulation of avalanche events to plan mitigation measures, for hazard mapping and to secure important roads. The Rohtang tunnel access road near Manali, Himachal Pradesh, India, is such an example. By far the most reliable way to identify avalanche release areas is using historic avalanche records and field investigations accomplished by avalanche experts in the formation zones. But both methods are not feasible for this area due to the rough terrain, its vast extent and lack of time. Therefore, we develop an operational, easy-to-use automated potential release area (PRA) detection tool in Python/ArcGIS which uses high spatial resolution digital elevation models (DEMs) and forest cover information derived from airborne remote sensing instruments as input. Such instruments can acquire spatially continuous data even over inaccessible terrain and cover large areas. We validate our tool using a database of historic avalanches acquired over 56 yr in the neighborhood of Davos, Switzerland, and apply this method for the avalanche tracks along the Rohtang tunnel access road. This tool, used by avalanche experts, delivers valuable input to identify focus areas for more-detailed investigations on avalanche release areas in remote regions such as the Indian Himalaya and is a precondition for large-scale avalanche hazard mapping.

  15. Automated modeling of ecosystem CO2 fluxes based on closed chamber measurements: A standardized conceptual and practical approach

    NASA Astrophysics Data System (ADS)

    Hoffmann, Mathias; Jurisch, Nicole; Albiac Borraz, Elisa; Hagemann, Ulrike; Sommer, Michael; Augustin, Jürgen

    2015-04-01

    Closed chamber measurements are widely used for determining the CO2 exchange of small-scale or heterogeneous ecosystems. Among the chamber design and operational handling, the data processing procedure is a considerable source of uncertainty of obtained results. We developed a standardized automatic data processing algorithm, based on the language and statistical computing environment R© to (i) calculate measured CO2 flux rates, (ii) parameterize ecosystem respiration (Reco) and gross primary production (GPP) models, (iii) optionally compute an adaptive temperature model, (iv) model Reco, GPP and net ecosystem exchange (NEE), and (v) evaluate model uncertainty (calibration, validation and uncertainty prediction). The algorithm was tested for different manual and automatic chamber measurement systems (such as e.g. automated NEE-chambers and the LI-8100A soil CO2 Flux system) and ecosystems. Our study shows that even minor changes within the modelling approach may result in considerable differences of calculated flux rates, derived photosynthetic active radiation and temperature dependencies and subsequently modeled Reco, GPP and NEE balance of up to 25%. Thus, certain modeling implications will be given, since automated and standardized data processing procedures, based on clearly defined criteria, such as statistical parameters and thresholds are a prerequisite and highly desirable to guarantee the reproducibility, traceability of modelling results and encourage a better comparability between closed chamber based CO2 measurements.

  16. Modeling the Charge Transport in Graphene Nano Ribbon Interfaces for Nano Scale Electronic Devices

    NASA Astrophysics Data System (ADS)

    Kumar, Ravinder; Engles, Derick

    2015-05-01

    In this research work we have modeled, simulated and compared the electronic charge transport for Metal-Semiconductor-Metal interfaces of Graphene Nano Ribbons (GNR) with different geometries using First-Principle calculations and Non-Equilibrium Green's Function (NEGF) method. We modeled junctions of Armchair GNR strip sandwiched between two Zigzag strips with (Z-A-Z) and Zigzag GNR strip sandwiched between two Armchair strips with (A-Z-A) using semi-empirical Extended Huckle Theory (EHT) within the framework of Non-Equilibrium Green Function (NEGF). I-V characteristics of the interfaces were visualized for various transport parameters. The distinct changes in conductance and I-V curves reported as the Width across layers, Channel length (Central part) was varied at different bias voltages from -1V to 1 V with steps of 0.25 V. From the simulated results we observed that the conductance through A-Z-A graphene junction is in the range of 10-13 Siemens whereas the conductance through Z-A-Z graphene junction is in the range of 10-5 Siemens. These suggested conductance controlled mechanisms for the charge transport in the graphene interfaces with different geometries is important for the design of graphene based nano scale electronic devices like Graphene FETs, Sensors.

  17. A model for low temperature interface passivation between amorphous and crystalline silicon

    NASA Astrophysics Data System (ADS)

    Mitchell, J.

    2013-11-01

    Excellent passivation of the crystalline surface is known to occur following post-deposition thermal annealing of intrinsic hydrogenated amorphous silicon thin-film layers deposited by plasma-enhanced chemical vapour deposition. The hydrogen primarily responsible for passivating dangling bonds at the crystalline silicon surface has often been singularly linked to a bulk diffusion mechanism within the thin-film layer. In this work, the origins and the mechanism by which hydrogen passivation occurs are more accurately identified by way of an interface-diffusion model, which operates independent of the a-Si:H bulk. This first-principles approach achieved good agreement with experimental results, describing a linear relationship between the average diffusion lengths and anneals temperature. Similarly, the time hydrogen spends between shallow-trap states is shown to decrease rapidly with increases in temperature circuitously related to probabilistic displacement distances. The interface reconfiguration model proposed in this work demonstrates the importance of interface states and identifies the misconception surrounding hydrogen passivation of the c-Si surface.

  18. Configuring a Graphical User Interface for Managing Local HYSPLIT Model Runs Through AWIPS

    NASA Technical Reports Server (NTRS)

    Wheeler, mark M.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian; VanSpeybroeck, Kurt M.

    2009-01-01

    Responding to incidents involving the release of harmful airborne pollutants is a continual challenge for Weather Forecast Offices in the National Weather Service. When such incidents occur, current protocol recommends forecaster-initiated requests of NOAA's Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model output through the National Centers of Environmental Prediction to obtain critical dispersion guidance. Individual requests are submitted manually through a secured web site, with desired multiple requests submitted in sequence, for the purpose of obtaining useful trajectory and concentration forecasts associated with the significant release of harmful chemical gases, radiation, wildfire smoke, etc., into local the atmosphere. To help manage the local HYSPLIT for both routine and emergency use, a graphical user interface was designed for operational efficiency. The interface allows forecasters to quickly determine the current HYSPLIT configuration for the list of predefined sites (e.g., fixed sites and floating sites), and to make any necessary adjustments to key parameters such as Input Model. Number of Forecast Hours, etc. When using the interface, forecasters will obtain desired output more confidently and without the danger of corrupting essential configuration files.

  19. Modeling interface trapping effect in organic field-effect transistor under illumination

    NASA Astrophysics Data System (ADS)

    Kwok, H. L.

    2009-02-01

    Organic field-effect transistors (OFETs) have received significant attention recently because of the potential application in low-cost flexible electronics. The physics behind their operation are relatively complex and require careful consideration particularly with respect to the effect of charge trapping at the insulator-semiconductor interface and field effect in a region with a thickness of a few molecular layers. Recent studies have shown that the so-called “onset” voltage ( V onset) in the rubrene OFET can vary significantly depending on past illumination and bias history. It is therefore important to define the role of the interface trap states in more concrete terms and show how they may affect device performance. In this work, we propose an equivalent-circuit model for the OFET to include mechanism(s) linked to trapping. This includes the existence of a light-sensitive “resistor” controlling charge flow into/out of the interface trap states. Based on the proposed equivalent-circuit model, an analytical expression of V onset is derived showing how it can depend on gate bias and illumination. Using data from the literature, we analyzed the I- V characteristics of a rubrene OFET after pulsed illumination and a tetracene OFET during steady-state illumination.

  20. Definition of common support equipment and space station interface requirements for IOC model technology experiments

    NASA Technical Reports Server (NTRS)

    Russell, Richard A.; Waiss, Richard D.

    1988-01-01

    A study was conducted to identify the common support equipment and Space Station interface requirements for the IOC (initial operating capabilities) model technology experiments. In particular, each principal investigator for the proposed model technology experiment was contacted and visited for technical understanding and support for the generation of the detailed technical backup data required for completion of this study. Based on the data generated, a strong case can be made for a dedicated technology experiment command and control work station consisting of a command keyboard, cathode ray tube, data processing and storage, and an alert/annunciator panel located in the pressurized laboratory.