Sample records for component object model

  1. Object-oriented biomedical system modelling--the language.

    PubMed

    Hakman, M; Groth, T

    1999-11-01

    The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, model component instantiation, and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and for defining model quantity types and quantity units. It supports explicit definition of model input, output, and state quantities, as well as model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored in model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed-object and object-request-broker technology. This paper includes both a language tutorial and a formal description of the language syntax and semantics.

  2. Method for distributed object communications based on dynamically acquired and assembled software components

    NASA Technical Reports Server (NTRS)

    Sundermier, Amy (Inventor)

    2002-01-01

    A method is disclosed for acquiring software components at execution time and assembling them into a client program, where the components may be acquired from remote networked servers. The acquired components are assembled according to knowledge represented within one or more acquired mediating components. A mediating component implements knowledge of an object model and uses that knowledge, together with acquired component class information and polymorphism, to assemble components into an interacting program at execution time. The interactions or abstract relationships between components in the object model may be implemented by the mediating component as direct invocations, indirect events, or software-bus exchanges. The acquired components may establish communications with remote servers and may present a user interface representing data to be exchanged with those servers. The mediating components may be assembled into layers, allowing arbitrarily complex programs to be constructed at execution time.
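
    The run-time assembly idea above can be sketched in miniature. The sketch below is illustrative only, not the patented method: all class names and the registry are invented, components are "acquired" from a plain dict rather than remote servers, and the mediating component wires them via direct invocation.

```python
# Hypothetical sketch of a "mediating component" that assembles
# independently acquired components at execution time using class
# information and polymorphism. All names here are invented.

class DataSource:
    """Base interface every acquirable source component implements."""
    def fetch(self):
        raise NotImplementedError

class RemoteFeed(DataSource):
    def fetch(self):
        return [1, 2, 3]  # stands in for data pulled from a remote server

class Display:
    """Base interface for presentation components."""
    def render(self, data):
        raise NotImplementedError

class ConsoleDisplay(Display):
    def render(self, data):
        return f"showing {data}"

class Mediator:
    """Implements knowledge of the object model: which abstract roles
    exist and how their instances interact (here, direct invocation)."""
    ROLES = {"source": DataSource, "display": Display}

    def __init__(self, registry):
        # 'Acquire' concrete classes and check them against the model
        # using acquired class information (issubclass) and polymorphism.
        self.parts = {}
        for role, cls in registry.items():
            assert issubclass(cls, self.ROLES[role]), f"bad component for {role}"
            self.parts[role] = cls()

    def run(self):
        # Wire the abstract relationship source -> display as a direct call.
        return self.parts["display"].render(self.parts["source"].fetch())

app = Mediator({"source": RemoteFeed, "display": ConsoleDisplay})
print(app.run())  # -> showing [1, 2, 3]
```

    Swapping in a different `DataSource` subclass changes the assembled program without touching the mediator, which is the point of putting the object-model knowledge in one place.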

  3. A simple rule of thumb for elegant prehension.

    PubMed

    Mon-Williams, M; Tresilian, J R

    2001-07-10

    Reaching out to grasp an object (prehension) is a deceptively elegant and skilled behavior. The movement prior to object contact can be described as having two components, the movement of the hand to an appropriate location for gripping the object, the "transport" component, and the opening and closing of the aperture between the fingers as they prepare to grip the target, the "grasp" component. The grasp component is sensitive to the size of the object, so that a larger grasp aperture is formed for wider objects; the maximum grasp aperture (MGA) is a little wider than the width of the target object and occurs later in the movement for larger objects. We present a simple model that can account for the temporal relationship between the transport and grasp components. We report the results of an experiment providing empirical support for our "rule of thumb." The model provides a simple, but plausible, account of a neural control strategy that has been the center of debate over the last two decades.
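
    The two regularities in the abstract (MGA slightly exceeds object width; MGA occurs later for larger objects) can be sketched numerically. The functional forms and constants below are invented for illustration and are not the authors' model:

```python
# Toy numerical sketch of the grasp-component regularities described
# above. The margin, slope, and saturating cap are illustrative guesses.

def max_grasp_aperture(object_width_mm, margin_mm=15.0):
    # MGA is typically a little wider than the object itself.
    return object_width_mm + margin_mm

def mga_time_fraction(object_width_mm, k=0.002, base=0.6):
    # Peak aperture occurs later in the movement for larger objects;
    # modelled here as a gentle, saturating increase with width.
    return min(0.95, base + k * object_width_mm)

for w in (20, 50, 80):
    print(w, max_grasp_aperture(w), round(mga_time_fraction(w), 3))
```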

  4. An Object Model for a Rocket Engine Numerical Simulator

    NASA Technical Reports Server (NTRS)

    Mitra, D.; Bhalla, P. N.; Pratap, V.; Reddy, P.

    1998-01-01

    The Rocket Engine Numerical Simulator (RENS) is a software package that numerically simulates the behavior of a rocket engine. The parameters of the engine's components are the input to these programs; given those parameters, the programs output the behaviors of the components. These behavioral values are then used to guide the design of, or to diagnose, a model of a rocket engine "built" by composing the programs that simulate the different components of the engine system. Using this software package effectively requires a flexible model of a rocket engine into which the component-simulation programs can be plugged. Our project is to develop an object-based model of such an engine system. We are following an iterative and incremental approach in developing the model, as is standard practice in object-oriented analysis and design. The process involves three stages: object modeling to represent the components and sub-components of a rocket engine, dynamic modeling to capture the temporal and behavioral aspects of the system, and functional modeling to represent the transformational aspects. This article reports on the first phase of our activity under a grant (RENS) from the NASA Lewis Research Center. We have used Rumbaugh's Object Modeling Technique and UML notation for this purpose. The classes of a rocket engine propulsion system have been developed, and some of them are presented in this report. The next step, developing a dynamic model for RENS, is also touched upon. We also discuss the advantages of object-based modeling for developing this type of integrated simulator over other tools, such as an expert-system shell or a procedural language (e.g., FORTRAN), which have been tried in the past.

  5. Parameterized hardware description as object oriented hardware model implementation

    NASA Astrophysics Data System (ADS)

    Drabik, Pawel K.

    2010-09-01

    The paper introduces a novel model for the design, visualization, and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and software applications, and builds on parameterized hardware description research. The establishment of a stable link between hardware and software, the purpose of the work designed and realized here, is presented, along with a novel programming framework model for the environment, named Graphic-Functional-Components. The aim of the paper is to present object-oriented hardware modeling with these features. A possible implementation of the model in FPGA chips, managed by object-oriented software in Java, is described.

  6. Biologically Inspired Model for Visual Cognition Achieving Unsupervised Episodic and Semantic Feature Learning.

    PubMed

    Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei

    2016-10-01

    Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and they provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework that mimics the active and dynamic learning and recognition process of the primate visual cortex. In principle, the main contribution is that the framework achieves unsupervised learning of episodic features (key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. In terms of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations, and their cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming general knowledge of a class of objects: the general knowledge of a class, mainly the key components, their spatial relations, and their average semantic values, can be formed as a concise description of the class; and 4) achieving higher-level cognition and dynamic updating: for a test image, the model can produce a classification and subclass semantic descriptions, and the test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and good performance is achieved in each layer of the DNN and in the semantic description learning process. Furthermore, the model can be generalized to recognition tasks for other objects, with learning ability.

  7. Modelling robot construction systems

    NASA Technical Reports Server (NTRS)

    Grasso, Chris

    1990-01-01

    TROTERs are small, inexpensive robots that can work together to accomplish sophisticated construction tasks. To understand the issues involved in designing and operating a team of TROTERs, the robots and their components are being modeled. A TROTER system that features standardized component behavior is introduced. An object-oriented model implemented in the Smalltalk programming language is described, and the advantages of the object-oriented approach for simulating robot and component interactions are discussed. The presentation includes preliminary results and a discussion of outstanding issues.

  8. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to index the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state-space vector in which fields correspond to ordered component objects and to relative, object-based spatial relationships between those components. The component assertion network represents evidence about the existence of object primitives in the input image; it establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation that enables the association between the object-based and image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below; it is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition-by-components. It also appears to support Marr's notion of hierarchical indexing (i.e., the specificity, adjunct, and parent indices). It supports the notion that multiple canonical views of an object may have to be stored in memory to enable efficient identification. The use of variable fields in the state-space vectors appears to keep the number of required network nodes tractable while imposing a semantic value on different areas of the state space. This semantic imposition supports an interface between the analogical aspects of neural networks and the propositional paradigms of symbolic processing.

  9. A new multi-objective optimization model for preventive maintenance and replacement scheduling of multi-component systems

    NASA Astrophysics Data System (ADS)

    Moghaddam, Kamran S.; Usher, John S.

    2011-07-01

    In this article, a new multi-objective optimization model is developed to determine optimal preventive maintenance and replacement schedules for a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete, equally sized periods in which one of three possible actions must be planned for each component: maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component that simultaneously minimizes total cost and maximizes overall system reliability over the planning horizon. Because of the complex, combinatorial, and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to the problem. The solution approach yields Pareto-optimal solutions that provide good trade-offs between total cost and overall system reliability. Such a modeling approach should be useful for maintenance planners and engineers tasked with developing recommended maintenance plans for complex systems of components.
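
    Evaluating one candidate schedule against the two objectives can be sketched as follows. The costs, failure rates, and age-reduction factor are invented for illustration; the paper's actual model and its GA/SA search are not reproduced here.

```python
# Hypothetical evaluation of one maintenance schedule: per period, each
# component is maintained (partial rejuvenation), replaced (age reset),
# or left alone, then ages by one period. Reliability assumes a series
# system with exponential failure times. All numbers are illustrative.
import math

MAINT_COST, REPL_COST = 10.0, 40.0
ALPHA = 0.5  # maintenance partially rejuvenates the component

def evaluate(schedule, failure_rates):
    """schedule[c][t] in {'none', 'maintain', 'replace'} for component c, period t."""
    total_cost = 0.0
    ages = [0.0] * len(schedule)
    reliability = 1.0
    for t in range(len(schedule[0])):
        for c, plan in enumerate(schedule):
            if plan[t] == "replace":
                total_cost += REPL_COST
                ages[c] = 0.0
            elif plan[t] == "maintain":
                total_cost += MAINT_COST
                ages[c] *= ALPHA
            ages[c] += 1.0          # one period of wear
        # series system: period reliability is the product over components
        for c, lam in enumerate(failure_rates):
            reliability *= math.exp(-lam * ages[c])
    return total_cost, reliability

cost, rel = evaluate([["none", "maintain", "none"],
                      ["replace", "none", "none"]],
                     [0.05, 0.02])
print(cost, round(rel, 4))
```

    A metaheuristic such as the paper's genetic algorithm would generate many schedules, call an evaluator like this, and keep the non-dominated (cost, reliability) pairs as the Pareto front.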

  10. Creating photorealistic virtual model with polarization-based vision system

    NASA Astrophysics Data System (ADS)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models have been used in many fields, such as education, medical services, entertainment, art, and digital archiving, thanks to advances in computation, and the demand for photorealistic virtual models keeps increasing. In the computer vision field, a number of techniques have been developed for creating such virtual models by observing real objects. In this paper, we propose a method for creating photorealistic virtual models using a laser range sensor and a polarization-based image capture system. We capture range and color images of an object rotated on a rotary table. Using the reconstructed object shape and the sequence of color images, the parameters of a reflection model are estimated in a robust manner, so that a photorealistic 3D model that accounts for surface reflection can be built. The key point of the proposed method is that the diffuse and specular reflection components are first separated from the color image sequence, and the reflectance parameters of each component are then estimated separately. A polarization filter is used to separate the reflection components. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffuse reflection. The recovered object shape and reflectance properties are then used to synthesize object images with realistic shading effects under arbitrary illumination conditions.

  11. Dynamics of Rotating Multi-component Turbomachinery Systems

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles

    1993-01-01

    The ultimate objective of turbomachinery vibration analysis is to predict both the overall and the component-level dynamic response. Accomplishing this objective requires complete engine structural models, including multiple stages of bladed-disk assemblies, flexible rotor shafts and bearings, and engine support structures and casings. In the present approach each component is analyzed as a separate structure, and boundary information is exchanged at the inter-component connections. The advantage of this tactic is that accurate and comprehensive system response information may be obtained even though readily available, detailed component models are utilized. Sample problems, which include a fixed-base rotating blade and a blade on a flexible rotor, are presented.

  12. Microsoft Repository Version 2 and the Open Information Model.

    ERIC Educational Resources Information Center

    Bernstein, Philip A.; Bergstraesser, Thomas; Carlson, Jason; Pal, Shankar; Sanders, Paul; Shutt, David

    1999-01-01

    Describes the programming interface and implementation of the repository engine and the Open Information Model for Microsoft Repository, an object-oriented meta-data management facility that ships in Microsoft Visual Studio and Microsoft SQL Server. Discusses Microsoft's component object model, object manipulation, queries, and information…

  13. Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.

    PubMed

    Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo

    2003-04-01

    An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject-specific variations in metabolic activity under altered experimental conditions, using the independent component analysis (ICA) concept. Because PET images represent time-compressed activities of several cognitive components, we derived a mathematical model that decomposes functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common across subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components across tasks, which are also spatially independent. Variations in hemodynamic activity across subjects or tasks can then be explained by variations in the usage weights of the functional components. We investigated the plausibility of the model using serial cognitive experiments on simple object perception, object recognition, two-back working memory, and divided attention in a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task, and we discuss the application of ICA to multiple subjects' PET images for exploring the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.
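
    The linear decomposition behind this kind of model can be sketched with synthetic numbers: each subject's image is a weighted mixture of spatially independent component images, X = A·S. The sketch below unmixes with the known mixing matrix for clarity; a real ICA estimates the unmixing from X alone by exploiting spatial independence. All values are invented.

```python
# Illustrative mixing model for multi-subject images (synthetic data).
# Two 'functional component' images over 8 voxels, non-overlapping,
# hence spatially independent.
s1 = [1, 1, 1, 0, 0, 0, 0, 0]
s2 = [0, 0, 0, 0, 1, 1, 1, 0]

# Per-subject usage weights of each component (the 'mixing matrix' A)
A = [[1.0, 0.2],   # subject 1
     [0.3, 0.9]]   # subject 2

# Mix: each subject's image is a weighted sum of the component images
X = [[A[i][0] * s1[v] + A[i][1] * s2[v] for v in range(8)] for i in range(2)]

# Unmix with the inverse of A; ICA would estimate this matrix from X
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]
S_hat = [[Ainv[i][0] * X[0][v] + Ainv[i][1] * X[1][v] for v in range(8)]
         for i in range(2)]
print([round(v, 6) for v in S_hat[0]])  # recovers s1
```

    The rows of A are the "usage weights" the abstract refers to: subject-to-subject variation lives in A, while the component images S are shared.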

  14. Dynamic elastic-plastic response of a 2-DOF mass-spring system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corona, Edmundo

    The objective of the work presented here arose from abnormal drop scenarios, and specifically the question of how the accelerations and the accumulation of plastic strain in internal components could be affected by the material properties of the external structure. In some scenarios, the impact loads can induce cyclic motion of the internal components. A second objective, therefore, was to explore the differences that can be expected when simulations are conducted using isotropic-hardening versus kinematic-hardening plasticity models. The simplest model that can address these objectives is a two-degree-of-freedom mass/spring model in which the springs exhibit elastic-plastic behavior. The purpose of this memo is to develop such a model and present a few results that address the objectives.

  15. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model that enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing, and integrating aerospace design and analysis models; and the development of a distributed infrastructure that enables Web-based exchange of models to simplify the collaborative design process and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.
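
    The hierarchical component model with enforced interoperability can be sketched as a common interface plus a composite component built from sub-components. The component names, the shared-state convention, and the numbers below are invented for illustration, not the framework's actual API.

```python
# Hypothetical sketch of a hierarchical component model: every component
# implements one interface, and an assembly is itself a component built
# from sub-components, giving the multilevel hierarchy described above.
from abc import ABC, abstractmethod

class Component(ABC):
    """Common interface: map named inputs to named outputs."""
    @abstractmethod
    def evaluate(self, inputs: dict) -> dict: ...

class Inlet(Component):
    def evaluate(self, inputs):
        return {"mass_flow": inputs["throttle"] * 100.0}

class Compressor(Component):
    def evaluate(self, inputs):
        return {"pressure": inputs["mass_flow"] * 0.5}

class EngineAssembly(Component):
    """A component composed of sub-components: the hierarchy in action."""
    def __init__(self, *stages):
        self.stages = stages

    def evaluate(self, inputs):
        state = dict(inputs)
        for stage in self.stages:                 # interoperability: each
            state.update(stage.evaluate(state))   # stage reads/writes the
        return state                              # shared named state

engine = EngineAssembly(Inlet(), Compressor())
print(engine.evaluate({"throttle": 0.8}))
```

    Because `EngineAssembly` is itself a `Component`, assemblies nest to any depth, and any component that honors the interface can be swapped in, which is what "enforces component interoperability" amounts to.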

  16. An architecture for object-oriented intelligent control of power systems in space

    NASA Technical Reports Server (NTRS)

    Holmquist, Sven G.; Jayaram, Prakash; Jansen, Ben H.

    1993-01-01

    A control system for autonomous distribution and control of electrical power during space missions is being developed. This system should free the astronauts from localizing faults and reconfiguring loads if problems with the power distribution and generation components occur. The control system uses an object-oriented simulation model of the power system and first-principles knowledge to detect, identify, and isolate faults. Each power system component is represented as a separate object with knowledge of its normal behavior. The reasoning process takes place at three different levels of abstraction: the Physical Component Model (PCM) level, the Electrical Equivalent Model (EEM) level, and the Functional System Model (FSM) level, with the PCM the lowest level of abstraction and the FSM the highest. At the EEM level the power system components are reasoned about as their electrical equivalents, e.g., a resistive load is thought of as a resistor. At the PCM level, however, detailed knowledge about the component's specific characteristics is taken into account. The FSM level models the system at the subsystem level, a level appropriate for reconfiguration and scheduling. The control system operates in two modes, a reactive and a proactive mode, simultaneously. In the reactive mode the control system receives measurement data from the power system and compares these values with values determined through simulation to detect the existence of a fault. The nature of the fault is then identified through a model-based reasoning process using mainly the EEM. Compound component models are constructed at the EEM level and used in the fault identification process. In the proactive mode the reasoning takes place at the PCM level. Individual components determine their future health status using a physical model and measured historical data. In case changes in the health status seem imminent, the component warns the control system about its impending failure. The fault isolation process uses the FSM level for its reasoning base.
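
    The reactive mode's core comparison (measured values vs. simulated expectations) can be sketched in a few lines. The signal names, values, and tolerance below are invented for illustration:

```python
# Minimal sketch of reactive-mode fault detection: flag every quantity
# whose measurement deviates from the simulated expectation by more than
# a relative tolerance. Data and threshold are hypothetical.
def detect_faults(measured, simulated, tolerance=0.05):
    faults = []
    for name, m in measured.items():
        expected = simulated[name]
        # relative residual between measurement and model prediction
        if abs(m - expected) > tolerance * max(abs(expected), 1e-9):
            faults.append(name)
    return faults

simulated = {"bus_voltage": 120.0, "load_current": 5.0}
measured  = {"bus_voltage": 119.5, "load_current": 6.2}
print(detect_faults(measured, simulated))  # -> ['load_current']
```

    In the system described above, a flagged residual would then trigger the model-based identification step at the EEM level rather than an immediate alarm.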

  17. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    PubMed

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI on the Android platform is often associated with low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few have addressed the efficiency and complexity problems of JNI on Android simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed that allows a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI for reusing CAR-compliant components in Android applications in a seamless and efficient way. A metadata-injection mechanism is designed to support automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data-type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. Experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code.

  18. Object-oriented approach for gas turbine engine simulation

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Felder, James L.

    1995-01-01

    An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial-grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype is a fully functional, general-purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady-state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report are the component-based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.

  19. Extension of RCC Topological Relations for 3D Complex Object Components Extracted from 3D LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen

    2016-06-01

    Topological relations are fundamental for the qualitative description, querying, and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging, and they cannot be directly applied to the relations between components of complex 3D objects represented by 3D B-Rep models in R3. Here we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between the planar regions of a 3D model given in boundary representation in R3. We propose a new dimension-extended 9-intersection model to represent the basic relations among components of a complex object: disjoint, meet, and intersect. The last element of the 3×3 matrix records the details of connection through the common parts of two regions and the intersection line of two planes. The model can also handle planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of the topological relations between two planar regions and detailed connection information. Experiments show that the proposed approach helps to identify the topological relations of planar segments of a point cloud automatically.
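
    The disjoint/meet/intersect trichotomy can be illustrated with a deliberately simplified 2D stand-in: classifying axis-aligned rectangles by whether their interiors overlap, their boundaries merely touch, or neither. The full method above works on arbitrary planar regions of a 3D B-Rep model, which this sketch does not attempt.

```python
# Simplified illustration of the three basic relations (disjoint, meet,
# intersect) on axis-aligned rectangles. 'meet' = boundaries touch but
# interiors do not overlap. Not the paper's 9-intersection machinery.
def relation(a, b):
    """a, b = (xmin, ymin, xmax, ymax)."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
        return "disjoint"                      # separated along some axis
    if ax1 > bx0 and bx1 > ax0 and ay1 > by0 and by1 > ay0:
        return "intersect"                     # interiors overlap
    return "meet"                              # share boundary points only

print(relation((0, 0, 1, 1), (2, 2, 3, 3)))   # disjoint
print(relation((0, 0, 1, 1), (1, 0, 2, 1)))   # meet (shared edge x = 1)
print(relation((0, 0, 2, 2), (1, 1, 3, 3)))   # intersect
```

    The paper's extended model additionally records *how* two regions connect (point, line segment, or region of contact), which is the information the last matrix element carries.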

  20. Modeling of Explorative Procedures for Remote Object Identification

    DTIC Science & Technology

    1991-09-01

    …haptic sensory system and the simulated foveal component of the visual system. Eventually it will allow multiple applications in remote sensing and… superposition of sensory channels. The use of a force-reflecting telemanipulator and a computer-simulated visual foveal component are the tools which… representation of human search models is achieved by using the proprioceptive component of the haptic sensory system and the simulated foveal component of the…

  1. Cost decomposition of linear systems with application to model reduction

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.

    1980-01-01

    A means is provided to assess the value, or 'cost', of each component of a large-scale system when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems that can fail, the 'component cost' is useful in failure-mode analysis. When the components represent mathematical equations that may be truncated, the 'component cost' becomes a criterion for model truncation. In the latter case, component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
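
    One standard way to split a quadratic cost into per-component shares (used here only to illustrate the idea, not as Skelton's exact decomposition): with total cost V = xᵀQx, assign component i the share Vᵢ = xᵢ·(Qx)ᵢ; the shares sum exactly to V. The matrix and state below are arbitrary illustrative numbers.

```python
# Illustrative 'component cost' split of a quadratic cost V = x^T Q x.
Q = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.5],
     [0.0, 0.5, 0.1]]   # symmetric cost weighting (invented)
x = [1.0, -2.0, 0.5]    # state vector (invented)

Qx = [sum(Q[i][j] * x[j] for j in range(3)) for i in range(3)]
component_costs = [x[i] * Qx[i] for i in range(3)]   # share per component
total = sum(component_costs)                          # equals x^T Q x

print(component_costs, total)
```

    Components with small |Vᵢ| contribute little to the objective and are the natural candidates for truncation in model reduction, which is the use the abstract describes.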

  2. Project Simu-School Component Washington State University

    ERIC Educational Resources Information Center

    Glass, Thomas E.

    1976-01-01

    This component of the project attempts to facilitate planning by furnishing models that manage cumbersome and complex data, supply an objectivity that identifies all relationships between elements of the model, and provide a quantitative model allowing for various forecasting techniques that describe the long-range impact of decisions. (Author/IRT)

  3. An object-relational model for structured representation of medical knowledge.

    PubMed

    Koch, S; Risch, T; Schneider, W; Wagner, I V

    2006-07-01

    Domain-specific knowledge is often not static but continuously evolving; this is especially true in the medical domain. Furthermore, the lack of standardized structures for presenting knowledge makes it difficult, and often impossible, to assess new knowledge in the context of existing knowledge, and possibilities to compare knowledge easily and directly are often lacking. It is therefore of utmost importance to create a model that allows for comparability, consistency, and quality assurance of medical knowledge in specific work situations. For this purpose, we have designed an object-relational model based on structured knowledge elements that are dynamically reusable by different multimedia-based tools for case-based documentation, disease-course simulation, and decision support. The model covers high-level components, such as patient case reports or simulations of the course of a disease, and low-level components (e.g., diagnoses, symptoms, or treatments), as well as the relationships between these components. The resulting schema has been implemented in AMOS II, an object-relational multi-database system supporting different views for search and analysis depending on the work situation.

  4. Fat segmentation on chest CT images via fuzzy models

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Wu, Caiyun; Pednekar, Gargi; Subramanian, Janani Rajan; Lederer, David J.; Christie, Jason; Torigian, Drew A.

    2016-03-01

    Quantification of fat throughout the body is vital for the study of many diseases. In the thorax, it is important for lung transplant candidates, since obesity and being underweight are contraindications to lung transplantation given their associations with increased mortality. Common approaches for thoracic fat segmentation are all interactive in nature, requiring significant manual effort to draw the interfaces between fat and muscle, with low efficiency and questionable repeatability. The goal of this paper is to explore a practical way to segment the subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) components of chest fat based on a recently developed body-wide automatic anatomy recognition (AAR) methodology. The AAR approach involves three main steps: building a fuzzy anatomy model of the body region involving all its major representative objects, recognizing objects in any given test image, and delineating the objects. We made several modifications to these steps to develop an effective solution for delineating the SAT and VAT components of fat. Two new objects representing the interfaces of the SAT and VAT regions with other tissues, SatIn and VatIn, are defined, rather than using the SAT and VAT components directly as objects for constructing the models. A hierarchical arrangement of these new and other reference objects is built to facilitate their recognition in hierarchical order, and accurate delineations of the SAT/VAT components are then derived from these objects. Unenhanced CT images from 40 lung transplant candidates were used to evaluate this new strategy experimentally. The mean object location error achieved was about 2 voxels, and the delineation errors in terms of false-positive and false-negative volume fractions were, respectively, 0.07 and 0.1 for SAT and 0.04 and 0.2 for VAT.

  5. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1996-01-01

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  6. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1997-12-09

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  7. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1999-01-01

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  8. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1996-08-06

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  9. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1997-01-01

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  10. Hybrid PolyLingual Object Model: An Efficient and Seamless Integration of Java and Native Components on the Dalvik Virtual Machine

    PubMed Central

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed with low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental result shows that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, and requires no JNI bridging code. PMID:25110745

  11. Rubber airplane: Constraint-based component-modeling for knowledge representation in computer-aided conceptual design

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1990-01-01

    Viewgraphs on Rubber Airplane: Constraint-based Component-Modeling for Knowledge Representation in Computer Aided Conceptual Design are presented. Topics covered include: computer aided design; object oriented programming; airfoil design; surveillance aircraft; commercial aircraft; aircraft design; and launch vehicles.

  12. Models and Frameworks: A Synergistic Association for Developing Component-Based Applications

    PubMed Central

    Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A.; Álvarez, Bárbara

    2014-01-01

    The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability of either designs or component-based implementations. This paper, which is based on the model-driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use an existing framework and an ad hoc framework, respectively, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development. PMID:25147858

  13. Models and frameworks: a synergistic association for developing component-based applications.

    PubMed

    Alonso, Diego; Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A; Álvarez, Bárbara

    2014-01-01

    The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability of either designs or component-based implementations. This paper, which is based on the model-driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use an existing framework and an ad hoc framework, respectively, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development.

  14. Higher order memories for objects encountered in different spatio-temporal contexts in mice: evidence for episodic memory.

    PubMed

    Dere, Ekrem; Silva, Maria A De Souza; Huston, Joseph P

    2004-01-01

    The ability to build higher order multi-modal memories comprising information about the spatio-temporal context of events has been termed 'episodic memory'. Deficits in episodic memory are apparent in a number of neuropsychiatric diseases. Unfortunately, the development of animal models of episodic memory has made little progress. Towards the goal of such a model, we devised an object exploration task for mice, providing evidence that rodents can associate object, spatial and temporal information. In our task the mice learned the temporal sequence by which identical objects were introduced into two different contexts. The 'what' component of an episodic memory was operationalized via physically distinct objects; the 'where' component through physically different contexts; and, most importantly, the 'when' component via the context-specific inverted sequence in which four objects were presented. Our results suggest that mice are able to recollect the inverted temporal sequence in which identical objects were introduced into two distinct environments. During two consecutive test trials, mice showed an inverse context-specific exploration pattern regarding identical objects that had previously been encountered with equal frequency. It seems that the contexts served as discriminative stimuli, signaling which of the two sequences is decisive during the two test trials.

  15. SU-E-I-58: Objective Models of Breast Shape Undergoing Mammography and Tomosynthesis Using Principal Component Analysis.

    PubMed

    Feng, Ssj; Sechopoulos, I

    2012-06-01

    To develop an objective model of the shape of the compressed breast undergoing mammographic or tomosynthesis acquisition. Automated thresholding and edge detection were performed on 984 anonymized digital mammograms (492 craniocaudal (CC) view mammograms and 492 mediolateral oblique (MLO) view mammograms) to extract the edge of each breast. Principal Component Analysis (PCA) was performed on these edge vectors to identify a limited set of parameters and eigenvectors that capture the principal modes of variation in breast shape. These parameters and eigenvectors comprise a model that can be used to describe the breast shapes present in acquired mammograms and to generate realistic models of breasts undergoing acquisition. Sample breast shapes were then generated from this model and evaluated. The mammograms in the database were previously acquired for a separate study and authorized for use in further research. The PCA successfully identified two principal components and their corresponding eigenvectors, forming the basis for the breast shape model. The simulated breast shapes generated from the model are reasonable approximations of clinically acquired mammograms. Using PCA, we have obtained models of the compressed breast undergoing mammographic or tomosynthesis acquisition based on objective analysis of a large image database. Until now, the breast in the CC view has been approximated as a semi-circular tube, while there has been no objectively obtained model for the MLO view breast shape. Such models can be used for various breast imaging research applications, such as x-ray scatter estimation and correction, dosimetry estimates, and computer-aided detection and diagnosis. © 2012 American Association of Physicists in Medicine.
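    The PCA pipeline described in this record can be sketched as follows. This is a minimal illustration on synthetic contour data, not the paper's mammogram database; the two variation modes and all variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_points = 200, 64
t = np.linspace(0, np.pi, n_points)
# Synthetic semi-circular edges with two independent modes of variation.
size = 1.0 + 0.2 * rng.standard_normal(n_images)   # overall-size mode
elong = 0.5 * rng.standard_normal(n_images)        # elongation mode
edges = size[:, None] * np.sin(t) + elong[:, None] * np.sin(2 * t)

# PCA: centre the edge vectors, then eigendecompose their covariance.
mean_edge = edges.mean(axis=0)
X = edges - mean_edge
eigvals, eigvecs = np.linalg.eigh(X.T @ X / (n_images - 1))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# With two underlying modes, two components capture almost all variance.
explained = eigvals[:2].sum() / eigvals.sum()

# Generate a new plausible shape: mean plus weighted principal eigenvectors.
new_shape = (mean_edge
             + 1.5 * np.sqrt(eigvals[0]) * eigvecs[:, 0]
             - 0.5 * np.sqrt(eigvals[1]) * eigvecs[:, 1])
```

    Sampling the component weights (here 1.5 and -0.5 standard deviations) is what lets such a model generate realistic but unseen shapes.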

  16. A Calculus for Boxes and Traits in a Java-Like Setting

    NASA Astrophysics Data System (ADS)

    Bettini, Lorenzo; Damiani, Ferruccio; de Luca, Marco; Geilmann, Kathrin; Schäfer, Jan

    The box model is a component model for the object-oriented paradigm that defines components (the boxes) with clear encapsulation boundaries. Having well-defined boundaries is crucial in component-based software development, because it makes it possible to reason about the interference and interaction between a component and its context. In general, boxes contain several objects and inner boxes, some of which are local to the box and cannot be accessed from other boxes, while others are accessible to other boxes. A trait is a set of methods divorced from any class hierarchy. Traits can be composed together to form classes or other traits. We present a calculus for boxes and traits. Traits are units of fine-grained reuse, whereas boxes can be seen as units of coarse-grained reuse. The calculus is equipped with an ownership type system and allows us to combine coarse- and fine-grained reuse of code while maintaining encapsulation of components.
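    As a loose, informal illustration of the trait idea (this is not the paper's calculus, which also covers boxes and an ownership type system), a trait can be modelled as a plain set of methods composed into a class outside any inheritance hierarchy:

```python
def make_trait(**methods):
    """A trait here is just a named set of methods, free of any class."""
    return dict(methods)

def compose(name, *traits, **extra):
    """Compose traits (plus extra class-local definitions) into a new class."""
    body = {}
    for trait in traits:
        overlap = body.keys() & trait.keys()
        if overlap:                     # conflicts must be resolved explicitly
            raise TypeError(f"method conflict: {sorted(overlap)}")
        body.update(trait)
    body.update(extra)
    return type(name, (object,), body)

Comparable = make_trait(lt=lambda self, other: self.value < other.value)
Printable = make_trait(show=lambda self: f"<{type(self).__name__}: {self.value}>")

# Fine-grained reuse: the same traits could be composed into other classes.
Point = compose("Point", Comparable, Printable,
                __init__=lambda self, value: setattr(self, "value", value))
a, b = Point(1), Point(2)
```

    The conflict check mirrors a defining property of trait composition: overlapping method names are an error rather than being resolved by linearization, as they would be under class inheritance.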

  17. Application of model predictive control for optimal operation of wind turbines

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Cao, Pei; Tang, J.

    2017-04-01

    For large-scale wind turbines, reducing maintenance cost is a major challenge. Model predictive control (MPC) is a promising approach for dealing with multiple conflicting objectives using the weighted-sum approach. In this research, the model predictive control method is applied to a wind turbine to find an optimal balance among multiple objectives, such as energy capture, loads on turbine components, and pitch actuator usage. The actuator constraints are integrated into the objective function at the control design stage. The analysis is carried out in both the partial load region and the full load region, and the performances are compared with those of a baseline gain-scheduling PID controller. The application of this strategy achieves an enhanced balance of component loads, average power, and actuator usage in the partial load region.
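    A weighted-sum MPC objective of this kind can be sketched on a toy scalar plant. The system, weights, and horizon below are hypothetical, not the turbine model used in the paper; a crude grid search over a constant input stands in for a real optimizer.

```python
import numpy as np

a_sys, b_sys = 0.9, 0.5          # hypothetical scalar plant x' = a*x + b*u
w_track, w_effort = 1.0, 0.1     # weighted-sum objective coefficients
u_max = 1.0                      # actuator constraint
horizon, target = 10, 2.0

def mpc_step(x):
    """Grid-search the input (held constant over the horizon) minimising the
    weighted sum of tracking error and actuator effort, subject to |u| <= u_max."""
    best_u, best_cost = 0.0, float("inf")
    for u in np.linspace(-u_max, u_max, 201):
        xk, cost = x, 0.0
        for _ in range(horizon):
            xk = a_sys * xk + b_sys * u
            cost += w_track * (xk - target) ** 2 + w_effort * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed-loop simulation: re-plan at every step (receding horizon).
x = 0.0
for _ in range(30):
    x = a_sys * x + b_sys * mpc_step(x)
```

    Tuning w_track against w_effort is exactly the trade-off the abstract describes between power tracking and actuator usage, and the input bound shows how an actuator constraint enters at design time.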

  18. Component Design Report: International Transportation Energy Demand Determinants Model

    EIA Publications

    2017-01-01

    This Component Design Report discusses working design elements for a new model to replace the International Transportation Model (ITran) in the World Energy Projection System Plus (WEPS+) that is maintained by the U.S. Energy Information Administration. The key objective of the new International Transportation Energy Demand Determinants (ITEDD) model is to enable more rigorous, quantitative research related to energy consumption in the international transportation sectors.

  19. Self-organized network of fractal-shaped components coupled through statistical interaction.

    PubMed

    Ugajin, R

    2001-09-01

    A dissipative dynamics is introduced to generate self-organized networks of interacting objects, which we call coupled-fractal networks. The growth model is constructed based on a growth hypothesis in which the growth rate of each object is a product of the probability of receiving source materials from far away and the probability of receiving adhesives from other grown objects, where each object grows to be a random fractal if isolated, but connects with others if glued. The network is governed by the statistical interaction between fractal-shaped components, which can only be identified in a statistical manner over ensembles. This interaction is investigated using the degree of correlation between fractal-shaped components, enabling us to determine whether it is attractive or repulsive.

  20. DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection.

    PubMed

    Ouyang, Wanli; Zeng, Xingyu; Wang, Xiaogang; Qiu, Shi; Luo, Ping; Tian, Yonglong; Li, Hongsheng; Yang, Shuo; Wang, Zhe; Li, Hongyang; Loy, Chen Change; Wang, Kun; Yan, Junjie; Tang, Xiaoou

    2016-07-07

    In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation-constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures and training strategies, and by adding and removing key components in the detection pipeline, a set of models with large diversity is obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean average precision obtained by RCNN [16], which was the state of the art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provides a global view for understanding the deep learning object detection pipeline.
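    The intuition behind a deformation-penalized pooling layer can be sketched in a much-simplified 1-D form (this is not the paper's exact layer; the response values, anchor, and quadratic penalty are invented): a part's score is the maximum, over candidate positions, of the filter response minus a penalty that grows with displacement from the part's anchor.

```python
import numpy as np

def def_pool(responses, anchor, penalty=0.1):
    """Score = max over positions of (response minus a quadratic deformation
    penalty on the displacement from the anchor position)."""
    positions = np.arange(len(responses))
    scores = responses - penalty * (positions - anchor) ** 2
    best = int(np.argmax(scores))
    return scores[best], best

# A strong response far from the anchor (index 3) loses to a slightly weaker
# response at the anchor (index 1) once the deformation penalty is applied.
responses = np.array([0.2, 0.9, 0.4, 1.0, 0.1])
score, pos = def_pool(responses, anchor=1)
```

    Plain max-pooling would pick position 3 here; the geometric constraint makes the part prefer a placement consistent with the object's layout.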

  1. Modelling Transformations of Quadratic Functions: A Proposal of Inductive Inquiry

    ERIC Educational Resources Information Center

    Sokolowski, Andrzej

    2013-01-01

    This paper presents a study about using scientific simulations to enhance the process of mathematical modelling. The main component of the study is a lesson whose major objective is to have students mathematise a trajectory of a projected object and then apply the model to formulate other trajectories by using the properties of function…

  2. Movement or Goal: Goal Salience and Verbal Cues Affect Preschoolers' Imitation of Action Components

    ERIC Educational Resources Information Center

    Elsner, Birgit; Pfeifer, Caroline

    2012-01-01

    The impact of goal salience and verbal cues given by the model on 3- to 5-year-olds' reproduction of action components (movement or goal) was investigated in an imitation choice task. Preschoolers watched an experimenter moving a puppet up or down a ramp, terminating at one of two target objects. The target objects were either differently colored…

  3. A Collaborative Support Approach on UML Sequence Diagrams for Aspect-Oriented Software

    NASA Astrophysics Data System (ADS)

    de Almeida Naufal, Rafael; Silveira, Fábio F.; Guerra, Eduardo M.

    As AOP is applied more broadly in software projects, it becomes important to provide separation between aspects and OO components at design time, to improve the understanding of AO systems, promote the reuse of aspects, and obtain the benefits of AO modularization. Since UML is a standard for modeling OO systems, it can be applied to model the decoupling between aspects and OO components. The application of UML to this area is the subject of ongoing study and is the focus of this paper. This paper presents an extension of the default UML meta-model, named MIMECORA-DS, to show object-object, object-aspect and aspect-aspect interactions using UML's sequence diagram. This research also applies MIMECORA-DS in a case example to assess its applicability.

  4. Developing the snow component of a distributed hydrological model: a step-wise approach based on multi-objective analysis

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.; Colohan, R. J. E.

    1999-09-01

    A snow component has been developed for the distributed hydrological model, DIY, using an approach that sequentially evaluates the behaviour of different functions as they are implemented in the model. The evaluation is performed using multi-objective functions to ensure that the internal structure of the model is correct. The development of the model, using a sub-catchment in the Cairngorm Mountains in Scotland, demonstrated that the degree-day model can be enhanced for hydroclimatic conditions typical of those found in Scotland, without increasing meteorological data requirements. An important element of the snow model is a function to account for wind re-distribution. This causes large accumulations of snow in small pockets, which are shown to be important in sustaining baseflows in the rivers during the late spring and early summer, long after the snowpack has melted from the bulk of the catchment. The importance of the wind function would not have been identified using a single objective function of total streamflow to evaluate the model behaviour.
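    The degree-day approach at the core of such snow components can be sketched as follows. The parameter values are illustrative only, not those calibrated for the Cairngorm sub-catchment, and the wind re-distribution function the record highlights is omitted.

```python
# Degree-day snowmelt: daily melt is proportional to the excess of air
# temperature over a threshold, limited by the snow actually present.
def degree_day_melt(temps_c, ddf=3.0, t_base=0.0, swe=100.0):
    """ddf: degree-day factor (mm per deg C per day); swe: snow water
    equivalent in mm. Returns the daily melt series and remaining SWE."""
    melt_series = []
    for t in temps_c:
        potential = ddf * max(t - t_base, 0.0)  # no melt below threshold
        melt = min(potential, swe)              # cannot melt more than the pack
        swe -= melt
        melt_series.append(melt)
    return melt_series, swe

# Four hypothetical days: one below freezing, three progressively warmer.
melt, remaining = degree_day_melt([-2.0, 1.0, 4.0, 6.0])
```

    Enhancements of the kind described in the record typically adjust ddf or add terms (e.g. for wind-redistributed accumulation) without demanding extra meteorological inputs.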

  5. An investigation of constraint-based component-modeling for knowledge representation in computer-aided conceptual design

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1990-01-01

    Originally, computer programs for engineering design focused on detailed geometric design. Later, computer programs for algorithmically performing the preliminary design of specific well-defined classes of objects became commonplace. However, due to the need for extreme flexibility, it appears unlikely that conventional programming techniques will prove fruitful in developing computer aids for engineering conceptual design. The use of symbolic processing techniques, such as object-oriented programming and constraint propagation, facilitates such flexibility. Object-oriented programming allows programs to be organized around the objects and behavior to be simulated, rather than around fixed sequences of function- and subroutine-calls. Constraint propagation allows declarative statements to be understood as designating multi-directional mathematical relationships among all the variables of an equation, rather than as unidirectional assignments to the variable on the left-hand side of the equation, as in conventional computer programs. The research has concentrated on applying these two techniques to the development of a general-purpose computer aid for engineering conceptual design. Object-oriented programming techniques are utilized to implement a user-extensible database of design components. The mathematical relationships which model both the geometry and physics of these components are managed via constraint propagation. In addition to this component-based hierarchy, special-purpose data structures are provided for describing component interactions and supporting state-dependent parameters. In order to investigate the utility of this approach, a number of sample design problems from the field of aerospace engineering were implemented using the prototype design tool, Rubber Airplane.
The additional level of organizational structure obtained by representing design knowledge in terms of components is observed to provide greater convenience to the program user, and to result in a database of engineering information which is easier both to maintain and to extend.
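    The multi-directional constraint behaviour described above can be sketched in miniature. The cell abstraction and the sizing relation below are hypothetical illustrations, not code from Rubber Airplane itself.

```python
class Cell:
    """A design variable that may or may not have a value yet."""
    def __init__(self, name):
        self.name, self.value = name, None

def product_constraint(a, b, c):
    """Multi-directional constraint a * b = c: infer whichever cell is
    missing from the two that are known, rather than assigning one way."""
    if a.value is not None and b.value is not None:
        c.value = a.value * b.value
    elif a.value is not None and c.value is not None:
        b.value = c.value / a.value
    elif b.value is not None and c.value is not None:
        a.value = c.value / b.value

# Hypothetical sizing relation: weight = wing_area * wing_loading.
wing_area, wing_loading, weight = Cell("S"), Cell("W/S"), Cell("W")
weight.value, wing_loading.value = 12000.0, 400.0
product_constraint(wing_area, wing_loading, weight)   # runs "backwards"
```

    A conventional assignment could only compute weight from the other two; here fixing weight and wing loading infers the wing area, which is the declarative behaviour the abstract contrasts with left-hand-side assignment.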

  6. Priming Contour-Deleted Images: Evidence for Intermediate Representations in Visual Object Recognition.

    ERIC Educational Resources Information Center

    Biederman, Irving; Cooper, Eric E.

    1991-01-01

    Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…

  7. A flexible computer aid for conceptual design based on constraint propagation and component-modeling. [of aircraft in three dimensions

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1988-01-01

    The Rubber Airplane program, which combines two symbolic processing techniques with a component-based database of design knowledge, is proposed as a computer aid for conceptual design. Using object-oriented programming, programs are organized around the objects and behavior to be simulated, and using constraint propagation, declarative statements designate mathematical relationships among all the equation variables. It is found that the additional level of organizational structure resulting from the arrangement of the design information in terms of design components provides greater flexibility and convenience.

  8. Structured decision making as a conceptual framework to identify thresholds for conservation and management

    USGS Publications Warehouse

    Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.

    2009-01-01

    Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. 
Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
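    A toy numerical sketch of a decision threshold in this sense follows. The utility function and all numbers are invented for illustration, not taken from the paper's patch occupancy models: at each value of the state variable, choose the action maximising a utility encoding the management objective; the decision threshold is the state value at which the optimal action switches.

```python
# Invented utility: protection boosts occupancy most when occupancy is low,
# at a fixed cost; "no action" costs nothing and changes nothing.
def utility(occ, action):
    if action == "protect":
        return 10 * (occ + 0.3 * (1 - occ)) - 2.0
    return 10 * occ

def optimal_action(occ):
    return ("protect"
            if utility(occ, "protect") > utility(occ, "no action")
            else "no action")

# Sweep the state variable; the decision threshold is where the action flips.
states = [i / 100 for i in range(101)]
threshold = next(s for s in states if optimal_action(s) == "no action")
```

    Note that the threshold is derived from the objective (the utility) and the state, matching the paper's point that decision thresholds are inherited from the other components rather than specified directly.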

  9. Comparison of 3 Symptom Classification Methods to Standardize the History Component of the HEART Score.

    PubMed

    Marchick, Michael R; Setteducato, Michael L; Revenis, Jesse J; Robinson, Matthew A; Weeks, Emily C; Payton, Thomas F; Winchester, David E; Allen, Brandon R

    2017-09-01

    The History, Electrocardiography, Age, Risk factors, Troponin (HEART) score enables rapid risk stratification of emergency department patients presenting with chest pain. However, the subjectivity in scoring introduced by the history component has been criticized by some clinicians. We examined the association of 3 objective scoring models with the results of noninvasive cardiac testing. Medical records for all patients evaluated in the chest pain center of an academic medical center during a 1-year period were reviewed retrospectively. Each patient's history component score was calculated using 3 models developed by the authors. Differences in the distribution of HEART scores for each model, their degree of agreement with one another, and the results of cardiac testing were analyzed. Seven hundred forty-nine patients were studied, 58 of whom had an abnormal stress test or computed tomography coronary angiography. The mean HEART scores for models 1, 2, and 3 were 2.97 (SD 1.17), 2.57 (SD 1.25), and 3.30 (SD 1.35), respectively, and were significantly different (P < 0.001). However, for each model, the likelihood of an abnormal cardiovascular test did not correlate with higher scores on the symptom component of the HEART score (P = 0.09, 0.41, and 0.86, respectively). While the objective scoring models produced different distributions of HEART scores, no model performed well with regard to identifying patients with abnormal advanced cardiac studies in this relatively low-risk cohort. Further studies in a broader cohort of patients, as well as comparison with the performance of subjective history scoring, are warranted before adoption of any of these objective models.
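    For reference, the arithmetic of the published HEART score (each of the five components graded 0 to 2, total 0 to 10) can be sketched as follows. The clinical grading of history and ECG into 0/1/2, and the extra criteria that can set the risk-factor component to 2 (e.g. known atherosclerotic disease), are abbreviated here.

```python
def heart_score(history, ecg, age_years, n_risk_factors, troponin_ratio):
    """history, ecg: already graded 0, 1 or 2; troponin_ratio: measured
    troponin divided by the upper normal limit."""
    age = 2 if age_years >= 65 else 1 if age_years >= 45 else 0
    risk = 2 if n_risk_factors >= 3 else 1 if n_risk_factors >= 1 else 0
    trop = 2 if troponin_ratio > 3 else 1 if troponin_ratio > 1 else 0
    return history + ecg + age + risk + trop

# A hypothetical low-risk patient: slightly suspicious history, normal ECG,
# age 52, two risk factors, normal troponin.
score = heart_score(history=1, ecg=0, age_years=52, n_risk_factors=2,
                    troponin_ratio=0.8)
```

    The subjectivity the study targets sits entirely in the `history` input; the three objective models in the paper are alternative ways of producing that single 0-2 value.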

  10. Differentiation and Exploration of Model MACP for HE VER 1.0 on Prototype Performance Measurement Application for Higher Education

    NASA Astrophysics Data System (ADS)

    El Akbar, R. Reza; Anshary, Muhammad Adi Khairul; Hariadi, Dennis

    2018-02-01

    Model MACP for HE ver. 1.0 is a model that describes how to measure and monitor performance in higher education. Based on a review of the research related to the model, several parts of its components call for further development, so this research has four main objectives. The first is to differentiate the CSF (critical success factor) components of the previous model; the second is to explore the KPIs (key performance indicators) in the previous model; the third, building on the first two, is to design a new and more detailed model; and the fourth is to design a prototype application for performance measurement in higher education based on the new model. The method used is exploratory research, with the application designed using the prototype method. The results of this study are, first, a new and more detailed model for measuring and monitoring performance in higher education, obtained by differentiating and exploring Model MACP for HE ver. 1.0; second, a dictionary of college performance measurement compiled by re-evaluating the existing indicators; and third, the design of a prototype application for performance measurement in higher education.

  11. The relation of object naming and other visual speech production tasks: a large scale voxel-based morphometric study.

    PubMed

    Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia

    2015-01-01

    We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with the models proposing that object naming relies on a left-lateralised language dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and the more general difficulties in language processing.

  12. Three-dimensional localization and optical imaging of objects in turbid media with independent component analysis.

    PubMed

    Xu, M; Alrubaiee, M; Gayen, S K; Alfano, R R

    2005-04-01

    A new approach for optical imaging and localization of objects in turbid media that makes use of independent component analysis (ICA) from information theory is demonstrated. The experimental arrangement realizes a multisource illumination of a turbid medium with embedded objects and a multidetector acquisition of transmitted light on the medium boundary. The resulting spatial diversity and multiple angular observations provide robust data for three-dimensional localization and characterization of absorbing and scattering inhomogeneities embedded in a turbid medium. ICA of the perturbations in the spatial intensity distribution on the medium boundary sorts out the embedded objects, and their locations are obtained from Green's function analysis based on any appropriate light propagation model. Imaging experiments were carried out on two highly scattering samples of thickness approximately 50 times the transport mean-free path of the respective medium. One turbid medium had two embedded absorptive objects, and the other had four scattering objects. An independent component separation of the signal, in conjunction with diffusive photon migration theory, was used to locate the embedded inhomogeneities. In both cases, improved lateral and axial localizations of the objects over the result obtained by use of common photon migration reconstruction algorithms were achieved. The approach is applicable to different medium geometries, can be used with any suitable photon propagation model, and is amenable to near-real-time imaging applications.

  13. An ontology for component-based models of water resource systems

    NASA Astrophysics Data System (ADS)

    Elag, Mostafa; Goodall, Jonathan L.

    2013-08-01

    Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
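    The encoding the authors describe, core concepts and relationships expressed in OWL, can be sketched with plain triples. The `wrc:` terms below are hypothetical stand-ins, not the actual WRC ontology IRIs, and the hand-rolled Turtle serializer just keeps the sketch dependency-free (a dedicated RDF library such as rdflib would be the normal route).

```python
# Hypothetical WRC-style terms; the real ontology's namespace and IRIs differ.
PREFIXES = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "owl": "http://www.w3.org/2002/07/owl#",
    "wrc": "http://example.org/wrc#",  # placeholder namespace
}

def iri(prefix, local):
    """Build a full IRI from a declared prefix and a local name."""
    return PREFIXES[prefix] + local

def qname(full_iri):
    """Compact a full IRI to prefix:local form using the declared prefixes."""
    for prefix, ns in PREFIXES.items():
        if full_iri.startswith(ns):
            return prefix + ":" + full_iri[len(ns):]
    return "<" + full_iri + ">"

# A tiny fragment of component metadata: a class, a subclass, a relation.
triples = [
    (iri("wrc", "ModelComponent"), iri("rdf", "type"), iri("owl", "Class")),
    (iri("wrc", "HydrologicComponent"), iri("rdfs", "subClassOf"), iri("wrc", "ModelComponent")),
    (iri("wrc", "exchangesDataWith"), iri("rdf", "type"), iri("owl", "ObjectProperty")),
]

def to_turtle(triples):
    """Serialize prefix declarations plus triples as Turtle."""
    lines = ["@prefix %s: <%s> ." % (p, ns) for p, ns in PREFIXES.items()]
    lines += ["%s %s %s ." % (qname(s), qname(p), qname(o)) for s, p, o in triples]
    return "\n".join(lines)

ttl = to_turtle(triples)
```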

  14. Optimal Robust Matching of Engine Models to Test Data

    DTIC Science & Technology

    2009-02-28

    ... Monte Carlo process; Figure 7: Flowchart of SVD Calculations; Figure 8: Schematic Diagram of NPSS Engine Model Components; Figure 9: PW2037 ... Numerical Propulsion System Simulation (NPSS). NPSS is an object-oriented modeling environment widely used throughout industry and the USAF. With NPSS, the engine is modeled from components, and "modifiers" are available for adjusting the component representations. The scripting language in NPSS allowed for easy implementation of each solution.

  15. Progress in modeling and simulation.

    PubMed

    Kindler, E

    1998-01-01

    For the modeling of systems, computers are used more and more, while the other "media" (including the human intellect) that carry models are being abandoned. For the modeling of knowledge, i.e. of more or less general concepts (possibly used to model systems composed of instances of such concepts), object-oriented programming is nowadays widely used. For the modeling of processes existing and developing in time, computer simulation is used, the results of which are often presented by means of animation (graphical pictures moving and changing in time). Unfortunately, object-oriented programming tools are commonly not designed to be of great use for simulation, while programming tools for simulation do not enable their users to apply the advantages of object-oriented programming. Nevertheless, there are exceptions that enable general concepts represented in a computer to be used for constructing simulation models and for modifying them easily. They are described in the present paper, together with precise definitions of modeling, simulation and object-oriented programming (including cases that do not satisfy the definitions but risk introducing misunderstanding), and an outline of their applications and further development. Given that computing systems are being introduced as control components into a large spectrum of (technological, social and biological) systems, attention is directed to models of systems that themselves contain modeling components.
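    The combination the paper argues for, object-oriented concepts used directly as simulation building blocks, can be shown in a toy event-driven core where classes represent general concepts and their instances develop in simulated time (a minimal sketch, not any of the tools the paper surveys):

```python
import heapq

class Simulation:
    """A minimal event-driven simulation core holding (time, seq, action) events."""
    def __init__(self):
        self.now = 0.0
        self._events = []
        self._seq = 0  # tie-breaker so simultaneous events stay ordered

    def schedule(self, delay, action):
        heapq.heappush(self._events, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        while self._events and self._events[0][0] <= until:
            self.now, _, action = heapq.heappop(self._events)
            action()

class Counter:
    """A general concept as a class: instances hold state, methods define behaviour."""
    def __init__(self, sim, period):
        self.sim, self.period, self.ticks = sim, period, 0
        sim.schedule(period, self.tick)

    def tick(self):
        self.ticks += 1
        self.sim.schedule(self.period, self.tick)  # reschedule itself

sim = Simulation()
fast, slow = Counter(sim, 1.0), Counter(sim, 2.5)
sim.run(until=10.0)
```

    Each `Counter` instance reschedules itself, so model behaviour lives in the class while the `Simulation` object only orders events in time.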

  16. Object-oriented analysis and design of a health care management information system.

    PubMed

    Krol, M; Reich, D L

    1999-04-01

    We have created a prototype for a universal object-oriented model of a health care system compatible with the object-oriented approach used in version 3.0 of the HL7 standard for communication messages. A set of three models has been developed: (1) the Object Model describes the hierarchical structure of objects in a system--their identity, relationships, attributes, and operations; (2) the Dynamic Model represents the sequence of operations in time as a collection of state diagrams for object classes in the system; and (3) the Functional Model represents the transformation of data within a system by means of data flow diagrams. Within these models, we have defined major object classes of health care participants and their subclasses, associations, attributes and operators, states, and behavioral scenarios. We have also defined the major processes and subprocesses. The top-down design approach allows use, reuse, and cloning of standard components.

  17. Exploring novel objective functions for simulating muscle coactivation in the neck.

    PubMed

    Mortensen, J; Trkov, M; Merryweather, A

    2018-04-11

    Musculoskeletal modeling allows for analysis of individual muscles in various situations. However, current techniques are inadequate for realistically simulating muscle response when significant intentional coactivation is required, for example stiffening the neck or spine through muscle coactivation in preparation for perturbations or impacts. Muscle coactivation has been modeled previously in the neck and spine using optimization techniques that seek to maximize joint stiffness by maximizing total muscle activation or muscle force. These approaches have not sought to replicate human response, but rather to explore the possible effects of active muscle. Coactivation remains a challenging feature to include in musculoskeletal models, and may be improved by extracting optimization objective functions from experimental data. However, the components of such an objective function must be known before fitting to experimental data. This study explores the effect of components in several objective functions, in order to recommend components to be used for fitting to experimental data. Four novel approaches to modeling coactivation through optimization techniques are presented, two of which produce greater levels of stiffness than previous techniques. Simulations were performed using OpenSim and MATLAB cooperatively. Results show that maximizing the moment generated by a particular muscle appears analogous to maximizing joint stiffness. The approach of optimizing for maximum moment generated by individual muscles may be a good candidate for developing objective functions that accurately simulate muscle coactivation in complex joints. This new approach will be the focus of future studies with human subjects. Copyright © 2018 Elsevier Ltd. All rights reserved.
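    The idea of maximizing activation subject to a net-moment constraint can be illustrated on a toy two-muscle joint. The moment arms, forces and tolerance below are invented, and a simple grid search stands in for the study's OpenSim/MATLAB optimization:

```python
# Toy two-muscle joint; moment arms (m) and max forces (N) are invented numbers.
R_FLEX, F_FLEX = 0.03, 1000.0
R_EXT, F_EXT = 0.025, 800.0
TARGET_MOMENT = 0.0  # hold the joint still while stiffening it

def net_moment(a_flex, a_ext):
    """Net joint moment for activations in [0, 1] (flexor positive)."""
    return R_FLEX * F_FLEX * a_flex - R_EXT * F_EXT * a_ext

def max_coactivation(target, tol=0.2, steps=101):
    """Grid-search activations, maximizing total activation (a stiffness
    proxy) while keeping the net joint moment within tol of the target."""
    best = None
    for i in range(steps):
        a_flex = i / (steps - 1)
        for j in range(steps):
            a_ext = j / (steps - 1)
            if abs(net_moment(a_flex, a_ext) - target) <= tol:
                if best is None or a_flex + a_ext > sum(best):
                    best = (a_flex, a_ext)
    return best

a_flex, a_ext = max_coactivation(TARGET_MOMENT)
```

    With these numbers the extensor saturates at full activation and the flexor settles near 0.67, the co-activation level that stiffens the joint while holding it still.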

  18. Image formation of thick three-dimensional objects in differential-interference-contrast microscopy.

    PubMed

    Trattner, Sigal; Kashdan, Eugene; Feigin, Micha; Sochen, Nir

    2014-05-01

    The differential-interference-contrast (DIC) microscope is of widespread use in life sciences as it enables noninvasive visualization of transparent objects. The goal of this work is to model the image formation process of thick three-dimensional objects in DIC microscopy. The model is based on the principles of electromagnetic wave propagation and scattering. It simulates light propagation through the components of the DIC microscope to the image plane using a combined geometrical and physical optics approach and replicates the DIC image of the illuminated object. The model is evaluated by comparing simulated images of three-dimensional spherical objects with the recorded images of polystyrene microspheres. Our computer simulations confirm that the model captures the major DIC image characteristics of the simulated object, and it is sensitive to the defocusing effects.

  19. Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei

    2018-01-01

    In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, the analytic hierarchy process is utilized to construct an evaluation index model of the low-voltage distribution network. Based on principal component analysis and the characteristically logarithmic distribution of the index data, a logarithmic centralization method is adopted to improve the principal component analysis algorithm. The algorithm can decorrelate and reduce the dimensions of the evaluation model, and the comprehensive score has a better degree of dispersion. Because the comprehensive scores of the courts are concentrated, a clustering method is adopted to analyse them, and a stratified evaluation of the courts is thereby realized. An example is given to verify the objectivity and scientificity of the evaluation method.
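    The log-centralization step can be sketched as: take logarithms (assuming strictly positive index data), centre each column, then proceed with ordinary PCA. Here the leading principal axis is found by power iteration on the covariance matrix (a stdlib-only sketch, not the authors' algorithm):

```python
import math

def log_center(data):
    """Log-transform (data must be positive), then centre each column."""
    logged = [[math.log(v) for v in row] for row in data]
    n, d = len(logged), len(logged[0])
    means = [sum(row[j] for row in logged) / n for j in range(d)]
    return [[row[j] - means[j] for j in range(d)] for row in logged]

def first_principal_axis(data, iters=200):
    """Leading eigenvector of the covariance matrix via power iteration."""
    x = log_center(data)
    n, d = len(x), len(x[0])
    cov = [[sum(x[k][i] * x[k][j] for k in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Two perfectly co-varying, log-normally spread indices.
data = [[1.0, 1.0], [10.0, 10.0], [100.0, 100.0], [2.0, 2.0]]
axis = first_principal_axis(data)
```

    Because the two columns co-vary perfectly after the log transform, the leading axis comes out along the diagonal, and projecting onto it yields the decorrelated comprehensive score.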

  20. Combining virtual reality and multimedia techniques for effective maintenance training

    NASA Astrophysics Data System (ADS)

    McLin, David M.; Chung, James C.

    1996-02-01

    This paper describes a virtual reality (VR) system developed for use as part of an integrated, low-cost, stand-alone, multimedia trainer. The trainer is used to train National Guard personnel in maintenance and trouble-shooting tasks for the M1A1 Abrams tank, the M2A2 Bradley fighting vehicle and the TOW II missile system. The VR system features a modular, extensible, object-oriented design which consists of a training monitor component, a VR run time component, a model loader component, and a set of domain-specific object behaviors which mimic the behavior of objects encountered in the actual vehicles. The VR system is built from a combination of off-the-shelf commercial software and custom software developed at RTI.

  1. Framework for a clinical information system.

    PubMed

    Van de Velde, R

    2000-01-01

    The current status of our work towards the design and implementation of a reference architecture for a Clinical Information System is presented. This architecture has been developed and implemented based on components following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology featuring centralised and departmental clinical information systems as the back-end store for all clinical data are used. Servers located in the 'middle' tier apply the clinical (business) model and application rules to communicate with so-called 'thin client' workstations. The main characteristics are the focus on modelling and reuse of both data and business logic as there is a shift away from data and functional modelling towards object modelling. Scalability as well as adaptability to constantly changing requirements via component driven computing are the main reasons for that approach.

  2. Pricing end-of-life components

    NASA Astrophysics Data System (ADS)

    Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.

    2005-11-01

    The main objective of a product recovery facility (PRF) is to disassemble end-of-life (EOL) products and sell the reclaimed components for reuse and the recovered materials in second-hand markets. Variability in the inflow of EOL products and fluctuation in demand for reusable components contribute to the volatility in inventory levels. To stay profitable, PRFs ought to manage their inventory by regulating prices appropriately to minimize holding costs. This work presents two deterministic pricing models for a PRF bounded by environmental regulations. In the first model, the demand is price dependent; in the second, the demand is both price and time dependent. The models are valid for a single component with no inventory replenishment during the selling horizon. Numerical examples are presented to illustrate the models.
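    The paper's models are not reproduced here, but the flavour of a deterministic, price-dependent-demand model can be sketched with invented numbers: linear demand, a fixed stock of reclaimed components, and a holding cost charged on units left unsold at the end of the horizon.

```python
# Invented numbers: linear demand, fixed stock of reclaimed components,
# holding cost charged on components unsold at the end of the horizon.
A, B = 120.0, 2.0   # demand intercept and price sensitivity
STOCK = 80.0        # reclaimed components on hand
HOLD = 1.5          # holding cost per unsold component

def demand(price):
    return max(0.0, A - B * price)

def profit(price):
    sold = min(STOCK, demand(price))
    unsold = STOCK - sold
    return price * sold - HOLD * unsold

def best_price(lo=0.0, hi=60.0, steps=601):
    """Grid search for the profit-maximizing constant price."""
    prices = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return max(prices, key=profit)

p_star = best_price()
```

    Below the stock-clearing price the whole stock sells and profit rises linearly with price; above it, the quadratic revenue trade-off takes over, so the grid search lands near the vertex of that parabola.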

  3. Software for Guidance and Control: Guidance and Control Panel Symposium (52nd) Held in Helexpo, Thessaloniki, Greece on 7-10 May 1991 (Les Logiciels de Guidage et de Pilotage)

    DTIC Science & Technology

    1991-09-01

    ... in track ... Lists, Essential Model Objects, Implementational Model Objects, Test specifications, etc. ... Because each object in the model is tied to the ... structure ... hydraulic and even human components, it was not immediately clear how the software techniques could be adapted. Moreover, it was also felt ... the implementation is anticipated as an analogue, digital, mechanical, or hydraulic one ... as would be the case when one of the monitoring tolerances within the system ...

  4. Importance of Matching Physical Friction, Hardness, and Texture in Creating Realistic Haptic Virtual Surfaces.

    PubMed

    Culbertson, Heather; Kuchenbecker, Katherine J

    2017-01-01

    Interacting with physical objects through a tool elicits tactile and kinesthetic sensations that comprise your haptic impression of the object. These cues, however, are largely missing from interactions with virtual objects, yielding an unrealistic user experience. This article evaluates the realism of virtual surfaces rendered using haptic models constructed from data recorded during interactions with real surfaces. The models include three components: surface friction, tapping transients, and texture vibrations. We render the virtual surfaces on a SensAble Phantom Omni haptic interface augmented with a Tactile Labs Haptuator for vibration output. We conducted a human-subject study to assess the realism of these virtual surfaces and the importance of the three model components. Following a perceptual discrepancy paradigm, subjects compared each of 15 real surfaces to a full rendering of the same surface plus versions missing each model component. The realism improvement achieved by including friction, tapping, or texture in the rendering was found to directly relate to the intensity of the surface's property in that domain (slipperiness, hardness, or roughness). A subsequent analysis of forces and vibrations measured during interactions with virtual surfaces indicated that the Omni's inherent mechanical properties corrupted the user's haptic experience, decreasing realism of the virtual surface.

  5. Rapid Prototyping of an Aircraft Model in an Object-Oriented Simulation

    NASA Technical Reports Server (NTRS)

    Kenney, P. Sean

    2003-01-01

    A team was created to participate in the Mars Scout Opportunity. Trade studies determined that an aircraft provided the best opportunity to complete the science objectives of the team. A high fidelity six degree of freedom flight simulation was required to provide credible evidence that the aircraft design fulfilled mission objectives and to support the aircraft design process by providing performance evaluations. The team created the simulation using the Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. A rapid prototyping approach was necessary because the team had only three months to both develop the aircraft simulation model and evaluate aircraft performance as the design and mission parameters matured. The design of LaSRS++ enabled rapid-prototyping in several ways. First, the framework allowed component models to be designed, implemented, unit-tested, and integrated quickly. Next, the framework provides a highly reusable infrastructure that allowed developers to maximize code reuse while concentrating on aircraft and mission specific features. Finally, the framework reduces risk by providing reusable components that allow developers to build a quality product with a compressed testing cycle that relies heavily on unit testing of new components.

  6. Feature-based component model for design of embedded systems

    NASA Astrophysics Data System (ADS)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, which combines software's flexibility with hardware's real-time performance. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., the Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variations of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  7. Creation of system of computer-aided design for technological objects

    NASA Astrophysics Data System (ADS)

    Zubkova, T. M.; Tokareva, M. A.; Sultanov, N. Z.

    2018-05-01

    Due to competition in the process-equipment market, production should be flexible, retooling for various product configurations, raw materials and productivity levels depending on current market needs. This is not possible without CAD (computer-aided design). The formation of a CAD system begins with planning. Synthesizing, analyzing, evaluating and converting operations, as well as visualization and decision-making operations, can be automated. Based on a formal description of the design procedures, the design route is constructed in the form of an oriented graph. The decomposition of the design process, represented by the formalized description of the design procedures, makes it possible to make an informed choice of the CAD components for the task at hand. The object-oriented approach allows the CAD system to be considered an independent system whose properties are inherited from its components. The first step determines the range of tasks to be performed by the system and a set of components for their implementation; the second is the configuration of the selected components. The interaction between the selected components is carried out using the CALS standards. The chosen CAD/CAE-oriented approach allows the creation of a single model, which is stored in the database of the subject area. Each integration stage is implemented as a separate functional block. The transformation of the CAD model into the internal representation model is realized by a block that searches for the geometric parameters of the technological machine, in which an XML model of the construction is obtained on the basis of the feature method from the theory of image recognition. The configuration of integrated components is divided into three consecutive steps: configuring tasks, components and interfaces. The configuration of the components is realized using the theory of "soft computing" with the Mamdani fuzzy inference algorithm.
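    The Mamdani fuzzy inference mentioned at the end can be shown in miniature: triangular memberships, rule strengths clipping the output sets, max aggregation, and a discretized centroid defuzzification. The rule base and universes below are invented for illustration, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Invented one-input rule base:
#   IF load IS low  THEN clearance IS small
#   IF load IS high THEN clearance IS large
def mamdani(load):
    mu_low = tri(load, 0.0, 25.0, 50.0)
    mu_high = tri(load, 50.0, 75.0, 100.0)
    num = den = 0.0
    for i in range(1001):                  # discretized output universe 0..10
        y = i * 0.01
        agg = max(min(mu_low, tri(y, 0.0, 2.0, 4.0)),    # clip each rule's output,
                  min(mu_high, tri(y, 6.0, 8.0, 10.0)))  # then max-aggregate
        num += y * agg
        den += agg
    return num / den if den else None      # centroid defuzzification
```

    A load well inside the "low" set defuzzifies to the centroid of the "small" output triangle; a load of exactly 50 fires neither rule, exposing the gap in this toy rule base.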

  8. Blind source separation based on time-frequency morphological characteristics for rigid acoustic scattering by underwater objects

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Li, Xiukun

    2016-06-01

    Separation of the components of rigid acoustic scattering by underwater objects is essential in obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with time-frequency distribution is deduced. Using a morphological filter, the different characteristics observed in a Wigner-Ville Distribution (WVD) for single auto-terms and cross-terms can be exploited to remove any cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time-delay parameter, in order to analyze the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used in separating the rigid scattering structure of underwater objects.

  9. The Unified Plant Growth Model (UPGM): software framework overview and model application

    USDA-ARS?s Scientific Manuscript database

    Since the Environmental Policy Integrated Climate (EPIC) model was developed in 1989, the EPIC plant growth component has been incorporated into other erosion and crop management models (e.g., WEPS, WEPP, SWAT, ALMANAC, and APEX) and modified to meet model developer research objectives. This has re...

  10. Multicomponent pre-stack seismic waveform inversion in transversely isotropic media using a non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Padhi, Amit; Mallick, Subhashis

    2014-03-01

    Inversion of band- and offset-limited single component (P wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but adds to the complexity of the inversion algorithm because it requires simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of individual data components and sometimes with additional regularization terms reflecting their interdependence; which is then followed by a single objective optimization. Multi-objective problems, inclusive of the multicomponent seismic inversion are however non-linear. They have non-unique solutions, known as the Pareto-optimal solutions. Therefore, casting such problems as a single objective optimization provides one out of the entire set of the Pareto-optimal solutions, which in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions could be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters. 
Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure of extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not just limited to seismic inversion but it could be used to invert different data types not only requiring multiple objectives but also multiple physics to describe them.
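    The ranking at the heart of a non-dominated sorting genetic algorithm can be sketched briefly (minimization convention; a naive quadratic-time version, not a full NSGA-II implementation):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

def non_dominated_sort(points):
    """Peel off successive Pareto fronts, as in the ranking step of NSGA-II."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = pareto_front(remaining)
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Five two-objective vectors: three are mutually non-dominated.
fronts = non_dominated_sort([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```

    Casting the same problem as a weighted sum would return a single point on the first front, which is exactly the weight-induced bias the paper's vector-objective approach avoids.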

  11. Contemporaneous broadband observations of three high-redshift BL Lac objects

    DOE PAGES

    Ackerman, M.

    2016-03-20

    We have collected broadband spectral energy distributions (SEDs) of three BL Lac objects, 3FGL J0022.1-1855 (z=0.689), 3FGL J0630.9-2406 (z > ~1.239), and 3FGL J0811.2-7529 (z=0.774), detected by Fermi with relatively flat GeV spectra. By observing simultaneously in the near-IR to hard X-ray band, we can well characterize the high end of the synchrotron component of the SED. Thus, fitting the SEDs to synchro-Compton models of the dominant emission from the relativistic jet, we can constrain the underlying particle properties and predict the shape of the GeV Compton component. Standard extragalactic background light (EBL) models explain the high-energy absorption well, with poorer fits for high UV models. The fits show clear evidence for EBL absorption in the Fermi spectrum of our highest redshift source 3FGL J0630.9-2406. While synchrotron self-Compton models adequately describe the SEDs, the situation may be complicated by possible external Compton components.

  12. Automatic anatomy recognition via multiobject oriented active shape models.

    PubMed

    Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A

    2010-12-01

    This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) multiobject generalization of OASM and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three level dynamic programming algorithm, wherein the first level is at pixel level which aims to find optimal oriented boundary segments between successive landmarks, the second level is at landmark level which aims to find optimal location for the landmarks, and the third level is at the object level which aims to find optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find that pose vector (consisting of translation, rotation, and scale component) for the multiobject model that yields the smallest total boundary cost for all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of the pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number of objects and objects' distribution and size in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%. 
Typically, a recognition accuracy of > or = 90% yielded a TPVF > or = 95% and FPVF < or = 0.5%. Over the three data sets and over all tested objects, in 97% of the cases, the optimal solutions found by the proposed method constituted the true global optimum. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. More spread out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimum solutions.

  13. Polarized BRDF for coatings based on three-component assumption

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Zhu, Jingping; Wang, Kai; Xu, Rong

    2017-02-01

    A pBRDF (polarized bidirectional reflectance distribution function) model for coatings is given, based on a three-component reflection assumption, in order to improve polarized-scattering simulation capability for space objects. In this model the specular reflection is given based on microfacet theory, while the multiple reflection and volume scattering are given separately according to experimental results. The polarization of the specular reflection is derived from Fresnel's law, and both the multiple reflection and the volume scattering are assumed to be depolarized. Simulation and measurement results for two satellite coating samples, SR107 and S781, are given to validate that the pBRDF modeling accuracy can be significantly improved by the three-component model given in this paper.
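    The three-component split, a polarization-carrying specular term plus depolarized multiple-reflection and volume terms, can be illustrated numerically. The sketch below substitutes Schlick's approximation for the full Fresnel equations and uses invented component weights:

```python
def schlick_fresnel(cos_theta, n=1.5):
    """Schlick's approximation to Fresnel reflectance for refractive index n."""
    r0 = ((n - 1.0) / (n + 1.0)) ** 2   # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def three_component_reflectance(cos_theta, k_spec=0.6, k_multi=0.25, k_vol=0.15):
    """Illustrative split: a polarization-carrying specular term plus
    depolarized multiple-reflection and volume terms (weights invented,
    and the depolarized terms are taken as angle-independent here)."""
    specular = k_spec * schlick_fresnel(cos_theta)
    depolarized = k_multi + k_vol
    return specular + depolarized
```

    At normal incidence the specular term contributes only the small r0 reflectance, while toward grazing angles it climbs to dominate the total, which is why the specular component carries most of the polarization signature.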

  14. Thresholds for conservation and management: structured decision making as a conceptual framework

    USGS Publications Warehouse

    Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.

    2014-01-01

    changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), a monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.

  15. Grid Transmission Expansion Planning Model Based on Grid Vulnerability

    NASA Astrophysics Data System (ADS)

    Tang, Quan; Wang, Xi; Li, Ting; Zhang, Quanming; Zhang, Hongli; Li, Huaqiang

    2018-03-01

    Based on grid vulnerability and uniformity theory, a global network structure and state vulnerability factor model is proposed to measure different grid models, and a multi-objective power grid planning model is established that considers global power network vulnerability, economy, and grid security constraints. An improved chaos crossover and mutation genetic algorithm is used to search for the optimal plan. Because the objectives in this multi-objective optimization problem have non-uniform dimensions, appropriate weights are not easily assigned; principal component analysis (PCA) is therefore used for a comprehensive assessment of the population at every generation, making the assessment results more objective and credible. The feasibility and effectiveness of the proposed model are validated by simulation results for the Garver 6-bus and Garver 18-bus systems.
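A minimal sketch of the PCA-based comprehensive assessment (illustrative NumPy code; the paper's actual objective set and weighting details are not reproduced): standardize the objective values of each candidate plan to remove unit effects, project onto principal components, and weight component scores by explained variance.

```python
import numpy as np

def pca_score(X):
    """X: (n_plans, n_objectives) raw objective values with mixed units.
    Returns one dimensionless composite score per plan."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)         # standardize away units
    vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(vals)[::-1]                   # descending variance
    vals, vecs = vals[order], vecs[:, order]
    weights = vals / vals.sum()                      # explained-variance weights
    return (Z @ vecs) @ weights

scores = pca_score(np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]))
```

Because the weights come from the data's own variance structure rather than being hand-chosen, the ranking avoids the subjective weight assignment the abstract describes.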

  16. Assembling Components with Aspect-Oriented Modeling/Specification

    DTIC Science & Technology

    2003-10-01

    [Abstract text garbled in extraction; only fragments are recoverable. Footnotes: 2 COM: Component Object Model; 3 EJB: Enterprise Java Beans; 4 CCM: CORBA Component Model. Figure 2: Connector as a Container. The fragment discusses a connector structure, in the form of a framework, that assembles components in EJB or CCM, or packages them (e.g., via a manifest file) as JavaBeans, and breaks off noting that such a connector in some cases plays the role as…]

  17. Reasoning, Resilience, & Responsibility

    ERIC Educational Resources Information Center

    Cogan, Jeanine C.; Subotnik, Rena F.

    2006-01-01

    The Other 3Rs Project began with an investigation into the most important psychological components of academic success. The research pointed to reasoning, resilience, and responsibility. The objective of the project was to integrate these components into a useful problem solving model that could, with practice and guidance, be applied both inside…

  18. Modeling Creep Effects within SiC/SiC Turbine Components

    NASA Technical Reports Server (NTRS)

    DiCarlo, J. A.; Lang, J.

    2008-01-01

    Anticipating the implementation of advanced SiC/SiC ceramic composites into the hot section components of future gas turbine engines, the primary objective of this on-going study is to develop physics-based analytical and finite-element modeling tools to predict the effects of constituent creep on SiC/SiC component service life. A second objective is to understand how to possibly select and manipulate constituent materials, processes, and geometries in order to minimize these effects. In initial studies aimed at SiC/SiC components experiencing through-thickness stress gradients, creep models were developed that allowed an understanding of detrimental residual stress effects that can develop globally within the component walls. It was assumed that the SiC/SiC composites behaved as isotropic visco-elastic materials with temperature-dependent creep behavior as experimentally measured in-plane in the fiber direction of advanced thin-walled 2D SiC/SiC panels. The creep models and their key results are discussed assuming state-of-the-art SiC/SiC materials within a simple cylindrical thin-walled tubular structure, which is currently being employed to model creep-related effects for turbine airfoil leading edges subjected to through-thickness thermal stress gradients. Improvements in the creep models are also presented which focus on constituent behavior with more realistic non-linear stress dependencies in order to predict such key creep-related SiC/SiC properties as time-dependent matrix stress, constituent creep and content effects on composite creep rates and rupture times, and stresses on fiber and matrix during and after creep.

  19. On 3-D inelastic analysis methods for hot section components. Volume 1: Special finite element models

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.

    1988-01-01

    This annual status report presents the results of work performed during the fourth year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes permitting more accurate and efficient 3-D analysis of selected hot section components, i.e., combustor liners, turbine blades and turbine vanes. The computer codes embody a progression of math models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. Volume 1 of this report discusses the special finite element models developed during the fourth year of the contract.

  20. A neural network simulating human reach-grasp coordination by continuous updating of vector positioning commands.

    PubMed

    Ulloa, Antonio; Bullock, Daniel

    2003-10-01

    We developed a neural network model to simulate temporal coordination of human reaching and grasping under variable initial grip apertures and perturbations of object size and object location/orientation. The proposed model computes reach-grasp trajectories by continuously updating vector positioning commands. The model hypotheses are (1) hand/wrist transport, grip aperture, and hand orientation control modules are coupled by a gating signal that fosters synchronous completion of the three sub-goals. (2) Coupling from transport and orientation velocities to aperture control causes maximum grip apertures that scale with these velocities and exceed object size. (3) Part of the aperture trajectory is attributable to an aperture-reducing passive biomechanical effect that is stronger for larger apertures. (4) Discrepancies between internal representations of targets partially inhibit the gating signal, leading to movement time increases that compensate for perturbations. Simulations of the model replicate key features of human reach-grasp kinematics observed under three experimental protocols. Our results indicate that no precomputation of component movement times is necessary for online temporal coordination of the components of reaching and grasping.

  1. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed, and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates PCA components with sparse loadings, used in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
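The Hotelling T2 step can be sketched as follows (illustrative code; the framework's actual metric set and thresholds are not reproduced): metrics measured on a reference population of acceptable scanners define a mean and covariance, and a system under test is scored by its Mahalanobis-type distance from that baseline.

```python
import numpy as np

def hotelling_t2(reference, x):
    """reference: (n_systems, n_metrics) baseline measurements;
    x: metric vector of the system under test."""
    mu = reference.mean(axis=0)
    S = np.cov(reference, rowvar=False)        # sample covariance (ddof=1)
    d = x - mu
    return float(d @ np.linalg.solve(S, d))    # large value -> likely fault

baseline = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
t2 = hotelling_t2(baseline, np.array([0.5, 2.5]))
```

In practice the statistic is compared against an F-distribution quantile to set a pass/fail threshold.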

  2. From scenarios to domain models: processes and representations

    NASA Astrophysics Data System (ADS)

    Haddock, Gail; Harbison, Karan

    1994-03-01

    The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.

  3. 3-D inelastic analysis methods for hot section components. Volume 2: Advanced special functions models

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Banerjee, P. K.

    1987-01-01

    This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.

  4. Feedback loops and temporal misalignment in component-based hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.

    2011-12-01

    In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
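The coupling pattern can be reduced to a toy sketch (one well-mixed cell per component and illustrative rate constants, not the paper's OpenMI configuration): each component exposes its boundary concentration, and a driver exchanges the shared boundary condition every step before advancing both components.

```python
class Compartment:
    """One model component with a single well-mixed cell (a simplification)."""
    def __init__(self, conc, volume):
        self.conc, self.volume = conc, volume
    def boundary_value(self):
        return self.conc
    def update(self, neighbor_conc, k, dt):
        flux = k * (neighbor_conc - self.conc)      # Fick-like interface exchange
        self.conc += flux * dt / self.volume

water, sediment = Compartment(1.0, 1.0), Compartment(0.0, 1.0)
for _ in range(10000):
    wb, sb = water.boundary_value(), sediment.boundary_value()  # swap first,
    water.update(sb, k=0.1, dt=0.01)                            # then step both
    sediment.update(wb, k=0.1, dt=0.01)
```

Exchanging boundary values before stepping keeps the feedback loop mass-conservative; updating one component against the other's already-advanced state is the kind of inconsistency that temporal misalignment introduces.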

  5. SAND: an automated VLBI imaging and analysing pipeline - I. Stripping component trajectories

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Collioud, A.; Charlot, P.

    2018-02-01

    We present our implementation of an automated very long baseline interferometry (VLBI) data-reduction pipeline that is dedicated to interferometric data imaging and analysis. The pipeline can handle massive VLBI data efficiently, which makes it an appropriate tool to investigate multi-epoch multiband VLBI data. Compared to traditional manual data reduction, our pipeline provides more objective results as less human interference is involved. The source extraction is carried out in the image plane, while deconvolution and model fitting are performed in both the image plane and the uv plane for parallel comparison. The output from the pipeline includes catalogues of CLEANed images and reconstructed models, polarization maps, proper motion estimates, core light curves and multiband spectra. We have developed a regression STRIP algorithm to automatically detect linear or non-linear patterns in the jet component trajectories. This algorithm offers an objective method to match jet components at different epochs and to determine their proper motions.
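A stand-in for the linear case of the trajectory test (ordinary least squares; the actual STRIP regression details are in the paper): fit a straight line to a component's positions over epochs and accept the track as linear if the RMS residual is small.

```python
def linear_track(t, x, tol=0.05):
    """Least-squares line fit; returns (is the track linear?, slope)."""
    n = len(t)
    tm, xm = sum(t) / n, sum(x) / n
    slope = (sum((ti - tm) * (xi - xm) for ti, xi in zip(t, x))
             / sum((ti - tm) ** 2 for ti in t))
    intercept = xm - slope * tm
    rms = (sum((xi - (slope * ti + intercept)) ** 2
               for ti, xi in zip(t, x)) / n) ** 0.5
    return rms < tol, slope

t = [0.0, 1.0, 2.0, 3.0]           # epochs (illustrative units)
ok, mu = linear_track(t, [1.0, 1.5, 2.0, 2.5])   # positions on x = 0.5 t + 1
```

For an accepted linear track, the fitted slope serves as the proper-motion estimate for that component.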

  6. 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.

    1985-01-01

    The objective is to develop analytical tools capable of economically evaluating the cyclic time-dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time-dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A&M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures, with all material properties and constitutive models being temperature dependent.
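For reference, the power law creep equation used by the simple model is commonly written with an Arrhenius temperature factor; the coefficients below are illustrative placeholders, not values from the report.

```python
import math

def creep_rate(stress, temp_k, A=1e-10, n=4.0, Q=300e3, R=8.314):
    """Steady-state power-law creep: eps_dot = A * sigma**n * exp(-Q / (R*T)).
    A, n, Q are material constants (placeholder values here)."""
    return A * stress ** n * math.exp(-Q / (R * temp_k))
```

Doubling the stress multiplies the rate by 2**n, and the exponential term makes the rate highly sensitive to temperature, which is why both mechanical and thermal stresses matter in these analyses.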

  7. On 3-D inelastic analysis methods for hot section components. Volume 1: Special finite element models

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.

    1987-01-01

    This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes that permit more accurate and efficient three-dimensional analysis of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. This report is presented in two volumes. Volume 1 describes effort performed under Task 4B, Special Finite Element Special Function Models, while Volume 2 concentrates on Task 4C, Advanced Special Functions Models.

  8. Commercial Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  9. Mental visualization of objects from cross-sectional images

    PubMed Central

    Wu, Bing; Klatzky, Roberta L.; Stetten, George D.

    2011-01-01

    We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386

  10. OCAM - A CELSS modeling tool: Description and results. [Object-oriented Controlled Ecological Life Support System Analysis and Modeling

    NASA Technical Reports Server (NTRS)

    Drysdale, Alan; Thomas, Mark; Fresa, Mark; Wheeler, Ray

    1992-01-01

    Controlled Ecological Life Support System (CELSS) technology is critical to the Space Exploration Initiative. NASA's Kennedy Space Center has been performing CELSS research for several years, developing data related to CELSS design. We have developed OCAM (Object-oriented CELSS Analysis and Modeling), a CELSS modeling tool, and have used this tool to evaluate CELSS concepts, using this data. In using OCAM, a CELSS is broken down into components, and each component is modeled as a combination of containers, converters, and gates which store, process, and exchange carbon, hydrogen, and oxygen on a daily basis. Multiple crops and plant types can be simulated. Resource recovery options modeled include combustion, leaching, enzyme treatment, aerobic or anaerobic digestion, and mushroom and fish growth. Results include printouts and time-history graphs of total system mass, biomass, carbon dioxide, and oxygen quantities; energy consumption; and manpower requirements. The contributions of mass, energy, and manpower to system cost have been analyzed to compare configurations and determine appropriate research directions.
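The container/converter/gate decomposition can be sketched in a few lines (the names, rates, and stoichiometry below are illustrative, not OCAM's actual data):

```python
class Container:
    """Stores element masses for one CELSS component."""
    def __init__(self, **masses):                # e.g. co2=2.0 (kg)
        self.masses = dict(masses)
    def add(self, element, amount):
        self.masses[element] = self.masses.get(element, 0.0) + amount

def gate(src, dst, element, amount):
    """Move up to `amount` of an element between containers; returns mass moved."""
    moved = min(amount, src.masses.get(element, 0.0))
    src.add(element, -moved)
    dst.add(element, moved)
    return moved

def converter(container, consume, produce):
    """One daily conversion step, e.g. CO2 + water -> biomass + O2
    (stoichiometry assumed for illustration)."""
    if all(container.masses.get(e, 0.0) >= a for e, a in consume.items()):
        for e, a in consume.items():
            container.add(e, -a)
        for e, a in produce.items():
            container.add(e, a)

crops = Container(co2=2.0, water=5.0)
converter(crops, consume={"co2": 1.0, "water": 1.0},
          produce={"biomass": 0.6, "o2": 1.4})
```

A daily simulation loop would invoke the gates and converters of every component in turn, accumulating the mass, energy, and manpower totals that OCAM reports.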

  11. A multi-objective genetic algorithm for a mixed-model assembly U-line balancing type-I problem considering human-related issues, training, and learning

    NASA Astrophysics Data System (ADS)

    Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed

    2016-12-01

    Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented considering balancing a mixed-model U-line and human-related issues, simultaneously. The objective function consists of two separate components. The first part of the objective function is related to balance problem. In this part, objective functions are minimizing the cycle time, minimizing the number of workstations, and maximizing the line efficiencies. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.

  12. Impact of a variational objective analysis scheme on a regional area numerical model: The Italian Air Force Weather Service experience

    NASA Astrophysics Data System (ADS)

    Bonavita, M.; Torrisi, L.

    2005-03-01

    A new data assimilation system has been designed and implemented at the National Center for Aeronautic Meteorology and Climatology of the Italian Air Force (CNMCA) in order to improve its operational numerical weather prediction capabilities and provide more accurate guidance to operational forecasters. The system, which is undergoing testing before operational use, is based on an “observation space” version of the 3D-VAR method for the objective analysis component, and on the High Resolution Regional Model (HRM) of the Deutscher Wetterdienst (DWD) for the prognostic component. Notable features of the system include a completely parallel (MPI+OMP) implementation of the solution of analysis equations by a preconditioned conjugate gradient descent method; correlation functions in spherical geometry with thermal wind constraint between mass and wind field; derivation of the objective analysis parameters from a statistical analysis of the innovation increments.
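The analysis equations above are solved by a preconditioned conjugate gradient descent; a minimal serial sketch with a diagonal (Jacobi) preconditioner follows (the operational system's preconditioner and MPI+OMP decomposition are not reproduced here).

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for SPD A, Jacobi preconditioner."""
    M_inv = 1.0 / np.diag(A)                  # diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])        # toy SPD system
x = pcg(A, np.array([1.0, 2.0]))
```

In an observation-space 3D-VAR formulation, A would be the innovation covariance matrix, which is symmetric positive definite, the property CG requires.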

  13. Quality Evaluation of Raw Moutan Cortex Using the AHP and Gray Correlation-TOPSIS Method

    PubMed Central

    Zhou, Sujuan; Liu, Bo; Meng, Jiang

    2017-01-01

    Background: Raw Moutan cortex (RMC) is an important Chinese herbal medicine. Comprehensive and objective quality evaluation of Chinese herbal medicine has been one of the most important issues in modern herbs development. Objective: To evaluate and compare the quality of RMC using the weighted gray correlation-TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) method. Materials and Methods: The percentage composition of gallic acid, catechin, oxypaeoniflorin, paeoniflorin, quercetin, benzoylpaeoniflorin, and paeonol in different batches of RMC was determined, and MATLAB programming was then used to construct the gray correlation-TOPSIS assessment model for quality evaluation of RMC. Results: The quality evaluation results of model evaluation and objective evaluation were consistent, reliable, and stable. Conclusion: The gray correlation-TOPSIS model can be well applied to the quality evaluation of traditional Chinese medicine with multiple components and has broad prospects in application. SUMMARY The experiment constructs a model to evaluate the quality of RMC using the weighted gray correlation-TOPSIS method. Results show the model is reliable and provides a feasible way of evaluating the quality of traditional Chinese medicine with multiple components. PMID:28839384
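The TOPSIS half of the assessment can be sketched as below (benefit-type criteria and example weights assumed; the AHP weighting and gray-correlation parts of the paper's model are not reproduced):

```python
import numpy as np

def topsis(X, weights):
    """X: (n_batches, n_components) measured contents, all benefit-type.
    Returns relative closeness to the ideal solution (1 = best, 0 = worst)."""
    V = X / np.linalg.norm(X, axis=0) * weights    # normalize, then weight
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)      # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)       # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

scores = topsis(np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]),
                np.array([0.5, 0.5]))
```

Batches are then ranked by closeness score, giving the objective comparison the abstract describes.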

  14. Diffuse Reflectance Spectroscopy of Hidden Objects. Part II: Recovery of a Target Spectrum.

    PubMed

    Pomerantsev, Alexey L; Rodionova, Oxana Ye; Skvortsov, Alexej N

    2017-08-01

    In this study, we consider the reconstruction of a diffuse reflectance near-infrared spectrum of an object (target spectrum) in case the object is covered by an interfering absorbing and scattering layer. Recovery is performed using a new empirical method, which was developed in our previous study. We focus on a system, which consists of several layers of polyethylene (PE) film and underlayer objects with different spectral features. The spectral contribution of the interfering layer is modeled by a three-component two-parameter multivariate curve resolution (MCR) model, which was built and calibrated using spectrally flat objects. We show that this model is applicable to real objects with non-uniform spectra. Ultimately, the target spectrum can be reconstructed from a single spectrum of the covered target. With calculation methods, we are able to recover quite accurately the spectrum of a target even when the object is covered by 0.7 mm of PE.

  15. The application of the unified modeling language in object-oriented analysis of healthcare information systems.

    PubMed

    Aggarwal, Vinod

    2002-10-01

    This paper concerns itself with the beneficial effects of the Unified Modeling Language (UML), a nonproprietary object modeling standard, in specifying, visualizing, constructing, documenting, and communicating the model of a healthcare information system from the user's perspective. The author outlines the process of object-oriented analysis (OOA) using the UML and illustrates this with healthcare examples to demonstrate the practicality of application of the UML by healthcare personnel to real-world information system problems. The UML will accelerate advanced uses of object-orientation such as reuse technology, resulting in significantly higher software productivity. The UML is also applicable in the context of a component paradigm that promises to enhance the capabilities of healthcare information systems and simplify their management and maintenance.

  16. Towards an Automated Full-Turbofan Engine Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Turner, Mark G.; Norris, Andrew; Veres, Joseph P.

    2003-01-01

    The objective of this study was to demonstrate the high-fidelity numerical simulation of a modern high-bypass turbofan engine. The simulation utilizes the Numerical Propulsion System Simulation (NPSS) thermodynamic cycle modeling system coupled to a high-fidelity full-engine model represented by a set of coupled three-dimensional computational fluid dynamic (CFD) component models. Boundary conditions from the balanced, steady-state cycle model are used to define component boundary conditions in the full-engine model. Operating characteristics of the three-dimensional component models are integrated into the cycle model via partial performance maps generated automatically from the CFD flow solutions using one-dimensional meanline turbomachinery programs. This paper reports on the progress made towards the full-engine simulation of the GE90-94B engine, highlighting the generation of the high-pressure compressor partial performance map. The ongoing work will provide a system to evaluate the steady and unsteady aerodynamic and mechanical interactions between engine components at design and off-design operating conditions.

  17. Cutting the Composite Gordian Knot: Untangling the AGN-Starburst Threads in Single Aperture Spectra

    NASA Astrophysics Data System (ADS)

    Flury, Sophia; Moran, Edward C.

    2018-01-01

    Standard emission line diagnostics are able to segregate star-forming galaxies and Seyfert nuclei, and it is often assumed that ambiguous emission-line galaxies falling between these two populations are “composite” objects exhibiting both types of photoionization. We have developed a method that predicts the most probable H II and AGN components that could plausibly explain the “composite” classed objects solely on the basis of their SDSS spectra. The majority of our analysis is driven by empirical relationships revealed by SDSS data rather than theoretical models founded in assumptions. To verify our method, we have compared the predictions of our model with publicly released IFU data from the S7 survey and find that composite objects are not in fact a simple linear combination of the two types of emission. The data reveal a key component in the mixing sequence: geometric dilution of the ionizing radiation which powers the NLR of the active nucleus. When accounting for this effect, our model is successful when applied to several composite-class galaxies. Some objects, however, appear to be at variance with the predicted results, suggesting they may not be powered by black hole accretion.

  18. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    USGS Publications Warehouse

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

    NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with a simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development, by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that NCWin can easily extend the functions of some current GIS software and Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.
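The three spatial formats handled by the conversion component differ only in interleaving order; a pure-Python sketch of one direction (BSQ to BIP, with an illustrative toy raster) shows the reordering:

```python
def bsq_to_bip(flat, bands, rows, cols):
    """Reorder a band-sequential raster (all of band 0, then band 1, ...)
    into band-interleaved-by-pixel order (all bands of pixel 0, then pixel 1, ...)."""
    out = []
    for r in range(rows):
        for c in range(cols):
            for b in range(bands):
                out.append(flat[b * rows * cols + r * cols + c])
    return out

# 2 bands x 1 row x 2 cols; BSQ order is [b0p0, b0p1, b1p0, b1p1]
bip = bsq_to_bip([10, 11, 20, 21], bands=2, rows=1, cols=2)
```

BIL (band-interleaved-by-line) sits between the two layouts: the middle loop runs over bands and the inner loop over columns.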

  19. EChem++--an object-oriented problem solving environment for electrochemistry. 2. The kinetic facilities of Ecco--a compiler for (electro-)chemistry.

    PubMed

    Ludwig, Kai; Speiser, Bernd

    2004-01-01

    We describe Ecco, a modeling software component implemented in the C++ programming language. It assists in the formulation of physicochemical systems including, in particular, electrochemical processes within general geometries. Ecco's kinetic part translates any user-defined reaction mechanism into an object-oriented representation and generates the corresponding mathematical model equations. The input language, its grammar, the object-oriented design of Ecco based on design patterns, and its integration into the open source software project EChem++ are discussed. Application strategies are given.
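
    What a kinetic compiler of this kind does can be illustrated with a toy sketch: parse a textual mechanism and generate mass-action rate equations. The "A + B -> C" syntax and the data structures below are illustrative assumptions and do not reflect Ecco's actual grammar or class design.

```python
# Toy kinetic compiler: translate a reaction mechanism into d[species]/dt
# terms under mass-action kinetics.

def parse_reaction(line):
    """Parse 'A + B -> C' into (reactants, products) species lists."""
    lhs, rhs = line.split("->")
    reactants = [s.strip() for s in lhs.split("+")]
    products = [s.strip() for s in rhs.split("+")]
    return reactants, products

def rate_equations(mechanism, rate_constants, conc):
    """Evaluate d[species]/dt for each species at the given concentrations."""
    dcdt = {s: 0.0 for s in conc}
    for line, k in zip(mechanism, rate_constants):
        reactants, products = parse_reaction(line)
        rate = k
        for s in reactants:      # mass-action: rate = k * product of reactant conc.
            rate *= conc[s]
        for s in reactants:
            dcdt[s] -= rate
        for s in products:
            dcdt[s] += rate
    return dcdt
```

    For "A + B -> C" with k = 2.0, [A] = 1.0, [B] = 0.5, the single reaction rate is 1.0, consumed from A and B and produced into C.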

  20. Marketing and Distribution: Better Learning Experiences through Proper Coordination.

    ERIC Educational Resources Information Center

    Coakley, Carroll B.

    1979-01-01

    Presents a cooperative education model that correlates the student's occupational objective with his/her training station. Components of the model discussed are (1) the task analysis, (2) the job description, (3) training plans, and (4) student evaluation. (LRA)

  1. A Model of Objective Weighting for EIA.

    ERIC Educational Resources Information Center

    Ying, Long Gen; Liu, You Ci

    1995-01-01

    In the research of environmental impact assessment (EIA), the problem of weight distribution for a set of parameters has not yet been properly solved. Presents an approach of objective weighting by using a procedure of Pij principal component-factor analysis (Pij PCFA), which suits specifically those parameters measured directly by physical…

  2. Synthesis of benthic flux components in the Patos Lagoon coastal zone, Rio Grande do Sul, Brazil

    NASA Astrophysics Data System (ADS)

    King, J. N.

    2012-12-01

    The primary objective of this work is to synthesize components of benthic flux in the Patos Lagoon coastal zone, Rio Grande do Sul, Brazil. Specifically, the component of benthic discharge flux forced by the terrestrial hydraulic gradient is 0.8 m³ d⁻¹; components of benthic discharge and recharge flux associated with the groundwater tidal prism are both 2.1 m³ d⁻¹; components of benthic discharge and recharge flux forced by surface-gravity wave setup are both 6.3 m³ d⁻¹; the component of benthic discharge flux that transports radium-228 is 350 m³ d⁻¹; and components of benthic discharge and recharge flux forced by surface-gravity waves propagating over a porous medium are both 1400 m³ d⁻¹. (All models are normalized per meter shoreline.) Benthic flux is a function of components forced by individual mechanisms and nonlinear interactions that exist between components. Constructive and destructive interference may enhance or diminish the contribution of benthic flux components. It may not be possible to model benthic flux by summing component magnitudes. Geochemical tracer techniques may not accurately model benthic discharge flux or submarine groundwater discharge (SGD). A conceptual model provides a framework on which to quantitatively characterize benthic discharge flux and SGD with a multifaceted approach.

  3. Synthesis of benthic flux components in the Patos Lagoon coastal zone, Rio Grande do Sul, Brazil

    USGS Publications Warehouse

    King, Jeffrey N.

    2012-01-01

    The primary objective of this work is to synthesize components of benthic flux in the Patos Lagoon coastal zone, Rio Grande do Sul, Brazil. Specifically, the component of benthic discharge flux forced by the terrestrial hydraulic gradient is 0.8 m³ d⁻¹; components of benthic discharge and recharge flux associated with the groundwater tidal prism are both 2.1 m³ d⁻¹; components of benthic discharge and recharge flux forced by surface-gravity wave setup are both 6.3 m³ d⁻¹; the component of benthic discharge flux that transports radium-228 is 350 m³ d⁻¹; and components of benthic discharge and recharge flux forced by surface-gravity waves propagating over a porous medium are both 1400 m³ d⁻¹. (All models are normalized per meter shoreline.) Benthic flux is a function of components forced by individual mechanisms and nonlinear interactions that exist between components. Constructive and destructive interference may enhance or diminish the contribution of benthic flux components. It may not be possible to model benthic flux by summing component magnitudes. Geochemical tracer techniques may not accurately model benthic discharge flux or submarine groundwater discharge (SGD). A conceptual model provides a framework on which to quantitatively characterize benthic discharge flux and SGD with a multifaceted approach.

  4. Shading of a computer-generated hologram by zone plate modulation.

    PubMed

    Kurihara, Takayuki; Takaki, Yasuhiro

    2012-02-13

    We propose a hologram calculation technique that enables reconstructing a shaded three-dimensional (3D) image. The amplitude distributions of zone plates, which generate the object points that constitute a 3D object, were two-dimensionally modulated. Two-dimensional (2D) amplitude modulation was determined on the basis of the Phong reflection model developed for computer graphics, which considers the specular, diffuse, and ambient reflection light components. The 2D amplitude modulation added variable and constant modulations: the former controlled the specular light component and the latter controlled the diffuse and ambient components. The proposed calculation technique was experimentally verified. The reconstructed image showed specular reflection that varied depending on the viewing position.
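
    The Phong reflection model referenced above combines ambient, diffuse, and specular terms. A minimal sketch for a single object point follows; the reflection coefficients and shininess exponent are illustrative, and the light intensity is taken as 1.

```python
import math

# Phong intensity for one surface point:
#   I = ka + kd*(L.N) + ks*(R.V)^alpha   (unit light intensity)
# where N is the surface normal, L the direction to the light,
# V the direction to the viewer, and R the reflected light direction.

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return tuple(x / n for x in v)

def phong_intensity(normal, to_light, to_viewer, ka, kd, ks, shininess):
    n = _normalize(normal)
    l = _normalize(to_light)
    v = _normalize(to_viewer)
    ln = max(_dot(l, n), 0.0)
    # Reflection of the light direction about the normal: R = 2(L.N)N - L
    r = tuple(2.0 * ln * nc - lc for nc, lc in zip(n, l))
    rv = max(_dot(r, v), 0.0)
    return ka + kd * ln + ks * (rv ** shininess)
```

    In the paper's terms, the ka and kd contributions correspond to the constant modulation of the zone plate amplitude, while the view-dependent ks term supplies the variable modulation.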

  5. A measure for objects clustering in principal component analysis biplot: A case study in inter-city buses maintenance cost data

    NASA Astrophysics Data System (ADS)

    Ginanjar, Irlandia; Pasaribu, Udjianna S.; Indratno, Sapto W.

    2017-03-01

    This article presents an application of the principal component analysis (PCA) biplot for the needs of data mining. It aims to simplify and objectify methods for clustering objects in a PCA biplot; the novelty of the paper is a measure that makes such clustering objective. The orthonormal eigenvectors are the coefficients of the principal component model and represent the association between the principal components and the initial variables. This association is a valid ground for clustering objects based on their principal axis values: if m principal axes are used in the PCA, the objects can be classified into 2^m clusters. The inter-city buses are clustered based on maintenance cost data using a two-principal-axis PCA biplot, so the buses fall into four groups. The first group is the buses with high maintenance costs, especially for lube and brake canvass. The second group is the buses with high maintenance costs, especially for tire and filter. The third group is the buses with low maintenance costs, especially for lube and brake canvass. The fourth group is the buses with low maintenance costs, especially for tire and filter.
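
    Reading the abstract, the measure assigns each object to a cluster by the signs of its scores on the retained principal axes, giving 2^m clusters for m axes. A sketch of that assignment follows, assuming the principal-axis scores have already been computed by a PCA; the bit-index encoding is an illustrative choice, not the authors'.

```python
# Sign-based cluster assignment from precomputed principal-axis scores.

def biplot_cluster(scores):
    """Map a tuple of m principal-axis scores to a cluster index in [0, 2**m)."""
    index = 0
    for axis, value in enumerate(scores):
        if value >= 0:
            index |= 1 << axis  # one bit per axis: set if the score is non-negative
    return index

def cluster_objects(score_rows):
    """Group object indices by their sign-pattern cluster."""
    clusters = {}
    for obj_id, scores in enumerate(score_rows):
        clusters.setdefault(biplot_cluster(scores), []).append(obj_id)
    return clusters
```

    With two axes this yields the four quadrant groups described for the buses.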

  6. System and method for representing and manipulating three-dimensional objects on massively parallel architectures

    DOEpatents

    Karasick, Michael S.; Strip, David R.

    1996-01-01

    A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modelling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modelling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modelling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication.
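
    The d-edge idea can be illustrated with a small sketch: each record ties one directed edge of the model to exactly one face, so a processor holding a d-edge can work without talking to its neighbors. Field and function names below are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass

# A directed-edge record: one edge orientation bound to the single face it
# borders. The opposite orientation of the same edge belongs to the
# neighboring face (held, in the patent's scheme, by another processor).

@dataclass(frozen=True)
class DEdge:
    tail: tuple      # (x, y, z) of the edge's start vertex
    head: tuple      # (x, y, z) of the edge's end vertex
    face_id: int     # the one face this directed edge bounds

def d_edges_for_triangle(v0, v1, v2, face_id):
    """Each triangular face contributes one d-edge per boundary edge, oriented CCW."""
    return [DEdge(v0, v1, face_id),
            DEdge(v1, v2, face_id),
            DEdge(v2, v0, face_id)]
```

    Because every d-edge carries its own vertex coordinates and face label, a modelling command can be applied to each record in parallel, which is the property the patent emphasizes.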

  7. Improved analyses using function datasets and statistical modeling

    Treesearch

    John S. Hogland; Nathaniel M. Anderson

    2014-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...

  8. Simulating unstressed crop development and growth using the Unified Plant Growth Model (UPGM)

    USDA-ARS?s Scientific Manuscript database

    Since development of the EPIC model in 1989, many versions of the plant growth component have been incorporated into other erosion and crop management models and subsequently modified to meet model objectives (e.g., WEPS, WEPP, SWAT, ALMANAC, GPFARM). This has resulted in different versions of the ...

  9. Further Developments in Modeling Creep Effects Within Structural SiC/SiC Components

    NASA Technical Reports Server (NTRS)

    Lang, Jerry; DiCarlo, James A.

    2008-01-01

    Anticipating the implementation of advanced SiC/SiC composites into turbine section components of future aero-propulsion engines, the primary objective of this on-going study is to develop physics-based analytical and finite-element modeling tools to predict the effects of constituent creep on SiC/SiC component service life. A second objective is to understand how to possibly manipulate constituent materials and processes in order to minimize these effects. Focusing on SiC/SiC components experiencing through-thickness stress gradients (e.g., airfoil leading edge), prior NASA creep modeling studies showed that detrimental residual stress effects can develop globally within the component walls which can increase the risk of matrix cracking. These studies assumed that the SiC/SiC composites behaved as isotropic viscoelastic continuum materials with creep behavior that was linear and symmetric with stress and that the creep parameters could be obtained from creep data as experimentally measured in-plane in the fiber direction of advanced thin-walled 2D SiC/SiC panels. The present study expands on those prior efforts by including constituent behavior with non-linear stress dependencies in order to predict such key creep-related SiC/SiC properties as time-dependent matrix stress, constituent creep and content effects on composite creep rates and rupture times, and stresses on fiber and matrix during and after creep.

  10. An object oriented generic controller using CLIPS

    NASA Technical Reports Server (NTRS)

    Nivens, Cody R.

    1990-01-01

    In today's applications, the need for the division of code and data has focused on the growth of object oriented programming. This philosophy gives software engineers greater control over the environment of an application. Yet the use of object oriented design does not exclude the need for greater understanding by the application of what the controller is doing. Such understanding is only possible by using expert systems. Providing a controller that is capable of controlling an object by using rule-based expertise would expedite the use of both object oriented design and expert knowledge of the dynamic of an environment in modern controllers. This project presents a model of a controller that uses the CLIPS expert system and objects in C++ to create a generic controller. The polymorphic abilities of C++ allow for the design of a generic component stored in individual data files. Accompanying the component is a set of rules written in CLIPS which provide the following: the control of individual components, the input of sensory data from components and the ability to find the status of a given component. Along with the data describing the application, a set of inference rules written in CLIPS allows the application to make use of sensory facts and status and control abilities. As a demonstration of this ability, the control of the environment of a house is provided. This demonstration includes the data files describing the rooms and their contents as far as devices, windows and doors. The rules used for the home consist of the flow of people in the house and the control of devices by the home owner.

  11. Conjunction Assessment Late-Notice High-Interest Event Investigation: Space Weather Aspects

    NASA Technical Reports Server (NTRS)

    Pachura, D.; Hejduk, M. D.

    2016-01-01

    Late-notice events usually driven by large changes in primary (protected) object or secondary object state. Main parameter to represent size of state change is component position difference divided by associated standard deviation (epsilon divided by sigma) from covariance. Investigation determined actual frequency of large state changes, in both individual and combined states, and compared them to theoretically expected frequencies. Found that large changes (epsilon divided by sigma greater than 3) in individual object states occur much more frequently than theory dictates. Effect is less pronounced in radial components and in events with probability of collision (Pc) greater than 1×10⁻⁵ (1e-5). Found combined state matched much closer to theoretical expectation, especially for radial and cross-track. In-track is expected to be the most vulnerable to modeling errors, so not surprising that non-compliance largest in this component.
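
    The epsilon-over-sigma metric described above can be sketched directly. This is a simplified per-component version that reads sigma from the covariance diagonal and ignores cross-terms; the threshold of 3 follows the text.

```python
import math

# Per-component state-change metric: |position difference| / sigma, where
# sigma comes from the corresponding covariance diagonal entry.

def epsilon_over_sigma(old_pos, new_pos, covariance_diag):
    """Ratios for each component (e.g. radial, in-track, cross-track)."""
    return [abs(n - o) / math.sqrt(var)
            for o, n, var in zip(old_pos, new_pos, covariance_diag)]

def large_change(old_pos, new_pos, covariance_diag, threshold=3.0):
    """Flag a 'large' state change if any component exceeds the threshold."""
    return any(r > threshold
               for r in epsilon_over_sigma(old_pos, new_pos, covariance_diag))
```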

  12. How to constrain multi-objective calibrations of the SWAT model using water balance components

    USDA-ARS?s Scientific Manuscript database

    Automated procedures are often used to provide adequate fits between hydrologic model estimates and observed data. While the models may provide good fits based upon numeric criteria, they may still not accurately represent the basic hydrologic characteristics of the represented watershed. Here we ...

  13. A System-Science Approach towards Model Construction for Curriculum Development.

    ERIC Educational Resources Information Center

    Chang, Ren-Jung; Yang, Hui-Chin

    A new morphological model based on modern system science and engineering is constructed and proposed for curriculum research and development. A curriculum system is recognized as an engineering system comprising three components: clients, resources, and knowledge. Unlike the objective models that are purely rational and neatly sequential in…

  14. Human Centered Modeling and Simulation

    Science.gov Websites

    Thrust Area 2: Human Centered Modeling and Simulation. Thrust Area Leader: Dr. Matthew… The performance of human occupants and operators is paramount in achieving ground vehicle design objectives, but these occupants are also the most variable components of the human-machine system. Modeling…

  15. A Multidimensional Curriculum Model for Heritage or International Language Instruction.

    ERIC Educational Resources Information Center

    Lazaruk, Wally

    1993-01-01

    Describes the Multidimensional Curriculum Model for developing a language curriculum and suggests a generic approach to selecting and sequencing learning objectives. Alberta Education used this model to design a new French-as-a-Second-Language program. The experience/communication, culture, language, and general language components at the beginning,…

  16. A Goal Seeking Strategy for Constructing Systems from Alternative Components

    NASA Technical Reports Server (NTRS)

    Valentine, Mark E.

    1999-01-01

    This paper describes a methodology for efficiently constructing feasible systems and then modifying them to meet successive goals by selecting from alternative components, a problem recognized to be NP-complete. The methodology provides a means to catalog and model alternative components. The presented system modeling structure is robust enough to model a wide variety of systems and provides a means to compare and evaluate alternative systems. These models act as input to a methodology for selecting alternative components to construct feasible systems and modify them to meet design goals and objectives. The presented algorithm's ability to find a restricted solution, as defined by a unique set of requirements, is demonstrated against an exhaustive search of a sample of proposed shuttle modifications. The utility of the algorithm is demonstrated by comparing its results with those of three NASA shuttle evolution studies, using their value systems and assumptions.
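
    A goal-seeking loop of this general shape might be sketched as a greedy swap search: start from a feasible selection (one alternative per slot) and repeatedly swap in the single alternative that best improves the goal while staying feasible. The feasibility and scoring model below is hypothetical and is not the paper's algorithm.

```python
# Greedy goal-seeking over alternative components.
# slots: list of lists of alternatives; start: one chosen alternative per slot.
# feasible(selection) -> bool; score(selection) -> number to maximize.

def goal_seek(slots, feasible, score, start):
    current = list(start)
    improved = True
    while improved:
        improved = False
        best = (score(current), None, None)
        for i, alternatives in enumerate(slots):
            for alt in alternatives:
                trial = current[:i] + [alt] + current[i + 1:]
                if feasible(trial) and score(trial) > best[0]:
                    best = (score(trial), i, alt)
        if best[1] is not None:          # commit the single best feasible swap
            current[best[1]] = best[2]
            improved = True
    return current
```

    Greedy swapping only finds a local optimum, which is why the paper compares its restricted solutions against an exhaustive search.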

  17. Using a virtual world for robot planning

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian

    2012-06-01

    We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.

  18. Heterogeneous Deformable Modeling of Bio-Tissues and Haptic Force Rendering for Bio-Object Modeling

    NASA Astrophysics Data System (ADS)

    Lin, Shiyong; Lee, Yuan-Shin; Narayan, Roger J.

    This paper presents a novel technique for modeling soft biological tissues as well as the development of an innovative interface for bio-manufacturing and medical applications. Heterogeneous deformable models may be used to represent the actual internal structures of deformable biological objects, which possess multiple components and nonuniform material properties. Both heterogeneous deformable object modeling and accurate haptic rendering can greatly enhance the realism and fidelity of virtual reality environments. In this paper, a tri-ray node snapping algorithm is proposed to generate a volumetric heterogeneous deformable model from a set of object interface surfaces between different materials. A constrained local static integration method is presented for simulating deformation and accurate force feedback based on the material properties of a heterogeneous structure. Biological soft tissue modeling is used as an example to demonstrate the proposed techniques. By integrating the heterogeneous deformable model into a virtual environment, users can both observe different materials inside a deformable object as well as interact with it by touching the deformable object using a haptic device. The presented techniques can be used for surgical simulation, bio-product design, bio-manufacturing, and medical applications.
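
    As a generic illustration of deformation with nonuniform material properties, consider static relaxation of a 1-D chain of nodes joined by springs of differing stiffness, with constrained end nodes. This is a textbook mass-spring relaxation shown only to convey the idea of heterogeneous stiffness; it is not the paper's constrained local static integration method.

```python
# Static relaxation of a heterogeneous 1-D spring chain with zero rest
# lengths. stiffness[i] joins node i to node i+1; nodes in `fixed` are
# constrained and never move.

def relax_chain(positions, stiffness, fixed, iterations=2000):
    """Iteratively move free nodes to the stiffness-weighted equilibrium."""
    x = list(positions)
    for _ in range(iterations):
        for i in range(1, len(x) - 1):
            if i in fixed:
                continue
            k_left, k_right = stiffness[i - 1], stiffness[i]
            # Zero net force: k_left*(x[i]-x[i-1]) = k_right*(x[i+1]-x[i])
            x[i] = (k_left * x[i - 1] + k_right * x[i + 1]) / (k_left + k_right)
    return x
```

    A stiffer right-hand segment pulls the interior node toward the right end, which is the qualitative behavior a heterogeneous deformable model must capture across material interfaces.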

  19. Environmental modeling and recognition for an autonomous land vehicle

    NASA Technical Reports Server (NTRS)

    Lawton, D. T.; Levitt, T. S.; Mcconnell, C. C.; Nelson, P. C.

    1987-01-01

    An architecture for object modeling and recognition for an autonomous land vehicle is presented. Examples of objects of interest include terrain features, fields, roads, horizon features, trees, etc. The architecture is organized around a set of data bases for generic object models and perceptual structures, temporary memory for the instantiation of object and relational hypotheses, and a long term memory for storing stable hypotheses that are affixed to the terrain representation. Multiple inference processes operate over these databases. Researchers describe these particular components: the perceptual structure database, the grouping processes that operate over this, schemas, and the long term terrain database. A processing example that matches predictions from the long term terrain model to imagery, extracts significant perceptual structures for consideration as potential landmarks, and extracts a relational structure to update the long term terrain database is given.

  20. Adaptive model-based control systems and methods for controlling a gas turbine

    NASA Technical Reports Server (NTRS)

    Brunell, Brent Jerome (Inventor); Mathews, Jr., Harry Kirk (Inventor); Kumar, Aditya (Inventor)

    2004-01-01

    Adaptive model-based control systems and methods are described so that performance and/or operability of a gas turbine in an aircraft engine, power plant, marine propulsion, or industrial application can be optimized under normal, deteriorated, faulted, failed and/or damaged operation. First, a model of each relevant system or component is created, and the model is adapted to the engine. Then, if/when deterioration, a fault, a failure or some kind of damage to an engine component or system is detected, that information is input to the model-based control as changes to the model, constraints, objective function, or other control parameters. With all the information about the engine condition, and state and directives on the control goals in terms of an objective function and constraints, the control then solves an optimization so the optimal control action can be determined and taken. This model and control may be updated in real-time to account for engine-to-engine variation, deterioration, damage, faults and/or failures using optimal corrective control action command(s).

  1. Accumulated energy norm for full waveform inversion of marine data

    NASA Astrophysics Data System (ADS)

    Shin, Changsoo; Ha, Wansoo

    2017-12-01

    Macro-velocity models are important for imaging the subsurface structure. However, the conventional objective functions of full waveform inversion in the time and the frequency domain have a limited ability to recover the macro-velocity model because of the absence of low-frequency information. In this study, we propose new objective functions that can recover the macro-velocity model by minimizing the difference between the zero-frequency components of the square of seismic traces. Instead of the seismic trace itself, we use the square of the trace, which contains low-frequency information. We apply several time windows to the trace and obtain zero-frequency information of the squared trace for each time window. The shape of the new objective functions shows that they are suitable for local optimization methods. Since we use the acoustic wave equation in this study, this method can be used for deep-sea marine data, in which elastic effects can be ignored. We show that the zero-frequency components of the square of the seismic traces can be used to recover macro-velocities from synthetic and field data.
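
    The proposed objective can be sketched directly: the zero-frequency (DC) component of the squared trace within a time window is just the sum of the squared samples there, and the misfit compares these values between observed and modeled data. The plain consecutive-window partition below is an assumption; the paper's exact windowing may differ.

```python
# Zero-frequency components of the squared trace, per time window, and a
# least-squares misfit between observed and modeled values.

def windowed_dc_of_square(trace, window_len):
    """DC component of trace**2 within consecutive, non-overlapping windows."""
    return [sum(s * s for s in trace[i:i + window_len])
            for i in range(0, len(trace), window_len)]

def objective(observed, modeled, window_len):
    """Sum of squared differences of the windowed DC components."""
    d_obs = windowed_dc_of_square(observed, window_len)
    d_mod = windowed_dc_of_square(modeled, window_len)
    return sum((a - b) ** 2 for a, b in zip(d_obs, d_mod))
```

    Squaring the trace before taking the DC component is what recovers low-frequency information absent from the raw data, which is the key point of the paper.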

  2. Object selection costs in visual working memory: A diffusion model analysis of the focus of attention.

    PubMed

    Sewell, David K; Lilburn, Simon D; Smith, Philip L

    2016-11-01

    A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can occur. The need to orient the focus of attention implies that single-object accounts typically predict response time costs associated with object selection even when working memory is not full (i.e., memory load is less than 4 items). For other theories that assume storage of multiple items in the focus of attention, predictions depend on specific assumptions about the way resources are allocated among items held in the focus, and how this affects the time course of retrieval of items from the focus. These broad theoretical accounts have been difficult to distinguish because conventional analyses fail to separate components of empirical response times related to decision-making from components related to selection and retrieval processes associated with accessing information in working memory. To better distinguish these response time components from one another, we analyze data from a probed visual working memory task using extensions of the diffusion decision model. Analysis of model parameters revealed that increases in memory load resulted in (a) reductions in the quality of the underlying stimulus representations in a manner consistent with a sample size model of visual working memory capacity and (b) systematic increases in the time needed to selectively access a probed representation in memory. The results are consistent with single-object theories of the focus of attention. The results are also consistent with a subset of theories that assume a multiobject focus of attention in which resource allocation diminishes both the quality and accessibility of the underlying representations. 
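
    The diffusion decision model underlying the analysis can be simulated in a few lines: evidence accumulates with drift v and Gaussian noise until it crosses an upper (boundary a) or lower (0) bound. Parameter names follow the standard model; the values used below are illustrative only.

```python
import random

# One trial of the diffusion decision model. Lower drift v corresponds to
# poorer stimulus representations; extra nondecision time (not modeled
# here) would capture slower selective access under memory load.

def diffusion_trial(v, a, z, dt=0.001, sigma=1.0, rng=random):
    """Return (response, rt): response 1 at the upper bound, 0 at the lower."""
    x, t = z, 0.0
    step_sd = sigma * dt ** 0.5
    while 0.0 < x < a:
        x += v * dt + rng.gauss(0.0, step_sd)
        t += dt
    return (1 if x >= a else 0), t
```

    In the paper's terms, a memory-load-related drop in representation quality maps onto lower drift, while slower access to the probed item maps onto a longer nondecision component added to the response time.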

  3. Buried Object Classification using a Sediment Volume Imaging SAS and Electromagnetic Gradiometer

    DTIC Science & Technology

    2006-09-01

    field data with simulated RTG data using AST's in-house magnetic modeling tool EMAGINE. Given a set of input dipole moments, or parameters to...approximate a moment by assuming the object is a prolate ellipsoid shell, EMAGINE uses Green's function formulations to generate three-component

  4. High Tech Educators Network Evaluation.

    ERIC Educational Resources Information Center

    O'Shea, Dan

    A process evaluation was conducted to assess the High Tech Educators Network's (HTEN's) activities. Four basic components to the evaluation approach were documentation review, program logic model, written survey, and participant interviews. The model mapped the basic goals and objectives, assumptions, activities, outcome expectations, and…

  5. C-Language Integrated Production System, Version 6.0

    NASA Technical Reports Server (NTRS)

    Riley, Gary; Donnell, Brian; Ly, Huyen-Anh Bebe; Ortiz, Chris

    1995-01-01

    C Language Integrated Production System (CLIPS) computer programs are specifically intended to model human expertise or other knowledge. CLIPS is designed to enable research on, and development and delivery of, artificial intelligence on conventional computers. CLIPS 6.0 provides cohesive software tool for handling wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming: representation of knowledge as heuristics - essentially, rules of thumb that specify set of actions performed in given situation. Object-oriented programming: modeling of complex systems comprised of modular components easily reused to model other systems or create new components. Procedural-programming: representation of knowledge in ways similar to those of such languages as C, Pascal, Ada, and LISP. Version of CLIPS 6.0 for IBM PC-compatible computers requires DOS v3.3 or later and/or Windows 3.1 or later.

  6. Learning Engines - A Functional Object Model for Developing Learning Resources for the WWW.

    ERIC Educational Resources Information Center

    Fritze, Paul; Ip, Albert

    The Learning Engines (LE) model, developed at the University of Melbourne (Australia), supports the integration of rich learning activities into the World Wide Web. The model is concerned with the practical design, educational value, and reusability of software components. The model is focused on the academic teacher who is in the best position to…

  7. Model documentation: Electricity Market Module, Electricity Fuel Dispatch Submodule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This report documents the objectives, analytical approach and development of the National Energy Modeling System Electricity Fuel Dispatch Submodule (EFD), a submodule of the Electricity Market Module (EMM). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  8. Advances in the spatially distributed ages-w model: parallel computation, java connection framework (JCF) integration, and streamflow/nitrogen dynamics assessment

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic and water quality (H/WQ) simulation components under the Java Connection Framework (JCF) and the Object Modeling System (OMS) environmental modeling framework. AgES-W is implicitly scala...

  9. The Spatially-Distributed Agroecosystem-Watershed (Ages-W) Hydrologic/Water Quality (H/WQ) model for assessment of conservation effects

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality (H/WQ) simulation components under the Object Modeling System (OMS3) environmental modeling framework. AgES-W has recently been enhanced with the addition of nitrogen (N) a...

  10. Component-based target recognition inspired by human vision

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Agyepong, Kwabena

    2009-05-01

    In contrast with machine vision, humans can recognize an object against a complex background with great flexibility. For example, given the task of finding and circling all cars in a picture (with no further information), you may build a virtual image in mind from the task (or target) description before looking at the picture. Specifically, the virtual car image may be composed of key components such as the driver cabin and wheels. In this paper, we propose a component-based target recognition method that simulates the human recognition process. The component templates (equivalent to the virtual image in mind) of the target (car) are manually decomposed from the target feature image. Meanwhile, the edges of the testing image are extracted using a difference of Gaussian (DOG) model that simulates the spatiotemporal response of the visual process. A phase correlation matching algorithm is then applied to match the templates with the testing edge image. If all key component templates are matched with the examined object, then this object is recognized as the target. Besides recognition accuracy, we also investigate whether this method works with partial targets (half cars). In our experiments, several natural pictures taken on streets were used to test the proposed method. The preliminary results show that the component-based recognition method is very promising.
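
    The DOG edge-extraction step can be sketched in one dimension: convolve the signal with the difference of two Gaussian kernels of different widths, which responds strongly at intensity edges and is near zero in flat regions. The sigmas and kernel radius below are illustrative choices, not the paper's.

```python
import math

# 1-D difference-of-Gaussians filter. Both kernels are normalized, so the
# DoG kernel sums to zero and flat regions produce (near-)zero response.

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def dog_filter(signal, sigma_narrow=1.0, sigma_wide=2.0, radius=4):
    narrow = gaussian_kernel(sigma_narrow, radius)
    wide = gaussian_kernel(sigma_wide, radius)
    dog = [n - w for n, w in zip(narrow, wide)]
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(dog):
            idx = min(max(i + j - radius, 0), len(signal) - 1)  # clamp at borders
            acc += w * signal[idx]
        out.append(acc)
    return out
```

    In two dimensions the same construction yields the edge map that the phase correlation step matches against the component templates.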

  11. Hierarchical, parallel computing strategies using component object model for process modelling responses of forest plantations to interacting multiple stresses

    Treesearch

    J. G. Isebrands; G. E. Host; K. Lenz; G. Wu; H. W. Stech

    2000-01-01

    Process models are powerful research tools for assessing the effects of multiple environmental stresses on forest plantations. These models are driven by interacting environmental variables and often include genetic factors necessary for assessing forest plantation growth over a range of different site, climate, and silvicultural conditions. However, process models are...

  12. Point sources from dissipative dark matter

    NASA Astrophysics Data System (ADS)

    Agrawal, Prateek; Randall, Lisa

    2017-12-01

    If a component of dark matter has dissipative interactions, it can cool to form compact astrophysical objects with higher density than that of conventional cold dark matter (sub)haloes. Dark matter annihilations might then appear as point sources, leading to novel morphology for indirect detection. We explore dissipative models where interaction with the Standard Model might provide visible signals, and show how such objects might give rise to the observed excess in gamma rays arising from the galactic center.

  13. Frontal–Occipital Connectivity During Visual Search

    PubMed Central

    Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas

    2012-01-01

    Abstract Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993

  14. Modeling of short-term mechanism of arterial pressure control in the cardiovascular system: object-oriented and acausal approach.

    PubMed

    Kulhánek, Tomáš; Kofránek, Jiří; Mateják, Marek

    2014-11-01

    This letter introduces an alternative approach to modeling the cardiovascular system with a short-term control mechanism published in Computers in Biology and Medicine, Vol. 47 (2014), pp. 104-112. We recommend using abstract components on a distinct physical level, separating the model into hydraulic components, subsystems of the cardiovascular system and individual subsystems of the control mechanism and scenario. We recommend utilizing an acausal modeling feature of Modelica language, which allows model variables to be expressed declaratively. Furthermore, the Modelica tool identifies which are the dependent and independent variables upon compilation. An example of our approach is introduced on several elementary components representing the hydraulic resistance to fluid flow and the elastic response of the vessel, among others. The introduced model implementation can be more reusable and understandable for the general scientific community. Copyright © 2014 Elsevier Ltd. All rights reserved.
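
    The acausal idea described here, components declare equations without fixing which variable is input and which is output, and the tool decides at compile time, can be illustrated in miniature. The paper uses Modelica; the pure-Python sketch below is a hypothetical analogue with invented names, assembling the declarative equations of a tiny hydraulic circuit into a linear system and solving it.

```python
def solve_2x2(A, b):
    """Cramer's rule for a 2x2 linear system A x = b."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return x0, x1

def steady_flow(p0, R1, R2):
    """Series hydraulic circuit: pressure source p0, resistances R1 and R2 to ground.
    Unknowns x = (q, p1); the equations are stated declaratively, with no
    assignment direction, and the 'compiler' (this function) picks the unknowns:
        p0 - p1 = R1 * q   ->   R1*q + p1 = p0
        p1 - 0  = R2 * q   ->  -R2*q + p1 = 0
    """
    A = [[R1, 1.0], [-R2, 1.0]]
    b = [p0, 0.0]
    q, p1 = solve_2x2(A, b)
    return q, p1
```

    In Modelica the same effect is achieved at a much larger scale: the compiler sorts and solves the full equation system, so a resistance component can be reused regardless of which side's pressure happens to be known.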

  15. Designing an Educational Game with Ten Steps to Complex Learning

    ERIC Educational Resources Information Center

    Enfield, Jacob

    2012-01-01

    Few instructional design (ID) models exist which are specific for developing educational games. Moreover, those extant ID models have not been rigorously evaluated. No ID models were found which focus on educational games with complex learning objectives. "Ten Steps to Complex Learning" (TSCL) is based on the four component instructional…

  16. Information Environments

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Naiman, Cynthia

    2003-01-01

    The objective of GRC CNIS/IE work is to build a plug-n-play infrastructure that provides the Grand Challenge Applications with a suite of tools for coupling codes together, numerical zooming between fidelity of codes and gaining deployment of these simulations onto the Information Power Grid. The GRC CNIS/IE work will streamline and improve this process by providing tighter integration of various tools through the use of object oriented design of component models and data objects and through the use of CORBA (Common Object Request Broker Architecture).

  17. MarFS, Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inman, Jeffrey; Bonnie, David; Broomfield, Matthew

    There is a sea (mar is Spanish for sea) of data out there that needs to be handled efficiently. Object stores are filling the hole of managing large amounts of data efficiently. However, in many cases, and our HPC case in particular, we need a traditional file (POSIX) interface to this data, as HPC I/O models have not moved to object interfaces such as Amazon S3, CDMI, etc. Eventually object store providers may deliver file interfaces to their object stores, but at this point those interfaces are not ready to do the job that we need done. MarFS will glue together two existing scalable components: a file system's scalable metadata component that provides the file interface, and existing scalable object stores (from one or more providers). There will be utilities to do work that is not critical to be done in real time, so that MarFS can manage the space used by objects and allocated to individual users.
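
    The split described here, a POSIX-style metadata namespace in front of object storage, can be sketched schematically. This is an illustration of the general pattern only, not MarFS's design; the class, the key scheme, and the in-memory stores are all invented.

```python
import hashlib
import posixpath

class PosixOverObjects:
    """Toy namespace-over-object-store: metadata keeps the file view,
    file bodies live in a flat object store keyed by opaque ids."""

    def __init__(self):
        self.metadata = {}  # path -> {"size": ..., "key": ...}  (file-system side)
        self.objects = {}   # object key -> bytes                (object-store side)

    def write(self, path, data):
        path = posixpath.normpath(path)
        key = hashlib.sha256(path.encode()).hexdigest()  # invented key scheme
        self.objects[key] = data
        self.metadata[path] = {"size": len(data), "key": key}

    def read(self, path):
        meta = self.metadata[posixpath.normpath(path)]
        return self.objects[meta["key"]]
```

    The point of the separation is that each side scales independently: metadata operations stay cheap and familiar, while bulk data rides on whatever object store is available.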

  18. The adhesion and hysteresis effect in friction skin with artificial materials

    NASA Astrophysics Data System (ADS)

    Subhi, K. A.; Tudor, A.; Hussein, E. K.; Wahad, H. S.

    2017-02-01

    Human skin is a soft biomaterial with a complex anatomical structure, and it shows complex material behavior during mechanical contact with objects and surfaces. The friction adhesion component is defined by means of the theories of Johnson-Kendall-Roberts (JKR), Derjaguin-Muller-Toporov (DMT) and Maugis-Dugdale (MD). We consider human skin entering into contact with a rigid surface. The deformation (hysteresis) component of skin friction is evaluated with a Voigt rheological model for the spherical contact, using an original model developed in MATHCAD software. The adhesive component of skin friction is greater than the hysteresis component for all friction parameters (load, velocity, and the strength of the interface between the skin and the artificial material).
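
    For the two limiting adhesion theories named above, the pull-off forces for a sphere of radius R and work of adhesion w are standard results: (3/2)πRw for JKR and 2πRw for DMT (Maugis-Dugdale interpolates between them). A minimal sketch, with invented function names:

```python
import math

def pulloff_force_jkr(R, w):
    """JKR pull-off force: compliant, strongly adhesive contacts."""
    return 1.5 * math.pi * R * w

def pulloff_force_dmt(R, w):
    """DMT pull-off force: stiff, weakly adhesive contacts."""
    return 2.0 * math.pi * R * w
```

    The fixed 3:4 ratio between the two predictions is one reason the choice of theory matters when estimating the adhesive friction component of soft tissue such as skin.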

  19. Applications of an OO Methodology and CASE to a DAQ System

    NASA Astrophysics Data System (ADS)

    Bee, C. P.; Eshghi, S.; Jones, R.; Kolos, S.; Magherini, C.; Maidantchik, C.; Mapelli, L.; Mornacchi, G.; Niculescu, M.; Patel, A.; Prigent, D.; Spiwoks, R.; Soloviev, I.; Caprini, M.; Duval, P. Y.; Etienne, F.; Ferrato, D.; Le van Suu, A.; Qian, Z.; Gaponenko, I.; Merzliakov, Y.; Ambrosini, G.; Ferrari, R.; Fumagalli, G.; Polesello, G.

    The RD13 project has evaluated the use of the Object Oriented Information Engineering (OOIE) method during the development of several software components connected to the DAQ system. The method is supported by a sophisticated commercial CASE tool (Object Management Workbench) and programming environment (Kappa) which covers the full life-cycle of the software, including model simulation, code generation and application deployment. This paper gives an overview of the method, the CASE tool, and the DAQ components that have been developed, and relates our experiences with the method and tool, its integration into our development environment, and the spiral life-cycle it supports.

  20. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
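
    The mechanism this abstract describes, processes defined by invocation statements and effect statements with time delays, is the core of discrete-event simulation. A generic sketch of that scheduling loop (not CONFIG's actual data model; the rule table and signal names are invented):

```python
import heapq

def simulate(initial_events, rules, t_end):
    """initial_events: list of (time, signal);
    rules: signal -> list of (delay, consequent_signal), i.e. delayed effects."""
    queue = list(initial_events)
    heapq.heapify(queue)            # event queue ordered by time
    trace = []
    while queue:
        t, sig = heapq.heappop(queue)
        if t > t_end:
            break
        trace.append((t, sig))
        for delay, nxt in rules.get(sig, []):
            heapq.heappush(queue, (t + delay, nxt))  # effect fires after its delay
    return trace
```

    A fault-propagation rule set such as "pump failure causes low pressure after 2 time units, which raises an alarm after 1 more" is then a plain dictionary, which is close in spirit to building system models from library components and relations.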

  1. Monitoring Aircraft Motion at Airports by LIDAR

    NASA Astrophysics Data System (ADS)

    Toth, C.; Jozkow, G.; Koppanyi, Z.; Young, S.; Grejner-Brzezinska, D.

    2016-06-01

    Improving sensor performance, combined with better affordability, provides better object space observability, resulting in new applications. Remote sensing systems are primarily concerned with acquiring data of the static components of our environment, such as the topographic surface of the earth, transportation infrastructure, city models, etc. Observing the dynamic component of the object space is still rather rare in the geospatial application field; vehicle extraction and traffic flow monitoring are a few examples of using remote sensing to detect and model moving objects. Deploying a network of inexpensive LiDAR sensors along taxiways and runways can provide geospatial data that is both geometrically and temporally rich, such that the aircraft body can be extracted from the point cloud and, based on consecutive point clouds, motion parameters can be estimated. Acquiring accurate aircraft trajectory data is essential to improve aviation safety at airports. This paper reports on initial experiences obtained by using a network of four Velodyne VLP-16 sensors to acquire data along a runway segment.
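
    As a toy illustration of estimating motion parameters from consecutive point clouds (not the paper's method), one can track the centroid of the extracted aircraft points between two epochs; the names below are invented:

```python
def centroid(points):
    """Mean position of a 3-D point cloud given as (x, y, z) tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def translation(points_t0, points_t1):
    """Displacement of the cloud centroid between two epochs."""
    c0, c1 = centroid(points_t0), centroid(points_t1)
    return tuple(b - a for a, b in zip(c0, c1))

def speed(points_t0, points_t1, dt):
    """Scalar speed from the centroid displacement over time step dt."""
    dx, dy, dz = translation(points_t0, points_t1)
    return (dx * dx + dy * dy + dz * dz) ** 0.5 / dt
```

    Real trajectory estimation would have to handle partial visibility and changing point density between scans, which is why robust registration rather than a bare centroid is normally used.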

  2. RF control at SSCL — an object oriented design approach

    NASA Astrophysics Data System (ADS)

    Dohan, D. A.; Osberg, E.; Biggs, R.; Bossom, J.; Chillara, K.; Richter, R.; Wade, D.

    1994-12-01

    The Superconducting Super Collider (SSC) in Texas, the construction of which was stopped in 1994, would have represented a major challenge in accelerator research and development. This paper addresses the issues encountered in the parallel design and construction of the control systems for the RF equipment for the five accelerators comprising the SSC. An extensive analysis of the components of the RF control systems has been undertaken, based upon the Shlaer-Mellor object-oriented analysis and design (OOA/OOD) methodology. The RF subsystem components such as amplifiers, tubes, power supplies, PID loops, etc. were analyzed to produce OOA information, behavior and process models. Using these models, OOD was iteratively applied to develop a generic RF control system design. This paper describes the results of this analysis and the development of 'bridges' between the analysis objects, and the EPICS-based software and underlying VME-based hardware architectures. The application of this approach to several of the SSCL RF control systems is discussed.

  3. GUEST EDITORS' INTRODUCTION: Guest Editors' introduction

    NASA Astrophysics Data System (ADS)

    Guerraoui, Rachid; Vinoski, Steve

    1997-09-01

    The organization of a distributed system can have a tremendous impact on its capabilities, its performance, and its ability to evolve to meet changing requirements. For example, the client - server organization model has proven to be adequate for organizing a distributed system as a number of distributed servers that offer various functions to client processes across the network. However, it lacks peer-to-peer capabilities, and experience with the model has been predominantly in the context of local networks. To achieve peer-to-peer cooperation in a more global context, systems issues of scale, heterogeneity, configuration management, accounting and sharing are crucial, and the complexity of migrating from locally distributed to more global systems demands new tools and techniques. An emphasis on interfaces and modules leads to the modelling of a complex distributed system as a collection of interacting objects that communicate with each other only using requests sent to well defined interfaces. Although object granularity typically varies at different levels of a system architecture, the same object abstraction can be applied to various levels of a computing architecture. Since 1989, the Object Management Group (OMG), an international software consortium, has been defining an architecture for distributed object systems called the Object Management Architecture (OMA). At the core of the OMA is a `software bus' called an Object Request Broker (ORB), which is specified by the OMG Common Object Request Broker Architecture (CORBA) specification. The OMA distributed object model fits the structure of heterogeneous distributed applications, and is applied in all layers of the OMA. For example, each of the OMG Object Services, such as the OMG Naming Service, is structured as a set of distributed objects that communicate using the ORB. 
Similarly, higher-level OMA components such as Common Facilities and Domain Interfaces are also organized as distributed objects that can be layered over both Object Services and the ORB. The OMG creates specifications, not code, but the interfaces it standardizes are always derived from demonstrated technology submitted by member companies. The specified interfaces are written in a neutral Interface Definition Language (IDL) that defines contractual interfaces with potential clients. Interfaces written in IDL can be translated to a number of programming languages via OMG standard language mappings so that they can be used to develop components. The resulting components can transparently communicate with other components written in different languages and running on different operating systems and machine types. The ORB is responsible for providing the illusion of `virtual homogeneity' regardless of the programming languages, tools, operating systems and networks used to realize and support these components. With the adoption of the CORBA 2.0 specification in 1995, these components are able to interoperate across multi-vendor CORBA-based products. More than 700 member companies have joined the OMG, including Hewlett-Packard, Digital, Siemens, IONA Technologies, Netscape, Sun Microsystems, Microsoft and IBM, which makes it the largest standards body in existence. These companies continue to work together within the OMG to refine and enhance the OMA and its components. This special issue of Distributed Systems Engineering publishes five papers that were originally presented at the `Distributed Object-Based Platforms' track of the 30th Hawaii International Conference on System Sciences (HICSS), which was held in Wailea on Maui on 6 - 10 January 1997. The papers, which were selected based on their quality and the range of topics they cover, address different aspects of CORBA, including advanced aspects such as fault tolerance and transactions. 
These papers discuss the use of CORBA and evaluate CORBA-based development for different types of distributed object systems and architectures. The first paper, by S Rahkila and S Stenberg, discusses the application of CORBA to telecommunication management networks. In the second paper, P Narasimhan, L E Moser and P M Melliar-Smith present a fault-tolerant extension of an ORB. The third paper, by J Liang, S Sédillot and B Traverson, provides an overview of the CORBA Transaction Service and its integration with the ISO Distributed Transaction Processing protocol. In the fourth paper, D Sherer, T Murer and A Würtz discuss the evolution of a cooperative software engineering infrastructure to a CORBA-based framework. The fifth paper, by R Fatoohi, evaluates the communication performance of a commercially-available Object Request Broker (Orbix from IONA Technologies) on several networks, and compares the performance with that of more traditional communication primitives (e.g., BSD UNIX sockets and PVM). We wish to thank both the referees and the authors of these papers, as their cooperation was fundamental in ensuring timely publication.
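
    The "software bus" indirection at the heart of the ORB model described in this introduction can be caricatured in a few lines: clients invoke a named interface through a broker and never hold a direct reference to the servant. This is an illustration of the idea only; real CORBA generates typed stubs and skeletons from IDL, and the class names here are invented.

```python
class Broker:
    """Toy ORB: routes operation invocations to registered servants by name."""

    def __init__(self):
        self._servants = {}

    def register(self, interface_name, servant):
        self._servants[interface_name] = servant

    def invoke(self, interface_name, operation, *args):
        servant = self._servants[interface_name]   # location chosen by the broker
        return getattr(servant, operation)(*args)  # dispatch by operation name

class NamingService:
    """Stand-in for the OMG Naming Service: itself just another object on the bus."""

    def __init__(self):
        self._bindings = {}

    def bind(self, name, ref):
        self._bindings[name] = ref

    def resolve(self, name):
        return self._bindings[name]
```

    The layering the text describes, Object Services and higher-level facilities all built as objects over the same ORB, falls out naturally once every component is reachable only through the broker.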

  4. Integration of the Gene Ontology into an object-oriented architecture.

    PubMed

    Shegogue, Daniel; Zheng, W Jim

    2005-05-10

    To standardize gene product descriptions, a formal vocabulary defined as the Gene Ontology (GO) has been developed. GO terms have been categorized into biological processes, molecular functions, and cellular components. However, there is no single representation that integrates all the terms into one cohesive model. Furthermore, GO definitions have little information explaining the underlying architecture that forms these terms, such as the dynamic and static events occurring in a process. In contrast, object-oriented models have been developed to show dynamic and static events. A portion of the TGF-beta signaling pathway, which is involved in numerous cellular events including cancer, differentiation and development, was used to demonstrate the feasibility of integrating the Gene Ontology into an object-oriented model. Using object-oriented models we have captured the static and dynamic events that occur during a representative GO process, "transforming growth factor-beta (TGF-beta) receptor complex assembly" (GO:0007181). We demonstrate that the utility of GO terms can be enhanced by object-oriented technology, and that the GO terms can be integrated into an object-oriented model by serving as a basis for the generation of object functions and attributes.
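
    The mapping the authors describe, a GO process rendered as classes whose attributes capture static structure and whose methods capture dynamic events, might look like the following. The class and method names are illustrative inventions, not taken from the paper.

```python
class GeneProduct:
    """Static view: a gene product with its cellular component annotation."""

    def __init__(self, name, cellular_component):
        self.name = name
        self.cellular_component = cellular_component

class ReceptorComplexAssembly:
    """Object model for a GO:0007181-style receptor complex assembly process."""

    def __init__(self):
        self.members = []

    def bind(self, product):
        """Dynamic event: a gene product joins the assembling complex."""
        self.members.append(product)

    def is_assembled(self, required_names):
        """The process completes once all required members are bound."""
        return {m.name for m in self.members} >= set(required_names)
```

    In this framing, GO terms seed the attribute and method vocabulary, which is exactly the integration the abstract proposes.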

  5. Integration of the Gene Ontology into an object-oriented architecture

    PubMed Central

    Shegogue, Daniel; Zheng, W Jim

    2005-01-01

    Background To standardize gene product descriptions, a formal vocabulary defined as the Gene Ontology (GO) has been developed. GO terms have been categorized into biological processes, molecular functions, and cellular components. However, there is no single representation that integrates all the terms into one cohesive model. Furthermore, GO definitions have little information explaining the underlying architecture that forms these terms, such as the dynamic and static events occurring in a process. In contrast, object-oriented models have been developed to show dynamic and static events. A portion of the TGF-beta signaling pathway, which is involved in numerous cellular events including cancer, differentiation and development, was used to demonstrate the feasibility of integrating the Gene Ontology into an object-oriented model. Results Using object-oriented models we have captured the static and dynamic events that occur during a representative GO process, "transforming growth factor-beta (TGF-beta) receptor complex assembly" (GO:0007181). Conclusion We demonstrate that the utility of GO terms can be enhanced by object-oriented technology, and that the GO terms can be integrated into an object-oriented model by serving as a basis for the generation of object functions and attributes. PMID:15885145

  6. Analogous Mechanisms of Selection and Updating in Declarative and Procedural Working Memory: Experiments and a Computational Model

    ERIC Educational Resources Information Center

    Oberauer, Klaus; Souza, Alessandra S.; Druey, Michel D.; Gade, Miriam

    2013-01-01

    The article investigates the mechanisms of selecting and updating representations in declarative and procedural working memory (WM). Declarative WM holds the objects of thought available, whereas procedural WM holds representations of what to do with these objects. Both systems consist of three embedded components: activated long-term memory, a…

  7. National Seminar on Research in Evaluation of Occupational Education.

    ERIC Educational Resources Information Center

    North Carolina State Univ., Raleigh. Center for Occupational Education.

    The purpose of this seminar, attended by 21 participants, was to examine issues, problems, and components of models for the evaluation of occupational education. A primary objective was to stimulate interest in evaluation as an object of research effort. Papers presented include: (1) "The Value Structure of Society Toward Work" by Arthur R. Jones,…

  8. Preparing for the Market. Teacher Edition. Fashion Buying Series.

    ERIC Educational Resources Information Center

    Collins, Cindy

    This teacher's guide presents material for a unit on preparing for the retail fashion market. Content focuses on merchandise plans, computing open-to-buy, computing turnover, the components of a model stock plan, and criteria used when selecting a supplier. The guide contains 5 objectives, 6 group learning activities keyed to the objectives, 21…

  9. MCViNE- An object oriented Monte Carlo neutron ray tracing simulation package

    DOE PAGES

    Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; ...

    2015-11-28

    MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example, we used object-oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features together in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages, which facilitates porting instrument models from those codes. Furthermore, it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
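
    The recursive treatment of multiple scattering mentioned here can be sketched with a toy Monte Carlo tally, none of MCViNE's physics, just the control structure: each simulated neutron may scatter again with some fixed probability, and the recursion depth records the scattering order.

```python
import random

def scatter(order, p_rescatter, max_order, rng, tally):
    """Record a scattering event of the given order, then possibly recurse."""
    tally[order] = tally.get(order, 0) + 1
    if order < max_order and rng.random() < p_rescatter:
        scatter(order + 1, p_rescatter, max_order, rng, tally)

def run(n_neutrons, p_rescatter=0.3, max_order=5, seed=0):
    rng = random.Random(seed)   # seeded for reproducibility
    tally = {}
    for _ in range(n_neutrons):
        scatter(1, p_rescatter, max_order, rng, tally)
    return tally                # counts of single, double, ... scattering events
```

    Being able to cap or switch off higher orders is what lets a simulation separate how single and multiple scattering each contribute to measured intensities.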

  10. Framework for a clinical information system.

    PubMed

    Van De Velde, R; Lansiers, R; Antonissen, G

    2002-01-01

    The design and implementation of Clinical Information System architecture is presented. This architecture has been developed and implemented based on components following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology featuring centralised and departmental clinical information systems as the back-end store for all clinical data are used. Servers located in the "middle" tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and reuse of both data and business logic. Scalability as well as adaptability to constantly changing requirements via component driven computing are the main reasons for that approach.

  11. System and method for representing and manipulating three-dimensional objects on massively parallel architectures

    DOEpatents

    Karasick, M.S.; Strip, D.R.

    1996-01-30

    A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modeling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modeling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modeling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication. 8 figs.
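
    The directed-edge ("d-edge") record described in this patent ties each directed edge to exactly one face, which is what lets every processor build its records without inter-processor communication. A sequential sketch of such records for a closed mesh (field names invented for illustration):

```python
def build_dedges(faces):
    """faces: list of vertex loops, e.g. a triangle as (0, 1, 2).
    Emits one d-edge per (face, edge) incidence."""
    dedges = []
    for face_id, loop in enumerate(faces):
        n = len(loop)
        for i in range(n):
            dedges.append({
                "tail": loop[i],             # vertex descriptions of the edge
                "head": loop[(i + 1) % n],
                "face": face_id,             # the one face this d-edge bounds
            })
    return dedges
```

    On a closed, consistently oriented solid each undirected edge yields two d-edges with opposite directions and different faces, so adjacency queries reduce to pairing a d-edge with its reversal.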

  12. Toward improved calibration of watershed models: multisite many objective measures of information

    USDA-ARS?s Scientific Manuscript database

    This paper presents a computational framework for incorporation of disparate information from observed hydrologic responses at multiple locations into the calibration of watershed models. The framework consists of four components: (i) an a-priori characterization of system behavior; (ii) a formal an...

  13. ArgoEcoSystem-watershed (AgES-W) model evaluation for streamflow and nitrogen/sediment dynamics on a midwest agricultural watershed

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components under the Object Modeling System Version 3 (OMS3). The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the ad...

  14. The Speed of Serial Attention Shifts in Visual Search: Evidence from the N2pc Component.

    PubMed

    Grubert, Anna; Eimer, Martin

    2016-02-01

    Finding target objects among distractors in a visual search display is often assumed to be based on sequential movements of attention between different objects. However, the speed of such serial attention shifts is still under dispute. We employed a search task that encouraged the successive allocation of attention to two target objects in the same search display and measured N2pc components to determine how fast attention moved between these objects. Each display contained one digit in a known color (fixed-color target) and another digit whose color changed unpredictably across trials (variable-color target), together with two gray distractor digits. Participants' task was to find the fixed-color digit and compare its numerical value with that of the variable-color digit. N2pc components to fixed-color targets preceded N2pc components to variable-color digits, demonstrating that these two targets were indeed selected in a fixed serial order. The N2pc to variable-color digits emerged approximately 60 msec after the N2pc to fixed-color digits, which shows that attention can be reallocated very rapidly between different target objects in the visual field. When search display durations were increased, thereby relaxing the temporal demands on serial selection, the two N2pc components to fixed-color and variable-color targets were elicited within 90 msec of each other. Results demonstrate that sequential shifts of attention between different target locations can operate very rapidly, at speeds that are in line with the assumptions of serial selection models of visual search.

  15. Perceptual Learning of Object Shape

    PubMed Central

    Golcu, Doruk; Gilbert, Charles D.

    2009-01-01

    Recognition of objects is accomplished through the use of cues that depend on internal representations of familiar shapes. We used a paradigm of perceptual learning during visual search to explore what features human observers use to identify objects. Human subjects were trained to search for a target object embedded in an array of distractors, until their performance improved from near-chance levels to over 80% of trials in an object specific manner. We determined the role of specific object components in the recognition of the object as a whole by measuring the transfer of learning from the trained object to other objects sharing components with it. Depending on the geometric relationship of the trained object with untrained objects, transfer to untrained objects was observed. Novel objects that shared a component with the trained object were identified at much higher levels than those that did not, and this could be used as an indicator of which features of the object were important for recognition. Training on an object also transferred to the components of the object when these components were embedded in an array of distractors of similar complexity. These results suggest that objects are not represented in a holistic manner during learning, but that their individual components are encoded. Transfer between objects was not complete, and occurred for more than one component, regardless of how well they distinguish the object from distractors. This suggests that a joint involvement of multiple components was necessary for full performance. PMID:19864574

  16. Model documentation Renewable Fuels Module of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-01-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1996 Annual Energy Outlook forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described.

  17. Nanomechanical characterization of heterogeneous and hierarchical biomaterials and tissues using nanoindentation: the role of finite mixture models.

    PubMed

    Zadpoor, Amir A

    2015-03-01

    Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials will then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing such types of analysis. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data. Moreover, certain objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. As for cartilage experiments, FMMs indicate that at least three mixture components are needed for describing the measured histogram. While the mechanical properties of the softer mixture components, often assumed to be associated with Glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e. collagen network) considerably changed depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e. 3, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use. 
Copyright © 2014 Elsevier B.V. All rights reserved.
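
    The workflow this abstract describes (fitting a weighted Gaussian mixture by expectation-maximization, then choosing the number of components with an objective criterion such as BIC) can be sketched in plain Python. This is a minimal stdlib-only illustration on synthetic stand-in data, not the authors' published programs; the two-population data below merely mimics a bimodal modulus histogram.

    ```python
    import math
    import random

    def gauss_pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

    def em_gmm_1d(data, k, iters=100):
        """Fit a k-component 1-D Gaussian mixture by expectation-maximization."""
        n = len(data)
        srt = sorted(data)
        # Deterministic init: means at spread quantiles, shared variance, equal weights.
        mu = [srt[int((j + 0.5) * n / k)] for j in range(k)]
        mean = sum(data) / n
        var = [max(1e-6, sum((x - mean) ** 2 for x in data) / n)] * k
        w = [1.0 / k] * k
        for _ in range(iters):
            # E-step: posterior responsibility of each component for each point.
            resp = []
            for x in data:
                p = [w[j] * gauss_pdf(x, mu[j], var[j]) for j in range(k)]
                s = sum(p) or 1e-300
                resp.append([pj / s for pj in p])
            # M-step: re-estimate weights, means, and variances.
            for j in range(k):
                nj = max(sum(r[j] for r in resp), 1e-12)
                w[j] = nj / n
                mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
                var[j] = max(1e-6, sum(r[j] * (x - mu[j]) ** 2
                                       for r, x in zip(resp, data)) / nj)
        return w, mu, var

    def bic(data, w, mu, var):
        """Bayesian information criterion; a 1-D k-component GMM has 3k-1 parameters."""
        ll = sum(math.log(sum(wj * gauss_pdf(x, m, v)
                              for wj, m, v in zip(w, mu, var)) or 1e-300)
                 for x in data)
        k = len(w)
        return (3 * k - 1) * math.log(len(data)) - 2.0 * ll

    # Synthetic bimodal "modulus" data: two populations, fit k = 1..3, pick min BIC.
    rng = random.Random(1)
    data = ([rng.gauss(1.0, 0.5) for _ in range(300)]
            + [rng.gauss(5.0, 0.5) for _ in range(300)])
    fits = {k: em_gmm_1d(data, k) for k in (1, 2, 3)}
    best_k = min(fits, key=lambda k: bic(data, *fits[k]))
    ```

    For well-separated populations the criterion selects two components; on real indentation histograms the optimum can be higher, as the paper reports for cartilage.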

  18. Object-based modeling, identification, and labeling of medical images for content-based retrieval by querying on intervals of attribute values

    NASA Astrophysics Data System (ADS)

    Thies, Christian; Ostwald, Tamara; Fischer, Benedikt; Lehmann, Thomas M.

    2005-04-01

    The classification and measurement of objects in medical images is important in radiological diagnostics and education, especially when using large databases as knowledge resources, for instance a picture archiving and communication system (PACS). The main challenge is modeling the medical knowledge and diagnostic context needed to label the sought objects. This task is referred to as closing the semantic gap between low-level pixel information and high-level application knowledge. This work describes an approach that allows labeling of a priori unknown objects in an intuitive way. Our approach consists of four main components. First, an image is completely decomposed into all visually relevant partitions on different scales, providing a hierarchically organized set of regions. Afterwards, a set of descriptive features is computed for each of the obtained regions. In this data structure, objects are represented by regions with characteristic attributes. The actual object identification is the formulation of a query: it consists of attributes on which intervals are defined, describing those regions that correspond to the sought objects. Since the objects are a priori unknown, they are described by a medical expert by means of an intuitive graphical user interface (GUI). This GUI is the fourth component. It enables complex object definitions by browsing the data structure and examining the attributes to formulate the query. The query is executed, and if the sought objects have not been identified, its parameterization is refined. Using this heuristic approach, object models for hand radiographs have been developed to extract bones from a single hand in different anatomical contexts, demonstrating the applicability of the labeling concept. Using a rule for metacarpal bones on a series of 105 images, this type of bone could be retrieved with a precision of 0.53 and a recall of 0.6.

  19. The X-ray background contributed by QSOs ejected from galaxies

    NASA Technical Reports Server (NTRS)

    Burbidge, G.; Hoyle, F.

    1996-01-01

    The X-ray background can be explained as coming from the integrated effect of X-ray emitting quasi-stellar objects (QSOs) ejected from spiral galaxies. The model developed to interpret the observations is summarized. The redshift of the QSOs consisted of an intrinsic component and of a cosmological component. The QSOs have a spatial density proportional to that of normal galaxies.

  20. Object-oriented analysis and design: a methodology for modeling the computer-based patient record.

    PubMed

    Egyhazy, C J; Eyestone, S M; Martino, J; Hodgson, C L

    1998-08-01

    The article highlights the importance of an object-oriented analysis and design (OOAD) methodology for the computer-based patient record (CPR) in the military environment. Many OOAD methodologies do not adequately scale up, allow for efficient reuse of their products, or accommodate legacy systems. A methodology that addresses these issues is formulated and used to demonstrate its applicability in a large-scale health care service system. During a period of 6 months, a team of object modelers and domain experts formulated an OOAD methodology tailored to the Department of Defense Military Health System and used it to produce components of an object model for simple order processing. This methodology and the lessons learned during its implementation are described. This approach is necessary to achieve broad interoperability among heterogeneous automated information systems.

  1. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2013-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development. Currently there is no fully coupled computational tool to analyze this fluid/structure interaction process. The objective of this study was to develop a fully coupled aeroelastic modeling capability to describe the fluid/structure interaction process during transient nozzle operations. The aeroelastic model is composed of three components: a computational fluid dynamics component based on an unstructured-grid, pressure-based formulation; a computational structural dynamics component developed in the framework of modal analysis; and a fluid-structural interface component. The developed aeroelastic model was applied to the transient nozzle startup process of the Space Shuttle Main Engine at sea level. The computed nozzle side loads and axial nozzle wall pressure profiles from the aeroelastic nozzle are compared with the published rigid-nozzle results, and the impact of the fluid/structure interaction on nozzle side loads is interrogated and presented.

  2. Tobacco Dependence Treatment Teaching by Medical School Clerkship Preceptors: Survey Responses from more than 1,000 US Medical Students

    PubMed Central

    Geller, Alan C.; Hayes, Rashelle B.; Leone, Frank; Churchill, Linda C.; Leung, Katherine; Reed, George; Jolicoeur, Denise; Okuliar, Catherine; Adams, Michael; Murray, David M.; Liu, Qin; Waugh, Jonathan; David, Sean; Ockene, Judith K.

    2013-01-01

    Objective: To determine factors associated with tobacco cessation counseling in medical school clerkships. Methods: Third-year medical students at 10 medical schools across the United States completed a 100-item survey, measuring the frequency with which they experienced their preceptors’ providing clinical teaching components: clear instruction, feedback, modeling behavior, setting clear objectives, and responding to questions about tobacco dependence counseling, as well as the frequency of use of tobacco prompts and office systems. Our primary dependent measure was student self-reported skill level for items of tobacco dependence treatment (e.g. the “5As”). Results: Surveys were completed by 1213 students. For both family medicine and internal medicine clerkships, modeling and providing clear instruction on ways to provide tobacco counseling were reported most commonly. In contrast, providing feedback and clear objectives for tobacco dependence treatment lagged behind. Overall, students who reported preceptors’ provision of optimal clinical teaching components and office system prompts in both family medicine and internal medicine clerkships had higher self-reported skill (p<0.001) than students with no exposure or exposure during only one of the clerkships. Conclusions: Future educational interventions intended to help students adopt effective tobacco dependence treatment techniques should be engineered to facilitate these critical precepting components. PMID:23623894

  3. Interpolation on the manifold of K component GMMs.

    PubMed

    Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas

    2015-12-01

    Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of PDFs, motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first-order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for the analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof-of-principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, which may be of independent interest.
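
    The "interpolation should return a K component GMM" requirement can be illustrated with a deliberately naive componentwise scheme. This is not the paper's manifold-respecting algorithm: components are matched by index (an assumption; the paper addresses this matching on the underlying geometry), weights are interpolated and renormalized, means linearly, and variances in log-space so they stay positive for every t in [0, 1].

    ```python
    import math

    def interpolate_gmm(gmm_a, gmm_b, t):
        """Componentwise interpolation of two K-component 1-D GMMs.

        Each GMM is a list of (weight, mean, variance) triples; the result is
        again a list of K such triples, i.e. a K-component GMM.
        """
        out = []
        for (wa, ma, va), (wb, mb, vb) in zip(gmm_a, gmm_b):
            w = (1.0 - t) * wa + t * wb                                # blend weights
            m = (1.0 - t) * ma + t * mb                                # blend means
            v = math.exp((1.0 - t) * math.log(va) + t * math.log(vb))  # log-space: v > 0
            out.append((w, m, v))
        z = sum(w for w, _, _ in out)
        return [(w / z, m, v) for w, m, v in out]  # renormalize weights

    # Halfway between two 2-component mixtures is still a 2-component mixture.
    gmm_a = [(0.5, 0.0, 1.0), (0.5, 4.0, 1.0)]
    gmm_b = [(0.5, 2.0, 4.0), (0.5, 6.0, 4.0)]
    gmm_mid = interpolate_gmm(gmm_a, gmm_b, 0.5)
    ```

    Contrast this with naively mixing the two densities, which would produce a 2K-component object; keeping K fixed is exactly the closure property the paper formalizes.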

  4. Model Strategies for the Recruitment and Retention of Undergraduate Criminal Justice Students.

    ERIC Educational Resources Information Center

    Positive Futures, Inc., Washington, DC.

    Components for a model strategy/program for the recruitment and retention of students in criminal justice (CJ) programs are presented to stimulate planning activity. These 24 general examples of approaches identify the strategy, state the objectives, provide a rationale, describe implementation, discuss intervention activities, and delineate the…

  5. Organizational Resilience and Culture a Model for Information Technology Service Management (ITSM)

    ERIC Educational Resources Information Center

    Granito, Francis A.

    2011-01-01

    Organizational change and organizational culture have been studied and written about by many authors, most notably by Edgar Schein (1990, 1992), and are named as critical components of organizational maturity through such industry standards as The Capability Maturity Model Integration (CMMI), Control Objectives for Information and Related…

  6. Violence Prevention: A Communication-Based Curriculum.

    ERIC Educational Resources Information Center

    Rancer, Andrew S.; Kosberg, Roberta L.

    This paper first outlines the objectives of programs which focus on conflict management and violence prevention. The paper then describes the application of a model of aggressive communication as a potential component in conflict management and violence prevention programs. The model presented in the paper incorporates training in argument and…

  7. Resource Manual for Teacher Training Programs in Economics.

    ERIC Educational Resources Information Center

    Saunders, Phillip, Ed.; And Others

    This resource manual uses a general systems model for educational planning, instruction, and evaluation to describe a college introductory economics course. The goal of the manual is to help beginning or experienced instructors teach more effectively. The model components include needs, goals, objectives, constraints, planning and strategy,…

  8. Development of a cultural heritage object BIM model

    NASA Astrophysics Data System (ADS)

    Braila, Natalya; Vakhrusheva, Svetlana; Martynenko, Elena; Kisel, Tatyana

    2017-10-01

    BIM technology was originally aimed at the design and construction industry, but its application to the study and operation of architectural heritage can fundamentally change this activity and raise it to a new qualitative level. The article considers the effective introduction of BIM technologies for solving administrative problems in the operation and development of architectural monuments. It proposes creating an information model of a cultural heritage building that includes a full complement of information about the object: historical and archival, legal, technical, administrative, etc. One component of the model will be a 3D model of the cultural heritage object with color coding of elements by degree of wear and priority of repair. This model will make it possible to visually assess the technical condition of the building as a whole and to gain a general idea of the scale of the necessary repair and construction work, which improves the quality of the object's operation and also simplifies and accelerates the processing of information when a heritage building must be assessed as an investment object.

  9. Test model designs for advanced refractory ceramic materials

    NASA Technical Reports Server (NTRS)

    Tran, Huy Kim

    1993-01-01

    The next generation of space vehicles will be subjected to severe aerothermal loads and will require an improved thermal protection system (TPS) and other advanced vehicle components. In order to ensure the satisfactory performance of the TPS and other advanced vehicle materials and components, testing is to be performed in environments similar to space flight. The design and fabrication of the test models should be fairly simple but still accomplish test objectives. In the Advanced Refractory Ceramic Materials test series, the models and model holders will need to withstand the required heat fluxes of 340 to 817 W/sq cm or surface temperatures in the range of 2700 K to 3000 K. The model holders should provide one-dimensional (1-D) heat transfer to the samples and the appropriate flow field without compromising the primary test objectives. Optical properties such as the effective emissivity, catalytic efficiency coefficients, thermal properties, and mass loss measurements are also taken into consideration in the design process. Therefore, it is the intent of this paper to demonstrate design schemes for the different models and model holders that accommodate these test requirements and ensure safe operation in a typical arc jet facility.

  10. A Computational Model of Spatial Development

    NASA Astrophysics Data System (ADS)

    Hiraki, Kazuo; Sashima, Akio; Phillips, Steven

    Psychological experiments on children's development of spatial knowledge suggest experience at self-locomotion with visual tracking as important factors. Yet, the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when outside the field-of-view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environment state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model of sensory input to desired motor commands; and a forward model of motor commands to desired sensory input (goals). The robot was tested on the `three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are egocentric (self-centered); when given the ability of self-locomotion, the robot responds allocentrically.

  11. Evidence of a primordial solar wind. [T Tauri-type evolution model

    NASA Technical Reports Server (NTRS)

    Sonett, C. P.

    1974-01-01

    A model is reviewed which requires a T Tauri 'wind' and at the same time encompasses certain early-object stellar features. The theory rests on electromagnetic induction driven by the 'wind'. Plasma confinement of the induced field prohibits a scattered field, and all energy loss is via ohmic heating in the scatterer (i.e., planetary objects). Two modes, one caused by the interplanetary electric field (transverse magnetic) and the other by time variations in the interplanetary magnetic field (transverse electric) are present. Parent body melting, lunar surface melting, and a primordial magnetic field are components of the proposed model.

  12. Response properties in the adsorption-desorption model on a triangular lattice

    NASA Astrophysics Data System (ADS)

    Šćepanović, J. R.; Stojiljković, D.; Jakšić, Z. M.; Budinski-Petković, Lj.; Vrhovac, S. B.

    2016-06-01

    The out-of-equilibrium dynamical processes during the reversible random sequential adsorption (RSA) of objects of various shapes on a two-dimensional triangular lattice are studied numerically by means of Monte Carlo simulations. We focused on the influence of the order of symmetry axis of the shape on the response of the reversible RSA model to sudden perturbations of the desorption probability Pd. We provide a detailed discussion of the significance of collective events for governing the time coverage behavior of shapes with different rotational symmetries. We calculate the two-time density-density correlation function C(t ,tw) for various waiting times tw and show that longer memory of the initial state persists for the more symmetrical shapes. Our model displays nonequilibrium dynamical effects such as aging. We find that the correlation function C(t ,tw) for all objects scales as a function of single variable ln(tw) / ln(t) . We also study the short-term memory effects in two-component mixtures of extended objects and give a detailed analysis of the contribution to the densification kinetics coming from each mixture component. We observe the weakening of correlation features for the deposition processes in multicomponent systems.
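
    The adsorption-desorption competition at the core of the reversible RSA model can be sketched with a Monte Carlo loop. The toy below is a heavily simplified 1-D stand-in for the paper's triangular-lattice simulations: each step picks a random pair of adjacent sites and attempts desorption with probability p_des (clearing any fully occupied pair, which is cruder than removing whole deposited dimers) or otherwise attempts dimer adsorption. All parameters are illustrative, not the paper's.

    ```python
    import random

    def reversible_rsa_1d(length=100, steps=20000, p_des=0.02, seed=0):
        """Toy reversible random sequential adsorption of dimers on a 1-D lattice.

        Returns the final coverage (fraction of occupied sites). With small
        p_des the desorption moves let blocked vacancies heal, so the coverage
        densifies past the irreversible dimer jamming limit (~0.8647).
        """
        rng = random.Random(seed)
        lattice = [0] * length  # 0 = empty site, 1 = occupied site
        for _ in range(steps):
            i = rng.randrange(length - 1)  # pick a random adjacent pair (i, i+1)
            if rng.random() < p_des:
                if lattice[i] and lattice[i + 1]:
                    lattice[i] = lattice[i + 1] = 0   # desorption attempt
            else:
                if not lattice[i] and not lattice[i + 1]:
                    lattice[i] = lattice[i + 1] = 1   # adsorption attempt
        return sum(lattice) / length

    coverage = reversible_rsa_1d()
    ```

    The paper's response and aging analyses would additionally record the coverage time series and two-time correlations after perturbing p_des; this sketch only produces the steady densification the dynamics relax toward.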

  13. Simulation of finite-strain inelastic phenomena governed by creep and plasticity

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Bloomfield, Max O.; Oberai, Assad A.

    2017-11-01

    Inelastic mechanical behavior plays an important role in many applications in science and engineering. Phenomenologically, this behavior is often modeled as plasticity or creep. Plasticity is used to represent the rate-independent component of inelastic deformation and creep is used to represent the rate-dependent component. In several applications, especially those at elevated temperatures and stresses, these processes occur simultaneously. In order to model these processes, we develop a rate-objective, finite-deformation constitutive model for plasticity and creep. The plastic component of this model is based on rate-independent J_2 plasticity, and the creep component is based on a thermally activated Norton model. We describe the implementation of this model within a finite element formulation, and present a radial return mapping algorithm for it. This approach reduces the additional complexity of modeling plasticity and creep, over thermoelasticity, to just solving one nonlinear scalar equation at each quadrature point. We implement this algorithm within a multiphysics finite element code and evaluate the consistent tangent through automatic differentiation. We verify and validate the implementation, apply it to modeling the evolution of stresses in the flip chip manufacturing process, and test its parallel strong-scaling performance.
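
    The "one scalar equation per quadrature point" structure of return mapping is easiest to see in one dimension. With rate-independent plasticity and linear isotropic hardening the scalar consistency equation is linear and has a closed-form root; adding the paper's Norton creep term would make it genuinely nonlinear and call for Newton iteration at each point. The material constants below are illustrative, not from the paper.

    ```python
    def return_map_1d(eps, eps_p, alpha, E=200e3, H=10e3, sigma_y=250.0):
        """One strain-driven step of 1-D rate-independent plasticity (return mapping).

        eps: total strain; eps_p: plastic strain; alpha: hardening variable.
        Returns the updated (stress, eps_p, alpha).
        """
        sigma_trial = E * (eps - eps_p)                      # elastic predictor
        f_trial = abs(sigma_trial) - (sigma_y + H * alpha)   # trial yield function
        if f_trial <= 0.0:
            return sigma_trial, eps_p, alpha                 # elastic step, no return
        # Plastic corrector: solve the scalar consistency equation f(dgamma) = 0.
        # Linear hardening makes it linear, so "Newton" converges in one step.
        dgamma = f_trial / (E + H)
        sign = 1.0 if sigma_trial >= 0.0 else -1.0
        sigma = sigma_trial - E * dgamma * sign              # return to yield surface
        return sigma, eps_p + dgamma * sign, alpha + dgamma

    # A step well beyond yield: the stress lands exactly on the updated yield surface.
    sigma, eps_p_new, alpha_new = return_map_1d(eps=0.005, eps_p=0.0, alpha=0.0)
    ```

    In the finite element setting this update, and its linearization (the consistent tangent), is evaluated independently at every quadrature point, which is why the scheme scales well.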

  14. Integration of Component Knowledge in Penalized-Likelihood Reconstruction with Morphological and Spectral Uncertainties.

    PubMed

    Stayman, J Webster; Tilley, Steven; Siewerdsen, Jeffrey H

    2014-01-01

    Previous investigations [1-3] have demonstrated that integrating specific knowledge of the structure and composition of components like surgical implants, devices, and tools into a model-based reconstruction framework can improve image quality and allow for potential exposure reductions in CT. Using device knowledge in practice is complicated by uncertainties in the exact shape of components and their particular material composition. Such unknowns in the morphology and attenuation properties lead to errors in the forward model that limit the utility of component integration. In this work, a methodology is presented to accommodate both uncertainties in shape as well as unknown energy-dependent attenuation properties of the surgical devices. This work leverages the so-called known-component reconstruction (KCR) framework [1] with a generalized deformable registration operator and modifications to accommodate a spectral transfer function in the component model. Moreover, since this framework decomposes the object into separate background anatomy and "known" component factors, a mixed fidelity forward model can be adopted so that measurements associated with projections through the surgical devices can be modeled with much greater accuracy. A deformable KCR (dKCR) approach using the mixed fidelity model is introduced and applied to a flexible wire component with unknown structure and composition. Image quality advantages of dKCR over traditional reconstruction methods are illustrated in cone-beam CT (CBCT) data acquired on a testbench emulating a 3D-guided needle biopsy procedure - i.e., a deformable component (needle) with strong energy-dependent attenuation characteristics (steel) within a complex soft-tissue background.

  15. Coupled stochastic soil moisture simulation-optimization model of deficit irrigation

    NASA Astrophysics Data System (ADS)

    Alizadeh, Hosein; Mousavi, S. Jamshid

    2013-07-01

    This study presents an explicit stochastic optimization-simulation model of short-term deficit irrigation management for large-scale irrigation districts. The model, which is a nonlinear nonconvex program with an economic objective function, is built on an agrohydrological simulation component. The simulation component integrates (1) an explicit stochastic model of soil moisture dynamics of the crop-root zone considering interaction of stochastic rainfall and irrigation with shallow water table effects, (2) a conceptual root zone salt balance model, and (3) the FAO crop yield model. A Particle Swarm Optimization algorithm, linked to the simulation component, solves the resulting nonconvex program with significantly better computational performance than a Monte Carlo-based implicit stochastic optimization model. The model has been tested first by applying it in single-crop irrigation problems, through which the effects of the severity of water deficit on the objective function (net benefit), root-zone water balance, and irrigation water needs have been assessed. Then, the model has been applied in the Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in the southwest of Iran. While the maximum net benefit has been obtained for a stress-avoidance (SA) irrigation policy, the highest water profitability resulted when only about 60% of the water used in the SA policy is applied. The DAID, with respectively 33% of the total cultivated area and 37% of the total applied water, has produced only 14% of the total net benefit due to low-valued crops and adverse soil and shallow water table conditions.
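
    The optimizer used in this study, Particle Swarm Optimization, is simple enough to sketch in full. The version below is a generic global-best PSO on a toy objective; in the paper's setting the fitness call would invoke the agrohydrological simulation component rather than an analytic function, and the inertia/acceleration coefficients here are textbook defaults, not the study's settings.

    ```python
    import random

    def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimize f over a box via global-best Particle Swarm Optimization."""
        rng = random.Random(seed)
        dim = len(bounds)
        xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in xs]              # each particle's best position
        pbest_f = [f(x) for x in xs]
        g = min(range(n_particles), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]  # swarm's best position
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    # Velocity: inertia + pull toward personal and global bests.
                    vs[i][d] = (w * vs[i][d]
                                + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                                + c2 * rng.random() * (gbest[d] - xs[i][d]))
                    # Position update, clamped to the search box.
                    xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]),
                                   bounds[d][1])
                fx = f(xs[i])
                if fx < pbest_f[i]:
                    pbest[i], pbest_f[i] = xs[i][:], fx
                    if fx < gbest_f:
                        gbest, gbest_f = xs[i][:], fx
        return gbest, gbest_f

    # Toy objective with known minimum at (1, -2).
    best_x, best_f = pso(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                         [(-5.0, 5.0), (-5.0, 5.0)])
    ```

    PSO's appeal for this kind of problem is that it needs only objective evaluations, no gradients, which is why it pairs naturally with a black-box simulation component.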

  16. Red nuggets grow inside-out: evidence from gravitational lensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oldham, Lindsay; Auger, Matthew W.; Fassnacht, Christopher D.

    Here, we present a new sample of strong gravitational lens systems where both the foreground lenses and background sources are early-type galaxies. Using imaging from Hubble Space Telescope (HST)/Advanced Camera for Surveys (ACS) and Keck/NIRC2, we model the surface brightness distributions and show that the sources form a distinct population of massive, compact galaxies at redshifts 0.4 ≲ z ≲ 0.7, lying systematically below the size–mass relation of the global elliptical galaxy population at those redshifts. These may therefore represent relics of high-redshift red nuggets or their partly evolved descendants. We exploit the magnifying effect of lensing to investigate the structural properties, stellar masses and stellar populations of these objects with a view to understanding their evolution. We model these objects parametrically and find that they generally require two Sérsic components to properly describe their light profiles, with one more spheroidal component alongside a more envelope-like component, which is slightly more extended though still compact. This is consistent with the hypothesis of the inside-out growth of these objects via minor mergers. Lastly, we also find that the sources can be characterized by red-to-blue colour gradients as a function of radius which are stronger at low redshift, indicative of ongoing accretion, but that their environments generally appear consistent with that of the general elliptical galaxy population, contrary to recent suggestions that these objects are predominantly associated with clusters.

  17. Red nuggets grow inside-out: evidence from gravitational lensing

    DOE PAGES

    Oldham, Lindsay; Auger, Matthew W.; Fassnacht, Christopher D.; ...

    2016-11-03

    Here, we present a new sample of strong gravitational lens systems where both the foreground lenses and background sources are early-type galaxies. Using imaging from Hubble Space Telescope (HST)/Advanced Camera for Surveys (ACS) and Keck/NIRC2, we model the surface brightness distributions and show that the sources form a distinct population of massive, compact galaxies at redshifts 0.4 ≲ z ≲ 0.7, lying systematically below the size–mass relation of the global elliptical galaxy population at those redshifts. These may therefore represent relics of high-redshift red nuggets or their partly evolved descendants. We exploit the magnifying effect of lensing to investigate the structural properties, stellar masses and stellar populations of these objects with a view to understanding their evolution. We model these objects parametrically and find that they generally require two Sérsic components to properly describe their light profiles, with one more spheroidal component alongside a more envelope-like component, which is slightly more extended though still compact. This is consistent with the hypothesis of the inside-out growth of these objects via minor mergers. Lastly, we also find that the sources can be characterized by red-to-blue colour gradients as a function of radius which are stronger at low redshift, indicative of ongoing accretion, but that their environments generally appear consistent with that of the general elliptical galaxy population, contrary to recent suggestions that these objects are predominantly associated with clusters.

  18. Impeller leakage flow modeling for mechanical vibration control

    NASA Technical Reports Server (NTRS)

    Palazzolo, Alan B.

    1996-01-01

    HPOTP and HPFTP vibration test results have exhibited transient and steady characteristics which may be due to impeller leakage path (ILP) related forces. For example, an axial shift in the rotor could suddenly change the ILP clearances and lengths, yielding dynamic coefficient and subsequent vibration changes. ILP models are more complicated than conventional single-component annular seal models due to their radial flow component (Coriolis and centrifugal acceleration), complex geometry (axial/radial clearance coupling), internal boundary (transition) flow conditions between mechanical components along the ILP, and longer length, requiring moment as well as force coefficients. Flow coupling between mechanical components results from mass and energy conservation applied at their interfaces. Typical components along the ILP include an inlet seal, curved shroud, and an exit seal, which may be a stepped labyrinth type. Von Pragenau (MSFC) has modeled labyrinth seals as a series of plain annular seals for leakage and dynamic coefficient prediction. These multi-tooth components increase the total number of 'flow coupled' components in the ILP. Childs developed an analysis for an ILP consisting of a single, constant-clearance shroud with an exit seal represented by a lumped flow-loss coefficient. This same geometry was later extended to include compressible flow. The objective of the current work is to: supply ILP leakage-force impedance-dynamic coefficient modeling software to MSFC engineers, based on incompressible/compressible bulk flow theory; design the software to model a generic-geometry ILP described by a series of components lying along an arbitrarily directed path; validate the software by comparison to available test data, CFD and bulk models; and develop a hybrid CFD-bulk flow model of an ILP to improve modeling accuracy within practical run time constraints.

  19. Identifying Bottom-Up and Top-Down Components of Attentional Weight by Experimental Analysis and Computational Modeling

    ERIC Educational Resources Information Center

    Nordfang, Maria; Dyrholm, Mads; Bundesen, Claus

    2013-01-01

    The attentional weight of a visual object depends on the contrast of the features of the object to its local surroundings (feature contrast) and the relevance of the features to one's goals (feature relevance). We investigated the dependency in partial report experiments with briefly presented stimuli but unspeeded responses. The task was to…

  20. Performance-Based Education Project: A Component of the Institutional Outcomes Model. Course Prototype. U.S. History 121 (HIS121).

    ERIC Educational Resources Information Center

    John Wood Community Coll., Quincy, IL.

    This document is an assessment of a performance-based education project that involved a United States history course, offered at a community college. Thirty-two student performance objectives are outlined and lesson plans designed to achieve each objective are presented. Each lesson plan consists of an instructional topic, prerequisites, interest…

  1. Using Experimental Design and Data Analysis to Study the Enlisted Specialty Model for the U.S. Army G1

    DTIC Science & Technology

    2010-06-01

    Recommendations for future study. Appendix: objective function coefficients. List of references. ...experiments are designed so an analyst can conduct simultaneous examination of multiple factors and explore these factors and their relationship to output... or more components.

  2. Connected Component Model for Multi-Object Tracking.

    PubMed

    He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan

    2016-08-01

    In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is an equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
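
    The equivalence-partitioning step the abstract argues for reduces to finding connected components over coupled observations, which union-find handles directly. This is only that one step, not the full CCM pipeline: the coupling pairs below are hypothetical inputs standing in for the spatial-temporal constraints the paper derives.

    ```python
    def partition_into_subproblems(n_obs, coupled_pairs):
        """Split observations into independent association subproblems.

        coupled_pairs lists (i, j) index pairs whose associations are coupled.
        Because coupling is an equivalence relation, union-find recovers the
        connected components, and each component can be solved on its own.
        """
        parent = list(range(n_obs))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for i, j in coupled_pairs:
            parent[find(i)] = find(j)          # union the two components

        groups = {}
        for obs in range(n_obs):
            groups.setdefault(find(obs), []).append(obs)
        return sorted(groups.values())

    # Six observations; constraints couple {0, 1, 2} and {4, 5}; 3 stands alone.
    components = partition_into_subproblems(6, [(0, 1), (1, 2), (4, 5)])
    ```

    Each returned component then defines a much smaller assignment problem, which is where the claimed efficiency of solving "a sequence of independent subproblems" comes from.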

  3. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to the market pressures that have motivated a multilevel supply chain structure in other widget industries: recovery of design costs, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems.
However, the lack of a standard real-time distributed object operating system, lack of a standard Computer-Aided Software Environment (CASE) tool notation, and lack of a standard CASE tool repository have limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations as well as assemble new tools on demand from existing tools and architecture design repositories.

  4. A dual-process account of auditory change detection.

    PubMed

    McAnally, Ken I; Martin, Russell L; Eramudugolla, Ranmalee; Stuart, Geoffrey W; Irvine, Dexter R F; Mattingley, Jason B

    2010-08-01

    Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed objects with those predicted by change-detection models based on signal detection theory (SDT) and high-threshold theory (HTT). Detected changes were not identified as accurately as predicted by models based on either theory, suggesting that some changes are detected by a process that does not support change identification. Undetected changes were identified as accurately as predicted by the HTT model but much less accurately than predicted by the SDT models. The process underlying change detection was investigated further by determining receiver-operating characteristics (ROCs). The ROCs did not conform to those predicted by either an SDT or an HTT model but were well modeled by a dual-process model that incorporated HTT and SDT components. The dual-process model also accurately predicted the rates at which detected and undetected changes were correctly identified.
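    As a rough illustration of such a dual-process ROC, the sketch below combines a high-threshold recollection-style parameter R with an equal-variance SDT component d', using a common Yonelinas-style parameterization. This is an assumed form for illustration, with hypothetical parameter values, not the authors' fitted model.

```python
# Hedged sketch: dual-process ROC = high-threshold component R plus an
# equal-variance SDT component d'. Parameterization is assumed
# (Yonelinas-style); values are hypothetical.
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def dual_process_roc(R, d_prime, criteria):
    """Return (false-alarm, hit) pairs across response criteria."""
    points = []
    for c in criteria:
        hit = R + (1.0 - R) * phi(d_prime / 2.0 - c)
        fa = phi(-d_prime / 2.0 - c)
        points.append((fa, hit))
    return points

roc = dual_process_roc(R=0.3, d_prime=1.0, criteria=[1.5, 1.0, 0.5, 0.0, -0.5])
for fa, hit in roc:
    print(f"FA={fa:.3f}  Hit={hit:.3f}")
```

    The threshold component puts a nonzero intercept on the ROC (hits never fall below R), which is one signature that distinguishes a dual-process curve from a pure SDT curve.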

  5. Combustion Technology for Incinerating Wastes from Air Force Industrial Processes.

    DTIC Science & Technology

    1984-02-01

    The assumption of equilibrium between environmental compartments. * The statistical extrapolations yielding "safe" doses of various constituents...would be contacted to identify the assumptions and data requirements needed to design, construct and implement the model. The model’s primary objective...Recovery Planning Model (RRPLAN) is described. This section of the paper summarizes the model’s assumptions , major components and modes of operation

  6. Shape and texture fused recognition of flying targets

    NASA Astrophysics Data System (ADS)

    Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás

    2011-06-01

    This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas like alerting, recognition and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparisons to shape and texture based query results on a previously gathered real life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera and GPS).

  7. Energy Efficient Engine Low Pressure Subsystem Flow Analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Lynn, Sean R.; Heidegger, Nathan J.; Delaney, Robert A.

    1998-01-01

    The objective of this project is to provide the capability to analyze the aerodynamic performance of the complete low pressure subsystem (LPS) of the Energy Efficient Engine (EEE). The analyses were performed using three-dimensional Navier-Stokes numerical models employing advanced clustered processor computing platforms. The analysis evaluates the impact of steady aerodynamic interaction effects between the components of the LPS at design and off-design operating conditions. Mechanical coupling is provided by adjusting the rotational speed of common shaft-mounted components until a power balance is achieved. The Navier-Stokes modeling of the complete low pressure subsystem provides critical knowledge of component aero/mechanical interactions that previously were unknown to the designer until after hardware testing.
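    The mechanical coupling step described above can be sketched as a one-dimensional root find on shaft speed. The component power curves below are invented placeholders, not EEE data; they only illustrate the iteration toward a power balance.

```python
# Minimal sketch (hypothetical component curves): adjust the common shaft
# speed by bisection until the power delivered by the LP turbine matches
# the power absorbed by the shaft-mounted fan, i.e. a power balance.

def fan_power(n):
    """Power absorbed by the fan, MW (assumed cubic in shaft speed, rpm)."""
    return 2.0e-9 * n**3

def turbine_power(n):
    """Power delivered by the LP turbine, MW (assumed mildly speed-dependent)."""
    return 30.0 - 0.002 * (n - 3500.0)

def balance_shaft_speed(lo=1000.0, hi=6000.0, tol=1e-6):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fan_power(mid) > turbine_power(mid):
            hi = mid   # fan absorbs too much power: slow the shaft down
        else:
            lo = mid
    return 0.5 * (lo + hi)

n = balance_shaft_speed()
print(f"balanced speed ~ {n:.1f} rpm, "
      f"residual {fan_power(n) - turbine_power(n):.2e} MW")
```

    In the actual coupled Navier-Stokes analysis the two power curves come from the flow solutions themselves, but the balancing logic is the same: iterate on a common speed until supply equals demand.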

  8. Energy Efficient Engine Low Pressure Subsystem Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.; Lynn, Sean R.; Veres, Joseph P.

    1998-01-01

    The objective of this study was to demonstrate the capability to analyze the aerodynamic performance of the complete low pressure subsystem (LPS) of the Energy Efficient Engine (EEE). Detailed analyses were performed using three-dimensional Navier-Stokes numerical models employing advanced clustered processor computing platforms. The analysis evaluates the impact of steady aerodynamic interaction effects between the components of the LPS at design and off-design operating conditions. Mechanical coupling is provided by adjusting the rotational speed of common shaft-mounted components until a power balance is achieved. The Navier-Stokes modeling of the complete low pressure subsystem provides critical knowledge of component aero/mechanical interactions that previously were unknown to the designer until after hardware testing.

  9. An Employee Total Health Management–Based Survey of Iowa Employers

    PubMed Central

    Merchant, James A.; Lind, David P.; Kelly, Kevin M.; Hall, Jennifer L.

    2015-01-01

    Objective: To implement an Employee Total Health Management (ETHM) model-based questionnaire and provide estimates of model program elements among a statewide sample of Iowa employers. Methods: Survey of a stratified random sample of Iowa employers to characterize and estimate employer participation in ETHM program elements. Results: Iowa employers are implementing fewer than 30% of all 12 components of ETHM, with the exception of occupational safety and health (46.6%) and worker compensation insurance coverage (89.2%), but intend modest expansion of all components in the coming year. Conclusions: The Employee Total Health Management questionnaire-based survey provides estimates of the progress Iowa employers are making toward implementing components of total worker health programs. PMID:24284757

  10. Evaluating models of object-decision priming: evidence from event-related potential repetition effects.

    PubMed

    Soldan, Anja; Mangels, Jennifer A; Cooper, Lynn A

    2006-03-01

    This study was designed to differentiate between structural description and bias accounts of performance in the possible/impossible object-decision test. Two event-related potential (ERP) studies examined how the visual system processes structurally possible and impossible objects. Specifically, the authors investigated the effects of object repetition on a series of early posterior components during structural (Experiment 1) and functional (Experiment 2) encoding and the relationship of these effects to behavioral measures of priming. In both experiments, the authors found repetition enhancement of the posterior N1 and N2 for possible objects only. In addition, the magnitude of the N1 repetition effect for possible objects was correlated with priming for possible objects. Although the behavioral results were more ambiguous, these ERP results fail to support bias models that hold that both possible and impossible objects are processed similarly in the visual system. Instead, they support the view that priming is supported by a structural description system that encodes the global 3-dimensional structure of an object.

  11. Comparative Analysis on Nonlinear Models for Ron Gasoline Blending Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Aguilera, R. Carreño; Yu, Wen; Rodríguez, J. C. Tovar; Mosqueda, M. Elena Acevedo; Ortiz, M. Patiño; Juarez, J. J. Medel; Bautista, D. Pacheco

    The blending process is nonlinear and therefore difficult to model, since it may change significantly depending on the components and the process variables of each refinery. Different components can be blended depending on the existing stock, and the chemical characteristics of each component change dynamically; components are blended until the product reaches the specification in the different properties required by the customer. One of the most relevant properties is the octane number, which is difficult to control in line (without component storage). Since each refinery process is quite different, a generic gasoline blending model is not useful when in-line blending is to be done in a specific process. A mathematical gasoline blending model is presented in this paper for a given process, described in state space as a basic description of the gasoline blending process. The objective is to adjust the parameters so that the blending model tracks a signal along its trajectory, representing the model both with the extreme learning machine (ELM) neural network method and with the nonlinear autoregressive-moving average (NARMA) neural network method, such that a comparative study can be developed.
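    For context, the usual first approximation that such neural-network models improve upon is an idealized linear, volume-weighted blend of component octane numbers. The sketch below (hypothetical component names and values, not the paper's model or data) shows that baseline; the nonlinearity the abstract describes is precisely the deviation of real blends from this rule.

```python
# Idealized linear RON blending baseline (assumed illustration, not the
# paper's NARMA/ELM model). Real octane blending is nonlinear; this
# volume-weighted rule is the standard first approximation.

def linear_ron_blend(fractions, rons):
    """Volume-weighted RON of a blend; fractions must sum to 1."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(f * r for f, r in zip(fractions, rons))

# hypothetical three-component blend: reformate, FCC naphtha, alkylate
print(linear_ron_blend([0.4, 0.35, 0.25], [98.0, 92.0, 95.0]))  # ≈ 95.15
```

    A data-driven model such as the paper's then learns the correction between this linear estimate and the measured in-line octane.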

  12. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    PubMed

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, the scalability, interoperability and correctness of component cooperation.

  13. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems

    PubMed Central

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-01-01

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, the scalability, interoperability and correctness of component cooperation. PMID:27801829

  14. Time-dependent behavior of passive skeletal muscle

    NASA Astrophysics Data System (ADS)

    Ahamed, T.; Rubin, M. B.; Trimmer, B. A.; Dorfmann, L.

    2016-03-01

    An isotropic three-dimensional nonlinear viscoelastic model is developed to simulate the time-dependent behavior of passive skeletal muscle. The development of the model is stimulated by experimental data that characterize the response during simple uniaxial stress cyclic loading and unloading. Of particular interest is the rate-dependent response, the recovery of muscle properties from the preconditioned to the unconditioned state and stress relaxation at constant stretch during loading and unloading. The model considers the material to be a composite of a nonlinear hyperelastic component in parallel with a nonlinear dissipative component. The strain energy and the corresponding stress measures are separated additively into hyperelastic and dissipative parts. In contrast to standard nonlinear inelastic models, here the dissipative component is modeled using an evolution equation that combines rate-independent and rate-dependent responses smoothly with no finite elastic range. Large deformation evolution equations for the distortional deformations in the elastic and in the dissipative component are presented. A robust, strongly objective numerical integration algorithm is used to model rate-dependent and rate-independent inelastic responses. The constitutive formulation is specialized to simulate the experimental data. The nonlinear viscoelastic model accurately represents the time-dependent passive response of skeletal muscle.

  15. Optimal Objective-Based Experimental Design for Uncertain Dynamical Gene Networks with Experimental Error.

    PubMed

    Mohsenizadeh, Daniel N; Dehghannasiri, Roozbeh; Dougherty, Edward R

    2018-01-01

    In systems biology, network models are often used to study interactions among cellular components, a salient aim being to develop drugs and therapeutic mechanisms to change the dynamical behavior of the network to avoid undesirable phenotypes. Owing to limited knowledge, model uncertainty is commonplace and network dynamics can be updated in different ways, thereby giving multiple dynamic trajectories, that is, dynamics uncertainty. In this manuscript, we propose an experimental design method that can effectively reduce the dynamics uncertainty and improve performance in an interaction-based network. Both dynamics uncertainty and experimental error are quantified with respect to the modeling objective, herein, therapeutic intervention. The aim of experimental design is to select among a set of candidate experiments the experiment whose outcome, when applied to the network model, maximally reduces the dynamics uncertainty pertinent to the intervention objective.

  16. A Three Component Model to Estimate Sensible Heat Flux Over Sparse Shrubs in Nevada

    USGS Publications Warehouse

    Chehbouni, A.; Nichols, W.D.; Njoku, E.G.; Qi, J.; Kerr, Y.H.; Cabot, F.

    1997-01-01

    It is now recognized that accurate partitioning of available energy into sensible and latent heat flux is crucial to understanding surface-atmosphere interactions. This issue is more complicated in arid and semi-arid regions where the relative contribution to surface fluxes from the soil and vegetation may vary significantly throughout the day and throughout the season. The objective of this paper is to present a three-component model to estimate sensible heat flux over heterogeneous surfaces. The surface was represented with two adjacent compartments. The first compartment is made up of two components, shrubs and shaded soil; the second compartment consists of bare, unshaded soil. Data collected at two different sites in Nevada during the summers of 1991 and 1992 were used to evaluate model performance. The results show that the present model is sufficiently general to yield satisfactory results for both sites.
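    The three-component structure can be sketched as an area-weighted sum of bulk-resistance sensible heat fluxes. The component fractions, temperatures and aerodynamic resistances below are hypothetical placeholders, not the Nevada site data, and the simple bulk form is an assumed illustration of the partitioning idea.

```python
# Schematic three-component sensible heat flux: shrub canopy, shaded
# soil, and bare unshaded soil, each contributing an area-weighted
# bulk-resistance flux. All numbers are hypothetical.
RHO_CP = 1200.0  # volumetric heat capacity of air, J m^-3 K^-1 (approx.)

def sensible_heat_flux(components, t_air):
    """components: list of (area_fraction, surface_temp_C, resistance_s_m)."""
    assert abs(sum(f for f, _, _ in components) - 1.0) < 1e-9
    return sum(f * RHO_CP * (ts - t_air) / ra for f, ts, ra in components)

comps = [
    (0.20, 32.0, 60.0),   # shrubs
    (0.10, 30.0, 80.0),   # shaded soil
    (0.70, 48.0, 120.0),  # bare, unshaded soil
]
print(f"H ~ {sensible_heat_flux(comps, t_air=28.0):.1f} W m^-2")  # ≈ 159
```

    The point of the three-component split is visible in the numbers: the hot bare-soil compartment dominates the flux even though its per-unit-area resistance is largest.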

  17. THE CANADA-FRANCE ECLIPTIC PLANE SURVEY-FULL DATA RELEASE: THE ORBITAL STRUCTURE OF THE KUIPER BELT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petit, J.-M.; Rousselot, P.; Mousis, O.

    2011-10-15

    We report the orbital distribution of the trans-Neptunian objects (TNOs) discovered during the Canada-France Ecliptic Plane Survey (CFEPS), whose discovery phase ran from early 2003 until early 2007. The follow-up observations started just after the first discoveries and extended until late 2009. We obtained characterized observations of 321 deg² of sky to depths in the range g ≈ 23.5-24.4 AB mag. We provide a database of 169 TNOs with high-precision dynamical classification and known discovery efficiency. Using this database, we find that the classical belt is a complex region with sub-structures that go beyond the usual splitting of inner (interior to 3:2 mean-motion resonance [MMR]), main (between 3:2 and 2:1 MMR), and outer (exterior to 2:1 MMR). The main classical belt (a = 40-47 AU) needs to be modeled with at least three components: the 'hot' component with a wide inclination distribution and two 'cold' components (stirred and kernel) with much narrower inclination distributions. The hot component must have a significantly shallower absolute magnitude (H_g) distribution than the other two components. With 95% confidence, there are 8000 (+1800/-1600) objects in the main belt with H_g ≤ 8.0, of which 50% are from the hot component, 40% from the stirred component, and 10% from the kernel; the hot component's fraction drops rapidly with increasing H_g. Because of this, the apparent population fractions depend on the depth and ecliptic latitude of a trans-Neptunian survey. The stirred and kernel components are limited to only a portion of the main belt, while we find that the hot component is consistent with a smooth extension throughout the inner, main, and outer regions of the classical belt; in fact, the inner and outer belts are consistent with containing only hot-component objects. The H_g ≤ 8.0 TNO population estimates are 400 for the inner belt and 10,000 for the outer belt, to within a factor of two (95% confidence). We show how the CFEPS Survey Simulator can be used to compare a cosmogonic model for the orbital element distribution to the real Kuiper Belt.

  18. NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations

    NASA Astrophysics Data System (ADS)

    Frisbie, T. E.; Hall, C. M.

    2006-12-01

    Enterprise architecture describes the design of the components of an enterprise, their relationships and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information relationships and map to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.

  19. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    USGS Publications Warehouse

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

    The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step wise, multiple objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph are simulated, consistently with measured values.
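    The stepwise idea above — calibrate one group of parameters against its own objective, freeze it, and move to the next — can be illustrated with a toy example. A grid search stands in for the Shuffled Complex Evolution algorithm, and the mocked objectives (hypothetical, not PRMS) mimic how the runoff error depends on the previously calibrated radiation and PET groups.

```python
# Toy stepwise multiple-objective calibration (illustration only):
# each step optimizes its own objective with earlier groups held fixed.

def calibrate_step(params, group, candidates, objective):
    best = min(candidates, key=lambda v: objective({**params, group: v}))
    params[group] = best
    return params

# hypothetical "true" parameters and nested objectives: the PET error
# inherits the radiation error, the runoff error inherits both
TRUTH = {"sr": 0.7, "pet": 1.3, "runoff": 0.4}
def sr_err(p):     return (p["sr"] - TRUTH["sr"]) ** 2
def pet_err(p):    return sr_err(p) + (p["pet"] - TRUTH["pet"]) ** 2
def runoff_err(p): return pet_err(p) + (p["runoff"] - TRUTH["runoff"]) ** 2

grid = [i / 10.0 for i in range(21)]  # candidate values 0.0 .. 2.0
params = {"sr": 1.0, "pet": 1.0, "runoff": 1.0}
for group, obj in [("sr", sr_err), ("pet", pet_err), ("runoff", runoff_err)]:
    params = calibrate_step(params, group, grid, obj)
print(params)  # → {'sr': 0.7, 'pet': 1.3, 'runoff': 0.4}
```

    Calibrating the intermediate states (radiation, PET) before the hydrograph is what keeps the final runoff fit from compensating for upstream errors, which is the consistency property the abstract emphasizes.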

  20. Thermal Aspects of Lithium Ion Cells

    NASA Technical Reports Server (NTRS)

    Frank, H.; Shakkottai, P.; Bugga, R.; Smart, M.; Huang, C. K.; Timmerman, P.; Surampudi, S.

    2000-01-01

    This viewgraph presentation outlines the development of a thermal model of Li-ion cells in terms of heat generation, thermal mass, and thermal resistance, intended for incorporation into a battery model. The approach was to estimate heat generation with a semi-theoretical model and then check its accuracy with efficiency measurements. Further objectives were to compute the thermal mass from component weights and specific heats, and the thermal resistance from component dimensions and conductivities. Two lithium cell formats are compared: a cylindrical lithium cell and a prismatic lithium cell. The presentation reviews the methodology for estimating the heat generation rate and gives graphs of the open-circuit curves of the cells and of the heat evolution during discharge.
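    A minimal sketch of a semi-theoretical heat-generation estimate of this kind is below, using the common Bernardi-style split into irreversible (overpotential) and reversible (entropic) terms, plus a lumped thermal mass from component weights and specific heats. Sign conventions vary in the literature, and all numbers here are hypothetical, not the presented cell data.

```python
# Hedged sketch: semi-theoretical cell heat generation (Bernardi-style
# form, assumed) and lumped thermal mass. All values hypothetical.

def heat_generation_w(current_a, v_oc, v_term, temp_k, duoc_dt):
    """Discharge current > 0: overpotential heat plus entropic heat."""
    irreversible = current_a * (v_oc - v_term)    # I * (Uoc - V)
    reversible = -current_a * temp_k * duoc_dt    # -I * T * dUoc/dT
    return irreversible + reversible

def thermal_mass_j_per_k(components):
    """components: list of (mass_kg, specific_heat_J_per_kg_K)."""
    return sum(m * c for m, c in components)

q = heat_generation_w(current_a=2.0, v_oc=3.9, v_term=3.7,
                      temp_k=298.0, duoc_dt=-2.0e-4)
cp = thermal_mass_j_per_k([(0.040, 900.0), (0.020, 700.0)])  # e.g. electrodes, can
print(f"heat ~ {q:.3f} W, thermal mass ~ {cp:.0f} J/K")
```

    With a negative dUoc/dT, the entropic term adds heat on discharge, which is why an efficiency measurement (as in the presentation) is a useful cross-check on the semi-theoretical estimate.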

  1. Spatially-Distributed Stream Flow and Nutrient Dynamics Simulations Using the Component-Based AgroEcoSystem-Watershed (AgES-W) Model

    NASA Astrophysics Data System (ADS)

    Ascough, J. C.; David, O.; Heathman, G. C.; Smith, D. R.; Green, T. R.; Krause, P.; Kipka, H.; Fink, M.

    2010-12-01

    The Object Modeling System 3 (OMS3), currently being developed by the USDA-ARS Agricultural Systems Research Unit and Colorado State University (Fort Collins, CO), provides a component-based environmental modeling framework which allows the implementation of single- or multi-process modules that can be developed and applied as custom-tailored model configurations. OMS3 as a “lightweight” modeling framework contains four primary foundations: modeling resources (e.g., components) annotated with modeling metadata; domain specific knowledge bases and ontologies; tools for calibration, sensitivity analysis, and model optimization; and methods for model integration and performance scalability. The core is able to manage modeling resources and development tools for model and simulation creation, execution, evaluation, and documentation. OMS3 is based on the Java platform but is highly interoperable with C, C++, and FORTRAN on all major operating systems and architectures. The ARS Conservation Effects Assessment Project (CEAP) Watershed Assessment Study (WAS) Project Plan provides detailed descriptions of ongoing research studies at 14 benchmark watersheds in the United States. In order to satisfy the requirements of CEAP WAS Objective 5 (“develop and verify regional watershed models that quantify environmental outcomes of conservation practices in major agricultural regions”), a new watershed model development approach was initiated to take advantage of OMS3 modeling framework capabilities. Specific objectives of this study were to: 1) disaggregate and refactor various agroecosystem models (e.g., J2K-S, SWAT, WEPP) and implement hydrological, N dynamics, and crop growth science components under OMS3, 2) assemble a new modular watershed scale model for fully-distributed transfer of water and N loading between land units and stream channels, and 3) evaluate the accuracy and applicability of the modular watershed model for estimating stream flow and N dynamics. 
The Cedar Creek watershed (CCW) in northeastern Indiana, USA was selected for application of the OMS3-based AgroEcoSystem-Watershed (AgES-W) model. AgES-W performance for stream flow and N loading was assessed using Nash-Sutcliffe model efficiency (ENS) and percent bias (PBIAS) model evaluation statistics. Comparisons of daily and average monthly simulated and observed stream flow and N loads for the 1997-2005 simulation period resulted in PBIAS and ENS values that were similar or better than those reported in the literature for SWAT stream flow and N loading predictions at a similar scale. The results show that the AgES-W model was able to reproduce the hydrological and N dynamics of the CCW with sufficient quality, and should serve as a foundation upon which to better quantify additional water quality indicators (e.g., sediment transport and P dynamics) at the watershed scale.
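    The two evaluation statistics used above have standard definitions; a minimal implementation is sketched below. This is an assumed implementation (the abstract gives no code), and note that the sign convention for PBIAS differs between references — here positive PBIAS means underprediction.

```python
# Nash-Sutcliffe efficiency (ENS) and percent bias (PBIAS), as commonly
# defined for hydrologic model evaluation. Example data are invented.

def nash_sutcliffe(obs, sim):
    """ENS = 1 - SSE / variance of observations; 1.0 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias; positive = model underpredicts on average."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

obs = [1.0, 2.0, 4.0, 3.0]   # hypothetical observed daily flows
sim = [1.2, 1.8, 3.5, 3.1]   # hypothetical simulated flows
print(nash_sutcliffe(obs, sim), pbias(obs, sim))  # ≈ 0.932, ≈ 4.0
```

    ENS penalizes scatter around the observations while PBIAS captures systematic volume error, which is why the study reports both.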

  2. Neural representations of the concepts in simple sentences: Concept activation prediction and context effects.

    PubMed

    Just, Marcel Adam; Wang, Jing; Cherkassky, Vladimir L

    2017-08-15

    Although it has been possible to identify individual concepts from a concept's brain activation pattern, there have been significant obstacles to identifying a proposition from its fMRI signature. Here we demonstrate the ability to decode individual prototype sentences from readers' brain activation patterns, by using theory-driven regions of interest and semantic properties. It is possible to predict the fMRI brain activation patterns evoked by propositions and words which are entirely new to the model with reliably above-chance rank accuracy. The two core components implemented in the model that reflect the theory were the choice of intermediate semantic features and the brain regions associated with the neurosemantic dimensions. This approach also predicts the neural representation of object nouns across participants, studies, and sentence contexts. Moreover, we find that the neural representation of an agent-verb-object proto-sentence is more accurately characterized by the neural signatures of its components as they occur in a similar context than by the neural signatures of these components as they occur in isolation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Vision-based object detection and recognition system for intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Ran, Bin; Liu, Henry X.; Martono, Wilfung

    1999-01-01

    Recently, a proactive crash mitigation system was proposed to enhance the crash avoidance and survivability of Intelligent Vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects by using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The analysis of the single image frame is performed every ten full-size images. The information model obtains information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrated a robust and accurate system in real-time object detection and recognition over thousands of image frames.
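    An illustrative filter using the two vehicle cues the abstract mentions — horizontal symmetry and aspect ratio — is sketched below. The thresholds, region values and score definition are assumed for illustration; this is not the paper's detector.

```python
# Illustrative candidate filter (assumed thresholds, not the paper's):
# score a candidate image region by left-right mirror symmetry and check
# its aspect ratio against a typical rear-of-vehicle range.

def symmetry_score(region):
    """region: 2D list of grayscale values; 1.0 = perfectly mirror-symmetric."""
    w = len(region[0])
    diff = total = 0.0
    for row in region:
        for x in range(w // 2):
            diff += abs(row[x] - row[w - 1 - x])
            total += abs(row[x]) + abs(row[w - 1 - x])
    return 1.0 - diff / total if total else 1.0

def looks_like_vehicle(region, min_sym=0.8, ar_range=(0.8, 1.6)):
    h, w = len(region), len(region[0])
    aspect = w / h
    return symmetry_score(region) >= min_sym and ar_range[0] <= aspect <= ar_range[1]

region = [[10, 50, 50, 10],
          [20, 90, 90, 20],
          [30, 70, 70, 30]]
print(looks_like_vehicle(region))  # symmetric, aspect 4/3 → True
```

    In a full pipeline a filter like this only proposes candidates; the trained neural network mentioned in the abstract then does the actual recognition.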

  4. Sexual attraction to others: a comparison of two models of alloerotic responding in men.

    PubMed

    Blanchard, Ray; Kuban, Michael E; Blak, Thomas; Klassen, Philip E; Dickey, Robert; Cantor, James M

    2012-02-01

    The penile response profiles of homosexual and heterosexual pedophiles, hebephiles, and teleiophiles to laboratory stimuli depicting male and female children and adults may be conceptualized as a series of overlapping stimulus generalization gradients. This study used such profile data to compare two models of alloerotic responding (sexual responding to other people) in men. The first model was based on the notion that men respond to a potential sexual object as a compound stimulus made up of an age component and a gender component. The second model was based on the notion that men respond to a potential sexual object as a gestalt, which they evaluate in terms of global similarity to other potential sexual objects. The analytic strategy was to compare the accuracy of these models in predicting a man's penile response to each of his less arousing (nonpreferred) stimulus categories from his response to his most arousing (preferred) stimulus category. Both models based their predictions on the degree of dissimilarity between the preferred stimulus category and a given nonpreferred stimulus category, but each model used its own measure of dissimilarity. According to the first model ("summation model"), penile response should vary inversely as the sum of stimulus differences on separate dimensions of age and gender. According to the second model ("bipolar model"), penile response should vary inversely as the distance between stimulus categories on a single, bipolar dimension of morphological similarity: a dimension on which children are located near the middle, and adult men and women are located at opposite ends. The subjects were 2,278 male patients referred to a specialty clinic for phallometric assessment of their erotic preferences. Comparisons of goodness of fit to the observed data favored the unidimensional bipolar model.

  5. Variational objective analysis for cyclone studies

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.

    1989-01-01

    Significant accomplishments during 1987 to 1988 are summarized with regard to each of the major project components. Model 1 requires satisfaction of two nonlinear horizontal momentum equations, the integrated continuity equation, and the hydrostatic equation. Model 2 requires satisfaction of model 1 plus the thermodynamic equation for a dry atmosphere. Model 3 requires satisfaction of model 2 plus the radiative transfer equation. Model 4 requires satisfaction of model 3 plus a moisture conservation equation and a parameterization for moist processes.

  6. Molecular Modeling as a Self-Taught Component of a Conventional Undergraduate Chemical Reaction Engineering Course

    ERIC Educational Resources Information Center

    Rothe, Erhard W.; Zygmunt, William E.

    2016-01-01

    We inserted a self-taught molecular modeling project into an otherwise conventional undergraduate chemical-reaction-engineering course. Our objectives were that students should (a) learn with minimal instructor intervention, (b) gain an appreciation for the relationship between molecular structure and, first, macroscopic state functions in…

  7. A Working Model for the Development, Implementation and Evaluation of an Art Program on the College Level.

    ERIC Educational Resources Information Center

    Demery, Marie; And Others

    Components of an art program that was developed with federal funding are outlined. The model contains information on the grant proposal, including the rationale for funding, implementation strategies, activity timetable, qualifications of key personnel, activity objectives and performance measures, institutional goals, activity milestones, and…

  8. The May Center for Early Childhood Education: Description of a Continuum of Services Model for Children with Autism.

    ERIC Educational Resources Information Center

    Campbell, Susan; Cannon, Barbara; Ellis, James T.; Lifter, Karen; Luiselli, James K.; Navalta, Carryl P.; Taras, Marie

    1998-01-01

    Describes a comprehensive continuum of services model for children with autism developed by a human services agency in Massachusetts, which incorporates these and additional empirically based approaches. Service components, methodologies, and program objectives are described, including representative summary data. Best practice approaches toward…

  9. A Model Program of Comprehensive Educational Services for Students With Learning Problems.

    ERIC Educational Resources Information Center

    Union Township Board of Education, NJ.

    Programs are described for learning-disabled or mentally handicapped elementary and secondary students in regular and special classes in Union, New Jersey, and approximately 58 instructional episodes involving student-made objects for understanding technology are presented. In part one, components of the model program such as the multi-learning…

  10. Custom controls

    NASA Astrophysics Data System (ADS)

    Butell, Bart

    1996-02-01

    Microsoft's Visual Basic (VB) and Borland's Delphi provide an extremely robust programming environment for delivering multimedia solutions for interactive kiosks, games and titles. Their object-oriented use of standard and custom controls enables users to build extremely powerful applications. A multipurpose, database-enabled programming environment that provides an event-driven interface functions as a multimedia kernel. This kernel can support a variety of authoring solutions (e.g. a timeline-based model similar to Macromedia Director or a node authoring model similar to Icon Author). At the heart of the kernel is a set of low-level multimedia components providing object-oriented interfaces for graphics, audio, video and imaging. Data preparation tools (e.g., layout, palette and sprite editors) could be built to manage the media database. The flexible interface for VB allows the construction of an infinite number of user models. The proliferation of these models within a popular, easy-to-use environment will allow the vast developer segment of 'producer' types to bring their ideas to the market. This is the key to building exciting, content-rich multimedia solutions. Microsoft's VB and Borland's Delphi environments combined with multimedia components enable these possibilities.

  11. Enhancements to the EPANET-RTX (Real-Time Analytics) ...

    EPA Pesticide Factsheets

    Technical brief and software. The U.S. Environmental Protection Agency (EPA) developed EPANET-RTX as a collection of object-oriented software libraries comprising the core data access, data transformation, and data synthesis (real-time analytics) components of a real-time hydraulic and water quality modeling system. While EPANET-RTX uses the hydraulic and water quality solvers of EPANET, the object libraries are a self-contained set of building blocks for software developers. “Real-time EPANET” promises to change the way water utilities, commercial vendors, engineers, and the water community think about modeling.

  12. Two-tier tissue decomposition for histopathological image representation and classification.

    PubMed

    Gultekin, Tunc; Koyuncu, Can Fahrettin; Sokmensuer, Cenk; Gunduz-Demir, Cigdem

    2015-01-01

    In digital pathology, devising effective image representations is crucial to design robust automated diagnosis systems. To this end, many studies have proposed to develop object-based representations, instead of directly using image pixels, since a histopathological image may contain a considerable amount of noise typically at the pixel-level. These previous studies mostly employ color information to define their objects, which approximately represent histological tissue components in an image, and then use the spatial distribution of these objects for image representation and classification. Thus, object definition has a direct effect on the way of representing the image, which in turn affects classification accuracies. In this paper, our aim is to design a classification system for histopathological images. Towards this end, we present a new model for effective representation of these images that will be used by the classification system. The contributions of this model are twofold. First, it introduces a new two-tier tissue decomposition method for defining a set of multityped objects in an image. Unlike previous studies, these objects are defined by combining texture, shape, and size information and they may correspond to individual histological tissue components as well as local tissue subregions of different characteristics. As its second contribution, it defines a new metric, which we call dominant blob scale, to characterize the shape and size of an object with a single scalar value. Our experiments on colon tissue images reveal that this new object definition and characterization provides distinguishing representation of normal and cancerous histopathological images, which is effective to obtain more accurate classification results compared to its counterparts.

  13. Magnetic Field of Conductive Objects as Superposition of Elementary Eddy Currents and Eddy Current Tomography

    NASA Astrophysics Data System (ADS)

    Sukhanov, D. Ya.; Zav'yalova, K. V.

    2018-03-01

    This paper represents induced currents in an electrically conductive object as a superposition of elementary eddy currents. The proposed scanning method requires measurement of only one component of the secondary magnetic field. Reconstruction of the current distribution is performed by deconvolution with regularization. Numerical modeling, supported by field experiments, shows that this approach is of direct practical relevance.
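The reconstruction step named in the abstract, deconvolution with regularization, can be illustrated in one dimension with a generic Tikhonov-damped inverse filter. The Gaussian point-response kernel and the regularization weight below are illustrative assumptions, not the paper's.

```python
import numpy as np

def deconvolve_regularized(measured, kernel, lam=1e-2):
    """Recover a source distribution from a measured field, given the
    point-response kernel, via frequency-domain division with a Tikhonov
    damping term lam: S = M K* / (|K|^2 + lam)."""
    n = len(measured)
    K = np.fft.fft(kernel, n)
    M = np.fft.fft(measured, n)
    S = M * np.conj(K) / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft(S))

# Synthetic check: two "elementary eddy current" sources blurred by a
# Gaussian point response, then recovered by the regularized inverse.
x = np.arange(128)
source = np.zeros(128)
source[40] = 1.0
source[80] = 0.5
kernel = np.exp(-0.5 * ((x - 64) / 3.0) ** 2)
kernel /= kernel.sum()
shifted = np.fft.ifftshift(kernel)  # centre the kernel at index 0
measured = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(shifted)))
recovered = deconvolve_regularized(measured, shifted, lam=1e-4)
print(int(np.argmax(recovered)))  # strongest recovered source, near index 40
```

Without the `lam` term the division blows up wherever the kernel's spectrum is near zero; the damping trades a little blur for stability, which is the point of regularizing.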

  14. Object Segmentation from Motion Discontinuities and Temporal Occlusions–A Biologically Inspired Model

    PubMed Central

    Beck, Cornelia; Ognibeni, Thilo; Neumann, Heiko

    2008-01-01

    Background Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it. Methodology/Principal Findings From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task. This is due to the problem that flow detected along such boundaries is generally not reliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for both the detection of motion discontinuities and of occlusion regions based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information of different model components of the visual processing due to feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions allow a considerable improvement of the kinetic boundary detection. Conclusions/Significance A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model is related to neurophysiological findings. The model was successfully tested both with artificial and real sequences including self and object motion. PMID:19043613

  15. Observed and modelled solar radiation components in sugarcane crop grown under tropical conditions

    NASA Astrophysics Data System (ADS)

    Santos, Marcos A. dos; Souza, José L. de; Lyra, Gustavo B.; Teodoro, Iêdo; Ferreira, Ricardo A.; Santos Almeida, Alexsandro C. dos; Lyra, Guilherme B.; Souza, Renan C. de; Lemes, Marco A. Maringolo

    2017-04-01

    The net radiation over vegetated surfaces is one of the major input variables in many models of soil evaporation, evapotranspiration and leaf wetness duration. There are relatively few studies in the literature on net radiation over sugarcane crops in tropical climates. The main objective of the present study was to assess the solar radiation components measured and modelled for two crop stages of a sugarcane crop in the region of Rio Largo, Alagoas, North-eastern Brazil. The measurements of the radiation components were made with a net radiometer during the dry and rainy seasons, and two models were used to estimate net radiation: the Ortega-Farias model and the Monteith and Unsworth model. The highest values of net radiation were observed at the crop development stage, due mainly to the high levels of incoming solar radiation. The daily average albedos of sugarcane at the crop development and mid-season stages were 0.16 and 0.20, respectively. Both models showed a better fit for the crop development stage than for the mid-season stage. When inter-compared, the Monteith and Unsworth model was more efficient than the Ortega-Farias model, although the dispersion of their simulated radiation components was similar.
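Both model families build on the generic surface radiation balance, which can be sketched as below. The emissivity and atmospheric long-wave factor are illustrative placeholders, not the Ortega-Farias or Monteith-Unsworth parameterizations themselves.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiation(solar_in, albedo, t_air_k, emissivity=0.98, lw_in_factor=0.85):
    """Net radiation = absorbed short-wave + incoming long-wave - emitted
    long-wave. lw_in_factor stands in for an atmospheric emissivity model;
    surface temperature is approximated by air temperature here."""
    shortwave = (1.0 - albedo) * solar_in
    lw_in = lw_in_factor * SIGMA * t_air_k ** 4
    lw_out = emissivity * SIGMA * t_air_k ** 4
    return shortwave + lw_in - lw_out

# Albedos from the abstract: 0.16 (development stage) vs 0.20 (mid-season).
# A higher albedo lowers net radiation for the same incoming solar flux:
rn_dev = net_radiation(800.0, 0.16, 300.0)
rn_mid = net_radiation(800.0, 0.20, 300.0)
print(round(rn_dev - rn_mid, 1))  # (0.20 - 0.16) * 800 = 32.0 W m^-2
```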

  16. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

    Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.
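The chip extraction and normalization step the abstract attributes to IDC can be sketched as follows. Function and parameter names are ours, not from the IDC software; for brevity, rotation is restricted to 90° multiples and scaling to integer pixel replication.

```python
import numpy as np

def extract_chip(image, row, col, half=16):
    """Crop a (2*half x 2*half) chip centred on a candidate detection."""
    return image[row - half:row + half, col - half:col + half]

def normalize_chip(chip, quarter_turns=0, scale=1):
    """Rotate the chip to a canonical heading, then upsample by pixel
    replication, so the classifier sees objects at a fixed orientation
    and size regardless of where they appeared in the scene."""
    rotated = np.rot90(chip, k=quarter_turns)
    return np.kron(rotated, np.ones((scale, scale), dtype=chip.dtype))

image = np.random.default_rng(0).random((128, 128))
chip = extract_chip(image, 64, 64)
normalized = normalize_chip(chip, quarter_turns=1, scale=2)
print(chip.shape, normalized.shape)  # (32, 32) then (64, 64)
```

Normalizing location, scale, and rotation before classification is what lets a single trained classifier generalize across object placements, which is the requirement the abstract states for visual learning in an ATR context.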

  17. From radio to TeV: the surprising spectral energy distribution of AP Librae

    DOE PAGES

    Sanchez, D. A.; Giebels, B.; Fortin, P.; ...

    2015-10-17

    Following the discovery of high-energy (HE; E > 10 MeV) and very-high-energy (VHE; E > 100 GeV) γ-ray emission from the low-frequency-peaked BL Lac (LBL) object AP Librae, its electromagnetic spectrum is studied over 60 octaves in energy. Contemporaneous data in radio, optical and UV, together with the (non-simultaneous) γ-ray data, are used to construct the most precise spectral energy distribution of this source. We found that the data were difficult to model with single-zone homogeneous leptonic synchrotron self-Compton (SSC) radiative scenarios, owing to the unprecedented width of the HE component compared to the lower-energy component. Furthermore, the two other LBL objects also detected at VHE appear to present similar modelling difficulties. Nevertheless, VHE γ-rays produced in the extended jet could account for the VHE flux observed by HESS.

  18. Effects of electrons and protons on science instruments

    NASA Technical Reports Server (NTRS)

    Parker, R. H.

    1972-01-01

    The radiation effects on typical science instruments according to the Jupiter trapped radiation design restraint model are described, and specific aspects of the model where an improved understanding would be beneficial are suggested. The spacecraft design used is the TOPS 12L configuration. Ionization and displacement damage are considered, and damage criteria are placed on the most sensitive components. Possible protective measures are mentioned: selecting components as radiation resistant as possible, using a difference in desired and undesired signal shapes for electronic shielding, orienting and locating the component on the spacecraft for better shielding, and adding passive shields to protect specific components. Available options are listed in decreasing order of attractiveness: attempt to lower the design restraints without compromising the success of the missions, trade off experiment objectives for increased reliability, alter the trajectory, and remove sensitive instruments from the payload.

  19. An object-oriented computational model to study cardiopulmonary hemodynamic interactions in humans.

    PubMed

    Ngo, Chuong; Dahlmanns, Stephan; Vollmer, Thomas; Misgeld, Berno; Leonhardt, Steffen

    2018-06-01

    This work introduces an object-oriented computational model to study cardiopulmonary interactions in humans. Modeling was performed in the object-oriented programming language Matlab Simscape, where model components are connected with each other through physical connections. Constitutive and phenomenological equations of model elements are implemented based on their non-linear pressure-volume or pressure-flow relationship. The model includes more than 30 physiological compartments, which belong either to the cardiovascular or respiratory system. The model considers non-linear behaviors of veins, pulmonary capillaries, collapsible airways, alveoli, and the chest wall. Model parameters were derived from literature values. Model validation was performed by comparing simulation results with clinical and animal data reported in literature. The model is able to provide quantitative values of alveolar, pleural, interstitial, aortic and ventricular pressures, as well as heart and lung volumes during spontaneous breathing and mechanical ventilation. Results of baseline simulation demonstrate the consistency of the assigned parameters. Simulation results during mechanical ventilation with PEEP trials can be directly compared with animal and clinical data given in literature. Object-oriented programming languages can be used to model interconnected systems including model non-linearities. The model provides a useful tool to investigate cardiopulmonary activity during spontaneous breathing and mechanical ventilation. Copyright © 2018 Elsevier B.V. All rights reserved.
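The compartment idiom behind such models can be illustrated with a minimal single-compartment respiratory mechanics sketch (one resistance, one compliance). The parameter values are textbook-order illustrations, not the paper's, and the full model's non-linear elements are omitted.

```python
def simulate_breath(p_airway, r=0.3, c=0.05, dt=0.01):
    """Integrate dV/dt = (Paw - V/C) / R for a driving airway-pressure
    waveform, forward Euler. r in cmH2O.s/L, c in L/cmH2O."""
    volume = 0.0
    trace = []
    for paw in p_airway:
        flow = (paw - volume / c) / r   # pressure drop across the resistance
        volume += flow * dt             # volume stored in the compliance
        trace.append(volume)
    return trace

# Square-wave inflation at 10 cmH2O for 2 s, then release to ambient.
waveform = [10.0] * 200 + [0.0] * 200
volumes = simulate_breath(waveform)
# Volume plateaus near C * Paw = 0.05 * 10 = 0.5 L during inflation.
print(round(max(volumes), 2))
```

In a full object-oriented model each such compartment is one component, and the physical connections between components (shared pressures, summed flows) replace the hand-written coupling here.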

  20. Dshell++: A Component Based, Reusable Space System Simulation Framework

    NASA Technical Reports Server (NTRS)

    Lim, Christopher S.; Jain, Abhinandan

    2009-01-01

    This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.

  1. A bi-objective model for optimizing replacement time of age and block policies with consideration of spare parts’ availability

    NASA Astrophysics Data System (ADS)

    Alsyouf, Imad

    2018-05-01

    Reliability and availability of critical systems play an important role in achieving the stated objectives of engineering assets. Preventive replacement time affects the reliability of the components, thus the number of system failures encountered and its downtime expenses. On the other hand, spare parts inventory level is a very critical factor that affects the availability of the system. Usually, the decision maker has many conflicting objectives that should be considered simultaneously for the selection of the optimal maintenance policy. The purpose of this research was to develop a bi-objective model that will be used to determine the preventive replacement time for three maintenance policies (age, block good as new, block bad as old) with consideration of spare parts’ availability. It was suggested to use a weighted comprehensive criterion method with two objectives, i.e. cost and availability. The model was tested with a typical numerical example. The results demonstrated its effectiveness in enabling the decision maker to select the optimal maintenance policy under different scenarios, taking into account preferences with respect to conflicting objectives such as cost and availability.
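The weighted comprehensive criterion idea can be sketched as a simple scalarization of the two objectives. The candidate policies and their (cost, availability) values below are invented for illustration, not taken from the paper's numerical example.

```python
def weighted_score(cost, availability, w_cost, cost_range, avail_range):
    """Normalize both objectives to [0, 1] and combine with weights;
    lower cost and higher availability are both better."""
    c_lo, c_hi = cost_range
    a_lo, a_hi = avail_range
    cost_term = (c_hi - cost) / (c_hi - c_lo)           # 1 = cheapest
    avail_term = (availability - a_lo) / (a_hi - a_lo)  # 1 = most available
    return w_cost * cost_term + (1.0 - w_cost) * avail_term

# Hypothetical (cost, availability) outcomes per maintenance policy.
policies = {
    "age":          (120.0, 0.95),
    "block-as-new": (100.0, 0.92),
    "block-as-old": (90.0,  0.88),
}
cost_range = (90.0, 120.0)
avail_range = (0.88, 0.95)

for w_cost in (0.2, 0.8):  # two decision-maker preference scenarios
    best = max(policies, key=lambda p: weighted_score(*policies[p], w_cost,
                                                      cost_range, avail_range))
    print(w_cost, best)
```

Sweeping the weight reproduces the paper's point: an availability-minded decision maker (low `w_cost`) and a cost-minded one (high `w_cost`) select different optimal policies from the same model.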

  2. Specialized data analysis of SSME and advanced propulsion system vibration measurements

    NASA Technical Reports Server (NTRS)

    Coffin, Thomas; Swanson, Wayne L.; Jong, Yen-Yi

    1993-01-01

    The basic objectives of this contract were to perform detailed analysis and evaluation of dynamic data obtained during Space Shuttle Main Engine (SSME) test and flight operations, including analytical/statistical assessment of component dynamic performance, and to continue the development and implementation of analytical/statistical models to effectively define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational conditions. This study was to provide timely assessment of engine component operational status, identify probable causes of malfunction, and define feasible engineering solutions. The work was performed under three broad tasks: (1) Analysis, Evaluation, and Documentation of SSME Dynamic Test Results; (2) Data Base and Analytical Model Development and Application; and (3) Development and Application of Vibration Signature Analysis Techniques.

  3. Simulation of the National Aerospace System for Safety Analysis

    NASA Technical Reports Server (NTRS)

    Pritchett, Amy; Goldsman, Dave; Statler, Irv (Technical Monitor)

    2002-01-01

    Work started on this project on January 1, 1999, the first year of the grant. Following the outline of the grant proposal, a simulator architecture has been established which can incorporate the variety of types of models needed to accurately simulate national airspace dynamics. For the sake of efficiency, this architecture was based on an established single-aircraft flight simulator, the Reconfigurable Flight Simulator (RFS), already developed at Georgia Tech. Likewise, in the first year substantive changes and additions were made to the RFS to convert it into a simulation of the National Airspace System, with the flexibility to incorporate many types of models: aircraft models; controller models; airspace configuration generators; discrete event generators; embedded statistical functions; and display and data outputs. The architecture has been developed with the capability to accept any models of these types; due to its object-oriented structure, individual simulator components can be added and removed during run-time, and can be compiled separately. Simulation objects from other projects should be easy to convert to meet architecture requirements, with the intent both that this project can incorporate established simulation components from other projects and that other projects can use this simulation without significant time investment.

  4. Error analysis of motion correction method for laser scanning of moving objects

    NASA Astrophysics Data System (ADS)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available, showing the development of very few methods capable of addressing object motion during scanning, each utilizing its own models or sensors; studies on error modelling or analysis of any of these motion correction methods are lacking. In this paper, we develop the error budget and present the analysis of one such "motion correction" method. This method assumes availability of position and orientation information for the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information, along with laser scanner data, to apply a correction to the laser data, resulting in correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked at sea, and to scan other objects such as hot air balloons or aerostats. It is to be noted that the other "motion correction" methods described in the literature cannot be applied to the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method, as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to gain insights into optimal utilization of available components for achieving the best results.
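The core correction step can be sketched in 2-D: each time-tagged laser return is transformed by the object's pose (from the onboard POS or tracker) at that instant, mapping it into the object's own frame. Pose interpolation and the full 3-D rotation are omitted; the function is our simplification, not the paper's implementation.

```python
import math

def correct_point(px, py, pose):
    """Undo the object's motion: subtract its translation, then rotate the
    point back by its heading. pose = (x, y, heading_rad) at the instant
    the return was recorded."""
    ox, oy, heading = pose
    dx, dy = px - ox, py - oy
    cos_h, sin_h = math.cos(-heading), math.sin(-heading)
    return (dx * cos_h - dy * sin_h, dx * sin_h + dy * cos_h)

# A point fixed at (1, 0) on the object is scanned twice while the object
# translates and yaws; both returns map to the same object-frame location.
scan = [
    ((1.0, 0.0), (0.0, 0.0, 0.0)),          # world point, pose at t0
    ((5.0, 3.0), (5.0, 2.0, math.pi / 2)),  # world point, pose at t1
]
for (px, py), pose in scan:
    x, y = correct_point(px, py, pose)
    print(round(x, 6), round(y, 6))
```

This also shows where the error budget comes from: any error in the measured pose (translation or heading) propagates directly into the corrected coordinates, in addition to the scanner's own ranging error.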

  5. Computer-aided operations engineering with integrated models of systems and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.

  6. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects.

    PubMed

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

    Categorization of images containing visual objects can be successfully recognized using single-trial electroencephalograph (EEG) measured when subjects view images. Previous studies have shown that task-related information contained in event-related potential (ERP) components could discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and then Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from ERP components. Firstly, we compared the classification results using features from single ERP components, and identified that the N1 component achieved the highest classification accuracies. Secondly, we discriminated four categories of objects using combining features from multiple ERP components, and showed that combination of ERP components improved four-category classification accuracies by utilizing the complementarity of discriminative information in ERP components. These findings confirmed that four categories of object images could be discriminated with single-trial EEG and could direct us to select effective EEG features for classifying visual objects.
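The classification step can be sketched with a small Fisher LDA implementation over concatenated per-component features. The data below are synthetic stand-ins for single-trial EEG; the paper's channels, time windows, and feature extraction are not reproduced.

```python
import numpy as np

def fit_lda(X, y, n_dims=3):
    """Fisher LDA: projection maximizing between- vs within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_dims]]

def predict(X, W, X_train, y_train):
    """Assign each trial to the nearest class mean in the projected space."""
    classes = np.unique(y_train)
    means = np.array([(X_train[y_train == c] @ W).mean(axis=0) for c in classes])
    Z = X @ W
    dists = ((Z[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(dists, axis=1)]

# Four synthetic "categories" (faces, buildings, cats, cars), 24 features
# per trial (e.g. mean amplitudes over P1/N1/P2a/P2b windows x channels).
rng = np.random.default_rng(0)
means = rng.normal(0.0, 2.0, size=(4, 24))
X = np.vstack([m + rng.normal(0.0, 1.0, size=(50, 24)) for m in means])
y = np.repeat(np.arange(4), 50)

W = fit_lda(X, y)
accuracy = (predict(X, W, X, y) == y).mean()
print(accuracy > 0.9)
```

Concatenating features from multiple components, as the study does, simply widens `X`; the LDA machinery is unchanged, which is why combining components can only add discriminative information.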

  7. Phenomenology of wall-bounded Newtonian turbulence.

    PubMed

    L'vov, Victor S; Pomyalov, Anna; Procaccia, Itamar; Zilitinkevich, Sergej S

    2006-01-01

    We construct a simple analytic model for wall-bounded turbulence, containing only four adjustable parameters. Two of these parameters are responsible for the viscous dissipation of the components of the Reynolds stress tensor. The other two parameters control the nonlinear relaxation of these objects. The model offers an analytic description of the profiles of the mean velocity and the correlation functions of velocity fluctuations in the entire boundary region, from the viscous sublayer, through the buffer layer, and further into the log-law turbulent region. In particular, the model predicts a very simple distribution of the turbulent kinetic energy in the log-law region between the velocity components: the streamwise component contains a half of the total energy whereas the wall-normal and cross-stream components contain a quarter each. In addition, the model predicts a very simple relation between the von Kármán slope k and the turbulent velocity in the log-law region v+ (in wall units): v+=6k. These predictions are in excellent agreement with direct numerical simulation data and with recent laboratory experiments.

  8. A structural analysis of the obsessional character: a Fairbairnian perspective.

    PubMed

    Celani, David P

    2007-06-01

    This paper reviews the object relations model of W.R.D. Fairbairn and applies it to the understanding of the obsessional personality. Fairbairn's model sees attachment to good objects as the immutable component of normal development. Parental failures are seen as intolerable to the child and trigger the splitting defense that isolates (via repression) the frustrating aspects of the object along with the part of the child's ego that relates only to that part-object. This fundamental defense protects the child from the knowledge that he is dependent on indifferent objects and preserves his attachment. The split-off part-self and part-object structures are too disruptive to remain conscious, yet despite being repressed make themselves known through repetition compulsions and transference. The specific characteristics of families that produce obsessional children impact the child's developing ego structures in similar ways. This style of developmental history creates predictable self and object configurations in the inner world, which then translate via repetition compulsion into obsessional behavior in adulthood.

  9. iTesla Power Systems Library (iPSL): A Modelica library for phasor time-domain simulations

    NASA Astrophysics Data System (ADS)

    Vanfretti, L.; Rabuzin, T.; Baudette, M.; Murad, M.

    The iTesla Power Systems Library (iPSL) is a Modelica package providing a set of power system components for phasor time-domain modeling and simulation. The Modelica language provides a systematic approach to developing models using a formal mathematical description that uniquely specifies the physical behavior of a component or the entire system. Furthermore, the standardized specification of the Modelica language (Modelica Association [1]) enables unambiguous model exchange by allowing any Modelica-compliant tool to utilize the models for simulation and analysis without the need of a specific model transformation tool. As the Modelica language is developed with open specifications, any tool that implements these requirements can be utilized, giving users the freedom to choose an Integrated Development Environment (IDE). In addition, any integration solver can be implemented within a Modelica tool to simulate Modelica models. Modelica is also an object-oriented language, enabling code factorization and model re-use, and improving the readability of a library by structuring it with an object-oriented hierarchy. The developed library is released under an open source license to enable wider distribution and let users customize it to their specific needs. This paper describes the iPSL and provides illustrative application examples.

  10. Animal behavior as a conceptual framework for the study of obsessive-compulsive disorder (OCD).

    PubMed

    Eilam, David; Zor, Rama; Fineberg, Naomi; Hermesh, Haggai

    2012-06-01

    Research on affective disorders may benefit from the methodology of studying animal behavior, in which tools are available for qualitatively and quantitatively measuring and assessing behavior with as much sophistication and attention to detail as in the analysis of the brain. To illustrate this, we first briefly review the characteristics of obsessive-compulsive disorder (OCD), and then demonstrate how the quinpirole rat model is used as a conceptual model in studying human OCD patients. Like the rat model, the study of OCD in humans is based on video-telemetry, whereby observable, measurable, and relatively objective characteristics of OCD behavior may be extracted. In this process, OCD rituals are defined in terms of the space in which they are executed and the movements (acts) that are performed at each location or object in this space. Accordingly, OCD behavior is conceived of as comprising three hierarchical components: (i) rituals (as defined by the patients); (ii) visits to objects/locations in the environment at which the patient stops during the ritual; and (iii) acts performed at each object/location during visits. Scoring these structural components (behavioral units) is conveniently possible with readily available tools for behavioral description and analysis, providing quantitative and qualitative measures of the OCD hallmarks of repetition and addition, as well as the reduced functionality in OCD behavior. Altogether, the concept that was developed in the context of an animal model provides a useful tool that may facilitate OCD diagnosis, assessment and treatment, and may be similarly applied for other psychiatric disorders. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Report of NPSAT1 Battery Thermal Contact Resistance Testing, Modeling and Simulation

    DTIC Science & Technology

    2012-10-01

    The lithium-ion battery is the spacecraft component with the narrowest operating temperature range, 0 °C to 45 °C. Thermal analysis, however, can only provide adequate results if there is sufficient fidelity in thermal modeling. Arguably, the values used in defining thermal coupling between components are the most difficult to estimate because of the many variables that define them. This document describes the work performed by the authors starting in the 2012 winter quarter as part of the SS3900 directed study course. The objectives of the study were to

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szoka de Valladares, M.R.; Mack, S.

    The DOE Hydrogen Program needs to develop criteria as part of a systematic evaluation process for proposal identification, evaluation and selection. The H Scan component of this process provides a framework in which a project proposer can fully describe their candidate technology system and its components. The H Scan complements traditional methods of capturing cost and technical information. It consists of a special set of survey forms designed to elicit information so expert reviewers can assess the proposal relative to DOE specified selection criteria. The Analytic Hierarchy Process (AHP) component of the decision process assembles the management defined evaluation and selection criteria into a coherent multi-level decision construct by which projects can be evaluated in pair-wise comparisons. The AHP model will reflect management's objectives and it will assist in the ranking of individual projects based on the extent to which each contributes to management's objectives. This paper contains a detailed description of the products and activities associated with the planning and evaluation process: the objectives or criteria; the H Scan; and the Analytic Hierarchy Process (AHP).
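    The AHP machinery described here is the standard one: criteria are compared pairwise, and the principal eigenvector of the comparison matrix gives their relative weights. A minimal sketch, with an invented judgment matrix and power iteration standing in for an exact eigensolver:

```python
def ahp_weights(matrix, iters=200):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix by power iteration, normalized to sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [x / total for x in w_new]
    return w

# Hypothetical judgments: criterion A is 3x as important as B and 5x as
# important as C; B is 2x as important as C. Reciprocals fill the rest.
pairwise = [[1.0,   3.0, 5.0],
            [1/3.0, 1.0, 2.0],
            [1/5.0, 0.5, 1.0]]
weights = ahp_weights(pairwise)   # roughly [0.65, 0.23, 0.12]
```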

  13. Time dependent emission line profiles in the radially streaming particle model of Seyfert galaxy nuclei and quasi-stellar objects

    NASA Technical Reports Server (NTRS)

    Hubbard, R.

    1974-01-01

    The radially-streaming particle model for broad quasar and Seyfert galaxy emission features is modified to include sources of time dependence. The results are suggestive of reported observations of multiple components, variability, and transient features in the wings of Seyfert and quasi-stellar emission lines.

  14. Structural encoding processes contribute to individual differences in face and object cognition: Inferences from psychometric test performance and event-related brain potentials.

    PubMed

    Nowparast Rostami, Hadiseh; Sommer, Werner; Zhou, Changsong; Wilhelm, Oliver; Hildebrandt, Andrea

    2017-10-01

    The enhanced N1 component in event-related potentials (ERP) to face stimuli, termed N170, is considered to indicate the structural encoding of faces. Previously, individual differences in the latency of the N170 have been related to face and object cognition abilities. By orthogonally manipulating content domain (faces vs objects) and task demands (easy/speed vs difficult/accuracy) in both psychometric and EEG tasks, we investigated the uniqueness of the processes underlying face cognition as compared with object cognition and the extent to which the N1/N170 component can explain individual differences in face and object cognition abilities. Data were recorded from N = 198 healthy young adults. Structural equation modeling (SEM) confirmed that the accuracies of face perception (FP) and memory are specific abilities above general object cognition; in contrast, the speed of face processing was not differentiable from the speed of object cognition. Although there was considerable domain-general variance in the N170 shared with the N1, there was significant face-specific variance in the N170. The brain-behavior relationship showed that faster face-specific processes for structural encoding of faces are associated with higher accuracy in both perceiving and memorizing faces. Moreover, in difficult task conditions, qualitatively different processes are additionally needed for recognizing face and object stimuli as compared with easy tasks. The difficulty-dependent variance components in the N170 amplitude were related to both face and object memory (OM) performance. We discuss implications for understanding individual differences in face cognition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Modeling a Thermoelectric HVAC System for Automobiles

    NASA Astrophysics Data System (ADS)

    Junior, C. S.; Strupp, N. C.; Lemke, N. C.; Koehler, J.

    2009-07-01

    In automobiles thermal energy is used at various energy scales. With regard to reduction of CO2 emissions, efficient generation of hot and cold temperatures and wise use of waste heat are of paramount importance for car manufacturers worldwide. Thermoelectrics could be a vital component in automobiles of the future. To evaluate the applicability of thermoelectric modules in automobiles, a Modelica model of a thermoelectric liquid-gas heat exchanger was developed for transient simulations. The model uses component models from the object-oriented Modelica library TIL. It was validated based on experimental data of a prototype heat exchanger and used to simulate transient and steady-state behavior. The use of the model within the energy management of an automobile is successfully shown for the air-conditioning system of a car.
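    For context, the steady-state relations behind any such thermoelectric heat-exchanger component are the textbook Peltier-module equations; the module parameters below are made up, and this is not the TIL component model.

```python
def peltier(alpha, R, K, I, T_c, T_h):
    """Steady-state thermoelectric module balance.
    alpha: Seebeck coefficient [V/K], R: electrical resistance [ohm],
    K: thermal conductance [W/K], I: current [A], temperatures [K]."""
    q_c = alpha * T_c * I - 0.5 * I**2 * R - K * (T_h - T_c)  # heat absorbed, cold side
    p_el = alpha * (T_h - T_c) * I + I**2 * R                 # electrical power input
    cop = q_c / p_el                                          # coefficient of performance
    return q_c, p_el, cop

# Hypothetical module cooling a 288 K stream against a 308 K sink at 3 A:
q_c, p_el, cop = peltier(alpha=0.05, R=2.0, K=0.5, I=3.0, T_c=288.0, T_h=308.0)
```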

  16. Agent-based Modeling Methodology for Analyzing Weapons Systems

    DTIC Science & Technology

    2015-03-26

    ...like programming language that allows access to AFSIM library objects. Figure 10 depicts the various objects that make up a platform within ... AFSIM and can be accessed through the scripting language (Zeh & Birkmire, 2014). ... Figure 10: AFSIM Platform Components (AFSIM Overview, 2014) ... defined, accessible, and has all the elements of both air-to-air and air-to-ground combat that allow sufficient exploration of the main factors of

  17. Three Dimensional Object Recognition Using an Unsupervised Neural Network: Understanding the Distinguishing Features

    DTIC Science & Technology

    1992-12-23

    predominance of structural models of recognition, of which a recent example is the Recognition By Components (RBC) theory (Biederman, 1987). Structural ... related to recent statistical theory (Huber, 1985; Friedman, 1987) and is derived from a biologically motivated computational theory (Bienenstock et ... dimensional object recognition (Intrator and Gold, 1991). The method is related to recent statistical theory (Huber, 1985; Friedman, 1987) and is derived

  18. The Spectral Signatures Of BH Versus NS Sources

    NASA Astrophysics Data System (ADS)

    Seifina, E.; Titarchuk, L.

    2011-09-01

    We present a comparative analysis of the spectral properties of Black Hole (BH) and Neutron Star (NS) X-ray binaries during transition events observed with the BeppoSAX and RXTE satellites. In particular, we investigated the behavior of the Comptonized component of the X-ray spectra as an object evolves from the low to the high spectral state. The basic models used to fit the X-ray spectra of these objects are upscattering models (the so-called BMC and COMPTB models), which are first-principles models. These models take into account both dynamical and thermal Comptonization and allow the separate contributions of the thermal component and the Comptonization component (bulk and thermal Comptonization effects) to be studied. Specifically, we analyzed a number of observations of BHs (GRS 1915+105 and SS 433) and NSs (4U 1728-34 and GX 3+1), applying the BMC and COMPTB models. In this way we found a crucial difference in the behavior of the photon index versus mass accretion rate (mdot) between BHs and NSs. Namely, we revealed the stability of the photon index around a typical value of Gamma=2 versus mdot (or electron temperature) during the spectral evolution of NS sources. This stability effect was previously suggested for a number of other neutron star binaries (see Farinelli and Titarchuk, 2011). This intrinsic property of NSs is fundamentally different from that of BH binary sources, for which the index demonstrates monotonic growth with mass accretion rate followed by saturation at high values of mdot. This index versus mass-accretion-rate behavior during X-ray spectral transition events can be considered a signature that allows an NS to be distinguished from a BH.

  19. The planum temporale as a computational hub.

    PubMed

    Griffiths, Timothy D; Warren, Jason D

    2002-07-01

    It is increasingly recognized that the human planum temporale is not a dedicated language processor, but is in fact engaged in the analysis of many types of complex sound. We propose a model of the human planum temporale as a computational engine for the segregation and matching of spectrotemporal patterns. The model is based on segregating the components of the acoustic world and matching these components with learned spectrotemporal representations. Spectrotemporal information derived from such a 'computational hub' would be gated to higher-order cortical areas for further processing, leading to object recognition and the perception of auditory space. We review the evidence for the model and specific predictions that follow from it.

  20. Accurate and efficient modeling of the detector response in small animal multi-head PET systems.

    PubMed

    Cecchetti, Matteo; Moehrs, Sascha; Belcari, Nicola; Del Guerra, Alberto

    2013-10-07

    In fully three-dimensional PET imaging, iterative image reconstruction techniques usually outperform analytical algorithms in terms of image quality provided that an appropriate system model is used. In this study we concentrate on the calculation of an accurate system model for the YAP-(S)PET II small animal scanner, with the aim to obtain fully resolution- and contrast-recovered images at low levels of image roughness. For this purpose we calculate the system model by decomposing it into a product of five matrices: (1) a detector response component obtained via Monte Carlo simulations, (2) a geometric component which describes the scanner geometry and which is calculated via a multi-ray method, (3) a detector normalization component derived from the acquisition of a planar source, (4) a photon attenuation component calculated from x-ray computed tomography data, and finally, (5) a positron range component is formally included. This system model factorization allows the optimization of each component in terms of computation time, storage requirements and accuracy. The main contribution of this work is a new, efficient way to calculate the detector response component for rotating, planar detectors, that consists of a GEANT4 based simulation of a subset of lines of flight (LOFs) for a single detector head whereas the missing LOFs are obtained by using intrinsic detector symmetries. Additionally, we introduce and analyze a probability threshold for matrix elements of the detector component to optimize the trade-off between the matrix size in terms of non-zero elements and the resulting quality of the reconstructed images. 
In order to evaluate our proposed system model we reconstructed various images of objects, acquired according to the NEMA NU 4-2008 standard, and we compared them to the images reconstructed with two other system models: a model that does not include any detector response component and a model that approximates analytically the depth of interaction as detector response component. The comparisons confirm previous research results, showing that the usage of an accurate system model with a realistic detector response leads to reconstructed images with better resolution and contrast recovery at low levels of image roughness.
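    The factorization can be illustrated with toy matrices (all dimensions and values here are invented; the real factors are enormous and sparse, and the positron-range factor is omitted). The forward projection applies the factors in sequence so the full system matrix is never formed, and thresholding the detector-response entries trades matrix size against accuracy, as described above.

```python
import numpy as np

G = np.array([[1.0, 0.0, 1.0, 0.0],      # geometric component: which voxels
              [0.0, 1.0, 0.0, 1.0],      # contribute to each line of response
              [1.0, 1.0, 0.0, 0.0]])
A = np.diag([0.9, 0.8, 0.95])            # photon attenuation per LOR
N = np.diag([1.1, 1.0, 0.9])             # detector normalization per LOR
D = np.array([[0.9, 0.05, 0.05],         # detector response: blur between LORs
              [0.05, 0.9, 0.05],
              [0.05, 0.05, 0.9]])

def forward(x, d_threshold=0.0):
    """Forward-project image x through the factored system model.
    Entries of D below d_threshold are dropped, shrinking the stored
    matrix at some cost in accuracy."""
    D_t = np.where(D >= d_threshold, D, 0.0)
    return D_t @ (N @ (A @ (G @ x)))

y_full = forward(np.ones(4))                  # full detector response
y_thr = forward(np.ones(4), d_threshold=0.1)  # off-diagonal blur dropped
```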

  1. Accurate and efficient modeling of the detector response in small animal multi-head PET systems

    NASA Astrophysics Data System (ADS)

    Cecchetti, Matteo; Moehrs, Sascha; Belcari, Nicola; Del Guerra, Alberto

    2013-10-01

    In fully three-dimensional PET imaging, iterative image reconstruction techniques usually outperform analytical algorithms in terms of image quality provided that an appropriate system model is used. In this study we concentrate on the calculation of an accurate system model for the YAP-(S)PET II small animal scanner, with the aim to obtain fully resolution- and contrast-recovered images at low levels of image roughness. For this purpose we calculate the system model by decomposing it into a product of five matrices: (1) a detector response component obtained via Monte Carlo simulations, (2) a geometric component which describes the scanner geometry and which is calculated via a multi-ray method, (3) a detector normalization component derived from the acquisition of a planar source, (4) a photon attenuation component calculated from x-ray computed tomography data, and finally, (5) a positron range component is formally included. This system model factorization allows the optimization of each component in terms of computation time, storage requirements and accuracy. The main contribution of this work is a new, efficient way to calculate the detector response component for rotating, planar detectors, that consists of a GEANT4 based simulation of a subset of lines of flight (LOFs) for a single detector head whereas the missing LOFs are obtained by using intrinsic detector symmetries. Additionally, we introduce and analyze a probability threshold for matrix elements of the detector component to optimize the trade-off between the matrix size in terms of non-zero elements and the resulting quality of the reconstructed images. 
In order to evaluate our proposed system model we reconstructed various images of objects, acquired according to the NEMA NU 4-2008 standard, and we compared them to the images reconstructed with two other system models: a model that does not include any detector response component and a model that approximates analytically the depth of interaction as detector response component. The comparisons confirm previous research results, showing that the usage of an accurate system model with a realistic detector response leads to reconstructed images with better resolution and contrast recovery at low levels of image roughness.

  2. A microkernel design for component-based parallel numerical software systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balay, S.

    1999-01-13

    What is the minimal software infrastructure and what type of conventions are needed to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely--a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships), and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage the interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.
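    A toy analogue of those microkernel services, written in Python rather than the project's actual code, might combine run-time loading with reference-counted object life:

```python
import importlib

class Microkernel:
    """Sketch of the minimal services described: dynamic loading plus
    reference counting for object life. Error handling, profiling and
    the other shared methods are left out."""

    def __init__(self):
        self._refs = {}

    def load(self, module_name, class_name):
        """Load a class at run time, by analogy with dynamic link libraries."""
        cls = getattr(importlib.import_module(module_name), class_name)
        obj = cls()
        self._refs[id(obj)] = 1
        return obj

    def retain(self, obj):
        self._refs[id(obj)] += 1

    def release(self, obj):
        self._refs[id(obj)] -= 1
        if self._refs[id(obj)] == 0:
            del self._refs[id(obj)]      # object life ends here

kernel = Microkernel()
table = kernel.load("collections", "OrderedDict")   # resolved at run time
```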

  3. Mechanical Analysis of W78/88-1 Life Extension Program Warhead Design Options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Nathan

    2014-09-01

    A Life Extension Program (LEP) is a program to repair or replace components of nuclear weapons to ensure the ability to meet military requirements. The W78/88-1 LEP encompasses the modernization of two major nuclear weapon reentry systems into an interoperable warhead. Several design concepts exist to provide different options for robust safety and security themes, maximum non-nuclear commonality, and cost. Simulation is one capability used to evaluate the mechanical performance of the designs in various operational environments, plan for system and component qualification efforts, and provide insight into the survivability of the warhead in environments that are not currently testable. The simulation efforts use several Sandia-developed tools through the Advanced Simulation and Computing program, including Cubit for mesh generation, the DART Model Manager, SIERRA codes running on the HPC TLCC2 platforms, DAKOTA, and ParaView. Several programmatic objectives were met using the simulation capability, including: (1) providing early environmental specification estimates that may be used by component designers to understand the severity of the loads their components will need to survive, (2) providing guidance for load levels and configurations for subassembly tests intended to represent operational environments, and (3) recommending design options including modified geometry and material properties. These objectives were accomplished through regular interactions with component, system, and test engineers while using the laboratory's computational infrastructure to effectively perform ensembles of simulations. Because NNSA has decided to defer the LEP program, simulation results are being documented and models are being archived for future reference. However, some advanced and exploratory efforts will continue to mature key technologies, using the results from these and ongoing simulations for design insights, test planning, and model validation.

  4. Estimating hydrologic budgets for six Persian Gulf watersheds, Iran

    NASA Astrophysics Data System (ADS)

    Hosseini, Majid; Ghafouri, Mohammad; Tabatabaei, MahmoudReza; Goodarzi, Masoud; Mokarian, Zeinab

    2017-10-01

    Estimation of the major components of the hydrologic budget is important for determining the impacts on water supply and quality of planned or proposed land management projects, vegetative changes, groundwater withdrawals, and reservoir management practices and plans. As acquisition of field data is costly and time consuming, models have been created to test various land use practices and their concomitant effects on the hydrologic budget of watersheds. To simulate such management scenarios realistically, a model should be able to simulate the individual components of the hydrologic budget. The main objective of this study is to apply the SWAT2012 model to estimate the hydrologic budget of six subbasins of the Persian Gulf watershed (Golgol, Baghan, Marghab, Shekastian, Tangebirim and Daragah), located in the south and southwest of Iran, over the period 1991-2009. In order to evaluate the performance of the model, hydrological data, a soil map, a land use map and a digital elevation model (DEM) were obtained and prepared for each catchment to run the model. SWAT-CUP with the SUFI2 program was used for simulation, uncertainty analysis and validation at the 95 percent prediction uncertainty level. The coefficient of determination (R²) and the Nash-Sutcliffe coefficient (NS) were used to evaluate the model simulation results. Comparison of measured and predicted values demonstrated that each component of the model gave reasonable output and that the interaction among components was realistic. The study has produced a technique with reliable capability for estimating annual and monthly water budget components in the Persian Gulf watershed.
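    The evaluation statistics named above have standard definitions; as a reminder, the Nash-Sutcliffe coefficient compares model error against the variance of the observations (the flow values below are made up, not data from this study).

```python
def nash_sutcliffe(obs, sim):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; values below 0 are worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

obs = [10.0, 12.0, 8.0, 15.0]    # hypothetical monthly flows
sim = [9.5, 12.5, 8.5, 14.0]
ns = nash_sutcliffe(obs, sim)
```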

  5. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1997-01-01

    The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.

  6. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1998-01-01

    The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.

  7. Mass storage system reference model, Version 4

    NASA Technical Reports Server (NTRS)

    Coleman, Sam (Editor); Miller, Steve (Editor)

    1993-01-01

    The high-level abstractions that underlie modern storage systems are identified. The information to generate the model was collected from major practitioners who have built and operated large storage facilities, and represents a distillation of the wisdom they have acquired over the years. The model provides a common terminology and set of concepts to allow existing systems to be examined and new systems to be discussed and built. It is intended that the model and the interfaces identified from it will allow and encourage vendors to develop mutually-compatible storage components that can be combined to form integrated storage systems and services. The reference model presents an abstract view of the concepts and organization of storage systems. From this abstraction will come the identification of the interfaces and modules that will be used in IEEE storage system standards. The model is not yet suitable as a standard; it does not contain implementation decisions, such as how abstract objects should be broken up into software modules or how software modules should be mapped to hosts; it does not give policy specifications, such as when files should be migrated; it does not describe how the abstract objects should be used or connected; and it does not refer to specific hardware components. In particular, it does not fully specify the interfaces.

  8. Artificial intelligence, neural network, and Internet tool integration in a pathology workstation to improve information access

    NASA Astrophysics Data System (ADS)

    Sargis, J. C.; Gray, W. A.

    1999-03-01

    The APWS allows user-friendly access to several legacy systems, each of which would normally demand domain expertise for proper utilization. The generalized model, including objects, classes, strategies and patterns, is presented. The core components of the APWS are the Microsoft Windows 95 Operating System, Oracle, Oracle Power Objects, Artificial Intelligence tools, a medical hyperlibrary and a web site. The paper includes a discussion of how such access could be automated by taking advantage of the expert system, object-oriented programming and intelligent relational database tools within the APWS.

  9. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Demasi, J. T.; Sheffler, K. D.

    1986-01-01

    The objective of this program is to establish a methodology to predict Thermal Barrier Coating (TBC) life on gas turbine engine components. The approach involves experimental life measurement coupled with analytical modeling of relevant degradation modes. The coating being studied is a flight qualified two layer system, designated PWA 264, consisting of a nominal ten mil layer of seven percent yttria partially stabilized zirconia plasma deposited over a nominal five mil layer of low pressure plasma deposited NiCoCrAlY. Thermal barrier coating degradation modes being investigated include: thermomechanical fatigue, oxidation, erosion, hot corrosion, and foreign object damage.

  10. Computational model of precision grip in Parkinson's disease: a utility based approach

    PubMed Central

    Gupta, Ankur; Balasubramani, Pragathi P.; Chakravarthy, V. Srinivasa

    2013-01-01

    We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson's Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Ingvarsson et al., 1997; Fellows et al., 1998). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals to decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between the two fingers involved in PG and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning, with the significant difference that action selection is performed using a utility distribution instead of a purely value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the PG results from normal and PD patients accurately (Ingvarsson et al., 1997; Fellows et al., 1998). To our knowledge, this is the first model of PG under PD conditions. PMID:24348373
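    The distinction the model draws, selecting actions by utility rather than by value alone, can be sketched as follows. The numbers and the specific risk penalty are invented, a deliberate simplification of the paper's utility-distribution approach:

```python
import math

# Each candidate action has a learned value (Q) and an outcome variance.
actions = {"low_force": (1.0, 0.04),    # (Q, variance), made-up numbers
           "high_force": (1.1, 0.49)}

def utility(q, variance, risk_aversion):
    """Utility = value minus a risk penalty on outcome variability."""
    return q - risk_aversion * math.sqrt(variance)

def select(risk_aversion):
    return max(actions, key=lambda a: utility(*actions[a], risk_aversion))

select(0.0)   # purely value-based: picks "high_force" (larger Q)
select(0.5)   # risk-sensitive: picks "low_force" once variance is penalized
```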

  11. Distributed visualization framework architecture

    NASA Astrophysics Data System (ADS)

    Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger

    2010-01-01

    An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use and extensible framework for research in scientific visualization. The system provides both a single-user and a collaborative distributed environment. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These light-weight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance for rendering). A middle-tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically.
Hence if a new component is added that supports the IMaterial interface, any instances of this can be used in the various GUI components that work with this interface. One of the main features is an interactive shader designer. This allows rapid prototyping of new visualization renderings that are shader-based and greatly accelerates the development and debug cycle.
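    The registration-and-query pattern described, proxies implementing small interfaces and an AssetManager answering system-wide queries, can be sketched as follows. The names IMaterial and AssetManager come from the text; the implementation itself is invented for illustration.

```python
from abc import ABC, abstractmethod

class IMaterial(ABC):
    """One of the small interfaces all components are built on."""
    @abstractmethod
    def shade(self): ...

class AssetManager:
    """Keeps track of registered proxies and answers queries on the
    overall system, so GUI components can populate automatically."""
    def __init__(self):
        self._proxies = []

    def register(self, proxy):
        self._proxies.append(proxy)

    def query(self, interface):
        return [p for p in self._proxies if isinstance(p, interface)]

class PhongMaterial(IMaterial):          # hypothetical concrete component
    def shade(self):
        return "phong"

manager = AssetManager()
manager.register(PhongMaterial())
materials = manager.query(IMaterial)     # found without any extra wiring
```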

  12. e-Phys: a suite of intracellular neurophysiology programs integrating COM (component object model) technologies.

    PubMed

    Nguyen, Quoc-Thang; Miledi, Ricardo

    2003-09-30

    Current computer programs for intracellular recordings often lack advanced data management, are usually incompatible with other applications and are also difficult to adapt to new experiments. We have addressed these shortcomings in e-Phys, a suite of electrophysiology applications for intracellular recordings. The programs in e-Phys use Component Object Model (COM) technologies available in the Microsoft Windows operating system to provide enhanced data storage, increased interoperability between e-Phys and other COM-aware applications, and easy customization of data acquisition and analysis thanks to a script-based integrated programming environment. Data files are extensible, hierarchically organized and integrated in the Windows shell by using the Structured Storage technology. Data transfers to and from other programs are facilitated by implementing the ActiveX Automation standard and distributed COM (DCOM). ActiveX Scripting allows experimenters to write their own event-driven acquisition and analysis programs in the VBScript language from within e-Phys. Scripts can reuse components available from other programs on other machines to create distributed meta-applications. This paper describes the main features of e-Phys and how this package was used to determine the effect of the atypical antipsychotic drug clozapine on synaptic transmission at the neuromuscular junction.

  13. EEG Characteristic Extraction Method of Listening Music and Objective Estimation Method Based on Latency Structure Model in Individual Characteristics

    NASA Astrophysics Data System (ADS)

    Ito, Shin-Ichi; Mitsukura, Yasue; Nakamura Miyamura, Hiroko; Saito, Takafumi; Fukumi, Minoru

    The EEG is characterized by unique, individual characteristics, yet little research has taken these individual characteristics into account when analyzing EEG signals. The EEG often has frequency components that describe most of its significant characteristics, and the analyzed frequency components differ in importance; we regard this difference in importance as reflecting the individual characteristics. In this paper, we propose a new method for extracting an EEG characteristic vector based on a latency structure model in individual characteristics (LSMIC). The LSMIC is a latency structure model, based on the normal distribution, that includes personal error as the individual characteristics. A real-coded genetic algorithm (RGA) is used to estimate the personal error, which is an unknown parameter. Moreover, we propose an objective estimation method that plots the EEG characteristic vector in a visualization space. Finally, the performance of the proposed method is evaluated in a realistic simulation and applied to real EEG data. The results of our experiment show the effectiveness of the proposed method.
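    The role of the real-coded GA, estimating an unknown "personal error" parameter, can be illustrated with a bare-bones RGA. This is a generic sketch, not the authors' LSMIC fitting; the fitness function and bounds are placeholders.

```python
import random

def rga(fitness, bounds, pop_size=30, gens=60, seed=1):
    """Bare-bones real-coded GA: truncation selection, blend crossover,
    Gaussian mutation. Minimizes `fitness` over the interval `bounds`."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()
            child = w * a + (1 - w) * b               # blend crossover
            child += rng.gauss(0.0, 0.1 * (hi - lo))  # real-valued mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=fitness)

true_error = 0.42                                 # stand-in "personal error"
estimate = rga(lambda x: (x - true_error) ** 2, bounds=(0.0, 1.0))
```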

  14. Make-up wells drilling cost in financial model for a geothermal project

    NASA Astrophysics Data System (ADS)

    Oktaviani Purwaningsih, Fitri; Husnie, Ruly; Afuar, Waldy; Abdurrahman, Gugun

    2017-12-01

    After commissioning of a power plant, the geothermal reservoir will experience pressure decline, which affects well productivity. Further drilling is therefore carried out to sustain steam production. Make-up wells are production wells drilled inside an already confirmed reservoir to maintain steam production at a certain level. According to Sanyal (2004), geothermal power cost consists of three components: capital cost, O&M cost and make-up drilling cost. The make-up drilling cost component is a major part of the power cost and strongly influences the overall economic value of the project. The objective of this paper is to analyse the make-up well drilling cost component in the financial model of a geothermal power project. The research calculates make-up well requirements and drilling costs as a function of time, and how they influence the financial model and affect the power cost. The outcome of this research is the best scenario for determining a make-up well strategy in relation to the project financial model.
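
    The cost decomposition above can be sketched numerically. The following is a minimal, undiscounted levelized-cost calculation in the spirit of Sanyal's three-component breakdown (capital + O&M + make-up drilling); all numeric inputs are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch: power cost = capital + O&M + make-up drilling (Sanyal 2004).
# All figures below are illustrative assumptions, not data from the paper.

def levelized_cost_cents_per_kwh(capex_usd, om_usd_per_year,
                                 makeup_wells_per_year, cost_per_well_usd,
                                 net_mw, capacity_factor, lifetime_years):
    """Spread lifetime costs over lifetime generation (no discounting)."""
    kwh_per_year = net_mw * 1000.0 * 8760.0 * capacity_factor
    total_kwh = kwh_per_year * lifetime_years
    makeup_usd_per_year = makeup_wells_per_year * cost_per_well_usd
    total_cost = capex_usd + (om_usd_per_year + makeup_usd_per_year) * lifetime_years
    return 100.0 * total_cost / total_kwh  # US cents per kWh

cost = levelized_cost_cents_per_kwh(
    capex_usd=200e6,            # plant + initial wells (assumed)
    om_usd_per_year=5e6,        # fixed O&M (assumed)
    makeup_wells_per_year=0.5,  # one make-up well every two years (assumed)
    cost_per_well_usd=6e6,      # drilling cost per well (assumed)
    net_mw=55.0, capacity_factor=0.9, lifetime_years=30)
```

    A real financial model would discount cash flows and schedule make-up wells against the reservoir decline curve; this sketch only shows how the drilling term enters the power cost.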

  15. Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2013-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structural deformation during major side load events, leading to structural damage if strengthening measures are not taken. The modeling picture is incomplete without the capability to address the two-way interaction between the structure and the fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.

  16. Developing Historic Building Information Modelling Guidelines and Procedures for Architectural Heritage in Ireland

    NASA Astrophysics Data System (ADS)

    Murphy, M.; Corns, A.; Cahill, J.; Eliashvili, K.; Chenau, A.; Pybus, C.; Shaw, R.; Devlin, G.; Deevy, A.; Truong-Hong, L.

    2017-08-01

    Cultural heritage researchers have recently begun applying Building Information Modelling (BIM) to historic buildings. The model is comprised of intelligent objects with semantic attributes which represent the elements of a building structure and are organised within a 3D virtual environment. Case studies in Ireland are used to test and develop suitable systems for (a) data capture/digital surveying/processing, (b) developing a library of architectural components and (c) mapping these architectural components onto the laser scan or digital survey to create the intelligent virtual representation of a historic structure (HBIM). While BIM platforms have the potential to create a virtual and intelligent representation of a building, their full exploitation and use is restricted to a narrow set of expert users with access to costly hardware, software and skills. The testing of open BIM approaches, in particular IFCs, and the use of game engine platforms is a fundamental component of achieving much wider dissemination. The semantically enriched model can be transferred into a web-based game engine platform.

  17. Assessing the sensitivity of bovine tuberculosis surveillance in Canada's cattle population, 2009-2013.

    PubMed

    El Allaki, Farouk; Harrington, Noel; Howden, Krista

    2016-11-01

    The objectives of this study were (1) to estimate the annual sensitivity of Canada's bTB surveillance system and its three system components (slaughter surveillance, export testing and disease investigation) using a scenario tree modelling approach, and (2) to identify key model parameters that influence the estimates of the surveillance system sensitivity (SSSe). To achieve these objectives, we designed stochastic scenario tree models for the three surveillance system components included in the analysis. Demographic data, slaughter data, export testing data, and disease investigation data from 2009 to 2013 were extracted for input into the scenario trees. Sensitivity analysis was conducted to identify key influential parameters on SSSe estimates. The median annual SSSe estimates generated from the study were very high, ranging from 0.95 (95% probability interval [PI]: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). Median annual sensitivity estimates for the slaughter surveillance component ranged from 0.95 (95% PI: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). This shows slaughter surveillance to be the major contributor to overall surveillance system sensitivity, with a high probability of detecting M. bovis infection if present at a prevalence of 0.00028% or greater during the study period. The export testing and disease investigation components had extremely low component sensitivity estimates: the maximum median sensitivity estimates were 0.02 (95% PI: 0.014-0.023) and 0.0061 (95% PI: 0.0056-0.0066), respectively. The three most influential input parameters on the model's output (SSSe) were the probability of a granuloma being detected at slaughter inspection, the probability of a granuloma being present in older animals (≥12 months of age), and the probability of a granuloma sample being submitted to the laboratory.
Additional studies are required to reduce the levels of uncertainty and variability associated with these three parameters influencing the surveillance system sensitivity. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
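
    In scenario-tree surveillance analysis, component sensitivities are commonly combined under an independence assumption as SSSe = 1 − ∏(1 − CSe_i). The sketch below illustrates that combination using the median component values reported in the abstract; the independence assumption is ours, as the paper's exact aggregation formula is not given here.

```python
# Hedged sketch: overall surveillance system sensitivity from component
# sensitivities, SSSe = 1 - prod(1 - CSe_i), assuming independent components.
# Component values are the medians reported in the abstract.

def combine_sensitivities(component_sensitivities):
    p_all_miss = 1.0
    for cse in component_sensitivities:
        p_all_miss *= (1.0 - cse)  # probability every component misses
    return 1.0 - p_all_miss

# slaughter surveillance, export testing, disease investigation
ssse = combine_sensitivities([0.95, 0.02, 0.0061])
```

    The result (about 0.95) reproduces the abstract's observation that slaughter surveillance dominates the system sensitivity.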

  18. The gravitational potential of a homogeneous polyhedron or don't cut corners

    NASA Technical Reports Server (NTRS)

    Werner, Robert A.

    1994-01-01

    A polyhedron can model irregularly shaped objects such as asteroids, comet nuclei, and small planetary satellites. With minor effort, such a model can incorporate important surface features such as large craters. Here we develop closed-form expressions for the exterior gravitational potential and acceleration components due to a constant-density polyhedron. An equipotential surface of Phobos is illustrated.
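
    To make the setting concrete, here is a crude numerical sketch, not Werner's closed-form result: the exterior potential of a homogeneous polyhedron can be approximated by splitting it into tetrahedra and lumping each tetrahedron's mass at its centroid. Werner's contribution is the exact closed form that avoids this approximation. The unit-cube geometry, density, and field point below are illustrative assumptions.

```python
# Crude numerical sketch (NOT the paper's closed-form expressions):
# approximate the exterior potential of a homogeneous polyhedron by
# tetrahedral decomposition with mass lumped at centroids.

def sub(u, v): return [u[i] - v[i] for i in range(3)]
def dot(u, v): return sum(u[i] * v[i] for i in range(3))
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def tet_volume(a, b, c, d):
    # |(b-a) . ((c-a) x (d-a))| / 6
    return abs(dot(sub(b, a), cross(sub(c, a), sub(d, a)))) / 6.0

def potential(tets, p, density, G=6.674e-11):
    """Sum G*m_i/r_i over tetrahedra, mass lumped at each centroid."""
    U = 0.0
    for a, b, c, d in tets:
        m = density * tet_volume(a, b, c, d)
        cen = [(a[i] + b[i] + c[i] + d[i]) / 4.0 for i in range(3)]
        r = sum(x * x for x in sub(p, cen)) ** 0.5
        U += G * m / r
    return U

# Kuhn triangulation of the unit cube into 6 tetrahedra.
e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
tets = []
for i in range(3):
    for j in range(3):
        if i != j:
            vij = [e[i][k] + e[j][k] for k in range(3)]
            tets.append(([0, 0, 0], e[i], vij, [1, 1, 1]))

U = potential(tets, [50.0, 0.5, 0.5], density=2000.0)
```

    Far from the body this lumped approximation agrees with a point mass at the cube's center to well under one percent, which is why closed-form expressions matter mainly near the surface, where such approximations break down.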

  19. Double Blending: Online Theory with On-Campus Practice in Photography Instruction

    ERIC Educational Resources Information Center

    Abrahmov, Shlomo Lee; Ronen, Miky

    2008-01-01

    This paper presents a blended learning model in which the online component is not used to replace some of the traditional on-campus activities of a course but to "introduce new teaching objectives" that would not have been possible to achieve, because of class time limitations and the nature of the course. The instructional model was aimed at…

  20. The Role of Loneliness in the Relationship between Anxiety and Depression in Clinical and School-Based Youth

    ERIC Educational Resources Information Center

    Ebesutani, Chad; Fierstein, Matthew; Viana, Andres G.; Trent, Lindsay; Young, John; Sprung, Manuel

    2015-01-01

    Identifying mechanisms that explain the relationship between anxiety and depression is needed. The Tripartite Model is one model that has been proposed to help explain the association between these two problems, positing a shared component called negative affect. The objective of the present study was to examine the role of loneliness in relation…

  1. Research on Historic Bim of Built Heritage in Taiwan - a Case Study of Huangxi Academy

    NASA Astrophysics Data System (ADS)

    Lu, Y. C.; Shih, T. Y.; Yen, Y. N.

    2018-05-01

    Digital archiving technology for conserving cultural heritage is an important subject nowadays. The Taiwanese Ministry of Culture continues to work on aligning the concepts and technology of conservation with international conventions. However, the products of these different technologies are not yet integrated, due to the lack of research and development in this field; there is currently no effective HBIM schema for Taiwanese cultural heritage. The aim of this research is to establish an HBIM schema for Chinese built heritage in Taiwan. The proposed method starts from the perspective of the components of built heritage buildings and investigates the important properties of those components through key international charters and Taiwanese cultural heritage conservation laws. Afterwards, an object-oriented class diagram and an ontology were defined at the scale of components to clarify the concepts and increase interoperability. A historical database was then established for the historical information of the components and brought into the BIM concept in order to build a 3D model of heritage objects that can be used for visualization. An integration platform was developed to let users browse and manipulate the database and the 3D model simultaneously. In addition, this research evaluated the feasibility of the method on a case study, the Huangxi Academy in Taiwan. The conclusions show that the class diagram facilitates the establishment of the database and its application to different Chinese built heritage objects, and that the ontology helps convey knowledge and increase interoperability. In comparison with traditional documentation methods, the querying results of the platform were more accurate and less prone to human error.

  2. Componentware Approaches in Management Information Systems

    DTIC Science & Technology

    2000-11-01

    functionality. It offers plug & play readiness for service and is cooperative in combination with other programs ... Model (Griffel 1998). The component view has ... (ISO195, DI199). ... Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995. Componentware approaches provide means that support ... Griffel

  3. A Catalog of Galaxy Clusters Observed by XMM-Newton

    NASA Technical Reports Server (NTRS)

    Snowden, S. L.; Mushotzky, R. M.; Kuntz, K. D.; Davis, David S.

    2007-01-01

    Images and the radial profiles of the temperature, abundance, and brightness for 70 clusters of galaxies observed by XMM-Newton are presented along with a detailed discussion of the data reduction and analysis methods, including background modeling, which were used in the processing. Proper consideration of the various background components is vital to extend the reliable determination of cluster parameters to the largest possible cluster radii. The various components of the background including the quiescent particle background, cosmic diffuse emission, soft proton contamination, and solar wind charge exchange emission are discussed along with suggested means of their identification, filtering, and/or their modeling and subtraction. Every component is spectrally variable, sometimes significantly so, and all components except the cosmic background are temporally variable as well. The distributions of the events over the FOV vary between the components, and some distributions vary with energy. The scientific results from observations of low surface brightness objects and the diffuse background itself can be strongly affected by these background components and therefore great care should be taken in their consideration.

  4. Advanced Turbine Technology Applications Project (ATTAP)

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Reports technical effort by AlliedSignal Engines in sixth year of DOE/NASA funded project. Topics include: gas turbine engine design modifications of production APU to incorporate ceramic components; fabrication and processing of silicon nitride blades and nozzles; component and engine testing; and refinement and development of critical ceramics technologies, including: hot corrosion testing and environmental life predictive model; advanced NDE methods for internal flaws in ceramic components; and improved carbon pulverization modeling during impact. ATTAP project is oriented toward developing high-risk technology of ceramic structural component design and fabrication to carry forward to commercial production by 'bridging the gap' between structural ceramics in the laboratory and near-term commercial heat engine application. Current ATTAP project goal is to support accelerated commercialization of advanced, high-temperature engines for hybrid vehicles and other applications. Project objectives are to provide essential and substantial early field experience demonstrating ceramic component reliability and durability in modified, available, gas turbine engine applications; and to scale-up and improve manufacturing processes of ceramic turbine engine components and demonstrate application of these processes in the production environment.

  5. Robust multiperson tracking from a mobile platform.

    PubMed

    Ess, Andreas; Leibe, Bastian; Schindler, Konrad; van Gool, Luc

    2009-10-01

    In this paper, we address the problem of multiperson tracking in busy pedestrian zones using a stereo rig mounted on a mobile platform. The complexity of the problem calls for an integrated solution that extracts as much visual information as possible and combines it through cognitive feedback cycles. We propose such an approach, which jointly estimates camera position, stereo depth, object detection, and tracking. The interplay between those components is represented by a graphical model. Since the model has to incorporate object-object interactions and temporal links to past frames, direct inference is intractable. We, therefore, propose a two-stage procedure: for each frame, we first solve a simplified version of the model (disregarding interactions and temporal continuity) to estimate the scene geometry and an overcomplete set of object detections. Conditioned on these results, we then address object interactions, tracking, and prediction in a second step. The approach is experimentally evaluated on several long and difficult video sequences from busy inner-city locations. Our results show that the proposed integration makes it possible to deliver robust tracking performance in scenes of realistic complexity.

  6. Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature

    PubMed Central

    Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat

    2014-01-01

    It is a challenge to represent the target appearance model for moving object tracking in complex environments. This study presents a novel method with an appearance model described by double templates based on the timed motion history image with HSV color histogram feature (tMHI-HSV). The main components include offline and online template initialization, tMHI-HSV-based calculation of candidate patches' feature histograms, double templates matching (DTM) for object location, and template updating. Firstly, we initialize the target object region and calculate its HSV color histogram feature as the offline template and the online template. Secondly, the tMHI-HSV is used to segment the motion region and calculate the candidate object patches' color histograms to represent their appearance models. Finally, we utilize the DTM method to track the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose change of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
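
    The template-matching idea can be sketched in miniature: represent a patch by a hue histogram and score it against an offline and an online template. The function names, the Bhattacharyya similarity, and the equal weighting below are our assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of double-template matching on hue histograms.
# Similarity measure (Bhattacharyya) and 50/50 weighting are assumptions.

def hue_histogram(hues, bins=16):
    """Normalized histogram of hue values in [0, 360)."""
    h = [0.0] * bins
    for v in hues:
        h[int(v) * bins // 360] += 1.0
    n = sum(h) or 1.0
    return [x / n for x in h]

def bhattacharyya(p, q):
    """Similarity in [0, 1]; 1 means identical histograms."""
    return sum((pi * qi) ** 0.5 for pi, qi in zip(p, q))

def match(candidate, offline, online, w=0.5):
    """Double-template score: weighted mix of offline and online matches."""
    return (w * bhattacharyya(candidate, offline)
            + (1 - w) * bhattacharyya(candidate, online))

red_patch = hue_histogram([5, 10, 350, 355] * 25)   # reddish candidate
red_tmpl  = hue_histogram([0, 8, 352, 358] * 25)    # reddish templates
blue_tmpl = hue_histogram([220, 230, 240] * 33)     # bluish templates
score_same = match(red_patch, red_tmpl, red_tmpl)
score_diff = match(red_patch, blue_tmpl, blue_tmpl)
```

    In the full method the online template would be refreshed each frame from the tracked region, while the offline template anchors the tracker against drift.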

  7. Three-dimensional modelling and geothermal process simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, K.L.

    1990-01-01

    The subsurface geological model or 3-D GIS is constructed from three kinds of objects, which are a lithotope (in boundary representation), a number of fault systems, and volumetric textures (vector fields). The chief task of the model is to yield an estimate of the conductance tensors (fluid permeability and thermal conductivity) throughout an array of voxels. This is input as material properties to a FEHM numerical physical process model. The main task of the FEHM process model is to distinguish regions of convective from regions of conductive heat flow, and to estimate the fluid phase, pressure and flow paths. The temperature, geochemical, and seismic data provide the physical constraints on the process. The conductance tensors in the Franciscan Complex are to be derived by the addition of two components. The isotropic component is a stochastic spatial variable due to disruption of lithologies in melange. The deviatoric component is deterministic, due to smoothness and continuity in the textural vector fields. This decomposition probably also applies to the engineering hydrogeological properties of shallow terrestrial fluvial systems. However there are differences in quantity. The isotropic component is much more variable in the Franciscan, to the point where volumetric averages are misleading, and it may be necessary to select that component from several, discrete possible states. The deviatoric component is interpolated using a textural vector field. The Franciscan field is much more complicated, and contains internal singularities. 27 refs., 10 figs.

  8. Spectral Models of Kuiper Belt Objects and Centaurs

    NASA Technical Reports Server (NTRS)

    Cruikshank, Dale; Ore, Christina M. Dalle

    2003-01-01

    We present models of the spectral reflectances of groups of outer Solar System objects defined primarily by their colors in the spectral region 0.4 -1.2 microns, and which have geometric albedo 0.04 at wavelength 0.55 microns. Our models of the groups with the strongest reflectance gradients (reddest colors) use combinations of organic tholins. We test the hypothesis that metal-reddened igneous rock-forming minerals contribute to the red colors of Centaurs and KBOs by using the space-weathered lunar soil as one of the components of our models. We find that our models can admit the presence of moderate amounts of space-weathered (metal-reddened) minerals, but that they do not require this material to achieve the red colors of the reddest outer Solar System bodies. Our models with organic tholins are consistent with the results of other investigators.

  9. Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems

    NASA Astrophysics Data System (ADS)

    Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.

    2017-01-01

    A Brown Dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Because of the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that, according to the Rayleigh criterion, they appear as a single, unresolved object in images. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that uses model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we will use this method to identify binary BD systems in the Hubble Space Telescope archive. The method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method that compares model spectral data for single and binary fits to determine the best parameter values for each component of the system. In this poster we describe these methods, their challenges, and other possible uses.
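
    The PSF-comparison idea can be illustrated in one dimension: fit an observed profile with a single model PSF and with a sum of two shifted ones, and compare the residuals. Gaussian PSFs and all parameter values below are our illustrative assumptions; the actual method uses model PSFs for the instrument.

```python
# Hedged 1-D sketch: single-source vs binary-source PSF fits compared by
# chi-square. Gaussians stand in for the real instrument PSF.
import math

def psf(x, center, amp, width=1.0):
    return amp * math.exp(-0.5 * ((x - center) / width) ** 2)

def chi2(observed, model_vals):
    return sum((o - m) ** 2 for o, m in zip(observed, model_vals))

xs = [i * 0.25 for i in range(-40, 41)]
# Synthetic "observation": an unresolved binary (two blended sources).
obs = [psf(x, -0.6, 1.0) + psf(x, 0.9, 0.6) for x in xs]

single = [psf(x, 0.0, 1.4) for x in xs]                      # one-source fit
binary = [psf(x, -0.6, 1.0) + psf(x, 0.9, 0.6) for x in xs]  # two-source fit

chi2_single = chi2(obs, single)
chi2_binary = chi2(obs, binary)
```

    A markedly lower chi-square for the binary model flags the source as a candidate binary; in practice one would also penalize the extra free parameters before preferring the two-source fit.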

  10. Impact of multi-component diffusion in turbulent combustion using direct numerical simulations

    DOE PAGES

    Bruno, Claudio; Sankaran, Vaidyanathan; Kolla, Hemanth; ...

    2015-08-28

    This study presents the results of DNS of a partially premixed turbulent syngas/air flame at atmospheric pressure. The objective was to assess the importance and possible effects of molecular transport on flame behavior and structure. To this purpose, DNS were performed with two proprietary DNS codes and with three different molecular diffusion transport models: fully multi-component, mixture-averaged, and unity Lewis number for all species.
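
    One of the three transport models compared, mixture-averaged diffusion, uses the standard Curtiss-Hirschfelder formula D_k,mix = (1 − Y_k) / Σ_{j≠k}(X_j / D_jk), given binary diffusion coefficients D_jk. The sketch below applies it to a toy three-species mixture; the numbers are illustrative, not from the study.

```python
# Hedged sketch of the mixture-averaged diffusion coefficient,
#   D_k,mix = (1 - Y_k) / sum_{j != k} (X_j / D_jk).
# Composition and binary coefficients below are illustrative assumptions.

def mixture_averaged_D(k, X, Y, D_bin):
    """X: mole fractions, Y: mass fractions, D_bin[j][k]: binary coeffs (m^2/s)."""
    denom = sum(X[j] / D_bin[j][k] for j in range(len(X)) if j != k)
    return (1.0 - Y[k]) / denom

# Toy 3-species mixture (roughly CO / H2 / N2, as in a syngas flame).
X = [0.2, 0.2, 0.6]           # mole fractions
Y = [0.35, 0.025, 0.625]      # mass fractions (assumed)
D_bin = [[0.0, 7e-5, 2e-5],   # symmetric binary diffusion matrix, m^2/s
         [7e-5, 0.0, 8e-5],
         [2e-5, 8e-5, 0.0]]
D_H2_mix = mixture_averaged_D(1, X, Y, D_bin)
```

    The light H2 species ends up with a much larger mixture coefficient than N2, which is exactly the differential-diffusion effect that the unity-Lewis-number model suppresses.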

  11. View subspaces for indexing and retrieval of 3D models

    NASA Astrophysics Data System (ADS)

    Dutagaci, Helin; Godil, Afzal; Sankur, Bülent; Yemez, Yücel

    2010-02-01

    View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results. These schemes are coherent with the theory that humans recognize objects based on their 2D appearances. The view-based techniques also allow users to search with various queries such as binary images, range images and even 2D sketches. The previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments, Scale Invariant Feature Transform-based local features and 2D Digital Fourier Transform coefficients. These methods describe each object independently of the others. In this work, we explore data-driven subspace models, such as Principal Component Analysis, Independent Component Analysis and Nonnegative Matrix Factorization, to describe the shape information of the views. We treat the depth images obtained from various points of the view sphere as 2D intensity images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of categorizing shapes according to their eigenvalue spread. Both the shape categorization and data-driven feature set conjectures are tested on the PSB database and compared with competitor view-based 3D shape retrieval algorithms.
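
    The subspace idea can be sketched at toy scale: flatten each depth view into a vector, learn a PCA direction from the collection, and index each view by its projection coefficient. The tiny 2x2 "depth views" and pure-Python power iteration below are our illustrative choices; a real system would eigendecompose thousands of views.

```python
# Hedged sketch: PCA (via power iteration) on flattened depth views, with
# each view indexed by its first principal-component coefficient.

def transpose(M): return [list(r) for r in zip(*M)]

def matvec(M, v): return [sum(a * b for a, b in zip(row, v)) for row in M]

def top_principal_component(X, iters=200):
    """Power iteration on the covariance of mean-centered row-vectors X."""
    n, d = len(X), len(X[0])
    mean = [sum(col) / n for col in zip(*X)]
    C = [[x - m for x, m in zip(row, mean)] for row in X]
    Ct = transpose(C)
    v = [float(i + 1) for i in range(d)]   # deterministic start vector
    for _ in range(iters):
        w = matvec(Ct, matvec(C, v))       # (C^T C) v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def project(view, mean, v):
    return sum((x - m) * vi for x, m, vi in zip(view, mean, v))

# Toy flattened 2x2 "depth views": two left-leaning, two right-leaning.
views = [[1, 0, 1, 0], [0.9, 0.1, 1.0, 0.0],
         [0, 1, 0, 1], [0.1, 0.9, 0.0, 1.0]]
mean, pc1 = top_principal_component(views)
coeffs = [project(vw, mean, pc1) for vw in views]
```

    Views of similar shapes land near each other along the learned axis, which is what makes the low-dimensional coefficients usable as retrieval indices.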

  12. Analyzing and designing object-oriented missile simulations with concurrency

    NASA Astrophysics Data System (ADS)

    Randorf, Jeffrey Allen

    2000-11-01

    A software object model for the six degree-of-freedom missile modeling domain is presented. As a precursor, a domain analysis of the missile modeling domain was performed, based on the Feature-Oriented Domain Analysis (FODA) technique described by the Software Engineering Institute (SEI). It was subsequently determined that the FODA methodology is functionally equivalent to the Object Modeling Technique (OMT). The analysis used legacy software documentation and code from the ENDOSIM, KDEC, and TFrames 6-DOF modeling tools, along with other technical literature. The SEI Object Connection Architecture (OCA) was the template for designing the object model. Three variants of the OCA were considered: a reference structure, a recursive structure, and a reference structure augmented for flight vehicle modeling. The reference OCA design option was chosen to maintain simplicity without compromising the expressive power of the OMT model. The missile architecture was then analyzed for potential areas of concurrent computing. It was shown how protected objects could be used for data passing between OCA object managers, allowing concurrent access without changing the OCA reference design intent or structure. The implementation language was the 1995 release of Ada. It was shown how OCA software components can be expressed as Ada child packages. While acceleration of several low-level and higher-level operations is possible on appropriate hardware, there was a 33% degradation in the performance of a 4th-order Runge-Kutta integration of two simultaneous ordinary differential equations when using Ada tasking on a single-processor machine. The Defense Department's High Level Architecture (HLA) was introduced and explained in context with the OCA. It was shown that the HLA and OCA are not mutually exclusive but complementary architectures: HLA was shown as an interoperability solution, with the OCA as an architectural vehicle for software reuse.
Further directions for implementing a 6-DOF missile modeling environment are discussed.
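
    For reference, the classical 4th-order Runge-Kutta step cited in the performance comparison can be sketched on two coupled ODEs. A simple harmonic oscillator stands in here for the missile state equations; the test problem and step size are our choices, and this sketch says nothing about the Ada tasking overhead itself.

```python
# Hedged sketch: classical RK4 on two simultaneous ODEs
# (y'' = -y rewritten as the system [position, velocity]).

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def oscillator(t, y):          # y = [position, velocity]
    return [y[1], -y[0]]

y, t, h = [1.0, 0.0], 0.0, 0.01
for _ in range(628):           # integrate to t ~ 2*pi (one full period)
    y = rk4_step(oscillator, t, y, h)
    t += h
```

    The four stage evaluations per step are the natural candidates for the concurrent execution the thesis investigates, but their data dependencies (k2 needs k1, and so on) limit the available parallelism.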

  13. Southern Regional Center for Lightweight Innovative Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horstemeyer, Mark F.; Wang, Paul

    The three major objectives of this Phase III project are: To develop experimentally validated cradle-to-grave modeling and simulation tools to optimize automotive and truck components for lightweighting materials (aluminum, steel, and Mg alloys and polymer-based composites) with consideration of uncertainty to decrease weight and cost, yet increase the performance and safety in impact scenarios; To develop multiscale computational models that quantify microstructure-property relations by evaluating various length scales, from the atomic through component levels, for each step of the manufacturing process for vehicles; and To develop an integrated K-12 educational program to educate students on lightweighting designs and impact scenarios.

  14. Measurement and Modeling of the Optical Scattering Properties of Crop Canopies

    NASA Technical Reports Server (NTRS)

    Vanderbilt, V. C.; Grant, L.

    1984-01-01

    Efforts in measuring, analyzing, and mathematically modeling the specular, polarized, and diffuse light scattering properties of several plant canopies and their component parts (leaves, stems, fruit, soil) as a function of view angle and illumination angle are reported. Specific objectives were: (1) to demonstrate a technique for determining the specular and diffuse components of the reflectance factor of plant canopies; (2) to acquire the measurements and begin assembling a data set for developing and testing canopy reflectance models; (3) to design and build a new optical instrument to measure the light scattering properties of individual leaves; and (4) to use this instrument to survey and investigate the information in the light scattering properties of individual leaves of crops, forests, weeds, and horticulture.

  15. Semantic modeling and structural synthesis of onboard electronics protection means as open information system

    NASA Astrophysics Data System (ADS)

    Zhevnerchuk, D. V.; Surkova, A. S.; Lomakina, L. S.; Golubev, A. S.

    2018-05-01

    The article describes a component-based representation approach and semantic models of on-board electronics protection from ionizing radiation of various kinds. Semantic models are constructed whose distinguishing feature is the representation of electronic elements, protection modules and sources of impact as blocks with interfaces. Rules of logical inference and algorithms are developed for synthesizing the object properties of the semantic network, imitating the interfaces between the components of the protection system and the sources of radiation. The results of the algorithm are illustrated using the radiation-resistant microcircuits 1645RU5U and 1645RT2U and a computational-experimental method for estimating the durability of on-board electronics.

  16. Motion coherence affects human perception and pursuit similarly.

    PubMed

    Beutter, B R; Stone, L S

    2000-01-01

    Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. 
Our results support the view that pursuit and perception share a common motion-integration stage, perhaps within areas MT or MST.

  18. Operational models of infrastructure resilience.

    PubMed

    Alderson, David L; Brown, Gerald G; Carlyle, W Matthew

    2015-04-01

    We propose a definition of infrastructure resilience that is tied to the operation (or function) of an infrastructure as a system of interacting components and that can be objectively evaluated using quantitative models. Specifically, for any particular system, we use quantitative models of system operation to represent the decisions of an infrastructure operator who guides the behavior of the system as a whole, even in the presence of disruptions. Modeling infrastructure operation in this way makes it possible to systematically evaluate the consequences associated with the loss of infrastructure components, and leads to a precise notion of "operational resilience" that facilitates model verification, validation, and reproducible results. Using a simple example of a notional infrastructure, we demonstrate how to use these models for (1) assessing the operational resilience of an infrastructure system, (2) identifying critical vulnerabilities that threaten its continued function, and (3) advising policymakers on investments to improve resilience. © 2014 Society for Risk Analysis.
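The notion of operational resilience above can be sketched in a few lines (a minimal, hypothetical example: the network, capacities, and demand figures are all invented). System function is the demand an idealized operator can still serve, and component criticality is ranked by the loss in served demand:

```python
# Hypothetical power network: the operator delivers as much demand as
# possible over whichever transmission lines remain in service.
LINES = {"line_a": 60.0, "line_b": 40.0, "line_c": 25.0}  # capacities (MW)
DEMAND = 100.0

def served_demand(failed=()):
    """Operator model: with the named components lost, the best the
    operator can do is dispatch up to the surviving total capacity."""
    capacity = sum(cap for name, cap in LINES.items() if name not in failed)
    return min(DEMAND, capacity)

# (1) operational resilience: consequence of each single-component loss
impacts = {name: DEMAND - served_demand(failed=(name,)) for name in LINES}
# (2) the critical vulnerability is the component whose loss costs most
critical = max(impacts, key=impacts.get)
```

Note that losing `line_c` costs nothing here (the remaining capacity still covers demand), which is the point of tying resilience to operation rather than to component counts.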

  19. Dynamics of attitudes and genetic processes.

    PubMed

    Guastello, Stephen J; Guastello, Denise D

    2008-01-01

    Relatively new discoveries of a genetic component to attitudes have challenged the traditional viewpoint that attitudes are primarily learned ideas and behaviors. Attitudes that are regarded by respondents as "more important" tend to have greater genetic components to them, and tend to be more closely associated with authoritarianism. Nonlinear theories, nonetheless, have also been introduced to study attitude change. The objective of this study was to determine whether change in authoritarian attitudes across two generations would be more aptly described by a linear or a nonlinear model. Participants were 372 college students, their mothers, and their fathers who completed an attitude questionnaire. Results indicated that the nonlinear model (R2 = .09) was slightly better than the linear model (R2 = .08), but the two models offered very different forecasts for future generations of US society. The linear model projected a gradual and continuing bifurcation between authoritarians and non-authoritarians. The nonlinear model projected a stabilization of authoritarian attitudes.
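The linear-versus-nonlinear comparison by R² can be sketched as follows (synthetic data and a simple cubic stand-in for the nonlinear term; the paper's actual nonlinear specification is not reproduced here):

```python
def r_squared(y, yhat):
    """Coefficient of determination."""
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def fit_ols(xs, y):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(y) / n
    b = (sum((x - xbar) * (yi - ybar) for x, yi in zip(xs, y))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

# parent attitude score -> child attitude score (synthetic data)
parent = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
child = [-1.9, -1.6, -0.7, -0.2, 0.1, 0.3, 0.8, 1.7, 1.8]

# linear model: child = a + b * parent
a1, b1 = fit_ols(parent, child)
r2_linear = r_squared(child, [a1 + b1 * x for x in parent])

# one simple nonlinear alternative: child = a + b * parent**3
cubed = [x ** 3 for x in parent]
a2, b2 = fit_ols(cubed, child)
r2_nonlinear = r_squared(child, [a2 + b2 * x for x in cubed])
```

With the small R² gap reported in the study (.09 vs. .08), the interesting difference lies not in fit but in the models' divergent long-run forecasts.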

  20. Multi-objective spatial tools to inform maritime spatial planning in the Adriatic Sea.

    PubMed

    Depellegrin, Daniel; Menegon, Stefano; Farella, Giulio; Ghezzo, Michol; Gissi, Elena; Sarretta, Alessandro; Venier, Chiara; Barbanti, Andrea

    2017-12-31

    This research presents a set of multi-objective spatial tools for sea planning and environmental management in the Adriatic Sea Basin. The tools address four objectives: 1) assessment of cumulative impacts from anthropogenic sea uses on environmental components of marine areas; 2) analysis of sea use conflicts; 3) 3-D hydrodynamic modelling of nutrient dispersion (nitrogen and phosphorus) from riverine sources in the Adriatic Sea Basin and 4) marine ecosystem services capacity assessment from seabed habitats based on an ES matrix approach. Geospatial modelling results were illustrated, analysed and compared on country level and for three biogeographic subdivisions, Northern-Central-Southern Adriatic Sea. The paper discusses model results for their spatial implications, relevance for sea planning, limitations and concludes with an outlook towards the need for more integrated, multi-functional tools development for sea planning. Copyright © 2017. Published by Elsevier B.V.

  1. Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Klems, Markus; Nimis, Jens; Tai, Stefan

    On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability for Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measure costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and to compare these costs to conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real world scenarios.
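The framework's core comparison, Cloud costs versus conventional IT, reduces to an accounting calculation like the following (all figures are invented for illustration):

```python
def cloud_cost(hours_used, rate_per_hour):
    """Pay-per-use: cost scales with actual service usage."""
    return hours_used * rate_per_hour

def on_premise_cost(months, capex, opex_per_month):
    """Conventional IT: up-front hardware plus fixed running costs."""
    return capex + months * opex_per_month

# illustrative 12-month comparison for a lightly used workload
months, hours_per_month = 12, 200
cloud = cloud_cost(months * hours_per_month, rate_per_hour=1.50)
owned = on_premise_cost(months, capex=20_000, opex_per_month=400)
cheaper = "cloud" if cloud < owned else "on-premise"
```

The break-even point obviously shifts with utilization, which is why the framework ties valuation to representative use cases rather than to a single rate comparison.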

  2. Modeling Yeast Cell Polarization Induced by Pheromone Gradients

    NASA Astrophysics Data System (ADS)

    Yi, Tau-Mu; Chen, Shanqin; Chou, Ching-Shan; Nie, Qing

    2007-07-01

    Yeast cells respond to spatial gradients of mating pheromones by polarizing and projecting up the gradient toward the source. It is thought that they employ a spatial sensing mechanism in which the cell compares the concentration of pheromone at different points on the cell surface and determines the maximum point, where the projection forms. Here we constructed the first spatial mathematical model of the yeast pheromone response that describes the dynamics of the heterotrimeric and Cdc42p G-protein cycles, which are linked in a cascade. Two key performance objectives of this system are (1) amplification—converting a shallow external gradient of ligand to a steep internal gradient of protein components and (2) tracking—following changes in gradient direction. We used simulations to investigate amplification mechanisms that allow tracking. We identified specific strategies for regulating the spatial dynamics of the protein components (i.e. their changing location in the cell) that would enable the cell to achieve both objectives.
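The amplification objective, converting a shallow external gradient into a steep internal one, can be caricatured with a steep Hill-type response (a toy sketch: the deliberately large Hill coefficient and the 1% gradient are illustrative, not the paper's mechanism):

```python
import math

def external_gradient(theta, depth=0.01):
    """Shallow pheromone profile around the cell surface: a 1% modulation
    peaked at theta = 0, the direction of the source."""
    return 1.0 + depth * math.cos(theta)

def internal_response(ligand, half_max, hill=40):
    """Toy amplification stage: a deliberately steep Hill function turns a
    small ligand difference into a large difference in active protein."""
    return ligand ** hill / (ligand ** hill + half_max ** hill)

points = [2 * math.pi * k / 36 for k in range(36)]  # surface positions
ligand = [external_gradient(t) for t in points]
mean_l = sum(ligand) / len(ligand)                  # half-max at the mean
activity = [internal_response(l, mean_l) for l in ligand]

ext_contrast = (max(ligand) - min(ligand)) / mean_l   # ~0.02
int_contrast = max(activity) - min(activity)          # much larger
```

A static ultrasensitive stage like this amplifies but tracks poorly when the gradient direction changes, which is exactly the amplification-versus-tracking tension the model explores.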

  3. SEL Ada reuse analysis and representations

    NASA Technical Reports Server (NTRS)

    Kester, Rush

    1990-01-01

Overall, it was revealed that the pattern of Ada reuse has evolved from initial reuse of utility components into reuse of generalized application architectures. Utility components were both domain-independent utilities, such as queues and stacks, and domain-specific utilities, such as those that implement spacecraft orbit and attitude mathematical functions and physics or astronomical models. The level of reuse was significantly increased with the development of a generalized telemetry simulator architecture. The use of Ada generics significantly increased the level of verbatim reuse, owing to the ability to parameterize the aspects of design that are configurable during reuse. A key factor in implementing generalized architectures was the ability to use generic subprogram parameters to tailor parts of the algorithm embedded within the architecture. The use of object-oriented design (in which objects model real-world entities) significantly improved the modularity for reuse. Encapsulating into packages the data and operations associated with common real-world entities creates natural building blocks for reuse.

  4. Model-based tomographic reconstruction of objects containing known components.

    PubMed

    Stayman, J Webster; Otake, Yoshito; Prince, Jerry L; Khanna, A Jay; Siewerdsen, Jeffrey H

    2012-10-01

    The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field-of-view has been steadily increasing. One reason is the aging population and proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near to the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable to quality assurance and verification of treatment delivery.
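The alternating estimation idea, jointly updating the anatomy and the pose of a known component, can be sketched in one dimension (a toy least-squares analogue of the penalized-likelihood scheme; the signal values are invented):

```python
# 1-D toy: jointly estimate a smooth "anatomy" level and the integer
# position of a known high-density "component" in an observed signal.
TEMPLATE = [5.0, 9.0, 5.0]          # known component profile (a priori model)
observed = [1.0, 1.1, 0.9, 6.0, 10.1, 5.9, 1.0, 1.05]

def render(template, pos, n, background):
    """Forward model: background level plus the template at position pos."""
    signal = [background] * n
    for i, v in enumerate(template):
        signal[pos + i] = background + v
    return signal

def sse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

background, pos = 0.0, 0
for _ in range(5):  # alternating updates (here minimising squared error)
    # step 1: best component position given the current background
    pos = min(range(len(observed) - len(TEMPLATE) + 1),
              key=lambda p: sse(observed,
                                render(TEMPLATE, p, len(observed), background)))
    # step 2: best background given the component position
    outside = [v for i, v in enumerate(observed)
               if not pos <= i < pos + len(TEMPLATE)]
    background = sum(outside) / len(outside)
```

The real method operates on volumes with continuous six-degree-of-freedom poses and a statistical noise model, but the alternation between "fit the component" and "fit everything else" is the same.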

  5. An Evaluative Review of Simulated Dynamic Smart 3d Objects

    NASA Astrophysics Data System (ADS)

    Romeijn, H.; Sheth, F.; Pettit, C. J.

    2012-07-01

Three-dimensional (3D) modelling of plants can be an asset for creating agricultural based visualisation products. The continuum of 3D plants models ranges from static to dynamic objects, also known as smart 3D objects. There is an increasing requirement for smarter simulated 3D objects that are attributed mathematically and/or from biological inputs. A systematic approach to plant simulation offers significant advantages to applications in agricultural research, particularly in simulating plant behaviour and the influences of external environmental factors. Approaches to 3D plant object visualisation range from billboarded photographic images of plants to more advanced procedural models that come closer to simulating realistic virtual plants. However, few programs model physical reactions of plants to external factors and even fewer are able to grow plants based on mathematical and/or biological parameters. In this paper, we undertake an evaluation of plant-based object simulation programs currently available, with a focus upon the components and techniques involved in producing these objects. Through an analytical review process we consider the strengths and weaknesses of several program packages, the features and use of these programs and the possible opportunities in deploying these for creating smart 3D plant-based objects to support agricultural research and natural resource management. In creating smart 3D objects the model needs to be informed by both plant physiology and phenology. Expert knowledge will frame the parameters and procedures that will attribute the object and allow the simulation of dynamic virtual plants. Ultimately, biologically smart 3D virtual plants that react to changes within an environment could be an effective medium to visually represent landscapes and communicate land management scenarios and practices to planners and decision-makers.

  6. Models of Speed Discrimination

    NASA Technical Reports Server (NTRS)

    1997-01-01

The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most of the research efforts in the study of the visual system seem to have been focused in two almost non-overlapping directions. One research focus has been the low level perception as studied by psychophysics. The other focus has been the study of high level vision exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing these analyzers that decompose images into useful components. Various models are then compared to the physiological measurements performed on neurons in the sensory systems. In the study of higher level perception, the work has been focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D from a variety of physical measurements including motion, shading and other physical phenomena. With few exceptions, there seems to have been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. Therefore, the processes underlying the integration of information over space represent critical aspects of the visual system. The understanding of these processes will have implications for our expectations about the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: (1) modeling visual search for the detection of speed deviations; (2) perception of moving objects; and (3) exploring the role of eye movements in various visual tasks.

  7. Designers workbench: toward real-time immersive modeling

    NASA Astrophysics Data System (ADS)

    Kuester, Falko; Duchaineau, Mark A.; Hamann, Bernd; Joy, Kenneth I.; Ma, Kwan-Liu

    2000-05-01

This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing, and computer-aided engineering systems has established a new backbone of modern industrial product development. However, traditionally a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology or 'digital gap' experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.

  8. WMT: The CSDMS Web Modeling Tool

    NASA Astrophysics Data System (ADS)

    Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.

    2015-12-01

The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can design a model from a set of components, edit component parameters, save models to a web-accessible server, share saved models with the community, submit runs to an HPC system, and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, a database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged and uploaded to a data server, where it is stored and from which a user can download it as a single compressed archive file.
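A model description exchanged with such a server might look like the following (hypothetical: the JSON layout, parameter names, and connection syntax are illustrative, not the actual wmt-api schema; HydroTrend and CEM are used only as example component names):

```python
import json

# Illustrative model description coupling two components: a "provides"
# port of one component is wired to a "uses" port of another.
model = {
    "name": "coastal-evolution-demo",
    "components": [
        {"id": "hydrotrend", "parameters": {"run_duration": 3650}},
        {"id": "cem", "parameters": {"grid_spacing": 200.0}},
    ],
    "connections": [
        {"from": "hydrotrend.discharge", "to": "cem.river_inflow"},
    ],
}

# round-trip through JSON, as a client and server would exchange it
payload = json.dumps(model, indent=2)
restored = json.loads(payload)
```

Storing models as plain JSON documents is what makes them easy to save, share, and resubmit from any browser or device.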

  9. Final Report Collaborative Project. Improving the Representation of Coastal and Estuarine Processes in Earth System Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryan, Frank; Dennis, John; MacCready, Parker

This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation. The main computational objectives were: 1. To develop computationally efficient, but physically based, parameterizations of estuary and continental shelf mixing processes for use in an Earth System Model (CESM). 2. To develop a two-way nested regional modeling framework in order to dynamically downscale the climate response of particular coastal ocean regions and to upscale the impact of the regional coastal processes to the global climate in an Earth System Model (CESM). 3. To develop computational infrastructure to enhance the efficiency of data transfer between specific sources and destinations, i.e., a point-to-point communication capability, (used in objective 1) within POP, the ocean component of CESM.

  10. Peer Review for EPA's Biologically Based Dose-Response ...

    EPA Pesticide Factsheets

EPA is developing a regulation for perchlorate in drinking water. As part of the regulatory process EPA must develop a Maximum Contaminant Level Goal (MCLG). FDA and EPA scientists developed a biologically based dose-response (BBDR) model to assist in deriving the MCLG. This model is designed to determine under what conditions of iodine nutrition and exposure to perchlorate, across sensitive life stages, low serum free and total thyroxine (hypothyroxinemia) would result. EPA is undertaking a peer review to provide a focused, objective, independent peer evaluation of the draft model and its model results report. Peer review is an important component of the scientific process. The criticism, suggestions, and new ideas provided by the peer reviewers stimulate creative thought, strengthen the interpretation of the reviewed material, and confer credibility on the product. The peer review objective is to provide advice to EPA on steps that will yield a highly credible scientific product that is supported by the scientific community and a defensible perchlorate MCLG.

  11. Development of expert systems for modeling of technological process of pressure casting on the basis of artificial intelligence

    NASA Astrophysics Data System (ADS)

    Gavarieva, K. N.; Simonova, L. A.; Pankratov, D. L.; Gavariev, R. V.

    2017-09-01

This article considers the main components of an expert system for the die-casting process, which consist of algorithms united into logical models. The characteristics of the system, which present data on the state of the managed object, are described. A set of logically interconnected steps is developed that makes it possible to increase the quality of the resulting castings.

  12. Wind Assessment for Aerial Payload Delivery Systems Using GPS and IMU Sensors

    DTIC Science & Technology

    2016-09-01

post-processing of the resultant test data were the research methods used in development of this thesis. Ultimately, this thesis presents two models for winds... (Excerpted section headings: Thesis Objective and Organization; Blizzard System Components.)

  13. Modeling respiration from snags and coarse woody debris before and after an invasive gypsy moth disturbance

    Treesearch

    Heidi J. Renninger; Nicholas Carlo; Kenneth L. Clark; Karina V.R. Schäfer

    2014-01-01

    Although snags and coarse woody debris are a small component of ecosystem respiration, disturbances can significantly increase the mass and respiration from these carbon (C) pools. The objectives of this study were to (1) measure respiration rates of snags and coarse woody debris throughout the year in a forest previously defoliated by gypsy moths, (2) develop models...

  14. A coupled ductile fracture phase-field model for crystal plasticity

    NASA Astrophysics Data System (ADS)

    Hernandez Padilla, Carlos Alberto; Markert, Bernd

    2017-07-01

    Nowadays crack initiation and evolution play a key role in the design of mechanical components. In the past few decades, several numerical approaches have been developed with the objective to predict these phenomena. The objective of this work is to present a simplified, nonetheless representative phenomenological model to predict the crack evolution of ductile fracture in single crystals. The proposed numerical approach is carried out by merging a conventional elasto-plastic crystal plasticity model and a phase-field model modified to predict ductile fracture. A two-dimensional initial boundary value problem of ductile fracture is introduced considering a single-crystal setup and Nickel-base superalloy material properties. The model is implemented into the finite element context subjected to a quasi-static uniaxial tension test. The results are then qualitatively analyzed and briefly compared to current benchmark results in the literature.

  15. Grip Forces During Object Manipulation: Experiment, Mathematical Model & Validation

    PubMed Central

    Slota, Gregory P.; Latash, Mark L.; Zatsiorsky, Vladimir M.

    2011-01-01

When people transport handheld objects, they change the grip force with the object movement. Circular movement patterns were tested within three planes at two different rates (1.0, 1.5 Hz) and two diameters (20, 40 cm). Subjects performed the task reasonably well, matching frequencies and dynamic ranges of accelerations within expectations. A mathematical model was designed to predict the applied normal forces from kinematic data. The model is based on two hypotheses: (a) the grip force changes during movements along complex trajectories can be represented as the sum of effects of two basic commands associated with the parallel and orthogonal manipulation, respectively; (b) different central commands are sent to the thumb and virtual finger (VF: the four fingers combined). The model predicted the actual normal forces with total variance accounted for exceeding 98%. The effects of the two components of acceleration (along the normal axis, and the resultant acceleration within the shear plane) on the digit normal forces are additive. PMID:21735245
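The additive two-component structure of such a model suggests a sketch like this (illustrative baseline and gain coefficients; the paper's fitted parameter values are not given in the abstract):

```python
import math

def grip_force(a_normal, a_shear_x, a_shear_z,
               f0=2.0, k_parallel=0.3, k_orthogonal=0.5):
    """Additive grip-force model: a baseline force plus one term driven by
    acceleration along the normal axis and one driven by the resultant
    acceleration within the shear plane (coefficients are invented)."""
    a_shear = math.hypot(a_shear_x, a_shear_z)
    return f0 + k_parallel * a_normal + k_orthogonal * a_shear

# circular movement at 1.0 Hz with a 20 cm diameter -> peak acceleration
omega = 2 * math.pi * 1.0       # rad/s
radius = 0.10                   # m
a_peak = omega ** 2 * radius    # ~3.95 m/s^2

f_rest = grip_force(0.0, 0.0, 0.0)     # baseline grip at rest
f_peak = grip_force(a_peak, a_peak, 0.0)
```

Because the two acceleration terms enter additively, fitting the model reduces to ordinary regression of measured normal force on the two kinematic predictors.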

  16. C-Language Integrated Production System, Version 5.1

    NASA Technical Reports Server (NTRS)

    Riley, Gary; Donnell, Brian; Ly, Huyen-Anh VU; Culbert, Chris; Savely, Robert T.; Mccoy, Daniel J.; Giarratano, Joseph

    1992-01-01

CLIPS 5.1 provides a cohesive software tool for handling a wide variety of knowledge, with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming provides representation of knowledge by use of heuristics. Object-oriented programming enables modeling of complex systems as modular components. Procedural programming enables CLIPS to represent knowledge in ways similar to those allowed in such languages as C, Pascal, Ada, and LISP. Working with CLIPS 5.1, one can develop expert-system software by use of rule-based programming only, object-oriented programming only, procedural programming only, or combinations of the three.

  17. A solar energy estimation procedure using remote sensing techniques. [watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Khorram, S.

    1977-01-01

    The objective of this investigation is to design a remote sensing-aided procedure for daily location-specific estimation of solar radiation components over the watershed(s) of interest. This technique has been tested on the Spanish Creek Watershed, Northern California, with successful results.

  18. Vacuum solutions around spherically symmetric and static objects in the Starobinsky model

    NASA Astrophysics Data System (ADS)

Çıkıntoğlu, Sercan

    2018-02-01

    The vacuum solutions around a spherically symmetric and static object in the Starobinsky model are studied with a perturbative approach. The differential equations for the components of the metric and the Ricci scalar are obtained and solved by using the method of matched asymptotic expansions. The presence of higher order terms in this gravity model leads to the formation of a boundary layer near the surface of the star allowing the accommodation of the extra boundary conditions on the Ricci scalar. Accordingly, the metric can be different from the Schwarzschild solution near the star depending on the value of the Ricci scalar at the surface of the star while matching the Schwarzschild metric far from the star.
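In its standard form the model adds a quadratic curvature term, and the trace of the field equations shows why the Ricci scalar acquires its own short-ranged dynamics (a brief sketch, with M denoting the scalaron mass):

```latex
f(R) = R + \frac{R^2}{6M^2}, \qquad
3\,\Box f'(R) + f'(R)\,R - 2f(R) = 8\pi G\, T .
% With f'(R) = 1 + R/(3M^2), the trace equation linearizes to
\left(\Box - M^2\right) R = 8\pi G\, M^2\, T ,
\qquad R_{\text{vac}}(r) \propto \frac{e^{-M r}}{r} .
```

The Yukawa factor e^{-Mr} is what confines deviations from the Schwarzschild solution to a thin boundary layer near the stellar surface, consistent with the matched-asymptotic-expansion picture described above.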

  19. Modeling the Impact of Space Suit Components and Anthropometry on the Center of Mass of a Seated Crewmember

    NASA Technical Reports Server (NTRS)

    Blackledge, Christopher; Margerum, Sarah; Ferrer, Mike; Morency, Richard; Rajulu, Sudhakar

    2010-01-01

The Crew Impact Attenuation System (CIAS) is the energy-absorbing strut concept that dampens Orion Crew Exploration Vehicle (CEV) landing loads to levels sustainable by the crew. Significant center-of-mass (COM) variations across suited crew configurations would amplify the inertial effects of the pallet and potentially create unacceptable crew loading during launch and landing. The objective of this study was to obtain data needed for dynamic simulation models by quantifying the effects of posture, suit components, and the expected range of anthropometry on the COM of a seated individual. Several elements are required for the COM calculation of a suited human in a seated position: anthropometry, body segment mass, suit component mass, suit component location relative to the body, and joint angles defining the seated posture. Three-dimensional (3D) human body models, suit mass data, and vector calculus were utilized to compute the COM positions for 12 boundary manikins in two different seated postures. The analysis focused on two objectives: (1) quantify how much the whole-body COM varied from the smallest to largest subject and (2) quantify the effects of the suit components on the overall COM in each seat configuration. The location of the anterior-posterior COM varied across all boundary manikins by about 7 cm, and the vertical COM varied by approximately 9 to 10 cm. The mediolateral COM varied by 1.2 cm from the midline sagittal plane for both seat configurations. The suit components caused an anterior shift of the total COM by approximately 2 cm and a shift to the right along the mediolateral axis of 0.4 cm for both seat configurations. When the seat configuration was in the standard posture the suited vertical COM shifted inferiorly by as much as 1 cm, whereas in the CEV posture the vertical COM had no appreciable change.
These general differences were due to the high proportion of suit mass located in the boots and lower legs and their corresponding distance from the body COM, as well as to the prevalence of suit components on the right side of the body.
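The underlying computation is a mass-weighted average over body segments and suit components; a greatly simplified sketch (all masses and positions invented) reproduces the qualitative finding that suit mass concentrated in the boots shifts the seated COM anteriorly and inferiorly:

```python
def center_of_mass(segments):
    """segments: list of (mass_kg, (x, y, z)) pairs; returns overall COM."""
    total = sum(m for m, _ in segments)
    return tuple(sum(m * p[i] for m, p in segments) / total
                 for i in range(3))

# Greatly simplified seated body (illustrative masses/positions, metres;
# x = anterior, y = lateral, z = vertical)
body = [(40.0, (0.00, 0.0, 0.60)),   # torso + head
        (20.0, (0.15, 0.0, 0.40)),   # thighs
        (15.0, (0.25, 0.0, 0.15))]   # lower legs + feet

suit = [(8.0, (0.00, 0.0, 0.55)),    # suit torso assembly
        (6.0, (0.25, 0.02, 0.10))]   # boots + lower-leg components

unsuited = center_of_mass(body)
suited = center_of_mass(body + suit)
anterior_shift = suited[0] - unsuited[0]   # positive: forward
vertical_shift = suited[2] - unsuited[2]   # negative: inferior
```

Repeating this calculation for each boundary manikin and posture is what generates the COM envelopes used by the dynamic simulation.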

  20. Roadmap for Lean implementation in Indian automotive component manufacturing industry: comparative study of UNIDO Model and ISM Model

    NASA Astrophysics Data System (ADS)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2015-06-01

The demands for automobiles increased drastically in last two and half decades in India. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to fulfill the demand of these customers. United Nations Industrial Development Organization (UNIDO) has taken a proactive approach in association with Automotive Component Manufacturers Association of India (ACMA) and the Government of India to assist Indian SMEs in various clusters since 1999 to make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as ISM Model of Lean implementation and validate the ISM Model by comparing with UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in Indian automotive component industry. This paper is based on secondary data, which include research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the field of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through its high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework presents greater insight into the implementation process at a more micro level as compared to the UNIDO-ACMA Model.

  1. An object model and database for functional genomics.

    PubMed

    Jones, Andrew; Hunt, Ela; Wastling, Jonathan M; Pizarro, Angel; Stoeckert, Christian J

    2004-07-10

    Large-scale functional genomics analysis is now feasible and presents significant challenges in data analysis, storage and querying. Data standards are required to enable the development of public data repositories and to improve data sharing. There is an established data format for microarrays (microarray gene expression markup language, MAGE-ML) and a draft standard for proteomics (PEDRo). We believe that all types of functional genomics experiments should be annotated in a consistent manner, and we hope to open up new ways of comparing multiple datasets used in functional genomics. We have created a functional genomics experiment object model (FGE-OM), developed from the microarray model, MAGE-OM and two models for proteomics, PEDRo and our own model (Gla-PSI-Glasgow Proposal for the Proteomics Standards Initiative). FGE-OM comprises three namespaces representing (i) the parts of the model common to all functional genomics experiments; (ii) microarray-specific components; and (iii) proteomics-specific components. We believe that FGE-OM should initiate discussion about the contents and structure of the next version of MAGE and the future of proteomics standards. A prototype database called RNA And Protein Abundance Database (RAPAD), based on FGE-OM, has been implemented and populated with data from microbial pathogenesis. FGE-OM and the RAPAD schema are available from http://www.gusdb.org/fge.html, along with a set of more detailed diagrams. RAPAD can be accessed by registration at the site.

  2. Application of Transfer Matrix Approach to Modeling and Decentralized Control of Lattice-Based Structures

    NASA Technical Reports Server (NTRS)

    Cramer, Nick; Swei, Sean Shan-Min; Cheung, Kenny; Teodorescu, Mircea

    2015-01-01

    This paper presents the modeling and control of an aerostructure developed with lattice-based cellular materials/components. The proposed aerostructure concept leverages a building-block strategy for lattice-based components which provides great adaptability to varying flight scenarios, which is essential for in-flight wing shaping control. A decentralized structural control design is proposed that utilizes a discrete-time lumped mass transfer matrix method (DT-LM-TMM). The objective is to develop an effective reduced order model through DT-LM-TMM that can be used to design a decentralized controller for the structural control of a wing. The proposed approach developed in this paper shows that, as far as the performance of the overall structural system is concerned, the reduced order model can be as effective as the full order model in designing an optimal stabilizing controller.
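
    The transfer matrix idea underlying DT-LM-TMM can be illustrated with a continuous-time, single-mass sketch (illustrative parameters, not the paper's discrete-time formulation): propagate the state [displacement, internal force] through a spring "field" matrix and a mass "point" matrix, and find the frequency at which the free-end force vanishes:

```python
import math

def tip_force(omega, m, k):
    """Propagate state [u, N] from a fixed base through one spring and
    one lumped mass; return the internal force at the free tip."""
    u, N = 0.0, 1.0                    # fixed base: u = 0, unit base force
    u, N = u + N / k, N                # spring field matrix [[1, 1/k], [0, 1]]
    u, N = u, N - m * omega**2 * u     # mass point matrix [[1, 0], [-m*w^2, 1]]
    return N

m, k = 1.0, 4.0
# The free-tip boundary condition N = 0 is met at the natural frequency
# w_n = sqrt(k/m); a chain of components just multiplies more matrices.
print(abs(tip_force(math.sqrt(k / m), m, k)) < 1e-12)  # True
```

    The appeal for decentralized control is visible even here: each component contributes a local matrix, so a reduced model can be assembled segment by segment.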

  3. Shifting attention from objective risk factors to patients' self-assessed health resources: a clinical model for general practice.

    PubMed

    Hollnagel, H; Malterud, K

    1995-12-01

    The study was designed to present and apply theoretical and empirical knowledge for the construction of a clinical model intended to shift the attention of the general practitioner from objective risk factors to self-assessed health resources in male and female patients. Review, discussion and analysis of selected theoretical models about personal health resources involved assessing existing theories according to their emphasis on self-assessed vs. doctor-assessed health resources, specific health resources vs. life and coping in general, abstract vs. clinically applicable theory, and whether a gender perspective was explicitly included. Relevant theoretical models on health and coping (salutogenesis, coping and social support, control/demand, locus of control, health belief model, quality of life), and the perspective of the underprivileged Other (critical theory, feminist standpoint theory, the patient-centred clinical method) were presented and assessed. Components from Antonovsky's salutogenetic perspective and McWhinney's patient-centred clinical method, supported by gender perspectives, were integrated into a clinical model, which is presented. General practitioners are recommended to shift their attention from objective risk factors to self-assessed health resources by means of the clinical model. The relevance and feasibility of the model should be explored in empirical research.

  4. Development of a Rubber-Based Product Using a Mixture Experiment: A Challenging Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaya, Yahya; Piepel, Gregory F.; Caniyilmaz, Erdal

    2013-07-01

    Many products used in daily life are made by blending two or more components. The properties of such products typically depend on the relative proportions of the components. Experimental design, modeling, and data analysis methods for mixture experiments provide for efficiently determining the component proportions that will yield a product with desired properties. This article presents a case study of the work performed to develop a new rubber formulation for an o-ring (a circular gasket) with requirements specified on 10 product properties. Each step of the study is discussed, including: 1) identifying the objective of the study and requirements for properties of the o-ring, 2) selecting the components to vary and specifying the component constraints, 3) constructing a mixture experiment design, 4) measuring the responses and assessing the data, 5) developing property-composition models, 6) selecting the new product formulation, and 7) confirming the selected formulation in manufacturing. The case study includes some challenging and new aspects, which are discussed in the article.
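
    Step 5 (developing property-composition models) typically uses Scheffé mixture polynomials. For a {3,2} simplex-lattice design the quadratic coefficients have a well-known closed form, sketched below on synthetic responses (not the article's o-ring data):

```python
# Scheffé quadratic mixture model for 3 components:
#   y = sum_i b_i * x_i + sum_{i<j} b_ij * x_i * x_j
# fitted from a {3,2} simplex-lattice: 3 pure blends + 3 binary 50/50 blends.
def scheffe_quadratic(y_pure, y_mid):
    """y_pure = (y1, y2, y3) at the vertices; y_mid = (y12, y13, y23)."""
    b = list(y_pure)                   # b_i is just the pure-blend response
    pairs = [(0, 1), (0, 2), (1, 2)]
    bij = {p: 4 * y_mid[n] - 2 * (b[p[0]] + b[p[1]])
           for n, p in enumerate(pairs)}
    return b, bij

def predict(b, bij, x):
    return (sum(bi * xi for bi, xi in zip(b, x))
            + sum(c * x[i] * x[j] for (i, j), c in bij.items()))

# Synthetic responses generated from b = (1, 2, 3), b12 = 4, b13 = b23 = 0.
b, bij = scheffe_quadratic((1.0, 2.0, 3.0), (2.5, 2.0, 2.5))
print(bij[(0, 1)])   # 4.0 -- the nonlinear blending term is recovered
```

    With the coefficients in hand, the formulation step reduces to searching the constrained simplex for proportions whose predicted properties meet all 10 requirements.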

  5. Enhancements to the Engine Data Interpretation System (EDIS)

    NASA Technical Reports Server (NTRS)

    Hofmann, Martin O.

    1993-01-01

    The Engine Data Interpretation System (EDIS) expert system project assists the data review personnel at NASA/MSFC in performing post-test data analysis and engine diagnosis of the Space Shuttle Main Engine (SSME). EDIS uses knowledge of the engine, its components, and simple thermodynamic principles instead of, and in addition to, heuristic rules gathered from the engine experts. EDIS reasons in cooperation with human experts, following roughly the pattern of logic exhibited by human experts. EDIS concentrates on steady-state static faults, such as small leaks, and component degradations, such as pump efficiencies. The objective of this contract was to complete the set of engine component models, integrate heuristic rules into EDIS, integrate the Power Balance Model into EDIS, and investigate modification of the qualitative reasoning mechanisms to allow 'fuzzy' value classification. The result of this contract is an operational version of EDIS. EDIS will become a module of the Post-Test Diagnostic System (PTDS) and will, in this context, provide system-level diagnostic capabilities which integrate component-specific findings provided by other modules.

  7. MEMS Deformable Mirror Technology Development for Space-Based Exoplanet Detection

    NASA Astrophysics Data System (ADS)

    Bierden, Paul; Cornelissen, S.; Ryan, P.

    2014-01-01

    The search for Earth-like extrasolar planets has become an important objective for NASA, and a critical technology development requirement is to advance deformable mirror (DM) technology. High-actuator-count DMs are critical components for nearly all proposed coronagraph instrument concepts. The science case for exoplanet imaging is strong, and rapid recent advances in test beds with DMs made using microelectromechanical system (MEMS) technology have motivated a number of compelling mission concepts that set technical specifications for their use as wavefront controllers. This research will advance the technology readiness of the MEMS DM components that are currently at the forefront of the field, and the project will be led by the manufacturer of those components, Boston Micromachines Corporation (BMC). The project aims to demonstrate basic functionality and performance of this key component in critical test environments and in simulated operational environments, while establishing model-based predictions of its performance relative to launch and space environments. Presented will be the current status of the project with modeling and initial test results.

  8. Stereo matching algorithm based on double components model

    NASA Astrophysics Data System (ADS)

    Zhou, Xiao; Ou, Kejun; Zhao, Jianxin; Mou, Xingang

    2018-03-01

    Tiny wires are a great threat to the safety of UAV flight. They occupy only a few pixels and are isolated from the background, while most existing stereo matching methods require a support region of a certain area to improve robustness, or assume depth dependence among neighboring pixels to meet the requirements of global or semi-global optimization methods. Consequently there can be false alarms, or even failures, when images contain tiny wires. A new stereo matching algorithm based on a double components model is proposed in this paper. According to texture type, the input image is decomposed into two independent component images: one contains only the sparse wire texture and the other contains all remaining parts. Different matching schemes are adopted for each pair of component images. Experiments proved that the algorithm can effectively compute the depth image of a complex scene from a patrol UAV, detecting tiny wires as well as large objects. Compared with current mainstream methods it has obvious advantages.

  9. A variation reduction allocation model for quality improvement to minimize investment and quality costs by considering suppliers’ learning curve

    NASA Astrophysics Data System (ADS)

    Rosyidi, C. N.; Jauhari, WA; Suhardi, B.; Hamada, K.

    2016-02-01

    Quality improvement must be performed in a company to maintain the competitiveness of its products in the market. The goal of such improvement is to increase customer satisfaction and the profitability of the company. In current practice, a company needs several suppliers to provide the components used in the assembly of a final product, so quality improvement of the final product must involve the suppliers. In this paper, an optimization model to allocate variance reduction is developed. Variance reduction is an important term in quality improvement for both the manufacturer and the suppliers. To improve the quality of the suppliers' components, the manufacturer must invest an amount of its financial resources in the suppliers' learning processes. The objective function of the model is to minimize the total cost, which consists of the investment cost and the quality costs, both internal and external. The learning curve determines how the suppliers' employees respond to the learning processes in reducing the variance of the component.
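
    The trade-off the model captures is that more investment in supplier learning lowers component variance (and hence quality cost) at decreasing marginal returns. A toy version with an assumed power-law learning curve and illustrative cost constants (not the paper's actual formulation):

```python
# Toy allocation: choose training effort t to minimize
#   total(t) = c_inv * t + c_q * sigma0 * t**(-b)
# where t**(-b) is an assumed power-law learning curve for variance.
def total_cost(t, c_inv=1.0, c_q=4.0, sigma0=2.0, b=1.0):
    return c_inv * t + c_q * sigma0 * t ** (-b)

# Grid search for the minimizing effort; for b = 1 the closed form is
# t* = sqrt(c_q * sigma0 / c_inv).
grid = [0.1 + 0.01 * i for i in range(1000)]
t_star = min(grid, key=total_cost)
print(round(t_star, 2))   # ~2.83, close to sqrt(8) from the closed form
```

    Linear investment cost against a decaying quality cost yields an interior optimum; the paper's model plays the same game per supplier, subject to a shared budget.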

  10. Aeroelastic Modeling of a Nozzle Startup Transient

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen

    2014-01-01

    Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damage if strengthening measures are not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and fluid. The objective of this study is to develop a tightly coupled aeroelastic modeling algorithm by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, while the computational structural dynamics component is developed under the framework of modal analysis. Transient aeroelastic nozzle startup analyses at sea level were performed, and the computed transient nozzle fluid-structure interaction physics are presented.

  11. The mass of the compact object in the X-ray binary her X-1/HZ her

    NASA Astrophysics Data System (ADS)

    Abubekerov, M. K.; Antokhina, E. A.; Cherepashchuk, A. M.; Shimanskii, V. V.

    2008-05-01

    We have obtained the first estimates of the masses of the components of the Her X-1/HZ Her X-ray binary system taking into account non-LTE effects in the formation of the H γ absorption line: m_x = 1.8 M_⊙ and m_v = 2.5 M_⊙. These mass estimates were made in a Roche model based on the observed radial-velocity curve of the optical star, HZ Her. The masses for the X-ray pulsar and optical star obtained for an LTE model are m_x = 0.85 ± 0.15 M_⊙ and m_v = 1.87 ± 0.13 M_⊙. These mass estimates for the components of Her X-1/HZ Her derived from the radial-velocity curve should be considered tentative. Further mass estimates should be made from high-precision observations of the orbital variability of the absorption profiles in a non-LTE model for the atmosphere of the optical component.

  12. Laser heterodyne surface profiler

    DOEpatents

    Sommargren, Gary E.

    1982-01-01

    A method and apparatus is disclosed for testing the deviation of the face of an object from a flat smooth surface using a beam of coherent light of two plane-polarized components, one of a frequency constantly greater than the other by a fixed amount to produce a difference frequency with a constant phase to be used as a reference. The beam also is split into its two components with the separate components directed onto spaced apart points on the face of the object to be tested for smoothness. The object is rotated on an axis coincident with one component which is directed to the face of the object at the center which constitutes a virtual fixed point. This component also is used as a reference. The other component follows a circular track on the face of the object as the object is rotated. The two components are recombined after reflection to produce a reflected frequency difference of a phase proportional to the difference in path length which is compared with the reference phase to produce a signal proportional to the deviation of the height of the surface along the circular track with respect to the fixed point at the center.
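
    The height recovery described above reduces to converting a measured phase difference into a surface height deviation; because the probe beam reflects off the surface, a height change h shows up as a phase shift of 4πh/λ. A sketch with an assumed He-Ne wavelength (the patent does not specify the source):

```python
import math

def height_from_phase(delta_phi, wavelength):
    """Height deviation corresponding to a heterodyne phase shift.
    Reflection doubles the optical path, hence the factor of 4*pi."""
    return wavelength * delta_phi / (4 * math.pi)

lam = 632.8e-9                        # assumed He-Ne wavelength, meters
h = height_from_phase(math.pi, lam)   # a half-cycle phase shift
print(h)                              # ~1.582e-07 m, i.e. lambda/4
```

    Sampling delta_phi as the object rotates traces the height profile along the circular track relative to the fixed center point.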

  13. A Comparison Between Spectral Properties of ULXs and Luminous X-ray Binaries

    NASA Astrophysics Data System (ADS)

    Berghea, C. T.; Colbert, E. J. M.; Roberts, T. P.

    2004-05-01

    What is special about the 10^39 erg s^-1 limit that is used to define the ULX class? We investigate this question by analyzing Chandra X-ray spectra of 71 X-ray bright point sources from nearby galaxies. Fifty-one of these sources are ULXs (L_X(0.3-8.0 keV) ≥ 10^39 erg s^-1), and 20 sources (our comparison sample) are less-luminous X-ray binaries with L_X(0.3-8.0 keV) = 10^38-39 erg s^-1. Our sample objects were selected from the Chandra archive to have ≥1000 counts and thus represent the highest quality spectra in the Chandra archives for extragalactic X-ray binaries and ULXs. We fit the spectra with one-component models (e.g., cold absorption with a power-law, or cold absorption with a multi-colored disk blackbody) and two-component models (e.g., absorption with both a power-law and a multi-colored disk blackbody). A crude measure of the spectral state of each source is determined observationally by calibrating the strength of the disk (blackbody) and coronal (power-law) components. These results are then used to determine whether the spectral properties of the ULXs are statistically distinct from those of the comparison objects, which are assumed to be ``normal'' black-hole X-ray binaries.

  14. NIR calibration of soluble stem carbohydrates for predicting drought tolerance in spring wheat

    USDA-ARS?s Scientific Manuscript database

    Soluble stem carbohydrates are a component of drought response in wheat (Triticum aestivum L.) and other grasses. Near-infrared spectroscopy (NIR) can rapidly assay for soluble carbohydrates indirectly, but this requires a statistical model for calibration. The objectives of this study were: (i) to ...

  15. Application of linear mixed-effects model with LASSO to identify metal components associated with cardiac autonomic responses among welders: a repeated measures study

    PubMed Central

    Zhang, Jinming; Cavallari, Jennifer M; Fang, Shona C; Weisskopf, Marc G; Lin, Xihong; Mittleman, Murray A; Christiani, David C

    2017-01-01

    Background Environmental and occupational exposure to metals is ubiquitous worldwide, and understanding the hazardous metal components in this complex mixture is essential for environmental and occupational regulations. Objective To identify hazardous components from metal mixtures that are associated with alterations in cardiac autonomic responses. Methods Urinary concentrations of 16 types of metals were examined and ‘acceleration capacity’ (AC) and ‘deceleration capacity’ (DC), indicators of cardiac autonomic effects, were quantified from ECG recordings among 54 welders. We fitted linear mixed-effects models with least absolute shrinkage and selection operator (LASSO) to identify metal components that are associated with AC and DC. The Bayesian Information Criterion was used as the criterion for model selection procedures. Results Mercury and chromium were selected for DC analysis, whereas mercury, chromium and manganese were selected for AC analysis through the LASSO approach. When we fitted the linear mixed-effects models with ‘selected’ metal components only, the effect of mercury remained significant. Every 1 µg/L increase in urinary mercury was associated with −0.58 ms (−1.03, –0.13) changes in DC and 0.67 ms (0.25, 1.10) changes in AC. Conclusion Our study suggests that exposure to several metals is associated with impaired cardiac autonomic functions. Our findings should be replicated in future studies with larger sample sizes. PMID:28663305
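
    The selection step above rests on the LASSO's soft-thresholding update, which drives small coefficients exactly to zero. A minimal coordinate-descent sketch on synthetic data (the data and penalty are illustrative; the study itself used linear mixed-effects models with BIC-guided selection, which this plain LASSO does not reproduce):

```python
def soft(z, lam):
    """Soft-thresholding operator: shrink toward zero, clip to exactly 0."""
    return (z - lam) if z > lam else (z + lam) if z < -lam else 0.0

def lasso(X, y, lam, sweeps=50):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # Partial residual excluding feature j.
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = sum(X[i][j] * r[i] for i in range(n)) / n
            b[j] = soft(zj, lam) / (sum(X[i][j] ** 2 for i in range(n)) / n)
    return b

# Synthetic: the outcome depends only on feature 0; feature 1 is irrelevant.
X = [[1, 1], [-1, 1], [1, -1], [-1, -1], [1, 0], [-1, 0]]
y = [2 * row[0] for row in X]
print(lasso(X, y, lam=0.1))  # first coef shrunk to ~1.9, second exactly 0.0
```

    The exact zero on the irrelevant coefficient is the "selection" in LASSO; a ridge penalty would only shrink it, never eliminate it.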

  16. Do object refixations during scene viewing indicate rehearsal in visual working memory?

    PubMed

    Zelinsky, Gregory J; Loschky, Lester C; Dickinson, Christopher A

    2011-05-01

    Do refixations serve a rehearsal function in visual working memory (VWM)? We analyzed refixations from observers freely viewing multiobject scenes. An eyetracker was used to limit the viewing of a scene to a specified number of objects fixated after the target (intervening objects), followed by a four-alternative forced choice recognition test. Results showed that the probability of target refixation increased with the number of fixated intervening objects, and these refixations produced a 16% accuracy benefit over the first five intervening-object conditions. Additionally, refixations most frequently occurred after fixations on only one to two other objects, regardless of the intervening-object condition. These behaviors could not be explained by random or minimally constrained computational models; a VWM component was required to completely describe these data. We explain these findings in terms of a monitor-refixate rehearsal system: The activations of object representations in VWM are monitored, with refixations occurring when these activations decrease suddenly.

  17. A New Conceptualization of Human Visual Sensory-Memory.

    PubMed

    Öğmen, Haluk; Herzog, Michael H

    2016-01-01

    Memory is an essential component of cognition and disorders of memory have significant individual and societal costs. The Atkinson-Shiffrin "modal model" forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory, Short-Term Memory (STM; also called working memory, WM), and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While the theories of STM and LTM underwent significant modifications to address these shortcomings, models of the iconic memory remained largely unchanged: A high capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected on the retina. The fundamental shortcoming of iconic memory models is that, because contents are encoded in retinotopic coordinates, the iconic memory cannot hold any useful information under normal viewing conditions when objects or the subject are in motion. Hence, half-century after its formulation, it remains an unresolved problem whether and how the first stage of the modal model serves any useful function and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference-frame consists of motion-grouping based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory.

  18. Simulation-based artifact correction (SBAC) for metrological computed tomography

    NASA Astrophysics Data System (ADS)

    Maier, Joscha; Leinweber, Carsten; Sawall, Stefan; Stoschus, Henning; Ballach, Frederic; Müller, Tobias; Hammer, Michael; Christoph, Ralf; Kachelrieß, Marc

    2017-06-01

    Computed tomography (CT) is a valuable tool for the metrological assessment of industrial components. However, the application of CT to the investigation of highly attenuating objects or multi-material components is often restricted by the presence of CT artifacts caused by beam hardening, x-ray scatter, off-focal radiation, partial volume effects or the cone-beam reconstruction itself. In order to overcome this limitation, this paper proposes an approach to calculate a correction term that compensates for the contribution of artifacts and thus enables an appropriate assessment of these components using CT. Therefore, we make use of computer simulations of the CT measurement process. Based on an appropriate model of the object, e.g. an initial reconstruction or a CAD model, two simulations are carried out. One simulation considers all physical effects that cause artifacts using dedicated analytic methods as well as Monte Carlo-based models. The other one represents an ideal CT measurement i.e. a measurement in parallel beam geometry with a monochromatic, point-like x-ray source and no x-ray scattering. Thus, the difference between these simulations is an estimate for the present artifacts and can be used to correct the acquired projection data or the corresponding CT reconstruction, respectively. The performance of the proposed approach is evaluated using simulated as well as measured data of single and multi-material components. Our approach yields CT reconstructions that are nearly free of artifacts and thereby clearly outperforms commonly used artifact reduction algorithms in terms of image quality. A comparison against tactile reference measurements demonstrates the ability of the proposed approach to increase the accuracy of the metrological assessment significantly.
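
    The correction step described above is a difference of two simulations applied to the measured data: corrected = measured − (simulation-with-physics − ideal-simulation). A toy sketch on per-pixel projection values (the numbers are illustrative, not from the paper):

```python
def sbac_correct(measured, sim_physical, sim_ideal):
    """Simulation-based artifact correction: subtract the simulated
    artifact contribution (physical minus ideal) from the measurement."""
    return [m - (p - i) for m, p, i in zip(measured, sim_physical, sim_ideal)]

measured     = [10.0, 12.0, 9.0]   # acquired projection values (toy numbers)
sim_physical = [11.0, 13.0, 9.5]   # simulated with beam hardening, scatter, ...
sim_ideal    = [ 9.0, 10.0, 9.0]   # monochromatic, scatter-free simulation
print(sbac_correct(measured, sim_physical, sim_ideal))  # [8.0, 9.0, 8.5]
```

    Because the artifact estimate comes from the model of the object rather than the measurement alone, the scheme can be iterated: reconstruct, re-simulate, and re-correct until the estimate stabilizes.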

  19. Problematic Internet use and problematic alcohol use from the cognitive-behavioral model: a longitudinal study among adolescents.

    PubMed

    Gámez-Guadix, Manuel; Calvete, Esther; Orue, Izaskun; Las Hayas, Carlota

    2015-01-01

    Problematic Internet use (PIU) and problematic alcohol use are two pervasive problems during adolescence that share similar characteristics and predictors. The first objective of this study was to analyze the temporal and reciprocal relationships among the main components of PIU from the cognitive-behavioral model (preference for online social interaction, mood regulation through the Internet, deficient self-regulation, and negative consequences). The second objective was to examine the temporal and reciprocal relationships between PIU components and problematic alcohol use. We also examined whether these relationships differ between males and females. The sample comprised 801 Spanish adolescents (mean age=14.92, SD=1.01) who completed the measures both at Time 1 (T1) and Time 2 (T2) six months apart. We used structural equation modeling to analyze the relationship among the variables. Results showed that deficient self-regulation at T1 predicted an increase in preference for online interactions, mood regulation, and negative consequences of the Internet at T2. In turn, the emergence of negative consequences of PIU at T1 predicted a rise in problematic alcohol use at T2. Longitudinal relationships between different components of PIU and between the components of PIU and problematic alcohol use were invariant across genders. Deficient self-regulation, consisting of diminished self-control over cognition and behaviors related to the Internet, plays a central role in the maintenance of PIU, increasing the preference for online interactions, mood regulation, and negative consequences from Internet use over time. In turn, adolescents who present negative consequences of PIU are vulnerable targets for problematic alcohol use. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. NASA JPL Distributed Systems Technology (DST) Object-Oriented Component Approach for Software Inter-Operability and Reuse

    NASA Technical Reports Server (NTRS)

    Hall, Laverne; Hung, Chaw-Kwei; Lin, Imin

    2000-01-01

    The purpose of this paper is to provide a description of the NASA JPL Distributed Systems Technology (DST) Section's object-oriented component approach to open inter-operable systems software development and software reuse. It will address what is meant by the terminology object component software, give an overview of the component-based development approach and how it relates to infrastructure support of software architectures and promotes reuse, enumerate the benefits of this approach, and give examples of application prototypes demonstrating its usage and advantages. Utilization of the object-oriented component technology approach for system development and software reuse will apply to several areas within JPL, and possibly across other NASA Centers.

  1. Stereo Viewing Modulates Three-Dimensional Shape Processing During Object Recognition: A High-Density ERP Study

    PubMed Central

    2017-01-01

    The role of stereo disparity in the recognition of 3-dimensional (3D) object shape remains an unresolved issue for theoretical models of the human visual system. We examined this issue using high-density (128 channel) recordings of event-related potentials (ERPs). A recognition memory task was used in which observers were trained to recognize a subset of complex, multipart, 3D novel objects under conditions of either (bi-) monocular or stereo viewing. In a subsequent test phase they discriminated previously trained targets from untrained distractor objects that shared either local parts, 3D spatial configuration, or neither dimension, across both previously seen and novel viewpoints. The behavioral data showed a stereo advantage for target recognition at untrained viewpoints. ERPs showed early differential amplitude modulations to shape similarity defined by local part structure and global 3D spatial configuration. This occurred initially during an N1 component around 145–190 ms poststimulus onset, and then subsequently during an N2/P3 component around 260–385 ms poststimulus onset. For mono viewing, amplitude modulation during the N1 was greatest between targets and distractors with different local parts for trained views only. For stereo viewing, amplitude modulation during the N2/P3 was greatest between targets and distractors with different global 3D spatial configurations and generalized across trained and untrained views. The results show that image classification is modulated by stereo information about the local part structure and global 3D spatial configuration of object shape. The findings challenge current theoretical models that do not attribute functional significance to stereo input during the computation of 3D object shape. PMID:29022728

  2. Hot Swapping Protocol Implementations in the OPNET Modeler Development Environment

    DTIC Science & Technology

    2008-03-01

    components. Unfortunately, this style is not efficient or particularly human-readable. Even purely pedagogical scenarios consisting of a client and a...definition provided by the mock object. sion of this kernel procedure steers all packets sent with op_pk_deliver() to the unit testing's specialized...forms of development. Moreover, batteries of unit tests could ship with the accompanying process models and serve as robust regression tests

  3. Object linking in repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Beck, Jon; Atkins, John; Bailey, Bill

    1992-01-01

    This topic is covered in three sections. The first section explores some of the architectural ramifications of extending the Eichmann/Atkins lattice-based classification scheme to encompass the assets of the full life cycle of software development. A model is considered that provides explicit links between objects in addition to the edges connecting classification vertices in the standard lattice. The second section gives a description of the efforts to implement the repository architecture using a commercially available object-oriented database management system. Some of the features of this implementation are described, and some of the next steps to be taken to produce a working prototype of the repository are pointed out. In the final section, it is argued that design and instantiation of reusable components have competing criteria (design-for-reuse strives for generality, design-with-reuse strives for specificity) and that providing mechanisms for each can be complementary rather than antagonistic. In particular, it is demonstrated how program slicing techniques can be applied to customization of reusable components.

  4. Spectral variations of LMC X-3 observed with Ginga

    NASA Technical Reports Server (NTRS)

    Ebisawa, Ken; Makino, Fumiyoshi; Mitsuda, Kazuhisa; Belloni, Tomaso; Cowley, Anne P.; Schmidtke, Paul C.; Treves, Aldo

    1993-01-01

    The prime black hole candidate LMC X-3 was observed over three years with the Ginga satellite, and a characteristic spectral variation was found accompanying the periodic intensity variation of about 198 (or possibly about 99) days (Cowley et al., 1991). The energy spectrum of LMC X-3 consists of the soft, thermal component and the hard, power-law component, which are respectively dominant below and above about 9 keV. The soft component, which carries most of the X-ray intensity, shows a clear correlation between the intensity and the hardness, while the hard component varies independently of the soft component. It was found that the spectral variation of the soft component is well described by an optically thick accretion disk model with a remarkably constant innermost radius and variable mass accretion rate. The constancy of the innermost radius suggests it is related to the mass of the central object.

  5. Analysis tool and methodology design for electronic vibration stress understanding and prediction

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Jen; Crane, Robert L.; Sathish, Shamachary

    2005-03-01

    The objectives of this research were to (1) understand the impact of vibration on electronic components under ultrasound excitation; (2) model the thermal profile presented under vibration stress; and (3) predict stress level given a thermal profile of an electronic component. Research tasks included: (1) retrofit of current ultrasonic/infrared nondestructive testing system with sensory devices for temperature readings; (2) design of software tool to process images acquired from the ultrasonic/infrared system; (3) developing hypotheses and conducting experiments; and (4) modeling and evaluation of electronic vibration stress levels using a neural network model. Results suggest that (1) an ultrasonic/infrared system can be used to mimic short burst high vibration loads for electronics components; (2) temperature readings for electronic components under vibration stress are consistent and repeatable; (3) as stress load and excitation time increase, temperature differences also increase; (4) components that are subjected to a relatively high pre-stress load, followed by a normal operating load, have a higher heating rate and lower cooling rate. These findings are based on grayscale changes in images captured during experimentation. Discriminating variables and a neural network model were designed to predict stress levels given temperature and/or grayscale readings. Preliminary results suggest a 15.3% error when using grayscale change rate and 12.8% error when using average heating rate within the neural network model. Data were obtained from a high stress point (the corner) of the chip.

  6. Suzaku Observations of Heavily Obscured (Compton-thick) Active Galactic Nuclei Selected by the Swift/BAT Hard X-Ray Survey

    NASA Astrophysics Data System (ADS)

    Tanimoto, Atsushi; Ueda, Yoshihiro; Kawamuro, Taiki; Ricci, Claudio; Awaki, Hisamitsu; Terashima, Yuichi

    2018-02-01

    We present a uniform broadband X-ray (0.5–100.0 keV) spectral analysis of 12 Swift/Burst Alert Telescope selected Compton-thick (log N_H [cm^-2] ≥ 24) active galactic nuclei (CTAGNs) observed with Suzaku. The Suzaku data of three objects are published here for the first time. We fit the Suzaku and Swift spectra with models utilizing an analytic reflection code and those utilizing the Monte Carlo-based AGN torus model by Ikeda et al. The main results are as follows: (1) The estimated intrinsic luminosity of a CTAGN strongly depends on the model; applying Compton scattering to the transmitted component in an analytic model may largely overestimate the intrinsic luminosity at large column densities. (2) Unabsorbed reflection components are commonly observed, suggesting that the tori are clumpy. (3) Most of the CTAGNs show small scattering fractions (<0.5%), implying a buried AGN nature. (4) Comparison with the results obtained for Compton-thin AGNs suggests that the properties of these CTAGNs can be understood as a smooth extension from Compton-thin AGNs with heavier obscuration; we find no evidence that the bulk of the population of hard-X-ray-selected CTAGNs is different from less obscured objects.

  7. Hybrid, experimental and computational, investigation of mechanical components

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1996-07-01

    Computational and experimental methodologies have unique features for the analysis and solution of a wide variety of engineering problems. Computations provide results that depend on the selection of input parameters such as geometry, material constants, and boundary conditions, which, for correct modeling purposes, have to be chosen appropriately. In addition, it is relatively easy to modify the input parameters in order to computationally investigate different conditions. Experiments provide solutions that characterize the actual behavior of the object of interest subjected to specific operating conditions. However, it is impractical to perform parametric investigations experimentally. This paper discusses the use of a hybrid, computational and experimental, approach for the study and optimization of mechanical components. Computational techniques are used for modeling the behavior of the object of interest while it is experimentally tested using noninvasive optical techniques. Comparisons are performed through a fringe predictor program used to facilitate the correlation between the two techniques. In addition, experimentally obtained quantitative information, such as displacements and shape, can be applied in the computational model in order to improve this correlation. The result is a validated computational model that can be used for performing quantitative analyses and structural optimization. Practical application of the hybrid approach is illustrated with a representative example which demonstrates the viability of the approach as an engineering tool for structural analysis and optimization.

  8. Systems modeling to improve the hydro-ecological performance of diked wetlands

    NASA Astrophysics Data System (ADS)

    Alminagorta, Omar; Rosenberg, David E.; Kettenring, Karin M.

    2016-09-01

    Water scarcity and invasive vegetation threaten arid-region wetlands and wetland managers seek ways to enhance wetland ecosystem services with limited water, labor, and financial resources. While prior systems modeling efforts have focused on water management to improve flow-based ecosystem and habitat objectives, here we consider water allocation and invasive vegetation management that jointly target the concurrent hydrologic and vegetation habitat needs of priority wetland bird species. We formulate a composite weighted usable area for wetlands (WU) objective function that represents the wetland surface area that provides suitable water level and vegetation cover conditions for priority bird species. Maximizing the WU is subject to constraints such as water balance, hydraulic infrastructure capacity, invasive vegetation growth and control, and a limited financial budget to control vegetation. We apply the model at the Bear River Migratory Bird Refuge on the Great Salt Lake, Utah, compare model-recommended management actions to past Refuge water and vegetation control activities, and find that managers can almost double the area of suitable habitat by more dynamically managing water levels and managing invasive vegetation in August at the beginning of the window for control operations. Scenario and sensitivity analyses show the importance of jointly considering hydrology and vegetation system components rather than the hydrological component alone.
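    The constrained maximization described above can be illustrated with a toy linear program: allocate limited water among wetland units to maximize a weighted-usable-area-style objective under a supply balance and hydraulic capacity limits. All coefficients below are invented, and the Refuge model's vegetation growth/control constraints are omitted for brevity.

```python
from scipy.optimize import linprog

# Toy sketch of the allocation problem (numbers invented): two wetland units,
# a shared water supply, and per-unit hydraulic (inlet) capacities.
suitability = [0.8, 0.5]   # habitat area gained per unit of water delivered
supply = 100.0             # total water available for allocation
capacity = [70.0, 70.0]    # hydraulic capacity of each wetland unit's inlet

res = linprog(
    c=[-s for s in suitability],        # maximize => minimize the negative
    A_ub=[[1.0, 1.0]], b_ub=[supply],   # water balance: total delivery <= supply
    bounds=[(0.0, cap) for cap in capacity],
    method="highs",
)
print(f"deliveries: {res.x[0]:.0f}, {res.x[1]:.0f}; habitat objective: {-res.fun:.1f}")
```

The solver fills the more suitable unit to its capacity first, then allocates the remainder, which is the intuition behind the model's recommendation to manage water levels more dynamically.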

  9. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling), and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built, and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  10. The development of a fear of falling interdisciplinary intervention program

    PubMed Central

    Gomez, Fernando; Curcio, Carmen-Lucia

    2007-01-01

    Objective: To describe the development process of a protocol for a fear of falling interdisciplinary intervention program based on the main factors associated with fear of falling. Design/methods: The process of developing the protocol consisted of defining the target population, selecting the initial assessment components, and adapting the intervention program based on findings about fear of falling and restriction of activities in this population. Settings: University-affiliated outpatient vertigo, dizziness, and falls clinic in the coffee-growing zone of the Colombian Andes Mountains. Results: An intervention program was developed based on three main conceptual models of falling: a medical intervention based on a biomedical and pathophysiological model, a physiotherapeutic intervention based on a postural control model, and a psychological intervention based on a biological-behavioral model. Conclusion: The interdisciplinary fear of falling intervention program developed is based on the particular characteristics of the target population, with differences in the inclusion criteria and the program intervention components, and with emphasis on medical (recurrent falls and dizziness evaluation and management), psychological (cognitive-behavioral therapy), and physiotherapeutic (balance and transfers training) components. PMID:18225468

  11. A component-based system for agricultural drought monitoring by remote sensing.

    PubMed

    Dong, Heng; Li, Jun; Yuan, Yanbin; You, Lin; Chen, Chao

    2017-01-01

    In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, drought-related software and platform development lags behind the theoretical research. Current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on multi-dimensional feature space. In view of the above problems, this paper aims to fill this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on the Component Object Model (COM) to ensure the flexibility and extendibility of its modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support for agricultural drought monitoring.
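    COM itself is a Windows binary standard, so a faithful example is platform-specific; the plain-Python sketch below only mimics the discipline a COM-based system like RSDMS relies on: clients hold interface references rather than concrete classes, and obtain other interfaces through an explicit query. Every name here is invented.

```python
# COM-style componentization in plain Python (illustration only; real COM uses
# IUnknown::QueryInterface, GUIDs, and reference counting).
class IDroughtIndex:
    def compute(self, ndvi: float, lst: float) -> float:
        raise NotImplementedError

class IDescribable:
    def describe(self) -> str:
        raise NotImplementedError

class TvdiComponent(IDroughtIndex, IDescribable):
    """A pluggable drought-index 'component' exposing two interfaces."""
    def compute(self, ndvi: float, lst: float) -> float:
        # placeholder formula standing in for a real index inversion model
        return lst / (1.0 + ndvi)
    def describe(self) -> str:
        return "toy temperature/vegetation dryness index"

def query_interface(obj, interface):
    """COM-style QueryInterface: return the interface view, or None."""
    return obj if isinstance(obj, interface) else None

component = TvdiComponent()
idx = query_interface(component, IDroughtIndex)
print(idx.compute(ndvi=0.5, lst=30.0))   # client talks only to the interface
```

Because clients depend only on interfaces, new index models can be swapped in without touching client code, which is the flexibility/extendibility claim in the abstract.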

  12. A component-based system for agricultural drought monitoring by remote sensing

    PubMed Central

    Yuan, Yanbin; You, Lin; Chen, Chao

    2017-01-01

    In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, drought-related software and platform development lags behind the theoretical research. Current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on multi-dimensional feature space. In view of the above problems, this paper aims to fill this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on the Component Object Model (COM) to ensure the flexibility and extendibility of its modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China’s Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support for agricultural drought monitoring. PMID:29236700

  13. The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.

    PubMed

    Olivier, Brett G; Bergmann, Frank T

    2015-09-04

    Constraint-based modeling is a well-established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome-scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA), which, for example, requires a modelling description to include the definition of a stoichiometric matrix, an objective function, and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange, and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds, and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models that can be built upon by the community to meet future needs (e.g., by extending it to cover dynamic FBC models).
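    The FBA computation that FBC-encoded models feed into can be sketched as a small linear program: maximize the objective flux subject to the steady-state constraint S v = 0 and flux bounds. The three-reaction toy network below is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal FBA sketch: toy network  uptake -> A -> B -> export  (invented).
# Rows of S are metabolites (A, B); columns are reactions (v1, v2, v3).
S = np.array([[1, -1, 0],    # A: produced by v1, consumed by v2
              [0, 1, -1]])   # B: produced by v2, consumed by v3
bounds = [(0, 10), (0, 1000), (0, 1000)]   # v1 capped like a nutrient uptake
c = [0, 0, -1]                             # maximize v3 (export) => minimize -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, "objective:", -res.fun)
```

An FBC document encodes exactly these ingredients (stoichiometry, flux bounds, objective) in a standard format so any solver can reproduce the calculation.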

  14. The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.

    PubMed

    Olivier, Brett G; Bergmann, Frank T

    2015-06-01

    Constraint-based modeling is a well-established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome-scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA), which, for example, requires a modelling description to include the definition of a stoichiometric matrix, an objective function, and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange, and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds, and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models that can be built upon by the community to meet future needs (e.g., by extending it to cover dynamic FBC models).

  15. Cellular automata with object-oriented features for parallel molecular network modeling.

    PubMed

    Zhu, Hao; Wu, Yinghui; Huang, Sui; Sun, Yan; Dhar, Pawan

    2005-06-01

    Cellular automata are an important modeling paradigm for studying the dynamics of large, parallel systems composed of multiple, interacting components. However, to model biological systems, cellular automata need to be extended beyond the large-scale parallelism and intensive communication in order to capture two fundamental properties characteristic of complex biological systems: hierarchy and heterogeneity. This paper proposes extensions to a cellular automata language, Cellang, to meet this purpose. The extended language, with object-oriented features, can be used to describe the structure and activity of parallel molecular networks within cells. Capabilities of this new programming language include object structure to define molecular programs within a cell, floating-point data type and mathematical functions to perform quantitative computation, message passing capability to describe molecular interactions, as well as new operators, statements, and built-in functions. We discuss relevant programming issues of these features, including the object-oriented description of molecular interactions with molecule encapsulation, message passing, and the description of heterogeneity and anisotropy at the cell and molecule levels. By enabling the integration of modeling at the molecular level with system behavior at cell, tissue, organ, or even organism levels, the program will help improve our understanding of how complex and dynamic biological activities are generated and controlled by parallel functioning of molecular networks. Index Terms: Cellular automata, modeling, molecular network, object-oriented.
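    The message-passing, object-oriented style of cell description can be sketched in plain Python (this illustrates the idea, not Cellang syntax): each cell is an object that sends its state to its neighbors in one phase and updates in parallel in a second phase.

```python
# Sketch only (not Cellang): object-oriented cells on a ring that exchange
# messages and update synchronously, with a toy diffusion-like rule.
class Cell:
    def __init__(self, concentration: float):
        self.concentration = concentration
        self.inbox = []                      # messages received from neighbors

    def send(self, neighbors):
        for n in neighbors:                  # message-passing phase
            n.inbox.append(self.concentration)

    def update(self):
        # relax halfway toward the mean of received messages, then clear inbox
        if self.inbox:
            self.concentration = 0.5 * self.concentration + \
                                 0.5 * sum(self.inbox) / len(self.inbox)
        self.inbox = []

cells = [Cell(c) for c in (1.0, 0.0, 0.0, 0.0)]   # ring of four cells
for _ in range(50):
    for i, cell in enumerate(cells):              # phase 1: all cells send
        cell.send([cells[(i - 1) % 4], cells[(i + 1) % 4]])
    for cell in cells:                            # phase 2: parallel update
        cell.update()

print([round(c.concentration, 3) for c in cells])  # approaches uniform 0.25
```

Heterogeneity, in the paper's sense, would correspond to mixing cell objects of different classes (different update rules) on the same lattice.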

  16. Reliability models applicable to space telescope solar array assembly system

    NASA Technical Reports Server (NTRS)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of the reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to series and parallel models, respectively; hence, the models can be specialized to parallel, series combination systems. The models are developed by assuming the failure rates of the components as functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
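    The k-out-of-n construction above has a direct closed form: the subsystem survives while fewer than k of its n identical components have failed. A minimal sketch, with an invented component reliability, showing the series (k = 1) and parallel (k = n) limits:

```python
from math import comb

# Reliability of a subsystem that fails once k or more of its n identical,
# independent components (each with reliability r) have failed.
def subsystem_reliability(n: int, k: int, r: float) -> float:
    """P(fewer than k failures) via the binomial distribution."""
    q = 1.0 - r
    return sum(comb(n, i) * q**i * r**(n - i) for i in range(k))

r = 0.95   # illustrative per-component reliability (not from the paper)
# k = 1: any failure is fatal -> series system, reliability r**n
print(subsystem_reliability(4, 1, r))   # equals 0.95**4
# k = n: all components must fail -> parallel system, reliability 1 - (1-r)**n
print(subsystem_reliability(4, 4, r))   # equals 1 - 0.05**4
```

Time-dependent failure rates enter by replacing the constant r with r(t) before evaluating the same sum.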

  17. A cortical framework for invariant object categorization and recognition.

    PubMed

    Rodrigues, João; Hans du Buf, J M

    2009-08-01

    In this paper we present a new model for invariant object categorization and recognition. It is based on explicit multi-scale features: lines, edges and keypoints are extracted from responses of simple, complex and end-stopped cells in cortical area V1, and keypoints are used to construct saliency maps for Focus-of-Attention. The model is a functional but dichotomous one, because keypoints are employed to model the "where" data stream, with dynamic routing of features from V1 to higher areas to obtain translation, rotation and size invariance, whereas lines and edges are employed in the "what" stream for object categorization and recognition. Furthermore, both the "where" and "what" pathways are dynamic in that information at coarse scales is employed first, after which information at progressively finer scales is added in order to refine the processes, i.e., both the dynamic feature routing and the categorization level. The construction of group and object templates, which are thought to be available in the prefrontal cortex with "what" and "where" components in PF46d and PF46v, is also illustrated. The model was tested in the framework of an integrated and biologically plausible architecture.

  18. A Methodology for Modeling Nuclear Power Plant Passive Component Aging in Probabilistic Risk Assessment under the Impact of Operating Conditions, Surveillance and Maintenance Activities

    NASA Astrophysics Data System (ADS)

    Guler Yigitoglu, Askin

    In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data which means that the true state of a specific plant is not reflected in a realistic manner on aging effects. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: The development of a methodology for the incorporation of aging modeling of passive SSC into a reactor simulation environment to provide a framework for evaluation of their risk contribution in both the dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe weld and steam generator tubes in commercial nuclear power plants. 
In the proposed methodology, a multi-state physics-based model is selected to represent the aging process. The model is modified via a sojourn-time approach to reflect the operational and maintenance history dependence of the transition rates. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment, and uncertainties associated with both the parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: i) defining a process for selecting critical passive components and related aging mechanisms, ii) aging model selection, iii) calculating the probability that aging would cause the component to fail, iv) uncertainty/sensitivity analyses, v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.

  19. A residency clinic chronic condition management quality improvement project.

    PubMed

    Halverson, Larry W; Sontheimer, Dan; Duvall, Sharon

    2007-02-01

    Quality improvement in chronic disease management is a major agenda for improving health and reducing health care costs. A six-component chronic disease management model can help guide this effort. Several characteristics of the "new model" of family medicine described by the Future of Family Medicine (FFM) Project Leadership Committee are promulgated to foster practice changes that improve quality. Our objective was to implement and assess a quality improvement project guided by the components of a chronic disease management model and the FFM new model characteristics. Diabetes was selected as the model chronic disease focus. Multiple practice changes were implemented. A mature electronic medical record facilitated data collection and measurement of quality improvement progress. Data from the diabetes registry demonstrate that our efforts have been effective. Significant improvement occurred in five out of six quality indicators. Multidisciplinary teamwork in a model residency practice, guided by chronic disease management principles and the FFM new model characteristics, can produce significant management improvements in one important chronic disease.

  20. Ground control station software design for micro aerial vehicles

    NASA Astrophysics Data System (ADS)

    Walendziuk, Wojciech; Oldziej, Daniel; Binczyk, Dawid Przemyslaw; Slowik, Maciej

    2017-08-01

    This article describes the process of designing the equipment part and the software of a ground control station used for configuring and operating micro unmanned aerial vehicles (UAVs). All the work was conducted on a quadrocopter model, a commonly accessible commercial construction. The article presents the characteristics of the research object, the basics of operating micro aerial vehicles (MAVs), and the components of the ground control station model. It also describes the communication standards used for building the model of the station. A further part of the work concerns the software of the product, the GIMSO application (Generally Interactive Station for Mobile Objects), which enables the user to manage the actions, communication, and control processes of the UAV. The process of creating the software and the field tests of the station model are also presented in the article.

  1. A simple methodology to produce flood risk maps consistent with FEMA's base flood elevation maps: Implementation and validation over the entire contiguous United States

    NASA Astrophysics Data System (ADS)

    Goteti, G.; Kaheil, Y. H.; Katz, B. G.; Li, S.; Lohmann, D.

    2011-12-01

    In the United States, government agencies as well as the National Flood Insurance Program (NFIP) use flood inundation maps associated with the 100-year return period (base flood elevation, BFE), produced by the Federal Emergency Management Agency (FEMA), as the basis for flood insurance. A credibility check of the flood risk hydraulic models, often employed by insurance companies, is their ability to reasonably reproduce FEMA's BFE maps. We present results from the implementation of a flood modeling methodology aimed towards reproducing FEMA's BFE maps at a very fine spatial resolution using a computationally parsimonious, yet robust, hydraulic model. The hydraulic model used in this study has two components: one for simulating flooding of the river channel and adjacent floodplain, and the other for simulating flooding in the remainder of the catchment. The first component is based on a 1-D wave propagation model, while the second component is based on a 2-D diffusive wave model. The 1-D component captures the flooding from large-scale river transport (including upstream effects), while the 2-D component captures the flooding from local rainfall. The study domain consists of the contiguous United States, hydrologically subdivided into catchments averaging about 500 km2 in area, at a spatial resolution of 30 meters. Using historical daily precipitation data from the Climate Prediction Center (CPC), the precipitation associated with the 100-year return period event was computed for each catchment and was input to the hydraulic model. Flood extent from the FEMA BFE maps is reasonably replicated by the 1-D component of the model (riverine flooding). FEMA's BFE maps only represent the riverine flooding component and are unavailable for many regions of the USA. However, this modeling methodology (1-D and 2-D components together) covers the entire contiguous USA. 
This study is part of a larger modeling effort from Risk Management Solutions (RMS) to estimate flood risk associated with extreme precipitation events in the USA. Towards this greater objective, state-of-the-art models of flood hazard and stochastic precipitation are being implemented over the contiguous United States. Results from the successful implementation of the modeling methodology will be presented.
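    As a hedged illustration of one common way to compute a 100-year precipitation event from a historical series (the abstract does not state which method the authors used): fit a Gumbel distribution to annual maxima by the method of moments and evaluate its 1 − 1/T quantile. The data below are synthetic.

```python
import math, random

# Gumbel fit by method of moments, then the T-year return level (the quantile
# with non-exceedance probability 1 - 1/T). Illustrative only.
def gumbel_return_level(annual_maxima, return_period=100.0):
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi          # Gumbel scale parameter
    mu = mean - 0.5772156649 * beta                # location (Euler-Mascheroni)
    p = 1.0 - 1.0 / return_period                  # non-exceedance probability
    return mu - beta * math.log(-math.log(p))      # Gumbel quantile function

random.seed(1)
# synthetic annual precipitation maxima (mm), standing in for a CPC-derived series
maxima = [60 + 20 * random.gammavariate(2.0, 1.0) for _ in range(50)]
print(f"estimated 100-year event: {gumbel_return_level(maxima):.1f} mm")
```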

  2. Multi-focus image fusion algorithm using NSCT and MPCNN

    NASA Astrophysics Data System (ADS)

    Liu, Kang; Wang, Lianli

    2018-04-01

    Based on the nonsubsampled contourlet transform (NSCT) and a modified pulse coupled neural network (MPCNN), this paper proposes an effective method of image fusion. Firstly, the paper decomposes the source image into low-frequency and high-frequency components using the NSCT, and then processes the low-frequency components by regional statistical fusion rules. For the high-frequency components, the paper calculates the spatial frequency (SF), which is input into the MPCNN model to obtain the relevant coefficients according to the fire-mapping image of the MPCNN. Finally, the paper reconstructs the fused image by the inverse transform of the low-frequency and high-frequency components. Compared with the wavelet transform (WT) and the traditional NSCT algorithm, experimental results indicate that the method proposed in this paper achieves an improvement in both human visual perception and objective evaluation, and that it is effective and practical.
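    The spatial frequency (SF) measure fed to the MPCNN is commonly defined from row and column first differences, SF = sqrt(RF² + CF²). Below is a minimal sketch of one common formulation (normalization details vary across papers); a sharp, in-focus block scores higher than a defocused, uniform one.

```python
import numpy as np

# Spatial frequency of an image block: combine the RMS of horizontal (row)
# and vertical (column) pixel differences. One common formulation.
def spatial_frequency(block: np.ndarray) -> float:
    b = block.astype(float)
    rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))   # column frequency
    return float(np.hypot(rf, cf))

flat = np.full((8, 8), 100.0)                        # uniform, defocused-looking block
sharp = np.indices((8, 8)).sum(axis=0) % 2 * 255.0   # high-contrast checkerboard
print(spatial_frequency(flat), spatial_frequency(sharp))
```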

  3. Faint Object Camera imaging and spectroscopy of NGC 4151

    NASA Technical Reports Server (NTRS)

    Boksenberg, A.; Catchpole, R. M.; Macchetto, F.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.

    1995-01-01

    We describe ultraviolet and optical imaging and spectroscopy within the central few arcseconds of the Seyfert galaxy NGC 4151, obtained with the Faint Object Camera on the Hubble Space Telescope. A narrowband image including [O III] λ5007 shows a bright nucleus centered on a complex biconical structure having apparent opening angle approximately 65 deg and axis at a position angle along 65 deg-245 deg; images in bands including Lyman-α and C IV λ1550 and in the optical continuum near 5500 Å show only the bright nucleus. In an off-nuclear optical long-slit spectrum we find a high and a low radial velocity component within the narrow emission lines. We identify the low-velocity component with the bright, extended, knotty structure within the cones, and the high-velocity component with more confined diffuse emission. Also present are strong continuum emission and broad Balmer emission line components, which we attribute to the extended point spread function arising from the intense nuclear emission. Adopting the geometry pointed out by Pedlar et al. (1993) to explain the observed misalignment of the radio jets and the main optical structure, we model an ionizing radiation bicone, originating within a galactic disk, with apex at the active nucleus and axis centered on the extended radio jets. We confirm that through density bounding the gross spatial structure of the emission line region can be reproduced with a wide opening angle that includes the line of sight, consistent with the presence of a simple opaque torus allowing a direct view of the nucleus. In particular, our modelling reproduces the observed decrease in position angle with distance from the nucleus, progressing initially from the direction of the extended radio jet, through our optical structure, and on to the extended narrow-line region. We explore the kinematics of the narrow-line low- and high-velocity components on the basis of our spectroscopy and adopted model structure.

  4. International Planetary Data Alliance (IPDA) Information Model

    NASA Technical Reports Server (NTRS)

    Hughes, John Steven; Beebe, R.; Guinness, E.; Heather, D.; Huang, M.; Kasaba, Y.; Osuna, P.; Rye, E.; Savorskiy, V.

    2007-01-01

    This document is the third deliverable of the International Planetary Data Alliance (IPDA) Archive Data Standards Requirements Identification project. The goal of the project is to identify a subset of the standards currently in use by NASA's Planetary Data System (PDS) that are appropriate for internationalization. As shown in the highlighted sections of Figure 1, the focus of this project is the Information Model component of the Data Architecture Standards, namely the object models, a data dictionary, and a set of data formats.

  5. Pricing index-based catastrophe bonds: Part 1: Formulation and discretization issues using a numerical PDE approach

    NASA Astrophysics Data System (ADS)

    Unger, André J. A.

    2010-02-01

    This work is the first installment in a two-part series, and focuses on the development of a numerical PDE approach to price components of a Bermudan-style callable catastrophe (CAT) bond. The bond is based on two underlying stochastic variables; the PCS index which posts quarterly estimates of industry-wide hurricane losses as well as a single-factor CIR interest rate model for the three-month LIBOR. The aggregate PCS index is analogous to losses claimed under traditional reinsurance in that it is used to specify a reinsurance layer. The proposed CAT bond model contains a Bermudan-style call feature designed to allow the reinsurer to minimize their interest rate risk exposure on making substantial fixed coupon payments using capital from the reinsurance premium. Numerical PDE methods are the fundamental strategy for pricing early-exercise constraints, such as the Bermudan-style call feature, into contingent claim models. Therefore, the objective and unique contribution of this first installment in the two-part series is to develop a formulation and discretization strategy for the proposed CAT bond model utilizing a numerical PDE approach. Object-oriented code design is fundamental to the numerical methods used to aggregate the PCS index, and implement the call feature. Therefore, object-oriented design issues that relate specifically to the development of a numerical PDE approach for the component of the proposed CAT bond model that depends on the PCS index and LIBOR are described here. Formulation, numerical methods and code design issues that relate to aggregating the PCS index and introducing the call option are the subject of the companion paper.
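    The single-factor CIR model mentioned above, dr = κ(θ − r)dt + σ√r dW, can be sketched with a full-truncation Euler scheme before any PDE machinery is introduced; the parameter values below are invented for illustration and are not from the paper.

```python
import numpy as np

# Monte Carlo sketch of the CIR short-rate model: dr = kappa*(theta - r)dt
# + sigma*sqrt(r)dW, with full truncation so the sqrt argument stays valid.
def simulate_cir(r0, kappa, theta, sigma, T, steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    r = np.full(n_paths, r0)
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        # full truncation: evaluate the diffusion at max(r, 0)
        r = r + kappa * (theta - r) * dt + sigma * np.sqrt(np.maximum(r, 0.0)) * dw
    return r

rates = simulate_cir(r0=0.03, kappa=1.5, theta=0.05, sigma=0.1,
                     T=5.0, steps=500, n_paths=20000)
print(f"mean short rate after 5y: {rates.mean():.4f} (long-run theta = 0.05)")
```

Mean reversion pulls the simulated mean toward θ, matching the closed-form expectation E[r_T] = θ + (r0 − θ)e^{−κT}; the PDE approach of the paper prices the callable bond on the same dynamics.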

  6. Multi objective decision making in hybrid energy system design

    NASA Astrophysics Data System (ADS)

    Merino, Gabriel Guillermo

    This dissertation undertakes the design of a grid-connected photovoltaic-wind generator system supplying a farmstead in Nebraska. The design process took into account competing criteria that motivate the use of different sources of energy for electric generation. The criteria considered were 'Financial', 'Environmental', and 'User/System compatibility'. A distance-based multi-objective decision making methodology was developed to rank design alternatives. The method is based upon a precedence order imposed upon the design objectives and a distance metric describing the performance of each alternative. This methodology advances previous work by combining ambiguous information about the alternatives with a decision-maker-imposed precedence order on the objectives. Design alternatives, defined by the photovoltaic array and wind generator installed capacities, were analyzed using the multi-objective decision making approach. The performance of the design alternatives was determined by simulating the system using hourly data for an electric load for a farmstead and hourly averages of solar irradiation, temperature and wind speed from eight wind-solar energy monitoring sites in Nebraska. The spatial variability of the solar energy resource within the region was assessed by determining semivariogram models to krige hourly and daily solar radiation data. No significant difference was found in the predicted performance of the system when kriged solar radiation data from these models were used in place of actual data. The spatial variability of the combined wind and solar energy resources was included in the design analysis by using fuzzy numbers and arithmetic. The best alternative was dependent upon the precedence order assumed for the main criteria. Alternatives with no PV array or wind generator dominated when the 'Financial' criterion preceded the others. In contrast, alternatives with no PV array but a large wind generator component dominated when the 'Environmental' or 'User/System compatibility' objectives were ranked above the 'Financial' objective, and also when the three criteria were considered equally important.
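
    The precedence-ordered ranking described above can be illustrated with a minimal sketch. The alternatives, criterion scores, and precedence orders below are hypothetical stand-ins, not values from the dissertation, and the real method additionally handles ambiguous (fuzzy) performance information, which is omitted here:

```python
# Hypothetical distance-to-ideal scores (lower is better), listed in the
# order: Financial, Environmental, User/System compatibility.
alternatives = {
    "no_pv_no_wind":   (0.1, 0.9, 0.8),
    "large_wind_only": (0.6, 0.2, 0.3),
    "pv_plus_wind":    (0.8, 0.3, 0.4),
}

def rank(alts, precedence):
    # Lexicographic ranking: sort by the highest-precedence criterion first,
    # breaking ties with the lower-precedence criteria.
    return sorted(alts, key=lambda a: tuple(alts[a][i] for i in precedence))

financial_first = rank(alternatives, precedence=(0, 1, 2))
environment_first = rank(alternatives, precedence=(1, 0, 2))
```

    Changing the precedence order flips the winning alternative, mirroring the dissertation's finding that the best design depends on which criterion is ranked first.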

  7. Building Quantitative Hydrologic Storylines from Process-based Models for Managing Water Resources in the U.S. Under Climate-changed Futures

    NASA Astrophysics Data System (ADS)

    Arnold, J.; Gutmann, E. D.; Clark, M. P.; Nijssen, B.; Vano, J. A.; Addor, N.; Wood, A.; Newman, A. J.; Mizukami, N.; Brekke, L. D.; Rasmussen, R.; Mendoza, P. A.

    2016-12-01

    Climate change narratives for water-resource applications must represent the change signals contextualized by hydroclimatic process variability and uncertainty at multiple scales. Building narratives of plausible change includes assessing uncertainties across GCM structure, internal climate variability, climate downscaling methods, and hydrologic models. Work with this linked modeling chain has dealt mostly with GCM sampling directed separately to either model fidelity (does the model correctly reproduce the physical processes in the world?) or sensitivity (of different model responses to CO2 forcings) or diversity (of model type, structure, and complexity). This leaves unaddressed any interactions among those measures and with other components in the modeling chain used to identify water-resource vulnerabilities to specific climate threats. However, time-sensitive, real-world vulnerability studies typically cannot accommodate a full uncertainty ensemble across the whole modeling chain, so a gap has opened between current scientific knowledge and most routine applications for climate-changed hydrology. To close that gap, the US Army Corps of Engineers, the Bureau of Reclamation, and the National Center for Atmospheric Research are working on techniques to subsample uncertainties objectively across modeling chain components and to integrate results into quantitative hydrologic storylines of climate-changed futures. Importantly, these quantitative storylines are not drawn from a small sample of models or components. Rather, they stem from the more comprehensive characterization of the full uncertainty space for each component. Equally important from the perspective of water-resource practitioners, these quantitative hydrologic storylines are anchored in actual design and operations decisions potentially affected by climate change. 
This talk will describe part of our work characterizing variability and uncertainty across modeling chain components and their interactions using newly developed observational data, models and model outputs, and post-processing tools for making the resulting quantitative storylines most useful in practical hydrology applications.

  8. Convective radiation fluid-dynamics: formation and early evolution of ultra low-mass objects

    NASA Astrophysics Data System (ADS)

    Wuchterl, G.

    2005-12-01

    The formation of ultra low-mass objects is a natural extension of the star formation process. The physical changes towards lower mass are discussed by investigating the collapse of cloud cores that are modelled as Bonnor-Ebert spheres. Their collapse is followed by solving the equations of fluid dynamics with radiation and a model of time-dependent convection that has been calibrated to the Sun. For a sequence of cloud cores with 1 to 0.01 solar masses, evolutionary tracks and isochrones are shown in the mass-radius diagram, the Hertzsprung-Russell diagram and the effective temperature-surface gravity or Kiel diagram. The collapse and the early hydrostatic evolution to ages of a few Ma are briefly discussed and compared to observations of objects in Upper Scorpius and the low-mass components of GG Tau.
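
    A Bonnor-Ebert sphere is the hydrostatic isothermal configuration from which the collapse calculations above start. As a sketch of the underlying structure (not the paper's radiation fluid-dynamics code), the isothermal Lane-Emden equation can be integrated numerically to recover the well-known critical centre-to-edge density contrast of about 14:

```python
import numpy as np

def lane_emden_isothermal(xi_max, h=1e-3):
    """Integrate psi'' + (2/xi) psi' = exp(-psi), psi(0) = psi'(0) = 0,
    with a fixed-step RK4 scheme; returns psi(xi_max)."""
    def deriv(xi, y):
        psi, dpsi = y
        return np.array([dpsi, np.exp(-psi) - 2.0 * dpsi / xi])
    # start slightly off-centre using the series solution psi ~ xi^2 / 6
    xi = h
    y = np.array([xi**2 / 6.0, xi / 3.0])
    while xi < xi_max - 1e-12:
        k1 = deriv(xi, y)
        k2 = deriv(xi + h / 2, y + h / 2 * k1)
        k3 = deriv(xi + h / 2, y + h / 2 * k2)
        k4 = deriv(xi + h, y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xi += h
    return y[0]

# critical dimensionless radius of a Bonnor-Ebert sphere is xi ~ 6.451;
# the centre-to-edge density contrast there is exp(psi) ~ 14
contrast = np.exp(lane_emden_isothermal(6.451))
```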

  9. Sparse modeling of spatial environmental variables associated with asthma

    PubMed Central

    Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.

    2014-01-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. PMID:25533437

  10. Sparse modeling of spatial environmental variables associated with asthma.

    PubMed

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

    Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.
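
    The dimensional-reduction step of SASEA uses sparse principal component analysis. As a minimal numpy-only illustration (not the authors' implementation), the leading sparse component can be obtained by truncated power iteration, which zeroes all but the largest-magnitude loadings at each step; the synthetic data below are hypothetical:

```python
import numpy as np

def sparse_leading_component(X, n_nonzero=3, n_iter=200):
    """Leading sparse principal component via truncated power iteration:
    keep only the n_nonzero largest-magnitude loadings each step."""
    C = np.cov(X, rowvar=False)
    v = np.ones(C.shape[0]) / np.sqrt(C.shape[0])
    for _ in range(n_iter):
        v = C @ v
        keep = np.argsort(np.abs(v))[-n_nonzero:]  # indices to retain
        mask = np.zeros_like(v)
        mask[keep] = v[keep]
        v = mask / np.linalg.norm(mask)
    return v

rng = np.random.default_rng(1)
# synthetic data: variables 0-2 share a strong common factor, 3-9 are noise
factor = rng.normal(size=(500, 1))
X = rng.normal(size=(500, 10)) * 0.5
X[:, :3] += factor
loadings = sparse_leading_component(X, n_nonzero=3)
```

    The sparse loading vector is then what would feed a downstream regression model; a sparse component names only a handful of variables, which is what makes the identified asthma associations interpretable.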

  11. From Sensory Signals to Modality-Independent Conceptual Representations: A Probabilistic Language of Thought Approach

    PubMed Central

    Erdogan, Goker; Yildirim, Ilker; Jacobs, Robert A.

    2015-01-01

    People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception. PMID:26554704

  12. Towards a three-component model of fan loyalty: a case study of Chinese youth.

    PubMed

    Zhang, Xiao-xiao; Liu, Li; Zhao, Xian; Zheng, Jian; Yang, Meng; Zhang, Ji-qi

    2015-01-01

    The term "fan loyalty" refers to the loyalty felt and expressed by a fan towards the object of his/her fanaticism in both everyday and academic discourses. However, much of the literature on fan loyalty has paid little attention to the topic from the perspective of youth pop culture. The present study explored the meaning of fan loyalty in the context of China. Data were collected by the method of in-depth interviews with 16 young Chinese people aged between 19 and 25 years who currently or once were pop fans. The results indicated that fan loyalty entails three components: involvement, satisfaction, and affiliation. These three components regulate the process of fan loyalty development, which can be divided into four stages: inception, upgrade, zenith, and decline. This model provides a conceptual explanation of why and how young Chinese fans are loyal to their favorite stars. The implications of the findings are discussed.

  13. A comparison of correlation-length estimation methods for the objective analysis of surface pollutants at Environment and Climate Change Canada.

    PubMed

    Ménard, Richard; Deshaies-Jacques, Martin; Gasset, Nicolas

    2016-09-01

    An objective analysis is one of the main components of data assimilation. By combining observations with the output of a predictive model, we combine the best features of each source of information: the complete spatial and temporal coverage provided by models, with a close representation of the truth provided by observations. The process of combining observations with a model output is called an analysis. Producing an analysis requires knowledge of the observation and model errors, as well as their spatial correlations. This paper is devoted to the development of methods of estimation of these error variances and the characteristic length-scale of the model error correlation for operational use in the Canadian objective analysis system. We first argue in favor of using compact-support correlation functions, and then introduce three estimation methods: the Hollingsworth-Lönnberg (HL) method in local and global form, the maximum likelihood (ML) method, and the [Formula: see text] diagnostic method. We perform one-dimensional (1D) simulation studies where the error variance and true correlation length are known, and estimate both error variances and correlation length where both are non-uniform. We show that a local version of the HL method can accurately capture the error variances and correlation length at each observation site, provided that spatial variability is not too strong. However, the operational objective analysis requires only a single, globally valid correlation length. We examine whether any statistic of the local HL correlation lengths could be a useful estimate, or whether other global estimation methods such as the global HL, ML, or [Formula: see text] should be used. We found in both 1D simulation and using real data that the ML method is able to capture physically significant aspects of the correlation length, while most other estimates give unphysical and larger length-scale values. 
This paper describes a proposed improvement of the objective analysis of surface pollutants at Environment and Climate Change Canada (formerly known as Environment Canada). Objective analyses are essentially surface maps of air pollutants that are obtained by combining observations with an air quality model output, and are thought to provide a complete and more accurate representation of the air quality. The highlight of this study is an analysis of methods to estimate the model (or background) error correlation length-scale. The error statistics are an important and critical component to the analysis scheme.
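
    The background-error correlation length-scale discussed above is typically obtained by fitting a parametric correlation model to binned innovation statistics. The sketch below fits an exponential model by least squares over a grid of candidate lengths; this is a simplification of the HL and ML estimators in the paper, and all numbers are hypothetical:

```python
import numpy as np

def fit_correlation_length(dists, covs, lengths):
    """Least-squares fit of c(d) = sigma2 * exp(-d / L) over a grid of
    candidate lengths L; for each L the best sigma2 has a closed form."""
    best = None
    for L in lengths:
        basis = np.exp(-dists / L)
        sigma2 = (covs @ basis) / (basis @ basis)  # linear least squares
        resid = np.sum((covs - sigma2 * basis) ** 2)
        if best is None or resid < best[0]:
            best = (resid, L, sigma2)
    return best[1], best[2]

true_L, true_s2 = 120.0, 2.0                     # hypothetical km / variance
d = np.linspace(10, 500, 50)                     # separation bins
rng = np.random.default_rng(2)
c = true_s2 * np.exp(-d / true_L) + rng.normal(0, 0.01, d.size)
L_hat, s2_hat = fit_correlation_length(d, c, lengths=np.linspace(50, 300, 251))
```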

  14. Modeling the Space Radiation Environment

    NASA Technical Reports Server (NTRS)

    Xapsos, Michael A.

    2006-01-01

    There has been a renaissance of interest in space radiation environment modeling. This has been fueled by the growing need to replace the longtime standard AP-8 and AE-8 trapped particle models, the interplanetary exploration initiative, the modern satellite instrumentation that has led to unprecedented measurement accuracy, and the pervasive use of Commercial off the Shelf (COTS) microelectronics that require more accurate predictive capabilities. The objective of this viewgraph presentation was to provide basic understanding of the components of the space radiation environment and their variations, review traditional radiation effects application models, and present recent developments.

  15. An Object-oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2008-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It would also facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented
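
    The object-oriented weight build-up that motivates WATE++ can be sketched with a composite pattern, where an engine's weight is the recursive sum of its components' weights. This is an illustrative class hierarchy, not the actual WATE++ architecture, and the part names and weights are hypothetical:

```python
class Component:
    """Base class: every engine part reports its own weight in kg."""
    def weight(self):
        raise NotImplementedError

class Disk(Component):
    """A leaf part with a directly computed (here, fixed) weight."""
    def __init__(self, kg):
        self.kg = kg
    def weight(self):
        return self.kg

class Assembly(Component):
    """A compressor, turbine, or whole engine aggregates sub-components."""
    def __init__(self, parts):
        self.parts = parts
    def weight(self):
        return sum(p.weight() for p in self.parts)

engine = Assembly([
    Assembly([Disk(120.0), Disk(95.5)]),  # hypothetical compressor parts
    Assembly([Disk(140.0)]),              # hypothetical turbine part
])
total = engine.weight()
```

    The design choice this illustrates is that new component types only need to implement `weight()`, so the code can be extended to new engine architectures without touching the aggregation logic.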

  16. A New Conceptualization of Human Visual Sensory-Memory

    PubMed Central

    Öğmen, Haluk; Herzog, Michael H.

    2016-01-01

    Memory is an essential component of cognition and disorders of memory have significant individual and societal costs. The Atkinson–Shiffrin “modal model” forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory, Short-Term Memory (STM; also called working memory, WM), and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While the theories of STM and LTM underwent significant modifications to address these shortcomings, models of iconic memory remained largely unchanged: a high-capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected on the retina. The fundamental shortcoming of iconic memory models is that, because contents are encoded in retinotopic coordinates, the iconic memory cannot hold any useful information under normal viewing conditions when objects or the subject are in motion. Hence, half a century after its formulation, it remains an unresolved problem whether and how the first stage of the modal model serves any useful function and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference-frame consists of motion-grouping based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory. PMID:27375519

  17. Improved GGIW-PHD filter for maneuvering non-ellipsoidal extended targets or group targets tracking based on sub-random matrices.

    PubMed

    Liang, Zhibing; Liu, Fuxian; Gao, Jiale

    2018-01-01

    For non-ellipsoidal extended targets and group targets tracking (NETT and NGTT), using an ellipsoid to approximate the target extension may not be accurate enough because of the lack of shape and orientation information. In consideration of this, we model a non-ellipsoidal extended target or target group as a combination of multiple ellipsoidal sub-objects, each represented by a random matrix. Based on these models, an improved gamma Gaussian inverse Wishart probability hypothesis density (GGIW-PHD) filter is proposed to estimate the measurement rates, kinematic states, and extension states of the sub-objects for each extended target or target group. For maneuvering NETT and NGTT, a multi-model (MM) approach based GGIW-PHD (MM-GGIW-PHD) filter is proposed. The common and the individual dynamics of the sub-objects belonging to the same extended target or target group are described by means of the combination between the overall maneuver model and the sub-object models. For the merging of updating components, an improved merging criterion and a new merging method are derived. A specific implementation of prediction partition with pseudo-likelihood method is presented. Two scenarios for non-maneuvering and maneuvering NETT and NGTT are simulated. The results demonstrate the effectiveness of the proposed algorithms.
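
    One building block of the improved merging step is collapsing nearby updating components into a single component by moment matching. The sketch below shows this for the weighted-Gaussian (kinematic) part only; actual GGIW components also carry gamma and inverse-Wishart parameters, which are omitted here, and the example values are hypothetical:

```python
import numpy as np

def merge_gaussians(w, means, covs):
    """Moment-matched merge of weighted Gaussian components, as used when
    collapsing nearby updating components in PHD-type filters."""
    w = np.asarray(w, float)
    W = w.sum()
    m = sum(wi * mi for wi, mi in zip(w, means)) / W
    # merged covariance includes the spread-of-means term
    P = sum(wi * (Pi + np.outer(mi - m, mi - m))
            for wi, mi, Pi in zip(w, means, covs)) / W
    return W, m, P

w = [0.6, 0.4]
means = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]
covs = [np.eye(2), np.eye(2)]
W, m, P = merge_gaussians(w, means, covs)
```

    In a full filter, a merging criterion (e.g. a Mahalanobis-type distance between components) decides which pairs are close enough to be merged this way.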

  18. Improved GGIW-PHD filter for maneuvering non-ellipsoidal extended targets or group targets tracking based on sub-random matrices

    PubMed Central

    Liu, Fuxian; Gao, Jiale

    2018-01-01

    For non-ellipsoidal extended targets and group targets tracking (NETT and NGTT), using an ellipsoid to approximate the target extension may not be accurate enough because of the lack of shape and orientation information. In consideration of this, we model a non-ellipsoidal extended target or target group as a combination of multiple ellipsoidal sub-objects, each represented by a random matrix. Based on these models, an improved gamma Gaussian inverse Wishart probability hypothesis density (GGIW-PHD) filter is proposed to estimate the measurement rates, kinematic states, and extension states of the sub-objects for each extended target or target group. For maneuvering NETT and NGTT, a multi-model (MM) approach based GGIW-PHD (MM-GGIW-PHD) filter is proposed. The common and the individual dynamics of the sub-objects belonging to the same extended target or target group are described by means of the combination between the overall maneuver model and the sub-object models. For the merging of updating components, an improved merging criterion and a new merging method are derived. A specific implementation of prediction partition with pseudo-likelihood method is presented. Two scenarios for non-maneuvering and maneuvering NETT and NGTT are simulated. The results demonstrate the effectiveness of the proposed algorithms. PMID:29444144

  19. Object-oriented design tools for supramolecular devices and biomedical nanotechnology.

    PubMed

    Lee, Stephen C; Bhalerao, Khaustaub; Ferrari, Mauro

    2004-05-01

    Nanotechnology provides multifunctional agents for in vivo use that increasingly blur the distinction between pharmaceuticals and medical devices. Realization of such therapeutic nanodevices requires multidisciplinary effort that is difficult for individual device developers to sustain, and identification of appropriate collaborations outside one's own field can itself be challenging. Further, as in vivo nanodevices become increasingly complex, their design will increasingly demand systems-level thinking. Systems engineering tools such as object-oriented analysis, object-oriented design (OOA/D) and unified modeling language (UML) are applicable to nanodevices built from biological components, help logically manage the knowledge needed to design them, and help identify useful collaborative relationships for device designers. We demonstrate the utility of these systems engineering tools by using them to reverse engineer an existing molecular device (the bacmid molecular cloning system), and illustrate how object-oriented approaches identify fungible components (objects) in nanodevices in a way that facilitates design of families of related devices, rather than single inventions. We also explore the utility of object-oriented approaches for design of another class of therapeutic nanodevices, vaccines. While they are useful for design of current nanodevices, the power of systems design tools for biomedical nanotechnology will become increasingly apparent as the complexity and sophistication of in vivo nanosystems increase. The nested, hierarchical nature of object-oriented approaches allows treatment of devices as objects in higher-order structures, and so will facilitate concatenation of multiple devices into higher-order, higher-function nanosystems.

  20. Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators

    NASA Astrophysics Data System (ADS)

    Nesarajah, Marco; Frey, Georg

    This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEG), and the validation of the model by means of a test bench. TEGs are capable of improving the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in the Modelica® language. With this model, any TEG can be described and simulated given the material properties and the physical dimensions. Now, this model has been extended by the surrounding components to a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived, and possible improvements based on design variations tested in the simulation model are proposed.
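
    The electrical side of a TEG model reduces, at matched load, to a simple expression for the harvested power. A minimal sketch under the standard constant-property assumption, with hypothetical parameter values (not from the paper's test bench):

```python
def teg_power_matched(alpha, delta_t, r_internal):
    """Maximum electrical power of a thermoelectric generator at matched
    load: P = (alpha * dT)^2 / (4 * R_int)."""
    v_oc = alpha * delta_t              # open-circuit (Seebeck) voltage
    return v_oc ** 2 / (4.0 * r_internal)

# hypothetical module: 0.05 V/K Seebeck coefficient, 100 K temperature
# difference, 2.5 ohm internal resistance
p = teg_power_matched(alpha=0.05, delta_t=100.0, r_internal=2.5)
```

    A component-oriented model such as the Modelica® one described above couples this electrical relation to thermal models of the heat source and cooling system, which is what determines the achievable temperature difference in practice.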

  1. An Evaluation Tool for CONUS-Scale Estimates of Components of the Water Balance

    NASA Astrophysics Data System (ADS)

    Saxe, S.; Hay, L.; Farmer, W. H.; Markstrom, S. L.; Kiang, J. E.

    2016-12-01

    Numerous research groups are independently developing data products to represent various components of the water balance (e.g. runoff, evapotranspiration, recharge, snow water equivalent, soil moisture, and climate) at the scale of the conterminous United States. These data products are derived from a range of sources, including direct measurement, remotely-sensed measurement, and statistical and deterministic model simulations. An evaluation tool is needed to compare these data products and the components of the water balance they contain in order to identify gaps in the understanding and representation of continental-scale hydrologic processes. An ideal tool will be an objective, universally agreed-upon framework to address questions related to closing the water balance. This type of generic, model-agnostic evaluation tool would facilitate collaboration amongst different hydrologic research groups and improve modeling capabilities with respect to continental-scale water resources. By adopting a comprehensive framework to consider hydrologic modeling in the context of a complete water balance, it is possible to identify weaknesses in process modeling, data product representation and regional hydrologic variation. As part of its National Water Census initiative, the U.S. Geological Survey is facilitating this dialogue to develop prototype evaluation tools.
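
    At its simplest, such an evaluation tool checks how well independently estimated components close the water balance. A minimal sketch of the closure residual, with hypothetical annual values (the actual tool would compare many data products per component, over space and time):

```python
def water_balance_residual(precip, et, runoff, recharge, storage_change):
    """Closure residual of a simple water balance,
    P - ET - R - G - dS; zero means the components close exactly."""
    return precip - et - runoff - recharge - storage_change

# hypothetical annual depths in mm, each from an independent data product
resid = water_balance_residual(precip=900.0, et=550.0, runoff=250.0,
                               recharge=60.0, storage_change=20.0)
```

    A nonzero residual flags a combination of data products whose component estimates are mutually inconsistent, pointing to gaps in process representation.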

  2. [Students' physical activity: an analysis according to Pender's health promotion model].

    PubMed

    Guedes, Nirla Gomes; Moreira, Rafaella Pessoa; Cavalcante, Tahissa Frota; de Araujo, Thelma Leite; Ximenes, Lorena Barbosa

    2009-12-01

    The objective of this study was to describe the everyday physical activity habits of students and analyze the practice of physical activity and its determinants, based on the first component of Pender's health promotion model. This cross-sectional study was performed from 2004 to 2005 with 79 students in a public school in Fortaleza, Ceará, Brazil. Data collection was performed by interviews and physical examinations. The data were analyzed according to the referred theoretical model. Most students (n=60) were physically active. Proportionally, adolescents were the most active (80.4%). Those with a sedentary lifestyle had higher rates of overweight and obesity (21.1%). Many students practiced outdoor physical activities, which did not require any physical infrastructure or good financial conditions. The results show that it is possible to associate the first component of Pender's health promotion model with the everyday lives of students in terms of physical activity practice.

  3. Vertical Scales of Turbulence at the Mount Wilson Observatory

    NASA Technical Reports Server (NTRS)

    Treuhaft, Robert N.; Lowe, Stephen T.; Bester, Manfred; Danchi, William C.; Townes, Charles H.

    1995-01-01

    The vertical scales of turbulence at the Mount Wilson Observatory are inferred from data from the University of California at Berkeley Infrared Spatial Interferometer (ISI), by modeling path length fluctuations observed in the interferometric paths to celestial objects and those in instrumental ground-based paths. The correlations between the stellar and ground-based path length fluctuations and the temporal statistics of those fluctuations are modeled on various timescales to constrain the vertical scales. A Kolmogorov-Taylor turbulence model with a finite outer scale was used to simulate ISI data. The simulation also included the white instrumental noise of the interferometer, aperture-filtering effects, and the data analysis algorithms. The simulations suggest that the path delay fluctuations observed in the 1992-1993 ISI data are largely consistent with being generated by refractivity fluctuations at two characteristic vertical scales: one extending to a height of 45 m above the ground, with a wind speed of about 1 m/s, and another at a much higher altitude, with a wind speed of about 10 m/s. The height of the lower layer is of the order of the dimensions of trees and other structures near the interferometer, which suggests that these objects, including elements of the interferometer, may play a role in generating the lower layer of turbulence. The modeling indicates that the high-altitude component contributes primarily to short-period (less than 10 s) fluctuations, while the lower component dominates the long-period (up to a few minutes) fluctuations. The lower component turbulent height, along with outer scales of the order of 10 m, suggests that the baseline dependence of long-term interferometric, atmospheric fluctuations should weaken for baselines greater than a few tens of meters. 
Simulations further show that there is the potential for improving the seeing or astrometric accuracy by about 30%-50% on average, if the path length fluctuations in the lower component are directly calibrated. Statistical and systematic effects induce an error of about 15 m in the estimate of the lower component turbulent altitude.
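The two-layer picture above can be sketched with a toy simulation (a minimal sketch, not the ISI analysis code): each layer's contribution to the path delay is modeled as an AR(1) random process whose correlation time is an assumed eddy scale divided by the layer's wind speed, so the slow low layer dominates long periods and the fast high layer dominates short periods. All numerical values (correlation times, amplitudes, sample counts) are illustrative assumptions.

```python
import math
import random
import statistics

def ar1_series(n, dt, tau, sigma, seed=0):
    """AR(1) path-delay fluctuation series: correlation time tau [s],
    stationary standard deviation sigma [m], time step dt [s]."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    drive = sigma * math.sqrt(1.0 - a * a)
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + drive * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation: near 1 for slowly varying series."""
    m = statistics.fmean(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

dt = 0.5  # s per sample
# Lower layer: ~45 m scale advected at ~1 m/s -> slow (~45 s) fluctuations.
slow = ar1_series(4000, dt, tau=45.0, sigma=1e-5, seed=1)
# Higher layer: similar scale advected at ~10 m/s -> ~10x faster fluctuations.
fast = ar1_series(4000, dt, tau=4.5, sigma=1e-5, seed=2)
total = [s + f for s, f in zip(slow, fast)]  # combined path delay [m]

# The slow layer persists over long periods; the fast layer decorrelates quickly.
print(lag1_autocorr(slow) > lag1_autocorr(fast))  # -> True
```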

  4. A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning

    NASA Astrophysics Data System (ADS)

    Basdekas, L.; Stewart, N.; Triana, E.

    2013-12-01

    Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system, coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool (or tools) capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo-informed or climate-change-informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models does not require parameterization, which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space.
The use of this framework will help CSU evaluate tradeoffs in a continually changing world.
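The core idea of "moving out of the inferior decision space" is Pareto dominance. A minimal sketch (not CSU's framework) that keeps only non-dominated candidate portfolios, assuming each candidate is scored on two objectives to be minimized; the (cost, shortage-risk) pairs are hypothetical:

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(solutions):
    """Keep only non-dominated objective vectors; the dominated
    (inferior) decision space is discarded."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (cost, shortage-risk) pairs for candidate water portfolios.
candidates = [(3, 9), (5, 4), (4, 6), (6, 3), (7, 7), (5, 5)]
front = pareto_filter(candidates)
print(front)  # -> [(3, 9), (5, 4), (4, 6), (6, 3)]
```

In practice an evolutionary algorithm generates and filters such candidates iteratively; this filter is only the dominance test at the heart of that process.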

  5. Infrared identification of internal overheating components inside an electric control cabinet by inverse heat transfer problem

    NASA Astrophysics Data System (ADS)

    Yang, Li; Wang, Ye; Liu, Huikai; Yan, Guanghui; Kou, Wei

    2014-11-01

    Overheating components inside an object, such as an electric control cabinet, a moving object, or a running machine, can easily lead to equipment failure or fire. In recent years, infrared remote sensing has been used to inspect the surface temperature of an object in order to identify overheating components inside it. Using infrared thermal imaging of surface temperature to identify internal overheating elements of an electric control cabinet therefore has important practical applications. In this paper, a test bench of an electric control cabinet was built and an experimental study was conducted on the inverse identification of internal overheating components using infrared thermal imaging. A heat transfer model of the electric control cabinet was built, and the temperature distribution of the cabinet with an internal overheating element was simulated using the finite volume method (FVM). The outer surface temperature of the cabinet was measured with an infrared thermal imager. Combining computer image processing with infrared temperature measurement, the surface temperature distribution of the cabinet was extracted, and the position and temperature of the internal overheating element were identified with an inverse heat transfer problem (IHTP) identification algorithm. The results show that for a single overheating element inside the cabinet, the identification errors for temperature and position were 2.11% and 5.32%; for multiple overheating elements, the identification errors were 3.28% and 15.63%. The feasibility and effectiveness of the IHTP method and the correctness of the FVM-based identification algorithm were validated.
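The inverse-problem logic can be illustrated with a deliberately simplified sketch: a toy forward model stands in for the paper's FVM simulation, and a grid search recovers the source position and strength that best match a surface temperature profile. The exponential-decay forward model and all numbers are hypothetical, chosen only to show the fit-the-forward-model structure of an IHTP.

```python
import math

def surface_temp(x, x0, q, length=0.2, t_amb=25.0):
    """Toy forward model: surface temperature rise decays exponentially
    with distance from the internal hot component at position x0
    (a stand-in for the paper's finite-volume simulation)."""
    return t_amb + q * math.exp(-abs(x - x0) / length)

def identify(xs, measured):
    """Grid-search inverse solution: pick the (x0, q) pair that
    minimizes the sum-of-squares misfit to the measured profile."""
    best = None
    for i in range(101):
        x0 = i / 100.0
        for j in range(1, 81):
            q = j * 0.5
            err = sum((surface_temp(x, x0, q) - m) ** 2
                      for x, m in zip(xs, measured))
            if best is None or err < best[0]:
                best = (err, x0, q)
    return best[1], best[2]

xs = [i / 20.0 for i in range(21)]                  # 21 sensing points on the panel
truth = [surface_temp(x, 0.62, 18.0) for x in xs]   # synthetic "measurement"
x0_hat, q_hat = identify(xs, truth)
print(x0_hat, q_hat)  # -> 0.62 18.0
```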

  6. Designers Workbench: Towards Real-Time Immersive Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuester, F; Duchaineau, M A; Hamann, B

    2001-10-03

    This paper introduces the DesignersWorkbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems has established a new backbone of modern industrial product development. However, traditionally a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The DesignersWorkbench aims at closing this technology or ''digital'' gap experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.

  7. Inlet Acoustic Data from a High Bypass Ratio Turbofan Rotor in an Internal Flow Component Test Facility

    NASA Technical Reports Server (NTRS)

    Bozak, Richard F.

    2017-01-01

    In February 2017, aerodynamic and acoustic testing was completed on a scale-model high bypass ratio turbofan rotor, R4, in an internal flow component test facility. The objective of testing was to determine the aerodynamic and acoustic impact of fan casing treatments designed to reduce noise. The baseline configuration consisted of the R4 rotor with a hardwall fan case. Data are presented for a baseline acoustic run with fan exit instrumentation removed to give a clean acoustic configuration.

  8. Optical methods for the optimization of system SWaP-C using aspheric components and advanced optical polymers

    NASA Astrophysics Data System (ADS)

    Zelazny, Amy; Benson, Robert; Deegan, John; Walsh, Ken; Schmidt, W. David; Howe, Russell

    2013-06-01

    We describe the benefits to camera system SWaP-C associated with the use of aspheric molded glasses and optical polymers in the design and manufacture of optical components and elements. Both camera objectives and display eyepieces, typical for night vision man-portable EO/IR systems, are explored. We discuss optical trade-offs, system performance, and cost reductions associated with this approach in both visible and non-visible wavebands, specifically NIR and LWIR. Example optical models are presented, studied, and traded using this approach.

  9. Hard X-Ray-emitting Black Hole Fed by Accretion of Low Angular Momentum Matter

    NASA Astrophysics Data System (ADS)

    Igumenshchev, Igor V.; Illarionov, Andrei F.; Abramowicz, Marek A.

    1999-05-01

    Observed spectra of active galactic nuclei and luminous X-ray binaries in our Galaxy suggest that both hot (~10^9 K) and cold (~10^6 K) plasma components exist close to the central accreting black hole. The hard X-ray component of the spectra is usually explained by Compton upscattering of optical/UV photons from optically thick cold plasma by hot electrons. Observations also indicate that some of these objects are quite efficient in converting gravitational energy of accreting matter into radiation. Existing theoretical models have difficulties in explaining the two plasma components and the high intensity of hard X-rays. Most of the models assume that the hot component emerges from the cold one because of some kind of instability, but no one offers a satisfactory physical explanation for this. Here we propose a solution to these difficulties that reverses what was imagined previously: in our model, the hot component forms first and afterward it cools down to form the cold component. In our model, the accretion flow initially has a small angular momentum, and thus it has a quasi-spherical geometry at large radii. Close to the black hole, the accreting matter is heated up in shocks that form because of the action of the centrifugal force. The hot postshock matter is very efficiently cooled down by Comptonization of low-energy photons and condenses into a thin and cool accretion disk. The thin disk emits the low-energy photons which cool the hot component. All the properties of our model, in particular the existence of hot and cold components, follow from an exact numerical solution of standard hydrodynamical equations--we postulate no unknown processes operating in the flow. In contrast to the recently discussed advection-dominated accretion flow, the particular type of accretion flow considered in this Letter is both very hot and quite radiatively efficient.

  10. Optimizing occupational exposure measurement strategies when estimating the log-scale arithmetic mean value--an example from the reinforced plastics industry.

    PubMed

    Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A

    2006-06-01

    When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
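The one-way result above follows directly from the variance of the estimated group mean, Var = sigma_B^2/k + sigma_W^2/(k*n), for k workers with n replicates each. A minimal numerical check under a fixed measurement budget, with hypothetical variance components:

```python
def var_mean(k, n, var_between, var_within):
    """Variance of the estimated group mean under the one-way
    random-effects ANOVA model: sigma_B^2/k + sigma_W^2/(k*n)."""
    return var_between / k + var_within / (k * n)

# Fixed budget of 24 measurements; variance components are hypothetical.
vb, vw = 0.40, 0.80
for k, n in [(4, 6), (8, 3), (12, 2)]:
    print(k, n, round(var_mean(k, n, vb, vw), 4))
# Under this model, spreading the budget over more workers with fewer
# replicates always reduces the variance: (12, 2) is the best allocation.
```

This is the one-way case only; with a shared temporal variance component, as the abstract notes, the optimum shifts depending on the relative sizes of the components.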

  11. NEXUS - Resilient Intelligent Middleware

    NASA Astrophysics Data System (ADS)

    Kaveh, N.; Hercock, R. Ghanea

    Service-oriented computing, a composition of distributed-object computing, component-based, and Web-based concepts, is becoming the widespread choice for developing dynamic heterogeneous software assets available as services across a network. One of the major strengths of service-oriented technologies is the high abstraction layer and large granularity level at which software assets are viewed compared to traditional object-oriented technologies. Collaboration through encapsulated and separately defined service interfaces creates a service-oriented environment, whereby multiple services can be linked together through their interfaces to compose a functional system. This approach enables better integration of legacy and non-legacy services, via wrapper interfaces, and allows for service composition at a more abstract level, especially in cases such as vertical market stacks. The heterogeneous nature of service-oriented technologies and the granularity of their software components make them a suitable computing model in the pervasive domain.

  12. Two arm robot path planning in a static environment using polytopes and string stretching. Thesis

    NASA Technical Reports Server (NTRS)

    Schima, Francis J., III

    1990-01-01

    The two arm robot path planning problem has been analyzed and reduced into simpler components. This thesis examines one component, in which two Puma-560 robot arms simultaneously hold a single object. The problem is to find a path between two points around obstacles which is relatively fast and minimizes the distance. The thesis involves creating a structure on which to build an advanced path planning algorithm that could ideally find the optimum path. An actual path planning method is implemented which is simple though effective in most common situations. Given the limits of computer technology, a 'good' path is currently found. Objects in the workspace are modeled with polytopes, which permit rapid collision detection while still providing a representation adequate for path planning.

  13. Hospital information system: reusability, designing, modelling, recommendations for implementing.

    PubMed

    Huet, B

    1998-01-01

    The aims of this paper are to specify some essential conditions for building reuse models for hospital information systems (HIS) and to present an application for hospital clinical laboratories. Reusability is a general trend in software; however, reuse can involve a greater or lesser part of the design, classes, and programs, so a project involving reusability must be precisely defined. The introduction surveys trends in software, the stakes of reuse models for HIS, and the special use case that a HIS constitutes. The three main parts of this paper are: 1) designing a reuse model (which objects are common to several information systems?); 2) a reuse model for hospital clinical laboratories (a gen-spec object model is presented for all laboratories: biochemistry, bacteriology, parasitology, pharmacology, ...); 3) recommendations for generating plug-compatible software components (a reuse model can be implemented as a framework; concrete factors that increase reusability are presented). In conclusion, reusability is a subtle exercise whose project scope must be carefully defined in advance.
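A gen-spec (generalization-specialization) reuse model is essentially an inheritance hierarchy: behaviour common to all laboratories lives in a generic class, and each laboratory adds only its specifics. A minimal illustrative sketch (the class and attribute names are invented, not the paper's model):

```python
class Analysis:
    """Generic ('gen') laboratory analysis: behaviour shared by every
    hospital laboratory, reusable across information systems."""
    def __init__(self, specimen_id):
        self.specimen_id = specimen_id
        self.results = {}

    def record(self, analyte, value):
        self.results[analyte] = value

    def report(self):
        return f"{self.kind()} report for {self.specimen_id}: {self.results}"

    def kind(self):
        return "generic"

class BiochemistryAnalysis(Analysis):
    """Specialization ('spec'): adds only what is biochemistry-specific."""
    def kind(self):
        return "biochemistry"

class BacteriologyAnalysis(Analysis):
    def kind(self):
        return "bacteriology"

a = BiochemistryAnalysis("S-1027")
a.record("glucose", 5.4)
print(a.report())  # -> biochemistry report for S-1027: {'glucose': 5.4}
```

Implemented as a framework, the generic layer stays fixed while each laboratory supplies a plug-compatible specialization, which is the reuse mechanism the paper recommends.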

  14. La Educacion Continua de Profesionales de la Salud--Un Modelo para su Desarrollo

    ERIC Educational Resources Information Center

    Stensland, Per G.

    1974-01-01

    The author suggests a framework for planning and evaluating continuing education, giving attention to the learner, his objectives, and the learning process; these components are discussed in determining the special characteristics of the continuing education of professional health workers, and a model program is presented. The article is in…

  15. A time-course analysis of effects of the steroidogenesis inhibitor ketoconazole on components of the hypothalamic-pituitary-gonadal axis of fathead minnows (Presentation)

    EPA Science Inventory

    The objective of this study was to evaluate temporal effects of the model steroidogenesis inhibitor ketoconazole (KTC) on aspects of reproductive endocrine function controlled by the hypothalamic-pituitary-gonadal (HPG) axis in the fathead minnow (Pimephales promelas). Ketoconazo...

  16. A Review and Conceptual Framework for Integrating Leadership into Clinical Practice

    ERIC Educational Resources Information Center

    Kutz, Matthew R.

    2012-01-01

    Context: The purpose of this review is to assess leadership education and practice in athletic training. Leadership is a critical component of athletic training and health care. Leadership research in athletic training is dramatically behind other health care professions. Objective: To develop a model for integrating leadership behavior and…

  17. Forest Management Under Uncertainty for Multiple Bird Population Objectives

    Treesearch

    Clinton T. Moore; W. Todd Plummer; Michael J. Conroy

    2005-01-01

    We advocate adaptive programs of decision making and monitoring for the management of forest birds when responses by populations to management, and particularly management trade-offs among populations, are uncertain. Models are necessary components of adaptive management. Under this approach, uncertainty about the behavior of a managed system is explicitly captured in...

  18. ALUMINUM BIOAVAILABILITY FROM DRINKING WATER IS VERY LOW AND IS NOT APPRECIABLY INFLUENCED BY STOMACH CONTENTS OR WATER HARDNESS. (R825357)

    EPA Science Inventory

    The objectives were to estimate aluminum (Al) oral bioavailability under conditions that model its consumption in drinking water, and to test the hypotheses that stomach contents and co-administration of the major components of hard water affect Al absorption. Rats received intra...

  19. A Time-course Analysis of Effects of the Steroidogenesis Inhibitor Ketoconazole on Components of the Hypothalamic-pituitary-gonadal Axis of Fathead Minnows

    EPA Science Inventory

    The objective of this study was to evaluate temporal effects of the model steroidogenesis inhibitor ketoconazole (KTC) on aspects of reproductive endocrine function controlled by the hypothalamic-pituitary-gonadal (HPG) axis in the fathead minnow (Pimephales promelas). Ketoconazo...

  20. A Time-course Analysis of Effects of the Steroidogenesis Inhibitor Ketoconazole on Components of the Hypothalamic-pituitary-gonadal Axis of Fathead Minnows

    EPA Science Inventory

    The objective of this study was to evaluate temporal effects of the model steroidogenesis inhibitor ketoconazole (KTC) on aspects of reproductive endocrine function controlled by the hypothalamic-pituitary-gonadal (HPG) axis in the fathead minnow (Pimephales promelas). Ketoconaz...

  1. Gravitational collapse and the vacuum energy

    NASA Astrophysics Data System (ADS)

    Campos, M.

    2014-03-01

    To explain the accelerated expansion of the universe, models with interacting dark components (dark energy and dark matter) have been considered recently in the literature. Generally, the dark energy component is physically interpreted as the vacuum energy of all the fields that fill the universe. As the other side of the same coin, the influence of the vacuum energy on gravitational collapse is of great interest. We study such a collapse adopting different parameterizations for the evolution of the vacuum energy. We discuss the homogeneous collapsing star fluid, which interacts with a vacuum energy component, using the stiff matter case as an example. We conclude this work with a discussion of the Cahill-McVittie mass for the collapsed object.

  2. The model of the optical-electronic control system of vehicles location at level crossing

    NASA Astrophysics Data System (ADS)

    Verezhinskaia, Ekaterina A.; Gorbachev, Aleksei A.; Maruev, Ivan A.; Shavrygina, Margarita A.

    2016-04-01

    A level crossing, where a railway line crosses a motor road at the same level, is one of the most dangerous sections of the road network. The collision of trains with vehicles at a level crossing is a serious type of road traffic accident. The purpose of this research is to develop a complex optical-electronic system for monitoring vehicle location in the danger zone of a level crossing. The system consists of registration blocks (each including a photodetector, lens, and infrared emitting diode), determinant devices, and a camera installed within the boundaries of the level crossing. The system detects objects (vehicles) by analysing the time an object spends moving opposite the registration block and the level of the signal reflected from the object. The paper presents a theoretical description and experimental research of the main principles of the system's operation. Experimental research on the system model with the selected optical-electronic components has confirmed the possibility of detecting metal objects at the required distance (0.5 - 2 m) under different values of background illuminance.

  3. Extraction of the aortic and pulmonary components of the second heart sound using a nonlinear transient chirp signal model.

    PubMed

    Xu, J; Durand, L G; Pibarot, P

    2001-03-01

    The objective of this paper is to adapt and validate a nonlinear transient chirp signal modeling approach for the analysis and synthesis of overlapping aortic (A2) and pulmonary (P2) components of the second heart sound (S2). The approach is based on the time-frequency representation of multicomponent signals for estimating and reconstructing the instantaneous phase and amplitude functions of each component. To evaluate the accuracy of the approach, a simulated S2 with A2 and P2 components having different overlapping intervals (5-30 ms) was synthesized. The simulation results show that the technique is very effective for extracting the two components, even in the presence of noise (-15 dB). The normalized root-mean-squared error between the original A2 and P2 components and their reconstructed versions varied between 1% and 6%, proportionally to the duration of the overlapping interval, and it increased by less than 2% in the presence of noise. The validated technique was then applied to S2 components recorded in pigs under normal or high pulmonary artery pressures. The results show that this approach can successfully isolate and extract overlapping A2 and P2 components from successive S2 recordings obtained from different heartbeats of the same animal as well as from different animals.
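The synthesis side of such a model can be sketched simply: each S2 component is a transient chirp, i.e. a decaying amplitude envelope times the cosine of a phase whose instantaneous frequency sweeps downward, and the two components overlap in time. The envelope shape, frequencies, and timings below are illustrative assumptions, not the paper's estimated parameters.

```python
import math

def transient_chirp(t, t0, amp, f0, f1, dur):
    """One S2 component as a transient chirp: exponentially decaying
    envelope with instantaneous frequency sweeping f0 -> f1 over dur
    seconds after onset t0. All parameter values are illustrative."""
    if t < t0:
        return 0.0
    tau = t - t0
    envelope = amp * math.exp(-4.0 * tau / dur)
    # phase = 2*pi * integral of the linearly swept instantaneous frequency
    phase = 2 * math.pi * (f0 * tau + 0.5 * (f1 - f0) / dur * tau ** 2)
    return envelope * math.cos(phase)

fs = 2000                           # Hz sampling rate
ts = [i / fs for i in range(120)]   # 60 ms analysis window
# A2 starts first; P2 overlaps it ~20 ms later (a typical A2-P2 split).
s2 = [transient_chirp(t, 0.000, 1.0, 90.0, 50.0, 0.05) +
      transient_chirp(t, 0.020, 0.7, 80.0, 45.0, 0.05) for t in ts]
print(len(s2))  # -> 120
```

The extraction step in the paper runs this model in reverse, estimating each component's instantaneous phase and amplitude from a time-frequency representation; that reconstruction is beyond this sketch.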

  4. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Kurth, R. E.; Ho, H.

    1991-01-01

    The objective of this program is to develop generic load models with multiple levels of progressive sophistication to simulate the composite (combined) load spectra that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, liquid oxygen posts, and system ducting. The first approach will consist of using state-of-the-art probabilistic methods to describe the individual loading conditions and combinations of these loading conditions to synthesize the composite load spectra simulation. The second approach will consist of developing coupled models for composite load spectra simulation which combine deterministic models for dynamic, acoustic, high-pressure, high-rotational-speed, and other loads using statistically varying coefficients. These coefficients will then be determined using advanced probabilistic simulation methods, with and without strategically selected experimental data.
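The second approach, deterministic component loads scaled by statistically varying coefficients, can be sketched as a small Monte Carlo combination. The nominal load values, coefficient spreads, and units below are hypothetical placeholders, not SSME data:

```python
import random
import statistics

def composite_load(rng):
    """One Monte Carlo draw of a composite load: deterministic nominal
    component loads scaled by statistically varying coefficients
    (all nominal values and spreads are hypothetical)."""
    dynamic  = 120.0 * rng.gauss(1.0, 0.10)   # kN, dynamic/vibratory load
    acoustic =  15.0 * rng.gauss(1.0, 0.20)   # kN, acoustic load
    pressure = 300.0 * rng.gauss(1.0, 0.05)   # kN, high-pressure load
    return dynamic + acoustic + pressure

rng = random.Random(42)
draws = [composite_load(rng) for _ in range(5000)]
mean = statistics.fmean(draws)
p99 = sorted(draws)[int(0.99 * len(draws))]  # upper-tail design value
print(round(mean, 1), round(p99, 1))
```

The distribution of draws, rather than a single worst-case sum, is what a probabilistic load spectrum provides to downstream structural analysis.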

  5. An Object-Oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2009-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within the NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc., that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results as should be the case.
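The benefit of the object-oriented restructuring is that each engine component encapsulates its own sizing and weight logic behind a common interface, and the engine model just aggregates them. A minimal sketch in that spirit (the class names and weight formulas are invented for illustration, not WATE++'s actual correlations):

```python
class Component:
    """Engine component with a common weight interface, in the spirit of
    the object-oriented WATE++ redesign. Formulas here are illustrative."""
    def weight(self):
        raise NotImplementedError

class Fan(Component):
    def __init__(self, airflow_kg_s, specific_weight=1.5):
        self.airflow = airflow_kg_s
        self.k = specific_weight  # kg per (kg/s) of airflow, assumed

    def weight(self):
        return self.k * self.airflow

class Duct(Component):
    def __init__(self, length_m, kg_per_m=40.0):
        self.length, self.k = length_m, kg_per_m

    def weight(self):
        return self.k * self.length

class Engine:
    """Aggregates components; cycle data (airflows, temperatures) would be
    supplied by the thermodynamic cycle model in a real framework."""
    def __init__(self, components):
        self.components = components

    def total_weight(self):
        return sum(c.weight() for c in self.components)

engine = Engine([Fan(airflow_kg_s=1400.0), Duct(length_m=3.0)])
print(engine.total_weight())  # -> 2220.0
```

New component types plug in by subclassing `Component`, which is the extensibility gain the paper attributes to the conversion.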

  6. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    PubMed Central

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292

  7. An integrated system for rainfall induced shallow landslides modeling

    NASA Astrophysics Data System (ADS)

    Formetta, Giuseppe; Capparelli, Giovanna; Rigon, Riccardo; Versace, Pasquale

    2014-05-01

    Rainfall induced shallow landslides (RISL) cause significant damage, including loss of life and property. Predicting susceptible locations for RISL is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Two main approaches are usually used to accomplish this task: statistical models or physically based models. In this work an open source (OS), 3-D, fully distributed hydrological model was integrated into an OS modeling framework (Object Modeling System). The chain is closed by linking the system to a component for safety factor computation under the infinite slope approximation, able to take into account layered soils and the contribution of suction to hillslope stability. The model composition was tested on a case study in Calabria (Italy) in order to simulate the triggering of a landslide that occurred in the Province of Cosenza. The integration in OMS allows the use of other components, such as a GIS to manage input-output processes and automatic calibration algorithms to estimate model parameters. Finally, model performance was quantified by comparing modelled and observed trigger times. This research is supported by the Ambito/Settore AMBIENTE E SICUREZZA (PON01_01503) project.
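The infinite-slope safety factor mentioned above is a one-line formula: FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] / [gamma*z*sin(beta)*cos(beta)], where negative pore pressure (suction) raises stability. A minimal sketch of such a component, assuming hypothetical soil parameters and not the project's actual code:

```python
import math

def factor_of_safety(c_eff, phi_deg, gamma, depth, slope_deg, pore_pressure):
    """Infinite-slope factor of safety. Negative pore_pressure (suction)
    increases the effective normal stress and hence stability.
    Units: stresses in kPa, gamma in kN/m^3, depth in m, angles in deg."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    sigma_n = gamma * depth * math.cos(beta) ** 2          # normal stress
    tau = gamma * depth * math.sin(beta) * math.cos(beta)  # driving shear
    return (c_eff + (sigma_n - pore_pressure) * math.tan(phi)) / tau

# Hypothetical soil: c' = 5 kPa, phi' = 32 deg, gamma = 18 kN/m^3,
# failure surface 1.5 m deep on a 35 deg slope.
fs_dry = factor_of_safety(5.0, 32.0, 18.0, 1.5, 35.0, pore_pressure=-10.0)  # suction
fs_wet = factor_of_safety(5.0, 32.0, 18.0, 1.5, 35.0, pore_pressure=8.0)    # after rain
print(fs_dry > 1.0 > fs_wet)  # -> True: rainfall infiltration triggers failure
```

In the integrated system the hydrological model supplies the time-varying pore pressure, so FS dropping below 1 marks the simulated trigger time.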

  8. Use of Annotations for Component and Framework Interoperability

    NASA Astrophysics Data System (ADS)

    David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.

    2009-12-01

    The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0 framework/model integration previously accomplished using API calls is now achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively such as implicit multithreading, and auto-documenting capabilities while achieving a significant reduction in the size of the model source code. Using a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to framework by the use of specific APIs and/or data types they can more easily be reused both within the framework as well as outside of it. To study the effectiveness of an annotation based framework approach with other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. 
In a next step, the PRMS model was implemented in OMS 3.0 and is currently being implemented for water supply forecasting in the western United States at the USDA NRCS National Water and Climate Center. PRMS is a component based modular precipitation-runoff model developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow and general basin hydrology. The new OMS 3.0 PRMS model source code is more concise and flexible as a result of using the new framework’s annotation based approach. The fully annotated components are now providing information directly for (i) model assembly and building, (ii) dataflow analysis for implicit multithreading, (iii) automated and comprehensive model documentation of component dependencies, physical data properties, (iv) automated model and component testing, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Experience to date has demonstrated the multi-purpose value of using annotations. Annotations are also a feasible and practical method to enable interoperability among models and modeling frameworks. As a prototype example, model code annotations were used to generate binding and mediation code to allow the use of OMS 3.0 model components within the OpenMI context.
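OMS 3.0's annotations are a Java feature, but the idea, attaching declarative metadata to component members so the framework can harvest it by reflection instead of requiring API calls, can be mimicked in Python with decorators. A sketch under that assumption (the `role` decorator and the component class are invented for illustration, not OMS's API):

```python
def role(name, **meta):
    """Attach OMS-style metadata to a method non-invasively.
    (OMS 3.0 itself uses Java annotations such as @In/@Out; this Python
    decorator only mimics the idea.)"""
    def tag(fn):
        fn._role = name
        fn._meta = meta
        return fn
    return tag

class MonthlyWaterBalance:
    @role("in", unit="mm", description="monthly precipitation")
    def precip(self):
        pass

    @role("out", unit="mm", description="simulated runoff")
    def runoff(self):
        pass

def document(component_cls):
    """A framework can harvest the metadata by reflection, e.g. to
    auto-generate component documentation or wire dataflow."""
    rows = []
    for name in dir(component_cls):
        member = getattr(component_cls, name)
        if callable(member) and hasattr(member, "_role"):
            rows.append((member._role, name, member._meta.get("unit")))
    return sorted(rows)

print(document(MonthlyWaterBalance))
# -> [('in', 'precip', 'mm'), ('out', 'runoff', 'mm')]
```

The component class itself imports nothing from a framework, which is the non-invasiveness property the study above measures.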

  9. Common world model for unmanned systems

    NASA Astrophysics Data System (ADS)

    Dean, Robert Michael S.

    2013-05-01

    The Robotic Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state of the art by representing the world using metric, semantic, and symbolic information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric algorithms, symbolic cognitive algorithms, and new computational nodes formed by the combination of these disciplines. The Common World Model must understand how these objects relate to each other. Our world model includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and histories, we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model includes models of how aspects of the environment behave, which enable prediction of future world states. To manage complexity, we adopted a phased implementation approach to the world model. We discuss the design of "Phase 1" of this world model and its interfaces by tracing perception data through the system from the source to the meta-cognitive layers provided by ACT-R and SS-RICS. We close with lessons learned from implementation and how the design relates to Open Architecture.
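    As a rough sketch of what "joining layers of information to define objects" can look like in code (the class below is a generic Python illustration; the fields and names are invented, not the RCTA world-model API):

```python
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    """One entity in a layered world model: a metric pose, a semantic
    label, and symbolic relations to other objects (illustrative only)."""
    oid: int
    pose: tuple                 # metric layer: (x, y, z) in meters
    label: str                  # semantic layer: e.g. "door", "vehicle"
    relations: dict = field(default_factory=dict)   # symbolic layer

    def relate(self, predicate, other):
        """Record a symbolic relation, e.g. door part_of room."""
        self.relations.setdefault(predicate, set()).add(other.oid)

door = WorldObject(1, (2.0, 0.5, 0.0), "door")
room = WorldObject(2, (0.0, 0.0, 0.0), "room")
door.relate("part_of", room)
print(door.relations)  # {'part_of': {2}}
```

    Geometric algorithms can consume `pose`, cognitive layers can consume `label` and `relations`, and both refer to the same object record.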

  10. X-rays from Eta Carinae

    NASA Technical Reports Server (NTRS)

    Chlebowski, T.; Seward, F. D.; Swank, J.; Szymkowiak, A.

    1984-01-01

    X-ray observations of Eta Car obtained with the high-resolution imager and solid-state spectrometer of the Einstein observatory are reported and interpreted in terms of a two-shell model. A soft component with a temperature of 5 million K is located in the expanding outer shell, and the hard core component with a temperature of 80 million K is attributed to the interaction of a high-velocity stellar wind from the massive central object with the inner edge of a dust shell. Model calculations based on comparison with optical and IR data permit estimation of the mass of the outer shell (0.004 solar mass), the mass of the dust shell (3 solar masses), and the total shell expansion energy (less than 2 × 10^49 ergs).

  11. Dynamic simulation of a reverse Brayton refrigerator

    NASA Astrophysics Data System (ADS)

    Peng, N.; Lei, L. L.; Xiong, L. Y.; Tang, J. C.; Dong, B.; Liu, L. Q.

    2014-01-01

    A test refrigerator based on the modified Reverse Brayton cycle has been developed in the Chinese Academy of Sciences recently. To study the behaviors of this test refrigerator, a dynamic simulation has been carried out. The numerical model comprises the typical components of the test refrigerator: compressor, valves, heat exchangers, expander and heater. This simulator is based on an object-oriented approach, and each component is represented by a set of differential and algebraic equations. The control system of the test refrigerator is also simulated, which can be used to optimize the control strategies. This paper describes all the models and shows the simulation results. Comparisons between simulation results and experimental data are also presented. Experimental validation on the test refrigerator gives satisfactory results.

  12. The Scenario-Based Engineering Process (SEP): a user-centered approach for the development of health care systems.

    PubMed

    Harbison, K; Kelly, J; Burnell, L; Silva, J

    1995-01-01

    The Scenario-based Engineering Process (SEP) is a user-focused methodology for large and complex system design. This process supports new application development from requirements analysis with domain models to component selection, design and modification, implementation, integration, and archival placement. It is built upon object-oriented methodologies, domain modeling strategies, and scenario-based techniques to provide an analysis process for mapping application requirements to available components. We are using SEP in the health care applications that we are developing. The process has already achieved success in the manufacturing and military domains and is being adopted by many organizations. SEP should prove viable in any domain containing scenarios that can be decomposed into tasks.

  13. Microglia Morphological Categorization in a Rat Model of Neuroinflammation by Hierarchical Cluster and Principal Components Analysis.

    PubMed

    Fernández-Arjona, María Del Mar; Grondona, Jesús M; Granados-Durán, Pablo; Fernández-Llebrez, Pedro; López-Ávalos, María D

    2017-01-01

    It is known that microglia morphology and function are closely related, but only a few studies have objectively described different morphological subtypes. To address this issue, morphological parameters of microglial cells were analyzed in a rat model of aseptic neuroinflammation. After the injection of a single dose of the enzyme neuraminidase (NA) within the lateral ventricle (LV), an acute inflammatory process occurs. Sections from NA-injected animals and sham controls were immunolabeled with the microglial marker IBA1, which highlights ramifications and features of the cell shape. Using images obtained by section scanning, individual microglial cells were sampled from various regions (septofimbrial nucleus, hippocampus and hypothalamus) at different times post-injection (2, 4 and 12 h). Each cell yielded a set of 15 morphological parameters by means of image analysis software. Five initial parameters (including fractal measures) were statistically different in cells from NA-injected rats (most of them IL-1β positive, i.e., in the M1 state) compared to those from control animals (none of them IL-1β positive, i.e., in the surveillant state). However, additional multimodal parameters proved more suitable for hierarchical cluster analysis (HCA). This method yielded a classification of the microglia population into four clusters. Furthermore, a linear discriminant analysis (LDA) suggested three specific parameters to objectively classify any microglia by a decision tree. In addition, a principal components analysis (PCA) revealed two additional valuable variables that allowed microglia to be further classified into a total of eight sub-clusters or types. The spatio-temporal distribution of these different morphotypes in our rat inflammation model made it possible to relate specific morphotypes to microglial activation status and brain location. An objective method for microglia classification based on morphological parameters is proposed.
Main points: (i) microglia undergo a quantifiable morphological change upon neuraminidase-induced inflammation; (ii) hierarchical cluster and principal components analyses allow morphological classification of microglia; (iii) the brain location of microglia is a relevant factor.
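    The analysis pipeline described above (morphological parameters per cell, then hierarchical clustering) can be sketched in miniature. The following pure-Python single-linkage agglomerative clustering uses made-up two-parameter "cells" and is only an illustration of the method, not the study's actual code or data:

```python
import math

def single_linkage(points, n_clusters):
    """Agglomerative clustering: repeatedly merge the two clusters whose
    closest members are nearest (single linkage), until n_clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)      # merge the closest pair
    return clusters

# two invented morphological parameters per cell (e.g. area, fractal dim.)
cells = [(1.0, 1.1), (1.2, 0.9), (5.0, 5.2), (5.1, 4.8)]
print(sorted(map(sorted, single_linkage(cells, 2))))
# [[0, 1], [2, 3]]
```

    In the study, each cell contributed 15 parameters rather than 2, and the cluster count was chosen from the dendrogram rather than fixed in advance.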

  14. Microglia Morphological Categorization in a Rat Model of Neuroinflammation by Hierarchical Cluster and Principal Components Analysis

    PubMed Central

    Fernández-Arjona, María del Mar; Grondona, Jesús M.; Granados-Durán, Pablo; Fernández-Llebrez, Pedro; López-Ávalos, María D.

    2017-01-01

    It is known that microglia morphology and function are closely related, but only a few studies have objectively described different morphological subtypes. To address this issue, morphological parameters of microglial cells were analyzed in a rat model of aseptic neuroinflammation. After the injection of a single dose of the enzyme neuraminidase (NA) within the lateral ventricle (LV), an acute inflammatory process occurs. Sections from NA-injected animals and sham controls were immunolabeled with the microglial marker IBA1, which highlights ramifications and features of the cell shape. Using images obtained by section scanning, individual microglial cells were sampled from various regions (septofimbrial nucleus, hippocampus and hypothalamus) at different times post-injection (2, 4 and 12 h). Each cell yielded a set of 15 morphological parameters by means of image analysis software. Five initial parameters (including fractal measures) were statistically different in cells from NA-injected rats (most of them IL-1β positive, i.e., in the M1 state) compared to those from control animals (none of them IL-1β positive, i.e., in the surveillant state). However, additional multimodal parameters proved more suitable for hierarchical cluster analysis (HCA). This method yielded a classification of the microglia population into four clusters. Furthermore, a linear discriminant analysis (LDA) suggested three specific parameters to objectively classify any microglia by a decision tree. In addition, a principal components analysis (PCA) revealed two additional valuable variables that allowed microglia to be further classified into a total of eight sub-clusters or types. The spatio-temporal distribution of these different morphotypes in our rat inflammation model made it possible to relate specific morphotypes to microglial activation status and brain location. An objective method for microglia classification based on morphological parameters is proposed.
Main points: (i) microglia undergo a quantifiable morphological change upon neuraminidase-induced inflammation; (ii) hierarchical cluster and principal components analyses allow morphological classification of microglia; (iii) the brain location of microglia is a relevant factor. PMID:28848398

  15. GFEChutes Lo-Fi

    NASA Technical Reports Server (NTRS)

    Gist, Emily; Turner, Gary; Shelton, Robert; Vautier, Mana; Shaikh, Ashraf

    2013-01-01

    NASA needed to provide a software model of a parachute system for a manned re-entry vehicle. NASA has parachute codes, e.g., the Descent Simulation System (DSS), that date back to the Apollo Program. Since the space shuttle did not rely on parachutes as its primary descent control mechanism, DSS has not been maintained or incorporated into modern simulation architectures such as Osiris and Antares, which are used for new mission simulations. GFEChutes Lo-Fi is an object-oriented implementation of conventional parachute codes designed for use in modern simulation environments. The GFE (Government Furnished Equipment), low-fidelity (Lo-Fi) parachute model (GFEChutes Lo-Fi) is a software package capable of modeling the effects of multiple parachutes, deployed concurrently and/or sequentially, on a vehicle during the subsonic phase of reentry into a planetary atmosphere. The term "low-fidelity" distinguishes models that represent the parachutes as simple forces acting on the vehicle, as opposed to independent aerodynamic bodies. GFEChutes Lo-Fi was created from these existing models to be clean, modular, certified as NASA Class C software, and portable, or "plug and play." The GFE Lo-Fi Chutes Model provides basic modeling capability of a sequential series of parachute activities. Actions include deploying the parachute, changing the reefing on the parachute, and cutting away the parachute. Multiple chutes can be deployed at any given time, but all chutes in that case are assumed to behave as individually isolated chutes; there is no modeling of any interactions between deployed chutes. Drag characteristics of a deployed chute are based on a coefficient of drag, the face area of the chute, and the local dynamic pressure only. The orientation of the chute is approximately modeled for purposes of obtaining torques on the vehicle, but the dynamic state of the chute as a separate entity is not integrated; the treatment is simply an approximation.
The innovation in GFEChutes Lo-Fi is an object design that closely follows the mechanical characteristics and structure of a physical system of parachutes and their deployment mechanisms. Software objects represent the components of the system, and an object hierarchy allows a progression from general component outlines to specific implementations. Chutes beyond the baseline deceleration sequence of drogues and mains still had to be simulated, and the hierarchy accommodates them. The major innovation in GFEChutes Lo-Fi is thus the software design and architecture.

  16. Forging of metallic nano-objects for the fabrication of submicron-size components

    NASA Astrophysics Data System (ADS)

    Rösler, J.; Mukherji, D.; Schock, K.; Kleindiek, S.

    2007-03-01

    In recent years, nanoscale fabrication has developed considerably, but the fabrication of free-standing nanosize components is still a great challenge. The fabrication of metallic nanocomponents utilizing three basic steps is demonstrated here. First, metallic alloys are used as factories to produce a metallic raw stock of nano-objects/nanoparticles in large numbers. These objects are then isolated from the powder containing thousands of such objects inside a scanning electron microscope using manipulators, and placed on a micro-anvil or a die. Finally, the shape of the individual nano-object is changed by nanoforging using a microhammer. In this way free-standing, high-strength, metallic nano-objects may be shaped into components with dimensions in the 100 nm range. By assembling such nanocomponents, high-performance microsystems can be fabricated, which are truly in the micrometre scale (the size ratio of a system to its component is typically 10:1).

  17. Testing the Paradigm that Ultra-Luminous X-Ray Sources as a Class Represent Accreting Intermediate-Mass Black Holes

    NASA Technical Reports Server (NTRS)

    Berghea, C. T.; Weaver, K. A.; Colbert, E. J. M.; Roberts, T. P.

    2008-01-01

    To test the idea that ultraluminous X-ray sources (ULXs) in external galaxies represent a class of accreting Intermediate-Mass Black Holes (IMBHs), we have undertaken a program to identify ULXs and a lower luminosity X-ray comparison sample with the highest quality data in the Chandra archive. We establish as a general property of ULXs that the most X-ray luminous objects possess the flattest X-ray spectra (in the Chandra bandpass). No prior sample studies have established the general hardening of ULX spectra with luminosity. This hardening occurs at the highest luminosities (absorbed luminosity ≥ 5 × 10^39 erg/s) and is in line with recent models arguing that ULXs are actually stellar-mass black holes. From spectral modeling, we show that the evidence originally taken to mean that ULXs are IMBHs - i.e., the "simple IMBH model" - is nowhere near as compelling when a large sample of ULXs is looked at properly. During the last couple of years, XMM-Newton spectroscopy of ULXs has to a large extent begun to negate the simple IMBH model based on fewer objects. We confirm and expand these results, which validates the XMM-Newton work in a broader sense with independent X-ray data. We find (1) that cool disk components are present with roughly equal probability and total flux fraction for any given ULX, regardless of luminosity, and (2) that cool disk components extend below the standard ULX luminosity cutoff of 10^39 erg/s, down to our sample limit of 10^38.3 erg/s. The fact that cool disk components are not correlated with luminosity damages the argument that cool disks indicate IMBHs in ULXs, for which strong statistical support was never found.

  18. Testing the Paradigm that Ultraluminous X-Ray Sources as a Class Represent Accreting Intermediate-Mass Black Holes

    NASA Astrophysics Data System (ADS)

    Berghea, C. T.; Weaver, K. A.; Colbert, E. J. M.; Roberts, T. P.

    2008-11-01

    To test the idea that ultraluminous X-ray sources (ULXs) in external galaxies represent a class of accreting intermediate-mass black holes (IMBHs), we have undertaken a program to identify ULXs and a lower luminosity X-ray comparison sample with the highest quality data in the Chandra archive. We establish as a general property of ULXs that the most X-ray-luminous objects possess the flattest X-ray spectra (in the Chandra bandpass). No prior sample studies have established the general hardening of ULX spectra with luminosity. This hardening occurs at the highest luminosities (absorbed luminosity ≥ 5 × 10^39 erg s^-1) and is in line with recent models arguing that ULXs are actually stellar mass black holes. From spectral modeling, we show that the evidence originally taken to mean that ULXs are IMBHs—i.e., the "simple IMBH model"—is nowhere near as compelling when a large sample of ULXs is looked at properly. During the last couple of years, XMM-Newton spectroscopy of ULXs has to a large extent begun to negate the simple IMBH model based on fewer objects. We confirm and expand these results, which validates the XMM-Newton work in a broader sense with independent X-ray data. We find that (1) cool-disk components are present with roughly equal probability and total flux fraction for any given ULX, regardless of luminosity, and (2) cool-disk components extend below the standard ULX luminosity cutoff of 10^39 erg s^-1, down to our sample limit of 10^38.3 erg s^-1. The fact that cool-disk components are not correlated with luminosity damages the argument that cool disks indicate IMBHs in ULXs, for which strong statistical support was never found.

  19. A component-based software environment for visualizing large macromolecular assemblies.

    PubMed

    Sanner, Michel F

    2005-03-01

    The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.

  20. Kaiser Permanente-Sandia National Health Care Model: Phase 1 prototype final report. Part 2 -- Domain analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, D.; Yoshimura, A.; Butler, D.

    This report describes the results of a Cooperative Research and Development Agreement between Sandia National Laboratories and Kaiser Permanente Southern California to develop a prototype computer model of Kaiser Permanente's health care delivery system. As a discrete event simulation, SimHCO models, for each of 100,000 patients, the progression of disease, individual resource usage, and patient choices in a competitive environment. SimHCO is implemented in the object-oriented programming language C², stressing reusable knowledge and reusable software components. The versioned implementation of SimHCO showed that the object-oriented framework allows the program to grow in complexity in an incremental way. Furthermore, timing calculations showed that SimHCO runs in a reasonable time on typical workstations, and that a second phase model will scale proportionally and run within the system constraints of contemporary computer technology.
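    A discrete event simulation of the kind described above reduces, at its core, to a time-ordered event queue. The sketch below is a generic Python illustration (the event labels and the follow-up rule are invented for the example, not SimHCO logic or Kaiser Permanente data):

```python
import heapq

def simulate(events, horizon):
    """Process (time, label) events in time order up to `horizon`.
    A 'visit' event schedules a follow-up 30 time units later."""
    queue = list(events)
    heapq.heapify(queue)                      # min-heap ordered by time
    log = []
    while queue and queue[0][0] <= horizon:
        t, label = heapq.heappop(queue)
        log.append((t, label))
        if label == "visit":
            heapq.heappush(queue, (t + 30, "follow_up"))
    return log

log = simulate([(0, "visit"), (10, "visit")], horizon=45)
print(log)
# [(0, 'visit'), (10, 'visit'), (30, 'follow_up'), (40, 'follow_up')]
```

    A full model like SimHCO attaches state (disease stage, resource usage, plan choice) to each patient and lets event handlers schedule further events, but the queue-driven control loop is the same.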

  1. Designing Class Methods from Dataflow Diagrams

    NASA Astrophysics Data System (ADS)

    Shoval, Peretz; Kabeli-Shani, Judith

    A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support user tasks. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.
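    A minimal sketch of the decomposition FOOM prescribes, with invented class and method names (an illustration of the idea, not FOOM tooling output): a transaction is split into basic methods, an application-specific method, and a main control method that expresses the process logic.

```python
class Order:
    def __init__(self):
        self.items = []

    # basic methods: generic create/read-style operations
    def add_item(self, name, price):
        self.items.append((name, price))

    def get_total(self):
        return sum(price for _, price in self.items)

    # application-specific method: domain rule for this system
    def apply_discount(self, total):
        return total * 0.9 if total > 100 else total

    # main transaction (control) method: the process logic of the
    # transaction, expressed as messages to the other methods
    def checkout(self):
        return self.apply_discount(self.get_total())

o = Order()
o.add_item("widget", 80.0)
o.add_item("gadget", 40.0)
print(o.checkout())  # 108.0
```

    In FOOM the same roles would typically be spread over several classes, with messages between them carrying the transaction's process logic.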

  2. Flexible Environments for Grand-Challenge Simulation in Climate Science

    NASA Astrophysics Data System (ADS)

    Pierrehumbert, R.; Tobis, M.; Lin, J.; Dieterich, C.; Caballero, R.

    2004-12-01

    Current climate models are monolithic codes, generally in Fortran, aimed at high-performance simulation of the modern climate. Though they adequately serve their designated purpose, they present major barriers to application in other problems. Tailoring them to paleoclimate or planetary simulations, for instance, takes months of work. Theoretical studies, where one may want to remove selected processes or break feedback loops, are similarly hindered. Further, current climate models are of little value in education, since the implementation of textbook concepts and equations in the code is obscured by technical detail. The Climate Systems Center at the University of Chicago seeks to overcome these limitations by bringing modern object-oriented design into the business of climate modeling. Our ultimate goal is to produce an end-to-end modeling environment capable of configuring anything from a simple single-column radiative-convective model to a full 3-D coupled climate model using a uniform, flexible interface. Technically, the modeling environment is implemented as a Python-based software component toolkit: key number-crunching procedures are implemented as discrete, compiled-language components 'glued' together and coordinated by Python, combining the high performance of compiled languages with the flexibility and extensibility of Python. We are incrementally working towards this final objective following a series of distinct, complementary lines.
We will present an overview of these activities, including PyOM, a Python-based finite-difference ocean model allowing run-time selection of different Arakawa grids and physical parameterizations; CliMT, an atmospheric modeling toolkit providing a library of 'legacy' radiative, convective, and dynamical modules which can be knitted into dynamical models; and PyCCSM, a version of NCAR's Community Climate System Model in which the coupler and run-control architecture are re-implemented in Python, augmenting its flexibility and adaptability.
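    Run-time selection of physical parameterizations, as described for PyOM, typically amounts to a registry mapping scheme names to implementations. The Python sketch below is a generic illustration with invented scheme names and a toy one-variable model, not PyOM code:

```python
SCHEMES = {}

def scheme(name):
    """Decorator: register a parameterization under a run-time name."""
    def register(fn):
        SCHEMES[name] = fn
        return fn
    return register

@scheme("linear")
def linear_drag(u, k=0.1):
    return -k * u

@scheme("quadratic")
def quadratic_drag(u, k=0.1):
    return -k * u * abs(u)

def step(u, dt, drag="linear"):
    """One explicit time step; the drag scheme is chosen at run time."""
    return u + dt * SCHEMES[drag](u)

print(step(2.0, 0.5, drag="linear"))     # 1.9
print(step(2.0, 0.5, drag="quadratic"))  # 1.8
```

    The dynamical core calls `step` without knowing which scheme is active, so swapping parameterizations requires no changes to the number-crunching code.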

  3. Biomechanics of forearm rotation: force and efficiency of pronator teres.

    PubMed

    Ibáñez-Gimeno, Pere; Galtés, Ignasi; Jordana, Xavier; Malgosa, Assumpció; Manyosa, Joan

    2014-01-01

    Biomechanical models are useful to assess the effect of muscular forces on bone structure. Using skeletal remains, we analyze pronator teres rotational efficiency and its force components throughout the entire flexion-extension and pronation-supination ranges by means of a new biomechanical model and 3D imaging techniques, and we explore the relationship between these parameters and skeletal structure. The results show that maximal efficiency is highest in full elbow flexion and is close to the forearm neutral position for each elbow angle. The vertical component of pronator teres force is the largest of all components and is greater in pronation and elbow extension. The radial component becomes negative in pronation and reaches lower values as the elbow flexes. Both components could enhance radial curvature, especially in pronation. The model also enables calculation of efficiency and force components while simulating changes in osteometric parameters. An increase of radial curvature improves efficiency and displaces the position where the radial component becomes negative towards the end of pronation. A more proximal location of the pronator teres radial enthesis and a larger humeral medial epicondyle increase efficiency and displace the position where this component becomes negative towards the forearm neutral position, which enhances radial curvature. Efficiency is also affected by medial epicondylar orientation and carrying angle. Moreover, reaching an object and bringing it close to the face in a close-to-neutral position improve efficiency and entail an equilibrium between the forces affecting elbow joint stability. When the upper-limb skeleton is used in positions of low efficiency, implying unbalanced force components, it undergoes plastic changes, which improve these parameters. These findings are useful for studies on ergonomics and orthopaedics, and the model could also be applied to fossil primates in order to infer their locomotor form. Moreover, activity patterns in ancient human populations could be deduced from the parameters reported here.

  4. Fluid dynamic mechanisms and interactions within separated flows and their effects on missile aerodynamics

    NASA Astrophysics Data System (ADS)

    Addy, A. L.; Chow, W. L.; Korst, H. H.; White, R. A.

    1983-05-01

    Significant data and detailed results of a joint research effort investigating the fluid dynamic mechanisms and interactions within separated flows are presented. The results were obtained through analytical, experimental, and computational investigations of base flow related configurations. The research objectives focus on understanding the component mechanisms and interactions which establish and maintain separated flow regions. Flow models and theoretical analyses were developed to describe the base flowfield. The research approach has been to conduct extensive small-scale experiments on base flow configurations and to analyze these flows by component models and finite-difference techniques. The modeling of base flows of missiles (both powered and unpowered) for transonic and supersonic freestreams has been successful by component models. Research on plume effects and plume modeling indicated the need to match initial plume slope and plume surface curvature for valid wind tunnel simulation of an actual rocket plume. The assembly and development of a state-of-the-art laser Doppler velocimeter (LDV) system for experiments with two-dimensional small-scale models has been completed and detailed velocity and turbulence measurements are underway. The LDV experiments include the entire range of base flowfield mechanisms - shear layer development, recompression/reattachment, shock-induced separation, and plume-induced separation.

  5. Standard object recognition memory and "what" and "where" components: Improvement by post-training epinephrine in highly habituated rats.

    PubMed

    Jurado-Berbel, Patricia; Costa-Miserachs, David; Torras-Garcia, Meritxell; Coll-Andreu, Margalida; Portell-Cortés, Isabel

    2010-02-11

    The present work examined whether post-training systemic epinephrine (EPI) is able to modulate short-term (3 h) and long-term (24 h and 48 h) memory of standard object recognition, as well as long-term (24 h) memory of separate "what" (object identity) and "where" (object location) components of object recognition. Although object recognition training is associated with low arousal levels, all the animals received habituation to the training box in order to further reduce emotional arousal. Post-training EPI improved long-term (24 h and 48 h), but not short-term (3 h), memory in the standard object recognition task, as well as 24 h memory for both object identity and object location. These data indicate that post-training epinephrine: (1) facilitates long-term memory for standard object recognition; (2) exerts separate facilitatory effects on "what" (object identity) and "where" (object location) components of object recognition; and (3) is capable of improving memory for a low-arousing task even in highly habituated rats.

  6. Physical Conditions in the Ultraviolet Absorbers of IRAS F22456-5125

    NASA Astrophysics Data System (ADS)

    Dunn, Jay P.; Crenshaw, D. Michael; Kraemer, S. B.; Trippe, M. L.

    2010-04-01

    We present the ultraviolet (UV) and X-ray spectra observed with the Far Ultraviolet Spectroscopic Explorer (FUSE) and the XMM-Newton satellite, respectively, of the low-z Seyfert 1 galaxy IRAS F22456-5125. This object shows absorption from five distinct, narrow kinematic components that span a significant range in velocity (~0 to -700 km s^-1) and ionization (Lyman series, C III, N III, and O VI). We also show that three of the five kinematic components in these lines appear to be saturated in Lyβ λ1026 and that all five components show evidence of saturation in the O VI doublet lines λλ1032, 1038. Further, all five components show evidence for partial covering due to the absorption seen in the O VI doublet. This object is peculiar because it shows no X-ray absorption corresponding to the UV absorption, which violates the 1:1 correlation known for low-z active galactic nuclei (AGNs). We perform photoionization modeling of the UV absorption lines and predict that the O VII column density should be small, which would produce little to no absorption, in agreement with the X-ray observation. We also examine the UV variability of the continuum flux for this object (an increase of a factor of 6). As the absorption components lack variability, we find a lower limit of ~20 kpc for the distance of the absorbers from the central AGN. Based on observations made with the NASA-CNES-CSA Far Ultraviolet Spectroscopic Explorer. FUSE is operated for NASA by the Johns Hopkins University under NASA contract NAS5-32985.

  7. Python as a federation tool for GENESIS 3.0.

    PubMed

    Cornelis, Hugo; Rodriguez, Armando L; Coop, Allan D; Bower, James M

    2012-01-01

    The GENESIS simulation platform was one of the first broad-scale modeling systems in computational biology to encourage modelers to develop and share model features and components. Supported by a large developer community, it participated in innovative simulator technologies such as benchmarking, parallelization, and declarative model specification and was the first neural simulator to define bindings for the Python scripting language. An important feature of the latest version of GENESIS is that it decomposes into self-contained software components complying with the Computational Biology Initiative federated software architecture. This architecture allows separate scripting bindings to be defined for different necessary components of the simulator, e.g., the mathematical solvers and graphical user interface. Python is a scripting language that provides rich sets of freely available open source libraries. With clean dynamic object-oriented designs, they produce highly readable code and are widely employed in specialized areas of software component integration. We employ a simplified wrapper and interface generator to examine an application programming interface and make it available to a given scripting language. This allows independent software components to be 'glued' together and connected to external libraries and applications from user-defined Python or Perl scripts. We illustrate our approach with three examples of Python scripting. (1) Generate and run a simple single-compartment model neuron connected to a stand-alone mathematical solver. (2) Interface a mathematical solver with GENESIS 3.0 to explore a neuron morphology from either an interactive command-line or graphical user interface. 
(3) Apply scripting bindings to connect the GENESIS 3.0 simulator to external graphical libraries and an open source three dimensional content creation suite that supports visualization of models based on electron microscopy and their conversion to computational models. Employed in this way, the stand-alone software components of the GENESIS 3.0 simulator provide a framework for progressive federated software development in computational neuroscience.
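    As a loose illustration of the component 'gluing' described above, the following self-contained Python sketch mimics example (1): a single-compartment model neuron instantiated alongside a stand-alone solver component and connected from a user-level glue script. All class names and the membrane equation are illustrative assumptions, not the GENESIS 3.0 API.

```python
# Hypothetical sketch of component federation: a model container and a
# stand-alone numerical solver glued together by a user-level Python script.
# Names are illustrative only; this is not the actual GENESIS 3.0 API.

class ForwardEulerSolver:
    """Stand-alone solver component: advances dV/dt = (V_rest - V) / tau."""
    def __init__(self, dt):
        self.dt = dt

    def step(self, v, v_rest, tau):
        # Forward-Euler update for a leaky single-compartment membrane.
        return v + self.dt * (v_rest - v) / tau

class SingleCompartmentNeuron:
    """Model component: holds state, delegates numerics to a solver."""
    def __init__(self, v_init=-70.0, v_rest=-70.0, tau=10.0):
        self.v = v_init
        self.v_rest = v_rest
        self.tau = tau

    def advance(self, solver, n_steps):
        for _ in range(n_steps):
            self.v = solver.step(self.v, self.v_rest, self.tau)
        return self.v

# 'Glue' script: instantiate both components and connect them.
neuron = SingleCompartmentNeuron(v_init=-55.0)
solver = ForwardEulerSolver(dt=0.1)
v_final = neuron.advance(solver, n_steps=1000)
# With no input current the membrane relaxes toward rest (-70 mV).
assert abs(v_final - neuron.v_rest) < 0.01
```

The point of the sketch is the separation of concerns: the solver knows nothing about neurons, the neuron knows nothing about numerics, and a short script federates them.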

  8. Python as a Federation Tool for GENESIS 3.0

    PubMed Central

    Cornelis, Hugo; Rodriguez, Armando L.; Coop, Allan D.; Bower, James M.

    2012-01-01

    The GENESIS simulation platform was one of the first broad-scale modeling systems in computational biology to encourage modelers to develop and share model features and components. Supported by a large developer community, it contributed to innovative simulator technologies such as benchmarking, parallelization, and declarative model specification, and it was the first neural simulator to define bindings for the Python scripting language. An important feature of the latest version of GENESIS is that it decomposes into self-contained software components complying with the Computational Biology Initiative federated software architecture. This architecture allows separate scripting bindings to be defined for different necessary components of the simulator, e.g., the mathematical solvers and graphical user interface. Python is a scripting language that provides rich sets of freely available open source libraries. With clean, dynamic, object-oriented designs, these libraries produce highly readable code and are widely employed in specialized areas of software component integration. We employ a simplified wrapper and interface generator to examine an application programming interface and make it available to a given scripting language. This allows independent software components to be ‘glued’ together and connected to external libraries and applications from user-defined Python or Perl scripts. We illustrate our approach with three examples of Python scripting. (1) Generate and run a simple single-compartment model neuron connected to a stand-alone mathematical solver. (2) Interface a mathematical solver with GENESIS 3.0 to explore a neuron morphology from either an interactive command-line or graphical user interface.
(3) Apply scripting bindings to connect the GENESIS 3.0 simulator to external graphical libraries and an open source three dimensional content creation suite that supports visualization of models based on electron microscopy and their conversion to computational models. Employed in this way, the stand-alone software components of the GENESIS 3.0 simulator provide a framework for progressive federated software development in computational neuroscience. PMID:22276101

  9. Application of Microsoft's ActiveX and DirectX technologies to the visualization of physical system dynamics

    NASA Astrophysics Data System (ADS)

    Mann, Christopher; Narasimhamurthi, Natarajan

    1998-08-01

    This paper discusses a specific implementation of a web- and component-based simulation system. The overall simulation container is implemented within a web page viewed with Microsoft's Internet Explorer 4.0 web browser. Microsoft's ActiveX/Distributed Component Object Model object interfaces are used in conjunction with the Microsoft DirectX graphics APIs to provide visualization functionality for the simulation. The MathWorks' Matlab computer aided control system design program is used as an ActiveX automation server to provide the compute engine for the simulations.

  10. Estimating effective data density in a satellite retrieval or an objective analysis

    NASA Technical Reports Server (NTRS)

    Purser, R. J.; Huang, H.-L.

    1993-01-01

    An attempt is made to formulate consistent objective definitions of the concept of 'effective data density' applicable both in the context of satellite soundings and more generally in objective data analysis. The definitions based upon various forms of Backus-Gilbert 'spread' functions are found to be seriously misleading in satellite soundings where the model resolution function (expressing the sensitivity of retrieval or analysis to changes in the background error) features sidelobes. Instead, estimates derived by smoothing the trace components of the model resolution function are proposed. The new estimates are found to be more reliable and informative in simulated satellite retrieval problems and, for the special case of uniformly spaced perfect observations, agree exactly with their actual density. The new estimates integrate to the 'degrees of freedom for signal', a diagnostic that is invariant to changes of units or coordinates used.
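    A toy numeric illustration of the proposed diagnostic, under the simplifying assumption of a small, hand-made model resolution matrix: per-point effective-density estimates are obtained by smoothing the trace components (the diagonal entries), and they integrate back to the 'degrees of freedom for signal' (the trace). The matrix values are invented for illustration.

```python
# 'Degrees of freedom for signal' is the trace of the model resolution
# matrix; smoothed trace components give per-point effective-density
# estimates that conserve that trace. The 3x3 matrix below is made up.

resolution = [
    [0.80, 0.15, 0.05],
    [0.15, 0.60, 0.25],
    [0.05, 0.25, 0.70],
]

# Trace components: the diagonal entries of the resolution matrix.
trace_components = [resolution[i][i] for i in range(3)]
dof_signal = sum(trace_components)

def smooth(vals):
    """3-point running-mean smoother whose output conserves the total."""
    n = len(vals)
    out = [0.0] * n
    for i, v in enumerate(vals):
        share = v / 3.0
        out[i] += share
        out[i - 1 if i > 0 else i] += share      # left neighbour (self at edge)
        out[i + 1 if i < n - 1 else i] += share  # right neighbour (self at edge)
    return out

effective_density = smooth(trace_components)
# The smoothed estimates integrate to the degrees of freedom for signal.
assert abs(sum(effective_density) - dof_signal) < 1e-9
```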

  11. Objective determination of image end-members in spectral mixture analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.

    1993-01-01

    Spectral mixture analysis has been shown to be a powerful, multifaceted tool for analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members are selected from an image cube (image end-members) that best account for its spectral variance within a constrained, linear least squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial and error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
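    The constrained linear mixing model used in the first phase can be sketched for the simplest case of two end-members, where the sum-to-one constraint yields a closed-form least-squares fraction. The 'soil' and 'vegetation' spectra below are invented for illustration, not AVIRIS data.

```python
# Linear spectral unmixing sketch: an observed spectrum is modeled as a
# fractional combination of end-member spectra with fractions summing to
# one. With two end-members the least-squares fraction is closed-form.

def unmix_two_endmembers(obs, em1, em2):
    """Return fraction f of em1 minimizing ||obs - (f*em1 + (1-f)*em2)||^2."""
    d = [a - b for a, b in zip(em1, em2)]  # direction between end-members
    r = [o - b for o, b in zip(obs, em2)]  # observation relative to em2
    return sum(di * ri for di, ri in zip(d, r)) / sum(di * di for di in d)

soil = [0.30, 0.35, 0.40, 0.45]        # bright, sloped 'soil' end-member
vegetation = [0.05, 0.08, 0.30, 0.50]  # 'vegetation' with a red-edge jump

# A pixel that is exactly 60% soil / 40% vegetation should be recovered.
pixel = [0.6 * s + 0.4 * v for s, v in zip(soil, vegetation)]
f = unmix_two_endmembers(pixel, soil, vegetation)
assert abs(f - 0.6) < 1e-9
```

With more end-members the same idea becomes a constrained linear least-squares problem, which is where iterative end-member refinement (or the objective variance-based selection proposed above) enters.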

  12. Simulation of laser detection and ranging (LADAR) and forward-looking infrared (FLIR) data for autonomous tracking of airborne objects

    NASA Astrophysics Data System (ADS)

    Powell, Gavin; Markham, Keith C.; Marshall, David

    2000-06-01

    This paper presents the results of an investigation leading to an implementation of FLIR and LADAR data simulation for use in a multi-sensor data fusion automated target recognition system. At present the main areas of application are in military environments but systems can easily be adapted to other areas such as security applications, robotics and autonomous cars. Recent developments have been away from traditional sensor modeling and toward modeling of features that are external to the system, such as atmosphere and part occlusion, to create a more realistic and rounded system. We have implemented such techniques and introduced a means of inserting these models into a highly detailed scene model to provide a rich data set for later processing. From our study and implementation we are able to embed sensor model components into a commercial graphics and animation package, along with object and terrain models, which can be easily used to create a more realistic sequence of images.

  13. A study of binary Kuiper belt objects

    NASA Astrophysics Data System (ADS)

    Kern, Susan Diane

    2006-06-01

    About 10^5 bodies larger than 100 km in diameter (Jewitt 1998) reside in the Kuiper Belt, beyond the orbit of Neptune. Since 1992 observational surveys have discovered over one thousand of these objects, believed to be fossil remnants of events that occurred nearly 4.5 billion years ago. Sixteen of these objects are currently known to be binaries, and many more are expected to be discovered. As part of the Deep Ecliptic Survey (DES) I have helped catalog nearly one third of the known Kuiper Belt object (KBO) population, and used that database for further physical studies. Recovery observations for dynamical studies of newly discovered objects with the Magellan telescopes and a high resolution imager, MagIC, revealed three binaries, 88611 (2001QT297), 2003QY90, and 2005EO304. One binary was found in the discovery observations, 2003UN284. Lightcurve measurements of these, and other non-binary KBOs, were obtained to look for unique rotational characteristics. Eleven of thirty-three objects, excluding the binaries, were found to have measurable variability. One of these objects, 2002GW32 has a particularly large amplitude (> 1 magnitude) of variability, and 2002GP32 has a relatively short (~3.3 hours, single-peaked) lightcurve. Among the binary population all the observed objects showed some level of variation. The secondary of 88611 was fit with a single-peaked period of 5.5±0.02 hours while the primary component appears to be non-variable above the measurement errors (0.05 magnitudes). Neither component appears to be color variable. The components of 2003QY90 are both highly variable yielding single-peaked rotation periods of 3.5±1.1 and 7.2±2.9 hours with amplitudes of 0.34±0.06 and 0.90±0.18 magnitudes, respectively. The rotation periods are comparable to those of other non-binary KBOs although distinct from that of an identified contact binary.
Orbits and partial orbits for Kuiper belt binaries (KBBs) show a wide range of eccentricities, and an increasing number of binaries with decreasing binary semi-major axis. These characteristics exclude the formation models proposed by Funato et al. (2003) and Weidenschilling (2002), respectively. Conversely, the formation models of Astakhov et al. (2005) and Goldreich et al. (2002) appear to describe the observations, at least in part. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hale, Richard Edward; Cetiner, Sacit M.; Fugate, David L.

    The Small Modular Reactor (SMR) Dynamic System Modeling Tool project is in the third year of development. The project is designed to support collaborative modeling and study of various advanced SMR (non-light water cooled) concepts, including the use of multiple coupled reactors at a single site. The objective of the project is to provide a common simulation environment and baseline modeling resources to facilitate rapid development of dynamic advanced reactor SMR models, ensure consistency among research products within the Instrumentation, Controls, and Human-Machine Interface (ICHMI) technical area, and leverage cross-cutting capabilities while minimizing duplication of effort. The combined simulation environment and suite of models are identified as the Modular Dynamic SIMulation (MoDSIM) tool. The critical elements of this effort include (1) defining a standardized, common simulation environment that can be applied throughout the program, (2) developing a library of baseline component modules that can be assembled into full plant models using existing geometry and thermal-hydraulic data, (3) defining modeling conventions for interconnecting component models, and (4) establishing user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.
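    A minimal sketch, with invented names, of items (2) and (3) above: baseline component modules that expose named ports, plus a modeling convention for interconnecting them into a plant model. This is purely illustrative and not the MoDSIM API.

```python
# Component-module sketch: each module exposes named input/output ports,
# and a convention ('connect') wires an output of one module to an input
# of another. All names and temperature values are illustrative.

class Component:
    def __init__(self, name):
        self.name = name
        self.inputs = {}
        self.outputs = {}

class ReactorCore(Component):
    def step(self):
        # Trivial thermal model: core adds 50 K to the coolant.
        self.outputs["T_out"] = self.inputs.get("T_in", 300.0) + 50.0

class HeatExchanger(Component):
    def step(self):
        # Trivial thermal model: heat exchanger removes 40 K.
        self.outputs["T_out"] = self.inputs.get("T_in", 300.0) - 40.0

def connect(src, src_port, dst, dst_port):
    """Modeling convention: copy an output port value to an input port."""
    dst.inputs[dst_port] = src.outputs[src_port]

core = ReactorCore("core")
hx = HeatExchanger("hx")
core.inputs["T_in"] = 300.0
core.step()                      # coolant heated: 300 -> 350
connect(core, "T_out", hx, "T_in")
hx.step()                        # heat removed: 350 -> 310
assert hx.outputs["T_out"] == 310.0
```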

  15. Limited capacity of working memory in unihemispheric random walks implies conceivable slow dispersal.

    PubMed

    Wei, Kun; Zhong, Suchuan

    2017-08-01

    Phenomenologically inspired by dolphins' unihemispheric sleep, we introduce a minimal model for random walks with physiological memory. The physiological memory consists of long-term memory, which includes unconscious implicit memory and conscious explicit memory, and working memory, which serves as a multi-component system for integrating, manipulating and managing short-term storage. The model assumes that the sleeping state allows retrievals of episodic objects merely from the episodic buffer, where these memory objects are invoked corresponding to the ambient objects and are thus object-oriented, together with intermittent but increasing use of implicit memory in which decisions are unconsciously picked up from historical time series. The process of memory decay and forgetting is constructed in the episodic buffer. The walker's risk attitude, as a product of physiological heuristics according to the performance of object-oriented decisions, is imposed on implicit memory. The analytical results of unihemispheric random walks with the mixture of object-oriented and time-oriented memory, as well as the long-time behavior which tends to the use of implicit memory, are provided, consistent with the common-sense expectation that a conservative risk attitude leads to slow movement.
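    The interplay of a decaying episodic buffer and growing reliance on implicit memory can be caricatured in a few lines. Every parameter and functional form below is an illustrative assumption, not taken from the paper.

```python
import random

# Toy memory-limited walk: the walker keeps a small episodic buffer of
# recent steps (working memory, with forgetting) and, with probability
# growing over time, falls back on 'implicit memory' (the running average
# of its step history). All parameters are invented for illustration.

def walk(n_steps, buffer_size=5, seed=0):
    rng = random.Random(seed)
    position = 0.0
    episodic = []                      # short-term buffer, oldest forgotten
    history_sum, history_n = 0.0, 0
    for t in range(1, n_steps + 1):
        p_implicit = t / (t + 50.0)    # implicit-memory use grows with time
        if episodic and history_n and rng.random() < p_implicit:
            step = history_sum / history_n  # conservative, averaged step
        else:
            step = rng.choice([-1.0, 1.0])  # fresh object-oriented decision
        position += step
        episodic.append(step)
        if len(episodic) > buffer_size:
            episodic.pop(0)            # forgetting in the episodic buffer
        history_sum += step
        history_n += 1
    return position

# Averaged implicit steps shrink toward zero, so late-time dispersal is
# slower than a plain +/-1 random walk of the same length.
assert abs(walk(2000)) < 2000
```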

  16. Modeling the Personal Health Ecosystem.

    PubMed

    Blobel, Bernd; Brochhausen, Mathias; Ruotsalainen, Pekka

    2018-01-01

    Complex ecosystems like the pHealth one combine different domains represented by a huge variety of different actors (human beings, organizations, devices, applications, components) belonging to different policy domains, coming from different disciplines, deploying different methodologies, terminologies, and ontologies, offering different levels of knowledge, skills, and experiences, acting in different scenarios and accommodating different business cases to meet the intended business objectives. For correctly modeling such systems, a system-oriented, architecture-centric, ontology-based, policy-driven approach is inevitable, thereby following established Good Modeling Best Practices. However, most of the existing standards, specifications and tools for describing, representing, implementing and managing health (information) systems reflect the advancement of information and communication technology (ICT) represented by different evolutionary levels of data modeling. The paper presents a methodology for integrating, adopting and advancing models, standards, specifications as well as implemented systems and components on the way towards the aforementioned ultimate approach, so meeting the challenge we face when transforming health systems towards ubiquitous, personalized, predictive, preventive, participative, and cognitive health and social care.

  17. The Regional Hydrologic Extremes Assessment System: A software framework for hydrologic modeling and data assimilation

    PubMed Central

    Andreadis, Konstantinos M.; Das, Narendra; Stampoulis, Dimitrios; Ines, Amor; Fisher, Joshua B.; Granger, Stephanie; Kawata, Jessie; Han, Eunjin; Behrangi, Ali

    2017-01-01

    The Regional Hydrologic Extremes Assessment System (RHEAS) is a prototype software framework for hydrologic modeling and data assimilation that automates the deployment of water resources nowcasting and forecasting applications. A spatially-enabled database is a key component of the software that can ingest a suite of satellite and model datasets while facilitating the interfacing with Geographic Information System (GIS) applications. The datasets ingested are obtained from numerous space-borne sensors and represent multiple components of the water cycle. The object-oriented design of the software allows for modularity and extensibility, showcased here with the coupling of the core hydrologic model with a crop growth model. RHEAS can exploit multi-threading to scale with increasing number of processors, while the database allows delivery of data products and associated uncertainty through a variety of GIS platforms. A set of three example implementations of RHEAS in the United States and Kenya are described to demonstrate the different features of the system in real-world applications. PMID:28545077
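    The coupling of the core hydrologic model with a crop growth model can be sketched, purely illustratively, as two objects exchanging soil moisture each time step. Class names and equations below are invented, not RHEAS's API.

```python
# Object-oriented coupling sketch: a hydrology component produces soil
# moisture that a crop-growth component consumes each time step. The
# bucket model and growth rule are invented for illustration.

class HydrologyModel:
    def __init__(self, soil_moisture=0.3):
        self.soil_moisture = soil_moisture

    def step(self, rainfall):
        # Trivial bucket model: add rain, then drain 10% of storage.
        self.soil_moisture = min(1.0, self.soil_moisture + rainfall) * 0.9
        return self.soil_moisture

class CropModel:
    def __init__(self):
        self.biomass = 0.0

    def step(self, soil_moisture):
        # Growth proportional to available moisture.
        self.biomass += 10.0 * soil_moisture
        return self.biomass

hydro, crop = HydrologyModel(), CropModel()
for rain in [0.05, 0.0, 0.1, 0.02]:   # a short, made-up forcing series
    sm = hydro.step(rain)
    crop.step(sm)
assert 0.0 < hydro.soil_moisture < 1.0 and crop.biomass > 0.0
```

The modularity claimed above amounts to exactly this: either component can be replaced without touching the other, as long as the exchanged variable (here, soil moisture) keeps its meaning.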

  18. The Regional Hydrologic Extremes Assessment System: A software framework for hydrologic modeling and data assimilation.

    PubMed

    Andreadis, Konstantinos M; Das, Narendra; Stampoulis, Dimitrios; Ines, Amor; Fisher, Joshua B; Granger, Stephanie; Kawata, Jessie; Han, Eunjin; Behrangi, Ali

    2017-01-01

    The Regional Hydrologic Extremes Assessment System (RHEAS) is a prototype software framework for hydrologic modeling and data assimilation that automates the deployment of water resources nowcasting and forecasting applications. A spatially-enabled database is a key component of the software that can ingest a suite of satellite and model datasets while facilitating the interfacing with Geographic Information System (GIS) applications. The datasets ingested are obtained from numerous space-borne sensors and represent multiple components of the water cycle. The object-oriented design of the software allows for modularity and extensibility, showcased here with the coupling of the core hydrologic model with a crop growth model. RHEAS can exploit multi-threading to scale with increasing number of processors, while the database allows delivery of data products and associated uncertainty through a variety of GIS platforms. A set of three example implementations of RHEAS in the United States and Kenya are described to demonstrate the different features of the system in real-world applications.

  19. Design and Implementation of Hydrologic Process Knowledge-base Ontology: A case study for the Infiltration Process

    NASA Astrophysics Data System (ADS)

    Elag, M.; Goodall, J. L.

    2013-12-01

    Hydrologic modeling often requires the re-use and integration of models from different disciplines to simulate complex environmental systems. Component-based modeling introduces a flexible approach for integrating physics-based processes across disciplinary boundaries. Several hydrologic-related modeling communities have adopted the component-based approach for simulating complex physical systems by integrating model components across disciplinary boundaries in a workflow. However, it is not always straightforward to create these interdisciplinary models due to the lack of sufficient knowledge about a hydrologic process. This shortcoming is a result of using informal methods for organizing and sharing information about a hydrologic process. A knowledge-based ontology provides the missing formal standards and is considered the ideal approach for overcoming this challenge. The aims of this research are to present the methodology used to analyze the hydrologic domain and identify hydrologic processes, the ontology itself, and the integration of the proposed ontology with the Water Resources Component (WRC) ontology. The proposed ontology standardizes the definitions of a hydrologic process, the relationships between hydrologic processes, and their associated scientific equations. The objective of the proposed Hydrologic Process (HP) Ontology is to advance the idea of creating a unified knowledge framework for components' metadata by introducing a domain-level ontology for hydrologic processes. The HP ontology is a step toward an explicit and robust domain knowledge framework that can be evolved through the contribution of domain users. Analysis of the hydrologic domain is accomplished using Formal Concept Analysis (FCA), in which the infiltration process, an important hydrologic process, is examined. Two infiltration methods, the Green-Ampt and Philip's methods, were used to demonstrate the implementation of information in the HP ontology.
Furthermore, a SPARQL service is provided for semantic-based querying of the ontology.
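    A toy sketch of the knowledge-base idea: processes, their relationships, and associated methods stored as subject-predicate-object triples, with a small pattern-matching helper standing in for the SPARQL service. The triples are illustrative, not the actual HP ontology content.

```python
# Ontology-as-triples sketch: facts are (subject, predicate, object)
# tuples, and a wildcard pattern matcher plays the role of a (much
# simpler) SPARQL endpoint. Triple content is invented for illustration.

triples = [
    ("Infiltration", "isA", "HydrologicProcess"),
    ("GreenAmpt", "implements", "Infiltration"),
    ("Philip", "implements", "Infiltration"),
    ("Infiltration", "precedes", "Percolation"),
]

def query(s=None, p=None, o=None):
    """Return triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which methods implement the infiltration process?
methods = [s for s, _, _ in query(p="implements", o="Infiltration")]
assert methods == ["GreenAmpt", "Philip"]
```

A real SPARQL basic graph pattern generalizes this matcher by joining several such patterns on shared variables.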

  20. Proposed biomimetic molecular sensor array for astrobiology applications

    NASA Astrophysics Data System (ADS)

    Cullen, D. C.; Grant, W. D.; Piletsky, S.; Sims, M. R.

    2001-08-01

    A key objective of future astrobiology lander missions, e.g. to Mars and Europa, is the detection of biomarkers - molecules whose presence indicates the existence of either current or extinct life. To address limitations of current analytical methods for biomarker detection, we describe the methodology of a new project for demonstration of a robust molecular-recognition sensor array for astrobiology biomarkers. The sensor array will be realised by assembling components that have been demonstrated individually in previous or current research projects. The major components are (1) robust artificial molecular receptors comprised of molecular imprinted polymer (MIP) recognition systems and (2) a sensor array comprised of both optical and electrochemical sensor elements. These components will be integrated together using ink-jet printing technology coupled with in situ photo-polymerisation of MIPs. For demonstration, four model biomarkers are chosen as targets and represent various classes of potential biomarkers. Objectives of the proposed work include (1) demonstration of a practical proof of concept, (2) identification of areas for further development, and (3) provision of performance and design data for follow-up projects leading to astrobiology missions.

  1. Developing models that analyze the economic/environmental trade-offs implicit in water resource management

    NASA Astrophysics Data System (ADS)

    Howitt, R. E.

    2016-12-01

    Hydro-economic models have been used to analyze optimal supply management and groundwater use for the past 25 years. They are characterized by an objective function that usually maximizes economic measures such as consumer and producer surplus subject to hydrologic equations of motion or water distribution systems. The hydrologic and economic components are sometimes fully integrated; alternatively, they may interact through an iterative process. Environmental considerations have been included in hydro-economic models as inequality constraints. Representing environmental requirements as constraints is a rigid approximation of the range of management alternatives that could be used to implement environmental objectives. The next generation of hydro-economic models, currently being developed, requires that environmental alternatives be represented by continuous or semi-continuous functions which relate the water resources allocated to the environment to the probabilities of achieving environmental objectives. These functions will be generated by process models of environmental and biological systems, which are now advanced to the state that they can realistically represent environmental systems and interact flexibly with economic models. Examples are crop growth models, climate modeling, and biological models of forest, fish, and fauna systems. These process models can represent environmental outcomes in a form that is similar to economic production functions. When combined with economic models, the interacting process models can reproduce a range of trade-offs between economic and environmental objectives, and thus optimize the social value of many water and environmental resources. Some examples of this next generation of hydro-enviro-economic models are reviewed. In these models implicit production functions for environmental goods are combined with hydrologic equations of motion and economic response functions.
We discuss models that show interaction between environmental goods and agricultural production, and others that address alternative climate change policies, or habitat provision.
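    The contrast drawn above, between a rigid environmental constraint and a continuous function relating environmental water allocation to the probability of meeting the objective, can be sketched with a toy social-value optimization. All functional forms and numbers are invented for illustration.

```python
# Toy hydro-economic trade-off: economic benefit with diminishing returns
# plus a continuous, saturating environmental-success probability, compared
# against a fixed environmental-allocation constraint. Numbers are made up.

TOTAL_WATER = 10.0

def economic_benefit(w_econ):
    # Diminishing returns to water used in production.
    return 20.0 * w_econ - w_econ ** 2

def env_success_prob(w_env):
    # Smooth, saturating response of the environmental objective.
    return w_env / (w_env + 2.0)

def social_value(w_env, env_weight=30.0):
    return (economic_benefit(TOTAL_WATER - w_env)
            + env_weight * env_success_prob(w_env))

# Continuous trade-off: search over environmental allocations in 0.1 steps.
best_w = max((i * 0.1 for i in range(101)), key=social_value)

# Hard-constraint version: the environmental allocation is fixed at 4.0.
constrained_value = social_value(4.0)

# The flexible representation can only do as well or better than any
# single rigid constraint it generalizes.
assert social_value(best_w) >= constrained_value
```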

  2. Improvements to NASA's Debris Assessment Software

    NASA Technical Reports Server (NTRS)

    Opiela, J.; Johnson, Nicholas L.

    2007-01-01

    NASA's Debris Assessment Software (DAS) has been substantially revised and expanded. DAS is designed to assist NASA programs in performing orbital debris assessments, as described in NASA's Guidelines and Assessment Procedures for Limiting Orbital Debris. The extensive upgrade of DAS was undertaken to reflect changes in the debris mitigation guidelines, to incorporate recommendations from DAS users, and to take advantage of recent software capabilities for greater user utility. DAS 2.0 includes an updated environment model and enhanced orbital propagators and reentry-survivability models. The ORDEM96 debris environment model has been replaced by ORDEM2000 in DAS 2.0, which is also designed to accept anticipated revisions to the environment definition. Numerous upgrades have also been applied to the assessment of human casualty potential due to reentering debris. Routines derived from the Object Reentry Survival Analysis Tool, Version 6 (ORSAT 6), determine which objects are assessed to survive reentry, and the resulting risk of human casualty is calculated directly based upon the orbital inclination and a future world population database. When evaluating reentry risks, the user may enter up to 200 unique hardware components for each launched object, in up to four nested levels. This last feature allows the software to more accurately model components that are exposed below the initial breakup altitude. The new DAS 2.0 provides an updated set of tools for users to assess their mission's compliance with the NASA Safety Standard and does so with a clear and easy-to-understand interface. The new native Microsoft Windows graphical user interface (GUI) is a vast improvement over the previous DOS-based interface. In the new version, functions are more clearly laid out, and the GUI includes the standard Windows-style Help functions. The underlying routines within the DAS code are also improved.

  3. Micro-Randomized Trials: An Experimental Design for Developing Just-in-Time Adaptive Interventions

    PubMed Central

    Klasnja, Predrag; Hekler, Eric B.; Shiffman, Saul; Boruvka, Audrey; Almirall, Daniel; Tewari, Ambuj; Murphy, Susan A.

    2015-01-01

    Objective: This paper presents an experimental design, the micro-randomized trial, developed to support optimization of just-in-time adaptive interventions (JITAIs). JITAIs are mHealth technologies that aim to deliver the right intervention components at the right times and locations to optimally support individuals’ health behaviors. Micro-randomized trials offer a way to optimize such interventions by enabling modeling of causal effects and time-varying effect moderation for individual intervention components within a JITAI. Methods: The paper describes the micro-randomized trial design, enumerates research questions that this experimental design can help answer, and provides an overview of the data analyses that can be used to assess the causal effects of studied intervention components and investigate time-varying moderation of those effects. Results: Micro-randomized trials enable causal modeling of proximal effects of the randomized intervention components and assessment of time-varying moderation of those effects. Conclusions: Micro-randomized trials can help researchers understand whether their interventions are having intended effects, when and for whom they are effective, and what factors moderate the interventions’ effects, enabling creation of more effective JITAIs. PMID:26651463
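    The design can be sketched with a minimal simulation: each participant is re-randomized at every decision point, and the proximal effect is estimated as the mean outcome difference between the two randomization arms. The outcome model and effect size below are invented for illustration.

```python
import random

# Micro-randomized trial sketch: at every decision point each participant
# is independently randomized to receive an intervention component or not;
# the proximal effect is the mean outcome difference between arms.
# Effect size and the Gaussian outcome model are illustrative assumptions.

def simulate_mrt(n_participants=200, n_points=50, effect=0.5, seed=1):
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n_participants):
        for _ in range(n_points):
            a = rng.random() < 0.5                 # micro-randomization
            outcome = rng.gauss(0.0, 1.0) + (effect if a else 0.0)
            (treated if a else control).append(outcome)
    return sum(treated) / len(treated) - sum(control) / len(control)

estimate = simulate_mrt()
# With 10,000 decision points the estimate should be close to the true
# proximal effect of 0.5.
assert abs(estimate - 0.5) < 0.15
```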

  4. Improved determination of vector lithospheric magnetic anomalies from MAGSAT data

    NASA Technical Reports Server (NTRS)

    Ravat, Dhananjay

    1993-01-01

    Scientific contributions made in developing new methods to isolate and map vector magnetic anomalies from measurements made by Magsat are described. In addition to the objective of the proposal, the isolation and mapping of equatorial vector lithospheric Magsat anomalies, isolation of polar ionospheric fields during the period were also studied. Significant progress was also made in isolation of polar delta(Z) component and scalar anomalies as well as integration and synthesis of various techniques of removing equatorial and polar ionospheric effects. The significant contributions of this research are: (1) development of empirical/analytical techniques in modeling ionospheric fields in Magsat data and their removal from uncorrected anomalies to obtain better estimates of lithospheric anomalies (this task was accomplished for equatorial delta(X), delta(Z), and delta(B) component and polar delta(Z) and delta(B) component measurements; (2) integration of important processing techniques developed during the last decade with the newly developed technologies of ionospheric field modeling into an optimum processing scheme; and (3) implementation of the above processing scheme to map the most robust magnetic anomalies of the lithosphere (components as well as scalar).

  5. Intelligent Integrated System Health Management

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando

    2012-01-01

    Intelligent Integrated System Health Management (ISHM) is the management of data, information, and knowledge (DIaK) with the purposeful objective of determining the health of a system (Management: storage, distribution, sharing, maintenance, processing, reasoning, and presentation). Presentation discusses: (1) ISHM Capability Development. (1a) ISHM Knowledge Model. (1b) Standards for ISHM Implementation. (1c) ISHM Domain Models (ISHM-DM's). (1d) Intelligent Sensors and Components. (2) ISHM in Systems Design, Engineering, and Integration. (3) Intelligent Control for ISHM-Enabled Systems

  6. View-Based Models of 3D Object Recognition and Class-Specific Invariance

    DTIC Science & Technology

    1994-04-01

    Fragments of the report text recovered from OCR: view-based models underlie recognition of geon-like components (see Edelman, 1991, and Biederman, 1987), using the weighted view-invariant distance ||x - t_a||^2_W = (x - t_a)^T W^T W (x - t_a) (Eq. 3). Cited works include I. Biederman, 'Recognition by components: a theory of human image understanding,' Psychol. Review, 94:115-147, 1987, and B. Olshausen, C. Anderson, and D. Van Essen, 'A neural model of visual attention and invariant pattern recognition,' Biological Cybernetics, 1992.

  7. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
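    The iterated expectation and variance formulas mentioned above can be illustrated with a toy bootstrap: conditional curve estimates and conditional variances across decompositions combine via the law of total variance. All numbers below are invented; this is not the refund implementation.

```python
import random

# Law-of-total-variance sketch: total variance of a curve estimate at one
# point = mean of the conditional (model-based) variances across bootstrap
# 'decompositions' + variance of the conditional estimates themselves.
# All values are made up for illustration.

rng = random.Random(7)

# Pretend each bootstrap decomposition yields a conditional estimate and a
# conditional (model-based) variance at one point on the curve.
cond_est = [rng.gauss(2.0, 0.3) for _ in range(500)]  # E[curve | decomp]
cond_var = [0.04] * 500                               # Var[curve | decomp]

n = len(cond_est)
mean_est = sum(cond_est) / n
within = sum(cond_var) / n                               # E[Var | decomp]
between = sum((e - mean_est) ** 2 for e in cond_est) / n  # Var[E | decomp]
total_var = within + between

# Ignoring decomposition uncertainty (using only 'within') understates the
# variance that confidence bands should be built from.
assert total_var > within
```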

  8. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  9. The balanced ideological antipathy model: explaining the effects of ideological attitudes on inter-group antipathy across the political spectrum.

    PubMed

    Crawford, Jarret T; Mallinas, Stephanie R; Furman, Bryan J

    2015-12-01

    We introduce the balanced ideological antipathy (BIA) model, which challenges assumptions that right-wing authoritarianism (RWA) and social dominance orientation (SDO) predict inter-group antipathy per se. Rather, the effects of RWA and SDO on antipathy should depend on the target's political orientation and political objectives, the specific components of RWA, and the type of antipathy expressed. Consistent with the model, two studies (N = 585) showed that the Traditionalism component of RWA positively and negatively predicted both political intolerance and prejudice toward tradition-threatening and -reaffirming groups, respectively, whereas SDO positively and negatively predicted prejudice (and to some extent political intolerance) toward hierarchy-attenuating and -enhancing groups, respectively. Critically, the Conservatism component of RWA positively predicted political intolerance (but not prejudice) toward each type of target group, suggesting it captures the anti-democratic impulse at the heart of authoritarianism. Recommendations for future research on the relationship between ideological attitudes and inter-group antipathy are discussed. © 2015 by the Society for Personality and Social Psychology, Inc.

  10. On Complex Networks Representation and Computation of Hydrological Quantities

    NASA Astrophysics Data System (ADS)

    Serafin, F.; Bancheri, M.; David, O.; Rigon, R.

    2017-12-01

    Water is our blue gold. Although discovery-based science keeps warning public opinion about a looming worldwide water crisis, water is still treated as a resource of little worth. Could a different, multi-scale perspective affect environmental decision-making more deeply? Could pairing it with a new graphical representation of process interactions sway decision-making, and consequently public opinion, more effectively? This abstract introduces a complex-network-driven way to represent catchment eco-hydrology, together with flexible informatics to manage it. The representation is built on mathematical category theory: a category is an algebraic structure comprising "objects" linked by "arrows". It is an evolution of Petri nets called Time Continuous Petri Nets (TCPN), and it aims to display water-budget processes and catchment interactions with an explanatory, self-contained symbolism, improving the readability of physical processes over current descriptions. The IT perspective hinges on the Object Modeling System (OMS) v3, a non-invasive, flexible environmental modeling framework designed to support component-based model development. The implementation of a Directed Acyclic Graph (DAG) data structure, named Net3, has recently enhanced its flexibility. Net3 represents interacting systems as complex networks: vertices match up with any sort of time-evolving quantity; edges correspond to their data (flux) exchanges. It currently hosts JGrass-NewAge components and components implementing travel-time analysis of fluxes; further biophysical or management-oriented components can easily be added. This talk introduces both the graphical representation and the related informatics, exercising actual applications and examples.
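The network abstraction — storages as vertices, flux exchanges as directed edges, evaluated in topological order because the graph is acyclic — can be sketched in a few lines. The compartment names and flux fractions below are hypothetical, not Net3's actual API:

```python
from collections import defaultdict, deque

# Vertices are water storages; each edge carries a fraction of the
# upstream budget (fractions are invented for illustration).
fluxes = {
    ("rain", "canopy"): 1.0,
    ("canopy", "soil"): 0.8,        # 20% assumed lost to evaporation
    ("soil", "runoff"): 0.4,
    ("soil", "groundwater"): 0.6,
}

# Kahn's algorithm: a DAG lets us evaluate vertices in topological order.
out_edges, indegree, nodes = defaultdict(list), defaultdict(int), set()
for (u, v), f in fluxes.items():
    out_edges[u].append((v, f))
    indegree[v] += 1
    nodes |= {u, v}

budget = defaultdict(float)
budget["rain"] = 100.0              # mm of input forcing
queue = deque(n for n in nodes if indegree[n] == 0)
while queue:
    u = queue.popleft()
    for v, f in out_edges[u]:
        budget[v] += f * budget[u]  # flux exchange along the edge
        indegree[v] -= 1
        if indegree[v] == 0:
            queue.append(v)
```

With the fractions above, 100 mm of rain partitions into 32 mm of runoff and 48 mm of groundwater recharge, each vertex computed exactly once.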

  11. Southern Regional Center for Lightweight Innovative Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Paul T.

    The Southern Regional Center for Lightweight Innovative Design (SRCLID) has developed an experimentally validated cradle-to-grave modeling and simulation effort to optimize automotive components in order to decrease weight and cost, yet increase performance and safety in crash scenarios. In summary, the three major objectives of this project are accomplished: To develop experimentally validated cradle-to-grave modeling and simulation tools to optimize automotive and truck components for lightweighting materials (aluminum, steel, and Mg alloys and polymer-based composites) with consideration of uncertainty to decrease weight and cost, yet increase the performance and safety in impact scenarios; To develop multiscale computational models that quantify microstructure-property relations by evaluating various length scales, from the atomic through component levels, for each step of the manufacturing process for vehicles; and To develop an integrated K-12 educational program to educate students on lightweighting designs and impact scenarios. In this final report, we divided the content into two parts: the first part contains the development of building blocks for the project, including materials and process models, process-structure-property (PSP) relationship, and experimental validation capabilities; the second part presents the demonstration task for Mg front-end work associated with USAMP projects.

  12. A 3D stand generator for central Appalachian hardwood forests

    Treesearch

    Jingxin Wang; Yaoxiang Li; Gary W. Miller

    2002-01-01

    A 3-dimensional (3D) stand generator was developed for central Appalachian hardwood forests. It was designed for a harvesting simulator to examine the interactions of stand, harvest, and machine. The Component Object Model (COM) was used to design and implement the program. Input to the generator includes species composition, stand density, and spatial pattern. Output...
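A minimal sketch of what such a generator consumes and produces — species composition, stand density, and random spatial placement in, a list of positioned trees out. This is plain Python rather than the COM components the paper used, and the species mix, density, and diameter model are invented:

```python
import random

random.seed(42)

def generate_stand(species_mix, stems_per_ha, plot_m=100.0):
    """Place trees at random positions with species drawn from the mix.

    species_mix: {species: proportion}, proportions summing to 1.
    stems_per_ha: target stand density; plot_m: square plot side in metres.
    """
    n = round(stems_per_ha * (plot_m / 100.0) ** 2)
    names = list(species_mix)
    weights = [species_mix[s] for s in names]
    stand = []
    for _ in range(n):
        stand.append({
            "species": random.choices(names, weights)[0],
            "x": random.uniform(0.0, plot_m),
            "y": random.uniform(0.0, plot_m),
            "dbh_cm": random.lognormvariate(3.0, 0.3),  # toy diameter model
        })
    return stand

stand = generate_stand(
    {"red oak": 0.5, "yellow-poplar": 0.3, "sugar maple": 0.2},
    stems_per_ha=300)
```

A real generator would add clumped or regular spatial patterns and height/crown models; uniform placement here stands in for the simplest "random" spatial pattern option.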

  13. Internal Consistency in Components of International Management/International Business Syllabi: Roadmaps with Mixed Messages

    ERIC Educational Resources Information Center

    Veliyath, Rajaram; Adams, Janet S.

    2005-01-01

    The course syllabus is a contract between instructor and students, a schedule of course assignments and activities, and a roadmap delineating objectives and checkpoints in the course. It is also a planning and reference tool for both students and instructor, and it models professors' expectations for their students. This study investigated whether…

  14. An Innovative Child CBT Training Model for Community Mental Health Practitioners in Ontario

    ERIC Educational Resources Information Center

    Manassis, Katharina; Ickowicz, Abel; Picard, Erin; Antle, Beverley; McNeill, Ted; Chahauver, Anu; Mendlowitz, Sandra; Monga, Suneeta; Adler-Nevo, Gili

    2009-01-01

    Objective: Cognitive behavior therapy (CBT) for children has been shown efficacious, but community access to it is often limited by the lack of trained therapists. This study evaluated a child, CBT-focused, 20-session weekly group supervision seminar with a didactic component which was provided to community mental health practitioners by…

  15. Computation of Reacting Flows in Combustion Processes

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Chen, K.-H.

    2001-01-01

    The objective of this research is to develop an efficient numerical algorithm with unstructured grids for the computation of three-dimensional chemical reacting flows that are known to occur in combustion components of propulsion systems. During the grant period (1996 to 1999), two companion codes have been developed and various numerical and physical models were implemented into the two codes.

  16. Cost Estimates by Program Mechanism, Appendix K. Vol. II, A Plan for Managing the Development, Implementation and Operation of a Model Elementary Teacher Education Program.

    ERIC Educational Resources Information Center

    Cole, R. D.; Hamreus, D. G.

    This appendix presents the following tables of program component cost estimates: 1) instructional design and development; 2) instructional operations; 3) program management--policy creation and adoption, and policy and program execution; 4) program coordination--instructional objectives, adaptation, accommodation, and dissemination; 5) general…

  17. Integrated model development for liquid fueled rocket propulsion systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1993-01-01

    As detailed in the original statement of work, the objective of phase two of this research effort was to develop a general framework for rocket engine performance prediction that integrates physical principles, a rigorous mathematical formalism, component level test data, system level test data, and theory-observation reconciliation. Specific phase two development tasks are defined.

  18. The caCORE Software Development Kit: streamlining construction of interoperable biomedical information services.

    PubMed

    Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A

    2006-01-06

    Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. 
caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development.
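The generation step at the heart of the SDK — turning a class model into running middleware — can be illustrated in miniature. The model dictionary and the emitted classes below are invented stand-ins; the real Codegen component consumes an annotated UML model and produces Java middleware for an application server:

```python
# A toy 'domain model' standing in for an annotated UML class diagram.
model = {
    "Gene": {"symbol": "str", "taxonId": "int"},
    "Chromosome": {"number": "str"},
}

def codegen(model):
    """Emit one Python class per model class, with typed constructor args."""
    lines = []
    for cls, attrs in model.items():
        lines.append(f"class {cls}:")
        args = ", ".join(f"{a}: {t}" for a, t in attrs.items())
        lines.append(f"    def __init__(self, {args}):")
        for a in attrs:
            lines.append(f"        self.{a} = {a}")
        lines.append("")
    return "\n".join(lines)

source = codegen(model)
namespace = {}
exec(source, namespace)          # 'deploy' the generated classes
g = namespace["Gene"](symbol="TP53", taxonId=9606)
```

The point of the sketch is the workflow shape — model in, consistent API out for every class — which is what gives the generated caCORE services their uniform object-oriented access syntax.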

  19. Constitutive and damage material modeling in a high pressure hydrogen environment

    NASA Technical Reports Server (NTRS)

    Russell, D. A.; Fritzemeier, L. G.

    1991-01-01

    Numerous components in reusable space propulsion systems such as the SSME are exposed to high pressure gaseous hydrogen environments. Flow areas and passages in the fuel turbopump, fuel and oxidizer preburners, main combustion chamber, and injector assembly contain high pressure hydrogen either high in purity or as hydrogen rich steam. Accurate constitutive and damage material models applicable to high pressure hydrogen environments are therefore needed for engine design and analysis. Existing constitutive and cyclic crack initiation models were evaluated only for conditions of oxidizing environments. The main objective is to evaluate these models for applicability to high pressure hydrogen environments.

  20. UAS in the NAS Project: Large-Scale Communication Architecture Simulations with NASA GRC Gen5 Radio Model

    NASA Technical Reports Server (NTRS)

    Kubat, Gregory

    2016-01-01

    This report provides a description and performance characterization of the large-scale, Relay architecture, UAS communications simulation capability developed for the NASA GRC, UAS in the NAS Project. The system uses a validated model of the GRC Gen5 CNPC, Flight-Test Radio model. Contained in the report is a description of the simulation system and its model components, recent changes made to the system to improve performance, descriptions and objectives of sample simulations used for test and verification, and a sampling and observations of results and performance data.

  1. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-04-15

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
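The two-step extraction can be sketched in miniature: run the convolutional layers once over the whole frame, then evaluate only the cheap fully-connected part per candidate window. The single random filter, FC weights, and all sizes below are invented; a real tracker shares a deep feature map the same way:

```python
import numpy as np

rng = np.random.default_rng(1)
frame = rng.normal(size=(64, 64))          # one grayscale video frame

def conv_layer(img, kernel):
    """Valid-mode 2D correlation, standing in for the shared conv layers."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

kernel = rng.normal(size=(5, 5))           # toy convolutional filter
fc_w = rng.normal(size=(16 * 16,))         # toy fully-connected weights

# Step 1: convolutional features for the whole frame, computed once.
fmap = conv_layer(frame, kernel)           # shape (60, 60)

# Step 2: per candidate window, slice the shared map and apply the FC
# layer -- no full forward pass is repeated for each window.
def score(i, j):
    patch = fmap[i:i + 16, j:j + 16].ravel()
    return float(patch @ fc_w)

scores = [score(i, j) for i in range(0, 40, 8) for j in range(0, 40, 8)]
```

With 25 candidate windows, the expensive convolution runs once instead of 25 times, which is the saving the paper's two-step method targets.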

  2. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  3. Research on connection structure of aluminum-body bus using multi-objective topology optimization

    NASA Astrophysics Data System (ADS)

    Peng, Q.; Ni, X.; Han, F.; Rhaman, K.; Ulianov, C.; Fang, X.

    2018-01-01

    Aluminum alloy connection components in aluminum bus bodies are prone to failure, so a new aluminum alloy connection structure is designed based on a multi-objective topology optimization method. The shape of the outer contour of the connection structure is determined with topography optimization; a topology optimization model of the connection is established based on the SIMP density interpolation method; multi-objective topology optimization is carried out; and the design of the connecting piece is improved according to the optimization results. The results show that the mass of the aluminum alloy connector after topology optimization is reduced by 18%, the first six natural frequencies are improved, and both strength and stiffness performance are markedly improved.

  4. Whisker Contact Detection of Rodents Based on Slow and Fast Mechanical Inputs

    PubMed Central

    Claverie, Laure N.; Boubenec, Yves; Debrégeas, Georges; Prevost, Alexis M.; Wandersman, Elie

    2017-01-01

    Rodents use their whiskers to locate nearby objects with extreme precision. To perform such tasks, they need to detect whisker/object contacts with high temporal accuracy. This contact detection is conveyed by classes of mechanoreceptors whose neural activity is sensitive to either slow or fast time-varying mechanical stresses acting at the base of the whiskers. We developed a biomimetic approach to separate and characterize slow quasi-static and fast vibrational stress signals acting on a whisker base in realistic exploratory phases, using experiments on both real and artificial whiskers. Both slow and fast mechanical inputs are successfully captured using a mechanical model of the whisker. We present and discuss consequences of the whisking process in purely mechanical terms and hypothesize that free whisking in air sets a mechanical threshold for contact detection. The time resolution and robustness of the contact detection strategies based on either slow or fast stress signals are determined. Contact detection based on the vibrational signal is faster and more robust to exploratory conditions than that based on the slow quasi-static component, although both components allow localizing the object. PMID:28119582
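The slow/fast separation and the "free whisking sets the threshold" idea can be sketched with a moving-average split of a simulated base-stress signal. The filter length, burst parameters, and threshold rule are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1000                        # Hz, sampling rate of the simulated sensor
t = np.arange(0, 1, 1 / fs)

# Quasi-static bending (slow) plus a vibration burst at the 'contact'.
slow = 0.5 * np.sin(2 * np.pi * 2 * t)
fast = np.zeros_like(t)
contact = 600                    # sample index of the simulated contact
fast[contact:contact + 50] = 0.4 * np.sin(2 * np.pi * 150 * t[:50])
signal = slow + fast + rng.normal(scale=0.01, size=t.size)

# A moving-average low-pass isolates the slow quasi-static component;
# the residual is the fast vibrational component.
win = 25
slow_est = np.convolve(signal, np.ones(win) / win, mode="same")
fast_est = signal - slow_est

# Threshold set from a contact-free stretch, mimicking 'free whisking in air'.
baseline = np.abs(fast_est[:500])
threshold = baseline.mean() + 8 * baseline.std()
detected = int(np.argmax(np.abs(fast_est) > threshold))
```

The detection lands within a few samples of the true contact, illustrating why the fast channel gives the better time resolution.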

  5. Multi-objective robust design of energy-absorbing components using coupled process-performance simulations

    NASA Astrophysics Data System (ADS)

    Najafi, Ali; Acar, Erdem; Rais-Rohani, Masoud

    2014-02-01

    The stochastic uncertainties associated with the material, process and product are represented and propagated to process and performance responses. A finite element-based sequential coupled process-performance framework is used to simulate the forming and energy absorption responses of a thin-walled tube in a manner that both material properties and component geometry can evolve from one stage to the next for better prediction of the structural performance measures. Metamodelling techniques are used to develop surrogate models for manufacturing and performance responses. One set of metamodels relates the responses to the random variables whereas the other relates the mean and standard deviation of the responses to the selected design variables. A multi-objective robust design optimization problem is formulated and solved to illustrate the methodology and the influence of uncertainties on manufacturability and energy absorption of a metallic double-hat tube. The results are compared with those of deterministic and augmented robust optimization problems.
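The metamodel-plus-robust-objective construction can be sketched on a one-variable toy problem: fit a cheap surrogate to a noisy "simulation", then optimize a mean-minus-spread score estimated by Monte Carlo on the surrogate. The quadratic response law, noise level, and weighting are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def energy_absorbed(thickness):
    """Toy 'simulation': performance with manufacturing scatter in thickness."""
    t = thickness + rng.normal(scale=0.05)   # stochastic process variability
    return 10 * t - 2 * t**2                  # deterministic performance law

# Sample the expensive simulator and fit a quadratic metamodel (surrogate).
x = np.linspace(1.0, 3.0, 30)
y = np.array([energy_absorbed(t) for t in x])
surrogate = np.poly1d(np.polyfit(x, y, 2))

# Robust objective: mean minus a weighted standard deviation, both
# estimated cheaply by Monte Carlo on the surrogate.
def robust_score(t, k=3.0, n=500):
    samples = surrogate(t + rng.normal(scale=0.05, size=n))
    return samples.mean() - k * samples.std()

designs = np.linspace(1.0, 3.0, 21)
best = designs[int(np.argmax([robust_score(t) for t in designs]))]
```

The robust optimum gravitates toward the flat region of the response, where input scatter produces the least output spread, which is the qualitative behaviour the paper's multi-objective formulation exploits at much larger scale.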

  6. Classification and pose estimation of objects using nonlinear features

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.
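The key contrast — closed-form weights on Sigma-Pi-style product features versus iterative neural-network training — can be sketched with a generic second-order feature expansion and ordinary least squares. This is not the exact MRDF criterion, just the closed-form-over-products idea:

```python
import numpy as np

rng = np.random.default_rng(5)

def quadratic_features(X):
    """Augment inputs with all pairwise products x_i * x_j (Sigma-Pi style)."""
    n, d = X.shape
    prods = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack([np.ones(n), X] + prods)

# A target that is nonlinear in the raw inputs but linear in product terms.
X = rng.normal(size=(200, 2))
y = 1.5 * X[:, 0] * X[:, 1] - X[:, 0] + 0.5

# Closed-form weights via least squares -- no gradient-descent training loop.
Phi = quadratic_features(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
```

Because the target lies exactly in the span of the product features, the closed-form solve recovers it to machine precision, whereas an iterative Sigma-Pi network would only approach it over many epochs.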

  7. A diffuse radar scattering model from Martian surface rocks

    NASA Technical Reports Server (NTRS)

    Calvin, W. M.; Jakosky, B. M.; Christensen, P. R.

    1987-01-01

    Remote sensing of Mars has been done with a variety of instrumentation at various wavelengths. Many of these data sets can be reconciled with a surface model of bonded fines (or duricrust) which varies widely across the surface and a surface rock distribution which varies less so. A surface rock distribution map from -60 to +60 deg latitude has been generated by Christensen. Our objective is to model the diffuse component of radar reflection based on this surface distribution of rocks. The diffuse, rather than specular, scattering is modeled because the diffuse component arises due to scattering from rocks with sizes on the order of the wavelength of the radar beam. Scattering for radio waves of 12.5 cm is then indicative of the meter scale and smaller structure of the surface. The specular term is indicative of large scale surface undulations and should not be causally related to other surface physical properties. A simplified model of diffuse scattering is described along with two rock distribution models. The results of applying the models to a planet of uniform fractional rock coverage with values ranging from 5 to 20% are discussed.

  8. Multi-scale damage modelling in a ceramic matrix composite using a finite-element microstructure meshfree methodology

    PubMed Central

    2016-01-01

    The problem of multi-scale modelling of damage development in a SiC ceramic fibre-reinforced SiC matrix ceramic composite tube is addressed, with the objective of demonstrating the ability of the finite-element microstructure meshfree (FEMME) model to introduce important aspects of the microstructure into a larger scale model of the component. These are particularly the location, orientation and geometry of significant porosity and the load-carrying capability and quasi-brittle failure behaviour of the fibre tows. The FEMME model uses finite-element and cellular automata layers, connected by a meshfree layer, to efficiently couple the damage in the microstructure with the strain field at the component level. Comparison is made with experimental observations of damage development in an axially loaded composite tube, studied by X-ray computed tomography and digital volume correlation. Recommendations are made for further development of the model to achieve greater fidelity to the microstructure. This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242308

  9. Effect of Facilitation on Practice Outcomes in the National Demonstration Project Model of the Patient-Centered Medical Home

    PubMed Central

    Nutting, Paul A.; Crabtree, Benjamin F.; Stewart, Elizabeth E.; Miller, William L.; Palmer, Raymond F.; Stange, Kurt C.; Jaén, Carlos Roberto

    2010-01-01

    PURPOSE The objective of this study was to elucidate the effect of facilitation on practice outcomes in the 2-year patient-centered medical home (PCMH) National Demonstration Project (NDP) intervention, and to describe practices’ experience in implementing different components of the NDP model of the PCMH. METHODS Thirty-six family practices were randomized to a facilitated intervention group or a self-directed intervention group. We measured 3 practice-level outcomes: (1) the proportion of 39 components of the NDP model that practices implemented, (2) the aggregate patient rating of the practices’ PCMH attributes, and (3) the practices’ ability to make and sustain change, which we term adaptive reserve. We used a repeated-measures analysis of variance to test the intervention effects. RESULTS By the end of the 2 years of the NDP, practices in both facilitated and self-directed groups had at least 70% of the NDP model components in place. Implementation was relatively harder if the model component affected multiple roles and processes, required coordination across work units, necessitated additional resources and expertise, or challenged the traditional model of primary care. Electronic visits, group visits, team-based care, wellness promotion, and proactive population management presented the greatest challenges. Controlling for baseline differences and practice size, facilitated practices had greater increases in adaptive reserve (group difference by time, P = .005) and the proportion of NDP model components implemented (group difference by time, P=.02); the latter increased from 42% to 72% in the facilitated group and from 54% to 70% in the self-directed group. Patient ratings of the practices’ PCMH attributes did not differ between groups and, in fact, diminished in both of them. CONCLUSIONS Highly motivated practices can implement many components of the PCMH in 2 years, but apparently at a cost of diminishing the patient’s experience of care. 
Intense facilitation increases the number of components implemented and improves practices’ adaptive reserve. Longer follow-up is needed to assess the sustained and evolving effects of moving independent practices toward PCMHs. PMID:20530393

  10. Connectionist model-based stereo vision for telerobotics

    NASA Technical Reports Server (NTRS)

    Hoff, William; Mathis, Donald

    1989-01-01

    Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
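A parameter network behaves like a Hough-style accumulator: one unit per combination of parameter values, with activity accumulating evidence for that hypothesis. A minimal sketch for the planar patch p = f(h, m_x, m_y), with invented grid resolution and data:

```python
import numpy as np

rng = np.random.default_rng(2)

# 3D points from the plane z = h + mx*x + my*y, with small depth noise.
h_true, mx_true, my_true = 1.0, 0.5, -0.5
x = rng.uniform(-1, 1, 100)
y = rng.uniform(-1, 1, 100)
z = h_true + mx_true * x + my_true * y + rng.normal(scale=0.01, size=100)

# One 'unit' per (h, mx, my) cell; activity counts consistent observations.
hs = np.linspace(0, 2, 21)               # intercept grid, step 0.1
ms = np.linspace(-1, 1, 21)              # slope grid, step 0.1
H = hs[:, None, None]
MX = ms[None, :, None]
MY = ms[None, None, :]
activity = np.zeros((21, 21, 21))
for xi, yi, zi in zip(x, y, z):
    # Each point 'votes' for every unit whose plane passes near it.
    activity += np.abs(zi - (H + MX * xi + MY * yi)) < 0.05

i, j, k = np.unravel_index(np.argmax(activity), activity.shape)
estimate = (hs[i], ms[j], ms[k])
```

The most active unit recovers the true plane parameters; in the paper the analogous activities are refined by gradient descent on an energy function rather than by a hard argmax.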

  11. A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.

    PubMed Central

    Law, V.; Goldberg, H. S.; Jones, P.; Safran, C.

    1998-01-01

    One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system. PMID:9929252

  12. A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.

    PubMed

    Law, V; Goldberg, H S; Jones, P; Safran, C

    1998-01-01

    One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system.

  13. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
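The alternation described — solve each component problem separately, then update multipliers to pull the component models toward a common model — is the consensus form of the augmented Lagrangian method (ADMM). A toy version with two quadratic component objectives standing in for two data subsets (all values invented, nothing seismic):

```python
import numpy as np

# Component objectives f_i(m) = 0.5 * a_i * (m - b_i)^2 for two 'data subsets'.
a = np.array([1.0, 3.0])
b = np.array([0.0, 4.0])
# Minimizer of f_1 + f_2, for reference: the a-weighted mean of b (= 3.0).
m_star = (a * b).sum() / a.sum()

rho = 1.0                 # augmented-Lagrangian penalty parameter
m = np.zeros(2)           # per-component models
u = np.zeros(2)           # scaled Lagrange multipliers
z = 0.0                   # consensus (common) model
for _ in range(100):
    # Separate minimization of each component's augmented objective.
    m = (a * b + rho * (z - u)) / (a + rho)
    # Consensus update steers the components toward a common model.
    z = (m + u).mean()
    # Multiplier update penalizes disagreement with the consensus.
    u += m - z
```

The consensus variable z converges to the full-problem minimizer even though each component problem is only ever solved on its own data, which is exactly the property that lets a large joint inversion be split across data subsets.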

  14. Regional climate change predictions from the Goddard Institute for Space Studies high resolution GCM

    NASA Technical Reports Server (NTRS)

    Crane, Robert G.; Hewitson, Bruce

    1990-01-01

    Model simulations of global climate change are seen as an essential component of any program aimed at understanding human impact on the global environment. A major weakness of current general circulation models (GCMs), however, is their inability to predict reliably the regional consequences of a global scale change, and it is these regional scale predictions that are necessary for studies of human/environmental response. This research is directed toward the development of a methodology for the validation of the synoptic scale climatology of GCMs. This is developed with regard to the Goddard Institute for Space Studies (GISS) GCM Model 2, with the specific objective of using the synoptic circulation from a doubled CO2 simulation to estimate regional climate change over North America, south of Hudson Bay. This progress report is specifically concerned with validating the synoptic climatology of the GISS GCM, and developing the transfer function to derive grid-point temperatures from the synoptic circulation. Principal Components Analysis is used to characterize the primary modes of the spatial and temporal variability in the observed and simulated climate, and the model validation is based on correlations between component loadings, and power spectral analysis of the component scores. The results show that the high resolution GISS model does an excellent job of simulating the synoptic circulation over the U.S., and that grid-point temperatures can be predicted with reasonable accuracy from the circulation patterns.

  15. Enhanced index tracking modeling in portfolio optimization with mixed-integer programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of portfolio management in stock market investment. Enhanced index tracking aims to construct an optimal portfolio that generates excess return over the return achieved by the stock market index, without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using a mixed-integer programming model which adopts a regression approach in order to generate a higher portfolio mean return than the stock market index return. In this study, the data consist of 24 component stocks of the Malaysia market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, from January 2010 until December 2012. The results of this study show that the optimal portfolio of the mixed-integer programming model is able to generate a higher mean return than the FTSE Bursa Malaysia Kuala Lumpur Composite Index return while selecting only 30% of the total stock market index components.
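    A minimal sketch of the idea, using synthetic return data and a brute-force subset search in place of a MIP solver; the cardinality limit and the regression step mirror the formulation described above, but all numbers are invented.

```python
import numpy as np
from itertools import combinations

# Illustrative enhanced index tracking with a cardinality limit k, solved
# by brute force over stock subsets (synthetic returns, made-up numbers).
rng = np.random.default_rng(1)
n_stocks, n_periods, k = 6, 60, 2
index_r = rng.normal(0.005, 0.02, n_periods)                  # index returns
stock_r = index_r[:, None] + rng.normal(0.001, 0.01, (n_periods, n_stocks))

best = None
for cand in combinations(range(n_stocks), k):
    R = stock_r[:, cand]
    # Regression step: weights that best reproduce the index returns
    w, *_ = np.linalg.lstsq(R, index_r, rcond=None)
    port_r = R @ w
    excess = port_r.mean() - index_r.mean()      # enhanced-return term
    tracking = np.std(port_r - index_r)          # tracking-error term
    score = excess - tracking
    if best is None or score > best[0]:
        best = (score, cand, w)

score, subset, w = best
print(subset, round(score, 5))  # chosen stocks and their combined score
```

    A real formulation would encode the subset choice with binary variables and hand the problem to a MIP solver; enumeration is only feasible here because the universe is tiny.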

  16. Variability and Spectral Studies of Luminous Seyfert 1 Galaxy Fairall 9. Search for the Reflection Component in a Quasar: RXTE and ASCA Observation of a Nearby Radio-Quiet Quasar MR 2251-178

    NASA Technical Reports Server (NTRS)

    Leighly, Karen M.

    1999-01-01

    Monitoring observations of the luminous Seyfert 1 galaxy Fairall 9 were performed with RXTE (X-Ray Timing Explorer) at intervals of 3 days for one year. The purpose of the observations was to study the variability of Fairall 9 and compare the results with those from the radio-loud object 3C 390.3. The data have been received and analysis is underway, using the new background model. An observation of the quasar MR 2251-178 was made in order to determine whether or not it has a reflection component. Older background models gave an unacceptable subtraction, and analysis is underway using the new background model. The observation of NGC 6300 showed that the X-ray spectrum from this Seyfert 2 galaxy appears to be dominated by Compton reflection.

  17. Health-aware Model Predictive Control of Pasteurization Plant

    NASA Astrophysics Data System (ADS)

    Karimi Pour, Fatemeh; Puig, Vicenç; Ocampo-Martinez, Carlos

    2017-01-01

    In order to optimize the trade-off between component life and energy consumption, the integration of system health management and control modules is required. This paper proposes the integration of model predictive control (MPC) with a fatigue estimation approach that minimizes the damage to the components of a pasteurization plant. The fatigue estimation is assessed with the rainflow counting algorithm. Using data from this algorithm, a simplified model that characterizes the health of the system is developed and integrated with MPC. The MPC controller objective is modified by adding an extra criterion that takes into account the accumulated damage. However, adding this extra criterion creates a steady-state offset. Finally, by including an integral action in the MPC controller, the steady-state error for regulation purposes is eliminated. The proposed control scheme is validated in simulation using a simulator of a utility-scale pasteurization plant.
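    A minimal sketch of the control idea on an invented scalar plant: the one-step cost adds an input-increment penalty as a crude stand-in for the accumulated-damage criterion, and an integral state removes a steady-state offset (induced here by an unmodeled disturbance rather than by the damage term itself).

```python
import numpy as np

# Toy health-aware MPC on the scalar plant x+ = a*x + b*u + d.
# The one-step quadratic cost penalizes tracking error plus input
# increments (a crude proxy for accumulated damage); the unmodeled
# disturbance d creates a steady-state offset, which the added
# integral state removes. All plant numbers are made up.
a, b, d, r = 0.9, 0.5, 0.1, 1.0   # plant gain, input gain, disturbance, setpoint
lam, ki = 0.5, 0.1                # damage-term weight, integral gain

def run(integral_action: bool) -> float:
    x, u_prev, acc = 0.0, 0.0, 0.0
    for _ in range(500):
        # Closed-form minimizer of (a*x + b*u - r)**2 + lam*(u - u_prev)**2
        u = (b * (r - a * x) + lam * u_prev) / (b * b + lam)
        if integral_action:
            acc += r - x          # accumulate tracking error
            u += ki * acc
        x = a * x + b * u + d     # plant step with unmodeled offset d
        u_prev = u
    return x

print(run(False))  # settles at r + d = 1.1: steady-state offset
print(run(True))   # integral action restores x to the setpoint 1.0
```

    The paper's controller optimizes over a full prediction horizon with a rainflow-based damage model; the one-step closed-form update above only illustrates why an integral state is needed once extra criteria pull the regulator away from the setpoint.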

  18. Evaluation of habitat quality for selected wildlife species associated with back channels.

    USGS Publications Warehouse

    Anderson, James T.; Zadnik, Andrew K.; Wood, Petra Bohall; Bledsoe, Kerry

    2013-01-01

    The islands and associated back channels on the Ohio River, USA, are believed to provide critical habitat features for several wildlife species. However, few studies have quantitatively evaluated habitat quality in these areas. Our main objective was to evaluate the habitat quality of back and main channel areas for several species using habitat suitability index (HSI) models. To test the effectiveness of these models, we attempted to relate HSI scores and the variables measured for each model with measures of relative abundance for the model species. The mean belted kingfisher (Ceryle alcyon) HSI was greater on the main than back channel. However, the model failed to predict kingfisher abundance. The mean reproduction component of the great blue heron (Ardea herodias) HSI, total common muskrat (Ondatra zibethicus) HSI, winter cover component of the snapping turtle (Chelydra serpentina) HSI, and brood-rearing component of the wood duck (Aix sponsa) HSI were all greater on the back than main channel, and were positively related with the relative abundance of each species. We found that island back channels provide characteristics not found elsewhere on the Ohio River and warrant conservation as important riparian wildlife habitat. The effectiveness of using HSI models to predict species abundance on the river was mixed. Modifications to several of the models are needed to improve their use on the Ohio River and, likely, other large rivers.

  19. The Separation of Between-person and Within-person Components of Individual Change Over Time: A Latent Curve Model with Structured Residuals

    PubMed Central

    Curran, Patrick J.; Howard, Andrea L.; Bainter, Sierra; Lane, Stephanie T.; McGinley, James S.

    2014-01-01

    Objective Although recent statistical and computational developments allow for the empirical testing of psychological theories in ways not previously possible, one particularly vexing challenge remains: how to optimally model the prospective, reciprocal relations between two constructs as they developmentally unfold over time. Several analytic methods currently exist that attempt to model these types of relations, and each approach is successful to varying degrees. However, none provide the unambiguous separation of between-person and within-person components of stability and change over time, components that are often hypothesized to exist in the psychological sciences. The goal of our paper is to propose and demonstrate a novel extension of the multivariate latent curve model to allow for the disaggregation of these effects. Method We begin with a review of the standard latent curve models and describe how these primarily capture between-person differences in change. We then extend this model to allow for regression structures among the time-specific residuals to capture within-person differences in change. Results We demonstrate this model using an artificial data set generated to mimic the developmental relation between alcohol use and depressive symptomatology spanning five repeated measures. Conclusions We obtain a specificity of results from the proposed analytic strategy that are not available from other existing methodologies. We conclude with potential limitations of our approach and directions for future research. PMID:24364798
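    The between-person versus within-person distinction targeted by this model can be illustrated with a small simulation, assuming invented parameters: latent intercepts and slopes carry the between-person change, while an AR(1) regression among the time-specific residuals supplies the within-person structure.

```python
import numpy as np

# Simulate a latent curve with structured (AR(1)) residuals:
# each person gets a latent intercept and slope (between-person change),
# and time-specific residuals partly carry over from one occasion to the
# next (within-person change). All parameters are invented.
rng = np.random.default_rng(4)
n_person, n_time, phi = 500, 5, 0.4     # phi: within-person carryover

t = np.arange(n_time)
intercept = rng.normal(0.0, 1.0, n_person)   # between-person level
slope = rng.normal(0.5, 0.2, n_person)       # between-person change

resid = np.zeros((n_person, n_time))
resid[:, 0] = rng.normal(0.0, 1.0, n_person)
for j in range(1, n_time):
    # structured residual: today's deviation regresses on yesterday's
    resid[:, j] = phi * resid[:, j - 1] + rng.normal(0.0, 1.0, n_person)

y = intercept[:, None] + slope[:, None] * t + resid

# Recover the within-person carryover by regressing deviations on their lag
# (the true deviations are known here; a real analysis estimates them).
phi_hat = np.sum(resid[:, 1:] * resid[:, :-1]) / np.sum(resid[:, :-1] ** 2)
print(round(phi_hat, 2))  # close to the generating carryover of 0.4
```

    Ignoring the residual structure would fold this within-person carryover into the between-person growth factors, which is exactly the confound the proposed model is designed to separate.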

  20. Glomerular Activity Patterns Evoked by Natural Odor Objects in the Rat Olfactory Bulb Are Related to Patterns Evoked by Major Odorant Components

    PubMed Central

    Johnson, Brett A.; Ong, Joan; Leon, Michael

    2014-01-01

    To determine how responses evoked by natural odorant mixtures compare to responses evoked by individual odorant chemicals, we mapped 2-deoxyglucose uptake during exposures to vapors arising from a variety of odor objects that may be important to rodents in the wild. We studied 21 distinct natural odor stimuli ranging from possible food sources such as fruits, vegetables, and meats to environmental odor objects such as grass, herbs, and tree leaves. The natural odor objects evoked robust and surprisingly focal patterns of 2-deoxyglucose uptake involving clusters of neighboring glomeruli, thereby resembling patterns evoked by pure chemicals. Overall, the patterns were significantly related to patterns evoked by monomolecular odorant components that had been studied previously. Object patterns also were significantly related to the molecular features present in the mixture components. Despite these overall relationships, there were individual examples of object patterns that were simpler than might have been predicted given the multiplicity of components present in the vapors. In these cases, the object patterns lacked certain responses evoked by their major odorant mixture components. These data suggest the possibility of mixture response interactions and provide a foundation for understanding the neural coding of natural odor stimuli. PMID:20187145

  1. Global water cycle

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin; Goodman, Steven J.; Christy, John R.; Fitzjarrald, Daniel E.; Chou, Shi-Hung; Crosson, William; Wang, Shouping; Ramirez, Jorge

    1993-01-01

    This research is the MSFC component of a joint MSFC/Pennsylvania State University Eos Interdisciplinary Investigation on the global water cycle extension across the earth sciences. The primary long-term objective of this investigation is to determine the scope and interactions of the global water cycle with all components of the Earth system and to understand how it stimulates and regulates change on both global and regional scales. Significant accomplishments in the past year are presented and include the following: (1) water vapor variability; (2) multi-phase water analysis; (3) global modeling; and (4) optimal precipitation and stream flow analysis and hydrologic processes.

  2. The lift-fan aircraft: Lessons learned

    NASA Technical Reports Server (NTRS)

    Deckert, Wallace H.

    1995-01-01

    This report summarizes the highlights and results of a workshop held at NASA Ames Research Center in October 1992. The objective of the workshop was a thorough review of the lessons learned from past research on lift fans and lift-fan aircraft, models, designs, and components. The scope included conceptual design studies, wind tunnel investigations, propulsion system components, piloted simulation, flight of aircraft such as the SV-5A and SV-5B, and a recent lift-fan aircraft development project. The report includes a brief summary of five technical presentations that addressed the subject "The Lift-Fan Aircraft: Lessons Learned."

  3. Empowerment model of biomass in west java

    NASA Astrophysics Data System (ADS)

    Mulyana, C.; Fitriani, N. I.; Saad, A.; Yuliah, Y.

    2017-06-01

    Scarcity of fossil energy accelerates the search for renewable energy sources as substitutes. In West Java, biomass has the potential to be developed into bio-briquettes because the resources are abundant. The objectives of this research are to map the potency of biomass as bio-briquettes in West Java and to build a model for empowering that biomass potential, involving five fundamental steps: raw material, pre-processing, conversion mechanism, products, and end user. The model focuses on three product forms (solid, liquid, and gas) and involves the community as the owner of the biomass, district government, academic and research communities, related industries as users of biomass, the central government as policy holder, and investors as funders. The model describes their respective roles and their mutual relationships with one another so that bio-briquettes can be realized as a substitute for fossil fuels. Application of this model will provide benefits in energy renewability, the environment, socio-economics, and energy security.

  4. Compact Objects In Binary Systems: Formation and Evolution of X-ray Binaries and Tides in Double White Dwarfs

    NASA Astrophysics Data System (ADS)

    Valsecchi, Francesca

    Binary star systems hosting black holes, neutron stars, and white dwarfs are unique laboratories for investigating both extreme physical conditions, and stellar and binary evolution. Black holes and neutron stars are observed in X-ray binaries, where mass accretion from a stellar companion renders them X-ray bright. Although instruments like Chandra have revolutionized the field of X-ray binaries, our theoretical understanding of their origin and formation lags behind. Progress can be made by unravelling the evolutionary history of observed systems. As part of my thesis work, I have developed an analysis method that uses detailed stellar models and all the observational constraints of a system to reconstruct its evolutionary path. This analysis models the orbital evolution from compact-object formation to the present time, the binary orbital dynamics due to explosive mass loss and a possible kick at core collapse, and the evolution from the progenitor's Zero Age Main Sequence to compact-object formation. This method led to a theoretical model for M33 X-7, one of the most massive X-ray binaries known and originally marked as an evolutionary challenge. Compact objects are also expected gravitational wave (GW) sources. In particular, double white dwarfs are both guaranteed GW sources and observed electromagnetically. Although known systems show evidence of tidal deformation and successful GW astronomy requires realistic models of the sources, detached double white dwarfs are generally approximated as point masses. For the first time, I used realistic models to study tidally-driven periastron precession in eccentric binaries. I demonstrated that its imprint on the GW signal yields constraints on the components' masses and that the source would be misclassified if tides are neglected. Beyond this adiabatic precession, tidal dissipation creates a sink of orbital angular momentum.
Its efficiency is strongest when tides are dynamic and excite the components' free oscillation modes. Accounting for this effect will determine whether our interpretation of current and future observations will constrain the sources' true physical properties. To investigate dynamic tides I have developed CAFein, a novel code that calculates forced non-adiabatic stellar oscillations using a highly stable and efficient numerical method.

  5. Application of a single-objective, hybrid genetic algorithm approach to pharmacokinetic model building.

    PubMed

    Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R

    2012-08-01

    A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. 
For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three compounds. The root mean squared error and absolute mean prediction error of the best single-objective hybrid genetic algorithm candidates were a median of 0.2 points higher (range of 38.9 point decrease to 27.3 point increase) and 0.02 points lower (range of 0.98 point decrease to 0.74 point increase), respectively, than that of the final stepwise models. In addition, the best single-objective, hybrid genetic algorithm candidate models had successful convergence and covariance steps for each compound, used the same compartment structure as the manual stepwise approach for 6 of 7 (86 %) compounds, and identified 54 % (7 of 13) of covariates included by the manual stepwise approach and 16 covariate relationships not included by manual stepwise models. The model parameter values between the final manual stepwise and best single-objective, hybrid genetic algorithm models differed by a median of 26.7 % (q₁ = 4.9 % and q₃ = 57.1 %). Finally, the single-objective, hybrid genetic algorithm approach was able to identify models capable of estimating absorption rate parameters for four compounds that the manual stepwise approach did not identify. The single-objective, hybrid genetic algorithm represents a general pharmacokinetic model building methodology whose ability to rapidly search the feasible solution space leads to nearly equivalent or superior model fits to pharmacokinetic data.
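    As a hedged sketch of the single-objective genetic algorithm idea (not the NONMEM-coupled implementation used in the study), the toy below evolves bit masks over candidate covariates and scores each mask with the AIC of an ordinary least-squares fit on synthetic data.

```python
import numpy as np

# Toy single-objective GA for covariate selection: each chromosome is a
# bit mask over candidate covariates; fitness is the AIC of an OLS fit
# (a stand-in for NONMEM parameter estimation). Data are synthetic, with
# covariates 0 and 2 truly predictive.
rng = np.random.default_rng(2)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0.0, 0.5, n)

def aic(mask: np.ndarray) -> float:
    cols = np.flatnonzero(mask)
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (len(cols) + 1)

pop = rng.integers(0, 2, size=(20, p))        # random initial population
for _ in range(40):
    fit = np.array([aic(m) for m in pop])
    parents = pop[np.argsort(fit)[:10]]       # keep the lowest-AIC half
    children = []
    for _ in range(10):
        pa, pb = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, p)
        child = np.concatenate([pa[:cut], pb[cut:]])    # crossover
        flip = rng.random(p) < 0.1                      # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmin([aic(m) for m in pop])]
print(np.flatnonzero(best))  # selected covariate indices
```

    The hybrid algorithm in the paper additionally searches over compartment structure, variability terms, and residual-error models, and refines GA candidates with local search; the skeleton of encode, score, select, recombine, and mutate is the same.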

  6. Evaluation of the Regional Atmospheric Modeling System in the Eastern Range Dispersion Assessment System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan

    2000-01-01

    The Applied Meteorology Unit is conducting an evaluation of the Regional Atmospheric Modeling System (RAMS) contained within the Eastern Range Dispersion Assessment System (ERDAS). ERDAS provides emergency response guidance for operations at the Cape Canaveral Air Force Station and the Kennedy Space Center in the event of an accidental hazardous material release or aborted vehicle launch. The prognostic data from RAMS is available to ERDAS for display and is used to initialize the 45th Range Safety (45 SW/SE) dispersion model. Thus, the accuracy of the 45 SW/SE dispersion model is dependent upon the accuracy of RAMS forecasts. The RAMS evaluation task consists of an objective and subjective component for the Florida warm and cool seasons of 1999-2000. The objective evaluation includes gridded and point error statistics at surface and upper-level observational sites, a comparison of the model errors to a coarser grid configuration of RAMS, and a benchmark of RAMS against the widely accepted Eta model. The warm-season subjective evaluation involves a verification of the onset and movement of the Florida east coast sea breeze and RAMS forecast precipitation. This interim report provides a summary of the RAMS objective and subjective evaluation for the 1999 Florida warm season only.

  7. FNAS/summer faculty fellowship research continuation program. Task 6: Integrated model development for liquid fueled rocket propulsion systems. Task 9: Aspects of model-based rocket engine condition monitoring and control

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Helmicki, Arthur J.

    1993-01-01

    The objective of Phase I of this research effort was to develop an advanced mathematical-empirical model of SSME steady-state performance. Task 6 of Phase I is to develop a component-specific modification strategy for baseline-case influence coefficient matrices. This report describes the background of SSME performance characteristics and provides a description of the control-variable basis of three different gains models. The procedure used to establish influence coefficients for each of these three models is also described. Gains model analysis results are compared to Rocketdyne's power balance model (PBM).

  8. Dragon pulse information management system (DPIMS): A unique model-based approach to implementing domain agnostic system of systems and behaviors

    NASA Astrophysics Data System (ADS)

    Anderson, Thomas S.

    2016-05-01

    The Global Information Network Architecture is an information technology based on Vector Relational Data Modeling, a unique computational paradigm, network certified for DoD use by the US Army as the Dragon Pulse Information Management System. This network-available modeling environment is for modeling models: models are configured using domain-relevant semantics, use network-available systems, sensors, databases, and services as loosely coupled component objects, and are executable applications. Solutions are based on mission tactics, techniques, and procedures and subject matter input. Three recent Army use cases are discussed: a) an ISR system of systems; b) modeling and simulation behavior validation; c) a networked digital library with behaviors.

  9. Neural Networks for Segregation of Multiple Objects: Visual Figure-Ground Separation and Auditory Pitch Perception.

    NASA Astrophysics Data System (ADS)

    Wyse, Lonce

    An important component of perceptual object recognition is the segmentation into coherent perceptual units of the "blooming buzzing confusion" that bombards the senses. The work presented herein develops neural network models of some key processes of pre-attentive vision and audition that serve this goal. A neural network model, called an FBF (Feature -Boundary-Feature) network, is proposed for automatic parallel separation of multiple figures from each other and their backgrounds in noisy images. Figure-ground separation is accomplished by iterating operations of a Boundary Contour System (BCS) that generates a boundary segmentation of a scene, and a Feature Contour System (FCS) that compensates for variable illumination and fills-in surface properties using boundary signals. A key new feature is the use of the FBF filling-in process for the figure-ground separation of connected regions, which are subsequently more easily recognized. The new CORT-X 2 model is a feed-forward version of the BCS that is designed to detect, regularize, and complete boundaries in up to 50 percent noise. It also exploits the complementary properties of on-cells and off -cells to generate boundary segmentations and to compensate for boundary gaps during filling-in. In the realm of audition, many sounds are dominated by energy at integer multiples, or "harmonics", of a fundamental frequency. For such sounds (e.g., vowels in speech), the individual frequency components fuse, so that they are perceived as one sound source with a pitch at the fundamental frequency. Pitch is integral to separating auditory sources, as well as to speaker identification and speech understanding. A neural network model of pitch perception called SPINET (SPatial PItch NETwork) is developed and used to simulate a broader range of perceptual data than previous spectral models. 
The model employs a bank of narrowband filters as a simple model of basilar membrane mechanics, spectral on-center off-surround competitive interactions, and a "harmonic sieve" mechanism whereby the strength of a pitch depends only on spectral regions near harmonics. The model is evaluated using data involving mistuned components, shifted harmonics, complex tones with varying phase relationships, and continuous spectra such as rippled noise and narrow noise bands.

  10. Adsorption of lignocelluloses of model pre-hydrolysis liquor on activated carbon.

    PubMed

    Fatehi, Pedram; Ryan, Jennifer; Ni, Yonghao

    2013-03-01

    The main objective of this work was to study the adsorption behavior, on activated carbon, of various components dissolved in the pre-hydrolysis liquor of the kraft process. In this work, model pre-hydrolysis liquor (PHL) solutions (MPHLs) were prepared by mixing various commercially available monosugars, xylan, lignin, and furfural, and their adsorption performance on activated carbon (AC) was investigated. In singular (one-component) MPHL/AC systems, furfural had the maximum and xylose the minimum adsorption, and the adsorption of the monosugars on AC was essentially similar. Also, polydiallyldimethylammonium chloride (PDADMAC) was added (0.5 g/l) to the singular xylan or lignin MPHL/AC system, which increased the lignin and xylan adsorption to 350 and 190 mg/g on AC, respectively.

  11. Global water cycle

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Christy, John R.; Goodman, Steven J.; Miller, Tim L.; Fitzjarrald, Dan; Lapenta, Bill; Wang, Shouping

    1991-01-01

    The primary objective is to determine the scope and interactions of the global water cycle with all components of the Earth system and to understand how it stimulates and regulates changes on both global and regional scales. The following subject areas are covered: (1) water vapor variability; (2) multi-phase water analysis; (3) diabatic heating; (4) MSU (Microwave Sounding Unit) temperature analysis; (5) Optimal precipitation and streamflow analysis; (6) CCM (Community Climate Model) hydrological cycle; (7) CCM1 climate sensitivity to lower boundary forcing; and (8) mesoscale modeling of atmosphere/surface interaction.

  12. Building Interoperable FHIR-Based Vocabulary Mapping Services: A Case Study of OHDSI Vocabularies and Mappings.

    PubMed

    Jiang, Guoqian; Kiefer, Richard; Prud'hommeaux, Eric; Solbrig, Harold R

    2017-01-01

    The OHDSI Common Data Model (CDM) is a deep information model, in which its vocabulary component plays a critical role in enabling consistent coding and query of clinical data. The objective of the study is to create methods and tools to expose the OHDSI vocabularies and mappings as the vocabulary mapping services using two HL7 FHIR core terminology resources ConceptMap and ValueSet. We discuss the benefits and challenges in building the FHIR-based terminology services.

  13. A stochastic atmospheric model for remote sensing applications

    NASA Technical Reports Server (NTRS)

    Turner, R. E.

    1983-01-01

    There are many factors which reduce the accuracy of classification of objects in the satellite remote sensing of Earth's surface. One important factor is the variability in the scattering and absorptive properties of atmospheric components such as particulates and the variable gases. For multispectral remote sensing of the Earth's surface in the visible and infrared parts of the spectrum, the atmospheric particulates are a major source of variability in the received signal. It is difficult to design a sensor which will determine the unknown atmospheric components by remote sensing methods, at least to the accuracy needed for multispectral classification. The problem of spatial and temporal variations in the atmospheric quantities which can affect the measured radiances is examined. A method based upon the stochastic nature of the atmospheric components was developed, and, using actual data, the statistical parameters needed for inclusion in a radiometric model were generated. Methods are then described for an improved correction of radiances. These algorithms result in a more accurate and consistent classification procedure.

  14. Towards a Three-Component Model of Fan Loyalty: A Case Study of Chinese Youth

    PubMed Central

    Zhang, Xiao-xiao; Liu, Li; Zhao, Xian; Zheng, Jian; Yang, Meng; Zhang, Ji-qi

    2015-01-01

    The term “fan loyalty” refers to the loyalty felt and expressed by a fan towards the object of his/her fanaticism in both everyday and academic discourses. However, much of the literature on fan loyalty has paid little attention to the topic from the perspective of youth pop culture. The present study explored the meaning of fan loyalty in the context of China. Data were collected by the method of in-depth interviews with 16 young Chinese people aged between 19 and 25 years who currently or once were pop fans. The results indicated that fan loyalty entails three components: involvement, satisfaction, and affiliation. These three components regulate the process of fan loyalty development, which can be divided into four stages: inception, upgrade, zenith, and decline. This model provides a conceptual explanation of why and how young Chinese fans are loyal to their favorite stars. The implications of the findings are discussed. PMID:25886557

  15. Seasonal-Scale Optimization of Conventional Hydropower Operations in the Upper Colorado System

    NASA Astrophysics Data System (ADS)

    Bier, A.; Villa, D.; Sun, A.; Lowry, T. S.; Barco, J.

    2011-12-01

    Sandia National Laboratories is developing the Hydropower Seasonal Concurrent Optimization for Power and the Environment (Hydro-SCOPE) tool to examine basin-wide conventional hydropower operations at seasonal time scales. This tool is part of an integrated, multi-laboratory project designed to explore different aspects of optimizing conventional hydropower operations. The Hydro-SCOPE tool couples a one-dimensional reservoir model with a river routing model to simulate hydrology and water quality. An optimization engine wraps around this model framework to solve for long-term operational strategies that best meet the specific objectives of the hydrologic system while honoring operational and environmental constraints. The optimization routines are provided by Sandia's open source DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) software. Hydro-SCOPE allows for multi-objective optimization, which can be used to gain insight into the trade-offs that must be made between objectives. The Hydro-SCOPE tool is being applied to the Upper Colorado Basin hydrologic system. This system contains six reservoirs, each with its own set of objectives (such as maximizing revenue, optimizing environmental indicators, meeting water use needs, or other objectives) and constraints. This leads to a large optimization problem with strong connectedness between objectives. The systems-level approach used by the Hydro-SCOPE tool allows simultaneous analysis of these objectives, as well as understanding of potential trade-offs related to different objectives and operating strategies. The seasonal-scale tool will be tightly integrated with the other components of this project, which examine day-ahead and real-time planning, environmental performance, hydrologic forecasting, and plant efficiency.
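    The trade-off analysis behind such multi-objective optimization can be sketched as follows; the two objective functions and the candidate schedules are synthetic stand-ins, and a brute-force non-dominated filter replaces the optimizers a toolkit like DAKOTA would supply.

```python
import numpy as np

# Illustrative trade-off analysis: score random candidate release
# schedules against two competing objectives (revenue vs. a synthetic
# environmental indicator) and keep the non-dominated (Pareto-optimal)
# operating strategies.
rng = np.random.default_rng(3)
releases = rng.uniform(0.0, 1.0, size=(200, 12))  # monthly release fractions

revenue = releases.sum(axis=1)             # more water through the turbines
env = -np.abs(releases - 0.5).sum(axis=1)  # prefer steady, moderate flows

scores = np.column_stack([revenue, env])   # maximize both objectives
pareto = []
for i, s in enumerate(scores):
    # i is dominated if some schedule is at least as good in both
    # objectives and strictly better in at least one
    dominated = np.any(np.all(scores >= s, axis=1) &
                       np.any(scores > s, axis=1))
    if not dominated:
        pareto.append(i)

print(len(pareto), "non-dominated schedules out of", len(scores))
```

    The surviving schedules form the Pareto front: moving along it trades revenue against the environmental indicator, which is the kind of insight the systems-level tool is meant to expose across the six coupled reservoirs.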

  16. A Test of Pre-Main-Sequence Evolutionary Models across the Stellar/Substellar Boundary Based on Spectra of the Young Quadruple GG Tauri

    NASA Astrophysics Data System (ADS)

    White, Russel J.; Ghez, A. M.; Reid, I. Neill; Schultz, Greg

    1999-08-01

    We present spatially separated optical spectra of the components of the young hierarchical quadruple GG Tau. Spectra of GG Tau Aa and Ab (separation 0.25"~35 AU) were obtained with the Faint Object Spectrograph on board the Hubble Space Telescope. Spectra of GG Tau Ba and Bb (separation 1.48"~207 AU) were obtained with both the HIRES and the LRIS spectrographs on the W. M. Keck telescopes. The components of this minicluster, which span a wide range in spectral type (K7-M7), are used to test both evolutionary models and the temperature scale for very young, low-mass stars under the assumption of coeval formation. Of the evolutionary models tested, those of Baraffe et al. yield the most consistent ages when combined with a temperature scale intermediate between that of dwarfs and giants. The version of the Baraffe et al. models computed with a mixing length nearly twice the pressure scale height is of particular interest, as it predicts masses for GG Tau Aa and Ab that are in agreement with their dynamical mass estimate. Using this evolutionary model and a coeval (at 1.5 Myr) temperature scale, we find that the coldest component of the GG Tau system, GG Tau Bb, is substellar with a mass of 0.044+/-0.006 Msolar. This brown dwarf companion is especially intriguing as it shows signatures of accretion, although this accretion is not likely to alter its mass significantly. GG Tau Bb is currently the lowest mass, spectroscopically confirmed companion to a T Tauri star, and is one of the coldest, lowest mass T Tauri objects in the Taurus-Auriga star-forming region. Based partly on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.

  17. Toward Reusable Graphics Components in Ada

    DTIC Science & Technology

    1993-03-01

    Then alternatives for obtaining well-engineered reusable software components were examined. Finally, the alternatives were analyzed, and the most...reusable software components. Chapter 4 describes detailed design and implementation strategies in building a well-engineered reusable set of components in...study. 2.2 The Object-Oriented Paradigm 2.2.1 The Need for Object-Oriented Techniques. Among software engineers the software crisis is a well known

  18. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis aims to combine the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably.
The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
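
    The thesis's central idea above — a non-convex penalty constrained so that the overall objective remains convex — is exemplified by the minimax-concave (MC) penalty, whose proximal operator is the firm-thresholding function. The sketch below contrasts it with soft thresholding (the ℓ1 prox); this particular penalty and these parameter names are a standard textbook instance, not necessarily the thesis's exact formulation.

```python
import math

def soft(y, lam):
    """Soft threshold: prox of lam*|x| (the l1 penalty).
    Note it biases every surviving coefficient toward zero by lam."""
    return math.copysign(max(abs(y) - lam, 0.0), y)

def firm(y, lam, mu):
    """Firm threshold (Gao & Bruce): prox of the minimax-concave penalty,
    with mu > lam bounding the non-convexity so that the combined
    data-fidelity + penalty objective stays convex. Unlike soft
    thresholding, it leaves large coefficients (|y| >= mu) unbiased."""
    a = abs(y)
    if a <= lam:
        return 0.0
    if a >= mu:
        return y          # large coefficients pass through unchanged
    return math.copysign(mu * (a - lam) / (mu - lam), y)
```

For example, with `lam=1, mu=2`, a coefficient of 3.0 survives soft thresholding as 2.0 (biased) but firm thresholding as 3.0 (unbiased), which is exactly the under-estimation issue the abstract describes.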

  19. A systems approach to modeling Community-Based Environmental Monitoring: a case of participatory water quality monitoring in rural Mexico.

    PubMed

    Burgos, Ana; Páez, Rosaura; Carmona, Estela; Rivas, Hilda

    2013-12-01

    Community-Based Environmental Monitoring (CBM) is a social practice that makes a valuable contribution to environmental management and the construction of active societies for a sustainable future. However, its documentation and analysis show deficiencies that hinder the contrast and comparison of processes and effects. Based on a systems approach, this article presents a model of CBM to orient the assessment of programs, with heuristic or practical goals. At the focal level, the model comprises three components (the social subject, the object of monitoring, and the means of action) and five processes (data management, social learning, assimilation/decision making, direct action, and linking). Emergent properties were also identified at the focal and suprafocal levels, considering community self-organization, response capacity, and autonomy for environmental management. The model was applied to the assessment of a CBM water quality program implemented in rural areas of Mexico. Attributes and variables (indicators) for components, processes, and emergent properties were selected to measure changes that emerged after the program's implementation. The assessment of the first 3 years (2010-2012) detected changes indicating movement towards the expected results, but it also revealed the need to adjust the intervention strategy and procedures. Components and processes of the model reflected relevant aspects of CBM in the real world. The component called the means of action emerged as a key element in moving "from the data to the action." The CBM model offers a conceptual framework that helps in understanding CBM as a socioecological event and in strengthening its implementation under different conditions and contexts.

  20. Multi-band morpho-Spectral Component Analysis Deblending Tool (MuSCADeT): Deblending colourful objects

    NASA Astrophysics Data System (ADS)

    Joseph, R.; Courbin, F.; Starck, J.-L.

    2016-05-01

    We introduce a new algorithm for colour separation and deblending of multi-band astronomical images called MuSCADeT which is based on Morpho-spectral Component Analysis of multi-band images. The MuSCADeT algorithm takes advantage of the sparsity of astronomical objects in morphological dictionaries such as wavelets and their differences in spectral energy distribution (SED) across multi-band observations. This allows us to devise a model-independent and automated approach to separate objects with different colours. We show with simulations that we are able to separate highly blended objects and that our algorithm is robust against SED variations of objects across the field of view. To confront our algorithm with real data, we use HST images of the strong lensing galaxy cluster MACS J1149+2223 and we show that MuSCADeT performs better than traditional profile-fitting techniques in deblending the foreground lensing galaxies from background lensed galaxies. Although the main driver for our work is the deblending of strong gravitational lenses, our method is suited to any task involving the deblending of objects in astronomical images. An example of such an application is the separation of the red and blue stellar populations of a spiral galaxy in the galaxy cluster Abell 2744. We provide a python package along with all simulations and routines used in this paper to contribute to reproducible research efforts. Codes can be found at http://lastro.epfl.ch/page-126973.html
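
    Setting the morphological (wavelet) part of the method aside, the spectral half of the separation reduces to per-pixel linear unmixing once the SEDs (the mixing matrix) are known or estimated. A toy sketch with invented two-band, two-source data:

```python
# Two blended sources with different SEDs observed in two bands:
#   band_b[pixel] = A[b][0]*S_red[pixel] + A[b][1]*S_blue[pixel]
# With the mixing matrix A known (or estimated), the sources are recovered
# by inverting A pixel-wise. This is only the "spectral" half of a method
# like MuSCADeT; the real algorithm also enforces morphological sparsity
# in wavelet dictionaries. All numbers here are invented.

s_red  = [1.0, 0.5, 0.0]   # hypothetical 3-pixel "image" of a red source
s_blue = [0.0, 0.5, 1.0]   # and of a blue source
A = [[0.9, 0.2],           # red band responds mostly to the red source
     [0.1, 0.8]]           # blue band responds mostly to the blue source

def mix(M, s0, s1):
    """Apply a 2x2 matrix M to the pair of pixel vectors (s0, s1)."""
    return [[M[b][0] * x + M[b][1] * y for x, y in zip(s0, s1)] for b in range(2)]

def unmix(A, bands):
    """Invert the 2x2 mixing matrix and apply it to the observed bands."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    inv = [[A[1][1] / det, -A[0][1] / det],
           [-A[1][0] / det, A[0][0] / det]]
    return mix(inv, bands[0], bands[1])

bands = mix(A, s_red, s_blue)           # forward model: blend the sources
rec_red, rec_blue = unmix(A, bands)     # recover them from the two bands
```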

  1. Volumetric segmentation of range images for printed circuit board inspection

    NASA Astrophysics Data System (ADS)

    Van Dop, Erik R.; Regtien, Paul P. L.

    1996-10-01

    Conventional computer vision approaches towards object recognition and pose estimation employ 2D grey-value or color imaging. As a consequence these images contain information about projections of a 3D scene only. The subsequent image processing will then be difficult, because 3D object coordinates are represented with just 2D image coordinates. Only complicated low-level vision modules like depth from stereo or depth from shading can recover some of the surface geometry of the scene. Recent advances in fast range imaging have however paved the way towards 3D computer vision, since range data of the scene can now be obtained with sufficient accuracy and speed for object recognition and pose estimation purposes. This article proposes the coded-light range-imaging method together with superquadric segmentation to approach this task. Superquadric segments are volumetric primitives that describe global object properties with 5 parameters, which provide the main features for object recognition. In addition, the principal axes of a superquadric segment determine the pose of an object in the scene. The volumetric segmentation of a range image can be used to detect missing, false or badly placed components on assembled printed circuit boards. Furthermore, this approach will be useful to recognize and extract valuable or toxic electronic components in printed circuit board scrap that currently burdens the environment during electronic waste processing. Results on synthetic range images with errors constructed according to a verified noise model illustrate the capabilities of this approach.
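
    The five-parameter superquadric description mentioned above is conventionally written as an inside-outside function with three semi-axes and two shape exponents. A minimal sketch in the standard textbook form (not necessarily the paper's exact parameterization):

```python
def superquadric_F(x, y, z, a1, a2, a3, e1, e2):
    """Standard superquadric inside-outside function.
    (a1, a2, a3) are the semi-axes and (e1, e2) the shape exponents --
    the five shape parameters. F < 1 means the point is inside the
    superquadric, F == 1 on its surface, F > 1 outside."""
    xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2.0 / e1)
```

With `e1 = e2 = 1` and unit semi-axes this reduces to the unit sphere, so a surface point like (1, 0, 0) gives F = 1 exactly; segmentation fits these parameters to range data and the fitted values become recognition features.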

  2. Burden, interdependence, ethnicity, and mental health in caregivers of patients with schizophrenia.

    PubMed

    Suro, Giulia; Weisman de Mamani, Amy G

    2013-06-01

    Caring for a patient with schizophrenia often results in high levels of perceived burden and poorer overall mental health. Using a sample of 176 caregivers of patients with schizophrenia, the present study examined how two components of burden (objective and subjective) interacted with interdependence and ethnicity to influence relatives' overall mental health. In line with study hypotheses, and with the stress-appraisal-coping model developed by Lazarus and Folkman (1984), we found that subjective burden mediated the relationship between objective burden and mental health. In other words, subjective appraisals of caregiving appeared to partially underlie the association between the concrete costs of caregiving and psychological outcomes in schizophrenia caregivers. Also as hypothesized, we found that interdependence, or the perceived interconnectedness of individuals within a group, moderated the relationship between objective burden and subjective burden. In other words, when levels of interdependence were high, the objective components of burden appeared to have a weaker relationship with subjective burden. When interdependence was low, on the other hand, objective burden was more likely to be associated with subjective burden. This finding suggests that helping caregivers to value harmony and connection with others over individual self-interests may reduce the likelihood that objective stressors (which are often inevitable in schizophrenia) will result in subjective distress. On the basis of prior research, we also tested several hypotheses regarding the role of ethnicity and its association with burden, interdependence, and mental health. However, contrary to expectations, no ethnic patterns were observed. © FPI, Inc.
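
    Moderation of the kind reported here is conventionally tested with an interaction term in a regression. A sketch on synthetic data follows; the coefficients are invented to mimic the direction of the finding and are not taken from the study.

```python
import numpy as np

# Synthetic illustration of a moderation test: the effect of objective burden
# on subjective burden shrinks as interdependence rises (negative interaction).
# All coefficients below are invented, not estimates from the paper.
rng = np.random.default_rng(0)
n = 500
objective = rng.normal(size=n)          # objective burden
interdep = rng.normal(size=n)           # interdependence (the moderator)

subjective = (0.6 * objective - 0.3 * objective * interdep
              + 0.1 * interdep + rng.normal(scale=0.1, size=n))

# Ordinary least squares with an interaction column: intercept, main effects,
# and the objective x interdependence product term.
X = np.column_stack([np.ones(n), objective, interdep, objective * interdep])
beta, *_ = np.linalg.lstsq(X, subjective, rcond=None)
# beta[3] estimates the interaction; a clearly negative value is the
# statistical signature of the moderation described in the abstract.
```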

  3. Patterns of IgE responses to multiple allergen components and clinical symptoms at age 11 years

    PubMed Central

    Simpson, Angela; Lazic, Nevena; Belgrave, Danielle C.M.; Johnson, Phil; Bishop, Christopher; Mills, Clare; Custovic, Adnan

    2015-01-01

    Background The relationship between sensitization to allergens and disease is complex. Objective We sought to identify patterns of response to a broad range of allergen components and investigate associations with asthma, eczema, and hay fever. Methods Serum specific IgE levels to 112 allergen components were measured by using a multiplex array (Immuno Solid-phase Allergen Chip) in a population-based birth cohort. Latent variable modeling was used to identify underlying patterns of component-specific IgE responses; these patterns were then related to asthma, eczema, and hay fever. Results Two hundred twenty-one of 461 children had IgE to 1 or more components. Seventy-one of the 112 components were recognized by 3 or more children. By using latent variable modeling, 61 allergen components clustered into 3 component groups (CG1, CG2, and CG3); protein families within each CG were exclusive to that group. CG1 comprised 27 components from 8 plant protein families. CG2 comprised 7 components of mite allergens from 3 protein families. CG3 included 27 components of plant, animal, and fungal origin from 12 protein families. Each CG included components from different biological sources with structural homology and also nonhomologous proteins arising from the same biological source. Sensitization to CG3 was most strongly associated with asthma (odds ratio [OR], 8.20; 95% CI, 3.49-19.24; P < .001) and lower FEV1 (P < .001). Sensitization to CG1 was associated with hay fever (OR, 12.79; 95% CI, 6.84-23.90; P < .001). Sensitization to CG2 was associated with both asthma (OR, 3.60; 95% CI, 2.05-6.29) and hay fever (OR, 2.52; 95% CI, 1.38-4.61). Conclusions Latent variable modeling with a large number of allergen components identified 3 patterns of IgE responses, each including different protein families. In 11-year-old children the pattern of response to components of multiple allergens appeared to be associated with current asthma and hay fever but not eczema. PMID:25935108

  4. Overview of the DAEDALOS project

    NASA Astrophysics Data System (ADS)

    Bisagni, Chiara

    2015-10-01

    The "Dynamics in Aircraft Engineering Design and Analysis for Light Optimized Structures" (DAEDALOS) project aimed to develop methods and procedures to determine dynamic loads by considering the effects of dynamic buckling, material damping and mechanical hysteresis during aircraft service. Advanced analysis and design principles were assessed with the aim of partly removing the uncertainty and the conservatism of today's design and certification procedures. To reach these objectives a DAEDALOS aircraft model representing a mid-size business jet was developed. Analysis and in-depth investigation of the dynamic response were carried out on full finite element models and on hybrid models. Material damping was experimentally evaluated, and different methods for damping evaluation were developed, implemented in finite element codes and experimentally validated. They include a strain energy method, a quasi-linear viscoelastic material model, and a generalized Maxwell viscous material damping model. Panels and shells representative of typical components of the DAEDALOS aircraft model were experimentally tested under static as well as dynamic loads. Composite and metallic components of the aircraft model were investigated to evaluate the benefit in terms of weight saving.

  5. Towards an Enhancement of Organizational Information Security through Threat Factor Profiling (TFP) Model

    NASA Astrophysics Data System (ADS)

    Sidi, Fatimah; Daud, Maslina; Ahmad, Sabariah; Zainuddin, Naqliyah; Anneisa Abdullah, Syafiqa; Jabar, Marzanah A.; Suriani Affendey, Lilly; Ishak, Iskandar; Sharef, Nurfadhlina Mohd; Zolkepli, Maslina; Nur Majdina Nordin, Fatin; Amat Sejani, Hashimah; Ramadzan Hairani, Saiful

    2017-09-01

    Information security has been identified by organizations as part of internal operations that need to be well implemented and protected. This is because organizations face a growing number of threats to their networks and services each day, which can lead to information security issues. Thus, effective information security management is required in order to protect their information assets. Threat profiling is a method that can be used by an organization to address the security challenges. Threat profiling allows analysts to understand and organize intelligence information related to threat groups. This paper presents a comparative analysis that was conducted to study the existing threat profiling models. It was found that existing threat models were constructed based on specific objectives, and thus each model is limited to only certain components or factors, such as assets, threat sources, countermeasures, threat agents, threat outcomes and threat actors. It is suggested that threat profiling can be improved by combining the components found in each existing threat profiling model/framework. The proposed model can be used by an organization in executing a proactive approach to incident management.

  6. Learning Gene Expression Through Modelling and Argumentation. A Case Study Exploring the Connections Between the Worlds of Knowledge

    NASA Astrophysics Data System (ADS)

    Puig, Blanca; Ageitos, Noa; Jiménez-Aleixandre, María Pilar

    2017-12-01

    There is emerging interest in the interactions between modelling and argumentation in specific contexts, such as genetics learning. It has been suggested that modelling might help students understand and argue about genetics. We propose modelling gene expression as a way to learn molecular genetics and diseases with a genetic component. The study is framed in Tiberghien's (2000) two worlds of knowledge, the world of "theories & models" and the world of "objects & events", adding a third component, the world of representations. We seek to examine how modelling and argumentation interact and connect the three worlds of knowledge while modelling gene expression. It is a case study of 10th graders learning about diseases with a genetic component. The research questions are as follows: (1) What argumentative and modelling operations do students enact in the process of modelling gene expression? Specifically, which operations allow connecting the three worlds of knowledge? (2) What are the interactions between modelling and argumentation in modelling gene expression? To what extent do these interactions help students connect the three worlds of knowledge and model gene expression? The argumentative operation of using evidence helps students to relate the three worlds of knowledge, and it is enacted in all the connections. There seems to be a relationship among the number of interactions between modelling and argumentation, the connections between worlds of knowledge, and students' capacity to develop a more sophisticated representation. Although this is a case study, this approach to analysis reveals potential for a deeper understanding of learning genetics through scientific practices.

  7. Kaiser Permanente/Sandia National health care model. Phase I prototype final report. Part 1 - model overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, D.; Yoshimura, A.; Butler, D.

    1996-11-01

    This report describes the results of a Cooperative Research and Development Agreement between Sandia National Laboratories and Kaiser Permanente Southern California to develop a prototype computer model of Kaiser Permanente's health care delivery system. As a discrete event simulation, SimHCO models for each of 100,000 patients the progression of disease, individual resource usage, and patient choices in a competitive environment. SimHCO is implemented in the object-oriented programming language C++, stressing reusable knowledge and reusable software components. The versioned implementation of SimHCO showed that the object-oriented framework allows the program to grow in complexity in an incremental way. Furthermore, timing calculations showed that SimHCO runs in a reasonable time on typical workstations, and that a second phase model will scale proportionally and run within the system constraints of contemporary computer technology. This report is published as two documents: Model Overview and Domain Analysis. A separate Kaiser-proprietary report contains the Disease and Health Care Organization Selection Models.
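
    SimHCO itself is a C++ system; the discrete event simulation pattern it rests on can be sketched with a priority queue of time-stamped events, as below. All entities, timings and actions here are invented for illustration.

```python
import heapq

# Minimal discrete-event skeleton in the spirit of (but far simpler than) a
# health-care simulation like SimHCO: events are (time, seq, patient, action)
# tuples on a heap, and handlers may schedule follow-up events.

class Simulation:
    def __init__(self):
        self.events = []
        self.clock = 0.0
        self.log = []
        self._seq = 0   # tie-breaker so equal-time events stay ordered

    def schedule(self, delay, patient_id, action):
        heapq.heappush(self.events, (self.clock + delay, self._seq, patient_id, action))
        self._seq += 1

    def run(self):
        while self.events:
            self.clock, _, pid, action = heapq.heappop(self.events)
            self.log.append((self.clock, pid, action))
            if action == "falls_ill":            # illustrative disease progression
                self.schedule(2.0, pid, "visits_clinic")
            elif action == "visits_clinic":
                self.schedule(5.0, pid, "recovers")

sim = Simulation()
sim.schedule(1.0, patient_id=1, action="falls_ill")
sim.schedule(4.0, patient_id=2, action="falls_ill")
sim.run()
# sim.log now holds the full event history in time order
```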

  8. Patients’ and Clinicians’ Views of the Psychological Components of Tinnitus Treatment That Could Inform Audiologists’ Usual Care: A Delphi Survey

    PubMed Central

    Taylor, John; Hall, Deborah A.; Walker, Dawn-Marie; McMurran, Mary; Casey, Amanda; Stockdale, David; Featherstone, Debbie; Hoare, Derek J.

    2018-01-01

    Objectives: The aim of this study was to determine which components of psychological therapies are most important and appropriate to inform audiologists’ usual care for people with tinnitus. Design: A 39-member panel of patients, audiologists, hearing therapists, and psychologists completed a three-round Delphi survey to reach consensus on essential components of audiologist-delivered psychologically informed care for tinnitus. Results: Consensus (≥80% agreement) was reached on including 76 of 160 components. No components reached consensus for exclusion. The components reaching consensus were predominantly common therapeutic skills such as Socratic questioning and active listening, rather than specific techniques, for example, graded exposure therapy or cognitive restructuring. Consensus on educational components to include largely concerned psychological models of tinnitus rather than neurophysiological information. Conclusions: The results of this Delphi survey provide a tool to develop audiologists’ usual tinnitus care using components that both patients and clinicians agree are important and appropriate to be delivered by an audiologist for adults with tinnitus-related distress. Research is now necessary to test the added effects of these components when delivered by audiologists. PMID:28930785

  9. QSO Broad Emission Line Asymmetries: Evidence of Gravitational Redshift?

    NASA Astrophysics Data System (ADS)

    Corbin, Michael R.

    1995-07-01

    The broad optical and ultraviolet emission lines of QSOs and active galactic nuclei (AGNs) display both redward and blueward asymmetries. This result is particularly well established for Hβ and C IV λ1549, and it has been found that Hβ becomes increasingly redward asymmetric with increasing soft X-ray luminosity. Two models for the origin of these asymmetries are investigated: (1) Anisotropic line emission from an ensemble of radially moving clouds, and (2) Two-component profiles consisting of a core of intermediate (˜1000-4000 km s-1) velocity width and a very broad (˜5000-20,000 km s-1) base, in which the asymmetries arise due to a velocity difference between the centroids of the components. The second model is motivated by the evidence that the traditional broad-line region is actually composed of an intermediate-line region (ILR) of optically thick clouds and a very broad line region (VBLR) of optically thin clouds lying closer to the central continuum source. Line profiles produced by model (1) are found to be inconsistent with those observed, being asymmetric mainly in their cores, whereas the asymmetries of actual profiles arise mainly from excess emission in their wings. By contrast, numerical fitting to actual Hβ and C IV λ1549 line profiles reveals that the majority can be accurately modeled by two components, either two Gaussians or the combination of a Gaussian base and a logarithmic core. The profile asymmetries in Hβ can be interpreted as arising from a shift of the base component over a range ˜6300 km s-1 relative to systemic velocity as defined by the position of the [O III] λ5007 line. A similar model appears to apply to C IV λ1549. The correlation between Hβ asymmetry and X-ray luminosity may thus be interpreted as a progressive redshift of the VBLR velocity centroid relative to systemic velocity with increasing X-ray luminosity.
This in turn suggests that the underlying effect is gravitational redshift, as soft X-ray emission arises from a region ˜ light-minutes in size and arguably traces the mass of the putative supermassive black hole. Depending on the size of the VBLR and the exact amount of its profile centroid shift, central masses in the range 10^9-10^10 Msun are implied for the objects displaying the strongest redward profile asymmetries, consistent with other estimates. The largest VBLR velocity dispersions measured from the two-component modeling are ˜20,000 km s-1, which also yields a virial mass ˜10^9 Msun for a VBLR size 0.1 pc. The gravitational redshift model does not explain the origin of the blueshift of the VBLR emission among low X-ray luminosity sources, however. This must be interpreted as arising from a competing effect such as electron scattering of line photons in the vicinity of the VBLR. On average, radio-loud objects have redward asymmetric broad-line profiles and stronger intermediate- and narrow-line emission than radio-quiet objects of comparable optical luminosity. Under the gravitational redshift model these differences may be interpreted as the result of black hole and host galaxy masses that are larger on average among the former class, consistent with the evidence that they are merger products.
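
    The two-component profile fitting described in this record can be sketched in its simplest form: with the core and base shapes held fixed, the component amplitudes follow from linear least squares. The centers, widths and amplitudes below are arbitrary illustration values, not the paper's fits.

```python
import numpy as np

def gaussian(v, center, sigma):
    return np.exp(-0.5 * ((v - center) / sigma) ** 2)

# Synthetic profile: a narrow "ILR" core at systemic velocity plus a broad,
# redward-shifted "VBLR" base (velocities in km/s; all numbers illustrative).
v = np.linspace(-30000, 30000, 2001)
core = gaussian(v, 0.0, 1500.0)
base = gaussian(v, 3000.0, 8000.0)       # centroid shifted redward
profile = 1.0 * core + 0.4 * base

# With the two component shapes fixed, fitting their amplitudes is a linear
# least-squares problem: profile ~ G @ amps.
G = np.column_stack([core, base])
amps, *_ = np.linalg.lstsq(G, profile, rcond=None)
```

In a full fit the centers and widths would be free parameters as well (a non-linear problem); the redward shift of the base centroid relative to the core is the quantity the paper interprets as a gravitational redshift.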

  10. Development of sustainable precision farming systems for swine: estimating real-time individual amino acid requirements in growing-finishing pigs.

    PubMed

    Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C

    2012-07-01

    The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic model components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing if it followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by the existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with the average mean absolute error of 12.45 and 1.85%, respectively. The average mean absolute error obtained with the InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. 
The mechanistic model component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation. Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the average feed intake and body weight trajectory of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to estimate dynamically the AA requirements of each animal, taking into account the intake and growth changes of the animal.
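
    The empirical/mechanistic split can be caricatured in a few lines: smooth the observed intake series to forecast the next day's DFI, then plug the estimates into a factorial maintenance-plus-gain equation. The smoothing constant and the factorial coefficients below are placeholders, not values from the study.

```python
# Caricature of the paper's two model components. The smoothing constant and
# the factorial coefficients are hypothetical placeholders.

def forecast_next(observations, alpha=0.3):
    """Empirical component: exponential smoothing of daily feed intake (kg/d)
    to predict the next day's intake."""
    level = observations[0]
    for obs in observations[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

def lysine_requirement(bw_kg, gain_kg_d, maint_per_kg=0.036, per_kg_gain=12.0):
    """Mechanistic component: factorial requirement = maintenance (proportional
    to BW) + deposition (proportional to daily gain), in g/d.
    Coefficients are invented for illustration."""
    return maint_per_kg * bw_kg + per_kg_gain * gain_kg_d

dfi = [2.0, 2.1, 2.3, 2.2, 2.4]              # observed daily feed intake, kg/d
tomorrow_dfi = forecast_next(dfi)
req_g_d = lysine_requirement(bw_kg=60.0, gain_kg_d=0.9)
optimal_conc = req_g_d / tomorrow_dfi        # g lysine per kg feed to offer
```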

  11. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
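
    The decomposition at the heart of this study — observed flow as the sum of a self-motion component and an object-motion component — can be written as a simple vector subtraction once an estimate of the self-motion flow is available. A sketch with toy numbers:

```python
# The abstract's decomposition in its simplest form: the optic flow at each
# sample point is the sum of a self-motion component and an object-motion
# component, so an estimate of the self-motion flow can be subtracted to
# recover object motion relative to the stationary scene. 2-D vectors,
# invented numbers.

def subtract_flow(observed, self_motion):
    return [(ox - sx, oy - sy) for (ox, oy), (sx, sy) in zip(observed, self_motion)]

# Flow sampled at three points: pure self-motion everywhere except the last
# point, where an independently moving object contributes (2, 0).
self_flow = [(1.0, 0.0), (1.0, 0.5), (1.0, -0.5)]
object_flow = [(0.0, 0.0), (0.0, 0.0), (2.0, 0.0)]
observed = [(s[0] + o[0], s[1] + o[1]) for s, o in zip(self_flow, object_flow)]

recovered = subtract_flow(observed, self_flow)
```

The study's question is where `self_flow` comes from: the visual system may estimate it visually, non-visually (e.g., vestibularly), or from a combination of both.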

  12. Compression strength of composite primary structural components

    NASA Technical Reports Server (NTRS)

    Johnson, Eric R.

    1993-01-01

    Two projects are summarized. The first project is entitled 'Stiffener Crippling Initiated by Delaminations' and its objective is to develop a computational model of the stiffener specimens that includes the capability to predict the interlaminar stress response at the flange free edge in postbuckling. The second is entitled 'Pressure Pillowing of an Orthogonally Stiffened Cylindrical Shell'. A paper written on this project is included.

  13. Development of the CSI phase-3 evolutionary model testbed

    NASA Technical Reports Server (NTRS)

    Gronet, M. J.; Davis, D. A.; Tan, M. K.

    1994-01-01

    This report documents the development effort for the reconfiguration of the Controls-Structures Integration (CSI) Evolutionary Model (CEM) Phase-2 testbed into the CEM Phase-3 configuration. This step responds to the need to develop and test CSI technologies associated with typical planned earth science and remote sensing platforms. The primary objective of the CEM Phase-3 ground testbed is to simulate the overall on-orbit dynamic behavior of the EOS AM-1 spacecraft. Key elements of the objective include approximating the low-frequency appendage dynamic interaction of EOS AM-1, allowing for the changeout of components, and simulating the free-free on-orbit environment using an advanced suspension system. The fundamentals of appendage dynamic interaction are reviewed. A new version of the multiple scaling method is used to design the testbed to have the full-scale geometry and dynamics of the EOS AM-1 spacecraft, but at one-tenth the weight. The testbed design is discussed, along with the testing of the solar array, high gain antenna, and strut components. Analytical performance comparisons show that the CEM Phase-3 testbed simulates the EOS AM-1 spacecraft with good fidelity for the important parameters of interest.

  14. Modeling a terminology-based electronic nursing record system: an object-oriented approach.

    PubMed

    Park, Hyeoun-Ae; Cho, InSook; Byeun, NamSoo

    2007-10-01

    The aim of this study was to present our perspectives on healthcare information analysis at a conceptual level and the lessons learned from our experience with the development of a terminology-based enterprise electronic nursing record system - which was one of the components of an EMR system at a tertiary teaching hospital in Korea - using an object-oriented system analysis and design concept. To ensure a systematic approach and effective collaboration, the department of nursing constituted a system modeling team comprising a project manager, systems analysts, user representatives, an object-oriented methodology expert, and healthcare informaticists (including the authors). The Rational Unified Process (RUP) and the Unified Modeling Language were used as a development process and for modeling notation, respectively. From the scenario and RUP approach, user requirements were formulated into use case sets and the sequence of activities in the scenario was depicted in an activity diagram. The structure of the system was presented in a class diagram. This approach allowed us to identify clearly the structural and behavioral states and important factors of a terminology-based ENR system (e.g., business concerns and system design concerns) according to the viewpoints of both domain and technical experts.

  15. Modeling The Frontal Collision In Vehicles And Determining The Degree Of Injury On The Driver

    NASA Astrophysics Data System (ADS)

    Oţăt, Oana Victoria

    2015-09-01

    The present research study aims at analysing the kinematic and dynamic behaviour of a vehicle's driver in a frontal collision. A subsequent objective of the research is to establish the degree of injury suffered by the driver. To achieve these objectives, we first defined the type of dummy placed in the driver's position, and then designed the three-element assembly, i.e. the chair-steering wheel-dashboard assembly. Based on this model, the next step focused on positioning the dummy, which also involved defining the contacts between the components of the dummy and the seat elements. To model behaviour that accurately reflects the driver's movements in a frontal collision, passive safety systems were also defined and simulated, namely the seatbelt and the frontal airbag.

  16. OpenFOAM: Open source CFD in research and industry

    NASA Astrophysics Data System (ADS)

    Jasak, Hrvoje

    2009-12-01

    The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into Computer-Aided product development, geometrical optimisation, robust design and similar. On the other hand, CFD research aims to extend the boundaries of practical engineering use in "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of the partial differential equation in software, with code functionality provided in library form. The Open Source deployment and development model allows the user to achieve the desired versatility in physical modelling without sacrificing complex geometry support and execution efficiency.
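
    The "equation mimicking" idea can be conveyed with a short sketch. The Python below is only an illustration of the concept (the `ddt`, `laplacian`, and `solve` names imitate OpenFOAM's operator-style C++ syntax, e.g. `solve(fvm::ddt(T) == fvm::laplacian(DT, T))`, but this is not OpenFOAM's actual API): differential operators assemble matrix terms so the solver call reads almost like the PDE itself.

```python
import numpy as np

# Illustrative sketch only: Python stand-ins for OpenFOAM-style operator
# syntax. ddt() and laplacian() assemble implicit (backward Euler) terms
# for the 1D diffusion equation dT/dt = D * d2T/dx2.

def ddt(T, dt):
    """Time-derivative term: (matrix, rhs) for backward Euler."""
    n = T.size
    return np.eye(n) / dt, T / dt

def laplacian(D, T, dx):
    """1D diffusion term as a second-difference matrix (interior rows)."""
    n = T.size
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = D / dx**2
        A[i, i] = -2.0 * D / dx**2
    return A

def solve(T, dt, D, dx):
    """Advance one step of ddt(T) == laplacian(D, T), fixed-value endpoints."""
    M, b = ddt(T, dt)
    sys = M - laplacian(D, T, dx)
    for i in (0, -1):                 # Dirichlet boundary rows
        sys[i, :] = 0.0
        sys[i, i] = 1.0
        b[i] = T[i]
    return np.linalg.solve(sys, b)

T = np.zeros(11)
T[5] = 1.0                            # initial hot spot
for _ in range(50):
    T = solve(T, dt=0.1, D=1.0, dx=1.0)
```

    The point is not the numerics but the readability: the physics appears directly in the `solve` call, while discretisation details stay in library code.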

  17. Hydrograph separation for karst watersheds using a two-domain rainfall-discharge model

    USGS Publications Warehouse

    Long, Andrew J.

    2009-01-01

    Highly parameterized, physically based models may be no more effective at simulating the relations between rainfall and outflow from karst watersheds than are simpler models. Here an antecedent rainfall and convolution model was used to separate a karst watershed hydrograph into two outflow components: one originating from focused recharge in conduits and one originating from slow flow in a porous annex system. In convolution, parameters of a complex system are lumped together in the impulse-response function (IRF), which describes the response of the system to an impulse of effective precipitation. Two parametric functions in superposition approximate the two-domain IRF. The outflow hydrograph can be separated into flow components by forward modeling with isolated IRF components, which provides an objective criterion for separation. As an example, the model was applied to a karst watershed in the Madison aquifer, South Dakota, USA. Simulation results indicate that this watershed is characterized by a flashy response to storms, with a peak response time of 1 day, but that 89% of the flow results from the slow-flow domain, with a peak response time of more than 1 year. This long response time may be the result of perched areas that store water above the main water table. Simulation results indicated that some aspects of the system are stationary but that nonlinearities also exist.
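
    The two-domain convolution idea can be sketched compactly. In this hedged Python sketch the gamma-shaped impulse-response functions, the parameter values, and the 89%/11% volume split are illustrative assumptions chosen to echo the abstract, not the paper's calibrated model:

```python
import numpy as np
from math import gamma as gamma_fn

# Illustrative two-domain rainfall-discharge model: outflow is effective
# precipitation convolved with a superposition of a quick-flow IRF and a
# slow-flow IRF; convolving with each IRF alone separates the hydrograph.

def gamma_irf(t, shape, scale):
    """Unit-volume gamma impulse-response function on the discrete grid t."""
    h = t**(shape - 1) * np.exp(-t / scale) / (gamma_fn(shape) * scale**shape)
    return h / h.sum()

t = np.arange(1.0, 400.0)                       # daily time steps
h_quick = gamma_irf(t, shape=2.0, scale=0.5)    # conduit flow, peaks in ~1 day
h_slow = gamma_irf(t, shape=2.0, scale=200.0)   # annex storage, peaks ~200 days

precip = np.zeros(600)
precip[10] = 50.0                               # a single storm pulse (mm)

w_slow = 0.89                                   # assumed slow-flow fraction
q_quick = (1 - w_slow) * np.convolve(precip, h_quick)[:600]
q_slow = w_slow * np.convolve(precip, h_slow)[:600]
q_total = q_quick + q_slow                      # the separated hydrograph
```

    Forward modeling with each isolated IRF yields the two flow components directly, which is the objective separation criterion the abstract describes.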

  18. Function-based payment model for inpatient medical rehabilitation: an evaluation.

    PubMed

    Sutton, J P; DeJong, G; Wilkerson, D

    1996-07-01

    To describe the components of a function-based prospective payment model for inpatient medical rehabilitation that parallels diagnosis-related groups (DRGs), to evaluate this model in relation to stakeholder objectives, and to detail the components of a quality of care incentive program that, when combined with this payment model, creates an incentive for providers to maximize functional outcomes. This article describes a conceptual model, involving no data collection or data synthesis. The basic payment model described parallels DRGs. Information on the potential impact of this model on medical rehabilitation is gleaned from the literature evaluating the impact of DRGs. The conceptual model described is evaluated against the results of a Delphi Survey of rehabilitation providers, consumers, policymakers, and researchers previously conducted by members of the research team. The major shortcoming of a function-based prospective payment model for inpatient medical rehabilitation is that it contains no inherent incentive to maximize functional outcomes. Linkage of reimbursement to outcomes, however, by withholding a fixed proportion of the standard FRG payment amount, placing that amount in a "quality of care" pool, and distributing that pool annually among providers whose predesignated, facility-level, case-mix-adjusted outcomes are attained, may be one strategy for maximizing outcome goals.
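
    The withhold-and-redistribute mechanism in the last sentence amounts to simple arithmetic. In this hedged sketch the 5% withhold rate and the provider figures are invented for illustration; only the mechanism (withhold a fixed share of each FRG payment, pool it, split the pool among providers that attained their case-mix-adjusted outcome targets) follows the abstract:

```python
# Illustrative quality-of-care incentive pool (all numbers are assumptions).
WITHHOLD_RATE = 0.05   # fixed share of each FRG payment withheld

providers = {          # name: (annual FRG payments, outcome target attained?)
    "A": (1_000_000, True),
    "B": (800_000, False),
    "C": (1_200_000, True),
}

pool = sum(pay * WITHHOLD_RATE for pay, _ in providers.values())
base = {n: pay * (1 - WITHHOLD_RATE) for n, (pay, _) in providers.items()}

qualifiers = [n for n, (_, met) in providers.items() if met]
bonus = pool / len(qualifiers)          # equal split among qualifiers

final = {n: base[n] + (bonus if n in qualifiers else 0.0) for n in providers}
```

    Under this sketch a provider that misses its outcome target forfeits its withheld share, which is what links reimbursement to functional outcomes.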

  19. Optical derotator alignment using image-processing algorithm for tracking laser vibrometer measurements of rotating objects.

    PubMed

    Khalil, Hossam; Kim, Dongkyu; Jo, Youngjoon; Park, Kyihwan

    2017-06-01

    An optical component called a Dove prism is used to rotate the laser beam of a laser-scanning vibrometer (LSV). This is called a derotator and is used for measuring the vibration of rotating objects. The main advantage of a derotator is that it works independently from an LSV. However, this device requires very specific alignment, in which the axis of the Dove prism must coincide with the rotational axis of the object. If the derotator is misaligned with the rotating object, the results of the vibration measurement are imprecise, owing to the alteration of the laser beam on the surface of the rotating object. In this study, a method is proposed for aligning a derotator with a rotating object through an image-processing algorithm that obtains the trajectory of a landmark attached to the object. After the trajectory of the landmark is mathematically modeled, the amount of derotator misalignment with respect to the object is calculated. The accuracy of the proposed method for aligning the derotator with the rotating object is experimentally tested.
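
    One common way to model such a landmark trajectory is a least-squares circle fit; the sketch below uses the algebraic (Kasa) fit as an illustrative assumption, since the abstract does not specify the paper's exact trajectory model. For a well-aligned derotator the fitted circle centre coincides with the rotation axis, so the centre offset estimates the misalignment:

```python
import numpy as np

# Illustrative alignment estimate (the Kasa circle fit is an assumption,
# not necessarily the paper's method): fit a circle to tracked landmark
# positions; the centre offset from the axis origin is the misalignment.

def fit_circle(x, y):
    """Algebraic least-squares circle fit; returns (cx, cy, radius)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# Synthetic trajectory of a landmark on a misaligned object (axis at origin)
theta = np.linspace(0.0, 2 * np.pi, 50, endpoint=False)
cx_true, cy_true, radius = 3.0, -1.5, 10.0
x = cx_true + radius * np.cos(theta)
y = cy_true + radius * np.sin(theta)

cx, cy, r = fit_circle(x, y)
misalignment = (cx, cy)   # offset to correct for, in image coordinates
```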

  20. Control of actin-based motility through localized actin binding

    PubMed Central

    Banigan, Edward J.; Lee, Kun-Chun; Liu, Andrea J.

    2014-01-01

    A wide variety of cell biological and biomimetic systems use actin polymerization to drive motility. It has been suggested that an object such as a bacterium can propel itself by self-assembling a high concentration of actin behind it if it is repelled by actin. However, it is also known that it is essential for the moving object to bind actin. Therefore, a key question is how the actin tail can propel an object when it both binds and repels the object. We present a physically consistent Brownian dynamics model for actin-based motility that includes the minimal components of the dendritic nucleation model and allows for both attractive and repulsive interactions between actin and a moveable disk. We find that the concentration gradient of filamentous actin generated by polymerization is sufficient to propel the object, even with moderately strong binding interactions. Additionally, actin binding can act as a biophysical cap, and may directly control motility through modulation of network growth. Overall, this mechanism is robust in that it can drive motility against a load up to a stall pressure that depends on the Young’s modulus of the actin network and can explain several aspects of actin-based motility. PMID:24225232
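
    The flavour of such a Brownian dynamics model can be conveyed by a heavily reduced 1D sketch. Everything here (the constant gradient push, the single harmonic binding term, the parameter values) is an illustrative assumption; the paper's model resolves individual filaments and the dendritic nucleation machinery:

```python
import numpy as np

# Reduced 1D sketch (assumptions throughout): an overdamped disk feels a
# constant push from the actin concentration gradient behind it, a binding
# (spring) force tethering it to the network, and thermal noise.

rng = np.random.default_rng(0)

def step(x, x_tether, dt=1e-3, gamma=1.0, kT=1.0, f_push=5.0, k_bind=1.0):
    """One Euler-Maruyama step of overdamped Brownian dynamics."""
    force = f_push - k_bind * (x - x_tether)      # push minus binding spring
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal()
    return x + force / gamma * dt + noise

xs, x = [], 0.0
for _ in range(20_000):
    x = step(x, x_tether=0.0)                     # tether held fixed here
    xs.append(x)

mean_late = float(np.mean(xs[10_000:]))           # settles near f_push/k_bind
```

    The sketch reproduces one qualitative point of the abstract: even with an attractive binding term present, the gradient-generated push still drives net forward motion.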
