Science.gov

Sample records for architectural model transformations

  1. A Concept Transformation Learning Model for Architectural Design Learning Process

    ERIC Educational Resources Information Center

    Wu, Yun-Wu; Weng, Kuo-Hua; Young, Li-Ming

    2016-01-01

    Generally, in the foundation course of architectural design, much emphasis is placed on teaching basic design skills, without focusing on teaching students to apply basic design concepts in their architectural designs or on promoting students' own creativity. Therefore, this study aims to propose a concept transformation learning model to…

  3. A Model Transformation Approach to Derive Architectural Models from Goal-Oriented Requirements Models

    NASA Astrophysics Data System (ADS)

    Lucena, Marcia; Castro, Jaelson; Silva, Carla; Alencar, Fernanda; Santos, Emanuel; Pimentel, João

    Requirements engineering and architectural design are key activities for successful development of software systems. Both activities are strongly intertwined and interrelated, but many steps toward generating architecture models from requirements models are driven by intuition and architectural knowledge. Thus, systematic approaches that integrate requirements engineering and architectural design activities are needed. This paper presents an approach based on model transformations to generate architectural models from requirements models. The source and target languages are the i* modeling language and the Acme architectural description language (ADL), respectively. A real web-based recommendation system is used as a case study to illustrate our approach.
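
    The transformation the abstract describes is rule-based: elements of the requirements model are mapped to elements of the architecture model. As a hedged sketch (the rules, names, and data shapes below are invented for illustration, not taken from the paper), an i*-style actor/dependency model might be mapped to an Acme-style component/connector model like this:

```python
# Toy model-to-model transformation: i*-style actors and dependencies are
# mapped to Acme-style components and connectors.  All rule and field
# names here are illustrative, not the paper's actual transformation rules.
def istar_to_acme(istar):
    acme = {"components": [], "connectors": []}
    by_name = {}
    for actor in istar["actors"]:
        comp = {"name": actor, "ports": []}   # rule 1: actor -> component
        acme["components"].append(comp)
        by_name[actor] = comp
    for depender, dependum, dependee in istar["dependencies"]:
        # rule 2: dependency -> connector with two roles
        acme["connectors"].append({"name": dependum,
                                   "roles": [depender, dependee]})
        # rule 3: each dependency endpoint becomes a port on its component
        by_name[depender]["ports"].append(dependum)
        by_name[dependee]["ports"].append(dependum)
    return acme

model = {"actors": ["User", "Recommender"],
         "dependencies": [("User", "GetRecommendation", "Recommender")]}
arch = istar_to_acme(model)
```

    Each actor becomes a component and each dependency becomes a connector whose endpoints also appear as ports, which mirrors the general shape of declarative model-to-model transformation rules.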

  4. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.

  5. Modeling dynamic reciprocity: Engineering three-dimensional culture models of breast architecture, function, and neoplastic transformation

    PubMed Central

    Nelson, Celeste M.; Bissell, Mina J.

    2010-01-01

    In order to understand why cancer develops as well as predict the outcome of pharmacological treatments, we need to model the structure and function of organs in culture so that our experimental manipulations occur under physiological contexts. This review traces the history of the development of a prototypic example, the three-dimensional (3D) model of the mammary gland acinus. We briefly describe the considerable information available on both normal mammary gland function and breast cancer generated by the current model and present future challenges that will require an increase in its complexity. We propose the need for engineered tissues that faithfully recapitulate their native structures to allow a greater understanding of tissue function, dysfunction, and potential therapeutic intervention. PMID:15963732

  6. Spatial transformation architectures with applications: an introduction

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    1993-08-01

    Spatial transformations (STs) constitute an important class of image operations, which include the well-known affine transformation, image rotation, scaling, warping, etc. Less well known are the anisomorphic transformations among cartographic projections such as the Mercator, gnomonic, and equal-area formats. In this preliminary study, we introduce a unifying theory of spatial transformation, expressed in terms of the Image Algebra, a rigorous, inherently parallel notation for image and signal processing. With this theory, we can predict the implementation cost of various STs. Since spatial operations are frequently I/O-intensive, we first analyze the I/O performance of well-known architectures in order to determine their suitability for ST implementation. Analyses are verified by simulation, with emphasis upon vision-based navigation applications. An additional application area concerns the remapping of visual receptive fields, which facilitates visual rehabilitation in the presence of retinal damage.
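
    The two transform families the abstract names can be made concrete in a few lines (the function names and parameterizations below are ours, not the paper's Image Algebra notation): an affine map covers rotation, scaling, and shear, while the Mercator projection is one of the anisomorphic cartographic transformations mentioned.

```python
import math

def affine_2d(points, a, b, c, d, tx, ty):
    """Affine map (x, y) -> (a*x + b*y + tx, c*x + d*y + ty):
    rotation, scaling, and shear are all special cases."""
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

def mercator(lon_deg, lat_deg):
    """Forward Mercator projection on a unit sphere:
    x = lon (radians), y = ln(tan(pi/4 + lat/2))."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return lon, math.log(math.tan(math.pi / 4 + lat / 2))

# A 90-degree rotation takes (1, 0) to (0, 1)
rotated = affine_2d([(1.0, 0.0)], 0, -1, 1, 0, 0, 0)
```

    The equator maps to y = 0 under Mercator, and distortion grows with latitude, which is exactly the kind of per-pixel cost difference a unifying theory of STs would need to account for.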

  7. Comparing root architectural models

    NASA Astrophysics Data System (ADS)

    Schnepf, Andrea; Javaux, Mathieu; Vanderborght, Jan

    2017-04-01

    Plant roots play an important role in several soil processes (Gregory 2006). Root architecture development determines the sites in soil where roots provide input of carbon and energy and take up water and solutes. However, root architecture is difficult to determine experimentally when roots are grown in opaque soil. Thus, root architectural models have been widely used and have been further developed into functional-structural models that are able to simulate the fate of water and solutes in the soil-root system (Dunbabin et al. 2013). Still, a systematic comparison of the different root architectural models is missing. In this work, we focus on discrete root architecture models where roots are described by connected line segments. These models differ (a) in their model concepts, such as describing the distance between branches by a prescribed distance (inter-nodal distance) or by a prescribed time interval. Furthermore, these models differ (b) in the implementation of the same concept, such as the time step size, the spatial discretization along the root axes, or the way stochasticity of parameters such as root growth direction, growth rate, branch spacing, and branching angles is treated. Based on the example of two such different root models, the root growth module of R-SWMS and RootBox, we show the impact of these differences on simulated root architecture and on aggregated information computed from these detailed simulation results, taking into account the stochastic nature of those models. References: Dunbabin, V.M., Postma, J.A., Schnepf, A., Pagès, L., Javaux, M., Wu, L., Leitner, D., Chen, Y.L., Rengel, Z., Diggle, A.J. (2013) Modelling root-soil interactions using three-dimensional models of root growth, architecture and function. Plant and Soil 372(1-2), pp. 93-124. Gregory, P. (2006) Roots, rhizosphere and soil: the route to a better understanding of soil science? European Journal of Soil Science 57: 2-12.
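
    Neither R-SWMS nor RootBox code is reproduced here, but the two ingredients the abstract contrasts, a prescribed inter-nodal branch distance and stochastic growth direction, can be sketched in a toy segment-based model (all parameter names and values are illustrative):

```python
import math
import random

def grow_root(n_steps, seg_len=1.0, branch_every=3, angle_sd=0.15, seed=1):
    """Toy discrete root-architecture model: the primary axis grows one
    line segment per time step with a stochastically perturbed direction,
    and a branch point is recorded every `branch_every` segments (the
    prescribed inter-nodal distance concept)."""
    rng = random.Random(seed)
    x, y, theta = 0.0, 0.0, -math.pi / 2     # growing downward
    segments, branch_points = [], []
    for i in range(1, n_steps + 1):
        theta += rng.gauss(0.0, angle_sd)    # stochastic growth direction
        nx = x + seg_len * math.cos(theta)
        ny = y + seg_len * math.sin(theta)
        segments.append(((x, y), (nx, ny)))
        if i % branch_every == 0:
            branch_points.append((nx, ny))
        x, y = nx, ny
    return segments, branch_points

segments, branch_points = grow_root(9)
```

    Swapping the `i % branch_every` test for a time-based criterion, or changing the segment length and angle noise, changes the simulated architecture, which is precisely the kind of concept-versus-implementation difference the comparison targets.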

  8. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language, so verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. This verification is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
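
    As a toy illustration of what an automated inter-diagram consistency rule can look like (the rule below is our own simplification, not one of the paper's actual rules), one can check that every message on a sequence diagram names an operation declared by the receiving class:

```python
def check_message_consistency(class_ops, messages):
    """Toy consistency rule: each (receiver, message) pair taken from a
    sequence diagram must match an operation declared in the class
    diagram.  Returns a list of error strings (empty list = consistent)."""
    errors = []
    for receiver, message in messages:
        if message not in class_ops.get(receiver, set()):
            errors.append(f"class '{receiver}' declares no operation '{message}'")
    return errors

# Class diagram: class name -> declared operations
class_diagram = {"Order": {"addItem", "total"}, "Invoice": {"issue"}}
# Sequence diagram: (receiver, message) pairs
sequence = [("Order", "addItem"), ("Invoice", "cancel")]
problems = check_message_consistency(class_diagram, sequence)
```

    Running such rules over every diagram pair is what makes it possible to flag requirement errors before code generation begins.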

  9. Thermal modelling of a transform-divergent interaction zone, the Demerara Plateau, French Guiana margin: architecture of oceanic and continental crusts

    NASA Astrophysics Data System (ADS)

    Grall, Céline; Marcaillou, Boris; Loncke, Lies; Mercier de Lepinay, Marion; Basile, Christophe; Roest, Walter R.; A. M Van Wees, Jan Diederik; A. P. L Cloetingh, Sierd

    2014-05-01

    The crustal architecture of passive margins is a key to constraining their origin and subsequent evolution, as well as their thermal subsidence. The square-shaped continental Demerara Plateau, French Guiana margin, surmounts the Central and Equatorial Atlantic oceanic crusts surrounding it. Bounded to the northeast by a WNW-ESE-trending transform fault segment and to both the west and the east by N-S divergent fault segments, the Demerara Plateau is a complex transform-divergent interaction zone. The aim of this study is to refine the crustal architecture of this region as derived from gravity and seismic data, by thermal modelling, using surface heat flow data as an additional constraint. Previous studies show that the transform transition from continental to oceanic crust occurs across a region approximately 70 km wide, where the Moho deepens abruptly from 25-27 km beneath the plateau (thinned continental crust) to 11-12 km in the abyssal oceanic domain (3-4 km thick oceanic crust). During the IGUANES cruise (onboard R/V L'Atalante in 2013), 10 surface heat flow measurements crossing the plateau were carried out. These data are combined with heat flow values from nearby boreholes. Measurements indicate that surface heat flow values range between 47 and 80 mW/m2 (with an average measurement uncertainty of ~4 mW/m2) and decrease slightly from the continental domain toward the ocean. Preliminary 1D thermal modelling results indicate that these heat flow values are consistent with the crustal and sediment thicknesses observed on the Plateau. Along the transform domain, at the transition towards the oceanic crust, heat flow values are lower than the model results if we consider an oceanic crust more than a hundred million years old and around 3-4 km thick. We examine, using a 2D approach, whether this low heat flow could reasonably be accounted for by thermal exchange between the oceanic and continental lithospheres.
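
    The 1D steady-state reasoning behind such models reduces to a simple balance: surface heat flow equals the basal (mantle) heat flow plus the integrated radiogenic heat production of the crust above. The numbers below are illustrative values chosen only so the result lands inside the reported 47-80 mW/m2 range; they are not the study's actual parameters.

```python
def surface_heat_flow(q_base_w_m2, layers):
    """Steady-state 1-D conduction with internal heat production:
    q_surface = q_base + sum(thickness_i * production_i) over the layers.
    layers: list of (thickness in m, heat production in W/m^3)."""
    return q_base_w_m2 + sum(h * a for h, a in layers)

# Illustrative: 30 mW/m^2 basal heat flow under 25 km of thinned
# continental crust producing 1.0 microW/m^3 gives 55 mW/m^2 at the surface.
q = surface_heat_flow(0.030, [(25e3, 1.0e-6)])
```

    Replacing the thick radiogenic continental layer with a thin 3-4 km oceanic crust (which produces far less heat) is why the model predicts the drop in surface heat flow toward the oceanic domain.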

  10. Supramolecular transformations within discrete coordination-driven supramolecular architectures.

    PubMed

    Wang, Wei; Wang, Yu-Xuan; Yang, Hai-Bo

    2016-05-03

    In this review, a comprehensive summary of supramolecular transformations within discrete coordination-driven supramolecular architectures, including helices, metallacycles, metallacages, etc., is presented. Recent investigations have demonstrated that coordination-driven self-assembled architectures provide an ideal platform to study supramolecular transformations mainly due to the relatively rigid yet dynamic nature of the coordination bonds. Various stimuli have been extensively employed to trigger the transformation processes of metallosupramolecular architectures, such as solvents, concentration, anions, guests, change in component fractions or chemical compositions, light, and post-modification reactions, which allowed for the formation of new structures with specific properties and functions. Thus, it is believed that supramolecular transformations could serve as another highly efficient approach for generating diverse metallosupramolecular architectures. Classified by the aforementioned various stimuli used to induce the interconversion processes, the emphasis in this review will be on the transformation conditions, structural changes, mechanisms, and the output of specific properties and functions upon induction of structural transformations.

  11. Protocol Architecture Model Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    NASA's Glenn Research Center (GRC) defines and develops advanced technology for high-priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: Unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: Unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: Multicast Communications (or "Multicasting"), 1 Spacecraft to N Ground Receivers, N Ground Transmitters to 1 Ground Receiver via a Spacecraft.

  12. ESPC Common Model Architecture

    DTIC Science & Technology

    2014-09-30

    support for the Intel MIC architecture, the Apple Clang/LLVM C++ compiler is supported on both Linux and Darwin, and ESMF's dependency on the NetCDF C...compiler on both Linux and Darwin systems. • Support was added to compile the ESMF library for the Intel MIC architecture under Linux. This allows

  13. Digital Architecture Planning Model

    SciTech Connect

    Oxstrand, Johanna Helene; Al Rashdan, Ahmad Yahya Mohammad; Bly, Aaron Douglas; Rice, Brandon Charles; Fitzgerald, Kirk; Wilson, Keith Leon

    2016-03-01

    As part of the U.S. Department of Energy’s Light Water Reactor Sustainability Program, the Digital Architecture (DA) Project focuses on providing a model that nuclear utilities can refer to when planning deployment of advanced technologies. The digital architecture planning model (DAPM) is the methodology for mapping power plant operational and support activities into a DA that unifies all data sources needed by the utilities to operate their plants. The DA is defined as a collection of information technology capabilities needed to support and integrate a wide spectrum of real-time digital capabilities for performance improvements of nuclear power plants. DA can be thought of as the integration of the separate instrumentation and control and information systems already in place in nuclear power plants, which are brought together for the purpose of creating new levels of automation in plant work activities. A major objective in DAPM development was to survey all key areas that needed to be reviewed in order for a utility to make knowledgeable decisions regarding needs and plans to implement a DA at the plant. The development was done in two steps. First, researchers surveyed the nuclear industry in order to learn its near-term plans for adopting new advanced capabilities and implementing a network (i.e., wireless and wired) infrastructure throughout the plant, including the power block. Secondly, a literature review covering regulatory documents, industry standards, and technical research reports and articles was conducted. The objective of the review was to identify key areas to be covered by the DAPM, which included the following: 1. The need for a DA and its benefits to the plant 2. Resources required to implement the DA 3. Challenges that need to be addressed and resolved to implement the DA 4. Roles and responsibilities of the DA implementation plan. The DAPM was developed based on results from the survey and the literature review. Model development, including…

  14. Transforming Space Missions into Service Oriented Architectures

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Frye, Stuart; Cappelaere, Pat

    2006-01-01

    This viewgraph presentation reviews the vision of sensor web enablement via a Service Oriented Architecture (SOA). A generic example is given of a user finding a service through the Web and initiating a request for the desired observation. The parts that comprise this system, and how they interact, are reviewed. The advantages of using an SOA are discussed.

  15. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to the rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for these improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  16. Macromolecular metamorphosis via stimulus-induced transformations of polymer architecture

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Kabb, Christopher P.; Dai, Yuqiong; Hill, Megan R.; Ghiviriga, Ion; Bapat, Abhijeet P.; Sumerlin, Brent S.

    2017-08-01

    Macromolecular architecture plays a pivotal role in determining the properties of polymers. When designing polymers for specific applications, it is not only the size of a macromolecule that must be considered, but also its shape. In most cases, the topology of a polymer is a static feature that is inalterable once synthesized. Using reversible-covalent chemistry to prompt the disconnection of chemical bonds and the formation of new linkages in situ, we report polymers that undergo dramatic topological transformations via a process we term macromolecular metamorphosis. Utilizing this technique, a linear amphiphilic block copolymer or hyperbranched polymer undergoes 'metamorphosis' into comb, star and hydrophobic block copolymer architectures. This approach was extended to include a macroscopic gel which transitioned from a densely and covalently crosslinked network to one with larger distances between the covalent crosslinks when heated. These architectural transformations present an entirely new approach to 'smart' materials.

  17. Optical chirp z-transform processor with a simplified architecture.

    PubMed

    Ngo, Nam Quoc

    2014-12-29

    Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture for a reconfigurable optical chirp z-transform (OCZT) processor based on silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings, which makes fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor, as a special case of the synthesized OCZT, is then presented to demonstrate its effectiveness. The designed ODFT can potentially be used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
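
    The discrete-time convolution view the abstract builds on comes from the identity n*k = (n^2 + k^2 - (k - n)^2) / 2, which turns X[k] = sum_n x[n] A^(-n) W^(n*k) into a chirp pre-multiply, a convolution, and a chirp post-multiply (Bluestein's formulation). The sketch below is a software analogue, not the optical PLC design; with A = 1 and W = exp(-2*pi*i/N) it reduces to the DFT, which is the ODFT special case the paper designs.

```python
import numpy as np

def czt(x, m=None, w=None, a=1.0):
    """Chirp z-transform X[k] = sum_n x[n] a^(-n) w^(n k), k = 0..m-1,
    computed as a convolution via n*k = (n^2 + k^2 - (k - n)^2) / 2."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    m = n if m is None else m
    w = np.exp(-2j * np.pi / m) if w is None else w
    nn, kk = np.arange(n), np.arange(m)
    y = x * a ** (-nn) * w ** (nn * nn / 2.0)        # chirp pre-multiply
    size = 1
    while size < n + m - 1:                          # FFT length that makes the
        size *= 2                                    # circular conv act linearly
    v = np.zeros(size, dtype=complex)
    v[:m] = w ** (-(kk * kk) / 2.0)                  # inverse chirp, lags >= 0
    tail = np.arange(1, n)
    v[size - n + 1:] = w ** (-(tail[::-1] ** 2) / 2.0)  # wrapped negative lags
    conv = np.fft.ifft(np.fft.fft(y, size) * np.fft.fft(v))
    return w ** (kk * kk / 2.0) * conv[:m]           # chirp post-multiply

x = np.arange(8.0)
X = czt(x)   # DFT special case
```

    The convolution structure is what maps well onto fixed hardware (optical or electronic): the chirps are pointwise multiplications, and the single convolution can be shared across all output bins.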

  18. Efficient architecture for adaptive directional lifting-based wavelet transform

    NASA Astrophysics Data System (ADS)

    Yin, Zan; Zhang, Li; Shi, Guangming

    2010-07-01

    Adaptive directional lifting-based wavelet transform (ADL) has better performance than conventional lifting in both image compression and de-noising. However, no hardware architecture has been proposed for it because of its high computational complexity and huge internal memory requirements. In this paper, we propose a four-stage pipelined architecture for 2-dimensional (2D) ADL with fast computation and high data throughput. The proposed architecture comprises column direction estimation, column lifting, row direction estimation, and row lifting, which are performed in parallel in a pipeline mode. Since the column-processed data is transposed, the row processor can reuse the column processor, which decreases the design complexity. In the lifting step, predict and update are also performed in parallel. For an 8×8 image sub-block, the proposed architecture can finish the ADL forward transform within 78 clock cycles. The architecture is implemented on a Xilinx Virtex5 device, on which the frequency can reach 367 MHz. The processing time is 212.5 ns, which can meet the requirements of real-time systems.
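
    The predict/update split that the pipeline parallelizes is easiest to see in a plain software version of one lifting level. The 5/3 (LeGall) filter below is the standard non-directional base case, with symmetric boundary handling; it is an illustrative sketch of the lifting scheme, not the paper's hardware design or its directional adaptation.

```python
def lift_53(x):
    """One level of the LeGall 5/3 lifting wavelet (predict, then update),
    with symmetric boundary extension; returns (approx, detail)."""
    n = len(x)
    assert n % 2 == 0
    s = [float(v) for v in x[0::2]]   # even samples -> approximation lane
    d = [float(v) for v in x[1::2]]   # odd samples  -> detail lane
    for i in range(len(d)):           # predict: d -= average of neighbors
        right = s[i + 1] if i + 1 < len(s) else s[-1]
        d[i] -= (s[i] + right) / 2.0
    for i in range(len(s)):           # update: s += quarter of detail sum
        left = d[i - 1] if i - 1 >= 0 else d[0]
        s[i] += (left + d[i]) / 4.0
    return s, d

def unlift_53(s, d):
    """Inverse lifting: undo update, undo predict, re-interleave."""
    s, d = list(s), list(d)
    for i in range(len(s)):
        left = d[i - 1] if i - 1 >= 0 else d[0]
        s[i] -= (left + d[i]) / 4.0
    for i in range(len(d)):
        right = s[i + 1] if i + 1 < len(s) else s[-1]
        d[i] += (s[i] + right) / 2.0
    x = []
    for a, b in zip(s, d):
        x.extend([a, b])
    return x

x = [3, 1, 4, 1, 5, 9, 2, 6]
approx, detail = lift_53(x)
```

    Because each step only adds or subtracts a quantity computed from the other lane, inversion is exact, and the two loops are independent enough to run predict and update in parallel, which is the property the hardware pipeline exploits.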

  19. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent relative to the direct method.

  1. Compressive optical image watermarking using joint Fresnel transform correlator architecture

    NASA Astrophysics Data System (ADS)

    Li, Jun; Zhong, Ting; Dai, Xiaofang; Yang, Chanxia; Li, Rong; Tang, Zhilie

    2017-02-01

    A new optical image watermarking technique based on compressive sensing using a joint Fresnel transform correlator architecture is presented. A secret scene or image is first embedded into a host image by use of the joint Fresnel transform correlator architecture. Then, the watermarked image is compressed to a much smaller data volume using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the watermarked image is reconstructed via compressive sensing theory and a specified holographic reconstruction algorithm. Preliminary numerical simulations show that the method is effective and suitable for secure optical image transmission over future all-optical networks, owing to its completely optical implementation and greatly reduced hologram data volume.

  2. Transformation of legacy network management system to service oriented architecture

    NASA Astrophysics Data System (ADS)

    Sathyan, Jithesh; Shenoy, Krishnananda

    2007-09-01

    Service providers today face the challenge of operating and maintaining multiple networks based on multiple technologies. Network Management System (NMS) solutions are used to manage these networks. However, the NMS is tightly coupled with the Element or Core network components, so there are multiple NMS solutions for heterogeneous networks. Current network management solutions are targeted at a variety of independent networks. The widespread popularity of the IP Multimedia Subsystem (IMS) is a clear indication that all of these independent networks will be integrated into a single IP-based infrastructure, referred to as Next Generation Networks (NGN), in the near future. The services, network architectures, and traffic patterns in NGN will dramatically differ from those of current networks. The heterogeneity and complexity of NGN, including concepts like Fixed Mobile Convergence, will bring a number of challenges to network management. The high degree of complexity accompanying network element technology necessitates network management systems that can utilize this technology to provide more service interfaces while hiding the inherent complexity. As operators begin to add new networks and expand existing networks to support new technologies and products, the necessity of scalable, flexible, and functionally rich NMS systems arises. Another important factor influencing NMS architecture is mergers and acquisitions among the key vendors. Ease of integration is a key impediment in the traditional hierarchical NMS architecture. These requirements trigger the need for an architectural framework that will address NGNM (Next Generation Network Management) issues seamlessly. This paper presents a unique perspective on bringing service oriented architecture (SOA) to legacy network management systems and advocates a staged approach to transforming a legacy NMS to SOA. The architecture at each stage is detailed along with the technical advantages and…

  3. Data Model as an Architectural View

    DTIC Science & Technology

    2009-10-01

    for architecture documentation geared towards information systems prescribe a data view [Garland 2003], data architecture view [TOGAF 2007], or...entity-relationship diagrams. The Open Group Architecture Framework (TOGAF) suggests entity-relationship diagrams to illustrate the Information Systems Architecture - Data Architecture views [TOGAF 2007]. The "4+1" View Model of Software Architecture indicates that entity-relationship diagrams

  4. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOEpatents

    Chao, Tian-Jy; Kim, Younghun

    2015-01-06

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.

  5. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOEpatents

    Chao, Tian-Jy; Kim, Younghun

    2015-02-03

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.

  6. HRST architecture modeling and assessments

    SciTech Connect

    Comstock, D.A.

    1997-01-01

    This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented. © 1997 American Institute of Physics.

  7. HRST architecture modeling and assessments

    NASA Astrophysics Data System (ADS)

    Comstock, Douglas A.

    1997-01-01

    This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented.

  8. Scalable Models Using Model Transformation

    DTIC Science & Technology

    2008-07-13

    huge number of web documents. We have created a simplified demo using 5 worker machines in the Ptolemy II modeling and simulation environment [3], as...the pattern of the transformation rule matches any subgraph of the input model. When the TransformationRule actor is opened in the Ptolemy II GUI...tool developed in the Ptolemy II framework, existing tools include AGG [14], PROGRES [15], AToM3 [16], FUJABA [17], VIATRA2 [18], and GReAT [19]

  9. Predicting and Modeling RNA Architecture

    PubMed Central

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  10. Optical encryption using a joint transform correlator architecture

    NASA Astrophysics Data System (ADS)

    Nomura, Takanori; Javidi, Bahram

    2000-08-01

    An optical double random-phase encryption method using a joint transform correlator architecture is proposed. In this method, the joint power spectrum of the image to be encrypted and the key codes is recorded as the encrypted data. Unlike the case with classical double random-phase encryption, the same key code is used to both encrypt and decrypt the data, and the conjugate key is not required. Computer simulations and optical experimental results using a photorefractive-crystal-based processor are presented.
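As a rough numerical illustration of the recording step described above, the following NumPy sketch forms the joint power spectrum of a phase-bonded input image and a key mask placed side by side in the input plane. The image content, mask generation, and plane layout are illustrative assumptions, not the authors' optical setup; decryption and the photorefractive processor are not simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
img = np.zeros((N, N))
img[24:40, 24:40] = 1.0                              # toy input image

# Two statistically independent random phase masks (input mask and key code).
input_mask = np.exp(2j * np.pi * rng.random((N, N)))
key_mask   = np.exp(2j * np.pi * rng.random((N, N)))

# Place the phase-bonded image and the key code side by side in the input
# plane, as in a joint transform correlator geometry.
plane = np.zeros((N, 2 * N), dtype=complex)
plane[:, :N] = img * input_mask
plane[:, N:] = key_mask

# The encrypted data is the joint power spectrum: the intensity of the
# Fourier transform of the joint input plane, which a camera can record.
jps = np.abs(np.fft.fft2(plane)) ** 2
```

Because only the intensity is recorded, `jps` is real and non-negative, which is what allows an intensity detector (rather than holographic phase recording) to capture the encrypted data.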

  11. Formalism Challenges of the Cougaar Model Driven Architecture

    NASA Technical Reports Server (NTRS)

    Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.

    2004-01-01

    The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed today. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.

  12. The PASS project architectural model

    SciTech Connect

    Day, C.T.; Loken, S.; Macfarlane, J.F.

    1994-08-01

    The PASS project has as its goal the implementation of solutions to the foreseen data access problems of the next generation of scientific experiments. The architectural model results from an evaluation of the operational and technical requirements and is described in terms of an abstract reference model, an implementation model and a discussion of some design aspects. The abstract reference model describes a system that matches the requirements in terms of its components and the mechanisms by which they communicate, but does not discuss policy or design issues that would be necessary to match the model to an actual implementation. Some of these issues are discussed, but more detailed design and simulation work will be necessary before choices can be made.

  13. Institutional Transformation Model

    SciTech Connect

    2015-10-19

    Reducing the energy consumption of large institutions with dozens to hundreds of existing buildings while maintaining and improving existing infrastructure is a critical economic and environmental challenge. SNL's Institutional Transformation (IX) work integrates facilities and infrastructure sustainability technology capabilities and collaborative decision support modeling approaches to help facilities managers at Sandia National Laboratories (SNL) simulate different future energy reduction strategies and meet long term energy conservation goals.

  14. A pipelined IC architecture for radon transform computations in a multiprocessor array

    SciTech Connect

    Agi, I.; Hurst, P.J.; Current, K.W. (Dept. of Electrical Engineering and Computer Science)

    1990-05-25

    The amount of data generated by CT scanners is enormous, making the reconstruction operation slow, especially for 3-D and limited-data scans requiring iterative algorithms. The Radon transform and its inverse, commonly used for CT image reconstruction from projections, are computationally burdensome for today's single-processor computer architectures. If the processing times for the forward and inverse Radon transforms were comparatively small, a large set of new CT algorithms would become feasible, especially those for 3-D and iterative tomographic image reconstructions. In addition to image reconstruction, a fast "Radon Transform Computer" could be naturally applied in other areas of multidimensional signal processing including 2-D power spectrum estimation, modeling of human perception, Hough transforms, image representation, synthetic aperture radar processing, and others. A high speed processor for this operation is likely to motivate new algorithms for general multidimensional signal processing using the Radon transform. In the proposed workshop paper, we will first describe interpolation schemes useful in computation of the discrete Radon transform and backprojection and compare their errors and hardware complexities. We then will evaluate through statistical means the fixed-point number system required to accept and generate 12-bit input and output data with acceptable error using the linear interpolation scheme selected. These results set some of the requirements that must be met by our new VLSI chip architecture. Finally we will present a new unified architecture for a single-chip processor for computing both the forward Radon transform and backprojection at high data rates. 3 refs., 2 figs.
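The interpolation question raised in the abstract can be sketched in software. Below is a reference-style NumPy implementation of a single parallel-beam projection using bilinear (linear) interpolation along rays; the sampling geometry is an assumption, and this is a numerical sketch, not the proposed pipelined VLSI architecture.

```python
import numpy as np

def radon_projection(img, theta):
    """One parallel-beam projection of a square image at angle theta
    (radians), sampling along rays with bilinear interpolation."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    coords = np.arange(n) - c
    t, s = np.meshgrid(coords, coords)      # detector bin t, ray parameter s
    # Rotate (t, s) back into image coordinates to get the sample points.
    x = t * np.cos(theta) - s * np.sin(theta) + c
    y = t * np.sin(theta) + s * np.cos(theta) + c
    # Samples outside the grid are clamped (adequate for zero-padded images).
    x0 = np.clip(np.floor(x).astype(int), 0, n - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, n - 2)
    fx, fy = x - x0, y - y0
    # Bilinear interpolation: the linear scheme discussed in the abstract.
    vals = (img[y0, x0] * (1 - fx) * (1 - fy)
            + img[y0, x0 + 1] * fx * (1 - fy)
            + img[y0 + 1, x0] * (1 - fx) * fy
            + img[y0 + 1, x0 + 1] * fx * fy)
    return vals.sum(axis=0)                 # integrate along each ray
```

At 0° and 90° the projections reduce to column and row sums of the image, which gives a quick sanity check of the interpolation.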

  15. Entropy-based consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław Jerzy

    2016-09-01

    A description of software architecture is a plan of IT system construction, so any gaps in the architecture affect the overall success of the entire project. Most definitions describe software architecture as a set of views that are mutually unrelated, and hence potentially inconsistent. Software architecture completeness is also often described ambiguously. As a result, most methods of building IT systems contain many gaps and ambiguities, presenting obstacles to the automation of software building. In this article the consistency and completeness of software architecture are defined mathematically, based on the entropy of the architecture description. Following this approach, we also propose a method for automatic verification of the consistency and completeness of the software architecture development method presented in our previous article as Consistent Model Driven Architecture (CMDA). The proposed FBS (Functionality-Behaviour-Structure) entropy-based metric applied in our CMDA approach enables IT architects to decide whether the modelling process is complete and consistent. With this metric, software architects can assess the readiness of ongoing modelling work for the start of IT system building, and even assess objectively whether the designed software architecture of the IT system could be implemented at all. The overall benefit of this approach is that it facilitates the preparation of complete and consistent software architecture more effectively and enables assessment and monitoring of the ongoing modelling status. We demonstrate this with a few industry examples of IT system designs.
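The abstract does not give the FBS metric's formula, so the following sketch only illustrates the underlying entropy calculation: Shannon entropy over a hypothetical assignment of model elements to Functionality/Behaviour/Structure views. The labels and their interpretation are assumptions made for illustration.

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (bits) of a distribution of model elements.
    Illustrative only: a description spread evenly across views carries
    more entropy than one concentrated in a single view."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical element-to-view assignments for two small designs:
balanced = ["F", "B", "S", "F", "B", "S"]   # elements spread over all views
lopsided = ["S", "S", "S", "S", "S", "F"]   # structure-only, views missing
```

A balanced description attains the maximum entropy log2(3) ≈ 1.585 bits over three views, while the lopsided one scores lower; an entropy-based completeness check could flag the latter.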

  16. Building Paradigms: Major Transformations in School Architecture (1798-2009)

    ERIC Educational Resources Information Center

    Gislason, Neil

    2009-01-01

    This article provides an historical overview of significant trends in school architecture from 1798 to the present. I divide the history of school architecture into two major phases. The first period falls between 1798 and 1921: the modern graded classroom emerged as a standard architectural feature during this period. The second period, which…

  18. Modeling and analysis of multiprocessor architectures

    NASA Technical Reports Server (NTRS)

    Yalamanchili, S.; Carpenter, T.

    1989-01-01

    Some technologies developed for system level modeling and analysis of algorithms/architectures using an architecture design and development system are reviewed. Modeling and analysis is described with attention given to modeling constraints and analysis using constrained software graphs. An example is presented of an ADAS graph and its associated attributes, such as firing delay, token consume rate, token produce rate, firing threshold, firing condition, arc queue lengths, associated C or Ada functional model, and stochastic behavior.
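The node attributes listed above (firing threshold, token consume/produce rates, firing delay) can be illustrated with a minimal token-flow toy. This is a generic dataflow-node sketch assuming a single input queue and discrete time steps; it is not the ADAS tool itself, and all names are hypothetical.

```python
class DataflowNode:
    """Minimal sketch of an ADAS-style graph node: it fires when its input
    queue holds at least `threshold` tokens, consuming `consume` tokens and,
    after `delay` time steps, producing `produce` tokens on its output."""
    def __init__(self, threshold=1, consume=1, produce=1, delay=1):
        self.threshold, self.consume = threshold, consume
        self.produce, self.delay = produce, delay
        self.in_queue = 0
        self.pending = []        # (completion_time, tokens) for in-flight firings
        self.out_queue = 0

    def step(self, now, arriving=0):
        self.in_queue += arriving
        # Complete any firings whose delay has elapsed.
        done = [p for p in self.pending if p[0] <= now]
        self.pending = [p for p in self.pending if p[0] > now]
        self.out_queue += sum(tokens for _, tokens in done)
        # Fire once if the firing threshold is met.
        if self.in_queue >= self.threshold:
            self.in_queue -= self.consume
            self.pending.append((now + self.delay, self.produce))
```

For example, a node with threshold 2, consume rate 2, produce rate 3 and delay 1 fires when two tokens arrive and delivers three output tokens one step later.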

  19. Service entity network virtualization architecture and model

    NASA Astrophysics Data System (ADS)

    Jin, Xue-Guang; Shou, Guo-Chu; Hu, Yi-Hong; Guo, Zhi-Gang

    2017-07-01

    Communication network can be treated as a complex network carrying a variety of services and service can be treated as a network composed of functional entities. There are growing interests in multiplex service entities where individual entity and link can be used for different services simultaneously. Entities and their relationships constitute a service entity network. In this paper, we introduced a service entity network virtualization architecture including service entity network hierarchical model, service entity network model, service implementation and deployment of service entity networks. Service entity network oriented multiplex planning model were also studied and many of these multiplex models were characterized by a significant multiplex of the links or entities in different service entity network. Service entity networks were mapped onto shared physical resources by dynamic resource allocation controller. The efficiency of the proposed architecture was illustrated in a simulation environment that allows for comparative performance evaluation. The results show that, compared to traditional networking architecture, this architecture has a better performance.

  20. Electromagnetic Physics Models for Parallel Computing Architectures

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.
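The SIMD-friendly track processing that GeantV targets can be hinted at with a structure-of-arrays toy in NumPy: one vectorized statement updates a whole bundle of tracks and matches a scalar per-track loop. The quantities, units and constant loss rate are illustrative assumptions, not GeantV code or its physics models.

```python
import numpy as np

# Hypothetical structure-of-arrays track container: each physical quantity
# is a contiguous array, so one operation maps naturally onto SIMD lanes.
rng = np.random.default_rng(1)
n_tracks = 1024
energy = rng.uniform(1.0, 10.0, n_tracks)     # MeV (illustrative)
step   = rng.uniform(0.01, 0.1, n_tracks)     # cm  (illustrative)
dedx   = 2.0                                  # MeV/cm, toy constant loss rate

# Vectorized (SIMD-style) energy-loss update of every track at once ...
energy_v = np.maximum(energy - dedx * step, 0.0)

# ... agrees with the scalar per-track loop it replaces.
energy_s = np.array([max(e - dedx * s, 0.0) for e, s in zip(energy, step)])
```

The point of the layout is that the vectorized form exposes the same arithmetic to SIMD units on CPUs or to SIMT threads on accelerators without changing the algorithm.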

  1. Electromagnetic physics models for parallel computing architectures

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  2. Electromagnetic physics models for parallel computing architectures

    SciTech Connect

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  3. Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs

    NASA Astrophysics Data System (ADS)

    Dias, Tiago; Roma, Nuno; Sousa, Leonel

    2014-12-01

    A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. Contrasting to other designs with similar functionality, the presented architecture is supported on a scalable, modular and completely configurable processing structure. This flexible structure not only allows to easily reconfigure the architecture to support different transform kernels, but it also permits its resizing to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, not only is it highly suitable to realize high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only a reduced subset of transforms that are used by a specific video standard. The experimental results that were obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above and that are capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
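The configurability described above can be sketched by parameterizing a separable 2-D transform on its kernel matrix. The 4x4 H.264/AVC integer core kernel shown is the standard one; treating a kernel swap as the reconfiguration mechanism is a software simplification of the hardware design, and kernels for AVS, VC-1 or HEVC (including larger orders) would slot into the same structure.

```python
import numpy as np

# 4x4 forward core transform kernel of H.264/AVC: an integer
# approximation of the DCT with orthogonal rows.
C_AVC4 = np.array([[1,  1,  1,  1],
                   [2,  1, -1, -2],
                   [1, -1, -1,  1],
                   [1, -2,  2, -1]])

def transform_2d(block, kernel):
    """Separable 2-D transform: rows then columns (Y = C X C^T)."""
    return kernel @ block @ kernel.T
```

Orthogonality of the kernel rows (C Cᵀ is diagonal) is what lets the standard fold normalization into quantization, and a flat block transforms to a single DC coefficient.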

  4. The Software Architecture of Global Climate Models

    NASA Astrophysics Data System (ADS)

    Alexander, K. A.; Easterbrook, S. M.

    2011-12-01

    It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.

  5. Utilizing Rapid Prototyping for Architectural Modeling

    ERIC Educational Resources Information Center

    Kirton, E. F.; Lavoie, S. D.

    2006-01-01

    This paper will discuss our approach to, success with and future direction in rapid prototyping for architectural modeling. The premise that this emerging technology has broad and exciting applications in the building design and construction industry will be supported by visual and physical evidence. This evidence will be presented in the form of…

  7. Modeling Operations Costs for Human Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2013-01-01

    Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQs supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  8. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    NASA Astrophysics Data System (ADS)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for the 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies filter bank (9, 7). This 3-DDWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture is done by synthesising it using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
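The lifting structure can be shown compactly in software. The sketch below implements one level of the simpler Le Gall 5/3 wavelet (the article uses the (9,7) bank) with the same predict/update pattern; periodic border extension via np.roll is an assumption made for brevity, and perfect reconstruction holds exactly in integer arithmetic.

```python
import numpy as np

def dwt53_forward(x):
    """One level of the Le Gall 5/3 wavelet via lifting: a predict step
    followed by an update step, each touching only neighbouring samples,
    which is what keeps the storage requirement low."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict: detail = odd sample minus the average of neighbouring evens.
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # Update: approximation = even sample plus a quarter of nearby details.
    s = even + ((np.roll(d, 1) + d + 2) >> 2)
    return s, d

def dwt53_inverse(s, d):
    """Inverse lifting: undo the update, then the predict, in reverse order."""
    even = s - ((np.roll(d, 1) + d + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    out = np.empty(even.size + odd.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out
```

Because each lifting step is trivially invertible, the inverse transform reconstructs the input exactly, with no filter-bank convolution buffers.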

  9. Editorial: Cognitive Architectures, Model Comparison and AGI

    NASA Astrophysics Data System (ADS)

    Lebiere, Christian; Gonzalez, Cleotilde; Warwick, Walter

    2010-12-01

    Cognitive Science and Artificial Intelligence share compatible goals of understanding and possibly generating broadly intelligent behavior. In order to determine if progress is made, it is essential to be able to evaluate the behavior of complex computational models, especially those built on general cognitive architectures, and compare it to benchmarks of intelligent behavior such as human performance. Significant methodological challenges arise, however, when trying to extend approaches used to compare model and human performance from tightly controlled laboratory tasks to complex tasks involving more open-ended behavior. This paper describes a model comparison challenge built around a dynamic control task, the Dynamic Stocks and Flows. We present and discuss distinct approaches to evaluating performance and comparing models. Lessons drawn from this challenge are discussed in light of the challenge of using cognitive architectures to achieve Artificial General Intelligence.

  10. Parameter estimation for transformer modeling

    NASA Astrophysics Data System (ADS)

    Cho, Sung Don

    Large power transformers, an aging and vulnerable part of our energy infrastructure, are at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead time of 12 months. Transient overvoltages can cause great damage and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be one of the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Thus, transformer modeling is not a mature field and newer improved models must be made available. In this work, improved topologically-correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where available information is incomplete. The transformer nameplate data is required and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, lambda-i saturation characteristic, capacitive effects, and frequency dependency of winding resistance and core loss. Steady-state excitation, and de-energization and re-energization transients
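As a hedged illustration of parameter estimation from limited data (not the dissertation's actual procedure), the lambda-i saturation characteristic is often represented by a two-term curve i = a·λ + b·λⁿ; the coefficients are linear in the unknowns, so they can be recovered by ordinary least squares from measured flux-linkage/current pairs.

```python
import numpy as np

# Fit i = a*lam + b*lam**n to (lam, i) samples by linear least squares.
n = 9                                     # assumed odd saturation exponent
lam = np.linspace(0.1, 1.2, 30)           # flux-linkage samples (p.u.)
a_true, b_true = 0.05, 0.4
i_meas = a_true * lam + b_true * lam**n   # noise-free synthetic "measurement"

# Design matrix with one column per basis function.
A = np.column_stack([lam, lam**n])
(a_est, b_est), *_ = np.linalg.lstsq(A, i_meas, rcond=None)
```

With real measurements the fit would be noisy and the exponent n itself might need to be selected, but the structure of the estimation problem is the same.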

  11. A parallel VLSI architecture for a digital filter of arbitrary length using Fermat number transforms

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1982-01-01

    A parallel architecture for computation of the linear convolution of two sequences of arbitrary lengths using the Fermat number transform (FNT) is described. In particular a pipeline structure is designed to compute a 128-point FNT. In this FNT, only additions and bit rotations are required. A standard barrel shifter circuit is modified so that it performs the required bit rotation operation. The overlap-save method is generalized for the FNT to compute a linear convolution of arbitrary length. A parallel architecture is developed to realize this type of overlap-save method using one FNT and several inverse FNTs of 128 points. The generalized overlap save method alleviates the usual dynamic range limitation in FNTs of long transform lengths. Its architecture is regular, simple, and expandable, and therefore naturally suitable for VLSI implementation.
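A small software model shows why the FNT needs only additions and bit rotations: every twiddle factor is a power of 2. The sketch below uses the Fermat prime 17 (F2) with an 8-point transform rather than the paper's 128-point design, and a naive O(N²) transform for clarity.

```python
import numpy as np

P, N = 17, 8                 # Fermat prime F2 = 2^4 + 1; transform length 8
W, W_INV, N_INV = 2, 9, 15   # 2 has order 8 mod 17; 2*9 ≡ 1; 8*15 ≡ 1 (mod 17)

def fnt(x, root):
    """Naive O(N^2) number-theoretic transform mod 17. Because every
    twiddle factor is a power of 2, a hardware FNT needs only adders
    and bit rotations (the modified barrel shifter in the paper)."""
    return np.array([sum(int(x[n]) * pow(root, n * k, P) for n in range(N)) % P
                     for k in range(N)])

def fnt_convolve(a, b):
    """Length-8 cyclic convolution via forward FNTs, pointwise product,
    and inverse FNT (scaled by N^{-1} mod P)."""
    prod = (fnt(a, W) * fnt(b, W)) % P
    return (N_INV * fnt(prod, W_INV)) % P

# Zero-padding turns the cyclic convolution into a linear one; the paper's
# generalized overlap-save method extends this to arbitrary lengths.
a = np.array([1, 2, 3, 1, 0, 0, 0, 0])
b = np.array([2, 1, 1, 0, 0, 0, 0, 0])
```

As long as the true convolution values stay below the modulus, the result is exact, which is the dynamic-range constraint the overlap-save generalization alleviates for long transforms.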

  12. Architecture Models and Data Flows in Local and Group Datawarehouses

    NASA Astrophysics Data System (ADS)

    Bogza, R. M.; Zaharie, Dorin; Avasilcai, Silvia; Bacali, Laura

    Architecture models and possible data flows for local and group datawarehouses are presented, together with some data processing models. The architecture models consist of several layers and the data flows between them. The chosen architecture of a datawarehouse depends on the type and volume of the source data, and influences the analysis, data mining and reporting done upon the data from the DWH.

  13. A Cognitive Architecture for Human Performance Process Model Research

    DTIC Science & Technology

    1992-11-01

    A Cognitive Architecture for Human Performance Process Model Research (Contract F33615-91-D-0009, Program Element 62205F; author: Michael J. Young). Subject terms: cognitive architectures, human performance process models, cognitive psychology, implementation architectures, computational modeling.

  14. Generating 3D building models from architectural drawings: a survey.

    PubMed

    Yin, Xuetao; Wonka, Peter; Razdan, Anshuman

    2009-01-01

    Automatically generating 3D building models from 2D architectural drawings has many useful applications in the architecture engineering and construction community. This survey of model generation from paper and CAD-based architectural drawings covers the common pipeline and compares various algorithms for each step of the process.

  15. Performance and Architecture Lab Modeling Tool

    SciTech Connect

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior.
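Palm's annotation language is not reproduced in the abstract, so the following toy only illustrates the stated idea of a hierarchical, executable model whose constituent parts correspond to blocks of code; all names and the cost semantics are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class BlockModel:
    """Toy analogue of a generated hierarchical model: each annotated
    source block becomes a node, and evaluating the root walks the same
    structure as the code it models."""
    name: str
    cost: float = 0.0                       # analytic cost of this block alone
    children: list = field(default_factory=list)

    def evaluate(self):
        # A node's predicted cost is its own cost plus its children's.
        return self.cost + sum(c.evaluate() for c in self.children)

# Hypothetical application: main calls setup once and a 4-iteration loop.
app = BlockModel("main", 1.0, [
    BlockModel("setup", 0.5),
    BlockModel("solver_loop", 0.0, [BlockModel("iteration", 0.25)] * 4),
])
```

Because the model is itself a program mirroring the code's structure, validating it against measured profiles amounts to comparing numbers node by node.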

  16. Performance and Architecture Lab Modeling Tool

    SciTech Connect

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it also formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator then synthesizes a model based on the static and dynamic mapping of annotations to program behavior.
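
    The idea of a model as an executable program whose parts mirror annotated code blocks can be sketched in a few lines. This is a minimal illustration of the principle only, not Palm's actual annotation language; all block names and cost expressions are invented:

```python
# Hypothetical sketch: a hierarchical analytical performance model expressed
# as an executable program whose functions correspond to annotated code blocks.
# All block names and cost expressions below are invented for illustration.

def model_inner_loop(n):
    # model of an annotated inner block: assume 2 "work units" per iteration
    return 2 * n

def model_outer_loop(m, n):
    # a parent block's model composes its children's models hierarchically
    return m * model_inner_loop(n)

def model_app(m, n):
    # top-level model: the outer loop plus an assumed fixed startup cost
    return model_outer_loop(m, n) + 100

print(model_app(10, 1000))  # → 20100
```

    Because each function maps to one block, the model's hierarchy follows the source code structure, which is the property the abstract emphasizes.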

  17. A Dynamic, Architectural Plant Model Simulating Resource‐dependent Growth

    PubMed Central

    YAN, HONG‐PING; KANG, MENG ZHEN; DE REFFYE, PHILIPPE; DINGKUHN, MICHAEL

    2004-01-01

    • Background and Aims Physiological and architectural plant models have originally been developed for different purposes and therefore have little in common, thus making combined applications difficult. There is, however, an increasing demand for crop models that simulate the genetic and resource‐dependent variability of plant geometry and architecture, because man is increasingly able to transform plant production systems through combined genetic and environmental engineering. • Model GREENLAB is presented, a mathematical plant model that simulates interactions between plant structure and function. Dual‐scale automaton is used to simulate plant organogenesis from germination to maturity on the basis of organogenetic growth cycles that have constant thermal time. Plant fresh biomass production is computed from transpiration, assuming transpiration efficiency to be constant and atmospheric demand to be the driving force, under non‐limiting water supply. The fresh biomass is then distributed among expanding organs according to their relative demand. Demand for organ growth is estimated from allometric relationships (e.g. leaf surface to weight ratios) and kinetics of potential growth rate for each organ type. These are obtained through parameter optimization against empirical, morphological data sets by running the model in inverted mode. Potential growth rates are then used as estimates of relative sink strength in the model. These and other ‘hidden’ plant parameters are calibrated using the non‐linear, least‐square method. • Key Results and Conclusions The model reproduced accurately the dynamics of plant growth, architecture and geometry of various annual and woody plants, enabling 3D visualization. It was also able to simulate the variability of leaf size on the plant and compensatory growth following pruning, as a result of internal competition for resources. The potential of the model’s underlying concepts to predict the plant
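
    The source-sink allocation principle described above (fresh biomass distributed among expanding organs in proportion to their relative demand) can be sketched directly. The organ names and sink-strength values here are purely illustrative, not GREENLAB's calibrated parameters:

```python
def allocate_biomass(production, demands):
    # distribute fresh biomass among organs in proportion to relative demand,
    # following the source-sink principle described in the abstract
    total = sum(demands.values())
    return {organ: production * d / total for organ, d in demands.items()}

# invented sink strengths (not calibrated GREENLAB parameters)
demands = {"leaf": 3.0, "internode": 1.5, "fruit": 5.5}
shares = allocate_biomass(10.0, demands)
print(shares)  # each organ's share of the 10 g produced this growth cycle
```

    In the real model the demands come from organ-type potential growth kinetics fitted by inverse-mode parameter optimization; here they are fixed numbers for clarity.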

  18. Engineering Structurally Configurable Models with Model Transformation

    DTIC Science & Technology

    2008-12-15

    model in the case of Simulink, and a dataflow model in the case of LabVIEW). Research modeling tools such as Ptolemy II [14], ForSyDe [21], SPEX [30...functionality of our model transformation tool built in the Ptolemy II framework, and its application to large models of distributed and parallel embedded...in Ptolemy II, the same idea can be applied to other modeling tools such as Simulink, LabVIEW, ForSyDe, SPEX and ModHel’X. Moreover, the recent OMG

  19. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri-net-based graph-theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirements for the execution of an algorithm under node time variation, is useful for expanding the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of a resource-limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reducible to equivalent graphs with time-varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs to illustrate the applicability of the analytical theories.
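
    The notion of a resource envelope (the number of resources in concurrent use over time, whose peak is the maximum resource requirement) can be illustrated with a simple interval-counting sketch. The node execution intervals below are invented for illustration and are not taken from an actual ATAMM graph:

```python
def resource_envelope(intervals):
    # count concurrently executing nodes over time from (start, end) intervals;
    # the peak of this envelope is the maximum number of required resources
    events = []
    for start, end in intervals:
        events.append((start, +1))   # node begins executing
        events.append((end, -1))     # node releases its resource
    events.sort()
    envelope, active = [], 0
    for time, delta in events:
        active += delta
        envelope.append((time, active))
    return envelope

# invented node execution intervals, one per graph node
nodes = [(0, 4), (1, 3), (2, 6), (5, 7)]
env = resource_envelope(nodes)
peak = max(count for _, count in env)
print(peak)  # → 3 simultaneously busy resources
```

    Sorting end events before start events at equal times means a resource freed at instant t can be reused at t, a common convention for such envelopes.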

  20. The fermilab central computing facility architectural model

    NASA Astrophysics Data System (ADS)

    Nicholls, J.

    1989-12-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for code management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab.

  1. A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard

    NASA Astrophysics Data System (ADS)

    Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid

    2005-07-01

    The Discrete Wavelet Transform (DWT) is increasingly recognized in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme algorithm is an alternative DWT implementation that has lower computational complexity and a reduced resource requirement. In the JPEG2000 standard two lifting-scheme-based filter banks are introduced: the 5/3 and the 9/7. In this paper a high-throughput, two-channel DWT architecture for both of the JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously with minimum memory requirement for each channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% less resources than two independent single-channel modules. The high throughput and reduced resource requirement make this architecture the proper choice for real-time applications such as Digital Cinema.
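
    For reference, the reversible 5/3 lifting steps defined by JPEG2000 can be sketched as a behavioural model (software only, not the pipelined two-channel hardware above; symmetric boundary extension is assumed at the signal edges):

```python
def dwt53_forward(x):
    # one level of the Le Gall 5/3 lifting scheme (integer, reversible);
    # x must have even length; symmetric extension is applied at the borders
    n = len(x)
    d = []  # high-pass (detail) coefficients: predict step
    for k in range(n // 2):
        right = x[2 * k + 2] if 2 * k + 2 < n else x[n - 2]  # mirror at edge
        d.append(x[2 * k + 1] - (x[2 * k] + right) // 2)
    s = []  # low-pass (smooth) coefficients: update step
    for k in range(n // 2):
        left = d[k - 1] if k > 0 else d[0]  # mirrored detail at left edge
        s.append(x[2 * k] + (left + d[k] + 2) // 4)
    return s, d

def dwt53_inverse(s, d):
    # exact inverse: undo the update step, then the predict step
    n = 2 * len(s)
    x = [0] * n
    for k in range(n // 2):
        left = d[k - 1] if k > 0 else d[0]
        x[2 * k] = s[k] - (left + d[k] + 2) // 4
    for k in range(n // 2):
        right = x[2 * k + 2] if 2 * k + 2 < n else x[n - 2]
        x[2 * k + 1] = d[k] + (x[2 * k] + right) // 2
    return x

x = [10, 14, 12, 3, 7, 50, 9, 8]
s, d = dwt53_forward(x)
assert dwt53_inverse(s, d) == x  # the integer transform is perfectly reversible
```

    The predict/update structure is what makes the lifting form cheaper than direct convolution and naturally pipelineable in hardware.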

  2. Architectural approach for semantic EHR systems development based on Detailed Clinical Models.

    PubMed

    Bernal, Juan G; Lopez, Diego M; Blobel, Bernd

    2012-01-01

    The integrative approach to health information in general, and the development of pHealth systems in particular, require an integrated approach to formally modeled system architectures. Detailed Clinical Models (DCM) is one of the most promising modeling efforts for clinical concept representation in EHR system architectures. Although the feasibility of the DCM modeling methodology has been demonstrated through examples, there is no formal, generic and automatic model transformation technique to ensure a semantically lossless transformation of clinical concepts expressed in DCM to clinical concept representations based on either ISO 13606/openEHR Archetypes or HL7 Templates. The objective of this paper is to propose a generic model transformation method and tooling for transforming DCM clinical concepts into ISO/EN 13606/openEHR Archetypes or HL7 Template models. The automation of the transformation process is supported by Model-Driven Development (MDD) transformation mechanisms and tools. The availability of processes, techniques and tooling for automatic DCM transformation would enable the development of intelligent, adaptive information systems as demanded for pHealth solutions.

  3. Executable Architecture Modeling and Simulation Based on fUML

    DTIC Science & Technology

    2014-06-01

    informal constructs. The paper proposes an approach of executable architecture modeling and simulation by introducing formal UML specification. Firstly...ones. UML is accepted as an Architectural Description Language by architects, and it has become a standard notation to document the architecture...these UML models are not executable. Object Management Group proposes the fUML to enable UML models execution [5]. Accordingly, we propose an

  4. Modeling the Europa Pathfinder avionics system with a model based avionics architecture tool

    NASA Technical Reports Server (NTRS)

    Chau, S.; Traylor, M.; Hall, R.; Whitfield, A.

    2002-01-01

    In order to shorten the avionics architecture development time, the Jet Propulsion Laboratory has developed a model-based architecture simulation tool called the Avionics System Architecture Tool (ASAT).

  5. A Framework and Model for Evaluating Clinical Decision Support Architectures

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2008-01-01

    In this paper, we develop a four-phase model for evaluating architectures for clinical decision support that focuses on: defining a set of desirable features for a decision support architecture; building a proof-of-concept prototype; demonstrating that the architecture is useful by showing that it can be integrated with existing decision support systems; and comparing its coverage to that of other architectures. We apply this framework to several well-known decision support architectures, including Arden Syntax, GLIF, SEBASTIAN and SAGE. PMID: 18462999

  6. Using Model Based Systems Engineering and the Systems Modeling Language to Develop Space Mission Area Architectures

    DTIC Science & Technology

    2013-09-01

    SYSTEMS ENGINEERING AND THE SYSTEMS MODELING LANGUAGE TO DEVELOP SPACE MISSION AREA ARCHITECTURES by Dustin B. Jepperson September 2013...AND THE SYSTEMS MODELING LANGUAGE TO DEVELOP SPACE MISSION AREA ARCHITECTURES 5. FUNDING NUMBERS 6. AUTHOR(S) Dustin B. Jepperson 7. PERFORMING...Application Protocol 233 (AP233), Department of Defense Architecture Framework (DoDAF), Space Mission Area System Architecture (MASA), Overhead

  7. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
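
    The key-equation step the abstract refers to can be sketched with a polynomial Euclidean algorithm over a small prime field. This toy example works over GF(7) with primitive element 3, rather than the GF(2^m) fields used in practice, and it illustrates only the locator computation, not a full decoder:

```python
def trim(poly):
    # drop high-order zero coefficients (coefficients are low-order first)
    while len(poly) > 1 and poly[-1] == 0:
        poly = poly[:-1]
    return poly

def pmul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return trim(out)

def padd(a, b, p):
    n = max(len(a), len(b))
    get = lambda c, i: c[i] if i < len(c) else 0
    return trim([(get(a, i) + get(b, i)) % p for i in range(n)])

def pdivmod(a, b, p):
    a, b = list(a), trim(b)
    q = [0] * max(len(a) - len(b) + 1, 1)
    inv = pow(b[-1], p - 2, p)  # Fermat inverse of the leading coefficient
    for i in range(len(a) - len(b), -1, -1):
        c = a[i + len(b) - 1] * inv % p
        q[i] = c
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - c * bj) % p
    return trim(q), trim(a)

def solve_key_equation(syndromes, t, p):
    # Euclidean algorithm on (x^{2t}, S(x)), stopped once deg r < t;
    # the tracked multiplier is the error locator (up to a scalar factor)
    r0, r1 = [0] * (2 * t) + [1], trim(list(syndromes))
    t0, t1 = [0], [1]
    while len(trim(r1)) - 1 >= t:
        q, r = pdivmod(r0, r1, p)
        r0, r1 = r1, r
        t0, t1 = t1, padd(t0, pmul([(-c) % p for c in q], t1, p), p)
    return t1, r1  # error locator, error evaluator

# toy case: GF(7), alpha = 3, a single error of value 5 at position 2
p, alpha, pos, val, t = 7, 3, 2, 5, 1
syndromes = [val * pow(alpha, pos * j, p) % p for j in (1, 2)]
locator, evaluator = solve_key_equation(syndromes, t, p)
x_inv = pow(pow(alpha, pos, p), p - 2, p)  # inverse of the error location
assert sum(c * pow(x_inv, i, p) for i, c in enumerate(locator)) % p == 0
```

    The final assertion checks the defining property of the locator: it vanishes at the inverse of the error location. Seeding r1 with the Forney syndromes instead, as the abstract describes, lets the same loop handle erasures as well.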

  8. Space Generic Open Avionics Architecture (SGOAA) reference model technical guide

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  9. Modelling the pulse transformer in SPICE

    NASA Astrophysics Data System (ADS)

    Godlewska, Malgorzata; Górecki, Krzysztof; Górski, Krzysztof

    2016-01-01

    The paper is devoted to modelling pulse transformers in SPICE. It shows the character of the selected models of this element, points out their advantages and disadvantages, and presents the results of experimental verification of the considered models. These models vary in complexity, ranging from linearly coupled linear coils to nonlinear electrothermal models. The study was conducted for transformers with ring cores made of a variety of ferromagnetic materials, excited with a sinusoidal signal at a frequency of 100 kHz and different values of load resistance. The operating conditions under which the considered models ensure acceptable calculation accuracy are indicated.
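
    The simplest model family mentioned above, linearly coupled linear coils, can be checked with a phasor calculation. The component values below are invented for illustration; the point is that with unity coupling the terminal voltage ratio equals sqrt(L2/L1) regardless of the load:

```python
import math
import numpy as np

# invented component values; 100 kHz excitation as in the measurements above
f, Rs, RL = 100e3, 10.0, 50.0
L1, L2 = 1e-3, 4e-3          # sqrt(L2/L1) = 2, i.e. an effective 1:2 ratio
k = 1.0                      # unity coupling (ideal-core limit)
M = k * math.sqrt(L1 * L2)
w = 2 * math.pi * f

# phasor mesh equations (both currents defined into the dotted terminals):
#   Vs = (Rs + jwL1) I1 + jwM I2
#   0  = jwM I1 + (jwL2 + RL) I2
A = np.array([[Rs + 1j * w * L1, 1j * w * M],
              [1j * w * M, 1j * w * L2 + RL]])
I1, I2 = np.linalg.solve(A, np.array([1.0, 0.0]))  # Vs = 1 V

V1 = 1.0 - Rs * I1   # primary terminal voltage
V2 = -RL * I2        # secondary (load) voltage
print(abs(V2 / V1))  # ≈ 2.0, i.e. sqrt(L2/L1), for k = 1
```

    With k < 1 or a nonlinear core the ratio departs from sqrt(L2/L1), which is exactly where the more complex SPICE models discussed in the paper become necessary.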

  10. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar) is selected for the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain level model specifications in order to generate software artifacts. The software artifacts generation is based on a metamodel. Each component maps to a UML structured component which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.

  11. A Transformation Model of Engineering Education

    ERIC Educational Resources Information Center

    Owens, Camelia L.; Fortenberry, Norman L.

    2007-01-01

    A transformation model of engineering education at the undergraduate level is constructed to define the human and technical resources that contribute to the production of a university-trained engineer. The theory of technical systems is applied in the development of the model to transform a graduating pre-university pupil into a university-trained…

  12. Quantum decoration transformation for spin models

    SciTech Connect

    Braz, F.F.; Rodrigues, F.C.; Souza, S.M. de; Rojas, Onofre

    2016-09-15

    The extension of decoration transformations to quantum spin models is quite relevant, since most real materials can be well described by Heisenberg-type models. Here we propose an exact quantum decoration transformation and show interesting properties such as the persistence of symmetry and symmetry breaking during the transformation. In principle, the proposed transformation cannot be used to map a quantum spin lattice model exactly onto another quantum spin lattice model, since the operators are non-commutative. However, the mapping is possible in the "classical" limit, establishing an equivalence between both quantum spin lattice models. To study the validity of this approach for quantum spin lattice models, we use the Zassenhaus formula and verify how the correction could influence the decoration transformation. This correction is of limited use for improving the quantum decoration transformation, because it involves second-nearest-neighbor and further-neighbor couplings, which makes establishing the equivalence between both lattice models a cumbersome task. The correction nevertheless gives valuable information about its contribution: for most Heisenberg-type models it could be irrelevant, at least up to the third-order term of the Zassenhaus formula. The transformation is applied to a finite-size Heisenberg chain and compared with exact numerical results; our result is consistent for weak xy-anisotropy coupling. We also apply it to the bond-alternating Ising–Heisenberg chain model, obtaining an accurate result in the limit of the quasi-Ising chain.

  13. Nonlinear, lumped parameter transformer model reduction technique

    SciTech Connect

    Degeneff, R.C.; Gutierrez, M.R.; Vakilian, M.

    1995-04-01

    Utility engineers often need nonlinear transformer models in order to investigate power system transient events. Methods exist to create accurate wideband reduced-order linear transformer models; however, to date a method of creating a reduced-order wideband nonlinear transformer model has not been presented. This paper describes a technique that starts with a detailed nonlinear transformer model used for insulation design studies and reduces its order so that it can be used conveniently in EMTP. The method is based on linearization of the core's saturable characteristic during each solution time interval. The technique uses Kron's reduction approach in each solution time interval. It can be applied to any nonlinear lumped parameter network which uses electric parameter analogies (i.e., FEM networks). This paper outlines the nonlinear reduction technique. An illustrative example is given using the transient voltage response during saturation for a 785/345/34.5 kV, YYD 500 MVA single-phase autotransformer.
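
    Kron's reduction, the core of the technique, eliminates internal nodes of a nodal admittance matrix while preserving behaviour at the retained terminals. A linear numeric sketch follows; the network values are invented, and the actual method re-applies this reduction after re-linearizing the saturable core in each time interval:

```python
import numpy as np

def kron_reduce(Y, keep):
    # eliminate all nodes not in `keep`: Yred = Yaa - Yab * inv(Ybb) * Yba
    keep = np.asarray(keep)
    elim = np.setdiff1d(np.arange(Y.shape[0]), keep)
    Yaa = Y[np.ix_(keep, keep)]
    Yab = Y[np.ix_(keep, elim)]
    Yba = Y[np.ix_(elim, keep)]
    Ybb = Y[np.ix_(elim, elim)]
    return Yaa - Yab @ np.linalg.solve(Ybb, Yba)

# invented 4-node resistive ladder with shunt conductances to ground
g = 1.0
Y = np.array([[2*g, -g,  0.0, 0.0],
              [-g,  3*g, -g,  0.0],
              [0.0, -g,  3*g, -g],
              [0.0, 0.0, -g,  2*g]])
Yred = kron_reduce(Y, keep=[0, 3])  # keep the terminals, drop internal nodes

# the reduced model reproduces the full model's terminal voltages exactly
I = np.array([1.0, 0.0, 0.0, -0.5])       # injections only at the terminals
v_full = np.linalg.solve(Y, I)[[0, 3]]
v_red = np.linalg.solve(Yred, I[[0, 3]])
assert np.allclose(v_full, v_red)
```

    The reduction is exact as long as the eliminated nodes carry no independent injections, which is why it suits internal winding nodes of a detailed transformer model.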

  14. A Model of Transformative Collaboration

    ERIC Educational Resources Information Center

    Swartz, Ann L.; Triscari, Jacqlyn S.

    2011-01-01

    Two collaborative writing partners sought to deepen their understanding of transformative learning by conducting several spirals of grounded theory research on their own collaborative relationship. Drawing from adult education, business, and social science literature and including descriptive analysis of their records of activity and interaction…

  15. Modeling Techniques for High Dependability Protocols and Architecture

    NASA Technical Reports Server (NTRS)

    LaValley, Brian; Ellis, Peter; Walter, Chris J.

    2012-01-01

    This report documents an investigation into modeling high dependability protocols and some specific challenges that were identified as a result of the experiments. The need for an approach was established and foundational concepts proposed for modeling different layers of a complex protocol and capturing the compositional properties that provide high dependability services for a system architecture. The approach centers around the definition of an architecture layer, its interfaces for composability with other layers and its bindings to a platform specific architecture model that implements the protocols required for the layer.

  16. E-Governance and Service Oriented Computing Architecture Model

    NASA Astrophysics Data System (ADS)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

    E-Governance is the effective application of information and communication technology (ICT) in government processes to accomplish safe and reliable information lifecycle management. The information lifecycle involves processes such as capturing, preserving, manipulating and delivering information. E-Governance is meant to transform governance so that it is transparent, reliable, participatory, and accountable from the citizens' point of view. The purpose of this paper is to propose an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines the information and services provided by the government, supports innovation, finds the way to optimal service delivery to citizens, and is implemented in a transparent and accountable manner. The paper also focuses on the E-government Service Manager as a key factor in a service-oriented computing model that provides a dynamically extensible structural design in which every branch can introduce innovative services. At the heart of this paper is a conceptual model that enables e-government communication for business, citizens, government and autonomous bodies.

  17. Application of the medical data warehousing architecture EPIDWARE to epidemiological follow-up: data extraction and transformation.

    PubMed

    Kerkri, E; Quantin, C; Yetongnon, K; Allaert, F A; Dusserre, L

    1999-01-01

    In this paper, we present an application of EPIDWARE, medical data warehousing architecture, to our epidemiological follow-up project. The aim of this project is to extract and regroup information from various information systems for epidemiological studies. We give a description of the requirements of the epidemiological follow-up project such as anonymity of medical data information and data file linkage procedure. We introduce the concept of Data Warehousing Architecture. The particularities of data extraction and transformation are presented and discussed.

  18. Metabotropic glutamate receptor 1 disrupts mammary acinar architecture and initiates malignant transformation of mammary epithelial cells

    PubMed Central

    Teh, Jessica L. F.; Shah, Raj; La Cava, Stephanie; Dolfi, Sonia C.; Mehta, Madhura S.; Kongara, Sameera; Price, Sandy; Ganesan, Shridar; Reuhl, Kenneth R.; Hirshfield, Kim M.

    2016-01-01

    Metabotropic glutamate receptor 1 (mGluR1/Grm1) is a member of the G-protein-coupled receptor superfamily, which was once thought to only participate in synaptic transmission and neuronal excitability, but has more recently been implicated in non-neuronal tissue functions. We previously described the oncogenic properties of Grm1 in cultured melanocytes in vitro and in spontaneous melanoma development with 100 % penetrance in vivo. Aberrant mGluR1 expression was detected in 60–80 % of human melanoma cell lines and biopsy samples. As most human cancers are of epithelial origin, we utilized immortalized mouse mammary epithelial cells (iMMECs) as a model system to study the transformative properties of Grm1. We introduced Grm1 into iMMECs and isolated several stable mGluR1-expressing clones. Phenotypic alterations in mammary acinar architecture were assessed using three-dimensional morphogenesis assays. We found that mGluR1-expressing iMMECs exhibited delayed lumen formation in association with decreased central acinar cell death, disrupted cell polarity, and a dramatic increase in the activation of the mitogen-activated protein kinase pathway. Orthotopic implantation of mGluR1-expressing iMMEC clones into mammary fat pads of immunodeficient nude mice resulted in mammary tumor formation in vivo. Persistent mGluR1 expression was required for the maintenance of the tumorigenic phenotypes in vitro and in vivo, as demonstrated by an inducible Grm1-silencing RNA system. Furthermore, mGluR1 was found be expressed in human breast cancer cell lines and breast tumor biopsies. Elevated levels of extracellular glutamate were observed in mGluR1-expressing breast cancer cell lines and concurrent treatment of MCF7 xenografts with glutamate release inhibitor, riluzole, and an AKT inhibitor led to suppression of tumor progression. Our results are likely relevant to human breast cancer, highlighting a putative role of mGluR1 in the pathophysiology of breast cancer and the potential

  19. Framework for the Parametric System Modeling of Space Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Komar, David R.; Hoffman, Jim; Olds, Aaron D.; Seal, Mike D., II

    2008-01-01

    This paper presents a methodology for performing architecture definition and assessment prior to, or during, program formulation that utilizes a centralized, integrated architecture modeling framework operated by a small, core team of general space architects. This framework, known as the Exploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), enables: 1) a significantly larger fraction of an architecture trade space to be assessed in a given study timeframe; and 2) the complex element-to-element and element-to-system relationships to be quantitatively explored earlier in the design process. Discussion of the methodology advantages and disadvantages with respect to the distributed study team approach typically used within NASA to perform architecture studies is presented along with an overview of EXAMINE s functional components and tools. An example Mars transportation system architecture model is used to demonstrate EXAMINE s capabilities in this paper. However, the framework is generally applicable for exploration architecture modeling with destinations to any celestial body in the solar system.

  20. A transformer model for winding fault studies

    SciTech Connect

    Bastard, P.; Meunier, M. . Electrical Engineering Dept.); Bertrand, P. . Protection and Control Dept.)

    1994-04-01

    This paper deals with a method of modeling internal faults in a power transformer. The method leads to a model which is entirely compatible with the EMTP software. It enables simulation of faults between any turn and the earth or between any two turns of the transformer windings. Implementation of the proposed method assumes knowledge of how to evaluate the leakage factors between the various coils of the transformer. A very simple method is proposed to evaluate these leakage factors. At last, an experimental validation of the model allows the estimation of its accuracy.

  1. Vibrational testing of trabecular bone architectures using rapid prototype models.

    PubMed

    Mc Donnell, P; Liebschner, M A K; Tawackoli, Wafa; Mc Hugh, P E

    2009-01-01

    The purpose of this study was to investigate if standard analysis of the vibrational characteristics of trabecular architectures can be used to detect changes in the mechanical properties due to progressive bone loss. A cored trabecular specimen from a human lumbar vertebra was microCT scanned and a three-dimensional, virtual model in stereolithography (STL) format was generated. Uniform bone loss was simulated using a surface erosion algorithm. Rapid prototype (RP) replicas were manufactured from these virtualised models with 0%, 16% and 42% bone loss. Vibrational behaviour of the RP replicas was evaluated by performing a dynamic compression test through a frequency range using an electro-dynamic shaker. The acceleration and dynamic force responses were recorded and fast Fourier transform (FFT) analyses were performed to determine the response spectrum. Standard resonant frequency analysis and damping factor calculations were performed. The RP replicas were subsequently tested in compression beyond failure to determine their strength and modulus. It was found that the reductions in resonant frequency with increasing bone loss corresponded well with reductions in apparent stiffness and strength. This suggests that structural dynamics has the potential to be an alternative diagnostic technique for osteoporosis, although significant challenges must be overcome to determine the effect of the skin/soft tissue interface, the cortex and variabilities associated with in vivo testing.
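
    The core measurement step, locating the resonant peak in an FFT of the dynamic response, can be sketched on a synthetic signal. The 50 Hz resonance and 2% damping ratio below are invented illustrations, not values measured from the trabecular specimens:

```python
import numpy as np

# synthetic damped response: invented 50 Hz resonance, 2% damping ratio
fs = 1000.0                        # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
f0, zeta = 50.0, 0.02
x = np.exp(-zeta * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t)

# FFT of the response; the resonant frequency is the spectral peak
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
f_peak = freqs[np.argmax(spectrum)]
print(f_peak)  # ~50 Hz, within the 1 Hz bin resolution
```

    A downward shift of this peak with simulated bone loss is precisely the effect the study uses as a surrogate for reduced stiffness and strength.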

  2. Code generator for implementing dual tree complex wavelet transform on reconfigurable architectures for mobile applications

    PubMed Central

    Canbay, Ferhat; Levent, Vecdi Emre; Serbes, Gorkem; Ugurdag, H. Fatih; Goren, Sezer

    2016-01-01

    The authors aimed to develop an application for producing different architectures to implement dual tree complex wavelet transform (DTCWT) having near shift-invariance property. To obtain a low-cost and portable solution for implementing the DTCWT in multi-channel real-time applications, various embedded-system approaches are realised. For comparison, the DTCWT was implemented in C language on a personal computer and on a PIC microcontroller. However, in the former approach portability and in the latter desired speed performance properties cannot be achieved. Hence, implementation of the DTCWT on a reconfigurable platform such as field programmable gate array, which provides portable, low-cost, low-power, and high-performance computing, is considered as the most feasible solution. At first, they used the system generator DSP design tool of Xilinx for algorithm design. However, the design implemented by using such tools is not optimised in terms of area and power. To overcome all these drawbacks mentioned above, they implemented the DTCWT algorithm by using Verilog Hardware Description Language, which has its own difficulties. To overcome these difficulties, simplify the usage of proposed algorithms and the adaptation procedures, a code generator program that can produce different architectures is proposed. PMID:27733925

  3. Code generator for implementing dual tree complex wavelet transform on reconfigurable architectures for mobile applications.

    PubMed

    Canbay, Ferhat; Levent, Vecdi Emre; Serbes, Gorkem; Ugurdag, H Fatih; Goren, Sezer; Aydin, Nizamettin

    2016-09-01

    The authors aimed to develop an application for producing different architectures to implement dual tree complex wavelet transform (DTCWT) having near shift-invariance property. To obtain a low-cost and portable solution for implementing the DTCWT in multi-channel real-time applications, various embedded-system approaches are realised. For comparison, the DTCWT was implemented in C language on a personal computer and on a PIC microcontroller. However, in the former approach portability and in the latter desired speed performance properties cannot be achieved. Hence, implementation of the DTCWT on a reconfigurable platform such as field programmable gate array, which provides portable, low-cost, low-power, and high-performance computing, is considered as the most feasible solution. At first, they used the system generator DSP design tool of Xilinx for algorithm design. However, the design implemented by using such tools is not optimised in terms of area and power. To overcome all these drawbacks mentioned above, they implemented the DTCWT algorithm by using Verilog Hardware Description Language, which has its own difficulties. To overcome these difficulties, simplify the usage of proposed algorithms and the adaptation procedures, a code generator program that can produce different architectures is proposed.

  4. 128-point memory-based architecture for a fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Chen, Chuen-Yau; Huang, Chun-Kai

    2013-02-01

    In this article, we take advantage of the merits of a one-sixteenth circle storage technique and the radix-2 and radix-2/4/8 algorithms to implement a 128-point memory-based architecture for a fast Fourier transform processor. The one-sixteenth circle storage technique reduces the size of the look-up table (LUT) for storing the twiddle factors by 50%. The combination of the radix-2 and radix-2/4/8 algorithms reduces the number of twiddle factors and gives the processor a regular architecture which is suitable for hardware implementation. This design has been synthesised with Altera Quartus II 6.0. The experimental results indicate that this design needs only 65,169 ALUTs for its look-up tables. The operating frequency is 59.76 MHz. The signal-to-noise ratios for the real and imaginary parts of the output signal are 67.72 dB and 68.55 dB, respectively.
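
    The LUT-reduction idea rests on the circular symmetry of the twiddle factors. The sketch below shows the simpler one-eighth-circle version of the trick (the one-sixteenth variant used above requires one further trigonometric identity): any twiddle factor is recovered from a table covering only 0 to π/4:

```python
import math

def build_tables(N):
    # store cos and sin only for angles 2*pi*k/N with k = 0 .. N/8 (one eighth)
    m = N // 8
    cos_t = [math.cos(2 * math.pi * k / N) for k in range(m + 1)]
    sin_t = [math.sin(2 * math.pi * k / N) for k in range(m + 1)]
    return cos_t, sin_t

def twiddle(k, N, cos_t, sin_t):
    # (cos, sin) of 2*pi*k/N recovered from the one-eighth-circle tables
    k %= N
    if k > N // 2:                    # lower half circle: mirror, negate sin
        c, s = twiddle(N - k, N, cos_t, sin_t)
        return c, -s
    if k > N // 4:                    # second quadrant: mirror about pi/2
        c, s = twiddle(N // 2 - k, N, cos_t, sin_t)
        return -c, s
    if k > N // 8:                    # second octant: swap sin/cos about pi/4
        c, s = twiddle(N // 4 - k, N, cos_t, sin_t)
        return s, c
    return cos_t[k], sin_t[k]

N = 64
cos_t, sin_t = build_tables(N)
for k in range(N):
    c, s = twiddle(k, N, cos_t, sin_t)
    assert abs(c - math.cos(2 * math.pi * k / N)) < 1e-12
    assert abs(s - math.sin(2 * math.pi * k / N)) < 1e-12
```

    In hardware the mirror-and-swap cases become cheap index and sign manipulations, which is how the stored table shrinks without any loss of precision.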

  5. Direct model extraction of RFCMOS spiral transformers

    NASA Astrophysics Data System (ADS)

    Pan, Jie; Yang, Hai-Gang

    2010-11-01

    In a spiral transformer, couplings between the coils are interlaced and correlative, and are difficult to extract independently from limited network parameters. In this article, we present a method for directly extracting the model parameters, including mutual inductances and port-to-port capacitances, one by one. In the method, by leaving unmeasured ports short-circuited or open-circuited on the wafer, we transform a 4-port transformer into four 2-port networks to obtain adequate measurement data, enabling us to extract all the '2-π'-like model parameters independently. We apply this method to the modelling of a 5:5-turn spiral transformer fabricated in 0.18 μm CMOS technology. Finally, comparisons between electromagnetic (EM)-simulated, measured, and model-simulated results demonstrate that our method is accurate and reliable.

  6. Model based analysis of piezoelectric transformers.

    PubMed

    Hemsel, T; Priya, S

    2006-12-22

    Piezoelectric transformers are becoming increasingly popular in electrical devices owing to several advantages such as small size, high efficiency, absence of electromagnetic noise, and non-flammability. In addition to conventional applications such as ballasts for back light inverters in notebook computers, camera flashes, and fuel ignition, several new applications have emerged, such as AC/DC converters, battery chargers, and automobile lighting. These new applications demand high power density and a wide range of voltage gain. Currently, transformer power density is limited to 40 W/cm³, obtained at low voltage gain. The purpose of this study was to investigate a transformer design that has the potential of providing higher power density and a wider range of voltage gain. The new transformer design utilizes the radial mode at both the input and output ports and has unidirectional polarization in the ceramics. This design was found to provide 30 W of power with an efficiency of 98% and a 30 °C temperature rise above room temperature. An electro-mechanical equivalent circuit model was developed to describe the characteristics of the piezoelectric transformer. The model was found to successfully predict the characteristics of the transformer, with excellent matching between computed and experimental results. The results of this study will allow deterministic design of unipoled piezoelectric transformers with specified performance. It is expected that in the near future the unipoled transformer will gain significant importance in various electrical components.

  7. Novel Fourier transform infrared spectrometer architecture based on cascaded Fabry-Perot interferometers

    NASA Astrophysics Data System (ADS)

    Eltagoury, Yomna M.; Sabry, Yasser M.; Khalil, Diaa A.

    2016-03-01

    In this work, we present a novel architecture for Fourier transform spectrometers based on cascaded low-finesse Fabry-Perot (FP) interferometers. One of the interferometers has a fixed path length, while the second is scanned using a relatively large-stroke electrostatic comb-drive actuator. The fixed interferometer modulates the spectrum and, hence, shifts the interferogram away from the point of zero spacing between the two mirrors. The shifted interferogram can then be processed with the Fourier transform algorithm to obtain the spectrum of the measured light. This cascaded FP configuration results in a simple arrangement of mirrors on a line, which makes it highly tolerant to misalignment errors. The proposed configuration is implemented using MEMS DRIE technology on an SOI wafer with a simple MEMS process flow, without metallization or dielectric coating of the vertical optical surfaces. The fabricated compact structure is characterised with both a narrow-spectrum laser source at 1550 nm and a wide-spectrum source composed of an SLED and the ASE of a semiconductor optical amplifier. The obtained results validate the concept of the new configuration.

  8. Transformation as a Design Process and Runtime Architecture for High Integrity Software

    SciTech Connect

    Bespalko, S.J.; Winter, V.L.

    1999-04-05

    We have discussed two aspects of creating high integrity software that greatly benefit from the availability of transformation technology, which in this case is manifest in the requirement for a sophisticated backtracking parser. First, because of the potential for correctly manipulating programs via small changes, an automated non-procedural transformation system can be a valuable tool for constructing high assurance software. Second, modeling the process of translating data into information as a (perhaps context-dependent) grammar leads to an efficient, compact implementation. From a practical perspective, the transformation process should begin in the domain language in which a problem is initially expressed; thus, for a transformation system to be practical, it must be flexible with respect to domain-specific languages. We have argued that transformation applied to a specification results in a highly reliable system. We also briefly demonstrated that transformation technology applied to the runtime environment will result in a safe and secure system. We thus believe that sophisticated multi-lookahead backtracking parsing technology is central to demonstrating the existence of high integrity software (HIS).

  9. Model based design introduction: modeling game controllers to microprocessor architectures

    NASA Astrophysics Data System (ADS)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem one step at a time; the approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded, the digital control algorithm can be simulated with the real world sensor data, and the output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development: the Agile goal is to develop working software in incremental steps, with progress measured in completed and tested code units; in model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.

  10. Demand Activated Manufacturing Architecture (DAMA) model for supply chain collaboration

    SciTech Connect

    CHAPMAN,LEON D.; PETERSEN,MARJORIE B.

    2000-03-13

    The Demand Activated Manufacturing Architecture (DAMA) project during the last five years of work with the U.S. Integrated Textile Complex (retail, apparel, textile, and fiber sectors) has developed an inter-enterprise architecture and collaborative model for supply chains. This model will enable improved collaborative business across any supply chain. The DAMA Model for Supply Chain Collaboration is a high-level model for collaboration to achieve Demand Activated Manufacturing. The five major elements of the architecture to support collaboration are (1) activity or process, (2) information, (3) application, (4) data, and (5) infrastructure. These five elements are tied to the application of the DAMA architecture to three phases of collaboration - prepare, pilot, and scale. There are six collaborative activities that may be employed in this model: (1) Develop Business Planning Agreements, (2) Define Products, (3) Forecast and Plan Capacity Commitments, (4) Schedule Product and Product Delivery, (5) Expedite Production and Delivery Exceptions, and (6) Populate Supply Chain Utility. The Supply Chain Utility is a set of applications implemented to support collaborative product definition, forecast visibility, planning, scheduling, and execution. The DAMA architecture and model will be presented along with the process for implementing this DAMA model.

  11. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    NASA Astrophysics Data System (ADS)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
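
    The consensus step at the heart of RANSAC is independent of the model being fitted. A minimal sketch for a line model (iteration count and tolerance are illustrative; the paper fits the four projective submodels in hardware):

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Generic RANSAC sketch: fit y = a*x + b to 2-D points with outliers.
    Repeatedly fits a minimal sample and keeps the model with most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue                                  # degenerate pair
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Points on y = 2x + 1 plus two gross outliers (false matches).
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5)]
(a, b), inl = ransac_line(pts, tol=0.5)
```

    The hardware challenge the paper addresses is exactly this loop, model fitting plus the inlier test, with the heavier arithmetic of projective mappings.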

  12. Bayesian Transformation Models for Multivariate Survival Data

    PubMed Central

    DE CASTRO, MÁRIO; CHEN, MING-HUI; IBRAHIM, JOSEPH G.; KLEIN, JOHN P.

    2014-01-01

    In this paper we propose a general class of gamma frailty transformation models for multivariate survival data. The transformation class includes the commonly used proportional hazards and proportional odds models. The proposed class also includes a family of cure rate models. Under an improper prior for the parameters, we establish propriety of the posterior distribution. A novel Gibbs sampling algorithm is developed for sampling from the observed data posterior distribution. A simulation study is conducted to examine the properties of the proposed methodology. An application to a data set from a cord blood transplantation study is also reported. PMID:24904194

  13. Conformal map transformations for meteorological modelers

    NASA Astrophysics Data System (ADS)

    Taylor, Albion D.

    1997-02-01

    This paper describes a utility function library that meteorological computer modelers can incorporate in their programs to provide the mathematical transformations of conformal maps that their models may need. In addition to coordinate transformations, routines supply projection-dependent terms of the governing equations, wind component conversions, and rotation axis orientation components. The routines seamlessly handle the transitions from Polar Stereographic through Lambert Conformal to Mercator projections. Initialization routines allow concurrent handling of multiple projections and provide a simple method of defining computational model grids to the software.
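
    As an illustration of the kind of coordinate transformation such a library provides, here is the textbook spherical north-polar stereographic mapping (an assumed standard formula, not the library's actual routines):

```python
import math

def polar_stereographic(lat_deg, lon_deg, lon0_deg=0.0, R=6371.0):
    """North-polar stereographic projection of a spherical earth (km).
    Textbook form: rho = 2 * R * tan(pi/4 - lat/2)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg - lon0_deg)
    rho = 2.0 * R * math.tan(math.pi / 4.0 - lat / 2.0)
    return rho * math.sin(lon), -rho * math.cos(lon)

x_pole, y_pole = polar_stereographic(90.0, 0.0)   # pole maps to the origin
x_eq, y_eq = polar_stereographic(0.0, 0.0)        # equator lies at distance 2R
```

    A library of the kind described wraps such mappings together with their inverses, map-factor terms, and wind rotation for each supported projection.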

  14. Non-linear transformer modeling and simulation

    SciTech Connect

    Archer, W.E.; Deveney, M.F.; Nagel, R.L.

    1994-08-01

    Transformer models for simulation with PSpice and Analogy's Saber are being developed using experimental B-H loop and network analyzer measurements. The models are evaluated for accuracy and convergence using several test circuits. Results are presented which demonstrate the effects on circuit performance of magnetic core losses, eddy currents, and mechanical stress on the magnetic cores.

  15. Modeling and testing of ethernet transformers

    NASA Astrophysics Data System (ADS)

    Bowen, David

    2011-12-01

    Twisted-pair Ethernet is now the standard home and office last-mile network technology. For decades, the IEEE standard that defines Ethernet has required electrical isolation between the twisted pair cable and the Ethernet device. So, for decades, every Ethernet interface has used magnetic core Ethernet transformers to isolate Ethernet devices and keep users safe in the event of a potentially dangerous fault on the network media. The current state-of-the-art Ethernet transformers are miniature (<5mm diameter) ferrite-core toroids wrapped with approximately 10 to 30 turns of wire. As small as current Ethernet transformers are, they still limit further Ethernet device miniaturization and require a separate bulky package or jack housing. New coupler designs must be explored which are capable of exceptional miniaturization or on-chip fabrication. This dissertation thoroughly explores the performance of the current commercial Ethernet transformers to both increase understanding of the device's behavior and outline performance parameters for replacement devices. Lumped element and distributed circuit models are derived; testing schemes are developed and used to extract model parameters from commercial Ethernet devices. Transfer relation measurements of the commercial Ethernet transformers are compared against the model's behavior and it is found that the tuned, distributed models produce the best transfer relation match to the measured data. Process descriptions and testing results on fabricated thin-film dielectric-core toroid transformers are presented. The best results were found for a 32-turn transformer loaded with 100Ω, the impedance of twisted pair cable. This transformer gave a flat response from about 10MHz to 40MHz with a height of approximately 0.45. For the fabricated transformer structures, theoretical methods to determine resistance, capacitance and inductance are presented. A special analytical and numerical analysis of the fabricated transformer

  16. Optimizing transformations of stencil operations for parallel cache-based architectures

    SciTech Connect

    Bassetti, F.; Davis, K.

    1999-06-28

    This paper describes a new technique for optimizing serial and parallel stencil- and stencil-like operations for cache-based architectures. The technique takes advantage of the semantic knowledge implicit in stencil-like computations. It is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by discretization of the Poisson equation, applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for 1-D tiling on a single processor, and in parallel using a 1-D data partitioning; for the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case; however, the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
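
    The transformed kernel can be sketched as a 5-point Jacobi sweep with 1-D tiling of the inner loop (a pure-Python illustration; the tile size and loop order are assumptions, chosen so that each tile of columns stays cache-resident in the real code):

```python
def jacobi_step(u, tile=2):
    """One 5-point Jacobi sweep over the interior of a 2-D grid, with the
    column loop tiled (1-D tiling) in the spirit of the transformation above."""
    n, m = len(u), len(u[0])
    v = [row[:] for row in u]
    for j0 in range(1, m - 1, tile):              # 1-D tiling over columns
        for i in range(1, n - 1):                 # rows revisit a hot tile
            for j in range(j0, min(j0 + tile, m - 1)):
                v[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                  + u[i][j-1] + u[i][j+1])
    return v

# A point source spreads to its 4 neighbours in one sweep.
u0 = [[0.0] * 5 for _ in range(5)]
u0[2][2] = 4.0
v = jacobi_step(u0)
```

    The tiled and untiled sweeps compute identical values; only the traversal order, and hence the cache behaviour, changes.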

  17. Transformation model selection by multiple hypotheses testing

    NASA Astrophysics Data System (ADS)

    Lehmann, Rüdiger

    2014-12-01

    Transformations between different geodetic reference frames are often performed such that the transformation parameters are first determined from control points. If we do not know in the first place which of the numerous transformation models is appropriate, then we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance to the case that constraints on such parameters are also tested. This provides more flexibility when setting up such a test: one can formulate a general model with a maximum number of transformation parameters and specialize it by adding constraints on those parameters which need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier. These are shown to perform better than the more intuitive test statistics derived from misclosures. It is shown how model selection by multiple hypotheses testing relates to the use of information criteria like AICc and Mallows' Cp, which are based on an information theoretic approach. Nevertheless, whenever comparable, the results of an exemplary computation almost coincide.
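
    For context, the first step, estimating transformation parameters from control points, can be sketched for a 4-parameter 2-D Helmert (similarity) transform solved by least squares (illustrative only; the paper's contribution is the subsequent multiple hypotheses test over such candidate models):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_similarity_2d(src, dst):
    """Least-squares 4-parameter 2-D similarity (Helmert) transform:
    u = a*x - b*y + tx,  v = b*x + a*y + ty."""
    A, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, -y, 1.0, 0.0]); rhs.append(u)
        A.append([y,  x, 0.0, 1.0]); rhs.append(v)
    At = list(zip(*A))                 # normal equations: (A^T A) p = A^T rhs
    N = [[sum(p * q for p, q in zip(r1, r2)) for r2 in At] for r1 in At]
    t = [sum(p * q for p, q in zip(r1, rhs)) for r1 in At]
    return solve(N, t)

# Control points related by a 90-degree rotation, scale 2, translation (1, -1).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(1.0, -1.0), (1.0, 1.0), (-1.0, -1.0)]
a, b, tx, ty = fit_similarity_2d(src, dst)
```

    Constraining parameters of a more general model (e.g. fixing scale or rotation terms) yields the nested candidates among which the multiple test selects.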

  18. Extending enterprise architecture modelling with business goals and requirements

    NASA Astrophysics Data System (ADS)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten

    2011-02-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.

  19. Modeling and Verification of Dependable Electronic Power System Architecture

    NASA Astrophysics Data System (ADS)

    Yuan, Ling; Fan, Ping; Zhang, Xiao-fang

    The electronic power system can be viewed as a system composed of a set of concurrently interacting subsystems that generate, transmit, and distribute electric power. The complex interaction among these subsystems makes the design of an electronic power system complicated. Furthermore, in order to guarantee safe generation and distribution of electric power, fault tolerant mechanisms are incorporated in the system design to satisfy high reliability requirements; this incorporation makes the design of such a system more complicated still. We propose a dependable electronic power system architecture, which provides a generic framework to guide the development of electronic power systems and ease development complexity. In order to provide common idioms and patterns to the system designers, we formally model the electronic power system architecture using the PVS formal language. Based on the PVS model of this system architecture, we formally verify the fault tolerant properties of the system architecture by using the PVS theorem prover, which can guarantee that the system architecture satisfies the high reliability requirements.

  20. A 3D forward stratigraphic model of fluvial meander-bend evolution for prediction of point-bar lithofacies architecture

    NASA Astrophysics Data System (ADS)

    Yan, Na; Mountney, Nigel P.; Colombera, Luca; Dorrell, Robert M.

    2017-08-01

    Although fundamental types of fluvial meander-bend transformations - expansion, translation, rotation, and combinations thereof - are widely recognised, the relationship between the migratory behaviour of a meander bend, and its resultant accumulated sedimentary architecture and lithofacies distribution remains relatively poorly understood. Three-dimensional data from both currently active fluvial systems and from ancient preserved successions known from outcrop and subsurface settings are limited. To tackle this problem, a 3D numerical forward stratigraphic model - the Point-Bar Sedimentary Architecture Numerical Deduction (PB-SAND) - has been devised as a tool for the reconstruction and prediction of the complex spatio-temporal migratory evolution of fluvial meanders, their generated bar forms and the associated lithofacies distributions that accumulate as heterogeneous fluvial successions. PB-SAND uses a dominantly geometric modelling approach supplemented by process-based and stochastic model components, and is constrained by quantified sedimentological data derived from modern point bars or ancient successions that represent suitable analogues. The model predicts the internal architecture and geometry of fluvial point-bar elements in three dimensions. The model is applied to predict the sedimentary lithofacies architecture of ancient preserved point-bar and counter-point-bar deposits of the middle Jurassic Scalby Formation (North Yorkshire, UK) to demonstrate the predictive capabilities of PB-SAND in modelling 3D architectures of different types of meander-bend transformations. PB-SAND serves as a practical tool with which to predict heterogeneity in subsurface hydrocarbon reservoirs and water aquifers.

  1. Human Spaceflight Architecture Model (HSFAM) Data Dictionary

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2016-01-01

    HSFAM is a data model based on the DoDAF 2.02 data model with some for-purpose extensions. These extensions are designed to permit quantitative analyses of stakeholder concerns about technical feasibility, configuration and interface issues, and budgetary and/or economic viability.

  3. RT 24 - Architecture, Modeling & Simulation, and Software Design

    DTIC Science & Technology

    2010-11-01

    Focus on tool extensions (UPDM, SysML, SoaML, BPMN); leverage "best of breed" architecture methodologies; provide tooling to support the methodology (DoDAF capability). Examples cover the BPMN metamodel, the DoDAF 2.0 metamodel, and the mapping of SysML to DoDAF 2.0 (DoDAF V2.0 OV-2 models and SysML requirement diagrams).

  4. Improving Project Management Using Formal Models and Architectures

    NASA Technical Reports Server (NTRS)

    Kahn, Theodore; Sturken, Ian

    2011-01-01

    This talk discusses the advantages formal modeling and architecture brings to project management. These emerging technologies have both great potential and challenges for improving information available for decision-making. The presentation covers standards, tools and cultural issues needing consideration, and includes lessons learned from projects the presenters have worked on.

  5. Modeling of Euclidean braided fiber architectures to optimize composite properties

    NASA Technical Reports Server (NTRS)

    Armstrong-Carroll, E.; Pastore, C.; Ko, F. K.

    1992-01-01

    Three-dimensional braided fiber reinforcements are a very effective toughening mechanism for composite materials. The integral yarn path inherent to this fiber architecture allows effective multidirectional dispersion of strain energy and negates delamination problems. In this paper a geometric model of Euclidean braid fiber architectures is presented and used to determine the degree of geometric isotropy in the braids. This information, when combined with candidate material properties, can be used to quickly generate an estimate of the available load-carrying capacity of Euclidean braids at any arbitrary angle.

  6. Modeling Virtual Organization Architecture with the Virtual Organization Breeding Methodology

    NASA Astrophysics Data System (ADS)

    Paszkiewicz, Zbigniew; Picard, Willy

    While Enterprise Architecture Modeling (EAM) methodologies become more and more popular, an EAM methodology tailored to the needs of virtual organizations (VO) is still to be developed. Among the most popular EAM methodologies, TOGAF has been chosen as the basis for a new EAM methodology taking into account the characteristics of VOs presented in this paper. In this new methodology, referred to as the Virtual Organization Breeding Methodology (VOBM), concepts developed within the ECOLEAD project, e.g. the concept of a Virtual Breeding Environment (VBE) or the VO creation schema, serve as fundamental elements. VOBM is a generic methodology that should be adapted to a given VBE. VOBM defines the structure of VBE and VO architectures in a service-oriented environment, as well as an architecture development method for virtual organizations (ADM4VO). Finally, a preliminary set of tools and methods for VOBM is given in this paper.

  7. Three dimensional cultures: a tool to study normal acinar architecture vs. malignant transformation of breast cells.

    PubMed

    Pal, Anupama; Kleer, Celina G

    2014-04-25

    Invasive breast carcinomas are a group of malignant epithelial tumors characterized by the invasion of adjacent tissues and propensity to metastasize. The interplay of signals between cancer cells and their microenvironment exerts a powerful influence on breast cancer growth and biological behavior(1). However, most of these signals from the extracellular matrix are lost or their relevance is understudied when cells are grown in two dimensional culture (2D) as a monolayer. In recent years, three dimensional (3D) culture on a reconstituted basement membrane has emerged as a method of choice to recapitulate the tissue architecture of benign and malignant breast cells. Cells grown in 3D retain the important cues from the extracellular matrix and provide a physiologically relevant ex vivo system(2,3). Of note, there is growing evidence suggesting that cells behave differently when grown in 3D as compared to 2D(4). 3D culture can be effectively used as a means to differentiate the malignant phenotype from the benign breast phenotype and for underpinning the cellular and molecular signaling involved(3). One of the distinguishing characteristics of benign epithelial cells is that they are polarized so that the apical cytoplasm is towards the lumen and the basal cytoplasm rests on the basement membrane. This apico-basal polarity is lost in invasive breast carcinomas, which are characterized by cellular disorganization and formation of anastomosing and branching tubules that haphazardly infiltrate the surrounding stroma. These histopathological differences between benign gland and invasive carcinoma can be reproduced in 3D(6,7). 
Using the appropriate read-outs like the quantitation of single round acinar structures, or differential expression of validated molecular markers for cell proliferation, polarity and apoptosis in combination with other molecular and cell biology techniques, 3D culture can provide an important tool to better understand the cellular changes during

  8. Empirical Memory-Access Cost Models in Multicore NUMA Architectures

    SciTech Connect

    McCormick, Patrick S.; Braithwaite, Ryan Karl; Feng, Wu-chun

    2011-01-01

    Data location is of prime importance when scheduling tasks in a non-uniform memory access (NUMA) architecture. The characteristics of the NUMA architecture must be understood so tasks can be scheduled onto processors that are close to the task's data. However, in modern NUMA architectures, such as AMD Magny-Cours and Intel Nehalem, there may be a relatively large number of memory controllers with sockets that are connected in a non-intuitive manner, leading to performance degradation due to uninformed task-scheduling decisions. In this paper, we provide a method for experimentally characterizing memory-access costs for modern NUMA architectures via memory latency and bandwidth microbenchmarks. Using the results of these benchmarks, we propose a memory-access cost model to improve task-scheduling decisions by scheduling tasks near the data they need. Simple task-scheduling experiments using the memory-access cost models validate the use of empirical memory-access cost models to significantly improve program performance.
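
    A cost-model-driven scheduling decision can be sketched as a lookup in an empirically measured node-to-node cost matrix (all numbers below are hypothetical, standing in for the microbenchmark results the paper describes):

```python
# Hypothetical cost matrix (ns per access) from latency microbenchmarks:
# cost[i][j] = cost for a task running on node i touching memory on node j.
cost = [
    [ 80, 120, 160, 160],
    [120,  80, 160, 160],
    [160, 160,  80, 120],
    [160, 160, 120,  80],
]

def pick_node(data_node, candidates, cost):
    """Schedule a task onto the candidate node with the cheapest access
    to the node holding its data (illustrative greedy policy)."""
    return min(candidates, key=lambda n: cost[n][data_node])

choice = pick_node(2, [0, 1, 3], cost)   # data on node 2; node 2 itself is busy
```

    In a real scheduler the matrix would be populated at startup by the latency and bandwidth microbenchmarks and consulted on every task placement.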

  9. A Model-Driven Architecture Approach for Modeling, Specifying and Deploying Policies in Autonomous and Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel

    2006-01-01

    Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.

  10. Superstatistical model of bacterial DNA architecture

    PubMed Central

    Bogachev, Mikhail I.; Markelov, Oleg A.; Kayumov, Airat R.; Bunde, Armin

    2017-01-01

    Understanding the physical principles that govern the complex DNA structural organization as well as its mechanical and thermodynamical properties is essential for the advancement in both life sciences and genetic engineering. Recently we have discovered that the complex DNA organization is explicitly reflected in the arrangement of nucleotides depicted by the universal power law tailed internucleotide interval distribution that is valid for complete genomes of various prokaryotic and eukaryotic organisms. Here we suggest a superstatistical model that represents a long DNA molecule by a series of consecutive ~150 bp DNA segments with the alternation of the local nucleotide composition between segments exhibiting long-range correlations. We show that the superstatistical model and the corresponding DNA generation algorithm explicitly reproduce the laws governing the empirical nucleotide arrangement properties of the DNA sequences for various global GC contents and optimal living temperatures. Finally, we discuss the relevance of our model in terms of the DNA mechanical properties. As an outlook, we focus on finding the DNA sequences that encode a given protein while simultaneously reproducing the nucleotide arrangement laws observed from empirical genomes, that may be of interest in the optimization of genetic engineering of long DNA molecules. PMID:28225058
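
    The segment-based generation idea can be sketched as follows (the ~150 bp segment length is from the abstract; the GC levels, the simple deterministic alternation rule, and the seed are assumptions for illustration, and no long-range correlation structure is imposed):

```python
import random

def generate_dna(n_segments, seg_len=150, gc_levels=(0.3, 0.7), seed=1):
    """Toy generator in the spirit of the superstatistical model: a chain of
    ~150 bp segments whose local GC content alternates between segments."""
    rng = random.Random(seed)
    seq = []
    for i in range(n_segments):
        gc = gc_levels[i % len(gc_levels)]       # alternate local composition
        for _ in range(seg_len):
            pool = "GC" if rng.random() < gc else "AT"
            seq.append(rng.choice(pool))
    return "".join(seq)

s = generate_dna(10)
```

    The authors' algorithm additionally makes the sequence of segment compositions long-range correlated, which this toy omits.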

  11. Superstatistical model of bacterial DNA architecture

    NASA Astrophysics Data System (ADS)

    Bogachev, Mikhail I.; Markelov, Oleg A.; Kayumov, Airat R.; Bunde, Armin

    2017-02-01

    Understanding the physical principles that govern the complex DNA structural organization as well as its mechanical and thermodynamical properties is essential for the advancement in both life sciences and genetic engineering. Recently we have discovered that the complex DNA organization is explicitly reflected in the arrangement of nucleotides depicted by the universal power law tailed internucleotide interval distribution that is valid for complete genomes of various prokaryotic and eukaryotic organisms. Here we suggest a superstatistical model that represents a long DNA molecule by a series of consecutive ~150 bp DNA segments with the alternation of the local nucleotide composition between segments exhibiting long-range correlations. We show that the superstatistical model and the corresponding DNA generation algorithm explicitly reproduce the laws governing the empirical nucleotide arrangement properties of the DNA sequences for various global GC contents and optimal living temperatures. Finally, we discuss the relevance of our model in terms of the DNA mechanical properties. As an outlook, we focus on finding the DNA sequences that encode a given protein while simultaneously reproducing the nucleotide arrangement laws observed from empirical genomes, that may be of interest in the optimization of genetic engineering of long DNA molecules.

  12. System Architecture Modeling for Technology Portfolio Management using ATLAS

    NASA Technical Reports Server (NTRS)

    Thompson, Robert W.; O'Neil, Daniel A.

    2006-01-01

    Strategic planners and technology portfolio managers have traditionally relied on consensus-based tools, such as Analytical Hierarchy Process (AHP) and Quality Function Deployment (QFD) in planning the funding of technology development. While useful to a certain extent, these tools are limited in the ability to fully quantify the impact of a technology choice on system mass, system reliability, project schedule, and lifecycle cost. The Advanced Technology Lifecycle Analysis System (ATLAS) aims to provide strategic planners a decision support tool for analyzing technology selections within a Space Exploration Architecture (SEA). Using ATLAS, strategic planners can select physics-based system models from a library, configure the systems with technologies and performance parameters, and plan the deployment of a SEA. Key parameters for current and future technologies have been collected from subject-matter experts and other documented sources in the Technology Tool Box (TTB). ATLAS can be used to compare the technical feasibility and economic viability of a set of technology choices for one SEA, and compare it against another set of technology choices or another SEA. System architecture modeling in ATLAS is a multi-step process. First, the modeler defines the system level requirements. Second, the modeler identifies technologies of interest whose impact on the SEA is to be assessed. Third, the system modeling team creates models of architecture elements (e.g. launch vehicles, in-space transfer vehicles, crew vehicles) if they are not already in the model library. Finally, the architecture modeler develops a script for the ATLAS tool to run, and the results for comparison are generated.
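
The compare-two-technology-sets workflow can be caricatured in a few lines. All element names and mass/cost factors below are invented for illustration; they are not ATLAS models or TTB data.

```python
# Hypothetical architecture elements with baseline mass and cost figures.
elements = {"launch_vehicle": {"mass_kg": 120_000, "cost_M": 450},
            "transfer_stage": {"mass_kg": 30_000, "cost_M": 200}}

def apply_technology(elements, tech):
    """tech maps element name -> (mass_factor, cost_factor)."""
    out = {}
    for name, e in elements.items():
        mf, cf = tech.get(name, (1.0, 1.0))
        out[name] = {"mass_kg": e["mass_kg"] * mf, "cost_M": e["cost_M"] * cf}
    return out

def totals(elements):
    return (sum(e["mass_kg"] for e in elements.values()),
            sum(e["cost_M"] for e in elements.values()))

baseline = totals(apply_technology(elements, {}))
# Advanced technology: a lighter but more expensive transfer stage.
advanced = totals(apply_technology(elements, {"transfer_stage": (0.8, 1.1)}))
```

A real tool like ATLAS replaces the scalar factors with physics-based system models, but the comparison pattern is the same.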

  13. A modeling process to understand complex system architectures

    NASA Astrophysics Data System (ADS)

    Robinson, Santiago Balestrini

    2009-12-01

    In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign level analysis tools, oftentimes characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that the analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypothesis, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) that the capability can be defined as a cycle of functions, and (2) that it must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named the Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two
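
The random-functional-network idea can be sketched directly from the two assumptions. This is an illustrative reading of the DiMA concept, not the thesis' algorithm: a capability is a cycle of functions, and each directed edge exists with a given probability; Monte Carlo sampling then scores how often the cycle can be completed.

```python
import random

def capability_score(arch, cycle, trials=2000, seed=0):
    """Estimate how often the functional cycle can be completed, given
    arch[(a, b)] = probability that a function-based link a -> b exists."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Sample one random functional network; check every edge of the cycle.
        if all(rng.random() < arch.get(edge, 0.0)
               for edge in zip(cycle, cycle[1:] + cycle[:1])):
            hits += 1
    return hits / trials

cycle = ["sense", "decide", "act"]   # a capability as a cycle of functions
arch_a = {("sense", "decide"): 0.9, ("decide", "act"): 0.9, ("act", "sense"): 0.9}
arch_b = {("sense", "decide"): 0.9, ("decide", "act"): 0.5, ("act", "sense"): 0.9}
score_a = capability_score(arch_a, cycle)
score_b = capability_score(arch_b, cycle)
```

Two architectures can then be ranked by these scores without running a constructive simulation.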

  14. Space station architectural elements model study

    NASA Technical Reports Server (NTRS)

    Taylor, T. C.; Spencer, J. S.; Rocha, C. J.; Kahn, E.; Cliffton, E.; Carr, C.

    1987-01-01

    The worksphere, a user controlled computer workstation enclosure, was expanded in scope to an engineering workstation suitable for use on the Space Station as a crewmember desk in orbit. The concept was also explored as a module control station capable of enclosing enough equipment to control the station from each module. The concept has commercial potential for the Space Station and surface workstation applications. The central triangular beam interior configuration was expanded and refined to seven different beam configurations. These included triangular on center, triangular off center, square, hexagonal small, hexagonal medium, hexagonal large and the H beam. Each was explored with some considerations as to the utilities and a suggested evaluation factor methodology was presented. Scale models of each concept were made. The models were helpful in researching the seven beam configurations and determining the negative residual (unused) volume of each configuration. A flexible hardware evaluation factor concept is proposed which could be helpful in evaluating interior space volumes from a human factors point of view. A magnetic version with all the graphics is available from the author or the technical monitor.

  15. Model-based service-oriented architectures for Internetworked Enterprises

    NASA Astrophysics Data System (ADS)

    Bianchini, Devis; Brambilla, Marco; Campi, Alessandro; Cappiello, Cinzia; Ceri, Stefano; Comuzzi, Marco; de Antonellis, Valeria; Pernici, Barbara; Plebani, Pierluigi

    Service-oriented architectures (SOA) provide the basis to (re)design business processes in order to develop flexible applications where available services are dynamically composed to satisfy business goals. The adoption of this type of architecture enables the design of information systems that connect IEs to each other to run collaborative business processes. In fact, organizations can design service-based processes based either on simple internal applications or on external services. This chapter provides models and methods for the design and execution of service-based processes able to exploit all the services offered in an IEs registry. This service registry contains services that need to be defined with the same granularity and described via the same functional and non-functional models. The alignment in process and service design and modeling is discussed in this chapter, to enable the adoption of efficient techniques for service sharing, discovery and invocation.

  16. Transformation of standardized clinical models based on OWL technologies: from CEM to OpenEHR archetypes.

    PubMed

    Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui

    2015-05-01

    The semantic interoperability of electronic healthcare records (EHRs) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems.
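
The template-based transformation idea can be sketched without OWL machinery. The entry kinds and output fields below are simplified stand-ins for CEM and openEHR structures, invented for illustration; the real approach maps between OWL representations of the two information models.

```python
# Toy transformation in the spirit of template-based model mapping:
# CEM-like entries are rewritten into openEHR-archetype-like nodes.
TEMPLATES = {
    "statement": lambda e: {"rm_type": "OBSERVATION", "node_id": e["key"]},
    "data":      lambda e: {"rm_type": "ELEMENT", "node_id": e["key"],
                            "value_type": e.get("type", "DV_TEXT")},
}

def transform(cem_entries):
    out = []
    for entry in cem_entries:
        template = TEMPLATES.get(entry["kind"])
        if template is None:
            raise ValueError(f"no transformation template for {entry['kind']!r}")
        out.append(template(entry))
    return out

archetype = transform([
    {"kind": "statement", "key": "blood_pressure"},
    {"kind": "data", "key": "systolic", "type": "DV_QUANTITY"},
])
```

Keeping the templates in a lookup table is what makes the approach reusable: extending it to another clinical model means adding templates, not rewriting the engine.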

  17. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    SciTech Connect

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali; Vishwanath, Venkatram; Kumaran, Kalyan

    2017-01-01

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture code-named Knights Landing (KNL) for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.
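
An analytical hardware model in its simplest form is a roofline-style bound: a kernel is limited either by compute throughput or by memory traffic. The sketch below illustrates that idea only; the peak and bandwidth numbers are made up, and the SKOPE/KNL model in the paper is considerably more detailed.

```python
def predicted_time(flops, bytes_moved, peak_gflops, bw_gbs):
    """Roofline-style analytical estimate: execution time is bounded by
    the slower of the compute and memory subsystems."""
    compute_s = flops / (peak_gflops * 1e9)
    memory_s = bytes_moved / (bw_gbs * 1e9)
    return max(compute_s, memory_s)

# Hypothetical streaming kernel: 2 flops and 24 bytes moved per element.
n = 10_000_000
t = predicted_time(flops=2 * n, bytes_moved=24 * n,
                   peak_gflops=3000, bw_gbs=400)   # illustrative machine numbers
```

Because the memory term dominates here, the model would recommend data-layout or blocking transformations before any compute tuning, which is the kind of guidance the abstract describes.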

  18. Modelling parallel programs and multiprocessor architectures with AXE

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Fineman, Charles E.

    1991-01-01

    AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user-interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players. Their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen. These include CPU and message routing bottlenecks, and the dynamic status of the software.

  19. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    NASA Astrophysics Data System (ADS)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
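
The core of the transformation is a projection: from the global view of the interactions, keep only the interactions one enterprise participates in, and mark each as a send or a receive from its point of view. The sketch below is a minimal illustration of that idea, not the paper's MDA transformation; the message and partner names are invented.

```python
collaborative = [
    # (sender, receiver, message) -- the global view of the collaboration
    ("Buyer", "Seller", "PurchaseOrder"),
    ("Seller", "Buyer", "OrderResponse"),
    ("Seller", "Carrier", "ShippingRequest"),
]

def interface_process(model, enterprise):
    """Derive one partner's interface process from the collaborative model:
    keep only its interactions, each classified as send or receive."""
    steps = []
    for sender, receiver, msg in model:
        if sender == enterprise:
            steps.append(("send", msg, receiver))
        elif receiver == enterprise:
            steps.append(("receive", msg, sender))
    return steps

buyer_view = interface_process(collaborative, "Buyer")
```

Because every interface process is derived from the same global model, the send/receive steps of the partners pair up by construction, which is why the generated processes are interoperable.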

  20. Entity-Centric Abstraction and Modeling Framework for Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Lewe, Jung-Ho; DeLaurentis, Daniel A.; Mavris, Dimitri N.; Schrage, Daniel P.

    2007-01-01

    A comprehensive framework for representing transportation architectures is presented. After discussing a series of preceding perspectives and formulations, the intellectual underpinning of the novel framework using an entity-centric abstraction of transportation is described. The entities include endogenous and exogenous factors and functional expressions are offered that relate these and their evolution. The end result is a Transportation Architecture Field which permits analysis of future concepts under the holistic perspective. A simulation model which stems from the framework is presented and exercised, producing results which quantify improvements in air transportation due to advanced aircraft technologies. Finally, a modeling hypothesis and its accompanying criteria are proposed to test further use of the framework for evaluating new transportation solutions.

  1. Coaching Model + Clinical Playbook = Transformative Learning.

    PubMed

    Fletcher, Katherine A; Meyer, Mary

    2016-01-01

    Health care employers demand that workers be skilled in clinical reasoning, able to work within complex interprofessional teams to provide safe, quality patient-centered care in a complex evolving system. To this end, there have been calls for radical transformation of nursing education including the development of a baccalaureate generalist nurse. Based on recommendations from the American Association of Colleges of Nursing, faculty concluded that clinical education must change moving beyond direct patient care by applying the concepts associated with designer, manager, and coordinator of care and being a member of a profession. To accomplish this, the faculty utilized a system of focused learning assignments (FLAs) that present transformative learning opportunities that expose students to "disorienting dilemmas," alternative perspectives, and repeated opportunities to reflect and challenge their own beliefs. The FLAs collected in a "Playbook" were scaffolded to build the student's competencies over the course of the clinical experience. The FLAs were centered on the 6 Quality and Safety Education for Nurses competencies, with 2 additional concepts of professionalism and systems-based practice. The FLAs were competency-based exercises that students performed when not assigned to direct patient care or had free clinical time. Each FLA had a lesson plan that allowed the student and faculty member to see the competency addressed by the lesson, resources, time on task, student instructions, guide for reflection, grading rubric, and recommendations for clinical instructor. The major advantages of the model included (a) consistent implementation of structured learning experiences by a diverse teaching staff using a coaching model of instruction; (b) more systematic approach to present learning activities that build upon each other; (c) increased time for faculty to interact with students providing direct patient care; (d) guaranteed capture of selected transformative

  2. Derivation of Rigid Body Analysis Models from Vehicle Architecture Abstractions

    DTIC Science & Technology

    2011-06-17

    models of every type have their basis in some type of physical representation of the design domain. Rather than describing three-dimensional continua of...arrangement, while capturing just enough physical detail to be used as the basis for a meaningful representation of the design, and eventually, analyses that...permit architecture assessment. The design information captured by the abstractions is available at the very earliest stages of the vehicle

  3. Assessing Aegis Program Transition to an Open-Architecture Model

    DTIC Science & Technology

    2013-01-01

    executive officer for Integrated Warfare Systems, encouraged and supported this research effort. Bill Bray, Myron Liszniansky, Kathy Emery, and...Architecture Model line development initiatives begin by “cloning”3 the software of a previous baseline. After the baseline is certified, it is...maintained separately. Navy officials sometimes refer to this approach as “clone and own.” One of the implications of this approach is that fixes or

  4. A Coupled Simulation Architecture for Agent-Based/Geohydrological Modelling

    NASA Astrophysics Data System (ADS)

    Jaxa-Rozen, M.

    2016-12-01

    The quantitative modelling of social-ecological systems can provide useful insights into the interplay between social and environmental processes, and their impact on emergent system dynamics. However, such models should acknowledge the complexity and uncertainty of both of the underlying subsystems. For instance, the agent-based models which are increasingly popular for groundwater management studies can be made more useful by directly accounting for the hydrological processes which drive environmental outcomes. Conversely, conventional environmental models can benefit from an agent-based depiction of the feedbacks and heuristics which influence the decisions of groundwater users. From this perspective, this work describes a Python-based software architecture which couples the popular NetLogo agent-based platform with the MODFLOW/SEAWAT geohydrological modelling environment. This approach enables users to implement agent-based models in NetLogo's user-friendly platform, while benefiting from the full capabilities of MODFLOW/SEAWAT packages or reusing existing geohydrological models. The software architecture is based on the pyNetLogo connector, which provides an interface between the NetLogo agent-based modelling software and the Python programming language. This functionality is then extended and combined with Python's object-oriented features, to design a simulation architecture which couples NetLogo with MODFLOW/SEAWAT through the FloPy library (Bakker et al., 2016). The Python programming language also provides access to a range of external packages which can be used for testing and analysing the coupled models, which is illustrated for an application of Aquifer Thermal Energy Storage (ATES).

  5. Study of performance on SMP and distributed memory architectures using a shared memory programming model

    SciTech Connect

    Brooks, E.D.; Warren, K.H.

    1997-08-08

    In this paper we examine the use of a shared memory programming model to address the problem of portability of application codes between distributed memory and shared memory architectures. We do this with an extension of the Parallel C Preprocessor. The extension, borrowed from Split-C and AC, uses type qualifiers instead of storage class modifiers to declare variables that are shared among processors. The type qualifier declaration supports an abstract shared memory facility on distributed memory machines while making direct use of hardware support on shared memory architectures. Our benchmarking study spans a wide range of shared memory and distributed memory platforms. Benchmarks include Gaussian elimination with back substitution, a two-dimensional fast Fourier transform, and a matrix-matrix multiply. We find that the type-qualifier-based shared memory programming model is capable of efficiently spanning both distributed memory and shared memory architectures. Although the resulting shared memory programming model is portable, it does not remove the need to arrange for overlapped or blocked remote memory references on platforms that require these tuning measures in order to obtain good performance.

  6. Managing changes in the enterprise architecture modelling context

    NASA Astrophysics Data System (ADS)

    Khanh Dam, Hoa; Lê, Lam-Son; Ghose, Aditya

    2016-07-01

    Enterprise architecture (EA) models the whole enterprise in various aspects regarding both business processes and information technology resources. As the organisation grows, the architecture of its systems and processes must also evolve to meet the demands of the business environment. Evolving an EA model may involve making changes to various components across different levels of the EA. As a result, an important issue before making a change to an EA model is assessing the ripple effect of the change, i.e. change impact analysis. Another critical issue is change propagation: given a set of primary changes that have been made to the EA model, what additional secondary changes are needed to maintain consistency across multiple levels of the EA. There has been however limited work on supporting the maintenance and evolution of EA models. This article proposes an EA description language, namely ChangeAwareHierarchicalEA, integrated with an evolution framework to support both change impact analysis and change propagation within an EA model. The core part of our framework is a technique for computing the impact of a change and a new method for generating interactive repair plans from Alloy consistency rules that constrain the EA model.
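
Change impact analysis over an EA model reduces, in its simplest form, to a reachability computation over dependency edges. The sketch below illustrates that; the component names and the flat dependency graph are invented for the example, and the paper's approach (Alloy consistency rules, interactive repair plans) goes well beyond this.

```python
from collections import deque

# depends_on[x] = components that must be revisited when x changes
# (a toy EA dependency graph spanning business and IT layers).
depends_on = {
    "order_process": ["order_service"],
    "order_service": ["app_server", "order_db"],
    "order_db": ["db_server"],
}

def impact_set(changed, graph):
    """Ripple effect of a primary change: every component transitively
    reachable through dependency edges (breadth-first traversal)."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

secondary = impact_set("order_process", depends_on)
```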

  7. SpaceWire model development technology for satellite architecture.

    SciTech Connect

    Eldridge, John M.; Leemaster, Jacob Edward; Van Leeuwen, Brian P.

    2011-09-01

    Packet switched data communications networks that use distributed processing architectures have the potential to simplify the design and development of new, increasingly more sophisticated satellite payloads. In addition, the use of reconfigurable logic may reduce the amount of redundant hardware required in space-based applications without sacrificing reliability. These concepts were studied using software modeling and simulation, and the results are presented in this report. Models of the commercially available, packet switched data interconnect SpaceWire protocol were developed and used to create network simulations of data networks containing reconfigurable logic with traffic flows for timing system distribution.

  8. Plant growth and architectural modelling and its applications

    PubMed Central

    Guo, Yan; Fourcaud, Thierry; Jaeger, Marc; Zhang, Xiaopeng; Li, Baoguo

    2011-01-01

    Over the last decade, a growing number of scientists around the world have invested in research on plant growth and architectural modelling and applications (often abbreviated to plant modelling and applications, PMA). By combining physical and biological processes, spatially explicit models have shown their ability to help in understanding plant–environment interactions. This Special Issue on plant growth modelling presents new information within this topic, which are summarized in this preface. Research results for a variety of plant species growing in the field, in greenhouses and in natural environments are presented. Various models and simulation platforms are developed in this field of research, opening new features to a wider community of researchers and end users. New modelling technologies relating to the structure and function of plant shoots and root systems are explored from the cellular to the whole-plant and plant-community levels. PMID:21638797

  9. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    SciTech Connect

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.
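
A 1-D Markov chain is the simplest member of the Markov model family mentioned in the report and illustrates how facies transitions are simulated. The facies names and transition probabilities below are invented for the example, not the Stratton Field statistics.

```python
import random

# Transition probabilities between facies along a vertical succession.
P = {
    "channel":    {"channel": 0.6, "splay": 0.3, "floodplain": 0.1},
    "splay":      {"channel": 0.2, "splay": 0.5, "floodplain": 0.3},
    "floodplain": {"channel": 0.1, "splay": 0.2, "floodplain": 0.7},
}

def simulate_facies(n, start="floodplain", seed=42):
    """Simulate a facies column as a first-order Markov chain: each
    facies depends only on the one below it."""
    rng = random.Random(seed)
    column, state = [start], start
    for _ in range(n - 1):
        states, probs = zip(*P[state].items())
        state = rng.choices(states, weights=probs)[0]
        column.append(state)
    return column

column = simulate_facies(50)
```

Markov random field models generalize this from a 1-D chain to 2-D/3-D neighborhoods, which is what makes them suitable for areal facies distributions.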

  10. An Executable Architecture Tool for the Modeling and Simulation of Operational Process Models

    DTIC Science & Technology

    2015-03-16

    Massachusetts: Gensym Corporation, 2007. [9] Simul8 Simulation Software. Boston, Massachusetts: Simul8 Corpora- tion, 2013. [10] COREsim. Blacksburg...An Executable Architecture Tool for the Modeling and Simulation of Operational Process Models Natalie M. Nakhla Member, IEEE Canadian Forces Warfare...Department of National Defence E-mail:Kendall.Wheaton@forces.gc.ca Abstract—This paper presents an executable architecture tool for the modeling and simulation

  11. A Distributed, Cross-Agency Software Architecture for Sharing Climate Models and Observational Data Sets (Invited)

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Mattmann, C. A.; Braverman, A. J.; Cinquini, L.

    2010-12-01

    The Jet Propulsion Laboratory (JPL) has been developing a distributed infrastructure to support access and sharing of Earth Science observational data sets with climate models to support model-to-data intercomparison for climate research. The Climate Data Exchange (CDX), a framework for linking distributed repositories coupled with tailored distributed services to support the intercomparison, provides mechanisms to discover, access, transform and share observational and model output data [2]. These services are critical to allowing data to remain distributed, but be pulled together to support analysis. The architecture itself provides a services-based approach allowing for integrating and working with other computing infrastructures through well-defined software interfaces. Specifically, JPL has worked very closely with the Earth System Grid (ESG) and the Program for Climate Model Diagnostics and Intercomparisons (PCMDI) at Lawrence Livermore National Laboratory (LLNL) to integrate NASA science data systems with the Earth System Grid to support federation across organizational and agency boundaries [1]. Of particular interest near-term is enabling access to NASA observational data alongside climate models for the Coupled Model Intercomparison Project known as CMIP5. CMIP5 is the protocol that will be used for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR5) on climate change. JPL and NASA are currently engaged in a project to ensure that observational data are available to the climate research community through the Earth System Grid. By both developing a software architecture and working with the key architects for the ESG, JPL has been successful at building a prototype for AR5. This presentation will review the software architecture including core principles, models and interfaces, the Climate Data Exchange project and specific goals to support access to both observational data and models for AR5. It will highlight the progress

  12. Investigation of Transformer Model for TRV Calculation by EMTP

    NASA Astrophysics Data System (ADS)

    Thein, Myo Min; Ikeda, Hisatoshi; Harada, Katsuhiko; Ohtsuka, Shinya; Hikita, Masayuki; Haginomori, Eiichi; Koshiduka, Tadashi

    Analysis of the EMTP transformer model was performed on a 4 kVA two-winding low-voltage transformer with the current injection (CIJ) measurement method, to study the transient recovery voltage (TRV) under the transformer limited fault (TLF) current interrupting condition. The tested transformer's impedance was measured with a frequency response analyzer (FRA). From the FRA measurement graphs, the leakage inductance, stray capacitance and resistance were calculated, and the EMTP transformer model was constructed with those values. The EMTP simulation was then run for a current injection circuit using the transformer model. The experiment and simulation results show reasonable agreement.
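
The lumped-element view behind such a model can be illustrated with a series RLC step response, which is the textbook simplification of a TLF transient recovery voltage. The element values below are made up for illustration; a real EMTP model uses the FRA-derived parameters and a full circuit.

```python
import math

def trv_waveform(v_peak, L, C, R, t):
    """Voltage across the breaker after fault clearing, modelled as the
    step response of a series RLC circuit (underdamped case assumed)."""
    omega0 = 1.0 / math.sqrt(L * C)
    alpha = R / (2.0 * L)
    omega_d = math.sqrt(omega0**2 - alpha**2)
    return v_peak * (1 - math.exp(-alpha * t) *
                     (math.cos(omega_d * t) + alpha / omega_d * math.sin(omega_d * t)))

# Illustrative values: 10 mH leakage inductance, 2 nF stray capacitance,
# 50 ohm damping resistance.
L, C, R = 10e-3, 2e-9, 50.0
f_natural = 1.0 / (2 * math.pi * math.sqrt(L * C))   # TRV oscillation frequency
```

With light damping the first peak approaches twice the source voltage, which is why TLF interruption produces steep, high-peak TRVs.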

  13. Low cost high throughput pipelined architecture of 2-D 8 × 8 integer transforms for H.264/AVC

    NASA Astrophysics Data System (ADS)

    Sharma, Meeturani; Durga Tiwari, Honey; Cho, Yong Beom

    2013-08-01

    In this article, we present the implementation of high throughput two-dimensional (2-D) 8 × 8 forward and inverse integer DCT transform for H.264. Using matrix decomposition and matrix operation, such as the Kronecker product and direct sum, the forward and inverse integer transform can be represented using simple addition operations. The dual clocked pipelined structure of the proposed implementation uses non-floating point adders and does not require any transpose memory. Hardware synthesis shows that the maximum operating frequency of the proposed pipelined architecture is 1.31 GHz, which achieves 21.05 Gpixels/s throughput rate with the hardware cost of 42932 gates. High throughput and low hardware makes the proposed design useful for real time H.264/AVC high definition processing.
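
The separable 2-D transform Y = C·X·Cᵀ at the heart of such architectures can be checked in plain Python. The 8×8 kernel below is one commonly cited statement of the H.264/AVC High profile integer transform basis (scaling and quantisation stages omitted); verify against the standard before reuse.

```python
# One common statement of the H.264/AVC 8x8 integer transform kernel
# (unscaled basis; quantisation/scaling omitted).
C = [
    [ 8,  8,  8,  8,  8,  8,  8,  8],
    [12, 10,  6,  3, -3, -6, -10, -12],
    [ 8,  4, -4, -8, -8, -4,  4,  8],
    [10, -3, -12, -6,  6, 12,  3, -10],
    [ 8, -8, -8,  8,  8, -8, -8,  8],
    [ 6, -12,  3, 10, -10, -3, 12, -6],
    [ 4, -8,  8, -4, -4,  8, -8,  4],
    [ 3, -6, 10, -12, 12, -10,  6, -3],
]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def forward_8x8(X):
    """Separable 2-D transform: rows first, then columns (Y = C X C^T).
    Hardware splits these into the two pipeline stages the article describes."""
    return matmul(matmul(C, X), transpose(C))

X = [[(i * 8 + j) % 17 for j in range(8)] for i in range(8)]
Y = forward_8x8(X)
```

The rows of the kernel are mutually orthogonal, which is what makes an integer inverse (after per-coefficient scaling) possible without floating point.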

  14. Building energy modeling for green architecture and intelligent dashboard applications

    NASA Astrophysics Data System (ADS)

    DeBlois, Justin

    Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied the passive technique called the roof solar chimney for reducing the cooling load in homes architecturally. Three models of the chimney were created: a zonal building energy model, computational fluid dynamics model, and numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it for building simulation reliably. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high and low efficiency constructions, and three ventilation strategies. The results showed that in four US climates, the roof solar chimney results in significant cooling load energy savings of up to 90%. After developing this new method for the small scale representation of a passive architecture technique in BEM, the study expanded the scope to address a fundamental issue in modeling - the implementation of the uncertainty from and improvement of occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration to the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the

  15. Probabilistic logic modeling of network reliability for hybrid network architectures

    SciTech Connect

    Wyss, G.D.; Schriner, H.K.; Gaylor, T.R.

    1996-10-01

    Sandia National Laboratories has found that the reliability and failure modes of current-generation network technologies can be effectively modeled using fault tree-based probabilistic logic modeling (PLM) techniques. We have developed fault tree models that include various hierarchical networking technologies and classes of components interconnected in a wide variety of typical and atypical configurations. In this paper we discuss the types of results that can be obtained from PLMs and why these results are of great practical value to network designers and analysts. After providing some mathematical background, we describe the "plug-and-play" fault tree analysis methodology that we have developed for modeling connectivity and the provision of network services in several current-generation network architectures. Finally, we demonstrate the flexibility of the method by modeling the reliability of a hybrid example network that contains several interconnected ethernet, FDDI, and token ring segments. 11 refs., 3 figs., 1 tab.
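    The core of such a fault-tree evaluation can be sketched in a few lines. The gate logic below assumes independent component failures, and the component names and probabilities are invented for illustration, not taken from Sandia's models:

```python
# Minimal fault-tree probability evaluation under the usual independence
# assumption. Components and numbers are hypothetical.

def and_gate(*probs):
    """P(all inputs fail) for independent inputs."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """P(at least one input fails) for independent inputs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Toy hybrid network: service is lost if the backbone fails OR both
# redundant access switches fail.
p_backbone = 0.01
p_switch_a = 0.05
p_switch_b = 0.05

p_loss = or_gate(p_backbone, and_gate(p_switch_a, p_switch_b))
```

    Real PLM tools additionally compute minimal cut sets and importance measures; the gates above are only the building blocks.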

  16. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Shao, Howard M. (Inventor); Truong, Trieu-Kie (Inventor); Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor)

    1989-01-01

    Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.
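    The syndrome computation S(x) that feeds both decoders can be illustrated in miniature. The sketch below works in GF(2^4) with primitive polynomial x^4 + x + 1 rather than the GF(2^8) field of the (255,223) code, and shows only the finite-field arithmetic, not the patented pipeline architecture:

```python
# GF(2^4) arithmetic via exp/log tables, primitive polynomial x^4 + x + 1.
# A toy stand-in for the GF(2^8) field used by the actual decoders.
PRIM = 0b10011
EXP = [0] * 15
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0b10000:
        x ^= PRIM  # reduce modulo the primitive polynomial

def gf_mul(a, b):
    """Multiply two field elements (0 absorbs everything)."""
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 15]

def syndromes(received, num_syndromes):
    """S_j = sum_i r_i * alpha^(i*j) for j = 1..num_syndromes.
    All syndromes are zero iff the received word is a codeword."""
    out = []
    for j in range(1, num_syndromes + 1):
        s = 0
        for i, r in enumerate(received):
            if r:
                s ^= EXP[(LOG[r] + i * j) % 15]
        out.append(s)
    return out
```

    For a single error of value v at position p in an otherwise-zero word, S_j = v * alpha^(p*j), which is what the errata locator stage then solves for.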

  17. Crystal Level Continuum Modeling of Phase Transformations: The α ↔ ε Transformation in Iron

    SciTech Connect

    Barton, N R; Benson, D J; Becker, R; Bykov, Y; Caplan, M

    2004-10-18

    We present a crystal level model for thermo-mechanical deformation with phase transformation capabilities. The model is formulated to allow for large pressures (on the order of the elastic moduli) and makes use of a multiplicative decomposition of the deformation gradient. Elastic and thermal lattice distortions are combined into a single lattice stretch to allow the model to be used in conjunction with general equation of state relationships. Phase transformations change the mass fractions of the material constituents. The driving force for phase transformations includes terms arising from mechanical work, from the temperature-dependent chemical free energy change on transformation, and from interaction energy among the constituents. Deformation results from both these phase transformations and elasto-viscoplastic deformation of the constituents themselves. Simulation results are given for the α to ε phase transformation in iron. Results include simulations of shock-induced transformation in single crystals and of compression of polycrystals. Results are compared to available experimental data.
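    The multiplicative decomposition mentioned in the abstract is conventionally written as follows (standard crystal-plasticity notation; the paper's exact symbols may differ):

```latex
\mathbf{F} \;=\; \mathbf{F}^{\ast}\,\mathbf{F}^{p}
```

    where \(\mathbf{F}^{\ast}\) is the single lattice stretch combining the elastic and thermal distortions and \(\mathbf{F}^{p}\) collects the plastic slip and transformation contributions.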

  18. Architecture in motion: A model for music composition

    NASA Astrophysics Data System (ADS)

    Variego, Jorge Elias

    2011-12-01

    Speculations regarding the relationship between music and architecture go back to the very origins of these disciplines. Throughout history, these links have always reaffirmed that music and architecture are analogous art forms that only diverge in their object of study. In the 1st c. BCE, Vitruvius conceived Architecture as "one of the most inclusive and universal human activities" where the architect should be educated in all the arts, having a vast knowledge in history, music and philosophy. In the 18th c., the German thinker Johann Wolfgang von Goethe described Architecture as "frozen music". More recently, in the 20th c., Iannis Xenakis studied the structuring principles shared by Music and Architecture, creating his own "models" of musical composition based on mathematical principles and geometric constructions. The goal of this document is to propose a compositional method that will function as a translator between the acoustical properties of a room and music, to facilitate the creation of musical works that will not only happen within an enclosed space but will also intentionally interact with the space. Acoustical properties of rooms, such as reverberation time, frequency response and volume, will be measured and systematically organized in correspondence with orchestrational parameters. The musical compositions created with the proposed model are evocative of the spaces on which they are based. They are meant to be performed in any space, not exclusively in the one where the acoustical measurements were obtained. The visual component of architectural design is disregarded; the room is considered a musical instrument, with its particular sound qualities and resonances. Compositions using the proposed model will not be mere sonified shapes; they will be musical works literally "tuned" to a specific space. This Architecture in motion is an attempt to put scientific research at the service of a creative activity and to let the aural properties of

  19. Predicting chromatin architecture from models of polymer physics.

    PubMed

    Bianco, Simona; Chiariello, Andrea M; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2017-01-09

    We review the picture of chromatin large-scale 3D organization emerging from the analysis of Hi-C data and polymer modeling. In higher mammals, Hi-C contact maps reveal a complex higher-order organization, extending from the sub-Mb to chromosomal scales, hierarchically folded in a structure of domains-within-domains (metaTADs). The domain folding hierarchy is partially conserved throughout differentiation, and deeply correlated to epigenomic features. Rearrangements in the metaTAD topology relate to gene expression modifications: in particular, in neuronal differentiation models, topologically associated domains (TADs) tend to have coherent expression changes within architecturally conserved metaTAD niches. To identify the nature of architectural domains and their molecular determinants within a principled approach, we discuss models based on polymer physics. We show that basic concepts of interacting polymer physics explain chromatin spatial organization across chromosomal scales and cell types. The 3D structure of genomic loci can be derived with high accuracy and its molecular determinants identified by crossing information with epigenomic databases. In particular, we illustrate the case of the Sox9 locus, linked to human congenital disorders. The model's in silico predictions of the effects of genomic rearrangements are confirmed by available 5C data. This can help establish new diagnostic tools for diseases linked to chromatin mis-folding, such as congenital disorders and cancer.

  20. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept in which modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on the dependencies that emerge as the system progresses down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements; the design is then derived from the requirements.
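    The distinction between static and dynamic PRA can be sketched with a toy Monte Carlo model. The event probabilities and the one-event "scheduler" below are invented for illustration and are far simpler than the IMM:

```python
# Toy Monte Carlo risk estimators. Static PRA draws outcomes from fixed
# probabilities; the "dynamic" variant walks a time line, so when an event
# occurs depends on the history of the run. All numbers are hypothetical.
import random

def static_pra(p_event, p_mitigation_fails, trials, seed=0):
    """P(bad outcome) when event occurrence and mitigation are independent."""
    rng = random.Random(seed)
    bad = sum(1 for _ in range(trials)
              if rng.random() < p_event and rng.random() < p_mitigation_fails)
    return bad / trials

def dynamic_pra(p_event_per_day, days, p_mitigation_fails, trials, seed=0):
    """Walk down a time line: each day the event may occur; the first
    occurrence is resolved by the mitigation (a crude one-event scheduler)."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        for _day in range(days):
            if rng.random() < p_event_per_day:
                if rng.random() < p_mitigation_fails:
                    bad += 1
                break  # event resolved one way or the other
    return bad / trials
```

    A real dPRA planner/scheduler would manage a queue of many interacting events and rules; this sketch only shows why outcomes become path-dependent once a time line is introduced.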

  1. A functional model of sensemaking in a neurocognitive architecture.

    PubMed

    Lebiere, Christian; Pirolli, Peter; Thomson, Robert; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R

    2013-01-01

    Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment.

  2. A Functional Model of Sensemaking in a Neurocognitive Architecture

    PubMed Central

    Lebiere, Christian; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R.

    2013-01-01

    Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment. PMID:24302930

  3. Integrating Physiology and Architecture in Models of Fruit Expansion

    PubMed Central

    Cieslak, Mikolaj; Cheddadi, Ibrahim; Boudon, Frédéric; Baldazzi, Valentina; Génard, Michel; Godin, Christophe; Bertin, Nadia

    2016-01-01

    Architectural properties of a fruit, such as its shape, vascular patterns, and skin morphology, play a significant role in determining the distributions of water, carbohydrates, and nutrients inside the fruit. Understanding the impact of these properties on fruit quality is difficult because they develop over time and are highly dependent on both genetic and environmental controls. We present a 3D functional-structural fruit model that can be used to investigate effects of the principal architectural properties on fruit quality. We use a three-step modeling pipeline in the OpenAlea platform: (1) creating a 3D volumetric mesh representation of the internal and external fruit structure, (2) generating a complex network of vasculature that is embedded within this mesh, and (3) integrating aspects of the fruit's function, such as water and dry matter transport, with the fruit's structure. We restrict our approach to the phase where fruit growth is mostly due to cell expansion and the fruit has already differentiated into different tissue types. We show how fruit shape affects vascular patterns and, as a consequence, the distribution of sugar/water in tomato fruit. Furthermore, we show that strong interaction between tomato fruit shape and vessel density induces, independently of size, a pronounced and contrasting gradient of water supply from the pedicel to the blossom end of the fruit. We also demonstrate how skin morphology related to microcracking distribution affects the distribution of water and sugars inside nectarine fruit. Our results show that such a generic model permits detailed studies of various, unexplored architectural features affecting fruit quality development. PMID:27917187

  4. Integrating Physiology and Architecture in Models of Fruit Expansion.

    PubMed

    Cieslak, Mikolaj; Cheddadi, Ibrahim; Boudon, Frédéric; Baldazzi, Valentina; Génard, Michel; Godin, Christophe; Bertin, Nadia

    2016-01-01

    Architectural properties of a fruit, such as its shape, vascular patterns, and skin morphology, play a significant role in determining the distributions of water, carbohydrates, and nutrients inside the fruit. Understanding the impact of these properties on fruit quality is difficult because they develop over time and are highly dependent on both genetic and environmental controls. We present a 3D functional-structural fruit model that can be used to investigate effects of the principal architectural properties on fruit quality. We use a three-step modeling pipeline in the OpenAlea platform: (1) creating a 3D volumetric mesh representation of the internal and external fruit structure, (2) generating a complex network of vasculature that is embedded within this mesh, and (3) integrating aspects of the fruit's function, such as water and dry matter transport, with the fruit's structure. We restrict our approach to the phase where fruit growth is mostly due to cell expansion and the fruit has already differentiated into different tissue types. We show how fruit shape affects vascular patterns and, as a consequence, the distribution of sugar/water in tomato fruit. Furthermore, we show that strong interaction between tomato fruit shape and vessel density induces, independently of size, a pronounced and contrasting gradient of water supply from the pedicel to the blossom end of the fruit. We also demonstrate how skin morphology related to microcracking distribution affects the distribution of water and sugars inside nectarine fruit. Our results show that such a generic model permits detailed studies of various, unexplored architectural features affecting fruit quality development.

  5. Building Structure Design as an Integral Part of Architecture: A Teaching Model for Students of Architecture

    ERIC Educational Resources Information Center

    Unay, Ali Ihsan; Ozmen, Cengiz

    2006-01-01

    This paper explores the place of structural design within undergraduate architectural education. The role and format of lecture-based structure courses within an education system, organized around the architectural design studio is discussed with its most prominent problems and proposed solutions. The fundamental concept of the current teaching…

  6. Building Structure Design as an Integral Part of Architecture: A Teaching Model for Students of Architecture

    ERIC Educational Resources Information Center

    Unay, Ali Ihsan; Ozmen, Cengiz

    2006-01-01

    This paper explores the place of structural design within undergraduate architectural education. The role and format of lecture-based structure courses within an education system, organized around the architectural design studio is discussed with its most prominent problems and proposed solutions. The fundamental concept of the current teaching…

  7. Parametric Wave Transformation Models on Natural Beaches

    NASA Astrophysics Data System (ADS)

    Apotsos, A. A.; Raubenheimer, B.; Elgar, S.; Guza, R. T.

    2006-12-01

    Seven parametric models for wave height transformation across the surf zone [e.g., Thornton and Guza, 1983] are tested with observations collected between the shoreline and about 5-m water depth during 2 experiments on a barred beach near Duck, NC, and between the shoreline and about 3.5-m water depth during 2 experiments on unbarred beaches near La Jolla, CA. Offshore wave heights ranged from about 0.1 to 3.0 m. Beach profiles were surveyed approximately every other day. The models predict the observations well. Root-mean-square errors between observed and simulated wave heights are small in water depths h > 2 m (average rms errors < 10%), and increase with decreasing depth for h < 2 m (average rms errors > 20%). The lowest rms errors (i.e., the most accurate predictions) are achieved by tuning a free parameter, γ, in each model. To tune the models accurately to the data considered here, observations are required at 3 to 5 locations, and must span the surf zone. No tuned or untuned model provides the best predictions for all data records in any one experiment. The best fit γ's for each model-experiment pair are represented well with an empirical hyperbolic tangent curve based on the inverse Iribarren number. In 3 of the 4 data sets, estimating γ for each model using an average curve based on the predictions and observations from all 4 experiments typically improves model-data agreement relative to using a constant or previously determined empirical γ. The best fit γ's at the 4th experiment (conducted off La Jolla, CA) are roughly 20% smaller than the γ's for the other 3 experiments, and thus using the experiment-averaged curve increases prediction errors. Possible causes for the smaller γ's at the 4th experiment will be discussed. Funded by ONR and NSF.
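    The two ingredients described above, a cross-shore wave-height transform with a tunable γ and a hyperbolic-tangent parameterisation of γ, can be sketched minimally. This is a crude depth-limited saturation model, not the Thornton and Guza formulation, and the curve coefficients are placeholders, not the fitted values from these experiments:

```python
# Illustrative parametric wave-height transform. The wave height is capped at
# gamma * depth as waves move shoreward, and gamma itself follows a tanh curve
# in the inverse Iribarren number. Coefficients a, b, c are invented.
import math

def transform_wave_height(H0, depths, gamma):
    """Saturation-type cross-shore transform: H = min(H0, gamma * h)."""
    return [min(H0, gamma * h) for h in depths]

def gamma_from_iribarren(inv_iribarren, a=0.3, b=0.45, c=0.9):
    """Empirical tanh parameterisation of the breaking parameter gamma."""
    return a + b * math.tanh(c * inv_iribarren)
```

    In the actual models the transform is an energy-flux balance with a breaking dissipation term; the cap above only conveys how γ limits wave height in shallow water.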

  8. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  9. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  10. An architecture model for multiple disease management information systems.

    PubMed

    Chen, Lichin; Yu, Hui-Chu; Li, Hao-Chun; Wang, Yi-Van; Chen, Huang-Jen; Wang, I-Ching; Wang, Chiou-Shiang; Peng, Hui-Yu; Hsu, Yu-Ling; Chen, Chi-Huang; Chuang, Lee-Ming; Lee, Hung-Chang; Chung, Yufang; Lai, Feipei

    2013-04-01

    Disease management is a program which attempts to overcome the fragmentation of the healthcare system and improve the quality of care. Many studies have proven the effectiveness of disease management. However, case managers spend the majority of their time on documentation and on coordinating the members of the care team. They need a tool to support their daily practice and to streamline an inefficient workflow. Several discussions have indicated that information technology plays an important role in the era of disease management. Although applications have been developed, it is inefficient to develop an information system for each disease management program individually. The aim of this research is to support the work of disease management, reform the inefficient workflow, and propose an architecture model that enhances reusability and saves time in information system development. The proposed architecture model was successfully implemented in two disease management information systems, and the result was evaluated through reusability analysis, time-consumed analysis, pre- and post-implementation workflow analysis, and a user questionnaire survey. The reusability of the proposed model was high, less than half of the development time was consumed, and the workflow was improved. Overall user feedback was positive, and the system's supportiveness during the daily workflow was rated high. The system empowers case managers with better information and leads to better decision making.

  11. Multiresolution Stochastic Models, Data Fusion, and Wavelet Transforms

    DTIC Science & Technology

    1992-05-01

    based on the wavelet transform. The statistical structure of these models is Markovian in scale, and in addition the eigenstructure of these models is ... given by the wavelet transform. The implication of this is that by using the wavelet transform we can convert the apparently complicated problem of ... plays the role of the time-like variable. In addition we show how the wavelet transform, which is defined for signals that extend from -infinity to

  12. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
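    The Roofline model used to assess the kernels can be stated in one line: attainable performance is the lesser of the compute peak and memory bandwidth times arithmetic intensity. A sketch with made-up hardware numbers (not those of the SandyBridge, Haswell, or Xeon Phi parts in the paper):

```python
# Roofline bound: a kernel is memory-bound when bandwidth * AI < peak,
# compute-bound otherwise. Hardware numbers below are hypothetical.
def roofline(peak_gflops, bandwidth_gbs, flops, bytes_moved):
    """Return (attainable GFLOP/s, arithmetic intensity in flop/byte)."""
    ai = flops / bytes_moved
    return min(peak_gflops, bandwidth_gbs * ai), ai
```

    Plotting measured kernel performance against this bound is how the paper asserts whether each optimisation has reached the relevant ceiling for each architecture.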

  13. The Application of Architecture Frameworks to Modelling Exploration Operations Costs

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2006-01-01

    Developments in architectural frameworks and system-of-systems thinking have provided useful constructs for systems engineering. DoDAF concepts, language, and formalisms, in particular, provide a natural way of conceptualizing an operations cost model applicable to NASA's space exploration vision. Not all DoDAF products have meaning or apply to a DoDAF inspired operations cost model, but this paper describes how such DoDAF concepts as nodes, systems, and operational activities relate to the development of a model to estimate exploration operations costs. The paper discusses the specific implementation to the Mission Operations Directorate (MOD) operational functions/activities currently being developed and presents an overview of how this powerful representation can apply to robotic space missions as well.

  14. The Application of Architecture Frameworks to Modelling Exploration Operations Costs

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2006-01-01

    Developments in architectural frameworks and system-of-systems thinking have provided useful constructs for systems engineering. DoDAF concepts, language, and formalisms, in particular, provide a natural way of conceptualizing an operations cost model applicable to NASA's space exploration vision. Not all DoDAF products have meaning or apply to a DoDAF inspired operations cost model, but this paper describes how such DoDAF concepts as nodes, systems, and operational activities relate to the development of a model to estimate exploration operations costs. The paper discusses the specific implementation to the Mission Operations Directorate (MOD) operational functions/activities currently being developed and presents an overview of how this powerful representation can apply to robotic space missions as well.

  15. The caBIG® Life Science Business Architecture Model.

    PubMed

    Boyd, Lauren Becnel; Hunicke-Smith, Scott P; Stafford, Grace A; Freund, Elaine T; Ehlman, Michele; Chandran, Uma; Dennis, Robert; Fernandez, Anna T; Goldstein, Stephen; Steffen, David; Tycko, Benjamin; Klemm, Juli D

    2011-05-15

    Business Architecture Models (BAMs) describe what a business does, who performs the activities, where and when activities are performed, how activities are accomplished and which data are present. The purpose of a BAM is to provide a common resource for understanding business functions and requirements and to guide software development. The cancer Biomedical Informatics Grid (caBIG®) Life Science BAM (LS BAM) provides a shared understanding of the vocabulary, goals and processes that are common in the business of LS research. LS BAM 1.1 includes 90 goals and 61 people and groups within Use Case and Activity Unified Modeling Language (UML) Diagrams. Here we report on the model's current release, LS BAM 1.1, its utility and usage, and plans for future use and continuing development for future releases. The LS BAM is freely available as UML, PDF and HTML (https://wiki.nci.nih.gov/x/OFNyAQ).

  16. T:XML: A Tool Supporting User Interface Model Transformation

    NASA Astrophysics Data System (ADS)

    López-Jaquero, Víctor; Montero, Francisco; González, Pascual

    Model driven development of user interfaces is based on the transformation of an abstract specification into the final user interface the user will interact with. The design of transformation rules to carry out this transformation process is a key issue in any model-driven user interface development approach. In this paper, we introduce T:XML, an integrated development environment for managing, creating and previewing transformation rules. The tool supports the specification of transformation rules by using a graphical notation that works on the basis of the transformation of the input model into a graph-based representation. T:XML allows the design and execution of transformation rules in an integrated development environment. Furthermore, the designer can also preview what the generated user interface will look like after the transformations have been applied. These previewing capabilities can be used to quickly create prototypes to discuss with the users in user-centered design methods.
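    A rule-based model-to-model transformation of the kind T:XML manages can be sketched as a simple rewrite table. The element types and widget names below are invented for illustration and do not reflect T:XML's actual rule language:

```python
# Hypothetical abstract-to-concrete UI transformation: each abstract
# interaction element is rewritten into a concrete widget description.
RULES = {"choice": "radio_group", "text_input": "text_field", "trigger": "button"}

def transform(abstract_ui):
    """Apply the rewrite table; unknown element types fall back to a panel."""
    return [{"widget": RULES.get(elem["type"], "panel"), "label": elem["label"]}
            for elem in abstract_ui]
```

    Real transformation rules also handle layout, nesting, and platform constraints; the table above only conveys the rewrite idea that the tool lets designers edit and preview.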

  17. A parallel-pipeline architecture of the fast polynomial transform for computing a two-dimensional cyclic convolution

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Liu, K. Y.; Reed, I. S.

    1983-01-01

    It is pointed out that the two-dimensional cyclic convolution is a useful tool for many two-dimensional digital signal processing applications. Two important applications are related to spaceborne high-resolution synthetic aperture radar (SAR) processing and image processing. Nussbaumer and Quandalle (1978) showed that a radix-2 polynomial transform analogous to the conventional radix-2 FFT algorithm can be used to compute a two-dimensional cyclic convolution. On the basis of results reported by Arambepola and Rayner (1979), a radix-2 polynomial transform can be defined to compute a multidimensional cyclic convolution. Truong et al. (1981) used the considered ideas together with the Chinese Theorem to further reduce the complexity of the radix-2 fast polynomial transform (FPT). Reed et al. (1981) demonstrated that such a new FPT algorithm is significantly faster than the FFT algorithm for computing a two-dimensional convolution. In the present investigation, a parallel-pipeline architecture is considered for implementing the FPT developed by Truong et al.
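    The quantity all of these algorithms compute is the two-dimensional cyclic convolution. The direct O(N^4) reference form below defines the operation; the fast polynomial transform evaluates the same sums far more efficiently:

```python
# Direct (reference) 2D cyclic convolution of two N x N arrays.
# y[i][j] = sum over k, l of x[k][l] * h[(i - k) mod N][(j - l) mod N]
def cyclic_conv2d(x, h):
    n = len(x)
    y = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0
            for k in range(n):
                for m in range(n):
                    s += x[k][m] * h[(i - k) % n][(j - m) % n]
            y[i][j] = s
    return y
```

    Convolving with a unit impulse returns the other operand, and shifting the impulse cyclically shifts the result, which is a quick sanity check on any fast implementation.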

  18. Polygonal Shapes Detection in 3d Models of Complex Architectures

    NASA Astrophysics Data System (ADS)

    Benciolini, G. B.; Vitti, A.

    2015-02-01

    A sequential application of two global models defined on a variational framework is proposed for the detection of polygonal shapes in 3D models of complex architectures. As a first step, the procedure involves the use of the Mumford and Shah (1989) 1st-order variational model in dimension two (gridded height data are processed). In the Mumford-Shah model an auxiliary function detects the sharp changes, i.e., the discontinuities, of a piecewise smooth approximation of the data. The Mumford-Shah model requires the global minimization of a specific functional to simultaneously produce both the smooth approximation and its discontinuities. In the proposed procedure, the edges of the smooth approximation derived by a specific processing of the auxiliary function are then processed using the Blake and Zisserman (1987) 2nd-order variational model in dimension one (edges are processed in the plane). This second step makes it possible to describe the edges of an object by means of a piecewise almost-linear approximation of the input edges themselves and to detect sharp changes in the first derivative of the edges, i.e., corners. The Mumford-Shah variational model is used in two dimensions, accepting the original data as primary input. The Blake-Zisserman variational model is used in one dimension for the refinement of the description of the edges. The selection, among all the boundaries detected by the Mumford-Shah model, of those that present a shape close to a polygon is performed by considering only those boundaries for which the Blake-Zisserman model identified discontinuities in their first derivative. The outputs of the procedure are hence shapes, coming from 3D geometric data, that can be considered as polygons. The application of the procedure is suitable for, but not limited to, the detection of objects such as foot-prints of polygonal buildings, building facade boundaries or window contours. The procedure is applied to a height model of the building of the Engineering

  19. A conceptual framework to design a dimensional model based on the HL7 Clinical Document Architecture.

    PubMed

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2014-01-01

    This paper proposes a conceptual framework to design a dimensional model based on the HL7 Clinical Document Architecture (CDA) standard. The adoption of this framework can represent a possible solution to facilitate the integration of heterogeneous information systems in a clinical data warehouse. This can simplify the Extract, Transform and Load (ETL) procedures that are considered the most time-consuming and expensive part of the data warehouse development process. The paper describes the main activities to be carried out to design the dimensional model, outlining the main advantages of applying the proposed framework. The feasibility of our approach is also demonstrated by providing a case study to define clinical indicators for quality assessment.

  20. 3D reconstruction and dynamic modeling of root architecture in situ and its application to crop phosphorus research.

    PubMed

    Fang, Suqin; Yan, Xiaolong; Liao, Hong

    2009-12-01

    Root architecture plays important roles in plant water and nutrient acquisition. However, accurate modeling of the root system that provides a realistic representation of roots in the soil is limited by a lack of appropriate tools for the non-destructive and precise measurement of the root system architecture in situ. Here we describe a root growth system in which the roots grow in a solid gel matrix that was used to reconstruct 3D root architecture in situ and dynamically simulate its changes under various nutrient conditions with a high degree of precision. A 3D laser scanner combined with a transparent gel-based growth system was used to capture 3D images of roots. The root system skeleton was extracted using a skeleton extraction method based on the Hough transformation, and mesh modeling using Ball B-splines was employed. We successfully used this system to reconstruct rice and soybean root architectures and determine their changes under various phosphorus (P) supply conditions. Our results showed that the 3D root architecture parameters that were dynamically calculated based on the skeletonization and simulation of root systems were significantly correlated with the biomass and P content of rice and soybean based on both the simulation system and previous reports. Therefore, this approach provides a novel technique for the study of crop root growth and its adaptive changes to various environmental conditions.
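
    The voting idea behind the Hough transformation used here for skeleton extraction can be sketched in 2D as follows (a minimal illustration; the paper's method operates on 3D gel-scan data and is considerably more involved):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100):
    """Vote for lines rho = x*cos(theta) + y*sin(theta) in an
    accumulator; peaks correspond to lines supported by many points."""
    points = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(points).max() * np.sqrt(2) + 1e-9
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas, rho_max

# Ten collinear points along y = x: their votes pile up in (nearly)
# a single accumulator cell, at an angle close to 3*pi/4.
acc, thetas, rho_max = hough_lines([(i, i) for i in range(10)])
r, t = np.unravel_index(acc.argmax(), acc.shape)
```

    The same voting principle extends to 3D line and cylinder detection, which is what makes it attractive for tracing root segments in volumetric data.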

  1. Spatial Models for Architectural Heritage in Urban Database Context

    NASA Astrophysics Data System (ADS)

    Costamagna, E.; Spanò, A.

    2011-08-01

    Although GIS (Geographic Information Systems/Geospatial Information Systems) provide several applications to manage two-dimensional geometric information and to arrange the topological relations among different spatial primitives, most of these systems have limited capabilities for managing three-dimensional space. Other tools, such as CAD systems, have already achieved a full capability of representing 3D data. Most research in the field of GIS has underlined the necessity of a full 3D management capability, which is not yet achieved by the available systems (Rahman, Pilouk 2008) (Zlatanova 2002). To reach this goal, it is first important to define the spatial data model, which is at the same time a geometric and a topological model, integrating these two aspects with respect to database management efficiency and documentation purposes. The application field on which these models can be tested is the spatial data management of Architectural Heritage documentation, in order to evaluate the suitability of these spatial models at the scale requested by such documentation. Among the most important aspects are the integration of metric data originating from different sources and the representation and management of multiscale data. The issues connected with the representation of objects at a higher LOD than the ones defined by CityGML will be taken into account. The aim of this paper is then to investigate the favorable applications of a framework that integrates two different approaches: architectural heritage spatial documentation and urban-scale spatial data management.

  2. 3D model tools for architecture and archaeology reconstruction

    NASA Astrophysics Data System (ADS)

    Vlad, Ioan; Herban, Ioan Sorin; Stoian, Mircea; Vilceanu, Clara-Beatrice

    2016-06-01

    The main objective of architectural and patrimonial survey is to provide a precise documentation of the status quo of the surveyed objects (monuments, buildings, archaeological objects and sites) for preservation and protection, for scientific studies and restoration purposes, and for presentation to the general public. Cultural heritage documentation requires an interdisciplinary approach whose purpose is an overall understanding of the object itself and an integration of the information which characterizes it. The accuracy and the precision of the model are directly influenced by the quality of the measurements realized in the field and by the quality of the software. The software is in a process of continuous development, which brings many improvements. On the other hand, compared to aerial photogrammetry, close range photogrammetry and particularly architectural photogrammetry is not limited to vertical photographs with special cameras. The methodology of terrestrial photogrammetry has changed significantly and various photographic acquisitions are widely in use. In this context, the present paper brings forward a comparative study of TLS (Terrestrial Laser Scanner) and digital photogrammetry for 3D modeling. The authors take into account the accuracy of the 3D models obtained, the overall costs involved for each technology and method, and the 4th dimension - time. The paper proves its applicability, as photogrammetric technologies are nowadays used at a large scale for obtaining 3D models of cultural heritage objects and are efficacious in their assessment and monitoring, thus contributing to historic conservation. Its importance also lies in highlighting the advantages and disadvantages of each method used - a very important issue for both the industrial and scientific segments when facing decisions such as in which technology to invest more research and funds.

  3. Hydrologic Modeling in a Service-Oriented Architecture

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.

    2008-12-01

    Service Oriented Architectures (SOA) offer an approach for creating hydrologic models whereby a model is decomposed into independent computational services that are geographically distributed yet accessible through the Internet. The advantage of this modeling approach is that diverse groups can contribute computational routines that are usable by a wide community, and these routines can be used across operating systems and languages with minimal requirements on the client computer. While the approach has clear benefits in building next generation hydrologic models, a number of challenges must be addressed in order for the approach to reach its full potential. One such challenge in achieving service-oriented hydrologic modeling is establishing standards for web service interfaces and for service-to-service data exchanges. This study presents a prototype service-oriented modeling system that leverages existing protocols and standards (OpenMI, WaterML, GML, etc.) to perform service-oriented hydrologic modeling. The aim of the research is to assess the completeness of these existing protocols and standards for this purpose, and to highlight shortcomings that should be addressed through future research and development efforts.
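
    The pull-based composition style associated with OpenMI can be caricatured as follows. This is a minimal sketch: the class and method names are hypothetical stand-ins, not the actual OpenMI interface, which is far richer (time horizons, exchange items, units, spatial element sets).

```python
class LinkableComponent:
    """Toy pull-driven component in the spirit of OpenMI: a downstream
    component asks its provider for values at a requested time.
    Class and method names are illustrative, not the real OpenMI API."""

    def __init__(self, name, compute):
        self.name = name
        self.compute = compute      # (time, upstream_value) -> value
        self.provider = None        # optional upstream component

    def get_values(self, t):
        upstream = self.provider.get_values(t) if self.provider else 0.0
        return self.compute(t, upstream)

# Compose two services: runoff pulls rainfall on demand for time t.
rain = LinkableComponent("rain", lambda t, _up: 2.0 * t)      # toy rate
runoff = LinkableComponent("runoff", lambda t, up: 0.5 * up)  # toy ratio
runoff.provider = rain
```

    The point of the pattern is that each component only needs to agree on the request interface and the exchanged data format (the role WaterML and GML play in the prototype), not on implementation language or location.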

  4. Optimization of Forward Wave Modeling on Contemporary HPC Architectures

    SciTech Connect

    Krueger, Jens; Micikevicius, Paulius; Williams, Samuel

    2012-07-20

    Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI) and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both the single-node and distributed memory environments in order to satisfy industry’s demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations (both on-node computation and inter-node communication), highlighting the importance of NUMA-aware approaches to MPI communication. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny Cours nodes while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.
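
    As a rough illustration of the kind of kernel referred to here, a minimal isotropic (ISO) forward-modeling update in 2D can be sketched as below. This assumes constant velocity, second-order differences and unit grid spacing; it is purely illustrative and far simpler than the optimized high-order TTI kernels the report discusses.

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian of u at interior points (unit spacing)."""
    return (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            - 4.0 * u[1:-1, 1:-1])

def wave_step(u_prev, u_curr, c=1.0, dt=0.1):
    """One explicit time step of u_tt = c^2 * Laplacian(u), the core
    update of acoustic forward modeling (boundaries left untouched)."""
    u_next = 2.0 * u_curr - u_prev
    u_next[1:-1, 1:-1] += (c * dt) ** 2 * laplacian(u_curr)
    return u_next
```

    In production RTM codes this update is where the stencil-optimization work lands: anisotropic (TTI) variants mix cross-derivatives whose repeated terms motivate the common subexpression elimination mentioned above.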

  5. Low ratio current transformer models in the electromagnetic transients program

    SciTech Connect

    Wrate, G.T.; Mork, B.; Mustaphi, K.

    1995-09-01

    Low ratio current transformers are sometimes applied for both overload and fault protection. If sized for overload or neutral imbalance protection of a circuit, the current transformer can be driven deeply into saturation during faults. This could have an effect on the ability of its associated relay to operate properly. To investigate this effect, an EMTP model of a current transformer is developed using a duality derivation. Unlike other models in the literature, this model includes only a small impedance on the primary.

  6. Three phase transformer modelling for fast electromagnetic transient studies

    SciTech Connect

    Papadias, B.C.; Hatziargyriou, N.D.; Bakopoulos, J.A.; Prousalidis, J.M. . Electric Energy Systems Lab.)

    1994-04-01

    In this paper the overvoltages produced by switching the primary side of reactor-loaded transformers are simulated using the Electromagnetic Transients Program (EMTP). Attention is focused on transformer modeling. Five general three-phase transformer models are used; from the results obtained and from comparisons with field tests, positive conclusions are drawn concerning the reliability and the accuracy of these models in the study of fast electromagnetic transients due to switching.

  7. Executable Behavioral Modeling of System and Software Architecture Specifications to Inform Resourcing Decisions

    DTIC Science & Technology

    2016-09-01

    model that all users can interpret, used to communicate with a spectrum of stakeholders. The development of a precise architecture and commonly...A system’s required behaviors can be modeled in MP to confirm that the requirements communicated by the stakeholders have been satisfied...require a different view of the architecture or architecture model to communicate relevant information to different stakeholders with different

  8. Java Architecture for Detect and Avoid Extensibility and Modeling

    NASA Technical Reports Server (NTRS)

    Santiago, Confesor; Mueller, Eric Richard; Johnson, Marcus A.; Abramson, Michael; Snow, James William

    2015-01-01

    Unmanned aircraft will be equipped with a detect-and-avoid (DAA) system that enables them to comply with the requirement to "see and avoid" other aircraft, an important layer in the overall set of procedural, strategic and tactical separation methods designed to prevent mid-air collisions. This paper describes a capability called Java Architecture for Detect and Avoid Extensibility and Modeling (JADEM), developed to prototype and help evaluate various DAA technological requirements by providing a flexible and extensible software platform that models all major detect-and-avoid functions. Figure 1 illustrates JADEM's architecture. The surveillance module can be actual equipment on the unmanned aircraft or simulators that model the process by which sensors on-board detect other aircraft and provide track data to the traffic display. The track evaluation function evaluates each detected aircraft and decides whether to provide an alert to the pilot and its severity. Guidance is a combination of intruder track information, alerting, and avoidance/advisory algorithms behind the tools shown on the traffic display to aid the pilot in determining a maneuver to avoid a loss of well clear. All these functions are designed with a common interface and configurable implementation, which is critical in exploring DAA requirements. To date, JADEM has been utilized in three computer simulations of the National Airspace System, three pilot-in-the-loop experiments using a total of 37 professional UAS pilots, and two flight tests using NASA's Predator-B unmanned aircraft, named Ikhana. The data collected has directly informed the quantitative separation standard for "well clear", safety case, requirements development, and the operational environment for the DAA minimum operational performance standards. This work was performed by the Separation Assurance/Sense and Avoid Interoperability team under NASA's UAS Integration in the NAS project.

  9. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models from digital photographs with software packages such as Maxon Cinema 4D, Autodesk 3ds Max or Maya still requires a complex and time-consuming workflow. Procedures for automatic texture mapping of 3D models are therefore in demand. In this paper two automatic procedures are presented. The first procedure generates textured 3D surface models by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and is developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  10. Computer modeling the internal architecture of carbonate platforms

    SciTech Connect

    Bosence, D.; Waltham, D. )

    1990-01-01

    A numerical computer model is described that calculates the internal architecture of carbonate platforms in response to varying values of carbonate production, subaerial and submarine erosion, sediment redeposition, and sea-level changes. The computer-generated sections closely resemble large-scale outcrops and interpreted seismic profiles through carbonate platforms. Stillstand and transgressive sequences have prograding and downlapping platform geometries with lagoons developing in transgressive systems. Regressive sequences have downlapping clinoforms and erosional upper surfaces. Glacioeustatic scale cycles have a major control on platform geometry with erosional sequence boundaries developing during low stands and platform drowning occurring during transgressive periods. Lowstand downlapping wedges are minor features when compared with clastic systems, and major progradation and downlap of slope deposits develop with transgressions and flooding of platform tops.

  11. An avionics scenario and command model description for Space Generic Open Avionics Architecture (SGOAA)

    NASA Technical Reports Server (NTRS)

    Stovall, John R.; Wray, Richard B.

    1994-01-01

    This paper presents a description of a model for a space vehicle operational scenario and the commands for avionics. This model will be used in developing a dynamic architecture simulation model using the Statemate CASE tool for validation of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA has been proposed as an avionics architecture standard to NASA through its Strategic Avionics Technology Working Group (SATWG) and has been accepted by the Society of Automotive Engineers (SAE) for conversion into an SAE Avionics Standard. This architecture was developed for the Flight Data Systems Division (FDSD) of the NASA Johnson Space Center (JSC) by the Lockheed Engineering and Sciences Company (LESC), Houston, Texas. This SGOAA includes a generic system architecture for the entities in spacecraft avionics, a generic processing external and internal hardware architecture, and a nine class model of interfaces. The SGOAA is both scalable and recursive and can be applied to any hierarchical level of hardware/software processing systems.

  12. Fortran Transformational Tools in Support of Scientific Application Development for Petascale Computer Architectures

    SciTech Connect

    Sottille, Matthew

    2013-09-12

    This document is the final report for a multi-year effort building infrastructure to support tool development for Fortran programs. We also investigated static analysis and code transformation methods relevant to scientific programmers who are writing Fortran programs for petascale-class high performance computing systems. This report details our accomplishments, technical approaches, and provides information on where the research results and code may be obtained from an open source software repository. The report for the first year of the project that was performed at the University of Oregon prior to the PI moving to Galois, Inc. is included as an appendix.

  13. A high frequency transformer model for the EMTP

    SciTech Connect

    Morched, A.; Marti, L.; Ottevangers, J. )

    1993-07-01

    A model to simulate the high frequency behavior of a power transformer is presented. This model is based on the frequency characteristics of the transformer admittance matrix between its terminals over a given range of frequencies. The transformer admittance characteristics can be obtained from measurements or from detailed internal models based on the physical layout of the transformer. The elements of the nodal admittance matrix are approximated with rational functions consisting of real as well as complex conjugate poles and zeros. These approximations are realized in the form of an RLC network in a format suitable for direct use with EMTP. The high frequency transformer model can be used as a stand-alone linear model or as an add-on module of a more comprehensive model where iron core nonlinearities are represented in detail.
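
    The pole-residue (rational-function) form named in this abstract can be sketched as follows; the numeric poles and residues below are made up for illustration and are not fitted to any real transformer data:

```python
import numpy as np

def rational_admittance(s, d, poles, residues):
    """Evaluate Y(s) = d + sum_k [ r_k / (s - p_k) ] in pole-residue
    form; complex poles are listed once and their conjugate partners
    are added automatically so the impulse response stays real."""
    y = np.full_like(s, d, dtype=complex)
    for p, r in zip(poles, residues):
        y += r / (s - p)
        if p.imag != 0.0:
            y += np.conj(r) / (s - np.conj(p))
    return y

# Illustrative (made-up) terms: one real pole plus one conjugate pair,
# evaluated at five frequencies between 100 Hz and 1 MHz.
w = 2 * np.pi * np.logspace(2, 6, 5)
Y = rational_admittance(1j * w, d=1e-4,
                        poles=[-1e3 + 0j, -2e4 + 1e5j],
                        residues=[5e2 + 0j, 1e3 + 2e3j])
```

    Each real pole maps to an RL or RC branch and each conjugate pair to an RLC branch, which is how such an approximation is realized as a network suitable for direct use in EMTP.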

  14. Transform continental margins - part 1: Concepts and models

    NASA Astrophysics Data System (ADS)

    Basile, Christophe

    2015-10-01

    This paper reviews the geodynamic concepts and models related to transform continental margins, and their implications for the structure of these margins. Simple kinematic models of transform faulting associated with continental rifting and oceanic accretion allow the definition of three successive stages of evolution: intra-continental transform faulting, active transform margin, and passive transform margin. Each part of the transform margin experiences these three stages, but the evolution is diachronous along the margin. Both the duration of each stage and the cumulated strike-slip deformation increase from one extremity of the margin (inner corner) to the other (outer corner). Initiation of transform faulting is related to the obliquity between the trend of the lithospheric deformed zone and the relative displacement of the lithospheric plates involved in divergence. In this oblique setting, alternating transform and divergent plate boundaries correspond to spatial partitioning of the deformation. Both the obliquity and the timing of partitioning influence the shape of transform margins. An oblique margin can be defined when oblique rifting is followed by oblique oceanic accretion; in this case, no transform margin should exist in the prolongation of the oceanic fracture zones. Vertical displacements along transform margins were mainly studied to explain the formation of marginal ridges. Numerous models have been proposed, one of the most used being based on thermal exchanges between the oceanic and the continental lithospheres across the transform fault. But this model is compatible neither with numerical computation including the flexural behavior of the lithosphere, nor with the timing of vertical displacements and the lack of heating related to the passing of the oceanic accretion axis as recorded by the Côte d'Ivoire-Ghana marginal ridge. Enhanced models are still needed. They should better take into account the erosion on the continental slope, and the level of coupling

  15. A Survey of Enterprise Architecture Analysis Using Multi Criteria Decision Making Models (MCDM)

    NASA Astrophysics Data System (ADS)

    Zia, Mehmooda Jabeen; Azam, Farooque; Allauddin, Maria

    System design becomes really important for software production due to the continuous increase in the size and complexity of software systems. Building an architecture for systems as large as enterprises is a complex design activity, so selecting the correct architecture is a critical issue in the software engineering domain. Moreover, in enterprise architecture selection different goals and objectives must be taken into consideration, as it is a multi-criteria decision making problem. Generally, this field of enterprise architecture analysis has progressed from the application of linear weighting, through integer programming and linear programming, to multi-criteria decision making (MCDM) models. In this paper we survey two multi-criteria decision making models (AHP, ANP) to determine to what extent they have been used in making powerful decisions in complex enterprise architecture analysis. We have found that by using the ANP model, decision makers of an enterprise can make more precise and suitable decisions in the selection of enterprise architecture styles.
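
    As a minimal sketch of the AHP step surveyed here (ANP's interdependencies among criteria are not shown), priority weights can be taken from the principal eigenvector of a reciprocal pairwise-comparison matrix; the comparison values below are hypothetical:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix
    via its principal eigenvector (the core AHP computation)."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)            # Perron (principal) eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Hypothetical judgments: architecture style 0 is moderately preferred
# (3x) over style 1 and strongly preferred (5x) over style 2.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w = ahp_weights(A)   # normalized priorities, summing to 1
```

    In practice the principal eigenvalue is also compared against the matrix size to compute Saaty's consistency ratio before the weights are trusted.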

  16. Modeling cognitive and emotional processes: a novel neural network architecture.

    PubMed

    Khashman, Adnan

    2010-12-01

    In our continuous attempts to model natural intelligence and emotions in machine learning, many research works emerge with different methods that are often driven by engineering concerns and have the common goal of modeling human perception in machines. This paper aims to go further in that direction by investigating the integration of emotion at the structural level of cognitive systems using the novel emotional DuoNeural Network (DuoNN). This network has hidden-layer DuoNeurons, each of which has two embedded neurons: a dorsal neuron and a ventral neuron for cognitive and emotional data processing, respectively. When input visual stimuli are presented to the DuoNN, the dorsal cognitive neurons process local features while the ventral emotional neurons process the entire pattern. We present the computational model and the learning algorithm of the DuoNN, the method for parallel streaming of the input information (cognitive and emotional), and a comparison between the DuoNN and a recently developed emotional neural network. Experimental results show that the DuoNN architecture, configuration, and the additional emotional information processing yield higher recognition rates and faster learning and decision making.

  17. Developing a scalable modeling architecture for studying survivability technologies

    NASA Astrophysics Data System (ADS)

    Mohammad, Syed; Bounker, Paul; Mason, James; Brister, Jason; Shady, Dan; Tucker, David

    2006-05-01

    To facilitate interoperability of models in a scalable environment, and provide a relevant virtual environment in which Survivability technologies can be evaluated, the US Army Research Development and Engineering Command (RDECOM) Modeling Architecture for Technology Research and Experimentation (MATREX) Science and Technology Objective (STO) program has initiated the Survivability Thread, which will seek to address some of the many technical and programmatic challenges associated with the effort. In coordination with different Thread customers, such as the Survivability branches of various Army labs, a collaborative group has been formed to define the requirements for the simulation environment that would in turn provide them a value-added tool for assessing models and for gauging system-level performance relevant to Future Combat Systems (FCS) and the Survivability requirements of other burgeoning programs. An initial set of customer requirements has been generated in coordination with the RDECOM Survivability IPT lead, through the Survivability Technology Area at RDECOM Tank-automotive Research Development and Engineering Center (TARDEC, Warren, MI). The results of this project are aimed at a culminating experiment and demonstration scheduled for September 2006, which will include a multitude of components from within RDECOM and provide the framework for future experiments to support Survivability research. This paper details the components with which the MATREX Survivability Thread was created and executed, and provides insight into the capabilities currently demanded by the Survivability faculty within RDECOM.

  18. The caBIG® Life Science Business Architecture Model

    PubMed Central

    Boyd, Lauren Becnel; Hunicke-Smith, Scott P.; Stafford, Grace A.; Freund, Elaine T.; Ehlman, Michele; Chandran, Uma; Dennis, Robert; Fernandez, Anna T.; Goldstein, Stephen; Steffen, David; Tycko, Benjamin; Klemm, Juli D.

    2011-01-01

    Motivation: Business Architecture Models (BAMs) describe what a business does, who performs the activities, where and when activities are performed, how activities are accomplished and which data are present. The purpose of a BAM is to provide a common resource for understanding business functions and requirements and to guide software development. The cancer Biomedical Informatics Grid (caBIG®) Life Science BAM (LS BAM) provides a shared understanding of the vocabulary, goals and processes that are common in the business of LS research. Results: LS BAM 1.1 includes 90 goals and 61 people and groups within Use Case and Activity Unified Modeling Language (UML) Diagrams. Here we report on the model's current release, LS BAM 1.1, its utility and usage, and plans for its continuing development in future releases. Availability and Implementation: The LS BAM is freely available as UML, PDF and HTML (https://wiki.nci.nih.gov/x/OFNyAQ). Contact: lbboyd@bcm.edu; laurenbboyd@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21450709

  19. An improved equivalent circuit model of radial mode piezoelectric transformer.

    PubMed

    Huang, Yihua; Huang, Wei

    2011-05-01

    In this paper, both the equivalent circuit models of the radial mode and the coupled thickness vibration mode of the radial mode piezoelectric transformer are deduced, and then with the Y-parameter matrix method and the dual-port network theory, an improved equivalent circuit model for the multilayer radial mode piezoelectric transformer is established. A radial mode transformer sample is tested to verify the equivalent circuit model. The experimental results show that the model proposed in this paper is more precise than the typical model.

  20. Multiphase model for transformation induced plasticity. Extended Leblond's model

    NASA Astrophysics Data System (ADS)

    Weisz-Patrault, Daniel

    2017-09-01

    Transformation induced plasticity (TRIP) classically refers to plastic strains observed during phase transitions that occur under mechanical loads (which can be lower than the yield stress). A theoretical approach based on homogenization is proposed to deal with multiphase changes and to extend the validity of the well known and widely used model proposed by Leblond (1989). The approach is similar, but several product phases are considered instead of one and several assumptions have been relaxed. Thus, besides the generalization to several phases, one can mention three main improvements in the calculation of the local equivalent plastic strain: the deviatoric part of the phase transformation is taken into account, both parent and product phases are elastic-plastic with linear isotropic hardening, and the applied stress is considered. Results show that the classical issues of singularities arising in Leblond's model (corrected by ad hoc numerical functions or thresholding) are solved in this contribution except when the applied equivalent stress reaches the yield stress. Indeed, in this situation the parent phase is entirely plastic as soon as the phase transformation begins, and the same singularity as in Leblond's model arises. A physical explanation of the cutoff function is introduced in order to regularize the singularity. Furthermore, experiments extracted from the literature dealing with multiphase transitions and multiaxial loads are compared with the original Leblond's model and the proposed extended version. For the extended version, very good agreement is observed without any fitting procedures (i.e., material parameters are extracted from other dedicated experiments), and for the original version results are more qualitative.
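
    For context, the TRIP flow rule of the original model being extended is commonly written as below (one standard form of Leblond's small-applied-stress result; notation here is generic): z is the product-phase volume fraction, s the stress deviator, Δε^th the transformation volumetric strain mismatch between phases 1 and 2, and σ_1^y the parent-phase yield stress. The ln(z) factor diverges as z → 0, which is the singularity, classically handled by thresholding, that the abstract refers to.

```latex
\dot{\boldsymbol{\varepsilon}}^{tp}
  = -\,\frac{3\,\Delta\varepsilon^{th}_{1\to 2}}{\sigma^{y}_{1}}\,
    \ln(z)\,\dot{z}\,\mathbf{s},
  \qquad \dot{z} > 0
```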

  1. Pi: A Parallel Architecture Interface for Multi-Model Execution

    DTIC Science & Technology

    1990-07-01

    Directory Schemes for Cache Coherence. In The 15th Annual Interna- tional Symposium on Computer Architecture. IEEE Computer Society and ACM, June 1988. [3...Annual International Symposium on Computer Architecture. IEEE Computer Society and ACM, June 1986. [5] Arvind and Rishiyur S. Nikhil. A Dataflow...Overview, 1987. [9] Roberto Bisiani and Alessandro Forin. Multilanguage Parallel Programming of Heterogeneous Machines. IEEE Transactions on Computers

  2. On the Role of Connectors in Modeling and Implementing Software Architectures

    DTIC Science & Technology

    1998-02-15

    On the Role of Connectors in Modeling and Implementing Software Architectures Peyman Oreizy, David S. Rosenblum, and Richard N. Taylor Department of...Std Z39-18 On the Role of Connectors in Modeling and Implementing Software Architectures Peyman Oreizy David S. Rosenblum Richard N. Taylor

  3. Interaction of epithelium with mesenchyme affects global features of lung architecture: a computer model of development.

    PubMed

    Tebockhorst, Seth; Lee, Dongyoub; Wexler, Anthony S; Oldham, Michael J

    2007-01-01

    Lung airway morphogenesis is simulated in a simplified diffusing environment that represents the mesenchyme, to explore the role of morphogens in the development of airway architecture. Simple rules govern local branching morphogenesis. Morphogen gradients are modeled by four pairs of sources and their diffusion through the mesenchyme. Sensitivity to lobar architecture and to mesenchymal morphogens is explored. Even when the model accurately represented observed patterns of local development, it could not produce realistic global patterns of lung architecture unless interaction with its environment was taken into account, implying that reciprocal interaction between airway growth and morphogens in the mesenchyme plays a critical role in producing realistic global features of lung architecture.
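
    A caricature of the morphogen-gradient ingredient of such a model: explicit diffusion on a grid with clamped point sources and linear decay. All parameter values here are illustrative, not the paper's.

```python
import numpy as np

def diffuse(c, sources, D=0.1, decay=0.05, n_steps=50):
    """Explicit diffusion-plus-decay relaxation of a morphogen field c
    on a periodic grid, with point sources clamped at each step."""
    c = c.copy()
    for _ in range(n_steps):
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0)
               + np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c)
        c += D * lap - decay * c
        for (i, j), strength in sources.items():
            c[i, j] = strength
    return c

# One clamped source in the middle of a mesenchyme-like grid yields a
# concentration that falls off with distance from the source.
field = diffuse(np.zeros((20, 20)), {(10, 10): 1.0})
```

    Branching rules in such models then read the local concentration (or its gradient) to bias growth direction, which is how the mesenchymal environment feeds back on airway architecture.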

  4. Modeling Human Spatial Memory Within a Symbolic Architecture of Cognition

    NASA Astrophysics Data System (ADS)

    Winkelholz, Carsten; Schlick, Christopher M.

    This paper presents a study on the integration of spatial cognition into a symbolic theory. The concepts of encoding object-locations in local allocentric reference systems and noisy representations of locations have been integrated into the ACT-R architecture of cognition. The intrinsic reference axis of each local reference system results automatically from the sequence of attended locations. The first part of the paper describes experiments we performed to test hypotheses on the usage of local allocentric reference systems in the context of object-location memory in graphical layout structures. The second part describes in more detail the theory and its integration into ACT-R. Based on the theory, a model has been developed for the task in the experiments. The parameters for the noise in the representation of locations and the parameters for the recall of symbolic memory chunks were set to values of the magnitude quoted in the literature. The model satisfactorily reproduces the data from user studies with 30 subjects.

  5. Model Problems in Technologies for Interoperability: Model-Driven Architecture

    DTIC Science & Technology

    2005-05-01

    used as the servlet container [Apache 05]. The J2EE application server used is JBoss Application Server [JBoss 05]. Data is stored in an Oracle database [Oracle 05]. We present the details of implementing the technical solution in Section 3. 2.4 Evaluate Model Solution Against Criteria In this step...(EJBs) JDBC Legend Oracle Database component Figure 2: High-Level Component and Connector View for the HR System CMU/SEI-2005-TN-022 Windows XP HR

  6. Organoids as Models for Neoplastic Transformation | Office of Cancer Genomics

    Cancer.gov

    Cancer models strive to recapitulate the incredible diversity inherent in human tumors. A key challenge in accurate tumor modeling lies in capturing the panoply of homo- and heterotypic cellular interactions within the context of a three-dimensional tissue microenvironment. To address this challenge, researchers have developed organotypic cancer models (organoids) that combine the 3D architecture of in vivo tissues with the experimental facility of 2D cell lines.

  7. Diverse muscle architecture adaptations in a rabbit tibial lengthening model.

    PubMed

    Takahashi, Mitsuhiko; Yasui, Natsuo; Enishi, Tetsuya; Sato, Nori; Mizobuchi, Takatoshi; Homma, Yukako; Sairyo, Koichi

    2014-01-01

    During limb lengthening, muscles are thought to increase their number of sarcomeres. However, this adaptation may differ among muscles with diverse architecture. This study aimed to clarify the differences in muscle adaptation in a rabbit model of tibial lengthening. Twelve rabbits underwent tibial lengthening (0.7 mm/day for 4 weeks), with the contralateral limb serving as a control, and were euthanized after either the lengthening or the consolidation period. Six muscles around the tibia were investigated in terms of muscle belly length, muscle weight, sarcomere length and serial sarcomere number. Muscle belly length increased in all the lengthened muscles. No increases in muscle mass were noted. Sarcomere length increased in the ankle plantar-flexors and remained longer than the optimal sarcomere length after the consolidation period. Nevertheless, significant increases in sarcomere number were observed in two ankle plantar-flexors. This study demonstrated that muscle belly length largely adapted to the lengthening. The increase in sarcomere number did not match the increase in muscle belly length. We estimated that elongation of the intramuscular aponeuroses is another mechanism of the adaptation in addition to the increase in sarcomere number.

  8. Typical Phases of Transformative Learning: A Practice-Based Model

    ERIC Educational Resources Information Center

    Nohl, Arnd-Michael

    2015-01-01

    Empirical models of transformative learning offer important insights into the core characteristics of this concept. Whereas previous analyses were limited to specific social groups or topical terrains, this article empirically typifies the phases of transformative learning on the basis of a comparative analysis of various social groups and topical…

  10. A Multiperspectival Conceptual Model of Transformative Meaning Making

    ERIC Educational Resources Information Center

    Freed, Maxine

    2009-01-01

    Meaning making is central to transformative learning, but little work has explored how meaning is constructed in the process. Moreover, no meaning-making theory adequately captures its characteristics and operations during radical transformation. The purpose of this dissertation was to formulate and specify a multiperspectival conceptual model of…

  12. Strategic Mobility 21. Service Oriented Architecture (SOA) Reference Model - Global Transportation Management System Architecture

    DTIC Science & Technology

    2009-10-07

    multiple commercial and military customers, it will be provided in the form of Software-as-a-Service (SaaS), whereby a vendor can provide the user with the...Service Oriented Architecture (SOA), Software-as-a-Service (SaaS), Dr. John Hwang

  13. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 10) Background: Titan Model-based Executive; 11) Model-based Execution Architecture; 12) Compatibility Analysis of MDS and Titan Architectures; 13) Integrating Model-based Programming and Execution into the Architecture; 14) State Analysis and Modeling; 15) IMU Subsystem State Effects Diagram; 16) Titan Subsystem Model: IMU Health; 17) Integrating Model-based Programming and Execution into the Software IMU; 18) Testing Program; 19) Computationally Tractable State Estimation & Fault Diagnosis; 20) Diagnostic Algorithm Performance; 21) Integration and Test Issues; 22) Demonstrated Benefits; and 23) Next Steps

  15. Connection and coordination: the interplay between architecture and dynamics in evolved model pattern generators.

    PubMed

    Psujek, Sean; Ames, Jeffrey; Beer, Randall D

    2006-03-01

    We undertake a systematic study of the role of neural architecture in shaping the dynamics of evolved model pattern generators for a walking task. First, we consider the minimum number of connections necessary to achieve high performance on this task. Next, we identify architectural motifs associated with high fitness. We then examine how high-fitness architectures differ in their ability to evolve. Finally, we demonstrate the existence of distinct parameter subgroups in some architectures and show that these subgroups are characterized by differences in neuron excitabilities and connection signs.

  16. Modeling the Contribution of Enterprise Architecture Practice to the Achievement of Business Goals

    NASA Astrophysics Data System (ADS)

    van Steenbergen, Marlies; Brinkkemper, Sjaak

    Enterprise architecture is a young, but well-accepted discipline in information management. Establishing the effectiveness of an enterprise architecture practice, however, appears difficult. In this chapter we introduce an architecture effectiveness model (AEM) to express how enterprise architecture practices are meant to contribute to the business goals of an organization. We developed an AEM for three different organizations. These three instances show that the concept of the AEM is applicable in a variety of organizations. It also shows that the objectives of enterprise architecture are not to be restricted to financial goals. The AEM can be used by organizations to set coherent priorities for their architectural practices and to define KPIs for measuring the effectiveness of these practices.

  17. Plum (Prunus domestica) Trees Transformed with Poplar FT1 Result in Altered Architecture, Dormancy Requirement, and Continuous Flowering

    PubMed Central

    Callahan, Ann; Scorza, Ralph

    2012-01-01

    The Flowering Locus T1 (FT1) gene from Populus trichocarpa under the control of the 35S promoter was transformed into European plum (Prunus domestica L). Transgenic plants expressing higher levels of FT flowered and produced fruits in the greenhouse within 1 to 10 months. FT plums did not enter dormancy after cold or short day treatments, yet field-planted FT plums remained winter hardy down to at least −10°C. The plants also displayed pleiotropic phenotypes atypical for plum including shrub-type growth habit and panicle flower architecture. The flowering and fruiting phenotype was found to be continuous in the greenhouse but limited to spring and fall in the field. The pattern of flowering in the field correlated with lower daily temperatures. This apparent temperature effect was subsequently confirmed in growth chamber studies. The pleiotropic phenotypes associated with FT1 expression in plum suggest a fundamental role of this gene in plant growth and development. This study demonstrates the potential for a single transgene event to markedly affect the vegetative and reproductive growth and development of an economically important temperate woody perennial crop. We suggest that FT1 may be a useful tool to modify temperate plants to changing climates and/or to adapt these crops to new growing areas. PMID:22859952

  18. Plum (Prunus domestica) trees transformed with poplar FT1 result in altered architecture, dormancy requirement, and continuous flowering.

    PubMed

    Srinivasan, Chinnathambi; Dardick, Chris; Callahan, Ann; Scorza, Ralph

    2012-01-01

    The Flowering Locus T1 (FT1) gene from Populus trichocarpa under the control of the 35S promoter was transformed into European plum (Prunus domestica L). Transgenic plants expressing higher levels of FT flowered and produced fruits in the greenhouse within 1 to 10 months. FT plums did not enter dormancy after cold or short day treatments, yet field-planted FT plums remained winter hardy down to at least -10°C. The plants also displayed pleiotropic phenotypes atypical for plum including shrub-type growth habit and panicle flower architecture. The flowering and fruiting phenotype was found to be continuous in the greenhouse but limited to spring and fall in the field. The pattern of flowering in the field correlated with lower daily temperatures. This apparent temperature effect was subsequently confirmed in growth chamber studies. The pleiotropic phenotypes associated with FT1 expression in plum suggest a fundamental role of this gene in plant growth and development. This study demonstrates the potential for a single transgene event to markedly affect the vegetative and reproductive growth and development of an economically important temperate woody perennial crop. We suggest that FT1 may be a useful tool to modify temperate plants to changing climates and/or to adapt these crops to new growing areas.

  19. An architectural model of conscious and unconscious brain functions: Global Workspace Theory and IDA.

    PubMed

    Baars, Bernard J; Franklin, Stan

    2007-11-01

    While neural net models have been developed to a high degree of sophistication, they have some drawbacks at a more integrative, "architectural" level of analysis. We describe a "hybrid" cognitive architecture that is implementable in neuronal nets, and which has uniform brainlike features, including activation-passing and highly distributed "codelets," implementable as small-scale neural nets. Empirically, this cognitive architecture accounts qualitatively for the data described by Baars' Global Workspace Theory (GWT) and Franklin's LIDA architecture, including state-of-the-art models of conscious contents in action-planning, Baddeley-style Working Memory, and working models of episodic and semantic long-term memory. These terms are defined both conceptually and empirically for the current theoretical domain. The resulting architecture meets four desirable goals for a unified theory of cognition: practical workability, autonomous agency, a plausible role for conscious cognition, and translatability into plausible neural terms. It also generates testable predictions, both empirical and computational.
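    Global Workspace Theory's competition-then-broadcast cycle can be caricatured in a few lines. The class names and the single-winner rule below are our simplifications, not the IDA/LIDA implementation: codelets bid with their activation, the winner takes the workspace, and its content is broadcast to every codelet.

    ```python
    class Codelet:
        """A tiny special-purpose process bidding for workspace access."""
        def __init__(self, name, activation):
            self.name, self.activation = name, activation
            self.heard = []                  # broadcasts received so far

        def receive(self, content):
            self.heard.append(content)

    class Workspace:
        """One competition-then-broadcast cycle of a toy global workspace."""
        def __init__(self, codelets):
            self.codelets = codelets

        def cycle(self):
            # Competition: the most active codelet wins the workspace.
            winner = max(self.codelets, key=lambda c: c.activation)
            # Broadcast: the winning content reaches every codelet.
            for c in self.codelets:
                c.receive(winner.name)
            return winner.name
    ```

    For example, `Workspace([Codelet("percept", 0.9), Codelet("memory", 0.4)]).cycle()` returns "percept", and both codelets receive that broadcast.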

  20. Robust transformation with applications to structural equation modelling.

    PubMed

    Yuan, K H; Chan, W; Bentler, P M

    2000-05-01

    Data sets in social and behavioural sciences are seldom normal. Influential cases or outliers can lead to inappropriate solutions and problematic conclusions in structural equation modelling. By giving a proper weight to each case, the influence of outliers on a robust procedure can be minimized. We propose using a robust procedure as a transformation technique, generating a new data matrix that can be analysed by a variety of multivariate methods. Mardia's multivariate skewness and kurtosis statistics are used to measure the effect of the transformation in achieving approximate normality. Since the transformation makes the data approximately normal, applying a classical normal theory based procedure to the transformed data gives more efficient parameter estimates. Three procedures for parameter evaluation and model testing are discussed. Six examples illustrate the various aspects with the robust transformation.
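    Mardia's multivariate skewness and kurtosis, which the abstract uses to gauge how close the transformed data are to normality, have closed-form sample estimates. A minimal NumPy version (our sketch, not the authors' code) is:

    ```python
    import numpy as np

    def mardia(X):
        """Mardia's multivariate skewness (b1p) and kurtosis (b2p)
        for an n-by-p data matrix X."""
        X = np.asarray(X, dtype=float)
        n, _ = X.shape
        Xc = X - X.mean(axis=0)               # center each variable
        S = Xc.T @ Xc / n                     # ML covariance estimate
        D = Xc @ np.linalg.inv(S) @ Xc.T      # Mahalanobis cross-products
        b1p = (D ** 3).sum() / n ** 2         # mean of cubed cross-products
        b2p = (np.diag(D) ** 2).sum() / n     # mean squared distances
        return b1p, b2p
    ```

    For a p-variate normal sample, b1p tends to 0 and b2p to p(p+2), so large departures after a robust transformation flag residual non-normality.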

  1. Macro-economic factors influencing the architectural business model shift in the pharmaceutical industry.

    PubMed

    Dierks, Raphaela Marie Louisa; Bruyère, Olivier; Reginster, Jean-Yves; Richy, Florent-Frederic

    2016-10-01

    Technological innovations, new regulations, increasing costs of drug production and new demands are only a few key drivers of a projected alteration in the pharmaceutical industry. The purpose of this review is to understand the macro-economic factors responsible for the business model revolution to possess a competitive advantage over market players. Areas covered: Existing literature on macro-economic factors changing the pharmaceutical landscape has been reviewed to present a clear image of the current market environment. Expert commentary: Literature shows that pharmaceutical companies are facing an architectural alteration; however, the evidence on the rationale driving the transformation is still outstanding. Mergers & Acquisitions (M&A) deals and collaborations are headlining the papers. Q1 2016 did show a major slowdown in M&A deals by volume since 2013 (with the deal cancellations of Pfizer and Allergan, or the downfall of Valeant), but pharmaceutical analysts remain confident that this shortfall was a consequence of the equity market volatility. It seems likely that the shift to an M&A model will become apparent during the remainder of 2016, with deal announcements of Abbott Laboratories, AbbVie and Sanofi worth USD 45 billion showing the appetite of big pharma companies to shift from the fully vertically integrated business model to more horizontal business models.

  2. Fourier transform methods in local gravity modeling

    NASA Technical Reports Server (NTRS)

    Harrison, J. C.; Dickinson, M.

    1989-01-01

    New algorithms were derived for computing terrain corrections, all components of the attraction of the topography at the topographic surface, and the gradients of these attractions. These algorithms utilize fast Fourier transforms, but, in contrast to methods currently in use, all divergences of the integrals are removed during the analysis. Sequential methods employing a smooth intermediate reference surface were developed to avoid the very large transforms necessary when making computations at high resolution over a wide area. A new method for the numerical solution of Molodensky's problem was developed to mitigate the convergence difficulties that occur at short wavelengths with methods based on a Taylor series expansion. A trial field on a level surface is continued analytically to the topographic surface and compared with that predicted from gravity observations. The difference is used to compute a correction to the trial field, and the process is iterated. Special techniques are employed to speed convergence and prevent oscillations. Three different spectral methods for fitting a point-mass set to a gravity field given on a regular grid at constant elevation are described. Two of the methods differ in the way that the spectrum of the point-mass set, which extends to infinite wave number, is matched to that of the gravity field, which is band-limited. The third method is essentially a space-domain technique in which Fourier methods are used to solve a set of simultaneous equations.
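    The algorithms in this record rest on FFT convolution over a gridded field. The core trick, shown here generically and without the paper's divergence-removal details, is to zero-pad both arrays so the FFT's inherently circular convolution becomes a linear one:

    ```python
    import numpy as np

    def fft_convolve2d(grid, kernel):
        """Linear 2-D convolution via FFT. Zero-padding both arrays to
        the full output size prevents wraparound from contaminating
        the result."""
        s0 = grid.shape[0] + kernel.shape[0] - 1
        s1 = grid.shape[1] + kernel.shape[1] - 1
        G = np.fft.rfft2(grid, (s0, s1))       # pad, then transform
        K = np.fft.rfft2(kernel, (s0, s1))
        return np.fft.irfft2(G * K, (s0, s1))  # pointwise product back to space
    ```

    In a terrain-correction setting the grid would be topographic heights and the kernel a distance-weighted attraction template; both are left abstract here.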

  3. Dynamic model of a three-phase power transformer

    SciTech Connect

    Dolinar, D.; Pihler, J.; Grcar, B. (Faculty of Technical Sciences)

    1993-10-01

    An adequate mathematical model of a three-phase power transformer is one of the important elements in programs for the computer analysis of power system transients. Featured in this paper is the simulation model of a three-phase, three-limb core-type power transformer. Non-linear effects of saturation, hysteresis and eddy currents are considered. Two ways of creating major and minor hysteresis loops are presented. The transformer model, described by a system of time-dependent differential equations, is solved by an efficient numerical algorithm. The behavior of the transformer model during switching-in and fault transients, as well as other types of transients, has been tested. The computed transient waveforms are compared with the measured ones, and there exists very close agreement between them.

  4. Phase transformations at interfaces: Observations from atomistic modeling

    SciTech Connect

    Frolov, T.; Asta, M.; Mishin, Y.

    2016-10-01

    Here, we review the recent progress in theoretical understanding and atomistic computer simulations of phase transformations in materials interfaces, focusing on grain boundaries (GBs) in metallic systems. Recently developed simulation approaches enable the search and structural characterization of GB phases in single-component metals and binary alloys, calculation of thermodynamic properties of individual GB phases, and modeling of the effect of the GB phase transformations on GB kinetics. Atomistic simulations demonstrate that the GB transformations can be induced by varying the temperature, loading the GB with point defects, or varying the amount of solute segregation. The atomic-level understanding obtained from such simulations can provide input for further development of thermodynamics theories and continuous models of interface phase transformations while simultaneously serving as a testing ground for validation of theories and models. They can also help interpret and guide experimental work in this field.

  6. Rice Morphogenesis and Plant Architecture: Measurement, Specification and the Reconstruction of Structural Development by 3D Architectural Modelling

    PubMed Central

    WATANABE, TOMONARI; HANAN, JIM S.; ROOM, PETER M.; HASEGAWA, TOSHIHIRO; NAKAGAWA, HIROSHI; TAKAHASHI, WATARU

    2005-01-01

    • Background and Aims The morphogenesis and architecture of a rice plant, Oryza sativa, are critical factors in the yield equation, but they are not well studied because of the lack of appropriate tools for 3D measurement. The architecture of rice plants is characterized by a large number of tillers and leaves. The aims of this study were to specify rice plant architecture and to find appropriate functions to represent the 3D growth across all growth stages. • Methods A japonica type rice, ‘Namaga’, was grown in pots under outdoor conditions. A 3D digitizer was used to measure the rice plant structure at intervals from the young seedling stage to maturity. The L-system formalism was applied to create ‘3D virtual rice’ plants, incorporating models of phenological development and leaf emergence period as a function of temperature and photoperiod, which were used to determine the timing of tiller emergence. • Key Results The relationships between the nodal positions and leaf lengths, leaf angles and tiller angles were analysed and used to determine growth functions for the models. The ‘3D virtual rice’ reproduces the structural development of isolated plants and provides a good estimation of the tillering process, and of the accumulation of leaves. • Conclusions The results indicated that the ‘3D virtual rice’ has the potential to demonstrate the differences in structure and development between cultivars and under different environmental conditions. Future work, necessary to reflect both cultivar and environmental effects on the model performance, and to link with physiological models, is proposed in the discussion. PMID:15820987
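    The '3D virtual rice' above is written in the L-system formalism. Stripped of geometry, an L-system is just parallel string rewriting; the toy rule set below (symbols chosen by us, not taken from the paper) mimics an apex A laying down a leaf L and a tiller bud [A] at each step:

    ```python
    def lsystem(axiom, rules, steps):
        """Apply the production rules to every symbol in parallel,
        `steps` times; symbols without a rule are copied unchanged."""
        s = axiom
        for _ in range(steps):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    # Toy tillering rule: an apex produces a leaf, a bracketed tiller
    # bud, and continues growing.
    print(lsystem("A", {"A": "L[A]A"}, 2))  # prints L[L[A]A]L[A]A
    ```

    A graphical interpreter would then read the string as turtle commands, with brackets pushing and popping branch state.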

  7. Transforming teacher knowledge: Modeling instruction in physics

    NASA Astrophysics Data System (ADS)

    Cabot, Lloyd H.

    I show that the Modeling physics curriculum is readily accommodated by most teachers in favor of traditional didactic pedagogies. This is so, at least in part, because Modeling focuses on a small set of connected models embedded in a self-consistent theoretical framework and thus is closely congruent with human cognition in this context which is to generate mental models of physical phenomena as both predictive and explanatory devices. Whether a teacher fully implements the Modeling pedagogy depends on the depth of the teacher's commitment to inquiry-based instruction, specifically Modeling instruction, as a means of promoting student understanding of Newtonian mechanics. Moreover, this commitment trumps all other characteristics: teacher educational background, content coverage issues, student achievement data, district or state learning standards, and district or state student assessments. Indeed, distinctive differences exist in how Modeling teachers deliver their curricula and some teachers are measurably more effective than others in their delivery, but they all share an unshakable belief in the efficacy of inquiry-based, constructivist-oriented instruction. The Modeling Workshops' pedagogy, duration, and social interactions impacts teachers' self-identification as members of a professional community. Finally, I discuss the consequences my research may have for the Modeling Instruction program designers and for designers of professional development programs generally.

  8. An IIOP Architecture for Web-Enabled Physiological Models

    DTIC Science & Technology

    2007-11-02

    available. This need can be met by a web-based architecture that uses the equivalent of interactive browsers such as Netscape and Microsoft...With the backing of major players like Sun Microsystems, Netscape, and Oracle, the combined use of Java and CORBA will become commonplace in

  9. Evaluating the Effectiveness of Reference Models in Federating Enterprise Architectures

    ERIC Educational Resources Information Center

    Wilson, Jeffery A.

    2012-01-01

    Agencies need to collaborate with each other to perform missions, improve mission performance, and find efficiencies. The ability of individual government agencies to collaborate with each other for mission and business success and efficiency is complicated by the different techniques used to describe their Enterprise Architectures (EAs).…

  11. Analysis of Cognitive Architecture in the Cultural Geography Model

    DTIC Science & Technology

    2012-09-01

    FIGURES Figure 1. Theory of Planned Behavior (From Ajzen, 1991). ................................ 9 Figure 2. Cognitive Architecture Components (From...tradeoff. 57 LIST OF REFERENCES Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl & J. Beckmann (Eds...Action-control: From cognition to behavior (pp. 11–39). Heidelberg, Germany: Springer. Ajzen, I. (1991). The theory of planned behavior

  12. New Models of Mechanisms for the Motion Transformation

    NASA Astrophysics Data System (ADS)

    Petrović, Tomislav; Ivanov, Ivan

    In this paper two new mechanisms for motion transformation are presented: a screw mechanism for the transformation of one-way circular into two-way linear motion with impulse control, and a worm-planetary gear train with an extremely high gear ratio. Both mechanisms represent new construction solutions for which patent protection has been achieved. These mechanisms are based on the application of a differential gearbox with two degrees of freedom. They are characterized by a series of kinematic impacts at motion transformation and by the possibility of temporary or permanent changes in the structure by subtracting the redundant degree of freedom. Thus the desired characteristic of the motion transformation is achieved. For each mechanism separately, the principles of motion and transformation are described and the basic equations that describe the interdependence of the geometric, kinematic and kinetic parameters of the system dynamics are given. The basic principles of controlling the new mechanisms for motion transformation are outlined, and the basic constructional features which may find practical application are given. Physical models of the new systems of motion transformation have been designed and their operation has been presented. The experimental research performed confirmed the theoretical results and the very favorable kinematic characteristics of the mechanisms.
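    Planetary trains like the one described above obey the Willis equation. As a sketch of the standard textbook case only (a simple planetary stage with the ring gear held fixed, not the authors' patented worm-planetary design):

    ```python
    def planetary_ratio(z_sun, z_ring):
        """Sun-to-carrier speed ratio of a simple planetary stage with
        the ring gear fixed, from the Willis equation
            (w_sun - w_c) / (w_ring - w_c) = -z_ring / z_sun
        with w_ring = 0."""
        return 1 + z_ring / z_sun

    # A 20-tooth sun inside an 80-tooth ring gears the carrier down 5:1.
    print(planetary_ratio(20, 80))  # prints 5.0
    ```

    Cascading stages multiplies their ratios, which is one conventional route to the very high overall ratios that differential and worm-planetary arrangements target.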

  13. TRANSFORMATION

    SciTech Connect

    LACKS,S.A.

    2003-10-09

    Transformation, which alters the genetic makeup of an individual, is a concept that intrigues the human imagination. In Streptococcus pneumoniae such transformation was first demonstrated. Perhaps our fascination with genetics derived from our ancestors observing their own progeny, with its retention and assortment of parental traits, but such interest must have been accelerated after the dawn of agriculture. It was in pea plants that Gregor Mendel in the late 1800s examined inherited traits and found them to be determined by physical elements, or genes, passed from parents to progeny. In our day, the material basis of these genetic determinants was revealed to be DNA by the lowly bacteria, in particular, the pneumococcus. For this species, transformation by free DNA is a sexual process that enables cells to sport new combinations of genes and traits. Genetic transformation of the type found in S. pneumoniae occurs naturally in many species of bacteria (70), but, initially only a few other transformable species were found, namely, Haemophilus influenzae, Neisseria meningitidis, Neisseria gonorrhoeae, and Bacillus subtilis (96). Natural transformation, which requires a set of genes evolved for the purpose, contrasts with artificial transformation, which is accomplished by shocking cells either electrically, as in electroporation, or by ionic and temperature shifts. Although such artificial treatments can introduce very small amounts of DNA into virtually any type of cell, the amounts introduced by natural transformation are a million-fold greater, and S. pneumoniae can take up as much as 10% of its cellular DNA content (40).

  14. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    NASA Astrophysics Data System (ADS)

    Iacobucci, Joseph V.

    The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis, which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission dependent and mission independent metrics are considered. Mission dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used to reduce computer memory requirements. Lastly, a domain specific language is created to provide a reduction in the computational time of executing the system of systems models. A domain specific language is a small, usually declarative language that offers expressive power focused on a particular problem domain.
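The combinatorial enumeration of architecture alternatives that RAAM automates can be illustrated with a toy sketch; the tasks, systems, and cost numbers below are hypothetical, not from the thesis:

```python
import itertools

# Toy enumeration of system-of-systems architecture alternatives: each task
# can be assigned to any capable system, and every full assignment is one
# alternative to evaluate.  All names and costs here are made up.
capable = {
    "sense":  ["uav", "satellite"],
    "decide": ["ground_station"],
    "act":    ["uav", "ship"],
}
cost = {"uav": 2.0, "satellite": 9.0, "ground_station": 1.0, "ship": 5.0}

alternatives = [dict(zip(capable, combo))
                for combo in itertools.product(*capable.values())]

# Mission-independent metric: acquisition cost of the distinct systems used.
def portfolio_cost(assignment):
    return sum(cost[s] for s in set(assignment.values()))

best = min(alternatives, key=portfolio_cost)
```

Reusing a system across tasks (here the UAV for both "sense" and "act") lowers the portfolio cost, which is why the metric is computed over the set of distinct systems.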

  15. Developing a Conceptual Architecture for a Generalized Agent-based Modeling Environment (GAME)

    DTIC Science & Technology

    2008-03-01

    A conceptual architecture for a generalized agent-based modeling environment (GAME) based upon design principles from OR/MS systems was created ... handle the event, and subsequently form the relevant plans. One of these plans will be selected, and either pushed to the top of the current ...

  16. Transformational mentorship models for nurse educators.

    PubMed

    Jacobson, Sheri L; Sherrod, Dennis R

    2012-07-01

    A consistent supply of competent and confident faculty is essential to meeting the growing demand for nurses. One way to ensure continuity among nurse educators is through faculty mentorship. There is very little literature about nurse educator mentorship models and no research was found that tested mentoring frameworks or strategies with nurse educators. The matriculation and retention of nursing faculty requires diligence in the areas of practice, teaching, and scholarship. The authors of this article discuss current nursing mentorship models and propose a new one for consideration.

  17. Transforming Community Access to Space Science Models

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Hesse, Michael; Kuznetsova, Maria; Maddox, Marlo; Rastaetter, Lutz; Berrios, David; Pulkkinen, Antti

    2012-01-01

    Researching and forecasting the ever changing space environment (often referred to as space weather) and its influence on humans and their activities are model-intensive disciplines. This is true because the physical processes involved are complex, but, in contrast to terrestrial weather, the supporting observations are typically sparse. Models play a vital role in establishing a physically meaningful context for interpreting limited observations, testing theory, and producing both nowcasts and forecasts. For example, with accurate forecasting of hazardous space weather conditions, spacecraft operators can place sensitive systems in safe modes, and power utilities can protect critical network components from damage caused by large currents induced in transmission lines by geomagnetic storms.

  18. A model for heterogeneous materials including phase transformations

    NASA Astrophysics Data System (ADS)

    Addessio, F. L.; Clements, B. E.; Williams, T. O.

    2005-04-01

    A model is developed for particulate composites, which includes phase transformations in one or all of the constituents. The model is an extension of the method of cells formalism. Representative simulations for a single-phase, brittle particulate (SiC) embedded in a ductile material (Ti), which undergoes a solid-solid phase transformation, are provided. Also, simulations for a tungsten heavy alloy (WHA) are included. In the WHA analyses a particulate composite, composed of tungsten particles embedded in a tungsten-iron-nickel alloy matrix, is modeled. A solid-liquid phase transformation of the matrix material is included in the WHA numerical calculations. The example problems also demonstrate two approaches for generating free energies for the material constituents. Simulations for volumetric compression, uniaxial strain, biaxial strain, and pure shear are used to demonstrate the versatility of the model.

  19. The Transformation of the Getzels Model.

    ERIC Educational Resources Information Center

    McPherson, R. Bruce

    The author describes the model of social behavior in a social system first framed by Jacob Getzels, with the assistance of Egon Guba, in the middle 1950s. Significant changes in the conceptualization of organizational functioning have occurred in the years since then, though the methodological processes for studying that functioning have remained…

  20. A Transformational Bilingual Model for Teacher Education.

    ERIC Educational Resources Information Center

    Moheno, Phil; Pacheco, Richard

    At San Diego State University, the training program for bilingual education teachers was developed to systematically accommodate changing needs in education, particularly the needs to educate students with academic proficiency in both Spanish and English and to have a multicultural perspective. The emerging teacher education model empowers…

  1. TRANSFORMER

    DOEpatents

    Baker, W.R.

    1959-08-25

    Transformers of a type adapted for use with extreme high power vacuum tubes where current requirements may be of the order of 2,000 to 200,000 amperes are described. The transformer casing has the form of a re-entrant section being extended through an opening in one end of the cylinder to form a coaxial terminal arrangement. A toroidal multi-turn primary winding is disposed within the casing in coaxial relationship therein. In a second embodiment, means are provided for forming the casing as a multi-turn secondary. The transformer is characterized by minimized resistance heating, minimized external magnetic flux, and an economical construction.

  2. The System of Systems Architecture Feasibility Assessment Model

    DTIC Science & Technology

    2016-06-01

    ... multiple perspectives of an SoS—the physical, process, and organization. In considering multiple perspectives of an SoS, one better defines the SoS and is more likely to correctly represent it ...

  3. Building Information Modeling (BIM): A Road Map for Implementation to Support MILCON Transformation and Civil Works Projects within the U.S. Army Corps of Engineers

    DTIC Science & Technology

    2006-10-01

    ERDC TR-06-10: Building Information Modeling (BIM), A Road Map for Implementation to Support MILCON Transformation and Civil Works Projects within the U.S. Army Corps of Engineers. Beth A. Brucker, Michael P. Case, E. William East, and Susan D. ... civil works and military construction business processes, including the process for working with the USACE Architectural Engineering Construction (AEC) ...

  4. Transformative leadership: an ethical stewardship model for healthcare.

    PubMed

    Caldwell, Cam; Voelker, Carolyn; Dixon, Rolf D; LeJeune, Adena

    2008-01-01

    The need for effective leadership is a compelling priority for those who would choose to govern in public, private, and nonprofit organizations, and applies as much to the healthcare profession as it does to other sectors of the economy (Moody, Horton-Deutsch, & Pesut, 2007). Transformative Leadership, an approach to leadership and governance that incorporates the best characteristics of six other highly respected leadership models, is an integrative theory of ethical stewardship that can help healthcare professionals to more effectively achieve organizational efficiencies, build stakeholder commitment and trust, and create valuable synergies to transform and enrich today's healthcare systems (cf. Caldwell, LeJeune, & Dixon, 2007). The purpose of this article is to introduce the concept of Transformative Leadership and to explain how this model applies within a healthcare context. We define Transformative Leadership and identify its relationship to Transformational, Charismatic, Level 5, Principle-Centered, Servant, and Covenantal Leadership--providing examples of each of these elements of Transformative Leadership within a healthcare leadership context. We conclude by identifying contributions of this article to the healthcare leadership literature.

  5. Transform Coding for Point Clouds Using a Gaussian Process Model.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2017-04-28

    We propose using stationary Gaussian Processes (GPs) to model the statistics of the signal on points in a point cloud, which can be considered samples of a GP at the positions of the points. Further, we propose using Gaussian Process Transforms (GPTs), which are Karhunen-Loève transforms of the GP, as the basis of transform coding of the signal. Focusing on colored 3D point clouds, we propose a transform coder that breaks the point cloud into blocks, transforms the blocks using GPTs, and entropy codes the quantized coefficients. The GPT for each block is derived from both the covariance function of the GP and the locations of the points in the block, which are separately encoded. The covariance function of the GP is parameterized, and its parameters are sent as side information. The quantized coefficients are sorted by eigenvalues of the GPTs, binned, and encoded using an arithmetic coder with bin-dependent Laplacian models whose parameters are also sent as side information. Results indicate that transform coding of 3D point cloud colors using the proposed GPT and entropy coding achieves superior compression performance on most of our data sets.
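As an illustration of the transform itself, the following sketch derives a GPT for one block by eigendecomposing a GP covariance evaluated at the point locations; the squared-exponential kernel and its parameters are assumptions for the example, not the paper's fitted values:

```python
import numpy as np

# Sketch of a Gaussian Process Transform (GPT) for one block of a point
# cloud: the transform basis is the eigenvectors of the GP covariance
# matrix evaluated at the point positions (a Karhunen-Loeve transform).
def gpt_basis(points, variance=1.0, length_scale=0.5):
    """Eigenvalues/eigenvectors of a squared-exponential covariance."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    K = variance * np.exp(-d2 / (2.0 * length_scale ** 2))
    eigvals, eigvecs = np.linalg.eigh(K)
    order = np.argsort(eigvals)[::-1]        # sort by decreasing eigenvalue
    return eigvals[order], eigvecs[:, order]

rng = np.random.default_rng(0)
pts = rng.random((16, 3))                    # point locations in one block
signal = rng.random(16)                      # e.g. one color channel
lam, U = gpt_basis(pts)
coeffs = U.T @ signal                        # forward GPT
recon = U @ coeffs                           # inverse GPT: lossless before quantization
```

Because the basis is orthogonal, the transform is invertible; compression comes from quantizing and entropy coding the coefficients, most of whose energy lands in the leading eigenvectors.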

  6. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
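A minimal sketch of the frequency-domain starting point the method works from (the Fourier-transformed time series, not the paper's orthogonal modeling functions), using a simulated first-order system as a stand-in:

```python
import numpy as np

# Empirical frequency response from Fourier transforms of input/output
# time series, for the simulated system y[n] = a*y[n-1] + b*u[n].
a, b, N = 0.9, 0.5, 4096
u = np.zeros(N); u[0] = 1.0                  # impulse input
y = np.zeros(N)
y[0] = b * u[0]
for n in range(1, N):
    y[n] = a * y[n - 1] + b * u[n]

H = np.fft.rfft(y) / np.fft.rfft(u)          # empirical H at the FFT bin frequencies
# The true response is H(e^{jw}) = b / (1 - a e^{-jw}), so the DC gain
# should be close to b / (1 - a) = 5.
```

A transfer-function model structure would then be fit to these frequency-response samples; the point of the sketch is only that Fourier transformation turns the time-domain identification problem into a frequency-domain fitting problem.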

  7. NASA Integrated Model Centric Architecture (NIMA) Model Use and Re-Use

    NASA Technical Reports Server (NTRS)

    Conroy, Mike; Mazzone, Rebecca; Lin, Wei

    2012-01-01

    This whitepaper accepts the goals, needs and objectives of NASA's Integrated Model-centric Architecture (NIMA); adds experience and expertise from the Constellation program as well as NASA's architecture development efforts; and provides suggested concepts, practices and norms that nurture and enable model use and re-use across programs, projects and other complex endeavors. Key components include the ability to effectively move relevant information through a large community, process patterns that support model reuse and the identification of the necessary meta-information (e.g., history, credibility, and provenance) to safely use and re-use that information. In order to successfully use and re-use models and simulations we must define and meet key organizational and structural needs: 1. We must understand and acknowledge all the roles and players involved from the initial need identification through to the final product, as well as how they change across the lifecycle. 2. We must create the necessary structural elements to store and share NIMA-enabled information throughout the Program or Project lifecycle. 3. We must create the necessary organizational processes to stand up and execute a NIMA-enabled Program or Project throughout its lifecycle. NASA must meet all three of these needs to successfully use and re-use models. The ability to reuse models is a key component of NIMA, and the capabilities inherent in NIMA are key to accomplishing NASA's space exploration goals.

  8. An Interactive Design Space Supporting Development of Vehicle Architecture Concept Models

    DTIC Science & Technology

    2011-01-01

    IMECE2011-64510 (Denver, Colorado, USA): An Interactive Design Space Supporting Development of Vehicle Architecture Concept Models. Gary Osborne ... early in the development cycle. Optimization taking place later in the cycle usually occurs at the detail design level, and tends to result in ... architecture changes may be imposed, but such modifications are equivalent to a huge optimization cycle covering almost the entire design process ...

  9. Plant Growth Modelling and Applications: The Increasing Importance of Plant Architecture in Growth Models

    PubMed Central

    Fourcaud, Thierry; Zhang, Xiaopeng; Stokes, Alexia; Lambers, Hans; Körner, Christian

    2008-01-01

    Background Modelling plant growth allows us to test hypotheses and carry out virtual experiments concerning plant growth processes that could otherwise take years in field conditions. The visualization of growth simulations allows us to see directly and vividly the outcome of a given model and provides us with an instructive tool useful for agronomists and foresters, as well as for teaching. Functional–structural (FS) plant growth models are nowadays particularly important for integrating biological processes with environmental conditions in 3-D virtual plants, and provide the basis for more advanced research in plant sciences. Scope In this viewpoint paper, we ask the following questions. Are we modelling the correct processes that drive plant growth, and is growth driven mostly by sink or source activity? In current models, is the importance of soil resources (nutrients, water, temperature and their interaction with meristematic activity) considered adequately? Do classic models account for architectural adjustment as well as integrating the fundamental principles of development? Whilst answering these questions with the available data in the literature, we put forward the opinion that plant architecture and sink activity must be pushed to the centre of plant growth models. In natural conditions, sinks will more often drive growth than source activity, because sink activity is often controlled by finite soil resources or developmental constraints. This viewpoint paper also serves as an introduction to this Special Issue devoted to plant growth modelling, which includes new research covering areas stretching from cell growth to biomechanics. All papers were presented at the Second International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA06), held in Beijing, China, 13–17 November 2006. Although a large number of papers are devoted to FS models of agricultural and forest crop species, physiological and genetic ...

  10. Modelling of internal architecture of kinesin nanomotor as a machine language.

    PubMed

    Khataee, H R; Ibrahim, M Y

    2012-09-01

    Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. The kinesin nanomotor is considered a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make decisions internally, and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of the internal decision-making process of the kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of the kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was accepted by the architectural DFA model of the nanomotor and was in good agreement with its natural behaviour. The internal agent-based architectural model of the kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor's interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling the internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation for the concept of bio-nanoswarms and the next phases of bio-nanorobotic systems development.
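The DFA view of the nanomotor's decision-making can be sketched with a generic automaton runner; the states and input symbols below are hypothetical stand-ins, not the paper's actual model:

```python
# Minimal deterministic finite automaton (DFA) runner: a transition table,
# a start state, and a set of accepting states.  A symbol string is
# "accepted" if consuming it ends in an accepting state.
def dfa_accepts(transitions, start, accepting, symbols):
    state = start
    for s in symbols:
        state = transitions[(state, s)]
    return state in accepting

# Hypothetical walk cycle: sense cargo -> bind -> step along the
# microtubule while ATP arrives -> release and return to idle.
transitions = {
    ("idle", "cargo"): "bound",
    ("bound", "atp"): "stepping",
    ("stepping", "atp"): "stepping",
    ("stepping", "release"): "idle",
}
accepted = dfa_accepts(transitions, "idle", {"idle"},
                       ["cargo", "atp", "atp", "release"])  # True
```

The set of all symbol strings such a DFA accepts is exactly a regular language, which is the sense in which the paper's architectural model "generates a regular machine language."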

  11. An Architectural Overlay: Modifying an Architecture to Help Cognitive Models Understand and Explain Themselves

    DTIC Science & Technology

    2006-02-24

    ... the metacognitive facilities Herbal can include in a model to create an intelligent opponent in dTank. dTank works, but it needs to be made even easier ...

  12. Fractional brownian functions as mathematical models of natural rhythm in architecture.

    PubMed

    Cirovic, Ivana M

    2014-10-01

    Carl Bovill suggested and described a method of generating rhythm in architecture with the help of fractional Brownian functions, as they are mathematical models of natural rhythm. A relationship established in the stated procedure between fractional Brownian functions as models of rhythm, and the observed group of architectural elements, is recognized as an analogical relationship, and the procedure of generating rhythm as a process of analogical transfer from the natural domain to the architectural domain. Since analogical transfer implies relational similarity of two domains, and the establishment of one-to-one correspondence, this paper is trying to determine under which conditions such correspondence could be established. For example, if the values of the observed visual feature of architectural elements are not similar to each other in a way in which they can form a monotonically increasing, or a monotonically decreasing bounded sequence, then the structural alignment and the one-to-one correspondence with a single fractional Brownian function cannot be established, hence, this function is deemed inappropriate as a model for the architectural rhythm. In this case we propose overlapping of two or more functions, so that each of them is an analog for one subset of mutually similar values of the visual feature of architectural elements.
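A fractional Brownian function of the kind Bovill uses as a rhythm model can be generated by random midpoint displacement; this is a generic sketch (not Bovill's exact procedure), with the Hurst exponent H controlling roughness:

```python
import numpy as np

# Fractional Brownian motion by random midpoint displacement: repeatedly
# insert midpoints, displacing each by noise whose scale shrinks by a
# factor of 2**(-H) per subdivision level.  Smaller H gives rougher traces.
def fbm_midpoint(levels, H, seed=0):
    rng = np.random.default_rng(seed)
    y = np.array([0.0, rng.standard_normal()])
    scale = 1.0
    for _ in range(levels):
        scale *= 0.5 ** H
        mids = 0.5 * (y[:-1] + y[1:]) + scale * rng.standard_normal(len(y) - 1)
        out = np.empty(2 * len(y) - 1)
        out[0::2], out[1::2] = y, mids       # interleave old points and midpoints
        y = out
    return y

trace = fbm_midpoint(levels=8, H=0.7)        # 2**8 + 1 samples
# Successive values of the trace could be read off as a rhythm, e.g.
# widths or spacings for a row of architectural elements.
```

The paper's point about one-to-one correspondence then becomes concrete: a single trace can only be aligned with element features that order like a monotone bounded sequence, otherwise several overlapped traces are needed.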

  13. The role of technology and engineering models in transforming healthcare.

    PubMed

    Pavel, Misha; Jimison, Holly Brugge; Wactlar, Howard D; Hayes, Tamara L; Barkis, Will; Skapik, Julia; Kaye, Jeffrey

    2013-01-01

    The healthcare system is in crisis due to challenges including escalating costs, the inconsistent provision of care, an aging population, and high burden of chronic disease related to health behaviors. Mitigating this crisis will require a major transformation of healthcare to be proactive, preventive, patient-centered, and evidence-based with a focus on improving quality-of-life. Information technology, networking, and biomedical engineering are likely to be essential in making this transformation possible with the help of advances, such as sensor technology, mobile computing, machine learning, etc. This paper has three themes: 1) motivation for a transformation of healthcare; 2) description of how information technology and engineering can support this transformation with the help of computational models; and 3) a technical overview of several research areas that illustrate the need for mathematical modeling approaches, ranging from sparse sampling to behavioral phenotyping and early detection. A key tenet of this paper concerns complementing prior work on patient-specific modeling and simulation by modeling neuropsychological, behavioral, and social phenomena. The resulting models, in combination with frequent or continuous measurements, are likely to be key components of health interventions to enhance health and wellbeing and the provision of healthcare.

  14. Modeling of variant-interaction during bainitic phase transformation

    NASA Astrophysics Data System (ADS)

    Ehlenbröker, U.; Mahnken, R.; Petersmann, M.; Antretter, T.

    2016-03-01

    In our research, we develop a thermodynamically consistent multi-scale model for phase transformations from austenite into n possible bainite variants. Each material point of the macroscopic configuration represents a polycrystal which describes the mesoscopic configuration. The microscopic configuration consists of an agglomeration of variants which is attached to each single crystal of the mesoscopic configuration. In addition, the model allows simulation of the macroscopic effects of volume change due to phase transformation as well as transformation-induced plasticity (TRIP). In this paper we present the results of recent work in the context of this model, which is concerned with extending the model to capture variant-interaction between the different crystallographic variants of bainite. For this reason we make use of the theory of transformation hardening. Thereby we are able to include an effect of preferential variant formation. This leads to the simultaneous formation of a selection of variants, while the evolution of crystallographically unfavorable variants is hindered or even completely suppressed. This extension of the model reflects the fact that, in general, not every crystallographic variant of bainite forms within a single austenite grain.

  15. Modeling of a 3DTV service in the software-defined networking architecture

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2014-11-01

    In this article a newly developed concept for modeling a multimedia service offering stereoscopic motion imagery is presented. The proposed model is based on the Software-Defined Networking (SDN) architecture. The definition of a 3DTV service spanning the SDN concept is identified, exposing the basic characteristics of a 3DTV service in a modern networking organization layout. Furthermore, exemplary functionalities of the proposed 3DTV model are depicted. It is indicated that modeling a 3DTV service in the Software-Defined Networking architecture leads to a multiplicity of improvements, especially in the flexibility of a service supporting heterogeneous end-user devices.

  16. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    PubMed

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.

  17. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding

    PubMed Central

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent “deep learning revolution” in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems. PMID:28377709

  18. Simplified three-phase transformer model for electromagnetic transient studies

    SciTech Connect

    Chimklai, S.; Marti, J.R.

    1995-07-01

    This paper presents a simplified high-frequency model for three-phase, two- and three-winding transformers. The model is based on the classical 60-Hz equivalent circuit, extended to high frequencies by the addition of the winding capacitances and the synthesis of the frequency-dependent short-circuit branch by an RLC equivalent network. By retaining the T-form of the classical model, it is possible to separate the frequency-dependent series branch from the constant-valued shunt capacitances. Since the short-circuit branch can be synthesized by a minimum-phase-shift rational approximation, the mathematical complications of fitting mutual impedance or admittance functions are avoided and the model is guaranteed to be numerically absolutely stable. Experimental tests were performed on actual power transformers to determine the parameters of the model. EMTP simulation results are also presented.
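The frequency-dependent series branch idea can be illustrated with a toy two-path network; the component values are hypothetical, not a fitted transformer:

```python
import numpy as np

# Toy frequency-dependent branch: a series R-L path in parallel with a
# series R-C path.  At low frequency the inductive path conducts
# (|Z| ~ R1); at high frequency the capacitive path shunts the branch.
def branch_impedance(f, R1=1.0, L=1e-3, R2=50.0, C=1e-8):
    w = 2.0 * np.pi * f
    z_rl = R1 + 1j * w * L
    z_rc = R2 + 1.0 / (1j * w * C)
    return z_rl * z_rc / (z_rl + z_rc)       # parallel combination

f = np.logspace(1, 6, 200)                   # 10 Hz to 1 MHz
Z = branch_impedance(f)
```

An RLC synthesis of the kind the paper describes would choose such element values so the network's impedance matches the measured short-circuit branch over the whole frequency range.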

  19. Comparison of different artificial neural network architectures in modeling of Chlorella sp. flocculation.

    PubMed

    Zenooz, Alireza Moosavi; Ashtiani, Farzin Zokaee; Ranjbar, Reza; Nikbakht, Fatemeh; Bolouri, Oberon

    2017-07-03

    Biodiesel production from microalgae feedstock should be performed after growth and harvesting of the cells, and the most feasible method for harvesting and dewatering of microalgae is flocculation. Flocculation modeling can be used for evaluation and prediction of its performance under different influential parameters. However, the modeling of flocculation in microalgae is not simple and has not yet been performed under all experimental conditions, mostly due to the different behaviors of microalgae cells during the process under different flocculation conditions. In the current study, the modeling of microalgae flocculation is studied with different neural network architectures. The microalgae species Chlorella sp. was flocculated with ferric chloride under different conditions, and the experimental data were then modeled using artificial neural networks. Networks with multilayer perceptron (MLP) and radial basis function architectures failed to predict the targets successfully, though modeling was effective with an ensemble architecture of MLP networks. A comparison between the performances of the ensemble and each individual network demonstrates the ability of the ensemble architecture in microalgae flocculation modeling.
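The ensemble idea can be sketched as averaging several independently initialized single-hidden-layer networks; for brevity each member's output layer is fit by least squares over random tanh features (a shortcut standing in for full MLP training), and the smooth toy target is synthetic, not the flocculation data:

```python
import numpy as np

# Ensemble of small single-hidden-layer networks: members differ only in
# their random initialization, and predictions are averaged.
def train_member(x, y, hidden=10, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((x.shape[1], hidden))
    b1 = rng.standard_normal(hidden)
    H = np.tanh(x @ W1 + b1)                     # random tanh features
    W2, *_ = np.linalg.lstsq(H, y, rcond=None)   # least-squares output layer
    return lambda xq: np.tanh(xq @ W1 + b1) @ W2

x = np.linspace(-1.0, 1.0, 40).reshape(-1, 1)
y = (x ** 2).ravel()                             # synthetic smooth target
members = [train_member(x, y, seed=s) for s in range(5)]
ensemble = lambda xq: np.mean([m(xq) for m in members], axis=0)
```

Averaging tends to cancel the idiosyncratic errors of individual members, which is the usual explanation for why an ensemble can succeed where single networks fail.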

  20. Estimation in a semi-Markov transformation model

    PubMed Central

    Dabrowska, Dorota M.

    2012-01-01

    Multi-state models provide a common tool for analysis of longitudinal failure time data. In biomedical applications, models of this kind are often used to describe evolution of a disease and assume that a patient may move among a finite number of states representing different phases in the disease progression. Several authors developed extensions of the proportional hazard model for analysis of multi-state models in the presence of covariates. In this paper, we consider a general class of censored semi-Markov and modulated renewal processes and propose the use of transformation models for their analysis. Special cases include modulated renewal processes with interarrival times specified using transformation models, and semi-Markov processes with one-step transition probabilities defined using copula-transformation models. We discuss estimation of finite and infinite dimensional parameters of the model, and develop an extension of the Gaussian multiplier method for setting confidence bands for transition probabilities. A transplant outcome data set from the Center for International Blood and Marrow Transplant Research is used for illustrative purposes. PMID:22740583

  1. Model Transformation for a System of Systems Dependability Safety Case

    NASA Technical Reports Server (NTRS)

    Murphy, Judy; Driskell, Steve

    2011-01-01

    The presentation reviews the dependability and safety effort of NASA's Independent Verification and Validation Facility. Topics include: safety engineering process, applications to non-space environment, Phase I overview, process creation, sample SRM artifact, Phase I end result, Phase II model transformation, fault management, and applying Phase II to individual projects.

  2. Software architecture and design of the web services facilitating climate model diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform compatibility; (2) co-location of computation and big data on the server side, with only small results and plots downloaded on the client side, hence high data efficiency; (3) multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment so each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a light-weight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. Our CMDA tool has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School.
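    The pattern of wrapping an existing science routine behind a web service can be sketched generically. The sketch below uses only the standard library (a plain WSGI callable rather than CMDA's actual Flask/Gunicorn stack), and `science_kernel` with its `lat`/`lon` parameters is a hypothetical stand-in, not CMDA code:

```python
import json
from urllib.parse import parse_qs

def science_kernel(lat, lon):
    """Hypothetical stand-in for an existing analysis routine."""
    return {"lat": lat, "lon": lon, "anomaly": 0.5 * lat - 0.1 * lon}

def app(environ, start_response):
    """Minimal WSGI wrapper exposing the kernel as a GET endpoint."""
    qs = parse_qs(environ.get("QUERY_STRING", ""))
    lat = float(qs.get("lat", ["0"])[0])
    lon = float(qs.get("lon", ["0"])[0])
    body = json.dumps(science_kernel(lat, lon)).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# direct invocation, no network needed — this is how WSGI servers
# such as Gunicorn call the application object
status = {}
def _start(s, h): status["code"] = s
resp = b"".join(app({"QUERY_STRING": "lat=10&lon=20"}, _start))
```

    The same callable can be served by any WSGI server; frameworks like Flask add routing and request parsing on top of this protocol.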

  3. Transitioning ISR architecture into the cloud

    NASA Astrophysics Data System (ADS)

    Lash, Thomas D.

    2012-06-01

    Emerging cloud computing platforms offer an ideal opportunity for Intelligence, Surveillance, and Reconnaissance (ISR) intelligence analysis. Cloud computing platforms help overcome challenges and limitations of traditional ISR architectures. Modern ISR architectures can benefit from examining commercial cloud applications, especially as they relate to user experience, usage profiling, and transformational business models. This paper outlines legacy ISR architectures and their limitations, presents an overview of cloud technologies and their applications to the ISR intelligence mission, and presents an idealized ISR architecture implemented with cloud computing.

  4. A Microscale Model for Ausferritic Transformation of Austempered Ductile Irons

    NASA Astrophysics Data System (ADS)

    Boccardo, Adrián D.; Dardati, Patricia M.; Celentano, Diego J.; Godoy, Luis A.

    2017-01-01

    This paper presents a new metallurgical model for the ausferritic transformation of ductile cast iron. The model allows predicting the evolution of phases in terms of the chemical composition, austenitization and austempering temperatures, graphite nodule count, and distribution of graphite nodule size. The ferrite evolution is predicted according to the displacive growth mechanism. A representative volume element is employed at the microscale to consider the phase distributions, the inhomogeneous austenite carbon content, and the nucleation of ferrite subunits at the graphite nodule surface and at the tips of existing ferrite subunits. The performance of the model is evaluated by comparison with experimental results. The results indicate that the increment of the ausferritic transformation rate, which is caused by increments of austempering temperature and graphite nodule count, is adequately represented by this model.

  5. Laguerre-Volterra model and architecture for MIMO system identification and output prediction.

    PubMed

    Li, Will X Y; Xin, Yao; Chan, Rosa H M; Song, Dong; Berger, Theodore W; Cheung, Ray C C

    2014-01-01

    A generalized mathematical model is proposed for behavior prediction of biological causal systems with multiple inputs and multiple outputs (MIMO). The system properties are represented by a set of model parameters, which can be derived by probing the system with random input stimuli. The system calculates predicted outputs based on the estimated parameters and its novel inputs. An efficient hardware architecture is established for this mathematical model, and its circuitry has been implemented using field-programmable gate arrays (FPGAs). This architecture is scalable, and its functionality has been validated using experimental data gathered from real-world measurements.

  6. Assessing biocomputational modelling in transforming clinical guidelines for osteoporosis management.

    PubMed

    Thiel, Rainer; Viceconti, Marco; Stroetmann, Karl

    2011-01-01

    Biocomputational modelling as developed by the European Virtual Physiological Human (VPH) Initiative is the area of ICT most likely, in the longer term, to revolutionise the practice of medicine. Using the example of osteoporosis management, a socio-economic assessment framework is presented that captures how the transformation of clinical guidelines through VPH models can be evaluated. Applied to the Osteoporotic Virtual Physiological Human Project, a consequent benefit-cost analysis delivers promising results, both methodologically and substantially.

  7. Wavelet transforms in a critical interface model for Barkhausen noise.

    PubMed

    de Queiroz, S L A

    2008-02-01

    We discuss the application of wavelet transforms to a critical interface model which is known to provide a good description of Barkhausen noise in soft ferromagnets. The two-dimensional version of the model (one-dimensional interface) is considered, mainly in the adiabatic limit of very slow driving. On length scales shorter than a crossover length (which grows with the strength of the surface tension), the effective interface roughness exponent zeta is approximately 1.20, close to the expected value for the universality class of the quenched Edwards-Wilkinson model. We find that the waiting times between avalanches are fully uncorrelated, as the wavelet transform of their autocorrelations scales as white noise. Similarly, detrended size-size correlations give a white-noise wavelet transform. Consideration of finite driving rates, still deep within the intermittent regime, shows the wavelet transform of correlations scaling as 1/f^1.5 for intermediate frequencies. This behavior is ascribed to intra-avalanche correlations.
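    The white-noise signature described for the waiting times can be illustrated with a self-contained check: for i.i.d. data, the variance of wavelet detail coefficients is flat across scales (a flat spectrum). The Haar wavelet, the exponential waiting-time surrogate, and the sample size below are illustrative choices, not the paper's actual model output:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=1.0, size=4096)   # surrogate i.i.d. waiting times

def haar_details(sig):
    """Plain (decimated) Haar transform: detail coefficients per scale."""
    out, s = [], sig.astype(float)
    while len(s) >= 2:
        a = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation coefficients
        d = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail coefficients
        out.append(d)
        s = a
    return out

details = haar_details(x)
# white noise => detail variance roughly constant in scale;
# compare the finest scale against a mid scale
ratio = details[4].var() / details[0].var()
```

    Correlated data would instead show detail variance growing or shrinking systematically with scale (e.g. a 1/f^1.5 spectrum implies a geometric trend across dyadic scales).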

  8. Similarity transformation approach to identifiability analysis of nonlinear compartmental models.

    PubMed

    Vajda, S; Godfrey, K R; Rabitz, H

    1989-04-01

    Through use of the local state isomorphism theorem instead of the algebraic equivalence theorem of linear systems theory, the similarity transformation approach is extended to nonlinear models, resulting in finitely verifiable sufficient and necessary conditions for global and local identifiability. The approach requires testing of certain controllability and observability conditions, but in many practical examples these conditions prove very easy to verify. In principle the method also involves nonlinear state variable transformations, but in all of the examples presented in the paper the transformations turn out to be linear. The method is applied to an unidentifiable nonlinear model and a locally identifiable nonlinear model, and these are the first nonlinear models other than bilinear models where the reason for lack of global identifiability is nontrivial. The method is also applied to two models with Michaelis-Menten elimination kinetics, both of considerable importance in pharmacokinetics, and for both of which the complicated nature of the algebraic equations arising from the Taylor series approach has hitherto defeated attempts to establish identifiability results for specific input functions.

  9. Tests for Regression Parameters in Power Transformation Models.

    DTIC Science & Technology

    1980-01-01

    of estimating the correct λ scale and then performing the usual linear model F-test in this estimated λ scale. We explore situations in which this...transformation model. In this model, a simple test consists of estimating the correct scale and then performing the usual linear model F-test in this...β̂(y1,y2) will be the least squares estimates in the estimated scale λ̂ and β(y1,y2) will be the least squares estimates calculated in the true but

  10. Multimodal electromechanical model of piezoelectric transformers by Hamilton's principle.

    PubMed

    Nadal, Clement; Pigache, Francois

    2009-11-01

    This work deals with a general energetic approach to establish an accurate electromechanical model of a piezoelectric transformer (PT). Hamilton's principle is used to obtain the equations of motion for free vibrations. The modal characteristics (mass, stiffness, primary and secondary electromechanical conversion factors) are also deduced. Then, to illustrate this general electromechanical method, the variational principle is applied to both homogeneous and nonhomogeneous Rosen-type PT models. A comparison of modal parameters, mechanical displacements, and electrical potentials is presented for both models. Finally, the validity of the electrodynamical model of the nonhomogeneous Rosen-type PT is confirmed by a numerical comparison based on a finite elements method and an experimental identification.

  11. Processes models, environmental analyses, and cognitive architectures: quo vadis quantum probability theory?

    PubMed

    Marewski, Julian N; Hoffrage, Ulrich

    2013-06-01

    A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.

  12. Understanding transparency perception in architecture: presentation of the simplified perforated model.

    PubMed

    Brzezicki, Marcin

    2013-01-01

    Issues of transparency perception are addressed from an architectural perspective, pointing out previously neglected factors that greatly influence this phenomenon at the scale of a building. The simplified perforated model of a transparent surface presented in the paper is based on previously developed theories and involves the balance of light reflected versus light transmitted. Its aim is to facilitate an understanding of non-intuitive phenomena related to transparency (e.g., dynamically changing reflectance) for readers without advanced knowledge of molecular physics. A verification of the presented model has been based on the comparison of the optical performance of the model with the results of Fresnel's equations for light-transmitting materials. The presented methodology is intended to be used both in the design and explanatory stages of architectural practice and vision research. Incorporation of architectural issues could enrich the perspective of scientists representing other disciplines.

  13. Cultural heritage conservation and communication by digital modeling tools. Case studies: minor architectures of the Thirties in the Turin area

    NASA Astrophysics Data System (ADS)

    Bruno, A., Jr.; Spallone, R.

    2015-08-01

    Between the end of the twenties and the beginning of World War Two, Turin, like most Italian cities, was endowed by the fascist regime with many new buildings to guarantee its visibility and control of the territory: the fascist party's main houses and the local ones. The style adopted for these constructions was inspired by the guidelines of the Modern Movement, which were being spread by a generation of architects such as Le Corbusier, Gropius, and Mendelsohn. At the end of the war many buildings were converted to other functions, which led to heavy transformations not respectful of their original worth; others were demolished. Today it is possible to rebuild those lost architectures in their original form, as created by their architects on paper (and in their minds). This process can guarantee three-dimensional perception, the authenticity of the materials, and the placement within the Turin urban tissue, using static and dynamic digital representation systems. The "three-dimensional re-drawing" of the projects, conceived as a heuristic practice devoted to revealing the original idea of the project, is inserted into a digital model of the urban and natural context as we can experience it today, to simulate the perceptive effects that the building could evoke today. These modeling skills are the basis for producing videos able to explore the relationship between the environment and the "re-built architectures", describing, with synthetic movie techniques, the main formal and perceptive roots. The model represents a scientific product that can be included in a virtual archive of cultural goods to preserve the collective memory of the architectural and urban past image of Turin.

  14. Assessing the photochemical transformation pathways of acetaminophen relevant to surface waters: transformation kinetics, intermediates, and modelling.

    PubMed

    De Laurentiis, Elisa; Prasse, Carsten; Ternes, Thomas A; Minella, Marco; Maurino, Valter; Minero, Claudio; Sarakha, Mohamed; Brigante, Marcello; Vione, Davide

    2014-04-15

    This work shows that the main photochemical pathways of acetaminophen (APAP) transformation in surface waters would be direct photolysis (with a quantum yield of (4.57 ± 0.17)⋅10^-2), reaction with CO3^-· (most significant at pH > 7, with a second-order rate constant of (3.8 ± 1.1)⋅10^8 M^-1 s^-1) and possibly, for dissolved organic carbon higher than 5 mg C L^-1, reaction with the triplet states of chromophoric dissolved organic matter (^3CDOM*). The modelled photochemical half-life time of APAP in environmental waters would range from days to a few weeks in summertime, which suggests that the importance of phototransformation might be comparable to biodegradation. APAP transformation by the main photochemical pathways yields hydroxylated derivatives, ring-opening compounds as well as dimers and trimers (at elevated concentration levels). In the case of ^3CDOM* (for which the triplet state of anthraquinone-2-sulphonate was used as a proxy), ring rearrangement is also hypothesised. Photochemistry would produce different transformation products (TPs) of APAP than microbial biodegradation or human metabolism, thus the relevant TPs might be used as markers of APAP photochemical reaction pathways in environmental waters.
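    The days-to-weeks half-life scale can be reproduced from the quoted second-order rate constant under pseudo-first-order conditions, t1/2 = ln 2 / (k2 [CO3^-·]). The steady-state radical concentration assumed below is an illustrative value, not one reported in the abstract:

```python
import math

k2 = 3.8e8        # M^-1 s^-1, APAP + CO3(-.) rate constant (from the abstract)
co3 = 1e-14       # M, assumed steady-state radical concentration (illustrative)

k_pseudo = k2 * co3                  # pseudo-first-order rate constant, s^-1
t_half_s = math.log(2) / k_pseudo    # first-order half-life in seconds
t_half_days = t_half_s / 86400       # on the order of a couple of days
```

    With this assumed radical level the half-life lands in the "days" range quoted by the modelling; lower radical concentrations push it toward weeks.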

  15. Stable Eutectoid Transformation in Nodular Cast Iron: Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Carazo, Fernando D.; Dardati, Patricia M.; Celentano, Diego J.; Godoy, Luis A.

    2017-01-01

    This paper presents a new microstructural model of the stable eutectoid transformation in a spheroidal cast iron. The model takes into account the nucleation and growth of ferrite grains and the growth of graphite spheroids. Different laws are assumed for the growth of both phases during and below the intercritical stable eutectoid. At a microstructural level, the initial conditions for the phase transformations are obtained from the microstructural simulation of solidification of the material, which considers the divorced eutectic and the subsequent growth of graphite spheroids up to the initiation of the stable eutectoid transformation. The temperature field is obtained by solving the energy equation by means of finite elements. The microstructural (phase change) and macrostructural (energy balance) models are coupled by a sequential multiscale procedure. Experimental validation of the model is achieved by comparison with measured values of the fractions and radii of 2D views of ferrite grains. Agreement with such experiments indicates that the present model is capable of predicting ferrite phase fraction and grain size with reasonable accuracy.

  16. In search of best fitted composite model to the ALAE data set with transformed Gamma and inversed transformed Gamma families

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu

    2017-05-01

    In this paper, a recent novel approach is applied to estimate the threshold parameter of a composite model. Several composite models from the Transformed Gamma and Inverse Transformed Gamma families are constructed based on this approach, and their parameters are estimated by the maximum likelihood method. These composite models are fitted to allocated loss adjustment expenses (ALAE). Among all composite models studied, the composite Weibull-Inverse Transformed Gamma model proves to be the strongest candidate, as it best fits the loss data. The final part applies the backtesting method to validate the VaR and CTE risk measures.

  17. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  18. Spatial Modeling of Iron Transformations Within Artificial Soil Aggregates

    NASA Astrophysics Data System (ADS)

    Kausch, M.; Meile, C.; Pallud, C.

    2008-12-01

    Structured soils exhibit significant variations in transport characteristics at the aggregate scale. Preferential flow occurs through macropores while predominantly diffusive exchange takes place in intra-aggregate micropores. Such environments characterized by mass transfer limitations are conducive to the formation of small-scale chemical gradients and promote strong spatial variation in processes controlling the fate of redox-sensitive elements such as Fe. In this study, we present a reactive transport model used to spatially resolve iron bioreductive processes occurring within a spherical aggregate at the interface between advective and diffusive domains. The model is derived from current conceptual models of iron(hydr)oxide (HFO) transformations and constrained by literature and experimental data. Data were obtained from flow-through experiments on artificial soil aggregates inoculated with Shewanella putrefaciens strain CN32, and include the temporal evolution of the bulk solution composition, as well as spatial information on the final solid phase distribution within aggregates. With all iron initially in the form of ferrihydrite, spatially heterogeneous formation of goethite/lepidocrocite, magnetite and siderite was observed during the course of the experiments. These transformations were reproduced by the model, which ascribes a central role to divalent iron as a driver of HFO transformations and master variable in the rate laws of the considered reaction network. The predicted dissolved iron breakthrough curves also match the experimental ones closely. Thus, the computed chemical concentration fields help identify factors governing the observed trends in the solid phase distribution patterns inside the aggregate. Building on a mechanistic description of transformation reactions, fluid flow and solute transport, the model was able to describe the observations and hence illustrates the importance of small-scale gradients and dynamics of bioreductive

  19. Rasch family models in e-learning: analyzing architectural sketching with a digital pen.

    PubMed

    Scalise, Kathleen; Cheng, Nancy Yen-Wen; Oskui, Nargas

    2009-01-01

    Since architecture students studying design drawing are usually assessed qualitatively on the basis of their final products, the challenges and stages of their learning have remained masked. To clarify the challenges in design drawing, we have been using the BEAR Assessment System and Rasch family models to measure levels of understanding for individuals and groups, in order to correct pedagogical assumptions and tune teaching materials. This chapter discusses the analysis of 81 drawings created by architectural students to solve a space layout problem, collected and analyzed with digital pen-and-paper technology. The approach allows us to map developmental performance criteria and perceive achievement overlaps in learning domains assumed separate, and then re-conceptualize a three-part framework to represent learning in architectural drawing. Results and measurement evidence from the assessment and Rasch modeling are discussed.

  20. Representation and inference of cellular architecture for metabolic reconstruction and modeling.

    PubMed

    Paley, Suzanne; Krummenacker, Markus; Karp, Peter D

    2016-04-01

    Metabolic modeling depends on accurately representing the cellular locations of enzyme-catalyzed and transport reactions. We sought to develop a representation of cellular compartmentation that would accurately capture cellular location information. We further sought a representation that would support automated inference of the cellular compartments present in newly sequenced organisms to speed model development, and that would enable representing the cellular compartments present in multiple cell types within a multicellular organism. We define the cellular architecture of a unicellular organism, or of a cell type from a multicellular organism, as the collection of cellular components it contains plus the topological relationships among those components. We developed a tool for inferring cellular architectures across many domains of life and extended our Cell Component Ontology to enable representation of the inferred architectures. We provide software for visualizing cellular architectures to verify their correctness and software for editing cellular architectures to modify or correct them. We also developed a representation that records the cellular compartment assignments of reactions with minimal duplication of information. The Cell Component Ontology is freely available. The Pathway Tools software is freely available for academic research and is available for a fee for commercial use. Supplementary data are available at Bioinformatics online.

  1. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    DOE PAGES

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; ...

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  2. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    SciTech Connect

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; Vetter, Jeffrey S.

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  3. A transformation model for Laminaria Japonica (Phaeophyta, Laminariales)

    NASA Astrophysics Data System (ADS)

    Qin, Song; Jiang, Peng; Li, Xin-Ping; Wang, Xi-Hua; Zeng, Cheng-Kui

    1998-03-01

    A genetic transformation model for the seaweed Laminaria japonica mainly includes the following aspects: 1. The method to introduce foreign genes into the kelp L. japonica: biolistic bombardment has proved to be an effective method for delivering foreign DNA through cell walls into intact cells of both sporophytes and gametophytes. The expression of cat and lacZ was detected in regenerated sporophytes, which suggests that this method can induce random integration of foreign genes. Promoters to drive gene expression

  4. Bio-inspired FPGA architecture for self-calibration of an image compression core based on wavelet transforms in embedded systems

    NASA Astrophysics Data System (ADS)

    Salvador, Rubén; Vidal, Alberto; Moreno, Félix; Riesgo, Teresa; Sekanina, Lukáš

    2011-05-01

    A generic bio-inspired adaptive architecture for image compression suitable for implementation in embedded systems is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core directed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an unknown environment during design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system in the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can be easily extended to adapt other types of image processing cores. Details on this pluggable component point of view are also given in the paper.
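    The calibration loop described above — an Evolution Strategy mutating transform coefficients and keeping improvements — can be sketched in a hardware-agnostic way. Everything below (the (1+1)-ES variant, the stand-in fitness function, the `target` filter taps) is an illustrative assumption, not the paper's actual FPGA implementation:

```python
import random

def fitness(coeffs, target):
    """Stand-in for compression error: distance from a known-good filter."""
    return sum((c - t) ** 2 for c, t in zip(coeffs, target))

def one_plus_one_es(target, dim=4, sigma=0.5, iters=3000, seed=1):
    """(1+1)-ES: mutate one parent, keep the child if it is no worse."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(dim)]
    best = fitness(parent, target)
    for _ in range(iters):
        child = [c + rng.gauss(0, sigma) for c in parent]   # Gaussian mutation
        f = fitness(child, target)
        if f <= best:
            parent, best = child, f
            sigma *= 1.1        # 1/5th-success-rule style step adaptation
        else:
            sigma *= 0.97
    return parent, best

target = [0.483, 0.837, 0.224, -0.129]   # illustrative filter taps
coeffs, err = one_plus_one_es(target)
```

    In the paper's setting the fitness would instead be measured compression quality on validation images, evaluated on the FPGA core itself.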

  5. A model of tumor architecture and spatial interactions with tumor microenvironment in breast carcinoma

    NASA Astrophysics Data System (ADS)

    Ben Cheikh, Bassem; Bor-Angelier, Catherine; Racoceanu, Daniel

    2017-03-01

    Breast carcinomas are cancers that arise from the epithelial cells of the breast, which are the cells that line the lobules and the lactiferous ducts. Breast carcinoma is the most common type of breast cancer and can be divided into different subtypes based on architectural features and growth patterns recognized during a histopathological examination. The tumor microenvironment (TME) is the cellular environment in which tumor cells develop. Being composed of various cell types having different biological roles, the TME is recognized as playing an important role in the progression of the disease. The architectural heterogeneity in breast carcinomas and the spatial interactions with the TME are, to date, not well understood. Developing a spatial model of tumor architecture and spatial interactions with the TME can advance our understanding of tumor heterogeneity. Furthermore, generating synthetic histological datasets can contribute to validating and comparing analytical methods that are used in digital pathology. In this work, we propose a modeling method that applies to different breast carcinoma subtypes and TME spatial distributions based on mathematical morphology. The model is based on a few morphological parameters that give access to a large spectrum of breast tumor architectures and are able to differentiate in-situ ductal carcinomas (DCIS) and histological subtypes of invasive carcinomas such as ductal (IDC) and lobular (ILC) carcinoma. In addition, some of the parameters of the model control the spatial distribution of the TME relative to the tumor. The validation of the model has been performed by comparing morphological features between real and simulated images.

  6. RUBE: an XML-based architecture for 3D process modeling and model fusion

    NASA Astrophysics Data System (ADS)

    Fishwick, Paul A.

    2002-07-01

    Information fusion is a critical problem for science and engineering. There is a need to fuse information content specified as either data or model. We frame our work in terms of fusing dynamic and geometric models, to create an immersive environment where these models can be juxtaposed in 3D, within the same interface. The method by which this is accomplished fits well with other eXtensible Markup Language (XML) approaches to fusion in general. The task of modeling lies at the heart of the human-computer interface, joining the human to the system under study through a variety of sensory modalities. I present an overview of modeling as a key concern for the Defense Department and the Air Force, and then follow with a discussion of past, current, and future work. Past work began with a package written in C and has progressed, in current work, to an implementation in XML. Our current work is defined within the RUBE architecture, which is detailed in subsequent papers devoted to key components. We have built RUBE as a next-generation modeling framework using our prior software, with research opportunities in immersive 3D and tangible user interfaces.

  7. Multinomial logistic estimation of Markov-chain models for modeling sleep architecture in primary insomnia patients.

    PubMed

    Bizzotto, Roberto; Zamuner, Stefano; De Nicolao, Giuseppe; Karlsson, Mats O; Gomeni, Roberto

    2010-04-01

    Hypnotic drug development calls for a better understanding of sleep physiology in order to improve and differentiate novel medicines for the treatment of sleep disorders. On this basis, a proper evaluation of polysomnographic data collected in clinical trials conducted to explore the clinical efficacy of novel hypnotic compounds should include an assessment of sleep architecture and its drug-induced changes. This work presents a non-linear mixed-effect Markov-chain model based on multinomial logistic functions which characterize the time course of transition probabilities between sleep stages in insomniac patients treated with placebo. Polysomnography measurements were obtained from patients during one night of treatment. A population approach was used to describe the time course of sleep stages (awake stage, stage 1, stage 2, slow-wave sleep and REM sleep) using a Markov-chain model. The relationship between time and individual transition probabilities between sleep stages was modelled through piecewise linear multinomial logistic functions. Identification of the model produced good adherence of mean post-hoc estimates to the observed transition frequencies. Parameters were generally well estimated in terms of CV, shrinkage and the distribution of empirical Bayes estimates around the typical values. The posterior predictive check analysis showed good consistency between model-predicted and observed sleep parameters. In conclusion, the Markov-chain model based on multinomial logistic functions provided an accurate description of the time course of sleep stages together with an assessment of the probabilities of transition between different stages.
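
    The transition mechanism the abstract describes — a Markov chain over five sleep stages whose transition probabilities come from multinomial logistic functions of time — can be sketched as a small simulation. This is illustrative only: the stage set matches the abstract, but the logit parameterization details and all coefficient values below are assumptions, not the paper's estimates.

```python
import math
import random

STAGES = ["awake", "st1", "st2", "sws", "rem"]

def transition_probs(current, t, coef):
    """Multinomial logit: P(next = k | current) is proportional to
    exp(a_k + b_k * t), with the current stage as reference (logit 0)."""
    logits = {}
    for k in STAGES:
        a, b = coef[current].get(k, (0.0, 0.0))
        logits[k] = a + b * t
    z = sum(math.exp(v) for v in logits.values())
    return {k: math.exp(v) / z for k, v in logits.items()}

# Hypothetical coefficients: (intercept, time slope) per (from, to) pair.
coef = {s: {k: (-2.0, 0.0) for k in STAGES if k != s} for s in STAGES}
for s in STAGES:
    coef[s][s] = (0.0, 0.0)           # staying in the same stage is the reference
coef["awake"]["st1"] = (-1.0, 0.002)  # falling asleep becomes likelier with time

def simulate(n_epochs=960, seed=1):
    """One night of 30-second epochs, starting awake."""
    rng = random.Random(seed)
    stage, path = "awake", []
    for t in range(n_epochs):
        p = transition_probs(stage, t, coef)
        r, acc = rng.random(), 0.0
        for k in STAGES:
            acc += p[k]
            if r < acc:
                stage = k
                break
        path.append(stage)
    return path
```

    A piecewise-linear version, as in the paper, would replace the single slope `b` with segment-specific slopes over night-time intervals.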

  8. MATREX - Modeling Architecture for Technology, Research and EXperimentation

    DTIC Science & Technology

    2008-03-10

    (Abstract consists of briefing-slide residue. Recoverable content: the FCS LSI Simulation Environment (FSE) and Integrated Phase 2 (IP-2); simulation middleware (HLA/RTI); a C3 Human Performance Model (C3 HPM) with hlaControl/hlaResults and an SQL database; the Comprehensive Munitions and Sensor Simulation (CMS2); a Logistics Server (LOG); the NEC2 Effects Engine; and meta-modeling and code generation addressing the M&S interoperability problem space.)

  9. Hierarchical Architectural Considerations in Econometric Modeling of Manufacturing Systems

    DTIC Science & Technology

    1981-06-01

    the model (e.g. center level as a function of cell level, etc.). Although the current effort was to develop an IDEF0 activity model, the...concepts and thoughts on synthesizing existing knowledge toward the objective of developing a hierarchical IDEF0 econometric model for a large scale...review of the terminology and structure of IDEF0 (ICAM definition method-version 0) is given in the subsequent paragraphs. Structured analysis

  10. Failed oceanic transform models: experience of shaking the tree

    NASA Astrophysics Data System (ADS)

    Gerya, Taras

    2017-04-01

    In geodynamics, numerical modeling is often used as a trial-and-error tool, which does not necessarily require full understanding or even a correct concept of the modeled phenomenon. Paradoxically, in order to understand an enigmatic process one should simply try to model it based on some initial assumptions, which need not even be correct… The reason is that our intuition is not always well "calibrated" for understanding geodynamic phenomena, which develop on space- and timescales that are very different from our everyday experience. We often have much better ideas about the physical laws governing geodynamic processes than about how these laws should interact on geological space- and timescales. From this perspective, numerical models, in which these physical laws are self-consistently implemented, can gradually calibrate our intuition by exploring which scenarios are physically sensible and which are not. I personally went through this painful learning path many times, and one noteworthy example was my 3D numerical modeling of oceanic transform faults. As I understand in retrospect, my initial literature-inspired concept of how and why transform faults form and evolve was thermomechanically inconsistent and based on two main assumptions (by the way, both were incorrect!): (1) oceanic transforms are directly inherited from the continental rifting and breakup stages, and (2) they represent plate fragmentation structures having a peculiar extension-parallel orientation due to the stress rotation caused by thermal contraction of the oceanic lithosphere. During one year (!) of high-resolution thermomechanical numerical experiments exploring various physics (including very computationally demanding thermal contraction), I systematically observed how my initially prescribed extension-parallel weak transform faults connecting ridge segments rotated away from their original orientation and got converted into oblique ridge sections… This was really an epic failure! However, at the

  11. Research and development of the evolving architecture for beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Cho, Kihyeon; Kim, Jangho; Kim, Junghyun

    2015-12-01

    The Standard Model (SM) has been successfully validated with the discovery of the Higgs boson. However, the model is not yet regarded as a complete description. There are efforts to develop phenomenological models that are collectively termed beyond the Standard Model (BSM). BSM studies require several orders of magnitude more simulations than those required for Higgs boson events. At the same time, particle physics research involves major investments in hardware coupled with large-scale theoretical and computational efforts along with experiments. These fields include simulation toolkits based on an evolving computing architecture. Using these simulation toolkits, we study particle physics beyond the Standard Model. Here, we describe the state of this research and development effort on an evolving computing architecture of high-throughput computing (HTC) and graphics processing units (GPUs) for searches beyond the Standard Model.

  12. Modelling a single phase voltage controlled rectifier using Laplace transforms

    NASA Technical Reports Server (NTRS)

    Kraft, L. Alan; Kankam, M. David

    1992-01-01

    The development of a 20 kHz AC power system by NASA for large space projects has spurred a need to develop models for the equipment which will be used on these single phase systems. To date, models for the AC source (i.e., inverters) have been developed. It is the intent of this paper to develop a method to model the single phase voltage controlled rectifiers which will be attached to the AC power grid as an interface for connected loads. A modified version of EPRI's HARMFLO program is used as the shell for these models. The results obtained from the model developed in this paper are quite adequate for the analysis of problems such as voltage resonance. The unique technique presented in this paper uses Laplace transforms to determine the harmonic content of the load current of the rectifier rather than a curve fitting technique. Laplace transforms yield the coefficients of the differential equations which model the line current to the rectifier directly.

  13. Modeling & Analysis of Multicore Architectures for Embedded SIGINT Applications

    DTIC Science & Technology

    2015-03-01

    consumption and/or more efficiently task processing resources. The capability offered by this modeling technique is expected to allow system designers to make more informed...selection of high performance embedded computing (HPEC) technologies. Furthermore, it could allow researchers to design resource management and PED

  14. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
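
    The kind of Markov model ARM formulates for downstream evaluation programs can be illustrated with a toy standby-redundancy-and-repair chain: a duplex system where one failure triggers repair and a second failure during repair is fatal. The state structure and the rates below are hypothetical, not ARM output; the evaluation step integrates the Kolmogorov forward equations by explicit Euler steps.

```python
# States of a hypothetical duplex system:
#   0 = both units up, 1 = one failed (repair under way), 2 = system failure (absorbing).
FAIL, REPAIR = 1e-4, 1e-2   # per-hour rates (illustrative only)

# Generator matrix: Q[i][j] is the rate of i -> j transitions, rows sum to 0.
Q = [
    [-2 * FAIL,          2 * FAIL,  0.0],
    [REPAIR,   -(REPAIR + FAIL),   FAIL],
    [0.0,                0.0,       0.0],
]

def reliability(t_end, dt=1.0):
    """Euler-integrate dp/dt = p Q from p(0) = (1, 0, 0);
    reliability is the probability of not having been absorbed."""
    p = [1.0, 0.0, 0.0]
    for _ in range(int(t_end / dt)):
        p = [p[i] + dt * sum(p[k] * Q[k][i] for k in range(3))
             for i in range(3)]
    return p[0] + p[1]
```

    A production evaluator would use a stiff ODE solver or uniformization rather than Euler steps, but the model object — the generator matrix over the PMS failure/repair states — is the same.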

  15. Modelling of Singapore's topographic transformation based on DEMs

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Belle, Iris; Hassler, Uta

    2015-02-01

    Singapore's topography has been heavily transformed by industrialization and urbanization processes. To investigate topographic changes and evaluate soil mass flows, historical topographic maps of 1924 and 2012 were employed, and basic topographic features were vectorized. Digital elevation models (DEMs) for the two years were reconstructed based on vector features. Corresponding slope maps, a surface difference map and a scatter plot of elevation changes were generated and used to quantify and categorize the nature of the topographic transformation. The surface difference map is aggregated into five main categories of changes: (1) areas without significant height changes, (2) lowered-down areas where hill ranges were cut down, (3) raised-up areas where valleys and swamps were filled in, (4) reclaimed areas from the sea, and (5) new water-covered areas. Considering spatial proximity and configurations of different types of changes, topographic transformation can be differentiated as either creating inland flat areas or reclaiming new land from the sea. Typical topographic changes are discussed in the context of Singapore's urbanization processes. The two slope maps and elevation histograms show that generally, the topographic surface of Singapore has become flatter and lower since 1924. More than 89% of height changes have happened within a range of 20 m and 95% have been below 40 m. Because of differences in land surveying and map drawing methods, uncertainties and inaccuracies inherent in the 1924 topographic maps are discussed in detail. In this work, a modified version of a traditional scatter plot is used to present height transformation patterns intuitively. This method of deriving categorical maps of topographical changes from a surface difference map can be used in similar studies to qualitatively interpret transformation. Slope maps and histograms were also used jointly to reveal additional patterns of topographic change.
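
    The five-way classification of the surface difference map can be sketched as follows. This is a minimal illustration, not the authors' code: sea cells are encoded here as elevations of 0 m or below, and the ±2 m significance cutoff is an assumed value.

```python
# Five categories of topographic change, following the abstract's
# surface-difference classification.
NOCHANGE, CUT, FILL, RECLAIMED, NEWWATER = range(5)

def categorize(dem_old, dem_new, cutoff=2.0):
    """Classify each cell of two co-registered DEMs (lists of rows, metres)."""
    out = []
    for row_old, row_new in zip(dem_old, dem_new):
        row = []
        for h0, h1 in zip(row_old, row_new):
            if h0 <= 0.0 and h1 > 0.0:
                row.append(RECLAIMED)   # land reclaimed from the sea
            elif h0 > 0.0 and h1 <= 0.0:
                row.append(NEWWATER)    # new water-covered area
            elif h1 - h0 > cutoff:
                row.append(FILL)        # valleys and swamps filled in
            elif h0 - h1 > cutoff:
                row.append(CUT)         # hill ranges cut down
            else:
                row.append(NOCHANGE)    # no significant height change
        out.append(row)
    return out
```

    The spatial-proximity step described in the abstract (grouping cut and fill into "creating inland flat areas" versus reclamation) would then operate on this categorical grid.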

  16. Using UML Modeling to Facilitate Three-Tier Architecture Projects in Software Engineering Courses

    ERIC Educational Resources Information Center

    Mitra, Sandeep

    2014-01-01

    This article presents the use of a model-centric approach to facilitate software development projects conforming to the three-tier architecture in undergraduate software engineering courses. Many instructors intend that such projects create software applications for use by real-world customers. While it is important that the first version of these…

  18. A data-driven parallel execution model and architecture for logic programs

    SciTech Connect

    Tseng, Chien-Chao.

    1989-01-01

    Logic programming has come to prominence in recent years after the decision of the Japanese Fifth Generation Project to adopt it as the kernel language. A significant number of research projects are attempting to implement different schemes to exploit the inherent parallelism in logic programs. The dataflow architectural model has been found to be attractive for the parallel execution of logic programs. In this research, five dataflow execution models available in the literature have been critically reviewed. The primary aim of the critical review was to establish a set of design issues critical to efficient execution. Based on the established design issues, the abstract data-driven machine model, named LogDf, is developed for parallel execution of logic programs. The execution scheme supports OR-parallelism, restricted AND-parallelism and stream parallelism. Multiple binding environments are represented using a stream-of-streams structure (S-stream). Eager evaluation is performed by passing binding environments between subgoal literals as S-streams, which are formed using non-strict constructors. The hierarchical multi-level stream structure provides a logical framework for distributing the streams to enhance parallelism in production/consumption as well as control of parallelism. The scheme for compiling the dataflow graphs, developed in this thesis, eliminates the necessity of any operand matching unit in the underlying dynamic dataflow architecture. In this thesis, an architecture for the abstract machine LogDf is also provided, and the performance evaluation of this model is based on this architecture.

  19. A Model Based Framework for Semantic Interpretation of Architectural Construction Drawings

    ERIC Educational Resources Information Center

    Babalola, Olubi Oluyomi

    2011-01-01

    The study addresses the automated translation of architectural drawings from 2D Computer Aided Drafting (CAD) data into a Building Information Model (BIM), with emphasis on the nature, possible role, and limitations of a drafting language Knowledge Representation (KR) on the problem and process. The central idea is that CAD to BIM translation is a…

  20. Can diversity in root architecture explain plant water use efficiency? A modeling study

    PubMed Central

    Tron, Stefania; Bodner, Gernot; Laio, Francesco; Ridolfi, Luca; Leitner, Daniel

    2015-01-01

    Drought stress is a dominant constraint to crop production. Breeding crops with adapted root systems for effective uptake of water represents a novel strategy to increase crop drought resistance. Due to complex interaction between root traits and high diversity of hydrological conditions, modeling provides important information for trait based selection. In this work we use a root architecture model combined with a soil-hydrological model to analyze whether there is a root system ideotype of general adaptation to drought or water uptake efficiency of root systems is a function of specific hydrological conditions. This was done by modeling transpiration of 48 root architectures in 16 drought scenarios with distinct soil textures, rainfall distributions, and initial soil moisture availability. We find that the efficiency in water uptake of root architecture is strictly dependent on the hydrological scenario. Even dense and deep root systems are not superior in water uptake under all hydrological scenarios. Our results demonstrate that mere architectural description is insufficient to find root systems of optimum functionality. We find that in environments with sufficient rainfall before the growing season, root depth represents the key trait for the exploration of stored water, especially in fine soils. Root density, instead, especially near the soil surface, becomes the most relevant trait for exploiting soil moisture when plant water supply is mainly provided by rainfall events during the root system development. We therefore concluded that trait based root breeding has to consider root systems with specific adaptation to the hydrology of the target environment. PMID:26412932

  1. A Model Based Framework for Semantic Interpretation of Architectural Construction Drawings

    ERIC Educational Resources Information Center

    Babalola, Olubi Oluyomi

    2011-01-01

    The study addresses the automated translation of architectural drawings from 2D Computer Aided Drafting (CAD) data into a Building Information Model (BIM), with emphasis on the nature, possible role, and limitations of a drafting language Knowledge Representation (KR) on the problem and process. The central idea is that CAD to BIM translation is a…

  2. Conformations of seven-membered rings: The Fourier transform model

    NASA Astrophysics Data System (ADS)

    Cano, F. H.; Foces-Foces, C.

    A representation of the puckered conformations of seven-membered rings, using the Fourier transform model and derived from the torsion angles, is presented in terms of two puckering amplitudes and their corresponding puckering phases. These four parameters are used to describe the main conformational types and to study the planarity of the rings, symmetrical forms, pseudorotation pathways and symmetrical interconversions through the puckering levels. This analysis provides a criterion for characterizing the basic conformations which have already been established by earlier work. A comparison with previous models is also given, and the representation is applied to some 1,4-benzodiazepine compounds.
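
    The torsion-angle Fourier analysis behind the two-amplitude/two-phase description can be sketched as follows. This is a minimal reconstruction under the common assumption that, for a seven-membered ring, the puckering is carried by the Fourier modes m = 2 and m = 3 of the seven endocyclic torsion angles; it is not the authors' code, and sign/phase conventions may differ from the paper's.

```python
import cmath
import math

def puckering_params(torsions_deg):
    """Fourier-analyse the endocyclic torsion angles of a ring:
    returns {m: (amplitude_deg, phase_deg)} for modes m = 2 and 3,
    i.e. the four parameters of the torsion-angle model."""
    n = len(torsions_deg)   # 7 for a seven-membered ring
    params = {}
    for m in (2, 3):
        # Discrete Fourier component at mode m, scaled so a pure
        # A*cos(2*pi*m*j/n + phi) input returns amplitude A and phase phi.
        c = sum(t * cmath.exp(-2j * math.pi * m * j / n)
                for j, t in enumerate(torsions_deg))
        c *= 2.0 / n
        params[m] = (abs(c), math.degrees(cmath.phase(c)))
    return params
```

    Feeding in torsions built from a single mode recovers that mode's amplitude and phase exactly, which is a quick way to check a convention against published values.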

  3. A biofilm model for prediction of pollutant transformation in sewers.

    PubMed

    Jiang, Feng; Leung, Derek Hoi-Wai; Li, Shiyu; Chen, Guang-Hao; Okabe, Satoshi; van Loosdrecht, Mark C M

    2009-07-01

    This study developed a new sewer biofilm model to simulate pollutant transformation and biofilm variation in sewers under aerobic, anoxic and anaerobic conditions. The biofilm model can describe the activities of heterotrophic, autotrophic and sulfate-reducing bacteria (SRB) in the biofilm as well as the variations in biofilm thickness, the spatial profiles of the SRB population and biofilm density. The model can describe dynamic biofilm growth, multiple biomass evolution and competition among organic oxidation, denitrification, nitrification, sulfate reduction and sulfide oxidation in a heterogeneous biofilm growing in a sewer. The model has been extensively verified by three different approaches, including direct verification by measurement of the spatial concentration profiles of dissolved oxygen, nitrate, ammonia and hydrogen sulfide in sewer biofilm. The spatial distribution profile of SRB in sewer biofilm was determined from fluorescence in situ hybridization (FISH) images taken with a confocal laser scanning microscope (CLSM) and was predicted well by the model.

  4. Designing Capital-Intensive Systems with Architectural and Operational Flexibility Using a Screening Model

    NASA Astrophysics Data System (ADS)

    Lin, Jijun; de Weck, Olivier; de Neufville, Richard; Robinson, Bob; MacGowan, David

    Development of capital intensive systems, such as offshore oil platforms or other industrial infrastructure, generally requires a significant amount of capital investment under various resource, technical, and market uncertainties. It is a very challenging task for development co-owners or joint ventures because important decisions, such as system architectures, have to be made while uncertainty remains high. This paper develops a screening model and a simulation framework to quickly explore the design space for complex engineering systems under uncertainty allowing promising strategies or architectures to be identified. Flexibility in systems’ design and operation is proposed as a proactive means to enable systems to adapt to future uncertainty. Architectural and operational flexibility can improve systems’ lifecycle value by mitigating downside risks and capturing upside opportunities. In order to effectively explore different flexible strategies addressing a view of uncertainty which changes with time, a computational framework based on Monte Carlo simulation is proposed in this paper. This framework is applied to study flexible development strategies for a representative offshore petroleum project. The complexity of this problem comes from multi-domain uncertainties, large architectural design space, and structure of flexibility decision rules. The results demonstrate that architectural and operational flexibility can significantly improve projects’ Expected Net Present Value (ENPV), reduce downside risks, and improve upside gains, compared to adopting an inflexible strategy appropriate to the view of uncertainty at the start of the project. In this particular case study, the most flexible strategy improves ENPV by 85% over an inflexible base case.
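
    The screening-model idea — valuing a flexible architecture by Monte Carlo simulation of an uncertainty-dependent decision rule and comparing its expected NPV against an inflexible base case — can be sketched in miniature. All prices, costs, thresholds and the price process below are invented for illustration and are not the paper's numbers.

```python
import math
import random

def simulate_enpv(flexible, n_runs=20000, seed=7):
    """Toy screening model: the project earns price * capacity each year
    over a 10-year life, discounted at 10%. The flexible architecture
    starts at half capacity and expands only if the year-3 price is high;
    the inflexible one commits full capacity up front."""
    rng = random.Random(seed)
    r = 0.10
    total = 0.0
    for _ in range(n_runs):
        price = 50.0
        cap = 1.0 if flexible else 2.0
        npv = -100.0 * cap                            # up-front capex per unit capacity
        for year in range(1, 11):
            price *= math.exp(rng.gauss(0.0, 0.25))   # lognormal price shock
            if flexible and year == 3 and price > 60.0:
                cap = 2.0                             # exercise the expansion option
                npv -= 110.0 / (1 + r) ** year        # deferred capex costs more
            npv += price * cap / (1 + r) ** year
        total += npv
    return total / n_runs
```

    A full screening model would also track downside measures (e.g. 10th-percentile NPV) across the simulated paths, since the paper's case for flexibility rests on risk mitigation as well as the mean.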

  5. Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cole, Bjorn F.; Jenkins, J. Steven

    2015-01-01

    In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remained separated from engineering design tools. With the Europa Clipper project, efforts are being taken to change the requirements development approach from a separate activity to one intimately embedded in formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in architecture definition.

  7. SASAgent: an agent based architecture for search, retrieval and composition of scientific models.

    PubMed

    Felipe Mendes, Luiz; Silva, Laryssa; Matos, Ely; Braga, Regina; Campos, Fernanda

    2011-07-01

    Scientific computing is a multidisciplinary field that goes beyond the use of the computer as a machine on which researchers write simple texts or presentations, or store analyses and results of their experiments. Because of the huge hardware/software resources invested in experiments and simulations, this new approach to scientific computing currently adopted by research groups is well represented by e-Science. This work proposes a new architecture based on intelligent agents to search, retrieve and compose simulation models generated in the context of research projects related to the biological domain. The SASAgent architecture is a multi-tier design comprising three main modules, in which the CelO ontology, represented mainly by the semantic knowledge base, satisfies requirements posed by e-Science projects. Preliminary results suggest that the proposed architecture is promising with respect to the requirements found in e-Science projects, considering mainly the biological domain. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Distributed model predictive control with hierarchical architecture for communication: application in automated irrigation channels

    NASA Astrophysics Data System (ADS)

    Farhadi, Alireza; Khodabandehlou, Ali

    2016-08-01

    This paper is concerned with a distributed model predictive control (DMPC) method based on a distributed optimisation method with a two-level architecture for communication. Feasibility (constraint satisfaction by the approximated solution), convergence and optimality of this distributed optimisation method are mathematically proved. For an automated irrigation channel, the satisfactory performance of the proposed DMPC method in attenuating the undesired upstream transient error propagation and amplification phenomenon is illustrated and compared with the performance of another DMPC method that exploits a single-level architecture for communication. It is illustrated that the DMPC method with a two-level communication architecture performs better by better managing communication overhead.

  9. Integrated Architectural Level Power-Performance Modeling Toolkit

    DTIC Science & Technology

    2004-08-20

    laptop) systems. We utilize the MET/Turandot toolkit originally developed at IBM TJ Watson Research Center as the underlying PowerPC...microarchitecture performance simulator [3]. Turandot is flexible enough to model a broad range of microarchitectures and has undergone extensive validation [3...In addition, Turandot has been augmented with power models to explore power-performance tradeoffs in an internal IBM tool called PowerTimer [4

  10. A microcomputer algorithm for solving compartmental models involving radionuclide transformations.

    PubMed

    Birchall, A

    1986-03-01

    An algorithm for solving first-order non-recycling compartment models is described. Given the initial amounts of a radioactive material in each compartment and the fundamental transfer rate constants between each compartment, the algorithm gives both the amount of material remaining at any time t and the integrated number of transformations that would occur up to time t. The method is analytical and, consequently, ideally suited for implementation on a microcomputer. For a typical microcomputer with 64 kilobytes of random access memory, a model containing up to 100 compartments, with any number of interconnecting translocation routes, can be solved in a few seconds, provided that no recycling occurs. An example computer program, written in 30 lines of Microsoft BASIC, is included in an appendix to demonstrate the use of the algorithm. A detailed description is included to show how the algorithm is modified to satisfy the requirements commonly encountered in compartment modelling, for example continuous intake, partitioning of activity, and transformations from radioactive progeny. Although the algorithm does not solve models involving recycling, it is often possible to represent such cases by a non-recycling model which is mathematically equivalent.
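
    For the non-recycling case the abstract describes, the analytical solution for a simple linear chain of compartments is the classical Bateman form. The sketch below assumes distinct rate constants and a pure chain (the paper's algorithm handles general branching non-recycling networks); the integrated-transformations quantity is checked here numerically rather than with the paper's closed-form integral.

```python
import math

def bateman(a0, rates, n, t):
    """Analytical amount in compartment n (1-based) of a linear chain
    1 -> 2 -> ... with distinct first-order rate constants `rates`,
    starting with amount a0 in compartment 1 and no recycling."""
    ks = rates[:n]
    flow = 1.0
    for k in ks[:-1]:
        flow *= k                      # product of upstream transfer rates
    total = 0.0
    for i, ki in enumerate(ks):
        denom = 1.0
        for j, kj in enumerate(ks):
            if j != i:
                denom *= kj - ki       # requires all rate constants distinct
        total += math.exp(-ki * t) / denom
    return a0 * flow * total

def transformations(a0, rates, n, t, dt=1e-3):
    """Integrated number of transformations in compartment n up to time t,
    by midpoint-rule quadrature of rate * amount (a numerical check of
    the analytic integral the paper uses)."""
    steps = int(t / dt)
    return sum(rates[n - 1] * bateman(a0, rates, n, (s + 0.5) * dt) * dt
               for s in range(steps))
```

    With n = 1 this reduces to plain exponential decay, and integrating the transformation rate over a long interval recovers the initial amount, which is the conservation property the algorithm exploits.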

  11. Use of the Chemical Transformation Simulator as a Parameterization Tool for Modeling the Environmental Fate of Organic Chemicals and their Transformation Products

    EPA Science Inventory

    A Chemical Transformation Simulator is a web-based system for predicting transformation pathways and physicochemical properties of organic chemicals. Role in Environmental Modeling • Screening tool for identifying likely transformation products in the environment • Parameteri...

  13. Analysis of trabecular bone architectural changes induced by osteoarthritis in rabbit femur using 3D active shape model and digital topology

    NASA Astrophysics Data System (ADS)

    Saha, P. K.; Rajapakse, C. S.; Williams, D. S.; Duong, L.; Coimbra, A.

    2007-03-01

    Osteoarthritis (OA) is the most common chronic joint disease, which causes the cartilage between the bone joints to wear away, leading to pain and stiffness. Currently, progression of OA is monitored by measuring joint space width using x-ray or cartilage volume using MRI. However, OA affects all periarticular tissues, including cartilage and bone. It has been shown previously that in animal models of OA, trabecular bone (TB) architecture is particularly affected. Furthermore, relative changes in architecture are dependent on the depth of the TB region with respect to the bone surface and main direction of load on the bone. The purpose of this study was to develop a new method for accurately evaluating 3D architectural changes induced by OA in TB. Determining the TB test domain that represents the same anatomic region across different animals is crucial for studying disease etiology, progression and response to therapy. It also represents a major technical challenge in analyzing architectural changes. Here, we solve this problem using a new active shape model (ASM)-based approach. A new and effective semi-automatic landmark selection approach has been developed for the rabbit distal femur surface that can easily be adopted for many other anatomical regions. It has been observed that, on average, a trained operator can complete the user interaction part of the landmark specification process in less than 15 minutes for each bone data set. Digital topological analysis and fuzzy distance transform derived parameters are used for quantifying TB architecture. The method has been applied on micro-CT data of excised rabbit femur joints from anterior cruciate ligament transected (ACLT) (n = 6) and sham (n = 9) operated groups collected at two and two-to-eight weeks post-surgery, respectively. An ASM of the rabbit right distal femur has been generated from the sham group micro-CT data.
The results suggest that, in conjunction with ASM, digital topological parameters are suitable for
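    The fuzzy distance transform mentioned above can be sketched compactly. The following is an illustrative, minimal 2-D implementation (the study works on 3-D micro-CT volumes), assuming the common formulation in which a step between neighbouring pixels costs the mean of their membership values times the step length; the function name and grid setup are mine, not the authors'.

```python
import heapq
import numpy as np

def fuzzy_distance_transform(membership):
    """Fuzzy distance transform of a 2-D membership image.

    A step from pixel p to neighbour q costs
    0.5 * (membership[p] + membership[q]) * |p - q|, so the FDT at a
    pixel is the cheapest path cost to the background (membership == 0).
    Computed here with Dijkstra's algorithm over the pixel grid.
    """
    h, w = membership.shape
    dist = np.full((h, w), np.inf)
    heap = []
    # Background pixels are sources with distance 0.
    for y in range(h):
        for x in range(w):
            if membership[y, x] == 0:
                dist[y, x] = 0.0
                heapq.heappush(heap, (0.0, y, x))
    root2 = 2.0 ** 0.5
    steps = [(-1, 0, 1.0), (1, 0, 1.0), (0, -1, 1.0), (0, 1, 1.0),
             (-1, -1, root2), (-1, 1, root2), (1, -1, root2), (1, 1, root2)]
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale heap entry
        for dy, dx, length in steps:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                cost = 0.5 * (membership[y, x] + membership[ny, nx]) * length
                if d + cost < dist[ny, nx]:
                    dist[ny, nx] = d + cost
                    heapq.heappush(heap, (dist[ny, nx], ny, nx))
    return dist
```

    On a binary object the FDT reduces to a chamfer-style distance to the background; with graded membership it weights path length by local "boneness", which is what makes it useful for quantifying trabecular thickness.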

  14. Transformational change in health care systems: an organizational model.

    PubMed

    Lukas, Carol VanDeusen; Holmes, Sally K; Cohen, Alan B; Restuccia, Joseph; Cramer, Irene E; Shwartz, Michael; Charns, Martin P

    2007-01-01

    The Institute of Medicine's 2001 report Crossing the Quality Chasm argued for fundamental redesign of the U.S. health care system. Six years later, many health care organizations have embraced the report's goals, but few have succeeded in making the substantial transformations needed to achieve those aims. This article offers a model for moving organizations from short-term, isolated performance improvements to sustained, reliable, organization-wide, and evidence-based improvements in patient care. Longitudinal comparative case studies were conducted in 12 health care systems using a mixed-methods evaluation design based on semistructured interviews and document review. Participating health care systems included seven systems funded through the Robert Wood Johnson Foundation's Pursuing Perfection Program and five systems with long-standing commitments to improvement and high-quality care. Five interactive elements appear critical to successful transformation of patient care: (1) Impetus to transform; (2) Leadership commitment to quality; (3) Improvement initiatives that actively engage staff in meaningful problem solving; (4) Alignment to achieve consistency of organization goals with resource allocation and actions at all levels of the organization; and (5) Integration to bridge traditional intra-organizational boundaries among individual components. These elements drive change by affecting the components of the complex health care organization in which they operate: (1) Mission, vision, and strategies that set its direction and priorities; (2) Culture that reflects its informal values and norms; (3) Operational functions and processes that embody the work done in patient care; and (4) Infrastructure such as information technology and human resources that support the delivery of patient care. Transformation occurs over time with iterative changes being sustained and spread across the organization. The conceptual model holds promise for guiding health care

  15. Models for Predicting the Architecture of Different Shoot Types in Apple

    PubMed Central

    Baïram, Emna; Delaire, Mickaël; Le Morvan, Christian; Buck-Sorlin, Gerhard

    2017-01-01

    In apple, the first-order branch of a tree has a characteristic architecture constituting three shoot types: bourses (rosettes), bourse shoots, and vegetative shoots. Its overall architecture as well as that of each shoot thus determines the distribution of sources (leaves) and sinks (fruits) and could have an influence on the amount of sugar allocated to fruits. Knowledge of architecture, in particular the position and area of leaves helps to quantify source strength. In order to reconstruct this initial architecture, rules equipped with allometric relations could be used: these allow predicting model parameters that are difficult to measure from simple traits that can be determined easily, non-destructively and directly in the orchard. Once such allometric relations are established they can be used routinely to recreate initial structures. Models based on allometric relations have been established in this study in order to predict the leaf areas of the three different shoot types of three apple cultivars with different branch architectures: “Fuji,” “Ariane,” and “Rome Beauty.” The allometric relations derived from experimental data allowed us to model the total shoot leaf area as well as the individual leaf area for each leaf rank, for each shoot type and each genotype. This was achieved using two easily measurable input variables: total leaf number per shoot and the length of the biggest leaf on the shoot. The models were tested using a different data set, and they were able to accurately predict leaf area of all shoot types and genotypes. Additional focus on internode lengths on spurs contributed to refine the models. PMID:28203241
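    The kind of allometric model described — predicting shoot leaf area from total leaf number and the length of the biggest leaf — can be sketched as a power-law fit in log space. The functional form, coefficient values, and synthetic data below are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np

# Hypothetical power-law allometry: total shoot leaf area (cm^2) as a
# function of leaf number N and longest-leaf length L (cm):
#     area = a * N**b * L**c
# Taking logs turns this into ordinary least squares.
rng = np.random.default_rng(0)
N = rng.integers(5, 20, size=40)          # leaves per shoot
L = rng.uniform(4.0, 12.0, size=40)       # longest-leaf length, cm
a_true, b_true, c_true = 0.8, 1.0, 1.7    # synthetic "ground truth"
area = a_true * N**b_true * L**c_true * rng.lognormal(0.0, 0.05, 40)

X = np.column_stack([np.ones(len(N)), np.log(N), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(area), rcond=None)
log_a, b, c = coef
print(np.exp(log_a), b, c)  # estimates close to 0.8, 1.0, 1.7
```

    Once coefficients are estimated from destructively sampled shoots, the same two easily measured inputs suffice to reconstruct leaf area non-destructively in the orchard, which is the point of the approach.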

  16. Implementation of Remaining Useful Lifetime Transformer Models in the Fleet-Wide Prognostic and Health Management Suite

    SciTech Connect

    Agarwal, Vivek; Lybeck, Nancy J.; Pham, Binh; Rusaw, Richard; Bickford, Randall

    2015-02-01

    Research and development efforts are required to address aging and reliability concerns of the existing fleet of nuclear power plants. As most plants continue to operate beyond the license life (i.e., towards 60 or 80 years), plant components are more likely to incur age-related degradation mechanisms. To assess and manage the health of aging plant assets across the nuclear industry, the Electric Power Research Institute has developed a web-based Fleet-Wide Prognostic and Health Management (FW-PHM) Suite for diagnosis and prognosis. FW-PHM is a set of web-based diagnostic and prognostic tools and databases, comprised of the Diagnostic Advisor, the Asset Fault Signature Database, the Remaining Useful Life Advisor, and the Remaining Useful Life Database, that serves as an integrated health monitoring architecture. The main focus of this paper is the implementation of prognostic models for generator step-up transformers in the FW-PHM Suite. One prognostic model discussed is based on the functional relationship between degree of polymerization (the most commonly used metric for assessing the health of the winding insulation in a transformer) and furfural concentration in the insulating oil. The other model is based on thermal-induced degradation of the transformer insulation. By utilizing transformer loading information, established thermal models are used to estimate the hot spot temperature inside the transformer winding. Both models are implemented in the Remaining Useful Life Database of the FW-PHM Suite. The Remaining Useful Life Advisor utilizes the implemented prognostic models to estimate the remaining useful life of the paper winding insulation in the transformer based on actual oil testing and operational data.
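    The abstract does not spell out the thermal-degradation model, but established thermal models for transformer insulation typically use an Arrhenius-type relation between winding hot-spot temperature and aging rate. A minimal sketch of the aging acceleration factor in the form used by IEEE Std C57.91 for thermally upgraded paper (parameter values per that standard; whether this exact form is used in the FW-PHM Suite is an assumption):

```python
import math

def aging_acceleration_factor(hotspot_c, ref_c=110.0, b=15000.0):
    """Arrhenius-type insulation aging acceleration factor.

    F_AA = exp(B / T_ref - B / T_hs), with temperatures in kelvin.
    F_AA equals 1 at the 110 degC reference hot-spot temperature;
    insulation ages faster (F_AA > 1) above it and slower below it.
    """
    return math.exp(b / (ref_c + 273.0) - b / (hotspot_c + 273.0))
```

    Integrating this factor over a load-dependent hot-spot temperature history yields equivalent aging hours, from which remaining insulation life can be estimated.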

  17. Evaluating models of community psychology: social transformation in South Africa.

    PubMed

    Edwards, Steve

    2002-01-01

    Trickett (1996) described community psychology in terms of contexts of diversity within a diversity of contexts. As abstract representations of reality, various community psychological models provide further diverse contexts through which to view the diversity of community psychological reality. The Zululand Community Psychology Project is a South African initiative aimed at improving community life. This includes treating the violent sequelae of the unjust Apartheid system through improving relationships among communities divided in terms of historical, colonial, racial, ethnic, political, gender, and other boundaries as well as promoting health and social change. The aim of this article is to evaluate the applicability of various models of community psychology used in this project. The initial quantitative investigation in the Zululand Community Psychology Project involved five coresearchers, who evaluated five community psychology models--the mental health, social action, organizational, ecological, and phenomenological models--in terms of their differential applicability in three partnership centers, representing health, education, and business sectors of the local community. In all three contexts, the models were rank ordered by a representative of each center, an intern community psychologist, and his supervisor in terms of the models' respective applicability to the particular partnership center concerned. Results indicated significant agreement with regard to the differential applicability of the mental health, phenomenological, and organizational models in the health, education, and business centers respectively, with the social action model being most generally applicable across all centers. This led to a further qualitative individual and focus group investigation with eight university coresearchers into the experience of social transformation with special reference to social changes needed in the South African context. These social transformation

  18. The Laminar Cortex Model: A New Continuum Cortex Model Incorporating Laminar Architecture

    PubMed Central

    Du, Jiaxin; Vegh, Viktor; Reutens, David C.

    2012-01-01

    Local field potentials (LFPs) are widely used to study the function of local networks in the brain. They are also closely correlated with the blood-oxygen-level-dependent signal, the predominant contrast mechanism in functional magnetic resonance imaging. We developed a new laminar cortex model (LCM) to simulate the amplitude and frequency of LFPs. Our model combines the laminar architecture of the cerebral cortex and multiple continuum models to simulate the collective activity of cortical neurons. The five cortical layers (layer I, II/III, IV, V, and VI) are simulated as separate continuum models between which there are synaptic connections. The LCM was used to simulate the dynamics of the visual cortex under different conditions of visual stimulation. LFPs are reported for two kinds of visual stimulation: general visual stimulation and intermittent light stimulation. The power spectra of LFPs were calculated and compared with existing empirical data. The LCM was able to produce spontaneous LFPs exhibiting frequency-inverse (1/f) power spectrum behaviour. Laminar profiles of current source density showed similarities to experimental data. General stimulation enhanced the oscillation of LFPs corresponding to gamma frequencies. During simulated intermittent light stimulation, the LCM captured the fundamental as well as high order harmonics as previously reported. The power spectrum expected with a reduction in layer IV neurons, often observed with focal cortical dysplasias associated with epilepsy, was also simulated. PMID:23093925
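    The 1/f spectral behaviour the LCM reproduces can be checked numerically: generate pink noise by spectral shaping and verify that the periodogram slope on log-log axes is near -1. This sketch is a generic illustration of that diagnostic, not the LCM itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2 ** 16
# Shape white Gaussian noise to a 1/f power spectrum: divide the
# amplitude spectrum by sqrt(f), so power scales as 1/f.
freqs = np.fft.rfftfreq(n, d=1e-3)            # 1 kHz sampling
spectrum = np.fft.rfft(rng.standard_normal(n))
spectrum[1:] /= np.sqrt(freqs[1:])
spectrum[0] = 0.0                             # drop the DC component
signal = np.fft.irfft(spectrum, n)

# Estimate the spectral slope from the periodogram by a log-log fit
# over a 1-100 Hz band.
power = np.abs(np.fft.rfft(signal)) ** 2
band = (freqs > 1.0) & (freqs < 100.0)
slope = np.polyfit(np.log(freqs[band]), np.log(power[band]), 1)[0]
print(round(slope, 2))  # close to -1 for 1/f ("pink") noise
```

    The same log-log slope estimate is the standard way to quantify whether simulated LFPs exhibit the 1/f behaviour seen empirically.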

  19. Statistical modeling of nitrogen-dependent modulation of root system architecture in Arabidopsis thaliana.

    PubMed

    Araya, Takao; Kubo, Takuya; von Wirén, Nicolaus; Takahashi, Hideki

    2016-03-01

    Plant root development is strongly affected by nutrient availability. Despite the importance of structure and function of roots in nutrient acquisition, statistical modeling approaches to evaluate dynamic and temporal modulations of root system architecture in response to nutrient availability have remained as widely open and exploratory areas in root biology. In this study, we developed a statistical modeling approach to investigate modulations of root system architecture in response to nitrogen availability. Mathematical models were designed for quantitative assessment of root growth and root branching phenotypes and their dynamic relationships based on hierarchical configuration of primary and lateral roots formulating the fishbone-shaped root system architecture in Arabidopsis thaliana. Time-series datasets reporting dynamic changes in root developmental traits on different nitrate or ammonium concentrations were generated for statistical analyses. Regression analyses unraveled key parameters associated with: (i) inhibition of primary root growth under nitrogen limitation or on ammonium; (ii) rapid progression of lateral root emergence in response to ammonium; and (iii) inhibition of lateral root elongation in the presence of excess nitrate or ammonium. This study provides a statistical framework for interpreting dynamic modulation of root system architecture, supported by meta-analysis of datasets displaying morphological responses of roots to diverse nitrogen supplies. © 2015 Institute of Botany, Chinese Academy of Sciences.

  20. Deep Phenotyping of Coarse Root Architecture in R. pseudoacacia Reveals That Tree Root System Plasticity Is Confined within Its Architectural Model

    PubMed Central

    Danjon, Frédéric; Khuder, Hayfa; Stokes, Alexia

    2013-01-01

    This study aims at assessing the influence of slope angle and multi-directional flexing and their interaction on the root architecture of Robinia pseudoacacia seedlings, with a particular focus on architectural model and trait plasticity. Thirty-six trees were grown from seed in containers inclined at 0° (control) or 45° (slope) in a glasshouse. The shoots of half the plants were gently flexed for 5 minutes a day. After 6 months, root systems were excavated and digitized in 3D, and biomass measured. Over 100 root architectural traits were determined. Both slope and flexing significantly increased plant size. Non-flexed trees on 45° slopes developed shallow roots which were largely aligned perpendicular to the slope. Compared to the controls, flexed trees on 0° slopes possessed a shorter and thicker taproot held in place by regularly distributed long and thin lateral roots. Flexed trees on the 45° slope also developed a thick vertically aligned taproot, with more volume allocated to upslope surface lateral roots, due to the greater soil volume uphill. We show that there is an inherent root system architectural model, but that a certain number of traits are highly plastic. This plasticity will permit root architectural design to be modified depending on external mechanical signals perceived by young trees. PMID:24386227

  1. Deep phenotyping of coarse root architecture in R. pseudoacacia reveals that tree root system plasticity is confined within its architectural model.

    PubMed

    Danjon, Frédéric; Khuder, Hayfa; Stokes, Alexia

    2013-01-01

    This study aims at assessing the influence of slope angle and multi-directional flexing and their interaction on the root architecture of Robinia pseudoacacia seedlings, with a particular focus on architectural model and trait plasticity. Thirty-six trees were grown from seed in containers inclined at 0° (control) or 45° (slope) in a glasshouse. The shoots of half the plants were gently flexed for 5 minutes a day. After 6 months, root systems were excavated and digitized in 3D, and biomass measured. Over 100 root architectural traits were determined. Both slope and flexing significantly increased plant size. Non-flexed trees on 45° slopes developed shallow roots which were largely aligned perpendicular to the slope. Compared to the controls, flexed trees on 0° slopes possessed a shorter and thicker taproot held in place by regularly distributed long and thin lateral roots. Flexed trees on the 45° slope also developed a thick vertically aligned taproot, with more volume allocated to upslope surface lateral roots, due to the greater soil volume uphill. We show that there is an inherent root system architectural model, but that a certain number of traits are highly plastic. This plasticity will permit root architectural design to be modified depending on external mechanical signals perceived by young trees.

  2. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    SciTech Connect

    Noonan, Nicholas James

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  3. Accounting for nonlinear material characteristics in modeling ferroresonant transformers

    NASA Astrophysics Data System (ADS)

    Voisine, J. T.

    1985-04-01

    A mathematical model relating core material properties, including nonlinear magnetization characteristics, to the performance of ferroresonant transformers has been developed. In accomplishing this, other factors such as fabrication destruction factors, leakage flux, air gap characteristics, loading, and coil resistances and self-inductances are also accounted for. From a material manufacturer's view, knowing such information facilitates isolating sources of performance variations between units of similar design and is therefore highly desirable. The model predicts the primary induction necessary to establish a specified secondary induction and determines peak induction at other points in the magnetic circuit. A study comparing the model with a transformer indicated that each predicted peak induction was within ±5% of the corresponding measured peak induction. A generalized 4-node magnetic circuit having two shunt paths was chosen and modeled. Such a circuit is easily modified facilitating the analyses of numerous other core designs. A computer program designed to run on an HP-41 programmable calculator was also developed and is briefly described.
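    A toy version of the core calculation — solving a nonlinear magnetic circuit for the induction produced by a given MMF — illustrates the approach. The magnetization curve, core dimensions, and saturation exponent below are invented for illustration; the paper's 4-node, two-shunt-path circuit is considerably more elaborate:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space, H/m

def core_h(b, a=80.0, b_sat=1.7, n=9):
    """Illustrative nonlinear magnetization curve H(B) in A/m:
    roughly linear at low induction, saturating sharply near b_sat (T)."""
    return a * b + 120.0 * (b / b_sat) ** n

def flux_density_for_mmf(mmf, l_core=0.25, l_gap=0.5e-3):
    """Solve H(B)*l_core + (B/mu0)*l_gap = mmf for B (tesla).

    The left side is strictly increasing in B, so simple bisection
    on [0, 3] T converges reliably despite the nonlinearity.
    """
    lo, hi = 0.0, 3.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        f = core_h(mid) * l_core + mid / MU0 * l_gap
        if f < mmf:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Evaluating such a circuit at several nodes is what lets a model of this kind report peak induction at different points in the magnetic path for a specified excitation.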

  4. From data to the decision: A software architecture to integrate predictive modelling in clinical settings.

    PubMed

    Martinez-Millana, A; Fernandez-Llatas, C; Sacchi, L; Segagni, D; Guillen, S; Bellazzi, R; Traver, V

    2015-08-01

    The application of statistics and mathematics over large amounts of data is providing healthcare systems with new tools for screening and managing multiple diseases. Nonetheless, these tools have many technical and clinical limitations as they are based on datasets with concrete characteristics. This proposition paper describes a novel architecture focused on providing a validation framework for discrimination and prediction models in the screening of Type 2 diabetes. For that, the architecture has been designed to gather different data sources under a common data structure and, furthermore, to be controlled by a centralized component (Orchestrator) in charge of directing the interaction flows among data sources, models and graphical user interfaces. This innovative approach aims to overcome the data-dependency of the models by providing a validation framework for the models as they are used within clinical settings.

  5. Architecture and Programming Models for High Performance Intensive Computation

    DTIC Science & Technology

    2016-06-29

    Dynamic Data-Driven Application Systems (DDDAS). The foremost requirement for efficient operation of DDDAS is flexibility and efficiency of managing memory and processing resources for an operating environment...model for all data objects as trees of fixed-size memory chunks, and provides a built-in scheduler for tasks that execute codelets. Each data object is

  6. Healthcare information system architecture (HISA) and its middleware models.

    PubMed

    Scherrer, J R; Spahni, S

    1999-01-01

    The use of middleware to develop widely distributed healthcare information systems (HIS) has become inevitable. However, the fact that many different platforms, even sometimes heterogeneous to each other, are hooked into the same network makes the integration of various middleware components more difficult than some might believe. This paper discusses the HISA standard and proposes extensions to the model that, in turn, could be compliant with other various existing distributed platforms and their middleware components.

  7. Mapping a Domain Model and Architecture to a Generic Design

    DTIC Science & Technology

    1994-05-01

    software engineering life cycle entitled Model-Based Software Engineering (MBSE), a concept first described by the SEI in [Feller 93]. MBSE enables organizations to build software applications which must evolve with a minimum of rework and scrap to meet changes in mission and technology. MBSE involves...software models are also built. MBSE is a focus area for the SEI's Engineering Techniques Program and is the subject of a recent SEI report [Withey 94

  8. Coupling root architecture and pore network modeling - an attempt towards better understanding root-soil interactions

    NASA Astrophysics Data System (ADS)

    Leitner, Daniel; Bodner, Gernot; Raoof, Amir

    2013-04-01

    Understanding root-soil interactions is of high importance for environmental and agricultural management. Root uptake is an essential component in water and solute transport modeling. The amount of groundwater recharge and solute leaching significantly depends on the demand-based plant extraction via its root system. Plant uptake however not only responds to the potential demand, but in most situations is limited by supply from the soil. The ability of the plant to access water and solutes in the soil is governed mainly by root distribution. Particularly under conditions of heterogeneous distribution of water and solutes in the soil, it is essential to capture the interaction between soil and roots. Root architecture models allow studying plant uptake from soil by describing growth and branching of root axes in the soil. Currently, root architecture models are able to respond dynamically to water and nutrient distribution in the soil by directed growth (tropism), modified branching and enhanced exudation. The porous soil medium as rooting environment in these models is generally described by classical macroscopic water retention and sorption models, averaged over the pore scale. In our opinion this simplified description of the root growth medium implies several shortcomings for better understanding root-soil interactions: (i) It is well known that roots grow preferentially in preexisting pores, particularly in more rigid/dry soil. Thus the pore network contributes to the architectural form of the root system; (ii) roots themselves can influence the pore network by creating preferential flow paths (biopores) which are an essential element of structural porosity with strong impact on transport processes; (iii) plant uptake depends on both the spatial location of water/solutes in the pore network and the spatial distribution of roots.
We therefore consider that for advancing our understanding in root-soil interactions, we need not only to extend our root models

  9. Research on mixed network architecture collaborative application model

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, rapid development of geographical business and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (whether the Client/Server or Browser/Server model) does not support this well. Collaborative application is one good resolution. Collaborative applications have four main problems to resolve: consistency and co-edit conflict, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, initiative and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation. Agents bring new methods for cooperation and for access to spatial data. A multi-level cache holds part of the full data; it reduces the network load and improves the access and handling of spatial data, especially when editing spatial data. With agent technology, we make full use of its intelligence for managing the cache and cooperative editing, which brings a new method for distributed cooperation and improves efficiency.

  10. Role of System Architecture in Architecture in Developing New Drafting Tools

    NASA Astrophysics Data System (ADS)

    Sorguç, Arzu Gönenç

    In this study, the impact of information technologies on the architectural design process is discussed. In this discussion, first the differences/nuances between the concepts of software engineering and system architecture are clarified. Then, the design process in engineering and the design process in architecture are compared, considering 3-D models as the center of the design process over which the other disciplines are involved in the design. It is pointed out that in many high-end engineering applications, 3-D solid models and consequently the digital mock-up concept have become common practice. But architecture, as one of the important customers of CAD systems employing these tools, has not started to use these 3-D models. It is shown that the reason for this time lag between architecture and engineering lies in the tradition of design attitude. Therefore, a new design scheme, a meta-model, is proposed to develop an integrated design model centered on the 3-D model. A system architecture is also proposed to achieve the transformation of the architectural design process by replacing 2-D thinking with 3-D thinking. In the proposed system architecture, the CAD systems are included and adapted for 3-D architectural design in order to provide interfaces for integration of all possible disciplines into the design process. It is also shown that such a change will allow the intelligent or smart building concept to be elaborated in the future.

  11. Development and validation of a tokamak skin effect transformer model

    NASA Astrophysics Data System (ADS)

    Romero, J. A.; Moret, J.-M.; Coda, S.; Felici, F.; Garrido, I.

    2012-02-01

    A lumped parameter, state space model for a tokamak transformer including the slow flux penetration in the plasma (skin effect transformer model) is presented. The model does not require detailed or explicit information about plasma profiles or geometry. Instead, this information is lumped in system variables, parameters and inputs. The model has an exact mathematical structure built from energy and flux conservation theorems, predicting the evolution and non-linear interaction of plasma current and internal inductance as functions of the primary coil currents, plasma resistance, non-inductive current drive and the loop voltage at a specific location inside the plasma (equilibrium loop voltage). Loop voltage profile in the plasma is substituted by a three-point discretization, and ordinary differential equations are used to predict the equilibrium loop voltage as a function of the boundary and resistive loop voltages. This provides a model for equilibrium loop voltage evolution, which is reminiscent of the skin effect. The order and parameters of this differential equation are determined empirically using system identification techniques. Fast plasma current modulation experiments with random binary signals have been conducted in the TCV tokamak to generate the required data for the analysis. Plasma current was modulated under ohmic conditions between 200 and 300 kA with 30 ms rise time, several times faster than its time constant L/R ≈ 200 ms. A second-order linear differential equation for equilibrium loop voltage is sufficient to describe the plasma current and internal inductance modulation with 70% and 38% fit parameters, respectively. The model explains the most salient features of the plasma current transients, such as the inverse correlation between plasma current ramp rates and internal inductance changes, without requiring detailed or explicit information about resistivity profiles. This proves that a lumped parameter modelling approach can be used to
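    The empirical identification step — fitting a low-order linear difference equation to modulation data — can be sketched with an ordinary least-squares ARX fit. The system coefficients, input, and signal lengths below are synthetic stand-ins, not TCV data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulate a known stable second-order discrete-time system,
#     y[k] = a1*y[k-1] + a2*y[k-2] + b0*u[k-1],
# driven by a random binary input, mimicking the modulation experiments.
a1, a2, b0 = 1.5, -0.56, 0.2
u = np.where(rng.random(2000) > 0.5, 1.0, -1.0)
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + b0 * u[k - 1]
y += 0.01 * rng.standard_normal(len(y))   # small measurement noise

# Least-squares fit of the same second-order ARX structure.
phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
est, *_ = np.linalg.lstsq(phi, y[2:], rcond=None)
print(est)  # estimates close to [1.5, -0.56, 0.2]
```

    The order of the fitted difference equation is chosen by comparing fit quality across candidate orders, which is the system-identification step the abstract describes for the equilibrium loop voltage.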

  12. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity.

    PubMed

    Malinin, Laura H

    2015-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to lack of suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person's interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity.

  13. Modeling the impact of scaffold architecture and mechanical loading on collagen turnover in engineered cardiovascular tissues.

    PubMed

    Argento, G; de Jonge, N; Söntjens, S H M; Oomens, C W J; Bouten, C V C; Baaijens, F P T

    2015-06-01

    The anisotropic collagen architecture of an engineered cardiovascular tissue has a major impact on its in vivo mechanical performance. This evolving collagen architecture is determined by initial scaffold microstructure and mechanical loading. Here, we developed and validated a theoretical and computational microscale model to quantitatively understand the interplay between scaffold architecture and mechanical loading on collagen synthesis and degradation. Using input from experimental studies, we hypothesize that both the microstructure of the scaffold and the loading conditions influence collagen turnover. The evaluation of the mechanical and topological properties of in vitro engineered constructs reveals that the formation of extracellular matrix layers on top of the scaffold surface influences the mechanical anisotropy on the construct. Results show that the microscale model can successfully capture the collagen arrangement between the fibers of an electrospun scaffold under static and cyclic loading conditions. Contact guidance by the scaffold, and not applied load, dominates the collagen architecture. Therefore, when the collagen grows inside the pores of the scaffold, pronounced scaffold anisotropy guarantees the development of a construct that mimics the mechanical anisotropy of the native cardiovascular tissue.
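    At its simplest, collagen turnover of the kind modelled here balances synthesis against first-order degradation. A minimal sketch with made-up rate constants follows; the actual microscale model couples these rates to scaffold contact guidance and applied load, which this single-pool version does not attempt:

```python
import numpy as np

def collagen_mass(t, k_syn=1.0, k_deg=0.2, c0=0.0):
    """Single-pool collagen turnover dC/dt = k_syn - k_deg * C.

    Solved analytically: C(t) relaxes exponentially from c0 toward
    the steady state k_syn / k_deg, with time constant 1 / k_deg.
    """
    c_inf = k_syn / k_deg
    return c_inf + (c0 - c_inf) * np.exp(-k_deg * t)
```

    Making k_syn depend on local fiber orientation and cyclic strain, pool by pool, is one plausible way such a turnover law extends into the anisotropic microscale setting the abstract describes.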

  14. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity

    PubMed Central

    Malinin, Laura H.

    2016-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to lack of suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person’s interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity. PMID:26779087

  15. Objective Evaluation of Sensor Web Modeling and Data System Architectures

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Atlas, R. M.; Ardizzone, J.; Kemp, E. M.; Talabac, S.

    2013-12-01

    We discuss the recent development of an end-to-end simulator designed to quantitatively assess the scientific value of incorporating model- and event-driven "sensor web" capabilities into future NASA Earth Science missions. The intent is to provide an objective analysis tool for performing engineering and scientific trade studies in which new technologies are introduced. In the case study presented here we focus on meteorological applications in which a numerical model is used to intelligently schedule data collection by space-based assets. Sensor web observing systems that enable dynamic targeting by various observing platforms have the potential to significantly improve our ability to monitor, understand, and predict the evolution of rapidly evolving, transient, or variable meteorological events. The use case focuses on landfalling hurricanes and was selected due to the obvious societal impact and the ongoing need to improve warning times. Although hurricane track prediction has improved over the past several decades, further improvement is necessary in the prediction of hurricane intensity. We selected a combination of future observing platforms to apply sensor web measurement techniques: global 3D lidar winds, next-generation scatterometer ocean vector winds, and high resolution cloud motion vectors from GOES-R. Targeting of the assets by a numerical model would allow the spacecraft to change its attitude by performing a roll maneuver to enable off-nadir measurements to be acquired. In this study, synthetic measurements were derived through Observing System Simulation Experiments (OSSEs) and enabled in part through the Doppler Lidar Simulation Model developed by Simpson Weather Associates.
We describe the capabilities of the simulator through three different sensor web configurations of the wind lidar: winds obtained from a nominal "survey mode" operation, winds obtained with a reduced duty cycle of the lidar (designed for preserving the life of the instrument

  16. Executable Architectures for Modeling Command and Control Processes

    DTIC Science & Technology

    2006-06-01

    …of introducing new NCES capabilities (such as the Federated Search) to the ‘To Be’ model. …Conventional Method for SME Discovery; ToBe.JCAS.3.2 Send Alert and/or Request; ToBe.JCAS.3.4 Employ Federated Search for CAS-related Info; JCAS.1.3.6.13… instant messaging, web browser, etc. • Federated Search – this capability provides a way to search enterprise contents across various search-enabled…

  17. Architectural Improvements and New Processing Tools for the Open XAL Online Model

    SciTech Connect

    Allen, Christopher K; Pelaia II, Tom; Freed, Jonathan M

    2015-01-01

    The online model is the component of Open XAL providing accelerator modeling, simulation, and dynamic synchronization to live hardware. Significant architectural changes and feature additions have been recently made in two separate areas: 1) the managing and processing of simulation data, and 2) the modeling of RF cavities. Simulation data and data processing have been completely decoupled. A single class manages all simulation data while standard tools were developed for processing the simulation results. RF accelerating cavities are now modeled as composite structures where parameter and dynamics computations are distributed. The beam and hardware models both maintain their relative phase information, which allows for dynamic phase slip and elapsed time computation.

  18. A service-oriented architecture for integrating the modeling and formal verification of genetic regulatory networks

    PubMed Central

    2009-01-01

    Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, at both the conceptual and implementation levels. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees transparent access to formal verification technology for modelers of genetic regulatory networks. PMID:20042075

  19. Culture models of human mammary epithelial cell transformation

    SciTech Connect

    Stampfer, Martha R.; Yaswen, Paul

    2000-11-10

    Human pre-malignant breast diseases, particularly ductal carcinoma in situ (DCIS), already display several of the aberrant phenotypes found in primary breast cancers, including chromosomal abnormalities, telomerase activity, inactivation of the p53 gene and overexpression of some oncogenes. Efforts to model early breast carcinogenesis in human cell cultures have largely involved studies of the in vitro transformation of normal finite-lifespan human mammary epithelial cells (HMEC) to immortality and malignancy. We present a model of HMEC immortal transformation consistent with the known in vivo data. This model includes a recently described, presumably epigenetic process, termed conversion, which occurs in cells that have overcome stringent replicative senescence and are thus able to maintain proliferation with critically short telomeres. The conversion process involves reactivation of telomerase activity, and acquisition of good uniform growth in both the absence and presence of TGF-β. We propose that overcoming the proliferative constraints set by senescence, and undergoing conversion, represent key rate-limiting steps in human breast carcinogenesis, and occur during early-stage breast cancer progression.

  20. Assessment of Mechanical Performance of Bone Architecture Using Rapid Prototyping Models

    NASA Astrophysics Data System (ADS)

    Saparin, Peter; Woesz, Alexander; Thomsen, Jasper S.; Fratzl, Peter

    2008-06-01

    The aim of this on-going research project is to assess the influence of bone microarchitecture on the mechanical performance of trabecular bone. A testing chain consisting of three steps was established: 1) micro computed tomography (μCT) imaging of human trabecular bone; 2) building of models of the bone from a light-sensitive polymer using Rapid Prototyping (RP); 3) mechanical testing of the models in a material testing machine. A direct resampling procedure was developed to convert μCT data into the format of the RP machine. Standardized parameters for production and testing of the plastic models were established by use of regular cellular structures. Next, normal, osteoporotic, and extreme osteoporotic vertebral trabecular bone architectures were reproduced by RP and compression tested. We found that the normal architecture of vertebral trabecular bone exhibits behaviour characteristic of a cellular structure. In normal bone the fracture occurs at much higher strain values than in osteoporotic bone. After the fracture, a normal trabecular architecture is able to carry much higher loads than an osteoporotic architecture. However, no statistically significant differences were found in maximal stress during uniaxial compression of the central part of normal, osteoporotic, and extreme osteoporotic vertebral trabecular bone. This supports the hypothesis that osteoporotic trabecular bone can compensate for a loss of trabeculae by thickening the remaining trabeculae in the loading direction (compensatory hypertrophy). The developed approach could be used for the mechanical evaluation of structural data acquired non-invasively and for the assessment of changes in the performance of bone architecture.

  1. Diagnostic and Prognostic Models for Generator Step-Up Transformers

    SciTech Connect

    Vivek Agarwal; Nancy J. Lybeck; Binh T. Pham

    2014-09-01

    In 2014, the online monitoring (OLM) of active components project under the Light Water Reactor Sustainability program at Idaho National Laboratory (INL) focused on diagnostic and prognostic capabilities for generator step-up (GSU) transformers. INL worked with subject matter experts from the Electric Power Research Institute (EPRI) to augment and revise the GSU fault signatures previously implemented in EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Two prognostic models were identified and implemented for GSUs in the FW-PHM Suite software. INL and EPRI demonstrated the use of prognostic capabilities for GSUs. The complete set of fault signatures developed for GSUs in the Asset Fault Signature Database of the FW-PHM Suite is presented in this report. Two prognostic models are described for paper insulation: the Chendong model for degree of polymerization, and an IEEE model that uses a loading profile to calculate life consumption based on hot-spot winding temperatures. Both models are life consumption models, which are examples of type II prognostic models. Use of the models in the FW-PHM Suite was successfully demonstrated at the August 2014 Utility Working Group Meeting in Idaho Falls, Idaho, to representatives from utilities, EPRI, and the Halden Research Project.
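
    The two insulation life models named above can be sketched in a few lines. This is a minimal illustration in the spirit of the IEEE loading-guide aging model and the Chendong furfural-to-DP correlation; the constants (110 °C reference hot-spot, B = 15000 K, 180,000 h normal insulation life) and the sample loading profile are illustrative assumptions, not values taken from the report.

```python
import math

def aging_acceleration(theta_hs_c, theta_ref_c=110.0, b=15000.0):
    # IEEE C57.91-style aging acceleration factor relative to the
    # reference hot-spot temperature (110 C for thermally upgraded paper)
    return math.exp(b / (theta_ref_c + 273.0) - b / (theta_hs_c + 273.0))

def life_consumed_hours(loading_profile):
    # loading_profile: iterable of (hot_spot_temp_C, duration_h) intervals;
    # a type II life-consumption model sums accelerated hours over the profile
    return sum(aging_acceleration(t) * dt for t, dt in loading_profile)

def dp_from_furfural(fur_mg_l):
    # Chendong-style correlation: log10(2-FAL in mg/L) = 1.51 - 0.0035 * DP
    return (1.51 - math.log10(fur_mg_l)) / 0.0035

# hypothetical 24-hour loading profile: (hot-spot temperature C, hours)
profile = [(98.0, 8.0), (110.0, 12.0), (120.0, 4.0)]
consumed = life_consumed_hours(profile)
fraction = consumed / 180_000.0  # illustrative normal insulation life, hours
```

    Running hotter than the reference temperature consumes life faster than real time (acceleration factor above 1), and cooler intervals consume it more slowly, which is the essence of a loading-profile-driven prognostic.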

  2. Modeling, construction and experimental validation of actuated rolling dynamics of the cylindrical Transforming Roving-Rolling Explorer (TRREx)

    NASA Astrophysics Data System (ADS)

    Edwin, L.; Mazzoleni, A.; Gemmer, T.; Ferguson, S.

    2017-03-01

    Planetary surface exploration technology over the past few years has seen significant advancements on multiple fronts. Robotic exploration platforms are becoming more sophisticated and capable of embarking on more challenging missions. More unconventional designs, particularly transforming architectures that have multiple modes of locomotion, are being studied. This work explores the capabilities of one such novel transforming rover called the Transforming Roving-Rolling Explorer (TRREx). Biologically inspired by the armadillo and the golden-wheel spider, the TRREx has two modes of locomotion: it can traverse on six wheels like a conventional rover on benign terrain, but can transform into a sphere when necessary to negotiate steep rugged slopes. The ability to self-propel in the spherical configuration, even in the absence of a negative gradient, increases the TRREx's versatility and its concept value. This paper describes construction and testing of a prototype cylindrical TRREx that demonstrates that "actuated rolling" can be achieved, and also presents a dynamic model of this prototype version of the TRREx that can be used to investigate the feasibility and value of such self-propelled locomotion. Finally, we present results that validate our dynamic model by comparing results from computer simulations made using the dynamic model to experimental results acquired from test runs using the prototype.

  3. Operations Assessment of Launch Vehicle Architectures using Activity Based Cost Models

    NASA Technical Reports Server (NTRS)

    Ruiz-Torres, Alex J.; McCleskey, Carey

    2000-01-01

    The growing emphasis on affordability for space transportation systems requires the assessment of new space vehicles for all life cycle activities, from design and development, through manufacturing and operations. This paper addresses the operational assessment of launch vehicles, focusing on modeling the ground support requirements of a vehicle architecture, and estimating the resulting costs and flight rate. This paper proposes the use of Activity Based Costing (ABC) modeling for this assessment. The model uses expert knowledge to determine the activities, the activity times and the activity costs based on vehicle design characteristics. The approach provides several advantages over current approaches to vehicle architecture assessment, including easier validation and enabling vehicle designers to understand the cost and cycle time drivers.
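
    At its core, the ABC assessment rolls activity times and labor rates up into a per-flight cost and a cycle-time-limited flight rate. A minimal sketch of that rollup, with entirely hypothetical activities, durations, crew sizes, and labor rates:

```python
# hypothetical activity list: (activity, duration_h, crew_size, labor_rate_per_h)
activities = [
    ("vehicle integration", 120.0, 12, 85.0),
    ("propellant load",      10.0,  6, 95.0),
    ("pad operations",       36.0, 10, 85.0),
    ("post-flight refurb",  200.0, 15, 80.0),
]

def cost_per_flight(acts):
    # ABC: each activity's cost = duration x crew size x labor rate,
    # summed over all ground-support activities in the flow
    return sum(d * c * r for _, d, c, r in acts)

def max_flight_rate(acts, hours_per_year=8760.0):
    # the serial ground-processing cycle time bounds the achievable flight rate
    cycle_h = sum(d for _, d, _, _ in acts)
    return hours_per_year / cycle_h
```

    Because each activity's time and cost are tied to vehicle design characteristics, a designer can see directly which activities drive cost and cycle time.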

  4. Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements

    NASA Astrophysics Data System (ADS)

    Casu, P.; Pisu, C.

    2013-02-01

    This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to consider the versatility and ease of use of such documentation tools in order to study architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of some Gothic portals. We combined the contact survey and the photographic survey oriented to photo-modelling. The software used is Autodesk 123D Catch, a free, web-based image-based modelling (IBM) application that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: at first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extrapolate simple sections, in order to make a comparison between the moldings, highlighting similarities and differences. Working on different sites at different scales of detail allowed us to test the procedure under different conditions of exposure, sunlight, accessibility, surface degradation, and material type, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or larger or smaller elements.

  5. Probabilistic sleep architecture models in patients with and without sleep apnea.

    PubMed

    Bianchi, Matt T; Eiseman, Nathaniel A; Cash, Sydney S; Mietus, Joseph; Peng, Chung-Kang; Thomas, Robert J

    2012-06-01

    Sleep fragmentation of any cause is disruptive to the rejuvenating value of sleep. However, methods to quantify sleep architecture remain limited. We have previously shown that human sleep-wake stage distributions exhibit multi-exponential dynamics, which are fragmented by obstructive sleep apnea (OSA), suggesting that Markov models may be a useful method to quantify architecture in health and disease. Sleep stage data were obtained from two subsets of the Sleep Heart Health Study database: control subjects with no medications, no OSA, no medical co-morbidities and no sleepiness (n = 374); and subjects with severe OSA (n = 338). Sleep architecture was simplified into three stages: wake after sleep onset (WASO); non-rapid eye movement (NREM) sleep; and rapid eye movement (REM) sleep. The connectivity and transition rates among eight 'generator' states of a first-order continuous-time Markov model were inferred from the observed ('phenotypic') distributions: three exponentials each of NREM sleep and WASO; and two exponentials of REM sleep. Ultradian REM cycling was accomplished by imposing time-variation to REM state entry rates. Fragmentation in subjects with severe OSA involved faster transition probabilities as well as additional state transition paths within the model. The Markov models exhibit two important features of human sleep architecture: multi-exponential stage dynamics (accounting for observed bout distributions); and probabilistic transitions (an inherent source of variability). In addition, the model quantifies the fragmentation associated with severe OSA. Markov sleep models may prove important for quantifying sleep disruption to provide objective metrics to correlate with endpoints ranging from sleepiness to cardiovascular morbidity.
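
    The first-order continuous-time Markov dynamics described above can be simulated with a Gillespie-style loop: exponential sojourn times and probabilistic transitions. The three-stage rates below are illustrative placeholders, not the fitted eight-generator-state rates of the study, and the ultradian time-variation of REM entry rates is omitted:

```python
import random

# illustrative per-minute transition rates for a three-stage, first-order
# continuous-time Markov chain (NOT the rates fitted in the paper)
RATES = {
    "WASO": {"NREM": 0.20},
    "NREM": {"WASO": 0.02, "REM": 0.01},
    "REM":  {"WASO": 0.03, "NREM": 0.05},
}

def simulate(duration_min, state="WASO", seed=1):
    rng = random.Random(seed)
    t, bouts = 0.0, []
    while t < duration_min:
        out = RATES[state]
        total = sum(out.values())
        dwell = rng.expovariate(total)              # exponential sojourn time
        bouts.append((state, min(dwell, duration_min - t)))
        t += dwell
        # pick the next stage with probability proportional to its rate
        u, acc = rng.random() * total, 0.0
        for nxt, rate in out.items():
            acc += rate
            if u <= acc:
                state = nxt
                break
    return bouts

bouts = simulate(480.0)  # one simulated 8-hour night
```

    Raising the rates out of NREM and REM shortens the bout distributions and increases the number of transitions, which is how this kind of model expresses the fragmentation seen in severe OSA.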

  6. Double images hiding by using joint transform correlator architecture adopting two-step phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Shi, Xiaoyan; Zhao, Daomu; Huang, Yinbo

    2013-06-01

    Based on the joint Fresnel transform correlator, a new system for double-image hiding is presented. In this security system, the dual secret images are encrypted and recorded as intensity patterns using phase-shifting interference technology. To improve the system security, a dual-image hiding method is used. By digital means, the deduced encryption complex distribution is divided into two subparts. For each image, only one subpart is reserved and modulated by a phase factor. These modified results are then combined together and embedded into the host image. With all correct keys, the secret images can be extracted by inverse Fresnel transform. Owing to the phase modulation, the crosstalk caused by image superposition can be reduced through their spatially parallel separation. Theoretical analyses have shown the system's feasibility. Computer simulations are performed to show the encryption capacity of the proposed system. Numerical results are presented to verify the validity and the efficiency of the proposed method.

  7. Optimizing transformations of stencil operations for parallel object-oriented scientific frameworks on cache-based architectures

    SciTech Connect

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-31

    High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks--a parallel or serial array-class library--provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes two optimizing transformations suitable for certain classes of numerical algorithms, one for reducing the cost of inter-processor communication, and one for improving cache utilization; demonstrates and analyzes the resulting performance gains; and indicates how these transformations are being automated.
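
    The cache-utilization transformation described above is essentially loop tiling: restructuring a stencil sweep so each block of the grid stays cache-resident while it is worked on. A sketch of the transformation on a 5-point Jacobi stencil (Python is used here only to show the restructuring; the performance benefit applies to the compiled array-class code the paper targets):

```python
def smooth(grid):
    # untiled 5-point stencil (Jacobi averaging) over an n x n grid
    n = len(grid)
    out = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            out[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j] +
                                grid[i][j-1] + grid[i][j+1])
    return out

def smooth_tiled(grid, tile=32):
    # same computation, iterated block-by-block: the interior is covered by
    # tile x tile blocks whose working sets fit in cache
    n = len(grid)
    out = [row[:] for row in grid]
    for ii in range(1, n - 1, tile):
        for jj in range(1, n - 1, tile):
            for i in range(ii, min(ii + tile, n - 1)):
                for j in range(jj, min(jj + tile, n - 1)):
                    out[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j] +
                                        grid[i][j-1] + grid[i][j+1])
    return out
```

    Because each grid point is written exactly once in both versions, the tiled loop produces bit-identical results; the transformation changes only the traversal order, which is what makes it safe for a compiler or framework to apply automatically.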

  8. Development of the transformational advanced professional practice model.

    PubMed

    Elliott, Elizabeth C; Walden, Marlene

    2015-09-01

    The purpose of this article is to describe the development of a professional practice model (PPM) for advanced practice registered nurses (APRNs). A literature review was conducted on PPMs. A simultaneous review of authoritative resources, including The National Organization of Nurse Practitioner Faculties (NONPF) and the Licensure, Accreditation, Certification and Education (LACE) Consensus Model, was performed. An expert panel was established to validate the transformational advanced professional practice (TAPP) model. APRNs are relied upon by organizations to provide leadership in the delivery of high-quality, cost-effective health care while improving access and eliminating preventable morbidities. Existing models fail to fully capture the professional scope of practice for APRNs. The TAPP model serves as a framework to guide professional development and mentorship of APRNs in seven domains of professional practice (DOPP). To meet the Institute of Medicine's recommendations for the future of nursing, APRNs should practice to the fullest extent of their education and training. Clarification regarding the DOPP of the APRN role is needed to standardize professional practice. The TAPP model is an inspiring blueprint that allows APRNs to model the way by delivering comprehensive health care in seven DOPP. ©2015 American Association of Nurse Practitioners.

  9. Particle MCMC algorithms and architectures for accelerating inference in state-space models.

    PubMed

    Mingas, Grigorios; Bottolo, Leonardo; Bouganis, Christos-Savvas

    2017-04-01

    Particle Markov Chain Monte Carlo (pMCMC) is a stochastic algorithm designed to generate samples from a probability distribution, when the density of the distribution does not admit a closed form expression. pMCMC is most commonly used to sample from the Bayesian posterior distribution in State-Space Models (SSMs), a class of probabilistic models used in numerous scientific applications. Nevertheless, this task is prohibitive when dealing with complex SSMs with massive data, due to the high computational cost of pMCMC and its poor performance when the posterior exhibits multi-modality. This paper aims to address both issues by: 1) Proposing a novel pMCMC algorithm (denoted ppMCMC), which uses multiple Markov chains (instead of the one used by pMCMC) to improve sampling efficiency for multi-modal posteriors, 2) Introducing custom, parallel hardware architectures, which are tailored for pMCMC and ppMCMC. The architectures are implemented on Field Programmable Gate Arrays (FPGAs), a type of hardware accelerator with massive parallelization capabilities. The new algorithm and the two FPGA architectures are evaluated using a large-scale case study from genetics. Results indicate that ppMCMC achieves 1.96x higher sampling efficiency than pMCMC when using sequential CPU implementations. The FPGA architecture of pMCMC is 12.1x and 10.1x faster than state-of-the-art, parallel CPU and GPU implementations of pMCMC and up to 53x more energy efficient; the FPGA architecture of ppMCMC increases these speedups to 34.9x and 41.8x respectively and is 173x more power efficient, bringing previously intractable SSM-based data analyses within reach.
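
    The benefit of running multiple chains can be illustrated with an ordinary random-walk Metropolis sampler on a bimodal toy target: a single chain tends to get stuck in one mode, whereas several chains started across the support cover both. Note that real pMCMC/ppMCMC replaces the exact density below with a particle-filter estimate of the SSM likelihood, which this sketch omits:

```python
import math
import random

def log_target(x):
    # bimodal toy density: equal mixture of unit-variance Gaussians at -3, +3
    return math.log(math.exp(-0.5 * (x + 3.0) ** 2) +
                    math.exp(-0.5 * (x - 3.0) ** 2))

def multi_chain_mh(n_iters=2000, step=1.0, seed=7):
    rng = random.Random(seed)
    xs = [-4.5, -1.5, 1.5, 4.5]   # spread the chains across the support
    draws = []
    for _ in range(n_iters):
        for c in range(len(xs)):
            prop = xs[c] + rng.gauss(0.0, step)
            # standard Metropolis accept/reject, independently per chain
            accept_p = math.exp(min(0.0, log_target(prop) - log_target(xs[c])))
            if rng.random() < accept_p:
                xs[c] = prop
            draws.append(xs[c])
    return draws

draws = multi_chain_mh()
```

    The per-chain updates are independent given the current states, which is also why the scheme maps naturally onto parallel hardware such as the FPGA architectures evaluated in the paper.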

  10. A scaleable architecture for the modeling and simulation of intelligent transportation systems.

    SciTech Connect

    Ewing, T.; Tentner, A.

    1999-03-17

    A distributed, scaleable architecture for the modeling and simulation of Intelligent Transportation Systems on a network of workstations or a parallel computer has been developed at Argonne National Laboratory. The resulting capability provides a modular framework supporting plug-in models, hardware, and live data sources; visually realistic graphics displays to support training and human factors studies; and a set of basic ITS models. The models and capabilities are described, along with a typical scenario involving dynamic rerouting of smart vehicles which send probe reports to and receive traffic advisories from a traffic management center capable of incident detection.

  11. Imputation for semiparametric transformation models with biased-sampling data

    PubMed Central

    Liu, Hao; Qin, Jing; Shen, Yu

    2012-01-01

    Widely recognized in many fields including economics, engineering, epidemiology, health sciences, technology and wildlife management, length-biased sampling generates biased and right-censored data but often provides the best information available for statistical inference. Different from traditional right-censored data, length-biased data have unique aspects resulting from their sampling procedures. We exploit these unique aspects and propose a general imputation-based estimation method for analyzing length-biased data under a class of flexible semiparametric transformation models. We present new computational algorithms that can jointly estimate the regression coefficients and the baseline function semiparametrically. The imputation-based method under the transformation model provides an unbiased estimator regardless of whether the censoring depends on the covariates. We establish large-sample properties using empirical process methods. Simulation studies show that under small to moderate sample sizes, the proposed procedure has smaller mean square errors than two existing estimation procedures. Finally, we demonstrate the estimation procedure with a real data example. PMID:22903245

  12. Cloud GIS and 3d Modelling to Enhance Sardinian Late Gothic Architectural Heritage

    NASA Astrophysics Data System (ADS)

    Pisu, C.; Casu, P.

    2013-07-01

    This work proposes the documentation, virtual reconstruction and spreading of architectural heritage through the use of software packages that operate in cloud computing. Cloud computing makes available a variety of applications and tools which can be effective both for the preparation and for the publication of different kinds of data. We tested the versatility and ease of use of such documentation tools in order to study a particular architectural phenomenon. The ultimate aim is to develop a multi-scale and multi-layer information system, oriented to the divulgation of Sardinian late Gothic architecture. We tested the applications on portals of late Gothic architecture in Sardinia. The actions of conservation, protection and enhancement of cultural heritage are all founded on the social function that can be reached only through the widest possible fruition by the community. The applications of digital technologies on cultural heritage can contribute to the construction of effective communication models that, relying on sensory and emotional involvement of the viewer, can attract a wider audience to cultural content.

  13. A compact physical model for the simulation of pNML-based architectures

    NASA Astrophysics Data System (ADS)

    Turvani, G.; Riente, F.; Plozner, E.; Schmitt-Landsiedel, D.; Breitkreutz-v. Gamm, S.

    2017-05-01

    Among emerging technologies, perpendicular Nanomagnetic Logic (pNML) seems very promising because of its capability of combining logic and memory onto the same device, its scalability, 3D-integration and low power consumption. Recently, Full Adder (FA) structures clocked by a global magnetic field have been experimentally demonstrated, and detailed characterizations of the switching process governing the domain wall (DW) nucleation probability Pnuc and time tnuc have been performed. However, the design of pNML architectures represents a crucial point in the study of this technology, with a remarkable impact on the reliability of pNML structures. Here, we present a compact model developed in VHDL which enables the simulation of complex pNML architectures while taking critical physical parameters into account. These parameters have been extracted from experiments, fitted by the corresponding physical equations and encapsulated into the proposed model. Within the model, magnetic structures are decomposed into a few basic elements (nucleation centers, nanowires, inverters, etc.) represented by the corresponding physical description. To validate the model, we redesigned a FA and compared our simulation results to the experiment. With this compact model of pNML devices we have envisioned a new methodology which makes it possible to simulate and test the physical behavior of complex architectures at very low computational cost.

  14. Using two coefficients modeling of nonsubsampled Shearlet transform for despeckling

    NASA Astrophysics Data System (ADS)

    Jafari, Saeed; Ghofrani, Sedigheh

    2016-01-01

    Synthetic aperture radar (SAR) images are inherently affected by multiplicative speckle noise. Two approaches based on modeling the nonsubsampled Shearlet transform (NSST) coefficients are presented. The two-sided generalized Gamma distribution and the normal inverse Gaussian probability density function have been used to model the statistics of NSST coefficients. A Bayesian maximum a posteriori (MAP) estimator is applied to the corrupted NSST coefficients in order to estimate the noise-free NSST coefficients. Finally, experimental results, according to objective and subjective criteria, carried out on both artificially speckled images and true SAR images, demonstrate that the proposed methods outperform other state-of-the-art methods from two points of view: speckle noise reduction and image quality preservation.
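
    The general shape of transform-domain Bayesian MAP despeckling can be illustrated with the simplest prior choice, zero-mean Gaussian signal plus Gaussian noise, which yields a Wiener-style shrinkage gain per subband. This stand-in deliberately omits the paper's two-sided generalized Gamma and normal inverse Gaussian priors, as well as the log-transform usually applied first to make speckle additive:

```python
def map_shrink(coeffs, noise_var):
    # Gaussian-prior MAP shrinkage: gain = s2 / (s2 + noise_var), with the
    # signal variance s2 estimated per subband by the method of moments
    n = len(coeffs)
    sig2 = max(sum(c * c for c in coeffs) / n - noise_var, 0.0)
    gain = sig2 / (sig2 + noise_var) if (sig2 + noise_var) > 0.0 else 0.0
    return [gain * c for c in coeffs]
```

    Subbands whose empirical energy barely exceeds the noise floor are shrunk toward zero, while strongly structured subbands pass nearly unchanged; heavier-tailed priors such as those in the paper make this shrinkage adaptive per coefficient rather than per subband.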

  15. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory A.; Ingham, Michel D.; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    Innovative systems and software engineering solutions are required to meet the increasingly challenging demands of deep-space robotic missions. While recent advances in the development of an integrated systems and software engineering approach have begun to address some of these issues, they are still at the core highly manual and, therefore, error-prone. This paper describes a task aimed at infusing MIT's model-based executive, Titan, into JPL's Mission Data System (MDS), a unified state-based architecture, systems engineering process, and supporting software framework. Results of the task are presented, including a discussion of the benefits and challenges associated with integrating mature model-based programming techniques and technologies into a rigorously-defined domain specific architecture.

  17. 150 kW Class Solar Electric Propulsion Spacecraft Power Architecture Model

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Aulisio, Michael V.; Loop, Benjamin

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Solar Electric Propulsion Technology Demonstration Mission, in conjunction with PC Krause and Associates, has created a Simulink-based power architecture model for a 50 kilowatt (kW) solar electric propulsion system. NASA has extended this model to investigate 150 kW solar electric propulsion systems. Increasing the power system capability from 50 kW to 150 kW better aligns with the anticipated power requirements for Mars and other deep space explorations. The high-power solar electric propulsion capability has been identified as a critical part of NASA's future beyond-low-Earth-orbit, human-crewed exploration missions. This paper presents multiple 150 kW architectures, simulation results, and a discussion of their merits.

  19. An Interactive Design Space Supporting Development of Vehicle Architecture Concept Models

    DTIC Science & Technology

    2011-06-17

    …components that are not designed to carry structural loads in the assembly, such as seats and other trim items. However, these inertial items have an… (IMECE2011-64510, Denver, Colorado, USA; Gary Osborne) …early in the development cycle. Optimization taking place later in the cycle usually occurs at the detail design level, and tends to result in…

  20. Accounting Models of the Human Factor and its Architecture in Scheduling and Acceptance of Administrative Solutions

    DTIC Science & Technology

    2010-10-01

    …terrorism or fighting, as for example in Bhopal, Goiânia, Chernobyl, Novosibirsk. The general global trend is an extension of the tasks from military… (RTO-MP-HFM-202) …endemic infections, dangerous insects and animals. Vector equipment and protective equipment (Eq) describes the physiological and hygienic…

  1. Guidelines for a Digital Reinterpretation of Architectural Restoration Work: Reality-Based Models and Reverse Modelling Techniques Applied to the Architectural Decoration of the Teatro Marittimo, Villa Adriana

    NASA Astrophysics Data System (ADS)

    Adembri, B.; Cipriani, L.; Bertacchi, G.

    2017-05-01

    The Maritime Theatre is one of the iconic buildings of Hadrian's Villa, Tivoli. The state of conservation of the theatre is not only the result of weathering over time, but also of restoration work carried out during the 1950s. Although this anastylosis process had the virtue of partially restoring a few of the fragments of the compound's original image, it now reveals diverse inconsistencies and genuine errors in the reassembling of the fragments. This study aims to carry out a digital reinterpretation of the restoration of the architectural fragments in relation to the architectural order, with particular reference to the miscellaneous decoration of the frieze of the Teatro Marittimo (vestibule and atrium). Over the course of the last few years the Teatro Marittimo has been the target of numerous surveying campaigns using digital methodologies (laser scanning and SfM/MVS photogrammetry). Starting with the study of the remains of the opus caementicium on the ground, it is possible to identify surfaces which are then used in the model for subsequent cross sections, so as to achieve the best-fitting circumferences to use as reference points to put the fragments back into place.

  2. A transformational model for the practice of professional nursing. Part 1, The model.

    PubMed

    Wolf, G A; Boland, S; Aukerman, M

    1994-04-01

    Our healthcare system is undergoing major transformation. Most nurse executives know that change is necessary and inevitable, but are less certain how to position their departments for these changes. The Transformational Model for the Practice of Professional Nursing was developed as a "road map" for that purpose. Part 1 of the model discusses the paradigm shifts that need to occur in professional practice for future success. The various components of the model are presented, and applications are identified. Part 2 will appear in the May 1994 issue of JONA, and will discuss the implementation of this model into a practice setting.

  3. A functional–structural kiwifruit vine model integrating architecture, carbon dynamics and effects of the environment

    PubMed Central

    Cieslak, Mikolaj; Seleznyova, Alla N.; Hanan, Jim

    2011-01-01

    Background and Aims Functional–structural modelling can be used to increase our understanding of how different aspects of plant structure and function interact, identify knowledge gaps and guide priorities for future experimentation. By integrating existing knowledge of the different aspects of the kiwifruit (Actinidia deliciosa) vine's architecture and physiology, our aim is to develop conceptual and mathematical hypotheses on several of the vine's features: (a) plasticity of the vine's architecture; (b) effects of organ position within the canopy on its size; (c) effects of environment and horticultural management on shoot growth, light distribution and organ size; and (d) role of carbon reserves in early shoot growth. Methods Using the L-system modelling platform, a functional–structural plant model of a kiwifruit vine was created that integrates architectural development, mechanistic modelling of carbon transport and allocation, and environmental and management effects on vine and fruit growth. The branching pattern was captured at the individual shoot level by modelling axillary shoot development using a discrete-time Markov chain. An existing carbon transport resistance model was extended to account for several source/sink components of individual plant elements. A quasi-Monte Carlo path-tracing algorithm was used to estimate the absorbed irradiance of each leaf. Key Results Several simulations were performed to illustrate the model's potential to reproduce the major features of the vine's behaviour. The model simulated vine growth responses that were qualitatively similar to those observed in experiments, including the plastic response of shoot growth to local carbon supply, the branching patterns of two Actinidia species, the effect of carbon limitation and topological distance on fruit size and the complex behaviour of sink competition for carbon. Conclusions The model is able to reproduce differences in vine and fruit growth arising from various…
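    The discrete-time Markov chain used for axillary shoot development lends itself to a compact sketch. The bud-fate states and transition probabilities below are illustrative assumptions, not values from the paper:

```python
import random

# Hypothetical bud fates along a shoot; the real model's states and
# transition probabilities are not given in the abstract.
STATES = ["blind", "vegetative", "floral"]

# Illustrative transition matrix: row = fate at node i, column = fate at node i+1.
P = {
    "blind":      {"blind": 0.5, "vegetative": 0.4, "floral": 0.1},
    "vegetative": {"blind": 0.2, "vegetative": 0.5, "floral": 0.3},
    "floral":     {"blind": 0.3, "vegetative": 0.4, "floral": 0.3},
}

def simulate_shoot(n_nodes, start="vegetative", rng=random.Random(42)):
    """Walk the chain node by node along a shoot, returning one fate per node."""
    fates = [start]
    for _ in range(n_nodes - 1):
        probs = P[fates[-1]]
        fates.append(rng.choices(list(probs), weights=probs.values())[0])
    return fates

fates = simulate_shoot(10)
```

    Repeated draws from such a chain reproduce position-dependent branching statistics without storing an explicit rule for every node.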

  4. Development of Groundwater Modeling Support System Based on Service-Oriented Architecture

    NASA Astrophysics Data System (ADS)

    WANG, Y.; Tsai, J. P.; Hsiao, C. T.; Chang, L. C.

    2014-12-01

    Groundwater simulation has become an essential step in groundwater resources management and assessment. There are many stand-alone pre- and post-processing software packages to alleviate the model simulation workload, but the stand-alone software do not consider centralized management of data and simulation results, nor do they provide network-sharing functions. Model building is still carried out independently, case by case, when using these packages. Hence, it is difficult to share and reuse the data and knowledge (simulation cases) systematically within or across organizations. Therefore, this study develops a centralized, network-based groundwater model development system to assist model simulation. The system is based on a service-oriented architecture and allows remote users to develop their modeling cases on the internet. The data and cases (knowledge) are thus easy to manage centrally. MODFLOW, the most popular groundwater model in the world, is the modeling engine of the system. Other functions include database management and a variety of model-development web services, including automatic digitization of geological profile maps, recovery of missing groundwater data, graphical data display, and automatic generation of MODFLOW input files from the database, which is the most important function of the system. Since the system architecture is service-oriented, it is scalable and flexible. The system can be easily extended to include scenario analysis and knowledge management to facilitate the reuse of groundwater modeling knowledge.

  5. Grenville-era Crustal Architecture of Central Australia, and its Importance in Constraining Rodinia Models.

    NASA Astrophysics Data System (ADS)

    Aitken, A. R.; Betts, P. G.

    2007-12-01

    …a continuous and coherent northeast-trending orogenic belt connecting the Albany Fraser, Musgrave and Warumpi provinces. The geometry and extent of this orogenic belt preclude a direct connection between the Musgrave Province and contemporaneous orogens in Laurentia. Any model of Australian orogenic activity during the Grenvillian era must take account of the NE-oriented architecture and the intracontinental termination of the orogenic belt. Continental reconfiguration within Australia via rotation of the South Australian Craton can adequately explain the Grenville-aged architecture of Australia.

  6. Impact of plant shoot architecture on leaf cooling: a coupled heat and mass transfer model

    PubMed Central

    Bridge, L. J.; Franklin, K. A.; Homer, M. E.

    2013-01-01

    Plants display a range of striking architectural adaptations when grown at elevated temperatures. In the model plant Arabidopsis thaliana, these include elongation of petioles, and increased petiole and leaf angles from the soil surface. The potential physiological significance of these architectural changes remains speculative. We address this issue computationally by formulating a mathematical model and performing numerical simulations, testing the hypothesis that elongated and elevated plant configurations may reflect a leaf-cooling strategy. This sets in place a new basic model of plant water use and interaction with the surrounding air, which couples heat and mass transfer within a plant to water vapour diffusion in the air, using a transpiration term that depends on saturation, temperature and vapour concentration. A two-dimensional, multi-petiole shoot geometry is considered, with added leaf-blade shape detail. Our simulations show that increased petiole length and angle generally result in enhanced transpiration rates and reduced leaf temperatures in well-watered conditions. Furthermore, our computations also reveal plant configurations for which elongation may result in decreased transpiration rate owing to decreased leaf liquid saturation. We offer further qualitative and quantitative insights into the role of architectural parameters as key determinants of leaf-cooling capacity. PMID:23720538
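    The coupling described above can be sketched with a steady-state leaf energy balance in which absorbed radiation is partitioned between sensible heat loss and evaporative cooling. All parameter values below are illustrative, not taken from the paper:

```python
LAMBDA = 2.45e6          # latent heat of vaporization of water, J kg^-1

def leaf_temperature(q_abs, t_air, h, g_v, vpd_kg_m3):
    """Solve h*(T_leaf - T_air) + LAMBDA*g_v*vpd = q_abs for T_leaf.

    q_abs     : absorbed radiation (W m^-2)
    h         : sensible heat transfer coefficient (W m^-2 K^-1)
    g_v       : vapour conductance along the transpiration pathway (m s^-1)
    vpd_kg_m3 : leaf-to-air vapour concentration deficit (kg m^-3)
    """
    latent = LAMBDA * g_v * vpd_kg_m3    # evaporative cooling term, W m^-2
    return t_air + (q_abs - latent) / h

# Doubling the vapour conductance (more transpiration) cools the leaf,
# qualitatively matching the well-watered simulations described above.
t_low  = leaf_temperature(400.0, 25.0, h=20.0, g_v=0.005, vpd_kg_m3=0.01)
t_high = leaf_temperature(400.0, 25.0, h=20.0, g_v=0.010, vpd_kg_m3=0.01)
```

    The paper's model goes further by letting the deficit and conductance depend on leaf saturation and geometry, which is how elongation can sometimes reduce, rather than increase, transpiration.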

  7. Characterization of Model-Based Reasoning Strategies for Use in IVHM Architectures

    NASA Technical Reports Server (NTRS)

    Poll, Scott; Iverson, David; Patterson-Hine, Ann

    2003-01-01

    Open architectures are gaining popularity for Integrated Vehicle Health Management (IVHM) applications due to the diversity of subsystem health monitoring strategies in use and the need to integrate a variety of techniques at the system health management level. The basic concept of an open architecture suggests that whatever monitoring or reasoning strategy a subsystem wishes to deploy, the system architecture will support the needs of that subsystem and will be capable of transmitting subsystem health status across subsystem boundaries and up to the system level for system-wide fault identification and diagnosis. There is a need to understand the capabilities of various reasoning engines and how they, coupled with intelligent monitoring techniques, can support fault detection and system level fault management. Researchers in IVHM at NASA Ames Research Center are supporting the development of an IVHM system for liquefying-fuel hybrid rockets. In the initial stage of this project, a few readily available reasoning engines were studied to assess candidate technologies for application in next generation launch systems. Three tools representing the spectrum of model-based reasoning approaches, from a quantitative simulation based approach to a graph-based fault propagation technique, were applied to model the behavior of the Hybrid Combustion Facility testbed at Ames. This paper summarizes the characterization of the modeling process for each of the techniques.

  8. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of the system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
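    The blackboard/plugin channel structure can be mimicked with thread-safe queues standing in for CSP channels. This is a toy rendering under invented names and message shapes, not Cougaar code or the paper's CSP model:

```python
import queue
import threading

class Blackboard:
    """Forwards every published object to all other registered plugins."""
    def __init__(self):
        self.inbox = queue.Queue()   # shared channel: plugins -> blackboard
        self.outboxes = {}           # per-plugin channels: blackboard -> plugin

    def register(self, name):
        self.outboxes[name] = queue.Queue()
        return self.outboxes[name]

    def run(self, n_messages):
        for _ in range(n_messages):
            sender, obj = self.inbox.get()
            for name, chan in self.outboxes.items():
                if name != sender:   # do not echo back to the publisher
                    chan.put(obj)

def plugin(name, bb, chan, results):
    bb.inbox.put((name, f"hello from {name}"))   # publish to the blackboard
    results[name] = chan.get()                   # receive another plugin's object

bb = Blackboard()
results = {}
chans = {n: bb.register(n) for n in ("planner", "assessor")}
threads = [threading.Thread(target=plugin, args=(n, bb, chans[n], results))
           for n in chans]
for t in threads:
    t.start()
bb.run(n_messages=2)
for t in threads:
    t.join()
```

    In the CSP formalism each queue here would be a named channel event, which is what makes properties like deadlock freedom mechanically checkable.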

  9. Modeling Two-Channel Speech Processing With the EPIC Cognitive Architecture.

    PubMed

    Kieras, David E; Wakefield, Gregory H; Thompson, Eric R; Iyer, Nandini; Simpson, Brian D

    2016-01-01

    An important application of cognitive architectures is to provide human performance models that capture psychological mechanisms in a form that can be "programmed" to predict task performance of human-machine system designs. Although many aspects of human performance have been successfully modeled in this approach, accounting for multitalker speech task performance is a novel problem. This article presents a model for performance in a two-talker task that incorporates concepts from psychoacoustics, in particular, masking effects and stream formation. Copyright © 2016 Cognitive Science Society, Inc.

  10. A dynamic object-oriented architecture approach to ecosystem modeling and simulation.

    SciTech Connect

    Dolph, J. E.; Majerus, K. A.; Sydelko, P. J.; Taxon, T. N.

    1999-04-09

    Modeling and simulation in support of adaptive ecosystem management can be better accomplished through a dynamic, integrated, and flexible approach that incorporates scientific and technological components into a comprehensive ecosystem-modeling framework. The Integrated Dynamic Landscape Analysis and Modeling System (IDLAMS) integrates ecological models and decision support techniques, through a geographic information system (GIS)-based framework. The Strategic Environmental Research and Development Program (SERDP) sponsored the development of IDLAMS. Initially built upon a GIS framework, IDLAMS is migrating to an object-oriented (OO) architectural framework. An object-oriented architecture is more flexible and modular. It allows disparate applications and dynamic models to be integrated in a manner that minimizes (or eliminates) the need to rework or recreate the system as new models are added to the suite. In addition, an object-oriented design makes it easier to provide run-time feedback among models, thereby making it a more dynamic tool for exploring and providing insight into the interactions among ecosystem processes. Finally, an object-oriented design encourages the reuse of existing technology because OO-IDLAMS is able to integrate disparate models, databases, or applications executed in their native languages. Reuse is also accomplished through a structured approach to building a consistent and reusable object library. This reusability can substantially reduce the time and effort needed to develop future integrated ecosystem simulations.

  11. Applying Service-Oriented Architecture on The Development of Groundwater Modeling Support System

    NASA Astrophysics Data System (ADS)

    Li, C. Y.; WANG, Y.; Chang, L. C.; Tsai, J. P.; Hsiao, C. T.

    2016-12-01

    Groundwater simulation has become an essential step in groundwater resources management and assessment. There are many stand-alone pre- and post-processing software packages to alleviate the model simulation workload, but the stand-alone software do not consider centralized management of data and simulation results, nor do they provide network-sharing functions. Hence, it is difficult to share and reuse the data and knowledge (simulation cases) systematically within or across organizations. Therefore, this study develops a centralized, network-based groundwater modeling support system to assist model construction. The system is based on a service-oriented architecture and allows remote users to develop their modeling cases on the internet. The data and cases (knowledge) are thus easy to manage centrally. MODFLOW, the most popular groundwater model in the world, is the modeling engine of the system. The system provides a data warehouse to store groundwater observations, a MODFLOW Support Service, a MODFLOW Input File & Shapefile Convert Service, a MODFLOW Service, and an Expert System Service to assist researchers in building models. Since the system architecture is service-oriented, it is scalable and flexible. The system can be easily extended to include scenario analysis and knowledge management to facilitate the reuse of groundwater modeling knowledge.

  12. Simulation models relevant to the protection of synchronous machines and transformers

    NASA Astrophysics Data System (ADS)

    Muthumuni, Dharshana De Silva

    2001-07-01

    The purpose of this research is to develop models which can be used to produce realistic test waveforms for the evaluation of protection systems used for generators and transformers. Software models of generators and transformers which have the capability to calculate voltage and current waveforms in the presence of internal faults are presented in this thesis. The thesis also presents accurate models of current transformers used in differential current protection schemes. These include air gapped current transformers which are widely used in transformer and generator protection. The models of generators and transformers can be used with the models of current transformers to obtain test waveforms to evaluate a protection system. The models are validated by comparing the results obtained from simulations with recorded waveforms.
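    As a rough illustration of the kind of behaviour such test waveforms must capture, the lumped sketch below reflects a primary fault current to the secondary through a turns ratio while a saturable magnetizing branch diverts current once core flux grows, distorting the secondary waveform seen by a differential relay. Every parameter and the saturation law are invented for illustration and are far simpler than the validated models in the thesis:

```python
import math

def simulate_ct(n_ratio=100.0, r_burden=2.0, flux_sat=0.5,
                i_fault_peak=5000.0, f=60.0, dt=1e-5, t_end=0.05):
    """Crude current-transformer model: returns the secondary current trace."""
    flux, out, t = 0.0, [], 0.0
    i_mag_coeff = 50.0                 # magnetizing current grows steeply past flux_sat
    while t < t_end:
        i_prim = i_fault_peak * math.sin(2 * math.pi * f * t)
        i_ideal = i_prim / n_ratio                      # ideal reflected current
        i_mag = i_mag_coeff * (flux / flux_sat) ** 9    # odd power: soft saturation
        i_sec = i_ideal - i_mag                         # magnetizing branch steals current
        flux += r_burden * i_sec * dt                   # d(flux)/dt ~ secondary voltage
        out.append(i_sec)
        t += dt
    return out

trace = simulate_ct()
```

    Feeding such distorted traces to a relay algorithm is one way to evaluate whether a differential scheme stays secure during through-fault saturation.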

  13. Model-based security analysis of the German health card architecture.

    PubMed

    Jürjens, J; Rumm, R

    2008-01-01

    Health-care information systems are particularly security-critical. In order to make these applications secure, the security analysis has to be an integral part of the system design and IT management process for such systems. This work presents the experiences and results from the security analysis of the system architecture of the German Health Card, by making use of an approach to model-based security engineering that is based on the UML extension UMLsec. The focus lies on the security mechanisms and security policies of the smart-card-based architecture which were analyzed using the UMLsec method and tools. Main results of the paper include a report on the employment of the UMLsec method in an industrial health information systems context as well as indications of its benefits and limitations. In particular, two potential security weaknesses were detected and countermeasures discussed. The results indicate that it can be feasible to apply a model-based security analysis using UMLsec to an industrial health information system like the German Health Card architecture, and that doing so can have concrete benefits (such as discovering potential weaknesses, and an increased confidence that no further vulnerabilities of the kind that were considered are present).

  14. A transformational model for the practice of professional nursing. Part 2, Implementation of the model.

    PubMed

    Wolf, G A; Boland, S; Aukerman, M

    1994-05-01

    Our healthcare system is undergoing major transformation. Most nurse executives are convinced that change is necessary and inevitable, but they are less certain how to position their departments for future success. The Transformational Model for the Practice of Professional Nursing was developed as a "road map" for that purpose. Part 1 (JONA, April 1994) discussed the professional practice paradigm shifts that are needed for future success. The model components were presented and applications identified. Part 2 discusses the implementation of this model in a practice setting.

  15. An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reid, Michael R.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.

  16. A model driven approach for the German health telematics architectural framework and security infrastructure.

    PubMed

    Blobel, Bernd; Pharow, Peter

    2007-01-01

    Shared care concepts such as managed care and continuity of care are based on extended communication and cooperation between different health professionals, or between them and the patient. Health information systems and their components, which differ greatly in structure, behavior, data and semantics, as well as in implementation details, and which are used in different environments for different purposes, have to provide intelligent interoperability. Therefore, flexibility, portability, and future orientation must be guaranteed using the newest developments in model-driven architecture. The ongoing work on the German health telematics platform, based on an architectural framework and a security infrastructure, is described in some detail. This concept of future-proof health information networks, with virtual electronic health records as the core application, starts with multifunctional electronic health cards. It is in line with developments currently under way in many other developed countries.

  17. The NIST Real-Time Control System (RCS): A Reference Model Architecture for Computational Intelligence

    NASA Technical Reports Server (NTRS)

    Albus, James S.

    1996-01-01

    The Real-time Control System (RCS) developed at NIST and elsewhere over the past two decades defines a reference model architecture for design and analysis of complex intelligent control systems. The RCS architecture consists of a hierarchically layered set of functional processing modules connected by a network of communication pathways. The primary distinguishing feature of the layers is the bandwidth of the control loops. The characteristic bandwidth of each level is determined by the spatial and temporal integration window of filters, the temporal frequency of signals and events, the spatial frequency of patterns, and the planning horizon and granularity of the planners that operate at each level. At each level, tasks are decomposed into sequential subtasks, to be performed by cooperating sets of subordinate agents. At each level, signals from sensors are filtered and correlated with spatial and temporal features that are relevant to the control function being implemented at that level.

  18. Deformable three-dimensional model architecture for interactive augmented reality in minimally invasive surgery.

    PubMed

    Vemuri, Anant S; Wu, Jungle Chi-Hsiang; Liu, Kai-Che; Wu, Hurng-Sheng

    2012-12-01

    Surgical procedures have undergone considerable advancement during the last few decades. More recently, the intraoperative availability of some imaging methods has added a new dimension to minimally invasive techniques. Augmented reality in surgery has been a topic of intense interest and research. Augmented reality involves the use of computer vision algorithms on video from endoscopic cameras, or from cameras mounted in the operating room, to provide the surgeon with additional information that he or she otherwise would have to recognize intuitively. One such technique combines a virtual preoperative model of the patient with the endoscope camera, using natural or artificial landmarks, to provide an augmented reality view in the operating room. The authors' approach is to provide this with the fewest possible changes to the operating room. A software architecture is presented that provides interactive adjustment of the registration between a three-dimensional (3D) model and endoscope video. Augmented reality was used to perform 12 surgeries, including adrenalectomy, ureteropelvic junction obstruction, retrocaval ureter, and pancreatic procedures. The general feedback from the surgeons has been very positive, not only in deciding the positions of insertion points but also in knowing the least change in anatomy. The approach involves providing a deformable 3D model architecture and its application in the operating room. A 3D model with a deformable structure is needed to show the shape change of soft tissue during surgery. The software architecture provides interactive adjustment of the registration between the 3D model and endoscope video, with adjustability of every 3D model.

  19. From TLS to HBIM. High-Quality Semantically-Aware 3D Modeling of Complex Architecture

    NASA Astrophysics Data System (ADS)

    Quattrini, R.; Malinverni, E. S.; Clini, P.; Nespeca, R.; Orlietti, E.

    2015-02-01

    In order to improve the framework for 3D modelling, a great challenge is to make the Building Information Model (BIM) platform suitable for historical architecture. A specific challenge in HBIM is to guarantee appropriate geometrical accuracy. The present work demonstrates the feasibility of a complete HBIM approach for complex architectural shapes, starting from TLS point clouds. A novelty of our method is that we work in a 3D environment throughout the process and develop semantics during the construction phase. This last feature of HBIM was analyzed in the present work by verifying the studied ontologies, enabling the enrichment of the model with non-geometrical information, such as historical notes, evidence of decay or deformation, decorative elements, etc. The case study is the Church of Santa Maria at Portonovo, an abbey from the Romanesque period. Irregular or complex historical architecture, such as Romanesque, requires the construction of shared libraries starting from the survey of its existing elements. This is another key aspect in delivering Building Information Modeling standards. In particular, we focus on the quality assessment of the obtained model, using open-source software and the point cloud as reference. The proposed work shows how it is possible to develop a high-quality, semantics-aware 3D model capable of connecting the geometrical-historical survey with descriptive thematic databases. In this way, a centralized HBIM will serve as a comprehensive dataset of information across all disciplines, particularly for restoration and conservation. Moreover, the geometric accuracy will also ensure reliable visualization outputs.

  20. Phase-field-crystal methodology for modeling of structural transformations.

    PubMed

    Greenwood, Michael; Rottler, Jörg; Provatas, Nikolas

    2011-03-01

    We introduce and characterize free-energy functionals for modeling of solids with different crystallographic symmetries within the phase-field-crystal methodology. The excess free energy responsible for the emergence of periodic phases is inspired by classical density-functional theory, but uses only a minimal description for the modes of the direct correlation function to preserve computational efficiency. We provide a detailed prescription for controlling the crystal structure and introduce parameters for changing temperature and surface energies, so that phase transformations between body-centered-cubic (bcc), face-centered-cubic (fcc), hexagonal-close-packed (hcp), and simple-cubic (sc) lattices can be studied. To illustrate the versatility of our free-energy functional, we compute the phase diagram for fcc-bcc-liquid coexistence in the temperature-density plane. We also demonstrate that our model can be extended to include hcp symmetry by dynamically simulating hcp-liquid coexistence from a seeded crystal nucleus. We further quantify the dependence of the elastic constants on the model control parameters in two and three dimensions, showing how the degree of elastic anisotropy can be tuned from the shape of the direct correlation functions.
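    Schematically, this class of functionals combines a local polynomial ideal part with a nonlocal excess term built from the two-point direct correlation function. The form below is a generic sketch in the style of structural phase-field-crystal models; the coefficients and the peak parametrization are assumptions, not values from the paper:

```latex
\frac{\Delta F}{k_B T \rho_0} =
  \int \mathrm{d}\mathbf{r}
    \left[ \frac{n^2}{2} - \eta\,\frac{n^3}{6} + \chi\,\frac{n^4}{12} \right]
  - \frac{1}{2} \int \mathrm{d}\mathbf{r} \int \mathrm{d}\mathbf{r}'\;
    n(\mathbf{r})\, C_2\!\left(|\mathbf{r}-\mathbf{r}'|\right) n(\mathbf{r}')
```

    Here n is the dimensionless density field, and the Fourier transform of C_2 is represented by a minimal set of Gaussian peaks, one per family of reciprocal-lattice planes of the target structure; the peak positions select the symmetry (bcc, fcc, hcp, sc), while the peak widths and a temperature-dependent height tune elastic constants, surface energies, and phase stability, consistent with the "minimal description of the modes" approach described above.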

  1. Model for a transformer-coupled toroidal plasma source

    SciTech Connect

    Rauf, Shahid; Balakrishna, Ajit; Chen Zhigang; Collins, Ken

    2012-01-15

    A two-dimensional fluid plasma model for a transformer-coupled toroidal plasma source is described. Ferrites are used in this device to improve the electromagnetic coupling between the primary coils carrying radio frequency (rf) current and a secondary plasma loop. Appropriate components of the Maxwell equations are solved to determine the electromagnetic fields and electron power deposition in the model. The effect of gas flow on species transport is also considered. The model is applied to 1 Torr Ar/NH₃ plasma in this article. Rf electric field lines form a loop in the vacuum chamber and generate a plasma ring. Due to rapid dissociation of NH₃, NHₓ⁺ ions are more prevalent near the gas inlet and Ar⁺ ions are the dominant ions farther downstream. NH₃ and its by-products rapidly dissociate into small fragments as the gas flows through the plasma. With increasing source power, NH₃ dissociates more readily and NHₓ⁺ ions are more tightly confined near the gas inlet. Gas flow rate significantly influences the plasma characteristics. With increasing gas flow rate, NH₃ dissociation occurs farther from the gas inlet in regions with higher electron density. Consequently, more NH₄⁺ ions are produced and dissociation by-products have higher concentrations near the outlet.

  2. Kinetic Modeling of Damage Repair, Genome Instability, and Neoplastic Transformation

    SciTech Connect

    Stewart, Robert D

    2007-03-17

    Inducible repair and pathway interactions may fundamentally alter the shape of dose-response curves because different mechanisms may be important under low- and high-dose exposure conditions. However, the significance of these phenomena for risk assessment purposes is an open question. This project developed new modeling tools to study the putative effects of DNA damage induction and repair on higher-level biological endpoints, including cell killing, neoplastic transformation and cancer. The project scope included (1) the development of new approaches to simulate the induction and base excision repair (BER) of DNA damage using Monte Carlo methods and (2) the integration of data from the Monte Carlo simulations with kinetic models for higher-level biological endpoints. Methods of calibrating and testing such multiscale biological simulations were developed. We also developed models to aid in the analysis and interpretation of data from experimental assays, such as the pulsed-field gel electrophoresis (PFGE) assay used to quantify the amount of DNA damage caused by ionizing radiation.

  3. 3D Textured Modelling of both Exterior and Interior of Korean Styled Architectures

    NASA Astrophysics Data System (ADS)

    Lee, J.-D.; Bhang, K.-J.; Schuhr, W.

    2017-08-01

    This paper describes the 3D modelling procedure for two Korean styled architectures, performed through a series of processing steps on data acquired with the terrestrial laser scanner. These two case projects illustrate the use of the terrestrial laser scanner as a digital documentation tool for the management, conservation and restoration of cultural assets. We showed an approach to automate reconstruction of both the outside and inside models of a building from laser scanning data. Laser scanning technology is much more efficient than existing photogrammetry in measuring shape and constructing spatial databases for the preservation and restoration of cultural assets as well as for deformation monitoring and safety diagnosis of structures.

  4. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Tiger Team

    SciTech Connect

    de Supinski, Bronis R.; Alam, Sadaf; Bailey, David H.; Carrington, Laura; Daley, Chris; Dubey, Anshu; Gamblin, Todd; Gunter, Dan; Hovland, Paul D.; Jagode, Heike; Karavanic, Karen; Marin, Gabriel; Mellor-Crummey, John; Moore, Shirley; Norris, Boyana; Oliker, Leonid; Olschanowsky, Catherine; Roth, Philip C.; Schulz, Martin; Shende, Sameer; Snavely, Allan; Spear, Wyatt; Tikir, Mustafa; Vetter, Jeff; Worley, Pat; Wright, Nicholas

    2009-06-26

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  5. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Tiger Team

    SciTech Connect

    de Supinski, B R; Alam, S R; Bailey, D H; Carrington, L; Daley, C

    2009-05-27

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort to the optimization of key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  6. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Team

    SciTech Connect

    de Supinski, Bronis R.; Alam, Sadaf R; Bailey, David; Carrington, Laura; Daley, Christopher; Dubey, Anshu; Gamblin, Todd; Gunter, Dan; Hovland, Paul; Jagode, Heike; Karavanic, Karen; Marin, Gabriel; Mellor-Crummey, John; Moore, Shirley; Norris, Boyana; Oliker, Leonid; Olschanowsky, Cathy; Roth, Philip C; Schulz, Martin; Shende, Sameer; Snavely, Allan; Spear, Wyatt; Tikir, Mustafa; Vetter, Jeffrey S; Worley, Patrick H; Wright, Nicholas

    2009-01-01

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  7. Modeling the Office of Science ten year facilities plan: The PERI Architecture Tiger Team

    NASA Astrophysics Data System (ADS)

    de Supinski, Bronis R.; Alam, Sadaf; Bailey, David H.; Carrington, Laura; Daley, Chris; Dubey, Anshu; Gamblin, Todd; Gunter, Dan; Hovland, Paul D.; Jagode, Heike; Karavanic, Karen; Marin, Gabriel; Mellor-Crummey, John; Moore, Shirley; Norris, Boyana; Oliker, Leonid; Olschanowsky, Catherine; Roth, Philip C.; Schulz, Martin; Shende, Sameer; Snavely, Allan; Spear, Wyatt; Tikir, Mustafa; Vetter, Jeff; Worley, Pat; Wright, Nicholas

    2009-07-01

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  8. State of the Art of the Landscape Architecture Spatial Data Model from a Geospatial Perspective

    NASA Astrophysics Data System (ADS)

    Kastuari, A.; Suwardhi, D.; Hanan, H.; Wikantika, K.

    2016-10-01

    Spatial data and information have long been used in planning and landscape design. For a long time, architects used spatial data in the form of topographic maps for their designs. This method is inefficient, and it is also less accurate than spatial analysis using GIS. Architects also sometimes accentuate only the aesthetic aspect of their design without taking landscape processes into account, which can make the design unsuitable for its use and purpose. Nowadays, the role of GIS in landscape architecture has been formalized by the emergence of the Geodesign terminology, which starts with a Representation Model and ends with a Decision Model. The development of GIS can be seen in several fields of science that now have an urgent need for 3-dimensional GIS, such as 3D urban planning, flood modeling, and landscape planning. In these fields, 3-dimensional GIS is able to support the steps of modeling, analysis, management, and integration of related data that describe human activities and geophysical phenomena in a more realistic way. Also, by applying 3D GIS and Geodesign in landscape design, geomorphological information can be better presented and assessed. Some research mentions that the development of 3D GIS is not yet established, either in its 3D data structures or in its spatial analysis functions. This literature study addresses those problems by providing information on the existing development of 3D GIS for landscape architecture, data modelling, data accuracy, and the representation of data needed for landscape architecture purposes, specifically in river areas.

  9. Modeling of Heavy Metal Transformation in Soil Ecosystem

    NASA Astrophysics Data System (ADS)

    Kalinichenko, Kira; Nikovskaya, Galina N.

    2017-04-01

    The intensification of industrial activity leads to an increase in heavy metal pollution of soils. In our opinion, sludge from the biological treatment of municipal waste water, stabilized under aerobic-anaerobic conditions (commonly known as biosolid), may be considered a concentrate of natural soil. In their chemical, physical, and biological properties these systems are similar to gel-like nanocomposites. They contain microorganisms, humic substances, clay, clusters of nanoparticles of heavy metal compounds, and so on, embedded in a heteropolysaccharide matrix. It is known that microorganisms play an important role in the transformation of substances of different nature in soil and in the maintenance of its health. The regularities of transformation of heavy metal compounds in the soil ecosystem were studied using biosolid as a model. During biosolid swelling, changes in its structure (gel-sol transition, weakening of coagulation contacts between metal-containing nanoparticles, microbial cells and metabolites, loosening and even destruction of the nanocomposite structure) can occur [1, 2]. The promotion of the sludge's heterotrophic microbial activities leads to solubilization of heavy metal compounds in the system. The microbiological process can be realized in alkaligenous or acidogenous regimes depending on the type of carbon source, and is followed by the synthesis of metabolites with the properties of flocculants and heavy metal extractants [3]. In this case, heavy metal solubilization (bioleaching) in the form of nanoparticles of hydroxycarbonate complexes or water-soluble complexes with oxycarbonic acids is observed. Under the action of biosolid microorganisms, the heavy metal-oxycarbonic acid complexes can be transformed (catabolized) into nano-sized heavy metal-hydroxycarbonate complexes. 
These ecologically friendly complexes and microbial heteropolysaccharides are able to interact with soil colloids, stay in the top soil profile, and improve soil structure due

  10. Guiding Principles for Data Architecture to Support the Pathways Community HUB Model.

    PubMed

    Zeigler, Bernard P; Redding, Sarah; Leath, Brenda A; Carter, Ernest L; Russell, Cynthia

    2016-01-01

    The Pathways Community HUB Model provides a unique strategy to effectively supplement health care services with social services needed to overcome barriers for those most at risk of poor health outcomes. Pathways are standardized measurement tools used to define and track health and social issues from identification through to a measurable completion point. The HUB uses Pathways to coordinate agencies and service providers in the community to eliminate the inefficiencies and duplication that exist among them. Experience with the Model has brought out the need for better information technology solutions to support implementation of the Pathways themselves through decision-support tools for care coordinators and other users to track activities and outcomes, and to facilitate reporting. Here we provide a basis for discussing recommendations for such a data infrastructure by developing a conceptual model that formalizes the Pathway concept underlying current implementations. The main contribution is a set of core recommendations as a framework for developing and implementing a data architecture to support implementation of the Pathways Community HUB Model. The objective is to present a tool for communities interested in adopting the Model to learn from and to adapt in their own development and implementation efforts. Experience with the Community Health Access Project (CHAP) database system (the core implementation of the Model) has identified several issues and remedies that have been developed to address these issues. Based on analysis of issues and remedies, we present several key features for a data architecture meeting the just mentioned recommendations. Presentation of features is followed by a practical guide to their implementation allowing an organization to consider either tailoring off-the-shelf generic systems to meet the requirements or offerings that are specialized for community-based care coordination. Looking to future extensions, we discuss the

  11. Capability Maturity Model Integration (CMMI) V1.3 and Architecture-Centric Engineering

    DTIC Science & Technology

    2011-05-17

    [Excerpt from briefing slides] Example quality attribute scenario: source, stimulus, environment, artifact (process, storage, processor, communication), response, response measure; a "performance" scenario is given as an example. Architecture-centric engineering activities include creating or selecting the architecture, documenting and communicating the architecture, analyzing or evaluating the architecture, and implementing the system.

  12. Integrating water quality models in the High Level Architecture (HLA) environment

    NASA Astrophysics Data System (ADS)

    Lindenschmidt, K.-E.; Hesser, F. B.; Rode, M.

    2005-08-01

    HLA (High Level Architecture) is a computer architecture for constructing distributed simulations. It facilitates interoperability among different simulations and simulation types and promotes reuse of simulation software modules. The core of the HLA is the Run-Time Infrastructure (RTI) that provides services to start and stop a simulation execution, to transfer data between interoperating simulations, to control the amount and routing of data that is passed, and to co-ordinate the passage of simulated time among the simulations. The authors are not aware of any HLA applications in the field of water resources management. The development of such a system is underway at the UFZ Centre for Environmental Research, Germany, in which the simulations of a hydrodynamic model (DYNHYD), eutrophication model (EUTRO) and sediment and micro-pollutant transport model (TOXI) are interlinked and co-ordinated by the HLA RTI environment. This configuration enables extensions such as (i) "cross-model" uncertainty analysis with Monte Carlo Analysis: time synchronisation allows EUTRO and TOXI simulations to be made after each successive simulation time step in DYNHYD, (ii) information transfer from EUTRO to TOXI to compute organic carbon fractions of particulate matter in TOXI, (iii) information transfer from TOXI to EUTRO to compute extinction coefficients in EUTRO and (iv) feedback from water quality simulations to the hydrodynamic modeling.
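The lockstep coordination the RTI provides can be illustrated with a toy scheduler. The classes below are hypothetical stand-ins for DYNHYD, EUTRO and TOXI with invented outputs; a real federation would exchange these values through RTI data-distribution and time-management services rather than direct calls:

```python
class Hydro:            # stand-in for the hydrodynamic model (DYNHYD)
    def step(self, t):
        return {"flow": 1.0 + 0.1 * t}            # fake hydrodynamic output

class Eutro:            # stand-in for the eutrophication model (EUTRO)
    def step(self, t, flow):
        algae = 2.0 / flow
        return {"organic_carbon": 0.3 * algae}    # passed on to TOXI

class Toxi:             # stand-in for the transport model (TOXI)
    def step(self, t, flow, organic_carbon):
        return {"extinction": 0.05 * organic_carbon}  # fed back to EUTRO

hydro, eutro, toxi = Hydro(), Eutro(), Toxi()
log = []
for t in range(3):                        # coordinator advances all models in lockstep
    h = hydro.step(t)                     # hydrodynamics first
    e = eutro.step(t, h["flow"])          # EUTRO consumes the flow field
    x = toxi.step(t, h["flow"], e["organic_carbon"])  # TOXI consumes both
    log.append((t, h["flow"], e["organic_carbon"], x["extinction"]))
```

The key property, mirrored from the abstract, is that no model advances past time t until every model has completed time t, so cross-model quantities (organic carbon fractions, extinction coefficients) are always exchanged at a consistent simulated time.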

  13. Computationally efficient method for Fourier transform of highly chirped pulses for laser and parametric amplifier modeling.

    PubMed

    Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail

    2016-11-14

    We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by a factor of the pulse stretching. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
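The underlying observation, that removing the known quadratic phase leaves an almost transform-limited field that fits on a much coarser grid, can be checked numerically. A small sketch with illustrative parameters (this demonstrates the principle, not the authors' algorithm):

```python
import numpy as np

def rms_bandwidth(field, dt):
    """RMS width of the power spectrum of a sampled complex field."""
    spec = np.abs(np.fft.fftshift(np.fft.fft(field))) ** 2
    freq = np.fft.fftshift(np.fft.fftfreq(len(field), dt))
    p = spec / spec.sum()
    mean = (freq * p).sum()
    return np.sqrt((((freq - mean) ** 2) * p).sum())

dt = 0.02
t = np.arange(-50.0, 50.0, dt)
b = 0.5                                    # quadratic-phase (chirp) coefficient, assumed
envelope = np.exp(-(t / 10.0) ** 2)        # slowly varying amplitude
chirped = envelope * np.exp(1j * b * t**2) # highly chirped, stretched pulse

bw_chirped = rms_bandwidth(chirped, dt)
bw_envelope = rms_bandwidth(envelope, dt)  # far narrower: a much coarser grid suffices
```

Here the chirped field occupies a spectral width tens of times larger than its envelope, which is why factoring the quadratic phase out analytically lets the FFT grid shrink roughly by the stretching factor.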

  14. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-01-01

    We analyse the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams which show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modelling groups. These diagrams offer insights into the similarities and differences between models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  15. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-04-01

    We analyze the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams that show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modeling groups. These diagrams offer insights into the similarities and differences in structure between climate models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.
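The sorting step, grouping preprocessed source files into components and keeping only cross-component data flows as diagram arrows, can be sketched as follows. The module names, line counts, and flows below are invented for illustration; the study derives them from the actual model source code:

```python
from collections import defaultdict

# Hypothetical module inventory: (module, lines of code, component label)
modules = [
    ("ocean_dyn.f90",  12000, "ocean"),
    ("ocean_bgc.f90",   7000, "ocean"),
    ("atm_dyn.f90",    30000, "atmosphere"),
    ("atm_phys.f90",   25000, "atmosphere"),
    ("ice_thermo.f90",  4000, "sea ice"),
    ("coupler.f90",     3000, "coupler"),
]
# Hypothetical inter-module data flows (producer -> consumer)
flows = [("atm_dyn.f90", "coupler.f90"), ("ocean_dyn.f90", "coupler.f90"),
         ("coupler.f90", "ice_thermo.f90"), ("atm_phys.f90", "atm_dyn.f90")]

component = {name: comp for name, _, comp in modules}
sizes = defaultdict(int)
for name, loc, comp in modules:
    sizes[comp] += loc          # component area in the diagram ~ lines of code

# Keep only flows that cross component boundaries (these become diagram arrows)
arrows = {(component[a], component[b]) for a, b in flows
          if component[a] != component[b]}
```

Component sizes give the relative areas in the architecture diagram, and the filtered arrow set gives the inter-component data flows; intra-component calls (here atm_phys -> atm_dyn) are absorbed into the component.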

  16. Investigating the genetic architecture of conditional strategies using the environmental threshold model

    PubMed Central

    Hazel, Wade N.; Tomkins, Joseph L.

    2015-01-01

    The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprises the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a ‘half-sib common environment’ and a ‘family-level split environment’ experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. From the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic ‘proximate’ cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions. PMID:26674955

  17. Investigating the genetic architecture of conditional strategies using the environmental threshold model.

    PubMed

    Buzatto, Bruno A; Buoro, Mathieu; Hazel, Wade N; Tomkins, Joseph L

    2015-12-22

    The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprises the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a 'half-sib common environment' and a 'family-level split environment' experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. From the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic 'proximate' cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions.
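A minimal simulation of the LET-style setup, an observable cue, a correlated cryptic "proximate" cue, and a heritable threshold producing a dichotomous morph, might look like this (all distributions and parameters are assumed for illustration, not estimates from the earwig data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Each male has an observable cue (e.g. body size), a correlated cryptic
# proximate cue that actually drives the switch, and a heritable threshold.
cue = rng.normal(0.0, 1.0, n)
r_true = 0.9                                   # assumed cue-proximate correlation
proximate = r_true * cue + np.sqrt(1 - r_true**2) * rng.normal(0.0, 1.0, n)
threshold = rng.normal(0.0, 0.3, n)            # genetic variation in the switch point

morph_major = proximate > threshold            # dichotomous phenotype

# Observable quantities an experimenter could estimate
r = np.corrcoef(cue, proximate)[0, 1]          # cue vs proximate-cue correlation
freq = morph_major.mean()                      # major-morph frequency
```

The point of the sketch is the paper's core difficulty: only `cue` and `morph_major` are observable in practice, while `proximate` and `threshold` are latent, so their variances and correlations must be recovered indirectly, e.g. from pedigree structure.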

  18. Using three-dimensional plant root architecture in models of shallow-slope stability.

    PubMed

    Danjon, Frédéric; Barker, David H; Drexhage, Michael; Stokes, Alexia

    2008-05-01

    The contribution of vegetation to shallow-slope stability is of major importance in landslide-prone regions. However, existing slope stability models use only limited plant root architectural parameters. This study aims to provide a chain of tools useful for determining the contribution of tree roots to soil reinforcement. Three-dimensional digitizing in situ was used to obtain accurate root system architecture data for mature Quercus alba in two forest stands. These data were used as input to tools developed, which analyse the spatial position of roots, topology and geometry. The contribution of roots to soil reinforcement was determined by calculating additional soil cohesion using the limit equilibrium model, and the factor of safety (FOS) using an existing slope stability model, Slip4Ex. Existing models may incorrectly estimate the additional soil cohesion provided by roots, as the spatial position of roots crossing the potential slip surface is usually not taken into account. However, most soil reinforcement by roots occurs close to the tree stem and is negligible at a distance >1.0 m from the tree, and therefore global values of FOS for a slope do not take into account local slippage along the slope. Within a forest stand on a landslide-prone slope, soil fixation by roots can be minimal between uniform rows of trees, leading to local soil slippage. Therefore, staggered rows of trees would improve overall slope stability, as trees would arrest the downward movement of soil. The chain of tools consisting of both software (free for non-commercial use) and functions available from the first author will enable a more accurate description and use of root architectural parameters in standard slope stability analyses.
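The limit-equilibrium calculation can be sketched with the standard infinite-slope factor-of-safety formula, adding root cohesion to the resisting term. This is the generic textbook form, not Slip4Ex itself, and the parameter values are illustrative rather than the paper's case-study inputs:

```python
import numpy as np

def factor_of_safety(c_soil, c_roots, slope_deg, gamma=18.0, z=1.2,
                     phi_deg=30.0, u=0.0):
    """Infinite-slope limit-equilibrium FOS with an added root cohesion term.
    c_soil, c_roots: soil and root cohesion (kPa); gamma: unit weight (kN/m^3);
    z: slip-surface depth (m); phi_deg: friction angle; u: pore pressure (kPa).
    All default values are assumed for illustration."""
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    resisting = c_soil + c_roots + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    return resisting / driving

fos_bare = factor_of_safety(c_soil=2.0, c_roots=0.0, slope_deg=35.0)
fos_rooted = factor_of_safety(c_soil=2.0, c_roots=5.0, slope_deg=35.0)
```

The abstract's caution applies directly to `c_roots`: since root reinforcement decays rapidly beyond about 1 m from the stem, applying a single global root-cohesion value over the whole slip surface can overstate stability between trees.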

  19. Using Three-dimensional Plant Root Architecture in Models of Shallow-slope Stability

    PubMed Central

    Danjon, Frédéric; Barker, David H.; Drexhage, Michael; Stokes, Alexia

    2008-01-01

    Background The contribution of vegetation to shallow-slope stability is of major importance in landslide-prone regions. However, existing slope stability models use only limited plant root architectural parameters. This study aims to provide a chain of tools useful for determining the contribution of tree roots to soil reinforcement. Methods Three-dimensional digitizing in situ was used to obtain accurate root system architecture data for mature Quercus alba in two forest stands. These data were used as input to tools developed, which analyse the spatial position of roots, topology and geometry. The contribution of roots to soil reinforcement was determined by calculating additional soil cohesion using the limit equilibrium model, and the factor of safety (FOS) using an existing slope stability model, Slip4Ex. Key Results Existing models may incorrectly estimate the additional soil cohesion provided by roots, as the spatial position of roots crossing the potential slip surface is usually not taken into account. However, most soil reinforcement by roots occurs close to the tree stem and is negligible at a distance >1·0 m from the tree, and therefore global values of FOS for a slope do not take into account local slippage along the slope. Conclusions Within a forest stand on a landslide-prone slope, soil fixation by roots can be minimal between uniform rows of trees, leading to local soil slippage. Therefore, staggered rows of trees would improve overall slope stability, as trees would arrest the downward movement of soil. The chain of tools consisting of both software (free for non-commercial use) and functions available from the first author will enable a more accurate description and use of root architectural parameters in standard slope stability analyses. PMID:17766845

  20. Model-Driven Development of Reliable Avionics Architectures for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Borer, Nicholas; Claypool, Ian; Clark, David; West, John; Somervill, Kevin; Odegard, Ryan; Suzuki, Nantel

    2010-01-01

    This paper discusses a method used for the systematic improvement of NASA's Lunar Surface Systems avionics architectures in the area of reliability and fault-tolerance. This approach utilizes an integrated system model to determine the effects of component failure on the system's ability to provide critical functions. A Markov model of the potential degraded system modes is created to characterize the probability of these degraded modes, and the system model is run for each Markov state to determine its status (operational or system loss). The probabilistic results from the Markov model are first produced from state transition rates based on NASA data for heritage failure rate data of similar components. An additional set of probabilistic results are created from a representative set of failure rates developed for this study, for a variety of component quality grades (space-rated, mil-spec, ruggedized, and commercial). The results show that careful application of redundancy and selected component improvement should result in Lunar Surface Systems architectures that exhibit an appropriate degree of fault-tolerance, reliability, performance, and affordability.
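A toy version of the Markov calculation, with a nominal state, a degraded state, and an absorbing loss state, shows how state probabilities over a mission are obtained. The transition rates below are made up for illustration, not the NASA heritage failure-rate data used in the study:

```python
import numpy as np

lam = 1e-4        # failure rate of the primary channel (per hour, assumed)
lam_b = 5e-4      # failure rate of the backup while degraded (per hour, assumed)

# Generator matrix: 0 = nominal, 1 = degraded (one channel lost), 2 = system loss
Q = np.array([[-lam,   lam,    0.0],
              [0.0,   -lam_b,  lam_b],
              [0.0,    0.0,    0.0]])   # loss is absorbing

# Integrate dP/dt = P Q over a 180-day surface mission with forward Euler
p = np.array([1.0, 0.0, 0.0])
dt = 1.0                                 # hours
for _ in range(int(180 * 24 / dt)):
    p = p + p @ Q * dt

p_loss = p[2]                            # probability of system loss at end of mission
```

In the study's workflow each non-absorbing Markov state corresponds to a degraded configuration that the integrated system model is run against, to classify the state as operational or system loss; here that classification is baked into the structure of Q.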

  1. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
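The EKF recursion itself, re-linearizing the nonlinear model at each step for the covariance propagation and the gain, can be sketched on a scalar toy plant. This is a generic EKF on invented dynamics, not the C-MAPSS40k engine model:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):  return x - 0.1 * x**3 + 0.5   # nonlinear plant dynamics (assumed)
def h(x):  return x**2                   # nonlinear sensor (assumed)

Q, R = 1e-3, 1e-2                        # process / measurement noise variances
x_true, x_hat, P = 1.0, 0.5, 1.0         # true state, estimate, estimate covariance

for _ in range(200):
    # simulate the plant and a noisy measurement
    x_true = f(x_true) + rng.normal(0, np.sqrt(Q))
    y = h(x_true) + rng.normal(0, np.sqrt(R))

    # predict: propagate through f, linearize at the estimate (F = df/dx)
    x_pred = f(x_hat)
    F = 1 - 0.3 * x_hat**2
    P = F * P * F + Q

    # update: linearize h at the prediction (H = dh/dx), apply the Kalman gain
    H = 2 * x_pred
    K = P * H / (H * P * H + R)
    x_hat = x_pred + K * (y - h(x_pred))
    P = (1 - K * H) * P

err = abs(x_hat - x_true)
```

The difference from the piece-wise linear OTKF approach described above is that F and H are recomputed from the nonlinear model at every step, rather than taken from an off-line linearization, which is what reduces estimation error during transients.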

  3. Historic Building Information Modelling - Adding intelligence to laser and image based surveys of European classical architecture

    NASA Astrophysics Data System (ADS)

    Murphy, Maurice; McGovern, Eugene; Pavia, Sara

    2013-02-01

    Historic Building Information Modelling (HBIM) is a novel prototype library of parametric objects, based on historic architectural data and a system of cross platform programmes for mapping parametric objects onto point cloud and image survey data. The HBIM process begins with remote collection of survey data using a terrestrial laser scanner combined with digital photo modelling. The next stage involves the design and construction of a parametric library of objects, which are based on the manuscripts ranging from Vitruvius to 18th century architectural pattern books. In building parametric objects, the problem of file format and exchange of data has been overcome within the BIM ArchiCAD software platform by using geometric descriptive language (GDL). The plotting of parametric objects onto the laser scan surveys as building components to create or form the entire building is the final stage in the reverse engineering process. The final HBIM product is the creation of full 3D models including detail behind the object's surface concerning its methods of construction and material make-up. The resultant HBIM can automatically create cut sections, details and schedules in addition to the orthographic projections and 3D models (wire frame or textured) for both the analysis and conservation of historic objects, structures and environments.

  4. Second Annual Transformative Vertical Flight Concepts Workshop: Enabling New Flight Concepts Through Novel Propulsion and Energy Architectures

    NASA Technical Reports Server (NTRS)

    Dudley, Michael R. (Editor); Duffy, Michael; Hirschberg, Michael; Moore, Mark; German, Brian; Goodrich, Ken; Gunnarson, Tom; Petermaier, Korbinian; Stoll, Alex; Fredericks, Bill; Gibson, Andy; Newman, Aron; Ouellette, Richard; Antcliff, Kevin; Sinkula, Michael; Buettner-Garrett, Josh; Ricci, Mike; Keogh, Rory; Moser, Tim; Borer, Nick; Rizzi, Steve; Lighter, Gwen

    2015-01-01

    On August 3rd and 4th, 2015, a workshop was held at the NASA Ames Research Center, located at Moffett Federal Airfield in California, to explore the aviation community's interest in Transformative Vertical Flight (TVF) Concepts. The Workshop was sponsored by AHS International (AHS), the American Institute of Aeronautics and Astronautics (AIAA), and the National Aeronautics and Space Administration (NASA), and hosted by the NASA Aeronautics Research Institute (NARI). This second annual workshop built on the success and enthusiasm generated by the first TVF Workshop held in Washington, DC in August of 2014. The previous Workshop identified the existence of a multi-disciplinary community interested in this topic and established a consensus among the participants that opportunities to establish further collaborations in this area are warranted. The desire to conduct a series of annual workshops, augmented by online virtual technical seminars, to strengthen the TVF community and continue planning for advocacy and collaboration was a direct outcome of the first Workshop. The second Workshop organizers focused on four desired action-oriented outcomes. The first was to establish and document common stakeholder needs and areas of potential collaboration, including advocacy strategies to encourage the future success of unconventional vertiport-capable flight concept solutions that are enabled by emerging technologies. The second was to assemble a community that can collaborate on new conceptual design and analysis tools to permit novel configuration paths with far greater multi-disciplinary coupling (i.e., aero-propulsive-control) to be investigated. The third was to establish a community to develop and deploy regulatory guidelines.
This community would have the potential to initiate formation of an American Society for Testing and Materials (ASTM) F44 Committee Subgroup for the development of consensus-based certification standards for General Aviation scale vertiport

  5. High-Performance Work Systems: American Models of Workplace Transformation.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Batt, Rosemary

    Rising competition in world and domestic markets for the past 2 decades has necessitated that U.S. companies undergo significant transformations to improve their performance with respect to a wide array of efficiency and quality indicators. Research on the transformations recently undertaken by some U.S. companies to boost performance revealed two…

  6. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
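    The core building block of the accelerated procedure is the convolution theorem: a convolution in the spatial domain becomes an element-wise product in the frequency domain. A minimal CPU-only sketch of that trick (no CUDA or cuFFT here; inputs are restricted to power-of-two lengths):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(x):
    """Inverse FFT via the conjugation trick."""
    n = len(x)
    y = fft([v.conjugate() for v in x])
    return [v.conjugate() / n for v in y]

def circular_convolve(a, b):
    """Convolution theorem: conv(a, b) = IFFT(FFT(a) * FFT(b))."""
    fa, fb = fft(a), fft(b)
    return [v.real for v in ifft([x * y for x, y in zip(fa, fb)])]

# Convolving with a shifted delta cyclically shifts the signal.
shifted = circular_convolve([1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 0.0, 0.0])
```

    On an n-point grid this replaces an O(n^2) direct convolution with O(n log n) work, which is the scaling advantage the GPU implementation then parallelizes further.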

  7. Modeling in architectural-planning solutions of agrarian technoparks as elements of the infrastructure

    NASA Astrophysics Data System (ADS)

    Abdrassilova, Gulnara S.

    2017-09-01

    In the context of the development of agriculture as the driver of the economy of Kazakhstan, it is imperative to study new types of agrarian constructions (agroparks, agrotourist complexes, "vertical" farms, conservatories, greenhouses) that can be combined into complexes known as agrarian technoparks. Creation of agrarian technoparks as elements of the infrastructure of the agglomeration shall ensure a breakthrough in the production, storage, and recycling of agrarian goods. Modeling of architectural-planning solutions for agrarian technoparks supports the development of the theory and practice of designing such objects based on innovative approaches.

  8. The Reactive-Causal Architecture: Introducing an Emotion Model along with Theories of Needs

    NASA Astrophysics Data System (ADS)

    Aydin, Ali Orhan; Orgun, Mehmet Ali

    In the entertainment application area, one of the major aims is to develop believable agents. To achieve this aim, agents should be highly autonomous, situated, flexible, and display affect. The Reactive-Causal Architecture (ReCau) is proposed to simulate these core attributes. In its current form, ReCau cannot explain the effects of emotions on intelligent behaviour. This study aims to further improve the emotion model of ReCau to explain these effects. This improvement allows ReCau to display emotion, supporting the development of believable agents.

  9. Orion Flight Test 1 Architecture: Observed Benefits of a Model Based Engineering Approach

    NASA Technical Reports Server (NTRS)

    Simpson, Kimberly A.; Sindiy, Oleg V.; McVittie, Thomas I.

    2012-01-01

    This paper details how a NASA-led team is using a model-based systems engineering approach to capture, analyze and communicate the end-to-end information system architecture supporting the first unmanned orbital flight of the Orion Multi-Purpose Crew Exploration Vehicle. Along with a brief overview of the approach and its products, the paper focuses on the observed program-level benefits, challenges, and lessons learned, all of which may be applied to improve systems engineering tasks for characteristically similar challenges.

  11. Structural Models that Manage IT Portfolio Affecting Business Value of Enterprise Architecture

    NASA Astrophysics Data System (ADS)

    Kamogawa, Takaaki

    This paper examines the structural relationships between Information Technology (IT) governance and Enterprise Architecture (EA), with the objective of enhancing business value in the enterprise society. Structural models consisting of four related hypotheses reveal the relationship between IT governance and EA in the improvement of business values. We statistically examined the hypotheses by analyzing validated questionnaire items from respondents within firms listed on the Japanese stock exchange who were qualified to answer them. We concluded that firms which have organizational ability controlled by IT governance are more likely to deliver business value based on IT portfolio management.

  12. Model-Based Systems Engineering for Capturing Mission Architecture System Processes with an Application Case Study - Orion Flight Test 1

    NASA Technical Reports Server (NTRS)

    Bonanne, Kevin H.

    2011-01-01

    Model-based Systems Engineering (MBSE) is an emerging methodology that can be leveraged to enhance many system development processes. MBSE allows for the centralization of an architecture description that would otherwise be stored in various locations and formats, thus simplifying communication among the project stakeholders, inducing commonality in representation, and expediting report generation. This paper outlines the MBSE approach taken to capture the processes of two different, but related, architectures by employing the Systems Modeling Language (SysML) as a standard for architecture description and the modeling tool MagicDraw. The overarching goal of this study was to demonstrate the effectiveness of MBSE as a means of capturing and designing a mission systems architecture. The first portion of the project focused on capturing the necessary system engineering activities that occur when designing, developing, and deploying a mission systems architecture for a space mission. The second part applies activities from the first to an application problem - the system engineering of the Orion Flight Test 1 (OFT-1) End-to-End Information System (EEIS). By modeling the activities required to create a space mission architecture and then implementing those activities in an application problem, the utility of MBSE as an approach to systems engineering can be demonstrated.

  14. Guiding Principles for Data Architecture to Support the Pathways Community HUB Model

    PubMed Central

    Zeigler, Bernard P.; Redding, Sarah; Leath, Brenda A.; Carter, Ernest L.; Russell, Cynthia

    2016-01-01

    Introduction: The Pathways Community HUB Model provides a unique strategy to effectively supplement health care services with the social services needed to overcome barriers for those most at risk of poor health outcomes. Pathways are standardized measurement tools used to define and track health and social issues from identification through to a measurable completion point. The HUB uses Pathways to coordinate agencies and service providers in the community to eliminate the inefficiencies and duplication that exist among them. Pathways Community HUB Model and Formalization: Experience with the Model has brought out the need for better information technology solutions to support implementation of the Pathways themselves through decision-support tools for care coordinators and other users to track activities and outcomes, and to facilitate reporting. Here we provide a basis for discussing recommendations for such a data infrastructure by developing a conceptual model that formalizes the Pathway concept underlying current implementations. Requirements for Data Architecture to Support the Pathways Community HUB Model: The main contribution is a set of core recommendations as a framework for developing and implementing a data architecture to support implementation of the Pathways Community HUB Model. The objective is to present a tool for communities interested in adopting the Model to learn from and to adapt in their own development and implementation efforts. Problems with Quality of Data Extracted from the CHAP Database: Experience with the Community Health Access Project (CHAP) database system (the core implementation of the Model) has identified several issues, and remedies have been developed to address them. Based on analysis of these issues and remedies, we present several key features for a data architecture meeting the aforementioned recommendations.
Implementation of Features: Presentation of features is followed by a practical guide to their implementation

  15. A new technique for dynamic load distribution when two manipulators mutually lift a rigid object. Part 2, Derivation of entire system model and control architecture

    SciTech Connect

    Unseren, M.A.

    1994-04-01

    A rigid body model for the entire system which accounts for the load distribution scheme proposed in Part 1 as well as for the dynamics of the manipulators and the kinematic constraints is derived in the joint space. A technique is presented for expressing the object dynamics in terms of the joint variables of both manipulators which leads to a positive definite and symmetric inertia matrix. The model is then transformed to obtain reduced order equations of motion and a separate set of equations which govern the behavior of the internal contact forces. The control architecture is applied to the model which results in the explicit decoupling of the position and internal contact force-controlled degrees of freedom (DOF).

  16. The Empirical Comparison of Coordinate Transformation Models and Distortion Modeling Methods Based on a Case Study of Croatia

    NASA Astrophysics Data System (ADS)

    Grgic, M.; Varga, M.; Bašić, T.

    2015-12-01

    Several coordinate transformation models enable coordinate transformations between the historical astro-geodetic datums, which were utilized before GNSS (Global Navigation Satellite System) technologies were developed, and datums related to the International Terrestrial Reference System (ITRS), which today are most often used to determine position. The decision on the most appropriate coordinate transformation model is influenced by many factors, such as the required accuracy, available computational resources, the possibility of applying the model given the size and shape of the territory, and the coordinate distortion that very often exists in historical astro-geodetic datums. This study is based on geodetic data of the Republic of Croatia in both the historical and the ITRS-related datum. It investigates different transformation models, including the conformal Molodensky 3-parameter (p) and 5p (standard and abridged) transformation models, 7p transformation models (Bursa-Wolf and Molodensky-Badekas), affine transformation models (8p, 9p, 12p), and the Multiple Regression Equation approach. In addition, it investigates the 7p, 8p, 9p, and 12p transformation models extended with distortion modeling, and the purely grid-based transformation model (NTv2). Furthermore, several distortion modeling methods were used to produce models of distortion shifts at different resolutions. Thereafter, their performance and that of the transformation models were evaluated using summary statistics derived from the positional residuals computed for an independent control spatial data set. Lastly, the most appropriate method(s) of distortion modeling and the most appropriate coordinate transformation model(s) were identified with regard to the required accuracy for the Croatian case.
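    Of the models listed, the seven-parameter Bursa-Wolf transformation has a particularly compact form. A sketch under the usual small-angle approximation follows; the rotation sign convention differs between national standards (position-vector vs. coordinate-frame), and all parameter values used below are invented:

```python
def bursa_wolf(p, tx, ty, tz, rx, ry, rz, ds_ppm):
    """Seven-parameter (Bursa-Wolf) similarity transformation with the
    usual small-angle approximation: X' = T + (1 + ds) * R * X.
    Rotations rx, ry, rz are in radians, the scale change ds in ppm.
    Signs follow the position-vector convention; verify the convention
    used by the target datum before applying real parameters."""
    x, y, z = p
    m = 1.0 + ds_ppm * 1e-6
    xp = tx + m * ( x      - rz * y + ry * z)
    yp = ty + m * ( rz * x + y      - rx * z)
    zp = tz + m * (-ry * x + rx * y + z)
    return xp, yp, zp

# Pure translation example with invented parameters.
shifted = bursa_wolf((100.0, 0.0, 0.0), 10.0, -5.0, 2.0, 0.0, 0.0, 0.0, 0.0)
```

    With all seven parameters zero the transform is the identity, which gives a cheap regression test when wiring in estimated parameters.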

  17. Developing Historic Building Information Modelling Guidelines and Procedures for Architectural Heritage in Ireland

    NASA Astrophysics Data System (ADS)

    Murphy, M.; Corns, A.; Cahill, J.; Eliashvili, K.; Chenau, A.; Pybus, C.; Shaw, R.; Devlin, G.; Deevy, A.; Truong-Hong, L.

    2017-08-01

    Cultural heritage researchers have recently begun applying Building Information Modelling (BIM) to historic buildings. The model is comprised of intelligent objects with semantic attributes which represent the elements of a building structure and are organised within a 3D virtual environment. Case studies in Ireland are used to test and develop suitable systems for (a) data capture/digital surveying/processing, (b) developing a library of architectural components, and (c) mapping these architectural components onto the laser scan or digital survey to create an intelligent virtual representation of a historic structure (HBIM). While BIM platforms have the potential to create a virtual and intelligent representation of a building, their full exploitation and use is restricted to a narrow set of expert users with access to costly hardware, software and skills. The testing of open BIM approaches, in particular IFCs, and the use of game engine platforms is a fundamental component for developing much wider dissemination. The semantically enriched model can be transferred into a web-based game engine platform.

  18. Architecture and statistical model of a pulse-mode digital multilayer neural network.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A new architecture and a statistical model for a pulse-mode digital multilayer neural network (DMNN) are presented. Algebraic neural operations are replaced by stochastic processes using pseudo-random pulse sequences. Synaptic weights and neuron states are represented as probabilities and estimated as average rates of pulse occurrences in corresponding pulse sequences. A statistical model of error (or noise) is developed to estimate relative accuracy associated with stochastic computing in terms of mean and variance. The stochastic computing technique is implemented with simple logic gates as basic computing elements leading to a high neuron-density on a chip. Furthermore, the use of simple logic gates for neural operations, the pulse-mode signal representation, and the modular design techniques lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Any size of a feedforward network can be configured where processing speed is independent of the network size. Multilayer feedforward networks are modeled and applied to pattern classification problems such as encoding and character recognition.
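    The central trick of pulse-mode stochastic computing described above is that a single AND gate multiplies the probabilities encoded by two independent pulse streams. A minimal software sketch (stream length and seed are arbitrary choices):

```python
import random

def pulse_stream(p, n, rng):
    """Pseudo-random pulse sequence whose average pulse rate encodes p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(p, q, n=100_000, seed=42):
    """AND-gating two independent pulse streams estimates p * q --
    the trick that lets simple logic gates stand in for multipliers."""
    rng = random.Random(seed)
    a = pulse_stream(p, n, rng)
    b = pulse_stream(q, n, rng)
    return sum(x & y for x, y in zip(a, b)) / n

est = stochastic_multiply(0.5, 0.4)
```

    The estimate's variance shrinks with stream length, mirroring the paper's accuracy model in terms of mean and variance: longer pulse sequences trade speed for precision.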

  19. Comparison of LIDAR system performance for alternative single-mode receiver architectures: modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Toliver, Paul; Ozdur, Ibrahim; Agarwal, Anjali; Woodward, T. K.

    2013-05-01

    In this paper, we describe a detailed performance comparison of alternative single-pixel, single-mode LIDAR architectures including (i) linear-mode APD-based direct detection, (ii) an optically-preamplified PIN receiver, (iii) PIN-based coherent detection, and (iv) Geiger-mode single-photon APD counting. Such a comparison is useful when considering next-generation LIDAR on a chip, which would allow one to leverage extensive waveguide-based structures and processing elements developed for telecom and apply them to small form-factor sensing applications. Models of four LIDAR transmit and receive systems are described in detail, which include not only the dominant sources of receiver noise commonly assumed in each of the four detection limits, but also additional noise terms present in realistic implementations. These receiver models are validated through the analysis of detection statistics collected from an experimental LIDAR testbed. The receiver is reconfigurable into four modes of operation, while transmit waveforms and channel characteristics are held constant. The use of a diffuse hard target highlights the importance of including speckle noise terms in the overall system analysis. All measurements are done at 1550 nm, which offers multiple system advantages including less stringent eye safety requirements and compatibility with available telecom components, optical amplification, and photonic integration. Ultimately, the experimentally-validated detection statistics can be used as part of an end-to-end system model for projecting rate, range, and resolution performance limits and tradeoffs of alternative integrated LIDAR architectures.
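    For the Geiger-mode case (iv), first-order detection statistics follow from Poisson counting: the gate fires if at least one primary photoelectron occurs. A deliberately simplified sketch that ignores dead time, afterpulsing, and the speckle terms the paper emphasizes:

```python
import math

def detection_prob(n_signal, n_noise):
    """Probability a Geiger-mode APD gate fires at least once, assuming
    Poisson-distributed primary photoelectrons with mean counts
    n_signal + n_noise (illustrative model only)."""
    return 1.0 - math.exp(-(n_signal + n_noise))

def false_alarm_prob(n_noise):
    """Gate firing probability with no signal photons present."""
    return 1.0 - math.exp(-n_noise)

p_d = detection_prob(1.0, 0.0)   # one mean signal photoelectron
```

    Comparing p_d against the false-alarm rate for a given noise level is the basic trade a receiver-model comparison of this kind sweeps over.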

  20. Box Architecture.

    ERIC Educational Resources Information Center

    Ham, Jan

    1998-01-01

    Project offers grades 3-8 students hands-on design practice creating built environments to solve a society-based architectural problem. Students plan buildings, draw floor plans, and make scale models of the structures that are then used in related interdisciplinary activities. (Author)

  1. Development of a methodology to reduce the order of a detailed lumped parameter transformer model

    SciTech Connect

    Gutierrez, M.R.

    1993-12-31

    The transformer designer employs detailed electrical models to develop a reliable and cost effective transformer insulation structure. The power engineer must model not only the transformer but the system, and therefore requires a smaller model of the transformer that accurately represents its behavior in order to investigate the effects of power system transients. Reduced models are generally obtained either from detailed design models or from measurements on fully constructed transformers. Reduced models constructed from design data generally act as low pass filters and are severely limited in accuracy at high frequencies. The latter technique has the major disadvantage that the model cannot be constructed until after the transformer has been built. Presently, both methods are subject to considerable error. The primary objective of this thesis is to develop a technique that reduces the order of a lumped parameter transformer model used in insulation design and that provides a reduced model of any specified size for transient studies of systems which contain the transformer. The reduction technique developed can be applied to any lumped parameter network which uses electric parameter analogs (i.e., FEM networks). The method of this thesis uses Kron's reduction approach in the time domain to obtain a reduced model. This reduced model is compatible with industry methods for transient studies (EMTP) and retains the accuracy and stability of the detailed model. Additionally, this reduced model can be used to predict the interaction between the transformer and the power system, via EMTP, giving a valuable tool to both power and design engineers. Application of the technique for the detailed model of a 765/345/34.5 kV, YYD, core form, 500 MVA single phase autotransformer is verified by frequency and time domain tests for the linear model. The nonlinear transformer model reduction technique is outlined and a proof of concept is provided by two examples.
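    The reduction approach named above eliminates internal nodes of a lumped-parameter network one at a time. A minimal sketch of a single Kron elimination step on a nodal admittance matrix (the two-conductance example is illustrative, not a transformer model):

```python
def kron_reduce(A, e):
    """Eliminate internal node e from a symmetric nodal matrix A:
    A'[i][j] = A[i][j] - A[i][e] * A[e][j] / A[e][e]."""
    keep = [i for i in range(len(A)) if i != e]
    return [[A[i][j] - A[i][e] * A[e][j] / A[e][e] for j in keep]
            for i in keep]

# Two unit conductances in series (nodes 0-1-2); eliminating the
# internal node 1 should leave the series equivalent of 0.5 S
# between the two retained terminals.
Y = [[ 1.0, -1.0,  0.0],
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  1.0]]
Y_red = kron_reduce(Y, 1)
```

    Repeating the step over all internal nodes yields a terminal-equivalent network of any specified size, which is the sense in which the reduced model preserves the behavior seen from the retained nodes.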

  2. Using AOSD and MDD to Enhance the Architectural Design Phase

    NASA Astrophysics Data System (ADS)

    Pinto, Mónica; Fuentes, Lidia; Fernández, Luis; Valenzuela, Juan A.

    This paper describes an MDD process that enhances the architectural design phase by closing the gap between ADLs and the notations used at the detailed design phase. We have defined model-to-model transformation rules to automatically generate either aspect-oriented or object-oriented UML 2.0 models from high-level architectural specifications specified using AO-ADL. These rules have been integrated in the AO-ADL Tool Suite, providing support to automatically generate a skeleton of the detailed design that preserves the crosscutting and the non-crosscutting functionalities identified at the architecture level.
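    A model-to-model transformation rule of the kind described can be caricatured as a function from one model structure to another. The toy rule below maps a component with provided ports to a UML-like class skeleton; the field names are invented and far simpler than the AO-ADL metamodel:

```python
def component_to_class(component):
    """Toy model-to-model rule: map an architectural component
    (a name plus provided ports) to a UML-like class skeleton.
    Real AO-ADL-to-UML rules handle aspects, crosscutting concerns,
    and richer port types; this only shows the rule's shape."""
    lines = ["class %s {" % component["name"]]
    for port in component.get("provides", []):
        lines.append("    + %s(): void" % port)
    lines.append("}")
    return "\n".join(lines)

skeleton = component_to_class({"name": "Recommender",
                               "provides": ["suggest", "rate"]})
```

    Generating such skeletons automatically is what lets the detailed design start from, rather than re-derive, the architectural decisions.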

  3. Infra-Free® (IF) Architecture System as the Method for Post-Disaster Shelter Model

    NASA Astrophysics Data System (ADS)

    Chang, Huai-Chien; Anilir, Serkan

    Currently, the International Space Station (ISS) is capable of supporting three to four astronauts onboard for at least six months using an integrated life support system. Waste from the crew's daily life is collected by recycling systems, electricity is generated from solar energy, and so on. Although it resembles the infrastructure we use on Earth, the ISS can be understood as a nearly self-reliant integrated architecture, which offers an important hint for terrestrial architecture that depends on centralized urban infrastructure: such infrastructure supports our daily lives but can be vulnerable to natural disasters. Increasingly, economic activity and communications rely on this enormous central infrastructure, and a natural disaster may cut it off temporarily or permanently. To address this problem, we propose a temporary shelter capable of operating without any existing infrastructure. Using the Infra-free® design framework, which integrates various life-supporting infrastructural elements into one closed system, we apply closed-life-cycle and other emerging technologies inspired by spaceflight to everyday architecture. We develop a scenario for post-disaster housing that solves lifeline problems, such as solid and liquid waste, energy, water, and hygiene, within one system, aiming to establish an Infra-free® shelter model for disaster areas. The ultimate objective is to design a temporary Infra-free® model addressing sanitation and environmental preservation concerns in disaster areas.

  4. A generic model to simulate air-borne diseases as a function of crop architecture.

    PubMed

    Casadebaig, Pierre; Quesnel, Gauthier; Langlais, Michel; Faivre, Robert

    2012-01-01

    In a context of pesticide use reduction, alternatives to chemical-based crop protection strategies are needed to control diseases. Crop and plant architectures can be viewed as levers to control disease outbreaks by affecting the microclimate within the canopy or pathogen transmission between plants. Modeling and simulation is a key approach to help analyze the behaviour of such systems where direct observations are difficult and tedious. Modeling permits the joining of concepts from ecophysiology and epidemiology to define structures and functions generic enough to describe a wide range of epidemiological dynamics. Additionally, this conception should minimize computing time, both by limiting complexity and by providing an efficient software implementation. In this paper, our aim was to present a model that suited these constraints so it could first be used as a research and teaching tool to promote discussions about epidemic management in cropping systems. The system was modelled as a combination of individual hosts (a population of plants or organs) and infectious agents (pathogens) whose contacts are restricted through a network of connections. The system dynamics were described at an individual scale. Additional attention was given to the identification of generic properties of host-pathogen systems to widen the model's applicability domain. Two specific pathosystems with contrasted crop architectures were considered: ascochyta blight on pea (homogeneously layered canopy) and potato late blight (lattice of individualized plants). The model behavior was assessed by simulation and sensitivity analysis, and the results were discussed against the model's ability to discriminate between the defined types of epidemics. Crop traits related to disease avoidance resulting in low exposure, slow dispersal or a de-synchronization of plant and pathogen cycles were shown to strongly impact disease severity at the crop scale.
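    The host-pathogen system described, individual hosts whose contacts are restricted by a network of connections, can be sketched as a discrete-time SIR simulation. The chain-of-plants example loosely evokes a canopy row; the topology and parameters are illustrative, not the paper's model:

```python
import random

def simulate_sir(adj, seed_node, p_transmit, t_recover, steps, rng):
    """Discrete-time SIR epidemic on a contact network: each step, every
    infectious host independently infects each susceptible neighbour
    with probability p_transmit, and recovers after t_recover steps."""
    state = {n: "S" for n in adj}
    state[seed_node] = "I"
    age = {seed_node: 0}
    for _ in range(steps):
        newly = [m for n in adj if state[n] == "I"
                 for m in adj[n]
                 if state[m] == "S" and rng.random() < p_transmit]
        for n in list(age):             # age and recover current infections
            age[n] += 1
            if age[n] >= t_recover:
                state[n] = "R"
                del age[n]
        for m in newly:                 # start new infections
            if state[m] == "S":
                state[m], age[m] = "I", 0
    return state

# A row of ten plants with nearest-neighbour contacts only.
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
final = simulate_sir(chain, 0, p_transmit=1.0, t_recover=2,
                     steps=20, rng=random.Random(0))
```

    Swapping the chain for a lattice or a layered canopy graph changes the dispersal pattern without touching the epidemic rules, which is the kind of architecture-as-lever experiment the model is meant to support.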

  5. A Generic Model to Simulate Air-Borne Diseases as a Function of Crop Architecture

    PubMed Central

    Casadebaig, Pierre; Quesnel, Gauthier; Langlais, Michel; Faivre, Robert

    2012-01-01

    In a context of pesticide use reduction, alternatives to chemical-based crop protection strategies are needed to control diseases. Crop and plant architectures can be viewed as levers to control disease outbreaks by affecting microclimate within the canopy or pathogen transmission between plants. Modeling and simulation is a key approach to help analyze the behaviour of such systems where direct observations are difficult and tedious. Modeling permits the joining of concepts from ecophysiology and epidemiology to define structures and functions generic enough to describe a wide range of epidemiological dynamics. Additionally, this conception should minimize computing time by both limiting model complexity and providing an efficient software implementation. In this paper, our aim was to present a model that suited these constraints so it could first be used as a research and teaching tool to promote discussions about epidemic management in cropping systems. The system was modelled as a combination of individual hosts (population of plants or organs) and infectious agents (pathogens) whose contacts are restricted through a network of connections. The system dynamics were described at an individual scale. Additional attention was given to the identification of generic properties of host-pathogen systems to widen the model's applicability domain. Two specific pathosystems with contrasted crop architectures were considered: ascochyta blight on pea (homogeneously layered canopy) and potato late blight (lattice of individualized plants). The model behavior was assessed by simulation and sensitivity analysis, and these results were discussed against the model's ability to discriminate between the defined types of epidemics. Crop traits related to disease avoidance resulting in a low exposure, a slow dispersal or a de-synchronization of plant and pathogen cycles were shown to strongly impact the disease severity at the crop scale. PMID:23226209
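
    The host-pathogen structure described above (individual hosts whose contacts are restricted through a network of connections) can be sketched as a minimal discrete-time simulation. The state names, durations, and network below are hypothetical illustrations, not the published model.

```python
import random

def simulate_epidemic(neighbors, p_transmit, t_latent, t_infectious,
                      seed_host, steps, rng=None):
    """Minimal discrete-time epidemic on a host contact network.

    neighbors: dict mapping each host to the hosts it is connected to;
    transmission is only possible along these edges, mirroring the
    restricted-contact network in the abstract. States: S(usceptible),
    E(xposed/latent), I(nfectious), R(emoved).
    """
    rng = rng or random.Random(0)
    state = {h: "S" for h in neighbors}
    clock = {h: 0 for h in neighbors}   # time spent in current E/I state
    state[seed_host] = "I"
    for _ in range(steps):
        nxt = dict(state)
        for h, s in state.items():
            if s == "I":
                for nb in neighbors[h]:          # spread along edges only
                    if state[nb] == "S" and rng.random() < p_transmit:
                        nxt[nb] = "E"
                clock[h] += 1
                if clock[h] >= t_infectious:
                    nxt[h] = "R"
            elif s == "E":
                clock[h] += 1
                if clock[h] >= t_latent:
                    nxt[h] = "I"
                    clock[h] = 0
        state = nxt
    return state
```

    With `p_transmit = 1` on a three-host chain, the infection traverses the whole chain and every host ends removed.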

  6. View factor modeling of sputter-deposition on micron-scale-architectured surfaces exposed to plasma

    SciTech Connect

    Huerta, C. E.; Matlock, T. S.; Wirz, R. E.

    2016-03-21

    The sputter-deposition on surfaces exposed to plasma plays an important role in the erosion behavior and overall performance of a wide range of plasma devices. Plasma models in the low density, low energy plasma regime typically neglect micron-scale surface feature effects on the net sputter yield and erosion rate. The model discussed in this paper captures such surface architecture effects via a computationally efficient view factor model. The model compares well with experimental measurements of argon ion sputter yield from a nickel surface with a triangle wave geometry with peak heights in the hundreds of microns range. Further analysis with the model shows that increasing the surface pitch angle beyond about 45° can lead to significant decreases in the normalized net sputter yield for all simulated ion incident energies (i.e., 75, 100, 200, and 400 eV) for both smooth and roughened surfaces. At higher incident energies, smooth triangular surfaces exhibit a nonmonotonic trend in the normalized net sputter yield with surface pitch angle with a maximum yield above unity over a range of intermediate angles. The resulting increased erosion rate occurs because increased sputter yield due to the local ion incidence angle outweighs increased deposition due to the sputterant angular distribution. The model also compares well with experimentally observed radial expansion of protuberances (measuring tens of microns) in a nano-rod field exposed to an argon beam. The model captures the coalescence of sputterants at the protuberance sites and accurately illustrates the structure's expansion due to deposition from surrounding sputtering surfaces; these capabilities will be used for future studies into more complex surface architectures.
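
    The geometric ingredient of such a view factor model can be illustrated with the classical result for two infinitely long plates of equal width sharing an edge, F12 = 1 - sin(alpha/2). The triangle-wave bookkeeping below is a hypothetical simplification for intuition, not the authors' code.

```python
import math

def viewfactor_wedge(opening_deg):
    """View factor F12 between two infinitely long plates of equal width
    sharing an edge, with included angle opening_deg: the standard
    catalog result F12 = 1 - sin(alpha/2)."""
    return 1.0 - math.sin(math.radians(opening_deg) / 2.0)

def redeposition_fraction(pitch_deg):
    """For a symmetric triangle-wave surface with facet pitch angle
    pitch_deg, the groove opening angle is 180 - 2*pitch, so material
    emitted diffusely from one facet is intercepted by the opposite
    facet with probability F12 (a toy proxy for redeposition)."""
    return viewfactor_wedge(180.0 - 2.0 * pitch_deg)
```

    A flat surface (pitch 0) redeposits nothing on itself, and the intercepted fraction grows with pitch angle, which is one reason net yield can fall at steep pitch angles.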

  7. Multi-agent Architecture for the Multi-Skill Tasks Modeling at the Pediatric Emergency Department.

    PubMed

    Ajmi, Ines; Zgaya, Hayfa; Hammadi, Slim; Gammoudi, Lotfi; Martinot, Alain; Beuscart, Régis; Renard, Jean-Marie

    2015-01-01

    Patient journey in the Pediatric Emergency Department is a highly complex process. Current modeling approaches are insufficient because they either focus only on single ancillary units, and therefore do not consider the entire treatment process of the patients, or they do not account for the dynamics of the patient journey. Therefore, we propose an agent-based approach in which patients and emergency department human resources are represented as autonomous agents who are able to react flexibly to changes and disturbances through pro-activeness and reactiveness. The main aim of this paper is to present the overall design of the proposed multi-agent system, emphasizing its architecture and the behavior of each agent of the model. In addition, we describe inter-agent communication based on the agent interaction protocol to ensure cooperation between agents when they perform the coordination of tasks for the users. This work is integrated into the ANR HOST project (ANR-11-TecSan-010).
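
    A contract-net-style exchange (announce a task, collect bids, award it to the best bidder) is one common way to realize this kind of inter-agent coordination. The sketch below is a generic illustration with invented agent and task names, not the paper's protocol.

```python
class ResourceAgent:
    """Hypothetical ED staff agent (nurse, physician, ...) that bids the
    earliest time at which it could start a task."""
    def __init__(self, name, busy_until):
        self.name = name
        self.busy_until = busy_until

    def bid(self, task):
        # The bid is simply this agent's earliest available start time.
        return self.busy_until

    def assign(self, task, start):
        # Accepting the task blocks the agent for the task's duration.
        self.busy_until = start + task["duration"]

def coordinate(task, resources):
    """Announce task to all resource agents, collect bids, and award the
    task to the agent offering the earliest start (contract-net style)."""
    best = min(resources, key=lambda r: r.bid(task))
    start = best.bid(task)
    best.assign(task, start)
    return best.name, start
```

    Awarding to the earliest bidder is only one policy; priorities or skill matching could replace the `min` key without changing the protocol shape.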

  8. Service Oriented Architectural Model for Load Flow Analysis in Power Systems

    NASA Astrophysics Data System (ADS)

    Muthu, Balasingh Moses; Veilumuthu, Ramachandran; Ponnusamy, Lakshmi

    2011-07-01

    The main objective of this paper is to develop a Service Oriented Architectural (SOA) model for the representation of power systems, especially for computing the load flow of large interconnected power systems. The proposed SOA model has three elements, namely the load flow service provider, the power systems registry and the client. The exchange of data using XML makes the power system services standardized and adaptable. The load flow service is provided by the service provider and published in the power systems registry, enabling universal visibility of and access to the service. The message-oriented style of SOA using the Simple Object Access Protocol (SOAP) allows the service provider and the power systems client to exist in a loosely coupled environment. The proposed model portrays the load flow services as Web services in a service-oriented environment. To suit power system industry needs, it integrates easily with Web applications, enabling faster power system operations.
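
    The numerical core wrapped by such a service can be as simple as a DC power flow solve, B·theta = P. The pure-Python elimination below is a stand-in for the load flow service's computation; the SOAP/XML messaging layer is omitted and the two-bus example system is invented.

```python
def dc_load_flow(B, P):
    """Solve B * theta = P by Gaussian elimination with back-substitution.

    B: reduced susceptance matrix (slack bus removed), as nested lists.
    P: net real-power injections (p.u.). Returns bus voltage angles.
    """
    n = len(P)
    A = [row[:] + [p] for row, p in zip(B, P)]   # augmented matrix
    for i in range(n):                            # forward elimination
        piv = A[i][i]
        for j in range(i + 1, n):
            f = A[j][i] / piv
            for k in range(i, n + 1):
                A[j][k] -= f * A[i][k]
    theta = [0.0] * n                             # back-substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][k] * theta[k] for k in range(i + 1, n))
        theta[i] = (A[i][n] - s) / A[i][i]
    return theta
```

    A production service would expose this behind a standardized interface and exchange B and P as XML payloads, as the abstract describes.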

  9. High-performance multiprocessor architecture for a 3-D lattice gas model

    NASA Technical Reports Server (NTRS)

    Lee, F.; Flynn, M.; Morf, M.

    1991-01-01

    The lattice gas method has recently emerged as a promising discrete particle simulation method in areas such as fluid dynamics. We present a very high-performance scalable multiprocessor architecture, called ALGE, proposed for the simulation of a realistic 3-D lattice gas model, Henon's 24-bit FCHC isometric model. Each of these VLSI processors is as powerful as a CRAY-2 for this application. ALGE is scalable in the sense that it achieves linear speedup for both fixed and increasing problem sizes with more processors. The core computation of a lattice gas model consists of many repetitions of two alternating phases: particle collision and propagation. Functional decomposition by symmetry group and virtual move are the respective keys to efficient implementation of collision and propagation.
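
    The collision/propagation alternation at the heart of a lattice gas can be sketched with the much simpler 2-D HPP model (4 channels per site) rather than the 24-bit FCHC model the paper targets; this toy version only shows the two-phase update structure.

```python
def hpp_step(grid):
    """One HPP lattice-gas update on a 2-D torus.

    grid[y][x] is a 4-bit occupation list [N, E, S, W].
    Collision: exactly head-on pairs (N+S, or E+W, with the other two
    channels empty) scatter into the perpendicular pair; every other
    state passes through unchanged. Propagation: each particle then
    hops to the adjacent site in its direction of travel.
    """
    h, w = len(grid), len(grid[0])
    # --- collision phase ---
    post = [[cell[:] for cell in row] for row in grid]
    for y in range(h):
        for x in range(w):
            n, e, s, wv = grid[y][x]
            if n and s and not e and not wv:
                post[y][x] = [0, 1, 0, 1]        # N+S -> E+W
            elif e and wv and not n and not s:
                post[y][x] = [1, 0, 1, 0]        # E+W -> N+S
    # --- propagation phase ---
    new = [[[0, 0, 0, 0] for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            n, e, s, wv = post[y][x]
            if n: new[(y - 1) % h][x][0] = 1
            if e: new[y][(x + 1) % w][1] = 1
            if s: new[(y + 1) % h][x][2] = 1
            if wv: new[y][(x - 1) % w][3] = 1
    return new
```

    Particle number is conserved by construction, which is the invariant a hardware implementation such as ALGE must preserve across both phases.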

  10. Nonlinear model of a distribution transformer appropriate for evaluating the effects of unbalanced loads

    NASA Astrophysics Data System (ADS)

    Toman, Matej; Štumberger, Gorazd; Štumberger, Bojan; Dolinar, Drago

    Power packages for calculation of power system transients are often used when studying and designing electromagnetic power systems. An accurate model of a distribution transformer is needed in order to obtain realistic values from these calculations. This transformer model must be derived in such a way that it is applicable when calculating those operating conditions appearing in practice. Operation conditions where transformers are loaded with nonlinear and unbalanced loads are especially challenging. The purpose of this work is to derive a three-phase transformer model that is appropriate for evaluating the effects of nonlinear and unbalanced loads. A lumped parameter model instead of a finite element (FE) model is considered in order to ensure that the model can be used in power packages for the calculation of power system transients. The transformer model is obtained by coupling electric and magnetic equivalent circuits. The magnetic equivalent circuit contains only three nonlinear reluctances, which represent nonlinear behaviour of the transformer. They are calculated by the inverse Jiles-Atherton (J-A) hysteresis model, while parameters of hysteresis are identified using differential evolution (DE). This considerably improves the accuracy of the derived transformer model. Although the obtained transformer model is simple, the simulation results show good agreement between measured and calculated results.
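
    A single-equation simplification of the (forward) Jiles-Atherton model conveys the hysteresis mechanism the transformer model relies on. Note the hedges: the parameter values below are purely illustrative, and the paper actually uses the *inverse* J-A formulation with parameters identified by differential evolution.

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x (series-expanded near 0)."""
    return x / 3.0 if abs(x) < 1e-4 else 1.0 / math.tanh(x) - 1.0 / x

def ja_sweep(h_values, Ms=1.6e6, a=1100.0, alpha=1e-4, k=400.0):
    """Integrate the reduced Jiles-Atherton equation

        dM/dH = (Man - M) / (delta*k - alpha*(Man - M)),
        Man   = Ms * L((H + alpha*M) / a),  delta = sign(dH),

    with explicit Euler along the field path h_values (A/m).
    Returns the magnetization trace (A/m)."""
    M, trace = 0.0, []
    for h_prev, h in zip(h_values, h_values[1:]):
        dH = h - h_prev
        delta = 1.0 if dH >= 0 else -1.0
        Man = Ms * langevin((h + alpha * M) / a)
        M += dH * (Man - M) / (delta * k - alpha * (Man - M))
        trace.append(M)
    return trace
```

    Sweeping the field up and back down leaves a positive remanent magnetization at H = 0, the hysteresis signature that a purely anhysteretic curve cannot reproduce.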

  11. Development of a Subcell Based Modeling Approach for Modeling the Architecturally Dependent Impact Response of Triaxially Braided Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Sorini, Chris; Chattopadhyay, Aditi; Goldberg, Robert K.; Kohlman, Lee W.

    2016-01-01

    Understanding the high velocity impact response of polymer matrix composites with complex architectures is critical to many aerospace applications, including engine fan blade containment systems where the structure must be able to completely contain fan blades in the event of a blade-out. Despite the benefits offered by these materials, the complex nature of textile composites presents a significant challenge for the prediction of deformation and damage under both quasi-static and impact loading conditions. The relatively large mesoscale repeating unit cell (in comparison to the size of structural components) causes the material to behave like a structure rather than a homogeneous material. Impact experiments conducted at NASA Glenn Research Center have shown the damage patterns to be a function of the underlying material architecture. Traditional computational techniques that involve modeling these materials using smeared homogeneous, orthotropic material properties at the macroscale result in simulated damage patterns that are a function of the structural geometry, but not the material architecture. In order to preserve heterogeneity at the highest length scale in a robust yet computationally efficient manner, and capture the architecturally dependent damage patterns, a previously-developed subcell modeling approach where the braided composite unit cell is approximated as a series of four adjacent laminated composites is utilized. This work discusses the implementation of the subcell methodology into the commercial transient dynamic finite element code LS-DYNA (Livermore Software Technology Corp.). Verification and validation studies are also presented, including simulation of the tensile response of straight-sided and notched quasi-static coupons composed of a T700/PR520 triaxially braided [0deg/60deg/-60deg] composite. 
Based on the results of the verification and validation studies, advantages and limitations of the methodology, as well as plans for future work, are discussed.

  12. Transform-both-sides nonlinear models for in vitro pharmacokinetic experiments.

    PubMed

    Latif, A H M Mahbub; Gilmour, Steven G

    2015-06-01

    Transform-both-sides nonlinear models have proved useful in many experimental applications including those in pharmaceutical sciences and biochemistry. The maximum likelihood method is commonly used to fit transform-both-sides nonlinear models, where the regression and transformation parameters are estimated simultaneously. In this paper, an analysis of variance-based method is described in detail for estimating transform-both-sides nonlinear models from randomized experiments. It estimates the transformation parameter from the full treatment model and then the regression parameters are estimated conditionally on this estimate of the transformation parameter. The analysis of variance method is computationally simpler compared with the maximum likelihood method of estimation and allows a more natural separation of different sources of lack of fit. Simulation studies show that the analysis of variance method can provide unbiased estimators of complex transform-both-sides nonlinear models, such as transform-both-sides random coefficient nonlinear regression models and transform-both-sides fixed coefficient nonlinear regression models with random block effects.
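
    The transform-both-sides idea applies the same Box-Cox transformation to the response and to the model prediction, and the two-stage (ANOVA-style) approach then estimates the regression parameters with the transformation parameter held fixed. In the sketch below, the one-parameter decay model and the grid search are hypothetical stand-ins for the paper's nonlinear models and estimation machinery.

```python
import math

def boxcox(y, lam):
    """Box-Cox transform, applied to *both* sides of the model."""
    return math.log(y) if abs(lam) < 1e-12 else (y ** lam - 1.0) / lam

def tbs_sse(theta, lam, xs, ys):
    """Transformed-scale sum of squared errors for the model
    h(y; lam) = h(f(x; theta); lam) + error, with f(x) = exp(-theta*x)
    (a hypothetical one-parameter decay, just to show the mechanics)."""
    return sum((boxcox(y, lam) - boxcox(math.exp(-theta * x), lam)) ** 2
               for x, y in zip(xs, ys))

def fit_theta(lam, xs, ys, grid):
    """Second stage of the two-stage approach: with the transformation
    parameter lam held fixed, estimate the regression parameter by
    minimising the transformed-scale SSE over a candidate grid."""
    return min(grid, key=lambda t: tbs_sse(t, lam, xs, ys))
```

    With noise-free data generated at theta = 0.5, the conditional fit recovers the true parameter exactly, regardless of the fixed lam.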

  13. Reference architecture and interoperability model for data mining and fusion in scientific cross-domain infrastructures

    NASA Astrophysics Data System (ADS)

    Haener, Rainer; Waechter, Joachim; Grellet, Sylvain; Robida, Francois

    2017-04-01

    Interoperability is the key factor in establishing scientific research environments and infrastructures, as well as in bringing together heterogeneous, geographically distributed risk management, monitoring, and early warning systems. Based on developments within the European Plate Observing System (EPOS), a reference architecture has been devised that comprises architectural blueprints and interoperability models regarding the specification of business processes and logic as well as the encoding of data, metadata, and semantics. The architectural blueprint is developed on the basis of the so-called service-oriented architecture (SOA) 2.0 paradigm, which combines the intelligence and proactiveness of event-driven architectures with service-oriented architectures. SOA 2.0 supports analysing (Data Mining) both static and real-time data in order to find correlations of disparate information that do not at first appear to be intuitively obvious: Analysed data (e.g., seismological monitoring) can be enhanced with relationships discovered by associating them (Data Fusion) with other data (e.g., creepmeter monitoring), with digital models of geological structures, or with the simulation of geological processes. The interoperability model describes the information, communication (conversations) and the interactions (choreographies) of all participants involved as well as the processes for registering, providing, and retrieving information. It is based on the principles of functional integration, implemented via dedicated services, communicating via service-oriented and message-driven infrastructures. The services provide their functionality via standardised interfaces: Instead of requesting data directly, users share data via services that are built upon specific adapters. This approach replaces the tight coupling at data level by a flexible dependency on loosely coupled services.
The main component of the interoperability model is the comprehensive semantic description of the information

  14. Conversion of Highly Complex Faulted Hydrostratigraphic Architectures into MODFLOW Grid for Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Tsai, F. T.

    2013-12-01

    The USGS MODFLOW is widely used for groundwater modeling. Because of using structured grid, all layers have to be continuous throughout the model domain. This makes it difficult to generate computational grid for complex hydrostratigraphic architectures including thin and discontinuous layers, interconnections of sand units, pinch-outs, and faults. In this study, we present a technique for automatically generating MODFLOW grid for complex aquifer systems of strongly sand-clay binary heterogeneity. To do so, an indicator geostatistical method is adopted to interpolate sand and clay distributions in a gridded two-dimensional plane along the structural dip for every one-foot vertical interval. A three-dimensional gridded binary geological architecture is reconstructed by assembling all two-dimensional planes. Then, the geological architecture is converted to MODFLOW computational grid by the procedures as follows. First, we determine bed boundary elevation of sand and clay units for each vertical column. Then, we determine the total number of bed boundaries for a vertical column by projecting the bed boundaries of its adjacent four vertical columns to the column. This step is of importance to preserve flow pathways, especially for narrow connections between sand units. Finally, we determine the number of MODFLOW layers and assign layer indices to bed boundaries. A MATLAB code was developed to implement the technique. The inputs for the code are bed boundary data from well logs, a structural dip, minimal layer thickness, and the number of layers. The outputs are MODFLOW grid of sand and clay indicators. The technique is able to generate grid that preserves fault features in the geological architecture. Moreover, the code is very efficient for regenerating MODFLOW grid with different grid resolutions. The technique was applied to MODFLOW grid generation for the fluvial aquifer system in Baton Rouge, Louisiana. The study area consists of the '1,200-foot' sand, the '1
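
    The neighbour-projection step, which the abstract credits with preserving narrow flow pathways between sand units, reduces to merging each column's bed boundaries with those of its four adjacent columns. A minimal sketch with an invented data layout (not the authors' MATLAB code):

```python
def column_boundaries(grid, i, j):
    """Bed boundaries for column (i, j): the column's own sand/clay
    interface elevations plus those projected from its four adjacent
    columns, de-duplicated and sorted top-down. Projecting neighbour
    boundaries is what keeps thin connections between sand units from
    being merged away when MODFLOW layers are assigned.

    grid: dict mapping (i, j) -> list of interface elevations (ft)."""
    merged = set(grid.get((i, j), []))
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        merged |= set(grid.get((i + di, j + dj), []))
    return sorted(merged, reverse=True)
```

    Layer indices would then be assigned to the merged boundary list, so adjacent columns agree on where every layer starts and stops.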

  15. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions

    PubMed Central

    Potter, Gail E.; Smieszek, Timo; Sailer, Kerstin

    2015-01-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0–5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models. PMID:26634122

  16. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions.

    PubMed

    Potter, Gail E; Smieszek, Timo; Sailer, Kerstin

    2015-09-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0-5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models.

  17. XSTREAM: A practical algorithm for identification and architecture modeling of tandem repeats in protein sequences

    PubMed Central

    Newman, Aaron M; Cooper, James B

    2007-01-01

    Background Biological sequence repeats arranged in tandem patterns are widespread in DNA and proteins. While many software tools have been designed to detect DNA tandem repeats (TRs), useful algorithms for identifying protein TRs with varied levels of degeneracy are still needed. Results To address limitations of current repeat identification methods, and to provide an efficient and flexible algorithm for the detection and analysis of TRs in protein sequences, we designed and implemented a new computational method called XSTREAM. Running time tests confirm the practicality of XSTREAM for analyses of multi-genome datasets. Each of the key capabilities of XSTREAM (e.g., merging, nesting, long-period detection, and TR architecture modeling) are demonstrated using anecdotal examples, and the utility of XSTREAM for identifying TR proteins was validated using data from a recently published paper. Conclusion We show that XSTREAM is a practical and valuable tool for TR detection in protein and nucleotide sequences at the multi-genome scale, and an effective tool for modeling TR domains with diverse architectures and varied levels of degeneracy. Because of these useful features, XSTREAM has significant potential for the discovery of naturally-evolved modular proteins with applications for engineering novel biostructural and biomimetic materials, and identifying new vaccine and diagnostic targets. PMID:17931424
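
    For contrast with XSTREAM's seed-extension algorithm, the exact (non-degenerate) case of tandem repeat detection can be written as a naive scan over candidate period lengths; this sketch handles no mismatches, merging, or nesting, and will also report multiples of a base period.

```python
def exact_tandem_repeats(seq, min_period=1, max_period=None, min_copies=2):
    """Naive exact tandem-repeat scan.

    Reports (start, period, copies) for maximal stretches where
    seq[i] == seq[i + p], i.e. perfect periodicity with period p.
    """
    n = len(seq)
    max_period = max_period or n // 2
    hits = []
    for p in range(min_period, max_period + 1):
        i = 0
        while i + p < n:
            j = i
            while j + p < n and seq[j] == seq[j + p]:
                j += 1
            run = j - i + p              # length of the periodic stretch
            if j > i and run >= min_copies * p:
                hits.append((i, p, run // p))
                i = j + p                # skip past the reported repeat
            else:
                i += 1
    return hits
```

    The same scan works on protein or nucleotide strings; degeneracy tolerance is what separates a practical tool like XSTREAM from this baseline.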

  18. Strategies for memory-based decision making: Modeling behavioral and neural signatures within a cognitive architecture.

    PubMed

    Fechner, Hanna B; Pachur, Thorsten; Schooler, Lael J; Mehlhorn, Katja; Battal, Ceren; Volz, Kirsten G; Borst, Jelmer P

    2016-12-01

    How do people use memories to make inferences about real-world objects? We tested three strategies based on predicted patterns of response times and blood-oxygen-level-dependent (BOLD) responses: one strategy that relies solely on recognition memory, a second that retrieves additional knowledge, and a third, lexicographic (i.e., sequential) strategy, that considers knowledge conditionally on the evidence obtained from recognition memory. We implemented the strategies as computational models within the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture, which allowed us to derive behavioral and neural predictions that we then compared to the results of a functional magnetic resonance imaging (fMRI) study in which participants inferred which of two cities is larger. Overall, versions of the lexicographic strategy, according to which knowledge about many but not all alternatives is searched, provided the best account of the joint patterns of response times and BOLD responses. These results provide insights into the interplay between recognition and additional knowledge in memory, hinting at an adaptive use of these two sources of information in decision making. The results highlight the usefulness of implementing models of decision making within a cognitive architecture to derive predictions on the behavioral and neural level. Copyright © 2016 Elsevier B.V. All rights reserved.
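
    The three strategy classes differ only in *when* knowledge is consulted. Stripped of the ACT-R machinery, the lexicographic variant can be sketched as a plain decision function; the city names and sizes below are illustrative, not the study's materials.

```python
import random

def lexicographic_inference(a, b, recognized, knowledge, rng=None):
    """Infer which of two objects is larger, consulting recognition first
    and retrieving further knowledge only when recognition does not
    discriminate (i.e., both objects are recognized)."""
    rng = rng or random.Random(0)
    ra, rb = a in recognized, b in recognized
    if ra != rb:                      # recognition alone decides
        return a if ra else b
    if ra and rb:                     # both recognized: retrieve knowledge
        ka, kb = knowledge.get(a), knowledge.get(b)
        if ka is not None and kb is not None and ka != kb:
            return a if ka > kb else b
    return rng.choice([a, b])         # nothing discriminates: guess
```

    A recognition-only strategy would stop after the first test; a knowledge-first strategy would always retrieve, which is what the diverging response-time and BOLD predictions hinge on.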

  19. Transforming System Engineering through Model-Centric Engineering

    DTIC Science & Technology

    2015-01-31

    story that is being applied and evolved on Jupiter Europa Orbiter (JEO) project [75], and we summarize some aspects of it here, because it goes beyond...JEO Jupiter Europa Orbiter project at NASA/JPL JSF Joint Strike Fighter JPL Jet Propulsion Laboratory of NASA Linux An operating system created by...Adaptation of Flight-Critical Systems, Digital Avionics Systems Conference, 2009. [75] Rasumussen, R., R. Shishko, Jupiter Europa Orbiter Architecture

  20. Notch Signaling and Schwann Cell Transformation: Development of a Model System and Application to Human MPNSTs

    DTIC Science & Technology

    2008-09-01

    TITLE: Notch Signaling and Schwann Cell Transformation: Development of a Model System and Application to Human MPNSTs PRINCIPAL INVESTIGATOR...Schwann cell transformation: Development of a model system and 5a. CONTRACT NUMBER application to human MPNSTs . 5b. GRANT NUMBER W81XWH-04-1-0209...of neurofibromas to MPNSTs in patients with NF1. Our previous work has shown that constitutive expression of Notch can transform rat Schwann cells

  1. A Thermo-Plastic-Martensite Transformation Coupled Constitutive Model for Hot Stamping

    NASA Astrophysics Data System (ADS)

    Bin, Zhu; WeiKang, Liang; Zhongxiang, Gui; Kai, Wang; Chao, Wang; Yilin, Wang; Yisheng, Zhang

    2017-03-01

    In this study, a thermo-plastic-martensite transformation coupled model based on the von Mises yield criterion and the associated plastic flow rule is developed to further improve the accuracy of numerical simulation during hot stamping. The constitutive model is implemented into the finite element program ABAQUS using user subroutine VUMAT. The martensite transformation, transformation-induced plasticity and volume expansion during the austenite-to-martensite transformation are included in the constitutive model. For this purpose, isothermal tensile tests are performed to obtain the flow stress, and non-isothermal tensile tests were carried out to validate the constitutive model. The non-isothermal tensile numerical simulation demonstrates that the thermo-plastic-martensite transformation coupled constitutive model provides a reasonable prediction of force-displacement curves upon loading, which is expected to be applied for modeling and simulation of hot stamping.
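
    The martensite-transformation kinetics that such a constitutive model must track are commonly described by the Koistinen-Marburger relation, f = 1 - exp(-k (Ms - T)). The Ms and k values below are illustrative figures for a 22MnB5-like press-hardening steel, not parameters taken from the paper, and the paper's coupled model adds transformation-induced plasticity and volume expansion on top of kinetics of this kind.

```python
import math

def martensite_fraction(T, Ms=400.0, k=0.011):
    """Koistinen-Marburger martensite volume fraction formed on quenching
    to temperature T (deg C) below the martensite-start temperature Ms:
    f = 1 - exp(-k * (Ms - T)); zero above Ms."""
    if T >= Ms:
        return 0.0
    return 1.0 - math.exp(-k * (Ms - T))
```

    The fraction rises monotonically as the quench proceeds, approaching full transformation near room temperature, which is what drives the volume-expansion term in the stress update.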

  3. OmniPHR: A distributed architecture model to integrate personal health records.

    PubMed

    Roehrs, Alex; da Costa, Cristiano André; da Rosa Righi, Rodrigo

    2017-07-01

    The advances in Information and Communications Technology (ICT) brought many benefits to the healthcare area, especially the digital storage of patients' health records. However, it is still a challenge to have a unified viewpoint of a patient's health history, because health data are typically scattered among different health organizations. Furthermore, there are several standards for these records, some of them open and others proprietary. Usually health records are stored in databases within health organizations and rarely have external access. This situation applies mainly to cases where patients' data are maintained by healthcare providers, known as EHRs (Electronic Health Records). In the case of PHRs (Personal Health Records), in which patients by definition can manage their health records, they usually have no control over their data stored in healthcare providers' databases. Thereby, we envision two main challenges in the PHR context: first, how patients could have a unified view of their scattered health records, and second, how healthcare providers can access up-to-date data regarding their patients, even though changes occurred elsewhere. To address these issues, this work proposes OmniPHR, a distributed model to integrate PHRs for use by patients and healthcare providers. The scientific contribution is an architecture model to support a distributed PHR, where patients can maintain their health history in a unified viewpoint, from any device, anywhere, and healthcare providers can have their patients' data interconnected among health organizations. The evaluation demonstrates the feasibility of maintaining distributed health records in an architecture model that promotes a unified view of the PHR with elasticity and scalability. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Dynamic root growth and architecture responses to limiting nutrient availability: linking physiological models and experimentation.

    PubMed

    Postma, Johannes A; Schurr, Ulrich; Fiorani, Fabio

    2014-01-01

    In recent years the study of root phenotypic plasticity in response to sub-optimal environmental factors and the genetic control of these responses have received renewed attention. As a path to increased productivity, in particular for low fertility soils, several applied research projects worldwide target the improvement of crop root traits both in plant breeding and biotechnology contexts. To assist these tasks and address the challenge of optimizing root growth and architecture for enhanced mineral resource use, the development of realistic simulation models is of great importance. We review this research field from a modeling perspective, focusing particularly on nutrient acquisition strategies for crop production on low-nitrogen and low-phosphorus soils. Soil heterogeneity and the dynamics of nutrient availability in the soil pose a challenging environment in which plants have to forage efficiently for nutrients in order to maintain their internal nutrient homeostasis throughout their life cycle. Mathematical models assist in understanding plant growth strategies and associated root phenes that have potential to be tested and introduced in physiological breeding programs. At the same time, we stress that it is necessary to carefully consider model assumptions and development from a whole plant-resource allocation perspective and to introduce or refine modules simulating explicitly root growth and architecture dynamics through ontogeny with reference to key factors that constrain root growth. In this view it is important to understand negative feedbacks such as plant-plant competition. We conclude by briefly touching on available and developing technologies for quantitative root phenotyping from lab to field, from quantification of partial root profiles in the field to 3D reconstruction of whole root systems. Finally, we discuss how these approaches can and should be tightly linked to modeling to explore the root phenome.

  5. Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures

    NASA Astrophysics Data System (ADS)

    Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.

    2014-12-01

    In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes, which will increase the size of current data archives by a factor of 10-100 times. For example, the next Climate Model Inter-comparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 Peta-bytes, while the upcoming next generation of NASA decadal Earth Observing instruments are expected to collect tens of Giga-bytes/day. In radio-astronomy, the Square Kilometre Array (SKA) will collect data in the Exa-bytes/day range, of which (after reduction and processing) around 1.5 Exa-bytes/year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that will allow system architects to model their expected data processing workflow, and determine the network, computational and storage resources needed to prepare their data for scientific analysis. In order to fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrary complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources), and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for 4 different use cases from distinct science disciplines: climate science, astronomy, hydrology and a generic cloud computing use case. This talk will present
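
    As a rough illustration of the kind of tradeoff estimation DAWN performs, the sketch below totals transfer and compute time for a linear chain of workflow tasks. All function names, parameter values, and the simple additive cost model are our own assumptions for illustration, not DAWN's actual API or estimators (which also propagate computation uncertainty).

```python
# Minimal sketch of a workflow-cost estimator; names and the linear cost
# model are illustrative assumptions, not DAWN's implementation.

def task_time(data_bytes, flops, bandwidth_bps, flops_per_sec):
    """One task's (transfer, compute) times: move the data, then process it."""
    transfer = data_bytes * 8 / bandwidth_bps
    compute = flops / flops_per_sec
    return transfer, compute

def workflow_time(tasks):
    """Elapsed time of a linear, non-overlapping chain of tasks, in seconds."""
    return sum(sum(task_time(*t)) for t in tasks)

# Example: 1 TB ingested over a 10 Gb/s link and reduced at 1 TFLOP/s,
# followed by a smaller subset-and-analyze stage.
tasks = [
    (1e12, 5e14, 10e9, 1e12),
    (1e11, 1e14, 10e9, 1e12),
]
total = workflow_time(tasks)  # 1480.0 seconds for this configuration
```

    Comparing alternative workflows then reduces to evaluating this estimator over candidate task orderings and resource assignments.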

  6. Sub-grid parameterisation of fluvio-deltaic processes and architecture in a basin-scale stratigraphic model

    NASA Astrophysics Data System (ADS)

    Dalman, Rory A. F.; Weltje, Gert Jan

    2008-10-01

    We present a parameterisation of fluvio-deltaic drainage network evolution and alluvial architecture in a basin-scale 2-DH model. The model setup is capable of producing convergent and divergent channel networks. Major elements are the alluvial-ridge aggradation and the coupled overbank deposition, the dimension and style of the channel belt and the sub-grid stratigraphic expression. Avulsions are allowed to develop out of randomly instigated crevasses. Channel stability is modelled one dimensionally by calculating the flow and sediment transport at prospective avulsion nodes. The ultimate fate of crevasses (failed avulsion, successful avulsion, stable bifurcation) depends on the ratio of cross-valley and in-channel gradients in the local neighbourhood of the grid cell under consideration and on the amount and distribution of the suspended sediment load in the water column. The sub-grid parameterisation yields implicit knowledge of the alluvial architecture, which may be analysed stochastically. Stochastic realisations of the alluvial architecture allow us to investigate the relationship between basin-fill architecture and small-scale alluvial architecture, which is likely to improve geological reservoir modelling of these notoriously complex deposits. Modelling results under conditions of time-invariant forcing indicate significant quasi-cyclic autogenic behaviour of the fluvio-deltaic system. Changes in the avulsion frequency are correlated with the number and length of distributary channels, which are in turn related to alternating phases of progradational and aggradational delta development. The resulting parasequences may be difficult to distinguish from their allogenically induced counterparts.
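
    The gradient-ratio criterion described above can be sketched as a simple classifier of crevasse fate. The threshold values and function names below are invented for illustration, and the suspended-sediment dependence that the model also uses is omitted.

```python
# Illustrative sketch only: thresholds are hypothetical, and the model's
# dependence on suspended-sediment load is not represented here.

def crevasse_fate(cross_valley_slope, in_channel_slope,
                  heal_below=1.0, capture_above=3.0):
    """Classify a prospective avulsion node by its slope-advantage ratio."""
    ratio = cross_valley_slope / in_channel_slope
    if ratio < heal_below:
        return "failed avulsion"       # crevasse silts up; the old channel persists
    if ratio > capture_above:
        return "successful avulsion"   # the new path captures the discharge
    return "stable bifurcation"        # flow remains divided between both channels
```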

  7. Comparison of BrainTool to other UML modeling and model transformation tools

    NASA Astrophysics Data System (ADS)

    Nikiforova, Oksana; Gusarovs, Konstantins

    2017-07-01

    In the last 30 years, numerous model-driven software generation systems have been offered to address problems of development productivity and resulting software quality. CASE tools developed to date are advertised as having "complete code-generation capabilities", and nowadays the Object Management Group (OMG) makes similar claims regarding Unified Modeling Language (UML) models at different levels of abstraction, asserting that software development automation using CASE tools enables a significant level of automation. Today's CASE tools usually offer a combination of several features: traditional ones provide a model editor and a model repository, while the most advanced ones add a code generator (possibly driven by a scripting or domain-specific language (DSL)), a transformation tool to produce new artifacts from manually created ones, and a transformation definition editor for defining new transformations. The present paper contains the results of a comparison of CASE tools (mainly UML editors) against the level of automation they offer.

  8. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  9. On the injectivity of the generalized Radon transform arising in a model of mathematical economics

    NASA Astrophysics Data System (ADS)

    Agaltsov, A. D.

    2016-11-01

    In the present article we consider the uniqueness problem for the generalized Radon transform arising in a mathematical model of production. We prove uniqueness theorems for this transform and for the profit function in the corresponding model of production. Our approach is based on the multidimensional Wiener’s approximation theorems.

  10. The modeling of the chemical composition and structural transformations in electrode pitches

    SciTech Connect

    Turenko, F.P.; Bochkareva, N.N.

    1984-01-01

    A new method is proposed for studying the composition and physicochemical transformations in pitches: chemical modeling. Correlation relationships have been found between thermochemical properties and composition. A chemical classification of electrode pitches based on statistical models is given. The structural transformations taking place during the thermostating of medium-temperature pitches have been shown.

  11. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  12. Transforming Boolean models to continuous models: methodology and application to T-cell receptor signaling

    PubMed Central

    Wittmann, Dominik M; Krumsiek, Jan; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Klamt, Steffen; Theis, Fabian J

    2009-01-01

    Background The understanding of regulatory and signaling networks has long been a core objective in Systems Biology. Knowledge about these networks is mainly of qualitative nature, which allows the construction of Boolean models, where the state of a component is either 'off' or 'on'. While often able to capture the essential behavior of a network, these models can never reproduce detailed time courses of concentration levels. Nowadays however, experiments yield more and more quantitative data. An obvious question therefore is how qualitative models can be used to explain and predict the outcome of these experiments. Results In this contribution we present a canonical way of transforming Boolean into continuous models, where the use of multivariate polynomial interpolation allows transformation of logic operations into a system of ordinary differential equations (ODE). The method is standardized and can readily be applied to large networks. Other, more limited approaches to this task are briefly reviewed and compared. Moreover, we discuss and generalize existing theoretical results on the relation between Boolean and continuous models. As a test case a logical model is transformed into an extensive continuous ODE model describing the activation of T-cells. We discuss how parameters for this model can be determined such that quantitative experimental results are explained and predicted, including time-courses for multiple ligand concentrations and binding affinities of different ligands. This shows that from the continuous model we may obtain biological insights not evident from the discrete one. Conclusion The presented approach will facilitate the interaction between modeling and experiments. Moreover, it provides a straightforward way to apply quantitative analysis methods to qualitatively described systems. PMID:19785753
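
    The core transformation step can be sketched in a few lines: multilinear interpolation extends a Boolean update rule B from the vertices of the unit cube to its interior, and the continuous system then evolves as dx/dt = B̄(x) − x. The function names below are ours; the interpolation formula follows the multivariate polynomial interpolation the authors describe.

```python
# Sketch of multilinear interpolation of a Boolean update rule.
# Function names are illustrative, not taken from the paper's software.

from itertools import product

def boolecube(B, n):
    """Multilinear interpolation of a Boolean function B: {0,1}^n -> {0,1}."""
    def Bbar(*x):
        total = 0.0
        for v in product((0, 1), repeat=n):
            weight = 1.0
            for xi, vi in zip(x, v):
                # each vertex contributes proportionally to its "closeness" to x
                weight *= xi if vi else (1.0 - xi)
            total += B(*v) * weight
        return total
    return Bbar

AND = boolecube(lambda a, b: a and b, 2)
OR  = boolecube(lambda a, b: a or b, 2)

# Agrees with the logic at the vertices: AND(1,1)=1.0, AND(1,0)=0.0,
# and is smooth in between: AND(0.5,0.5)=0.25, OR(0.5,0.5)=0.75.
```

    An ODE right-hand side such as dx/dt = AND(x1, x2) − x then turns the qualitative rule into a continuous model whose parameters can be fitted to quantitative data.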

  13. Quantum gates and architecture for the quantum simulation of the Fermi-Hubbard model

    NASA Astrophysics Data System (ADS)

    Dallaire-Demers, Pierre-Luc; Wilhelm, Frank K.

    2016-12-01

    Quantum computers are the ideal platform for quantum simulations. Given enough coherent operations and qubits, such machines can be leveraged to simulate strongly correlated materials, where intricate quantum effects give rise to counterintuitive macroscopic phenomena such as high-temperature superconductivity. In this paper, we provide a gate decomposition and an architecture for a quantum simulator used to simulate the Fermi-Hubbard model in a hybrid variational quantum-classical algorithm. We propose a simple planar implementation-independent layout of qubits that can also be used to simulate more general fermionic systems. By working through a concrete application, we show the gate decomposition used to simulate the Hamiltonian of a cluster of the Fermi-Hubbard model. We briefly analyze the Trotter-Suzuki errors and estimate the scaling properties of the algorithm for more complex applications.
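
    The Trotter-Suzuki error analysis mentioned above can be illustrated with a toy single-qubit example, using two non-commuting Pauli terms as stand-ins for the hopping and interaction parts of the Hamiltonian. This is an illustrative simplification of the splitting idea, not the paper's gate decomposition.

```python
# Toy first-order Trotter-Suzuki splitting: for non-commuting terms
# H = A + B, (e^{-iAt/n} e^{-iBt/n})^n approaches e^{-iHt} as n grows.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_pauli(theta, P):
    """exp(-i*theta*P) for any P with P @ P = I (true for Pauli matrices)."""
    return np.cos(theta) * I2 - 1j * np.sin(theta) * P

def exact_U(t):
    """exp(-i t (X+Z)), computed analytically since (X+Z)^2 = 2 I."""
    s = np.sqrt(2.0)
    return np.cos(s * t) * I2 - 1j * np.sin(s * t) / s * (X + Z)

def trotter_U(t, n):
    """n steps of the first-order splitting exp(-iXt/n) exp(-iZt/n)."""
    step = expm_pauli(t / n, X) @ expm_pauli(t / n, Z)
    return np.linalg.matrix_power(step, n)

def error(t, n):
    """Spectral-norm deviation of the Trotterized evolution from the exact one."""
    return np.linalg.norm(exact_U(t) - trotter_U(t, n), ord=2)

# First-order splitting: the error shrinks roughly like 1/n with the step count.
e4, e8 = error(1.0, 4), error(1.0, 8)
```

    The same scaling argument, applied term by term, underlies the resource estimates for the full Fermi-Hubbard simulation.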

  14. Robotic deposition of model hydroxyapatite scaffolds with multiple architectures and multiscale porosity for bone tissue engineering.

    PubMed

    Dellinger, Jennifer G; Cesarano, Joseph; Jamison, Russell D

    2007-08-01

    Model hydroxyapatite (HA) scaffolds with porosities spanning multiple length scales were fabricated by robocasting, a solid freeform fabrication technique based on the robotic deposition of colloidal pastes. Scaffolds of various architectures including periodic, radial, and superlattice structures were constructed. Macropores (100-600 μm) were designed by controlling the arrangement and spacing between rods of HA. Micropores (1-30 μm) and submicron pores (less than 1 μm) were produced within the rods by including polymer microsphere porogens in the HA pastes and by controlling the sintering of the scaffolds. These model scaffolds may be used to systematically study the effects of scaffold porosity on bone ingrowth processes both in vitro and in vivo.

  15. Analysis of optical near-field energy transfer by stochastic model unifying architectural dependencies

    NASA Astrophysics Data System (ADS)

    Naruse, Makoto; Akahane, Kouichi; Yamamoto, Naokatsu; Holmström, Petter; Thylén, Lars; Huant, Serge; Ohtsu, Motoichi

    2014-04-01

    We theoretically and experimentally demonstrate energy transfer mediated by optical near-field interactions in a multi-layer InAs quantum dot (QD) structure composed of a single layer of larger dots and N layers of smaller ones. We construct a stochastic model in which optical near-field interactions that follow a Yukawa potential, QD size fluctuations, and temperature-dependent energy level broadening are unified, enabling us to examine device-architecture-dependent energy transfer efficiencies. The model results are consistent with the experiments. This study provides an insight into optical energy transfer involving inherent disorders in materials and paves the way to systematic design principles of nanophotonic devices that will allow optimized performance and the realization of designated functions.
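
    The Yukawa-type interaction at the heart of the stochastic model can be sketched in a few lines. The amplitude and interaction length below are arbitrary placeholders, and the QD size-fluctuation and temperature-dependent level-broadening terms of the full model are omitted.

```python
# Screened near-field coupling sketched as a Yukawa potential; parameter
# values are placeholders, not fitted to the InAs QD experiments.

import math

def yukawa(r, A=1.0, a=1.0):
    """Coupling strength modeled as a Yukawa potential A * exp(-r/a) / r."""
    return A * math.exp(-r / a) / r

# The screened interaction decays faster than an unscreened 1/r tail:
# doubling the separation leaves exp(-1)/2 (about 18%) of the coupling,
# rather than the 50% a pure 1/r law would give.
ratio = yukawa(2.0) / yukawa(1.0)
```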

  16. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    SciTech Connect

    Schraad, Mark William; Luscher, Darby Jon

    2016-09-06

    Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies already are being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to these new manufacturing methods, because of potential advantages, such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA’s national security missions, however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material’s structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. And not just models, but advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument for the need for Exascale computing architectures can be made, if a legitimate predictive capability is to be developed.

  17. A Four-Phase Model of the Evolution of Clinical Decision Support Architectures

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2008-01-01

    Background A large body of evidence over many years suggests that clinical decision support systems can be helpful in improving both clinical outcomes and adherence to evidence-based guidelines. However, to this day, clinical decision support systems are not widely used outside of a small number of sites. One reason why decision support systems are not widely used is the relative difficulty of integrating such systems into clinical workflows and computer systems. Purpose To review and synthesize the history of clinical decision support systems, and to propose a model of various architectures for integrating clinical decision support systems with clinical systems. Methods The authors conducted an extensive review of the clinical decision support literature since 1959, sequenced the systems and developed a model. Results The model developed consists of four phases: standalone decision support systems, decision support integrated into clinical systems, standards for sharing clinical decision support content and service models for decision support. These four phases have not heretofore been identified, but they track remarkably well with the chronological history of clinical decision support, and show evolving and increasingly sophisticated attempts to ease integrating decision support systems into clinical workflows and other clinical systems. Conclusions Each of the four evolutionary approaches to decision support architecture has unique advantages and disadvantages. A key lesson was that there were common limitations that almost all the approaches faced, and no single approach has been able to entirely surmount: 1) fixed knowledge representation systems inherently circumscribe the type of knowledge that can be represented in them, 2) there are serious terminological issues, 3) patient data may be spread across several sources with no single source having a complete view of the patient, and 4) major difficulties exist in transferring successful interventions from one

  18. Process-based modelling of tidally-influenced estuarine morphodynamics and bar architecture

    NASA Astrophysics Data System (ADS)

    van de Lageweg, Wietse; Feldman, Howard

    2017-04-01

    Estuaries represent one of the most dynamic environments on Earth with continuously changing channels and shoals of sand and mud that are driven by ebb and flood currents that interact with chemical and biological processes. These transition zones between terrestrial and marine environments generally have complex bar depositional patterns due to the dominance of river processes in proximal areas transitioning to the dominance of oceanic processes in distal areas. Although modern estuaries have been studied for many years, it is largely unknown in which manner basin geometry and tidal range impact bar formation, and how this would affect the subsurface architecture. This study applies the morphodynamic model Delft3D to test models of estuarine bar morphology and stratigraphy along the fluvial-tidal transition. Observations from the modern Columbia River estuary and idealized estuaries are combined to systematically evaluate estuarine hydrodynamics, bar formation and bar preservation. A unique aspect of the methodology is that morphological as well as subsurface data are collected, thus enabling the estuarine bar morphodynamics to be related explicitly to the associated depositional product. Model results highlight the complex and dynamic flow patterns in the Columbia River estuary, which are consistent with observations from local tide gauges. By systematically varying tidal range and basin width, it is shown that estuarine bar dimensions are primarily affected by estuary width, and that tidal range has a secondary effect. An increase in estuary width results in a higher bar braiding index, a larger number of bars as well as longer bars, wider bars and thicker bar deposits. Synthetic architectures that can be compared directly to the sedimentary record show a high degree of fragmentation within estuarine bars. Statistical distributions summarising the internal structure of estuarine bars provide much-needed quantification of the preservation of estuarine bars and

  19. Continuous distribution model for the investigation of complex molecular architectures near interfaces with scattering techniques

    SciTech Connect

    Shekhar, Prabhanshu; Nanda, Hirsh; Heinrich, Frank; Loesche, Mathias

    2011-11-15

    Biological membranes are composed of a thermally disordered lipid matrix and therefore require non-crystallographic scattering approaches for structural characterization with x-rays or neutrons. Here we develop a continuous distribution (CD) model to refine neutron or x-ray reflectivity data from complex architectures of organic molecules. The new model is a flexible implementation of the composition-space refinement of interfacial structures to constrain the resulting scattering length density profiles. We show this model increases the precision with which molecular components may be localized within a sample, with a minimal use of free model parameters. We validate the new model by parameterizing all-atom molecular dynamics (MD) simulations of bilayers and by evaluating the neutron reflectivity of a phospholipid bilayer physisorbed to a solid support. The determination of the structural arrangement of a sparsely-tethered bilayer lipid membrane (stBLM) comprised of a multi-component phospholipid bilayer anchored to a gold substrate by a thiolated oligo(ethylene oxide) linker is also demonstrated. From the model we extract the bilayer composition and density of tether points, information which was previously inaccessible for stBLM systems. The new modeling strategy has been implemented into the ga_refl reflectivity data evaluation suite, available through the National Institute of Standards and Technology (NIST) Center for Neutron Research (NCNR).

  20. Key Technology Research on Open Architecture for The Sharing of Heterogeneous Geographic Analysis Models

    NASA Astrophysics Data System (ADS)

    Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.

    2013-10-01

    In recent years, the development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture tends to provide a centralized solution to end users, while all the required resources are often offered by large enterprises or special agencies; it is thus a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can conveniently package and deploy their models into the cloud, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies (a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services) are discussed in detail, and related experiments are conducted for further verification.

  2. EASEE: an open architecture approach for modeling battlespace signal and sensor phenomenology

    NASA Astrophysics Data System (ADS)

    Waldrop, Lauren E.; Wilson, D. Keith; Ekegren, Michael T.; Borden, Christian T.

    2017-04-01

    Open architecture in the context of defense applications encourages collaboration across government agencies and academia. This paper describes a success story in the implementation of an open architecture framework that fosters transparency and modularity in the context of Environmental Awareness for Sensor and Emitter Employment (EASEE), a complex physics-based software package for modeling the effects of terrain and atmospheric conditions on signal propagation and sensor performance. Among the highlighted features in this paper are: (1) a code refactorization to separate sensitive parts of EASEE, thus allowing collaborators the opportunity to view and interact with non-sensitive parts of the EASEE framework with the end goal of supporting collaborative innovation, (2) a data exchange and validation effort to enable the dynamic addition of signatures within EASEE, thus supporting a modular notion that components can be easily added to or removed from the software without requiring recompilation by developers, and (3) a flexible and extensible XML interface, which aids in decoupling graphical user interfaces from EASEE's calculation engine, and thus encourages adaptability to many different defense applications. In addition to the outlined points above, this paper also addresses EASEE's ability to interface with proprietary systems such as ArcGIS. A specific use case regarding the implementation of an ArcGIS toolbar that leverages EASEE's XML interface and enables users to set up an EASEE-compliant configuration for probability of detection or optimal sensor placement calculations in various modalities is discussed as well.

  3. PREDICTING SUBSURFACE CONTAMINANT TRANSPORT AND TRANSFORMATION: CONSIDERATIONS FOR MODEL SELECTION AND FIELD VALIDATION

    EPA Science Inventory

    Predicting subsurface contaminant transport and transformation requires mathematical models based on a variety of physical, chemical, and biological processes. The mathematical model is an attempt to quantitatively describe observed processes in order to permit systematic forecas...

  4. A Vision Based Top-View Transformation Model for a Vehicle Parking Assistant

    PubMed Central

    Lin, Chien-Chuan; Wang, Ming-Shi

    2012-01-01

    This paper proposes the Top-View Transformation Model for image coordinate transformation, which involves transforming a perspective projection image into its corresponding bird's eye vision. A fitting parameters searching algorithm estimates the parameters that are used to transform the coordinates from the source image. Using this approach, it is not necessary to provide any interior and exterior orientation parameters of the camera. The designed car parking assistant system can be installed at the rear end of the car, providing the driver with a clearer image of the area behind the car. The processing time can be reduced by storing and using the transformation matrix estimated from the first image frame for a sequence of video images. The transformation matrix can be stored as the Matrix Mapping Table, and loaded into the embedded platform to perform the transformation. Experimental results show that the proposed approaches can provide a clearer and more accurate bird's eye view to the vehicle driver. PMID:22666038
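
    The transformation the paper describes is, in essence, a planar projective transform (homography), and the precomputed Matrix Mapping Table amounts to applying it once per pixel so that later video frames only need a table lookup. A minimal sketch, with an illustrative matrix that is not fitted to any real camera:

```python
# Sketch of a top-view (bird's eye) coordinate transform via a 3x3 homography.
# The matrix values are illustrative placeholders, not the paper's parameters.

import numpy as np

def warp_point(H, u, v):
    """Map source pixel (u, v) through homography H in homogeneous coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

def mapping_table(H, width, height):
    """Precompute top-view coordinates for every source pixel once, so each
    subsequent frame needs only a lookup instead of a projection."""
    return {(u, v): warp_point(H, u, v)
            for v in range(height) for u in range(width)}

H = np.array([[1.0, 0.2,   0.0],
              [0.0, 1.5,   0.0],
              [0.0, 0.001, 1.0]])  # hypothetical ground-plane homography
table = mapping_table(H, 4, 3)
```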

  5. Architectural Methodology Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kind of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and the speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment. It does not address an analog voice architecture.

  6. Equivalent circuit of radio frequency-plasma with the transformer model.

    PubMed

    Nishida, K; Mochizuki, S; Ohta, M; Yasumoto, M; Lettry, J; Mattei, S; Hatayama, A

    2014-02-01

    The LINAC4 H(-) source is a radio frequency (RF) driven source. In the RF system, the load impedance, which includes the H(-) source, must be matched to that of the final amplifier. We model the RF plasma inside the H(-) source as circuit elements using a transformer model, so that the characteristics of the load impedance become calculable. It has been shown that modeling based on the transformer model works well in predicting the resistance and inductance of the plasma.
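
    A transformer model of this kind treats the plasma as a lossy single-turn secondary whose resistance and inductance are reflected into the drive coil through the mutual inductance. A minimal sketch with illustrative component values (not the LINAC4 source's actual parameters):

```python
# Standard transformer model of an inductively coupled plasma load.
# All component values below are hypothetical placeholders.

import math

def reflected_impedance(omega, Rc, Lc, M, Rp, Lp):
    """Impedance seen at the RF coil: the primary's own R and L plus the
    plasma 'secondary' (Rp, Lp) reflected through the mutual inductance M."""
    return Rc + 1j * omega * Lc + (omega * M) ** 2 / (Rp + 1j * omega * Lp)

omega = 2 * math.pi * 2e6  # assumed 2 MHz drive frequency
Z = reflected_impedance(omega, Rc=0.5, Lc=5e-6, M=1e-6, Rp=2.0, Lp=1e-6)
# Z.real is the effective load resistance the matching network must handle;
# the plasma contribution raises it above the bare coil resistance Rc.
```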

  7. PDS4: Meeting Big Data Challenges Via a Model-Driven Planetary Science Data Architecture and System

    NASA Astrophysics Data System (ADS)

    Law, E.; Hughes, J. S.; Crichton, D. J.; Hardman, S. H.; Joyner, R.; Ramirez, P.

    2014-12-01

    Big science data management entails cataloging, processing, distribution, multiple ways of analyzing and interpreting the data, long-term preservation, and international cooperation on massive amounts of scientific data. PDS4, the next generation of the Planetary Data System (PDS), uses an information model-driven architectural approach coupled with modern information technologies and standards to meet these challenges of big science data management. PDS4 is an operational example of the use of an explicit data system architecture and an ontology-based information model to drive the development, operations, and evolution of a scalable data system along the entire science data lifecycle, from ground systems to the archives. This overview of PDS4 will include a description of its model-driven approach and its overall systems architecture. It will illustrate how the system is being used to help meet the expectations of modern scientists for interoperable data systems and correlatable data in the Big Data era.

  8. Agent-based modeling supporting the migration of registry systems to grid based architectures.

    PubMed

    Cryer, Martin E; Frey, Lewis

    2009-03-01

As the core technologies of the existing NCI SEER platform age and their operating costs increase, essential resources in the fight against cancer such as these will eventually have to be migrated to Grid-based systems. To model this migration, a simulation is proposed based upon agent modeling technology. This modeling technique allows for simulation of the complex and distributed services provided by a large-scale Grid computing platform such as the caBIG™ project's caGRID. To investigate such a migration to a Grid-based platform technology, this paper proposes using agent-based modeling simulations to predict the performance of current and Grid configurations of the NCI SEER system integrated with the existing translational opportunities afforded by caGRID. The model illustrates how the use of Grid technology can potentially improve system response time as systems under test are scaled. In modeling SEER nodes accessing multiple registry silos, we show that the performance of SEER applications re-implemented in a Grid-native manner exhibits a nearly constant user response time with increasing numbers of distributed registry silos, whereas the current application architecture exhibits a linear increase in response time for increasing numbers of silos.
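The scaling contrast in the abstract (linear vs. nearly constant response time) can be caricatured with two response-time functions; all timing constants here are invented, and the real study used a full agent-based simulation rather than closed-form expressions.

```python
def response_time_current(n_silos, t_query=1.0, t_fixed=0.2):
    """Current architecture: registry silos are queried one after another,
    so user response time grows linearly with the number of silos."""
    return t_fixed + n_silos * t_query

def response_time_grid(n_silos, t_query=1.0, t_dispatch=0.05, t_fixed=0.2):
    """Grid-native re-implementation: queries fan out to all silos in
    parallel, so response time is bounded by the slowest single query
    (idealized here as exactly constant in the number of silos)."""
    return t_fixed + t_dispatch + t_query

# Compare the two architectures as the number of silos scales up.
for n in (1, 5, 10):
    print(n, response_time_current(n), response_time_grid(n))
```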

  9. The Living Dead: Transformative Experiences in Modelling Natural Selection

    ERIC Educational Resources Information Center

    Petersen, Morten Rask

    2017-01-01

    This study considers how students change their coherent conceptual understanding of natural selection through a hands-on simulation. The results show that most students change their understanding. In addition, some students also underwent a transformative experience and used their new knowledge in a leisure time activity. These transformative…

  10. Origin and model of transform faults in the Okinawa Trough

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Li, Sanzhong; Jiang, Suhua; Suo, Yanhui; Guo, Lingli; Wang, Yongming; Zhang, Huixuan

    2017-06-01

Transform faults in back-arc basins are the key to revealing the opening and development of marginal seas. The Okinawa Trough (OT) represents an incipient and active back-arc or marginal sea basin oriented in a general NE-SW direction. To determine the strikes and spatial distribution of transform faults in the OT, this paper dissects the NW- and NNE-SN-trending fault patterns on the basis of seismic profiles, gravity anomalies and regional geological data. There are three main NW-trending transpressional faults in the OT, which are the seaward propagation of NW-trending faults in the East China Continent. The NNE-SN-trending faults, with a right-stepping distribution, exhibit right-lateral shearing. The strike-slip pull-apart process or transtensional faulting triggered the back-arc rifting or extension, and these faults evolved into transform faults with the emergence of oceanic crust. Thus, the transform fault patterns are inherited from pre-existing oblique transtensional faults at the offsets between rifting segments. Therefore, the OT follows an oblique spreading mechanism similar to that of nascent oceans such as the Red Sea and the Gulf of Aden.

  11. Modeling Herriott cells using the linear canonical transform.

    PubMed

    Dahlen, Dar; Wilcox, Russell; Leemans, Wim

    2017-01-10

    We demonstrate a new way to analyze stable, multipass optical cavities (Herriott cells), using the linear canonical transform formalism, showing that re-entrant designs reproduce an arbitrary input field at the output, resulting in useful symmetries. We use this analysis to predict the stability of cavities used in interferometric delay lines for temporal pulse addition.
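The re-entrant property claimed in the abstract can be illustrated in the simpler paraxial ABCD-matrix picture (a special case of the linear canonical transform formalism the authors use): a re-entrant design makes the per-pass matrix conjugate to a rotation by 2*pi*k/N, so after N passes the round trip is the identity. The mirror spacing and focal length below are illustrative choices, not a design from the paper.

```python
def mat_mul(a, b):
    """2x2 matrix product, sufficient for paraxial ray (ABCD) optics."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def herriott_pass(d, f):
    """One pass through the cell: free propagation over distance d, then
    reflection off a mirror of focal length f."""
    prop = [[1.0, d], [0.0, 1.0]]
    mirror = [[1.0, 0.0], [-1.0 / f, 1.0]]
    return mat_mul(mirror, prop)

# For a stable cell the per-pass matrix is conjugate to a rotation by theta,
# with cos(theta) = 1 - d/(2*f).  Choosing d = f gives theta = 60 degrees, so
# the cell is re-entrant after N = 6 passes: the total matrix is the identity
# and an arbitrary input ray (hence field) is reproduced at the output.
m = herriott_pass(1.0, 1.0)
m_total = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(6):
    m_total = mat_mul(m_total, m)
```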

  12. Modelling, Transformations, and Scaling Decisions in Constrained Optimization Problems

    DTIC Science & Technology

    1976-03-01

Keywords: goal programming, linear fractional programming, mathematical programming, nonlinear programming, nonlinear optimization, transformations, scaling. A discussion is given of separable programming, goal programming, and linear fractional programming in nonlinear optimization problems. The sensitivity of the GRG code to scaling, rotation of coordinates, and translation of variables is examined.

  13. Origin and model of transform faults in the Okinawa Trough

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Li, Sanzhong; Jiang, Suhua; Suo, Yanhui; Guo, Lingli; Wang, Yongming; Zhang, Huixuan

    2017-03-01

Transform faults in back-arc basins are the key to revealing the opening and development of marginal seas. The Okinawa Trough (OT) represents an incipient and active back-arc or marginal sea basin oriented in a general NE-SW direction. To determine the strikes and spatial distribution of transform faults in the OT, this paper dissects the NW- and NNE-SN-trending fault patterns on the basis of seismic profiles, gravity anomalies and regional geological data. There are three main NW-trending transpressional faults in the OT, which are the seaward propagation of NW-trending faults in the East China Continent. The NNE-SN-trending faults, with a right-stepping distribution, exhibit right-lateral shearing. The strike-slip pull-apart process or transtensional faulting triggered the back-arc rifting or extension, and these faults evolved into transform faults with the emergence of oceanic crust. Thus, the transform fault patterns are inherited from pre-existing oblique transtensional faults at the offsets between rifting segments. Therefore, the OT follows an oblique spreading mechanism similar to that of nascent oceans such as the Red Sea and the Gulf of Aden.

  14. Use of Dynamic Models and Operational Architecture to Solve Complex Navy Challenges

    NASA Technical Reports Server (NTRS)

Grande, Darby; Black, J. Todd; Freeman, Jared; Sorber, Tim; Serfaty, Daniel

    2010-01-01

The United States Navy established 8 Maritime Operations Centers (MOC) to enhance the command and control of forces at the operational level of warfare. Each MOC is a headquarters manned by qualified joint operational-level staffs and enabled by globally interoperable C4I systems. To assess and refine MOC staffing, equipment, and schedules, a dynamic software model was developed. The model leverages pre-existing operational process architecture, joint military task lists that define activities and their precedence relations, as well as Navy documents that specify manning and roles per activity. The software model serves as a "computational wind-tunnel" in which to test a MOC on a mission, and to refine its structure, staffing, processes, and schedules. More generally, the model supports resource allocation decisions concerning Doctrine, Organization, Training, Materiel, Leadership, Personnel and Facilities (DOTMLPF) at MOCs around the world. A rapid prototype effort efficiently produced this software in less than five months, using an integrated process team consisting of MOC military and civilian staff, modeling experts, and software developers. The work reported here was conducted for Commander, United States Fleet Forces Command in Norfolk, Virginia, code N5-0LW (Operational Level of War), which facilitates the identification, consolidation, and prioritization of MOC capabilities requirements, and the implementation and delivery of MOC solutions.

  15. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2015-01-01

This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40,000) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally-conservative engine operating limits may be relaxed to increase the performance of the engine and the overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.
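The estimation idea behind the OTKF can be sketched with an ordinary two-state Kalman filter in which an unmeasured "health" parameter biases a measurable speed deviation through the dynamics; the matrices, noise levels, and state names below are invented for illustration and are not CMAPSS40,000 values or the actual optimal-tuner formulation.

```python
import random

# Two-state model: x = [speed_deviation, health_shift].  The health shift is
# not measured directly; it only biases the measured speed deviation.
A = [[0.9, 0.5], [0.0, 1.0]]
Q = [[1e-4, 0.0], [0.0, 1e-6]]   # assumed process-noise covariance
R = 1e-2                         # assumed measurement-noise variance

def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def kf_step(x, P, y):
    # predict
    xp = [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]
    At = [[A[j][i] for j in range(2)] for i in range(2)]
    Pp = mm(mm(A, P), At)
    Pp = [[Pp[i][j] + Q[i][j] for j in range(2)] for i in range(2)]
    # update with the scalar measurement y = x[0] + noise
    S = Pp[0][0] + R
    K = [Pp[0][0] / S, Pp[1][0] / S]
    resid = y - xp[0]
    xn = [xp[0] + K[0] * resid, xp[1] + K[1] * resid]
    Pn = [[(1 - K[0]) * Pp[0][j] for j in range(2)],
          [Pp[1][j] - K[1] * Pp[0][j] for j in range(2)]]
    return xn, Pn

random.seed(1)
true_state = [0.0, 0.4]          # true (unmeasured) health shift = 0.4
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for _ in range(300):
    true_state = [A[0][0]*true_state[0] + A[0][1]*true_state[1], true_state[1]]
    y = true_state[0] + random.gauss(0.0, R ** 0.5)
    x, P = kf_step(x, P, y)
# x[1] now tracks the unmeasured health shift from speed measurements alone
```

The point of the sketch is the architectural one made in the abstract: once the unmeasured parameter is estimated, a controller can act on it (and on the risk it represents) directly rather than on a conservative surrogate limit.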

  16. Coupling Multi-Component Models with MPH on Distributed Memory Computer Architectures

    SciTech Connect

    He, Yun; Ding, Chris

    2005-03-24

    A growing trend in developing large and complex applications on today's Teraflop scale computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. One example is the Community Climate System Model which consists of atmosphere, ocean, land-surface and sea-ice components. Each component is semi-independent and has been developed at a different institution. We study how this multi-component, multi-executable application can run effectively on distributed memory architectures. For the first time, we clearly identify five effective execution modes and develop the MPH library to support application development utilizing these modes. MPH performs component-name registration, resource allocation and initial component handshaking in a flexible way.

  17. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally-conservative engine operating limits may be relaxed to increase the performance of the engine and the overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.

  18. How plant architecture affects light absorption and photosynthesis in tomato: towards an ideotype for plant architecture using a functional-structural plant model.

    PubMed

    Sarlikioti, V; de Visser, P H B; Buck-Sorlin, G H; Marcelis, L F M

    2011-10-01

    Manipulation of plant structure can strongly affect light distribution in the canopy and photosynthesis. The aim of this paper is to find a plant ideotype for optimization of light absorption and canopy photosynthesis. Using a static functional structural plant model (FSPM), a range of different plant architectural characteristics was tested for two different seasons in order to find the optimal architecture with respect to light absorption and photosynthesis. Simulations were performed with an FSPM of a greenhouse-grown tomato crop. Sensitivity analyses were carried out for leaf elevation angle, leaf phyllotaxis, leaflet angle, leaf shape, leaflet arrangement and internode length. From the results of this analysis two possible ideotypes were proposed. Four different vertical light distributions were also tested, while light absorption cumulated over the whole canopy was kept the same. Photosynthesis was augmented by 6 % in winter and reduced by 7 % in summer, when light absorption in the top part of the canopy was increased by 25 %, while not changing light absorption of the canopy as a whole. The measured plant structure was already optimal with respect to leaf elevation angle, leaflet angle and leaflet arrangement for both light absorption and photosynthesis while phyllotaxis had no effect. Increasing the length : width ratio of leaves by 1·5 or increasing internode length from 7 cm to 12 cm led to an increase of 6-10 % for light absorption and photosynthesis. At high light intensities (summer) deeper penetration of light in the canopy improves crop photosynthesis, but not at low light intensities (winter). In particular, internode length and leaf shape affect the vertical distribution of light in the canopy. A new plant ideotype with more spacious canopy architecture due to long internodes and long and narrow leaves led to an increase in crop photosynthesis of up to 10 %.

  19. How plant architecture affects light absorption and photosynthesis in tomato: towards an ideotype for plant architecture using a functional–structural plant model

    PubMed Central

    Sarlikioti, V.; de Visser, P. H. B.; Buck-Sorlin, G. H.; Marcelis, L. F. M.

    2011-01-01

    Background and Aims Manipulation of plant structure can strongly affect light distribution in the canopy and photosynthesis. The aim of this paper is to find a plant ideotype for optimization of light absorption and canopy photosynthesis. Using a static functional structural plant model (FSPM), a range of different plant architectural characteristics was tested for two different seasons in order to find the optimal architecture with respect to light absorption and photosynthesis. Methods Simulations were performed with an FSPM of a greenhouse-grown tomato crop. Sensitivity analyses were carried out for leaf elevation angle, leaf phyllotaxis, leaflet angle, leaf shape, leaflet arrangement and internode length. From the results of this analysis two possible ideotypes were proposed. Four different vertical light distributions were also tested, while light absorption cumulated over the whole canopy was kept the same. Key Results Photosynthesis was augmented by 6 % in winter and reduced by 7 % in summer, when light absorption in the top part of the canopy was increased by 25 %, while not changing light absorption of the canopy as a whole. The measured plant structure was already optimal with respect to leaf elevation angle, leaflet angle and leaflet arrangement for both light absorption and photosynthesis while phyllotaxis had no effect. Increasing the length : width ratio of leaves by 1·5 or increasing internode length from 7 cm to 12 cm led to an increase of 6–10 % for light absorption and photosynthesis. Conclusions At high light intensities (summer) deeper penetration of light in the canopy improves crop photosynthesis, but not at low light intensities (winter). In particular, internode length and leaf shape affect the vertical distribution of light in the canopy. A new plant ideotype with more spacious canopy architecture due to long internodes and long and narrow leaves led to an increase in crop photosynthesis of up to 10 %. PMID:21865217

  20. Model-Based Systems Engineering With the Architecture Analysis and Design Language (AADL) Applied to NASA Mission Operations

    NASA Technical Reports Server (NTRS)

    Munoz Fernandez, Michela Miche

    2014-01-01

The potential of Model-Based Systems Engineering (MBSE) using the Architecture Analysis and Design Language (AADL) applied to space systems will be described. AADL modeling is applicable to real-time embedded systems, the types of systems NASA builds. A case study with the Juno mission to Jupiter showcases how this work would enable future missions to benefit from using these models throughout their life cycle, from design to flight operations.

  1. Model-Based Systems Engineering With the Architecture Analysis and Design Language (AADL) Applied to NASA Mission Operations

    NASA Technical Reports Server (NTRS)

    Munoz Fernandez, Michela Miche

    2014-01-01

The potential of Model-Based Systems Engineering (MBSE) using the Architecture Analysis and Design Language (AADL) applied to space systems will be described. AADL modeling is applicable to real-time embedded systems, the types of systems NASA builds. A case study with the Juno mission to Jupiter showcases how this work would enable future missions to benefit from using these models throughout their life cycle, from design to flight operations.

  2. Assessment of model uncertainty during the river export modelling of pesticides and transformation products

    NASA Astrophysics Data System (ADS)

    Gassmann, Matthias; Olsson, Oliver; Kümmerer, Klaus

    2013-04-01

The modelling of organic pollutants in the environment is burdened by a load of uncertainties. Not only are parameter values uncertain, but often so are the mass and timing of pesticide application. Introducing transformation products (TPs) into modelling adds further uncertainty, arising from the dependence of these substances on their parent compounds and from the introduction of new model parameters. The purpose of this study was to investigate the behaviour of a parsimonious catchment-scale model for the assessment of river concentrations of the insecticide Chlorpyrifos (CP) and two of its TPs, Chlorpyrifos Oxon (CPO) and 3,5,6-trichloro-2-pyridinol (TCP), under the influence of uncertain input parameter values. Parameter uncertainty and pesticide application uncertainty in particular were investigated by Global Sensitivity Analysis (GSA) and the Generalized Likelihood Uncertainty Estimation (GLUE) method, based on Monte Carlo sampling. GSA revealed that half-lives and sorption parameters, as well as half-lives and transformation parameters, were correlated with each other; that is, the concepts used to model sorption and degradation/transformation were correlated. Thus, it may be difficult in modelling studies to optimize parameter values for these modules. Furthermore, we could show that erroneous pesticide application mass and timing were compensated for during Monte Carlo sampling by changes in the half-life of CP. However, introducing TCP into the calculation of the objective function enhanced the identifiability of pesticide application mass. The GLUE analysis showed that CP and TCP were modelled successfully, but CPO modelling failed, with high uncertainty and insensitive parameters. We assumed a structural error of the model which was especially important for CPO assessment. This shows that there is the possibility that a chemical and some of its TPs can be modelled successfully by a specific model structure, but for other TPs, the model
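The GLUE procedure named in the abstract can be sketched on a toy model: sample parameter sets by Monte Carlo, score each against observations with a likelihood measure, keep only "behavioural" sets above a threshold, and read parameter uncertainty off the kept sets. The decay model, parameter range, and threshold below are assumptions standing in for the actual catchment model and its pesticide parameters.

```python
import math, random

random.seed(42)

def model(k, times):
    """Toy first-order decay standing in for the catchment model; the rate
    constant k plays the role of an uncertain half-life parameter."""
    return [math.exp(-k * t) for t in times]

times = [float(t) for t in range(10)]
obs = model(0.2, times)          # synthetic 'observations' (true k = 0.2)

def nash_sutcliffe(sim, obs):
    mean_obs = sum(obs) / len(obs)
    err = sum((s - o) ** 2 for s, o in zip(sim, obs))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

# GLUE: Monte Carlo sample the parameter, keep 'behavioural' sets above a
# likelihood threshold, and derive uncertainty bounds from the kept sets.
behavioural = []
for _ in range(5000):
    k = random.uniform(0.01, 1.0)
    eff = nash_sutcliffe(model(k, times), obs)
    if eff > 0.7:                # behavioural threshold (an assumption)
        behavioural.append((eff, k))

k_values = sorted(k for _, k in behavioural)
k_low, k_high = k_values[0], k_values[-1]   # crude uncertainty bounds on k
```

In the study this kind of sampling was applied jointly to many parameters, which is how compensation effects (e.g. application mass traded off against half-life) become visible.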

  3. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model.

    PubMed

    Lo, Chiao-Ling; Lossie, Amy C; Liang, Tiebing; Liu, Yunlong; Xuei, Xiaoling; Lumeng, Lawrence; Zhou, Feng C; Muir, William M

    2016-08-01

Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view on potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), was classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169), with the fewest in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits.

  4. Architectural and morphological assessment of rat abdominal wall muscles: comparison for use as a human model

    PubMed Central

    Brown, Stephen H M; Banuelos, Karina; Ward, Samuel R; Lieber, Richard L

    2010-01-01

    The abdominal wall is a composite of muscles that are important for the mechanical stability of the spine and pelvis. Tremendous clinical attention is given to these muscles, yet little is known about how they function in isolation or how they interact with one another. Given the morphological, vascular, and innervation complexities associated with these muscles and their proximity to the internal organs, an appropriate animal model is important for understanding their physiological and mechanical significance during function. To determine the extent to which the rat abdominal wall resembles that of human, 10 adult male Sprague-Dawley rats were killed and formalin-fixed for architectural and morphological analyses of the four abdominal wall muscles (rectus abdominis, external oblique, internal oblique, and transversus abdominis). Physiological cross-sectional areas and optimal fascicle lengths demonstrated a pattern that was similar to human abdominal wall muscles. In addition, sarcomere lengths measured in the neutral spine posture were similar to human in their relation to optimal sarcomere length. These data indicate that the force-generating and length change capabilities of these muscles, relative to one another, are similar in rat and human. Finally, the fiber lines of action of each abdominal muscle were similar to human over most of the abdominal wall. The main exception was in the lower abdominal region (inferior to the pelvic crest), where the external oblique becomes aponeurotic in human but continues as muscle fibers into its pelvic insertion in the rat. We conclude that, based on the morphology and architecture of the abdominal wall muscles, the adult male Sprague-Dawley rat is a good candidate for a model representation of human, particularly in the middle and upper abdominal wall regions. PMID:20646108

  5. Architectural and morphological assessment of rat abdominal wall muscles: comparison for use as a human model.

    PubMed

    Brown, Stephen H M; Banuelos, Karina; Ward, Samuel R; Lieber, Richard L

    2010-09-01

    The abdominal wall is a composite of muscles that are important for the mechanical stability of the spine and pelvis. Tremendous clinical attention is given to these muscles, yet little is known about how they function in isolation or how they interact with one another. Given the morphological, vascular, and innervation complexities associated with these muscles and their proximity to the internal organs, an appropriate animal model is important for understanding their physiological and mechanical significance during function. To determine the extent to which the rat abdominal wall resembles that of human, 10 adult male Sprague-Dawley rats were killed and formalin-fixed for architectural and morphological analyses of the four abdominal wall muscles (rectus abdominis, external oblique, internal oblique, and transversus abdominis). Physiological cross-sectional areas and optimal fascicle lengths demonstrated a pattern that was similar to human abdominal wall muscles. In addition, sarcomere lengths measured in the neutral spine posture were similar to human in their relation to optimal sarcomere length. These data indicate that the force-generating and length change capabilities of these muscles, relative to one another, are similar in rat and human. Finally, the fiber lines of action of each abdominal muscle were similar to human over most of the abdominal wall. The main exception was in the lower abdominal region (inferior to the pelvic crest), where the external oblique becomes aponeurotic in human but continues as muscle fibers into its pelvic insertion in the rat. We conclude that, based on the morphology and architecture of the abdominal wall muscles, the adult male Sprague-Dawley rat is a good candidate for a model representation of human, particularly in the middle and upper abdominal wall regions.

  6. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model

    PubMed Central

    Lo, Chiao-Ling; Liang, Tiebing; Liu, Yunlong; Lumeng, Lawrence; Zhou, Feng C.; Muir, William M.

    2016-01-01

Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view on potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), was classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169), with the fewest in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits. PMID:27490364

  7. Integrating mixed-effect models into an architectural plant model to simulate inter- and intra-progeny variability: a case study on oil palm (Elaeis guineensis Jacq.).

    PubMed

    Perez, Raphaël P A; Pallas, Benoît; Le Moguédec, Gilles; Rey, Hervé; Griffon, Sébastien; Caliman, Jean-Pierre; Costes, Evelyne; Dauzat, Jean

    2016-08-01

    Three-dimensional (3D) reconstruction of plants is time-consuming and involves considerable levels of data acquisition. This is possibly one reason why the integration of genetic variability into 3D architectural models has so far been largely overlooked. In this study, an allometry-based approach was developed to account for architectural variability in 3D architectural models of oil palm (Elaeis guineensis Jacq.) as a case study. Allometric relationships were used to model architectural traits from individual leaflets to the entire crown while accounting for ontogenetic and morphogenetic gradients. Inter- and intra-progeny variabilities were evaluated for each trait and mixed-effect models were used to estimate the mean and variance parameters required for complete 3D virtual plants. Significant differences in leaf geometry (petiole length, density of leaflets, and rachis curvature) and leaflet morphology (gradients of leaflet length and width) were detected between and within progenies and were modelled in order to generate populations of plants that were consistent with the observed populations. The application of mixed-effect models on allometric relationships highlighted an interesting trade-off between model accuracy and ease of defining parameters for the 3D reconstruction of plants while at the same time integrating their observed variability. Future research will be dedicated to sensitivity analyses coupling the structural model presented here with a radiative balance model in order to identify the key architectural traits involved in light interception efficiency.

  8. Functional mapping of quantitative trait loci underlying growth trajectories using a transform-both-sides logistic model.

    PubMed

    Wu, Rongling; Ma, Chang-Xing; Lin, Min; Wang, Zuoheng; Casella, George

    2004-09-01

    The incorporation of developmental control mechanisms of growth has proven to be a powerful tool in mapping quantitative trait loci (QTL) underlying growth trajectories. A theoretical framework for implementing a QTL mapping strategy with growth laws has been established. This framework can be generalized to an arbitrary number of time points, where growth is measured, and becomes computationally more tractable, when the assumption of variance stationarity is made. In practice, however, this assumption is likely to be violated for age-specific growth traits due to a scale effect. In this article, we present a new statistical model for mapping growth QTL, which also addresses the problem of variance stationarity, by using a transform-both-sides (TBS) model advocated by Carroll and Ruppert (1984, Journal of the American Statistical Association 79, 321-328). The TBS-based model for mapping growth QTL cannot only maintain the original biological properties of a growth model, but also can increase the accuracy and precision of parameter estimation and the power to detect a QTL responsible for growth differentiation. Using the TBS-based model, we successfully map a QTL governing growth trajectories to a linkage group in an example of forest trees. The statistical and biological properties of the estimates of this growth QTL position and effect are investigated using Monte Carlo simulation studies. The implications of our model for understanding the genetic architecture of growth are discussed.
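The variance non-stationarity that motivates the transform-both-sides approach can be demonstrated with a small simulation: under a multiplicative (scale-dependent) error, the raw spread of age-specific growth measurements grows with the mean, while applying the same log transform to both sides of the growth law restores a constant residual variance. The logistic parameters and error model below are illustrative, not estimates from the forest-tree data.

```python
import math, random, statistics

random.seed(7)

def logistic(t, a=30.0, b=9.0, r=0.8):
    """Illustrative logistic growth curve (parameters are made up)."""
    return a / (1.0 + b * math.exp(-r * t))

# Age-specific growth with multiplicative error: the raw spread scales with
# the mean, violating the variance-stationarity assumption.
sigma = 0.1
def observe(t):
    return logistic(t) * math.exp(random.gauss(0.0, sigma))

early = [observe(1.0) for _ in range(2000)]   # small plants
late = [observe(8.0) for _ in range(2000)]    # large plants

sd_raw_early = statistics.stdev(early)
sd_raw_late = statistics.stdev(late)          # much larger: the scale effect

# Transforming both sides, log(y) = log(f(t)) + error, restores a constant
# residual variance while preserving the growth law itself.
sd_log_early = statistics.stdev([math.log(y) for y in early])
sd_log_late = statistics.stdev([math.log(y) for y in late])
```

The TBS model of Carroll and Ruppert generalizes this idea beyond the log (e.g. Box-Cox transforms), which is what lets the QTL mapping retain the biological growth model while meeting the statistical assumptions.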

  9. A stochastic model of tree architecture and biomass partitioning: application to Mongolian Scots pines.

    PubMed

    Wang, Feng; Kang, Mengzhen; Lu, Qi; Letort, Véronique; Han, Hui; Guo, Yan; de Reffye, Philippe; Li, Baoguo

    2011-04-01

Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal species used for windbreak and sand stabilization in arid and semi-arid areas in northern China. A model-assisted analysis of its canopy architectural development and functions is valuable for better understanding its behaviour and roles in fragile ecosystems. However, due to the intrinsic complexity and variability of trees, the parametric identification of such models is currently a major obstacle to their evaluation and their validation with respect to real data. The aim of this paper was to present the mathematical framework of a stochastic functional-structural model (GL2) and its parameterization for Mongolian Scots pines, taking into account inter-plant variability in terms of topological development and biomass partitioning. In GL2, plant organogenesis is determined by the realization of random variables representing the behaviour of axillary or apical buds. The associated probabilities are calibrated for Mongolian Scots pines using experimental data including means and variances of the numbers of organs per plant in each order-based class. The functional part of the model relies on the principles of source-sink regulation and is parameterized by direct observations of living trees and the inversion method using measured data for organ mass and dimensions. The final calibration accuracy satisfies both organogenetic and morphogenetic processes. Our hypothesis for the number of organs following a binomial distribution is found to be consistent with the real data. Based on the calibrated parameters, stochastic simulations of the growth of Mongolian Scots pines in plantations are generated by the Monte Carlo method, allowing analysis of the inter-individual variability of the number of organs and biomass partitioning. Three-dimensional (3D) architectures of young Mongolian Scots pines were simulated for 4-, 6- and 8-year-old trees. This work provides a new method for characterizing

  10. A stochastic model of tree architecture and biomass partitioning: application to Mongolian Scots pines

    PubMed Central

    Wang, Feng; Kang, Mengzhen; Lu, Qi; Letort, Véronique; Han, Hui; Guo, Yan; de Reffye, Philippe; Li, Baoguo

    2011-01-01

    Background and Aims Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal species used for windbreak and sand stabilization in arid and semi-arid areas in northern China. A model-assisted analysis of its canopy architectural development and functions is valuable for better understanding its behaviour and roles in fragile ecosystems. However, due to the intrinsic complexity and variability of trees, the parametric identification of such models is currently a major obstacle to their evaluation and their validation with respect to real data. The aim of this paper was to present the mathematical framework of a stochastic functional–structural model (GL2) and its parameterization for Mongolian Scots pines, taking into account inter-plant variability in terms of topological development and biomass partitioning. Methods In GL2, plant organogenesis is determined by the realization of random variables representing the behaviour of axillary or apical buds. The associated probabilities are calibrated for Mongolian Scots pines using experimental data including means and variances of the numbers of organs per plant in each order-based class. The functional part of the model relies on the principles of source–sink regulation and is parameterized by direct observations of living trees and the inversion method using measured data for organ mass and dimensions. Key Results The final calibration accuracy satisfies both organogenetic and morphogenetic processes. Our hypothesis for the number of organs following a binomial distribution is found to be consistent with the real data. Based on the calibrated parameters, stochastic simulations of the growth of Mongolian Scots pines in plantations are generated by the Monte Carlo method, allowing analysis of the inter-individual variability of the number of organs and biomass partitioning. Three-dimensional (3D) architectures of young Mongolian Scots pines were simulated for 4-, 6- and 8-year-old trees
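The binomial hypothesis for organ counts stated in the abstract can be checked with a minimal Monte Carlo sketch. All names and parameter values below are illustrative, not the GL2 model's calibrated values: if each of n potential organs per axis is realized independently with probability p, the simulated means and variances should approach np and np(1-p).

```python
import random

def simulate_organ_counts(n_cycles, p_bud, n_trees, seed=0):
    """Monte Carlo draw of per-tree organ counts, assuming each of n_cycles
    potential organs appears independently with probability p_bud
    (the binomial hypothesis; toy values, not GL2 calibration)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_trees):
        counts.append(sum(1 for _ in range(n_cycles) if rng.random() < p_bud))
    return counts

counts = simulate_organ_counts(n_cycles=20, p_bud=0.7, n_trees=10000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# Binomial theory predicts mean = n*p = 14 and variance = n*p*(1-p) = 4.2;
# the sampled moments should be close for 10000 simulated trees.
```

Comparing simulated against theoretical moments in this way mirrors, in miniature, the paper's check of per-class organ-count means and variances against the binomial model.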

  11. Structure Fluctuation Model of Melting and Polymorphic Transformations in Metals

    NASA Astrophysics Data System (ADS)

    Filippov, E. S.

    2017-09-01

Relationships between volumes during thermal expansion of metals are analyzed. It is demonstrated that metals with a BCC structure undergo a two-step structural change at the melting point: first the BCC structure transforms into an FCC structure, and then clusters are formed with K = 12 and statistical packing of atoms in the liquid phase. Before melting, hexagonally layer-by-layer (6 + 6) packed Cd and Zn change their structure from K = 6 to K = 8. Metals with an FCC structure do not change the number of neighbors before melting, forming clusters with K = 12. It is shown that the conditions for pre-melting and for the polymorphic FCC (HCP) → BCC transformation are reached at the critical volume of thermal expansion caused by the levelling of fluctuations in atomic and electron densities.

  12. Application of Model-based Systems Engineering Methods to Development of Combat System Architectures

    DTIC Science & Technology

    2009-04-22

[The extracted abstract consists largely of report-documentation-page (SF 298) residue. Recoverable fragments refer to combat system architectures that meet mission requirements; to functional-decomposition items such as Correlation, Classification, Energize Missile, Ignite Propulsion, Missile Fly Out, Missile Comms, Engagement Scheduling, and Illuminator; and to architecture issues and resolutions, including the lack of a DoD common SPL library and of core knowledge in the architecture development process.]

  13. Ivory Coast-Ghana margin: model of a transform margin

    SciTech Connect

    Mascle, J.; Blarez, E.

    1987-05-01

The authors present a marine study of the eastern Ivory Coast-Ghana continental margin, which they consider one of the most spectacular extinct transform margins. This margin was created during Early Cretaceous time and has not been subjected to any major geodynamic reactivation since its formation. Based on this example, they propose four main successive stages in the evolution of a transform margin. Shearing contact is first active between two probably thick continental crusts, and then between progressively thinning continental crusts. This leads to the creation of specific geological structures such as pull-apart grabens, elongated fault lineaments, major fault scarps, shear folds, and marginal ridges. After the final continental breakup, a hot center (the mid-oceanic ridge axis) progressively drifts along the newly created margin. The contact between two lithospheres of different nature necessarily induces, through thermal exchanges, vertical crustal readjustments. Finally, the transform margin remains directly adjacent to a hot but cooling oceanic lithosphere; its subsidence behavior should then progressively become comparable to the thermal subsidence of classic rifted margins.

  14. Analysis of Terrestrial Planet Formation by the Grand Tack Model: System Architecture and Tack Location

    NASA Astrophysics Data System (ADS)

    Brasser, R.; Matsumura, S.; Ida, S.; Mojzsis, S. J.; Werner, S. C.

    2016-04-01

    The Grand Tack model of terrestrial planet formation has emerged in recent years as the premier scenario used to account for several observed features of the inner solar system. It relies on the early migration of the giant planets to gravitationally sculpt and mix the planetesimal disk down to ˜1 au, after which the terrestrial planets accrete from material remaining in a narrow circumsolar annulus. Here, we investigate how the model fares under a range of initial conditions and migration course-change (“tack”) locations. We run a large number of N-body simulations with tack locations of 1.5 and 2 au and test initial conditions using equal-mass planetary embryos and a semi-analytical approach to oligarchic growth. We make use of a recent model of the protosolar disk that takes into account viscous heating, includes the full effect of type 1 migration, and employs a realistic mass-radius relation for the growing terrestrial planets. Our results show that the canonical tack location of Jupiter at 1.5 au is inconsistent with the most massive planet residing at 1 au at greater than 95% confidence. This favors a tack farther out at 2 au for the disk model and parameters employed. Of the different initial conditions, we find that the oligarchic case is capable of statistically reproducing the orbital architecture and mass distribution of the terrestrial planets, while the equal-mass embryo case is not.

  15. ANALYSIS OF TERRESTRIAL PLANET FORMATION BY THE GRAND TACK MODEL: SYSTEM ARCHITECTURE AND TACK LOCATION

    SciTech Connect

    Brasser, R.; Ida, S.; Matsumura, S.; Mojzsis, S. J.; Werner, S. C.

    2016-04-20

    The Grand Tack model of terrestrial planet formation has emerged in recent years as the premier scenario used to account for several observed features of the inner solar system. It relies on the early migration of the giant planets to gravitationally sculpt and mix the planetesimal disk down to ∼1 au, after which the terrestrial planets accrete from material remaining in a narrow circumsolar annulus. Here, we investigate how the model fares under a range of initial conditions and migration course-change (“tack”) locations. We run a large number of N-body simulations with tack locations of 1.5 and 2 au and test initial conditions using equal-mass planetary embryos and a semi-analytical approach to oligarchic growth. We make use of a recent model of the protosolar disk that takes into account viscous heating, includes the full effect of type 1 migration, and employs a realistic mass–radius relation for the growing terrestrial planets. Our results show that the canonical tack location of Jupiter at 1.5 au is inconsistent with the most massive planet residing at 1 au at greater than 95% confidence. This favors a tack farther out at 2 au for the disk model and parameters employed. Of the different initial conditions, we find that the oligarchic case is capable of statistically reproducing the orbital architecture and mass distribution of the terrestrial planets, while the equal-mass embryo case is not.

  16. A Kinetic Vlasov Model for Plasma Simulation Using Discontinuous Galerkin Method on Many-Core Architectures

    NASA Astrophysics Data System (ADS)

    Reddell, Noah

Advances are reported in the three pillars of computational science, achieving a new capability for understanding dynamic plasma phenomena outside of local thermodynamic equilibrium. A continuum kinetic model for plasma based on the Vlasov-Maxwell system for multiple particle species is developed. Consideration is added for boundary conditions in a truncated velocity domain and supporting wall interactions. A scheme to scale the velocity domain for multiple particle species with different temperatures and particle masses while sharing one computational mesh is described. A method for assessing the degree to which the kinetic solution differs from a Maxwell-Boltzmann distribution is introduced and tested on a thoroughly studied test case. The discontinuous Galerkin numerical method is extended for efficient solution of hyperbolic conservation laws in five or more particle phase-space dimensions using tensor-product hypercube elements with arbitrary polynomial order. A scheme for velocity moment integration is incorporated, as required for coupling between the plasma species and electromagnetic waves. A new high-performance simulation code, WARPM, is developed to efficiently implement the model and numerical method on emerging many-core supercomputing architectures. WARPM uses the OpenCL programming model for computational kernels and task parallelism to overlap computation with communication. WARPM single-node performance and parallel scaling efficiency are analyzed, with bottlenecks identified to guide future directions for the implementation. The plasma modeling capability is validated against physical problems with analytic solutions and well-established benchmark problems.
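The abstract mentions a method for quantifying how far a kinetic solution departs from a Maxwell-Boltzmann distribution. One plausible such measure (not necessarily the one used in WARPM) is the relative L2 distance between a discrete distribution function and the Maxwellian sharing its first three velocity moments; a sketch in 1-D velocity space:

```python
import math

def maxwellian(v, n, u, T):
    # 1-D Maxwell-Boltzmann with density n, drift u, temperature T (m = k_B = 1)
    return n / math.sqrt(2 * math.pi * T) * math.exp(-(v - u) ** 2 / (2 * T))

def maxwellian_deviation(v_grid, f, dv):
    """Relative L2 distance between f and the Maxwellian matching its
    density, drift and temperature (an illustrative deviation measure;
    the paper's exact metric is not given in the abstract)."""
    n = sum(f) * dv
    u = sum(vi * fi for vi, fi in zip(v_grid, f)) * dv / n
    T = sum((vi - u) ** 2 * fi for vi, fi in zip(v_grid, f)) * dv / n
    fM = [maxwellian(vi, n, u, T) for vi in v_grid]
    num = math.sqrt(sum((fi - fMi) ** 2 for fi, fMi in zip(f, fM)) * dv)
    den = math.sqrt(sum(fi ** 2 for fi in f) * dv)
    return num / den

dv = 0.05
v_grid = [-8 + i * dv for i in range(321)]
f_eq = [maxwellian(v, 1.0, 0.5, 1.2) for v in v_grid]            # near-equilibrium
f_2b = [0.5 * (maxwellian(v, 1.0, -1.5, 0.5) + maxwellian(v, 1.0, 1.5, 0.5))
        for v in v_grid]                                          # two-stream
dev_eq = maxwellian_deviation(v_grid, f_eq, dv)                   # ~0
dev_2b = maxwellian_deviation(v_grid, f_2b, dv)                   # clearly > 0
```

A drifting Maxwellian scores essentially zero, while a two-stream distribution, which shares the same moments as a single hot Maxwellian, scores a large deviation.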

  17. Modeling halotropism: a key role for root tip architecture and reflux loop remodeling in redistributing auxin.

    PubMed

    van den Berg, Thea; Korver, Ruud A; Testerink, Christa; Ten Tusscher, Kirsten H W J

    2016-09-15

    A key characteristic of plant development is its plasticity in response to various and dynamically changing environmental conditions. Tropisms contribute to this flexibility by allowing plant organs to grow from or towards environmental cues. Halotropism is a recently described tropism in which plant roots bend away from salt. During halotropism, as in most other tropisms, directional growth is generated through an asymmetric auxin distribution that generates differences in growth rate and hence induces bending. Here, we develop a detailed model of auxin transport in the Arabidopsis root tip and combine this with experiments to investigate the processes generating auxin asymmetry during halotropism. Our model points to the key role of root tip architecture in allowing the decrease in PIN2 at the salt-exposed side of the root to result in a re-routing of auxin to the opposite side. In addition, our model demonstrates how feedback of auxin on the auxin transporter AUX1 amplifies this auxin asymmetry, while a salt-induced transient increase in PIN1 levels increases the speed at which this occurs. Using AUX1-GFP imaging and pin1 mutants, we experimentally confirmed these model predictions, thus expanding our knowledge of the cellular basis of halotropism.

  18. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products generated from them: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it to the point cloud. The reflection intensity of each type of material was also analyzed to determine which construction materials have the highest reflectance coefficients and which the lowest, and in turn how this variable changes for different scanning parameters. Additionally, measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions with those taken in field conditions.
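The dependence of point-cloud density on scanning geometry described above follows from simple geometry: for a fixed angular step, point spacing on the surface grows linearly with range and with 1/cos of the incidence angle. A hedged sketch (idealized planar target, hypothetical parameter values, not the scanner used in the paper):

```python
import math

def footprint_spacing(angular_step_rad, distance_m, incidence_deg):
    """Approximate point spacing on a planar surface for a scanner with a
    fixed angular step: spacing grows linearly with range, and is stretched
    by 1/cos(incidence angle) on an obliquely viewed surface."""
    across_beam = angular_step_rad * distance_m
    on_surface = across_beam / math.cos(math.radians(incidence_deg))
    return on_surface

# Same angular step: density drops with distance and with grazing angles.
near_normal = footprint_spacing(1e-3, 5.0, 0.0)    # 5 mm spacing
far_oblique = footprint_spacing(1e-3, 20.0, 60.0)  # 20 mm / cos 60 deg = 40 mm
```

This is why the study varies both distance and incidence angle: at grazing geometry the effective point density on the material sample can fall by an order of magnitude even with an unchanged scanner setting.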

  19. OpenSimRoot: widening the scope and application of root architectural models.

    PubMed

    Postma, Johannes A; Kuppe, Christian; Owen, Markus R; Mellor, Nathan; Griffiths, Marcus; Bennett, Malcolm J; Lynch, Jonathan P; Watt, Michelle

    2017-08-01

OpenSimRoot is an open-source, functional-structural plant model and mathematical description of root growth and function. We describe OpenSimRoot and its functionality to broaden the benefits of root modeling to the plant science community. OpenSimRoot is an extended version of SimRoot, established to simulate root system architecture, nutrient acquisition and plant growth. OpenSimRoot has a modular, plugin-based infrastructure, coupling single plants and crop stands to soil nutrient and water transport models. It estimates the value of root traits for water and nutrient acquisition across environments and plant species. The flexible OpenSimRoot design allows upscaling from root anatomy to plant community in order to estimate the following: resource costs of developmental and anatomical traits; trait synergisms; and (interspecies) root competition. OpenSimRoot can model three-dimensional images from magnetic resonance imaging (MRI) and X-ray computed tomography (CT) of roots in soil. New modules include: soil water-dependent water uptake and xylem flow; tiller formation; evapotranspiration; simultaneous simulation of mobile solutes; mesh refinement; and root growth plasticity. OpenSimRoot integrates plant phenotypic data with environmental metadata to support experimental designs and to gain a mechanistic understanding at system scales.

  20. Modeling halotropism: a key role for root tip architecture and reflux loop remodeling in redistributing auxin

    PubMed Central

    van den Berg, Thea; Korver, Ruud A.; Testerink, Christa

    2016-01-01

    A key characteristic of plant development is its plasticity in response to various and dynamically changing environmental conditions. Tropisms contribute to this flexibility by allowing plant organs to grow from or towards environmental cues. Halotropism is a recently described tropism in which plant roots bend away from salt. During halotropism, as in most other tropisms, directional growth is generated through an asymmetric auxin distribution that generates differences in growth rate and hence induces bending. Here, we develop a detailed model of auxin transport in the Arabidopsis root tip and combine this with experiments to investigate the processes generating auxin asymmetry during halotropism. Our model points to the key role of root tip architecture in allowing the decrease in PIN2 at the salt-exposed side of the root to result in a re-routing of auxin to the opposite side. In addition, our model demonstrates how feedback of auxin on the auxin transporter AUX1 amplifies this auxin asymmetry, while a salt-induced transient increase in PIN1 levels increases the speed at which this occurs. Using AUX1-GFP imaging and pin1 mutants, we experimentally confirmed these model predictions, thus expanding our knowledge of the cellular basis of halotropism. PMID:27510970

  1. Relevance and limitations of crowding, fractal, and polymer models to describe nuclear architecture.

    PubMed

    Huet, Sébastien; Lavelle, Christophe; Ranchon, Hubert; Carrivain, Pascal; Victor, Jean-Marc; Bancaud, Aurélien

    2014-01-01

Chromosome architecture plays an essential role in all nuclear functions, and its physical description has attracted considerable interest in the biophysics community over the last few years. This research at the frontier of physics and biology has been stimulated by the demand for quantitative analysis of molecular biology experiments, which provide comprehensive data on chromosome folding, and of live-cell imaging experiments that enable researchers to visualize selected chromosome loci in living or fixed cells. In this review our goal is to survey several non-mutually exclusive models that have emerged to describe the folding of DNA in the nucleus, the dynamics of proteins in the nucleoplasm, and the movements of chromosome loci. We focus on three classes of models, namely molecular crowding, fractal, and polymer models, draw comparisons, and discuss their merits and limitations in the context of chromosome structure and dynamics and of nuclear protein navigation in the nucleoplasm. Finally, we identify future challenges on the roadmap to a unified model of the nuclear environment.

  2. Image-Based Modeling Techniques for Architectural Heritage 3d Digitalization: Limits and Potentialities

    NASA Astrophysics Data System (ADS)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS) and of different techniques for image matching, feature extraction and mesh optimization is an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing to carry out semi-automatic data processing, allowing users to fulfill other tasks on their computers, whereas desktop systems require long processing times and more demanding hardware. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but few approaches verify metric accuracy, and none applies Autodesk 123D Catch to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models produced by Autodesk 123D Catch with 3D models produced by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.

  3. How Plates Pull Transforms Apart: 3-D Numerical Models of Oceanic Transform Fault Response to Changes in Plate Motion Direction

    NASA Astrophysics Data System (ADS)

    Morrow, T. A.; Mittelstaedt, E. L.; Olive, J. A. L.

    2015-12-01

    Observations along oceanic fracture zones suggest that some mid-ocean ridge transform faults (TFs) previously split into multiple strike-slip segments separated by short (<~50 km) intra-transform spreading centers and then reunited to a single TF trace. This history of segmentation appears to correspond with changes in plate motion direction. Despite the clear evidence of TF segmentation, the processes governing its development and evolution are not well characterized. Here we use a 3-D, finite-difference / marker-in-cell technique to model the evolution of localized strain at a TF subjected to a sudden change in plate motion direction. We simulate the oceanic lithosphere and underlying asthenosphere at a ridge-transform-ridge setting using a visco-elastic-plastic rheology with a history-dependent plastic weakening law and a temperature- and stress-dependent mantle viscosity. To simulate the development of topography, a low density, low viscosity 'sticky air' layer is present above the oceanic lithosphere. The initial thermal gradient follows a half-space cooling solution with an offset across the TF. We impose an enhanced thermal diffusivity in the uppermost 6 km of lithosphere to simulate the effects of hydrothermal circulation. An initial weak seed in the lithosphere helps localize shear deformation between the two offset ridge axes to form a TF. For each model case, the simulation is run initially with TF-parallel plate motion until the thermal structure reaches a steady state. The direction of plate motion is then rotated either instantaneously or over a specified time period, placing the TF in a state of trans-tension. Model runs continue until the system reaches a new steady state. Parameters varied here include: initial TF length, spreading rate, and the rotation rate and magnitude of spreading obliquity. We compare our model predictions to structural observations at existing TFs and records of TF segmentation preserved in oceanic fracture zones.
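The initial thermal state described above, a half-space cooling profile with an age offset across the transform fault, can be sketched as follows (parameter values are illustrative defaults, not those of the study):

```python
import math

def halfspace_T(depth_km, age_myr, T_mantle=1350.0, kappa=1e-6):
    """Half-space cooling temperature in deg C:
    T(z, t) = T_m * erf(z / (2*sqrt(kappa*t))),
    with thermal diffusivity kappa in m^2/s. Illustrative values only."""
    t_s = age_myr * 3.15576e13      # Myr -> seconds
    z_m = depth_km * 1e3            # km -> m
    return T_mantle * math.erf(z_m / (2.0 * math.sqrt(kappa * t_s)))

# The age offset across the transform makes the older side colder
# at the same depth, which is what localizes strain at the fault.
T_young = halfspace_T(depth_km=10.0, age_myr=2.0)
T_old = halfspace_T(depth_km=10.0, age_myr=10.0)
```

In a marker-in-cell model such as the one described, a profile like this would be evaluated on each side of the fault to set the initial temperature field before spin-up to steady state.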

  4. Modeling of a method of parallel hierarchical transformation for fast recognition of dynamic images

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid I.; Kokryatskaya, Nataliya I.; Shpakovych, Viktoriya V.

    2013-12-01

    Principles necessary to develop a method and computational facilities for the parallel hierarchical transformation based on high-performance GPUs are discussed in the paper. Mathematic models of the parallel hierarchical (PH) network training for the transformation and a PH network training method for recognition of dynamic images are developed.

  5. Influence of diffusive porosity architecture on kinetically-controlled reactions in mobile-immobile models

    NASA Astrophysics Data System (ADS)

    Babey, T.; Ginn, T. R.; De Dreuzy, J. R.

    2014-12-01

Solute transport in porous media may be structured at various scales by geological features, from connectivity patterns of pores to fracture networks. This structure impacts solute repartition and consequently reactivity. Here we study numerically the influence of the organization of porous volumes within diffusive porosity zones on different reactions. We couple a mobile-immobile transport model, where an advective zone exchanges with diffusive zones of variable structure, to the geochemical modeling software PHREEQC. We focus on two kinetically controlled reactions, a linear sorption and a nonlinear dissolution of a mineral. We show that in both cases the structure of the immobile zones has an important impact on the overall reaction rates. Through the Multi-Rate Mass Transfer (MRMT) framework, we show that this impact is very well captured by residence-time-based models for the kinetic linear sorption, as it is mathematically equivalent to a modification of the initial diffusive structure; consequently, the overall reaction rate could be easily extrapolated from a conservative tracer experiment. The MRMT models, however, struggle to reproduce the nonlinearity and the threshold effects associated with the kinetic dissolution. A slower reaction, by allowing more time for diffusion to smooth out the concentration gradients, tends to increase their relevance. Figure: Left: Representation of a mobile-immobile model with a complex immobile architecture. The mobile zone is indicated by an arrow. Right: Total remaining mass of mineral in mobile-immobile models and in their equivalent MRMT models during a flush by a highly under-saturated solution. The models differ only in the organization of their immobile porous volumes.
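The mobile-immobile exchange underlying the MRMT framework can be illustrated with a minimal first-order model: one mobile concentration exchanging with several immobile zones at different rates. The sketch below uses hypothetical names and values (the paper couples its transport model to PHREEQC rather than using this toy scheme); it shows exact mass conservation and equilibration toward the capacity-weighted mean:

```python
def mrmt_flush(alphas, betas, c_m0=1.0, dt=1e-3, steps=20000):
    """Multi-Rate Mass Transfer sketch: a mobile concentration c_m exchanges
    with immobile zones i at first-order rates alpha_i, with capacity
    (porosity-volume) ratios beta_i. Explicit Euler; illustrative only."""
    c_m = c_m0
    c_im = [0.0 for _ in alphas]
    for _ in range(steps):
        dc_m = -sum(a * b * (c_m - ci) for a, b, ci in zip(alphas, betas, c_im))
        c_im = [ci + dt * a * (c_m - ci) for a, ci in zip(alphas, c_im)]
        c_m += dt * dc_m
    return c_m, c_im

# Two immobile zones, fast and slow: all concentrations relax toward the
# mass-weighted equilibrium c_m0 / (1 + sum(beta)) = 0.5 here.
c_m, c_im = mrmt_flush(alphas=[5.0, 0.5], betas=[0.5, 0.5])
```

The update order conserves total mass c_m + sum(beta_i * c_im_i) exactly at each step; for a kinetic linear sorption, adding a first-order reaction term to each zone leaves this multi-rate structure intact, which is why residence-time-based MRMT models capture that case so well.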

  6. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models.

    PubMed

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L; Huffman, Jennifer E; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F; Wilson, James F; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S

    2015-07-15

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge.
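The ensemble idea in the abstract, combining a whole-genome predictor with a GWAMA risk score in a meta-model, can be sketched with a toy least-squares stacker on synthetic data (all names and data below are illustrative; the paper's actual meta-model is not specified in the abstract):

```python
import random

def fit_meta(p1, p2, y):
    """Least-squares weights (w1, w2) for the ensemble w1*p1 + w2*p2,
    solved from the 2x2 normal equations. A toy stand-in for combining
    a whole-genome predictor with a published risk score."""
    a = sum(x * x for x in p1)
    b = sum(x * z for x, z in zip(p1, p2))
    d = sum(z * z for z in p2)
    e = sum(x * t for x, t in zip(p1, y))
    f = sum(z * t for z, t in zip(p2, y))
    det = a * d - b * b
    return (d * e - b * f) / det, (a * f - b * e) / det

def sse(pred, y):
    return sum((p - t) ** 2 for p, t in zip(pred, y))

rng = random.Random(1)
y = [rng.gauss(0, 1) for _ in range(500)]               # synthetic phenotype
p1 = [t + rng.gauss(0, 0.3) for t in y]                 # e.g. cohort-trained predictor
p2 = [0.6 * t + rng.gauss(0, 0.6) for t in y]           # e.g. GWAMA risk score
w1, w2 = fit_meta(p1, p2, y)
p_meta = [w1 * x + w2 * z for x, z in zip(p1, p2)]
# In-sample, the stacked ensemble is at least as accurate as either input.
```

Because each component predictor lies in the span of the ensemble, the fitted meta-model's in-sample squared error can never exceed that of either predictor alone; the paper's stronger claim, that the combination also helps out of sample, is an empirical result.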

  7. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    PubMed Central

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167

  8. A micromechanics constitutive model for pure dilatant martensitic transformation of ZrO2-containing ceramics

    NASA Astrophysics Data System (ADS)

    Qingping, Sun; Shouwen, Yu; Kehchih, Hwang

    1990-05-01

A new micromechanics constitutive model for the pure dilatant transformation plasticity of structural ceramics is proposed in this paper. Based on thermodynamics, micromechanics and an analysis of the microscale t→m transformation mechanism in TZP and PSZ ZrO2-containing ceramics, analytic expressions for the Helmholtz and complementary free energies of the constitutive element in the case of pure dilatant transformation are derived for the first time in a self-consistent manner. Through an analysis of the energy dissipation in the forward and reverse transformations, the micromechanics constitutive law is derived in the framework of Hill-Rice internal-variable constitutive theory.

  9. Rheology and friction along the Vema transform fault (Central Atlantic) inferred by thermal modeling

    NASA Astrophysics Data System (ADS)

    Cuffaro, Marco; Ligi, Marco

    2016-04-01

We investigate with 3-D finite element simulations the temperature distribution beneath the Vema transform, which offsets the Mid-Atlantic Ridge by ~300 km in the Central Atlantic. The thermal model includes the effects of mantle flow beneath a ridge-transform-ridge geometry, of lateral heat conduction across the transform fault, and of the shear heating generated along the fault. Numerical solutions are presented for a 3-D domain, discretized with a non-uniform tetrahedral mesh, where relative plate kinematics is used as a boundary condition, providing passive mantle upwelling. The mantle is modelled as a temperature-dependent viscous fluid, and its dynamics are described by the Stokes and advection-conduction heat equations. The results show that shear heating raises the temperature along the transform fault significantly. To test the model results, we calculated the thermal structure by simulating mantle dynamics beneath an accretionary plate boundary geometry that duplicates the Vema transform fault, assuming the present-day spreading rate and direction of the Mid-Atlantic Ridge at 11 °N. The modelled heat flow at the surface was then compared with 23 heat flow measurements carried out along the Vema transform valley. Laboratory studies on the frictional stability of olivine aggregates show that the depth extent of oceanic faulting is thermally controlled and limited by the 600 °C isotherm. The depths of the isotherms of the thermal model were compared to the depths of earthquakes along transform faults. Slip on oceanic transform faults is primarily aseismic; only 15% of the tectonic offset is accommodated by earthquakes. Despite extensive fault areas, few large earthquakes occur on the fault, and few aftershocks follow large events. The rheology constrained by the thermal model, combined with the geology and seismicity of the Vema transform fault, allows a better understanding of friction and of the spatial distribution of strength along the fault and provides
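The link made above between the 600 °C isotherm and the depth extent of faulting can be made concrete: under half-space cooling, the depth of a given isotherm scales with the square root of plate age. A hedged sketch with illustrative parameter values (not the study's finite element model):

```python
import math

def isotherm_depth_km(T_iso, age_myr, T_mantle=1350.0, kappa=1e-6):
    """Depth of the T_iso isotherm under half-space cooling, found by
    bisection on erf (the math module has erf but no inverse).
    Illustrative parameter values, not the Vema model's."""
    target = T_iso / T_mantle
    L = 2.0 * math.sqrt(kappa * age_myr * 3.15576e13)  # thermal length scale, m
    lo, hi = 0.0, 10.0                                 # bracket for the erf argument
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * L / 1e3

# The 600 deg C isotherm (brittle limit for olivine) deepens as sqrt(age):
d5 = isotherm_depth_km(600.0, 5.0)     # ~10.5 km for 5 Myr old lithosphere
d20 = isotherm_depth_km(600.0, 20.0)   # exactly twice as deep at 20 Myr
```

Comparing such isotherm depths with hypocentral depths along the fault is the kind of consistency check the abstract describes, though the 3-D model's isotherms also reflect mantle flow and shear heating, which this 1-D sketch omits.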

  10. Bäcklund transformations for the elliptic Gaudin model and a Clebsch system

    NASA Astrophysics Data System (ADS)

    Zullo, Federico

    2011-07-01

    A two-parameter family of Bäcklund transformations for the classical elliptic Gaudin model is constructed. The maps are explicit, symplectic, preserve the same integrals as the continuous flows, and are a time discretization of each of these flows. The transformations can map real variables into real variables, sending physical solutions of the equations of motion into physical solutions. The starting point of the analysis is the integrability structure of the model. It is shown how the analogous transformations for the rational and trigonometric Gaudin models are a limiting case of this one. An application to a particular case of the Clebsch system is given.

  11. A variational material model for transformation-induced plasticity in polycrystalline steels

    NASA Astrophysics Data System (ADS)

    Waimann, Johanna; Junker, Philipp; Hackl, Klaus

    2015-12-01

    This work presents a variational material model for transformation-induced plasticity in steels. We use the principle of the minimum of the dissipation potential to develop a coupled material model for plastic deformations and phase transformations that simultaneously accounts for the hardening effects, which play an important role. We adopt a polycrystalline approach and introduce a combined Voigt/Reuß bound and a coupled ansatz for the dissipation functional to model the simultaneous effects of plastic deformations and phase transformations. Finally, we present first numerical results for a tension/compression test.
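
The combined Voigt/Reuß bound mentioned above interpolates between the two classical estimates of a polycrystal's effective stiffness: Voigt assumes uniform strain (arithmetic mean of the phase moduli) and Reuß assumes uniform stress (harmonic mean). A minimal sketch with hypothetical two-phase data; the weighting parameter `xi` is an illustrative stand-in for the paper's combined bound, not its actual formulation:

```python
def voigt_reuss(fractions, moduli, xi=0.5):
    """Voigt (uniform strain) and Reuss (uniform stress) bounds on an
    effective elastic modulus of a mixture, plus a Hill-style convex
    combination weighted by xi."""
    voigt = sum(f * m for f, m in zip(fractions, moduli))        # arithmetic mean
    reuss = 1.0 / sum(f / m for f, m in zip(fractions, moduli))  # harmonic mean
    return voigt, reuss, xi * voigt + (1.0 - xi) * reuss

# Two phases (e.g. austenite/martensite volume fractions); moduli in GPa are hypothetical
v, r, hill = voigt_reuss([0.7, 0.3], [200.0, 210.0])
```

The true effective modulus always lies between the Reuss and Voigt values, which is what makes the combined bound a convenient homogenization ansatz.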

  12. Internet of Things: a possible change in the distributed modeling and simulation architecture paradigm

    NASA Astrophysics Data System (ADS)

    Riecken, Mark; Lessmann, Kurt; Schillero, David

    2016-05-01

    The Data Distribution Service (DDS) was started by the Object Management Group (OMG) in 2004. Currently, DDS is one of the contenders to support the Internet of Things (IoT) and the Industrial IoT (IIoT). DDS has also been used as a distributed simulation architecture. Given the anticipated proliferation of IoT and IIoT devices, along with the explosive growth of sensor technology, can we expect this to have an impact on the broader community of distributed simulation? If it does, what is the impact, and which distributed simulation domains will be most affected? DDS shares many of the goals and characteristics of distributed simulation, such as the need to support scale and an emphasis on Quality of Service (QoS) that can be tailored to meet the end user's needs. In addition, DDS has some built-in features, such as security, that are not present in traditional distributed simulation protocols. If the IoT and IIoT realize their potential application, we predict a large base of technology to be built around this distributed data paradigm, much of which could be directly beneficial to the distributed M&S community. In this paper we compare some of the perceived gaps and shortfalls of current distributed M&S technology to the emerging capabilities of DDS built around the IoT. Although some trial work has been conducted in this area, we propose a more focused examination of the potential of these new technologies and their applicability to current and future problems in distributed M&S. The IoT and its data communications mechanisms such as DDS share properties in common with distributed modeling and simulation (M&S) and its protocols such as the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA). This paper proposes a framework based on the sensor use case for how the two communities of practice (CoP) can benefit from one another and achieve greater capability in practical distributed
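
The core pattern that DDS and distributed simulation protocols share is topic-based publish/subscribe with configurable QoS. The toy bus below is a sketch of that pattern only, not the DDS API; a single "history depth" setting stands in for DDS's much richer QoS policy set (an illustrative assumption).

```python
from collections import defaultdict

class Bus:
    """Toy topic-based publish/subscribe bus, loosely in the spirit of DDS.
    Real DDS adds discovery, transport, and many QoS policies; here only
    durability-like sample retention is modelled."""
    def __init__(self, history_depth=1):
        self.subs = defaultdict(list)      # topic -> callbacks
        self.history = defaultdict(list)   # topic -> retained samples
        self.depth = history_depth

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)
        for sample in self.history[topic]:  # late joiners receive retained samples
            callback(sample)

    def publish(self, topic, sample):
        self.history[topic] = (self.history[topic] + [sample])[-self.depth:]
        for cb in self.subs[topic]:
            cb(sample)

bus = Bus(history_depth=2)
received = []
bus.publish("sensor/temp", 21.5)           # published before any subscriber exists
bus.subscribe("sensor/temp", received.append)
bus.publish("sensor/temp", 22.0)
# received == [21.5, 22.0]: the late joiner still saw the retained sample
```

Late-joiner delivery is exactly the kind of QoS-mediated behavior (durability) that distinguishes data-centric pub/sub from plain message passing, and it maps naturally onto sensor use cases shared by the IoT and M&S communities.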

  13. A new model for the initiation, crustal architecture, and extinction of pull-apart basins

    NASA Astrophysics Data System (ADS)

    van Wijk, J.; Axen, G. J.; Abera, R.

    2015-12-01

    We present a new model for the origin, crustal architecture, and evolution of pull-apart basins. The model is based on results of three-dimensional upper crustal numerical models of deformation, field observations, and fault theory, and answers many of the outstanding questions related to these rifts. In our model, geometric differences between pull-apart basins are inherited from the initial geometry of the step in the strike-slip fault system. As strike-slip motion accumulates, pull-apart basins remain stationary with respect to the underlying basement, and the fault tips may propagate beyond the rift basin. Our model predicts that the sediment source areas may thus migrate over time. This implies that, although pull-apart basins lengthen over time, lengthening is accommodated by extension within the pull-apart basin rather than by formation of new faults outside the rift zone. In this respect pull-apart basins behave as narrow rifts: with increasing strike-slip the basins deepen, but there is no significant outward younging. We explain why pull-apart basins do not go through the previously proposed geometric evolutionary stages, a progression that has not been documented in nature. Field studies indicate that pull-apart basins become extinct when an active basin-crossing fault forms; this is the most likely fate of pull-apart basins, because strike-slip systems tend to straighten. The model predicts which step dimensions favor the formation of such a fault system, and which allow a pull-apart basin to develop further into a short seafloor-spreading ridge. The model also shows that rift shoulder uplift is enhanced if the strike-slip rate is larger than the fault-propagation rate. Crustal compression then contributes to uplift of the rift flanks.

  14. 4D/RCS: a reference model architecture for intelligent unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Albus, James S.

    2002-07-01

    4D/RCS consists of a multi-layered multi-resolutional hierarchy of computational nodes each containing elements of sensory processing (SP), world modeling (WM), value judgment (VJ), and behavior generation (BG). At the lower levels, these elements generate goal-seeking reactive behavior. At higher levels, they enable goal-defining deliberative behavior. At low levels, range in space and time is short and resolution is high. At high levels, distance and time are long and resolution is low. This enables high-precision fast-action response over short intervals of time and space at low levels, while long-range plans and abstract concepts are being formulated over broad regions of time and space at high levels. 4D/RCS closes feedback loops at every level. SP processes focus attention (i.e., window regions of space or time), group (i.e., segment regions into entities), compute entity attributes, estimate entity state, and assign entities to classes at every level. WM processes maintain a rich and dynamic database of knowledge about the world in the form of images, maps, entities, events, and relationships at every level. Other WM processes use that knowledge to generate estimates and predictions that support perception, reasoning, and planning at every level. 4D/RCS was developed for the Army Research Laboratory Demo III program. To date, only the lower levels of the 4D/RCS architecture have been fully implemented, but the results have been extremely positive. It seems clear that the theoretical basis of 4D/RCS is sound and the architecture is capable of being extended to support much higher levels of performance.
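
The layered structure described above can be sketched as a chain of computational nodes in which each level plans over a longer horizon than the one below it and delegates subgoals downward. The class and field names here are illustrative, not the 4D/RCS reference implementation; value judgment (VJ) is omitted for brevity.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    """One 4D/RCS-style computational node (sketch): sensory processing (SP)
    updates the world model (WM); behavior generation (BG) decomposes a goal
    into a subgoal for the child node at the next level down."""
    name: str
    horizon_s: float                       # planning horizon; shrinks at lower levels
    child: Optional["Node"] = None
    world_model: dict = field(default_factory=dict)

    def sense(self, observations: dict) -> None:   # SP -> WM
        self.world_model.update(observations)

    def act(self, goal: str) -> list:              # BG: plan, then delegate
        plan = f"{self.name}: pursue '{goal}' over {self.horizon_s}s"
        if self.child is not None:
            return [plan] + self.child.act(f"subgoal of {goal}")
        return [plan]

# Three levels: long-range vehicle plans down to short, high-rate servo actions
vehicle = Node("vehicle", 600.0, Node("subsystem", 60.0, Node("servo", 0.5)))
steps = vehicle.act("reach waypoint")
```

Calling `act` on the top node yields one plan entry per level, mirroring how a single mission goal is refined into progressively shorter-horizon, higher-resolution behaviors.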

  15. Phase field modeling of tetragonal to monoclinic phase transformation in zirconia

    NASA Astrophysics Data System (ADS)

    Mamivand, Mahmood

    Zirconia-based ceramics are strong, hard, inert, and smooth, with low thermal conductivity and good biocompatibility. These properties make zirconia ceramics an ideal material for applications ranging from thermal barrier coatings (TBCs) to biomedical devices such as femoral implants and dental bridges. However, this versatility is compromised by the transformation of the metastable tetragonal (or cubic) phase to the stable monoclinic phase after a certain exposure at service temperatures. This tetragonal-to-monoclinic transformation, known as low temperature degradation (LTD) in biomedical applications, proceeds by propagation of martensite, which corresponds to transformation twinning. As such, the transformation is highly sensitive to mechanical and chemomechanical stresses. It is in fact known that this transformation is the source of fracture toughening in stabilized zirconia, as it occurs at the stress concentration regions ahead of the crack tip. This dissertation provides a kinetics-based model for the tetragonal-to-monoclinic transformation in zirconia. We used the phase field technique to capture the temporal and spatial evolution of the monoclinic phase. In addition to morphological patterns, we were able to calculate the internal stresses developed during the transformation. The model started from the two-dimensional single crystal, was then expanded to the two-dimensional polycrystal, and finally to the three-dimensional single crystal. The model is able to predict most of the physical properties associated with the tetragonal-to-monoclinic transformation in zirconia, including morphological patterns, transformation toughening, the shape memory effect, pseudoelasticity, surface uplift, and variant impingement. The model was benchmarked against several experimental works. The good agreement between simulation results and experimental data makes the model a reliable tool for
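
The phase field technique tracks a continuous order parameter η (0 in the parent phase, 1 in the product phase) whose evolution follows a time-dependent Ginzburg-Landau equation. The 1-D sketch below integrates the generic non-conserved (Allen-Cahn) form with a double-well bulk energy; it illustrates the method only, without the elastic coupling and crystallography of the zirconia model above.

```python
def allen_cahn_step(eta, dx, dt, L=1.0, kappa=1.0):
    """One explicit Euler step of the Allen-Cahn equation
        d(eta)/dt = -L * (df/deta - kappa * laplacian(eta)),
    with double-well bulk energy f = eta^2 * (1 - eta)^2 and
    periodic boundary conditions."""
    n = len(eta)
    new = eta[:]
    for i in range(n):
        lap = (eta[(i - 1) % n] - 2 * eta[i] + eta[(i + 1) % n]) / dx**2
        dfde = 2 * eta[i] * (1 - eta[i]) * (1 - 2 * eta[i])  # f'(eta)
        new[i] = eta[i] + dt * (-L * (dfde - kappa * lap))
    return new

# Sharp initial interface between parent (eta=0) and product (eta=1) phase
eta = [0.0] * 32 + [1.0] * 32
for _ in range(200):  # dt chosen well inside the explicit stability limit
    eta = allen_cahn_step(eta, dx=1.0, dt=0.1)
```

After a few hundred steps the sharp front relaxes into the smooth (tanh-like) diffuse interface characteristic of phase field models, while the bulk regions stay pinned in the two energy wells.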

  16. Designing a Component-Based Architecture for the Modeling and Simulation of Nuclear Fuels and Reactors

    SciTech Connect

    Billings, Jay Jay; Elwasif, Wael R; Hively, Lee M; Bernholdt, David E; Hetrick III, John M; Bohn, Tim T

    2009-01-01

    Concerns over the environment and energy security have recently prompted renewed interest in the U.S. in nuclear energy. Recognizing this, the U.S. Dept. of Energy has launched an initiative to revamp and modernize the role that modeling and simulation plays in the development and operation of nuclear facilities. This Nuclear Energy Advanced Modeling and Simulation (NEAMS) program represents a major investment in the development of new software, with one or more large multi-scale multi-physics capabilities in each of four technical areas associated with the nuclear fuel cycle, as well as additional supporting developments. In conjunction with this, we are designing a software architecture, computational environment, and component framework to integrate the NEAMS technical capabilities and make them more accessible to users. In this report of work very much in progress, we lay out the 'problem' we are addressing, describe the model-driven system design approach we are using, and compare them with several large-scale technical software initiatives from the past. We discuss how component technology may be uniquely positioned to address the software integration challenges of the NEAMS program, outline the capabilities planned for the NEAMS computational environment and framework, and describe some initial prototyping activities.

  17. Models to evaluate magnicon architectures and designs suitable for high-perveance beams

    SciTech Connect

    Rees, Daniel E.

    1994-03-01

    The magnicon, a new high-power, radio frequency (rf) deflection-modulated amplifier, was recently developed at the Institute for Nuclear Physics in Novosibirsk, Russia. The first magnicon achieved a peak output power of 2.6 MW for 50-μs pulses at a frequency of 915 MHz with a dc-to-rf conversion efficiency of 73%. The conversion efficiency achieved by the original magnicon represents a significant improvement over state-of-the-art conventional velocity- and density-modulated devices. Therefore, if properly exploited, the magnicon could substantially reduce the operating expenses of industrial, scientific, and military facilities that require large amounts of rf power. This dissertation describes the operational principles of the magnicon, provides small-signal analytical theory (where practical), presents a large-signal numerical model to characterize magnicon performance, and then utilizes this model to investigate the characteristics of the component magnicon structures. Using these modeling tools, the first-generation magnicon architecture is analyzed for its performance sensitivity to electron-beam size and is found to support beams of only limited diameter. Finally, an alternate magnicon geometry, called a "uniform-field" magnicon, is presented and shown to support beams of larger diameter.

  18. The genetic architecture of heterochrony as a quantitative trait: lessons from a computational model.

    PubMed

    Sun, Lidan; Sang, Mengmeng; Zheng, Chenfei; Wang, Dongyang; Shi, Hexin; Liu, Kaiyue; Guo, Yanfang; Cheng, Tangren; Zhang, Qixiang; Wu, Rongling

    2017-05-30

    Heterochrony is known as a developmental change in the timing or rate of ontogenetic events across phylogenetic lineages. It is a key concept synthesizing development into ecology and evolution to explore the mechanisms of how developmental processes impact phenotypic novelties. A number of molecular experiments using organisms that contrast in developmental timing have identified specific genes involved in heterochronic variation. Beyond these classic approaches, which can only identify single genes or pathways, quantitative models derived from current next-generation sequencing data serve as a more powerful tool to precisely capture heterochronic variation and systematically map a complete set of genes that contribute to heterochronic processes. In this opinion note, we discuss a computational framework of genetic mapping that can characterize heterochronic quantitative trait loci that determine the pattern and process of development. We propose a unifying model that charts the genetic architecture of heterochrony that perceives and responds to environmental perturbations and evolves over geologic time. The new model may potentially enhance our understanding of the adaptive value of heterochrony and its evolutionary origins, providing a useful context for designing new organisms that can best use future resources. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
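
Mapping frameworks of this kind typically fit a parametric growth curve to longitudinal phenotype data and treat its timing parameters as the heterochronic traits a QTL may shift. A minimal sketch using a logistic curve, where the inflection time (the age of maximal growth rate) serves as the timing trait; all parameter values and the two "genotypes" are hypothetical.

```python
import math

def logistic(t, A, r, t_i):
    """Logistic growth curve: A / (1 + exp(-r * (t - t_i))).
    A = asymptotic size, r = growth rate, t_i = inflection time.
    In functional-mapping style models, a shift in t_i between
    genotypes is a heterochronic (timing) effect."""
    return A / (1.0 + math.exp(-r * (t - t_i)))

# Two hypothetical genotypes differing only in developmental timing
early = [logistic(t, A=10.0, r=0.8, t_i=5.0) for t in range(15)]
late  = [logistic(t, A=10.0, r=0.8, t_i=8.0) for t in range(15)]
# Both reach the same asymptote; the 'late' genotype is shifted along the time axis
```

Because the two curves differ only in `t_i`, any phenotypic contrast between them at a fixed age reflects timing alone, which is exactly the kind of signal a heterochronic QTL scan is designed to detect.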

  19. GS3: A Knowledge Management Architecture for Collaborative Geologic Sequestration Modeling

    SciTech Connect

    Gorton, Ian; Black, Gary D.; Schuchardt, Karen L.; Sivaramakrishnan, Chandrika; Wurstner, Signe K.; Hui, Peter SY

    2010-01-10

    Modern scientific enterprises are inherently knowledge-intensive. In general, scientific studies in domains such as groundwater, climate, and other environmental modeling, as well as fundamental research in chemistry, physics, and biology, require the acquisition and manipulation of large amounts of experimental and field data in order to create inputs for large-scale computational simulations. The results of these simulations must then be analyzed, leading to refinements of inputs and models and further simulations. In this paper we describe our efforts in creating a knowledge management platform to support collaborative, wide-scale studies in the area of geologic sequestration. The platform, known as GS3 (Geologic Sequestration Software Suite), exploits and integrates off-the-shelf software components including semantic wikis, content management systems, and open source middleware to create the core architecture. We then extend the wiki environment to support the capture of provenance, the ability to incorporate various analysis tools, and the ability to launch simulations on supercomputers. The paper describes the key components of GS3 and demonstrates its use through illustrative examples. We conclude by assessing the suitability of our approach for geologic sequestration modeling and generalization to other scientific problem domains.

  20. The fractal globule as a model of chromatin architecture in the cell.

    PubMed

    Mirny, Leonid A

    2011-01-01

    The fractal globule is a compact polymer state that emerges during polymer condensation as a result of topological constraints which prevent one region of the chain from passing across another one. This long-lived intermediate state was introduced in 1988 (Grosberg et al. 1988) and was not observed in experiments or simulations until recently (Lieberman-Aiden et al. 2009). Recent characterization of human chromatin using a novel chromosome conformation capture technique brought the fractal globule into the spotlight as a structural model of the human chromosome on scales of up to 10 Mb (Lieberman-Aiden et al. 2009). Here, we present the concept of the fractal globule, comparing it to other states of a polymer and focusing on its properties relevant for the biophysics of chromatin. We then discuss properties of the fractal globule that make it an attractive model for chromatin organization inside a cell. Next, we connect the fractal globule to recent studies that emphasize topological constraints as a primary factor driving formation of chromosomal territories. We discuss how theoretical predictions, made on the basis of the fractal globule model, can be tested experimentally. Finally, we discuss whether fractal globule architecture can be relevant for chromatin packing in other organisms such as yeast and bacteria.
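
A standard way the fractal globule is distinguished from an equilibrium globule in Hi-C style data is the exponent of the contact probability P(s) versus genomic distance s: roughly s^-1 for the fractal globule versus s^-3/2 for the equilibrium globule. The sketch below recovers these exponents as log-log slopes from idealized (noise-free, synthetic) curves.

```python
import math

def loglog_slope(s_values, p_values):
    """Least-squares slope in log-log space; reads off the exponent
    alpha from contact-probability data P(s) ~ s^alpha."""
    xs = [math.log(s) for s in s_values]
    ys = [math.log(p) for p in p_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

s = [10 ** k for k in range(1, 6)]           # genomic separations (arbitrary units)
fractal = [1.0 / si for si in s]             # P(s) ~ s^-1   (fractal globule)
equilibrium = [si ** -1.5 for si in s]       # P(s) ~ s^-3/2 (equilibrium globule)
# loglog_slope recovers approximately -1.0 and -1.5 respectively
```

The ~ s^-1 scaling observed for human chromatin over the 0.5 to 7 Mb range was a key piece of evidence favoring the fractal globule over the equilibrium state.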