Science.gov

Sample records for architectural model transformations

  1. A Concept Transformation Learning Model for Architectural Design Learning Process

    ERIC Educational Resources Information Center

    Wu, Yun-Wu; Weng, Kuo-Hua; Young, Li-Ming

    2016-01-01

    Generally, in the foundation course of architectural design, much emphasis is placed on teaching basic design skills, without teaching students to apply the basic design concepts in their architectural designs or promoting students' own creativity. Therefore, this study aims to propose a concept transformation learning model to…

  2. A Model Transformation Approach to Derive Architectural Models from Goal-Oriented Requirements Models

    NASA Astrophysics Data System (ADS)

    Lucena, Marcia; Castro, Jaelson; Silva, Carla; Alencar, Fernanda; Santos, Emanuel; Pimentel, João

    Requirements engineering and architectural design are key activities for successful development of software systems. Both activities are strongly intertwined and interrelated, but many steps toward generating architecture models from requirements models are driven by intuition and architectural knowledge. Thus, systematic approaches that integrate requirements engineering and architectural design activities are needed. This paper presents an approach based on model transformations to generate architectural models from requirements models. The source and target languages are respectively the i* modeling language and the Acme architectural description language (ADL). A real web-based recommendation system is used as a case study to illustrate our approach.
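    The shape of such a requirements-to-architecture transformation can be pictured with a deliberately tiny sketch. The i* elements, the generated Acme text, and the mapping rules below are all illustrative assumptions of ours, not the authors' actual rules.

```python
# A toy i* model: actors and the dependencies between them (illustrative only).
istar_model = {
    "actors": ["User", "Recommender"],
    "dependencies": [
        {"depender": "User", "dependee": "Recommender", "dependum": "Suggestions"},
    ],
}

def to_acme(istar):
    """Map each i* actor to an Acme component and each dependency to a connector."""
    lines = ["System RecommenderSystem = {"]
    for actor in istar["actors"]:
        lines.append(f"  Component {actor} = {{ Port p; }}")
    for dep in istar["dependencies"]:
        lines.append(f"  Connector {dep['dependum']} = {{ Role depender; Role dependee; }}")
        lines.append(f"  Attachment {dep['depender']}.p to {dep['dependum']}.depender;")
        lines.append(f"  Attachment {dep['dependee']}.p to {dep['dependum']}.dependee;")
    lines.append("}")
    return "\n".join(lines)

print(to_acme(istar_model))
```

    A real transformation would match richer i* constructs (goals, tasks, softgoals) and emit typed Acme ports and roles, but the rule-per-element-kind shape of the mapping stays the same.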

  3. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance was benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware, and the parameters capturing the functional transformation were downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 μs.

  4. Modeling dynamic reciprocity: Engineering three-dimensional culture models of breast architecture, function, and neoplastic transformation

    PubMed Central

    Nelson, Celeste M.; Bissell, Mina J.

    2010-01-01

    In order to understand why cancer develops as well as predict the outcome of pharmacological treatments, we need to model the structure and function of organs in culture so that our experimental manipulations occur under physiological contexts. This review traces the history of the development of a prototypic example, the three-dimensional (3D) model of the mammary gland acinus. We briefly describe the considerable information available on both normal mammary gland function and breast cancer generated by the current model and present future challenges that will require an increase in its complexity. We propose the need for engineered tissues that faithfully recapitulate their native structures to allow a greater understanding of tissue function, dysfunction, and potential therapeutic intervention. PMID:15963732

  5. Spatial transformation architectures with applications: an introduction

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    1993-08-01

    Spatial transformations (STs) constitute an important class of image operations, which include the well-known affine transformation, image rotation, scaling, warping, etc. Less well known are the anisomorphic transformations among cartographic projections such as the Mercator, gnomonic, and equal-area formats. In this preliminary study, we introduce a unifying theory of spatial transformation, expressed in terms of the Image Algebra, a rigorous, inherently parallel notation for image and signal processing. Via such theory, we can predict the implementational cost of various STs. Since spatial operations are frequently I/O-intensive, we first analyze the I/O performance of well-known architectures, in order to determine their suitability for ST implementation. Analyses are verified by simulation, with emphasis upon vision-based navigation applications. An additional applications area concerns the remapping of visual receptive fields, which facilitates visual rehabilitation in the presence of retinal damage.
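    As a minimal concrete instance of the ST class described above, the sketch below applies a pointwise affine transformation (rotation, uniform scaling, translation); the function and parameter names are ours, not the paper's notation.

```python
import math

def affine(points, angle_deg, scale, tx, ty):
    """Rotate each (x, y) point by angle_deg about the origin, scale uniformly,
    then translate by (tx, ty)."""
    a = math.radians(angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    return [(scale * (ca * x - sa * y) + tx,
             scale * (sa * x + ca * y) + ty) for x, y in points]

# Example: rotate the point (1, 0) by 90 degrees about the origin.
out = affine([(1.0, 0.0)], 90, 1.0, 0.0, 0.0)
```

    Image rotation, scaling and warping apply the same idea per pixel coordinate; the anisomorphic cartographic projections mentioned above replace the linear map with a nonlinear one.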

  6. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models with minimal human interaction. These abstract models are based on the UML language; however, the semantics of UML models is defined in natural language. Consequently, verifying the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. This verification is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
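    One inter-diagram consistency rule of the kind such an approach automates can be sketched as follows. The diagram encoding and the specific rule (every sequence-diagram message must name an operation declared in the class diagram) are our illustrative assumptions, not the paper's rule set.

```python
# Hypothetical encodings of two UML views of the same system.
class_diagram = {
    "Order": {"operations": ["addItem", "total"]},
    "Cart": {"operations": ["checkout"]},
}

sequence_diagram = [
    {"receiver": "Order", "message": "addItem"},
    {"receiver": "Cart", "message": "checkout"},
    {"receiver": "Order", "message": "cancel"},   # not declared -> inconsistent
]

def check_consistency(classes, messages):
    """Return every message that names no operation of its receiver class."""
    return [m for m in messages
            if m["message"] not in classes.get(m["receiver"], {}).get("operations", [])]

violations = check_consistency(class_diagram, sequence_diagram)
```

    A full rule set would cross-check many view pairs (state machines against operations, activities against use cases, and so on), each rule flagging requirement errors before code generation begins.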

  7. Supramolecular transformations within discrete coordination-driven supramolecular architectures.

    PubMed

    Wang, Wei; Wang, Yu-Xuan; Yang, Hai-Bo

    2016-05-03

    In this review, a comprehensive summary of supramolecular transformations within discrete coordination-driven supramolecular architectures, including helices, metallacycles, metallacages, etc., is presented. Recent investigations have demonstrated that coordination-driven self-assembled architectures provide an ideal platform to study supramolecular transformations mainly due to the relatively rigid yet dynamic nature of the coordination bonds. Various stimuli have been extensively employed to trigger the transformation processes of metallosupramolecular architectures, such as solvents, concentration, anions, guests, change in component fractions or chemical compositions, light, and post-modification reactions, which allowed for the formation of new structures with specific properties and functions. Thus, it is believed that supramolecular transformations could serve as another highly efficient approach for generating diverse metallosupramolecular architectures. Classified by the aforementioned various stimuli used to induce the interconversion processes, the emphasis in this review will be on the transformation conditions, structural changes, mechanisms, and the output of specific properties and functions upon induction of structural transformations.

  8. Protocol Architecture Model Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    NASA's Glenn Research Center (GRC) defines and develops advanced technology for high-priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: unicast communications between a LEO International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: multicast communications ("multicasting"), one spacecraft to N ground receivers, and N ground transmitters to one ground receiver via a spacecraft.

  9. ESPC Common Model Architecture

    DTIC Science & Technology

    2014-09-30

    support for the Intel MIC architecture, the Apple Clang/LLVM C++ compiler is supported on both Linux and Darwin, and ESMF's dependency on the NetCDF C... compiler on both Linux and Darwin systems. • Support was added to compile the ESMF library for the Intel MIC architecture under Linux. This allows

  10. Digital Architecture Planning Model

    SciTech Connect

    Oxstrand, Johanna Helene; Al Rashdan, Ahmad Yahya Mohammad; Bly, Aaron Douglas; Rice, Brandon Charles; Fitzgerald, Kirk; Wilson, Keith Leon

    2016-03-01

    As part of the U.S. Department of Energy’s Light Water Reactor Sustainability Program, the Digital Architecture (DA) Project focuses on providing a model that nuclear utilities can refer to when planning deployment of advanced technologies. The digital architecture planning model (DAPM) is the methodology for mapping power plant operational and support activities into a DA that unifies all data sources needed by the utilities to operate their plants. The DA is defined as a collection of information technology capabilities needed to support and integrate a wide spectrum of real-time digital capabilities for performance improvements of nuclear power plants. The DA can be thought of as an integration of the separate instrumentation and control and information systems already in place in nuclear power plants, brought together for the purpose of creating new levels of automation in plant work activities. A major objective in DAPM development was to survey all key areas that needed to be reviewed in order for a utility to make knowledgeable decisions regarding needs and plans to implement a DA at the plant. The development was done in two steps. First, researchers surveyed the nuclear industry to learn their near-term plans for adopting new advanced capabilities and implementing a network (i.e., wireless and wired) infrastructure throughout the plant, including the power block. Second, a literature review covering regulatory documents, industry standards, and technical research reports and articles was conducted. The objective of the review was to identify key areas to be covered by the DAPM, which included the following: (1) the need for a DA and its benefits to the plant; (2) resources required to implement the DA; (3) challenges that need to be addressed and resolved to implement the DA; and (4) roles and responsibilities of the DA implementation plan. The DAPM was developed based on results from the survey and the literature review. Model development, including

  11. Transforming Space Missions into Service Oriented Architectures

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Frye, Stuart; Cappelaere, Pat

    2006-01-01

    This viewgraph presentation reviews the vision of sensor web enablement via a Service Oriented Architecture (SOA). A generic example is given of a user finding a service through the Web and initiating a request for the desired observation. The parts that comprise this system, and how they interact, are reviewed, as are the advantages of using an SOA.

  12. Optical chirp z-transform processor with a simplified architecture.

    PubMed

    Ngo, Nam Quoc

    2014-12-29

    Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture for a reconfigurable optical chirp z-transform (OCZT) processor based on silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings, which makes fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can potentially be used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
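    The discrete-time convolution formulation of the CZT (Bluestein's identity) that such a processor realizes optically can be sketched numerically; the parameter names below are ours, not the paper's.

```python
import numpy as np

def czt(x, M=None, w=None, a=1.0):
    """Chirp z-transform X[k] = sum_n x[n] * a**(-n) * w**(n*k), k = 0..M-1,
    computed with one discrete-time convolution via the identity
    n*k = (n**2 + k**2 - (k - n)**2) / 2."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    M = N if M is None else M
    w = np.exp(-2j * np.pi / M) if w is None else w   # default reduces to the DFT
    n, k = np.arange(N), np.arange(M)
    # Pre-multiply by a chirp, convolve with the conjugate chirp, post-multiply.
    y = x * a ** (-n) * w ** (n**2 / 2.0)
    m = np.arange(-(N - 1), M)                        # convolution argument k - n
    h = w ** (-(m**2) / 2.0)
    full = np.convolve(y, h)
    return w ** (k**2 / 2.0) * full[N - 1 : N - 1 + M]
```

    With the default parameters the CZT reduces to the DFT, which gives a ready-made correctness check against `np.fft.fft`; the optical architecture implements the same pre-chirp, convolution and post-chirp stages in waveguides.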

  13. Efficient architecture for adaptive directional lifting-based wavelet transform

    NASA Astrophysics Data System (ADS)

    Yin, Zan; Zhang, Li; Shi, Guangming

    2010-07-01

    Adaptive directional lifting-based wavelet transform (ADL) outperforms conventional lifting in both image compression and de-noising. However, no hardware architecture had been proposed for it because of its high computational complexity and large internal memory requirements. In this paper, we propose a four-stage pipelined architecture for two-dimensional (2-D) ADL with fast computation and high data throughput. The proposed architecture comprises column direction estimation, column lifting, row direction estimation and row lifting, which are performed in parallel in a pipelined mode. Since the column-processed data is transposed, the row processor can reuse the column processor, which decreases the design complexity. In the lifting step, predict and update are also performed in parallel. For an 8×8 image sub-block, the proposed architecture finishes the ADL forward transform within 78 clock cycles. The architecture is implemented on a Xilinx Virtex-5 device, on which the frequency reaches 367 MHz. The processing time is 212.5 ns, which meets real-time requirements.
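    To make the predict and update stages concrete, here is a one-dimensional, non-adaptive lifting step (the Le Gall 5/3 kernel); the ADL scheme in the paper additionally estimates a prediction direction per sample, which this sketch omits.

```python
def lift_53(x):
    """One forward 5/3 lifting step on an even-length sequence: (low, high)."""
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    # Predict: estimate each odd sample from the average of its even neighbours.
    high = [odd[i] - (even[i] + even[min(i + 1, n - 1)]) / 2 for i in range(n)]
    # Update: correct the even samples with the prediction residuals.
    low = [even[i] + (high[max(i - 1, 0)] + high[i]) / 4 for i in range(n)]
    return low, high

def unlift_53(low, high):
    """Inverse step: undo update, then undo predict, and re-interleave."""
    n = len(high)
    even = [low[i] - (high[max(i - 1, 0)] + high[i]) / 4 for i in range(n)]
    odd = [high[i] + (even[i] + even[min(i + 1, n - 1)]) / 2 for i in range(n)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

    Because predict and update touch disjoint sample sets, each can run as its own pipeline stage in hardware, exactly as the column/row stages above do; the inverse step reverses the two stages and recovers the input exactly.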

  14. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm that Bruns found in 1903. This is shown to yield an algorithm of lower complexity and improved performance compared with certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent relative to the direct method.

  15. Compressive optical image watermarking using joint Fresnel transform correlator architecture

    NASA Astrophysics Data System (ADS)

    Li, Jun; Zhong, Ting; Dai, Xiaofang; Yang, Chanxia; Li, Rong; Tang, Zhilie

    2017-02-01

    A new optical image watermarking technique based on compressive sensing using a joint Fresnel transform correlator architecture is presented. A secret scene or image is first embedded into a host image to perform optical image watermarking using the joint Fresnel transform correlator architecture. The watermarked image is then compressed into a much smaller data volume using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the watermarked image is reconstructed via compressive sensing theory and a specified holographic reconstruction algorithm. Preliminary numerical simulations show that the technique is effective and suitable for secure optical image transmission over emerging all-optical networks, owing to its fully optical implementation and greatly reduced hologram data volume.

  16. Transformation of legacy network management system to service oriented architecture

    NASA Astrophysics Data System (ADS)

    Sathyan, Jithesh; Shenoy, Krishnananda

    2007-09-01

    Service providers today are facing the challenge of operating and maintaining multiple networks based on multiple technologies. Network Management System (NMS) solutions are used to manage these networks. However, an NMS is tightly coupled with the element or core network components, so there are multiple NMS solutions for heterogeneous networks. Current network management solutions are targeted at a variety of independent networks. The widespread popularity of the IP Multimedia Subsystem (IMS) is a clear indication that all of these independent networks will be integrated into a single IP-based infrastructure, referred to as Next Generation Networks (NGN), in the near future. The services, network architectures and traffic patterns in NGN will differ dramatically from those of current networks. The heterogeneity and complexity of NGN, including concepts like Fixed Mobile Convergence, will bring a number of challenges to network management. The high degree of complexity accompanying network element technology necessitates network management systems that can utilize this technology to provide more service interfaces while hiding the inherent complexity. As operators begin to add new networks and expand existing networks to support new technologies and products, the need for scalable, flexible and functionally rich NMS systems arises. Another important factor influencing NMS architecture is mergers and acquisitions among the key vendors. Difficulty of integration is a key impediment in the traditional hierarchical NMS architecture. These requirements trigger the need for an architectural framework that addresses NGNM (Next Generation Network Management) issues seamlessly. This paper presents a unique perspective on bringing service oriented architecture (SOA) to legacy network management systems and advocates a staged approach for transforming a legacy NMS to SOA.
The architecture at each stage is detailed along with the technical advantages and

  17. HRST architecture modeling and assessments

    SciTech Connect

    Comstock, D.A.

    1997-01-01

    This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented. © 1997 American Institute of Physics.

  18. HRST architecture modeling and assessments

    NASA Astrophysics Data System (ADS)

    Comstock, Douglas A.

    1997-01-01

    This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented.

  19. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOEpatents

    Chao, Tian-Jy; Kim, Younghun

    2015-02-03

    Automatically translating a building architecture file format (Industry Foundation Classes) to a simulation file may, in one aspect, extract the data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created, and the extracted data stored in them. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may then be transformed, using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function, to convert it to the correct geometric values needed for the target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may then be generated.
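    The claimed pipeline shape (extract, hold in interoperability objects, dispatch through a transformation function selected by a Model View Definition mapping) can be sketched as follows; every name and the unit conversion are hypothetical, not taken from the patent.

```python
# An extracted building-architecture record (illustrative field names only).
ifc_wall = {"type": "IfcWall", "length_mm": 5000, "height_mm": 2700}

def wall_to_sim(obj):
    """Convert a wall record to the (hypothetical) target tool's format,
    turning millimetres into the metres the tool expects."""
    return {"element": "wall",
            "length_m": obj["length_mm"] / 1000,
            "height_m": obj["height_mm"] / 1000}

# Stand-in for the Model View Definition -> transformation-function mapping.
MVD_MAP = {"IfcWall": wall_to_sim}

def translate(record):
    """Dispatch a record through the MVD mapping to the target format."""
    return MVD_MAP[record["type"]](record)

sim = translate(ifc_wall)
```

    The point of the indirection is that supporting a new element type or a new target simulator only means registering another entry in the mapping, not rewriting the pipeline.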

  20. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOEpatents

    Chao, Tian-Jy; Kim, Younghun

    2015-01-06

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.

  1. Scalable Models Using Model Transformation

    DTIC Science & Technology

    2008-07-13

    huge number of web documents. We have created a simplified demo using 5 worker machines in the Ptolemy II modeling and simulation environment [3], as... the pattern of the transformation rule matches any subgraph of the input model. When the TransformationRule actor is opened in the Ptolemy II GUI... tool developed in the Ptolemy II framework, existing tools include AGG [14], PROGRES [15], AToM3 [16], FUJABA [17], VIATRA2 [18], and GReAT [19
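    The flavour of rule-based model transformation described here, where a rule's pattern matches part of the input model and rewrites it, can be illustrated with a toy edge-rewriting rule; the graph encoding is our assumption, not Ptolemy II's API.

```python
def apply_rule(edges, pattern, replacement):
    """Rewrite every labelled edge (src, label, dst) whose label matches
    `pattern`, leaving non-matching edges untouched."""
    out = []
    for src, lbl, dst in edges:
        if lbl == pattern:
            out.append((src, replacement, dst))
        else:
            out.append((src, lbl, dst))
    return out

# A small model as a list of labelled edges.
model = [("A", "reads", "B"), ("B", "reads", "C"), ("A", "writes", "C")]
rewritten = apply_rule(model, "reads", "consumes")
```

    Real graph-transformation tools generalize this in two directions: patterns are whole subgraphs rather than single edges, and the right-hand side may add or delete nodes, which is what makes matching NP-hard in general.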

  2. Formalism Challenges of the Cougaar Model Driven Architecture

    NASA Technical Reports Server (NTRS)

    Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.

    2004-01-01

    The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed to date. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.

  3. Institutional Transformation Model

    SciTech Connect

    2015-10-19

    Reducing the energy consumption of large institutions with dozens to hundreds of existing buildings while maintaining and improving existing infrastructure is a critical economic and environmental challenge. SNL's Institutional Transformation (IX) work integrates facilities and infrastructure sustainability technology capabilities and collaborative decision support modeling approaches to help facilities managers at Sandia National Laboratories (SNL) simulate different future energy reduction strategies and meet long term energy conservation goals.

  4. Entropy-based consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław Jerzy

    2016-09-01

    A description of software architecture is a plan of IT system construction; therefore any architecture gaps affect the overall success of the entire project. Most definitions describe software architecture as a set of views which are mutually unrelated, hence potentially inconsistent, and software architecture completeness is also often described ambiguously. As a result, most methods of building IT systems contain many gaps and ambiguities, presenting obstacles to software building automation. In this article, the consistency and completeness of software architecture are mathematically defined based on calculating the entropy of the architecture description. Following this approach, we also propose a method of automatically verifying the consistency and completeness of the software architecture development method presented in our previous article as Consistent Model Driven Architecture (CMDA). The proposed FBS (Functionality-Behaviour-Structure) entropy-based metric applied in our CMDA approach enables IT architects to decide whether the modelling process is complete and consistent. With this metric, software architects can assess the readiness of ongoing modelling work for the start of IT system building, and even assess objectively whether the designed software architecture of the IT system could be implemented at all. The overall benefit of this approach is that it facilitates the preparation of complete and consistent software architecture more effectively and enables assessment and monitoring of the ongoing modelling status. We demonstrate this with a few industry examples of IT system designs.
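    As an illustrative stand-in for the paper's FBS entropy metric (whose exact definition is not reproduced in this abstract), the sketch below computes the Shannon entropy of the distribution of element kinds in an architecture description; the encoding and interpretation are our assumptions.

```python
import math
from collections import Counter

def description_entropy(elements):
    """Shannon entropy (in bits) of the element-kind distribution of an
    architecture description, given as a list of kind labels."""
    counts = Counter(elements)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A description mixing functionality, behaviour and structure elements
# carries more information than one containing a single kind.
h_mixed = description_entropy(["function", "behaviour", "structure", "function"])
h_single = description_entropy(["function", "function", "function"])
```

    An entropy-style measure of this kind gives a single number that can be tracked as modelling proceeds, which is the monitoring role the abstract describes.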

  5. Building Paradigms: Major Transformations in School Architecture (1798-2009)

    ERIC Educational Resources Information Center

    Gislason, Neil

    2009-01-01

    This article provides an historical overview of significant trends in school architecture from 1798 to the present. I divide the history of school architecture into two major phases. The first period falls between 1798 and 1921: the modern graded classroom emerged as a standard architectural feature during this period. The second period, which…

  6. Modeling and analysis of multiprocessor architectures

    NASA Technical Reports Server (NTRS)

    Yalamanchili, S.; Carpenter, T.

    1989-01-01

    Some technologies developed for system level modeling and analysis of algorithms/architectures using an architecture design and development system are reviewed. Modeling and analysis is described with attention given to modeling constraints and analysis using constrained software graphs. An example is presented of an ADAS graph and its associated attributes, such as firing delay, token consume rate, token produce rate, firing threshold, firing condition, arc queue lengths, associated C or Ada functional model, and stochastic behavior.

  7. Electromagnetic physics models for parallel computing architectures

    SciTech Connect

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  8. Electromagnetic physics models for parallel computing architectures

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  9. Electromagnetic Physics Models for Parallel Computing Architectures

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  10. Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs

    NASA Astrophysics Data System (ADS)

    Dias, Tiago; Roma, Nuno; Sousa, Leonel

    2014-12-01

    A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. In contrast to other designs with similar functionality, the presented architecture is built on a scalable, modular and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but also permits it to be resized to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, not only is it highly suitable for realizing high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only the reduced subset of transforms that are used by a specific video standard. The experimental results that were obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above, capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
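    The separable structure that such unified designs exploit can be illustrated with the standard HEVC order-4 core transform: a 2-D transform is a column pass followed by a row pass with the same integer kernel. The sketch below (plain Python, for illustration only; a hardware core would use the partial-butterfly factorization rather than full matrix products) shows the computation:

```python
# Standard HEVC order-4 core transform matrix (integer approximation of the DCT-II).
C4 = [
    [64, 64, 64, 64],
    [83, 36, -36, -83],
    [64, -64, -64, 64],
    [36, -83, 83, -36],
]

def matmul(a, b):
    """Plain integer matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def transform_2d(block):
    """Separable 2-D transform: column pass then row pass, i.e. C * X * C^T."""
    return matmul(matmul(C4, block), transpose(C4))
```

For a flat (DC) block all energy lands in the top-left coefficient, which is a quick sanity check on the kernel.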

  11. Behavior Models for Software Architecture

    DTIC Science & Technology

    2014-11-01

    MP. Existing process modeling frameworks (BPEL, BPMN [Grosskopf et al. 2009], IDEF) usually follow the “single flowchart” paradigm. MP separates...Process: Business Process Modeling using BPMN , Meghan Kiffer Press. HAREL, D., 1987, A Visual Formalism for Complex Systems. Science of Computer

  12. Modeling Operations Costs for Human Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2013-01-01

    Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQs supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  13. Utilizing Rapid Prototyping for Architectural Modeling

    ERIC Educational Resources Information Center

    Kirton, E. F.; Lavoie, S. D.

    2006-01-01

    This paper will discuss our approach to, success with and future direction in rapid prototyping for architectural modeling. The premise that this emerging technology has broad and exciting applications in the building design and construction industry will be supported by visual and physical evidence. This evidence will be presented in the form of…

  14. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    NASA Astrophysics Data System (ADS)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies filter bank (9, 7). This 3-DDWT architecture has advantages such as no group of pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that lifting scheme minimises the storage requirement. The application specific integrated circuit implementation of the proposed architecture is done by synthesising it using 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
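    A 3-D DWT of the kind described is separable: one 1-D lifting kernel is swept along the rows, columns and frames in turn. The sketch below illustrates that structure with the simplest reversible lifting kernel, the integer Haar (S-) transform, rather than the (9, 7) filter bank used in the paper; all function names are hypothetical:

```python
def lift_haar_1d(x):
    """One level of Haar lifting: predict (d = b - a), then update (s = a + d//2)."""
    s, d = [], []
    for i in range(0, len(x), 2):
        a, b = x[i], x[i + 1]
        dd = b - a           # predict step
        s.append(a + dd // 2)  # update step
        d.append(dd)
    return s + d             # approximation followed by detail coefficients

def unlift_haar_1d(y):
    """Exact inverse: undo the update, then the predict step."""
    n = len(y) // 2
    x = []
    for ss, dd in zip(y[:n], y[n:]):
        a = ss - dd // 2
        x.extend([a, a + dd])
    return x

def apply_along(cube, axis, fn):
    """Apply a 1-D transform along one axis of a nested-list 3-D array."""
    nz, ny, nx = len(cube), len(cube[0]), len(cube[0][0])
    out = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    if axis == 2:
        for z in range(nz):
            for y in range(ny):
                out[z][y] = fn(cube[z][y])
    elif axis == 1:
        for z in range(nz):
            for x in range(nx):
                col = fn([cube[z][y][x] for y in range(ny)])
                for y in range(ny):
                    out[z][y][x] = col[y]
    else:
        for y in range(ny):
            for x in range(nx):
                line = fn([cube[z][y][x] for z in range(nz)])
                for z in range(nz):
                    out[z][y][x] = line[z]
    return out

def dwt_3d(cube):
    """Separable 3-D DWT: the same 1-D kernel swept over x, y, then z."""
    for axis in (2, 1, 0):
        cube = apply_along(cube, axis, lift_haar_1d)
    return cube

def idwt_3d(cube):
    """Inverse: undo the axes in reverse order."""
    for axis in (0, 1, 2):
        cube = apply_along(cube, axis, unlift_haar_1d)
    return cube
```

Because each lifting step is exactly invertible in integer arithmetic, the 3-D round trip reconstructs the input bit-for-bit.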

  15. A parallel VLSI architecture for a digital filter of arbitrary length using Fermat number transforms

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1982-01-01

    A parallel architecture for computation of the linear convolution of two sequences of arbitrary lengths using the Fermat number transform (FNT) is described. In particular a pipeline structure is designed to compute a 128-point FNT. In this FNT, only additions and bit rotations are required. A standard barrel shifter circuit is modified so that it performs the required bit rotation operation. The overlap-save method is generalized for the FNT to compute a linear convolution of arbitrary length. A parallel architecture is developed to realize this type of overlap-save method using one FNT and several inverse FNTs of 128 points. The generalized overlap save method alleviates the usual dynamic range limitation in FNTs of long transform lengths. Its architecture is regular, simple, and expandable, and therefore naturally suitable for VLSI implementation.
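    The essence of FNT-based convolution (transform, pointwise multiply, inverse transform, all in exact integer arithmetic modulo a Fermat number) can be sketched as a naive number-theoretic transform over F4 = 2^16 + 1. One caveat: the paper's FNT uses powers of 2 as the root, so multiplications reduce to bit rotations; here an order-128 root derived from the primitive root 3 is used purely for illustration, and the O(n²) loops stand in for the pipelined structure:

```python
P = 65537  # the Fermat prime F4 = 2**16 + 1

def ntt(x, root):
    """Naive O(n^2) number-theoretic transform mod P."""
    n = len(x)
    return [sum(x[i] * pow(root, i * k, P) for i in range(n)) % P
            for k in range(n)]

def intt(y, root):
    """Inverse transform: same loop with the inverse root, scaled by 1/n."""
    n = len(y)
    inv_n = pow(n, P - 2, P)
    return [(v * inv_n) % P for v in ntt(y, pow(root, P - 2, P))]

def convolve_fnt(a, b, n=128):
    """Cyclic convolution of length n: transform, pointwise product, inverse."""
    root = pow(3, (P - 1) // n, P)  # order-n root of unity; 3 generates F4*
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    prod = [(u * v) % P for u, v in zip(ntt(a, root), ntt(b, root))]
    return intt(prod, root)
```

Zero-padding makes the cyclic result equal the linear convolution whenever the true output fits in n samples and every coefficient stays below P, which is exactly the dynamic range limitation that the generalized overlap-save method alleviates.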

  16. Architecture Models and Data Flows in Local and Group Datawarehouses

    NASA Astrophysics Data System (ADS)

    Bogza, R. M.; Zaharie, Dorin; Avasilcai, Silvia; Bacali, Laura

    Architecture models and possible data flows for local and group data warehouses are presented, together with some data processing models. The architecture models consist of several layers and the data flows between them. The chosen architecture of a data warehouse depends on the type and volume of the source data, and it influences the analysis, data mining and reporting performed on the data in the DWH.

  17. A Cognitive Architecture for Human Performance Process Model Research

    DTIC Science & Technology

    1992-11-01

    Architecture for Human Performance Process Model C - F33615-91 -D-0009 Research PE - 62205F PR- 1710 6. AUTHOR(S) TA - 00 Michael J. Young WU - 60 7...OF PAGES cognitive architectures human performance process models 4 1 cognitive psychology Implementation architectures 16. PRICE CODE computational...1 Human Performance Process Models ............................................................ 2

  18. Performance and Architecture Lab Modeling Tool

    SciTech Connect

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior.

  19. Engineering Structurally Configurable Models with Model Transformation

    DTIC Science & Technology

    2008-12-15

    model in the case of Simulink, and a dataflow model in the case of LabVIEW). Research modeling tools such as Ptolemy II [14], ForSyDe [21], SPEX [30...functionality of our model transformation tool built in the Ptolemy II framework, and its application to large models of distributed and parallel embedded...in Ptolemy II, the same idea can be applied to other modeling tools such as Simulink, LabVIEW, ForSyDe, SPEX and ModHel’X. Moreover, the recent OMG

  20. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.

  1. A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard

    NASA Astrophysics Data System (ADS)

    Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid

    2005-07-01

    The Discrete Wavelet Transform (DWT) is increasingly recognized in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme algorithm is an alternative DWT implementation that has a lower computational complexity and reduced resource requirement. In the JPEG2000 standard two lifting scheme based filter banks are introduced: the 5/3 and the 9/7. In this paper a high throughput, two channel DWT architecture for both of the JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously with minimum memory requirement for each channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% less resources than two independent single channel modules. The high throughput and reduced resource requirement make this architecture the proper choice for real time applications such as Digital Cinema.
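    The reversible 5/3 filter bank mentioned above is defined by two integer lifting steps (predict, then update), which makes perfect reconstruction easy to verify in a few lines. A minimal single-level sketch, assuming an even-length input and whole-sample symmetric extension:

```python
def ext(x, i):
    """Whole-sample symmetric extension of index i into list x."""
    n = len(x)
    if i < 0:
        i = -i
    if i >= n:
        i = 2 * n - 2 - i
    return x[i]

def dwt53(x):
    """Forward reversible 5/3 lifting step (even-length input).
    Predict: d[i] = x[2i+1] - (x[2i] + x[2i+2]) // 2
    Update:  s[i] = x[2i] + (d[i-1] + d[i] + 2) // 4"""
    h = len(x) // 2
    d = [x[2 * i + 1] - (x[2 * i] + ext(x, 2 * i + 2)) // 2 for i in range(h)]
    s = [x[2 * i] + (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(h)]
    return s, d

def idwt53(s, d):
    """Inverse: undo the lifting steps in reverse order."""
    h = len(s)
    even = [s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(h)]
    x = []
    for i in range(h):
        nxt = even[i + 1] if i + 1 < h else even[h - 1]  # mirror at the right edge
        x.append(even[i])
        x.append(d[i] + (even[i] + nxt) // 2)
    return x
```

Because every step is an integer add of a function of the other half-band, the inverse simply subtracts the same quantities, so the round trip is lossless; a constant input also yields all-zero detail coefficients, a useful smoke test.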

  2. Modeling the Europa Pathfinder avionics system with a model based avionics architecture tool

    NASA Technical Reports Server (NTRS)

    Chau, S.; Traylor, M.; Hall, R.; Whitfield, A.

    2002-01-01

    In order to shorten the avionics architecture development time, the Jet Propulsion Laboratory has developed a model-based architecture simulation tool called the Avionics System Architecture Tool (ASAT).

  3. A Framework and Model for Evaluating Clinical Decision Support Architectures

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2008-01-01

    In this paper, we develop a four-phase model for evaluating architectures for clinical decision support that focuses on: defining a set of desirable features for a decision support architecture; building a proof-of-concept prototype; demonstrating that the architecture is useful by showing that it can be integrated with existing decision support systems; and comparing its coverage to that of other architectures. We apply this framework to several well-known decision support architectures, including Arden Syntax, GLIF, SEBASTIAN and SAGE. PMID:18462999

  4. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.

  5. Modelling the pulse transformer in SPICE

    NASA Astrophysics Data System (ADS)

    Godlewska, Malgorzata; Górecki, Krzysztof; Górski, Krzysztof

    2016-01-01

    The paper is devoted to modelling pulse transformers in SPICE. It shows the character of the selected models of this element, points out their advantages and disadvantages, and presents the results of experimental verification of the considered models. These models are characterized by varying degrees of complexity - from linearly coupled linear coils to nonlinear electrothermal models. The study was conducted for transformers with ring cores made of a variety of ferromagnetic materials, with sinusoidal excitation at a frequency of 100 kHz and different values of load resistance. The transformer operating conditions under which the considered models ensure acceptable accuracy of calculations are indicated.
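    The simplest model class mentioned (linearly coupled linear coils) can be cross-checked outside SPICE with a phasor calculation. The sketch below solves the two mesh equations of a coupled-inductor transformer driving a resistive load; the 100 kHz excitation mirrors the test conditions in the abstract, but the function, its parameters and the component values are illustrative assumptions, not the authors' model:

```python
import cmath

def coupled_coil_gain(L1, L2, k, R_load, f):
    """Voltage gain V2/V1 of two linearly coupled coils loaded by a resistor.
    Mesh equations in phasor form (V1 = 1 V):
        V1 = jwL1*I1 + jwM*I2
        0  = jwM*I1 + (jwL2 + R)*I2,   with V2 = -R*I2
    """
    w = 2 * cmath.pi * f
    M = k * (L1 * L2) ** 0.5          # mutual inductance from the coupling factor
    det = (1j * w * L1) * (1j * w * L2 + R_load) - (1j * w * M) ** 2
    I2 = -(1j * w * M) / det          # Cramer's rule for the 2x2 mesh system
    return -R_load * I2
```

With perfect coupling (k = 1) the gain collapses to the ideal turns ratio sqrt(L2/L1), independent of load and frequency, which is a convenient sanity check on any SPICE realization of the linearly coupled model.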

  6. Space Generic Open Avionics Architecture (SGOAA) reference model technical guide

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  7. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected as the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared, and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. The generation of software artifacts is based on a metamodel. Each component maps to a UML structured component, which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.

  8. A Model of Transformative Collaboration

    ERIC Educational Resources Information Center

    Swartz, Ann L.; Triscari, Jacqlyn S.

    2011-01-01

    Two collaborative writing partners sought to deepen their understanding of transformative learning by conducting several spirals of grounded theory research on their own collaborative relationship. Drawing from adult education, business, and social science literature and including descriptive analysis of their records of activity and interaction…

  9. Modeling Techniques for High Dependability Protocols and Architecture

    NASA Technical Reports Server (NTRS)

    LaValley, Brian; Ellis, Peter; Walter, Chris J.

    2012-01-01

    This report documents an investigation into modeling high dependability protocols and some specific challenges that were identified as a result of the experiments. The need for an approach was established and foundational concepts proposed for modeling different layers of a complex protocol and capturing the compositional properties that provide high dependability services for a system architecture. The approach centers around the definition of an architecture layer, its interfaces for composability with other layers and its bindings to a platform specific architecture model that implements the protocols required for the layer.

  10. E-Governance and Service Oriented Computing Architecture Model

    NASA Astrophysics Data System (ADS)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

    E-Governance is the effective application of information and communication technology (ICT) to government processes in order to accomplish safe and reliable information lifecycle management. The information lifecycle involves various processes, such as capturing, preserving, manipulating and delivering information. E-Governance is meant to transform governance into something more transparent, reliable, participatory and accountable from the citizen's point of view. The purpose of this paper is to propose an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines the information and services provided by the government, fosters innovation, identifies ways of delivering services optimally to citizens, and supports implementation in a transparent and accountable manner. The paper also focuses on the E-government Service Manager as a key element of a service-oriented computing model that provides a dynamically extensible structural design in which every area or branch can introduce innovative services. At the heart of the paper is a conceptual model that enables e-government communication between trade and business, citizens and government, and autonomous bodies.

  11. Application of the medical data warehousing architecture EPIDWARE to epidemiological follow-up: data extraction and transformation.

    PubMed

    Kerkri, E; Quantin, C; Yetongnon, K; Allaert, F A; Dusserre, L

    1999-01-01

    In this paper, we present an application of EPIDWARE, a medical data warehousing architecture, to our epidemiological follow-up project. The aim of this project is to extract and regroup information from various information systems for epidemiological studies. We give a description of the requirements of the epidemiological follow-up project, such as the anonymity of medical data and the data file linkage procedure. We introduce the concept of data warehousing architecture. The particularities of data extraction and transformation are presented and discussed.

  12. Metabotropic glutamate receptor 1 disrupts mammary acinar architecture and initiates malignant transformation of mammary epithelial cells

    PubMed Central

    Teh, Jessica L. F.; Shah, Raj; La Cava, Stephanie; Dolfi, Sonia C.; Mehta, Madhura S.; Kongara, Sameera; Price, Sandy; Ganesan, Shridar; Reuhl, Kenneth R.; Hirshfield, Kim M.

    2016-01-01

    Metabotropic glutamate receptor 1 (mGluR1/Grm1) is a member of the G-protein-coupled receptor superfamily, which was once thought to only participate in synaptic transmission and neuronal excitability, but has more recently been implicated in non-neuronal tissue functions. We previously described the oncogenic properties of Grm1 in cultured melanocytes in vitro and in spontaneous melanoma development with 100 % penetrance in vivo. Aberrant mGluR1 expression was detected in 60–80 % of human melanoma cell lines and biopsy samples. As most human cancers are of epithelial origin, we utilized immortalized mouse mammary epithelial cells (iMMECs) as a model system to study the transformative properties of Grm1. We introduced Grm1 into iMMECs and isolated several stable mGluR1-expressing clones. Phenotypic alterations in mammary acinar architecture were assessed using three-dimensional morphogenesis assays. We found that mGluR1-expressing iMMECs exhibited delayed lumen formation in association with decreased central acinar cell death, disrupted cell polarity, and a dramatic increase in the activation of the mitogen-activated protein kinase pathway. Orthotopic implantation of mGluR1-expressing iMMEC clones into mammary fat pads of immunodeficient nude mice resulted in mammary tumor formation in vivo. Persistent mGluR1 expression was required for the maintenance of the tumorigenic phenotypes in vitro and in vivo, as demonstrated by an inducible Grm1-silencing RNA system. Furthermore, mGluR1 was found to be expressed in human breast cancer cell lines and breast tumor biopsies. Elevated levels of extracellular glutamate were observed in mGluR1-expressing breast cancer cell lines and concurrent treatment of MCF7 xenografts with glutamate release inhibitor, riluzole, and an AKT inhibitor led to suppression of tumor progression. Our results are likely relevant to human breast cancer, highlighting a putative role of mGluR1 in the pathophysiology of breast cancer and the potential

  13. Framework for the Parametric System Modeling of Space Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Komar, David R.; Hoffman, Jim; Olds, Aaron D.; Seal, Mike D., II

    2008-01-01

    This paper presents a methodology for performing architecture definition and assessment prior to, or during, program formulation that utilizes a centralized, integrated architecture modeling framework operated by a small, core team of general space architects. This framework, known as the Exploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), enables: 1) a significantly larger fraction of an architecture trade space to be assessed in a given study timeframe; and 2) the complex element-to-element and element-to-system relationships to be quantitatively explored earlier in the design process. Discussion of the methodology advantages and disadvantages with respect to the distributed study team approach typically used within NASA to perform architecture studies is presented along with an overview of EXAMINE s functional components and tools. An example Mars transportation system architecture model is used to demonstrate EXAMINE s capabilities in this paper. However, the framework is generally applicable for exploration architecture modeling with destinations to any celestial body in the solar system.

  14. Vibrational testing of trabecular bone architectures using rapid prototype models.

    PubMed

    Mc Donnell, P; Liebschner, M A K; Tawackoli, Wafa; Mc Hugh, P E

    2009-01-01

    The purpose of this study was to investigate if standard analysis of the vibrational characteristics of trabecular architectures can be used to detect changes in the mechanical properties due to progressive bone loss. A cored trabecular specimen from a human lumbar vertebra was microCT scanned and a three-dimensional, virtual model in stereolithography (STL) format was generated. Uniform bone loss was simulated using a surface erosion algorithm. Rapid prototype (RP) replicas were manufactured from these virtualised models with 0%, 16% and 42% bone loss. Vibrational behaviour of the RP replicas was evaluated by performing a dynamic compression test through a frequency range using an electro-dynamic shaker. The acceleration and dynamic force responses were recorded and fast Fourier transform (FFT) analyses were performed to determine the response spectrum. Standard resonant frequency analysis and damping factor calculations were performed. The RP replicas were subsequently tested in compression beyond failure to determine their strength and modulus. It was found that the reductions in resonant frequency with increasing bone loss corresponded well with reductions in apparent stiffness and strength. This suggests that structural dynamics has the potential to be an alternative diagnostic technique for osteoporosis, although significant challenges must be overcome to determine the effect of the skin/soft tissue interface, the cortex and variabilities associated with in vivo testing.
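    The core of the measurement pipeline described (excite, record the response, take an FFT, read off the resonant frequency) can be sketched with a naive DFT on a synthetic damped oscillation; the signal parameters below are hypothetical stand-ins for the shaker data:

```python
import cmath
import math

def dft_magnitude(x):
    """Naive DFT magnitude spectrum (fine for a short record; an FFT is equivalent)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)))
            for k in range(n)]

def resonant_frequency(x, fs):
    """Frequency of the dominant peak over the positive-frequency bins."""
    mag = dft_magnitude(x)
    half = mag[1:len(x) // 2]          # skip DC, keep positive frequencies
    k = 1 + half.index(max(half))
    return k * fs / len(x)

# Hypothetical stand-in for a measured response: a 125 Hz damped oscillation
# sampled at 1 kHz, decaying with a 200-sample time constant.
fs, n = 1000.0, 256
response = [math.exp(-t / 200.0) * math.sin(2 * math.pi * 125.0 * t / fs)
            for t in range(n)]
```

A drop in the detected resonant frequency between replicas would then mirror the stiffness and strength reductions reported in the study.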

  15. Direct model extraction of RFCMOS spiral transformers

    NASA Astrophysics Data System (ADS)

    Pan, Jie; Yang, Hai-Gang

    2010-11-01

    In a spiral transformer, couplings between the coils are interlaced and correlative, and are difficult to independently extract from limited network parameters. In this article, we present a method for directly extracting model parameters including mutual inductances and port-to-port capacitances one by one. In the method, by leaving unmeasured ports short-circuited or open-circuited on the wafer, we transform a 4-port transformer into four 2-port networks for obtaining adequate measurement data, enabling us to extract all the '2-π'-like model parameters independently. We adopt this method into the modelling of a 5:5-turn spiral transformer fabricated in 0.18 μm CMOS technology. Finally, comparisons between electromagnetic (EM)-simulated results, measured results and model-simulated results demonstrate that our method is accurate and reliable.

  16. Code generator for implementing dual tree complex wavelet transform on reconfigurable architectures for mobile applications.

    PubMed

    Canbay, Ferhat; Levent, Vecdi Emre; Serbes, Gorkem; Ugurdag, H Fatih; Goren, Sezer; Aydin, Nizamettin

    2016-09-01

    The authors aimed to develop an application for producing different architectures to implement the dual tree complex wavelet transform (DTCWT), which has a near shift-invariance property. To obtain a low-cost and portable solution for implementing the DTCWT in multi-channel real-time applications, various embedded-system approaches are realised. For comparison, the DTCWT was implemented in C language on a personal computer and on a PIC microcontroller. However, in the former approach portability and in the latter the desired speed performance cannot be achieved. Hence, implementation of the DTCWT on a reconfigurable platform such as a field programmable gate array, which provides portable, low-cost, low-power, and high-performance computing, is considered the most feasible solution. At first, they used the System Generator DSP design tool of Xilinx for algorithm design. However, designs implemented by using such tools are not optimised in terms of area and power. To overcome all the drawbacks mentioned above, they implemented the DTCWT algorithm by using Verilog Hardware Description Language, which has its own difficulties. To overcome these difficulties and simplify the usage of the proposed algorithms and the adaptation procedures, a code generator program that can produce different architectures is proposed.

  17. Model based analysis of piezoelectric transformers.

    PubMed

    Hemsel, T; Priya, S

    2006-12-22

    Piezoelectric transformers are increasingly popular in electrical devices owing to several advantages such as small size, high efficiency, no electromagnetic noise and non-flammability. In addition to conventional applications such as ballasts for back light inverters in notebook computers, camera flashes, and fuel ignition, several new applications have emerged, such as AC/DC converters, battery chargers and automobile lighting. These new applications demand high power density and a wide range of voltage gain. Currently, the transformer power density is limited to 40 W/cm³, obtained at low voltage gain. The purpose of this study was to investigate a transformer design that has the potential of providing higher power density and a wider range of voltage gain. The new transformer design utilizes the radial mode at both the input and output ports and has unidirectional polarization in the ceramics. This design was found to provide 30 W of power with an efficiency of 98% and a 30 degrees C temperature rise from room temperature. An electro-mechanical equivalent circuit model was developed to describe the characteristics of the piezoelectric transformer. The model was found to successfully predict the characteristics of the transformer. Excellent matching was found between the computed and experimental results. The results of this study will allow unipoled piezoelectric transformers with specified performance to be designed deterministically. It is expected that in the near future the unipoled transformer will gain significant importance in various electrical components.

  18. Transformation as a Design Process and Runtime Architecture for High Integrity Software

    SciTech Connect

    Bespalko, S.J.; Winter, V.L.

    1999-04-05

    We have discussed two aspects of creating high integrity software that greatly benefit from the availability of transformation technology, which in this case is manifest by the requirement for a sophisticated backtracking parser. First, because of the potential for correctly manipulating programs via small changes, an automated non-procedural transformation system can be a valuable tool for constructing high assurance software. Second, modeling the process of translating data into information as a (perhaps context-dependent) grammar leads to an efficient, compact implementation. From a practical perspective, the transformation process should begin in the domain language in which a problem is initially expressed. Thus, in order for a transformation system to be practical, it must be flexible with respect to domain-specific languages. We have argued that transformation applied to specification results in a highly reliable system. We also attempted to briefly demonstrate that transformation technology applied to the runtime environment will result in a safe and secure system. We thus believe that the sophisticated multi-lookahead backtracking parsing technology is central to the task of being in a position to demonstrate the existence of HIS.

  19. Demand Activated Manufacturing Architecture (DAMA) model for supply chain collaboration

    SciTech Connect

    CHAPMAN,LEON D.; PETERSEN,MARJORIE B.

    2000-03-13

    The Demand Activated Manufacturing Architecture (DAMA) project during the last five years of work with the U.S. Integrated Textile Complex (retail, apparel, textile, and fiber sectors) has developed an inter-enterprise architecture and collaborative model for supply chains. This model will enable improved collaborative business across any supply chain. The DAMA Model for Supply Chain Collaboration is a high-level model for collaboration to achieve Demand Activated Manufacturing. The five major elements of the architecture to support collaboration are (1) activity or process, (2) information, (3) application, (4) data, and (5) infrastructure. These five elements are tied to the application of the DAMA architecture to three phases of collaboration - prepare, pilot, and scale. There are six collaborative activities that may be employed in this model: (1) Develop Business Planning Agreements, (2) Define Products, (3) Forecast and Plan Capacity Commitments, (4) Schedule Product and Product Delivery, (5) Expedite Production and Delivery Exceptions, and (6) Populate Supply Chain Utility. The Supply Chain Utility is a set of applications implemented to support collaborative product definition, forecast visibility, planning, scheduling, and execution. The DAMA architecture and model will be presented along with the process for implementing this DAMA model.

  20. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    NASA Astrophysics Data System (ADS)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model, and a simpler projective mapping. This decomposition considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow in 180-nm CMOS technology as well as on a Virtex-6 field-programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
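
    The RANSAC step the abstract describes can be caricatured in a few lines. The sketch below fits a 2-D affine model (one of the four submodels) to point matches while rejecting false matches; the point data, iteration count, and inlier threshold are invented for the example and are not the paper's hardware parameters.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine model dst ~ [src, 1] @ params from point pairs."""
    m = np.hstack([src, np.ones((len(src), 1))])       # n x 3 design matrix
    params, *_ = np.linalg.lstsq(m, dst, rcond=None)   # 3 x 2 coefficients
    return params

def ransac_affine(src, dst, iters=200, thresh=1.0, seed=0):
    """RANSAC: fit on 3 random pairs, keep the model with the most inliers."""
    rng = np.random.default_rng(seed)
    ones = np.ones((len(src), 1))
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)   # minimal sample
        params = estimate_affine(src[idx], dst[idx])
        residual = np.linalg.norm(np.hstack([src, ones]) @ params - dst, axis=1)
        inliers = residual < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers of the best model (the "refining" step)
    return estimate_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

The hardware version performs the same sample-score-refit loop, but with the model split into submodels so that each coefficient fits a narrow fixed-point format.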

  1. Conformal map transformations for meteorological modelers

    NASA Astrophysics Data System (ADS)

    Taylor, Albion D.

    1997-02-01

    This paper describes a utility function library which meteorological computer modelers can incorporate in their programs to provide the mathematical transformations of conformal maps that their models may need. In addition to coordinate transformations, routines supply projection-dependent terms of the governing equations, wind component conversions, and rotation axis orientation components. The routines seamlessly handle the transitions from Polar Stereographic through Lambert Conformal to Mercator projections. Initialization routines allow concurrent handling of multiple projections, and allow a simple method of defining computational model grids to the software.
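
    To illustrate the kind of transformation such a library provides, here is a minimal spherical-Earth north-polar stereographic forward transform together with its projection-dependent map scale factor. The formulas are the standard textbook ones for a projection true at the pole, not the library's actual routines, and the Earth radius is an assumed constant.

```python
import math

EARTH_RADIUS_KM = 6371.2  # assumed spherical Earth radius

def polar_stereo_forward(lat_deg, lon_deg, ref_lon_deg=0.0):
    """North-polar stereographic forward transform (spherical Earth,
    scale true at the pole). Returns (x, y) in km."""
    lat = math.radians(lat_deg)
    dlon = math.radians(lon_deg - ref_lon_deg)
    # colatitude half-angle controls distance from the pole
    rho = 2.0 * EARTH_RADIUS_KM * math.tan(math.pi / 4.0 - lat / 2.0)
    return rho * math.sin(dlon), -rho * math.cos(dlon)

def map_factor(lat_deg):
    """Map scale factor m(lat) = 2 / (1 + sin(lat)), one of the
    projection-dependent terms a model needs in its governing equations."""
    return 2.0 / (1.0 + math.sin(math.radians(lat_deg)))
```

The pole maps to the origin and the scale factor grows from 1 at the pole to 2 at the equator, which is why models carry these terms through their equations.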

  2. Non-linear transformer modeling and simulation

    SciTech Connect

    Archer, W.E.; Deveney, M.F.; Nagel, R.L.

    1994-08-01

    Transformer models for simulation with PSpice and Analogy's Saber are being developed using experimental B-H loop and network analyzer measurements. The models are evaluated for accuracy and convergence using several test circuits. Results are presented which demonstrate the effects on circuit performance of magnetic core losses, eddy currents, and mechanical stress on the magnetic cores.

  3. Modeling and testing of ethernet transformers

    NASA Astrophysics Data System (ADS)

    Bowen, David

    2011-12-01

    Twisted-pair Ethernet is now the standard home and office last-mile network technology. For decades, the IEEE standard that defines Ethernet has required electrical isolation between the twisted pair cable and the Ethernet device. So, for decades, every Ethernet interface has used magnetic core Ethernet transformers to isolate Ethernet devices and keep users safe in the event of a potentially dangerous fault on the network media. The current state-of-the-art Ethernet transformers are miniature (<5 mm diameter) ferrite-core toroids wrapped with approximately 10 to 30 turns of wire. As small as current Ethernet transformers are, they still limit further Ethernet device miniaturization and require a separate bulky package or jack housing. New coupler designs must be explored which are capable of exceptional miniaturization or on-chip fabrication. This dissertation thoroughly explores the performance of the current commercial Ethernet transformers to both increase understanding of the device's behavior and outline performance parameters for replacement devices. Lumped element and distributed circuit models are derived; testing schemes are developed and used to extract model parameters from commercial Ethernet devices. Transfer relation measurements of the commercial Ethernet transformers are compared against the model's behavior and it is found that the tuned, distributed models produce the best transfer relation match to the measured data. Process descriptions and testing results on fabricated thin-film dielectric-core toroid transformers are presented. The best results were found for a 32-turn transformer loaded with 100 Ω, the impedance of twisted pair cable. This transformer gave a flat response from about 10 MHz to 40 MHz with a height of approximately 0.45. For the fabricated transformer structures, theoretical methods to determine resistance, capacitance and inductance are presented. A special analytical and numerical analysis of the fabricated transformer
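
    A drastically simplified lumped-element sketch of the kind of model the dissertation derives: at its crudest, a 1:1 isolation transformer driving a matched load reduces to a magnetizing inductance shunting the load, which gives the high-pass transfer relation measured for such parts. The component values below are illustrative assumptions, not values extracted from the thesis.

```python
import math

def transfer_magnitude(freq_hz, l_mag=350e-6, r_src=100.0, r_load=100.0):
    """|Vout/Vin| of a minimal lumped model: a source resistance feeding a
    1:1 ideal transformer whose magnetizing inductance shunts the load."""
    w = 2.0 * math.pi * freq_hz
    z_mag = complex(0.0, w * l_mag)              # magnetizing branch impedance
    z_par = z_mag * r_load / (z_mag + r_load)    # L_mag in parallel with the load
    return abs(z_par / (r_src + z_par))          # voltage-divider transfer ratio
```

In the midband the magnetizing impedance is large and the response flattens at the resistive-divider value (0.5 here); at low frequency the inductor shorts the load and the response rolls off, which is why the measured transfer relations are band-limited.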

  4. Optimizing transformations of stencil operations for parallel cache-based architectures

    SciTech Connect

    Bassetti, F.; Davis, K.

    1999-06-28

    This paper describes a new technique for optimizing serial and parallel stencil and stencil-like operations for cache-based architectures. This technique takes advantage of the semantic knowledge implicit in stencil-like computations. The technique is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling for a single processor, and in parallel using a 1-D data partition. For the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case; however, for the parallel case the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
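
    The tiling transformation applied to the test codes can be sketched as follows. This toy version only demonstrates that a 1-D row-tiled 5-point Jacobi sweep computes exactly the same result as the untiled sweep; the actual speedup comes from cache reuse in the compiled codes, which this Python illustration does not reproduce.

```python
import numpy as np

def jacobi_naive(u, f, h2, sweeps):
    """Plain 5-point Jacobi sweeps for the discretized Poisson equation."""
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                + h2 * f[1:-1, 1:-1])
        u = v
    return u

def jacobi_tiled(u, f, h2, sweeps, tile=64):
    """The same sweeps traversed in 1-D row blocks, the access order the
    cache-blocking transformation imposes on the generated code."""
    n = u.shape[0]
    for _ in range(sweeps):
        v = u.copy()
        for r0 in range(1, n - 1, tile):          # walk interior rows in tiles
            r1 = min(r0 + tile, n - 1)
            v[r0:r1, 1:-1] = 0.25 * (u[r0-1:r1-1, 1:-1] + u[r0+1:r1+1, 1:-1]
                                     + u[r0:r1, :-2] + u[r0:r1, 2:]
                                     + h2 * f[r0:r1, 1:-1])
        u = v
    return u
```

Because Jacobi reads only the previous iterate, any tile traversal order yields bitwise-identical results, which is what makes the transformation safe for a source-to-source tool to apply automatically.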

  5. Modeling and Verification of Dependable Electronic Power System Architecture

    NASA Astrophysics Data System (ADS)

    Yuan, Ling; Fan, Ping; Zhang, Xiao-fang

    The electronic power system can be viewed as a system composed of a set of concurrently interacting subsystems to generate, transmit, and distribute electric power. The complex interaction among subsystems makes the design of an electronic power system complicated. Furthermore, in order to guarantee the safe generation and distribution of electric power, fault tolerant mechanisms are incorporated in the system design to satisfy high reliability requirements. As a result, this incorporation makes the design of such a system more complicated. We propose a dependable electronic power system architecture, which can provide a generic framework to guide the development of electronic power systems and ease development complexity. In order to provide common idioms and patterns to system designers, we formally model the electronic power system architecture using the PVS formal language. Based on the PVS model of this system architecture, we formally verify the fault tolerant properties of the system architecture using the PVS theorem prover, which can guarantee that the system architecture satisfies high reliability requirements.

  6. EVA/ORU model architecture using RAMCOST

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.; Park, Eui H.; Wang, Y. M.; Bretoi, R.

    1990-01-01

    A parametrically driven simulation model is presented in order to provide detailed insight into the effects of various input parameters in the life testing of a modular space suit. The RAMCOST model employed is a user-oriented simulation model for studying the life-cycle costs of designs under conditions of uncertainty. The results obtained from the EVA simulated model are used to assess various mission life testing parameters such as the number of joint motions per EVA cycle time, part availability, and number of inspection requirements. RAMCOST first simulates EVA completion for NASA application using a probabilistic PERT-like network. With the mission time heuristically determined, RAMCOST then models different orbital replacement unit policies with special application to the astronaut's space suit functional designs.

  7. RT 24 - Architecture, Modeling & Simulation, and Software Design

    DTIC Science & Technology

    2010-11-01

    Focus on tool extensions (UPDM, SysML, SoaML, BPMN) ... Leverage “best of breed” architecture methodologies ... Provide tooling to support the methodology ... DoDAF Capability ... Example: BPMN ... DoDAF 2.0 MetaModel / BPMN MetaModel Mapping ... SysML to DoDAF 2.0 ... DoDAF V2.0 Models / OV-2 / SysML Diagrams / Requirement

  8. Improving Project Management Using Formal Models and Architectures

    NASA Technical Reports Server (NTRS)

    Kahn, Theodore; Sturken, Ian

    2011-01-01

    This talk discusses the advantages formal modeling and architecture brings to project management. These emerging technologies have both great potential and challenges for improving information available for decision-making. The presentation covers standards, tools and cultural issues needing consideration, and includes lessons learned from projects the presenters have worked on.

  9. Modeling of Euclidean braided fiber architectures to optimize composite properties

    NASA Technical Reports Server (NTRS)

    Armstrong-Carroll, E.; Pastore, C.; Ko, F. K.

    1992-01-01

    Three-dimensional braided fiber reinforcements are a very effective toughening mechanism for composite materials. The integral yarn path inherent to this fiber architecture allows for effective multidirectional dispersion of strain energy and negates delamination problems. In this paper a geometric model of Euclidean braid fiber architectures is presented. This information is used to determine the degree of geometric isotropy in the braids. This information, when combined with candidate material properties, can be used to quickly generate an estimate of the available load-carrying capacity of Euclidean braids at any arbitrary angle.

  10. Empirical Memory-Access Cost Models in Multicore NUMA Architectures

    SciTech Connect

    McCormick, Patrick S.; Braithwaite, Ryan Karl; Feng, Wu-chun

    2011-01-01

    Data location is of prime importance when scheduling tasks in a non-uniform memory access (NUMA) architecture. The characteristics of the NUMA architecture must be understood so tasks can be scheduled onto processors that are close to the task's data. However, in modern NUMA architectures, such as AMD Magny-Cours and Intel Nehalem, there may be a relatively large number of memory controllers with sockets that are connected in a non-intuitive manner, leading to performance degradation due to uninformed task-scheduling decisions. In this paper, we provide a method for experimentally characterizing memory-access costs for modern NUMA architectures via memory latency and bandwidth microbenchmarks. Using the results of these benchmarks, we propose a memory-access cost model to improve task-scheduling decisions by scheduling tasks near the data they need. Simple task-scheduling experiments using the memory-access cost models validate the use of empirical memory-access cost models to significantly improve program performance.
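
    The scheduling heuristic described above reduces to a small lookup: place a task on the idle processor whose measured access cost to the data's home node is lowest. The sketch below uses a hypothetical latency matrix of the kind the microbenchmarks would produce; the node count and nanosecond values are invented for illustration, and the paper's full model also folds in bandwidth.

```python
# Hypothetical measured latencies (ns) between 4 NUMA nodes; row = requesting
# node, column = node holding the data. Values are illustrative only.
LATENCY_NS = [
    [ 80, 120, 180, 180],
    [120,  80, 180, 180],
    [180, 180,  80, 120],
    [180, 180, 120,  80],
]

def best_node_for_task(data_node, free_nodes):
    """Pick the free node with the lowest measured access cost to the node
    holding the task's data -- the core of the empirical cost-model heuristic."""
    return min(free_nodes, key=lambda n: LATENCY_NS[n][data_node])
```

On a non-intuitive interconnect the nearest free node is not always the numerically adjacent one, which is exactly the situation the empirical characterization is meant to expose.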

  11. A Model-Driven Architecture Approach for Modeling, Specifying and Deploying Policies in Autonomous and Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel

    2006-01-01

    Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.

  12. Superstatistical model of bacterial DNA architecture

    PubMed Central

    Bogachev, Mikhail I.; Markelov, Oleg A.; Kayumov, Airat R.; Bunde, Armin

    2017-01-01

    Understanding the physical principles that govern the complex DNA structural organization as well as its mechanical and thermodynamical properties is essential for the advancement in both life sciences and genetic engineering. Recently we have discovered that the complex DNA organization is explicitly reflected in the arrangement of nucleotides depicted by the universal power law tailed internucleotide interval distribution that is valid for complete genomes of various prokaryotic and eukaryotic organisms. Here we suggest a superstatistical model that represents a long DNA molecule by a series of consecutive ~150 bp DNA segments with the alternation of the local nucleotide composition between segments exhibiting long-range correlations. We show that the superstatistical model and the corresponding DNA generation algorithm explicitly reproduce the laws governing the empirical nucleotide arrangement properties of the DNA sequences for various global GC contents and optimal living temperatures. Finally, we discuss the relevance of our model in terms of the DNA mechanical properties. As an outlook, we focus on finding the DNA sequences that encode a given protein while simultaneously reproducing the nucleotide arrangement laws observed from empirical genomes, that may be of interest in the optimization of genetic engineering of long DNA molecules. PMID:28225058
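
    The segment-wise generation idea can be sketched in a few lines: each ~150 bp segment draws its own local GC content, and nucleotides are then sampled within the segment. This toy version omits the long-range correlations between segment GC values that the paper's superstatistical model imposes; all parameter values are illustrative.

```python
import random

def generate_dna(n_segments, segment_len=150, mean_gc=0.5, gc_sd=0.1, seed=0):
    """Generate a DNA string as consecutive segments, each with its own
    locally drawn GC fraction (simplified sketch of the superstatistical idea)."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_segments):
        # local GC fraction for this segment, clipped to [0, 1]
        gc = min(max(rng.gauss(mean_gc, gc_sd), 0.0), 1.0)
        for _ in range(segment_len):
            seq.append(rng.choice("GC") if rng.random() < gc else rng.choice("AT"))
    return "".join(seq)
```

The alternation of composition between segments, rather than uniform sampling over the whole molecule, is what produces the heavy-tailed internucleotide interval statistics the model targets.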

  13. Superstatistical model of bacterial DNA architecture

    NASA Astrophysics Data System (ADS)

    Bogachev, Mikhail I.; Markelov, Oleg A.; Kayumov, Airat R.; Bunde, Armin

    2017-02-01

    Understanding the physical principles that govern the complex DNA structural organization as well as its mechanical and thermodynamical properties is essential for the advancement in both life sciences and genetic engineering. Recently we have discovered that the complex DNA organization is explicitly reflected in the arrangement of nucleotides depicted by the universal power law tailed internucleotide interval distribution that is valid for complete genomes of various prokaryotic and eukaryotic organisms. Here we suggest a superstatistical model that represents a long DNA molecule by a series of consecutive ~150 bp DNA segments with the alternation of the local nucleotide composition between segments exhibiting long-range correlations. We show that the superstatistical model and the corresponding DNA generation algorithm explicitly reproduce the laws governing the empirical nucleotide arrangement properties of the DNA sequences for various global GC contents and optimal living temperatures. Finally, we discuss the relevance of our model in terms of the DNA mechanical properties. As an outlook, we focus on finding the DNA sequences that encode a given protein while simultaneously reproducing the nucleotide arrangement laws observed from empirical genomes, that may be of interest in the optimization of genetic engineering of long DNA molecules.

  14. System Architecture Modeling for Technology Portfolio Management using ATLAS

    NASA Technical Reports Server (NTRS)

    Thompson, Robert W.; O'Neil, Daniel A.

    2006-01-01

    Strategic planners and technology portfolio managers have traditionally relied on consensus-based tools, such as the Analytical Hierarchy Process (AHP) and Quality Function Deployment (QFD), in planning the funding of technology development. While useful to a certain extent, these tools are limited in their ability to fully quantify the impact of a technology choice on system mass, system reliability, project schedule, and lifecycle cost. The Advanced Technology Lifecycle Analysis System (ATLAS) aims to provide strategic planners a decision support tool for analyzing technology selections within a Space Exploration Architecture (SEA). Using ATLAS, strategic planners can select physics-based system models from a library, configure the systems with technologies and performance parameters, and plan the deployment of a SEA. Key parameters for current and future technologies have been collected from subject-matter experts and other documented sources in the Technology Tool Box (TTB). ATLAS can be used to compare the technical feasibility and economic viability of a set of technology choices for one SEA, and compare it against another set of technology choices or another SEA. System architecture modeling in ATLAS is a multi-step process. First, the modeler defines the system level requirements. Second, the modeler identifies technologies of interest whose impact on the SEA is to be assessed. Third, the system modeling team creates models of architecture elements (e.g. launch vehicles, in-space transfer vehicles, crew vehicles) if they are not already in the model library. Finally, the architecture modeler develops a script for the ATLAS tool to run, and the results for comparison are generated.

  15. A modeling process to understand complex system architectures

    NASA Astrophysics Data System (ADS)

    Robinson, Santiago Balestrini

    2009-12-01

    In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign-level analysis tools, oftentimes characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypothesis, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) the capability can be defined as a cycle of functions, and (2) it must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named the Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two
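
    The random-functional-network idea behind Hypothesis A can be caricatured in a few lines: given a probability that a function-based relationship exists between each pair of entity types, draw many random networks and count how often the whole capability cycle is connected. The function names and link probabilities below are invented for illustration and are not the thesis's DiMA implementation.

```python
import random

def capability_satisfied(p_link, cycle, rng):
    """Draw one random functional network and report whether every
    consecutive function pair in the capability cycle is connected."""
    pairs = zip(cycle, cycle[1:] + cycle[:1])     # close the cycle of functions
    return all(rng.random() < p_link[(a, b)] for a, b in pairs)

def estimate_capability(p_link, cycle, trials=10_000, seed=0):
    """Monte Carlo estimate of the probability the capability is satisfied."""
    rng = random.Random(seed)
    hits = sum(capability_satisfied(p_link, cycle, rng) for _ in range(trials))
    return hits / trials
```

Two candidate architectures can then be ranked by comparing their estimated capability probabilities, with no constructive simulation in the loop.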

  16. Space station architectural elements model study

    NASA Technical Reports Server (NTRS)

    Taylor, T. C.; Spencer, J. S.; Rocha, C. J.; Kahn, E.; Cliffton, E.; Carr, C.

    1987-01-01

    The worksphere, a user controlled computer workstation enclosure, was expanded in scope to an engineering workstation suitable for use on the Space Station as a crewmember desk in orbit. The concept was also explored as a module control station capable of enclosing enough equipment to control the station from each module. The concept has commercial potential for the Space Station and surface workstation applications. The central triangular beam interior configuration was expanded and refined to seven different beam configurations. These included triangular on center, triangular off center, square, hexagonal small, hexagonal medium, hexagonal large and the H beam. Each was explored with some considerations as to the utilities and a suggested evaluation factor methodology was presented. Scale models of each concept were made. The models were helpful in researching the seven beam configurations and determining the negative residual (unused) volume of each configuration. A flexible hardware evaluation factor concept is proposed which could be helpful in evaluating interior space volumes from a human factors point of view. A magnetic version with all the graphics is available from the author or the technical monitor.

  17. Modelling parallel programs and multiprocessor architectures with AXE

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Fineman, Charles E.

    1991-01-01

    AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user-interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players. Their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen. These include CPU and message routing bottlenecks, and the dynamic status of the software.

  18. Entity-Centric Abstraction and Modeling Framework for Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Lewe, Jung-Ho; DeLaurentis, Daniel A.; Mavris, Dimitri N.; Schrage, Daniel P.

    2007-01-01

    A comprehensive framework for representing transportation architectures is presented. After discussing a series of preceding perspectives and formulations, the intellectual underpinning of the novel framework using an entity-centric abstraction of transportation is described. The entities include endogenous and exogenous factors, and functional expressions are offered that relate these and their evolution. The end result is a Transportation Architecture Field which permits analysis of future concepts under a holistic perspective. A simulation model which stems from the framework is presented and exercised, producing results which quantify improvements in air transportation due to advanced aircraft technologies. Finally, a modeling hypothesis and its accompanying criteria are proposed to test further use of the framework for evaluating new transportation solutions.

  19. Coaching Model + Clinical Playbook = Transformative Learning.

    PubMed

    Fletcher, Katherine A; Meyer, Mary

    2016-01-01

    Health care employers demand that workers be skilled in clinical reasoning, able to work within complex interprofessional teams to provide safe, quality patient-centered care in a complex evolving system. To this end, there have been calls for radical transformation of nursing education including the development of a baccalaureate generalist nurse. Based on recommendations from the American Association of Colleges of Nursing, faculty concluded that clinical education must change moving beyond direct patient care by applying the concepts associated with designer, manager, and coordinator of care and being a member of a profession. To accomplish this, the faculty utilized a system of focused learning assignments (FLAs) that present transformative learning opportunities that expose students to "disorienting dilemmas," alternative perspectives, and repeated opportunities to reflect and challenge their own beliefs. The FLAs collected in a "Playbook" were scaffolded to build the student's competencies over the course of the clinical experience. The FLAs were centered on the 6 Quality and Safety Education for Nurses competencies, with 2 additional concepts of professionalism and systems-based practice. The FLAs were competency-based exercises that students performed when not assigned to direct patient care or had free clinical time. Each FLA had a lesson plan that allowed the student and faculty member to see the competency addressed by the lesson, resources, time on task, student instructions, guide for reflection, grading rubric, and recommendations for clinical instructor. The major advantages of the model included (a) consistent implementation of structured learning experiences by a diverse teaching staff using a coaching model of instruction; (b) more systematic approach to present learning activities that build upon each other; (c) increased time for faculty to interact with students providing direct patient care; (d) guaranteed capture of selected transformative

  20. Assessing Aegis Program Transition to an Open-Architecture Model

    DTIC Science & Technology

    2013-01-01

    executive officer for Integrated Warfare Systems, encouraged and supported this research effort. Bill Bray, Myron Liszniansky, Kathy Emery, and...Architecture Model line development initiatives begin by “cloning” the software of a previous baseline. After the baseline is certified, it is...maintained separately. Navy officials sometimes refer to this approach as “clone and own.” One of the implications of this approach is that fixes or

  1. Study of performance on SMP and distributed memory architectures using a shared memory programming model

    SciTech Connect

    Brooks, E.D.; Warren, K.H.

    1997-08-08

    In this paper we examine the use of a shared memory programming model to address the problem of portability of application codes between distributed memory and shared memory architectures. We do this with an extension of the Parallel C Preprocessor. The extension, borrowed from Split-C and AC, uses type qualifiers instead of storage class modifiers to declare variables that are shared among processors. The type qualifier declaration supports an abstract shared memory facility on distributed memory machines while making direct use of hardware support on shared memory architectures. Our benchmarking study spans a wide range of shared memory and distributed memory platforms. Benchmarks include Gaussian elimination with back substitution, a two-dimensional fast Fourier transform, and a matrix-matrix multiply. We find that the type-qualifier-based shared memory programming model is capable of efficiently spanning both distributed memory and shared memory architectures. Although the resulting shared memory programming model is portable, it does not remove the need to arrange for overlapped or blocked remote memory references on platforms that require these tuning measures in order to obtain good performance.

  2. Managing changes in the enterprise architecture modelling context

    NASA Astrophysics Data System (ADS)

    Khanh Dam, Hoa; Lê, Lam-Son; Ghose, Aditya

    2016-07-01

    Enterprise architecture (EA) models the whole enterprise in various aspects regarding both business processes and information technology resources. As the organisation grows, the architecture of its systems and processes must also evolve to meet the demands of the business environment. Evolving an EA model may involve making changes to various components across different levels of the EA. As a result, an important issue before making a change to an EA model is assessing the ripple effect of the change, i.e. change impact analysis. Another critical issue is change propagation: given a set of primary changes that have been made to the EA model, what additional secondary changes are needed to maintain consistency across multiple levels of the EA. There has been, however, limited work on supporting the maintenance and evolution of EA models. This article proposes an EA description language, namely ChangeAwareHierarchicalEA, integrated with an evolution framework to support both change impact analysis and change propagation within an EA model. The core part of our framework is a technique for computing the impact of a change and a new method for generating interactive repair plans from Alloy consistency rules that constrain the EA model.
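
    At its simplest, the change impact analysis described above is a reachability computation over the dependency graph linking EA elements across levels. The element names and edges below are hypothetical and are not taken from the article's language or case studies.

```python
from collections import deque

# Hypothetical dependencies between EA elements across levels: "X -> Y" means
# a change to X may require a secondary change to Y. Names are illustrative.
DEPENDS_ON = {
    "BusinessProcess:Ordering": ["Application:OrderService"],
    "Application:OrderService": ["Data:OrderSchema", "Infra:AppServer"],
    "Data:OrderSchema": ["Infra:Database"],
    "Infra:AppServer": [],
    "Infra:Database": [],
}

def change_impact(primary_changes):
    """Ripple-effect set: every EA element reachable from the primary changes
    (breadth-first traversal of the dependency graph)."""
    impacted = set(primary_changes)
    queue = deque(primary_changes)
    while queue:
        for nxt in DEPENDS_ON.get(queue.popleft(), []):
            if nxt not in impacted:
                impacted.add(nxt)
                queue.append(nxt)
    return impacted
```

The article's framework goes further, deriving the secondary changes (repair plans) from Alloy consistency rules rather than from a fixed edge list, but the reachability step is the common core.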

  3. SpaceWire model development technology for satellite architecture.

    SciTech Connect

    Eldridge, John M.; Leemaster, Jacob Edward; Van Leeuwen, Brian P.

    2011-09-01

    Packet switched data communications networks that use distributed processing architectures have the potential to simplify the design and development of new, increasingly more sophisticated satellite payloads. In addition, the use of reconfigurable logic may reduce the amount of redundant hardware required in space-based applications without sacrificing reliability. These concepts were studied using software modeling and simulation, and the results are presented in this report. Models of the commercially available, packet switched data interconnect SpaceWire protocol were developed and used to create network simulations of data networks containing reconfigurable logic with traffic flows for timing system distribution.

  4. Plant growth and architectural modelling and its applications

    PubMed Central

    Guo, Yan; Fourcaud, Thierry; Jaeger, Marc; Zhang, Xiaopeng; Li, Baoguo

    2011-01-01

    Over the last decade, a growing number of scientists around the world have invested in research on plant growth and architectural modelling and applications (often abbreviated to plant modelling and applications, PMA). By combining physical and biological processes, spatially explicit models have shown their ability to help in understanding plant–environment interactions. This Special Issue on plant growth modelling presents new information within this topic, which is summarized in this preface. Research results for a variety of plant species growing in the field, in greenhouses and in natural environments are presented. Various models and simulation platforms are developed in this field of research, opening new features to a wider community of researchers and end users. New modelling technologies relating to the structure and function of plant shoots and root systems are explored from the cellular to the whole-plant and plant-community levels. PMID:21638797

  5. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    SciTech Connect

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study comprised four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  6. A Distributed, Cross-Agency Software Architecture for Sharing Climate Models and Observational Data Sets (Invited)

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Mattmann, C. A.; Braverman, A. J.; Cinquini, L.

    2010-12-01

    The Jet Propulsion Laboratory (JPL) has been developing a distributed infrastructure to support access to and sharing of Earth Science observational data sets with climate models, to support model-to-data intercomparison for climate research. The Climate Data Exchange (CDX), a framework for linking distributed repositories coupled with tailored distributed services to support the intercomparison, provides mechanisms to discover, access, transform and share observational and model output data [2]. These services are critical to allowing data to remain distributed, but be pulled together to support analysis. The architecture itself provides a services-based approach allowing for integrating and working with other computing infrastructures through well-defined software interfaces. Specifically, JPL has worked very closely with the Earth System Grid (ESG) and the Program for Climate Model Diagnosis and Intercomparison (PCMDI) at Lawrence Livermore National Laboratory (LLNL) to integrate NASA science data systems with the Earth System Grid to support federation across organizational and agency boundaries [1]. Of particular interest near-term is enabling access to NASA observational data alongside climate models for the Coupled Model Intercomparison Project known as CMIP5. CMIP5 is the protocol that will be used for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR5) on climate change. JPL and NASA are currently engaged in a project to ensure that observational data are available to the climate research community through the Earth System Grid. By both developing a software architecture and working with the key architects for the ESG, JPL has been successful at building a prototype for AR5. This presentation will review the software architecture, including core principles, models and interfaces, the Climate Data Exchange project and specific goals to support access to both observational data and models for AR5. It will highlight the progress

  7. Building energy modeling for green architecture and intelligent dashboard applications

    NASA Astrophysics Data System (ADS)

    DeBlois, Justin

    Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied a passive architectural technique, the roof solar chimney, for reducing the cooling load in homes. Three models of the chimney were created: a zonal building energy model, a computational fluid dynamics model, and a numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it reliably for building simulation. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high- and low-efficiency constructions and three ventilation strategies. The results showed that in four US climates, the roof solar chimney yields significant cooling load energy savings of up to 90%. After developing this new method for the small-scale representation of a passive architecture technique in BEM, the study expanded its scope to address a fundamental issue in modeling: the representation of uncertainty in, and improvement of, occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy-efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration into the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the

  8. Modeling excitation energy transfer in multi-BODIPY architectures.

    PubMed

    Azarias, Cloé; Russo, Roberto; Cupellini, Lorenzo; Mennucci, Benedetta; Jacquemin, Denis

    2017-02-15

    Excitation energy transfer (EET), which allows the concentration of energy, has been investigated in several multi-BODIPY architectures with the help of an approach coupling time-dependent density functional theory to an implicit solvation scheme, the polarizable continuum model. We first considered several strategies to compute the electronic coupling in a dyad, varying the size of the donor/acceptor units, the bridge, the geometries and the conformations. We next studied the electronic coupling in three different architectures for which the EET rate constants have been experimentally measured, both from luminescence and transient absorption data and from Förster theory. A good agreement with experimental values was obtained. Finally, in an effort to further improve these systems, we designed several series of BODIPY triads, investigating the effects of acidochromism, core modifications, the position of the linkage and chemical substitutions on the EET coupling and rate constant. We show that several architectures allow us to increase the EET rate by one order of magnitude compared to the original compound.
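
    The Förster theory mentioned above relates the EET rate to the donor lifetime and the donor-acceptor separation via the Förster radius. A minimal sketch of that textbook relation, with hypothetical BODIPY-like parameter values (not the quantum-chemical coupling calculation used in the paper):

```python
def forster_rate(tau_donor_ns, R0_nm, R_nm):
    """Förster EET rate in 1/ns: k = (1/tau_D) * (R0 / R)**6.

    tau_donor_ns: donor excited-state lifetime (ns)
    R0_nm: Forster radius, where transfer efficiency is 50% (nm)
    R_nm: donor-acceptor separation (nm)
    """
    return (1.0 / tau_donor_ns) * (R0_nm / R_nm) ** 6

# Hypothetical dyad: 4 ns donor lifetime, 5 nm Forster radius.
assert abs(forster_rate(4.0, 5.0, 5.0) - 0.25) < 1e-12  # at R = R0, k = 1/tau
```

    The steep R**-6 dependence is why small structural changes (bridge length, linkage position) can shift the EET rate by an order of magnitude.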

  9. Low cost high throughput pipelined architecture of 2-D 8 × 8 integer transforms for H.264/AVC

    NASA Astrophysics Data System (ADS)

    Sharma, Meeturani; Durga Tiwari, Honey; Cho, Yong Beom

    2013-08-01

    In this article, we present the implementation of a high-throughput two-dimensional (2-D) 8 × 8 forward and inverse integer DCT transform for H.264. Using matrix decomposition and matrix operations such as the Kronecker product and direct sum, the forward and inverse integer transforms can be represented using simple addition operations. The dual-clocked pipelined structure of the proposed implementation uses non-floating-point adders and does not require any transpose memory. Hardware synthesis shows that the maximum operating frequency of the proposed pipelined architecture is 1.31 GHz, which achieves a 21.05 Gpixels/s throughput rate at a hardware cost of 42932 gates. The high throughput and low hardware cost make the proposed design useful for real-time H.264/AVC high-definition processing.
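
    For reference, the arithmetic behind such designs is a separable integer transform Y = C X C^T. The sketch below uses what I believe is the standard H.264/AVC High-profile 8 × 8 integer basis (small integer entries, so hardware needs only adds and shifts); it illustrates the mathematics only, not the article's pipelined architecture.

```python
# H.264/AVC High-profile 8x8 integer transform basis (believed standard;
# all entries are small integers realizable with adds and shifts).
C = [
    [ 8,   8,   8,   8,   8,   8,   8,   8],
    [12,  10,   6,   3,  -3,  -6, -10, -12],
    [ 8,   4,  -4,  -8,  -8,  -4,   4,   8],
    [10,  -3, -12,  -6,   6,  12,   3, -10],
    [ 8,  -8,  -8,   8,   8,  -8,  -8,   8],
    [ 6, -12,   3,  10, -10,  -3,  12,  -6],
    [ 4,  -8,   8,  -4,  -4,   8,  -8,   4],
    [ 3,  -6,  10, -12,  12, -10,   6,  -3],
]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(8)) for j in range(8)]
            for i in range(8)]

def forward_8x8(block):
    """Unscaled 2-D forward transform Y = C * X * C^T."""
    ct = [list(row) for row in zip(*C)]
    return matmul(matmul(C, block), ct)

# A flat block concentrates all energy in the DC coefficient.
flat = [[1] * 8 for _ in range(8)]
Y = forward_8x8(flat)
assert Y[0][0] == 4096
assert all(Y[i][j] == 0 for i in range(8) for j in range(8) if (i, j) != (0, 0))
```

    Hardware implementations such as the one in the article factor these matrix products into butterfly stages so the 2-D transform needs no multipliers and, with a suitably interleaved pipeline, no transpose memory.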

  10. Probabilistic logic modeling of network reliability for hybrid network architectures

    SciTech Connect

    Wyss, G.D.; Schriner, H.K.; Gaylor, T.R.

    1996-10-01

    Sandia National Laboratories has found that the reliability and failure modes of current-generation network technologies can be effectively modeled using fault-tree-based probabilistic logic modeling (PLM) techniques. We have developed fault tree models that include various hierarchical networking technologies and classes of components interconnected in a wide variety of typical and atypical configurations. In this paper we discuss the types of results that can be obtained from PLMs and why these results are of great practical value to network designers and analysts. After providing some mathematical background, we describe the "plug-and-play" fault tree analysis methodology that we have developed for modeling connectivity and the provision of network services in several current-generation network architectures. Finally, we demonstrate the flexibility of the method by modeling the reliability of a hybrid example network that contains several interconnected ethernet, FDDI, and token ring segments. 11 refs., 3 figs., 1 tab.
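
    The gate logic underlying such fault-tree models reduces to two probability rules for independent failures: an AND gate (redundant parallel paths must all fail) multiplies failure probabilities, while an OR gate (any series element failing brings the path down) multiplies survival probabilities. A minimal sketch with hypothetical component probabilities, not Sandia's tool:

```python
def and_gate(failure_probs):
    """Top event occurs only if every redundant input fails (parallel paths)."""
    p = 1.0
    for q in failure_probs:
        p *= q
    return p

def or_gate(failure_probs):
    """Top event occurs if any series element fails (independent failures)."""
    ok = 1.0
    for q in failure_probs:
        ok *= 1.0 - q
    return 1.0 - ok

# Hypothetical hybrid segment: two redundant backbone links (AND),
# in series with a single router (OR over the pair and the router).
backbone = and_gate([0.01, 0.02])       # both links must fail: 2e-4
segment = or_gate([backbone, 0.001])    # backbone pair or router fails
assert abs(backbone - 2e-4) < 1e-12
assert abs(segment - 0.0011998) < 1e-9
```

    "Plug-and-play" composition in the paper's sense amounts to nesting such gates to mirror the network topology, so a segment's top-event probability becomes an input to the next level of the tree.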

  11. Crystal Level Continuum Modeling of Phase Transformations: The α ↔ ε Transformation in Iron

    SciTech Connect

    Barton, N R; Benson, D J; Becker, R; Bykov, Y; Caplan, M

    2004-10-18

    We present a crystal level model for thermo-mechanical deformation with phase transformation capabilities. The model is formulated to allow for large pressures (on the order of the elastic moduli) and makes use of a multiplicative decomposition of the deformation gradient. Elastic and thermal lattice distortions are combined into a single lattice stretch to allow the model to be used in conjunction with general equation of state relationships. Phase transformations change the mass fractions of the material constituents. The driving force for phase transformations includes terms arising from mechanical work, from the temperature dependent chemical free energy change on transformation, and from interaction energy among the constituents. Deformation results from both these phase transformations and elasto-viscoplastic deformation of the constituents themselves. Simulation results are given for the α to ε phase transformation in iron. Results include simulations of shock induced transformation in single crystals and of compression of polycrystals. Results are compared to available experimental data.

  12. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Shao, Howard M. (Inventor); Truong, Trieu-Kie (Inventor); Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor)

    1989-01-01

    Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and the transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps, prior to adding the received RS coded message to produce a decoded output message.

  13. Architecture in motion: A model for music composition

    NASA Astrophysics Data System (ADS)

    Variego, Jorge Elias

    2011-12-01

    Speculations regarding the relationship between music and architecture go back to the very origins of these disciplines. Throughout history, these links have always reaffirmed that music and architecture are analogous art forms that diverge only in their object of study. In the 1st c. BCE Vitruvius conceived Architecture as "one of the most inclusive and universal human activities" where the architect should be educated in all the arts, having a vast knowledge of history, music and philosophy. In the 18th c., the German thinker Johann Wolfgang von Goethe described Architecture as "frozen music". More recently, in the 20th c., Iannis Xenakis studied the structuring principles shared by Music and Architecture, creating his own "models" of musical composition based on mathematical principles and geometric constructions. The goal of this document is to propose a compositional method that functions as a translator between the acoustical properties of a room and music, to facilitate the creation of musical works that not only happen within an enclosed space but also intentionally interact with the space. Acoustical properties of rooms such as reverberation time, frequency response and volume are measured and systematically organized in correspondence with orchestrational parameters. The musical compositions created after the proposed model are evocative of the spaces on which they are based. They are meant to be performed in any space, not exclusively in the one where the acoustical measurements were obtained. The visual component of architectural design is disregarded; the room is considered a musical instrument, with its particular sound qualities and resonances. Compositions using the proposed model will not result as sonified shapes; they will be musical works literally "tuned" to a specific space. This Architecture in motion is an attempt to adopt scientific research to the service of a creative activity and to let the aural properties of

  14. Predicting chromatin architecture from models of polymer physics.

    PubMed

    Bianco, Simona; Chiariello, Andrea M; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2017-01-09

    We review the picture of chromatin large-scale 3D organization emerging from the analysis of Hi-C data and polymer modeling. In higher mammals, Hi-C contact maps reveal a complex higher-order organization, extending from the sub-Mb to chromosomal scales, hierarchically folded in a structure of domains-within-domains (metaTADs). The domain folding hierarchy is partially conserved throughout differentiation, and deeply correlated to epigenomic features. Rearrangements in the metaTAD topology relate to gene expression modifications: in particular, in neuronal differentiation models, topologically associated domains (TADs) tend to have coherent expression changes within architecturally conserved metaTAD niches. To identify the nature of architectural domains and their molecular determinants within a principled approach, we discuss models based on polymer physics. We show that basic concepts of interacting polymer physics explain chromatin spatial organization across chromosomal scales and cell types. The 3D structure of genomic loci can be derived with high accuracy, and its molecular determinants can be identified by crossing information with epigenomic databases. In particular, we illustrate the case of the Sox9 locus, linked to human congenital disorders. The model's in silico predictions of the effects of genomic rearrangements are confirmed by available 5C data. This can help establish new diagnostic tools for diseases linked to chromatin mis-folding, such as congenital disorders and cancer.

  15. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on dependencies that unfold as the model progresses down a time line. Events are placed in a single time line, each event being added to a queue managed by a planner. Progression down the time line is guided by rules managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements, and the design is then derived from the requirements.
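
    The static PRA step described above (Monte Carlo sampling over initiating-event probabilities, collecting outcomes per trial) can be sketched in a few lines. Event names, probabilities, and the consequence function are illustrative, not IMM values; the dynamic extension would additionally order events on a time line with planner/scheduler rules.

```python
import random

def static_pra(event_probs, consequence, trials=100_000, seed=42):
    """Monte Carlo PRA: sample which initiating events occur in each
    trial, score the outcome, and return the mean consequence."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        occurred = {e for e, p in event_probs.items() if rng.random() < p}
        total += consequence(occurred)
    return total / trials

# Hypothetical medical events with a simple severity score (event count).
probs = {"event_a": 0.10, "event_b": 0.02}
score = static_pra(probs, lambda s: len(s))
assert abs(score - 0.12) < 0.01  # expected number of events per trial
```

    With 100,000 trials the estimate of the mean concentrates tightly around the analytic expectation (0.10 + 0.02 = 0.12 events per trial), which is why the tolerance above is safe.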

  16. A Functional Model of Sensemaking in a Neurocognitive Architecture

    PubMed Central

    Lebiere, Christian; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R.

    2013-01-01

    Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment. PMID:24302930

  17. Parametric Wave Transformation Models on Natural Beaches

    NASA Astrophysics Data System (ADS)

    Apotsos, A. A.; Raubenheimer, B.; Elgar, S.; Guza, R. T.

    2006-12-01

    Seven parametric models for wave height transformation across the surf zone [e.g., Thornton and Guza, 1983] are tested with observations collected between the shoreline and about 5-m water depth during 2 experiments on a barred beach near Duck, NC, and between the shoreline and about 3.5-m water depth during 2 experiments on unbarred beaches near La Jolla, CA. Offshore wave heights ranged from about 0.1 to 3.0 m. Beach profiles were surveyed approximately every other day. The models predict the observations well. Root-mean-square errors between observed and simulated wave heights are small in water depths h > 2 m (average rms errors < 10%), and increase with decreasing depth for h < 2 m (average rms errors > 20%). The lowest rms errors (i.e., the most accurate predictions) are achieved by tuning a free parameter, γ, in each model. To tune the models accurately to the data considered here, observations are required at 3 to 5 locations, and must span the surf zone. No tuned or untuned model provides the best predictions for all data records in any one experiment. The best fit γ's for each model-experiment pair are represented well with an empirical hyperbolic tangent curve based on the inverse Iribarren number. In 3 of the 4 data sets, estimating γ for each model using an average curve based on the predictions and observations from all 4 experiments typically improves model-data agreement relative to using a constant or previously determined empirical γ. The best fit γ's at the 4th experiment (conducted off La Jolla, CA) are roughly 20% smaller than the γ's for the other 3 experiments, and thus using the experiment-averaged curve increases prediction errors. Possible causes for the smaller γ's at the 4th experiment will be discussed. Funded by ONR and NSF.
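
    The structure of such parametric transformation models can be illustrated with a deliberately simplified version: shallow-water shoaling capped by depth-limited breaking H <= γh, with γ given by a hyperbolic-tangent function of the inverse Iribarren number. This is a hedged sketch, not the Thornton and Guza [1983] energy-flux model, and the tanh coefficients below are illustrative placeholders, not the fitted values from these experiments.

```python
import math

def gamma_tanh(inverse_iribarren, a=0.3, b=0.45, c=0.9):
    """Empirical tanh form for the breaking parameter gamma.
    Coefficients a, b, c are illustrative, not the experiment-fitted values."""
    return a + b * math.tanh(c * inverse_iribarren)

def shoal_and_break(H0, depths, gamma):
    """Simplified cross-shore wave height transformation:
    Green's-law shoaling (H ~ h**-1/4) capped by breaking H <= gamma * h."""
    h0 = depths[0]
    heights = []
    for h in depths:
        H = H0 * (h0 / h) ** 0.25   # shoaling from the offshore depth h0
        heights.append(min(H, gamma * h))
    return heights

H = shoal_and_break(1.0, [5.0, 3.0, 2.0, 1.0, 0.5], gamma=0.42)
assert abs(H[-1] - 0.21) < 1e-12  # saturated surf zone: H = gamma * h
```

    In the full parametric models, γ enters a wave-energy flux balance with a breaking dissipation term rather than a hard cap, which is what the free-parameter tuning in the abstract adjusts.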

  18. Integrating Physiology and Architecture in Models of Fruit Expansion.

    PubMed

    Cieslak, Mikolaj; Cheddadi, Ibrahim; Boudon, Frédéric; Baldazzi, Valentina; Génard, Michel; Godin, Christophe; Bertin, Nadia

    2016-01-01

    Architectural properties of a fruit, such as its shape, vascular patterns, and skin morphology, play a significant role in determining the distributions of water, carbohydrates, and nutrients inside the fruit. Understanding the impact of these properties on fruit quality is difficult because they develop over time and are highly dependent on both genetic and environmental controls. We present a 3D functional-structural fruit model that can be used to investigate effects of the principal architectural properties on fruit quality. We use a three-step modeling pipeline in the OpenAlea platform: (1) creating a 3D volumetric mesh representation of the internal and external fruit structure, (2) generating a complex network of vasculature that is embedded within this mesh, and (3) integrating aspects of the fruit's function, such as water and dry matter transport, with the fruit's structure. We restrict our approach to the phase where fruit growth is mostly due to cell expansion and the fruit has already differentiated into different tissue types. We show how fruit shape affects vascular patterns and, as a consequence, the distribution of sugar/water in tomato fruit. Furthermore, we show that strong interaction between tomato fruit shape and vessel density induces, independently of size, an important and contrasted gradient of water supply from the pedicel to the blossom end of the fruit. We also demonstrate how skin morphology related to microcracking distribution affects the distribution of water and sugars inside nectarine fruit. Our results show that such a generic model permits detailed studies of various, unexplored architectural features affecting fruit quality development.

  19. Integrating Physiology and Architecture in Models of Fruit Expansion

    PubMed Central

    Cieslak, Mikolaj; Cheddadi, Ibrahim; Boudon, Frédéric; Baldazzi, Valentina; Génard, Michel; Godin, Christophe; Bertin, Nadia

    2016-01-01

    Architectural properties of a fruit, such as its shape, vascular patterns, and skin morphology, play a significant role in determining the distributions of water, carbohydrates, and nutrients inside the fruit. Understanding the impact of these properties on fruit quality is difficult because they develop over time and are highly dependent on both genetic and environmental controls. We present a 3D functional-structural fruit model that can be used to investigate effects of the principal architectural properties on fruit quality. We use a three-step modeling pipeline in the OpenAlea platform: (1) creating a 3D volumetric mesh representation of the internal and external fruit structure, (2) generating a complex network of vasculature that is embedded within this mesh, and (3) integrating aspects of the fruit's function, such as water and dry matter transport, with the fruit's structure. We restrict our approach to the phase where fruit growth is mostly due to cell expansion and the fruit has already differentiated into different tissue types. We show how fruit shape affects vascular patterns and, as a consequence, the distribution of sugar/water in tomato fruit. Furthermore, we show that strong interaction between tomato fruit shape and vessel density induces, independently of size, an important and contrasted gradient of water supply from the pedicel to the blossom end of the fruit. We also demonstrate how skin morphology related to microcracking distribution affects the distribution of water and sugars inside nectarine fruit. Our results show that such a generic model permits detailed studies of various, unexplored architectural features affecting fruit quality development. PMID:27917187

  20. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction is given to plasma simulation using computers and to the difficulties encountered on currently available computers. Through the use of an analysis and measurement methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization can be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  1. Building Structure Design as an Integral Part of Architecture: A Teaching Model for Students of Architecture

    ERIC Educational Resources Information Center

    Unay, Ali Ihsan; Ozmen, Cengiz

    2006-01-01

    This paper explores the place of structural design within undergraduate architectural education. The role and format of lecture-based structure courses within an education system organized around the architectural design studio are discussed, along with the most prominent problems and proposed solutions. The fundamental concept of the current teaching…

  2. An architecture model for multiple disease management information systems.

    PubMed

    Chen, Lichin; Yu, Hui-Chu; Li, Hao-Chun; Wang, Yi-Van; Chen, Huang-Jen; Wang, I-Ching; Wang, Chiou-Shiang; Peng, Hui-Yu; Hsu, Yu-Ling; Chen, Chi-Huang; Chuang, Lee-Ming; Lee, Hung-Chang; Chung, Yufang; Lai, Feipei

    2013-04-01

    Disease management is a program which attempts to overcome the fragmentation of the healthcare system and improve the quality of care. Many studies have proven the effectiveness of disease management. However, case managers spend the majority of their time on documentation and on coordinating the members of the care team. They need a tool to support their daily practice and to optimize inefficient workflows. Several discussions have indicated that information technology plays an important role in the era of disease management, and applications have been developed; however, it is inefficient to develop an information system for each disease management program individually. The aim of this research is to support the work of disease management, reform inefficient workflows, and propose an architecture model that enhances reusability and saves time in information system development. The proposed architecture model has been successfully implemented in two disease management information systems, and the result was evaluated through reusability analysis, time-consumed analysis, pre- and post-implementation workflow analysis, and a user questionnaire survey. The reusability of the proposed model was high, less than half of the time was consumed, and the workflow was improved. The overall user response is positive, and the supportiveness during daily workflow is high. The system empowers case managers with better information and leads to better decision making.

  3. Multiresolution Stochastic Models, Data Fusion, and Wavelet Transforms

    DTIC Science & Technology

    1992-05-01

    …based on the wavelet transform. The statistical structure of these models is Markovian in scale, and in addition the eigenstructure of these models is … given by the wavelet transform. The implication of this is that by using the wavelet transform we can convert the apparently complicated problem of … plays the role of the time-like variable. In addition we show how the wavelet transform, which is defined for signals that extend from -infinity to …
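
    The scale-recursive, Markov-in-scale structure this record alludes to is easiest to see in the simplest wavelet, the Haar transform: each level replaces a signal with pairwise averages (coarse scale) and pairwise differences (detail). This is a generic textbook sketch, not the report's estimator.

```python
def haar_step(signal):
    """One level of the (unnormalised) Haar wavelet transform:
    pairwise averages (coarse approximation) and differences (detail)."""
    averages = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    details = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return averages, details

def haar(signal):
    """Full decomposition: fine-to-coarse pyramid of detail coefficients.
    Expects a signal whose length is a power of two."""
    details = []
    while len(signal) > 1:
        signal, d = haar_step(signal)
        details.append(d)
    return signal[0], details

mean, details = haar([4, 2, 6, 6])
assert mean == 4.5
assert details == [[1.0, 0.0], [-1.5]]
```

    The coarse value at each scale depends only on the scale above it, which is the Markov-in-scale property that makes multiresolution data fusion tractable.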

  4. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data-parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied to two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices, including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assess their efficiency for each distinct architecture. We report significant speedups for single-thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
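
    The Roofline correlation mentioned above amounts to comparing a kernel's arithmetic intensity against the machine balance: attainable performance is the minimum of the compute ceiling and the bandwidth ceiling scaled by intensity. A minimal sketch with hypothetical device numbers (not the paper's SandyBridge/Haswell/Xeon Phi figures):

```python
def roofline(peak_gflops, peak_gbs, intensity_flops_per_byte):
    """Attainable performance (GFLOP/s) under the Roofline model:
    min(compute ceiling, memory bandwidth * arithmetic intensity)."""
    return min(peak_gflops, peak_gbs * intensity_flops_per_byte)

# Hypothetical node: 500 GFLOP/s peak compute, 100 GB/s memory bandwidth.
assert roofline(500, 100, 2.0) == 200   # bandwidth-bound kernel
assert roofline(500, 100, 8.0) == 500   # compute-bound kernel
```

    A kernel sitting well below its roofline indicates untapped headroom, which is how the authors use the model to judge whether each optimisation has made a kernel as efficient as the architecture allows.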

  5. The Application of Architecture Frameworks to Modelling Exploration Operations Costs

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2006-01-01

    Developments in architectural frameworks and system-of-systems thinking have provided useful constructs for systems engineering. DoDAF concepts, language, and formalisms, in particular, provide a natural way of conceptualizing an operations cost model applicable to NASA's space exploration vision. Not all DoDAF products have meaning or apply to a DoDAF-inspired operations cost model, but this paper describes how such DoDAF concepts as nodes, systems, and operational activities relate to the development of a model to estimate exploration operations costs. The paper discusses the specific implementation for the Mission Operations Directorate (MOD) operational functions/activities currently being developed and presents an overview of how this powerful representation can apply to robotic space missions as well.

  6. T:XML: A Tool Supporting User Interface Model Transformation

    NASA Astrophysics Data System (ADS)

    López-Jaquero, Víctor; Montero, Francisco; González, Pascual

    Model-driven development of user interfaces is based on the transformation of an abstract specification into the final user interface the user will interact with. The design of transformation rules to carry out this transformation process is a key issue in any model-driven user interface development approach. In this paper, we introduce T:XML, an integrated development environment for managing, creating and previewing transformation rules. The tool supports the specification of transformation rules by using a graphical notation that works on the basis of the transformation of the input model into a graph-based representation. T:XML allows the design and execution of transformation rules in an integrated development environment. Furthermore, the designer can also preview how the generated user interface will look after the transformations have been applied. These previewing capabilities can be used to quickly create prototypes to discuss with the users in user-centered design methods.

  7. Polygonal Shapes Detection in 3d Models of Complex Architectures

    NASA Astrophysics Data System (ADS)

    Benciolini, G. B.; Vitti, A.

    2015-02-01

    A sequential application of two global models defined in a variational framework is proposed for the detection of polygonal shapes in 3D models of complex architectures. As a first step, the procedure involves the use of the Mumford and Shah (1989) 1st-order variational model in dimension two (gridded height data are processed). In the Mumford-Shah model an auxiliary function detects the sharp changes, i.e., the discontinuities, of a piecewise smooth approximation of the data. The Mumford-Shah model requires the global minimization of a specific functional to simultaneously produce both the smooth approximation and its discontinuities. In the proposed procedure, the edges of the smooth approximation, derived by a specific processing of the auxiliary function, are then processed using the Blake and Zisserman (1987) 2nd-order variational model in dimension one (edges are processed in the plane). This second step describes the edges of an object by means of a piecewise almost-linear approximation of the input edges and detects sharp changes in their first derivative, i.e., corners. The Mumford-Shah variational model is used in two dimensions, accepting the original data as primary input. The Blake-Zisserman variational model is used in one dimension to refine the description of the edges. Among all the boundaries detected by the Mumford-Shah model, those whose shape is close to a polygon are selected by keeping only the boundaries for which the Blake-Zisserman model identified discontinuities in their first derivative. The outputs of the procedure are hence shapes, derived from 3D geometric data, that can be considered polygons. The procedure is suitable for, but not limited to, the detection of objects such as footprints of polygonal buildings, building facade boundaries or window contours. The procedure is applied to a height model of the building of the Engineering
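    For reference, the 1st-order Mumford-Shah functional referred to above couples a piecewise smooth approximation u of the height data g with a discontinuity set K. This is the standard textbook form with generic weights, not the authors' specific discretization:

```latex
E(u, K) = \int_{\Omega \setminus K} |\nabla u|^2 \, dx
        + \mu \int_{\Omega} (u - g)^2 \, dx
        + \nu \, \mathcal{H}^{1}(K)
```

    Minimizing E yields the smooth approximation u and its discontinuity set K simultaneously; in numerical practice K is typically replaced by a smooth auxiliary function (the Ambrosio-Tortorelli approximation), which matches the role of the auxiliary function described in the abstract.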

  8. A conceptual framework to design a dimensional model based on the HL7 Clinical Document Architecture.

    PubMed

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2014-01-01

    This paper proposes a conceptual framework to design a dimensional model based on the HL7 Clinical Document Architecture (CDA) standard. The adoption of this framework can represent a possible solution to facilitate the integration of heterogeneous information systems in a clinical data warehouse. This can simplify the Extract, Transform and Load (ETL) procedures that are considered the most time-consuming and expensive part of the data warehouse development process. The paper describes the main activities to be carried out to design the dimensional model outlining the main advantages in the application of the proposed framework. The feasibility of our approach is also demonstrated providing a case study to define clinical indicators for quality assessment.
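    As an illustration of the kind of mapping such a framework targets, a CDA-like observation entry can be flattened into a star-schema fact row keyed to dimension tables, which is the step the ETL procedures automate. The table layout and field names below are hypothetical, not taken from the paper:

```python
# Hypothetical star schema: a fact table of lab observations keyed
# to patient and time dimensions, as an ETL step might produce it
# from CDA documents.
patient_dim = {1: {"patient_key": 1, "sex": "F", "birth_year": 1970}}
time_dim = {20140105: {"time_key": 20140105, "year": 2014, "month": 1}}

def to_fact_row(cda_observation, patient_key, time_key):
    """Flatten one CDA-like observation into a fact-table row."""
    return {
        "patient_key": patient_key,
        "time_key": time_key,
        "code": cda_observation["code"],   # e.g. a LOINC observation code
        "value": cda_observation["value"],
        "unit": cda_observation["unit"],
    }

obs = {"code": "2339-0", "value": 5.4, "unit": "mmol/L"}
fact = to_fact_row(obs, patient_key=1, time_key=20140105)
```

    Clinical indicators are then simple aggregations over the fact table grouped by dimension attributes.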

  9. 3D reconstruction and dynamic modeling of root architecture in situ and its application to crop phosphorus research.

    PubMed

    Fang, Suqin; Yan, Xiaolong; Liao, Hong

    2009-12-01

    Root architecture plays important roles in plant water and nutrient acquisition. However, accurate modeling of the root system that provides a realistic representation of roots in the soil is limited by a lack of appropriate tools for the non-destructive and precise measurement of the root system architecture in situ. Here we describe a root growth system in which the roots grow in a solid gel matrix that was used to reconstruct 3D root architecture in situ and dynamically simulate its changes under various nutrient conditions with a high degree of precision. A 3D laser scanner combined with a transparent gel-based growth system was used to capture 3D images of roots. The root system skeleton was extracted using a skeleton extraction method based on the Hough transformation, and mesh modeling using Ball B-splines was employed. We successfully used this system to reconstruct rice and soybean root architectures and determine their changes under various phosphorus (P) supply conditions. Our results showed that the 3D root architecture parameters that were dynamically calculated based on the skeletonization and simulation of root systems were significantly correlated with the biomass and P content of rice and soybean based on both the simulation system and previous reports. Therefore, this approach provides a novel technique for the study of crop root growth and its adaptive changes to various environmental conditions.
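    The Hough-transform step underlying the skeleton extraction can be illustrated with the classic line-detection accumulator: points vote in (theta, rho) parameter space, and collinear points pile up in the same bin. This is a generic sketch of the technique, not the authors' 3D skeletonization code:

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Vote in (theta, rho) space; collinear points share a bin."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            # Normal-form line parameter: rho = x*cos(theta) + y*sin(theta)
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, round(rho / rho_res))
            acc[key] = acc.get(key, 0) + 1
    return acc

# Five points on the horizontal line y = 2: the winning bin
# collects one vote per point.
pts = [(x, 2.0) for x in range(5)]
acc = hough_lines(pts)
best_bin, votes = max(acc.items(), key=lambda kv: kv[1])
```

    A skeletonization pipeline would repeatedly take the strongest bins as detected line segments and remove their supporting points.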

  10. 3D model tools for architecture and archaeology reconstruction

    NASA Astrophysics Data System (ADS)

    Vlad, Ioan; Herban, Ioan Sorin; Stoian, Mircea; Vilceanu, Clara-Beatrice

    2016-06-01

    The main objective of architectural and patrimonial survey is to provide a precise documentation of the status quo of the surveyed objects (monuments, buildings, archaeological objects and sites) for preservation and protection, for scientific studies and restoration purposes, and for presentation to the general public. Cultural heritage documentation includes an interdisciplinary approach having as its purpose an overall understanding of the object itself and an integration of the information which characterizes it. The accuracy and the precision of the model are directly influenced by the quality of the measurements realized in the field and by the quality of the software. The software is in the process of continuous development, which brings many improvements. On the other hand, compared to aerial photogrammetry, close range photogrammetry and particularly architectural photogrammetry is not limited to vertical photographs with special cameras. The methodology of terrestrial photogrammetry has changed significantly and various photographic acquisitions are widely in use. In this context, the present paper brings forward a comparative study of TLS (Terrestrial Laser Scanner) and digital photogrammetry for 3D modeling. The authors take into account the accuracy of the 3D models obtained, the overall costs involved for each technology and method, and the 4th dimension - time. The paper proves its applicability as photogrammetric technologies are nowadays used on a large scale for obtaining the 3D model of cultural heritage objects, efficacious in their assessment and monitoring, thus contributing to historic conservation. Its importance also lies in highlighting the advantages and disadvantages of each method used - a very important issue for both the industrial and scientific segment when facing decisions such as in which technology to invest more research and funds.

  11. Optimization of Forward Wave Modeling on Contemporary HPC Architectures

    SciTech Connect

    Krueger, Jens; Micikevicius, Paulius; Williams, Samuel

    2012-07-20

    Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI) and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both the single-node and distributed memory environments in order to satisfy industry’s demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations (both on-node computation and inter-node communication), highlighting the importance of NUMA-aware approaches to MPI communication. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny-Cours nodes while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.
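    Common subexpression elimination of the kind introduced for the TTI kernel can be shown on a toy 1-D stencil: a product that is invariant across the loop is computed once and reused, reducing the multiply count without changing the result. This is a generic sketch, not the paper's actual TTI expressions:

```python
def stencil_naive(u, c1, c2):
    """Each output point recomputes the shared product c1*c2."""
    out = []
    for i in range(1, len(u) - 1):
        out.append(c1 * c2 * u[i - 1] + c1 * c2 * u[i + 1] - 2.0 * c1 * c2 * u[i])
    return out

def stencil_cse(u, c1, c2):
    """The invariant product is hoisted out of the loop and factored."""
    c12 = c1 * c2                      # common subexpression, computed once
    out = []
    for i in range(1, len(u) - 1):
        out.append(c12 * (u[i - 1] + u[i + 1] - 2.0 * u[i]))
    return out

u = [0.0, 1.0, 4.0, 9.0, 16.0]
same = stencil_naive(u, 0.5, 2.0) == stencil_cse(u, 0.5, 2.0)
```

    In a real TTI kernel the eliminated subexpressions are trigonometric combinations of the tilt angles, so the savings per grid point are far larger than in this toy.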

  12. Java Architecture for Detect and Avoid Extensibility and Modeling

    NASA Technical Reports Server (NTRS)

    Santiago, Confesor; Mueller, Eric Richard; Johnson, Marcus A.; Abramson, Michael; Snow, James William

    2015-01-01

    Unmanned aircraft will be equipped with a detect-and-avoid (DAA) system that enables them to comply with the requirement to "see and avoid" other aircraft, an important layer in the overall set of procedural, strategic and tactical separation methods designed to prevent mid-air collisions. This paper describes a capability called Java Architecture for Detect and Avoid Extensibility and Modeling (JADEM), developed to prototype and help evaluate various DAA technological requirements by providing a flexible and extensible software platform that models all major detect-and-avoid functions. Figure 1 illustrates JADEM's architecture. The surveillance module can be actual equipment on the unmanned aircraft or simulators that model the process by which on-board sensors detect other aircraft and provide track data to the traffic display. The track evaluation function evaluates each detected aircraft and decides whether to provide an alert to the pilot and its severity. Guidance is a combination of intruder track information, alerting, and avoidance/advisory algorithms behind the tools shown on the traffic display to aid the pilot in determining a maneuver to avoid a loss of well clear. All these functions are designed with a common interface and configurable implementation, which is critical in exploring DAA requirements. To date, JADEM has been utilized in three computer simulations of the National Airspace System, three pilot-in-the-loop experiments using a total of 37 professional UAS pilots, and two flight tests using NASA's Predator-B unmanned aircraft, named Ikhana. The data collected have directly informed the quantitative separation standard for "well clear", the safety case, requirements development, and the operational environment for the DAA minimum operational performance standards. This work was performed by the Separation Assurance/Sense and Avoid Interoperability team under NASA's UAS Integration in the NAS project.

  13. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models from digital photographs with software packages such as Maxon Cinema 4D, Autodesk 3ds Max or Maya still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  14. An avionics scenario and command model description for Space Generic Open Avionics Architecture (SGOAA)

    NASA Technical Reports Server (NTRS)

    Stovall, John R.; Wray, Richard B.

    1994-01-01

    This paper presents a description of a model for a space vehicle operational scenario and the commands for avionics. This model will be used in developing a dynamic architecture simulation model using the Statemate CASE tool for validation of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA has been proposed as an avionics architecture standard to NASA through its Strategic Avionics Technology Working Group (SATWG) and has been accepted by the Society of Automotive Engineers (SAE) for conversion into an SAE Avionics Standard. This architecture was developed for the Flight Data Systems Division (FDSD) of the NASA Johnson Space Center (JSC) by the Lockheed Engineering and Sciences Company (LESC), Houston, Texas. This SGOAA includes a generic system architecture for the entities in spacecraft avionics, a generic processing external and internal hardware architecture, and a nine class model of interfaces. The SGOAA is both scalable and recursive and can be applied to any hierarchical level of hardware/software processing systems.

  15. Fortran Transformational Tools in Support of Scientific Application Development for Petascale Computer Architectures

    SciTech Connect

    Sottille, Matthew

    2013-09-12

    This document is the final report for a multi-year effort building infrastructure to support tool development for Fortran programs. We also investigated static analysis and code transformation methods relevant to scientific programmers who are writing Fortran programs for petascale-class high performance computing systems. This report details our accomplishments, technical approaches, and provides information on where the research results and code may be obtained from an open source software repository. The report for the first year of the project that was performed at the University of Oregon prior to the PI moving to Galois, Inc. is included as an appendix.

  16. Transform continental margins - part 1: Concepts and models

    NASA Astrophysics Data System (ADS)

    Basile, Christophe

    2015-10-01

    This paper reviews the geodynamic concepts and models related to transform continental margins, and their implications for the structure of these margins. Simple kinematic models of transform faulting associated with continental rifting and oceanic accretion allow three successive stages of evolution to be defined: intra-continental transform faulting, active transform margin, and passive transform margin. Each part of the transform margin experiences these three stages, but the evolution is diachronous along the margin. Both the duration of each stage and the cumulated strike-slip deformation increase from one extremity of the margin (inner corner) to the other (outer corner). Initiation of transform faulting is related to the obliquity between the trend of the lithospheric deformed zone and the relative displacement of the lithospheric plates involved in divergence. In this oblique setting, alternating transform and divergent plate boundaries correspond to spatial partitioning of the deformation. Both the obliquity and the timing of partitioning influence the shape of transform margins. An oblique margin can be defined when oblique rifting is followed by oblique oceanic accretion. In this case, no transform margin should exist in the prolongation of the oceanic fracture zones. Vertical displacements along transform margins were mainly studied to explain the formation of marginal ridges. Numerous models were proposed, one of the most widely used being based on thermal exchanges between the oceanic and the continental lithospheres across the transform fault. But this model is compatible neither with numerical computations that include the flexural behavior of the lithosphere nor with the timing of vertical displacements and the lack of heating related to the passing of the oceanic accretion axis, as recorded by the Côte d'Ivoire-Ghana marginal ridge. Enhanced models are still needed; they should better take into account the erosion of the continental slope and the level of coupling

  17. Modeling cognitive and emotional processes: a novel neural network architecture.

    PubMed

    Khashman, Adnan

    2010-12-01

    In our continuous attempts to model natural intelligence and emotions in machine learning, many research works emerge with different methods that are often driven by engineering concerns and have the common goal of modeling human perception in machines. This paper aims to go further in that direction by investigating the integration of emotion at the structural level of cognitive systems using the novel emotional DuoNeural Network (DuoNN). This network has hidden-layer DuoNeurons, each of which has two embedded neurons: a dorsal neuron and a ventral neuron for cognitive and emotional data processing, respectively. When input visual stimuli are presented to the DuoNN, the dorsal cognitive neurons process local features while the ventral emotional neurons process the entire pattern. We present the computational model and the learning algorithm of the DuoNN, the parallel streaming method for the input information (cognitive and emotional), and a comparison between the DuoNN and a recently developed emotional neural network. Experimental results show that the DuoNN architecture, configuration, and additional emotional information processing yield higher recognition rates and faster learning and decision making.
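    The DuoNeuron described above (a cognitive neuron fed local features paired with an emotional neuron fed the whole pattern) might be sketched as follows. The weights, sizes, and combination rule here are illustrative assumptions, not the published model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def duo_neuron(local_patch, full_pattern, w_dorsal, w_ventral):
    """One hidden DuoNeuron: the dorsal (cognitive) neuron sees only a
    local patch of the input, while the ventral (emotional) neuron sees
    the entire pattern; their activations are merged into one output."""
    dorsal = sigmoid(sum(w * x for w, x in zip(w_dorsal, local_patch)))
    ventral = sigmoid(sum(w * x for w, x in zip(w_ventral, full_pattern)))
    # Illustrative combination rule: simple average of the two streams.
    return 0.5 * (dorsal + ventral)

pattern = [0.2, 0.8, 0.5, 0.1]
patch = pattern[:2]               # local features for the dorsal neuron
out = duo_neuron(patch, pattern, w_dorsal=[1.0, -1.0], w_ventral=[0.5] * 4)
```

    A full hidden layer would hold many such units, each with its own patch, trained jointly by backpropagation.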

  18. Developing a scalable modeling architecture for studying survivability technologies

    NASA Astrophysics Data System (ADS)

    Mohammad, Syed; Bounker, Paul; Mason, James; Brister, Jason; Shady, Dan; Tucker, David

    2006-05-01

    To facilitate interoperability of models in a scalable environment, and provide a relevant virtual environment in which Survivability technologies can be evaluated, the US Army Research Development and Engineering Command (RDECOM) Modeling Architecture for Technology Research and Experimentation (MATREX) Science and Technology Objective (STO) program has initiated the Survivability Thread which will seek to address some of the many technical and programmatic challenges associated with the effort. In coordination with different Thread customers, such as the Survivability branches of various Army labs, a collaborative group has been formed to define the requirements for the simulation environment that would in turn provide them a value-added tool for assessing models and gauge system-level performance relevant to Future Combat Systems (FCS) and the Survivability requirements of other burgeoning programs. An initial set of customer requirements has been generated in coordination with the RDECOM Survivability IPT lead, through the Survivability Technology Area at RDECOM Tank-automotive Research Development and Engineering Center (TARDEC, Warren, MI). The results of this project are aimed at a culminating experiment and demonstration scheduled for September, 2006, which will include a multitude of components from within RDECOM and provide the framework for future experiments to support Survivability research. This paper details the components with which the MATREX Survivability Thread was created and executed, and provides insight into the capabilities currently demanded by the Survivability faculty within RDECOM.

  19. The caBIG® Life Science Business Architecture Model

    PubMed Central

    Boyd, Lauren Becnel; Hunicke-Smith, Scott P.; Stafford, Grace A.; Freund, Elaine T.; Ehlman, Michele; Chandran, Uma; Dennis, Robert; Fernandez, Anna T.; Goldstein, Stephen; Steffen, David; Tycko, Benjamin; Klemm, Juli D.

    2011-01-01

    Motivation: Business Architecture Models (BAMs) describe what a business does, who performs the activities, where and when activities are performed, how activities are accomplished and which data are present. The purpose of a BAM is to provide a common resource for understanding business functions and requirements and to guide software development. The cancer Biomedical Informatics Grid (caBIG®) Life Science BAM (LS BAM) provides a shared understanding of the vocabulary, goals and processes that are common in the business of LS research. Results: LS BAM 1.1 includes 90 goals and 61 people and groups within Use Case and Activity Unified Modeling Language (UML) Diagrams. Here we report on the model's current release, LS BAM 1.1, its utility and usage, and plans for future use and continuing development for future releases. Availability and Implementation: The LS BAM is freely available as UML, PDF and HTML (https://wiki.nci.nih.gov/x/OFNyAQ). Contact: lbboyd@bcm.edu; laurenbboyd@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21450709

  20. An improved equivalent circuit model of radial mode piezoelectric transformer.

    PubMed

    Huang, Yihua; Huang, Wei

    2011-05-01

    In this paper, both the equivalent circuit models of the radial mode and the coupled thickness vibration mode of the radial mode piezoelectric transformer are deduced, and then with the Y-parameter matrix method and the dual-port network theory, an improved equivalent circuit model for the multilayer radial mode piezoelectric transformer is established. A radial mode transformer sample is tested to verify the equivalent circuit model. The experimental results show that the model proposed in this paper is more precise than the typical model.
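    The Y-parameter/two-port machinery such models rely on can be sketched generically: each section of the equivalent circuit is a 2x2 admittance matrix, two-ports sharing the same port voltages combine by entrywise addition, and port currents follow from [I] = [Y][V]. This is a textbook property of Y-parameters, with hypothetical admittance values rather than the paper's measured circuit:

```python
def y_parallel(y_a, y_b):
    """Two-ports connected in parallel (same port voltages) combine by
    entrywise addition of their 2x2 admittance (Y) matrices."""
    return [[y_a[i][j] + y_b[i][j] for j in range(2)] for i in range(2)]

def port_currents(y, v1, v2):
    """I1, I2 from the Y-parameter relation [I] = [Y][V]."""
    i1 = y[0][0] * v1 + y[0][1] * v2
    i2 = y[1][0] * v1 + y[1][1] * v2
    return i1, i2

# Hypothetical admittances (siemens) for two branches of a model.
y1 = [[2.0, -1.0], [-1.0, 2.0]]
y2 = [[1.0, -0.5], [-0.5, 1.0]]
y = y_parallel(y1, y2)
i1, i2 = port_currents(y, v1=1.0, v2=0.0)
```

    Cascaded (rather than parallel) sections are usually handled by converting to ABCD parameters, multiplying, and converting back.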

  1. On the Role of Connectors in Modeling and Implementing Software Architectures

    DTIC Science & Technology

    1998-02-15

    On the Role of Connectors in Modeling and Implementing Software Architectures. Peyman Oreizy, David S. Rosenblum, and Richard N. Taylor. Department of...

  2. Interaction of epithelium with mesenchyme affects global features of lung architecture: a computer model of development.

    PubMed

    Tebockhorst, Seth; Lee, Dongyoub; Wexler, Anthony S; Oldham, Michael J

    2007-01-01

    Lung airway morphogenesis is simulated in a simplified diffusing environment that simulates the mesenchyme to explore the role of morphogens in airway architecture development. Simple rules govern local branching morphogenesis. Morphogen gradients are modeled by four pairs of sources and their diffusion through the mesenchyme. Sensitivity to lobar architecture and mesenchymal morphogen are explored. Even if the model accurately represents observed patterns of local development, it could not produce realistic global patterns of lung architecture if interaction with its environment was not taken into account, implying that reciprocal interaction between airway growth and morphogens in the mesenchyme plays a critical role in producing realistic global features of lung architecture.

  3. Organoids as Models for Neoplastic Transformation | Office of Cancer Genomics

    Cancer.gov

    Cancer models strive to recapitulate the incredible diversity inherent in human tumors. A key challenge in accurate tumor modeling lies in capturing the panoply of homo- and heterotypic cellular interactions within the context of a three-dimensional tissue microenvironment. To address this challenge, researchers have developed organotypic cancer models (organoids) that combine the 3D architecture of in vivo tissues with the experimental facility of 2D cell lines.

  4. A Multiperspectival Conceptual Model of Transformative Meaning Making

    ERIC Educational Resources Information Center

    Freed, Maxine

    2009-01-01

    Meaning making is central to transformative learning, but little work has explored how meaning is constructed in the process. Moreover, no meaning-making theory adequately captures its characteristics and operations during radical transformation. The purpose of this dissertation was to formulate and specify a multiperspectival conceptual model of…

  5. Typical Phases of Transformative Learning: A Practice-Based Model

    ERIC Educational Resources Information Center

    Nohl, Arnd-Michael

    2015-01-01

    Empirical models of transformative learning offer important insights into the core characteristics of this concept. Whereas previous analyses were limited to specific social groups or topical terrains, this article empirically typifies the phases of transformative learning on the basis of a comparative analysis of various social groups and topical…

  6. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 10) Background: Titan Model-based Executive; 11) Model-based Execution Architecture; 12) Compatibility Analysis of MDS and Titan Architectures; 13) Integrating Model-based Programming and Execution into the Architecture; 14) State Analysis and Modeling; 15) IMU Subsystem State Effects Diagram; 16) Titan Subsystem Model: IMU Health; 17) Integrating Model-based Programming and Execution into the Software IMU; 18) Testing Program; 19) Computationally Tractable State Estimation & Fault Diagnosis; 20) Diagnostic Algorithm Performance; 21) Integration and Test Issues; 22) Demonstrated Benefits; and 23) Next Steps

  7. Connection and coordination: the interplay between architecture and dynamics in evolved model pattern generators.

    PubMed

    Psujek, Sean; Ames, Jeffrey; Beer, Randall D

    2006-03-01

    We undertake a systematic study of the role of neural architecture in shaping the dynamics of evolved model pattern generators for a walking task. First, we consider the minimum number of connections necessary to achieve high performance on this task. Next, we identify architectural motifs associated with high fitness. We then examine how high-fitness architectures differ in their ability to evolve. Finally, we demonstrate the existence of distinct parameter subgroups in some architectures and show that these subgroups are characterized by differences in neuron excitabilities and connection signs.

  8. Modeling the Contribution of Enterprise Architecture Practice to the Achievement of Business Goals

    NASA Astrophysics Data System (ADS)

    van Steenbergen, Marlies; Brinkkemper, Sjaak

    Enterprise architecture is a young, but well-accepted discipline in information management. Establishing the effectiveness of an enterprise architecture practice, however, appears difficult. In this chapter we introduce an architecture effectiveness model (AEM) to express how enterprise architecture practices are meant to contribute to the business goals of an organization. We developed an AEM for three different organizations. These three instances show that the concept of the AEM is applicable in a variety of organizations. It also shows that the objectives of enterprise architecture are not to be restricted to financial goals. The AEM can be used by organizations to set coherent priorities for their architectural practices and to define KPIs for measuring the effectiveness of these practices.

  9. Plum (Prunus domestica) Trees Transformed with Poplar FT1 Result in Altered Architecture, Dormancy Requirement, and Continuous Flowering

    PubMed Central

    Callahan, Ann; Scorza, Ralph

    2012-01-01

    The Flowering Locus T1 (FT1) gene from Populus trichocarpa under the control of the 35S promoter was transformed into European plum (Prunus domestica L). Transgenic plants expressing higher levels of FT flowered and produced fruits in the greenhouse within 1 to 10 months. FT plums did not enter dormancy after cold or short day treatments yet field planted FT plums remained winter hardy down to at least −10°C. The plants also displayed pleiotropic phenotypes atypical for plum including shrub-type growth habit and panicle flower architecture. The flowering and fruiting phenotype was found to be continuous in the greenhouse but limited to spring and fall in the field. The pattern of flowering in the field correlated with lower daily temperatures. This apparent temperature effect was subsequently confirmed in growth chamber studies. The pleiotropic phenotypes associated with FT1 expression in plum suggest a fundamental role of this gene in plant growth and development. This study demonstrates the potential for a single transgene event to markedly affect the vegetative and reproductive growth and development of an economically important temperate woody perennial crop. We suggest that FT1 may be a useful tool to modify temperate plants to changing climates and/or to adapt these crops to new growing areas. PMID:22859952

  10. Plum (Prunus domestica) trees transformed with poplar FT1 result in altered architecture, dormancy requirement, and continuous flowering.

    PubMed

    Srinivasan, Chinnathambi; Dardick, Chris; Callahan, Ann; Scorza, Ralph

    2012-01-01

    The Flowering Locus T1 (FT1) gene from Populus trichocarpa under the control of the 35S promoter was transformed into European plum (Prunus domestica L). Transgenic plants expressing higher levels of FT flowered and produced fruits in the greenhouse within 1 to 10 months. FT plums did not enter dormancy after cold or short day treatments yet field planted FT plums remained winter hardy down to at least -10°C. The plants also displayed pleiotropic phenotypes atypical for plum including shrub-type growth habit and panicle flower architecture. The flowering and fruiting phenotype was found to be continuous in the greenhouse but limited to spring and fall in the field. The pattern of flowering in the field correlated with lower daily temperatures. This apparent temperature effect was subsequently confirmed in growth chamber studies. The pleiotropic phenotypes associated with FT1 expression in plum suggest a fundamental role of this gene in plant growth and development. This study demonstrates the potential for a single transgene event to markedly affect the vegetative and reproductive growth and development of an economically important temperate woody perennial crop. We suggest that FT1 may be a useful tool to modify temperate plants to changing climates and/or to adapt these crops to new growing areas.

  11. An architectural model of conscious and unconscious brain functions: Global Workspace Theory and IDA.

    PubMed

    Baars, Bernard J; Franklin, Stan

    2007-11-01

    While neural net models have been developed to a high degree of sophistication, they have some drawbacks at a more integrative, "architectural" level of analysis. We describe a "hybrid" cognitive architecture that is implementable in neuronal nets, and which has uniform brainlike features, including activation-passing and highly distributed "codelets," implementable as small-scale neural nets. Empirically, this cognitive architecture accounts qualitatively for the data described by Baars' Global Workspace Theory (GWT), and Franklin's LIDA architecture, including state-of-the-art models of conscious contents in action-planning, Baddeley-style Working Memory, and working models of episodic and semantic long-term memory. These terms are defined both conceptually and empirically for the current theoretical domain. The resulting architecture meets four desirable goals for a unified theory of cognition: practical workability, autonomous agency, a plausible role for conscious cognition, and translatability into plausible neural terms. It also generates testable predictions, both empirical and computational.

  12. Robust transformation with applications to structural equation modelling.

    PubMed

    Yuan, K H; Chan, W; Bentler, P M

    2000-05-01

    Data sets in social and behavioural sciences are seldom normal. Influential cases or outliers can lead to inappropriate solutions and problematic conclusions in structural equation modelling. By giving a proper weight to each case, the influence of outliers on a robust procedure can be minimized. We propose using a robust procedure as a transformation technique, generating a new data matrix that can be analysed by a variety of multivariate methods. Mardia's multivariate skewness and kurtosis statistics are used to measure the effect of the transformation in achieving approximate normality. Since the transformation makes the data approximately normal, applying a classical normal theory based procedure to the transformed data gives more efficient parameter estimates. Three procedures for parameter evaluation and model testing are discussed. Six examples illustrate various aspects of the robust transformation.
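
The Mardia statistics used above to gauge approximate normality are straightforward to compute; a minimal NumPy sketch (the function name and the biased-covariance choice are our own, not the authors'):

```python
import numpy as np

def mardia(X):
    """Mardia's multivariate skewness b1,p and kurtosis b2,p for an n x p data matrix.
    Under multivariate normality, b2,p is approximately p*(p+2)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                   # center each variable
    S = (Xc.T @ Xc) / n                       # biased sample covariance
    G = Xc @ np.linalg.inv(S) @ Xc.T          # Mahalanobis cross-products g_ij
    b1 = (G ** 3).sum() / n ** 2              # multivariate skewness
    b2 = (np.diag(G) ** 2).sum() / n          # multivariate kurtosis
    return b1, b2
```

For a well-behaved normal sample, b1 is near zero and b2 is near p(p+2), so large deviations after transformation flag remaining non-normality.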

  13. Fourier transform methods in local gravity modeling

    NASA Technical Reports Server (NTRS)

    Harrison, J. C.; Dickinson, M.

    1989-01-01

    New algorithms were derived for computing terrain corrections, all components of the attraction of the topography at the topographic surface, and the gradients of these attractions. These algorithms utilize fast Fourier transforms, but, in contrast to methods currently in use, all divergences of the integrals are removed during the analysis. Sequential methods employing a smooth intermediate reference surface were developed to avoid the very large transforms necessary when making computations at high resolution over a wide area. A new method for the numerical solution of Molodensky's problem was developed to mitigate the convergence difficulties that occur at short wavelengths with methods based on a Taylor series expansion. A trial field on a level surface is continued analytically to the topographic surface, and compared with that predicted from gravity observations. The difference is used to compute a correction to the trial field and the process iterated. Special techniques are employed to speed convergence and prevent oscillations. Three different spectral methods for fitting a point-mass set to a gravity field given on a regular grid at constant elevation are described. Two of the methods differ in the way that the spectrum of the point-mass set, which extends to infinite wave number, is matched to that of the gravity field which is band-limited. The third method is essentially a space-domain technique in which Fourier methods are used to solve a set of simultaneous equations.
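
The core operation behind such FFT-based schemes is linear convolution of a gridded quantity with a kernel, carried out in the frequency domain with zero padding to avoid wrap-around. A hedged sketch (the grid and kernel below are arbitrary placeholders, not the paper's terrain-correction kernels):

```python
import numpy as np

def fft_convolve2d(grid, kernel):
    """Linear 2-D convolution via zero-padded FFTs.
    Padding to the full output size (m+k-1 per axis) makes the circular
    FFT convolution equal to the linear one."""
    ny = grid.shape[0] + kernel.shape[0] - 1
    nx = grid.shape[1] + kernel.shape[1] - 1
    F = np.fft.rfft2(grid, s=(ny, nx)) * np.fft.rfft2(kernel, s=(ny, nx))
    return np.fft.irfft2(F, s=(ny, nx))
```

For large grids this costs O(N log N) rather than the O(N^2) of direct summation, which is why the paper can afford high-resolution, wide-area computations.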

  14. Dynamic model of a three-phase power transformer

    SciTech Connect

    Dolinar, D.; Pihler, J.; Grcar, B. (Faculty of Technical Sciences)

    1993-10-01

    An adequate mathematical model of a three-phase power transformer is one of the important elements in the programs for the computer analysis of power system transients. Featured in this paper is the simulation model of a three-phase, three-limb core-type power transformer. Non-linear effects of saturation, hysteresis and eddy currents are considered. Two ways of creating major and minor hysteresis loops are presented. The transformer model, described by a system of time dependent differential equations, is solved by an efficient numerical algorithm. The behavior of the transformer model during switching-in and fault transients, as well as other types of transients, has been tested. The computed transient waveforms are compared with the measured ones, and there exists very close agreement between them.

  15. Transforming teacher knowledge: Modeling instruction in physics

    NASA Astrophysics Data System (ADS)

    Cabot, Lloyd H.

    I show that the Modeling physics curriculum is readily accommodated by most teachers in favor of traditional didactic pedagogies. This is so, at least in part, because Modeling focuses on a small set of connected models embedded in a self-consistent theoretical framework and thus is closely congruent with human cognition in this context, which is to generate mental models of physical phenomena as both predictive and explanatory devices. Whether a teacher fully implements the Modeling pedagogy depends on the depth of the teacher's commitment to inquiry-based instruction, specifically Modeling instruction, as a means of promoting student understanding of Newtonian mechanics. Moreover, this commitment trumps all other characteristics: teacher educational background, content coverage issues, student achievement data, district or state learning standards, and district or state student assessments. Indeed, distinctive differences exist in how Modeling teachers deliver their curricula, and some teachers are measurably more effective than others in their delivery, but they all share an unshakable belief in the efficacy of inquiry-based, constructivist-oriented instruction. The Modeling Workshops' pedagogy, duration, and social interactions impact teachers' self-identification as members of a professional community. Finally, I discuss the consequences my research may have for the Modeling Instruction program designers and for designers of professional development programs generally.

  16. Rice Morphogenesis and Plant Architecture: Measurement, Specification and the Reconstruction of Structural Development by 3D Architectural Modelling

    PubMed Central

    WATANABE, TOMONARI; HANAN, JIM S.; ROOM, PETER M.; HASEGAWA, TOSHIHIRO; NAKAGAWA, HIROSHI; TAKAHASHI, WATARU

    2005-01-01

    • Background and Aims The morphogenesis and architecture of a rice plant, Oryza sativa, are critical factors in the yield equation, but they are not well studied because of the lack of appropriate tools for 3D measurement. The architecture of rice plants is characterized by a large number of tillers and leaves. The aims of this study were to specify rice plant architecture and to find appropriate functions to represent the 3D growth across all growth stages. • Methods A japonica type rice, ‘Namaga’, was grown in pots under outdoor conditions. A 3D digitizer was used to measure the rice plant structure at intervals from the young seedling stage to maturity. The L-system formalism was applied to create ‘3D virtual rice’ plants, incorporating models of phenological development and leaf emergence period as a function of temperature and photoperiod, which were used to determine the timing of tiller emergence. • Key Results The relationships between the nodal positions and leaf lengths, leaf angles and tiller angles were analysed and used to determine growth functions for the models. The ‘3D virtual rice’ reproduces the structural development of isolated plants and provides a good estimation of the tillering process and of the accumulation of leaves. • Conclusions The results indicated that the ‘3D virtual rice’ has the potential to demonstrate differences in structure and development between cultivars and under different environmental conditions. Future work, necessary to reflect both cultivar and environmental effects on model performance and to link with physiological models, is proposed in the discussion. PMID:15820987

  17. Evaluating the Effectiveness of Reference Models in Federating Enterprise Architectures

    ERIC Educational Resources Information Center

    Wilson, Jeffery A.

    2012-01-01

    Agencies need to collaborate with each other to perform missions, improve mission performance, and find efficiencies. The ability of individual government agencies to collaborate with each other for mission and business success and efficiency is complicated by the different techniques used to describe their Enterprise Architectures (EAs).…

  18. An IIOP Architecture for Web-Enabled Physiological Models

    DTIC Science & Technology

    2007-11-02

    available. This need can be met by a web-based architecture that uses the equivalent of interactive browsers such as Netscape and Microsoft...With the backing of major players like Sun Microsystems, Netscape , and Oracle, the combined use of Java and CORBA will become commonplace in

  19. New Models of Mechanisms for the Motion Transformation

    NASA Astrophysics Data System (ADS)

    Petrović, Tomislav; Ivanov, Ivan

    In this paper two new mechanisms for motion transformation are presented: a screw mechanism for the transformation of one-way circular into two-way linear motion with impulse control, and a worm-planetary gear train with an extremely high gear ratio. Both mechanisms represent new construction solutions for which patent protection has been obtained. These mechanisms are based on the application of a differential gearbox with two degrees of freedom. They are characterized by a series of kinematic impacts at motion transformation and the possibility of temporary or permanent changes in the structure by subtracting the redundant degree of freedom. Thus the desired characteristic of the motion transformation is achieved. For each mechanism separately, the principles of motion and transformation are described and the basic equations that describe the interdependence of geometric, kinematic and kinetic parameters of the system dynamics are given. The basic principles of controlling the new mechanisms for motion transformation are outlined and the basic constructional features that may find practical application are given. Physical models of the new systems of motion transformation have been designed and their operation has been presented. The experimental research performed confirmed the theoretical results and the very favorable kinematic characteristics of the mechanisms.

  20. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    NASA Astrophysics Data System (ADS)

    Iacobucci, Joseph V.

    The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis, which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission dependent and mission independent metrics are considered. Mission dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used that reduce computer memory requirements. Lastly, a domain specific language is created to provide a reduction in the computational time of executing the system of systems models.
A domain specific language is a small, usually declarative language that offers expressive power focused on a particular

  1. TRANSFORMATION

    SciTech Connect

    LACKS,S.A.

    2003-10-09

    Transformation, which alters the genetic makeup of an individual, is a concept that intrigues the human imagination. In Streptococcus pneumoniae such transformation was first demonstrated. Perhaps our fascination with genetics derived from our ancestors observing their own progeny, with its retention and assortment of parental traits, but such interest must have been accelerated after the dawn of agriculture. It was in pea plants that Gregor Mendel in the late 1800s examined inherited traits and found them to be determined by physical elements, or genes, passed from parents to progeny. In our day, the material basis of these genetic determinants was revealed to be DNA by the lowly bacteria, in particular, the pneumococcus. For this species, transformation by free DNA is a sexual process that enables cells to sport new combinations of genes and traits. Genetic transformation of the type found in S. pneumoniae occurs naturally in many species of bacteria (70), but, initially only a few other transformable species were found, namely, Haemophilus influenzae, Neisseria meningitidis, Neisseria gonorrhoeae, and Bacillus subtilis (96). Natural transformation, which requires a set of genes evolved for the purpose, contrasts with artificial transformation, which is accomplished by shocking cells either electrically, as in electroporation, or by ionic and temperature shifts. Although such artificial treatments can introduce very small amounts of DNA into virtually any type of cell, the amounts introduced by natural transformation are a million-fold greater, and S. pneumoniae can take up as much as 10% of its cellular DNA content (40).

  2. Developing a Conceptual Architecture for a Generalized Agent-based Modeling Environment (GAME)

    DTIC Science & Technology

    2008-03-01

    possible. A conceptual architecture for a generalized agent-based modeling environment (GAME) based upon design principles from OR/MS systems was created...handle the event, and subsequently form the relevant plans. One of these plans will be selected, and either pushed to the top of the current

  3. Heterogeneous Concurrent Modeling and Design in Java (Volume 2: Ptolemy II Software Architecture)

    DTIC Science & Technology

    2008-04-01

    Heterogeneous Concurrent Modeling and Design in Java (Volume 2: Ptolemy II Software Architecture). Christopher Brooks, Edward A. Lee, Xiaojun Liu...the State of California Micro Program, and the following companies: Agilent, Bosch, HSBC, Lockheed-Martin, National Instruments, and Toyota.

  4. Transforming Community Access to Space Science Models

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Heese, Michael; Kunetsova, Maria; Maddox, Marlo; Rastaetter, Lutz; Berrios, David; Pulkkinen, Antti

    2012-01-01

    Researching and forecasting the ever changing space environment (often referred to as space weather) and its influence on humans and their activities are model-intensive disciplines. This is true because the physical processes involved are complex, but, in contrast to terrestrial weather, the supporting observations are typically sparse. Models play a vital role in establishing a physically meaningful context for interpreting limited observations, testing theory, and producing both nowcasts and forecasts. For example, with accurate forecasting of hazardous space weather conditions, spacecraft operators can place sensitive systems in safe modes, and power utilities can protect critical network components from damage caused by large currents induced in transmission lines by geomagnetic storms.

  5. Transformational mentorship models for nurse educators.

    PubMed

    Jacobson, Sheri L; Sherrod, Dennis R

    2012-07-01

    A consistent supply of competent and confident faculty is essential to meeting the growing demand for nurses. One way to ensure continuity among nurse educators is through faculty mentorship. There is very little literature about nurse educator mentorship models, and no research was found that tested mentoring frameworks or strategies with nurse educators. The matriculation and retention of nursing faculty require diligence in the areas of practice, teaching, and scholarship. The authors of this article discuss current nursing mentorship models and propose a new one for consideration.

  6. TRANSFORMER

    DOEpatents

    Baker, W.R.

    1959-08-25

    Transformers of a type adapted for use with extreme high power vacuum tubes where current requirements may be of the order of 2,000 to 200,000 amperes are described. The transformer casing has the form of a re-entrant section being extended through an opening in one end of the cylinder to form a coaxial terminal arrangement. A toroidal multi-turn primary winding is disposed within the casing in coaxial relationship therein. In a second embodiment, means are provided for forming the casing as a multi-turn secondary. The transformer is characterized by minimized resistance heating, minimized external magnetic flux, and an economical construction.

  7. Transformative leadership: an ethical stewardship model for healthcare.

    PubMed

    Caldwell, Cam; Voelker, Carolyn; Dixon, Rolf D; LeJeune, Adena

    2008-01-01

    The need for effective leadership is a compelling priority for those who would choose to govern in public, private, and nonprofit organizations, and applies as much to the healthcare profession as it does to other sectors of the economy (Moody, Horton-Deutsch, & Pesut, 2007). Transformative Leadership, an approach to leadership and governance that incorporates the best characteristics of six other highly respected leadership models, is an integrative theory of ethical stewardship that can help healthcare professionals to more effectively achieve organizational efficiencies, build stakeholder commitment and trust, and create valuable synergies to transform and enrich today's healthcare systems (cf. Caldwell, LeJeune, & Dixon, 2007). The purpose of this article is to introduce the concept of Transformative Leadership and to explain how this model applies within a healthcare context. We define Transformative Leadership and identify its relationship to Transformational, Charismatic, Level 5, Principle-Centered, Servant, and Covenantal Leadership--providing examples of each of these elements of Transformative Leadership within a healthcare leadership context. We conclude by identifying contributions of this article to the healthcare leadership literature.

  8. Building Information Modeling (BIM): A Road Map for Implementation to Support MILCON Transformation and Civil Works Projects within the U.S. Army Corps of Engineers

    DTIC Science & Technology

    2006-10-01

    ERDC TR-06-10: Building Information Modeling (BIM), A Road Map for Implementation to Support MILCON Transformation and Civil Works Projects within the U.S. Army Corps of Engineers. Beth A. Brucker, Michael P. Case, E. William East, and Susan D...civil works and military construction business processes, including the process for working with the USACE Architectural Engineering Construction (AEC

  9. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
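
The frequency-domain starting point of such a method, forming an empirical frequency response from Fourier-transformed input/output records, can be sketched as follows (a toy FIR system stands in for the flight-test data, and the orthogonal-function fitting step itself is not shown):

```python
import numpy as np

# Estimate a system's frequency response by Fourier-transforming measured
# input/output time series. The impulse response h is a hypothetical example.
rng = np.random.default_rng(1)
N = 256
u = rng.standard_normal(N)                      # broadband "measured" input
h = np.array([0.5, 0.3, 0.15, 0.05])            # illustrative impulse response

# "Measured" output: circular convolution of the input with h
y = np.array([sum(h[k] * u[(n - k) % N] for k in range(len(h)))
              for n in range(N)])

U, Y = np.fft.rfft(u), np.fft.rfft(y)
H_est = Y / U                                   # empirical frequency response
H_true = np.fft.rfft(h, n=N)                    # analytic response of the system
```

With real flight data, a model structure would then be fitted to H_est; here the estimate simply recovers the known response of the toy system.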

  10. NASA Integrated Model Centric Architecture (NIMA) Model Use and Re-Use

    NASA Technical Reports Server (NTRS)

    Conroy, Mike; Mazzone, Rebecca; Lin, Wei

    2012-01-01

    This whitepaper accepts the goals, needs and objectives of NASA's Integrated Model-centric Architecture (NIMA); adds experience and expertise from the Constellation program as well as NASA's architecture development efforts; and provides suggested concepts, practices and norms that nurture and enable model use and re-use across programs, projects and other complex endeavors. Key components include the ability to effectively move relevant information through a large community, process patterns that support model reuse and the identification of the necessary meta-information (e.g. history, credibility, and provenance) to safely use and re-use that information. In order to successfully use and re-use models and simulations we must define and meet key organizational and structural needs: 1. We must understand and acknowledge all the roles and players involved from the initial need identification through to the final product, as well as how they change across the lifecycle. 2. We must create the necessary structural elements to store and share NIMA-enabled information throughout the Program or Project lifecycle. 3. We must create the necessary organizational processes to stand up and execute a NIMA-enabled Program or Project throughout its lifecycle. NASA must meet all three of these needs to successfully use and re-use models. The ability to re-use models is a key component of NIMA, and the capabilities inherent in NIMA are key to accomplishing NASA's space exploration goals.

  11. Plant Growth Modelling and Applications: The Increasing Importance of Plant Architecture in Growth Models

    PubMed Central

    Fourcaud, Thierry; Zhang, Xiaopeng; Stokes, Alexia; Lambers, Hans; Körner, Christian

    2008-01-01

    Background Modelling plant growth allows us to test hypotheses and carry out virtual experiments concerning plant growth processes that could otherwise take years in field conditions. The visualization of growth simulations allows us to see directly and vividly the outcome of a given model and provides us with an instructive tool useful for agronomists and foresters, as well as for teaching. Functional–structural (FS) plant growth models are nowadays particularly important for integrating biological processes with environmental conditions in 3-D virtual plants, and provide the basis for more advanced research in plant sciences. Scope In this viewpoint paper, we ask the following questions. Are we modelling the correct processes that drive plant growth, and is growth driven mostly by sink or source activity? In current models, is the importance of soil resources (nutrients, water, temperature and their interaction with meristematic activity) considered adequately? Do classic models account for architectural adjustment as well as integrating the fundamental principles of development? Whilst answering these questions with the available data in the literature, we put forward the opinion that plant architecture and sink activity must be pushed to the centre of plant growth models. In natural conditions, sinks will more often drive growth than source activity, because sink activity is often controlled by finite soil resources or developmental constraints. This viewpoint paper also serves as an introduction to this Special Issue devoted to plant growth modelling, which includes new research covering areas stretching from cell growth to biomechanics. All papers were presented at the Second International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA06), held in Beijing, China, from 13–17 November, 2006. Although a large number of papers are devoted to FS models of agricultural and forest crop species, physiological and genetic

  12. An Interactive Design Space Supporting Development of Vehicle Architecture Concept Models

    DTIC Science & Technology

    2011-01-01

    Denver, Colorado, USA, IMECE2011-64510. Gary Osborne...early in the development cycle. Optimization taking place later in the cycle usually occurs at the detail design level, and tends to result in...architecture changes may be imposed, but such modifications are equivalent to a huge optimization cycle covering almost the entire design process, and

  13. Modelling of internal architecture of kinesin nanomotor as a machine language.

    PubMed

    Khataee, H R; Ibrahim, M Y

    2012-09-01

    Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. Kinesin nanomotor is considered as a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make the decision internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of internal decision-making process of kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was acceptable by the architectural DFA model of the nanomotor and also in good agreement with its natural behaviour. The internal agent-based architectural model of kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling of internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation towards the concept of bio-nanoswarms and next phases of the bio-nanorobotic systems development.
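
A DFA of the kind used to encode the nanomotor's internal decision-making can be expressed in a few lines; the states, alphabet, and transitions below are illustrative placeholders, not the paper's actual kinesin model:

```python
# Minimal deterministic finite automaton (DFA): from each (state, symbol)
# pair there is exactly one next state, so behaviour is fully determined.
def dfa_accepts(transitions, start, accepting, word):
    """Run the DFA on `word`; return True iff it halts in an accepting state."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]   # deterministic transition
    return state in accepting

# Toy two-state "walker": 'a' toggles parity, 'b' leaves it unchanged;
# accept words that end with the walker back in the 'even' state.
T = {("even", "a"): "odd", ("odd", "a"): "even",
     ("even", "b"): "even", ("odd", "b"): "odd"}
```

The set of words such a DFA accepts is, by definition, a regular language, which is exactly the sense in which the paper's automata algorithm yields a "regular machine language" for the nanomotor.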

  14. An Architectural Overlay: Modifying an Architecture to Help Cognitive Models Understand and Explain Themselves

    DTIC Science & Technology

    2006-02-24

    the metacognitive facilities Herbal can include in a model to create an intelligent opponent in dTank. dTank works, but it needs to be made even easier...

  15. The role of technology and engineering models in transforming healthcare.

    PubMed

    Pavel, Misha; Jimison, Holly Brugge; Wactlar, Howard D; Hayes, Tamara L; Barkis, Will; Skapik, Julia; Kaye, Jeffrey

    2013-01-01

    The healthcare system is in crisis due to challenges including escalating costs, the inconsistent provision of care, an aging population, and high burden of chronic disease related to health behaviors. Mitigating this crisis will require a major transformation of healthcare to be proactive, preventive, patient-centered, and evidence-based with a focus on improving quality-of-life. Information technology, networking, and biomedical engineering are likely to be essential in making this transformation possible with the help of advances, such as sensor technology, mobile computing, machine learning, etc. This paper has three themes: 1) motivation for a transformation of healthcare; 2) description of how information technology and engineering can support this transformation with the help of computational models; and 3) a technical overview of several research areas that illustrate the need for mathematical modeling approaches, ranging from sparse sampling to behavioral phenotyping and early detection. A key tenet of this paper concerns complementing prior work on patient-specific modeling and simulation by modeling neuropsychological, behavioral, and social phenomena. The resulting models, in combination with frequent or continuous measurements, are likely to be key components of health interventions to enhance health and wellbeing and the provision of healthcare.

  16. Modeling of a 3DTV service in the software-defined networking architecture

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2014-11-01

    In this article a newly developed concept for modeling a multimedia service offering stereoscopic motion imagery is presented. The proposed model is based on the Software-Defined Networking (SDN) architecture. The definition of a 3D television service built on the SDN concept is identified, exposing the basic characteristics of a 3DTV service in a modern networking organization layout. Furthermore, exemplary functionalities of the proposed 3DTV model are depicted. It is indicated that modeling a 3DTV service in the Software-Defined Networking architecture leads to a multiplicity of improvements, especially towards the flexibility of a service supporting heterogeneous end-user devices.

  17. Fractional brownian functions as mathematical models of natural rhythm in architecture.

    PubMed

    Cirovic, Ivana M

    2014-10-01

    Carl Bovill suggested and described a method of generating rhythm in architecture with the help of fractional Brownian functions, as they are mathematical models of natural rhythm. A relationship established in the stated procedure between fractional Brownian functions as models of rhythm, and the observed group of architectural elements, is recognized as an analogical relationship, and the procedure of generating rhythm as a process of analogical transfer from the natural domain to the architectural domain. Since analogical transfer implies relational similarity of two domains and the establishment of one-to-one correspondence, this paper attempts to determine under which conditions such correspondence can be established. For example, if the values of the observed visual feature of architectural elements are not similar to each other in a way in which they can form a monotonically increasing or a monotonically decreasing bounded sequence, then the structural alignment and the one-to-one correspondence with a single fractional Brownian function cannot be established; hence, this function is deemed inappropriate as a model for the architectural rhythm. In this case, we propose overlapping two or more functions, so that each of them is an analog for one subset of mutually similar values of the visual feature of architectural elements.
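
A fractional Brownian path of the sort Bovill uses as a rhythm model can be generated exactly by the Cholesky method on the fBm covariance; the sampling times and Hurst exponent below are arbitrary choices for illustration:

```python
import numpy as np

def fbm_sample(n, hurst, rng):
    """Sample fractional Brownian motion at times t = 1..n via the exact
    Cholesky method. Cov(B_s, B_t) = 0.5*(s^2H + t^2H - |s-t|^2H);
    `hurst` in (0, 1) controls the roughness of the resulting 'rhythm'."""
    t = np.arange(1, n + 1, dtype=float)
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    L = np.linalg.cholesky(cov)          # factor the covariance once
    return L @ rng.standard_normal(n)    # correlate i.i.d. Gaussian noise
```

A Hurst exponent near 1 gives smooth, persistent sequences and one near 0 gives jagged, anti-persistent ones, which is the single parameter an architect would tune when mapping the function's values onto a visual feature of repeated elements.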

  18. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding

    PubMed Central

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent “deep learning revolution” in artificial neural networks has had a strong impact and widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of the distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to adhere more closely to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems. PMID:28377709
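The contrastive-divergence training the abstract refers to can be sketched in a few lines. This toy binary RBM trained with CD-1 is a generic illustration of the technique only; the layer sizes, learning rate, and training patterns are invented and are not the authors' simulation setup.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Toy Restricted Boltzmann Machine trained with one-step
    contrastive divergence (CD-1)."""
    def __init__(self, n_vis, n_hid, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.normal(0.0, 0.1, (n_vis, n_hid))
        self.a = np.zeros(n_vis)          # visible biases
        self.b = np.zeros(n_hid)          # hidden biases

    def cd1(self, v0, lr=0.1):
        ph0 = sigmoid(v0 @ self.W + self.b)            # hidden probs given data
        h0 = (self.rng.random(ph0.shape) < ph0) * 1.0  # stochastic hidden sample
        pv1 = sigmoid(h0 @ self.W.T + self.a)          # one-step reconstruction
        ph1 = sigmoid(pv1 @ self.W + self.b)
        n = len(v0)
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n  # positive - negative phase
        self.a += lr * (v0 - pv1).mean(axis=0)
        self.b += lr * (ph0 - ph1).mean(axis=0)
        return float(((v0 - pv1) ** 2).mean())         # reconstruction error

# Train on two complementary binary patterns
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)
rbm = RBM(n_vis=4, n_hid=2)
errors = [rbm.cd1(data) for _ in range(500)]
```

Unlike an autoencoder's deterministic backpropagation pass, the hidden states here are sampled stochastically, which is the generative ingredient the abstract contrasts with discriminative training.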

  19. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    PubMed

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.

  20. Comparison of Different Artificial Neural Network (ANN) Architectures in Modeling of Chlorella sp. Flocculation.

    PubMed

    Zenooz, Alireza Moosavi; Ashtiani, Farzin Zokaee; Ranjbar, Reza; Nikbakht, Fatemeh; Bolouri, Oberon

    2017-01-03

    Biodiesel production from microalgae feedstock requires prior growth and harvesting of the cells, and the most feasible method for harvesting and dewatering of microalgae is flocculation. Flocculation modeling can be used to evaluate and predict performance under different operating parameters. However, modeling of microalgae flocculation is not simple and has not yet been performed under all experimental conditions, mostly because microalgae cells behave differently under different flocculation conditions. In the current study, microalgae flocculation is modeled with different neural network architectures. The microalga Chlorella sp. was flocculated with ferric chloride under different conditions, and the experimental data were then modeled using artificial neural networks (ANNs). Individual Multilayer Perceptron (MLP) and Radial Basis Function (RBF) networks failed to predict the targets successfully, whereas modeling was effective with an ensemble architecture of MLP networks. Comparison between the performance of the ensemble and that of each individual network demonstrates the ability of the ensemble architecture in microalgae flocculation modeling.

  1. Estimation in a semi-Markov transformation model

    PubMed Central

    Dabrowska, Dorota M.

    2012-01-01

    Multi-state models provide a common tool for the analysis of longitudinal failure time data. In biomedical applications, models of this kind are often used to describe the evolution of a disease and assume that a patient may move among a finite number of states representing different phases of disease progression. Several authors have developed extensions of the proportional hazards model for the analysis of multi-state models in the presence of covariates. In this paper, we consider a general class of censored semi-Markov and modulated renewal processes and propose the use of transformation models for their analysis. Special cases include modulated renewal processes with interarrival times specified using transformation models, and semi-Markov processes with one-step transition probabilities defined using copula-transformation models. We discuss estimation of the finite- and infinite-dimensional parameters of the model, and develop an extension of the Gaussian multiplier method for setting confidence bands for transition probabilities. A transplant outcome data set from the Center for International Blood and Marrow Transplant Research is used for illustrative purposes. PMID:22740583
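The kind of process analyzed here can be illustrated with a minimal simulation of an illness-death semi-Markov model: the next state depends only on the current state, while sojourn times follow a non-exponential (here Weibull) law. The states, sojourn distribution, and transition probabilities below are invented for illustration and have no connection to the transplant data set or to the paper's estimators.

```python
import numpy as np

def simulate_paths(n_subjects, seed=0):
    """Simulate a 3-state semi-Markov illness-death process:
    0 = healthy, 1 = ill, 2 = dead (absorbing). Sojourn times are
    Weibull; the next state is drawn from the embedded chain."""
    rng = np.random.default_rng(seed)
    embedded = {0: ([1, 2], [0.7, 0.3]),   # healthy -> ill or dead
                1: ([0, 2], [0.4, 0.6])}   # ill -> healthy or dead
    death_times, ever_ill = [], 0
    for _ in range(n_subjects):
        state, t, was_ill = 0, 0.0, False
        while state != 2:
            t += 2.0 * rng.weibull(1.5)    # non-exponential sojourn time
            nxt_states, probs = embedded[state]
            state = rng.choice(nxt_states, p=probs)
            was_ill |= state == 1
        death_times.append(t)
        ever_ill += was_ill
    return np.array(death_times), ever_ill / n_subjects
```

Empirical functionals of such simulated paths (e.g. the fraction ever entering the illness state) are what transition-probability estimators of the kind developed in the paper aim to recover from censored data.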

  2. Model Transformation for a System of Systems Dependability Safety Case

    NASA Technical Reports Server (NTRS)

    Murphy, Judy; Driskell, Steve

    2011-01-01

    The presentation reviews the dependability and safety effort of NASA's Independent Verification and Validation Facility. Topics include: safety engineering process, applications to non-space environment, Phase I overview, process creation, sample SRM artifact, Phase I end result, Phase II model transformation, fault management, and applying Phase II to individual projects.

  3. Software architecture and design of the web services facilitating climate model diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool, the Climate Model Diagnostic Analyzer (CMDA), that facilitates this process. It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform compatibility; (2) co-location of computation and big data on the server side, with only small results and plots downloaded to the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment, so each user has a dedicated virtual machine. In this presentation, we focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a lightweight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. CMDA was successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Student feedback was generally positive, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School.
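The wrapping step described above can be illustrated with a minimal WSGI application. The record names Flask, Gunicorn, and Tornado; this sketch uses only the standard-library WSGI convention those frameworks build on, and the `anomaly` function is an invented stand-in for an existing science application code.

```python
import json

def anomaly(series):
    """Stand-in 'science code': deviation of each value from the mean."""
    if not series:
        return []
    mean = sum(series) / len(series)
    return [x - mean for x in series]

def application(environ, start_response):
    """Minimal WSGI app exposing the science function as a JSON service.
    The wrapper handles HTTP and JSON; the science code stays untouched."""
    length = int(environ.get("CONTENT_LENGTH") or 0)
    payload = json.loads(environ["wsgi.input"].read(length) or "{}")
    body = json.dumps({"anomaly": anomaly(payload.get("series", []))}).encode()
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

In a deployment like CMDA's, an app of this shape would be served by Gunicorn (possibly inside a Docker container on an EC2 instance); the design point is that only the thin wrapper knows about HTTP.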

  4. A Microscale Model for Ausferritic Transformation of Austempered Ductile Irons

    NASA Astrophysics Data System (ADS)

    Boccardo, Adrián D.; Dardati, Patricia M.; Celentano, Diego J.; Godoy, Luis A.

    2017-01-01

    This paper presents a new metallurgical model for the ausferritic transformation of ductile cast iron. The model predicts the evolution of phases in terms of the chemical composition, the austenitization and austempering temperatures, the graphite nodule count, and the distribution of graphite nodule sizes. Ferrite evolution is predicted according to the displacive growth mechanism. A representative volume element is employed at the microscale to consider the phase distributions, the inhomogeneous austenite carbon content, and the nucleation of ferrite subunits at the graphite nodule surface and at the tips of existing ferrite subunits. The performance of the model is evaluated by comparison with experimental results. The results indicate that the increase in the ausferritic transformation rate caused by increases in austempering temperature and graphite nodule count is adequately represented by the model.

  5. Laguerre-Volterra model and architecture for MIMO system identification and output prediction.

    PubMed

    Li, Will X Y; Xin, Yao; Chan, Rosa H M; Song, Dong; Berger, Theodore W; Cheung, Ray C C

    2014-01-01

    A generalized mathematical model is proposed for predicting the behavior of causal biological systems with multiple inputs and multiple outputs (MIMO). The system properties are represented by a set of model parameters, which can be derived by probing the system with random input stimuli. The system then calculates predicted outputs from the estimated parameters and novel inputs. An efficient hardware architecture is established for this mathematical model, and its circuitry has been implemented using field-programmable gate arrays (FPGAs). The architecture is scalable, and its functionality has been validated using experimental data gathered from real-world measurements.
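The Laguerre expansion at the heart of Laguerre-Volterra models can be sketched as a recursive filter bank followed by least squares. This is a generic first-order (linear-kernel) illustration of the technique, not the authors' MIMO estimator or FPGA design; the pole `alpha` and filter count are invented, and `alpha` sets the memory depth of the basis.

```python
import numpy as np

def laguerre_filterbank(x, n_filters=4, alpha=0.6):
    """Filter input x through discrete Laguerre basis functions, using the
    time-domain recursions of their pole-zero factorization:
      V0[t] = a*V0[t-1] + sqrt(1-a^2)*x[t]
      Vj[t] = a*Vj[t-1] + V(j-1)[t-1] - a*V(j-1)[t]   (all-pass cascade)"""
    n = len(x)
    V = np.zeros((n_filters, n))
    c = np.sqrt(1.0 - alpha ** 2)
    for t in range(n):
        V[0, t] = alpha * (V[0, t - 1] if t else 0.0) + c * x[t]
        for j in range(1, n_filters):
            prev_j = V[j, t - 1] if t else 0.0
            prev_jm1 = V[j - 1, t - 1] if t else 0.0
            V[j, t] = alpha * prev_j + prev_jm1 - alpha * V[j - 1, t]
    return V

def fit_first_order(x, y, n_filters=4, alpha=0.6):
    """Least-squares weights on the Laguerre filter outputs."""
    V = laguerre_filterbank(x, n_filters, alpha)
    w, *_ = np.linalg.lstsq(V.T, y, rcond=None)
    return w, V
```

Higher-order Volterra terms would enter as products of these filter outputs; the compact recursion is also what makes the filter bank attractive for a hardware pipeline.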

  6. Transitioning ISR architecture into the cloud

    NASA Astrophysics Data System (ADS)

    Lash, Thomas D.

    2012-06-01

    Emerging cloud computing platforms offer an ideal opportunity for Intelligence, Surveillance, and Reconnaissance (ISR) intelligence analysis. Cloud computing platforms help overcome challenges and limitations of traditional ISR architectures. Modern ISR architectures can benefit from examining commercial cloud applications, especially as they relate to user experience, usage profiling, and transformational business models. This paper outlines legacy ISR architectures and their limitations, presents an overview of cloud technologies and their applications to the ISR intelligence mission, and presents an idealized ISR architecture implemented with cloud computing.

  7. Assessing biocomputational modelling in transforming clinical guidelines for osteoporosis management.

    PubMed

    Thiel, Rainer; Viceconti, Marco; Stroetmann, Karl

    2011-01-01

    Biocomputational modelling as developed by the European Virtual Physiological Human (VPH) Initiative is the area of ICT most likely to revolutionise the practice of medicine in the longer term. Using the example of osteoporosis management, a socio-economic assessment framework is presented that captures how the transformation of clinical guidelines through VPH models can be evaluated. Applied to the Osteoporotic Virtual Physiological Human Project, the consequent benefit-cost analysis delivers promising results, both methodologically and substantively.

  8. Transforming System Engineering through Model-Centric Engineering

    DTIC Science & Technology

    2015-11-18

    computational fluid dynamics, radio-frequency, heat transfer). However, SysML does provide an underlying framework for holding system model... Contract No. HQ0034-13-D-0004, Task Order 0041, RT 141, Report No. SERC-2015-TR-109, Transforming System Engineering through Model-Centric... Institute of Technology, Systems Engineering Research Center. This material is based upon work supported, in whole or in part, by the U.S. Department of

  9. Wavelet transforms in a critical interface model for Barkhausen noise.

    PubMed

    de Queiroz, S L A

    2008-02-01

    We discuss the application of wavelet transforms to a critical interface model which is known to provide a good description of Barkhausen noise in soft ferromagnets. The two-dimensional version of the model (one-dimensional interface) is considered, mainly in the adiabatic limit of very slow driving. On length scales shorter than a crossover length (which grows with the strength of the surface tension), the effective interface roughness exponent ζ is approximately 1.20, close to the expected value for the universality class of the quenched Edwards-Wilkinson model. We find that the waiting times between avalanches are fully uncorrelated, as the wavelet transform of their autocorrelations scales as white noise. Similarly, detrended size-size correlations give a white-noise wavelet transform. Consideration of finite driving rates, still deep within the intermittent regime, shows the wavelet transform of correlations scaling as 1/f^1.5 for intermediate frequencies. This behavior is ascribed to intra-avalanche correlations.
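The white-noise criterion used here, that the wavelet coefficients of an uncorrelated signal have scale-independent variance, can be sketched with a plain Haar transform. This is a generic illustration of the diagnostic (the record does not specify which wavelet was used).

```python
import numpy as np

def haar_detail_variances(x, levels):
    """Variance of orthonormal Haar wavelet detail coefficients per scale.
    For white noise the variances are flat across scales; for 1/f^beta
    noise they grow with scale."""
    x = np.asarray(x, float)
    out = []
    for _ in range(levels):
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
        x = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (next scale)
        out.append(d.var())
    return out
```

Applied to a sequence of inter-avalanche waiting times, a flat variance profile across scales would support the paper's conclusion that the waiting times are uncorrelated.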

  10. Tests for Regression Parameters in Power Transformation Models.

    DTIC Science & Technology

    1980-01-01

    of estimating the correct λ scale and then performing the usual linear model F-test in this estimated λ scale. We explore situations in which this...transformation model. In this model, a simple test consists of estimating the correct scale and then performing the usual linear model F-test in this...β̂(y1, y2) will be the least squares estimates in the estimated scale λ̂ and β(y1, y2) will be the least squares estimates calculated in the true but

  11. Processes models, environmental analyses, and cognitive architectures: quo vadis quantum probability theory?

    PubMed

    Marewski, Julian N; Hoffrage, Ulrich

    2013-06-01

    A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.

  12. Understanding transparency perception in architecture: presentation of the simplified perforated model.

    PubMed

    Brzezicki, Marcin

    2013-01-01

    Issues of transparency perception are addressed from an architectural perspective, pointing out previously neglected factors that greatly influence this phenomenon at the scale of a building. The simplified perforated model of a transparent surface presented in the paper is based on previously developed theories and involves the balance of light reflected versus light transmitted. Its aim is to facilitate an understanding of non-intuitive phenomena related to transparency (e.g., dynamically changing reflectance) for readers without advanced knowledge of molecular physics. The presented model is verified by comparing its optical performance with the results of Fresnel's equations for light-transmitting materials. The presented methodology is intended to be used both in the design and explanatory stages of architectural practice and in vision research. Incorporating architectural issues could enrich the perspective of scientists representing other disciplines.

  13. Stable Eutectoid Transformation in Nodular Cast Iron: Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Carazo, Fernando D.; Dardati, Patricia M.; Celentano, Diego J.; Godoy, Luis A.

    2017-01-01

    This paper presents a new microstructural model of the stable eutectoid transformation in spheroidal cast iron. The model takes into account the nucleation and growth of ferrite grains and the growth of graphite spheroids. Different laws are assumed for the growth of both phases during and below the intercritical stable eutectoid. At the microstructural level, the initial conditions for the phase transformations are obtained from the microstructural simulation of the solidification of the material, which considers the divorced eutectic and the subsequent growth of graphite spheroids up to the initiation of the stable eutectoid transformation. The temperature field is obtained by solving the energy equation by means of finite elements. The microstructural (phase change) and macrostructural (energy balance) models are coupled by a sequential multiscale procedure. Experimental validation of the model is achieved by comparison with measured values of phase fractions and radii of 2D views of ferrite grains. Agreement with these experiments indicates that the present model is capable of predicting ferrite phase fraction and grain size with reasonable accuracy.

  14. Cultural heritage conservation and communication by digital modeling tools. Case studies: minor architectures of the Thirties in the Turin area

    NASA Astrophysics Data System (ADS)

    Bruno, A., Jr.; Spallone, R.

    2015-08-01

    Between the end of the twenties and the beginning of World War Two, Turin, like most Italian cities, was endowed by the fascist regime with many new buildings to guarantee its visibility and control of the territory: the fascist party main houses and the local ones. The style adopted for these constructions was inspired by the guidelines of the Modern Movement, which were being spread by a generation of architects such as Le Corbusier, Gropius, and Mendelsohn. At the end of the war many buildings were converted to other functions, leading to heavy transformations not respectful of their original worth; others were demolished. Today it is possible to rebuild those lost architectures in their original form as created by their architects on paper (and in their minds). This process can guarantee three-dimensional perception, the authenticity of the materials, and placement within the Turin urban tissue, using static and dynamic digital representation systems. The "three-dimensional re-drawing" of the projects, conceived as a heuristic practice devoted to revealing the original idea of the project, is inserted into a digital model of the urban and natural context as we can experience it today, to simulate the perceptive effects that the building could evoke today. These modeling skills are the basis for producing videos able to explore the relationship between the environment and the "re-built architectures", describing, with synthetic movie techniques, the main formal and perceptive roots. The model represents a scientific product that can be included in a virtual archive of cultural goods to preserve the collective memory of the past architectural and urban image of Turin.

  15. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  16. A transformation model for Laminaria japonica (Phaeophyta, Laminariales)

    NASA Astrophysics Data System (ADS)

    Qin, Song; Jiang, Peng; Li, Xin-Ping; Wang, Xi-Hua; Zeng, Cheng-Kui

    1998-03-01

    A genetic transformation model for the seaweed Laminaria japonica mainly includes the following aspects: 1. The method to introduce foreign genes into the kelp L. japonica. Biolistic bombardment has proven to be an effective method for delivering foreign DNA through cell walls into intact cells of both sporophytes and gametophytes. The expression of cat and lacZ was detected in regenerated sporophytes, which suggests that this method can induce random integration of foreign genes. Promoters to drive gene expression

  17. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    DOE PAGES

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; ...

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  18. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    SciTech Connect

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; Vetter, Jeffrey S.

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  19. Hierarchical Architectural Considerations in Econometric Modeling of Manufacturing Systems

    DTIC Science & Technology

    1981-06-01

    the model (e.g. center level as a function of cell level, etc.). Although the current effort was to develop an IDEF0 activity model, the...concepts and thoughts on synthesizing existing knowledge toward the objective of developing a hierarchical IDEF0 econometric model for a large scale...review of the terminology and structure of IDEF0 (ICAM definition method, version 0) is given in the subsequent paragraphs. Structured analysis

  20. Research and development of the evolving architecture for beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Cho, Kihyeon; Kim, Jangho; Kim, Junghyun

    2015-12-01

    The Standard Model (SM) has been successfully validated with the discovery of the Higgs boson. However, the model is not yet regarded as a complete description. There are efforts to develop phenomenological models that are collectively termed beyond the Standard Model (BSM). BSM studies require several orders of magnitude more simulations than those required for Higgs boson events. At the same time, particle physics research involves major investments in hardware coupled with large-scale theoretical and computational efforts along with experiments. These fields include simulation toolkits based on an evolving computing architecture. Using these simulation toolkits, we study particle physics beyond the Standard Model. Here, we describe the state of this research and development effort for an evolving computing architecture of high-throughput computing (HTC) and graphics processing units (GPUs) for searches beyond the Standard Model.

  1. Modeling & Analysis of Multicore Architectures for Embedded SIGINT Applications

    DTIC Science & Technology

    2015-03-01

    Advisor, Computing & Communications Division, Information Directorate. This report is published in the interest of scientific and technical...to make more informed selection of high performance embedded computing (HPEC) technologies. SUBJECT TERMS: analytical performance model, embedded...computing, energy efficient computing, high performance computing, multicore computing, power modeling, signals intelligence, signal processing

  2. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes, such as standby redundancy and repair, or renewal processes, such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Model formulation is therefore a major analysis bottleneck, and model verification a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the need to automate model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
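The kind of model such a tool would formulate can be illustrated with the smallest example: a single repairable component as a two-state continuous-time Markov chain, solved for steady-state availability. This is an illustrative sketch only, not ARM's own formulation; the failure and repair rates are invented.

```python
import numpy as np

def steady_state(Q):
    """Stationary distribution of a continuous-time Markov chain from its
    generator matrix Q (rows sum to zero): solve pi Q = 0, sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state repairable component: up -(lam)-> down, down -(mu)-> up
lam, mu = 1e-3, 1e-1            # failure and repair rates (per hour)
Q = np.array([[-lam, lam],
              [mu, -mu]])
pi = steady_state(Q)            # pi[0] is the steady-state availability
```

For this simple chain the closed form is availability = mu / (lam + mu); ARM's value lies in generating and solving far larger state spaces, with competing repair and renewal processes, without hand derivation.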

  3. Conformations of seven-membered rings: The Fourier transform model

    NASA Astrophysics Data System (ADS)

    Cano, F. H.; Foces-Foces, C.

    A representation of the puckered conformations of seven-membered rings, using the Fourier transform model and derived from the torsion angles, is presented in terms of two puckering amplitudes and their corresponding puckering phases. These four parameters are used to describe the main conformational types and to study the planarity of the rings, symmetrical forms, pseudorotation pathways, and symmetrical interconversions through the puckering levels. This analysis provides a criterion for characterizing the basic conformations that have already been established by earlier work. A comparison with previous models is also given, and the representation is applied to some 1,4-benzodiazepine compounds.
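The decomposition described, expressing the seven endocyclic torsion angles through two amplitude/phase pairs, can be sketched as a discrete Fourier projection onto the m = 2 and m = 3 components. This is an illustrative reading of the model with an invented sample signal; sign and phase conventions here are one possible choice, not necessarily the authors'.

```python
import numpy as np

def puckering_params(torsions_deg):
    """Project the 7 ring torsion angles onto Fourier components
    tau_j ~ A_m * cos(2*pi*m*j/7 + phi_m) for m = 2, 3, returning
    {m: (amplitude in radians, phase in degrees)}."""
    tau = np.radians(np.asarray(torsions_deg, float))
    n = tau.size
    j = np.arange(n)
    params = {}
    for m in (2, 3):
        c = (2.0 / n) * np.sum(tau * np.cos(2 * np.pi * m * j / n))
        s = (2.0 / n) * np.sum(tau * np.sin(2 * np.pi * m * j / n))
        params[m] = (np.hypot(c, s), np.degrees(np.arctan2(-s, c)))
    return params
```

Because the m = 2 and m = 3 basis vectors are orthogonal over the seven ring positions, a torsion pattern built from one component contributes nothing to the other, which is what lets two amplitude/phase pairs classify the conformation.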

  4. Using UML Modeling to Facilitate Three-Tier Architecture Projects in Software Engineering Courses

    ERIC Educational Resources Information Center

    Mitra, Sandeep

    2014-01-01

    This article presents the use of a model-centric approach to facilitate software development projects conforming to the three-tier architecture in undergraduate software engineering courses. Many instructors intend that such projects create software applications for use by real-world customers. While it is important that the first version of these…

  5. Can diversity in root architecture explain plant water use efficiency? A modeling study

    PubMed Central

    Tron, Stefania; Bodner, Gernot; Laio, Francesco; Ridolfi, Luca; Leitner, Daniel

    2015-01-01

    Drought stress is a dominant constraint on crop production. Breeding crops with root systems adapted for effective uptake of water represents a novel strategy to increase crop drought resistance. Due to the complex interaction between root traits and the high diversity of hydrological conditions, modeling provides important information for trait-based selection. In this work we use a root architecture model combined with a soil-hydrological model to analyze whether there is a root system ideotype of general adaptation to drought, or whether the water uptake efficiency of root systems is a function of specific hydrological conditions. This was done by modeling transpiration of 48 root architectures in 16 drought scenarios with distinct soil textures, rainfall distributions, and initial soil moisture availability. We find that the water uptake efficiency of a root architecture is strictly dependent on the hydrological scenario. Even dense and deep root systems are not superior in water uptake under all hydrological scenarios. Our results demonstrate that a mere architectural description is insufficient to find root systems of optimum functionality. We find that in environments with sufficient rainfall before the growing season, root depth represents the key trait for the exploration of stored water, especially in fine soils. Root density, instead, especially near the soil surface, becomes the most relevant trait for exploiting soil moisture when plant water supply is mainly provided by rainfall events during root system development. We therefore conclude that trait-based root breeding has to consider root systems with specific adaptation to the hydrology of the target environment. PMID:26412932

  6. A Model Based Framework for Semantic Interpretation of Architectural Construction Drawings

    ERIC Educational Resources Information Center

    Babalola, Olubi Oluyomi

    2011-01-01

    The study addresses the automated translation of architectural drawings from 2D Computer Aided Drafting (CAD) data into a Building Information Model (BIM), with emphasis on the nature, possible role, and limitations of a drafting language Knowledge Representation (KR) on the problem and process. The central idea is that CAD to BIM translation is a…

  7. Derivation of Rigid Body Analysis Models from Vehicle Architecture Abstractions

    DTIC Science & Technology

    2011-06-17

    simultaneously with the model creation. The author has described the evolution of the car design process from the conventional approach to the new development...models of every type have their basis in some type of physical representation of the design domain. Rather than describing three-dimensional continua of...arrangement, while capturing just enough physical detail to be used as the basis for a meaningful representation of the design, and eventually, analyses that

  8. Integrated Architectural Level Power-Performance Modeling Toolkit

    DTIC Science & Technology

    2004-08-20

    laptop) systems. We utilize the MET/Turandot toolkit originally developed at IBM TJ Watson Research Center as the underlying PowerPC...microarchitecture performance simulator [3]. Turandot is flexible enough to model a broad range of microarchitectures and has undergone extensive validation [3...In addition, Turandot has been augmented with power models to explore power-performance tradeoffs in an internal IBM tool called PowerTimer [4

  9. Designing Capital-Intensive Systems with Architectural and Operational Flexibility Using a Screening Model

    NASA Astrophysics Data System (ADS)

    Lin, Jijun; de Weck, Olivier; de Neufville, Richard; Robinson, Bob; MacGowan, David

    Development of capital intensive systems, such as offshore oil platforms or other industrial infrastructure, generally requires a significant amount of capital investment under various resource, technical, and market uncertainties. It is a very challenging task for development co-owners or joint ventures because important decisions, such as system architectures, have to be made while uncertainty remains high. This paper develops a screening model and a simulation framework to quickly explore the design space for complex engineering systems under uncertainty allowing promising strategies or architectures to be identified. Flexibility in systems’ design and operation is proposed as a proactive means to enable systems to adapt to future uncertainty. Architectural and operational flexibility can improve systems’ lifecycle value by mitigating downside risks and capturing upside opportunities. In order to effectively explore different flexible strategies addressing a view of uncertainty which changes with time, a computational framework based on Monte Carlo simulation is proposed in this paper. This framework is applied to study flexible development strategies for a representative offshore petroleum project. The complexity of this problem comes from multi-domain uncertainties, large architectural design space, and structure of flexibility decision rules. The results demonstrate that architectural and operational flexibility can significantly improve projects’ Expected Net Present Value (ENPV), reduce downside risks, and improve upside gains, compared to adopting an inflexible strategy appropriate to the view of uncertainty at the start of the project. In this particular case study, the most flexible strategy improves ENPV by 85% over an inflexible base case.

  10. A microcomputer algorithm for solving compartmental models involving radionuclide transformations.

    PubMed

    Birchall, A

    1986-03-01

    An algorithm for solving first-order non-recycling compartment models is described. Given the initial amounts of a radioactive material in each compartment and the fundamental transfer rate constants between each compartment, the algorithm gives both the amount of material remaining at any time t and the integrated number of transformations that would occur up to time t. The method is analytical, and consequently, is ideally suited for implementation on a microcomputer. For a typical microcomputer with 64 kilobytes of random access memory, a model containing up to 100 compartments, with any number of interconnecting translocation routes, can be solved in a few seconds, provided that no recycling occurs. An example computer program, written in 30 lines of Microsoft BASIC, is included in an appendix to demonstrate the use of the algorithm. A detailed description is included to show how the algorithm is modified to satisfy the requirements commonly encountered in compartment modelling, for example, continuous intake, partitioning of activity, and transformations from radioactive progeny. Although the algorithm does not solve models involving recycling, it is often possible to represent such cases by a non-recycling model which is mathematically equivalent.
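
    The abstract describes the algorithm but does not reproduce its code. As a rough illustration of the analytical approach (not the paper's 30-line BASIC program), the Python sketch below evaluates the classic Bateman solution for a simple non-recycling chain 1 → 2 → … → n with distinct transfer rates; the published algorithm additionally handles arbitrary non-recycling networks and integrated transformations.

```python
import math

def bateman(n0, rates, t):
    """Amounts in a non-recycling chain 1 -> 2 -> ... -> n at time t.

    n0:    initial amount in compartment 1 (all others start empty)
    rates: distinct fractional transfer-rate constants out of each
           compartment (use 0.0 for a terminal compartment)
    """
    amounts = []
    for i in range(len(rates)):
        lam = rates[: i + 1]
        total = 0.0
        for j, lj in enumerate(lam):
            denom = 1.0
            for k, lk in enumerate(lam):
                if k != j:
                    denom *= lk - lj
            total += math.exp(-lj * t) / denom
        coeff = n0
        for lk in lam[:-1]:
            coeff *= lk          # product of the rates feeding compartment i
        amounts.append(coeff * total)
    return amounts

# One radioactive compartment (rate 1.0) feeding a stable one (rate 0.0),
# evaluated at the half-life of compartment 1.
amounts = bateman(1.0, [1.0, 0.0], math.log(2.0))
```

    Because the solution is a closed-form sum of exponentials, evaluation at any t is immediate, which is what makes the method practical on a small machine; for the two-compartment example above, half the material remains in compartment 1 and half has accumulated in compartment 2.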

  11. Distributed model predictive control with hierarchical architecture for communication: application in automated irrigation channels

    NASA Astrophysics Data System (ADS)

    Farhadi, Alireza; Khodabandehlou, Ali

    2016-08-01

    This paper is concerned with a distributed model predictive control (DMPC) method that is based on a distributed optimisation method with a two-level architecture for communication. Feasibility (constraint satisfaction by the approximated solution), convergence and optimality of this distributed optimisation method are mathematically proved. For an automated irrigation channel, the satisfactory performance of the proposed DMPC method in attenuating the undesired upstream propagation and amplification of transient errors is illustrated and compared with the performance of another DMPC method that exploits a single-level architecture for communication. The DMPC method with the two-level communication architecture is shown to perform better because it manages communication overhead more effectively.

  12. Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cole, Bjorn F.; Jenkins, J. Steven

    2015-01-01

    In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remain separate from engineering design tools. With the Europa Clipper project, efforts are being made to change the requirements development approach from a separate activity to one intimately embedded in the formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in architecture definition.

  13. Evaluating models of community psychology: social transformation in South Africa.

    PubMed

    Edwards, Steve

    2002-01-01

    Tricket (1996) described community psychology in terms of contexts of diversity within a diversity of contexts. As abstract representations of reality, various community psychological models provide further diverse contexts through which to view the diversity of community psychological reality. The Zululand Community Psychology Project is a South African initiative aimed at improving community life. This includes treating the violent sequelae of the unjust Apartheid system through improving relationships among communities divided in terms of historical, colonial, racial, ethnic, political, gender, and other boundaries as well as promoting health and social change. The aim of this article is to evaluate the applicability of various models of community psychology used in this project. The initial quantitative investigation in the Zululand Community Psychology Project involved five coresearchers, who evaluated five community psychology models--the mental health, social action, organizational, ecological, and phenomenological models--in terms of their differential applicability in three partnership centers, representing health, education, and business sectors of the local community. In all three contexts, the models were rank ordered by a representative of each center, an intern community psychologist, and his supervisor in terms of the models' respective applicability to the particular partnership center concerned. Results indicated significant agreement with regard to the differential applicability of the mental health, phenomenological, and organizational models in the health, education, and business centers respectively, with the social action model being most generally applicable across all centers. This led to a further qualitative individual and focus group investigation with eight university coresearchers into the experience of social transformation with special reference to social changes needed in the South African context. These social transformation

  14. Use of the Chemical Transformation Simulator as a Parameterization Tool for Modeling the Environmental Fate of Organic Chemicals and their Transformation Products

    EPA Science Inventory

    A Chemical Transformation Simulator is a web-based system for predicting transformation pathways and physicochemical properties of organic chemicals. Role in Environmental Modeling • Screening tool for identifying likely transformation products in the environment • Parameteri...

  15. Models for Predicting the Architecture of Different Shoot Types in Apple

    PubMed Central

    Baïram, Emna; Delaire, Mickaël; Le Morvan, Christian; Buck-Sorlin, Gerhard

    2017-01-01

    In apple, the first-order branch of a tree has a characteristic architecture constituting three shoot types: bourses (rosettes), bourse shoots, and vegetative shoots. Its overall architecture as well as that of each shoot thus determines the distribution of sources (leaves) and sinks (fruits) and could have an influence on the amount of sugar allocated to fruits. Knowledge of architecture, in particular the position and area of leaves helps to quantify source strength. In order to reconstruct this initial architecture, rules equipped with allometric relations could be used: these allow predicting model parameters that are difficult to measure from simple traits that can be determined easily, non-destructively and directly in the orchard. Once such allometric relations are established they can be used routinely to recreate initial structures. Models based on allometric relations have been established in this study in order to predict the leaf areas of the three different shoot types of three apple cultivars with different branch architectures: “Fuji,” “Ariane,” and “Rome Beauty.” The allometric relations derived from experimental data allowed us to model the total shoot leaf area as well as the individual leaf area for each leaf rank, for each shoot type and each genotype. This was achieved using two easily measurable input variables: total leaf number per shoot and the length of the biggest leaf on the shoot. The models were tested using a different data set, and they were able to accurately predict leaf area of all shoot types and genotypes. Additional focus on internode lengths on spurs contributed to refine the models. PMID:28203241
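
    The two easily measured inputs above lend themselves to simple allometric fitting. As a hypothetical single-predictor sketch (the study combines leaf number and biggest-leaf length, and the coefficients below are invented), a power-law allometry can be fitted by least squares on log-transformed data:

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b via linear regression on log-log axes."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((p - mx) * (q - my) for p, q in zip(lx, ly)) \
        / sum((p - mx) ** 2 for p in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic check: shoot leaf areas generated from an invented allometry
# (area = 0.7 * length**1.8) are recovered exactly by the fit.
lengths = [4.0, 6.0, 8.0, 10.0, 12.0]      # length of biggest leaf (cm, made up)
areas = [0.7 * L ** 1.8 for L in lengths]
a, b = fit_power_law(lengths, areas)
```

    Once coefficients like these are estimated per shoot type and genotype, the relation can be applied routinely in the orchard to reconstruct leaf area from non-destructive measurements, which is the role allometric rules play in the paper.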

  16. The Laminar Cortex Model: A New Continuum Cortex Model Incorporating Laminar Architecture

    PubMed Central

    Du, Jiaxin; Vegh, Viktor; Reutens, David C.

    2012-01-01

    Local field potentials (LFPs) are widely used to study the function of local networks in the brain. They are also closely correlated with the blood-oxygen-level-dependent signal, the predominant contrast mechanism in functional magnetic resonance imaging. We developed a new laminar cortex model (LCM) to simulate the amplitude and frequency of LFPs. Our model combines the laminar architecture of the cerebral cortex and multiple continuum models to simulate the collective activity of cortical neurons. The five cortical layers (layer I, II/III, IV, V, and VI) are simulated as separate continuum models between which there are synaptic connections. The LCM was used to simulate the dynamics of the visual cortex under different conditions of visual stimulation. LFPs are reported for two kinds of visual stimulation: general visual stimulation and intermittent light stimulation. The power spectra of LFPs were calculated and compared with existing empirical data. The LCM was able to produce spontaneous LFPs exhibiting frequency-inverse (1/f) power spectrum behaviour. Laminar profiles of current source density showed similarities to experimental data. General stimulation enhanced the oscillation of LFPs corresponding to gamma frequencies. During simulated intermittent light stimulation, the LCM captured the fundamental as well as higher-order harmonics as previously reported. The power spectrum expected with a reduction in layer IV neurons, often observed with focal cortical dysplasias associated with epilepsy, was also simulated. PMID:23093925

  17. Implementation of Remaining Useful Lifetime Transformer Models in the Fleet-Wide Prognostic and Health Management Suite

    SciTech Connect

    Agarwal, Vivek; Lybeck, Nancy J.; Pham, Binh; Rusaw, Richard; Bickford, Randall

    2015-02-01

    Research and development efforts are required to address aging and reliability concerns of the existing fleet of nuclear power plants. As most plants continue to operate beyond the license life (i.e., towards 60 or 80 years), plant components are more likely to incur age-related degradation mechanisms. To assess and manage the health of aging plant assets across the nuclear industry, the Electric Power Research Institute has developed a web-based Fleet-Wide Prognostic and Health Management (FW-PHM) Suite for diagnosis and prognosis. FW-PHM is a set of web-based diagnostic and prognostic tools and databases, comprised of the Diagnostic Advisor, the Asset Fault Signature Database, the Remaining Useful Life Advisor, and the Remaining Useful Life Database, that serves as an integrated health monitoring architecture. The main focus of this paper is the implementation of prognostic models for generator step-up transformers in the FW-PHM Suite. One prognostic model discussed is based on the functional relationship between degree of polymerization (the most commonly used metric for assessing the health of the winding insulation in a transformer) and furfural concentration in the insulating oil. The other model is based on thermal-induced degradation of the transformer insulation. By utilizing transformer loading information, established thermal models are used to estimate the hot spot temperature inside the transformer winding. Both models are implemented in the Remaining Useful Life Database of the FW-PHM Suite. The Remaining Useful Life Advisor utilizes the implemented prognostic models to estimate the remaining useful life of the paper winding insulation in the transformer based on actual oil testing and operational data.
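
    The abstract does not give the FW-PHM equations. As one widely used form of thermal-induced insulation aging model of the kind described, the sketch below computes the Arrhenius-style aging acceleration factor in the form used by IEEE Std C57.91; the Suite's actual model may differ from this.

```python
import math

def aging_acceleration_factor(hot_spot_c, ref_c=110.0, b=15000.0):
    """Arrhenius-style insulation aging acceleration factor.

    IEEE Std C57.91 form: F_AA = exp(B/(ref+273) - B/(theta+273)).
    F_AA equals 1 at the reference hot-spot temperature (110 C for
    thermally upgraded paper); aging accelerates above it.
    """
    return math.exp(b / (ref_c + 273.0) - b / (hot_spot_c + 273.0))
```

    Under this relation, insulation aging roughly doubles for every 6-7 °C that the winding hot-spot temperature rises above the reference, which is why hot-spot estimates from transformer loading data are central to remaining-useful-life prognostics.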

  18. Analysis of trabecular bone architectural changes induced by osteoarthritis in rabbit femur using 3D active shape model and digital topology

    NASA Astrophysics Data System (ADS)

    Saha, P. K.; Rajapakse, C. S.; Williams, D. S.; Duong, L.; Coimbra, A.

    2007-03-01

    Osteoarthritis (OA) is the most common chronic joint disease, which causes the cartilage between the bone joints to wear away, leading to pain and stiffness. Currently, progression of OA is monitored by measuring joint space width using x-ray or cartilage volume using MRI. However, OA affects all periarticular tissues, including cartilage and bone. It has been shown previously that in animal models of OA, trabecular bone (TB) architecture is particularly affected. Furthermore, relative changes in architecture are dependent on the depth of the TB region with respect to the bone surface and main direction of load on the bone. The purpose of this study was to develop a new method for accurately evaluating 3D architectural changes induced by OA in TB. Determining the TB test domain that represents the same anatomic region across different animals is crucial for studying disease etiology, progression and response to therapy. It also represents a major technical challenge in analyzing architectural changes. Here, we solve this problem using a new active shape model (ASM)-based approach. A new and effective semi-automatic landmark selection approach has been developed for the rabbit distal femur surface that can easily be adopted for many other anatomical regions. It has been observed that, on average, a trained operator can complete the user interaction part of the landmark specification process in less than 15 minutes for each bone data set. Digital topological analysis and fuzzy distance transform derived parameters are used for quantifying TB architecture. The method has been applied on micro-CT data of excised rabbit femur joints from anterior cruciate ligament transected (ACLT) (n = 6) and sham (n = 9) operated groups collected at two and two-to-eight weeks post-surgery, respectively. An ASM of the rabbit right distal femur has been generated from the sham group micro-CT data. 
The results suggest that, in conjunction with ASM, digital topological parameters are suitable for

  19. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    SciTech Connect

    Noonan, Nicholas James

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  20. Accounting for nonlinear material characteristics in modeling ferroresonant transformers

    NASA Astrophysics Data System (ADS)

    Voisine, J. T.

    1985-04-01

    A mathematical model relating core material properties, including nonlinear magnetization characteristics, to the performance of ferroresonant transformers has been developed. In accomplishing this, other factors such as fabrication destruction factors, leakage flux, air gap characteristics, loading, and coil resistances and self-inductances are also accounted for. From a material manufacturer's view, knowing such information facilitates isolating sources of performance variations between units of similar design and is therefore highly desirable. The model predicts the primary induction necessary to establish a specified secondary induction and determines peak induction at other points in the magnetic circuit. A study comparing the model with a transformer indicated that each predicted peak induction was within ±5% of the corresponding measured peak induction. A generalized 4-node magnetic circuit having two shunt paths was chosen and modeled. Such a circuit is easily modified facilitating the analyses of numerous other core designs. A computer program designed to run on an HP-41 programmable calculator was also developed and is briefly described.

  1. Deep Phenotyping of Coarse Root Architecture in R. pseudoacacia Reveals That Tree Root System Plasticity Is Confined within Its Architectural Model

    PubMed Central

    Danjon, Frédéric; Khuder, Hayfa; Stokes, Alexia

    2013-01-01

    This study aims at assessing the influence of slope angle and multi-directional flexing and their interaction on the root architecture of Robinia pseudoacacia seedlings, with a particular focus on architectural model and trait plasticity. 36 trees were grown from seed in containers inclined at 0° (control) or 45° (slope) in a glasshouse. The shoots of half the plants were gently flexed for 5 minutes a day. After 6 months, root systems were excavated and digitized in 3D, and biomass measured. Over 100 root architectural traits were determined. Both slope and flexing significantly increased plant size. Non-flexed trees on 45° slopes developed shallow roots which were largely aligned perpendicular to the slope. Compared to the controls, flexed trees on 0° slopes possessed a shorter and thicker taproot held in place by regularly distributed long and thin lateral roots. Flexed trees on the 45° slope also developed a thick vertically aligned taproot, with more volume allocated to upslope surface lateral roots, due to the greater soil volume uphill. We show that there is an inherent root system architectural model, but that a certain number of traits are highly plastic. This plasticity will permit root architectural design to be modified depending on external mechanical signals perceived by young trees. PMID:24386227

  3. Healthcare information system architecture (HISA) and its middleware models.

    PubMed

    Scherrer, J R; Spahni, S

    1999-01-01

    The use of middleware to develop widely distributed healthcare information systems (HIS) has become inevitable. However, the fact that many different, sometimes mutually heterogeneous platforms are hooked into the same network makes integrating the various middleware components more difficult than some might believe. This paper discusses the HISA standard and proposes extensions to the model that, in turn, could be compliant with various other existing distributed platforms and their middleware components.

  4. Mapping a Domain Model and Architecture to a Generic Design

    DTIC Science & Technology

    1994-05-01

    software engineering life cycle entitled Model-Based Software Engineering (MBSE), a concept first described by the SEI in [Feller 93]. MBSE enables...organizations to build software applications which must evolve with a minimum of rework and scrap to meet changes in mission and technology. MBSE involves...software models are also built. MBSE is a focus area for the SEI's Engineering Techniques Program and is the subject of a recent SEI report [Withey 94

  5. Research on mixed network architecture collaborative application model

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, the rapid development of geographical business, and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (Client/Server or Browser/Server models) does not support this well. Collaborative applications are one good resolution. A collaborative application has four main problems to resolve: consistency and co-edit conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, initiative and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation, and they bring new methods for cooperation and for access to spatial data. A multi-level cache holds part of the full data; it reduces the network load and improves the access and handling of spatial data, especially when editing spatial data. With agent technology we make full use of agents' intelligence for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.
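
    The caching idea in this abstract can be sketched minimally as a read-through cache in front of a remote spatial-data store. All names below are hypothetical and the AMCM design is certainly richer (agents, multiple cache levels, conflict handling); the sketch only shows how a local cache level absorbs repeated requests and so reduces network load.

```python
# Minimal read-through cache in front of a simulated remote spatial-data store.

class ReadThroughCache:
    def __init__(self, fetch):
        self.fetch = fetch   # callable that loads a feature from the remote store
        self.local = {}      # local cache level: in-memory copies of features
        self.misses = 0      # remote fetches actually performed

    def get(self, feature_id):
        if feature_id not in self.local:
            self.misses += 1
            self.local[feature_id] = self.fetch(feature_id)
        return self.local[feature_id]

remote = {"road_17": {"geometry": [(0.0, 0.0), (5.0, 3.0)]}}
cache = ReadThroughCache(lambda fid: remote[fid])
first = cache.get("road_17")    # cache miss: fetched from the remote store
second = cache.get("road_17")   # served from the local cache, no network trip
```

    Editing workflows would additionally need invalidation and write-back, which is where the agent-managed cache described in the abstract goes beyond this sketch.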

  6. Coupling root architecture and pore network modeling - an attempt towards better understanding root-soil interactions

    NASA Astrophysics Data System (ADS)

    Leitner, Daniel; Bodner, Gernot; Raoof, Amir

    2013-04-01

    Understanding root-soil interactions is of high importance for environmental and agricultural management. Root uptake is an essential component in water and solute transport modeling. The amount of groundwater recharge and solute leaching significantly depends on the demand-based plant extraction via its root system. Plant uptake however not only responds to the potential demand, but in most situations is limited by supply from the soil. The ability of the plant to access water and solutes in the soil is governed mainly by root distribution. Particularly under conditions of heterogeneous distribution of water and solutes in the soil, it is essential to capture the interaction between soil and roots. Root architecture models allow studying plant uptake from soil by describing growth and branching of root axes in the soil. Currently, root architecture models are able to respond dynamically to water and nutrient distribution in the soil by directed growth (tropism), modified branching and enhanced exudation. The porous soil medium as rooting environment in these models is generally described by classical macroscopic water retention and sorption models, averaged over the pore scale. In our opinion this simplified description of the root growth medium implies several shortcomings for better understanding root-soil interactions: (i) It is well known that roots grow preferentially in preexisting pores, particularly in more rigid/dry soil. Thus the pore network contributes to the architectural form of the root system; (ii) roots themselves can influence the pore network by creating preferential flow paths (biopores) which are an essential element of structural porosity with strong impact on transport processes; (iii) plant uptake depends on both the spatial location of water/solutes in the pore network and the spatial distribution of roots. 
We therefore consider that for advancing our understanding in root-soil interactions, we need not only to extend our root models

  7. Development and validation of a tokamak skin effect transformer model

    NASA Astrophysics Data System (ADS)

    Romero, J. A.; Moret, J.-M.; Coda, S.; Felici, F.; Garrido, I.

    2012-02-01

    A lumped parameter, state space model for a tokamak transformer including the slow flux penetration in the plasma (skin effect transformer model) is presented. The model does not require detailed or explicit information about plasma profiles or geometry. Instead, this information is lumped in system variables, parameters and inputs. The model has an exact mathematical structure built from energy and flux conservation theorems, predicting the evolution and non-linear interaction of plasma current and internal inductance as functions of the primary coil currents, plasma resistance, non-inductive current drive and the loop voltage at a specific location inside the plasma (equilibrium loop voltage). Loop voltage profile in the plasma is substituted by a three-point discretization, and ordinary differential equations are used to predict the equilibrium loop voltage as a function of the boundary and resistive loop voltages. This provides a model for equilibrium loop voltage evolution, which is reminiscent of the skin effect. The order and parameters of this differential equation are determined empirically using system identification techniques. Fast plasma current modulation experiments with random binary signals have been conducted in the TCV tokamak to generate the required data for the analysis. Plasma current was modulated under ohmic conditions between 200 and 300 kA with 30 ms rise time, several times faster than its time constant L/R ≈ 200 ms. A second-order linear differential equation for equilibrium loop voltage is sufficient to describe the plasma current and internal inductance modulation with fits of 70% and 38%, respectively. The model explains the most salient features of the plasma current transients, such as the inverse correlation between plasma current ramp rates and internal inductance changes, without requiring detailed or explicit information about resistivity profiles. This proves that a lumped parameter modelling approach can be used to
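
    The empirical identification step described above can be illustrated with a toy example. The sketch below fits a first-order discrete-time ARX model y[k] = a·y[k-1] + b·u[k-1] by least squares and recovers known parameters from noise-free data; the TCV model itself is second order and identified from real modulation experiments, so this is only a schematic of the idea, with all numbers invented.

```python
# Toy illustration of system identification: least-squares fit of a
# first-order ARX model y[k] = a*y[k-1] + b*u[k-1] from input/output data.

def fit_arx1(u, y):
    # Accumulate the sums appearing in the 2x2 normal equations.
    s_yy = s_uu = s_yu = s_y1y = s_uy = 0.0
    for k in range(1, len(y)):
        y1, u1 = y[k - 1], u[k - 1]
        s_yy += y1 * y1
        s_uu += u1 * u1
        s_yu += y1 * u1
        s_y1y += y1 * y[k]
        s_uy += u1 * y[k]
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_uu * s_y1y - s_yu * s_uy) / det
    b = (s_yy * s_uy - s_yu * s_y1y) / det
    return a, b

# Generate noise-free data from a known system driven by a square wave
# (a persistently exciting input), then recover the parameters.
a_true, b_true = 0.9, 0.5
u = [1.0 if (k // 10) % 2 == 0 else -1.0 for k in range(200)]
y = [0.0]
for k in range(1, 200):
    y.append(a_true * y[k - 1] + b_true * u[k - 1])

a_est, b_est = fit_arx1(u, y)
```

    With noisy experimental data, the fit is no longer exact, and the "fit" percentages quoted in the abstract quantify how much of the measured signal such an identified model reproduces.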

  8. Architectural Improvements and New Processing Tools for the Open XAL Online Model

    SciTech Connect

    Allen, Christopher K; Pelaia II, Tom; Freed, Jonathan M

    2015-01-01

    The online model is the component of Open XAL providing accelerator modeling, simulation, and dynamic synchronization to live hardware. Significant architectural changes and feature additions have been recently made in two separate areas: 1) the managing and processing of simulation data, and 2) the modeling of RF cavities. Simulation data and data processing have been completely decoupled. A single class manages all simulation data while standard tools were developed for processing the simulation results. RF accelerating cavities are now modeled as composite structures where parameter and dynamics computations are distributed. The beam and hardware models both maintain their relative phase information, which allows for dynamic phase slip and elapsed time computation.

  9. Objective Evaluation of Sensor Web Modeling and Data System Architectures

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Atlas, R. M.; Ardizzone, J.; Kemp, E. M.; Talabac, S.

    2013-12-01

    We discuss the recent development of an end-to-end simulator designed to quantitatively assess the scientific value of incorporating model- and event-driven "sensor web" capabilities into future NASA Earth Science missions. The intent is to provide an objective analysis tool for performing engineering and scientific trade studies in which new technologies are introduced. In the case study presented here we focus on meteorological applications in which a numerical model is used to intelligently schedule data collection by space-based assets. Sensor web observing systems that enable dynamic targeting by various observing platforms have the potential to significantly improve our ability to monitor, understand, and predict the evolution of rapidly evolving, transient, or variable meteorological events. The use case focuses on landfalling hurricanes and was selected due to the obvious societal impact and the ongoing need to improve warning times. Although hurricane track prediction has improved over the past several decades, further improvement is necessary in the prediction of hurricane intensity. We selected a combination of future observing platforms to apply sensor web measurement techniques: global 3D lidar winds, next-generation scatterometer ocean vector winds, and high resolution cloud motion vectors from GOES-R. Targeting of the assets by a numerical model would allow the spacecraft to change its attitude by performing a roll maneuver to enable off-nadir measurements to be acquired. In this study, synthetic measurements were derived through Observing System Simulation Experiments (OSSEs) and enabled in part through the Doppler Lidar Simulation Model developed by Simpson Weather Associates. 
We describe the capabilities of the simulator through three different sensor web configurations of the wind lidar: winds obtained from a nominal "survey mode" operation, winds obtained with a reduced duty cycle of the lidar (designed for preserving the life of the instrument

  10. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity.

    PubMed

    Malinin, Laura H

    2015-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to lack of suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first-person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person's interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity.

  11. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity

    PubMed Central

    Malinin, Laura H.

    2016-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to lack of suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first-person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person’s interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity. PMID:26779087

  12. A service-oriented architecture for integrating the modeling and formal verification of genetic regulatory networks

    PubMed Central

    2009-01-01

    Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementational level. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees a transparent access to formal verification technology for modelers of genetic regulatory networks. PMID:20042075

  13. Culture models of human mammary epithelial cell transformation

    SciTech Connect

    Stampfer, Martha R.; Yaswen, Paul

    2000-11-10

    Human pre-malignant breast diseases, particularly ductal carcinoma in situ (DCIS), already display several of the aberrant phenotypes found in primary breast cancers, including chromosomal abnormalities, telomerase activity, inactivation of the p53 gene and overexpression of some oncogenes. Efforts to model early breast carcinogenesis in human cell cultures have largely involved studies of in vitro transformation of normal finite-lifespan human mammary epithelial cells (HMEC) to immortality and malignancy. We present a model of HMEC immortal transformation consistent with the known in vivo data. This model includes a recently described, presumably epigenetic process, termed conversion, which occurs in cells that have overcome stringent replicative senescence and are thus able to maintain proliferation with critically short telomeres. The conversion process involves reactivation of telomerase activity, and acquisition of good uniform growth in the absence and presence of TGF-β. We propose that overcoming the proliferative constraints set by senescence, and undergoing conversion, represent key rate-limiting steps in human breast carcinogenesis, and occur during early-stage breast cancer progression.

  14. Role of System Architecture in Developing New Drafting Tools

    NASA Astrophysics Data System (ADS)

    Sorguç, Arzu Gönenç

    In this study, the impact of information technologies on the architectural design process is discussed. First, the differences between the concepts of software engineering and system architecture are clarified. Then, the design process in engineering is compared with the design process in architecture, considering 3-D models as the center of the design process through which the other disciplines engage with the design. It is pointed out that in many high-end engineering applications, 3-D solid models, and consequently the digital mock-up concept, have become common practice. Architecture, however, as one of the important customers of the CAD systems providing these tools, has not yet adopted such 3-D models. It is shown that the reason for this time lag between architecture and engineering lies in the tradition of design attitude. Therefore, a new design scheme, a meta-model, is proposed to develop an integrated design model centered on the 3-D model, together with a system architecture intended to transform the architectural design process by replacing 2-D thinking with 3-D thinking. In the proposed system architecture, the CAD systems are included and adapted for 3-D architectural design in order to provide interfaces for integrating all relevant disciplines into the design process. It is also argued that such a change will allow the intelligent or smart building concept to be elaborated in the future.

  15. Diagnostic and Prognostic Models for Generator Step-Up Transformers

    SciTech Connect

    Vivek Agarwal; Nancy J. Lybeck; Binh T. Pham

    2014-09-01

    In 2014, the online monitoring (OLM) of active components project under the Light Water Reactor Sustainability program at Idaho National Laboratory (INL) focused on diagnostic and prognostic capabilities for generator step-up (GSU) transformers. INL worked with subject matter experts from the Electric Power Research Institute (EPRI) to augment and revise the GSU fault signatures previously implemented in EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Two prognostic models were identified and implemented for GSUs in the FW-PHM Suite software. INL and EPRI demonstrated the use of prognostic capabilities for GSUs. The complete set of fault signatures developed for GSUs in the Asset Fault Signature Database of the FW-PHM Suite is presented in this report. Two prognostic models are described for paper insulation: the Chendong model for degree of polymerization, and an IEEE model that uses a loading profile to calculate life consumption based on hot-spot winding temperatures. Both models are life consumption models, which are examples of type II prognostic models. Use of the models in the FW-PHM Suite was successfully demonstrated at the August 2014 Utility Working Group Meeting in Idaho Falls, Idaho, to representatives from different utilities, EPRI, and the Halden Research Project.
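    The IEEE loading-profile model mentioned above can be sketched in a few lines. This is a hedged illustration, not the report's implementation: the constants follow the familiar IEEE C57.91 aging-acceleration formula for thermally upgraded paper (reference hot-spot temperature 110 °C), and the 24-hour temperature profile is invented.

```python
# Sketch of a "type II" life-consumption calculation for transformer paper
# insulation, in the spirit of the IEEE loading-guide model described above.
# Constants are the standard IEEE C57.91 values for thermally upgraded paper;
# the hourly hot-spot profile below is invented for illustration.
import math

def aging_acceleration_factor(hot_spot_c: float) -> float:
    """Per-hour aging relative to continuous operation at 110 C."""
    return math.exp(15000.0 / 383.0 - 15000.0 / (hot_spot_c + 273.0))

def life_consumed_hours(hot_spot_profile_c: list) -> float:
    """Equivalent insulation-aging hours for an hourly hot-spot profile."""
    return sum(aging_acceleration_factor(t) for t in hot_spot_profile_c)

# Invented 24-hour hot-spot profile (deg C): cool overnight, afternoon peak.
profile = [95.0] * 8 + [105.0] * 8 + [118.0] * 4 + [100.0] * 4
print(f"equivalent aging: {life_consumed_hours(profile):.1f} h over 24 h")
```

    Running a measured (rather than invented) hot-spot profile through such a model is what turns a loading history into the remaining-useful-life estimate that a prognostic suite reports.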

  16. Assessment of Mechanical Performance of Bone Architecture Using Rapid Prototyping Models

    NASA Astrophysics Data System (ADS)

    Saparin, Peter; Woesz, Alexander; Thomsen, Jasper S.; Fratzl, Peter

    2008-06-01

    The aim of this ongoing research project is to assess the influence of bone microarchitecture on the mechanical performance of trabecular bone. A testing chain consisting of three steps was established: 1) micro computed tomography (μCT) imaging of human trabecular bone; 2) building of models of the bone from a light-sensitive polymer using Rapid Prototyping (RP); 3) mechanical testing of the models in a material testing machine. A direct resampling procedure was developed to convert μCT data into the format of the RP machine. Standardized parameters for production and testing of the plastic models were established by use of regular cellular structures. Next, normal, osteoporotic, and extreme osteoporotic vertebral trabecular bone architectures were reproduced by RP and compression tested. We found that the normal architecture of vertebral trabecular bone exhibits behaviour characteristic of a cellular structure. In normal bone the fracture occurs at much higher strain values than in osteoporotic bone. After fracture, a normal trabecular architecture is able to carry much higher loads than an osteoporotic architecture. However, no statistically significant differences were found in maximal stress during uniaxial compression of the central part of normal, osteoporotic, and extreme osteoporotic vertebral trabecular bone. This supports the hypothesis that osteoporotic trabecular bone can compensate for a loss of trabeculae by thickening the remaining trabeculae in the loading direction (compensatory hypertrophy). The developed approach could be used for mechanical evaluation of structural data acquired non-invasively and for assessment of changes in the performance of bone architecture.

  17. Operations Assessment of Launch Vehicle Architectures using Activity Based Cost Models

    NASA Technical Reports Server (NTRS)

    Ruiz-Torres, Alex J.; McCleskey, Carey

    2000-01-01

    The growing emphasis on affordability for space transportation systems requires the assessment of new space vehicles for all life cycle activities, from design and development, through manufacturing and operations. This paper addresses the operational assessment of launch vehicles, focusing on modeling the ground support requirements of a vehicle architecture, and estimating the resulting costs and flight rate. This paper proposes the use of Activity Based Costing (ABC) modeling for this assessment. The model uses expert knowledge to determine the activities, the activity times and the activity costs based on vehicle design characteristics. The approach provides several advantages over current approaches to vehicle architecture assessment, including easier validation and allowing vehicle designers to understand the cost and cycle time drivers.
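    As a minimal sketch of the Activity Based Costing idea described above, per-flight cost and turnaround time can be rolled up from per-activity estimates driven by design characteristics. All activity names, hours, and rates below are invented for illustration; they are not the paper's data.

```python
# Toy ABC roll-up: expert-estimated activities -> ground-processing cost and
# cycle time for one flight. Every number here is an invented placeholder.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    hours: float         # expert-estimated duration, driven by design features
    rate: float          # cost per hour (labor + facilities)
    serial: bool = True  # serial activities add to turnaround time

def assess(activities: list) -> tuple:
    """Return (total cost, turnaround hours) for one processing flow."""
    cost = sum(a.hours * a.rate for a in activities)
    turnaround = sum(a.hours for a in activities if a.serial)
    return cost, turnaround

flow = [
    Activity("vehicle safing and inspection", 40, 900),
    Activity("engine checkout", 60, 1200),
    Activity("payload integration", 48, 1100),
    Activity("propellant load and launch ops", 24, 1500),
]
cost, hours = assess(flow)
print(f"cost per flight: ${cost:,.0f}; turnaround: {hours:.0f} h")
```

    Because each activity is tied to a design characteristic, a designer can see directly which feature drives which cost or cycle-time term, which is the validation advantage the paper claims for ABC.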

  18. Modeling, construction and experimental validation of actuated rolling dynamics of the cylindrical Transforming Roving-Rolling Explorer (TRREx)

    NASA Astrophysics Data System (ADS)

    Edwin, L.; Mazzoleni, A.; Gemmer, T.; Ferguson, S.

    2017-03-01

    Planetary surface exploration technology over the past few years has seen significant advancements on multiple fronts. Robotic exploration platforms are becoming more sophisticated and capable of embarking on more challenging missions. More unconventional designs, particularly transforming architectures that have multiple modes of locomotion, are being studied. This work explores the capabilities of one such novel transforming rover called the Transforming Roving-Rolling Explorer (TRREx). Biologically inspired by the armadillo and the golden-wheel spider, the TRREx has two modes of locomotion: it can traverse on six wheels like a conventional rover on benign terrain, but can transform into a sphere when necessary to negotiate steep rugged slopes. The ability to self-propel in the spherical configuration, even in the absence of a negative gradient, increases the TRREx's versatility and its concept value. This paper describes construction and testing of a prototype cylindrical TRREx that demonstrates that "actuated rolling" can be achieved, and also presents a dynamic model of this prototype version of the TRREx that can be used to investigate the feasibility and value of such self-propelled locomotion. Finally, we present results that validate our dynamic model by comparing results from computer simulations made using the dynamic model to experimental results acquired from test runs using the prototype.

  19. Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements

    NASA Astrophysics Data System (ADS)

    Casu, P.; Pisu, C.

    2013-02-01

    This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to consider the versatility and ease of use of such documentation tools in order to study architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of some gothic portals. We combined the contact survey and the photographic survey oriented to photo-modelling. The software used is 123D Catch by Autodesk, a freely available image-based modelling (IBM) system. It is a web-based application that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extrapolate simple sections, in order to make a comparison between the moldings, highlighting similarities and differences. Working at different sites and at different scales of detail allowed us to test the procedure under different conditions of exposure, sunlight, accessibility, surface degradation and material type, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or larger or smaller elements.

  20. A scaleable architecture for the modeling and simulation of intelligent transportation systems.

    SciTech Connect

    Ewing, T.; Tentner, A.

    1999-03-17

    A distributed, scaleable architecture for the modeling and simulation of Intelligent Transportation Systems on a network of workstations or a parallel computer has been developed at Argonne National Laboratory. The resulting capability provides a modular framework supporting plug-in models, hardware, and live data sources; visually realistic graphics displays to support training and human factors studies; and a set of basic ITS models. The models and capabilities are described, along with a typical scenario involving dynamic rerouting of smart vehicles which send probe reports to and receive traffic advisories from a traffic management center capable of incident detection.

  1. Imputation for semiparametric transformation models with biased-sampling data

    PubMed Central

    Liu, Hao; Qin, Jing; Shen, Yu

    2012-01-01

    Widely recognized in many fields including economics, engineering, epidemiology, health sciences, technology and wildlife management, length-biased sampling generates biased and right-censored data but often provides the best information available for statistical inference. Different from traditional right-censored data, length-biased data have unique aspects resulting from their sampling procedures. We exploit these unique aspects and propose a general imputation-based estimation method for analyzing length-biased data under a class of flexible semiparametric transformation models. We present new computational algorithms that can jointly estimate the regression coefficients and the baseline function semiparametrically. The imputation-based method under the transformation model provides an unbiased estimator regardless of whether the censoring depends on the covariates. We establish large-sample properties using empirical process methods. Simulation studies show that under small to moderate sample sizes, the proposed procedure has smaller mean square errors than two existing estimation procedures. Finally, we demonstrate the estimation procedure with a real data example. PMID:22903245

  2. Optimizing transformations of stencil operations for parallel object-oriented scientific frameworks on cache-based architectures

    SciTech Connect

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-31

    High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks--a parallel or serial array-class library--provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes two optimizing transformations suitable for certain classes of numerical algorithms, one for reducing the cost of inter-processor communication, and one for improving cache utilization; demonstrates and analyzes the resulting performance gains; and indicates how these transformations are being automated.

  3. Cloud GIS and 3d Modelling to Enhance Sardinian Late Gothic Architectural Heritage

    NASA Astrophysics Data System (ADS)

    Pisu, C.; Casu, P.

    2013-07-01

    This work proposes the documentation, virtual reconstruction and spreading of architectural heritage through the use of software packages that operate in cloud computing. Cloud computing makes available a variety of applications and tools which can be effective both for the preparation and for the publication of different kinds of data. We tested the versatility and ease of use of such documentation tools in order to study a particular architectural phenomenon. The ultimate aim is to develop a multi-scale and multi-layer information system, oriented to the divulgation of Sardinian late gothic architecture. We tested the applications on portals of late Gothic architecture in Sardinia. The actions of conservation, protection and enhancement of cultural heritage are all founded on the social function that can be reached only through the widest possible fruition by the community. The applications of digital technologies on cultural heritage can contribute to the construction of effective communication models that, relying on sensory and emotional involvement of the viewer, can attract a wider audience to cultural content.

  4. Using two coefficients modeling of nonsubsampled Shearlet transform for despeckling

    NASA Astrophysics Data System (ADS)

    Jafari, Saeed; Ghofrani, Sedigheh

    2016-01-01

    Synthetic aperture radar (SAR) images are inherently affected by multiplicative speckle noise. Two approaches based on modeling the nonsubsampled Shearlet transform (NSST) coefficients are presented. The two-sided generalized Gamma distribution and the normal inverse Gaussian probability density function have been used to model the statistics of NSST coefficients. A Bayesian maximum a posteriori (MAP) estimator is applied to the corrupted NSST coefficients in order to estimate the noise-free NSST coefficients. Finally, experimental results, according to objective and subjective criteria, carried out on both artificially speckled images and true SAR images, demonstrate that the proposed methods outperform other state-of-the-art methods from two points of view: speckle noise reduction and image quality preservation.
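    The shrinkage step described above can be illustrated with a simplified stand-in. The paper fits two-sided generalized Gamma and normal inverse Gaussian priors to NSST coefficients; the sketch below instead assumes a Laplacian prior with additive Gaussian noise, whose MAP estimate reduces to soft thresholding, applied to a synthetic one-dimensional "detail band". Real SAR despeckling would also handle the multiplicative nature of speckle (e.g., via a log transform), which is omitted here.

```python
# Illustrative Bayesian MAP shrinkage of noisy transform coefficients.
# Stand-in prior: Laplacian signal + Gaussian noise, for which the MAP
# estimate is the classic soft-thresholding rule. Not the paper's priors.
import numpy as np

def map_shrink(noisy: np.ndarray, noise_sigma: float) -> np.ndarray:
    """MAP estimate under a Laplacian signal prior and Gaussian noise."""
    # Laplacian scale estimated from the noisy band (method of moments).
    signal_var = max(noisy.var() - noise_sigma**2, 1e-12)
    b = np.sqrt(signal_var / 2.0)
    threshold = noise_sigma**2 / b          # MAP soft threshold
    return np.sign(noisy) * np.maximum(np.abs(noisy) - threshold, 0.0)

rng = np.random.default_rng(0)
clean = rng.laplace(scale=1.0, size=10_000)      # synthetic "detail" band
noisy = clean + rng.normal(scale=1.0, size=clean.size)
denoised = map_shrink(noisy, noise_sigma=1.0)
print("MSE noisy:   ", np.mean((noisy - clean) ** 2))
print("MSE denoised:", np.mean((denoised - clean) ** 2))
```

    Swapping the Laplacian for a heavier-tailed prior such as the two-sided generalized Gamma changes only the shrinkage rule, not the overall estimate-per-subband structure.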

  5. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory A.; Ingham, Michel D.; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    Innovative systems and software engineering solutions are required to meet the increasingly challenging demands of deep-space robotic missions. While recent advances in the development of an integrated systems and software engineering approach have begun to address some of these issues, they are still at the core highly manual and, therefore, error-prone. This paper describes a task aimed at infusing MIT's model-based executive, Titan, into JPL's Mission Data System (MDS), a unified state-based architecture, systems engineering process, and supporting software framework. Results of the task are presented, including a discussion of the benefits and challenges associated with integrating mature model-based programming techniques and technologies into a rigorously-defined domain specific architecture.

  6. A transformational model for the practice of professional nursing. Part 1, The model.

    PubMed

    Wolf, G A; Boland, S; Aukerman, M

    1994-04-01

    Our healthcare system is undergoing major transformation. Most nurse executives know that change is necessary and inevitable, but are less certain how to position their departments for these changes. The Transformational Model for the Practice of Professional Nursing was developed as a "road map" for that purpose. Part 1 of the model discusses the paradigm shifts that need to occur in professional practice for future success. The various components of the model are presented, and applications are identified. Part 2 will appear in the May 1994 issue of JONA, and will discuss the implementation of this model into a practice setting.

  7. An Interactive Design Space Supporting Development of Vehicle Architecture Concept Models

    DTIC Science & Technology

    2011-06-17

    Fragmentary record excerpt (IMECE2011-64510, "An Interactive Design Space Supporting Development of Vehicle Architecture Concept Models," Gary Osborne, Denver, Colorado, USA): "...components that are not designed to carry structural loads in the assembly, such as seats and other trim items. However, these inertial items have an..."; "...early in the development cycle. Optimization taking place later in the cycle usually occurs at the detail design level, and tends to result in..."

  8. A functional–structural kiwifruit vine model integrating architecture, carbon dynamics and effects of the environment

    PubMed Central

    Cieslak, Mikolaj; Seleznyova, Alla N.; Hanan, Jim

    2011-01-01

    Background and Aims Functional–structural modelling can be used to increase our understanding of how different aspects of plant structure and function interact, identify knowledge gaps and guide priorities for future experimentation. By integrating existing knowledge of the different aspects of the kiwifruit (Actinidia deliciosa) vine's architecture and physiology, our aim is to develop conceptual and mathematical hypotheses on several of the vine's features: (a) plasticity of the vine's architecture; (b) effects of organ position within the canopy on its size; (c) effects of environment and horticultural management on shoot growth, light distribution and organ size; and (d) role of carbon reserves in early shoot growth. Methods Using the L-system modelling platform, a functional–structural plant model of a kiwifruit vine was created that integrates architectural development, mechanistic modelling of carbon transport and allocation, and environmental and management effects on vine and fruit growth. The branching pattern was captured at the individual shoot level by modelling axillary shoot development using a discrete-time Markov chain. An existing carbon transport resistance model was extended to account for several source/sink components of individual plant elements. A quasi-Monte Carlo path-tracing algorithm was used to estimate the absorbed irradiance of each leaf. Key Results Several simulations were performed to illustrate the model's potential to reproduce the major features of the vine's behaviour. The model simulated vine growth responses that were qualitatively similar to those observed in experiments, including the plastic response of shoot growth to local carbon supply, the branching patterns of two Actinidia species, the effect of carbon limitation and topological distance on fruit size and the complex behaviour of sink competition for carbon. Conclusions The model is able to reproduce differences in vine and fruit growth arising from various
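    The discrete-time Markov chain used above for axillary shoot development can be sketched as follows. The states and transition probabilities here are invented stand-ins, not the values fitted to kiwifruit branching data in the paper.

```python
# Toy discrete-time Markov chain for bud fates along a shoot: the fate at
# each node is drawn conditionally on the previous node's fate. States and
# probabilities are invented placeholders for illustration.
import random

STATES = ["blind", "vegetative", "floral"]
# TRANSITIONS[s] gives P(next node's fate | current node's fate).
TRANSITIONS = {
    "blind":      {"blind": 0.6, "vegetative": 0.3, "floral": 0.1},
    "vegetative": {"blind": 0.2, "vegetative": 0.5, "floral": 0.3},
    "floral":     {"blind": 0.3, "vegetative": 0.3, "floral": 0.4},
}

def branching_pattern(n_nodes, start="vegetative", rng=None):
    """Simulate bud fates along one shoot, node by node."""
    rng = rng or random.Random()
    fates = [start]
    for _ in range(n_nodes - 1):
        probs = TRANSITIONS[fates[-1]]
        fates.append(rng.choices(list(probs), weights=list(probs.values()))[0])
    return fates

print(branching_pattern(12, rng=random.Random(1)))
```

    In a functional-structural model, each simulated fate would then instantiate the corresponding organ in the L-system, coupling this stochastic topology to the carbon-transport and light-interception computations.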

  9. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
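    The CSP view described above, a blackboard process exchanging messages with plugin processes over channels, can be illustrated with a toy sketch. Python threads and queues stand in for CSP processes and channels; the names are invented and this is not the Cougaar API.

```python
# Toy blackboard/plugin interaction in CSP style: processes communicate only
# over channels (queues). Invented names; illustration only, not Cougaar.
import queue
import threading

received = {}  # plugin name -> first object relayed to it by the blackboard

def plugin(name, to_bb, from_bb):
    """A plugin publishes one object to the blackboard, then awaits a relay."""
    to_bb.put((name, f"task-from-{name}"))
    received[name] = from_bb.get()

def blackboard(inbox, outboxes, n_msgs):
    """The blackboard relays each published object to every other plugin."""
    for _ in range(n_msgs):
        sender, obj = inbox.get()
        for name, out in outboxes.items():
            if name != sender:
                out.put((sender, obj))

inbox = queue.Queue()
outboxes = {n: queue.Queue() for n in ("planner", "executor")}
threads = [threading.Thread(target=blackboard, args=(inbox, outboxes, 2))]
threads += [threading.Thread(target=plugin, args=(n, inbox, outboxes[n]))
            for n in outboxes]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(received)
```

    Formal verification as described in the abstract would express each of these processes and channels in CSP proper and check properties (e.g., deadlock freedom) with a model checker, rather than executing threads.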

  10. Characterization of Model-Based Reasoning Strategies for Use in IVHM Architectures

    NASA Technical Reports Server (NTRS)

    Poll, Scott; Iverson, David; Patterson-Hine, Ann

    2003-01-01

    Open architectures are gaining popularity for Integrated Vehicle Health Management (IVHM) applications due to the diversity of subsystem health monitoring strategies in use and the need to integrate a variety of techniques at the system health management level. The basic concept of an open architecture suggests that whatever monitoring or reasoning strategy a subsystem wishes to deploy, the system architecture will support the needs of that subsystem and will be capable of transmitting subsystem health status across subsystem boundaries and up to the system level for system-wide fault identification and diagnosis. There is a need to understand the capabilities of various reasoning engines and how they, coupled with intelligent monitoring techniques, can support fault detection and system level fault management. Researchers in IVHM at NASA Ames Research Center are supporting the development of an IVHM system for liquefying-fuel hybrid rockets. In the initial stage of this project, a few readily available reasoning engines were studied to assess candidate technologies for application in next generation launch systems. Three tools representing the spectrum of model-based reasoning approaches, from a quantitative simulation based approach to a graph-based fault propagation technique, were applied to model the behavior of the Hybrid Combustion Facility testbed at Ames. This paper summarizes the characterization of the modeling process for each of the techniques.

  11. Impact of plant shoot architecture on leaf cooling: a coupled heat and mass transfer model

    PubMed Central

    Bridge, L. J.; Franklin, K. A.; Homer, M. E.

    2013-01-01

    Plants display a range of striking architectural adaptations when grown at elevated temperatures. In the model plant Arabidopsis thaliana, these include elongation of petioles, and increased petiole and leaf angles from the soil surface. The potential physiological significance of these architectural changes remains speculative. We address this issue computationally by formulating a mathematical model and performing numerical simulations, testing the hypothesis that elongated and elevated plant configurations may reflect a leaf-cooling strategy. This sets in place a new basic model of plant water use and interaction with the surrounding air, which couples heat and mass transfer within a plant to water vapour diffusion in the air, using a transpiration term that depends on saturation, temperature and vapour concentration. A two-dimensional, multi-petiole shoot geometry is considered, with added leaf-blade shape detail. Our simulations show that increased petiole length and angle generally result in enhanced transpiration rates and reduced leaf temperatures in well-watered conditions. Furthermore, our computations also reveal plant configurations for which elongation may result in decreased transpiration rate owing to decreased leaf liquid saturation. We offer further qualitative and quantitative insights into the role of architectural parameters as key determinants of leaf-cooling capacity. PMID:23720538

  12. Grenville-era Crustal Architecture of Central Australia, and its Importance in Constraining Rodinia Models.

    NASA Astrophysics Data System (ADS)

    Aitken, A. R.; Betts, P. G.

    2007-12-01

    continuous and coherent northeast trending orogenic belt connecting the Albany Fraser, Musgrave and Warumpi provinces. The geometry and extent of this orogenic belt precludes a direct connection between the Musgrave Province and contemporaneous orogens in Laurentia. Any model of Australian orogenic activity during the Grenvillian era must take account of the NE-oriented architecture and intracontinental termination of the orogenic belt. Continental reconfiguration within Australia via the rotation of the South Australian Craton can adequately explain the Grenville-aged architecture of Australia.

  13. Simulation models relevant to the protection of synchronous machines and transformers

    NASA Astrophysics Data System (ADS)

    Muthumuni, Dharshana De Silva

    2001-07-01

    The purpose of this research is to develop models which can be used to produce realistic test waveforms for the evaluation of protection systems used for generators and transformers. Software models of generators and transformers which have the capability to calculate voltage and current waveforms in the presence of internal faults are presented in this thesis. The thesis also presents accurate models of current transformers used in differential current protection schemes. These include air gapped current transformers which are widely used in transformer and generator protection. The models of generators and transformers can be used with the models of current transformers to obtain test waveforms to evaluate a protection system. The models are validated by comparing the results obtained from simulations with recorded waveforms.
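
    As a minimal illustration of the kind of protection logic such waveforms are used to evaluate, here is a generic textbook percentage-differential element (not a model from the thesis; the pickup and slope settings are illustrative assumptions):

```python
def differential_trip(i1, i2, pickup=0.2, slope=0.3):
    """Generic percentage-differential element (textbook logic, not the
    thesis's models). i1, i2: per-unit currents entering/leaving the zone.
    Trips when the operating (difference) current exceeds a restrained
    fraction of the through (average) current."""
    i_op = abs(i1 - i2)               # operating current
    i_res = (abs(i1) + abs(i2)) / 2   # restraint current
    return i_op > max(pickup, slope * i_res)

# Load current passing straight through the zone: no trip
print(differential_trip(1.0, 1.0))    # False
# Internal fault: current flows into the zone from both ends
print(differential_trip(1.0, -0.8))   # True
# External fault with a mild CT saturation error: restraint prevents a trip
print(differential_trip(5.0, 4.6))    # False
```

    The restraint term is what makes realistic CT models important: saturation during heavy external faults distorts one measured current, and the slope setting must cover that error without desensitizing the relay to internal faults.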

  14. A transformational model for the practice of professional nursing. Part 2, Implementation of the model.

    PubMed

    Wolf, G A; Boland, S; Aukerman, M

    1994-05-01

    Our healthcare system is undergoing major transformation. Most nurse executives are convinced that change is necessary and inevitable, but they are less certain how to position their departments for future success. The Transformational Model for the Practice of Professional Nursing was developed as a "road map" for that purpose. Part 1 (JONA, April 1994) discussed the professional practice paradigm shifts that are needed for future success. The model components were presented and applications identified. Part 2 discusses the implementation of this model in a practice setting.

  15. An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reid, Michael R.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.

  16. The NIST Real-Time Control System (RCS): A Reference Model Architecture for Computational Intelligence

    NASA Technical Reports Server (NTRS)

    Albus, James S.

    1996-01-01

    The Real-time Control System (RCS) developed at NIST and elsewhere over the past two decades defines a reference model architecture for design and analysis of complex intelligent control systems. The RCS architecture consists of a hierarchically layered set of functional processing modules connected by a network of communication pathways. The primary distinguishing feature of the layers is the bandwidth of the control loops. The characteristic bandwidth of each level is determined by the spatial and temporal integration window of filters, the temporal frequency of signals and events, the spatial frequency of patterns, and the planning horizon and granularity of the planners that operate at each level. At each level, tasks are decomposed into sequential subtasks, to be performed by cooperating sets of subordinate agents. At each level, signals from sensors are filtered and correlated with spatial and temporal features that are relevant to the control function being implemented at that level.
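
    The layered-bandwidth idea can be sketched as a toy two-rate loop, with a slow planning level issuing setpoints to a fast servo level; the rates, gain, and waypoints below are illustrative assumptions, not part of the NIST RCS specification:

```python
# Toy two-level control hierarchy: a slow "planner" loop re-plans the
# setpoint every plan_every steps, while a fast "servo" loop tracks it.
# All rates, gains, and waypoints are illustrative assumptions.
def run_hierarchy(steps=1000, servo_dt=0.001, plan_every=100):
    waypoints = [0.0, 1.0, -0.5, 0.5]
    x, setpoint, wp = 0.0, waypoints[0], 0
    for k in range(steps):
        if k % plan_every == 0:                   # slow loop: re-plan
            setpoint = waypoints[min(wp, len(waypoints) - 1)]
            wp += 1
        x += 20.0 * (setpoint - x) * servo_dt     # fast loop: first-order servo
    return x, setpoint

x, sp = run_hierarchy()
print(abs(x - sp) < 0.05)
```

    The servo bandwidth (here the gain of 20 s^-1) is far higher than the planning rate, mirroring the RCS principle that each layer's loop bandwidth and planning horizon differ by roughly an order of magnitude.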

  17. From TLS to HBIM. High Quality Semantically-Aware 3D Modeling of Complex Architecture

    NASA Astrophysics Data System (ADS)

    Quattrini, R.; Malinverni, E. S.; Clini, P.; Nespeca, R.; Orlietti, E.

    2015-02-01

    In order to improve the framework for 3D modeling, a great challenge is to make the Building Information Model (BIM) platform suitable for historical architecture. A specific challenge in HBIM is to guarantee appropriate geometrical accuracy. The present work demonstrates the feasibility of a complete HBIM approach for complex architectural shapes, starting from TLS point clouds. A novelty of our method is to work in a 3D environment throughout the process and to develop semantics during the construction phase. This last feature of HBIM was analyzed in the present work by verifying the studied ontologies, enabling enrichment of the model with non-geometrical information, such as historical notes, evidence of decay or deformation, decorative elements, etc. The case study is the Church of Santa Maria at Portonovo, an abbey from the Romanesque period. Irregular or complex historical architecture, such as Romanesque, requires the construction of shared libraries starting from the survey of its existing elements. This is another key aspect in delivering Building Information Modeling standards. In particular, we focus on the quality assessment of the obtained model, using open-source software and the point cloud as reference. The proposed work shows how it is possible to develop a high-quality, semantics-aware 3D model capable of connecting the geometrical-historical survey with descriptive thematic databases. In this way, a centralized HBIM will serve as a comprehensive dataset of information about all disciplines, particularly restoration and conservation. Moreover, the geometric accuracy will also ensure reliable visualization outputs.

  18. Phase-field-crystal methodology for modeling of structural transformations.

    PubMed

    Greenwood, Michael; Rottler, Jörg; Provatas, Nikolas

    2011-03-01

    We introduce and characterize free-energy functionals for modeling of solids with different crystallographic symmetries within the phase-field-crystal methodology. The excess free energy responsible for the emergence of periodic phases is inspired by classical density-functional theory, but uses only a minimal description for the modes of the direct correlation function to preserve computational efficiency. We provide a detailed prescription for controlling the crystal structure and introduce parameters for changing temperature and surface energies, so that phase transformations between body-centered-cubic (bcc), face-centered-cubic (fcc), hexagonal-close-packed (hcp), and simple-cubic (sc) lattices can be studied. To illustrate the versatility of our free-energy functional, we compute the phase diagram for fcc-bcc-liquid coexistence in the temperature-density plane. We also demonstrate that our model can be extended to include hcp symmetry by dynamically simulating hcp-liquid coexistence from a seeded crystal nucleus. We further quantify the dependence of the elastic constants on the model control parameters in two and three dimensions, showing how the degree of elastic anisotropy can be tuned from the shape of the direct correlation functions.
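
    A minimal one-mode phase-field-crystal evolution in 1D, using a standard semi-implicit spectral step, illustrates the methodology (the paper's multi-mode free-energy functional is not reproduced here; the parameters are illustrative):

```python
import numpy as np

# One-mode phase-field-crystal sketch in 1D (the paper generalizes this to
# multi-mode functionals; parameters here are illustrative assumptions).
# Conserved dynamics: d(psi)/dt = laplacian[(r + (1 + laplacian)^2) psi + psi^3]
N, L = 256, 16 * 2 * np.pi                # domain holds 16 stripe periods (k = 1)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
r, dt = -0.25, 0.5

rng = np.random.default_rng(0)
psi = 1e-3 * rng.standard_normal(N)       # small noise about zero mean density
psi -= psi.mean()

lin = k**2 * (r + (1 - k**2) ** 2)        # linear operator in Fourier space
for _ in range(400):
    # semi-implicit spectral step: linear part implicit, psi^3 explicit
    psi_hat = (np.fft.fft(psi) - dt * k**2 * np.fft.fft(psi**3)) / (1 + dt * lin)
    psi = np.fft.ifft(psi_hat).real

# Mean density is conserved (k = 0 mode untouched) while the k ~ 1 band
# grows out of the noise into a saturated stripe (1D "crystal") pattern.
print(abs(psi.mean()) < 1e-8, psi.max() > 0.1)
```

    The quadratic factor (1 - k^2)^2 is the one-mode stand-in for the direct correlation function; the multi-mode functionals described in the abstract replace it with additional minima so that bcc, fcc, hcp, and sc symmetries can be selected.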

  19. Kinetic Modeling of Damage Repair, Genome Instability, and Neoplastic Transformation

    SciTech Connect

    Stewart, Robert D

    2007-03-17

    Inducible repair and pathway interactions may fundamentally alter the shape of dose-response curves because different mechanisms may be important under low- and high-dose exposure conditions. However, the significance of these phenomena for risk assessment purposes is an open question. This project developed new modeling tools to study the putative effects of DNA damage induction and repair on higher-level biological endpoints, including cell killing, neoplastic transformation and cancer. The project scope included (1) the development of new approaches to simulate the induction and base excision repair (BER) of DNA damage using Monte Carlo methods and (2) the integration of data from the Monte Carlo simulations with kinetic models for higher-level biological endpoints. Methods of calibrating and testing such multiscale biological simulations were developed. We also developed models to aid in the analysis and interpretation of data from experimental assays, such as the pulsed-field gel electrophoresis (PFGE) assay used to quantify the amount of DNA damage caused by ionizing radiation.
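
    The kinetic modeling style can be illustrated with a minimal first-order induction-and-repair balance (an assumption-laden sketch, not the project's Monte Carlo BER simulations; all rate constants are hypothetical):

```python
import math

# Minimal first-order kinetic sketch of lesion induction and repair:
# dL/dt = k_ind * dose_rate(t) - k_rep * L, with illustrative constants.
def lesions(t, dose_rate, t_irr, k_ind=40.0, k_rep=0.5):
    """Closed-form lesion count: constant dose rate for t <= t_irr, then
    pure first-order repair afterwards. Times in hours, dose in Gy."""
    L_end = k_ind * dose_rate / k_rep * (1 - math.exp(-k_rep * min(t, t_irr)))
    if t <= t_irr:
        return L_end
    return L_end * math.exp(-k_rep * (t - t_irr))

# Same total dose (2 Gy): acute delivery leaves more unrepaired lesions at
# the end of exposure than protracted delivery, because repair operates
# throughout a long irradiation. This is the kind of dose-rate effect such
# kinetic models are built to capture.
acute = lesions(t=0.1, dose_rate=20.0, t_irr=0.1)      # 2 Gy in 0.1 h
protracted = lesions(t=4.0, dose_rate=0.5, t_irr=4.0)  # 2 Gy in 4 h
print(round(acute, 1), round(protracted, 1))           # → 78.0 34.6
```
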

  20. Modeling the Office of Science ten year facilities plan: The PERI Architecture Tiger Team

    NASA Astrophysics Data System (ADS)

    de Supinski, Bronis R.; Alam, Sadaf; Bailey, David H.; Carrington, Laura; Daley, Chris; Dubey, Anshu; Gamblin, Todd; Gunter, Dan; Hovland, Paul D.; Jagode, Heike; Karavanic, Karen; Marin, Gabriel; Mellor-Crummey, John; Moore, Shirley; Norris, Boyana; Oliker, Leonid; Olschanowsky, Catherine; Roth, Philip C.; Schulz, Martin; Shende, Sameer; Snavely, Allan; Spear, Wyatt; Tikir, Mustafa; Vetter, Jeff; Worley, Pat; Wright, Nicholas

    2009-07-01

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.
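
    The flavor of this kind of performance modeling can be sketched with a toy serial-fraction fit and extrapolation; the data and the model form t(p) = t_s + t_w / p below are synthetic illustrations, not PERI's measurements or models:

```python
import numpy as np

# Toy illustration of baseline-measurement-driven performance modeling:
# fit t(p) = t_s + t_w / p to "measured" runtimes, then extrapolate to a
# larger future machine. All numbers are synthetic.
procs = np.array([64, 128, 256, 512, 1024], dtype=float)
times = 3.0 + 5000.0 / procs              # synthetic baseline runtimes (s)

# Linear least squares in the basis [1, 1/p]
A = np.column_stack([np.ones_like(procs), 1.0 / procs])
(t_s, t_w), *_ = np.linalg.lstsq(A, times, rcond=None)

predicted_16k = t_s + t_w / 16384.0       # extrapolate to a 16k-core system
print(round(t_s, 2), round(t_w, 1), round(predicted_16k, 2))  # → 3.0 5000.0 3.31
```

    Real models of this kind add communication, memory-bandwidth, and I/O terms; the fit-then-extrapolate structure, and the dependence of its credibility on measuring the right application versions and inputs, is the point being illustrated.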

  1. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Tiger Team

    SciTech Connect

    de Supinski, Bronis R.; Alam, Sadaf; Bailey, David H.; Carrington, Laura; Daley, Chris; Dubey, Anshu; Gamblin, Todd; Gunter, Dan; Hovland, Paul D.; Jagode, Heike; Karavanic, Karen; Marin, Gabriel; Mellor-Crummey, John; Moore, Shirley; Norris, Boyana; Oliker, Leonid; Olschanowsky, Catherine; Roth, Philip C.; Schulz, Martin; Shende, Sameer; Snavely, Allan; Spear, Wyatt; Tikir, Mustafa; Vetter, Jeff; Worley, Pat; Wright, Nicholas

    2009-06-26

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  2. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Team

    SciTech Connect

    de Supinski, Bronis R.; Alam, Sadaf R; Bailey, David; Carrington, Laura; Daley, Christopher; Dubey, Anshu; Gamblin, Todd; Gunter, Dan; Hovland, Paul; Jagode, Heike; Karavanic, Karen; Marin, Gabriel; Mellor-Crummey, John; Moore, Shirley; Norris, Boyana; Oliker, Leonid; Olschanowsky, Cathy; Roth, Philip C; Schulz, Martin; Shende, Sameer; Snavely, Allan; Spear, Wyatt; Tikir, Mustafa; Vetter, Jeffrey S; Worley, Patrick H; Wright, Nicholas

    2009-01-01

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  3. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Tiger Team

    SciTech Connect

    de Supinski, B R; Alam, S R; Bailey, D H; Carrington, L; Daley, C

    2009-05-27

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort to the optimization of key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  4. State of the Art of the Landscape Architecture Spatial Data Model from a Geospatial Perspective

    NASA Astrophysics Data System (ADS)

    Kastuari, A.; Suwardhi, D.; Hanan, H.; Wikantika, K.

    2016-10-01

    Spatial data and information have been used for some time in planning and landscape design. For a long time, architects used spatial data in the form of topographic maps for their designs. This method is less efficient, and less accurate, than spatial analysis using GIS. Architects also sometimes accentuate only the aesthetic aspect of a design without taking landscape processes into account, which can leave the design unsuitable for its use and purpose. Nowadays, the role of GIS in landscape architecture has been formalized by the emergence of the Geodesign methodology, which starts with a Representation Model and ends with a Decision Model. The growing need for 3D GIS can be seen in several fields of science, such as 3D urban planning, flood modeling, and landscape planning. In these fields, 3D GIS supports the modeling, analysis, management, and integration of related data describing human activities and geophysical phenomena in a more realistic way. Also, by applying 3D GIS and geodesign in landscape design, geomorphological information can be better presented and assessed. Some research notes that the development of 3D GIS is not yet established, either in its 3D data structures or in its spatial analysis functions. This literature study addresses those problems by providing information on the existing development of 3D GIS for landscape architecture: data modeling, data accuracy, and the representation of data needed for landscape architecture purposes, specifically in river areas.

  5. Transforming System Engineering through Model-Centric Engineering

    DTIC Science & Technology

    2015-01-31

    technologies that improve automation and efficiencies, it is not necessarily "radically transformative." While our directive is to focus on the ... automation and efficiencies, however we still need to better characterize how NAVAIR can achieve a radical transformation. One key discussion topic that has ... the risk of SE transformation to MCE will fail to provide an efficient, effective and reliable alternative to the current process. This is an

  6. Computationally efficient method for Fourier transform of highly chirped pulses for laser and parametric amplifier modeling.

    PubMed

    Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail

    2016-11-14

    We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by the pulse-stretching factor. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
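
    The core idea, factoring out the analytically known quadratic phase so that the residual, nearly transform-limited pulse fits a much smaller grid, can be sketched as follows (an illustration of the principle only, not the authors' algorithm; the chirp rate and grid are assumed):

```python
import numpy as np

# Illustration: removing a known quadratic temporal phase before the FFT
# concentrates a highly chirped pulse into far fewer spectral bins, so a
# proportionally smaller grid would suffice. Parameters are assumptions.
N = 4096
t = np.linspace(-200.0, 200.0, N)
b = 0.5                                      # assumed chirp rate
envelope = np.exp(-(t / 5.0) ** 2)           # nearly transform-limited envelope
chirped = envelope * np.exp(1j * b * t**2)   # highly chirped pulse

# Direct FFT of the chirped pulse: energy spread over many frequency bins
spec_chirped = np.fft.fftshift(np.fft.fft(chirped))
# Factor out the analytically known quadratic phase first, then transform
spec_residual = np.fft.fftshift(np.fft.fft(chirped * np.exp(-1j * b * t**2)))

def occupied_bins(spec, frac=1e-6):
    """Number of bins holding non-negligible spectral power."""
    p = np.abs(spec) ** 2
    return int(np.sum(p > frac * p.max()))

# The dechirped residual occupies far fewer bins than the chirped pulse
print(occupied_bins(spec_residual) < occupied_bins(spec_chirped))
```
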

  7. Investigating the genetic architecture of conditional strategies using the environmental threshold model

    PubMed Central

    Hazel, Wade N.; Tomkins, Joseph L.

    2015-01-01

    The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprise the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a ‘half-sib common environment’ and a ‘family-level split environment’ experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. From the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic ‘proximate’ cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions. PMID:26674955
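
    A toy simulation of the threshold idea (not the paper's pedigree-based analysis) shows how a heritable switchpoint stays cryptic when only the dichotomous morph is observed; the distributions below are assumptions for illustration:

```python
import numpy as np

# Toy environmental threshold model: each male expresses the "major" morph
# when an environmental cue exceeds his heritable switchpoint. Variation in
# switchpoints is cryptic because only the dichotomous morph is observable.
# All distributions are illustrative assumptions.
rng = np.random.default_rng(1)
n = 10_000
cue = rng.normal(0.0, 1.0, n)              # observable cue (e.g. condition)
switchpoint = rng.normal(0.0, 0.5, n)      # heritable latent threshold
morph = (cue > switchpoint).astype(int)    # observed dichotomous phenotype

# Morph frequency tracks the cue even though switchpoints stay hidden,
# which is why the underlying genetic variation is hard to estimate.
freq_low_cue = morph[cue < -1].mean()
freq_high_cue = morph[cue > 1].mean()
print(freq_low_cue < 0.5 < freq_high_cue)
```
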

  8. Using Three-dimensional Plant Root Architecture in Models of Shallow-slope Stability

    PubMed Central

    Danjon, Frédéric; Barker, David H.; Drexhage, Michael; Stokes, Alexia

    2008-01-01

    Background The contribution of vegetation to shallow-slope stability is of major importance in landslide-prone regions. However, existing slope stability models use only limited plant root architectural parameters. This study aims to provide a chain of tools useful for determining the contribution of tree roots to soil reinforcement. Methods Three-dimensional digitizing in situ was used to obtain accurate root system architecture data for mature Quercus alba in two forest stands. These data were used as input to tools developed, which analyse the spatial position of roots, topology and geometry. The contribution of roots to soil reinforcement was determined by calculating additional soil cohesion using the limit equilibrium model, and the factor of safety (FOS) using an existing slope stability model, Slip4Ex. Key Results Existing models may incorrectly estimate the additional soil cohesion provided by roots, as the spatial position of roots crossing the potential slip surface is usually not taken into account. However, most soil reinforcement by roots occurs close to the tree stem and is negligible at a distance >1·0 m from the tree, and therefore global values of FOS for a slope do not take into account local slippage along the slope. Conclusions Within a forest stand on a landslide-prone slope, soil fixation by roots can be minimal between uniform rows of trees, leading to local soil slippage. Therefore, staggered rows of trees would improve overall slope stability, as trees would arrest the downward movement of soil. The chain of tools consisting of both software (free for non-commercial use) and functions available from the first author will enable a more accurate description and use of root architectural parameters in standard slope stability analyses. PMID:17766845
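
    The limit-equilibrium calculation can be illustrated with the standard infinite-slope formulation, here with an added root-cohesion term; the parameter values are assumptions for illustration, not data from the study:

```python
import math

# Infinite-slope limit-equilibrium sketch with added root cohesion (the
# standard textbook formulation; parameter values are illustrative).
def factor_of_safety(c_soil, c_root, gamma, z, beta_deg, phi_deg):
    """FOS = resisting / driving shear stress on a planar slip surface.
    c_*: cohesion (Pa), gamma: unit weight (N/m^3), z: depth (m),
    beta: slope angle, phi: friction angle (degrees)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    resisting = c_soil + c_root + gamma * z * math.cos(beta) ** 2 * math.tan(phi)
    return resisting / driving

bare = factor_of_safety(c_soil=2e3, c_root=0.0, gamma=18e3, z=1.0,
                        beta_deg=30, phi_deg=30)
rooted = factor_of_safety(c_soil=2e3, c_root=4e3, gamma=18e3, z=1.0,
                          beta_deg=30, phi_deg=30)
print(round(bare, 2), round(rooted, 2))  # → 1.26 1.77
```

    The abstract's point is that c_root is not uniform: it is concentrated near stems and negligible beyond about a metre, so a single slope-wide FOS can mask local slippage between trees.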

  9. Investigating the genetic architecture of conditional strategies using the environmental threshold model.

    PubMed

    Buzatto, Bruno A; Buoro, Mathieu; Hazel, Wade N; Tomkins, Joseph L

    2015-12-22

    The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprise the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a 'half-sib common environment' and a 'family-level split environment' experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. From the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic 'proximate' cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions.

  10. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The nonlinear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
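
    As a minimal illustration of the filtering idea, the following is a generic scalar EKF sketch with assumed nonlinear state and measurement models (it is not the C-MAPSS40k implementation): rather than relying on a fixed piece-wise linear model, the nonlinear model is re-linearized about the current estimate at every step.

```python
import numpy as np

# Generic scalar extended Kalman filter (textbook sketch; the state and
# measurement models f, h below are assumptions, not an engine model).
def ekf(measurements, x0=1.0, p0=1.0, q=1e-3, r=0.1):
    f = lambda x: x + 0.1 * np.sin(x)     # nonlinear state transition (assumed)
    F = lambda x: 1.0 + 0.1 * np.cos(x)   # its Jacobian
    h = lambda x: x**2                    # nonlinear measurement model (assumed)
    H = lambda x: 2.0 * x                 # its Jacobian
    x, p, estimates = x0, p0, []
    for z in measurements:
        # predict with the nonlinear model; propagate covariance via Jacobian
        x, p = f(x), F(x) ** 2 * p + q
        # update: Kalman gain from the measurement Jacobian at the prediction
        k = p * H(x) / (H(x) ** 2 * p + r)
        x, p = x + k * (z - h(x)), (1.0 - k * H(x)) * p
        estimates.append(x)
    return np.array(estimates)

# Track a true trajectory from noisy squared measurements
rng = np.random.default_rng(0)
truth, xs, zs = 2.0, [], []
for _ in range(50):
    truth = truth + 0.1 * np.sin(truth)
    xs.append(truth)
    zs.append(truth**2 + rng.normal(0.0, 0.1))
est = ekf(zs, x0=1.5)
print(abs(est[-1] - xs[-1]) < 0.1)
```
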

  11. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.

  12. Model-Driven Development of Reliable Avionics Architectures for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Borer, Nicholas; Claypool, Ian; Clark, David; West, John; Somervill, Kevin; Odegard, Ryan; Suzuki, Nantel

    2010-01-01

    This paper discusses a method used for the systematic improvement of NASA's Lunar Surface Systems avionics architectures in the area of reliability and fault-tolerance. This approach utilizes an integrated system model to determine the effects of component failure on the system's ability to provide critical functions. A Markov model of the potential degraded system modes is created to characterize the probability of these degraded modes, and the system model is run for each Markov state to determine its status (operational or system loss). The probabilistic results from the Markov model are first produced from state transition rates based on NASA data for heritage failure rate data of similar components. An additional set of probabilistic results is created from a representative set of failure rates developed for this study, for a variety of component quality grades (space-rated, mil-spec, ruggedized, and commercial). The results show that careful application of redundancy and selected component improvement should result in Lunar Surface Systems architectures that exhibit an appropriate degree of fault-tolerance, reliability, performance, and affordability.
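
    A toy version of such a degraded-modes Markov model might look as follows; the state structure and failure rates are illustrative assumptions, not NASA heritage data:

```python
import numpy as np

# Sketch of a degraded-modes Markov model: a dual-redundant avionics
# string with states {both up, one up, system loss} and a constant
# failure rate lam per unit. Probabilities evolve as dp/dt = p @ Q.
lam = 1e-4                      # failures per hour per unit (assumed)
Q = np.array([
    [-2 * lam, 2 * lam, 0.0],   # both up  -> one failed
    [0.0,      -lam,    lam],   # one up   -> system loss
    [0.0,      0.0,     0.0],   # absorbing loss state
])

p = np.array([1.0, 0.0, 0.0])
dt, hours = 1.0, 10_000
for _ in range(hours):
    p = p + dt * (p @ Q)        # forward-Euler propagation

# Closed-form check for a 2-unit parallel system with constant rates
p_loss_exact = 1 - np.exp(-lam * hours * dt) * (2 - np.exp(-lam * hours * dt))
print(abs(p.sum() - 1.0) < 1e-9, abs(p[2] - p_loss_exact) < 1e-3)
```

    In the paper's method each non-absorbing Markov state is additionally fed back into the integrated system model to decide whether that degraded mode still provides the critical functions.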

  13. Historic Building Information Modelling - Adding intelligence to laser and image based surveys of European classical architecture

    NASA Astrophysics Data System (ADS)

    Murphy, Maurice; McGovern, Eugene; Pavia, Sara

    2013-02-01

    Historic Building Information Modelling (HBIM) is a novel prototype library of parametric objects, based on historic architectural data and a system of cross platform programmes for mapping parametric objects onto point cloud and image survey data. The HBIM process begins with remote collection of survey data using a terrestrial laser scanner combined with digital photo modelling. The next stage involves the design and construction of a parametric library of objects, which are based on the manuscripts ranging from Vitruvius to 18th century architectural pattern books. In building parametric objects, the problem of file format and exchange of data has been overcome within the BIM ArchiCAD software platform by using geometric descriptive language (GDL). The plotting of parametric objects onto the laser scan surveys as building components to create or form the entire building is the final stage in the reverse engineering process. The final HBIM product is the creation of full 3D models including detail behind the object's surface concerning its methods of construction and material make-up. The resultant HBIM can automatically create cut sections, details and schedules in addition to the orthographic projections and 3D models (wire frame or textured) for both the analysis and conservation of historic objects, structures and environments.

  14. Second Annual Transformative Vertical Flight Concepts Workshop: Enabling New Flight Concepts Through Novel Propulsion and Energy Architectures

    NASA Technical Reports Server (NTRS)

    Dudley, Michael R. (Editor); Duffy, Michael; Hirschberg, Michael; Moore, Mark; German, Brian; Goodrich, Ken; Gunnarson, Tom; Petermaier, Korbinian; Stoll, Alex; Fredericks, Bill; Gibson, Andy; Newman, Aron; Ouellette, Richard; Antcliff, Kevin; Sinkula, Michael; Buettner-Garrett, Josh; Ricci, Mike; Keogh, Rory; Moser, Tim; Borer, Nick; Rizzi, Steve; Lighter, Gwen

    2015-01-01

    On August 3rd and 4th, 2015, a workshop was held at the NASA Ames Research Center, located at the Moffett Federal Airfield in California, to explore the aviation community's interest in Transformative Vertical Flight (TVF) Concepts. The Workshop was sponsored by the AHS International (AHS), the American Institute of Aeronautics and Astronautics (AIAA), the National Aeronautics and Space Administration (NASA), and hosted by the NASA Aeronautics Research Institute (NARI). This second annual workshop built on the success and enthusiasm generated by the first TVF Workshop held in Washington, DC in August of 2014. The previous Workshop identified the existence of a multi-disciplinary community interested in this topic and established a consensus among the participants that opportunities to establish further collaborations in this area are warranted. The desire to conduct a series of annual workshops augmented by online virtual technical seminars to strengthen the TVF community and continue planning for advocacy and collaboration was a direct outcome of the first Workshop. The second Workshop organizers focused on four desired action-oriented outcomes. The first was to establish and document common stakeholder needs and areas of potential collaboration. This includes advocacy strategies to encourage the future success of unconventional vertiport-capable flight concept solutions that are enabled by emerging technologies. The second was to assemble a community that can collaborate on new conceptual design and analysis tools to permit novel configuration paths with far greater multi-disciplinary coupling (i.e., aero-propulsive-control) to be investigated. The third was to establish a community to develop and deploy regulatory guidelines. 
This community would have the potential to initiate formation of an American Society for Testing and Materials (ASTM) F44 Committee Subgroup for the development of consensus-based certification standards for General Aviation scale vertiport

  15. High-Performance Work Systems: American Models of Workplace Transformation.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Batt, Rosemary

    Rising competition in world and domestic markets for the past 2 decades has necessitated that U.S. companies undergo significant transformations to improve their performance with respect to a wide array of efficiency and quality indicators. Research on the transformations recently undertaken by some U.S. companies to boost performance revealed two…

  16. The Reactive-Causal Architecture: Introducing an Emotion Model along with Theories of Needs

    NASA Astrophysics Data System (ADS)

    Aydin, Ali Orhan; Orgun, Mehmet Ali

    In the entertainment application area, one of the major aims is to develop believable agents. To achieve this aim, agents should be highly autonomous, situated, flexible, and display affect. The Reactive-Causal Architecture (ReCau) is proposed to simulate these core attributes. In its current form, ReCau cannot explain the effects of emotions on intelligent behaviour. This study aims to further improve the emotion model of ReCau to explain the effects of emotions on intelligent behaviour. This improvement allows ReCau to be emotional, supporting the development of believable agents.

  17. Orion Flight Test 1 Architecture: Observed Benefits of a Model Based Engineering Approach

    NASA Technical Reports Server (NTRS)

    Simpson, Kimberly A.; Sindiy, Oleg V.; McVittie, Thomas I.

    2012-01-01

    This paper details how a NASA-led team is using a model-based systems engineering approach to capture, analyze and communicate the end-to-end information system architecture supporting the first unmanned orbital flight of the Orion Multi-Purpose Crew Exploration Vehicle. Along with a brief overview of the approach and its products, the paper focuses on the observed program-level benefits, challenges, and lessons learned, all of which may be applied to improve systems engineering tasks on efforts facing characteristically similar challenges.

  18. Guiding Principles for Data Architecture to Support the Pathways Community HUB Model

    PubMed Central

    Zeigler, Bernard P.; Redding, Sarah; Leath, Brenda A.; Carter, Ernest L.; Russell, Cynthia

    2016-01-01

    Introduction: The Pathways Community HUB Model provides a unique strategy to effectively supplement health care services with social services needed to overcome barriers for those most at risk of poor health outcomes. Pathways are standardized measurement tools used to define and track health and social issues from identification through to a measurable completion point. The HUB uses Pathways to coordinate agencies and service providers in the community to eliminate the inefficiencies and duplication that exist among them. Pathways Community HUB Model and Formalization: Experience with the Model has brought out the need for better information technology solutions to support implementation of the Pathways themselves through decision-support tools for care coordinators and other users to track activities and outcomes, and to facilitate reporting. Here we provide a basis for discussing recommendations for such a data infrastructure by developing a conceptual model that formalizes the Pathway concept underlying current implementations. Requirements for Data Architecture to Support the Pathways Community HUB Model: The main contribution is a set of core recommendations as a framework for developing and implementing a data architecture to support implementation of the Pathways Community HUB Model. The objective is to present a tool for communities interested in adopting the Model to learn from and to adapt in their own development and implementation efforts. Problems with Quality of Data Extracted from the CHAP Database: Experience with the Community Health Access Project (CHAP) database system (the core implementation of the Model) has identified several issues, and remedies have been developed to address them. Based on analysis of these issues and remedies, we present several key features for a data architecture meeting these recommendations. 
Implementation of Features: Presentation of features is followed by a practical guide to their implementation

  19. Model-Based Systems Engineering for Capturing Mission Architecture System Processes with an Application Case Study - Orion Flight Test 1

    NASA Technical Reports Server (NTRS)

    Bonanne, Kevin H.

    2011-01-01

    Model-based Systems Engineering (MBSE) is an emerging methodology that can be leveraged to enhance many system development processes. MBSE allows for the centralization of an architecture description that would otherwise be stored in various locations and formats, thus simplifying communication among the project stakeholders, inducing commonality in representation, and expediting report generation. This paper outlines the MBSE approach taken to capture the processes of two different, but related, architectures by employing the Systems Modeling Language (SysML) as a standard for architecture description and the modeling tool MagicDraw. The overarching goal of this study was to demonstrate the effectiveness of MBSE as a means of capturing and designing a mission systems architecture. The first portion of the project focused on capturing the necessary system engineering activities that occur when designing, developing, and deploying a mission systems architecture for a space mission. The second part applies activities from the first to an application problem - the system engineering of the Orion Flight Test 1 (OFT-1) End-to-End Information System (EEIS). By modeling the activities required to create a space mission architecture and then implementing those activities in an application problem, the utility of MBSE as an approach to systems engineering can be demonstrated.

  20. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
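    The speedup reported above rests on a standard identity: convolution-like covariance operations that cost O(n^2) directly cost O(n log n) through the FFT. The sketch below is a minimal NumPy illustration of that identity, not the paper's CUDA implementation; on a GPU, cuFFT calls would take the place of `np.fft`.

```python
import numpy as np

def fft_convolve(signal, kernel):
    """Circular convolution via the FFT: O(n log n) instead of O(n^2)."""
    n = len(signal)
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, n)))

def direct_convolve(signal, kernel):
    """Naive circular convolution, for comparison only."""
    n = len(signal)
    out = np.zeros(n)
    for i in range(n):
        for j in range(n):
            out[i] += signal[j] * kernel[(i - j) % n]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # toy "residual" field
k = np.exp(-np.arange(256) / 10.0)    # exponential covariance-like kernel
fast = fft_convolve(x, k)
slow = direct_convolve(x, k)
```

The two results agree to floating-point precision; only the cost differs, which is what makes large interpolation grids tractable.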

  1. The Empirical Comparison of Coordinate Transformation Models and Distortion Modeling Methods Based on a Case Study of Croatia

    NASA Astrophysics Data System (ADS)

    Grgic, M.; Varga, M.; Bašić, T.

    2015-12-01

    Several coordinate transformation models enable transformations between the historical astro-geodetic datums, which were used before GNSS (Global Navigation Satellite System) technologies were developed, and datums related to the International Terrestrial Reference System (ITRS), which today are most often used to determine position. The choice of the most appropriate coordinate transformation model is influenced by many factors, such as the required accuracy, available computational resources, applicability of the model to the size and shape of the territory, and the coordinate distortions that very often exist in historical astro-geodetic datums. This study is based on geodetic data of the Republic of Croatia in both the historical and the ITRS-related datum. It investigates different transformation models, including the conformal Molodensky 3-parameter (p) and 5p (standard and abridged) transformation models, 7p transformation models (Bursa-Wolf and Molodensky-Badekas), affine transformation models (8p, 9p, 12p), and the Multiple Regression Equation approach. It also investigates the 7p, 8p, 9p, and 12p transformation models extended with distortion modeling, and the purely grid-based transformation model (NTv2). Furthermore, several distortion modeling methods were used to produce models of distortion shifts at different resolutions. Their performance, and that of the transformation models, was then evaluated using summary statistics derived from the remaining positional residuals computed for an independent control spatial data set. Lastly, the most appropriate distortion modeling method(s) and coordinate transformation model(s) were identified with respect to the accuracy required in the Croatian case.
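    The Bursa-Wolf model named above is the classical 7-parameter similarity transformation between geocentric Cartesian frames. A minimal small-angle sketch is given below; the parameter values in the demo are illustrative placeholders, not Croatia's official datum-shift parameters.

```python
import numpy as np

def bursa_wolf(xyz, tx, ty, tz, rx, ry, rz, s):
    """7-parameter Bursa-Wolf (Helmert) transformation, small-angle form.

    xyz        : (n, 3) geocentric Cartesian coordinates [m]
    tx, ty, tz : translations [m]
    rx, ry, rz : small rotation angles [rad]
    s          : scale change (dimensionless, e.g. ppm * 1e-6)
    """
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])        # small-angle rotation matrix
    t = np.array([tx, ty, tz])
    return (1.0 + s) * xyz @ R.T + t

# hypothetical point and illustrative parameter values
xyz = np.array([[4027894.0, 307045.0, 4919499.0]])
shifted = bursa_wolf(xyz, tx=577.3, ty=90.1, tz=463.9,
                     rx=0.0, ry=0.0, rz=0.0, s=2.4e-6)
```

The Molodensky-Badekas variant differs only in rotating about a chosen centroid of the common points rather than the geocenter, which decorrelates the translation and rotation estimates.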

  2. A new technique for dynamic load distribution when two manipulators mutually lift a rigid object. Part 2, Derivation of entire system model and control architecture

    SciTech Connect

    Unseren, M.A.

    1994-04-01

    A rigid body model for the entire system which accounts for the load distribution scheme proposed in Part 1 as well as for the dynamics of the manipulators and the kinematic constraints is derived in the joint space. A technique is presented for expressing the object dynamics in terms of the joint variables of both manipulators which leads to a positive definite and symmetric inertia matrix. The model is then transformed to obtain reduced order equations of motion and a separate set of equations which govern the behavior of the internal contact forces. The control architecture is applied to the model which results in the explicit decoupling of the position and internal contact force-controlled degrees of freedom (DOF).

  3. Architecture and statistical model of a pulse-mode digital multilayer neural network.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A new architecture and a statistical model for a pulse-mode digital multilayer neural network (DMNN) are presented. Algebraic neural operations are replaced by stochastic processes using pseudo-random pulse sequences. Synaptic weights and neuron states are represented as probabilities and estimated as average rates of pulse occurrences in corresponding pulse sequences. A statistical model of error (or noise) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. The stochastic computing technique is implemented with simple logic gates as basic computing elements, leading to a high neuron density on a chip. Furthermore, the use of simple logic gates for neural operations, the pulse-mode signal representation, and the modular design techniques lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. A feedforward network of any size can be configured, with processing speed independent of network size. Multilayer feedforward networks are modeled and applied to pattern classification problems such as encoding and character recognition.
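    The core trick of pulse-mode stochastic computing is that a single AND gate multiplies the probabilities carried by two independent pulse streams, and the estimator's variance shrinks as p(1-p)/n with stream length n, the kind of mean/variance behaviour the paper's statistical error model quantifies. The following is a minimal software sketch of that idea (stream lengths and probabilities are illustrative, not the authors' hardware design):

```python
import numpy as np

def pulse_stream(p, n, rng):
    """Encode probability p as a pseudo-random pulse sequence of length n."""
    return rng.random(n) < p

def stochastic_multiply(p, q, n, rng):
    """Multiply two probabilities with a single AND gate on pulse streams.

    The pulse rate of the ANDed stream is an unbiased estimate of p*q.
    """
    a = pulse_stream(p, n, rng)
    b = pulse_stream(q, n, rng)
    return np.mean(a & b)

rng = np.random.default_rng(42)
est = stochastic_multiply(0.6, 0.5, 100_000, rng)   # estimates 0.30
```

Longer streams trade computation time for accuracy, which is exactly the accuracy/density trade-off that makes gate-level stochastic arithmetic attractive for VLSI.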

  4. Comparison of LIDAR system performance for alternative single-mode receiver architectures: modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Toliver, Paul; Ozdur, Ibrahim; Agarwal, Anjali; Woodward, T. K.

    2013-05-01

    In this paper, we describe a detailed performance comparison of alternative single-pixel, single-mode LIDAR architectures including (i) linear-mode APD-based direct-detection, (ii) optically-preamplified PIN receiver, (iii) PIN-based coherent-detection, and (iv) Geiger-mode single-photon-APD counting. Such a comparison is useful when considering next-generation LIDAR on a chip, which would allow one to leverage extensive waveguide-based structures and processing elements developed for telecom and apply them to small form-factor sensing applications. Models of four LIDAR transmit and receive systems are described in detail, which include not only the dominant sources of receiver noise commonly assumed in each of the four detection limits, but also additional noise terms present in realistic implementations. These receiver models are validated through the analysis of detection statistics collected from an experimental LIDAR testbed. The receiver is reconfigurable into four modes of operation, while transmit waveforms and channel characteristics are held constant. The use of a diffuse hard target highlights the importance of including speckle noise terms in the overall system analysis. All measurements are done at 1550 nm, which offers multiple system advantages including less stringent eye safety requirements and compatibility with available telecom components, optical amplification, and photonic integration. Ultimately, the experimentally-validated detection statistics can be used as part of an end-to-end system model for projecting rate, range, and resolution performance limits and tradeoffs of alternative integrated LIDAR architectures.

  5. Box Architecture.

    ERIC Educational Resources Information Center

    Ham, Jan

    1998-01-01

    Project offers grades 3-8 students hands-on design practice creating built environments to solve a society-based architectural problem. Students plan buildings, draw floor plans, and make scale models of the structures that are then used in related interdisciplinary activities. (Author)

  6. A generic model to simulate air-borne diseases as a function of crop architecture.

    PubMed

    Casadebaig, Pierre; Quesnel, Gauthier; Langlais, Michel; Faivre, Robert

    2012-01-01

    In a context of pesticide use reduction, alternatives to chemical-based crop protection strategies are needed to control diseases. Crop and plant architectures can be viewed as levers to control disease outbreaks by affecting microclimate within the canopy or pathogen transmission between plants. Modeling and simulation is a key approach to help analyze the behaviour of such systems where direct observations are difficult and tedious. Modeling permits the joining of concepts from ecophysiology and epidemiology to define structures and functions generic enough to describe a wide range of epidemiological dynamics. Additionally, this conception should minimize computing time by both limiting the complexity and setting an efficient software implementation. In this paper, our aim was to present a model that suited these constraints so it could first be used as a research and teaching tool to promote discussions about epidemic management in cropping systems. The system was modelled as a combination of individual hosts (population of plants or organs) and infectious agents (pathogens) whose contacts are restricted through a network of connections. The system dynamics were described at an individual scale. Additional attention was given to the identification of generic properties of host-pathogen systems to widen the model's applicability domain. Two specific pathosystems with contrasted crop architectures were considered: ascochyta blight on pea (homogeneously layered canopy) and potato late blight (lattice of individualized plants). The model behavior was assessed by simulation and sensitivity analysis and these results were discussed against the model's ability to discriminate between the defined types of epidemics. Crop traits related to disease avoidance resulting in a low exposure, a slow dispersal or a de-synchronization of plant and pathogen cycles were shown to strongly impact the disease severity at the crop scale.

  7. A Generic Model to Simulate Air-Borne Diseases as a Function of Crop Architecture

    PubMed Central

    Casadebaig, Pierre; Quesnel, Gauthier; Langlais, Michel; Faivre, Robert

    2012-01-01

    In a context of pesticide use reduction, alternatives to chemical-based crop protection strategies are needed to control diseases. Crop and plant architectures can be viewed as levers to control disease outbreaks by affecting microclimate within the canopy or pathogen transmission between plants. Modeling and simulation is a key approach to help analyze the behaviour of such systems where direct observations are difficult and tedious. Modeling permits the joining of concepts from ecophysiology and epidemiology to define structures and functions generic enough to describe a wide range of epidemiological dynamics. Additionally, this conception should minimize computing time by both limiting the complexity and setting an efficient software implementation. In this paper, our aim was to present a model that suited these constraints so it could first be used as a research and teaching tool to promote discussions about epidemic management in cropping systems. The system was modelled as a combination of individual hosts (population of plants or organs) and infectious agents (pathogens) whose contacts are restricted through a network of connections. The system dynamics were described at an individual scale. Additional attention was given to the identification of generic properties of host-pathogen systems to widen the model's applicability domain. Two specific pathosystems with contrasted crop architectures were considered: ascochyta blight on pea (homogeneously layered canopy) and potato late blight (lattice of individualized plants). The model behavior was assessed by simulation and sensitivity analysis and these results were discussed against the model's ability to discriminate between the defined types of epidemics. Crop traits related to disease avoidance resulting in a low exposure, a slow dispersal or a de-synchronization of plant and pathogen cycles were shown to strongly impact the disease severity at the crop scale. PMID:23226209
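    The structure described above, individual hosts whose contacts are restricted through a network of connections, can be sketched in a few lines. The toy simulation below is not the authors' model; rates, sizes, and the ring topology (a crude stand-in for a lattice of individualized plants) are illustrative only.

```python
import numpy as np

def simulate_epidemic(adj, beta, steps, seed=0, start=0):
    """Discrete-time spread of infection over a host contact network.

    adj   : (n, n) 0/1 adjacency matrix restricting pathogen transmission
    beta  : per-contact, per-step infection probability
    start : index of the initially infected host
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    infected = np.zeros(n, dtype=bool)
    infected[start] = True
    history = [int(infected.sum())]
    for _ in range(steps):
        pressure = adj @ infected                   # infected neighbours per host
        p_inf = 1.0 - (1.0 - beta) ** pressure      # escape-probability model
        new = (~infected) & (rng.random(n) < p_inf)
        infected |= new
        history.append(int(infected.sum()))
    return history

# ring lattice as a toy stand-in for individualized plants
n = 30
ring = np.zeros((n, n), dtype=int)
for i in range(n):
    ring[i, (i - 1) % n] = ring[i, (i + 1) % n] = 1
hist = simulate_epidemic(ring, beta=0.5, steps=40)
```

Swapping `ring` for a denser adjacency matrix mimics a homogeneously layered canopy, and the contrast in epidemic curves is the kind of architecture effect the model is designed to expose.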

  8. Infra-Free® (IF) Architecture System as the Method for Post-Disaster Shelter Model

    NASA Astrophysics Data System (ADS)

    Chang, Huai-Chien; Anilir, Serkan

    Currently, the International Space Station (ISS) is capable of supporting 3 to 4 astronauts onboard for at least 6 months, using an integrated life support system to meet the needs of the crew. Waste from the crew members' daily life is collected by waste recycling systems, electricity is obtained from solar energy, and so on. Although this resembles the infrastructure we use on Earth, the ISS can be regarded as a nearly self-reliant, integrated architecture; this offers an important hint for current architecture, which depends on centralized urban infrastructure to support our daily lives but can be vulnerable in case of natural disasters. Increasingly, economic activities and communications rely on this enormous central infrastructure, so a natural disaster may cut off the infrastructure system temporarily or permanently. To solve this problem, we propose to design a temporary shelter that is capable of working without depending on any existing infrastructure. We propose to bring closed-life-cycle and integrated technologies, inspired by the possibilities of space and other emerging technologies, into everyday architecture through the Infra-free® design framework, which integrates various life-supporting infrastructural elements into one closed system. We develop a scenario for post-disaster management housing as a method of solving lifeline problems, bringing solid and liquid waste, energy, water, and hygiene solutions into one system, and we attempt to establish an Infra-free® model of shelter for disaster areas. The ultimate objective is to design a Temp Infra-free® model addressing the sanitation and environmental preservation concerns of a disaster area.

  9. Generation of Department of Defense Architecture Framework (DODAF) Models Using the Monterey Phoenix Behavior Modeling Approach

    DTIC Science & Technology

    2015-09-01

    management teams need to follow the same methods and techniques and use the same tools to realize the benefits of an integrated architecture. ... realize MP benefits such as automatic scenario generation and comply with DOD guidance. Using criteria established in this research, 16 of the 51

  10. Using AOSD and MDD to Enhance the Architectural Design Phase

    NASA Astrophysics Data System (ADS)

    Pinto, Mónica; Fuentes, Lidia; Fernández, Luis; Valenzuela, Juan A.

    This paper describes an MDD process that enhances the architectural design phase by closing the gap between ADLs and the notations used at the detailed design phase. We have defined model-to-model transformation rules to automatically generate either aspect-oriented or object-oriented UML 2.0 models from high-level architectural specifications specified using AO-ADL. These rules have been integrated in the AO-ADL Tool Suite, providing support to automatically generate a skeleton of the detailed design that preserves the crosscutting and the non-crosscutting functionalities identified at the architecture level.

  11. Nonlinear model of a distribution transformer appropriate for evaluating the effects of unbalanced loads

    NASA Astrophysics Data System (ADS)

    Toman, Matej; Štumberger, Gorazd; Štumberger, Bojan; Dolinar, Drago

    Power packages for calculation of power system transients are often used when studying and designing electromagnetic power systems. An accurate model of a distribution transformer is needed in order to obtain realistic values from these calculations. This transformer model must be derived in such a way that it is applicable when calculating those operating conditions appearing in practice. Operation conditions where transformers are loaded with nonlinear and unbalanced loads are especially challenging. The purpose of this work is to derive a three-phase transformer model that is appropriate for evaluating the effects of nonlinear and unbalanced loads. A lumped parameter model instead of a finite element (FE) model is considered in order to ensure that the model can be used in power packages for the calculation of power system transients. The transformer model is obtained by coupling electric and magnetic equivalent circuits. The magnetic equivalent circuit contains only three nonlinear reluctances, which represent nonlinear behaviour of the transformer. They are calculated by the inverse Jiles-Atherton (J-A) hysteresis model, while parameters of hysteresis are identified using differential evolution (DE). This considerably improves the accuracy of the derived transformer model. Although the obtained transformer model is simple, the simulation results show good agreement between measured and calculated results.
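    Differential evolution, used above to identify the hysteresis parameters, is simple enough to sketch in full. The minimal DE/rand/1/bin optimizer below fits a tanh-shaped magnetization curve as a hypothetical stand-in for the Jiles-Atherton identification step; the objective, parameter names, and bounds are illustrative, not the paper's.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=30, F=0.7, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin optimizer (sketch of the identification step)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([obj(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: three distinct members other than i
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with at least one mutant gene
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            tc = obj(trial)
            if tc <= cost[i]:                 # greedy selection
                pop[i], cost[i] = trial, tc
    return pop[np.argmin(cost)], cost.min()

# hypothetical anhysteretic magnetization curve standing in for the J-A fit
H = np.linspace(-500.0, 500.0, 50)
true = (1.2, 80.0)
M = true[0] * np.tanh(H / true[1])
obj = lambda p: np.sum((p[0] * np.tanh(H / p[1]) - M) ** 2)
best, best_cost = differential_evolution(obj, [(0.1, 5.0), (10.0, 300.0)])
```

DE needs no gradients, which is why it suits hysteresis models whose outputs depend on the parameters in a strongly nonlinear way.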

  12. Multi-agent Architecture for the Multi-Skill Tasks Modeling at the Pediatric Emergency Department.

    PubMed

    Ajmi, Ines; Zgaya, Hayfa; Hammadi, Slim; Gammoudi, Lotfi; Martinot, Alain; Beuscart, Régis; Renard, Jean-Marie

    2015-01-01

    Patient journey in the Pediatric Emergency Department is a highly complex process. Current approaches to modeling it are insufficient because they either focus only on single ancillary units, and therefore do not consider the entire treatment process of the patients, or they do not account for the dynamics of the patient journey. Therefore, we propose an agent-based approach in which patients and emergency department human resources are represented as autonomous agents who are able to react flexibly to changes and disturbances through pro-activeness and reactiveness. The main aim of this paper is to present the overall design of the proposed multi-agent system, emphasizing its architecture and the behavior of each agent of the model. In addition, we describe inter-agent communication based on the agent interaction protocol to ensure cooperation between agents when they perform the coordination of tasks for the users. This work is integrated into the ANR HOST project (ANR-11-TecSan-010).

  13. Service Oriented Architectural Model for Load Flow Analysis in Power Systems

    NASA Astrophysics Data System (ADS)

    Muthu, Balasingh Moses; Veilumuthu, Ramachandran; Ponnusamy, Lakshmi

    2011-07-01

    The main objective of this paper is to develop the Service Oriented Architectural (SOA) Model for representation of power systems, especially for computing load flow analysis of large interconnected power systems. The proposed SOA model has three elements, namely the load flow service provider, the power systems registry, and the client. The exchange of data using XML makes the power system services standardized and adaptable. The load flow service is provided by the service provider and published in the power systems registry, enabling universal visibility of and access to the service. The message-oriented style of SOA using the Simple Object Access Protocol (SOAP) lets the service provider and the power systems client exist in a loosely coupled environment. The proposed model portrays the load flow services as Web services in a service-oriented environment. To suit the needs of the power system industry, it integrates easily with Web applications, enabling faster power system operations.
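    The XML-based exchange described above can be illustrated with a toy payload. The element names (`LoadFlowRequest`, `Bus`, `P`, `Q`) are hypothetical, not taken from the paper; the point is only the standardized round trip between client and service provider.

```python
import xml.etree.ElementTree as ET

def build_loadflow_request(buses):
    """Serialize a (hypothetical) load flow request as XML on the client side."""
    root = ET.Element("LoadFlowRequest")
    for bus in buses:
        e = ET.SubElement(root, "Bus", id=str(bus["id"]))
        ET.SubElement(e, "P").text = str(bus["P"])   # active power injection
        ET.SubElement(e, "Q").text = str(bus["Q"])   # reactive power injection
    return ET.tostring(root, encoding="unicode")

def parse_loadflow_request(xml_text):
    """Recover bus data from the XML payload on the service-provider side."""
    root = ET.fromstring(xml_text)
    return [{"id": int(b.get("id")),
             "P": float(b.findtext("P")),
             "Q": float(b.findtext("Q"))}
            for b in root.findall("Bus")]

buses = [{"id": 1, "P": 1.5, "Q": 0.4}, {"id": 2, "P": -0.8, "Q": 0.1}]
roundtrip = parse_loadflow_request(build_loadflow_request(buses))
```

Because both sides agree only on the XML schema, either end can be replaced independently, which is the loose coupling the SOAP-based design aims for.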

  14. Development of a Subcell Based Modeling Approach for Modeling the Architecturally Dependent Impact Response of Triaxially Braided Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Sorini, Chris; Chattopadhyay, Aditi; Goldberg, Robert K.; Kohlman, Lee W.

    2016-01-01

    Understanding the high velocity impact response of polymer matrix composites with complex architectures is critical to many aerospace applications, including engine fan blade containment systems where the structure must be able to completely contain fan blades in the event of a blade-out. Despite the benefits offered by these materials, the complex nature of textile composites presents a significant challenge for the prediction of deformation and damage under both quasi-static and impact loading conditions. The relatively large mesoscale repeating unit cell (in comparison to the size of structural components) causes the material to behave like a structure rather than a homogeneous material. Impact experiments conducted at NASA Glenn Research Center have shown the damage patterns to be a function of the underlying material architecture. Traditional computational techniques that involve modeling these materials using smeared homogeneous, orthotropic material properties at the macroscale result in simulated damage patterns that are a function of the structural geometry, but not the material architecture. In order to preserve heterogeneity at the highest length scale in a robust yet computationally efficient manner, and capture the architecturally dependent damage patterns, a previously-developed subcell modeling approach where the braided composite unit cell is approximated as a series of four adjacent laminated composites is utilized. This work discusses the implementation of the subcell methodology into the commercial transient dynamic finite element code LS-DYNA (Livermore Software Technology Corp.). Verification and validation studies are also presented, including simulation of the tensile response of straight-sided and notched quasi-static coupons composed of a T700/PR520 triaxially braided [0deg/60deg/-60deg] composite. 
Based on the results of the verification and validation studies, advantages and limitations of the methodology as well as plans for future work

  15. Transform-both-sides nonlinear models for in vitro pharmacokinetic experiments.

    PubMed

    Latif, A H M Mahbub; Gilmour, Steven G

    2015-06-01

    Transform-both-sides nonlinear models have proved useful in many experimental applications including those in pharmaceutical sciences and biochemistry. The maximum likelihood method is commonly used to fit transform-both-sides nonlinear models, where the regression and transformation parameters are estimated simultaneously. In this paper, an analysis of variance-based method is described in detail for estimating transform-both-sides nonlinear models from randomized experiments. It estimates the transformation parameter from the full treatment model and then the regression parameters are estimated conditionally on this estimate of the transformation parameter. The analysis of variance method is computationally simpler compared with the maximum likelihood method of estimation and allows a more natural separation of different sources of lack of fit. Simulation studies show that the analysis of variance method can provide unbiased estimators of complex transform-both-sides nonlinear models, such as transform-both-sides random coefficient nonlinear regression models and transform-both-sides fixed coefficient nonlinear regression models with random block effects.
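    The two-stage method described above, estimate the transformation parameter from the full treatment model and then the regression parameters conditionally, can be sketched with a Box-Cox transform-both-sides model. Everything below (the model form, the grid searches, the simulated design) is an illustrative reconstruction under stated assumptions, not the authors' procedure or code.

```python
import numpy as np

rng = np.random.default_rng(7)

def boxcox(y, lam):
    """Box-Cox transform; the log transform is the lam -> 0 limit."""
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def profile_loglik(y, groups, lam):
    """Profile log-likelihood of lam under the full treatment-means model."""
    z = boxcox(y, lam)
    means = {g: z[groups == g].mean() for g in np.unique(groups)}
    resid = z - np.array([means[g] for g in groups])
    n = len(y)
    # -n/2 log(RSS/n) plus the Jacobian term of the transformation
    return -0.5 * n * np.log(np.mean(resid**2)) + (lam - 1.0) * np.log(y).sum()

# simulated randomized experiment; assumed model h(y) = h(b0 + b1*x) + error
# with true (b0, b1, lam) = (1.0, 2.0, 0.0), i.e. multiplicative error
x_levels = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
groups = np.repeat(np.arange(5), 8)
x = x_levels[groups]
y = (1.0 + 2.0 * x) * np.exp(rng.normal(0.0, 0.1, size=x.size))

# stage 1: lam from the full treatment model (grid-search sketch)
lam_grid = np.linspace(-1.0, 2.0, 61)
lam_hat = lam_grid[np.argmax([profile_loglik(y, groups, l) for l in lam_grid])]

# stage 2: regression parameters conditional on lam_hat (crude grid least squares)
z = boxcox(y, lam_hat)
b_grid = np.linspace(0.1, 5.0, 50)
rss = np.array([[np.sum((z - boxcox(b0 + b1 * x, lam_hat))**2)
                 for b1 in b_grid] for b0 in b_grid])
i0, i1 = np.unravel_index(np.argmin(rss), rss.shape)
b0_hat, b1_hat = b_grid[i0], b_grid[i1]
```

The separation mirrors the paper's point: lack of fit of the regression function cannot contaminate the transformation estimate, because lam is fixed before the nonlinear model is fitted.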

  16. Conversion of Highly Complex Faulted Hydrostratigraphic Architectures into MODFLOW Grid for Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Tsai, F. T.

    2013-12-01

    The USGS MODFLOW is widely used for groundwater modeling. Because it uses a structured grid, all layers have to be continuous throughout the model domain. This makes it difficult to generate a computational grid for complex hydrostratigraphic architectures including thin and discontinuous layers, interconnections of sand units, pinch-outs, and faults. In this study, we present a technique for automatically generating a MODFLOW grid for complex aquifer systems with strong sand-clay binary heterogeneity. To do so, an indicator geostatistical method is adopted to interpolate sand and clay distributions in a gridded two-dimensional plane along the structural dip for every one-foot vertical interval. A three-dimensional gridded binary geological architecture is reconstructed by assembling all two-dimensional planes. Then, the geological architecture is converted to a MODFLOW computational grid using the following procedure. First, we determine the bed boundary elevations of sand and clay units for each vertical column. Then, we determine the total number of bed boundaries for a vertical column by projecting the bed boundaries of its four adjacent vertical columns onto the column. This step is important for preserving flow pathways, especially for narrow connections between sand units. Finally, we determine the number of MODFLOW layers and assign layer indices to bed boundaries. A MATLAB code was developed to implement the technique. The inputs for the code are bed boundary data from well logs, a structural dip, minimal layer thickness, and the number of layers. The outputs are a MODFLOW grid of sand and clay indicators. The technique is able to generate grids that preserve fault features in the geological architecture. Moreover, the code is very efficient for regenerating the MODFLOW grid at different grid resolutions. The technique was applied to MODFLOW grid generation for the fluvial aquifer system in Baton Rouge, Louisiana. The study area consists of the '1,200-foot' sand, the '1
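    The central projection step can be sketched as follows: each column's set of bed boundaries is augmented with those of its four neighbors before layer indices are assigned, so narrow sand connections survive gridding. The grid and elevations below are hypothetical, and the sketch omits the minimum-thickness handling of the actual MATLAB code.

```python
def unify_boundaries(col_bounds, i, j):
    """Project bed boundaries of the four adjacent columns onto column (i, j).

    col_bounds is a 2-D nested list; each cell holds boundary elevations (ft)."""
    merged = set(col_bounds[i][j])
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < len(col_bounds) and 0 <= nj < len(col_bounds[0]):
            merged.update(col_bounds[ni][nj])
    return sorted(merged, reverse=True)  # highest elevation first

# hypothetical 2x2 patch of columns with sand/clay bed boundaries (ft)
cols = [[[0, -30, -100], [0, -50, -100]],
        [[0, -30, -60, -100], [0, -100]]]
layers = unify_boundaries(cols, 0, 0)
```

Column (0, 0) ends up carrying the -50 and -60 ft boundaries it did not originally have, so a layer pinching out in a neighbor still gets a (zero-thickness) counterpart here.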

  17. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions

    PubMed Central

    Potter, Gail E.; Smieszek, Timo; Sailer, Kerstin

    2015-01-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0–5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models. PMID:26634122
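    The effect of reporting error can be illustrated with a far simpler moment-based correction than the paper's latent variable model: if each party independently reports a true contact with probability p, then among true contacts both report with probability p^2 and exactly one with probability 2p(1-p), which identifies p from the observed counts. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
p_report = 0.4   # assumed per-person reporting probability for short contacts
n_true = 5000    # true number of contacts (unknown in practice)

a = rng.random(n_true) < p_report  # does person A report the contact?
b = rng.random(n_true) < p_report  # does person B report it?
n_both = int(np.sum(a & b))
n_one = int(np.sum(a ^ b))

# moment estimator: E[n_both] is proportional to p^2, E[n_one] to 2p(1-p)
p_hat = 2 * n_both / (2 * n_both + n_one)
# correct the observed total for contacts that nobody reported
n_hat = (n_both + n_one) / (1 - (1 - p_hat) ** 2)
```

The paper's joint latent-variable estimation goes further, tying reporting probability to contact duration and to the network structure itself.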

  18. XSTREAM: A practical algorithm for identification and architecture modeling of tandem repeats in protein sequences

    PubMed Central

    Newman, Aaron M; Cooper, James B

    2007-01-01

    Background Biological sequence repeats arranged in tandem patterns are widespread in DNA and proteins. While many software tools have been designed to detect DNA tandem repeats (TRs), useful algorithms for identifying protein TRs with varied levels of degeneracy are still needed. Results To address limitations of current repeat identification methods, and to provide an efficient and flexible algorithm for the detection and analysis of TRs in protein sequences, we designed and implemented a new computational method called XSTREAM. Running time tests confirm the practicality of XSTREAM for analyses of multi-genome datasets. Each of the key capabilities of XSTREAM (e.g., merging, nesting, long-period detection, and TR architecture modeling) are demonstrated using anecdotal examples, and the utility of XSTREAM for identifying TR proteins was validated using data from a recently published paper. Conclusion We show that XSTREAM is a practical and valuable tool for TR detection in protein and nucleotide sequences at the multi-genome scale, and an effective tool for modeling TR domains with diverse architectures and varied levels of degeneracy. Because of these useful features, XSTREAM has significant potential for the discovery of naturally-evolved modular proteins with applications for engineering novel biostructural and biomimetic materials, and identifying new vaccine and diagnostic targets. PMID:17931424
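    As a contrast to XSTREAM's seed-extension approach, a brute-force exact tandem-repeat scan is easy to write; this toy (with an invented sequence) finds perfect repeats only and ignores the degeneracy, merging, nesting, and long-period handling that XSTREAM provides.

```python
def find_tandem_repeats(seq, min_period=1, max_period=6, min_copies=3):
    """Naive exact tandem-repeat scan: for each period, count how many times
    the unit starting at position i repeats back-to-back."""
    hits = []
    for period in range(min_period, max_period + 1):
        i = 0
        while i + period <= len(seq):
            unit = seq[i:i + period]
            copies = 1
            while seq[i + copies * period: i + (copies + 1) * period] == unit:
                copies += 1
            if copies >= min_copies:
                hits.append((i, unit, copies))  # (start, repeat unit, copy number)
                i += copies * period
            else:
                i += 1
    return hits

hits = find_tandem_repeats("MAPQQQQQSTGAGAGAGAKL")
```

On this toy protein sequence the scan reports the poly-Q run and the GA repeat; real protein TRs are rarely this clean, which is why degeneracy-tolerant algorithms are needed.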

  19. Notch Signaling and Schwann Cell Transformation: Development of a Model System and Application to Human MPNSTs

    DTIC Science & Technology

    2008-09-01

    Notch Signaling and Schwann Cell Transformation: Development of a Model System and Application to Human MPNSTs (grant W81XWH-04-1-0209). The project concerns the malignant transformation of neurofibromas to MPNSTs in patients with NF1. Our previous work has shown that constitutive expression of Notch can transform rat Schwann cells.

  20. A Thermo-Plastic-Martensite Transformation Coupled Constitutive Model for Hot Stamping

    NASA Astrophysics Data System (ADS)

    Bin, Zhu; WeiKang, Liang; Zhongxiang, Gui; Kai, Wang; Chao, Wang; Yilin, Wang; Yisheng, Zhang

    2017-01-01

    In this study, a thermo-plastic-martensite transformation coupled model based on the von Mises yield criterion and the associated plastic flow rule is developed to further improve the accuracy of numerical simulation during hot stamping. The constitutive model is implemented into the finite element program ABAQUS using user subroutine VUMAT. The martensite transformation, transformation-induced plasticity and volume expansion during the austenite-to-martensite transformation are included in the constitutive model. For this purpose, isothermal tensile tests are performed to obtain the flow stress, and non-isothermal tensile tests were carried out to validate the constitutive model. The non-isothermal tensile numerical simulation demonstrates that the thermo-plastic-martensite transformation coupled constitutive model provides a reasonable prediction of force-displacement curves upon loading, which is expected to be applied for modeling and simulation of hot stamping.
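    The abstract does not state which transformation kinetics law is used, but a common phenomenological choice for athermal martensite formation in hot stamping models is the Koistinen-Marburger relation; the Ms and alpha values below are illustrative for a 22MnB5-type press-hardening steel, not the paper's calibration.

```python
import numpy as np

def martensite_fraction(T, Ms=410.0, alpha=0.011):
    """Koistinen-Marburger relation: f = 1 - exp(-alpha * (Ms - T)) for T < Ms.

    T and Ms in deg C; alpha in 1/K. Values are illustrative, not fitted."""
    return np.where(T < Ms, 1.0 - np.exp(-alpha * (Ms - T)), 0.0)

# martensite fraction along a cooling path (deg C)
T = np.array([450.0, 400.0, 300.0, 200.0, 20.0])
f = martensite_fraction(T)
```

In a VUMAT-style implementation, the increment of f between two temperature states would then drive the transformation-induced plasticity and volume-expansion strain terms.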

  2. Dynamic root growth and architecture responses to limiting nutrient availability: linking physiological models and experimentation.

    PubMed

    Postma, Johannes A; Schurr, Ulrich; Fiorani, Fabio

    2014-01-01

    In recent years the study of root phenotypic plasticity in response to sub-optimal environmental factors and the genetic control of these responses have received renewed attention. As a path to increased productivity, in particular for low fertility soils, several applied research projects worldwide target the improvement of crop root traits both in plant breeding and biotechnology contexts. To assist these tasks and address the challenge of optimizing root growth and architecture for enhanced mineral resource use, the development of realistic simulation models is of great importance. We review this research field from a modeling perspective focusing particularly on nutrient acquisition strategies for crop production on low nitrogen and low phosphorous soils. Soil heterogeneity and the dynamics of nutrient availability in the soil pose a challenging environment in which plants have to forage efficiently for nutrients in order to maintain their internal nutrient homeostasis throughout their life cycle. Mathematical models assist in understanding plant growth strategies and associated root phenes that have potential to be tested and introduced in physiological breeding programs. At the same time, we stress that it is necessary to carefully consider model assumptions and development from a whole plant-resource allocation perspective and to introduce or refine modules simulating explicitly root growth and architecture dynamics through ontogeny with reference to key factors that constrain root growth. In this view it is important to understand negative feedbacks such as plant-plant competition. We conclude by briefly touching on available and developing technologies for quantitative root phenotyping from lab to field, from quantification of partial root profiles in the field to 3D reconstruction of whole root systems. Finally, we discuss how these approaches can and should be tightly linked to modeling to explore the root phenome.

  3. Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures

    NASA Astrophysics Data System (ADS)

    Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.

    2014-12-01

    In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes, which will increase the size of current data archives by a factor of 10-100 times. For example, the next Climate Model Inter-comparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 petabytes, while the upcoming next generation of NASA decadal Earth Observing instruments are expected to collect tens of gigabytes per day. In radio-astronomy, the Square Kilometre Array (SKA) will collect data in the exabytes-per-day range, of which (after reduction and processing) around 1.5 exabytes per year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that will allow system architects to model their expected data processing workflow, and determine the network, computational and storage resources needed to prepare their data for scientific analysis. In order to fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrarily complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources), and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for 4 different use cases from distinct science disciplines: climate science, astronomy, hydrology and a generic cloud computing use case. This talk will present
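    What such a simulator estimates can be reduced to a back-of-the-envelope calculation: serialize each task's data movement and compute time under an assumed network bandwidth and sustained throughput. All numbers below are invented placeholders; DAWN additionally models uncertainty propagation and alternative workflow topologies.

```python
def workflow_elapsed(tasks, bandwidth_gbps=10.0, flops=1e15):
    """Serial time estimate for a workflow of (data_GB, operations) tasks."""
    transfer = sum(gb * 8.0 / bandwidth_gbps for gb, _ in tasks)  # seconds
    compute = sum(ops / flops for _, ops in tasks)                # seconds
    return transfer, compute, transfer + compute

# toy climate-style workflow: regrid (500 GB, 1e16 ops) then average (50 GB, 1e15 ops)
transfer, compute, total = workflow_elapsed([(500.0, 1e16), (50.0, 1e15)])
```

Even this crude estimator already exposes the tradeoff the abstract describes: for these numbers the workflow is transfer-bound (440 s of movement versus 11 s of compute), so moving computation to the data would dominate any algorithmic speedup.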

  4. On the injectivity of the generalized Radon transform arising in a model of mathematical economics

    NASA Astrophysics Data System (ADS)

    Agaltsov, A. D.

    2016-11-01

    In the present article we consider the uniqueness problem for the generalized Radon transform arising in a mathematical model of production. We prove uniqueness theorems for this transform and for the profit function in the corresponding model of production. Our approach is based on the multidimensional Wiener’s approximation theorems.

  5. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  6. Analysis of optical near-field energy transfer by stochastic model unifying architectural dependencies

    NASA Astrophysics Data System (ADS)

    Naruse, Makoto; Akahane, Kouichi; Yamamoto, Naokatsu; Holmström, Petter; Thylén, Lars; Huant, Serge; Ohtsu, Motoichi

    2014-04-01

    We theoretically and experimentally demonstrate energy transfer mediated by optical near-field interactions in a multi-layer InAs quantum dot (QD) structure composed of a single layer of larger dots and N layers of smaller ones. We construct a stochastic model in which optical near-field interactions that follow a Yukawa potential, QD size fluctuations, and temperature-dependent energy level broadening are unified, enabling us to examine device-architecture-dependent energy transfer efficiencies. The model results are consistent with the experiments. This study provides an insight into optical energy transfer involving inherent disorders in materials and paves the way to systematic design principles of nanophotonic devices that will allow optimized performance and the realization of designated functions.
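    The Yukawa form used for the near-field interaction can be written down directly; the amplitude, screening constant, and separations below are placeholders, not the paper's fitted values.

```python
import numpy as np

def yukawa(r, amplitude=1.0, kappa=0.2):
    # Yukawa-type near-field interaction: U(r) = A * exp(-kappa * r) / r
    return amplitude * np.exp(-kappa * r) / r

r = np.array([5.0, 10.0, 20.0])  # hypothetical dot separations (nm)
u = yukawa(r)
```

The screened exponential makes the coupling fall off faster than an unscreened 1/r interaction, which is why energy transfer in such structures is dominated by the closest dot pairs.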

  7. Quantum gates and architecture for the quantum simulation of the Fermi-Hubbard model

    NASA Astrophysics Data System (ADS)

    Dallaire-Demers, Pierre-Luc; Wilhelm, Frank K.

    2016-12-01

    Quantum computers are the ideal platform for quantum simulations. Given enough coherent operations and qubits, such machines can be leveraged to simulate strongly correlated materials, where intricate quantum effects give rise to counterintuitive macroscopic phenomena such as high-temperature superconductivity. In this paper, we provide a gate decomposition and an architecture for a quantum simulator used to simulate the Fermi-Hubbard model in a hybrid variational quantum-classical algorithm. We propose a simple planar implementation-independent layout of qubits that can also be used to simulate more general fermionic systems. By working through a concrete application, we show the gate decomposition used to simulate the Hamiltonian of a cluster of the Fermi-Hubbard model. We briefly analyze the Trotter-Suzuki errors and estimate the scaling properties of the algorithm for more complex applications.
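    The Trotter-Suzuki error mentioned at the end can be demonstrated at toy scale: split a Hermitian matrix into two non-commuting terms (standing in for the hopping and interaction parts of the Hubbard Hamiltonian, though these random 4x4 matrices are not a Hubbard model) and watch the first-order product-formula error shrink as the step count grows.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2  # random Hermitian term
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2  # non-commuting partner

exact = expm(-1j * (A + B))  # exact time evolution for t = 1

def trotter(n):
    # first-order Trotter-Suzuki approximation with n steps
    step = expm(-1j * A / n) @ expm(-1j * B / n)
    return np.linalg.matrix_power(step, n)

err = [np.linalg.norm(trotter(n) - exact) for n in (1, 2, 4, 8)]
```

First-order splitting has error O(1/n) proportional to the commutator norm, which is the scaling relevant when estimating gate counts for larger Hubbard clusters.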

  8. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  9. Developing Materials Processing to Performance Modeling Capabilities and the Need for Exascale Computing Architectures (and Beyond)

    SciTech Connect

    Schraad, Mark William; Luscher, Darby Jon

    2016-09-06

    Additive Manufacturing techniques are presenting the Department of Energy and the NNSA Laboratories with new opportunities to consider novel component production and repair processes, and to manufacture materials with tailored response and optimized performance characteristics. Additive Manufacturing technologies already are being applied to primary NNSA mission areas, including Nuclear Weapons. These mission areas are adapting to these new manufacturing methods, because of potential advantages, such as smaller manufacturing footprints, reduced needs for specialized tooling, an ability to embed sensing, novel part repair options, an ability to accommodate complex geometries, and lighter weight materials. To realize the full potential of Additive Manufacturing as a game-changing technology for the NNSA’s national security missions, however, significant progress must be made in several key technical areas. In addition to advances in engineering design, process optimization and automation, and accelerated feedstock design and manufacture, significant progress must be made in modeling and simulation. First and foremost, a more mature understanding of the process-structure-property-performance relationships must be developed. Because Additive Manufacturing processes change the nature of a material’s structure below the engineering scale, new models are required to predict materials response across the spectrum of relevant length scales, from the atomistic to the continuum. New diagnostics will be required to characterize materials response across these scales. And not just models, but advanced algorithms, next-generation codes, and advanced computer architectures will be required to complement the associated modeling activities. Based on preliminary work in each of these areas, a strong argument for the need for Exascale computing architectures can be made, if a legitimate predictive capability is to be developed.

  10. A Four-Phase Model of the Evolution of Clinical Decision Support Architectures

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2008-01-01

    Background A large body of evidence over many years suggests that clinical decision support systems can be helpful in improving both clinical outcomes and adherence to evidence-based guidelines. However, to this day, clinical decision support systems are not widely used outside of a small number of sites. One reason why decision support systems are not widely used is the relative difficulty of integrating such systems into clinical workflows and computer systems. Purpose To review and synthesize the history of clinical decision support systems, and to propose a model of various architectures for integrating clinical decision support systems with clinical systems. Methods The authors conducted an extensive review of the clinical decision support literature since 1959, sequenced the systems and developed a model. Results The model developed consists of four phases: standalone decision support systems, decision support integrated into clinical systems, standards for sharing clinical decision support content and service models for decision support. These four phases have not heretofore been identified, but they track remarkably well with the chronological history of clinical decision support, and show evolving and increasingly sophisticated attempts to ease integrating decision support systems into clinical workflows and other clinical systems. Conclusions Each of the four evolutionary approaches to decision support architecture has unique advantages and disadvantages. 
A key lesson was that almost all the approaches faced common limitations that no single approach has been able to entirely surmount: 1) fixed knowledge representation systems inherently circumscribe the type of knowledge that can be represented in them, 2) there are serious terminological issues, 3) patient data may be spread across several sources with no single source having a complete view of the patient, and 4) major difficulties exist in transferring successful interventions from one

  11. Continuous distribution model for the investigation of complex molecular architectures near interfaces with scattering techniques

    NASA Astrophysics Data System (ADS)

    Shekhar, Prabhanshu; Nanda, Hirsh; Lösche, Mathias; Heinrich, Frank

    2011-11-01

    Biological membranes are composed of a thermally disordered lipid matrix and therefore require non-crystallographic scattering approaches for structural characterization with x-rays or neutrons. Here we develop a continuous distribution (CD) model to refine neutron or x-ray reflectivity data from complex architectures of organic molecules. The new model is a flexible implementation of the composition-space refinement of interfacial structures to constrain the resulting scattering length density profiles. We show this model increases the precision with which molecular components may be localized within a sample, with a minimal use of free model parameters. We validate the new model by parameterizing all-atom molecular dynamics (MD) simulations of bilayers and by evaluating the neutron reflectivity of a phospholipid bilayer physisorbed to a solid support. The determination of the structural arrangement of a sparsely-tethered bilayer lipid membrane (stBLM) comprised of a multi-component phospholipid bilayer anchored to a gold substrate by a thiolated oligo(ethylene oxide) linker is also demonstrated. From the model we extract the bilayer composition and density of tether points, information which was previously inaccessible for stBLM systems. The new modeling strategy has been implemented into the ga_refl reflectivity data evaluation suite, available through the National Institute of Standards and Technology (NIST) Center for Neutron Research (NCNR).

  12. A Vision Based Top-View Transformation Model for a Vehicle Parking Assistant

    PubMed Central

    Lin, Chien-Chuan; Wang, Ming-Shi

    2012-01-01

    This paper proposes the Top-View Transformation Model for image coordinate transformation, which involves transforming a perspective projection image into its corresponding bird's eye vision. A fitting parameters searching algorithm estimates the parameters that are used to transform the coordinates from the source image. Using this approach, it is not necessary to provide any interior and exterior orientation parameters of the camera. The designed car parking assistant system can be installed at the rear end of the car, providing the driver with a clearer image of the area behind the car. The processing time can be reduced by storing and using the transformation matrix estimated from the first image frame for a sequence of video images. The transformation matrix can be stored as the Matrix Mapping Table, and loaded into the embedded platform to perform the transformation. Experimental results show that the proposed approaches can provide a clearer and more accurate bird's eye view to the vehicle driver. PMID:22666038
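    The "fitting parameters" amount to a 3x3 planar homography. A minimal direct linear transform (DLT) version, with made-up pixel coordinates for four ground-plane reference points, shows the idea; the paper's parameter-searching algorithm and Matrix Mapping Table optimization are not reproduced here.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 3x3 projective transform H (up to scale) mapping src -> dst
    via the standard DLT linear system; no camera calibration required."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)  # null vector of the system = flattened H

def warp_point(H, x, y):
    # apply H in homogeneous coordinates and dehomogenize
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# four road markings in the rear-camera image -> rectified top view (pixels)
src = [(100, 200), (540, 200), (620, 460), (20, 460)]
dst = [(0, 0), (400, 0), (400, 300), (0, 300)]
H = homography_from_points(src, dst)
```

Precomputing the warped source coordinate for every destination pixel once, then reusing that lookup for every video frame, is exactly the Matrix Mapping Table idea the abstract describes.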

  13. Equivalent circuit of radio frequency-plasma with the transformer model.

    PubMed

    Nishida, K; Mochizuki, S; Ohta, M; Yasumoto, M; Lettry, J; Mattei, S; Hatayama, A

    2014-02-01

    The LINAC4 H(-) source is a radio frequency (RF) driven source. In the RF system, the load impedance, which includes the H(-) source, must be matched to that of the final amplifier. We model the RF plasma inside the H(-) source as circuit elements using a transformer model so that the characteristics of the load impedance become calculable. The modeling based on the transformer model has been shown to predict the resistance and inductance of the plasma well.
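    A minimal version of the transformer-model calculation: the plasma is treated as a lossy one-turn secondary, and its impedance is reflected into the drive coil through the mutual inductance. All element values below are invented placeholders, not the LINAC4 source's measured parameters.

```python
import numpy as np

def coil_impedance(f, L_coil=5e-6, L_plasma=1e-7, R_plasma=2.0, k=0.3):
    """Transformer model of an inductive RF plasma load (illustrative values).

    The plasma (R_plasma, L_plasma) is a one-turn secondary coupled to the
    drive coil L_coil with coupling coefficient k."""
    w = 2 * np.pi * f
    M = k * np.sqrt(L_coil * L_plasma)                    # mutual inductance
    Z_reflected = (w * M) ** 2 / (R_plasma + 1j * w * L_plasma)
    return 1j * w * L_coil + Z_reflected                  # impedance seen at the coil

Z = coil_impedance(2e6)  # 2 MHz drive frequency
R_eff, X_eff = Z.real, Z.imag
```

The small resistive part and large inductive part of Z are what the matching network between the final amplifier and the source has to transform to the amplifier's design load.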

  14. PDS4: Meeting Big Data Challenges Via a Model-Driven Planetary Science Data Architecture and System

    NASA Astrophysics Data System (ADS)

    Law, E.; Hughes, J. S.; Crichton, D. J.; Hardman, S. H.; Joyner, R.; Ramirez, P.

    2014-12-01

    Big science data management entails cataloging, processing, distribution, multiple ways of analyzing and interpreting the data, long-term preservation, and international cooperation on massive amounts of scientific data. PDS4, the next generation of the Planetary Data System (PDS), uses an information model-driven architectural approach coupled with modern information technologies and standards to meet these challenges of big science data management. PDS4 is an operational example of the use of an explicit data system architecture and an ontology-based information model to drive the development, operations, and evolution of a scalable data system along the entire science data lifecycle, from ground systems to the archives. This overview of PDS4 will include a description of its model-driven approach and its overall systems architecture. It will illustrate how the system is being used to help meet the expectations of modern scientists for interoperable data systems and correlatable data in the Big Data era.

  15. Agent-based modeling supporting the migration of registry systems to grid based architectures.

    PubMed

    Cryer, Martin E; Frey, Lewis

    2009-03-01

    With the increasing age and operating cost of the existing NCI SEER platform's core technologies, essential resources in the fight against cancer such as these will eventually have to be migrated to Grid-based systems. In order to model this migration, a simulation is proposed based upon agent modeling technology. This modeling technique allows for simulation of complex and distributed services provided by a large-scale Grid computing platform such as the caBIG™ project's caGRID. To investigate such a migration to a Grid-based platform technology, this paper proposes using agent-based modeling simulations to predict the performance of current and Grid configurations of the NCI SEER system integrated with the existing translational opportunities afforded by caGRID. The model illustrates how the use of Grid technology can potentially improve system response time as systems under test are scaled. In modeling SEER nodes accessing multiple registry silos, we show that the performance of SEER applications re-implemented in a Grid-native manner exhibits a nearly constant user response time with increasing numbers of distributed registry silos, compared with the current application architecture, which exhibits a linear increase in response time for increasing numbers of silos.

  16. Architectural Methodology Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kinds of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and the speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high-priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); and Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment. It does not address an analog voice architecture.

  17. Origin and model of transform faults in the Okinawa Trough

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Li, Sanzhong; Jiang, Suhua; Suo, Yanhui; Guo, Lingli; Wang, Yongming; Zhang, Huixuan

    2017-03-01

    Transform faults in back-arc basins are the key to revealing the opening and development of marginal seas. The Okinawa Trough (OT) represents an incipient and active back-arc or marginal sea basin oriented in a general NE-SW direction. To determine the strikes and spatial distribution of transform faults in the OT, this paper dissects the NW- and NNE-SN-trending fault patterns on the basis of seismic profiles, gravity anomalies and regional geological data. There are three main NW-trending transpressional faults in the OT, which are the seaward propagation of NW-trending faults in the East China Continent. The NNE-SN-trending faults, with their right-stepping distribution, exhibit right-lateral shearing. The strike-slip pull-apart process or transtensional faulting triggered the back-arc rifting or extension, and these faults evolved into transform faults with the emergence of oceanic crust. Thus, the transform fault patterns are inherited from pre-existing oblique transtensional faults at the offsets between rifting segments. The OT therefore exhibits an oblique spreading mechanism similar to that of nascent oceans such as the Red Sea and the Gulf of Aden.

  18. Use of Dynamic Models and Operational Architecture to Solve Complex Navy Challenges

    NASA Technical Reports Server (NTRS)

    Grande, Darby; Black, J. Todd; Freeman, Jared; Sorber, TIm; Serfaty, Daniel

    2010-01-01

    The United States Navy established 8 Maritime Operations Centers (MOC) to enhance the command and control of forces at the operational level of warfare. Each MOC is a headquarters manned by qualified joint operational-level staffs, and enabled by globally interoperable C41 systems. To assess and refine MOC staffing, equipment, and schedules, a dynamic software model was developed. The model leverages pre-existing operational process architecture, joint military task lists that define activities and their precedence relations, as well as Navy documents that specify manning and roles per activity. The software model serves as a "computational wind-tunnel" in which to test a MOC on a mission, and to refine its structure, staffing, processes, and schedules. More generally, the model supports resource allocation decisions concerning Doctrine, Organization, Training, Material, Leadership, Personnel and Facilities (DOTMLPF) at MOCs around the world. A rapid prototype effort efficiently produced this software in less than five months, using an integrated process team consisting of MOC military and civilian staff, modeling experts, and software developers. The work reported here was conducted for Commander, United States Fleet Forces Command in Norfolk, Virginia, code N5-0LW (Operational Level of War) that facilitates the identification, consolidation, and prioritization of MOC capabilities requirements, and implementation and delivery of MOC solutions.

  19. Assessment of model uncertainty during the river export modelling of pesticides and transformation products

    NASA Astrophysics Data System (ADS)

    Gassmann, Matthias; Olsson, Oliver; Kümmerer, Klaus

    2013-04-01

    The modelling of organic pollutants in the environment is burdened by a load of uncertainties. Not only parameter values are uncertain but often also the mass and timing of pesticide application. By introducing transformation products (TPs) into modelling, further uncertainty is likely, arising from the dependence of these substances on their parent compounds and from the introduction of new model parameters. The purpose of this study was the investigation of the behaviour of a parsimonious catchment scale model for the assessment of river concentrations of the insecticide Chlorpyrifos (CP) and two of its TPs, Chlorpyrifos Oxon (CPO) and 3,5,6-trichloro-2-pyridinol (TCP), under the influence of uncertain input parameter values. Parameter uncertainty and pesticide application uncertainty in particular were investigated by Global Sensitivity Analysis (GSA) and the Generalized Likelihood Uncertainty Estimation (GLUE) method, based on Monte-Carlo sampling. GSA revealed that half-lives and sorption parameters as well as half-lives and transformation parameters were correlated to each other. This means that the concepts of modelling sorption and degradation/transformation were correlated. Thus, it may be difficult in modelling studies to optimize parameter values for these modules. Furthermore, we could show that erroneous pesticide application mass and timing were compensated during Monte-Carlo sampling by changing the half-life of CP. However, the introduction of TCP into the calculation of the objective function was able to enhance identifiability of pesticide application mass. The GLUE analysis showed that CP and TCP were modelled successfully, but CPO modelling failed with high uncertainty and insensitive parameters. We assumed a structural error of the model which was especially important for CPO assessment. This shows that there is the possibility that a chemical and some of its TPs can be modelled successfully by a specific model structure, but for other TPs, the model
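    The combination of Monte-Carlo sampling with a behavioural threshold, which is the core of the GLUE method named above, can be sketched as follows. The first-order decay model, parameter ranges, and Nash-Sutcliffe threshold are invented for illustration and are not the study's actual catchment model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical first-order decay model standing in for the catchment model;
# the real study's model and parameters are not reproduced here.
def simulate(half_life, mass, t):
    k = np.log(2) / half_life
    return mass * np.exp(-k * t)

t = np.linspace(0, 30, 31)                      # days
obs = simulate(10.0, 5.0, t) + rng.normal(0, 0.05, t.size)

# Monte-Carlo sampling of the uncertain parameters.
n = 5000
half_lives = rng.uniform(2, 30, n)
masses = rng.uniform(1, 10, n)                  # uncertain application mass

# GLUE: a likelihood measure (here Nash-Sutcliffe efficiency) plus a
# behavioural threshold separates acceptable from rejected parameter sets.
def nse(sim, obs):
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

scores = np.array([nse(simulate(h, m, t), obs)
                   for h, m in zip(half_lives, masses)])
behavioural = scores > 0.8

print(f"behavioural runs: {behavioural.sum()} / {n}")
```

    Plotting `half_lives[behavioural]` against `masses[behavioural]` would expose the kind of compensation the abstract reports, where an erroneous application mass is offset by a shifted half-life.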

  20. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally-conservative engine operating limits may be relaxed to increase the performance of the engine and overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measureable. Estimating the unknown parameters allows for tighter control over these parameters, and on the level of risk the engine will operate at. This will allow the engine to achieve better performance than possible when operating to more conservative limits on a related, measurable parameter.
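    The OTKF itself is specific to CMAPSS40k, but the underlying idea, estimating an unmeasured parameter from a related measurable one so that limits can be placed on the parameter directly, can be illustrated with a minimal scalar Kalman filter. All dynamics and noise values below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar system: an unmeasured parameter x observed only through a
# related measurable quantity y = H * x + noise. This is not the OTKF.
A, H = 0.98, 2.0          # state transition, measurement gain
Q, R = 1e-4, 0.04         # process and measurement noise variances

x_true, x_est, P = 1.0, 0.0, 1.0
for _ in range(200):
    x_true = A * x_true + rng.normal(0, np.sqrt(Q))
    y = H * x_true + rng.normal(0, np.sqrt(R))
    # Predict step
    x_est = A * x_est
    P = A * P * A + Q
    # Update step
    K = P * H / (H * P * H + R)
    x_est += K * (y - H * x_est)
    P = (1 - K * H) * P

# The filter tracks the unmeasured state, so control limits can be set
# on x itself rather than on the measurable proxy y.
print(f"estimation error: {abs(x_est - x_true):.4f}")
```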

  1. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2015-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally-conservative engine operating limits may be relaxed to increase the performance of the engine and overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measureable. Estimating the unknown parameters allows for tighter control over these parameters, and on the level of risk the engine will operate at. This will allow the engine to achieve better performance than possible when operating to more conservative limits on a related, measurable parameter.

  2. Model-Based Systems Engineering With the Architecture Analysis and Design Language (AADL) Applied to NASA Mission Operations

    NASA Technical Reports Server (NTRS)

    Munoz Fernandez, Michela Miche

    2014-01-01

    The potential of Model-Based Systems Engineering (MBSE) using the Architecture Analysis and Design Language (AADL) applied to space systems will be described. AADL modeling is applicable to real-time embedded systems, the types of systems NASA builds. A case study with the Juno mission to Jupiter showcases how this work would enable future missions to benefit from using these models throughout their life cycle from design to flight operations.

  3. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model.

    PubMed

    Lo, Chiao-Ling; Lossie, Amy C; Liang, Tiebing; Liu, Yunlong; Xuei, Xiaoling; Lumeng, Lawrence; Zhou, Feng C; Muir, William M

    2016-08-01

    Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view on potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally-selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), were classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169) with the fewest in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits.

  4. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model

    PubMed Central

    Lo, Chiao-Ling; Liang, Tiebing; Liu, Yunlong; Lumeng, Lawrence; Zhou, Feng C.; Muir, William M.

    2016-01-01

    Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view on potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally-selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), were classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169) with the fewest in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits. PMID:27490364

  5. Integrating mixed-effect models into an architectural plant model to simulate inter- and intra-progeny variability: a case study on oil palm (Elaeis guineensis Jacq.).

    PubMed

    Perez, Raphaël P A; Pallas, Benoît; Le Moguédec, Gilles; Rey, Hervé; Griffon, Sébastien; Caliman, Jean-Pierre; Costes, Evelyne; Dauzat, Jean

    2016-08-01

    Three-dimensional (3D) reconstruction of plants is time-consuming and involves considerable levels of data acquisition. This is possibly one reason why the integration of genetic variability into 3D architectural models has so far been largely overlooked. In this study, an allometry-based approach was developed to account for architectural variability in 3D architectural models of oil palm (Elaeis guineensis Jacq.) as a case study. Allometric relationships were used to model architectural traits from individual leaflets to the entire crown while accounting for ontogenetic and morphogenetic gradients. Inter- and intra-progeny variabilities were evaluated for each trait and mixed-effect models were used to estimate the mean and variance parameters required for complete 3D virtual plants. Significant differences in leaf geometry (petiole length, density of leaflets, and rachis curvature) and leaflet morphology (gradients of leaflet length and width) were detected between and within progenies and were modelled in order to generate populations of plants that were consistent with the observed populations. The application of mixed-effect models on allometric relationships highlighted an interesting trade-off between model accuracy and ease of defining parameters for the 3D reconstruction of plants while at the same time integrating their observed variability. Future research will be dedicated to sensitivity analyses coupling the structural model presented here with a radiative balance model in order to identify the key architectural traits involved in light interception efficiency.
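    The mixed-effect idea described above, a progeny-level random effect layered on an allometric relationship so that simulated populations reproduce both within- and between-progeny spread, can be sketched as follows. The trait, effect sizes, and sample sizes are invented and unrelated to the oil palm dataset:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random-intercept allometry sketch: leaf length ~ a + b * rank, with a
# progeny-level intercept shift. All values are invented for illustration.
n_progeny, n_plants, n_leaves = 5, 20, 10
a, b = 30.0, 2.5                         # fixed effects (cm, cm per rank)
sd_progeny, sd_resid = 4.0, 1.5          # variance components (cm)

rank = np.arange(1, n_leaves + 1)
u = rng.normal(0, sd_progeny, n_progeny)   # one random effect per progeny

# Generate a virtual population: each plant inherits its progeny's shift.
lengths = np.array([
    a + u[g] + b * rank + rng.normal(0, sd_resid, n_leaves)
    for g in range(n_progeny) for _ in range(n_plants)
])

# Crude between-progeny variance estimate from progeny means.
progeny_means = lengths.mean(axis=1).reshape(n_progeny, n_plants).mean(axis=1)
print(f"between-progeny variance ~ {progeny_means.var(ddof=1):.1f} "
      f"(true {sd_progeny**2:.1f})")
```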

  6. Functional mapping of quantitative trait loci underlying growth trajectories using a transform-both-sides logistic model.

    PubMed

    Wu, Rongling; Ma, Chang-Xing; Lin, Min; Wang, Zuoheng; Casella, George

    2004-09-01

    The incorporation of developmental control mechanisms of growth has proven to be a powerful tool in mapping quantitative trait loci (QTL) underlying growth trajectories. A theoretical framework for implementing a QTL mapping strategy with growth laws has been established. This framework can be generalized to an arbitrary number of time points at which growth is measured, and becomes computationally more tractable when the assumption of variance stationarity is made. In practice, however, this assumption is likely to be violated for age-specific growth traits due to a scale effect. In this article, we present a new statistical model for mapping growth QTL, which also addresses the problem of variance stationarity, by using a transform-both-sides (TBS) model advocated by Carroll and Ruppert (1984, Journal of the American Statistical Association 79, 321-328). The TBS-based model for mapping growth QTL can not only maintain the original biological properties of a growth model, but also increase the accuracy and precision of parameter estimation and the power to detect a QTL responsible for growth differentiation. Using the TBS-based model, we successfully map a QTL governing growth trajectories to a linkage group in an example of forest trees. The statistical and biological properties of the estimates of this growth QTL position and effect are investigated using Monte Carlo simulation studies. The implications of our model for understanding the genetic architecture of growth are discussed.
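    The transform-both-sides idea, applying the same Box-Cox transformation to both the data and the model prediction so that residual variance is stabilized while the growth parameters keep their original-scale meaning, can be illustrated with a logistic growth curve. The parameters and multiplicative error model below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Logistic growth with multiplicative error: residual variance grows with
# the mean, i.e. variance is non-stationary. Parameters are illustrative.
def logistic(t, a, b, r):
    return a / (1 + b * np.exp(-r * t))

def boxcox(y, lam):
    return np.log(y) if lam == 0 else (y**lam - 1) / lam

t = np.linspace(0, 10, 50)
mean = logistic(t, a=10.0, b=20.0, r=1.0)
y = mean * np.exp(rng.normal(0, 0.1, t.size))

# TBS: transform data and model prediction with the same Box-Cox parameter.
# lam = 1 is effectively untransformed; lam = 0 is the log transform.
ratios = {}
for lam in (1.0, 0.0):
    resid = boxcox(y, lam) - boxcox(mean, lam)
    ratios[lam] = resid[25:].var() / resid[:25].var()
    print(f"lambda={lam}: late/early residual variance ratio = {ratios[lam]:.2f}")
```

    A ratio near 1 under the transform indicates stationary residual variance, which is the condition the TBS approach restores before QTL parameters are estimated.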

  7. A stochastic model of tree architecture and biomass partitioning: application to Mongolian Scots pines

    PubMed Central

    Wang, Feng; Kang, Mengzhen; Lu, Qi; Letort, Véronique; Han, Hui; Guo, Yan; de Reffye, Philippe; Li, Baoguo

    2011-01-01

    Background and Aims Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal species used for windbreak and sand stabilization in arid and semi-arid areas in northern China. A model-assisted analysis of its canopy architectural development and functions is valuable for better understanding its behaviour and roles in fragile ecosystems. However, due to the intrinsic complexity and variability of trees, the parametric identification of such models is currently a major obstacle to their evaluation and their validation with respect to real data. The aim of this paper was to present the mathematical framework of a stochastic functional–structural model (GL2) and its parameterization for Mongolian Scots pines, taking into account inter-plant variability in terms of topological development and biomass partitioning. Methods In GL2, plant organogenesis is determined by the realization of random variables representing the behaviour of axillary or apical buds. The associated probabilities are calibrated for Mongolian Scots pines using experimental data including means and variances of the numbers of organs per plant in each order-based class. The functional part of the model relies on the principles of source–sink regulation and is parameterized by direct observations of living trees and the inversion method using measured data for organ mass and dimensions. Key Results The final calibration accuracy satisfies both organogenetic and morphogenetic processes. Our hypothesis for the number of organs following a binomial distribution is found to be consistent with the real data. Based on the calibrated parameters, stochastic simulations of the growth of Mongolian Scots pines in plantations are generated by the Monte Carlo method, allowing analysis of the inter-individual variability of the number of organs and biomass partitioning. Three-dimensional (3D) architectures of young Mongolian Scots pines were simulated for 4-, 6- and 8-year-old trees

  8. Ivory Coast-Ghana margin: model of a transform margin

    SciTech Connect

    Mascle, J.; Blarez, E.

    1987-05-01

    The authors present a marine study of the eastern Ivory Coast-Ghana continental margin, which they consider one of the most spectacular extinct transform margins. This margin was created during Early-Lower Cretaceous time and has not undergone any major geodynamic reactivation since its formation. Based on this example, they propose four main successive stages in the evolution of a transform margin. Shearing contact is first active between two probably thick continental crusts and then between progressively thinning continental crusts. This leads to the creation of specific geological structures such as pull-apart grabens, elongated fault lineaments, major fault scarps, shear folds, and marginal ridges. After the final continental breakup, a hot center (the mid-oceanic ridge axis) progressively drifts along the newly created margin. The contact between two lithospheres of different nature necessarily induces, by thermal exchange, vertical crustal readjustments. Finally, the transform margin remains directly adjacent to a hot but cooling oceanic lithosphere; its subsidence behavior should then progressively become comparable to the thermal subsidence of classic rifted margins.

  9. ANALYSIS OF TERRESTRIAL PLANET FORMATION BY THE GRAND TACK MODEL: SYSTEM ARCHITECTURE AND TACK LOCATION

    SciTech Connect

    Brasser, R.; Ida, S.; Matsumura, S.; Mojzsis, S. J.; Werner, S. C.

    2016-04-20

    The Grand Tack model of terrestrial planet formation has emerged in recent years as the premier scenario used to account for several observed features of the inner solar system. It relies on the early migration of the giant planets to gravitationally sculpt and mix the planetesimal disk down to ∼1 au, after which the terrestrial planets accrete from material remaining in a narrow circumsolar annulus. Here, we investigate how the model fares under a range of initial conditions and migration course-change (“tack”) locations. We run a large number of N-body simulations with tack locations of 1.5 and 2 au and test initial conditions using equal-mass planetary embryos and a semi-analytical approach to oligarchic growth. We make use of a recent model of the protosolar disk that takes into account viscous heating, includes the full effect of type 1 migration, and employs a realistic mass–radius relation for the growing terrestrial planets. Our results show that the canonical tack location of Jupiter at 1.5 au is inconsistent with the most massive planet residing at 1 au at greater than 95% confidence. This favors a tack farther out at 2 au for the disk model and parameters employed. Of the different initial conditions, we find that the oligarchic case is capable of statistically reproducing the orbital architecture and mass distribution of the terrestrial planets, while the equal-mass embryo case is not.

  10. A Kinetic Vlasov Model for Plasma Simulation Using Discontinuous Galerkin Method on Many-Core Architectures

    NASA Astrophysics Data System (ADS)

    Reddell, Noah

    Advances are reported in the three pillars of computational science achieving a new capability for understanding dynamic plasma phenomena outside of local thermodynamic equilibrium. A continuum kinetic model for plasma based on the Vlasov-Maxwell system for multiple particle species is developed. Consideration is added for boundary conditions in a truncated velocity domain and supporting wall interactions. A scheme to scale the velocity domain for multiple particle species with different temperatures and particle mass while sharing one computational mesh is described. A method for assessing the degree to which the kinetic solution differs from a Maxwell-Boltzmann distribution is introduced and tested on a thoroughly studied test case. The discontinuous Galerkin numerical method is extended for efficient solution of hyperbolic conservation laws in five or more particle phase-space dimensions using tensor-product hypercube elements with arbitrary polynomial order. A scheme for velocity moment integration is incorporated as required for coupling between the plasma species and electromagnetic waves. A new high performance simulation code WARPM is developed to efficiently implement the model and numerical method on emerging many-core supercomputing architectures. WARPM uses the OpenCL programming model for computational kernels and task parallelism to overlap computation with communication. WARPM single-node performance and parallel scaling efficiency are analyzed, with bottlenecks identified to guide future directions for the implementation. The plasma modeling capability is validated against physical problems with analytic solutions and well established benchmark problems.

  11. Modeling halotropism: a key role for root tip architecture and reflux loop remodeling in redistributing auxin

    PubMed Central

    van den Berg, Thea; Korver, Ruud A.; Testerink, Christa

    2016-01-01

    A key characteristic of plant development is its plasticity in response to various and dynamically changing environmental conditions. Tropisms contribute to this flexibility by allowing plant organs to grow from or towards environmental cues. Halotropism is a recently described tropism in which plant roots bend away from salt. During halotropism, as in most other tropisms, directional growth is generated through an asymmetric auxin distribution that generates differences in growth rate and hence induces bending. Here, we develop a detailed model of auxin transport in the Arabidopsis root tip and combine this with experiments to investigate the processes generating auxin asymmetry during halotropism. Our model points to the key role of root tip architecture in allowing the decrease in PIN2 at the salt-exposed side of the root to result in a re-routing of auxin to the opposite side. In addition, our model demonstrates how feedback of auxin on the auxin transporter AUX1 amplifies this auxin asymmetry, while a salt-induced transient increase in PIN1 levels increases the speed at which this occurs. Using AUX1-GFP imaging and pin1 mutants, we experimentally confirmed these model predictions, thus expanding our knowledge of the cellular basis of halotropism. PMID:27510970

  12. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

    One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products which are generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it with the point cloud. The reflection intensity of each type of material was also analyzed, to determine which construction materials have the highest and which the lowest reflectance coefficients, and in turn how this variable changes for different scanning parameters. Additionally, measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions with those taken in field conditions.

  13. Modeling halotropism: a key role for root tip architecture and reflux loop remodeling in redistributing auxin.

    PubMed

    van den Berg, Thea; Korver, Ruud A; Testerink, Christa; Ten Tusscher, Kirsten H W J

    2016-09-15

    A key characteristic of plant development is its plasticity in response to various and dynamically changing environmental conditions. Tropisms contribute to this flexibility by allowing plant organs to grow from or towards environmental cues. Halotropism is a recently described tropism in which plant roots bend away from salt. During halotropism, as in most other tropisms, directional growth is generated through an asymmetric auxin distribution that generates differences in growth rate and hence induces bending. Here, we develop a detailed model of auxin transport in the Arabidopsis root tip and combine this with experiments to investigate the processes generating auxin asymmetry during halotropism. Our model points to the key role of root tip architecture in allowing the decrease in PIN2 at the salt-exposed side of the root to result in a re-routing of auxin to the opposite side. In addition, our model demonstrates how feedback of auxin on the auxin transporter AUX1 amplifies this auxin asymmetry, while a salt-induced transient increase in PIN1 levels increases the speed at which this occurs. Using AUX1-GFP imaging and pin1 mutants, we experimentally confirmed these model predictions, thus expanding our knowledge of the cellular basis of halotropism.

  14. Analysis of Terrestrial Planet Formation by the Grand Tack Model: System Architecture and Tack Location

    NASA Astrophysics Data System (ADS)

    Brasser, R.; Matsumura, S.; Ida, S.; Mojzsis, S. J.; Werner, S. C.

    2016-04-01

    The Grand Tack model of terrestrial planet formation has emerged in recent years as the premier scenario used to account for several observed features of the inner solar system. It relies on the early migration of the giant planets to gravitationally sculpt and mix the planetesimal disk down to ˜1 au, after which the terrestrial planets accrete from material remaining in a narrow circumsolar annulus. Here, we investigate how the model fares under a range of initial conditions and migration course-change (“tack”) locations. We run a large number of N-body simulations with tack locations of 1.5 and 2 au and test initial conditions using equal-mass planetary embryos and a semi-analytical approach to oligarchic growth. We make use of a recent model of the protosolar disk that takes into account viscous heating, includes the full effect of type 1 migration, and employs a realistic mass-radius relation for the growing terrestrial planets. Our results show that the canonical tack location of Jupiter at 1.5 au is inconsistent with the most massive planet residing at 1 au at greater than 95% confidence. This favors a tack farther out at 2 au for the disk model and parameters employed. Of the different initial conditions, we find that the oligarchic case is capable of statistically reproducing the orbital architecture and mass distribution of the terrestrial planets, while the equal-mass embryo case is not.

  15. Image-Based Modeling Techniques for Architectural Heritage 3d Digitalization: Limits and Potentialities

    NASA Astrophysics Data System (ADS)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collection to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS) and the different techniques of image matching, feature extraction and mesh optimization form an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on their computer, whereas desktop systems require long processing times and heavyweight workflows. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models produced by Autodesk 123D Catch with 3D models from terrestrial LiDAR, considering different object sizes, from details (capitals, mouldings, bases) to large-scale buildings, for practitioner purposes.

  16. Relevance and limitations of crowding, fractal, and polymer models to describe nuclear architecture.

    PubMed

    Huet, Sébastien; Lavelle, Christophe; Ranchon, Hubert; Carrivain, Pascal; Victor, Jean-Marc; Bancaud, Aurélien

    2014-01-01

    Chromosome architecture plays an essential role for all nuclear functions, and its physical description has attracted considerable interest over the last few years among the biophysics community. This research at the frontier of physics and biology has been stimulated by the demand for quantitative analysis of molecular biology experiments, which provide comprehensive data on chromosome folding, or of live cell imaging experiments that enable researchers to visualize selected chromosome loci in living or fixed cells. In this review, our goal is to survey several nonmutually exclusive models that have emerged to describe the folding of DNA in the nucleus, the dynamics of proteins in the nucleoplasm, or the movements of chromosome loci. We focus on three classes of models, namely molecular crowding, fractal, and polymer models, draw comparisons, and discuss their merits and limitations in the context of chromosome structure and dynamics, or nuclear protein navigation in the nucleoplasm. Finally, we identify future challenges in the roadmap to a unified model of the nuclear environment.

  17. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models.

    PubMed

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L; Huffman, Jennifer E; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F; Wilson, James F; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S

    2015-07-15

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge.
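    The contrast between a dense whole-genome predictor, a summary-statistic risk score, and a blended meta-model can be sketched on simulated data. The genotype simulation, ridge penalty, and marginal-effect "risk score" below are invented stand-ins, not the study's actual cohorts or GWAMA statistics:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated genotypes (n individuals x p markers) and a polygenic trait
# with many small effects, i.e. a "dense" genetic architecture.
n, p = 300, 500
X = rng.binomial(2, 0.3, (n, p)).astype(float)
beta = rng.normal(0, 0.05, p)
y = X @ beta + rng.normal(0, 1.0, n)

train, val = slice(0, 200), slice(200, 300)
Xt, yt = X[train], y[train]

# Dense predictor: ridge regression, closed form.
lam = 50.0
w_ridge = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)
pred_ridge = X[val] @ w_ridge

# Stand-in "risk score" built from marginal (GWAMA-like) effect estimates.
marginal = Xt.T @ (yt - yt.mean()) / len(yt)
pred_score = X[val] @ marginal

# Meta-model: least-squares blend of the two predictors (with intercept).
P = np.column_stack([np.ones(100), pred_ridge, pred_score])
w_meta = np.linalg.lstsq(P, y[val], rcond=None)[0]
pred_meta = P @ w_meta

r = {name: np.corrcoef(pred, y[val])[0, 1]
     for name, pred in [("ridge", pred_ridge), ("score", pred_score),
                        ("meta", pred_meta)]}
print({k: round(v, 2) for k, v in r.items()})
```

    Because the blend is fitted by least squares with an intercept, its correlation with the phenotype is never below that of either component predictor, which is the mechanism behind the meta-model's accuracy gain.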

  18. Influence of diffusive porosity architecture on kinetically-controlled reactions in mobile-immobile models

    NASA Astrophysics Data System (ADS)

    Babey, T.; Ginn, T. R.; De Dreuzy, J. R.

    2014-12-01

    Solute transport in porous media may be structured at various scales by geological features, from connectivity patterns of pores to fracture networks. This structure impacts solute distribution and consequently reactivity. Here we study numerically the influence of the organization of porous volumes within diffusive porosity zones on different reactions. We couple a mobile-immobile transport model, in which an advective zone exchanges with diffusive zones of variable structure, to the geochemical modeling software PHREEQC. We focus on two kinetically-controlled reactions, a linear sorption and a nonlinear dissolution of a mineral. We show that in both cases the structure of the immobile zones has an important impact on the overall reaction rates. Through the Multi-Rate Mass Transfer (MRMT) framework, we show that this impact is very well captured by residence-time-based models for the kinetic linear sorption, as it is mathematically equivalent to a modification of the initial diffusive structure; consequently, the overall reaction rate could easily be extrapolated from a conservative tracer experiment. The MRMT models, however, struggle to reproduce the non-linearity and the threshold effects associated with the kinetic dissolution. A slower reaction, by allowing more time for diffusion to smooth out the concentration gradients, tends to increase their relevance. Figure: Left: Representation of a mobile-immobile model with a complex immobile architecture. The mobile zone is indicated by an arrow. Right: Total remaining mass of mineral in mobile-immobile models and in their equivalent MRMT models during a flush by a highly under-saturated solution. The models differ only in the organization of their immobile porous volumes.
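The mobile-immobile exchange at the heart of the MRMT framework can be sketched as a small ODE system: one mobile concentration exchanging with several immobile zones, each with its own first-order rate, plus a linear kinetic sorption term. Rates and capacity ratios below are illustrative, not taken from the study.

```python
# Toy multirate mass-transfer (MRMT) sketch with linear kinetic sorption.
# Explicit Euler integration; total mass is conserved by construction.
import numpy as np

alphas = np.array([0.1, 0.01, 0.001])   # exchange rates (1/s), assumed
betas  = np.array([0.5, 0.3, 0.2])      # immobile/mobile capacity ratios
k_sorb = 0.005                          # linear sorption rate (1/s)

cm, cim, s = 1.0, np.zeros(3), 0.0      # mobile, immobile, sorbed mass
dt, nsteps = 0.1, 20000
for _ in range(nsteps):
    exch = alphas * (cm - cim)          # first-order exchange fluxes
    dcm = -np.sum(betas * exch) - k_sorb * cm
    cim = cim + dt * exch
    s += dt * k_sorb * cm
    cm += dt * dcm

total = cm + np.sum(betas * cim) + s    # conserved total mass
print(round(float(cm), 4), round(float(total), 4))
```

The spectrum of `alphas`/`betas` is exactly the "structure" the abstract refers to: for linear reactions a different immobile organization is equivalent to a different rate spectrum, which is why a conservative tracer suffices to calibrate it.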

  19. How Plates Pull Transforms Apart: 3-D Numerical Models of Oceanic Transform Fault Response to Changes in Plate Motion Direction

    NASA Astrophysics Data System (ADS)

    Morrow, T. A.; Mittelstaedt, E. L.; Olive, J. A. L.

    2015-12-01

    Observations along oceanic fracture zones suggest that some mid-ocean ridge transform faults (TFs) previously split into multiple strike-slip segments separated by short (<~50 km) intra-transform spreading centers and then reunited into a single TF trace. This history of segmentation appears to correspond with changes in plate motion direction. Despite the clear evidence of TF segmentation, the processes governing its development and evolution are not well characterized. Here we use a 3-D finite-difference/marker-in-cell technique to model the evolution of localized strain at a TF subjected to a sudden change in plate motion direction. We simulate the oceanic lithosphere and underlying asthenosphere at a ridge-transform-ridge setting using a visco-elastic-plastic rheology with a history-dependent plastic weakening law and a temperature- and stress-dependent mantle viscosity. To simulate the development of topography, a low density, low viscosity 'sticky air' layer is present above the oceanic lithosphere. The initial thermal gradient follows a half-space cooling solution with an offset across the TF. We impose an enhanced thermal diffusivity in the uppermost 6 km of lithosphere to simulate the effects of hydrothermal circulation. An initial weak seed in the lithosphere helps localize shear deformation between the two offset ridge axes to form a TF. For each model case, the simulation is run initially with TF-parallel plate motion until the thermal structure reaches a steady state. The direction of plate motion is then rotated either instantaneously or over a specified time period, placing the TF in a state of trans-tension. Model runs continue until the system reaches a new steady state. Parameters varied here include: initial TF length, spreading rate, and the rotation rate and magnitude of spreading obliquity. We compare our model predictions to structural observations at existing TFs and records of TF segmentation preserved in oceanic fracture zones.

  20. Modeling of a method of parallel hierarchical transformation for fast recognition of dynamic images

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid I.; Kokryatskaya, Nataliya I.; Shpakovych, Viktoriya V.

    2013-12-01

    Principles necessary to develop a method and computational facilities for the parallel hierarchical transformation based on high-performance GPUs are discussed in the paper. Mathematical models of the parallel hierarchical (PH) network training for the transformation and a PH network training method for recognition of dynamic images are developed.

  1. Rheology and friction along the Vema transform fault (Central Atlantic) inferred by thermal modeling

    NASA Astrophysics Data System (ADS)

    Cuffaro, Marco; Ligi, Marco

    2016-04-01

    We investigate with 3-D finite element simulations the temperature distribution beneath the Vema transform, which offsets the Mid-Atlantic Ridge by ~300 km in the Central Atlantic. The developed thermal model includes the effects of mantle flow beneath a ridge-transform-ridge geometry, of lateral heat conduction across the transform fault, and of the shear heating generated along the fault. Numerical solutions are presented for a 3-D domain, discretized with a non-uniform tetrahedral mesh, where relative plate kinematics is used as boundary condition, providing passive mantle upwelling. The mantle is modelled as a temperature-dependent viscous fluid, and its dynamics can be described by the Stokes and advection-conduction heat equations. The results show that shear heating raises significantly the temperature along the transform fault. In order to test model results, we calculated the thermal structure simulating the mantle dynamics beneath an accretionary plate boundary geometry that duplicates the Vema transform fault, assuming the present-day spreading rate and direction of the Mid Atlantic Ridge at 11 °N. Thus, the modelled heat flow at the surface has been compared with 23 heat flow measurements carried out along the Vema transform valley. Laboratory studies on the frictional stability of olivine aggregates show that the depth extent of oceanic faulting is thermally controlled and limited by the 600 °C isotherm. The depths of the model isotherms were compared to the depths of earthquakes along transform faults. Slip on oceanic transform faults is primarily aseismic; only 15% of the tectonic offset is accommodated by earthquakes. Despite extensive fault areas, few large earthquakes occur on the fault and few aftershocks follow large events. Rheology constrained by the thermal model, combined with geology and seismicity of the Vema transform fault, allows a better understanding of friction and the spatial distribution of strength along the fault and provides
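The 600 °C isotherm cited above as the thermal limit of oceanic faulting has a simple closed form under half-space cooling, T(z, t) = T_m · erf(z / (2√(κt))). A minimal sketch, using typical mantle values (not the study's parameters):

```python
# Depth of an isotherm under half-space cooling. erf is inverted by
# bisection so only the standard library is needed. T_m and kappa are
# typical assumed values, not taken from the Vema study.
from math import sqrt, erf

T_m = 1300.0        # mantle temperature (deg C), assumed
kappa = 1e-6        # thermal diffusivity (m^2/s), assumed
year = 3.156e7      # seconds per year

def isotherm_depth(T_iso, age_myr):
    """Depth (km) at which T(z) = T_iso for lithosphere of a given age."""
    t = age_myr * 1e6 * year
    L = 2.0 * sqrt(kappa * t)
    target, lo, hi = T_iso / T_m, 0.0, 3.0   # invert erf by bisection
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if erf(mid) < target:
            lo = mid
        else:
            hi = mid
    return L * 0.5 * (lo + hi) / 1e3

print(round(isotherm_depth(600.0, 10.0), 1))  # depth in km at 10 Myr
```

The √t dependence is why the seismogenic depth of a transform fault grows with the age of the lithosphere across it, and why an across-fault age offset produces the asymmetric thermal structure the 3-D model resolves.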

  2. Bäcklund transformations for the elliptic Gaudin model and a Clebsch system

    NASA Astrophysics Data System (ADS)

    Zullo, Federico

    2011-07-01

    A two-parameters family of Bäcklund transformations for the classical elliptic Gaudin model is constructed. The maps are explicit, symplectic, preserve the same integrals as for the continuous flows, and are a time discretization of each of these flows. The transformations can map real variables into real variables, sending physical solutions of the equations of motion into physical solutions. The starting point of the analysis is the integrability structure of the model. It is shown how the analogue transformations for the rational and trigonometric Gaudin model are a limiting case of this one. An application to a particular case of the Clebsch system is given.

  3. A micromechanics constitutive model for pure dilatant martensitic transformation of ZrO2-containing ceramics

    NASA Astrophysics Data System (ADS)

    Qingping, Sun; Shouwen, Yu; Kehchih, Hwang

    1990-05-01

    A new micromechanics constitutive model for pure dilatant transformation plasticity of structural ceramics is proposed in this paper. Based on thermodynamics, micromechanics, and analysis of the microscale t→m transformation mechanism in TZP and PSZ ZrO2-containing ceramics, analytic expressions for the Helmholtz and complementary free energy of the constitutive element are derived for the case of pure dilatant transformation, for the first time in a self-consistent manner. By analysis of the energy dissipation in the forward and reverse transformations, the micromechanics constitutive law is derived in the framework of Hill-Rice internal variable constitutive theory.

  4. Internet of Things: a possible change in the distributed modeling and simulation architecture paradigm

    NASA Astrophysics Data System (ADS)

    Riecken, Mark; Lessmann, Kurt; Schillero, David

    2016-05-01

    The Data Distribution Service (DDS) was started by the Object Management Group (OMG) in 2004. Currently, DDS is one of the contenders to support the Internet of Things (IoT) and the Industrial IoT (IIoT). DDS has also been used as a distributed simulation architecture. Given the anticipated proliferation of IoT and IIoT devices, along with the explosive growth of sensor technology, can we expect this to have an impact on the broader community of distributed simulation? If it does, what is the impact and which distributed simulation domains will be most affected? DDS shares many of the goals and characteristics of distributed simulation, such as the need to support scale and an emphasis on Quality of Service (QoS) that can be tailored to meet the end user's needs. In addition, DDS has some built-in features, such as security, that are not present in traditional distributed simulation protocols. If the IoT and IIoT realize their potential, we predict a large base of technology to be built around this distributed data paradigm, much of which could be directly beneficial to the distributed M&S community. In this paper we compare some of the perceived gaps and shortfalls of current distributed M&S technology to the emerging capabilities of DDS built around the IoT. Although some trial work has been conducted in this area, we propose a more focused examination of the potential of these new technologies and their applicability to current and future problems in distributed M&S. The Internet of Things (IoT) and its data communications mechanisms such as DDS share properties in common with distributed modeling and simulation (M&S) and its protocols such as the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA). This paper proposes a framework based on the sensor use case for how the two communities of practice (CoP) can benefit from one another and achieve greater capability in practical distributed

  5. A new model for the initiation, crustal architecture, and extinction of pull-apart basins

    NASA Astrophysics Data System (ADS)

    van Wijk, J.; Axen, G. J.; Abera, R.

    2015-12-01

    We present a new model for the origin, crustal architecture, and evolution of pull-apart basins. The model is based on results of three-dimensional upper crustal numerical models of deformation, field observations, and fault theory, and answers many of the outstanding questions related to these rifts. In our model, geometric differences between pull-apart basins are inherited from the initial geometry of the strike-slip fault step, which results from the early geometry of the strike-slip fault system. As strike-slip motion accumulates, pull-apart basins are stationary with respect to underlying basement and the fault tips may propagate beyond the rift basin. Our model predicts that the sediment source areas may thus migrate over time. This implies that, although pull-apart basins lengthen over time, lengthening is accommodated by extension within the pull-apart basin rather than by formation of new faults outside of the rift zone. In this aspect pull-apart basins behave as narrow rifts: with increasing strike-slip the basins deepen but there is no significant younging outward. We explain why pull-apart basins do not go through previously proposed geometric evolutionary stages, which have not been documented in nature. Field studies indicate that pull-apart basins become extinct when an active basin-crossing fault forms; this is the most likely fate of pull-apart basins, because strike-slip systems tend to straighten. The model predicts which step dimensions favor the formation of such a fault system, and which allow a pull-apart basin to develop further into a short seafloor-spreading ridge. The model also shows that rift shoulder uplift is enhanced if the strike-slip rate is larger than the fault-propagation rate. Crustal compression then contributes to uplift of the rift flanks.

  6. Phase field modeling of tetragonal to monoclinic phase transformation in zirconia

    NASA Astrophysics Data System (ADS)

    Mamivand, Mahmood

    Zirconia based ceramics are strong, hard, inert, and smooth, with low thermal conductivity and good biocompatibility. Such properties made zirconia ceramics an ideal material for different applications, from thermal barrier coatings (TBCs) to biomedical applications like femoral implants and dental bridges. However, this unusual versatility of excellent properties is compromised by transformation of the metastable tetragonal (or cubic) phase to the stable monoclinic phase after a certain exposure at service temperatures. This transformation from tetragonal to monoclinic, known as LTD (low temperature degradation) in biomedical applications, proceeds by propagation of martensite, which corresponds to transformation twinning. As such, tetragonal to monoclinic transformation is highly sensitive to mechanical and chemomechanical stresses. It is known in fact that this transformation is the source of the fracture toughening in stabilized zirconia, as it occurs at the stress concentration regions ahead of the crack tip. This dissertation is an attempt to provide a kinetic-based model for tetragonal to monoclinic transformation in zirconia. We used the phase field technique to capture the temporal and spatial evolution of the monoclinic phase. In addition to morphological patterns, we were able to calculate the internal stresses developed during tetragonal to monoclinic transformation. The model started from the two-dimensional single crystal, was then expanded to the two-dimensional polycrystal, and finally to the three-dimensional single crystal. The model is able to predict the most physical properties associated with tetragonal to monoclinic transformation in zirconia, including: morphological patterns, transformation toughening, shape memory effect, pseudoelasticity, surface uplift, and variant impingement. The model was benchmarked against several experimental works. The good agreement between simulation results and experimental data makes the model a reliable tool for

  7. Models to evaluate magnicon architectures and designs suitable for high-perveance beams

    SciTech Connect

    Rees, Daniel E.

    1994-03-01

    The magnicon, a new high-power, radio frequency (rf) deflection-modulated amplifier, was recently developed at the Institute for Nuclear Physics in Novosibirsk, Russia. The first magnicon achieved a peak output power of 2.6 MW for 50-μs pulses at a frequency of 915 MHz with a dc-to-rf conversion efficiency of 73%. The conversion efficiency achieved by the original magnicon represents a significant improvement over state-of-the-art conventional velocity- and density-modulated devices. Therefore, if properly exploited, the magnicon could substantially reduce the operating expenses of industrial, scientific, and military facilities that require large amounts of rf power. This dissertation describes the operational principles of the magnicon, provides small-signal analytical theory (where practical), presents a large-signal numerical model to characterize magnicon performance, and then utilizes this model to investigate the characteristics of the component magnicon structures. Using these modeling tools, the first-generation magnicon architecture is analyzed for its performance sensitivity to electron-beam size and is found to support beams of only limited diameter. Finally, an alternate magnicon geometry, called a "uniform-field" magnicon, is presented and shown to support beams of larger diameter.

  8. GS3: A Knowledge Management Architecture for Collaborative Geologic Sequestration Modeling

    SciTech Connect

    Gorton, Ian; Black, Gary D.; Schuchardt, Karen L.; Sivaramakrishnan, Chandrika; Wurstner, Signe K.; Hui, Peter SY

    2010-01-10

    Modern scientific enterprises are inherently knowledge-intensive. In general, scientific studies in domains such as groundwater, climate, and other environmental modeling, as well as fundamental research in chemistry, physics, and biology, require the acquisition and manipulation of large amounts of experimental and field data in order to create inputs for large-scale computational simulations. The results of these simulations must then be analyzed, leading to refinements of inputs and models and further simulations. In this paper we describe our efforts in creating a knowledge management platform to support collaborative, wide-scale studies in the area of geologic sequestration. The platform, known as GS3 (Geologic Sequestration Software Suite), exploits and integrates off-the-shelf software components including semantic wikis, content management systems, and open source middleware to create the core architecture. We then extend the wiki environment to support the capture of provenance, the ability to incorporate various analysis tools, and the ability to launch simulations on supercomputers. The paper describes the key components of GS3 and demonstrates its use through illustrative examples. We conclude by assessing the suitability of our approach for geologic sequestration modeling and generalization to other scientific problem domains.

  9. Designing a Component-Based Architecture for the Modeling and Simulation of Nuclear Fuels and Reactors

    SciTech Connect

    Billings, Jay Jay; Elwasif, Wael R; Hively, Lee M; Bernholdt, David E; Hetrick III, John M; Bohn, Tim T

    2009-01-01

    Concerns over the environment and energy security have recently prompted renewed interest in the U.S. in nuclear energy. Recognizing this, the U.S. Dept. of Energy has launched an initiative to revamp and modernize the role that modeling and simulation plays in the development and operation of nuclear facilities. This Nuclear Energy Advanced Modeling and Simulation (NEAMS) program represents a major investment in the development of new software, with one or more large multi-scale multi-physics capabilities in each of four technical areas associated with the nuclear fuel cycle, as well as additional supporting developments. In conjunction with this, we are designing a software architecture, computational environment, and component framework to integrate the NEAMS technical capabilities and make them more accessible to users. In this report of work very much in progress, we lay out the 'problem' we are addressing, describe the model-driven system design approach we are using, and compare them with several large-scale technical software initiatives from the past. We discuss how component technology may be uniquely positioned to address the software integration challenges of the NEAMS program, outline the capabilities planned for the NEAMS computational environment and framework, and describe some initial prototyping activities.

  10. Systems modeling of space medical support architecture: topological mapping of high level characteristics and constraints.

    PubMed

    Musson, David M; Doyle, Thomas E; Saary, Joan

    2012-01-01

    The challenges associated with providing medical support to astronauts on long duration lunar or planetary missions are significant. Experience to date in space has included short duration missions to the lunar surface and both short and long duration stays on board spacecraft and space stations in low Earth orbit. Live actor, terrestrial analogue setting simulation provides a means of studying multiple aspects of the medical challenges of exploration class space missions, though few if any published models exist upon which to construct systems-simulation test beds. Current proposed and projected moon mission scenarios were analyzed from a systems perspective to construct such a model. A resulting topological mapping of high-level architecture for a reference lunar mission with presumed EVA excursion and international mission partners is presented. High-level descriptions of crew operational autonomy, medical support related to crew-member status, and communication characteristics within and between multiple teams are presented. It is hoped this modeling will help guide future efforts to simulate medical support operations for research purposes, such as in the use of live actor simulations in terrestrial analogue environments.

  11. The fractal globule as a model of chromatin architecture in the cell.

    PubMed

    Mirny, Leonid A

    2011-01-01

    The fractal globule is a compact polymer state that emerges during polymer condensation as a result of topological constraints which prevent one region of the chain from passing across another one. This long-lived intermediate state was introduced in 1988 (Grosberg et al. 1988) and has not been observed in experiments or simulations until recently (Lieberman-Aiden et al. 2009). Recent characterization of human chromatin using a novel chromosome conformational capture technique brought the fractal globule into the spotlight as a structural model of human chromosome on the scale of up to 10 Mb (Lieberman-Aiden et al. 2009). Here, we present the concept of the fractal globule, comparing it to other states of a polymer and focusing on its properties relevant for the biophysics of chromatin. We then discuss properties of the fractal globule that make it an attractive model for chromatin organization inside a cell. Next, we connect the fractal globule to recent studies that emphasize topological constraints as a primary factor driving formation of chromosomal territories. We discuss how theoretical predictions, made on the basis of the fractal globule model, can be tested experimentally. Finally, we discuss whether fractal globule architecture can be relevant for chromatin packing in other organisms such as yeast and bacteria.

  12. Effects of Practice on Task Architecture: Combined Evidence from Interference Experiments and Random-Walk Models of Decision Making

    ERIC Educational Resources Information Center

    Kamienkowski, Juan E.; Pashler, Harold; Dehaene, Stanislas; Sigman, Mariano

    2011-01-01

    Does extensive practice reduce or eliminate central interference in dual-task processing? We explored the reorganization of task architecture with practice by combining interference analysis (delays in dual-task experiment) and random-walk models of decision making (measuring the decision and non-decision contributions to RT). The main delay…
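The random-walk model of decision making referred to above can be sketched in a few lines: noisy evidence accumulates to a bound (the decision component of RT), and a fixed non-decision time is added. All parameter values are illustrative, not the study's fits.

```python
# Minimal random-walk (drift-diffusion style) model of reaction time:
# evidence accumulates with drift plus Gaussian noise until a bound is
# reached; a constant non-decision time is then added.
import random

def simulate_rt(drift=0.1, bound=10.0, noise=1.0,
                non_decision=300.0, dt=1.0):
    """Return one simulated reaction time in ms (assumed units)."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return non_decision + t

random.seed(1)
rts = [simulate_rt() for _ in range(200)]
print("mean RT (ms):", round(sum(rts) / len(rts), 1))
```

Fitting such a model to single- and dual-task RT distributions is what lets one attribute practice effects separately to the decision (drift/bound) and non-decision components.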

  13. A Model-Based Study of On-Board Data Processing Architecture for Balloon-Borne Aurora Observation

    NASA Technical Reports Server (NTRS)

    Lim, Chester

    2011-01-01

    This paper discusses an application of the ISAAC design methodology to a balloon-borne payload electronic system for aurora observation. The methodology is composed of two phases, high-level design and low-level implementation; the focus of this paper is on the high-level design. This paper puts the system architecture in the context of a balloon-based application, but it can be generalized to any airborne/space-borne application. The system architecture includes a front-end detector, its corresponding data processing unit, and a controller. VisualSim has been used to perform modeling and simulations to explore the entire design space, finding optimal solutions that meet system requirements.

  14. Semantic Web-Driven LMS Architecture towards a Holistic Learning Process Model Focused on Personalization

    ERIC Educational Resources Information Center

    Kerkiri, Tania

    2010-01-01

    A comprehensive presentation is here made on the modular architecture of an e-learning platform with a distinctive emphasis on content personalization, combining advantages from semantic web technology, collaborative filtering and recommendation systems. Modules of this architecture handle information about both the domain-specific didactic…

  15. Model Transformation for a System of Systems Dependability Safety Case

    NASA Technical Reports Server (NTRS)

    Murphy, Judy; Driskell, Stephen B.

    2010-01-01

    Software plays an increasingly large role in all aspects of NASA's science missions. This has been extended to the identification, management, and control of faults which affect safety-critical functions and, by default, the overall success of the mission. Traditionally, the analysis of fault identification, management, and control is hardware based. Due to the increasing complexity of systems, there has been a corresponding increase in the complexity of fault management software. The NASA Independent Validation & Verification (IV&V) program is creating processes and procedures to identify and incorporate safety-critical software requirements along with corresponding software faults so that potential hazards may be mitigated. This "Specific to Generic ... A Case for Reuse" paper describes the phases of a dependability and safety study which identifies a new process to create a foundation for reusable assets. These assets support the identification and management of specific software faults and their transformation from specific to generic software faults. This approach also has applications to other systems outside of the NASA environment. This paper addresses how a mission-specific dependability and safety case is being transformed to a generic dependability and safety case which can be reused for any type of space mission, with an emphasis on software fault conditions.

  16. B-Transform and Its Application to a Fish-Hyacinth Model

    ERIC Educational Resources Information Center

    Oyelami, B. O.; Ale, S. O.

    2002-01-01

    A new transform proposed by Oyelami and Ale for impulsive systems is applied to an impulsive fish-hyacinth model. A biological policy regarding the growth of the fish and the hyacinth populations is formulated.

  17. An Approach for Detecting Inconsistencies between Behavioral Models of the Software Architecture and the Code

    SciTech Connect

    Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir

    2012-07-16

    In practice, inconsistencies between architectural documentation and the code might arise due to improper implementation of the architecture or the separate, uncontrolled evolution of the code. Several approaches have been proposed to detect the inconsistencies between the architecture and the code but these tend to be limited for capturing inconsistencies that might occur at runtime. We present a runtime verification approach for detecting inconsistencies between the dynamic behavior of the architecture and the actual code. The approach is supported by a set of tools that implement the architecture and the code patterns in Prolog, and support the automatic generation of runtime monitors for detecting inconsistencies. We illustrate the approach and the toolset for a Crisis Management System case study.

  18. Quantitative structure-activity relationship models of chemical transformations from matched pairs analyses.

    PubMed

    Beck, Jeremy M; Springer, Clayton

    2014-04-28

    The concepts of activity cliffs and matched molecular pairs (MMP) are recent paradigms for analysis of data sets to identify structural changes that may be used to modify the potency of lead molecules in drug discovery projects. Analysis of MMPs was recently demonstrated as a feasible technique for quantitative structure-activity relationship (QSAR) modeling of prospective compounds. Within a small data set, however, the lack of matched pairs and the lack of knowledge about specific chemical transformations limit prospective applications. Here we present an alternative technique that determines pairwise descriptors for each matched pair and then uses a QSAR model to estimate the activity change associated with a chemical transformation. The descriptors effectively group similar transformations and incorporate information about the transformation and its local environment. Use of a transformation QSAR model allows one to estimate the activity change for novel transformations and therefore returns predictions for a larger fraction of test set compounds. Application of the proposed methodology to four public data sets results in increased model performance over a benchmark random forest and direct application of chemical transformations using QSAR-by-matched molecular pairs analysis (QSAR-by-MMPA).
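The core move in the transformation-QSAR approach — describing each matched pair by the difference of its descriptors and regressing the activity change — can be sketched as below. The descriptors, the linear ground truth, and all sizes are synthetic stand-ins, not the paper's data or descriptor set.

```python
# Sketch of transformation QSAR: pairwise (difference) descriptors for
# each matched molecular pair, regressed against the activity change
# with a random forest. Entirely synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_pairs, n_desc = 300, 20
desc_a = rng.normal(size=(n_pairs, n_desc))   # descriptors of compound A
desc_b = rng.normal(size=(n_pairs, n_desc))   # descriptors of compound B
w = rng.normal(size=n_desc)                   # hidden "true" effect
delta_act = (desc_b - desc_a) @ w + rng.normal(0, 0.1, n_pairs)

pair_desc = desc_b - desc_a                   # pairwise descriptor
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(pair_desc[:200], delta_act[:200])
r2 = model.score(pair_desc[200:], delta_act[200:])
print("held-out R^2:", round(r2, 2))
```

Because the model is trained on descriptor differences rather than exact transformations, it can score a transformation it has never seen, which is what lets it cover a larger fraction of a test set than literal MMP lookup.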

  19. Free energy functionals for efficient phase field crystal modeling of structural phase transformations.

    PubMed

    Greenwood, Michael; Provatas, Nikolas; Rottler, Jörg

    2010-07-23

    The phase field crystal (PFC) method is a promising technique for modeling materials with atomic resolution on mesoscopic time scales. While numerically more efficient than classical density functional theory (CDFT), its single mode free energy limits the complexity of structural transformations that can be simulated. We introduce a new PFC model inspired by CDFT, which uses a systematic construction of two-particle correlation functions that allows for a broad class of structural transformations. Our approach considers planar spacings, lattice symmetries, planar atomic densities, and atomic vibrational amplitudes in the unit cell, and parameterizes temperature and anisotropic surface energies. The power of our approach is demonstrated by two examples of structural phase transformations.

  20. Modeling along-axis variations in fault architecture in the Main Ethiopian Rift: Implications for Nubia-Somalia kinematics

    NASA Astrophysics Data System (ADS)

    Erbello, Asfaw; Corti, Giacomo; Agostini, Andrea; Sani, Federico; Kidane, Tesfaye; Buccianti, Antonella

    2016-12-01

    In this contribution, analogue modeling is used to provide new insights into the Nubia-Somalia kinematics responsible for development and evolution of the Main Ethiopian Rift (MER), at the northern termination of the East African Rift system. In particular, we performed new crustal-scale, brittle models to analyze the along-strike variations in fault architecture in the MER and their relations with the rift trend, plate motion, and the resulting Miocene-recent kinematics of rifting. The models reproduced the overall geometry of the ∼600 km-long MER with its along-strike variation in orientation to test different hypotheses proposed to explain rift evolution. Analysis of model results in terms of statistics of fault length and orientation, as well as deformation architecture, and its comparison with the MER suggests that models of two-phase rifting (with a first phase of NW-SE extension followed by E-W rifting) or constant NW-SE extension, as well as models of constant ENE-WSW rifting, are not able to reproduce the fault architecture observed in nature. Model results suggest instead that the rift has likely developed under a constant, post-11 Ma extension oriented roughly ESE-WNW (N97.5°E), consistent with recent plate kinematics models.

  1. L1 Adaptive Control Law in Support of Large Flight Envelope Modeling Work

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.; Xargay, Enric; Cao, Chengyu; Hovakimyan, Naira

    2011-01-01

    This paper presents results of a flight test of the L1 adaptive control architecture designed to directly compensate for significant uncertain cross-coupling in nonlinear systems. The flight test was conducted on the subscale turbine powered Generic Transport Model that is an integral part of the Airborne Subscale Transport Aircraft Research system at the NASA Langley Research Center. The results presented are in support of nonlinear aerodynamic modeling and instrumentation calibration.

  2. Performance of linear and nonlinear texture measures in 2D and 3D for monitoring architectural changes in osteoporosis using computer-generated models of trabecular bone

    NASA Astrophysics Data System (ADS)

    Boehm, Holger F.; Link, Thomas M.; Monetti, Roberto A.; Mueller, Dirk; Rummeny, Ernst J.; Raeth, Christoph W.

    2005-04-01

    Osteoporosis is a metabolic bone disease leading to de-mineralization and increased risk of fracture. The two major factors that determine the biomechanical competence of bone are the degree of mineralization and the micro-architectural integrity. Today, modern imaging modalities (high resolution MRI, micro-CT) are capable of depicting structural details of trabecular bone tissue. From the image data, structural properties obtained by quantitative measures are analysed with respect to the presence of osteoporotic fractures of the spine (in-vivo) or correlated with biomechanical strength as derived from destructive testing (in-vitro). Fairly well established are linear structural measures in 2D that are originally adopted from standard histo-morphometry. Recently, non-linear techniques in 2D and 3D based on the scaling index method (SIM), the standard Hough transform (SHT), and the Minkowski Functionals (MF) have been introduced, which show excellent performance in predicting bone strength and fracture risk. However, little is known about the performance of the various parameters with respect to monitoring structural changes due to progression of osteoporosis or as a result of medical treatment. In this contribution, we generate models of trabecular bone with pre-defined structural properties which are exposed to simulated osteoclastic activity. We apply linear and non-linear texture measures to the models and analyse their performance with respect to detecting architectural changes. This study demonstrates that the texture measures are capable of monitoring structural changes of complex model data. The diagnostic potential varies for the different parameters and is found to depend on the topological composition of the model and initial "bone density". In our models, non-linear texture measures tend to react more sensitively to small structural changes than linear measures.
Best performance is observed for the 3rd and 4th Minkowski Functionals and for the scaling

  3. Classifier models and architectures for EEG-based neonatal seizure detection.

    PubMed

    Greene, B R; Marnane, W P; Lightbody, G; Reilly, R B; Boylan, G B

    2008-10-01

    Neonatal seizures are the most common neurological emergency in the neonatal period and are associated with a poor long-term outcome. Early detection and treatment may improve prognosis. This paper aims to develop an optimal set of parameters and a comprehensive scheme for patient-independent multi-channel EEG-based neonatal seizure detection. We employed a dataset containing 411 neonatal seizures. The dataset consists of multi-channel EEG recordings with a mean duration of 14.8 h from 17 neonatal patients. Early-integration and late-integration classifier architectures were considered for the combination of information across EEG channels. Three classifier models based on linear discriminants, quadratic discriminants and regularized discriminants were employed. Furthermore, the effect of electrode montage was considered. The best performing seizure detection system was found to be an early integration configuration employing a regularized discriminant classifier model. A referential EEG montage was found to outperform the more standard bipolar electrode montage for automated neonatal seizure detection. A cross-fold validation estimate of the classifier performance for the best performing system yielded 81.03% of seizures correctly detected with a false detection rate of 3.82%. With post-processing, the false detection rate was reduced to 1.30% with 59.49% of seizures correctly detected. These results represent a comprehensive illustration that robust reliable patient-independent neonatal seizure detection is possible using multi-channel EEG.
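The best-performing configuration above (early integration with a regularized discriminant classifier) can be sketched in a few lines: per-channel EEG features are concatenated into one vector per epoch, and each class covariance is shrunk toward the pooled covariance. Everything below (the synthetic "features", the shrinkage weight `lam`, the feature layout) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def fit_rda(X, y, lam=0.5):
    """Regularized discriminant analysis: each class covariance is shrunk
    toward the pooled covariance, S_k(lam) = (1-lam)*S_k + lam*S_pooled."""
    classes = np.unique(y)
    pooled = np.cov(X, rowvar=False, bias=True)
    means, covs, priors = [], [], []
    for c in classes:
        Xc = X[y == c]
        means.append(Xc.mean(axis=0))
        covs.append((1 - lam) * np.cov(Xc, rowvar=False, bias=True) + lam * pooled)
        priors.append(len(Xc) / len(X))
    return classes, means, covs, priors

def predict_rda(model, X):
    """Assign each row of X to the class with the highest Gaussian score."""
    classes, means, covs, priors = model
    scores = []
    for m, S, p in zip(means, covs, priors):
        diff = X - m
        maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S), diff)
        scores.append(-0.5 * (maha + np.linalg.slogdet(S)[1]) + np.log(p))
    return classes[np.argmax(scores, axis=0)]

# early integration: features from all channels concatenated per epoch
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 4)),    # non-seizure epochs
               rng.normal(3, 1, (100, 4))])   # seizure epochs
y = np.array([0] * 100 + [1] * 100)
acc = (predict_rda(fit_rda(X, y), X) == y).mean()
```

On well-separated synthetic classes the classifier recovers the labels almost perfectly; on real EEG the shrinkage weight would be tuned by cross-validation.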

  4. Model-Driven Methodology for Rapid Deployment of Smart Spaces Based on Resource-Oriented Architectures

    PubMed Central

    Corredor, Iván; Bernardos, Ana M.; Iglesias, Josué; Casar, José R.

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym. PMID:23012544

  5. Model-driven methodology for rapid deployment of smart spaces based on resource-oriented architectures.

    PubMed

    Corredor, Iván; Bernardos, Ana M; Iglesias, Josué; Casar, José R

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.

  6. Application and project portfolio valuation using enterprise architecture and business requirements modelling

    NASA Astrophysics Data System (ADS)

    Quartel, Dick; Steen, Maarten W. A.; Lankhorst, Marc M.

    2012-05-01

    This article describes an architecture-based approach to IT valuation. This approach offers organisations an instrument to valuate their application and project portfolios and to make well-balanced decisions about IT investments. The value of a software application is assessed in terms of its contribution to a selection of business goals. Based on such assessments, the value of different applications can be compared, and requirements for innovation, development, maintenance and phasing out can be identified. IT projects are proposed to realise the requirements. The value of each project is assessed in terms of the value it adds to one or more applications. This value can be obtained by relating the 'as-is' application portfolio to the 'to-be' portfolio that is being proposed by the project portfolio. In this way, projects can be ranked according to their added value, given a certain selection of business goals. The approach uses ArchiMate to model the relationship between software applications, business processes, services and products. In addition, two language extensions are used to model the relationship of these elements to business goals and requirements and to projects and project portfolios. The approach is illustrated using the portfolio method of Bedell and has been implemented in BiZZdesign Architect.
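The valuation scheme described (application value as a weighted contribution to business goals, project value as the difference between the "to-be" and "as-is" portfolios) reduces to simple arithmetic. The goals, weights and scores below are invented for illustration and have no connection to ArchiMate or Bedell's method beyond the general idea:

```python
# hypothetical business goals with illustrative weights (sum to 1)
goal_weights = {"cut_costs": 0.5, "improve_service": 0.3, "compliance": 0.2}

def app_value(contributions):
    """Application value = weighted sum of its goal contributions (0-10)."""
    return sum(goal_weights[g] * s for g, s in contributions.items())

def portfolio_value(apps):
    """Portfolio value = sum of the values of its applications."""
    return sum(app_value(c) for c in apps.values())

as_is = {"CRM": {"cut_costs": 2, "improve_service": 6, "compliance": 4},
         "ERP": {"cut_costs": 5, "improve_service": 3, "compliance": 7}}
# 'to-be' portfolio after a hypothetical project that upgrades the CRM
to_be = {"CRM": {"cut_costs": 4, "improve_service": 8, "compliance": 4},
         "ERP": as_is["ERP"]}

# the project's added value, used to rank it against other projects
project_added_value = portfolio_value(to_be) - portfolio_value(as_is)
```

Ranking then amounts to sorting candidate projects by their added value under the chosen goal weights.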

  7. Lithospheric architecture of the Levant Basin (Eastern Mediterranean region): A 2D modeling approach

    NASA Astrophysics Data System (ADS)

    Inati, Lama; Zeyen, Hermann; Nader, Fadi Henri; Adelinet, Mathilde; Sursock, Alexandre; Rahhal, Muhsin Elie; Roure, François

    2016-12-01

    This paper discusses the deep structure of the lithosphere underlying the easternmost Mediterranean region, in particular the Levant Basin and its margins, where the nature of the crust, continental versus oceanic, remains debated. Crustal thickness and the depth of the lithosphere-asthenosphere boundary (LAB) as well as the crustal density distribution were calculated by integrating surface heat flow data, free-air gravity anomaly, geoid and topography. Accordingly, two-dimensional, lithospheric models of the study area are discussed, demonstrating the presence of a progressively attenuated crystalline crust from E to W (average thickness from 35 to 8 km). The crystalline crust is best interpreted as a strongly thinned continental crust under the Levant Basin, represented by two distinct components, an upper and a lower crust. Further to the west, the Herodotus Basin is believed to be underlain by an oceanic crust, with a thickness between 6 and 10 km. The Moho under the Arabian Plate is 35-40 km deep and becomes shallower towards the Mediterranean coast. It appears to be situated at depths ranging between 20 and 23 km below the Levant Basin and 26 km beneath the Herodotus Basin, based on our proposed models. At the Levantine margin, the thinning of the crust in the transitional domain between the onshore and the offshore is gradual, indicating successive extensional regimes that did not reach the break-up stage. In addition, the depth to the LAB is around 120 km under the Arabian and the Eurasian Plates, 150 km under the Levant Basin, and it plunges to 180 km under the Herodotus Basin. This study shows that detailed 2D lithosphere modeling using integrated geophysical data can help understand the mechanisms responsible for the modelled lithospheric architecture when constrained with geological findings.
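The paper's 2-D models jointly invert heat flow, gravity, geoid and topography. As a far simpler point of reference, the textbook Airy isostasy relation links topography to crustal-root thickness under assumed densities. This is not the authors' method, just a first-order illustration of the density-topography coupling such models exploit; the densities are generic textbook values:

```python
def airy_root(elevation_m, rho_crust=2800.0, rho_mantle=3300.0):
    """Airy isostasy: a crustal root r compensates topography e when
    rho_crust * e = (rho_mantle - rho_crust) * r. Densities in kg/m^3
    are typical textbook values, not those of the paper's models."""
    return elevation_m * rho_crust / (rho_mantle - rho_crust)

# ~1 km of topography implies a root of roughly 5.6 km
root_m = airy_root(1000.0)
```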

  8. Mentoring Resulting in a New Model: Affect-Centered Transformational Leadership

    ERIC Educational Resources Information Center

    Moffett, David W.; Tejeda, Armando R.

    2014-01-01

    The authors were professor and student, in a doctoral leadership course, during fall semester of 2013-2014. Across the term the professor mentored the mentee, guiding him to the creation of the next, needed model for leadership. The new model, known as The Affect-Centered Transformational Leadership Model, came about as the result. Becoming an…

  9. Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives

    PubMed Central

    Wang, Weiping; He, Lei

    2014-01-01

    Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although very limitedly. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. The corresponding optimization problem is then formulated, and an efficient genetic algorithm with a refined crossover operator (GA-RX) is proposed to accomplish the architecting process iteratively in the rest of this paper. The availability and effectiveness of the overall method are validated using 2 simulations based on 2 different scenarios. PMID:25140328

  10. Hyperchannel architecture: a case study of some inadequacies in the ISO-OSI reference model

    SciTech Connect

    Nessett, D.M.

    1981-04-01

    A description of the Hyperchannel architecture is given in terms of the OSI-RM. In the process, a number of deficiencies are presented both in the OSI-RM and in the Hyperchannel architecture. Finally, some suggestions are made about how the Hyperchannel architecture could be changed to provide the basis of an I/O interface standard. Investigation of these suggestions should be a key area of research before an I/O interface standard based on a broadcast-bus is proposed.

  11. Modeling and optimization of multiple unmanned aerial vehicles system architecture alternatives.

    PubMed

    Qin, Dongliang; Li, Zhifei; Yang, Feng; Wang, Weiping; He, Lei

    2014-01-01

    Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although very limitedly. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. The corresponding optimization problem is then formulated, and an efficient genetic algorithm with a refined crossover operator (GA-RX) is proposed to accomplish the architecting process iteratively in the rest of this paper. The availability and effectiveness of the overall method are validated using 2 simulations based on 2 different scenarios.
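The architecting loop described above is driven by a genetic algorithm with a refined crossover operator (GA-RX); the refined operator itself is not given in the abstract, so the sketch below substitutes a standard one-point crossover on a toy bitstring objective. Population size, rates, and the fitness function are all illustrative assumptions:

```python
import random

def genetic_search(fitness, n_bits=20, pop_size=30, gens=60,
                   p_cross=0.9, p_mut=0.02, seed=42):
    """Toy genetic algorithm: tournament selection, one-point crossover,
    bit-flip mutation. Stands in for GA-RX, whose refined crossover is
    not described in the abstract."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return max(a, b, key=fitness)

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:
                cut = rng.randrange(1, n_bits)        # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy objective: maximize the number of 1-bits in the architecture string
best = genetic_search(sum)
```

In the paper's setting the bitstring would encode architecture alternatives (e.g., task-to-UAV assignments) and the fitness would score each candidate architecture.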

  12. A conceptual approach to approximate tree root architecture in infinite slope models

    NASA Astrophysics Data System (ADS)

    Schmaltz, Elmar; Glade, Thomas

    2016-04-01

    Vegetation-related properties - particularly tree root distribution and coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear to be difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main issues of the tree root architecture: 1) type of rooting; 2) maximum growing distance to the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems and the connected hydrological and mechanical attributes sufficiently in a 3-dimensional slope stability model. Here, a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. However, in this presentation, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that is dependent on age and species. We classified three main types of tree root systems and reproduced the species- and age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope ambience. Thus, two solids in a Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, where the dimension of the latter defines the shape of a taproot system or a shallow-root system respectively; ii) elliptic
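The first of the two solids (item i) reduces to a simple membership test: a point belongs to the root zone if it falls inside a cylinder of radius r and depth h below the stem base. A minimal sketch, with purely illustrative coordinates and parameters:

```python
import math

def in_cylindrical_root_zone(point, stem_base, r, h):
    """True if a 3-D point lies inside the cylinder approximating a
    root system: radius r around the vertical stem axis, extending to
    depth h below the stem base (z positive upward)."""
    px, py, pz = point
    sx, sy, sz = stem_base
    return math.hypot(px - sx, py - sy) <= r and (sz - h) <= pz <= sz

# shallow-root system: wide and thin; taproot system: narrow and deep
inside_shallow = in_cylindrical_root_zone((1.0, 0.5, -0.2), (0, 0, 0), r=3.0, h=0.5)
inside_taproot = in_cylindrical_root_zone((1.0, 0.5, -0.2), (0, 0, 0), r=0.3, h=4.0)
```

Per-cell root presence in a slope stability grid could then be computed by evaluating this test at each cell centre.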

  13. Modelling Transformations of Quadratic Functions: A Proposal of Inductive Inquiry

    ERIC Educational Resources Information Center

    Sokolowski, Andrzej

    2013-01-01

    This paper presents a study about using scientific simulations to enhance the process of mathematical modelling. The main component of the study is a lesson whose major objective is to have students mathematise a trajectory of a projected object and then apply the model to formulate other trajectories by using the properties of function…

  14. SEMIPARAMETRIC TRANSFORMATION MODELS WITH RANDOM EFFECTS FOR CLUSTERED FAILURE TIME DATA

    PubMed Central

    Zeng, Donglin; Lin, D. Y.; Lin, Xihong

    2009-01-01

    We propose a general class of semiparametric transformation models with random effects to formulate the effects of possibly time-dependent covariates on clustered or correlated failure times. This class encompasses all commonly used transformation models, including proportional hazards and proportional odds models, and it accommodates a variety of random-effects distributions, particularly Gaussian distributions. We show that the nonparametric maximum likelihood estimators of the model parameters are consistent, asymptotically normal and asymptotically efficient. We develop the corresponding likelihood-based inference procedures. Simulation studies demonstrate that the proposed methods perform well in practical situations. An illustration with a well-known diabetic retinopathy study is provided. PMID:19809573
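One common way to write the class of linear transformation models with cluster-level random effects described above (assuming, for simplicity, a scalar random intercept rather than the general random-effects vector) is:

```latex
% H: unspecified, strictly increasing transformation of the failure time
% T_{ki}: failure time of subject i in cluster k; b_k: cluster random effect
H(T_{ki}) = -\beta^{\top} Z_{ki} + b_k + \varepsilon_{ki}
% \varepsilon_{ki} extreme-value  \Rightarrow  proportional hazards model
% \varepsilon_{ki} standard logistic  \Rightarrow  proportional odds model
```

Choosing the error distribution recovers the named special cases, which is why the class is said to encompass all commonly used transformation models.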

  15. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

    The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  16. Temperature Control at DBS Electrodes using Heat Sink: Experimentally Validated FEM Model of DBS lead Architecture

    PubMed Central

    Elwassif, Maged M.; Datta, Abhishek; Rahman, Asif; Bikson, Marom

    2012-01-01

    There is a growing interest in the use of Deep Brain Stimulation for the treatment of medically refractory movement disorders and other neurological and psychiatric conditions. The extent of temperature increases around DBS electrodes during normal operation (joule heating and increased metabolic activity) or coupling with an external source (e.g. MRI) remains poorly understood and methods to mitigate temperature increases are being actively investigated. We developed a heat transfer finite element method simulation of DBS incorporating the realistic architecture of Medtronic 3389 leads. The temperature changes were analyzed considering different electrode configurations, stimulation protocols, and tissue properties. The heat-transfer model results were then validated using micro-thermocouple measurements during DBS lead stimulation in a saline bath. FEM results indicate that lead design (materials and geometry) may have a central role in controlling temperature rise by conducting heat. We show how modifying lead design can effectively control temperature increases. The robustness of this heat-sink approach over complementary heat-mitigation technologies follows from several features: 1) it is insensitive to the mechanisms of heating (e.g. nature of magnetic coupling); 2) does not interfere with device efficacy; and 3) can be practically implemented in a broad range of implanted devices without modifying the normal device operations or the implant procedure. PMID:22764359
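For scale, a back-of-envelope conduction estimate shows the order of magnitude involved: the steady-state temperature rise at distance r from a point heat source of power P in a homogeneous medium of conductivity k is P/(4·pi·k·r). This textbook relation is not the paper's FEM of the Medtronic 3389 lead, and the numbers below are illustrative assumptions:

```python
import math

def point_source_rise(power_w, k_w_mk, r_m):
    """Steady-state temperature rise (K) at distance r from a point
    heat source of power P in an infinite homogeneous medium with
    thermal conductivity k: dT = P / (4*pi*k*r)."""
    return power_w / (4.0 * math.pi * k_w_mk * r_m)

# 1 mW dissipated in tissue (k ~ 0.5 W/m/K), evaluated 1 mm away
dT = point_source_rise(1e-3, 0.5, 1e-3)
```

The FEM in the paper is needed precisely because real leads are neither point sources nor embedded in a homogeneous medium, and because lead materials conduct heat away.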

  17. When machine vision meets histology: A comparative evaluation of model architecture for classification of histology sections

    PubMed Central

    Zhong, Cheng; Han, Ju; Borowsky, Alexander; Parvin, Bahram; Wang, Yunfu; Chang, Hang

    2016-01-01

    Classification of histology sections in large cohorts, in terms of distinct regions of microanatomy (e.g., stromal) and histopathology (e.g., tumor, necrosis), enables the quantification of tumor composition, and the construction of predictive models of genomics and clinical outcome. To tackle the large technical variations and biological heterogeneities, which are intrinsic in large cohorts, emerging systems utilize either prior knowledge from pathologists or unsupervised feature learning for invariant representation of the underlying properties in the data. However, to a large degree, the architecture for tissue histology classification remains unexplored and requires urgent systematic investigation. This paper is the first attempt to provide insights into three fundamental questions in tissue histology classification: I. Is unsupervised feature learning preferable to human engineered features? II. Does cellular saliency help? III. Does the sparse feature encoder contribute to recognition? We show that (a) in I, both Cellular Morphometric Feature and features from unsupervised feature learning lead to superior performance when compared to SIFT and [Color, Texture]; (b) in II, cellular saliency incorporation impairs the performance for systems built upon pixel-/patch-level features; and (c) in III, the effect of the sparse feature encoder is correlated with the robustness of features, and the performance can be consistently improved by the multi-stage extension of systems built upon both Cellular Morphometric Feature and features from unsupervised feature learning. These insights are validated with two cohorts of Glioblastoma Multiforme (GBM) and Kidney Clear Cell Carcinoma (KIRC). PMID:27644083

  18. Parallel eigenanalysis of finite element models in a completely connected architecture

    NASA Technical Reports Server (NTRS)

    Akl, F. A.; Morel, M. R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi) = (M)(phi)(omega), where (K) and (M) are of order N, and (omega) is order of q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in parallel environment is analyzed.
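Stripped of its parallel decomposition, the underlying problem is the symmetric generalized eigenproblem (K)(phi) = (M)(phi)(omega). A serial sketch via Cholesky reduction (not the paper's multifrontal/modified subspace algorithm, which is designed to exploit the domain decomposition) is:

```python
import numpy as np

def generalized_eig(K, M):
    """Solve K phi = M phi omega for symmetric K and s.p.d. M by
    reducing to a standard symmetric eigenproblem via the Cholesky
    factor of M (M = L L^T)."""
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    A = Linv @ K @ Linv.T              # standard symmetric problem
    omega, Y = np.linalg.eigh(A)       # ascending eigenvalues
    Phi = Linv.T @ Y                   # back-transform eigenvectors
    return omega, Phi

# 2-DOF spring-mass check: eigenvalues are the squared natural frequencies
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
M = np.eye(2)
omega, Phi = generalized_eig(K, M)
```

Subspace methods such as the one in the paper avoid forming the full reduction, iterating instead on a low-dimensional subspace of dimension q << N.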

  19. An architecturally constrained model of random number generation and its application to modeling the effect of generation rate

    PubMed Central

    Sexton, Nicholas J.; Cooper, Richard P.

    2014-01-01

    Random number generation (RNG) is a complex cognitive task for human subjects, requiring deliberative control to avoid production of habitual, stereotyped sequences. Under various manipulations (e.g., speeded responding, transcranial magnetic stimulation, or neurological damage) the performance of human subjects deteriorates, as reflected in a number of qualitatively distinct, dissociable biases. For example, the intrusion of stereotyped behavior (e.g., counting) increases at faster rates of generation. Theoretical accounts of the task postulate that it requires the integrated operation of multiple, computationally heterogeneous cognitive control (“executive”) processes. We present a computational model of RNG, within the framework of a novel, neuropsychologically-inspired cognitive architecture, ESPro. Manipulating the rate of sequence generation in the model reproduced a number of key effects observed in empirical studies, including increasing sequence stereotypy at faster rates. Within the model, this was due to time limitations on the interaction of supervisory control processes, namely, task setting, proposal of responses, monitoring, and response inhibition. The model thus supports the fractionation of executive function into multiple, computationally heterogeneous processes. PMID:25071644
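A concrete way to quantify the counting stereotypy discussed above is to score the fraction of ±1 steps in a generated digit sequence. The metric below is an illustrative stand-in, not the specific bias index used in the paper:

```python
def counting_score(seq, n=10):
    """Fraction of adjacent pairs that are +/-1 steps modulo n, e.g.
    '3,4' or '7,6' when generating digits 0-9. Higher values indicate
    more counting-like, stereotyped behaviour."""
    steps = sum(1 for a, b in zip(seq, seq[1:]) if (b - a) % n in (1, n - 1))
    return steps / (len(seq) - 1)

stereotyped = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]   # pure counting
random_like = [3, 7, 1, 8, 4, 0, 6, 2, 9, 5]   # no +/-1 steps
```

Under the model's account, this score should rise as the generation rate increases and supervisory control has less time to inhibit habitual responses.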

  20. Evaluating radiative transfer schemes treatment of vegetation canopy architecture in land surface models

    NASA Astrophysics Data System (ADS)

    Braghiere, Renato; Quaife, Tristan; Black, Emily

    2016-04-01

    Incoming shortwave radiation is the primary source of energy driving the majority of the Earth's climate system. The partitioning of shortwave radiation by vegetation into absorbed, reflected, and transmitted terms is important for most of biogeophysical processes, including leaf temperature changes and photosynthesis, and it is currently calculated by most of land surface schemes (LSS) of climate and/or numerical weather prediction models. The most commonly used radiative transfer scheme in LSS is the two-stream approximation, however it does not explicitly account for vegetation architectural effects on shortwave radiation partitioning. Detailed three-dimensional (3D) canopy radiative transfer schemes have been developed, but they are too computationally expensive to address large-scale related studies over long time periods. Using a straightforward one-dimensional (1D) parameterisation proposed by Pinty et al. (2006), we modified a two-stream radiative transfer scheme by including a simple function of Sun zenith angle, so-called "structure factor", which does not require an explicit description and understanding of the complex phenomena arising from the presence of vegetation heterogeneous architecture, and it guarantees accurate simulations of the radiative balance consistently with 3D representations. In order to evaluate the ability of the proposed parameterisation in accurately represent the radiative balance of more complex 3D schemes, a comparison between the modified two-stream approximation with the "structure factor" parameterisation and state-of-art 3D radiative transfer schemes was conducted, following a set of virtual scenarios described in the RAMI4PILPS experiment. 
These experiments have been evaluating the radiative balance of several models under perfectly controlled conditions in order to eliminate uncertainties arising from an incomplete or erroneous knowledge of the structural, spectral and illumination related canopy characteristics typical
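The effect of a structure factor can be illustrated with the simplest possible radiation model: Beer-Lambert direct-beam transmission with an "effective" LAI. This toy sketch is not the modified two-stream scheme itself; the value G = 0.5 (spherical leaf-angle distribution) and all numbers are assumptions:

```python
import math

def direct_transmission(lai, sza_deg, structure_factor=1.0, G=0.5):
    """Beer-Lambert direct-beam canopy transmission with an effective
    LAI = structure_factor * LAI, mimicking how a structure factor folds
    canopy heterogeneity into a 1-D radiative transfer input. G is the
    leaf projection coefficient (0.5 = spherical distribution)."""
    mu = math.cos(math.radians(sza_deg))
    return math.exp(-G * structure_factor * lai / mu)

# a clumped canopy (factor < 1) transmits more than a homogeneous one
t_homog = direct_transmission(3.0, 30.0)
t_clumped = direct_transmission(3.0, 30.0, structure_factor=0.6)
```

The same effective-parameter idea lets a 1-D scheme match the radiative balance of a 3-D canopy without describing the 3-D structure explicitly.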

  1. Analysis of central enterprise architecture elements in models of six eHealth projects.

    PubMed

    Virkanen, Hannu; Mykkänen, Juha

    2014-01-01

    Large-scale initiatives for eHealth services have been established in many countries on regional or national level. The use of Enterprise Architecture has been suggested as a methodology to govern and support the initiation, specification and implementation of large-scale initiatives including the governance of business changes as well as information technology. This study reports an analysis of six health IT projects in relation to Enterprise Architecture elements, focusing on central EA elements and viewpoints in different projects.

  2. Industry Standard Notation for Architecture-Centric Model-Based Engineering

    DTIC Science & Technology

    2010-01-20

    Bruce Lewis (AMRDEC SED) served as subcommittee chair, with Peter Feiler (SEI) as technical lead and author of the language. AADL incorporates concepts...underlying runtime system [ Feiler 09]. A number of recent studies have identified this problem and recommended a paradigm shift towards architecture-centric...AVSI 09] Feiler P. H., Hansson J., de Niz D., & Wrage L. System Architecture Virtual Integration: An Industrial Case Study (CMU/SEI-2009-TR-017

  3. A Functional and Structural Mongolian Scots Pine (Pinus sylvestris var. mongolica) Model Integrating Architecture, Biomass and Effects of Precipitation

    PubMed Central

    Wang, Feng; Letort, Véronique; Lu, Qi; Bai, Xuefeng; Guo, Yan; de Reffye, Philippe; Li, Baoguo

    2012-01-01

    Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal tree species in the network of the Three-North Shelterbelt for windbreak and sand stabilisation in China. The functions of shelterbelts are highly correlated with the architecture and eco-physiological processes of individual trees. Thus, model-assisted analysis of canopy architecture and functional dynamics in Mongolian Scots pine is of value for better understanding its role and behaviour within shelterbelt ecosystems in these arid and semiarid regions. We present here a single-tree functional and structural model, derived from the GreenLab model, which is adapted for young Mongolian Scots pines by incorporation of plant biomass production, allocation, allometric rules and soil water dynamics. The model is calibrated and validated based on experimental measurements taken on Mongolian Scots pines in 2006 and 2007 under local meteorological conditions. Measurements include plant biomass, topology and geometry, as well as soil attributes and standard meteorological data. After calibration, the model allows reconstruction of three-dimensional (3D) canopy architecture and biomass dynamics for trees from one to six years old at the same site using meteorological data for the six years from 2001 to 2006. Sensitivity analysis indicates that rainfall variation has more influence on biomass increment than on architecture, and the internode and needle compartments and the aboveground biomass respond linearly to increases in precipitation. Sensitivity analysis also shows that the balance between internode and needle growth varies only slightly within the range of precipitations considered here. The model is expected to be used to investigate the growth of Mongolian Scots pines in other regions with different soils and climates. PMID:22927982

  4. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
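The model under discussion is, at its core, a first-order thermal lag driven by load. As a hedged sketch of the least-squares route the paper critiques (not MIT's actual model, equations, or data), one can fit a discretized first-order system by ordinary least squares:

```python
import numpy as np

def fit_first_order(theta, u):
    """Ordinary least-squares fit of a discretized first-order model
    theta[k+1] = a*theta[k] + b*u[k], a stand-in for top-oil
    temperature-rise dynamics (u is a load-dependent heat input)."""
    A = np.column_stack([theta[:-1], u])
    (a, b), *_ = np.linalg.lstsq(A, theta[1:], rcond=None)
    return a, b

# synthetic, noise-free data with known parameters a=0.9, b=0.5
rng = np.random.default_rng(1)
u = rng.uniform(0.0, 1.0, 200)
theta = np.zeros(201)
for k in range(200):
    theta[k + 1] = 0.9 * theta[k] + 0.5 * u[k]
a_hat, b_hat = fit_first_order(theta, u)
```

On noise-free data least squares recovers a and b exactly; the paper's point is that with quantization noise and decimated data it becomes biased, which motivates the output-error technique.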

  5. A dislocation-based model for variant selection during the γ-to-α‧ transformation

    NASA Astrophysics Data System (ADS)

    Wittridge, N. J.; Jonas, J. J.; Root, J. H.

    2001-04-01

    A phase transformation model is described for variant selection during the austenite-to-martensite transformation. The model depends entirely on the presence of glide dislocations in the deformed austenite. The direct correlation between the 24 slip systems of the Bishop and Hill (B-H) crystal plasticity model and the 24 <112> rotation axes of the Kurdjumov-Sachs (K-S) orientation relationship is employed. Two selection criteria, based on slip activity and permissible dislocation reactions, govern the variants that are chosen to represent the final transformation texture. The development of the model via analysis of the experimental results of Liu and Bunge is described. The model is applied to the four distinct strain paths: (1) plane strain rolling, (2) axisymmetric extension, (3) axisymmetric compression, and (4) simple shear. Experimental deformation and transformation textures were produced for comparison purposes via appropriate deformation and quenching procedures. In each case, the transformation texture predicted using the dislocation reaction model is in excellent agreement with the experimental findings.

  6. [Job crisis and transformations in the new model of accumulation].

    PubMed

    Zerda-Sarmiento, Alvaro

    2012-06-01

    The general, structural crisis that capitalism is going through is a token of the difficulties the accumulation model has faced since the 1970s in developed countries. This model has been trying to re-establish itself on the basis of neoliberal principles and a new techno-economic paradigm. The new accumulation pattern has had effects in the employment sphere that are evident in all the elements that constitute work relationships. In Colombia, the implementation of this model has been partial and segmented. However, its consequences (and those of the long-running current crisis) are evident in unemployment, precarious work, segmentation, informal work, and restricted, private health insurance. Besides, financial accumulation makes labour profits flow at different levels. The economic model the current government has aimed to implement leads to strengthening exports, making the population's living conditions more difficult. To overcome the current state of affairs, the world of work needs to become more creative, seeking new schemes for the expression and mobilization of its claims, and the establishment of a different economic model aimed at building a more inclusive future with social justice.

  7. Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture

    PubMed Central

    Knight, James C.; Furber, Steve B.

    2016-01-01

    While the adult human brain has approximately 8.8 × 10¹⁰ neurons, this number is dwarfed by its 1 × 10¹⁵ synapses. From the point of view of neuromorphic engineering and neural simulation in general this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously. PMID:27683540

  8. Overelaborated synaptic architecture and reduced synaptomatrix glycosylation in a Drosophila classic galactosemia disease model.

    PubMed

    Jumbo-Lucioni, Patricia; Parkinson, William; Broadie, Kendal

    2014-12-01

    Classic galactosemia (CG) is an autosomal recessive disorder resulting from loss of galactose-1-phosphate uridyltransferase (GALT), which catalyzes conversion of galactose-1-phosphate and uridine diphosphate (UDP)-glucose to glucose-1-phosphate and UDP-galactose, immediately upstream of UDP-N-acetylgalactosamine and UDP-N-acetylglucosamine synthesis. These four UDP-sugars are essential donors for driving the synthesis of glycoproteins and glycolipids, which heavily decorate cell surfaces and extracellular spaces. In addition to acute, potentially lethal neonatal symptoms, maturing individuals with CG develop striking neurodevelopmental, motor and cognitive impairments. Previous studies suggest that neurological symptoms are associated with glycosylation defects, with CG recently being described as a congenital disorder of glycosylation (CDG), showing defects in both N- and O-linked glycans. Here, we characterize behavioral traits, synaptic development and glycosylated synaptomatrix formation in a GALT-deficient Drosophila disease model. Loss of Drosophila GALT (dGALT) greatly impairs coordinated movement and results in structural overelaboration and architectural abnormalities at the neuromuscular junction (NMJ). Dietary galactose and mutation of galactokinase (dGALK) or UDP-glucose dehydrogenase (sugarless) genes are identified, respectively, as critical environmental and genetic modifiers of behavioral and cellular defects. Assaying the NMJ extracellular synaptomatrix with a broad panel of lectin probes reveals profound alterations in dGALT mutants, including depletion of galactosyl, N-acetylgalactosamine and fucosylated horseradish peroxidase (HRP) moieties, which are differentially corrected by dGALK co-removal and sugarless overexpression. Synaptogenesis relies on trans-synaptic signals modulated by this synaptomatrix carbohydrate environment, and dGALT-null NMJs display striking changes in heparan sulfate proteoglycan (HSPG) co-receptor and Wnt ligand levels

  9. Notch Signaling and Schwann Cell Transformation: Development of a Model System and Application to Human MPNSTs

    DTIC Science & Technology

    2007-03-01

    Notch Signaling and Schwann Cell Transformation: Development of a Model System and Application to Human MPNSTs. Principal Investigator: Tom Kadesch, Ph.D. Grant number W81XWH-04-1-0209. This grant addresses the potential role of Notch signaling in the malignant transformation of neurofibromas to MPNSTs in patients with NF1. Our previous work has shown that

  10. Notch Signaling and Schwann Cell Transformation: Development of a Model System and Application to Human MPNSTs

    DTIC Science & Technology

    2008-03-01

    Notch Signaling and Schwann Cell Transformation: Development of a Model System and Application to Human MPNSTs. Principal Investigator: Tom Kadesch, Ph.D. Grant number W81XWH-04-1-0209. This grant addresses the potential role of Notch signaling in the malignant transformation of neurofibromas to MPNSTs in patients with NF1. Our previous

  11. Time Domain Transformations to Improve Hydrologic Model Consistency: Parameterization in Flow-Corrected Time

    NASA Astrophysics Data System (ADS)

    Smith, T. J.; Marshall, L. A.; McGlynn, B. L.

    2015-12-01

    Streamflow modeling is highly complex. Beyond the identification and mapping of dominant runoff processes to mathematical models, additional challenges are posed by the switching of dominant streamflow generation mechanisms temporally and dynamic catchment responses to precipitation inputs based on antecedent conditions. As a result, model calibration is required to obtain parameter values that produce acceptable simulations of the streamflow hydrograph. Typical calibration approaches assign equal weight to all observations to determine the best fit over the simulation period. However, the objective function can be biased toward (i.e., implicitly weight) certain parts of the hydrograph (e.g., high streamflows). Data transformations (e.g., logarithmic or square root) scale the magnitude of the observations and are commonly used in the calibration process to reduce implicit weighting or better represent assumptions about the model residuals. Here, we consider a time domain data transformation rather than the more common data domain approaches. Flow-corrected time was previously employed in the transit time modeling literature. Conceptually, it stretches time during high streamflow and compresses time during low streamflow periods. Therefore, streamflow is dynamically weighted in the time domain, with greater weight assigned to periods with larger hydrologic flux. Here, we explore the utility of the flow-corrected time transformation in improving model performance of the Catchment Connectivity Model. Model process fidelity was assessed directly using shallow groundwater connectivity data collected at Tenderfoot Creek Experimental Forest. Our analysis highlights the impact of data transformations on model consistency and parameter sensitivity.
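    The flow-corrected time idea described in this abstract can be sketched in a few lines (illustrative Python under a simple assumption, not the authors' implementation): clock-time increments are rescaled by the ratio of instantaneous streamflow to mean streamflow, so equal intervals of transformed time carry equal cumulative flux, stretching high-flow periods and compressing low-flow ones.

    ```python
    # Sketch of a flow-corrected time transformation: each uniform time step
    # advances transformed time by q[k]/mean(q)*dt, so the total transformed
    # duration matches the clock duration while high-flow periods occupy
    # proportionally more of the transformed axis (and thus more weight in
    # a calibration objective evaluated uniformly in transformed time).
    import numpy as np

    def flow_corrected_time(q, dt=1.0):
        """Cumulative flow-corrected time for a streamflow series q."""
        q = np.asarray(q, dtype=float)
        return np.cumsum(q / q.mean() * dt)

    q = np.array([1.0, 1.0, 8.0, 4.0, 1.0, 1.0])   # hypothetical hydrograph
    tau = flow_corrected_time(q)
    print(tau)   # the event steps (8.0, 4.0) span most of the transformed axis
    ```

    The inverse intuition holds for data-domain transforms like the logarithm: those rescale streamflow magnitudes, whereas this rescales the time axis itself.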

  12. Transforming the Gray Factory: The Presidential Leadership of Charles M. Vest and the Architecture of Change at Massachusetts Institute of Technology

    ERIC Educational Resources Information Center

    Daas, Mahesh

    2013-01-01

    The single-site exemplar study presents an in-depth account of the presidential leadership of Charles M. Vest of MIT--the second longest presidency in the Institute's history--and his leadership team's journey between 1990 and 2004 into campus architectural changes that involved over a billion dollars, added a quarter of floor space to MIT's…

  13. The Healing Web: A Transformative Model for Nursing.

    ERIC Educational Resources Information Center

    Bunkers, Sandra Schmidt

    1992-01-01

    A Navajo legend describes a web woven by Spider Woman that saved the people during a great flood. This article uses the imagery of the web to help education and service think more clearly about nursing's future. The Healing Web project seeks to educate nurses in a futuristic differentiated model. (Author/JOW)

  14. Teachers' Practices and Mental Models: Transformation through Reflection on Action

    ERIC Educational Resources Information Center

    Manrique, María Soledad; Sánchez Abchi, Verónica

    2015-01-01

    This contribution explores the relationship between teaching practices, teaching discourses and teachers' implicit representations and mental models and the way these dimensions change through teacher education (T.E). In order to study these relationships, and based on the assumptions that representations underlie teaching practices and that T.E…

  15. Modeling transformations of neurodevelopmental sequences across mammalian species.

    PubMed

    Workman, Alan D; Charvet, Christine J; Clancy, Barbara; Darlington, Richard B; Finlay, Barbara L

    2013-04-24

    A general model of neural development is derived to fit 18 mammalian species, including humans, macaques, several rodent species, and six metatherian (marsupial) mammals. The goal of this work is to describe heterochronic changes in brain evolution within its basic developmental allometry, and provide an empirical basis to recognize equivalent maturational states across animals. The empirical data generating the model comprises 271 developmental events, including measures of initial neurogenesis, axon extension, establishment, and refinement of connectivity, as well as later events such as myelin formation, growth of brain volume, and early behavioral milestones, to the third year of human postnatal life. The progress of neural events across species is sufficiently predictable that a single model can be used to predict the timing of all events in all species, with a correlation of modeled values to empirical data of 0.9929. Each species' rate of progress through the event scale, described by a regression equation predicting duration of development in days, is highly correlated with adult brain size. Neural heterochrony can be seen in selective delay of retinogenesis in the cat, associated with greater numbers of rods in its retina, and delay of corticogenesis in all species but rodents and the rabbit, associated with relatively larger cortices in species with delay. Unexpectedly, precocial mammals (those unusually mature at birth) delay the onset of first neurogenesis but then progress rapidly through remaining developmental events.

  16. Modeling Transformations of Neurodevelopmental Sequences across Mammalian Species

    PubMed Central

    Workman, Alan D.; Charvet, Christine J.; Clancy, Barbara; Darlington, Richard B.

    2013-01-01

    A general model of neural development is derived to fit 18 mammalian species, including humans, macaques, several rodent species, and six metatherian (marsupial) mammals. The goal of this work is to describe heterochronic changes in brain evolution within its basic developmental allometry, and provide an empirical basis to recognize equivalent maturational states across animals. The empirical data generating the model comprises 271 developmental events, including measures of initial neurogenesis, axon extension, establishment, and refinement of connectivity, as well as later events such as myelin formation, growth of brain volume, and early behavioral milestones, to the third year of human postnatal life. The progress of neural events across species is sufficiently predictable that a single model can be used to predict the timing of all events in all species, with a correlation of modeled values to empirical data of 0.9929. Each species' rate of progress through the event scale, described by a regression equation predicting duration of development in days, is highly correlated with adult brain size. Neural heterochrony can be seen in selective delay of retinogenesis in the cat, associated with greater numbers of rods in its retina, and delay of corticogenesis in all species but rodents and the rabbit, associated with relatively larger cortices in species with delay. Unexpectedly, precocial mammals (those unusually mature at birth) delay the onset of first neurogenesis but then progress rapidly through remaining developmental events. PMID:23616543

  17. Transforming the Preparation of Leaders into a True Partnership Model

    ERIC Educational Resources Information Center

    Devin, Mary

    2016-01-01

    A former school superintendent who is now a university professor uses her experience in these partnership roles to describe how Kansas State University's collaboratively designed master's academy leadership preparation models merging theory and practice came about over fifteen years ago, and how it has evolved since then.

  18. Irreversibility of T-Cell Specification: Insights from Computational Modelling of a Minimal Network Architecture

    PubMed Central

    Manesso, Erica; Kueh, Hao Yuan; Freedman, George; Rothenberg, Ellen V.

    2016-01-01

    Background/Objectives A cascade of gene activations under the control of Notch signalling is required during T-cell specification, when T-cell precursors gradually lose the potential to undertake other fates and become fully committed to the T-cell lineage. We elucidate how the gene/protein dynamics for a core transcriptional module governs this important process by computational means. Methods We first assembled existing knowledge about transcription factors known to be important for T-cell specification to form a minimal core module consisting of TCF-1, GATA-3, BCL11B, and PU.1 aiming at dynamical modeling. Model architecture was based on published experimental measurements of the effects on each factor when each of the others is perturbed. While several studies provided gene expression measurements at different stages of T-cell development, pure time series are not available, thus precluding a straightforward study of the dynamical interactions among these genes. We therefore translate stage dependent data into time series. A feed-forward motif with multiple positive feed-backs can account for the observed delay between BCL11B versus TCF-1 and GATA-3 activation by Notch signalling. With a novel computational approach, all 32 possible interactions among Notch signalling, TCF-1, and GATA-3 are explored by translating combinatorial logic expressions into differential equations for BCL11B production rate. Results Our analysis reveals that only 3 of 32 possible configurations, where GATA-3 works as a dimer, are able to explain not only the time delay, but very importantly, also give rise to irreversibility. The winning models explain the data within the 95% confidence region and are consistent with regard to decay rates. Conclusions This first generation model for early T-cell specification has relatively few players. Yet it explains the gradual transition into a committed state with no return. Encoding logics in a rate equation setting allows determination of

  19. Development and Validation of a Materials Preparation Model from the Perspective of Transformative Pedagogy

    ERIC Educational Resources Information Center

    Barjesteh, Hamed; Birjandi, Parviz; Maftoon, Parviz

    2015-01-01

    This study is a report on the design, development, and validation of a model within the main tenets of critical pedagogy (CP) with a hope to implement in education in general and applied linguistics in particular. To develop a transformative L2 materials preparation (TLMP) model, the researchers drew on Crawford's (1978) principles of CP as a…

  20. Climate change induced transformations of agricultural systems: insights from a global model

    NASA Astrophysics Data System (ADS)

    Leclère, D.; Havlík, P.; Fuss, S.; Schmid, E.; Mosnier, A.; Walsh, B.; Valin, H.; Herrero, M.; Khabarov, N.; Obersteiner, M.

    2014-12-01

    Climate change might impact crop yields considerably, and transformations of agricultural systems are anticipated to be needed in the coming decades to sustain affordable food provision. However, decision-making on transformational shifts in agricultural systems is plagued by uncertainties concerning the nature and geography of climate change, its impacts, and adequate responses. Locking agricultural systems into inadequate transformations that are costly to adjust is a significant risk, and this acts as an incentive to delay action. It is crucial to gain insight into how much transformation is required of agricultural systems, how robust such strategies are, and how we can defuse the associated challenge for decision-making. Implementing a definition based on large changes in resource use in a global impact assessment modelling framework, we find transformational adaptations to be required of agricultural systems in most regions by the 2050s in order to cope with climate change. However, these transformations differ widely across climate change scenarios: uncertainties in the large-scale development of irrigation span all continents from the 2030s on and affect two-thirds of regions by the 2050s. Meanwhile, significant but uncertain reductions of major agricultural areas affect the Northern Hemisphere's temperate latitudes, while increases in non-agricultural zones could be large but uncertain in one-third of regions. To help reduce the associated challenge for decision-making, we propose a methodology exploring which, when, where and why transformations could be required and uncertain, by means of scenario analysis.

  1. Enhancing the prediction accuracy of bovine lameness models through transformations of limb movement variables.

    PubMed

    Liu, J; Neerchal, N K; Tasch, U; Dyer, R M; Rajkondawar, P G

    2009-06-01

    The issue of modeling bovine lameness was explored by testing the hypothesis that B-spline transformation of limb movement variables (LMV) employed in predictive models improved model accuracy. The objectives were to determine the effect of number of B-spline knots and the degree of the underlying polynomial approximation (degree of freedom) on model accuracy. Knot number used in B-spline transformation improved model accuracy by improving model specificity and to a lesser extent model sensitivity. Degree of polynomial approximation had no effect on model predictive accuracy from the data set of 261 cows. Model stability, defined as changes in predictive accuracy associated with the superimposition of perturbations (0.5 and 1.0%) in LMV on the measured data, was explored. Model specificity and to a lesser degree, sensitivity, increased with increased knot number across data set perturbations. Specificity and sensitivity increased by 43 and 11%, respectively, when knot number increased from 0 to 7 for a perturbation level of 0.5%. When the perturbation level was 1%, the corresponding increases in specificity and sensitivity were 32 and 4%, respectively. Nevertheless, different levels of LMV perturbation varied the optimal knot number associated with highest model accuracy. The optimal knot number for 0.5% perturbation was 8, whereas for 1% perturbation the optimal knot number was 7. The B-spline transformation improved specificity and sensitivity of predictive models for lameness, provided the appropriate number of knots was selected.
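    The B-spline transformation discussed in this abstract amounts to expanding each limb movement variable in a B-spline basis whose flexibility is set by the number of interior knots and the polynomial degree. The sketch below (illustrative Python; the knot placement and rescaling to [0, 1] are assumptions, not the paper's protocol) builds such a basis with SciPy, with 7 knots as in the abstract's best-performing configuration.

    ```python
    # Sketch: evaluate a clamped cubic B-spline basis with a chosen number of
    # interior knots; the resulting design matrix B is the transformed
    # representation of the variable that a predictive model would consume.
    import numpy as np
    from scipy.interpolate import BSpline

    def bspline_basis(x, n_knots, degree=3):
        """Design matrix of a B-spline basis with n_knots interior knots on [0, 1]."""
        interior = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]
        t = np.r_[[0.0] * (degree + 1), interior, [1.0] * (degree + 1)]  # clamped
        n_basis = len(t) - degree - 1
        # Evaluate each basis function by giving it a unit coefficient vector.
        return np.column_stack([
            BSpline(t, np.eye(n_basis)[i], degree)(x) for i in range(n_basis)
        ])

    x = np.linspace(0.0, 1.0, 50)     # one limb movement variable, rescaled
    B = bspline_basis(x, n_knots=7)   # 7 interior knots, cubic: 11 basis columns
    ```

    Increasing the knot number adds columns to B and hence local flexibility, which is the mechanism behind the specificity gains the abstract reports; the degree of the polynomial sets smoothness within each knot span.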

  2. Modelling of the effects of grain orientation on transformation-induced plasticity in multiphase carbon steels

    NASA Astrophysics Data System (ADS)

    Tjahjanto, D. D.; Turteltaub, S.; Suiker, A. S. J.; van der Zwaag, S.

    2006-06-01

    The effects of grain orientation on transformation-induced plasticity in multiphase steels are studied through three-dimensional finite element simulations. The boundary value problems analysed concern a uniaxially-loaded sample consisting of a grain of retained austenite surrounded by multiple grains of ferrite. For the ferritic phase, a rate-dependent crystal plasticity model is used that describes the elasto-plastic behaviour of body-centred cubic crystalline structures under large deformations. In this model, the critical-resolved shear stress for plastic slip consists of an evolving slip resistance and a stress-dependent term that corresponds to the projection of the stress tensor on a non-glide plane (i.e. a non-Schmid stress). For the austenitic phase, the transformation model developed by Turteltaub and Suiker (2006 Int. J. Solids Struct. at press, 2005 J. Mech. Phys. Solids 53 1747-88) is employed. This model simulates the displacive phase transformation of a face-centred cubic austenite into a body-centred tetragonal martensite under external mechanical loading. The effective transformation kinematics and the effective anisotropic elastic stiffness components in the model are derived from lower-scale information that follows from the crystallographic theory of martensitic transformations. In the boundary value problems studied, the mutual interaction between the transforming austenitic grain and the plastically deforming ferritic matrix is computed for several grain orientations. From the simulation results, specific combinations of austenitic and ferritic crystalline orientations are identified that either increase or decrease the effective strength of the material. This information is useful to further improve the mechanical properties of multiphase carbon steels. In order to quantify the anisotropic aspects of the crystal plasticity model, the simulation results for the uniaxially-loaded sample are compared with those obtained with an isotropic

  3. Variable Transformation in Nonlinear Least Squares model Fitting

    DTIC Science & Technology

    1981-07-01

    Chemistry, Vol. 10, pp. 91-104, 1973. H.J. Britt and R.H. Luecke, "The Estimation of Parameters in Nonlinear, Implicit Models", Technometrics. ... with respect to the unknown C, 6, and K. This yields the following set of normal equations.

  4. Study on Information Management for the Conservation of Traditional Chinese Architectural Heritage - 3d Modelling and Metadata Representation

    NASA Astrophysics Data System (ADS)

    Yen, Y. N.; Weng, K. H.; Huang, H. Y.

    2013-07-01

    After over 30 years of practice and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. Documenting and digitizing these unique heritages across their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study in digitization practice. The application of metadata representation and 3D modelling are the two key issues discussed. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp can, at this stage, only be used to model basic visual representations, and is ineffective in documenting the additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. The research recommends SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.

  5. Laser Hardening Prediction Tool Based On a Solid State Transformations Numerical Model

    SciTech Connect

    Martinez, S.; Ukar, E.; Lamikiz, A.

    2011-01-17

    This paper presents a tool to predict the hardened layer in selective laser hardening processes, where the laser beam heats the part locally while the bulk acts as a heat sink. The tool predicts the temperature field in the workpiece with a numerical model that provides a three-dimensional transient solution for heating and allows different laser sources to be introduced. The thermal field was coupled with a kinetic model based on the Johnson-Mehl-Avrami equation. Using this equation, an experimental adjustment of the transformation parameters was carried out to obtain the continuous heating transformation (CHT) diagrams. With the temperature field and CHT diagrams, the model predicts the percentage of base material converted into austenite. These two parameters are used as a first step to estimate the depth of the hardened layer in the part. The model has been adjusted and validated with experimental data for DIN 1.2379, a cold work tool steel typically used in the mold and die making industry. This steel presents solid state diffusive transformations at relatively low temperatures. These transformations must be considered in order to achieve good accuracy in the temperature field prediction during the heating phase. For model validation, the surface temperature measured by pyrometry and the thermal field, as well as the hardened layer obtained from a metallographic study, were compared with the model data, showing a good adjustment.
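    The Johnson-Mehl-Avrami kinetics mentioned in this abstract have a compact isothermal form, X(t) = 1 − exp(−k·tⁿ), giving the fraction of material transformed after time t at a given temperature. The sketch below is illustrative Python only: the rate constant k and Avrami exponent n are placeholder values, not the calibrated DIN 1.2379 parameters from the paper.

    ```python
    # Sketch of isothermal Johnson-Mehl-Avrami (JMA) transformation kinetics.
    # In the paper's tool, k and n are fitted per temperature from CHT
    # experiments; here they are arbitrary placeholders.
    import math

    def jma_fraction(t, k, n):
        """Transformed fraction X(t) = 1 - exp(-k * t**n) for time t >= 0."""
        return 1.0 - math.exp(-k * t**n)

    # Placeholder parameters: fraction converted to austenite after two
    # hypothetical heating durations.
    for t in (0.5, 2.0):
        print(f"t = {t} s  ->  X = {jma_fraction(t, k=0.8, n=2.0):.3f}")
    ```

    The sigmoidal shape of X(t), slow at first, then accelerating, then saturating below 1, is what lets the model convert a predicted thermal history into a percentage of austenitized base material.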

  6. Animation Strategies for Smooth Transformations Between Discrete Lods of 3d Building Models

    NASA Astrophysics Data System (ADS)

    Kada, Martin; Wichmann, Andreas; Filippovska, Yevgeniya; Hermes, Tobias

    2016-06-01

    The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real-time and thus require advanced techniques to improve the quality and time response of dynamic scenes. The main focus of this article concentrates on the discussion of strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.
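    The simplest of the three strategies in this abstract, ordering edge collapse operations by the length of the edge to be collapsed, can be sketched directly (illustrative Python; the toy vertex/edge lists below are hypothetical, whereas the article operates on restricted triangle meshes of 3D building models).

    ```python
    # Sketch: order candidate edge collapses shortest-first, the first
    # transformation strategy described in the abstract. A full LOD
    # transformation would also update mesh topology after each collapse.
    import math

    def collapse_order(vertices, edges):
        """Sort candidate edges by 3D length, shortest first."""
        return sorted(edges,
                      key=lambda e: math.dist(vertices[e[0]], vertices[e[1]]))

    # Hypothetical building detail: the short ridge edge (0, 1) collapses
    # before the longer base edges, removing fine detail first.
    verts = [(0.0, 0.0, 0.0), (0.5, 0.0, 1.0), (4.0, 0.0, 0.0), (4.0, 3.0, 0.0)]
    order = collapse_order(verts, [(0, 2), (2, 3), (0, 1)])
    print(order)
    ```

    The two moving-plane strategies replace this purely metric ordering with a geometric sweep, which is what gives the transformation a coherent overall direction as perceived by the viewer.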

  7. Fast and parallel spectral transform algorithms for global shallow water models. Doctoral thesis

    SciTech Connect

    Jakob, R.

    1993-01-01

    The dissertation examines spectral transform algorithms for the solution of the shallow water equations on the sphere and studies their implementation and performance on shared memory vector multiprocessors. Beginning with the standard spectral transform algorithm in vorticity divergence form and its implementation in the Fortran based parallel programming language Force, two modifications are researched. First, the transforms and matrices associated with the meridional derivatives of the associated Legendre functions are replaced by corresponding operations with the spherical harmonic coefficients. Second, based on the fast Fourier transform and the fast multipole method, a lower complexity algorithm is derived that uses fast transformations between Legendre and interior Fourier nodes, fast surface spherical truncation and a fast spherical Helmholz solver. Because the global shallow water equations are similar to the horizontal dynamical component of general circulation models, the results can be applied to spectral transform numerical weather prediction and climate models. In general, the derived algorithms may speed up the solution of time dependent partial differential equations in spherical geometry.
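    The zonal (Fourier) half of the spectral transform method in this abstract can be shown in a few lines (illustrative Python; the meridional Legendre transform, which the dissertation accelerates with the fast multipole method, is omitted): a field on a latitude circle is moved to wavenumber space with an FFT, a zonal derivative becomes multiplication by i·m, and an inverse FFT returns to grid point space.

    ```python
    # Sketch: spectral (FFT-based) zonal derivative on one latitude circle,
    # the Fourier half of a spherical spectral transform step.
    import numpy as np

    nlon = 64
    lam = np.linspace(0.0, 2.0 * np.pi, nlon, endpoint=False)  # longitudes
    f = np.sin(3.0 * lam)                                      # sample field

    F = np.fft.rfft(f)                     # grid -> zonal wavenumber space
    m = np.arange(F.size)                  # zonal wavenumbers 0..nlon/2
    dfdlam = np.fft.irfft(1j * m * F, n=nlon)   # derivative, back to grid

    # For a band-limited field the spectral derivative is exact to rounding:
    err = np.max(np.abs(dfdlam - 3.0 * np.cos(3.0 * lam)))
    ```

    This exactness for resolved wavenumbers is why spectral transform dynamical cores have no phase error from zonal differencing, at the cost of the transforms themselves, which is the cost the dissertation's fast algorithms attack.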

  8. UAS in the NAS Project: Large-Scale Communication Architecture Simulations with NASA GRC Gen5 Radio Model

    NASA Technical Reports Server (NTRS)

    Kubat, Gregory

    2016-01-01

    This report provides a description and performance characterization of the large-scale, Relay architecture, UAS communications simulation capability developed for the NASA GRC, UAS in the NAS Project. The system uses a validated model of the GRC Gen5 CNPC, Flight-Test Radio model. Contained in the report is a description of the simulation system and its model components, recent changes made to the system to improve performance, descriptions and objectives of sample simulations used for test and verification, and a sampling and observations of results and performance data.

  9. From PCK to TPACK: Developing a Transformative Model for Pre-Service Science Teachers

    NASA Astrophysics Data System (ADS)

    Jang, Syh-Jong; Chen, Kuan-Chung

    2010-12-01

    New science teachers should be equipped with the ability to integrate and design the curriculum and technology for innovative teaching. How to integrate technology into pre-service science teachers' pedagogical content knowledge is an important issue. This study examined the impact of a transformative model that integrates technology and peer coaching on the development of technological pedagogical and content knowledge (TPACK) in pre-service science teachers. A transformative model and an online system were designed to restructure science teacher education courses. Participants of this study included an instructor and 12 pre-service teachers. The main sources of data included written assignments, online data, reflective journals, videotapes, and interviews. The study drew on four views, namely the comprehensive, imitative, transformative, and integrative views, to explore the impact on TPACK. The model helped pre-service teachers develop technological pedagogical methods and strategies for integrating subject-matter knowledge into science lessons, and further enhanced their TPACK.

  10. Three-Dimensional Numerical Model Considering Phase Transformation in Friction Stir Welding of Steel

    NASA Astrophysics Data System (ADS)

    Cho, Hoon-Hwe; Kim, Dong-Wan; Hong, Sung-Tae; Jeong, Yong-Ha; Lee, Keunho; Cho, Yi-Gil; Kang, Suk Hoon; Han, Heung Nam

    2015-12-01

    A three-dimensional (3D) thermo-mechanical model is developed considering the phase transformation occurring during the friction stir welding (FSW) of steel, and the simulated result is compared with both the measured temperature distribution during FSW and the microstructural changes after FSW. The austenite grain size (AGS) decreases significantly because of the frictional heat and severe plastic deformation generated during FSW, and the decreased AGS accelerates the diffusional phase transformation during FSW. The ferrite phase, one of the diffusional phases, develops mainly in mild steel, whereas the bainite phase transformation occurs significantly in high-strength steel with high hardenability. Additionally, transformation-induced heat is observed mainly in the stir zone during FSW. The measured temperature distribution and phase fraction agree fairly well with the predicted data.

  11. Voices of innovation: building a model for curriculum transformation.

    PubMed

    Phillips, Janet M; Resnick, Jerelyn; Boni, Mary Sharon; Bradley, Patricia; Grady, Janet L; Ruland, Judith P; Stuever, Nancy L

    2013-05-07

    Innovation in nursing education curriculum is critically needed to meet the demands of nursing leadership and practice while facing the complexities of today's health care environment. International nursing organizations, the Institute of Medicine, and our health care practice partners have called for curriculum reform to ensure the quality and safety of patient care. While innovation is occurring in schools of nursing, little is being researched or disseminated. The purposes of this qualitative study were to (a) describe what innovative curricula were being implemented, (b) identify challenges faced by the faculty, and (c) explore how the curricula were evaluated. Interviews were conducted with 15 exemplar schools from a variety of nursing programs throughout the United States. Exemplar innovative curricula were identified, and a model for approaching innovation was developed based on the findings related to conceptualizing, designing, delivering, evaluating, and supporting the curriculum. The results suggest implications for nursing education, research, and practice.

  12. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  13. Finite field-dependent BRST-anti-BRST transformations: Jacobians and application to the Standard Model

    NASA Astrophysics Data System (ADS)

    Moshin, Pavel Yu.; Reshetnyak, Alexander A.

    2016-07-01

    We continue our research [1-4] and extend the class of finite BRST-anti-BRST transformations with odd-valued parameters λa, a = 1, 2, introduced in these works. In doing so, we evaluate the Jacobians induced by finite BRST-anti-BRST transformations linear in functionally-dependent parameters, as well as those induced by finite BRST-anti-BRST transformations with arbitrary functional parameters. The calculations cover the cases of gauge theories with a closed algebra, dynamical systems with first-class constraints, and general gauge theories. The resulting Jacobians in the case of linearized transformations are different from those in the case of polynomial dependence on the parameters. Finite BRST-anti-BRST transformations with arbitrary parameters induce an extra contribution to the quantum action, which cannot be absorbed into a change of the gauge. These transformations include an extended case of functionally-dependent parameters that implies a modified compensation equation, which admits nontrivial solutions leading to a Jacobian equal to unity. Finite BRST-anti-BRST transformations with functionally-dependent parameters are applied to the Standard Model, and an explicit form of functionally-dependent parameters λa is obtained, providing the equivalence of path integrals in any 3-parameter Rξ-like gauges. The Gribov-Zwanziger theory is extended to the case of the Standard Model, and a form of the Gribov horizon functional is suggested in the Landau gauge, as well as in Rξ-like gauges, in a gauge-independent way using field-dependent BRST-anti-BRST transformations, and in Rξ-like gauges using transverse-like non-Abelian gauge fields.

  14. Architecture & Environment

    ERIC Educational Resources Information Center

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  15. Modeling the amorphous-to-crystalline phase transformation in network materials

    NASA Astrophysics Data System (ADS)

    Kohary, K.; Burlakov, V. M.; Pettifor, D. G.

    2005-06-01

    We have developed a computationally efficient rate equation model to study transformations between amorphous and crystalline phases of network forming materials. Amorphous and crystalline phases are treated in terms of their atomic ring distributions. The transformation between the two phases is considered to be driven by the conversion of one set of rings into another, following the Wooten-Winer-Weaire bond-switching algorithm. Our rate equation model describes both the generation and collapse of amorphous regions in thin crystalline films, the processes crucial for phase-change data storage materials. It is found that the amorphous spot collapse is assisted by the motion of certain crystal facets.
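
    The ring-distribution rate equations themselves are specific to the paper, but the basic machinery (populations exchanging mass through conversion rates) can be sketched with a toy two-state model. All rate values below are invented for illustration; this is not the Wooten-Winer-Weaire ring bookkeeping of the actual model.

```python
# Toy two-state rate-equation sketch of an amorphous <-> crystalline
# transformation. The paper evolves full atomic-ring distributions; here
# everything is collapsed into two populations A (amorphous) and C
# (crystalline) with assumed conversion rates, integrated by forward Euler.
k_ac, k_ca = 0.8, 0.1        # A->C and C->A rates (1/time), invented
dt, steps = 0.01, 2000       # time step and number of Euler steps

A, C = 1.0, 0.0              # start fully amorphous
for _ in range(steps):
    dA = (-k_ac * A + k_ca * C) * dt
    A, C = A + dA, C - dA    # exchange conserves the total population

# Closed-form steady state for comparison: A -> k_ca / (k_ac + k_ca).
A_steady = k_ca / (k_ac + k_ca)
```

    With asymmetric rates the crystalline fraction dominates at steady state, loosely mirroring how an amorphous spot in a crystalline film collapses once conversion in one direction is favored.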

  16. A transformative model for undergraduate quantitative biology education.

    PubMed

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.

  17. A Transformative Model for Undergraduate Quantitative Biology Education

    PubMed Central

    Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949

  18. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    SciTech Connect

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M.

    2014-02-14

    To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k_c and k̄ and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and phosphatidylethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J_0^m change sign with relevant parameter changes. Although typically k̄ < 0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs with membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J_0^m ≫ 0, especially at low ionic

  19. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability

    PubMed Central

    Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.

    2008-01-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259

  20. Influence of parameter values and variances and algorithm architecture in ConsExpo model on modeled exposures.

    PubMed

    Arnold, Susan F; Ramachandran, Gurumurthy

    2014-01-01

    This study evaluated the influence of parameter values and variances and of model architecture on modeled exposures, and identified important data gaps that contribute to lack-of-knowledge-related uncertainty, using ConsExpo 4.1 as an illustrative case study. Understanding the influential determinants in exposure estimates enables more informed and appropriate use of this model and the resulting exposure estimates. In exploring the influence of parameter placement in an algorithm and of the values and variances chosen to characterize the parameters within ConsExpo, "sensitive" and "important" parameters were identified: product amount, weight fraction, exposure duration, exposure time, and ventilation rate were deemed "important," or "always sensitive." With this awareness, exposure assessors can strategically focus on acquiring the most robust estimates for these parameters. ConsExpo relies predominantly on three algorithms to assess the default scenarios: the inhalation vapor evaporation equation using Langmuir mass transfer, the dermal instant-application algorithm with diffusion through the skin, and the oral ingestion by direct uptake algorithm. These algorithms, which do not necessarily render health-conservative estimates, account for 87, 89, and 59% of the inhalation, dermal, and oral default scenario assessments, respectively, according them greater influence relative to the less frequently used algorithms. Default data provided in ConsExpo may be useful to initiate assessments, but are insufficient for determining exposure acceptability or setting policy, as parameters defined by highly uncertain values produce biased estimates that may not be health conservative. Furthermore, this lack-of-knowledge uncertainty makes the magnitude of this bias uncertain. Significant data gaps persist for product amount, exposure time, and exposure duration. These "important" parameters exert influence in requiring broad values and variances to account for their uncertainty. Prioritizing
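
    The ConsExpo algorithms are not reproduced here, but the kind of one-at-a-time sensitivity ranking the study performs can be sketched on a generic well-mixed-room inhalation estimate. The model form and every parameter value below are assumptions chosen only to show how parameter influence can be ranked, not ConsExpo's equations or defaults.

```python
import math

# One-at-a-time sensitivity sketch for a toy inhalation exposure estimate:
# an instantaneous release diluted over a room and vented exponentially.
base = {
    "amount_g": 50.0,        # product amount
    "weight_frac": 0.1,      # weight fraction of the substance
    "room_m3": 20.0,         # room volume
    "air_changes_h": 0.5,    # ventilation rate
    "duration_h": 1.0,       # exposure duration
}

def mean_conc(p):
    # Time-averaged air concentration (mg/m3) over the exposure duration.
    released_mg = p["amount_g"] * p["weight_frac"] * 1000.0
    c0 = released_mg / p["room_m3"]
    k, t = p["air_changes_h"], p["duration_h"]
    return c0 * (1.0 - math.exp(-k * t)) / (k * t)

# Elasticity: percent change in output per percent change in a parameter.
sensitivity = {}
for name in base:
    bumped = dict(base, **{name: base[name] * 1.1})
    sensitivity[name] = (mean_conc(bumped) / mean_conc(base) - 1.0) / 0.1
```

    In this toy model the amount and weight fraction have elasticity 1 (the estimate is linear in them), while room volume and ventilation rate act in the opposite direction, which is the kind of ranking the study uses to label parameters "important."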

  1. PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices

    ERIC Educational Resources Information Center

    Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões

    2013-01-01

    This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…

  2. Project Integration Architecture: Application Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications is enabled.

  3. Information architecture. Volume 3: Guidance

    SciTech Connect

    1997-04-01

    The purpose of this document, as presented in Volume 1, The Foundations, is to assist the Department of Energy (DOE) in developing and promulgating information architecture guidance. This guidance is aimed at increasing the development of information architecture as a Departmentwide management best practice. This document describes departmental information architecture principles and minimum design characteristics for systems and infrastructures within the DOE Information Architecture Conceptual Model, and establishes a Departmentwide standards-based architecture program. The publication of this document fulfills the commitment to address guiding principles, promote standard architectural practices, and provide technical guidance. This document guides the transition from the baseline or de facto Departmental architecture through approved information management program plans and budgets to the future vision architecture. This document also represents another major step toward establishing a well-organized, logical foundation for the DOE information architecture.

  4. Effect of Colorspace Transformation, the Illuminance Component, and Color Modeling on Skin Detection

    SciTech Connect

    Jayaram, S; Schmugge, S; Shin, M C; Tsap, L V

    2004-03-22

    Skin detection is an important preliminary process in human motion analysis. It is commonly performed in three steps: transforming the pixel color to a non-RGB colorspace, dropping the illumination component of skin color, and classifying by modeling the skin color distribution. In this paper, we evaluate the effect of these three steps on the skin detection performance. The importance of this study is a new comprehensive colorspace and color modeling testing methodology that would allow for making the best choices for skin detection. Combinations of nine colorspaces, the presence or absence of the illuminance component, and two color modeling approaches are compared. The performance is measured by using a receiver operating characteristic (ROC) curve on a large dataset of 805 images with manual ground truth. The results reveal that (1) the absence of the illuminance component decreases performance, (2) skin color modeling has a greater impact than colorspace transformation, and (3) colorspace transformations can improve performance in certain instances. We found that the best performance was obtained by transforming the pixel color to the SCT, HSI, or CIELAB colorspaces, keeping the illuminance component, and modeling the color with the histogram approach.
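
    A minimal sketch of the histogram-based color modeling step that the study found most influential: convert each pixel to a non-RGB colorspace (HSV via Python's colorsys here, keeping the illuminance-like V channel, in line with the paper's finding), bin the colors, and classify by a skin/non-skin likelihood ratio. The handful of training pixels is invented; a real evaluation trains on labeled images and sweeps the threshold to trace an ROC curve.

```python
import colorsys

import numpy as np

BINS = 8  # bins per HSV channel; coarse, for illustration only

def hsv_bin(rgb):
    # Quantize an RGB pixel (floats in [0, 1]) into an HSV histogram bin.
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return (min(int(h * BINS), BINS - 1),
            min(int(s * BINS), BINS - 1),
            min(int(v * BINS), BINS - 1))

skin_hist = np.zeros((BINS, BINS, BINS))
nonskin_hist = np.zeros((BINS, BINS, BINS))

# Invented training pixels; real models use many thousands of labeled pixels.
for px in [(0.9, 0.7, 0.6), (0.85, 0.65, 0.55), (0.8, 0.6, 0.5)]:
    skin_hist[hsv_bin(px)] += 1
for px in [(0.1, 0.4, 0.9), (0.2, 0.8, 0.3), (0.5, 0.5, 0.5)]:
    nonskin_hist[hsv_bin(px)] += 1

def is_skin(rgb, threshold=1.0):
    # Classify by smoothed skin/non-skin count ratio in the pixel's bin.
    b = hsv_bin(rgb)
    return (skin_hist[b] + 1.0) / (nonskin_hist[b] + 1.0) > threshold
```

    Varying `threshold` trades false positives against false negatives, which is exactly the sweep an ROC evaluation like the paper's performs.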

  5. Applying a quality assurance system model to curriculum transformation: transferable lessons learned.

    PubMed

    Kayyal, Mohamad; Gibbs, Trevor

    2012-01-01

    As curricula are transformed throughout the world in response to the need for modern medical education, much attention is given to curriculum content and associated teaching, learning and assessment methodologies. However, an important component of any curriculum is its organisational management, how it is all held together, the way the process is conducted and what mechanisms are applied to ensure quality. In 2008, the Faculty of Medicine at Damascus University embarked on a journey of curriculum transformation. The transformation process was specifically and initially based on a quality assurance model. This entailed a concept for realising curriculum transformation; a framework for organisational management, which ensures that the necessary enabling conditions are met and issues of conflicts in roles and responsibilities are resolved; a plan for securing resources and creating the necessary governance structures needed to carry the transformation process forward; and a systematic analysis of risks facing the effective realisation of the transformation process and the corresponding mitigation measures to alleviate their impacts. Although a full evaluation of such an activity produces reliable results only after a period of time, this article demonstrates the principles and structures applied to the initial process based on some of the early lessons learned. We perceive that the lessons learned from this activity are capable of being translated to other Universities, in other similar developing countries; our hope is that others can learn from our experiences.

  6. Brief Communication: 2-D numerical modeling of the transformation mechanism of a braided channel

    NASA Astrophysics Data System (ADS)

    Xiao, Y.; Yang, S. F.; Shao, X.; Chen, W. X.; Xu, X. M.

    2014-05-01

    This paper investigates the controls on the transformation mechanism among different channel patterns. A 2-D depth-averaged numerical model is applied to reproduce the evolution of channel patterns with complex interactions among water flow, sediment transport, and bank erosion. Changes in variables such as discharge, sediment supply, and vegetation are considered in the numerical experiments, leading to the transformation from a braided pattern into a meandering one. What controls the transformation is discussed in light of the numerical results: vegetation helps stabilize the cut bank and bar surface, but is not a key factor in the transition; a decrease in discharge and sediment supply could shift a braided pattern toward a meandering one. This conclusion agrees with previous field work, confirming the two-dimensional model's potential for predicting the transition between different river patterns and improving understanding of patterning processes.

  7. Penium margaritaceum: A Unicellular Model Organism for Studying Plant Cell Wall Architecture and Dynamics.

    PubMed

    Domozych, David S

    2014-11-18

    Penium margaritaceum is a new and valuable unicellular model organism for studying plant cell wall structure and developmental dynamics. This charophyte has a cell wall composition remarkably similar to the primary cell wall of many higher plants and clearly defined inclusive zones containing specific polymers. Penium has a simple cylindrical phenotype with a distinct region of focused wall synthesis. Specific polymers, particularly pectins, can be identified using monoclonal antibodies raised against polymers of higher plant cell walls. Immunofluorescence-based labeling is easily performed using live cells that subsequently can be returned to culture and monitored. This feature allows for rapid assessment of wall expansion rates and identification of multiple polymer types in the wall microarchitecture during the cell cycle. Cryofixation by means of spray freezing provides excellent transmission electron microscopy imaging of the cell, including its elaborate endomembrane and cytoskeletal systems, both integral to cell wall development. Penium's fast growth rate allows for convenient microarray screening of various agents that alter wall biosynthesis and metabolism. Finally, recent successful development of transformed cell lines has allowed for non-invasive imaging of proteins in cells and for RNAi reverse genetics that can be used for cell wall biosynthesis studies.

  8. Architecture and design to support rapid prototyping and multiple dynamic models for the Virtual SpacePlane project

    NASA Astrophysics Data System (ADS)

    Banks, Sheila B.; Stytz, Martin R.; Rothermel, Scott A.; Johnson, Troy D.

    1998-08-01

    The advent of requirements for rapid and economical deployment of national space assets in support of Air Force operational missions has resulted in the need for a Manned SpacePlane (MSP) that can perform military missions with minimal preflight preparation and little if any in-orbit support from a mission control center. In this new approach to space operations, successful mission accomplishment will depend almost completely upon the MSP crew and upon the on-board capabilities of the spaceplane. In recognition of the challenges that will be faced by the MSP crew and to begin to address these challenges, the USAF Air Force Research Laboratory (Phillips Laboratory) initiated the Virtual SpacePlane (VSP) project. To support the MSP, the VSP must demonstrate a broad, functional subset of the anticipated missions and capabilities of the MSP throughout its entire flight regime, from takeoff through space operations and on through landing. Additionally, the VSP must execute the anticipated MSP missions in a realistic and tactically sound manner within a distributed virtual environment. Furthermore, the VSP project must also uncover, refine and validate MSP user interface requirements, design and demonstrate an intelligent user interface for the VSP, and design and implement a prototype VSP that can be used to demonstrate Manned SpacePlane missions. To enable us to make rapid progress on the project, we employed portions of the Virtual Cockpit and Solar System Modeler distributed virtual environment applications, and the Common Object Database (CODB) architecture tools developed in our labs. The Virtual Cockpit and Solar System Modeler supplied baseline interface components and tools, 3D graphical models, vehicle motion dynamics models, and VE communication capabilities. We use the CODB architecture to facilitate our use of Rapid Evolutionary and Exploratory Prototyping to uncover application requirements and evaluate solutions. The Information Pod provides the paradigm

  9. An architecture for the development of real-time fault diagnosis systems using model-based reasoning

    NASA Technical Reports Server (NTRS)

    Hall, Gardiner A.; Schuetzle, James; Lavallee, David; Gupta, Uday

    1992-01-01

    Presented here is an architecture for implementing real-time, telemetry-based diagnostic systems using model-based reasoning. First, we describe Paragon, a knowledge acquisition tool for offline entry and validation of physical system models. Paragon provides domain experts with a structured editing capability to capture the physical component's structure, behavior, and causal relationships. We next describe the architecture of the run-time diagnostic system. The diagnostic system, written entirely in Ada, uses the behavioral model developed offline by Paragon to simulate expected component states as reflected in the telemetry stream. The diagnostic algorithm traces causal relationships contained within the model to isolate system faults. Since the diagnostic process relies exclusively on the behavioral model and is implemented without the use of heuristic rules, it can be used to isolate unpredicted faults in a wide variety of systems. Finally, we discuss the implementation of a prototype system constructed using this technique for diagnosing faults in a science instrument. The prototype demonstrates the use of model-based reasoning to develop maintainable systems with greater diagnostic capabilities at a lower cost.
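
    A toy sketch of the fault-isolation idea described above: simulate expected values from a component model, flag deviations from telemetry, and trace causal links so only the deepest failing component is blamed. The component names, values, and dictionary-based model are invented for illustration and bear no relation to Paragon's actual (Ada-based) representation.

```python
# Causal graph: component -> upstream components it depends on (invented).
causes = {
    "heater": [],
    "sensor_supply": [],
    "temp_sensor": ["heater", "sensor_supply"],
}
# Behavioral model: nominal telemetry values each component should produce.
expected = {
    "heater": 28.0,          # degrees C
    "sensor_supply": 5.0,    # volts
    "temp_sensor": 28.0,     # degrees C
}

def diagnose(telemetry, tol=0.5):
    # Flag components whose telemetry deviates from the simulated value,
    # then report only those with no deviating upstream cause.
    bad = {c for c in expected if abs(telemetry[c] - expected[c]) > tol}
    return sorted(c for c in bad if not any(u in bad for u in causes[c]))

# Example: the supply voltage collapsed, dragging the sensor reading down;
# the diagnosis should blame the supply, not the sensor.
telemetry = {"heater": 28.1, "sensor_supply": 0.2, "temp_sensor": 12.0}
result = diagnose(telemetry)
```

    Because the diagnosis comes from the model rather than from fault-specific rules, unanticipated faults are isolated the same way as anticipated ones, which is the property the abstract emphasizes.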

  10. Modelling the effect of wheat canopy architecture as affected by sowing density on Septoria tritici epidemics using a coupled epidemic–virtual plant model

    PubMed Central

    Baccar, Rim; Fournier, Christian; Dornbusch, Tino; Andrieu, Bruno; Gouache, David; Robert, Corinne

    2011-01-01

    Background and Aims The relationship between Septoria tritici, a splash-dispersed disease, and its host is complex because of the interactions between the dynamic plant architecture and the vertical progress of the disease. The aim of this study was to test the capacity of a coupled virtual wheat–Septoria tritici epidemic model (Septo3D) to simulate disease progress on the different leaf layers for contrasted sowing density treatments. Methods A field experiment was performed with winter wheat ‘Soissons’ grown at three contrasted densities. Plant architecture was characterized to parameterize the wheat model, and disease dynamics were monitored to compare with simulations. Three simulation scenarios, differing in the degree of detail with which plant variability of development was represented, were defined. Key Results Despite architectural differences between density treatments, few differences were found in disease progress; only the lower-density treatment resulted in a slightly higher rate of lesion development. Model predictions were consistent with field measurements but did not reproduce the higher rate of lesion progress in the low density. The canopy reconstruction scenario in which inter-plant variability was taken into account yielded the best agreement between measured and simulated epidemics. Simulations performed with the canopy represented by a population of the same average plant deviated strongly from the observations. Conclusions It was possible to compare the predicted and measured epidemics on detailed variables, supporting the hypothesis that the approach is able to provide new insights into the processes and plant traits that contribute to the epidemics. On the other hand, the complex and dynamic responses to sowing density made it difficult to test the model precisely and to disentangle the various aspects involved. This could be overcome by comparing more contrasted and/or simpler canopy architectures such as those resulting from quasi

  11. Interlaboratory comparison of transformation in Syrian hamster embryo cells with model and coded chemicals

    SciTech Connect

    Tu, A.; Hallowell, W.; Pallotta, S.; Sivak, A.; Lubet, R.A.; Curren, R.D.; Avery, M.D.; Jones, C.; Sedita, B.A.; Huberman, E.

    1986-01-01

    Three independent laboratories tested eight model and five coded chemicals in the Syrian hamster embryo clonal transformation assay system to establish the intra- and interlaboratory reproducibility of the system and to identify sources of variability. When a common cell pool and the same lot of fetal calf serum were used, the three laboratories obtained consensus on the activity of eight model chemicals; five chemicals (benzo(a)pyrene, 7,12-dimethylbenz(a)anthracene, N-methyl-N'-nitro-N-nitrosoguanidine, nitroquinoline-N-oxide, and lead chromate) induced morphological transformation without exogenous metabolic activation and three (N-2-fluorenylacetamide, pyrene, and anthracene) produced no transformation response. Five coded chemicals (2,6-dichloro p-phenylenediamine, 4,4'-oxydianiline, cinnamyl anthranilate, dichlorvos, and reserpine), representative of environmental chemical classes, but not necessarily strong carcinogens, produced more equivocal responses in this interlaboratory study. Efforts to increase the transformation frequency or to amplify the expression of the transformed phenotype constitute some of the approaches which should be explored in order to overcome these limitations.

  12. Chemical Transformation System: Cloud Based Cheminformatic Services to Support Integrated Environmental Modeling

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) systems that account for the fate/transport of organics frequently require physicochemical properties as well as transformation products. A myriad of chemical property databases exist but these can be difficult to access and often do not co...

  13. School Counseling Leadership Team: A Statewide Collaborative Model to Transform School Counseling

    ERIC Educational Resources Information Center

    Kaffenberger, Carol J.; Murphy, Sally; Bemak, Fred

    2006-01-01

    The School Counseling Leadership Team (SCLT) is a model of a collaborative team formed to advocate for the transformed role of professional school counselors. The members of the SCLT included school district counseling supervisors, counselor educators, and leaders of statewide school counselor organizations. This article reviews the need for and…

  14. The Federal Transformation Intervention Model in Persistently Lowest Achieving High Schools: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Le Patner, Michelle B.

    2012-01-01

    This study examined the American Recovery and Reinvestment Act federal mandate of the Transformation Intervention Model (TIM) outlined by the School Improvement Grant, which was designed to turn around persistently lowest achieving schools. The study was conducted in four high schools in a large Southern California urban district that selected the…

  15. On the Formal Componential Structure of the Transformational-Generative Model of Grammar.

    ERIC Educational Resources Information Center

    Brew, P. J.

    1970-01-01

    This paper examines the relationship that exists between the syntactic and phonological components of the transformational-generative model insofar as their formal structures are concerned. It is demonstrated that the number and importance of the structural similarities between the syntax and the phonology make it necessary to provide for them in…

  16. Educational Transformation in Upper-Division Physics: The Science Education Initiative Model, Outcomes, and Lessons Learned

    ERIC Educational Resources Information Center

    Chasteen, Stephanie V.; Wilcox, Bethany; Caballero, Marcos D.; Perkins, Katherine K.; Pollock, Steven J.; Wieman, Carl E.

    2015-01-01

    In response to the need for a scalable, institutionally supported model of educational change, the Science Education Initiative (SEI) was created as an experiment in transforming course materials and faculty practices at two institutions--University of Colorado Boulder (CU) and University of British Columbia. We find that this departmentally…

  17. Chemical Transformation System: Cloud Based Cheminformatic Services to Support Integrated Environmental Modeling (proceedings)

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) systems that account for the fate/transport of organics frequently require physicochemical properties as well as transformation products. A myriad of chemical property databases exist but these can be difficult to access and often do not co...

  18. The Efficacy of Ecological Macro-Models in Preservice Teacher Education: Transforming States of Mind

    ERIC Educational Resources Information Center

    Stibbards, Adam; Puk, Tom

    2011-01-01

    The present study aimed to describe and evaluate a transformative, embodied, emergent learning approach to acquiring ecological literacy through higher education. A class of teacher candidates in a bachelor of education program filled out a survey, which had them rate their level of agreement with 15 items related to ecological macro-models.…

  19. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
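    The unscented transform described in this record can be sketched generically. A minimal NumPy illustration of the scaled unscented transform (an assumed textbook form, not the authors' implementation): 2n+1 sigma points are propagated through a nonlinear function and re-weighted to approximate the output mean and covariance, which is why only a handful of EOL simulations are needed per prediction.

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=0.1, beta=2.0, kappa=0.0):
    """Approximate mean/covariance of y = f(x) for x ~ N(mean, cov)
    using 2n+1 deterministically chosen sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)            # matrix square root
    sigma = np.vstack([mean, mean + L.T, mean - L.T])  # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))           # mean weights
    wc = wm.copy()                                     # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    y = np.array([f(p) for p in sigma])                # propagate each point
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc * d.T) @ d
    return y_mean, y_cov
```

    For a linear f the recovered moments are exact; in the paper's setting the role of f is played by the EOL simulation itself.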

  20. From PCK to TPACK: Developing a Transformative Model for Pre-Service Science Teachers

    ERIC Educational Resources Information Center

    Jang, Syh-Jong; Chen, Kuan-Chung

    2010-01-01

    New science teachers should be equipped with the ability to integrate and design the curriculum and technology for innovative teaching. How to integrate technology into pre-service science teachers' pedagogical content knowledge is the important issue. This study examined the impact on a transformative model of integrating technology and peer…

  1. Notch Signaling and Schwann Cell Transformation: Development of a Model System and Application to Human MPNSTs

    DTIC Science & Technology

    2006-03-01

    MPNSTs PRINCIPAL INVESTIGATOR: Tom R. Kadesch, Ph.D...Signaling and Schwann Cell Transformation: Development of a Model System and Application to Human MPNSTs 6. AUTHOR(S) Tom R. Kadesch, Ph.D. W81XWH-04-1...the malignant transformation of neurofibromas to MPNSTs in patients with NF1. Our previous work has shown that constitutive expression of Notch can

  2. A Faculty-Development Model for Transforming Introductory Biology and Ecology Courses

    ERIC Educational Resources Information Center

    D'Avanzo, Charlene; Anderson, Charles W.; Hartley, Laurel M.; Pelaez, Nancy

    2012-01-01

    The Diagnostic Question Cluster (DQC) project integrates education research and faculty development to articulate a model for the effective transformation of introductory biology and ecology teaching. Over three years, faculty members from a wide range of institutions used active teaching and DQCs, a type of concept inventory, as pre- and…

  3. Combining Genome-Wide Information with a Functional Structural Plant Model to Simulate 1-Year-Old Apple Tree Architecture

    PubMed Central

    Migault, Vincent; Pallas, Benoît; Costes, Evelyne

    2017-01-01

    In crops, optimizing target traits in breeding programs can be fostered by selecting appropriate combinations of architectural traits which determine light interception and carbon acquisition. In apple tree, architectural traits were observed to be under genetic control. However, architectural traits also result from many organogenetic and morphological processes interacting with the environment. The present study aimed at combining a FSPM built for apple tree, MAppleT, with genetic determinisms of architectural traits, previously described in a bi-parental population. We focused on parameters related to organogenesis (phyllochron and immediate branching) and morphogenesis processes (internode length and leaf area) during the first year of tree growth. Two independent datasets collected in 2004 and 2007 on 116 genotypes, issued from a ‘Starkrimson’ × ‘Granny Smith’ cross, were used. The phyllochron was estimated as a function of thermal time and sylleptic branching was modeled subsequently depending on phyllochron. From a genetic map built with SNPs, marker effects were estimated on four MAppleT parameters with rrBLUP, using 2007 data. These effects were then considered in MAppleT to simulate tree development in the two climatic conditions. The genome wide prediction model gave consistent estimations of parameter values with correlation coefficients between observed values and estimated values from SNP markers ranging from 0.79 to 0.96. However, the accuracy of the prediction model following cross validation schemas was lower. Three integrative traits (the number of leaves, trunk length, and number of sylleptic laterals) were considered for validating MAppleT simulations. In 2007 climatic conditions, simulated values were close to observations, highlighting the correct simulation of genetic variability. However, in 2004 conditions which were not used for model calibration, the simulations differed from observations. This study demonstrates the possibility

  4. Planning intensive care unit design using computer simulation modeling: optimizing integration of clinical, operational, and architectural requirements.

    PubMed

    OʼHara, Susan

    2014-01-01

    Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate the clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merge of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach, a standard, for integrating the sciences with real client data, to offer solutions for improving patient care.

  5. Business Collaborations in Grids: The BREIN Architectural Principles and VO Model

    NASA Astrophysics Data System (ADS)

    Taylor, Steve; Surridge, Mike; Laria, Giuseppe; Ritrovato, Pierluigi; Schubert, Lutz

    We describe the business-oriented architectural principles of the EC FP7 project “BREIN” for service-based computing. The architecture is founded on principles of how real businesses interact to mutual benefit, and we show how these can be applied to SOA and Grid computing. We present building blocks that can be composed in many ways to produce different value systems and supply chains for the provision of computing services over the Internet. We also introduce the complementary BREIN VO concept, which is centric to, and managed by, a main contractor who bears the responsibility for the whole VO. The BREIN VO has an execution lifecycle for the creation and operation of the VO, and we have related this to an application-focused workflow involving steps that provide real end-user value. We show how this can be applied to an engineering simulation application and how the workflow can be adapted should the need arise.

  6. A two-dimensional analytical model and experimental validation of garter stitch knitted shape memory alloy actuator architecture

    NASA Astrophysics Data System (ADS)

    Abel, Julianna; Luntz, Jonathan; Brei, Diann

    2012-08-01

    Active knits are a unique architectural approach to meeting emerging smart structure needs for distributed high strain actuation with simultaneous force generation. This paper presents an analytical state-based model for predicting the actuation response of a shape memory alloy (SMA) garter knit textile. Garter knits generate significant contraction against moderate to large loads when heated, due to the continuous interlocked network of loops of SMA wire. For this knit architecture, the states of operation are defined on the basis of the thermal and mechanical loading of the textile, the resulting phase change of the SMA, and the load path followed to that state. Transitions between these operational states induce either stick or slip frictional forces depending upon the state and path, which affect the actuation response. A load-extension model of the textile is derived for each operational state using elastica theory and Euler-Bernoulli beam bending for the large deformations within a loop of wire based on the stress-strain behavior of the SMA material. This provides kinematic and kinetic relations which scale to form analytical transcendental expressions for the net actuation motion against an external load. This model was validated experimentally for an SMA garter knit textile over a range of applied forces with good correlation for both the load-extension behavior in each state as well as the net motion produced during the actuation cycle (250% recoverable strain and over 50% actuation). The two-dimensional analytical model of the garter stitch active knit provides the ability to predict the kinetic actuation performance, providing the basis for the design and synthesis of large stroke, large force distributed actuators that employ this novel architecture.

  7. Studies of transformational leadership: evaluating two alternative models of trust and satisfaction.

    PubMed

    Yang, Yi-Feng

    2014-06-01

    This study evaluates the influence of leadership style and employee trust in their leaders on job satisfaction. 341 personnel (164 men, 177 women; M age = 33.5 yr., SD = 5.1) from four large insurance companies in Taiwan completed the transformational leadership behavior inventory, the leadership trust scale and a short version of the Minnesota (Job) Satisfaction Questionnaire. A bootstrapping mediation and structural equation modeling revealed that the effect of transformational leadership on job satisfaction was mediated by leadership trust. This study highlights the importance of leadership trust in leadership-satisfaction relationships, and provides managers with practical ways to enhance job satisfaction.
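    The bootstrapping mediation analysis mentioned above can be illustrated generically. A hedged sketch of a percentile-bootstrap confidence interval for the indirect effect a×b in a simple x → m → y mediation model (hypothetical variable names; not the authors' exact procedure):

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect a*b in the
    simple mediation model x -> m -> y (a: x->m path, b: m->y path
    controlling for x)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)                     # resample with replacement
        xi, mi, yi = x[i], m[i], y[i]
        a = np.polyfit(xi, mi, 1)[0]                  # slope of m on x
        X = np.column_stack([np.ones(n), mi, xi])     # y on m controlling for x
        b = np.linalg.lstsq(X, yi, rcond=None)[0][1]  # coefficient of m
        est.append(a * b)
    return np.percentile(est, [2.5, 97.5])
```

    An indirect effect is judged significant when the resulting interval excludes zero, which is the logic behind the mediation finding reported here.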

  8. Variational data assimilation schemes for transport and transformation models of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir; Tsvetova, Elena; Antokhin, Pavel

    2016-04-01

    The work is devoted to a data assimilation algorithm for atmospheric chemistry transport and transformation models. In the work a control function is introduced into the model source term (emission rate) to provide flexibility to adjust to data. This function is evaluated as the constrained minimum of the target functional combining a control function norm with a norm of the misfit between measured data and its model-simulated analog. The transport and transformation model acts as the constraint. The constrained minimization problem is solved with the Euler-Lagrange variational principle [1], which allows reducing it to a system of direct, adjoint and control function estimate relations. This provides a physically plausible structure of the resulting analysis without the model error covariance matrices that are sought within conventional approaches to data assimilation. The high dimensionality of atmospheric chemistry models and a real-time mode of operation demand computational efficiency of the data assimilation algorithms. Computational issues with complicated models can be solved by using a splitting technique. Within this approach a complex model is split into a set of relatively independent simpler models equipped with a coupling procedure. In a fine-grained approach data assimilation is carried out quasi-independently on the separate splitting stages with shared measurement data [2]. In integrated schemes data assimilation is carried out with respect to the split model as a whole. We compare the two approaches both theoretically and numerically. Data assimilation on the transport stage is carried out with a direct algorithm without iterations. Different algorithms to assimilate data on the nonlinear transformation stage are compared. In the work we compare data assimilation results for both artificial and real measurement data. With these data we study the impact of transformation processes and data assimilation on the performance of the modeling system [3]. 

  9. The Simulation Intranet Architecture

    SciTech Connect

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term which is being used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratory (http et al. 1998). The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high fidelity full physics simulations in a high performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved with the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include: the Web Client, the Business Objects, and Data Persistence.

  10. On Using SysML, DoDAF 2.0 and UPDM to Model the Architecture for the NOAA's Joint Polar Satellite System (JPSS) Ground System (GS)

    NASA Technical Reports Server (NTRS)

    Hayden, Jeffrey L.; Jeffries, Alan

    2012-01-01

    The JPSS Ground System is a flexible system of systems responsible for telemetry, tracking & command (TT&C), data acquisition, routing and data processing services for a varied fleet of satellites to support weather prediction, modeling and climate modeling. To assist in this engineering effort, architecture modeling tools are being employed to translate the former NPOESS baseline to the new JPSS baseline. The paper will focus on the methodology for the system engineering process and the use of these architecture modeling tools within that process. The Department of Defense Architecture Framework version 2.0 (DoDAF 2.0) viewpoints and views that are being used to describe the JPSS GS architecture are discussed. The Unified Profile for DoDAF and MODAF (UPDM) and Systems Modeling Language (SysML), as provided by extensions to the MagicDraw UML modeling tool, are used to develop the diagrams and tables that make up the architecture model. The model development process and structure are discussed, examples are shown, and details of handling the complexities of a large System of Systems (SoS), such as the JPSS GS, with an equally complex modeling tool, are described.

  11. Genetic transformation of Knufia petricola A95 - a model organism for biofilm-material interactions

    PubMed Central

    2014-01-01

    We established a protoplast-based system to transfer DNA to Knufia petricola strain A95, a melanised rock-inhabiting microcolonial fungus that is also a component of a model sub-aerial biofilm (SAB) system. To test whether the desiccation resistant, highly melanised cell walls would hinder protoplast formation, we treated a melanin-minus mutant of A95 as well as the type-strain with a variety of cell-degrading enzymes. Of the different enzymes tested, lysing enzymes from Trichoderma harzianum were most effective in producing protoplasts. This mixture was equally effective on the melanin-minus mutant and the type-strain. Protoplasts produced using lysing enzymes were mixed with polyethyleneglycol (PEG) and plasmid pCB1004 which contains the hygromycin B (HmB) phosphotransferase (hph) gene under the control of the Aspergillus nidulans trpC. Integration and expression of hph into the A95 genome conferred hygromycin resistance upon the transformants. Two weeks after plating out on selective agar containing HmB, the protoplasts developed cell-walls and formed colonies. Transformation frequencies were in the range 36 to 87 transformants per 10 μg of vector DNA and 106 protoplasts. Stability of transformation was confirmed by sub-culturing the putative transformants on selective agar containing HmB as well as by PCR-detection of the hph gene in the colonies. The hph gene was stably integrated as shown by five subsequent passages with and without selection pressure. PMID:25401079

  12. A mechanical model for the inside corner uplift at a ridge-transform intersection

    NASA Astrophysics Data System (ADS)

    Chen, Yongshun

    1989-07-01

    At least part of the inside corner uplift observed at slow ridge-transform intersections is shown to be a consequence of the flexure of an elastic plate caused by a twisting moment exerted along the transform fault. The frictional drag exerted along the transform increases with depth (overburden) down to the depth where plastic flow predominates; this depth-dependent drag results in a moment applied to the edge of the plate. An uplift of an inside corner of several hundred meters can be obtained by this mechanism. This can be increased to roughly 1 km (the observed uplift at some intersections) if we assume the elastic thickness of the plate is effectively thinner because of relaxation by long-term creep of the stresses resisting plate flexure. Simple models formulated to test this hypothesis show that the observed inside corner uplift of the Vema and Oceanographer transforms can be explained if the "effective elastic thickness" of the plate is about half of the thickness over which drag increases with depth along the transform fault boundary.

  13. Plug-and-Play Model Architecture and Development Environment for Powertrain/Propulsion System - Final CRADA Report

    SciTech Connect

    Rousseau, Aymeric

    2013-02-01

    Several tools already exist to develop detailed plant models, including GT-Power, AMESim, CarSim, and SimScape. The objective of Autonomie is not to provide a language to develop detailed models; rather, Autonomie supports the assembly and use of models from design to simulation to analysis with complete plug-and-play capabilities. Autonomie provides a plug-and-play architecture to support this ideal use of modeling and simulation for math-based automotive control system design. Models in the standard format create building blocks, which are assembled at runtime into a simulation model of a vehicle, system, subsystem, or component to simulate. All parts of the graphical user interface (GUI) are designed to be flexible to support architectures, systems, components, and processes not yet envisioned. This allows the software to be molded to individual uses, so it can grow as requirements and technical knowledge expand. This flexibility also allows for implementation of legacy code, including models, controller code, processes, drive cycles, and post-processing equations. A library of useful and tested models and processes is included as part of the software package to support a full range of simulation and analysis tasks immediately. Autonomie also includes a configuration and database management front end to facilitate the storage, versioning, and maintenance of all required files, such as the models themselves, the model’s supporting files, test data, and reports. During the duration of the CRADA, Argonne worked closely with GM to implement and demonstrate each of their requirements. A use case was developed by GM for every requirement and demonstrated by Argonne. Each of the new features was verified by GM experts through a series of Gate reviews. Once all the requirements were validated, they were presented to the directors as part of the GM Gate process.

  14. A program for 2D modeling (cross) correlogram tables using fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Ma, Xianlin; Yao, Tingting

    2001-08-01

    An alternative to the traditional fitting of analytical correlogram models or of a linear model of coregionalization has been recently proposed, whereby the conditions for permissibility of a set of (cross) correlogram tables are imposed on their Fourier transforms, that is on the corresponding set of (cross) spectrum tables. The resulting model is entirely non-parametric and consists of a set of permissible (cross) correlogram tables from which gridded correlogram values can be read directly. This paper gives the suite of GSLIB-type programs to implement this correlogram modeling approach. Presentation of the program is backed by a case study using actual petroleum reservoir data (porosity and seismic reflection energy).
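    The permissibility condition imposed in the spectral domain can be illustrated with a small sketch (an assumed direct-correlogram case; the GSLIB-type programs described here handle full cross-correlogram tables). By Bochner's theorem a correlogram table is permissible if and only if its spectrum is non-negative, so an arbitrary table can be projected onto a permissible model by clipping negative spectral values:

```python
import numpy as np

def make_permissible(corr_table):
    """Project a 2D correlogram table onto a permissible (positive-definite)
    model by imposing a non-negative spectrum, then restandardize so the
    lag-0 value equals 1 (lag (0, 0) assumed at index (0, 0))."""
    spec = np.fft.fft2(corr_table)
    spec = np.maximum(spec.real, 0.0)  # spectral density must be real and >= 0
    table = np.fft.ifft2(spec).real    # spec is real-symmetric, so result is real
    # assumes some positive spectral mass so that table[0, 0] > 0
    return table / table[0, 0]
```

    The re-standardization step mirrors the fact that a correlogram must equal 1 at zero lag; gridded correlogram values can then be read directly from the returned table.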

  15. Examining Competing Models of Transformational Leadership, Leadership Trust, Change Commitment, and Job Satisfaction.

    PubMed

    Yang, Yi-Feng

    2016-08-01

    This study discusses the influence of transformational leadership on job satisfaction through assessing six alternative models related to the mediators of leadership trust and change commitment utilizing a data sample (N = 341; M age = 32.5 year, SD = 5.2) for service promotion personnel in Taiwan. The bootstrap sampling technique was used to select the better fitting model. The tool of hierarchical nested model analysis was applied, along with the approaches of bootstrapping mediation, PRODCLIN2, and structural equation modeling comparison. The results overall demonstrate that leadership is important and that leadership role identification (trust) and workgroup cohesiveness (commitment) form an ordered serial relationship.

  16. Seismo-thermo-mechanical modeling of mature and immature transform faults

    NASA Astrophysics Data System (ADS)

    Preuss, Simon; Gerya, Taras; van Dinther, Ylona

    2016-04-01

    Transform faults (TF) are subdivided into continental and oceanic ones due to their markedly different tectonic position, structure, surface expression, dynamics and seismicity. Both continental and oceanic TFs are zones of rheological weakness, which is a pre-requisite for their existence and long-term stability. Compared to subduction zones, TFs are typically characterized by smaller earthquake magnitudes as both their potential seismogenic width and length are reduced. However, a few very large magnitude (Mw>8) strike-slip events were documented, which are presumably related to the generation of new transform boundaries and/or sudden reactivation of pre-existing fossil structures. In particular, the 11 April 2012 Sumatra Mw 8.6 earthquake is challenging the general concept that such high magnitude events only occur at megathrusts. Hence, the processes of TF nucleation, propagation and their direct relation to the seismic cycle and long-term deformation at both oceanic and continental transforms needs to be investigated jointly to overcome the restricted direct observations in time and space. To gain fundamental understanding of involved physical processes the numerical seismo-thermo-mechanical (STM) modeling approach, validated in a subduction zone setting (Van Dinther et al. 2013), will be adapted for TFs. A simple 2D plane view model geometry using visco-elasto-plastic material behavior will be adopted. We will study and compare seismicity patterns and evolution in two end member TF setups, each with strain-dependent and rate-dependent brittle-plastic weakening processes: (1) A single weak and mature transform fault separating two strong plates (e.g., in between oceanic ridges) and (2) A nucleating or evolving (continental) TF system with disconnected predefined faults within a plate subjected to simple shear deformation (e.g., San Andreas Fault system). The modeling of TFs provides a first tool to establish the STM model approach for transform faults in a

  17. Analysis of Transformation Plasticity in Steel Using a Finite Element Method Coupled with a Phase Field Model

    PubMed Central

    Cho, Yi-Gil; Kim, Jin-You; Cho, Hoon-Hwe; Cha, Pil-Ryung; Suh, Dong-Woo; Lee, Jae Kon; Han, Heung Nam

    2012-01-01

    An implicit finite element model was developed to analyze the deformation behavior of low carbon steel during phase transformation. The finite element model was coupled hierarchically with a phase field model that could simulate the kinetics and micro-structural evolution during the austenite-to-ferrite transformation of low carbon steel. Thermo-elastic-plastic constitutive equations for each phase were adopted to confirm the transformation plasticity due to the weaker phase yielding that was proposed by Greenwood and Johnson. From the simulations under various possible plastic properties of each phase, a more quantitative understanding of the origin of transformation plasticity was attempted by a comparison with the experimental observation. PMID:22558295

  18. Co-ordinate transforms underpin multiscale modelling and reduction in deterministic and stochastic systems

    NASA Astrophysics Data System (ADS)

    Roberts, A. J.

    2007-12-01

    A persistent feature of complex systems in engineering and science is the emergence of macroscopic, coarse grained, coherent behaviour from microscale interactions. In current modeling, ranging from ecology to materials science, the underlying microscopic mechanisms are known, but the closures to translate microscale knowledge to a large scale macroscopic description are rarely available in closed form. Kevrekidis proposes new 'equation free' computational methodologies to circumvent this stumbling block in multiscale modelling. Nonlinear coordinate transforms underpin analytic techniques that support these computational methodologies. But to do so we must cross multiple space and time scales, in both deterministic and stochastic systems, and where the microstructure is either smooth or detailed. Using examples, I describe progress in using nonlinear coordinate transforms to illuminate such multiscale modelling issues.

  19. A Novel Monte Carlo Scheme for the Rapid Equilibration of Atomistic Model Polymer Systems of Precisely Defined Molecular Architecture

    NASA Astrophysics Data System (ADS)

    Karayiannis, Nikos Ch.; Mavrantzas, Vlasis G.; Theodorou, Doros N.

    2002-03-01

    Two novel connectivity-altering atomistic Monte Carlo moves are presented for the fast equilibration of condensed phases of long-chain systems with a variety of chain architectures. With the new moves, isotropic or oriented melts of linear or long-chain branched polymers, dense brushes of terminally grafted macromolecules, and cyclic peptides can be simulated. Results concerning the structural, conformational, and volumetric properties of linear, monodisperse polyethylene melts, simulated with a new united-atom molecular model, are in excellent agreement with experimental data.

  20. From structure from motion to historical building information modeling: populating a semantic-aware library of architectural elements

    NASA Astrophysics Data System (ADS)

    Santagati, Cettina; Lo Turco, Massimiliano

    2017-01-01

In recent years, we have witnessed a wide diffusion of building information modeling (BIM) approaches in the field of architectural design, although very little research has been undertaken to explore the value, criticalities, and advantages of applying these methodologies in the cultural heritage domain. Furthermore, the latest developments in digital photogrammetry allow the easy generation of reliable, low-cost, three-dimensional textured models that could be used in BIM platforms to create semantic-aware objects composing a specific library of historical architectural elements. In this case, the transition from the point cloud to its corresponding parametric model is not trivial, and the level of geometric abstraction may not suit the scope of the BIM. The aim of this paper is to explore and retrace the milestone works on this crucial topic in order to identify the unsolved issues and to propose and test a simple, practitioner-centered workflow based on the latest available solutions for point cloud management in commercial BIM platforms.

  1. Structural model of the dimeric Parkinson’s protein LRRK2 reveals a compact architecture involving distant interdomain contacts

    PubMed Central

    Guaitoli, Giambattista; Raimondi, Francesco; Gilsbach, Bernd K.; Gómez-Llorente, Yacob; Deyaert, Egon; Renzi, Fabiana; Li, Xianting; Schaffner, Adam; Jagtap, Pravin Kumar Ankush; Boldt, Karsten; von Zweydorf, Felix; Gotthardt, Katja; Lorimer, Donald D.; Yue, Zhenyu; Burgin, Alex; Janjic, Nebojsa; Sattler, Michael; Versées, Wim; Ueffing, Marius; Ubarretxena-Belandia, Iban; Kortholt, Arjan; Gloeckner, Christian Johannes

    2016-01-01

    Leucine-rich repeat kinase 2 (LRRK2) is a large, multidomain protein containing two catalytic domains: a Ras of complex proteins (Roc) G-domain and a kinase domain. Mutations associated with familial and sporadic Parkinson’s disease (PD) have been identified in both catalytic domains, as well as in several of its multiple putative regulatory domains. Several of these mutations have been linked to increased kinase activity. Despite the role of LRRK2 in the pathogenesis of PD, little is known about its overall architecture and how PD-linked mutations alter its function and enzymatic activities. Here, we have modeled the 3D structure of dimeric, full-length LRRK2 by combining domain-based homology models with multiple experimental constraints provided by chemical cross-linking combined with mass spectrometry, negative-stain EM, and small-angle X-ray scattering. Our model reveals dimeric LRRK2 has a compact overall architecture with a tight, multidomain organization. Close contacts between the N-terminal ankyrin and C-terminal WD40 domains, and their proximity—together with the LRR domain—to the kinase domain suggest an intramolecular mechanism for LRRK2 kinase activity regulation. Overall, our studies provide, to our knowledge, the first structural framework for understanding the role of the different domains of full-length LRRK2 in the pathogenesis of PD. PMID:27357661

  2. Heat transfer study in oil channels of a transformer ODAF cooling system based on numerical modeling

    NASA Astrophysics Data System (ADS)

    Salari, Sina; Noasrolahzadeh, M. Reza; Parsimoghadam, Azadeh; Khalilikhah, Mostafa

    2012-06-01

Because malfunction of the cooling system in an electrical transformer can damage the transformer and, in more serious cases, the devices supplied by it, it is important to design these systems to be reliable and robust, which depends strongly on knowledge of the heat transfer mechanism in the system. This study was carried out to understand how the heat transfer coefficient relates to bobbin geometry and flow rate in ODAF cooling systems, which use forced convection with oil as the cooling fluid. The bobbins considered have diameters below 1000 mm and heights below 2000 mm, and are used on the low-voltage side of power transformers (voltage < 132 kV). The oil flow was simulated numerically to model heat transfer in the fluid and the bobbin. The results were validated against experimental tests, which show an error of about 10 percent for the 3D model. The temperature difference between oil and solid along the bobbin height, and the relation between heat transfer coefficient and flow rate, were obtained. In addition, three different geometries (axial channels, and axial plus radial channels with and without baffles) were evaluated from a heat transfer viewpoint.
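As a rough illustration of how a heat transfer coefficient scales with flow rate in forced convection, the sketch below uses the classical Dittus-Boelter correlation with invented, transformer-oil-like property values. This is an assumption for illustration only: the paper's model is a numerical simulation, and Dittus-Boelter strictly applies to fully turbulent channel flow.

```python
# Dittus-Boelter correlation for turbulent forced convection in a duct:
#   Nu = 0.023 * Re**0.8 * Pr**0.4,   h = Nu * k / D_h
def h_dittus_boelter(velocity, d_h, rho, mu, cp, k):
    re = rho * velocity * d_h / mu      # Reynolds number
    pr = cp * mu / k                    # Prandtl number
    nu = 0.023 * re**0.8 * pr**0.4      # Nusselt number (heating case)
    return nu * k / d_h                 # heat transfer coefficient, W/(m^2 K)

# Illustrative hot-oil-like properties (not measured values from the paper).
h1 = h_dittus_boelter(2.0, 0.02, rho=870.0, mu=0.003, cp=1900.0, k=0.12)
h2 = h_dittus_boelter(4.0, 0.02, rho=870.0, mu=0.003, cp=1900.0, k=0.12)
# Doubling the flow velocity raises h by a factor of 2**0.8, about 1.74.
```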

  3. Comparison between iteration schemes for three-dimensional coordinate-transformed saturated-unsaturated flow model

    NASA Astrophysics Data System (ADS)

    An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu

    2012-11-01

Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient on an individual iteration basis; however, it converges more slowly than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because there the Newton method requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by calculating differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM. However, this method involves the additional cost of taking an approximation at each Krylov iteration. In this paper, we evaluate the efficiency and robustness of three iteration methods, the Picard, Newton, and Newton-Krylov methods, for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
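The contrast between the two classical iteration schemes can be seen on a toy scalar problem; this is a minimal sketch of fixed-point (Picard) versus Newton iteration, not the Richards'-equation solver of the paper.

```python
import numpy as np

def picard(g, x0, tol=1e-10, max_iter=200):
    """Fixed-point (Picard) iteration: x_{k+1} = g(x_k)."""
    x = x0
    for k in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

def newton(f, df, x0, tol=1e-10, max_iter=200):
    """Newton iteration: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for k in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, k + 1
    return x, max_iter

# Solve x = cos(x), i.e. f(x) = x - cos(x) = 0, from the same start.
x_p, it_p = picard(np.cos, 1.0)
x_n, it_n = newton(lambda x: x - np.cos(x), lambda x: 1 + np.sin(x), 1.0)
# Newton converges in far fewer (but individually costlier) iterations.
```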

  4. Recent Developments of the Local Effect Model (LEM) - Implications of clustered damage on cell transformation

    NASA Astrophysics Data System (ADS)

    Elsässer, Thilo

    Exposure to radiation of high-energy and highly charged ions (HZE) causes a major risk to human beings, since in long term space explorations about 10 protons per month and about one HZE particle per month hit each cell nucleus (1). Despite the larger number of light ions, the high ionisation power of HZE particles and its corresponding more complex damage represents a major hazard for astronauts. Therefore, in order to get a reasonable risk estimate, it is necessary to take into account the entire mixed radiation field. Frequently, neoplastic cell transformation serves as an indicator for the oncogenic potential of radiation exposure. It can be measured for a small number of ion and energy combinations. However, due to the complexity of the radiation field it is necessary to know the contribution to the radiation damage of each ion species for the entire range of energies. Therefore, a model is required which transfers the few experimental data to other particles with different LETs. We use the Local Effect Model (LEM) (2) with its cluster extension (3) to calculate the relative biological effectiveness (RBE) of neoplastic transformation. It was originally developed in the framework of hadrontherapy and is applicable for a large range of ions and energies. The input parameters for the model include the linear-quadratic parameters for the induction of lethal events as well as for the induction of transformation events per surviving cell. Both processes of cell inactivation and neoplastic transformation per viable cell are combined to eventually yield the RBE for cell transformation. We show that the Local Effect Model is capable of predicting the RBE of neoplastic cell transformation for a broad range of ions and energies. The comparison of experimental data (4) with model calculations shows a reasonable agreement. We find that the cluster extension results in a better representation of the measured RBE values. With this model it should be possible to better
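The linear-quadratic bookkeeping described above, lethal events plus transformation events per surviving cell, can be sketched as follows. All parameter values are hypothetical placeholders, not fitted Local Effect Model parameters.

```python
import numpy as np

# Hypothetical linear-quadratic (LQ) parameters for the two processes.
alpha_s, beta_s = 0.2, 0.05     # lethal events: per Gy, per Gy^2
alpha_t, beta_t = 1e-4, 2e-5    # transformation events: per Gy, per Gy^2

def surviving_fraction(D):
    """LQ cell survival: S(D) = exp(-(alpha_s*D + beta_s*D^2))."""
    return np.exp(-(alpha_s * D + beta_s * D**2))

def transformants_per_survivor(D):
    """Mean transformation events induced per viable (surviving) cell."""
    return alpha_t * D + beta_t * D**2

def transformation_frequency(D):
    """Observed transformants per irradiated cell: the two processes
    combined, as described in the abstract."""
    return surviving_fraction(D) * transformants_per_survivor(D)
```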

  5. Data Warehouse Design from HL7 Clinical Document Architecture Schema.

    PubMed

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2015-01-01

This paper proposes a semi-automatic approach to extract clinical information structured in an HL7 Clinical Document Architecture (CDA) document and transform it into a data warehouse dimensional model schema. It is based on a conceptual framework, published in a previous work, that maps the dimensional model primitives to CDA elements. Its feasibility is demonstrated through a case study based on the analysis of vital signs gathered during laboratory tests.
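A minimal sketch of the extract-and-map idea, using a deliberately simplified XML fragment: real HL7 CDA documents use the `urn:hl7-org:v3` namespace and a far richer structure, so the element and attribute names below are illustrative assumptions only, not the paper's mapping.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified CDA-like fragment (not real HL7 CDA markup).
cda = """<document>
  <patient id="p1"/>
  <observation code="8480-6" name="Systolic BP" value="120" unit="mmHg" time="2015-01-01"/>
  <observation code="8462-4" name="Diastolic BP" value="80" unit="mmHg" time="2015-01-01"/>
</document>"""

root = ET.fromstring(cda)
patient = root.find("patient").get("id")

# Map each observation to a star-schema fact row; the code/name pair
# feeds a measurement dimension table.
dim_measurement = {}
fact_vitals = []
for obs in root.iter("observation"):
    code = obs.get("code")
    dim_measurement[code] = obs.get("name")
    fact_vitals.append({
        "patient_key": patient,
        "measurement_key": code,
        "time_key": obs.get("time"),
        "value": float(obs.get("value")),
        "unit": obs.get("unit"),
    })
```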

  6. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
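The overlap-and-save idea at the heart of these architectures can be demonstrated in a few lines. This is a plain single-threaded NumPy sketch of the block-FFT filtering method itself, not the parallel subconvolution hardware design.

```python
import numpy as np

def overlap_save(x, h, nfft=64):
    """Filter x with FIR taps h via overlap-save: process the signal in
    blocks of nfft samples that overlap by len(h)-1, and keep only the
    valid (non-aliased) part of each circular convolution."""
    m = len(h)
    hop = nfft - (m - 1)              # new samples consumed per block
    H = np.fft.rfft(h, nfft)
    x_pad = np.concatenate([np.zeros(m - 1), x])
    out = []
    for start in range(0, len(x), hop):
        block = x_pad[start:start + nfft]
        if len(block) < nfft:
            block = np.pad(block, (0, nfft - len(block)))
        yb = np.fft.irfft(np.fft.rfft(block) * H, nfft)
        out.append(yb[m - 1:])        # drop the aliased first m-1 samples
    return np.concatenate(out)[:len(x)]

x = np.random.default_rng(0).standard_normal(1000)
h = np.ones(8) / 8.0                  # a simple moving-average subfilter
y = overlap_save(x, h)
# y matches direct linear convolution truncated to the input length.
```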

  7. Brachypodium sylvaticum, a model for perennial grasses: transformation and inbred line development.

    PubMed

    Steinwand, Michael A; Young, Hugh A; Bragg, Jennifer N; Tobias, Christian M; Vogel, John P

    2013-01-01

Perennial species offer significant advantages as crops including reduced soil erosion, lower energy inputs after the first year, deeper root systems that access more soil moisture, and decreased fertilizer inputs due to the remobilization of nutrients at the end of the growing season. These advantages are particularly relevant for emerging biomass crops and it is projected that perennial grasses will be among the most important dedicated biomass crops. The advantages offered by perennial crops could also prove favorable for incorporation into annual grain crops like wheat, rice, sorghum and barley, especially under the drier and more variable climate conditions projected for many grain-producing regions. Thus, it would be useful to have a perennial model system to test biotechnological approaches to crop improvement and for fundamental research. The perennial grass Brachypodium sylvaticum is a candidate for such a model because it is diploid, has a small genome, is self-fertile, has a modest stature, and has a short generation time. Its close relationship to the annual model Brachypodium distachyon will facilitate comparative studies and allow researchers to leverage the resources developed for B. distachyon. Here we report on the development of two keystone resources that are essential for a model plant: high-efficiency transformation and inbred lines. Using Agrobacterium tumefaciens-mediated transformation we achieved an average transformation efficiency of 67%. We also surveyed the genetic diversity of 19 accessions from the National Plant Germplasm System using SSR markers and created 15 inbred lines.

  8. A nonlocal shell model for mode transformation in single-walled carbon nanotubes.

    PubMed

    Shi, M X; Li, Q M; Huang, Y

    2009-11-11

    A second-order strain gradient nonlocal shell model is established to study the mode transformation in single-walled carbon nanotubes (SWCNTs). Nonlocal length is calibrated carefully for SWCNTs in reference to molecular dynamics (MD) simulations through analysis of nonlocal length effects on the frequencies of the radial breathing mode (RBM) and circumferential flexural modes (CFMs) and its effects on mode transformation. All analyses show that only a negative second-order nonlocal shell model is appropriate to SWCNTs. Nonlocal length is evidently related to vibration modes and the radius-to-thickness ratio. It is found that a nonlocal length is approximately 0.1 nm in an average sense when RBM frequency is concerned. A nonlocal length of 0.122-0.259 nm is indicated for the mode transformation in a selected group of armchair SWCNTs. 2:1 and 1:1 internal resonances are found for the same SWCNT based on different models, which implies that the internal resonance mechanism depends on the model employed. Furthermore, it is shown that an effective thickness of approximately 0.1 nm is more appropriate to SWCNTs than 0.066 nm.

  9. Brachypodium sylvaticum, a Model for Perennial Grasses: Transformation and Inbred Line Development

    PubMed Central

    Steinwand, Michael A.; Young, Hugh A.; Bragg, Jennifer N.; Tobias, Christian M.; Vogel, John P.

    2013-01-01

Perennial species offer significant advantages as crops including reduced soil erosion, lower energy inputs after the first year, deeper root systems that access more soil moisture, and decreased fertilizer inputs due to the remobilization of nutrients at the end of the growing season. These advantages are particularly relevant for emerging biomass crops and it is projected that perennial grasses will be among the most important dedicated biomass crops. The advantages offered by perennial crops could also prove favorable for incorporation into annual grain crops like wheat, rice, sorghum and barley, especially under the drier and more variable climate conditions projected for many grain-producing regions. Thus, it would be useful to have a perennial model system to test biotechnological approaches to crop improvement and for fundamental research. The perennial grass Brachypodium sylvaticum is a candidate for such a model because it is diploid, has a small genome, is self-fertile, has a modest stature, and has a short generation time. Its close relationship to the annual model Brachypodium distachyon will facilitate comparative studies and allow researchers to leverage the resources developed for B. distachyon. Here we report on the development of two keystone resources that are essential for a model plant: high-efficiency transformation and inbred lines. Using Agrobacterium tumefaciens-mediated transformation we achieved an average transformation efficiency of 67%. We also surveyed the genetic diversity of 19 accessions from the National Plant Germplasm System using SSR markers and created 15 inbred lines. PMID:24073248

  10. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform

    PubMed Central

    Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong

    2016-01-01

    We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features. PMID:27304979
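A minimal sketch of the adaptive non-harmonic signal model s(t) = A(t) W(phi(t)): a slowly varying amplitude, a nearly linear phase, and a non-sinusoidal 1-periodic wave-shape function. The pulse-like shape below is invented for illustration; it is not the fitted wave-shape function or the SST pipeline of the paper.

```python
import numpy as np

t = np.linspace(0, 10, 4000)
A = 1.0 + 0.1 * np.cos(0.3 * t)           # slowly varying amplitude A(t)
phase = 1.2 * t + 0.05 * np.sin(0.5 * t)  # phase; instantaneous freq = phase'

def wave_shape(u):
    """A 1-periodic, non-sinusoidal wave-shape function W (a crude
    pulse-like shape, purely illustrative)."""
    u = u % 1.0
    return np.exp(-80 * (u - 0.3) ** 2) + 0.4 * np.exp(-200 * (u - 0.55) ** 2)

s = A * wave_shape(phase)                 # the adaptive non-harmonic signal
```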

  11. Current transformer model with hysteresis for improving the protection response in electrical transmission systems

    NASA Astrophysics Data System (ADS)

    Matussek, Robert; Dzienis, Cezary; Blumschein, Jörg; Schulte, Horst

    2014-12-01

    In this paper, a generic enhanced protection current transformer (CT) model with saturation effects and transient behavior is presented. The model is used for the purpose of analysis and design of power system protection algorithms. Three major classes of protection CT have been modeled which all take into account the nonlinear inductance with remanence effects. The transient short-circuit currents in power systems are simulated under CT saturation condition. The response of a common power system protection algorithm with respect to robustness to nominal parameter variations and sensitivity against maloperation is demonstrated by simulation studies.

  12. Logical-Rule Models of Classification Response Times: A Synthesis of Mental-Architecture, Random-Walk, and Decision-Bound Approaches

    ERIC Educational Resources Information Center

    Fific, Mario; Little, Daniel R.; Nosofsky, Robert M.

    2010-01-01

    We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli…

  13. Modeling Transport and Transformation of Dissolved and Sediment Associated Mercury in Rivers

    NASA Astrophysics Data System (ADS)

    Massoudieh, A.; Ginn, T. R.; Bombardelli, F. A.

    2005-12-01

Mercury is a hazardous metal in the environment. In many cases, mercury in water bodies is present as a distinct, high-concentration layer in the bed sediments that is often buried but can be exposed by erosion of the top layer. In this research, a one-dimensional flow and transport model representing the transport of mercury in the river is coupled with several one-dimensional models incorporating diffusive transport, transformation, and sorption of mercury species in the bed sediments. Transport of dissolved and particle-associated mercury in the water, and thus the effect of erosion and resuspension of particles on the transport process, is taken into consideration. A set of one-dimensional reactive transport sub-models is utilized to model the release, adsorption, and burial of mercury species in the bed sediments. The coupled transport and transformation of a wide variety of mercury species, as well as of the biomass and organic matter that drive the production of methylmercury, are also implemented in the model. Model parameters are obtained from available thermodynamic databases and by calibration of the one-dimensional reactive transport model to available column and batch study data. After appropriate calibration, the model can be used to predict the transport of mercury and the production of methylmercury, to examine various remediation scenarios, and to evaluate the effect of anthropogenic activities on the resuspension or burial of mercury species in water systems.
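The water-column part of such a model reduces to a one-dimensional advection-dispersion equation with a first-order transformation sink. The explicit upwind sketch below is illustrative only (all coefficients are invented and the coupling to bed sediments is omitted); it is not the authors' code.

```python
import numpy as np

# dC/dt = -u dC/dx + D d2C/dx2 - k C   (1-D, first-order transformation)
nx, dx, dt = 100, 1.0, 0.1
u, D, k = 0.5, 0.1, 0.01          # velocity, dispersion, reaction rate
C = np.zeros(nx)
C[0] = 1.0                        # constant upstream boundary concentration
for _ in range(500):              # march to t = 50
    adv = -u * (C[1:-1] - C[:-2]) / dx                  # upwind advection
    dif = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2    # central dispersion
    C[1:-1] += dt * (adv + dif - k * C[1:-1])
    C[0], C[-1] = 1.0, C[-2]      # Dirichlet inlet, zero-gradient outlet
# The concentration front has advected roughly u*t = 25 grid lengths.
```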

  14. Computational Nanophotonics: Model Optical Interactions and Transport in Tailored Nanosystem Architectures

    SciTech Connect

    Stockman, Mark; Gray, Steven

    2014-02-21

    The program is directed toward development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on the development of computational methods and prediction and computational theory of new phenomena of optical energy transfer and transformation on the extreme nanoscale (down to a few nanometers).

  15. Portable scalable architecture for model-based FLIR ATR and SAR/FLIR fusion

    NASA Astrophysics Data System (ADS)

    Stephan, Larisa; Childs, Martin B.; Pujara, Neeraj

    1999-08-01

For an on-board automatic target recognition (ATR) system to be useful to the crew of a military platform, the ATR must reduce the mission risk or increase its lethality. This utility may be increased by shortening the operator's time to interrogate possible threat targets or by enabling weapon deployment at a greater range. Obstacles to deployment of ATRs have included an excess of false cues and difficulty in adapting developmental configurations to processing architectures that can operate in the required environmental conditions without serious performance degradation. We present a real-time FLIR ATR software architecture that is scalable across multiple processors and readily portable to a number of hardware platforms. Fusion with cues from an on- or off-board synthetic aperture radar (SAR) provides a significant reduction in the amount of processing required to classify targets while simultaneously increasing the confidence in each target hypothesis. The FLIR ATR and fusion are implemented on commercial off-the-shelf (COTS) processors that are available in ruggedized versions, and the software is constructed to allow portability to other processor families without major disturbance to those parts of the code that embody the algorithm content.

  16. 3D modeling of architectural objects from video data obtained with the fixed focal length lens geometry

    NASA Astrophysics Data System (ADS)

    Deliś, Paulina; Kędzierski, Michał; Fryśkowska, Anna; Wilińska, Michalina

    2013-12-01

The article describes the process of creating 3D models of architectural objects on the basis of video images acquired with a Sony NEX-VG10E fixed focal length video camera. It was assumed that, based on video and Terrestrial Laser Scanning data, it is possible to develop 3D models of architectural objects. The acquisition of video data was preceded by calibration of the video camera. The process of creating 3D models from video data involves the following steps: selection of video frames for the orientation process, orientation of video frames using points with known coordinates from Terrestrial Laser Scanning (TLS), and generation of a TIN model using automatic matching methods. The objects were measured with an impulse laser scanner, a Leica ScanStation 2. The resulting 3D models of architectural objects were compared with 3D models of the same objects for which a self-calibration bundle adjustment was performed; for this purpose, PhotoModeler software was used. To assess the accuracy of the developed 3D models of architectural objects, points with known coordinates from Terrestrial Laser Scanning were used, applying a shortest-distance method. The accuracy analysis showed that 3D models generated from video images differ by about 0.06–0.13 m from the TLS data.
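The shortest-distance accuracy check can be sketched as a brute-force nearest-neighbor computation; the point clouds below are synthetic stand-ins for the photogrammetric model and the TLS reference, not the article's data.

```python
import numpy as np

def shortest_distances(model_pts, reference_pts):
    """For each point of the evaluated 3D model, the distance to the
    nearest reference (e.g. TLS) point: a brute-force version of the
    shortest-distance accuracy assessment."""
    # (N, 1, 3) - (1, M, 3) -> (N, M) pairwise distances, then row minima
    d = np.linalg.norm(model_pts[:, None, :] - reference_pts[None, :, :], axis=2)
    return d.min(axis=1)

rng = np.random.default_rng(1)
tls = rng.uniform(0, 10, size=(500, 3))              # stand-in TLS cloud
model = tls[:100] + rng.normal(0, 0.05, (100, 3))    # model with ~5 cm noise
err = shortest_distances(model, tls)
# err.mean() is on the order of the simulated 5 cm noise level.
```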

  17. Transform Faults and Lithospheric Structure: Insights from Numerical Models and Shipboard and Geodetic Observations

    NASA Astrophysics Data System (ADS)

    Takeuchi, Christopher S.

In this dissertation, I study the influence of transform faults on the structure and deformation of the lithosphere, using shipboard and geodetic observations as well as numerical experiments. I use marine topography, gravity, and magnetics to examine the effects of the large age-offset Andrew Bain transform fault on accretionary processes within two adjacent segments of the Southwest Indian Ridge. I infer from morphology, high gravity, and low magnetization that the extremely cold and thick lithosphere associated with the Andrew Bain strongly suppresses melt production and crustal emplacement to the west of the transform fault. These effects are counteracted by enhanced temperature and melt production near the Marion Hotspot, east of the transform fault. I use numerical models to study the development of lithospheric shear zones underneath continental transform faults (e.g. the San Andreas Fault in California), with a particular focus on thermomechanical coupling and shear heating produced by long-term fault slip. I find that these processes may give rise to long-lived localized shear zones, and that such shear zones may in part control the magnitude of stress in the lithosphere. Localized ductile shear participates in both interseismic loading and postseismic relaxation, and predictions of models including shear zones are within observational constraints provided by geodetic and surface heat flow data. I numerically investigate the effects of shear zones on three-dimensional postseismic deformation. I conclude that the presence of a thermally-activated shear zone minimally impacts postseismic deformation, and that thermomechanical coupling alone is unable to generate sufficient localization for postseismic relaxation within a ductile shear zone to kinematically resemble that by aseismic fault creep (afterslip). I find that the current record of geodetic observations of postseismic deformation does not provide robust discriminating power between candidate linear and

  18. Transformer modeling for low- and mid-frequency electromagnetic transients simulation

    NASA Astrophysics Data System (ADS)

    Lambert, Mathieu

In this work, new models are developed for single-phase and three-phase shell-type transformers for the simulation of low-frequency transients, with the use of the coupled leakage model. This approach has the advantage that it avoids the use of fictitious windings to connect the leakage model to a topological core model, while giving the same response in short-circuit as the indefinite admittance matrix (BCTRAN) model. To further increase the model sophistication, it is proposed to divide windings into coils in the new models. However, short-circuit measurements between coils are never available. Therefore, a novel analytical method is elaborated for this purpose, which allows the calculation in 2-D of short-circuit inductances between coils of rectangular cross-section. The results of this new method are in agreement with the results obtained from the finite element method in 2-D. Furthermore, the assumption that the leakage field is approximately 2-D in shell-type transformers is validated with a 3-D simulation. The outcome of this method is used to calculate the self and mutual inductances between the coils of the coupled leakage model, and the results show good correspondence with terminal short-circuit measurements. Typically, leakage inductances in transformers are calculated from short-circuit measurements and the magnetizing branch is calculated from no-load measurements, assuming that leakages are unimportant for the unloaded transformer and that magnetizing current is negligible during a short-circuit. While the core is assumed to have infinite permeability when calculating short-circuit inductances (a reasonable assumption, since the core's magnetomotive force is negligible during a short-circuit), the same reasoning does not necessarily hold true for leakage fluxes in no-load conditions. This is because the core starts to saturate when the transformer is unloaded. To take this into account, a new analytical method is developed in this

  19. The CMIP5 archive architecture: A system for petabyte-scale distributed archival of climate model data

    NASA Astrophysics Data System (ADS)

    Pascoe, Stephen; Cinquini, Luca; Lawrence, Bryan

    2010-05-01

The Phase 5 Coupled Model Intercomparison Project (CMIP5) will produce a petabyte-scale archive of climate data relevant to future international assessments of climate science (e.g., the IPCC's 5th Assessment Report scheduled for publication in 2013). The infrastructure for the CMIP5 archive must meet many challenges to support this ambitious international project. We describe here the distributed software architecture being deployed worldwide to meet these challenges. The CMIP5 architecture extends the Earth System Grid (ESG) distributed architecture of Datanodes, providing data access and visualisation services, and Gateways, providing the user interface including registration, search, and browse services. Additional features developed for CMIP5 include a publication workflow incorporating quality control and metadata submission, data replication, version control, update notification, and production of citable metadata records. Implementation of these features has been driven by the requirements of reliable global access to over 1 PB of data and consistent citability of data and metadata. Central to the implementation is the concept of Atomic Datasets that are identifiable through a Data Reference Syntax (DRS). Atomic Datasets are immutable to allow them to be replicated and tracked whilst maintaining data consistency. However, since occasional errors in data production and processing are inevitable, new versions can be published and users notified of these updates. As deprecated datasets may be the target of existing citations, they can remain visible in the system. Replication of Atomic Datasets is designed to improve regional access and provide fault tolerance. Several datanodes in the system are designated replicating nodes and hold replicas of a portion of the archive expected to be of broad interest to the community. 
Gateways provide a system-wide interface to users where they can track the version history and location of replicas to select the most appropriate
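The Data Reference Syntax idea, a delimited dataset identifier plus an explicit version, can be sketched as follows. The field list, ordering, and separators here are illustrative assumptions, not the official CMIP5 DRS specification.

```python
# Hypothetical DRS-style field order (illustrative, not the CMIP5 spec).
FIELDS = ["activity", "product", "institute", "model",
          "experiment", "frequency", "realm", "variable", "ensemble"]

def build_drs(**parts):
    """Assemble a dot-delimited dataset identifier with a '#version' tag."""
    version = parts.pop("version")
    return ".".join(parts[f] for f in FIELDS) + "#" + version

def parse_drs(identifier):
    """Recover the named fields and version from an identifier."""
    body, version = identifier.split("#")
    parts = dict(zip(FIELDS, body.split(".")))
    parts["version"] = version
    return parts

drs_id = build_drs(activity="cmip5", product="output", institute="MOHC",
                   model="HadGEM2-ES", experiment="rcp45", frequency="mon",
                   realm="atmos", variable="tas", ensemble="r1i1p1",
                   version="v20100105")
# Versioned identifiers let a deprecated dataset remain citable while a
# corrected version is published alongside it.
```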

  20. Thermodynamic Modeling and Experimental Study of Phase Transformations in Alloys Based on γ-TiAl

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.; Sokolovskii, V. S.; Salishchev, G. A.; Belov, N. A.; Nochovnaya, N. A.

    2016-09-01

Thermo-Calc software is used to model the composition diagram for γ-TiAl-based alloys of the systems Ti-Al-Mo-(4-10) at.% Nb and Ti-Al-Nb-X (X = Cr, Mo, V). The effect of alloying on the critical points and the sequence of phase transformations is established. Changes in the phase composition of alloy TNM-B1 with temperature are analyzed using a polythermal section of the Ti-Al-Nb-Mo system.

  1. A 3-D constitutive model for pressure-dependent phase transformation of porous shape memory alloys.

    PubMed

    Ashrafi, M J; Arghavani, J; Naghdabadi, R; Sohrabpour, S

    2015-02-01

Porous shape memory alloys (SMAs) exhibit the interesting characteristics of porous metals together with the shape memory effect and pseudo-elasticity of SMAs, which make them appropriate for biomedical applications. In this paper, a 3-D phenomenological constitutive model for the pseudo-elastic behavior and shape memory effect of porous SMAs is developed within the framework of irreversible thermodynamics. Compared with micromechanical and computational models, the proposed model is computationally cost-effective and predicts the behavior of porous SMAs under proportional and non-proportional multiaxial loadings. Considering the pressure dependency of phase transformation in porous SMAs, proper internal variables, free energy, and limit functions are introduced. With the aim of numerical implementation, time discretization and a solution algorithm for the proposed model are also presented. Due to the lack of experimental data on multiaxial loading of porous SMAs, we employ a computational simulation method (CSM) together with available experimental data to validate the proposed constitutive model. The method is based on a 3-D finite element model of a representative volume element (RVE) with a random pore pattern. Good agreement between the numerical predictions of the model and CSM results is observed for elastic and phase transformation behaviors in various thermomechanical loadings.
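A drastically simplified 1-D analogue of such a constitutive update (pseudo-elastic loading branch only, with no pressure dependence, porosity, or reverse transformation) illustrates the kind of time-discretized return-mapping step the paper describes; all parameter values are invented for illustration.

```python
import numpy as np

# Illustrative 1-D parameters: elastic modulus, max transformation strain,
# transformation start stress, linear transformation hardening (MPa units).
E, eps_L, sig_s, H = 50e3, 0.04, 300.0, 1000.0

def update(eps, xi):
    """Return (stress, martensite fraction) for total strain eps.
    Trial stress assumes frozen transformation; if it violates the
    transformation limit, xi is returned to the consistency condition
    sigma = sig_s + H*xi, with sigma = E*(eps - eps_L*xi)."""
    sig_trial = E * (eps - eps_L * xi)
    if sig_trial > sig_s + H * xi:               # forward transformation
        xi = float(np.clip((E * eps - sig_s) / (E * eps_L + H), 0.0, 1.0))
    return E * (eps - eps_L * xi), xi

xi = 0.0
strains = np.linspace(0, 0.06, 61)
stresses = []
for eps in strains:                              # monotonic loading path
    sig, xi = update(eps, xi)
    stresses.append(sig)
# The curve shows an elastic branch followed by a transformation
# plateau with slight hardening.
```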

  2. The ecological model web concept: A consultative infrastructure for researchers and decision makers using a Service Oriented Architecture

    NASA Astrophysics Data System (ADS)

    Geller, Gary

    2010-05-01

    Rapid climate and socioeconomic changes may be outrunning society's ability to understand, predict, and respond to change effectively. Decision makers such as natural resource managers want better information about what these changes will be and how the resources they are managing will be affected. Researchers want better understanding of the components and processes of ecological systems, how they interact, and how they respond to change. Nearly all these activities require computer models to make ecological forecasts that can address "what if" questions. However, despite many excellent models in ecology and related disciplines, there is no coordinated model system, that is, a model infrastructure, that researchers or decision makers can consult to gain insight on important ecological questions or help them make decisions. While this is partly due to the complexity of the science, to lack of critical observations, and other issues, limited access to and sharing of models and model outputs is a factor as well. An infrastructure that increased access to and sharing of models and model outputs would benefit researchers, decision makers of all kinds, and modelers. One path to such a "consultative infrastructure" for ecological forecasting is called the Model Web, a concept for an open-ended system of interoperable computer models and databases communicating using a Service Oriented Architecture (SOA). Initially, it could consist of a core of several models, perhaps made interoperable retroactively, and then it could grow gradually as new models or databases were added. Because some models provide basic information of use to many other models, such as simple physical parameters, these "keystone" models are of particular importance in a model web. In the long run, a model web would not be rigidly planned and built; instead, like the World Wide Web, it would grow largely organically, with limited central control, within a framework of broad goals and data exchange

  3. Sacrificial template-directed synthesis of mesoporous magnesium oxide architectures with superior performance for organic dye adsorption [corrected].

    PubMed

    Ai, Lunhong; Yue, Haitao; Jiang, Jing

    2012-09-07

    Mesoporous MgO architectures were successfully synthesized by the direct thermal transformation of the sacrificial oxalate template. The as-prepared mesoporous architectures were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray energy dispersive spectroscopy (EDS), Fourier transform infrared spectroscopy (FTIR), and nitrogen adsorption-desorption techniques. The MgO architectures showed extraordinary adsorption capacity and rapid adsorption rate for removal of Congo red (CR) from water. The maximum adsorption capacity of the MgO architectures toward CR reached 689.7 mg g⁻¹, much higher than most of the previously reported hierarchical adsorbents. The CR removal process was found to obey the Langmuir adsorption model and its kinetics followed a pseudo-second-order rate equation. The superior adsorption performance of the mesoporous MgO architectures could be attributed to the unique mesoporous structure, high specific surface area as well as strong electrostatic interaction.
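
    The two models named in this abstract (the Langmuir isotherm and pseudo-second-order kinetics) have simple closed forms. The sketch below uses the reported capacity q_max = 689.7 mg g⁻¹, but the Langmuir constant K and the rate constant k2 are hypothetical values chosen only for illustration:

    ```python
    # Langmuir isotherm: q_e = q_max * K * c_e / (1 + K * c_e)
    # q_max is the reported capacity; K is a hypothetical example constant.
    def langmuir(c_e, q_max=689.7, K=0.05):
        """Equilibrium uptake q_e (mg/g) at equilibrium concentration c_e (mg/L)."""
        return q_max * K * c_e / (1.0 + K * c_e)

    # Pseudo-second-order kinetics: q_t = k2 * q_e**2 * t / (1 + k2 * q_e * t)
    # k2 (g/(mg*min)) is likewise a hypothetical example constant.
    def pseudo_second_order(t, q_e, k2=1e-4):
        """Uptake q_t (mg/g) at time t (min); q_t approaches q_e as t grows."""
        return k2 * q_e**2 * t / (1.0 + k2 * q_e * t)

    q_eq = langmuir(200.0)                 # uptake at c_e = 200 mg/L
    print(round(q_eq, 1))                  # prints 627.0
    print(round(pseudo_second_order(30.0, q_eq), 1))
    ```

    In a real study K and k2 would be fitted to the measured isotherm and kinetic data rather than assumed.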

  4. A structural equation model analysis of phosphorus transformations in global unfertilized and uncultivated soils

    NASA Astrophysics Data System (ADS)

    Hou, Enqing; Chen, Chengrong; Kuang, Yuanwen; Zhang, Yuguang; Heenan, Marijke; Wen, Dazhi

    2016-09-01

    Understanding the soil phosphorus (P) cycle is a prerequisite for predicting how environmental changes may influence the dynamics and availability of P in soil. We compiled a database of P fractions sequentially extracted by the Hedley procedure and its modification in 626 unfertilized and uncultivated soils worldwide. With this database, we applied structural equation modeling to test hypothetical soil P transformation models and to quantify the importance of different soil P pools and P transformation pathways in shaping soil P availability at a global scale. Our models revealed that soluble inorganic P (Pi, a readily available P pool) was positively and directly influenced by labile Pi, labile organic P (Po), and primary mineral P and negatively and directly influenced by secondary mineral P; soluble Pi was not directly influenced by moderately labile Po or occluded P. The overall effect on soluble Pi was greatest for labile Pi followed by the organic P pools, occluded P, and then primary mineral P; the overall influence from secondary mineral P was small. Labile Pi was directly linked to all other soil P pools and was more strongly linked than soluble Pi to labile Po and primary mineral P. Our study highlights the important roles of labile Pi in mediating P transformations and in determining overall P availability in soils throughout the world.

  5. Phase-field modeling of the beta to omega phase transformation in Zr–Nb alloys

    SciTech Connect

    Yeddu, Hemantha Kumar; Lookman, Turab

    2015-05-01

    A three-dimensional elastoplastic phase-field model is developed, using the Finite Element Method (FEM), for modeling the athermal beta to omega phase transformation in Zr–Nb alloys by including plastic deformation and strain hardening of the material. The microstructure evolution during athermal transformation as well as under different stress states, e.g. uni-axial tensile and compressive, bi-axial tensile and compressive, shear and tri-axial loadings, is studied. The effects of plasticity, stress states and the stress loading direction on the microstructure evolution as well as on the mechanical properties are studied. The input data corresponding to a Zr – 8 at.% Nb alloy are acquired from experimental studies as well as by using the CALPHAD method. Our simulations show that the four different omega variants grow as ellipsoidal shaped particles. Our results show that due to stress relaxation, the athermal phase transformation occurs slightly more readily in the presence of plasticity compared to that in its absence. The evolution of omega phase is different under different stress states, which leads to the differences in the mechanical properties of the material. The variant selection mechanism, i.e. formation of different variants under different stress loading directions, is also nicely captured by our model.

  6. Phase-field modeling of the beta to omega phase transformation in Zr–Nb alloys

    DOE PAGES

    Yeddu, Hemantha Kumar; Lookman, Turab

    2015-05-01

    A three-dimensional elastoplastic phase-field model is developed, using the Finite Element Method (FEM), for modeling the athermal beta to omega phase transformation in Zr–Nb alloys by including plastic deformation and strain hardening of the material. The microstructure evolution during athermal transformation as well as under different stress states, e.g. uni-axial tensile and compressive, bi-axial tensile and compressive, shear and tri-axial loadings, is studied. The effects of plasticity, stress states and the stress loading direction on the microstructure evolution as well as on the mechanical properties are studied. The input data corresponding to a Zr – 8 at.% Nb alloy are acquired from experimental studies as well as by using the CALPHAD method. Our simulations show that the four different omega variants grow as ellipsoidal shaped particles. Our results show that due to stress relaxation, the athermal phase transformation occurs slightly more readily in the presence of plasticity compared to that in its absence. The evolution of omega phase is different under different stress states, which leads to the differences in the mechanical properties of the material. The variant selection mechanism, i.e. formation of different variants under different stress loading directions, is also nicely captured by our model.

  7. Educational transformation in upper-division physics: The Science Education Initiative model, outcomes, and lessons learned

    NASA Astrophysics Data System (ADS)

    Chasteen, Stephanie V.; Wilcox, Bethany; Caballero, Marcos D.; Perkins, Katherine K.; Pollock, Steven J.; Wieman, Carl E.

    2015-12-01

    [This paper is part of the Focused Collection on Upper Division Physics Courses.] In response to the need for a scalable, institutionally supported model of educational change, the Science Education Initiative (SEI) was created as an experiment in transforming course materials and faculty practices at two institutions—University of Colorado Boulder (CU) and University of British Columbia. We find that this departmentally focused model of change, which includes an explicit focus on course transformation as supported by a discipline-based postdoctoral education specialist, was generally effective in impacting courses and faculty across the institution. In CU's Department of Physics, the SEI effort focused primarily on upper-division courses, creating high-quality course materials, approaches, and assessments, and demonstrating an impact on student learning. We argue that the SEI implementation in the CU Physics Department, as compared to that in other departments, achieved more extensive impacts on specific course materials, and high-quality assessments, due to guidance by the physics education research group—but with more limited impact on the departmental faculty as a whole. We review the process and progress of the SEI Physics at CU and reflect on lessons learned in the CU Physics Department in particular. These results are useful in considering both institutional and faculty-led models of change and course transformation.

  8. Managers as Role Models for Health: Moderators of the Relationship of Transformational Leadership With Employee Exhaustion and Cynicism.

    PubMed

    Kranabetter, Caroline; Niessen, Cornelia

    2016-05-19

    Drawing on social learning literature, this study examined managers' health awareness and health behavior (health-related self-regulation) as a moderator of the relationships between transformational leadership and employee exhaustion and cynicism. In 2 organizations, employees (n = 247; n = 206) rated their own exhaustion and cynicism, and their managers' transformational leadership. Managers (n = 57; n = 30) assessed their own health-related self-regulation. Multilevel modeling showed that, as expected, managers' health awareness moderated the relationship between transformational leadership and employee exhaustion and cynicism. Employees experienced less exhaustion and cynicism when transformational leaders were aware of their own health. Managers' health behavior moderated the relationship between transformational leadership and employee exhaustion in 1 organization, but not in the other. With respect to health behavior, we found no significant results for employee cynicism. In sum, the results indicate that when managers are role models for health, employees will benefit more from the transformational leadership style.

  9. Collaborative Proposal: Transforming How Climate System Models are Used: A Global, Multi-Resolution Approach

    SciTech Connect

    Estep, Donald

    2013-04-15

    Despite the great interest in regional modeling for both weather and climate applications, regional modeling is not yet at the stage that it can be used routinely and effectively for climate modeling of the ocean. The overarching goal of this project is to transform how climate models are used by developing and implementing a robust, efficient, and accurate global approach to regional ocean modeling. To achieve this goal, we will use theoretical and computational means to resolve several basic modeling and algorithmic issues. The first task is to develop techniques for transitioning between parameterized and high-fidelity regional ocean models as the discretization grid transitions from coarse to fine regions. The second task is to develop estimates for the error in scientifically relevant quantities of interest that provide a systematic way to automatically determine where refinement is needed in order to obtain accurate simulations of dynamic and tracer transport in regional ocean models. The third task is to develop efficient, accurate, and robust time-stepping schemes for variable spatial resolution discretizations used in regional ocean models of dynamics and tracer transport. The fourth task is to develop frequency-dependent eddy viscosity finite element and discontinuous Galerkin methods and study their performance and effectiveness for simulation of dynamics and tracer transport in regional ocean models. These four tasks share common difficulties and will be approached using a common computational and mathematical toolbox. This is a multidisciplinary project involving faculty and postdocs from Colorado State University, Florida State University, and Penn State University along with scientists from Los Alamos National Laboratory. The completion of the tasks listed within the discussion of the four sub-projects will go a long way towards meeting our goal of developing superior regional ocean models that will transform how climate system models are used.

  10. L-py: an L-system simulation framework for modeling plant architecture development based on a dynamic language.

    PubMed

    Boudon, Frédéric; Pradal, Christophe; Cokelaer, Thomas; Prusinkiewicz, Przemyslaw; Godin, Christophe

    2012-01-01

    The study of plant development requires increasingly powerful modeling tools to help understand and simulate the growth and functioning of plants. In the last decade, the formalism of L-systems has emerged as a major paradigm for modeling plant development. Previous implementations of this formalism were based on static languages, i.e., languages that require explicit definition of variable types before using them. These languages are often efficient but involve quite a lot of syntactic overhead, thus restricting the flexibility of use for modelers. In this work, we present an adaptation of L-systems to the Python language, a popular and powerful open-license dynamic language. We show that the use of dynamic language properties makes it possible to enhance the development of plant growth models: (i) by keeping a simple syntax while allowing for high-level programming constructs, (ii) by making code execution easy and avoiding compilation overhead, (iii) by allowing a high level of model reusability and the building of complex modular models, and (iv) by providing powerful solutions to integrate MTG data structures (a common way to represent plants at several scales) into L-systems, thus enabling the use of a wide spectrum of computer tools based on MTGs developed for plant architecture. We then illustrate the use of L-Py in real applications to build complex models or to teach plant modeling in the classroom.
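
    As a language-neutral illustration of the parallel-rewriting formalism this abstract refers to (a bare sketch, not the actual L-Py API), an L-system derivation fits in a few lines of Python:

    ```python
    # Minimal L-system: every symbol of the string is rewritten in parallel
    # by the production rules; symbols without a rule are copied unchanged.
    def derive(axiom, rules, steps):
        s = axiom
        for _ in range(steps):
            s = "".join(rules.get(symbol, symbol) for symbol in s)
        return s

    # Lindenmayer's classic algae model: A -> AB, B -> A
    print(derive("A", {"A": "AB", "B": "A"}, 4))  # prints ABAABABA
    ```

    L-Py builds on this idea but adds parametric modules, arbitrary Python code inside productions, and integration with MTG data structures.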

  11. L-Py: An L-System Simulation Framework for Modeling Plant Architecture Development Based on a Dynamic Language

    PubMed Central

    Boudon, Frédéric; Pradal, Christophe; Cokelaer, Thomas; Prusinkiewicz, Przemyslaw; Godin, Christophe

    2012-01-01

    The study of plant development requires increasingly powerful modeling tools to help understand and simulate the growth and functioning of plants. In the last decade, the formalism of L-systems has emerged as a major paradigm for modeling plant development. Previous implementations of this formalism were made based on static languages, i.e., languages that require explicit definition of variable types before using them. These languages are often efficient but involve quite a lot of syntactic overhead, thus restricting the flexibility of use for modelers. In this work, we present an adaptation of L-systems to the Python language, a popular and powerful open-license dynamic language. We show that the use of dynamic language properties makes it possible to enhance the development of plant growth models: (i) by keeping a simple syntax while allowing for high-level programming constructs, (ii) by making code execution easy and avoiding compilation overhead, (iii) by allowing a high-level of model reusability and the building of complex modular models, and (iv) by providing powerful solutions to integrate MTG data-structures (that are a common way to represent plants at several scales) into L-systems and thus enabling to use a wide spectrum of computer tools based on MTGs developed for plant architecture. We then illustrate the use of L-Py in real applications to build complex models or to teach plant modeling in the classroom. PMID:22670147

  12. Project Integration Architecture: Architectural Overview

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2001-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. By being a single, self-revealing architecture, the ability to develop single tools, for example a single graphical user interface, to span all applications is enabled. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications becomes possible. Object encapsulation further allows information to become in a sense self-aware, knowing things such as its own dimensionality and providing functionality appropriate to its kind.

  13. Application of Distribution Transformer Thermal Life Models to Electrified Vehicle Charging Loads Using Monte-Carlo Method: Preprint

    SciTech Connect

    Kuss, M.; Markel, T.; Kramer, W.

    2011-01-01

    Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service life decrements, and in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis will review distribution transformer load models developed in the IEC 60076 standard, and apply the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy, thus increasing costs to utilities and consumers. A Monte-Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining expected transformer aging rate will be developed, based on the energy needs of plug-in vehicles loading a residential transformer.
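
    The Monte-Carlo sweep described above can be sketched in a few lines. This is a hedged toy model: the flat base load, the quadratic hot-spot rise, the charging window, and the kW/kVA numbers are all hypothetical; only the Arrhenius-style aging-acceleration factor F_AA = exp(15000/383 - 15000/(theta_h + 273)) follows the form used in transformer loading guides for a hot-spot temperature theta_h in degrees C.

    ```python
    import math
    import random

    def aging_factor(theta_h):
        """Relative aging acceleration; equals 1.0 at a 110 C hot spot."""
        return math.exp(15000.0 / 383.0 - 15000.0 / (theta_h + 273.0))

    def simulate_day(n_vehicles, kva_rating=25.0, charge_kw=3.3, rng=random):
        base = [12.0] * 24           # assumed flat 12 kW household base load
        for _ in range(n_vehicles):  # each vehicle charges 4 h, start 18:00-23:00
            start = rng.randint(18, 23)
            for h in range(start, start + 4):
                base[h % 24] += charge_kw
        # toy hot-spot temperature: ambient plus a rise quadratic in load ratio
        return sum(aging_factor(25.0 + 55.0 * (load / kva_rating) ** 2)
                   for load in base) / 24.0

    def expected_aging(n_vehicles, trials=100, seed=1):
        """Average daily aging factor over `trials` random charging scenarios."""
        rng = random.Random(seed)
        return sum(simulate_day(n_vehicles, rng=rng) for _ in range(trials)) / trials

    for n in (0, 2, 4):
        print(n, round(expected_aging(n), 4))
    ```

    A full study would sweep transformer size and charging rate as well, and would replace the toy hot-spot proxy with the standard's thermal model.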

  14. A High-Rate, Single-Crystal Model including Phase Transformations, Plastic Slip, and Twinning

    SciTech Connect

    Addessio, Francis L.; Bronkhorst, Curt Allan; Bolme, Cynthia Anne; Brown, Donald William; Cerreta, Ellen Kathleen; Lebensohn, Ricardo A.; Lookman, Turab; Luscher, Darby Jon; Mayeur, Jason Rhea; Morrow, Benjamin M.; Rigg, Paulo A.

    2016-08-09

    An anisotropic, rate-dependent, single-crystal approach for modeling materials under the conditions of high strain rates and pressures is provided. The model includes the effects of large deformations, nonlinear elasticity, phase transformations, and plastic slip and twinning. It is envisioned that the model may be used to examine these coupled effects on the local deformation of materials that are subjected to ballistic impact or explosive loading. The model is formulated using a multiplicative decomposition of the deformation gradient. A plate impact experiment on a multi-crystal sample of titanium was conducted. The particle velocities at the back surface of three crystal orientations relative to the direction of impact were measured. Molecular dynamics simulations were conducted to investigate the details of the high-rate deformation and pursue issues related to the phase transformation for titanium. Simulations using the single crystal model were conducted and compared to the high-rate experimental data for the impact loaded single crystals. The model was found to capture the features of the experiments.

  15. A micromechanics-inspired constitutive model for shape-memory alloys that accounts for initiation and saturation of phase transformation

    NASA Astrophysics Data System (ADS)

    Kelly, Alex; Stebner, Aaron P.; Bhattacharya, Kaushik

    2016-12-01

    A constitutive model to describe macroscopic elastic and transformation behaviors of polycrystalline shape-memory alloys is formulated using an internal variable thermodynamic framework. In a departure from prior phenomenological models, the proposed model treats initiation, growth kinetics, and saturation of transformation distinctly, consistent with physics revealed by recent multi-scale experiments and theoretical studies. Specifically, the proposed approach captures the macroscopic manifestations of three micromechanial facts, even though microstructures are not explicitly modeled: (1) Individual grains with favorable orientations and stresses for transformation are the first to nucleate martensite, and the local nucleation strain is relatively large. (2) Then, transformation interfaces propagate according to growth kinetics to traverse networks of grains, while previously formed martensite may reorient. (3) Ultimately, transformation saturates prior to 100% completion as some unfavorably-oriented grains do not transform; thus the total transformation strain of a polycrystal is modest relative to the initial, local nucleation strain. The proposed formulation also accounts for tension-compression asymmetry, processing anisotropy, and the distinction between stress-induced and temperature-induced transformations. Consequently, the model describes thermoelastic responses of shape-memory alloys subject to complex, multi-axial thermo-mechanical loadings. These abilities are demonstrated through detailed comparisons of simulations with experiments.

  16. Implementation of subject-specific collagen architecture of cartilage into a 2D computational model of a knee joint--data from the Osteoarthritis Initiative (OAI).

    PubMed

    Räsänen, Lasse P; Mononen, Mika E; Nieminen, Miika T; Lammentausta, Eveliina; Jurvelin, Jukka S; Korhonen, Rami K

    2013-01-01

    A subject-specific collagen architecture of cartilage, obtained from T(2) mapping of 3.0 T magnetic resonance imaging (MRI; data from the Osteoarthritis Initiative), was implemented into a 2D finite element model of a knee joint with fibril-reinforced poroviscoelastic cartilage properties. For comparison, we created two models with alternative collagen architectures, addressing the potential inaccuracies caused by the nonoptimal estimation of the collagen architecture from MRI. Two further models with constant depth-dependent zone thicknesses obtained from the literature were also created. The mechanical behavior of the models was analyzed and compared under axial impact loading of 846N. Compared to the model with patient-specific collagen architecture, the cartilage model without tangentially oriented collagen fibrils in the superficial zone showed up to 69% decrease in maximum principal stress and fibril strain and 35% and 13% increase in maximum principal strain and pore pressure, respectively, in the superficial layers of the cartilage. The model with increased thickness for the superficial and middle zones, as obtained from the literature, demonstrated at most 73% increase in stress, 143% increase in fibril strain, and 26% and 23% decrease in strain and pore pressure, respectively, in the intermediate cartilage. The present results demonstrate that a computational model of a knee joint with the collagen architecture of cartilage estimated from patient-specific MRI or from the literature leads to different stress and strain distributions. The findings also suggest that minor errors in the analysis of collagen architecture from MRI, for example due to the analysis method or MRI resolution, can lead to alterations in knee joint stresses and strains.

  17. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

    PubMed Central

    Zhao, Guoliang; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897

  18. Problems in mechanistic theoretical models for cell transformation by ionizing radiation

    SciTech Connect

    Chatterjee, A.; Holley, W.R.

    1991-10-01

    A mechanistic model based on yields of double strand breaks has been developed to determine the dose response curves for cell transformation frequencies. At its present stage the model is applicable to immortal cell lines and to various qualities (X-rays, Neon and Iron) of ionizing radiation. Presently, we have considered four types of processes which can lead to activation phenomena: (1) point mutation events on a regulatory segment of selected oncogenes, (2) inactivation of suppressor genes, through point mutation, (3) deletion of a suppressor gene by a single track, and (4) deletion of a suppressor gene by two tracks.

  19. Tensor product model transformation based adaptive integral-sliding mode controller: equivalent control method.

    PubMed

    Zhao, Guoliang; Sun, Kaibiao; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model.

  20. A Class of Semiparametric Transformation Models for Survival Data with a Cured Proportion

    PubMed Central

    Choi, Sangbum; Huang, Xuelin; Chen, Yi-Hau

    2013-01-01

    We propose a new class of semiparametric regression models based on a multiplicative frailty assumption with a discrete frailty, which may account for a cured subgroup in the population. The cure model framework is then recast as a problem with a transformation model. The proposed models can explain a broad range of nonproportional hazards structures along with a cured proportion. An efficient and simple algorithm based on the martingale process is developed to locate the nonparametric maximum likelihood estimator. Unlike existing expectation-maximization based methods, our approach directly maximizes a nonparametric likelihood function, and the calculation of consistent variance estimates is immediate. The proposed method is useful for resolving identifiability features embedded in semiparametric cure models. Simulation studies are presented to demonstrate the finite sample properties of the proposed method. A case study of stage III soft-tissue sarcoma is given as an illustration. PMID:23760878

  1. Efficient Estimation of Semiparametric Transformation Models for the Cumulative Incidence of Competing Risks.

    PubMed

    Mao, Lu; Lin, D Y

    2017-03-01

    The cumulative incidence is the probability of failure from the cause of interest over a certain time period in the presence of other risks. A semiparametric regression model proposed by Fine and Gray (1999) has become the method of choice for formulating the effects of covariates on the cumulative incidence. Its estimation, however, requires modeling of the censoring distribution and is not statistically efficient. In this paper, we present a broad class of semiparametric transformation models which extends the Fine and Gray model, and we allow for unknown causes of failure. We derive the nonparametric maximum likelihood estimators (NPMLEs) and develop simple and fast numerical algorithms using the profile likelihood. We establish the consistency, asymptotic normality, and semiparametric efficiency of the NPMLEs. In addition, we construct graphical and numerical procedures to evaluate and select models. Finally, we demonstrate the advantages of the proposed methods over the existing ones through extensive simulation studies and an application to a major study on bone marrow transplantation.

  2. Development of the Architectural Simulation Model for Future Launch Systems and its Application to an Existing Launch Fleet

    NASA Technical Reports Server (NTRS)

    Rabadi, Ghaith

    2005-01-01

    A significant portion of lifecycle costs for launch vehicles are generated during the operations phase. Research indicates that operations costs can account for a large percentage of the total life-cycle costs of reusable space transportation systems. These costs are largely determined by decisions made early during conceptual design. Therefore, operational considerations are an important part of vehicle design and concept analysis process that needs to be modeled and studied early in the design phase. However, this is a difficult and challenging task due to uncertainties of operations definitions, the dynamic and combinatorial nature of the processes, and lack of analytical models and the scarcity of historical data during the conceptual design phase. Ultimately, NASA would like to know the best mix of launch vehicle concepts that would meet the missions launch dates at the minimum cost. To answer this question, we first need to develop a model to estimate the total cost, including the operational cost, to accomplish this set of missions. In this project, we have developed and implemented a discrete-event simulation model using ARENA (a simulation modeling environment) to determine this cost assessment. Discrete-event simulation is widely used in modeling complex systems, including transportation systems, due to its flexibility, and ability to capture the dynamics of the system. The simulation model accepts manifest inputs including the set of missions that need to be accomplished over a period of time, the clients (e.g., NASA or DoD) who wish to transport the payload to space, the payload weights, and their destinations (e.g., International Space Station, LEO, or GEO). A user of the simulation model can define an architecture of reusable or expendable launch vehicles to achieve these missions. Launch vehicles may belong to different families where each family may have it own set of resources, processing times, and cost factors. 
The goal is to capture the required
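ARENA models are built graphically, but the core discrete-event logic the abstract describes can be sketched in a few lines of standard-library Python. Everything here is a hypothetical stand-in (vehicle count, processing time, cost rate), not figures or structure from the study:

```python
import heapq

def simulate_manifest(missions, vehicles, process_time, cost_per_day):
    """Minimal discrete-event simulation of a launch manifest.

    missions: list of (earliest_launch_day, payload) tuples
    vehicles: number of interchangeable vehicles in one family
    process_time: ground-processing days per flight (illustrative)
    cost_per_day: processing cost rate (illustrative units)
    """
    # Each vehicle becomes free at time 0; the heap holds "vehicle ready" events.
    ready = [0.0] * vehicles
    heapq.heapify(ready)
    total_cost, launches = 0.0, []
    for earliest, payload in sorted(missions):
        t = heapq.heappop(ready)           # next free vehicle
        launch = max(t, earliest)          # cannot launch before the mission date
        total_cost += process_time * cost_per_day
        launches.append((launch, payload))
        heapq.heappush(ready, launch + process_time)  # turnaround time
    return total_cost, launches
```

With one vehicle and two missions, the second launch slips by one full turnaround, which is exactly the kind of schedule/cost interaction the full model captures at much greater fidelity.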

  3. Policy insights from the nutritional food market transformation model: the case of obesity prevention.

    PubMed

    Struben, Jeroen; Chan, Derek; Dubé, Laurette

    2014-12-01

    This paper presents a system dynamics policy model of nutritional food market transformation, tracing over-time interactions between the nutritional quality of supply, consumer food choice, population health, and governmental policy. Applied to the Canadian context and with body mass index as the primary outcome, we examine policy portfolios for obesity prevention, including (1) industry self-regulation efforts, (2) health- and nutrition-sensitive governmental policy, and (3) efforts to foster health- and nutrition-sensitive innovation. This work provides novel theoretical and practical insights on drivers of nutritional market transformations, highlighting the importance of integrative policy portfolios to simultaneously shift food demand and supply for successful and self-sustaining nutrition and health sensitivity. We discuss model extensions for deeper and more comprehensive linkages of nutritional food market transformation with supply, demand, and policy in agrifood and health/health care. These aim toward system design and policy that can proactively, and with greater impact, scale, and resilience, address single as well as double malnutrition in varying country settings.
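The system dynamics idea of coupled supply and demand stocks can be sketched with a toy Euler integration. The stock names, rate constants, and policy term below are illustrative assumptions, not the paper's model:

```python
def simulate_market(steps=100, dt=0.1, policy_pressure=0.3):
    """Toy system-dynamics sketch: nutritional quality of supply and the
    healthy share of consumer demand reinforce each other; governmental
    policy adds a constant pull on supply. All values are illustrative."""
    quality, healthy_share = 0.2, 0.2
    for _ in range(steps):
        # Supply quality improves with demand pull plus policy pressure.
        dq = (0.5 * healthy_share + policy_pressure - quality) * 0.2
        # Demand shifts toward healthier options as quality rises.
        dh = (quality - healthy_share) * 0.3
        quality += dq * dt
        healthy_share += dh * dt
    return quality, healthy_share
```

The reinforcing loop is the point: raising `policy_pressure` moves both stocks, whereas shifting only demand or only supply stalls, mirroring the paper's argument for integrative policy portfolios.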

  4. Application of a partnership model for transformative and sustainable international development.

    PubMed

    Powell, Dorothy L; Gilliss, Catherine L; Hewitt, Hermi H; Flint, Elizabeth P

    2010-01-01

    There are differences of intent and impact between short-term and long-term engagement of U.S. academic institutions with communities of need in developing nations. Global health programs that produce long-term transformative change rather than transient relief are more likely to be sustainable and in ethical harmony with expressed needs of a region or community. This article explores characteristics of successful ethical partnerships in global health and the challenges that threaten them, introducing a consensus community engagement model as a framework for building relationships, evolving an understanding of needs, and collaboratively developing solutions and responses to priority health needs in underserved regions of the world. The community engagement model is applied to a case study of an initiative by a U.S. school of nursing to establish long-term relationships with the nursing community in the Caribbean region with the goal of promoting transformative change through collaborative development of programs and services addressing health care needs of the region's growing elderly population and the increasing prevalence of noncommunicable chronic diseases. Progress of this ongoing long-term relationship is analyzed in the context of the organizational, philosophical, ethical, and resource commitments embodied in this approach to initiation of transformative and sustainable improvements in public health.

  5. Spatial model of lifting scheme in wavelet transforms and image compression

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Li, Gang; Wang, Guoyin

    2002-03-01

    Wavelet transforms built via the lifting scheme are called second-generation wavelet transforms. However, in some lifting schemes the coefficients are derived mathematically from first-generation wavelets, which limits the choice of better-performing filters for lifting. The spatial structures of the lifting scheme are also simple. For example, the classical lifting scheme, predicting-updating, is two-stage, and most researchers simply adopt this structure. In addition, in most published designs the lifting filters are not only hard to obtain but also fixed. In our former work, we presented a new three-stage lifting scheme, predicting-updating-adapting, whose filter designs are no longer fixed. In this paper, we continue to study the spatial model of the lifting scheme. A group of general multi-stage lifting schemes is derived and designed. All lifting filters are designed in the spatial domain, and appropriate mathematical methods are selected. The resulting coefficients are flexible and can be adjusted to different data. We give the mathematical design details in this paper. Finally, all of the designed lifting models are applied to image compression, and satisfactory results are achieved.
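The classical two-stage predict/update structure that the abstract builds on can be sketched as a Haar-style lifting step. This is the textbook baseline, not the paper's adaptive multi-stage design; its defining property is exact invertibility regardless of the filters chosen:

```python
def lifting_forward(x):
    """One level of a Haar-style lifting transform (predict, then update).
    x must have even length."""
    even, odd = x[0::2], x[1::2]
    # Predict: estimate each odd sample from its even neighbour; keep the residual.
    detail = [o - e for e, o in zip(even, odd)]
    # Update: correct the evens so the coarse signal preserves the local mean.
    coarse = [e + d / 2 for e, d in zip(even, detail)]
    return coarse, detail

def lifting_inverse(coarse, detail):
    """Undo the steps in reverse order: invert update, then invert predict."""
    even = [c - d / 2 for c, d in zip(coarse, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Because each stage is undone by simply flipping its sign, any predict or update filter, including the adaptive ones the paper designs, yields perfect reconstruction.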

  6. Designing an architectural style for Pervasive Healthcare systems.

    PubMed

    Rafe, Vahid; Hajvali, Masoumeh

    2013-04-01

    Nowadays, Pervasive Healthcare (PH) systems are considered an important research area. These systems have a dynamic structure and configuration; therefore, an appropriate method for designing them is necessary. The publish/subscribe (pub/sub) architecture is one of the architectures well suited to supporting such systems. PH systems are safety critical, so errors can have disastrous results. To prevent such problems, a powerful analytical tool is required, which makes a proper formal language such as graph transformation systems necessary for developing these systems. But even when software engineers use such high-level methodologies, errors may occur in the system under design. Hence, whether the system model satisfies all of its requirements should be investigated automatically and formally. In this paper, a dynamic architectural style for developing PH systems is presented. Then, the behavior of these systems is modeled and evaluated using the GROOVE toolset. The results of the analysis show its high reliability.
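The pub/sub decoupling the abstract relies on can be shown in a minimal broker. The topic name and message fields are invented for illustration; a real PH system would add the formal analysis the paper argues for:

```python
from collections import defaultdict

class Broker:
    """Minimal publish/subscribe broker: publishers and subscribers know
    only topic names, never each other, which is what lets a pervasive
    system reconfigure dynamically."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for cb in self._subs[topic]:
            cb(message)
```

A monitoring component can attach or detach at runtime without touching the sensor that publishes, which is the dynamic-configuration property that makes the style attractive for PH systems.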

  7. Experimental Architecture.

    ERIC Educational Resources Information Center

    Alter, Kevin

    2003-01-01

    Describes the design of the Centre for Architectural Structures and Technology at the University of Manitoba, including the educational context and design goals. Includes building plans and photographs. (EV)

  8. An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles

    NASA Technical Reports Server (NTRS)

    Brown, Clifford; Bridges, James

    2003-01-01

    Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.

  9. SVD-based modeling for image texture classification using wavelet transformation.

    PubMed

    Selvan, Srinivasan; Ramakrishnan, Srinivasan

    2007-11-01

    This paper introduces a new model for image texture classification based on wavelet transformation and singular value decomposition. The probability density function of the singular values of the wavelet transformation coefficients of image textures is modeled as an exponential function. The model parameter of the exponential function is estimated using the maximum likelihood estimation technique. Truncation of lower singular values is employed to classify textures in the presence of noise. The Kullback-Leibler distance (KLD) between estimated model parameters of image textures is used as a similarity metric to perform the classification with a minimum distance classifier. The exponential function permits closed-form expressions for the estimate of the model parameter and the computation of the KLD. These closed-form expressions reduce the computational complexity of the proposed approach. Experimental results are presented to demonstrate the effectiveness of this approach on all 111 textures from the Brodatz database. The experimental results demonstrate that the proposed approach improves recognition rates on large databases while using fewer parameters. The proposed approach achieves higher recognition rates than the traditional sub-band energy-based approach, the hybrid IMM/SVM approach, and the GGD-based approach.
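The two closed forms the abstract credits for the low computational cost are standard for exponential densities, and can be written down directly (the wavelet/SVD feature-extraction step is omitted here):

```python
import math

def mle_rate(samples):
    """MLE of the exponential rate parameter: lambda_hat = 1 / sample mean."""
    return len(samples) / sum(samples)

def kld_exponential(lam_p, lam_q):
    """Closed-form Kullback-Leibler distance between two exponential
    densities p and q with rates lam_p and lam_q:
    KL(p||q) = log(lam_p/lam_q) + lam_q/lam_p - 1."""
    return math.log(lam_p / lam_q) + lam_q / lam_p - 1.0
```

A minimum-distance classifier then assigns a query texture to the class whose stored rate parameter minimizes `kld_exponential`, so classification costs one log and one division per class instead of an integral.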

  10. Position-specific isotope modeling of organic micropollutants transformation through different reaction pathways.

    PubMed

    Jin, Biao; Rolle, Massimo

    2016-03-01

    The degradation of organic micropollutants occurs via different reaction pathways. Compound specific isotope analysis is a valuable tool to identify such degradation pathways in different environmental systems. We propose a mechanism-based modeling approach that provides a quantitative framework to simultaneously evaluate concentration as well as bulk and position-specific multi-element isotope evolution during the transformation of organic micropollutants. The model explicitly simulates position-specific isotopologues for those atoms that experience isotope effects and, thereby, provides a mechanistic description of isotope fractionation occurring at different molecular positions. To demonstrate specific features of the modeling approach, we simulated the degradation of three selected organic micropollutants: dichlorobenzamide (BAM), isoproturon (IPU) and diclofenac (DCF). The model accurately reproduces the multi-element isotope data observed in previous experimental studies. Furthermore, it precisely captures the dual element isotope trends characteristic of different reaction pathways as well as their range of variation consistent with observed bulk isotope fractionation. It was also possible to directly validate the model capability to predict the evolution of position-specific isotope ratios with available experimental data. Therefore, the approach is useful both for a mechanism-based evaluation of experimental results and as a tool to explore transformation pathways in scenarios for which position-specific isotope data are not yet available.
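The bulk-isotope backbone of such models is the textbook Rayleigh relation; the paper's contribution is to resolve it per molecular position, which is not reproduced here. A minimal sketch of the bulk relation:

```python
def rayleigh_ratio(r0, f, alpha):
    """Rayleigh fractionation: isotope ratio of the remaining substrate
    when a fraction f (0 < f <= 1) is left, given fractionation factor
    alpha = k_heavy / k_light. R = R0 * f**(alpha - 1).
    This is the standard bulk relation, not the paper's
    position-specific isotopologue model."""
    return r0 * f ** (alpha - 1.0)

def delta_notation(r, r_std):
    """Convert a ratio to per-mil delta notation relative to a standard."""
    return (r / r_std - 1.0) * 1000.0
```

With `alpha < 1` (a normal kinetic isotope effect) the remaining substrate becomes progressively enriched in the heavy isotope as degradation proceeds, which is the signal that compound-specific isotope analysis exploits to identify the pathway.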

  11. In search of improving the numerical accuracy of the k - ɛ model by a transformation to the k - τ model

    NASA Astrophysics Data System (ADS)

    Dijkstra, Yoeri M.; Uittenbogaard, Rob E.; van Kester, Jan A. Th. M.; Pietrzak, Julie D.

    2016-08-01

    This study presents a detailed comparison between the k - ɛ and k - τ turbulence models. It is demonstrated that the numerical accuracy of the k - ɛ turbulence model can be improved in geophysical and environmental high Reynolds number boundary layer flows. This is achieved by transforming the k - ɛ model to the k - τ model, so that both models use the same physical parametrisation. The models therefore only differ in numerical aspects. A comparison between the two models is carried out using four idealised one-dimensional vertical (1DV) test cases. The advantage of a 1DV model is that it is feasible to carry out convergence tests with grids containing 5 to several thousands of vertical layers. It is shown that the k - τ model is more accurate than the k - ɛ model in stratified and non-stratified boundary layer flows for grid resolutions between 10 and 100 layers. The k - τ model also shows more monotonic convergence behaviour than the k - ɛ model. The price for the improved accuracy is about 20% more computational time for the k - τ model, which is due to additional terms in the model equations. The improved performance of the k - τ model is explained by the linearity of τ in the boundary layer and the better defined boundary condition.
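The transformation at the heart of the comparison is the change of variable τ = k/ɛ; under it both models reduce to the same eddy-viscosity parametrisation, so any difference in results is purely numerical. A quick check (c_mu = 0.09 is the standard model constant; the sample values are arbitrary):

```python
def eddy_viscosity_k_eps(k, eps, c_mu=0.09):
    """Eddy viscosity from the k-epsilon parametrisation: nu_t = c_mu k^2 / eps."""
    return c_mu * k * k / eps

def eddy_viscosity_k_tau(k, tau, c_mu=0.09):
    """Eddy viscosity from the k-tau parametrisation, with tau = k / eps:
    nu_t = c_mu k tau."""
    return c_mu * k * tau
```

Substituting τ = k/ɛ makes the two expressions algebraically identical, which is what lets the study attribute the accuracy gap at coarse resolutions to discretisation rather than physics.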

  12. Effect of numerical diffusion on the water mass transformation in eddy-resolving models

    NASA Astrophysics Data System (ADS)

    Urakawa, L. Shogo; Hasumi, Hiroyasu

    2014-02-01

    This study investigates the effect of numerical diffusion associated with advection schemes on water mass transformation in an eddy-resolving model. The effect of numerical diffusion is evaluated as the residual between the total water mass transformation and the explicit water mass transformation: the former is calculated as the sum of the meridional streamfunction and the temporal change rate of an isopycnal surface depth, and the latter is directly calculated with the use of the tendency equation of density. This method is used to investigate the dependency of numerical diffusion on explicit diffusivity. It is found that idealized channel experiments fall into three regimes according to the magnitude of explicit diffusivity: numerical diffusion, transitional, and explicit diffusion regimes. The numerical diffusion regime is defined as the regime where changes in explicit diffusion do not significantly impact the solution. The magnitude of numerical diffusion is independent of the explicit diffusivity there. In the transitional regime, explicit (numerical) diffusion works more (less) with higher explicit diffusivity. Explicit and numerical diffusion are comparably important there. The explicit diffusion becomes significantly large and the numerical diffusion is almost negligible in the explicit diffusion regime. The total diffusion effect on water mass transformation there is considerably larger than in the two other regimes. Two experiments are conducted with a Southern Ocean model under a realistic configuration. These belong to the numerical diffusion and transitional regimes. The model becomes a little too diffusive in the latter experiment. This result, together with the results of the channel experiments, indicates that adopting a diffusion coefficient in the explicit diffusion regime in order to reduce levels of numerical diffusion is not an adequate option for a realistic Southern Ocean simulation.
It indicates that numerical diffusion is inevitable for eddy

  13. [Transformation of carbonate minerals in a cyano-bacterial mat in the course of laboratory modeling].

    PubMed

    Zaĭtseva, L V; Orleanskiĭ, V K; Alekseev, A O; Ushatinskaia, G T; Gerasimenko, L M

    2007-01-01

    A laboratory model of a cyano-bacterial mat with mineral layers of carbonates was used to examine the dynamics of the transformation of calcium-magnesium carbonate under the conditions of a soda lake. The activity of various organisms of the cyanobacterial community results in conditions under which the Ca-Mg carbonate precipitate undergoes changes. The crystal lattice of the initial carbonate is restructured; its mineralogical composition changes depending on the conditions of the mat. In magnesium calcites, which are formed under such low-temperature conditions, a rudimentary cation adjustment can occur with the formation of dolomite domains. These experiments confirm the hypothesis that the dolomite found in stromatolites is of a secondary origin and can be formed in the course of transformation of Ca-Mg carbonates under alkaline conditions in an alkaliphilic cyanobacterial community.

  14. Array CGH data modeling and smoothing in Stationary Wavelet Packet Transform domain

    PubMed Central

    Huang, Heng; Nguyen, Nha; Oraintara, Soontorn; Vo, An

    2008-01-01

    Background Array-based comparative genomic hybridization (array CGH) is a highly efficient technique, allowing the simultaneous measurement of genomic DNA copy number at hundreds or thousands of loci and the reliable detection of local one-copy-level variations. Characterization of these DNA copy number changes is important for both the basic understanding of cancer and its diagnosis. In order to develop effective methods to identify aberration regions from array CGH data, much recent research has focused on both smoothing-based and segmentation-based data processing. In this paper, we propose a stationary wavelet packet transform based approach to smooth array CGH data. Our purpose is to remove CGH noise across the whole frequency range while preserving the true signal by using a bivariate model. Results In both synthetic and real CGH data, the Stationary Wavelet Packet Transform (SWPT) is the best wavelet transform for analyzing the CGH signal across the whole frequency range. We also introduce a new bivariate shrinkage model which captures the relationship between the noisy CGH coefficients at two scales of the SWPT. Before smoothing, symmetric extension is applied as a preprocessing step to preserve information at the borders. Conclusion We have designed the SWPT and the SWPT-Bi methods, which use the stationary wavelet packet transform with hard thresholding and the new bivariate shrinkage estimator, respectively, to smooth the array CGH data. We demonstrate the effectiveness of our approach through theoretical and experimental exploration of a set of array CGH data, including both synthetic data and real data. The comparison results show that our method outperforms the previous approaches. PMID:18831782
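Bivariate shrinkage couples each wavelet coefficient to its parent at the next coarser scale. The sketch below uses the well-known Sendur-Selesnick form of the estimator, which the abstract's model resembles; the paper's exact formula may differ:

```python
import math

def bivariate_shrink(w1, w2, sigma_n, sigma):
    """Shrink a noisy wavelet coefficient w1 using its parent w2 at the
    next coarser scale (Sendur-Selesnick bivariate shrinkage form).
    sigma_n: noise standard deviation, sigma: signal standard deviation.
    A coefficient is kept only if it is jointly large with its parent."""
    mag = math.sqrt(w1 * w1 + w2 * w2)
    if mag == 0.0:
        return 0.0
    shrink = max(0.0, mag - math.sqrt(3.0) * sigma_n ** 2 / sigma)
    return w1 * shrink / mag
```

The parent dependence is what distinguishes this from plain hard thresholding: an isolated small coefficient is zeroed, while one supported by a strong parent survives, which helps keep true single-copy steps in the CGH profile.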

  15. Architecture and morphology of coral reef sequences. Modeling and observations from uplifting islands of SE Sulawesi, Indonesia

    NASA Astrophysics Data System (ADS)

    Pastier, Anne-Morwenn; Husson, Laurent; Bezos, Antoine; Pedoja, Kevin; Elliot, Mary; Hafidz, Abdul; Imran, Muhammad; Lacroix, Pascal; Robert, Xavier

    2016-04-01

    During the Late Neogene, sea level oscillations have profoundly shaped the morphology of the coastlines of intertropical zones, wherein relative sea level simultaneously controlled reef expansion and erosion of earlier reef bodies. In uplifted domains like SE Sulawesi, the sequences of fossil reefs display a variety of fossil morphologies. Similarly, the morphologies of the modern reefs are highly variable, including cliff notches, narrow fringing reefs, wide flat terraces, and barrier reefs. In this region, where uplift rates vary rapidly laterally, the entire set of morphologies is displayed within short distances. We developed a numerical model that predicts the architecture of fossil reef sequences and applied it to observations from SE Sulawesi, accounting, amongst other parameters, for reef growth, coastal erosion, and uplift rates. The observations that we use to calibrate our models are mostly the morphology of both the onshore (dGPS and high-resolution Pleiades DEM) and offshore (sonar) coast, as well as U-Th radiometrically dated coral samples. Our method allows unravelling the spatial and temporal evolution of large domains in map view. Our analysis indicates that the architecture and morphology of uplifting coastlines are almost systematically polyphased (as attested by samples of different ages within a unique terrace), which assigns to erosion a primordial role, comparable to that of reef growth. Our models also reproduce the variety of modern morphologies, which are chiefly dictated by the uplift rates and the pre-existing morphology of the substratum, itself responding to the joint effects of reef building and subsequent erosion. In turn, we find that fossil and modern morphologies can be inverted for uplift rates rather precisely, as the parametric window of each specific morphology is often narrow.

  16. A functional–structural model for radiata pine (Pinus radiata) focusing on tree architecture and wood quality

    PubMed Central

    Fernández, M. Paulina; Norero, Aldo; Vera, Jorge R.; Pérez, Eduardo

    2011-01-01

    Background and Aims Functional–structural models are interesting tools to relate environmental and management conditions with forest growth. Their three-dimensional images can reveal important characteristics of wood used for industrial products. Like virtual laboratories, they can be used to evaluate relationships among species, sites and management, and to support silvicultural design and decision processes. Our aim was to develop a functional–structural model for radiata pine (Pinus radiata) given its economic importance in many countries. Methods The plant model uses the L-system language. The structure of the model is based on operational units, which obey particular rules, and execute photosynthesis, respiration and morphogenesis, according to their particular characteristics. Plant allometry is adhered to so that harmonic growth and plant development are achieved. Environmental signals for morphogenesis are used. Dynamic turnover guides the normal evolution of the tree. Monthly steps allow for detailed information of wood characteristics. The model is independent of traditional forest inventory relationships and is conceived as a mechanistic model. For model parameterization, three databases which generated new information relating to P. radiata were analysed and incorporated. Key Results Simulations under different and contrasting environmental and management conditions were run and statistically tested. The model was validated against forest inventory data for the same sites and times and against true crown architectural data. The performance of the model for 6-year-old trees was encouraging. Total height, diameter and lengths of growth units were adequately estimated. Branch diameters were slightly overestimated. Wood density values were not satisfactory, but the cyclical pattern and increase of growth rings were reasonably well modelled. Conclusions The model was able to reproduce the development and growth of the species based on mechanistic
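The L-system formalism the model is written in boils down to parallel string rewriting. A generic rewriter is a few lines; the sample rule set in the test (Lindenmayer's classic algae system) is purely illustrative, not the radiata pine grammar:

```python
def lsystem(axiom, rules, steps):
    """Apply an L-system's production rules in parallel for a number of
    steps. rules maps a symbol to its replacement string; symbols without
    a rule are copied unchanged. Plant models attach geometry and
    physiology (photosynthesis, respiration, morphogenesis) to the
    resulting symbols."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

In a functional–structural model each symbol is an operational unit carrying state (age, biomass, orientation), and each monthly step both rewrites the string and updates that state; the string rewriting shown here is only the structural half.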

  17. First evaluation of the CPU, GPGPU and MIC architectures for real time particle tracking based on Hough transform at the LHC

    NASA Astrophysics Data System (ADS)

    Halyo, V.; LeGresley, P.; Lujan, P.; Karpusenko, V.; Vladimirov, A.

    2014-04-01

    Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on multi-core Intel i7-3770 and Intel Xeon E5-2697v2 CPUs, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi 7120 coprocessor. Preliminary time performance will be presented.
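The Hough transform underlying the tracking algorithm is a voting scheme: every hit votes for all parameter-space cells consistent with it, and real tracks appear as heavily voted cells. A toy straight-line version (the LHC algorithm works in helix parameters and is heavily optimized per architecture; this sketch only illustrates the idea):

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180, r_bin=1.0):
    """Vote in (theta, r) space for each hit, where a line is
    r = x*cos(theta) + y*sin(theta), and return the most-voted cell
    with its vote count. Bin widths are illustrative."""
    votes = Counter()
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            r = x * math.cos(theta) + y * math.sin(theta)
            votes[(i, round(r / r_bin))] += 1
    return votes.most_common(1)[0]
```

The inner loop is embarrassingly parallel over hits and angle bins, which is precisely why GPUs and the Xeon Phi are natural targets for this algorithm in a trigger context.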

  18. Modeling along-axis variations in fault architecture in the Main Ethiopian Rift: implications for Nubia-Somalia kinematics

    NASA Astrophysics Data System (ADS)

    Erbello, Asfaw; Corti, Giacomo; Sani, Federico; Kidane, Tesfaye

    2016-04-01

    The Main Ethiopian Rift (MER), at the northern termination of the East African Rift, is an ideal place to gain insights into the long-term motion between Nubia and Somalia. The rift is indeed one of the few places along the plate boundary where the deformation is narrow: its evolution is thus strictly related to the kinematics of the two major plates, whereas south of the Turkana depression a two-plate model for the EARS is too simplistic, as extension occurs along both the Western and Eastern branches and different microplates are present between the two major plates. Despite its importance, the kinematics responsible for the development and evolution of the MER is still a matter of debate: indeed, whereas the Quaternary-present kinematics of rifting is rather well constrained, the plate kinematics driving the initial, Mio-Pliocene stages of extension is still not clear, and different hypotheses have been put forward, including: polyphase rifting, with a change in direction of extension from NW-SE extension to E-W extension; constant Miocene-recent NW-SE extension; constant Miocene-recent NE-SW extension; constant, post-11 Ma extension consistent with the GPS-derived kinematics (i.e., roughly E-W to ESE-WNW). To shed additional light on this controversy and to test these different hypotheses, in this contribution we use new crustal-scale analogue models to analyze the along-strike variations in fault architecture in the MER and their relations with the rift trend, plate motion and the resulting Miocene-recent kinematics of rifting. The extension direction is indeed one of the most important parameters controlling the architecture of continental rifts, and the relative abundance and orientation of the different fault sets that develop during oblique rifting is typically a function of the angle between the extension direction and the orthogonal to the rift trend (i.e., the obliquity angle). Since the trend of the MER varies along strike, and consequently it is

  19. Phenomenological modeling of induced transformation anisotropy in shape memory alloy actuators

    NASA Astrophysics Data System (ADS)

    Hartl, Darren J.; Solomou, Alexandros; Lagoudas, Dimitris C.; Saravanos, Dimitris

    2012-04-01

    This paper considers new extensions to a three-dimensional constitutive model originally developed by Lagoudas and co-workers. The proposed model accurately and robustly captures the highly anisotropic transformation strain generation and recovery observed in actuator components that have been subjected to common material processing and training methods. A constant back stress tensor is introduced into the model, which is implemented in an exact form for simple tension/torsion loading as well as into a commercial finite element code to perform a 3-D analysis of a Shape Memory Alloy (SMA) torque tube actuator subjected to different loading schemes. Numerical correlations between predicted and available experimental results demonstrate the accuracy of the model.

  20. An ordinary differential equation model for the multistep transformation to cancer.

    PubMed

    Spencer, Sabrina L; Berryman, Matthew J; García, José A; Abbott, Derek

    2004-12-21

    Cancer is viewed as a multistep process whereby a normal cell is transformed into a cancer cell through the acquisition of mutations. We reduce the complexities of cancer progression to a simple set of underlying rules that govern the transformation of normal cells to malignant cells. In doing so, we derive an ordinary differential equation model that explores how the balance of angiogenesis, cell death rates, genetic instability, and replication rates give rise to different kinetics in the development of cancer. The key predictions of the model are that cancer develops fastest through a particular ordering of mutations and that mutations in genes that maintain genomic integrity would be the most deleterious type of mutations to inherit. In addition, we perform a sensitivity analysis on the parameters included in the model to determine the probable contribution of each. This paper presents a novel approach to viewing the genetic basis of cancer from a systems biology perspective and provides the groundwork for other models that can be directly tied to clinical and molecular data.
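The multistep idea reduces to a chain of compartments: cells with i mutations acquire mutation i+1 at some rate. A toy Euler-integrated version of such a chain (the rates and the absence of growth, death, and angiogenesis terms are simplifications; the paper's model is richer):

```python
def mutation_chain(rates, n0=1.0, t_end=50.0, dt=0.01):
    """Euler integration of a linear mutation chain: n[i] is the
    population fraction carrying i mutations, and rates[i] is the rate
    at which it acquires mutation i+1. Illustrative sketch only; it
    omits the replication, death, and angiogenesis dynamics of the
    full ODE model."""
    n = [n0] + [0.0] * len(rates)
    steps = int(t_end / dt)
    for _ in range(steps):
        # Evaluate all flows first so the update is a simultaneous Euler step.
        flows = [rates[i] * n[i] for i in range(len(rates))]
        for i, f in enumerate(flows):
            n[i] -= f * dt
            n[i + 1] += f * dt
    return n
```

Because the flows only move mass down the chain, the total is conserved; reordering the rates changes how fast the final compartment fills, which is the kind of effect behind the paper's prediction that a particular ordering of mutations develops cancer fastest.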