Science.gov

Sample records for architectural model transformations

  1. A Concept Transformation Learning Model for Architectural Design Learning Process

    ERIC Educational Resources Information Center

    Wu, Yun-Wu; Weng, Kuo-Hua; Young, Li-Ming

    2016-01-01

    Generally, in the foundation course of architectural design, much emphasis is placed on teaching of the basic design skills without focusing on teaching students to apply the basic design concepts in their architectural designs or promoting students' own creativity. Therefore, this study aims to propose a concept transformation learning model to…

  2. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.

  3. Architectural Models

    ERIC Educational Resources Information Center

    Levenson, Harold E.; Hurni, Andre

    1978-01-01

    Suggests building models as a way to reinforce and enhance related subjects such as architectural drafting, structural carpentry, etc., and discusses time, materials, scales, tools or equipment needed, how to achieve realistic special effects, and the types of projects that can be built (model of complete building, a panoramic model, and model…

  4. Modeling dynamic reciprocity: Engineering three-dimensional culture models of breast architecture, function, and neoplastic transformation

    PubMed Central

    Nelson, Celeste M.; Bissell, Mina J.

    2010-01-01

    In order to understand why cancer develops as well as predict the outcome of pharmacological treatments, we need to model the structure and function of organs in culture so that our experimental manipulations occur under physiological contexts. This review traces the history of the development of a prototypic example, the three-dimensional (3D) model of the mammary gland acinus. We briefly describe the considerable information available on both normal mammary gland function and breast cancer generated by the current model and present future challenges that will require an increase in its complexity. We propose the need for engineered tissues that faithfully recapitulate their native structures to allow a greater understanding of tissue function, dysfunction, and potential therapeutic intervention. PMID:15963732

  5. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models with human interaction restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in natural language, so verification of the consistency of these diagrams is needed to identify requirements errors at an early stage of the development process. This verification is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Our method can therefore be used to check the practicability (feasibility) of software architecture models.
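    The idea of machine-checkable consistency rules can be illustrated with a toy example (ours, not the paper's actual rule set; the dictionary encoding of the diagrams is hypothetical): every action in an activity-diagram-like structure must invoke an operation declared in the class-diagram-like structure.

```python
# Hypothetical mini-encoding of two UML-like diagrams as plain dictionaries.
class_diagram = {
    "Order": {"operations": {"submit", "cancel"}},
    "Invoice": {"operations": {"issue"}},
}

activity_diagram = [
    {"action": "submit", "on_class": "Order"},
    {"action": "issue", "on_class": "Invoice"},
]

def check_consistency(classes, activities):
    """Return the list of violations: activity steps whose action is not
    a declared operation on the class they reference."""
    violations = []
    for step in activities:
        ops = classes.get(step["on_class"], {}).get("operations", set())
        if step["action"] not in ops:
            violations.append(step)
    return violations
```

A real MDA toolchain would express such rules over the UML metamodel rather than ad-hoc dictionaries; the point is only that consistency becomes a mechanical check.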

  6. Protocol Architecture Model Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    NASA's Glenn Research Center (GRC) defines and develops advanced technology for high-priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: Unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: Unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: Multicast Communications (or "Multicasting"), 1 Spacecraft to N Ground Receivers, N Ground Transmitters to 1 Ground Receiver via a Spacecraft.

  7. Transforming Space Missions into Service Oriented Architectures

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Frye, Stuart; Cappelaere, Pat

    2006-01-01

    This viewgraph presentation reviews the vision of sensor web enablement via a Service Oriented Architecture (SOA). A generic example is given of a user finding a service through the Web and initiating a request for the desired observation. The parts that comprise this system and how they interact are reviewed. The advantages of the use of SOA are reviewed.

  8. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for the improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  9. Optical chirp z-transform processor with a simplified architecture.

    PubMed

    Ngo, Nam Quoc

    2014-12-29

    Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings which make fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can be potentially used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system. PMID:25607197
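    The convolution-based CZT idea underlying such processors can be sketched in software (our numerical illustration of Bluestein's identity, not the paper's optical design): the DFT is rewritten as a chirp pre-multiplication, a convolution with a chirp, and a chirp post-multiplication.

```python
import cmath

def dft_via_czt(x):
    """DFT expressed as a chirp-modulated convolution (Bluestein's identity):
    X[k] = w[k] * sum_n (x[n] * w[n]) * h[k-n], where w[n] = exp(-j*pi*n^2/N)
    and h[m] = exp(+j*pi*m^2/N).  Naive O(N^2) evaluation of the convolution."""
    N = len(x)
    w = [cmath.exp(-1j * cmath.pi * n * n / N) for n in range(N)]
    a = [x[n] * w[n] for n in range(N)]
    X = []
    for k in range(N):
        s = sum(a[n] * cmath.exp(1j * cmath.pi * (k - n) ** 2 / N)
                for n in range(N))
        X.append(w[k] * s)
    return X

def dft_direct(x):
    """Reference O(N^2) DFT for comparison."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]
```

Because the identity exp(-j*pi*k^2/N) * exp(-j*pi*n^2/N) * exp(+j*pi*(k-n)^2/N) = exp(-j*2*pi*k*n/N) holds exactly, the two functions agree to rounding error; an optical processor evaluates the same convolution with lightwave components.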

  11. Efficient VLSI architecture for multi-dimensional discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Xiong, Cheng-Yi; Tian, Jin-Wen; Liu, Jian

    2005-10-01

    Efficient VLSI architectures for multi-dimensional (m-D) discrete wavelet transform (DWT), e.g. m = 2, 3, are presented, in which the lifting scheme of DWT is used to efficiently reduce hardware complexity. The parallelism of the 2^m subband transforms in lifting-based m-D DWT is exploited, which efficiently increases the throughput rate of separable m-D DWT. The proposed architecture is composed of m·2^(m-1) 1-D DWT modules working in parallel and pipelined, designed to process 2^m input samples per clock cycle and generate the 2^m subband coefficients synchronously. The total time to compute one level of decomposition for a 2-D image (3-D image sequence) of size N^2 (MN^2) is approximately N^2/4 (MN^2/8) intra-clock cycles (ccs). An efficient line-based architecture framework for both 2D+t and t+2D 3-D DWT is first proposed. Compared with similar works reported in previous literature, the proposed architecture performs well in terms of the product of computation time and hardware cost. The proposed architecture is simple, regular, scalable and well suited for VLSI implementation.
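    The separable structure the abstract relies on (1-D lifting passes reused along each dimension, producing 2^m subbands) can be sketched for m = 2 with the simplest lifting wavelet, the Haar (our software illustration, not the paper's hardware design):

```python
def haar_lifting_1d(signal):
    """One level of 1-D Haar DWT via the lifting scheme:
    split into even/odd samples, predict the odds, update the evens."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]         # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def dwt2d_one_level(image):
    """Separable 2-D DWT: a 1-D pass along every row, then a 1-D pass
    along every column of each half, giving 2^2 = 4 subbands."""
    lo_rows, hi_rows = [], []
    for row in image:
        a, d = haar_lifting_1d(row)
        lo_rows.append(a)
        hi_rows.append(d)

    def column_pass(mat):
        outs = [haar_lifting_1d(list(col)) for col in zip(*mat)]
        low = [list(r) for r in zip(*(a for a, _ in outs))]
        high = [list(r) for r in zip(*(d for _, d in outs))]
        return low, high

    LL, LH = column_pass(lo_rows)   # subband naming convention varies
    HL, HH = column_pass(hi_rows)
    return LL, LH, HL, HH
```

The four column-pass transforms are independent of one another, which is exactly the subband parallelism the VLSI architecture exploits with its parallel 1-D DWT modules.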

  12. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent compared with the direct method.

  13. Matrix-Vector Based Fast Fourier Transformations on SDR Architectures

    NASA Astrophysics Data System (ADS)

    He, Y.; Hueske, K.; Götze, J.; Coersmeier, E.

    2008-05-01

    Today Discrete Fourier Transforms (DFTs) are applied in various radio standards based on OFDM (Orthogonal Frequency Division Multiplex). It is important to achieve fast computation of the DFT, which is usually done using specialized Fast Fourier Transform (FFT) engines. However, in the face of Software Defined Radio (SDR) development, more general (parallel) processor architectures, which are not tailored to FFT computations, are often desirable. Therefore, alternative approaches are required to reduce the complexity of the DFT. Starting from a matrix-vector based description of the FFT idea, we present different factorizations of the DFT matrix which allow a reduction of the complexity that lies between the original DFT and the minimum FFT complexity. The computational complexities of these factorizations and their suitability for implementation on different processor architectures are investigated.
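    One radix-2 factorization step, intermediate between the full O(N^2) matrix-vector DFT and a complete FFT, can be sketched numerically (our illustration, not the paper's specific factorizations):

```python
import cmath

def dft_matrix(N):
    """Full N x N DFT matrix."""
    return [[cmath.exp(-2j * cmath.pi * i * k / N) for k in range(N)]
            for i in range(N)]

def matvec(M, v):
    return [sum(m, start=0j) if False else sum(a * b for a, b in zip(row, v))
            for row, m in ((r, None) for r in M)]

def dft_factored(x):
    """One factorization step: the order-N DFT evaluated as two order-N/2
    DFT matrix-vector products on the even/odd subsequences, recombined
    with twiddle factors (the classic radix-2 decomposition in matrix form)."""
    N = len(x)
    half = dft_matrix(N // 2)
    E = matvec(half, x[0::2])
    O = matvec(half, x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / N) for k in range(N // 2)]
    return ([e + t * o for e, t, o in zip(E, tw, O)] +
            [e - t * o for e, t, o in zip(E, tw, O)])
```

Applied once, this halves the dominant cost from N^2 to roughly N^2/2 multiplications; applying it recursively recovers the full FFT, so the family of partial factorizations spans the complexity range the abstract describes.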

  14. Transformation of legacy network management system to service oriented architecture

    NASA Astrophysics Data System (ADS)

    Sathyan, Jithesh; Shenoy, Krishnananda

    2007-09-01

    Service providers today are facing the challenge of operating and maintaining multiple networks based on multiple technologies. Network Management System (NMS) solutions are used to manage these networks. However, the NMS is tightly coupled with the Element or Core network components, so there are multiple NMS solutions for heterogeneous networks. Current network management solutions are targeted at a variety of independent networks. The widespread popularity of the IP Multimedia Subsystem (IMS) is a clear indication that all of these independent networks will be integrated into a single IP-based infrastructure, referred to as Next Generation Networks (NGN), in the near future. The services, network architectures and traffic patterns in NGN will differ dramatically from current networks. The heterogeneity and complexity of NGN, including concepts like Fixed Mobile Convergence, will bring a number of challenges to network management. The high degree of complexity accompanying network element technology necessitates network management systems (NMS) that can utilize this technology to provide more service interfaces while hiding the inherent complexity. As operators begin to add new networks and expand existing networks to support new technologies and products, the necessity of scalable, flexible and functionally rich NMS systems arises. Another important factor influencing NMS architecture is mergers and acquisitions among the key vendors. Ease of integration is a key impediment in the traditional hierarchical NMS architecture. These requirements trigger the need for an architectural framework that will address the NGNM (Next Generation Network Management) issues seamlessly. This paper presents a unique perspective on bringing service oriented architecture (SOA) to legacy network management systems (NMS). It advocates a staged approach to transforming a legacy NMS to SOA. The architecture at each stage is detailed along with the technical advantages and…

  15. HRST architecture modeling and assessments

    SciTech Connect

    Comstock, D.A.

    1997-01-01

    This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented. © 1997 American Institute of Physics.

  16. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOEpatents

    Chao, Tian-Jy; Kim, Younghun

    2015-01-06

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.

  17. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOEpatents

    Chao, Tian-Jy; Kim, Younghun

    2015-02-03

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.

  18. Predicting and Modeling RNA Architecture

    PubMed Central

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  19. Formalism Challenges of the Cougaar Model Driven Architecture

    NASA Technical Reports Server (NTRS)

    Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.

    2004-01-01

    The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed today. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.

  20. Building Paradigms: Major Transformations in School Architecture (1798-2009)

    ERIC Educational Resources Information Center

    Gislason, Neil

    2009-01-01

    This article provides an historical overview of significant trends in school architecture from 1798 to the present. I divide the history of school architecture into two major phases. The first period falls between 1798 and 1921: the modern graded classroom emerged as a standard architectural feature during this period. The second period, which…

  1. Modeling and analysis of multiprocessor architectures

    NASA Technical Reports Server (NTRS)

    Yalamanchili, S.; Carpenter, T.

    1989-01-01

    Some technologies developed for system level modeling and analysis of algorithms/architectures using an architecture design and development system are reviewed. Modeling and analysis is described with attention given to modeling constraints and analysis using constrained software graphs. An example is presented of an ADAS graph and its associated attributes, such as firing delay, token consume rate, token produce rate, firing threshold, firing condition, arc queue lengths, associated C or Ada functional model, and stochastic behavior.

  2. Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs

    NASA Astrophysics Data System (ADS)

    Dias, Tiago; Roma, Nuno; Sousa, Leonel

    2014-12-01

    A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. Contrasting to other designs with similar functionality, the presented architecture is supported on a scalable, modular and completely configurable processing structure. This flexible structure not only allows to easily reconfigure the architecture to support different transform kernels, but it also permits its resizing to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, not only is it highly suitable to realize high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only a reduced subset of transforms that are used by a specific video standard. The experimental results that were obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above and that are capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
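    The configurable-order separable structure can be mimicked in software with a generic orthonormal DCT-II kernel (a stand-in for the standard-specific integer kernels of AVC, AVS, VC-1 and HEVC, which are not reproduced here): the same code realizes order-4, order-8, order-16 or order-32, and applies the 2-D transform as Y = C·X·Cᵀ.

```python
import math

def dct_matrix(N):
    """Orthonormal DCT-II kernel of configurable order N."""
    C = []
    for k in range(N):
        scale = math.sqrt((1 if k == 0 else 2) / N)
        C.append([scale * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                  for n in range(N)])
    return C

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transform_2d(block):
    """Separable 2-D transform of a square block: Y = C . X . C^T.
    The row pass and the column pass reuse the same 1-D kernel,
    which is what lets one processing structure serve every order."""
    N = len(block)
    C = dct_matrix(N)
    Ct = [list(r) for r in zip(*C)]
    return matmul(matmul(C, block), Ct)
```

A hardware realization replaces the floating-point kernel with the integer approximations each standard mandates, but the reconfigurability argument is the same: only the kernel matrix and its order change.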

  3. Utilizing Rapid Prototyping for Architectural Modeling

    ERIC Educational Resources Information Center

    Kirton, E. F.; Lavoie, S. D.

    2006-01-01

    This paper will discuss our approach to, success with and future direction in rapid prototyping for architectural modeling. The premise that this emerging technology has broad and exciting applications in the building design and construction industry will be supported by visual and physical evidence. This evidence will be presented in the form of…

  4. Modeling Operations Costs for Human Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2013-01-01

    Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQs supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  5. Systolic Architectures For Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Hwang, J. N.; Vlontzos, J. A.; Kung, S. Y.

    1988-10-01

    This paper proposes a unidirectional ring systolic architecture for implementing hidden Markov models (HMMs). This array architecture maximizes the strength of VLSI in terms of intensive and pipelined computing and yet circumvents the limitation on communication. Both the scoring and learning phases of an HMM are formulated as a consecutive matrix-vector multiplication problem, which can be executed in a fully pipelined fashion (100% utilization efficiency) by the unidirectional ring systolic architecture. By appropriately scheduling the algorithm, which combines the operations of the backward evaluation procedure and the reestimation algorithm at the same time, we can use this systolic HMM in a most efficient manner. The systolic HMM can also be easily adapted to the left-to-right HMM by using bidirectional semi-global links, with significant time savings. This architecture can also incorporate the scaling scheme, with little extra effort in the computations of the forward and backward evaluation variables, to prevent the frequently encountered mathematical underflow problems. We also discuss a possible implementation of the proposed architecture using the Inmos transputer (T-800) as the building block.
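    The scoring phase as consecutive matrix-vector products, which is what the ring array pipelines, looks like this in software (our sketch of the standard forward algorithm, not the systolic implementation):

```python
def hmm_score(pi, A, B, obs):
    """Forward (scoring) pass of an HMM as repeated matrix-vector products.
    pi[i]: initial state probabilities; A[i][j]: transition prob i -> j;
    B[j][o]: probability that state j emits symbol o.
    Each time step is one matrix-vector multiply followed by an
    elementwise emission scaling -- the operation a systolic ring pipelines."""
    n = len(pi)
    alpha = [pi[j] * B[j][obs[0]] for j in range(n)]
    for o in obs[1:]:
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return sum(alpha)   # total probability of the observation sequence
```

In practice each alpha vector is rescaled at every step (the scaling scheme the abstract mentions) so that long observation sequences do not underflow.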

  6. A Dualistic Model To Describe Computer Architectures

    NASA Astrophysics Data System (ADS)

    Nitezki, Peter; Engel, Michael

    1985-07-01

    The Dualistic Model for Computer Architecture Description uses a hierarchy of abstraction levels to describe a computer in arbitrary steps of refinement from the top of the user interface to the bottom of the gate level. In our Dualistic Model the description of an architecture may be divided into two major parts called "Concept" and "Realization". The Concept of an architecture on each level of the hierarchy is an Abstract Data Type that describes the functionality of the computer and an implementation of that data type relative to the data type of the next lower level of abstraction. The Realization on each level comprises a language describing the means of user interaction with the machine, and a processor interpreting this language in terms of the language of the lower level. The surface of each hierarchical level, the data type and the language express the behaviour of a machine at this level, whereas the implementation and the processor describe the structure of the algorithms and the system. In this model the Principle of Operation maps the object and computational structure of the Concept onto the structures of the Realization. Describing a system in terms of the Dualistic Model is therefore a process of refinement starting at a mere description of behaviour and ending at a description of structure. This model has proven to be a very valuable tool in exploiting the parallelism in a problem and it is very transparent in discovering the points where parallelism is lost in a special architecture. It has successfully been used in a project on a survey of Computer Architecture for Image Processing and Pattern Analysis in Germany.

  7. Efficient architectures for two-dimensional discrete wavelet transform using lifting scheme.

    PubMed

    Xiong, Chengyi; Tian, Jinwen; Liu, Jian

    2007-03-01

    Novel architectures for 1-D and 2-D discrete wavelet transform (DWT) using lifting schemes are presented in this paper. An embedded decimation technique is exploited to optimize the architecture for 1-D DWT, which is designed to receive an input and generate an output with the low- and high-frequency components of the original data available alternately. Based on this 1-D DWT architecture, an efficient line-based architecture for 2-D DWT is further proposed by employing parallel and pipeline techniques; it is mainly composed of two horizontal filter modules and one vertical filter module, working in parallel and pipelined fashion with 100% hardware utilization. This 2-D architecture, called the fast architecture (FA), can perform J levels of decomposition for an N × N image in approximately 2N^2(1 - 4^(-J))/3 internal clock cycles. Moreover, another efficient generic line-based 2-D architecture is proposed by exploiting the parallelism among the four subband transforms in lifting-based 2-D DWT, which can perform J levels of decomposition for an N × N image in approximately N^2(1 - 4^(-J))/3 internal clock cycles; hence, it is called the high-speed architecture. The throughput rate of the latter is double that of the former 2-D architecture, at only a small additional hardware cost. Compared with works reported in previous literature, the proposed architectures for 2-D DWT are efficient alternatives in the tradeoff among hardware cost, throughput rate, output latency and control complexity. PMID:17357722
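    A lifting step of the kind these architectures implement can be written in a few lines; below is an integer Le Gall 5/3 forward/inverse pair (our software sketch with simple boundary clamping, not the paper's hardware), showing why lifting guarantees perfect reconstruction: each step is exactly invertible, even with integer floor division.

```python
def fwd53(x):
    """One level of the integer Le Gall 5/3 DWT via lifting.
    Predict: d[i] = odd[i] - floor((even[i] + even[i+1]) / 2)
    Update:  s[i] = even[i] + floor((d[i-1] + d[i] + 2) / 4)
    Boundaries are clamped to the nearest valid index."""
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
         for i in range(n)]
    s = [even[i] + (d[max(i - 1, 0)] + d[min(i, n - 1)] + 2) // 4
         for i in range(len(even))]
    return s, d

def inv53(s, d):
    """Exact inverse: undo the update, then undo the predict."""
    n = len(d)
    even = [s[i] - (d[max(i - 1, 0)] + d[min(i, n - 1)] + 2) // 4
            for i in range(len(s))]
    odd = [d[i] + (even[i] + even[min(i + 1, len(even) - 1)]) // 2
           for i in range(n)]
    x = [0] * (len(s) + n)
    x[0::2], x[1::2] = even, odd
    return x
```

Because the inverse subtracts exactly what the forward pass added, reconstruction is bit-exact regardless of the rounding, which is what makes lifting attractive for fixed-point VLSI datapaths.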

  8. Real-time hybrid joint transform correlator with parallel processing architecture

    NASA Astrophysics Data System (ADS)

    Qin, Yuwen; Ge, Bao-Zhen; Zhang, Yimo; Zhao, Xiao-Dong; Huang, Zhanhua

    1996-12-01

    A real-time hybrid joint transform correlator (JTC) with a parallel processing architecture is presented, using two liquid crystal light valve spatial light modulators, two VP32 image boards and two optical wavefront-division multiplexers as the key parts. Using this hybrid JTC, real-time high-efficiency joint transform correlation, high-speed joint transform correlation and four-channel joint transform correlation were realized. The hybrid JTC system has also been used in the domain of morphological complex-valued kernel scale-space image processing. In this paper, the principles of the above experiments are described, and experimental results are given and analyzed.

  9. Parameter estimation for transformer modeling

    NASA Astrophysics Data System (ADS)

    Cho, Sung Don

    Large power transformers, an aging and vulnerable part of our energy infrastructure, are at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead times of 12 months. Transient overvoltages can cause great damage, and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be among the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Thus, transformer modeling is not a mature field, and newer, improved models must be made available. In this work, improved topologically correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where available information is incomplete. The transformer nameplate data is required, and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, the lambda-i saturation characteristic, capacitive effects, and frequency dependency of winding resistance and core loss. Steady-state excitation, and de-energization and re-energization transients…

  10. Architecture Models and Data Flows in Local and Group Datawarehouses

    NASA Astrophysics Data System (ADS)

    Bogza, R. M.; Zaharie, Dorin; Avasilcai, Silvia; Bacali, Laura

    Architecture models and possible data flows for local and group datawarehouses are presented, together with some data processing models. The architecture models consist of several layers and the data flows between them. The chosen architecture of a datawarehouse depends on the type and volume of the source data, and influences the analysis, data mining and reports done upon the data from the DWH.

  11. A parallel VLSI architecture for a digital filter of arbitrary length using Fermat number transforms

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1982-01-01

    A parallel architecture for computation of the linear convolution of two sequences of arbitrary lengths using the Fermat number transform (FNT) is described. In particular a pipeline structure is designed to compute a 128-point FNT. In this FNT, only additions and bit rotations are required. A standard barrel shifter circuit is modified so that it performs the required bit rotation operation. The overlap-save method is generalized for the FNT to compute a linear convolution of arbitrary length. A parallel architecture is developed to realize this type of overlap-save method using one FNT and several inverse FNTs of 128 points. The generalized overlap-save method alleviates the usual dynamic range limitation in FNTs of long transform lengths. Its architecture is regular, simple, and expandable, and therefore naturally suitable for VLSI implementation.
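    The overlap-save/FNT scheme described above can be illustrated in software. The sketch below is hedged: it is not the paper's 128-point pipeline; it uses the small Fermat prime F3 = 257 and ordinary modular exponentiation in place of the bit rotations that make the hardware efficient, and a naive O(N^2) transform stands in for the pipelined structure.

```python
# Sketch: exact linear convolution via a number-theoretic transform modulo
# the Fermat prime F3 = 2^8 + 1 = 257 (a stand-in for the larger moduli of
# the paper's 128-point hardware FNT). In a true FNT the root of unity is a
# power of two, so twiddle multiplications reduce to bit rotations.
P = 257  # Fermat prime F3

def ntt(a, root):
    """Naive O(n^2) number-theoretic transform over Z_257."""
    n = len(a)
    return [sum(a[j] * pow(root, i * j, P) for j in range(n)) % P
            for i in range(n)]

def fnt_convolve(x, h):
    """Linear convolution of two short sequences; exact while results < 257."""
    n = 1
    while n < len(x) + len(h) - 1:
        n *= 2                       # transform length must divide 256
    root = pow(3, 256 // n, P)       # 3 is a primitive root modulo 257
    X = ntt(x + [0] * (n - len(x)), root)
    H = ntt(h + [0] * (n - len(h)), root)
    Y = [(a * b) % P for a, b in zip(X, H)]
    y = ntt(Y, pow(root, P - 2, P))  # inverse transform uses root^-1
    inv_n = pow(n, P - 2, P)
    return [(v * inv_n) % P for v in y][:len(x) + len(h) - 1]

print(fnt_convolve([1, 2, 3], [4, 5, 6]))  # [4, 13, 28, 27, 18]
```

    The dynamic-range limitation mentioned in the abstract is visible here: convolution values must stay below the modulus (257) to be recovered exactly, which is what the generalized overlap-save decomposition works around for long sequences.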

  12. Mathematical analysis of a muscle architecture model.

    PubMed

    Navallas, Javier; Malanda, Armando; Gila, Luis; Rodríguez, Javier; Rodríguez, Ignacio

    2009-01-01

    Modeling of muscle architecture, which aims to recreate mathematically the physiological structure of the muscle fibers and motor units, is a powerful tool for understanding and modeling the mechanical and electrical behavior of the muscle. Most of the published models are presented in the form of algorithms, without mathematical analysis of mechanisms or outcomes of the model. Through the study of the muscle architecture model proposed by Stashuk, we present the analytical tools needed to better understand these models. We provide a statistical description for the spatial relations between motor units and muscle fibers. We are particularly concerned with two physiological quantities: the motor unit fiber number, which we expect to be proportional to the motor unit territory area; and the motor unit fiber density, which we expect to be constant for all motor units. Our results indicate that the Stashuk model is in good agreement with the physiological evidence in terms of the expectations outlined above. However, the resulting variance is very high. In addition, a considerable 'edge effect' is present in the outer zone of the muscle cross-section, making the properties of the motor units dependent on their location. This effect is relevant when motor unit territories and muscle cross-section are of similar size.
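    The two spatial quantities analysed above can be illustrated with a toy experiment. The following Monte Carlo sketch is hypothetical (it is not the Stashuk model): fibers are scattered uniformly over a circular cross-section, a circular motor-unit territory collects the fibers it contains, and a territory straddling the muscle boundary exhibits the edge effect by capturing fewer fibers. All sizes and counts are invented for illustration.

```python
# Hypothetical sketch of the statistical relations discussed in the paper:
# with uniformly scattered fibers, the fiber count of a motor-unit territory
# is proportional to its area, and territories near the muscle boundary
# lose fibers (the 'edge effect').
import random

random.seed(1)
R = 1.0                                # muscle cross-section radius
fibers = []
while len(fibers) < 20000:             # rejection-sample uniform points in the disc
    x, y = random.uniform(-R, R), random.uniform(-R, R)
    if x * x + y * y <= R * R:
        fibers.append((x, y))

def fiber_count(cx, cy, r):
    """Number of fibers inside a circular motor-unit territory."""
    return sum((x - cx) ** 2 + (y - cy) ** 2 <= r * r for x, y in fibers)

central = fiber_count(0.0, 0.0, 0.2)   # territory well inside the muscle
edge = fiber_count(0.9, 0.0, 0.2)      # territory straddling the boundary
print(central, edge)                   # the edge territory captures fewer fibers
```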

  13. Performance and Architecture Lab Modeling Tool

    SciTech Connect

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior.

  15. Anisotropic analysis of trabecular architecture in human femur bone radiographs using quaternion wavelet transforms.

    PubMed

    Sangeetha, S; Sujatha, C M; Manamalli, D

    2014-01-01

    In this work, anisotropy of compressive and tensile strength regions of femur trabecular bone are analysed using quaternion wavelet transforms. The normal and abnormal femur trabecular bone radiographic images are considered for this study. The sub-anatomic regions, which include compressive and tensile regions, are delineated using pre-processing procedures. These delineated regions are subjected to quaternion wavelet transforms and statistical parameters are derived from the transformed images. These parameters are correlated with apparent porosity, which is derived from the strength regions. Further, anisotropy is also calculated from the transformed images and is analyzed. Results show that the anisotropy values derived from second and third phase components of quaternion wavelet transform are found to be distinct for normal and abnormal samples with high statistical significance for both compressive and tensile regions. These investigations demonstrate that architectural anisotropy derived from QWT analysis is able to differentiate normal and abnormal samples. PMID:25571265

  16. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization and the variable node time and conditional node models for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri-net-based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirements for the execution of an algorithm under node time variation, is useful for expanding the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs to illustrate the applicability of the analytical theories.
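    The resource envelope idea can be sketched with a generic interval-overlap sweep (an illustration only, not the ATAMM Petri-net analysis): given hypothetical node firing intervals within one iteration period, the envelope counts concurrently active nodes, and its peak is the maximum number of resources required.

```python
# Illustrative sketch: compute a resource envelope from node firing
# intervals. The peak of the envelope is the maximum number of processors
# needed to execute the graph without resource-limited stalls.
def resource_envelope(intervals):
    """Return (time, active_count) pairs from (start, stop) intervals."""
    events = []
    for start, stop in intervals:
        events.append((start, 1))    # node begins firing
        events.append((stop, -1))    # node completes
    events.sort()                    # decrements sort before increments at ties
    envelope, active = [], 0
    for t, delta in events:
        active += delta
        envelope.append((t, active))
    return envelope

nodes = [(0, 4), (1, 3), (2, 6), (5, 7)]   # hypothetical node firings
env = resource_envelope(nodes)
print(max(active for _, active in env))    # peak resource requirement: 3
```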

  17. The Fermilab Central Computing Facility architectural model

    SciTech Connect

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs.

  18. Architectural approach for semantic EHR systems development based on Detailed Clinical Models.

    PubMed

    Bernal, Juan G; Lopez, Diego M; Blobel, Bernd

    2012-01-01

    The integrative approach to health information in general and the development of pHealth systems in particular, require an integrated approach of formally modeled system architectures. Detailed Clinical Models (DCM) is one of the most promising modeling efforts for clinical concept representation in EHR system architectures. Although the feasibility of DCM modeling methodology has been demonstrated through examples, there is no formal, generic and automatic modeling transformation technique to ensure a semantic lossless transformation of clinical concepts expressed in DCM to either clinical concept representations based on ISO 13606/openEHR Archetypes or HL7 Templates. The objective of this paper is to propose a generic model transformation method and tooling for transforming DCM Clinical Concepts into ISO/EN 13606/openEHR Archetypes or HL7 Template models. The automation of the transformation process is supported by Model-Driven Development (MDD) transformation mechanisms and tools. The availability of processes, techniques and tooling for automatic DCM transformation would enable the development of intelligent, adaptive information systems as demanded for pHealth solutions. PMID:22942049

  19. Modeling the Europa Pathfinder avionics system with a model based avionics architecture tool

    NASA Technical Reports Server (NTRS)

    Chau, S.; Traylor, M.; Hall, R.; Whitfield, A.

    2002-01-01

    In order to shorten the avionics architecture development time, the Jet Propulsion Laboratory has developed a model-based architecture simulation tool called the Avionics System Architecture Tool (ASAT).

  20. Advancing Software Architecture Modeling for Large Scale Heterogeneous Systems

    SciTech Connect

    Gorton, Ian; Liu, Yan

    2010-11-07

    In this paper we describe how incorporating technology-specific modeling at the architecture level can help reduce risks and produce better designs for large, heterogeneous software applications. We draw an analogy with established modeling approaches in scientific domains, using groundwater modeling as an example, to help illustrate gaps in current software architecture modeling approaches. We then describe the advances in modeling, analysis and tooling that are required to bring sophisticated modeling and development methods within reach of software architects.

  1. Space Generic Open Avionics Architecture (SGOAA) reference model technical guide

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  2. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected for the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. The generation of software artifacts is based on a metamodel. Each component maps to a UML structured component which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.

  3. Modelling the pulse transformer in SPICE

    NASA Astrophysics Data System (ADS)

    Godlewska, Malgorzata; Górecki, Krzysztof; Górski, Krzysztof

    2016-01-01

    The paper is devoted to modelling pulse transformers in SPICE. It shows the character of the selected models of this element, points out their advantages and disadvantages, and presents the results of experimental verification of the considered models. These models are characterized by varying degrees of complexity, from linearly coupled linear coils to nonlinear electrothermal models. The study was conducted for transformers with ring cores made of a variety of ferromagnetic materials, under sinusoidal excitation at a frequency of 100 kHz and different values of load resistance. The transformer operating conditions under which the considered models ensure acceptable accuracy of the calculations are indicated.

  4. Modeling Techniques for High Dependability Protocols and Architecture

    NASA Technical Reports Server (NTRS)

    LaValley, Brian; Ellis, Peter; Walter, Chris J.

    2012-01-01

    This report documents an investigation into modeling high dependability protocols and some specific challenges that were identified as a result of the experiments. The need for an approach was established and foundational concepts proposed for modeling different layers of a complex protocol and capturing the compositional properties that provide high dependability services for a system architecture. The approach centers around the definition of an architecture layer, its interfaces for composability with other layers and its bindings to a platform specific architecture model that implements the protocols required for the layer.

  5. Origin and models of oceanic transform faults

    NASA Astrophysics Data System (ADS)

    Gerya, Taras

    2012-02-01

    Mid-ocean ridges sectioned by transform faults represent prominent surface expressions of plate tectonics. A fundamental problem of plate tectonics is how this pattern has formed and why it is maintained. Gross-scale geometry of mid-ocean ridges is often inherited from respective rifted margins. Indeed, transform faults seem to nucleate after the beginning of the oceanic spreading and can spontaneously form at a single straight ridge. Both analog and numerical models of transform faults have been investigated since the 1970s. Two main groups of analog models were developed: thermomechanical (freezing wax) models with accreting and cooling plates and mechanical models with non-accreting lithosphere. Freezing wax models reproduced ridge-ridge transform faults, inactive fracture zones, rotating microplates, overlapping spreading centers and other features of oceanic ridges. However, these models often produced open spreading centers that are dissimilar to nature. Mechanical models, on the other hand, do not accrete the lithosphere and their results are thus only applicable for relatively small amounts of spreading. Three main types of numerical models were investigated: models of stress and displacement distribution around transforms, models of their thermal structure and crustal growth, and models of nucleation and evolution of ridge-transform fault patterns. It was shown that a limited number of spreading modes can form: transform faults, microplates, overlapping spreading centers, zigzag ridges and oblique connecting spreading centers. However, controversy exists over whether these patterns always result from pre-existing ridge offsets or can also form spontaneously at a single straight ridge during millions of years of accretion. Therefore, two types of transform fault interpretation exist: plate fragmentation structures vs. plate accretion structures. Models of transform faults are yet relatively scarce and partly controversial. Consequently, a number of first order

  6. Quantum decoration transformation for spin models

    NASA Astrophysics Data System (ADS)

    Braz, F. F.; Rodrigues, F. C.; de Souza, S. M.; Rojas, Onofre

    2016-09-01

    The extension of the decoration transformation to quantum spin models is quite relevant, since most real materials are well described by Heisenberg-type models. Here we propose an exact quantum decoration transformation and also show interesting properties, such as the persistence of symmetry and symmetry breaking during the transformation. The proposed transformation cannot, in principle, be used to map a quantum spin lattice model exactly onto another quantum spin lattice model, since the operators are non-commutative. However, the mapping is possible in the "classical" limit, establishing an equivalence between both quantum spin lattice models. To study the validity of this approach for quantum spin lattice models, we use the Zassenhaus formula and verify how the correction could influence the decoration transformation. This correction is of limited use for improving the quantum decoration transformation, because it involves second-nearest-neighbor and further-neighbor couplings, which makes establishing the equivalence between both lattice models a cumbersome task. The correction nevertheless gives valuable information about its own contribution: for most Heisenberg-type models it is irrelevant at least up to the third-order term of the Zassenhaus formula. The transformation is applied to a finite-size Heisenberg chain and compared with exact numerical results; our result is consistent for weak xy-anisotropy coupling. We also apply it to the bond-alternating Ising-Heisenberg chain model, obtaining an accurate result in the limit of the quasi-Ising chain.
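    For reference, the "classical" limit mentioned above corresponds to the well-known Ising decoration-iteration transformation, in which a decorated spin s between nodal spins σ1 and σ2 is traced out exactly:

```latex
% Classical decoration-iteration transformation (Ising limit):
% tracing out a decorated spin s = \pm 1 between nodal spins \sigma_1, \sigma_2
\sum_{s=\pm 1} e^{K s(\sigma_1 + \sigma_2)} \;=\; A\, e^{K' \sigma_1 \sigma_2},
\qquad
K' = \tfrac{1}{2}\ln\cosh 2K, \qquad A = 2\sqrt{\cosh 2K}.
```

    Setting σ1 = σ2 and σ1 = -σ2 gives the two equations that fix A and K'. In the quantum case the analogous trace fails to factor exactly because the operators do not commute, which is where the Zassenhaus correction discussed in the abstract enters.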

  7. E-Governance and Service Oriented Computing Architecture Model

    NASA Astrophysics Data System (ADS)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

    E-Governance is the effective application of information and communication technology (ICT) in government processes to accomplish safe and reliable information lifecycle management. The information lifecycle involves various processes, such as capturing, preserving, manipulating and delivering information. E-Governance is meant to transform governance so that it is transparent, reliable, participatory, and accountable from the citizens' point of view. The purpose of this paper is to propose an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines the information and services provided by the government, fosters innovation, identifies optimal ways of delivering services to citizens, and supports implementation in a transparent and accountable manner. The paper also focuses on the E-government Service Manager as a key element of a service oriented computing model that provides a dynamically extensible structural design in which every area or branch can introduce innovative services. The heart of this paper is a conceptual model that enables e-government communication among business, citizens, government and autonomous bodies.

  8. A Model of Transformative Collaboration

    ERIC Educational Resources Information Center

    Swartz, Ann L.; Triscari, Jacqlyn S.

    2011-01-01

    Two collaborative writing partners sought to deepen their understanding of transformative learning by conducting several spirals of grounded theory research on their own collaborative relationship. Drawing from adult education, business, and social science literature and including descriptive analysis of their records of activity and interaction…

  9. Framework for the Parametric System Modeling of Space Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Komar, David R.; Hoffman, Jim; Olds, Aaron D.; Seal, Mike D., II

    2008-01-01

    This paper presents a methodology for performing architecture definition and assessment prior to, or during, program formulation that utilizes a centralized, integrated architecture modeling framework operated by a small, core team of general space architects. This framework, known as the Exploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), enables: 1) a significantly larger fraction of an architecture trade space to be assessed in a given study timeframe; and 2) the complex element-to-element and element-to-system relationships to be quantitatively explored earlier in the design process. Discussion of the methodology advantages and disadvantages with respect to the distributed study team approach typically used within NASA to perform architecture studies is presented along with an overview of EXAMINE's functional components and tools. An example Mars transportation system architecture model is used to demonstrate EXAMINE's capabilities in this paper. However, the framework is generally applicable for exploration architecture modeling with destinations to any celestial body in the solar system.

  10. Metabotropic glutamate receptor 1 disrupts mammary acinar architecture and initiates malignant transformation of mammary epithelial cells.

    PubMed

    Teh, Jessica L F; Shah, Raj; La Cava, Stephanie; Dolfi, Sonia C; Mehta, Madhura S; Kongara, Sameera; Price, Sandy; Ganesan, Shridar; Reuhl, Kenneth R; Hirshfield, Kim M; Karantza, Vassiliki; Chen, Suzie

    2015-05-01

    Metabotropic glutamate receptor 1 (mGluR1/Grm1) is a member of the G-protein-coupled receptor superfamily, which was once thought to only participate in synaptic transmission and neuronal excitability, but has more recently been implicated in non-neuronal tissue functions. We previously described the oncogenic properties of Grm1 in cultured melanocytes in vitro and in spontaneous melanoma development with 100 % penetrance in vivo. Aberrant mGluR1 expression was detected in 60-80 % of human melanoma cell lines and biopsy samples. As most human cancers are of epithelial origin, we utilized immortalized mouse mammary epithelial cells (iMMECs) as a model system to study the transformative properties of Grm1. We introduced Grm1 into iMMECs and isolated several stable mGluR1-expressing clones. Phenotypic alterations in mammary acinar architecture were assessed using three-dimensional morphogenesis assays. We found that mGluR1-expressing iMMECs exhibited delayed lumen formation in association with decreased central acinar cell death, disrupted cell polarity, and a dramatic increase in the activation of the mitogen-activated protein kinase pathway. Orthotopic implantation of mGluR1-expressing iMMEC clones into mammary fat pads of immunodeficient nude mice resulted in mammary tumor formation in vivo. Persistent mGluR1 expression was required for the maintenance of the tumorigenic phenotypes in vitro and in vivo, as demonstrated by an inducible Grm1-silencing RNA system. Furthermore, mGluR1 was found to be expressed in human breast cancer cell lines and breast tumor biopsies. Elevated levels of extracellular glutamate were observed in mGluR1-expressing breast cancer cell lines and concurrent treatment of MCF7 xenografts with glutamate release inhibitor, riluzole, and an AKT inhibitor led to suppression of tumor progression. Our results are likely relevant to human breast cancer, highlighting a putative role of mGluR1 in the pathophysiology of breast cancer and the potential

  11. Vibrational testing of trabecular bone architectures using rapid prototype models.

    PubMed

    Mc Donnell, P; Liebschner, M A K; Tawackoli, Wafa; Mc Hugh, P E

    2009-01-01

    The purpose of this study was to investigate if standard analysis of the vibrational characteristics of trabecular architectures can be used to detect changes in the mechanical properties due to progressive bone loss. A cored trabecular specimen from a human lumbar vertebra was microCT scanned and a three-dimensional, virtual model in stereolithography (STL) format was generated. Uniform bone loss was simulated using a surface erosion algorithm. Rapid prototype (RP) replicas were manufactured from these virtualised models with 0%, 16% and 42% bone loss. Vibrational behaviour of the RP replicas was evaluated by performing a dynamic compression test through a frequency range using an electro-dynamic shaker. The acceleration and dynamic force responses were recorded and fast Fourier transform (FFT) analyses were performed to determine the response spectrum. Standard resonant frequency analysis and damping factor calculations were performed. The RP replicas were subsequently tested in compression beyond failure to determine their strength and modulus. It was found that the reductions in resonant frequency with increasing bone loss corresponded well with reductions in apparent stiffness and strength. This suggests that structural dynamics has the potential to be an alternative diagnostic technique for osteoporosis, although significant challenges must be overcome to determine the effect of the skin/soft tissue interface, the cortex and variabilities associated with in vivo testing. PMID:18555727
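    The diagnostic principle above, a resonant frequency that falls as stiffness is lost, can be sketched numerically. Everything below is hypothetical (a single-degree-of-freedom oscillator with invented stiffness, mass and sampling values), not the paper's specimen data or test protocol.

```python
# Illustrative sketch of the measurement idea: record a structure's free
# decay response, take a (naive) DFT, and locate the resonant peak. A drop
# in stiffness k lowers the resonance f = sqrt(k/m) / (2*pi), mimicking the
# frequency reduction seen with simulated bone loss.
import cmath
import math

def peak_frequency(k, m, fs=1000, n=512, zeta=0.02):
    """Peak of the response spectrum of a lightly damped oscillator (Hz)."""
    f0 = math.sqrt(k / m) / (2 * math.pi)
    w = 2 * math.pi * f0
    # synthetic free-decay response sampled at fs Hz
    x = [math.exp(-zeta * w * t / fs) * math.sin(w * t / fs) for t in range(n)]
    # naive DFT magnitude spectrum, skipping the DC bin
    spec = [abs(sum(x[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n))) for f in range(1, n // 2)]
    return (spec.index(max(spec)) + 1) * fs / n

intact = peak_frequency(k=4.0e4, m=1.0)   # baseline stiffness
eroded = peak_frequency(k=2.4e4, m=1.0)   # 40% stiffness loss
print(intact > eroded)                    # resonance drops with bone loss
```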

  12. Modeling of transformation toughening in brittle materials

    SciTech Connect

    LeSar, R.; Rollett, A.D. ); Srolovitz, D.J. . Dept. of Materials Science and Engineering)

    1992-01-24

    Results from modeling of transformation toughening in brittle materials using a discrete micromechanical model are presented. The material is represented as a two-dimensional triangular array of nodes connected by elastic springs. Microstructural effects are included by varying the spring parameters for the bulk, grain boundaries, and transforming particles. Using the width of the damage zone and the effective compliance (after the initial creation of the damage zone) as measures of fracture toughness, we find that there is a strong dependence of toughness on the amount, size, and shape of the transforming particles, with the maximum toughness achieved with the higher amounts of the larger particles.

  13. Model based analysis of piezoelectric transformers.

    PubMed

    Hemsel, T; Priya, S

    2006-12-22

    Piezoelectric transformers are increasingly popular in electrical devices owing to several advantages such as small size, high efficiency, absence of electromagnetic noise and non-flammability. In addition to the conventional applications such as ballast for back light inverter in notebook computers, camera flash, and fuel ignition several new applications have emerged such as AC/DC converter, battery charger and automobile lighting. These new applications demand high power density and wide range of voltage gain. Currently, the transformer power density is limited to 40 W/cm(3) obtained at low voltage gain. The purpose of this study was to investigate a transformer design that has the potential of providing higher power density and wider range of voltage gain. The new transformer design utilizes radial mode both at the input and output port and has the unidirectional polarization in the ceramics. This design was found to provide 30 W power with an efficiency of 98% and 30 degrees C temperature rise from the room temperature. An electro-mechanical equivalent circuit model was developed to describe the characteristics of the piezoelectric transformer. The model was found to successfully predict the characteristics of the transformer. Excellent matching was found between the computed and experimental results. The results of this study will allow deterministic design of unipoled piezoelectric transformers with specified performance. It is expected that in near future the unipoled transformer will gain significant importance in various electrical components. PMID:16808951

  14. Systolic architectures for the computation of the discrete Hartley and the discrete cosine transforms based on prime factor decomposition

    SciTech Connect

    Chakrabarti, C. (Dept. of Electrical Engineering); Ja Ja, J. (Dept. of Electrical Engineering)

    1990-11-01

    This paper proposes two-dimensional systolic array implementations for computing the discrete Hartley transform (DHT) and the discrete cosine transform (DCT) when the transform size N is decomposable into mutually prime factors. The existing two-dimensional formulations for the DHT and DCT are modified and the corresponding algorithms are mapped onto two-dimensional systolic arrays. The resulting architecture is fully pipelined with no control units. The hardware design is based on bit-serial, left-to-right, MSB-to-LSB binary arithmetic.
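
The prime-factor decomposition such arrays exploit is easiest to see on the DFT (the DHT and DCT versions in the paper need additional correction terms): with mutually prime factors, the Good-Thomas index mapping turns a length-12 transform into a genuine 3x4 two-dimensional transform with no twiddle factors between stages:

```python
import numpy as np

N1, N2 = 3, 4                      # mutually prime factors, N = 12
N = N1 * N2
x = np.random.default_rng(0).standard_normal(N)

# Input map: n = (N2*n1 + N1*n2) mod N folds the 1-D signal into 2-D.
y = np.empty((N1, N2))
for n1 in range(N1):
    for n2 in range(N2):
        y[n1, n2] = x[(N2 * n1 + N1 * n2) % N]

# Row and column DFTs only -- no twiddle multiplications in between.
Y = np.fft.fft(np.fft.fft(y, axis=0), axis=1)

# Output map via CRT residues: X[k] sits at position (k mod N1, k mod N2).
X_pfa = np.array([Y[k % N1, k % N2] for k in range(N)])

print(np.allclose(X_pfa, np.fft.fft(x)))   # True
```

Each small row/column transform is exactly the kind of independent unit that maps naturally onto a two-dimensional systolic array.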

  15. Transformation as a Design Process and Runtime Architecture for High Integrity Software

    SciTech Connect

    Bespalko, S.J.; Winter, V.L.

    1999-04-05

    We have discussed two aspects of creating high integrity software (HIS) that greatly benefit from the availability of transformation technology, which in this case is manifest in the requirement for a sophisticated backtracking parser. First, because of the potential for correctly manipulating programs via small changes, an automated non-procedural transformation system can be a valuable tool for constructing high assurance software. Second, modeling the translation of data into information as a (possibly context-dependent) grammar leads to an efficient, compact implementation. From a practical perspective, the transformation process should begin in the domain language in which a problem is initially expressed; thus, for a transformation system to be practical, it must be flexible with respect to domain-specific languages. We have argued that transformation applied to specifications results in a highly reliable system, and we have briefly demonstrated that transformation technology applied to the runtime environment results in a safe and secure system. We therefore believe that sophisticated multi-lookahead backtracking parsing technology is central to demonstrating the existence of HIS.

  16. Demand Activated Manufacturing Architecture (DAMA) model for supply chain collaboration

    SciTech Connect

    CHAPMAN,LEON D.; PETERSEN,MARJORIE B.

    2000-03-13

    The Demand Activated Manufacturing Architecture (DAMA) project during the last five years of work with the U.S. Integrated Textile Complex (retail, apparel, textile, and fiber sectors) has developed an inter-enterprise architecture and collaborative model for supply chains. This model will enable improved collaborative business across any supply chain. The DAMA Model for Supply Chain Collaboration is a high-level model for collaboration to achieve Demand Activated Manufacturing. The five major elements of the architecture to support collaboration are (1) activity or process, (2) information, (3) application, (4) data, and (5) infrastructure. These five elements are tied to the application of the DAMA architecture to three phases of collaboration - prepare, pilot, and scale. There are six collaborative activities that may be employed in this model: (1) Develop Business Planning Agreements, (2) Define Products, (3) Forecast and Plan Capacity Commitments, (4) Schedule Product and Product Delivery, (5) Expedite Production and Delivery Exceptions, and (6) Populate Supply Chain Utility. The Supply Chain Utility is a set of applications implemented to support collaborative product definition, forecast visibility, planning, scheduling, and execution. The DAMA architecture and model will be presented along with the process for implementing this DAMA model.

  17. The Architecture of Higher Education. University Spatial Models at the Start of the Twenty First Century.

    ERIC Educational Resources Information Center

    Calvo-Sotelo, Pablo Campos

    2001-01-01

    Examines trends in university architecture through history and the current Spanish model, asserting that good architecture produces a good university. Discusses function, culture, and character as they relate to university education and architecture. (EV)

  18. Non-linear transformer modeling and simulation

    SciTech Connect

    Archer, W.E.; Deveney, M.F.; Nagel, R.L.

    1994-08-01

    Transformer models for simulation with PSpice and Analogy's Saber are being developed using experimental B-H loop and network analyzer measurements. The models are evaluated for accuracy and convergence using several test circuits. Results are presented which demonstrate the effects on circuit performance of magnetic core losses, eddy currents, and mechanical stress on the magnetic cores.

  19. Optimizing transformations of stencil operations for parallel cache-based architectures

    SciTech Connect

    Bassetti, F.; Davis, K.

    1999-06-28

    This paper describes a new technique for optimizing serial and parallel stencil- and stencil-like operations for cache-based architectures. This technique takes advantage of the semantic knowledge implicit in stencil-like computations. The technique is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor of two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for a 1-D tiling for a single processor, and in parallel using a 1-D data partition. For the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case. However, the parallel 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
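
A minimal sketch of the kind of code being transformed: the 5-point Jacobi sweep for the 2-D Poisson problem, written once naively and once with a simple 1-D row tiling so each block of rows stays cache-resident. This shows spatial blocking only — the paper's source-to-source transformation is more aggressive — and the grid size, tile size, and iteration count are arbitrary:

```python
import numpy as np

def jacobi_naive(u, iters):
    for _ in range(iters):
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
        u = v
    return u

def jacobi_tiled(u, iters, tile=16):
    n = u.shape[0]
    for _ in range(iters):
        v = u.copy()
        for r0 in range(1, n - 1, tile):          # 1-D tiling over rows
            r1 = min(r0 + tile, n - 1)
            v[r0:r1, 1:-1] = 0.25 * (u[r0-1:r1-1, 1:-1] + u[r0+1:r1+1, 1:-1] +
                                     u[r0:r1, :-2] + u[r0:r1, 2:])
        u = v
    return u

u0 = np.zeros((64, 64)); u0[0, :] = 1.0           # hot top boundary
a = jacobi_naive(u0, 50)
b = jacobi_tiled(u0, 50)
print(np.allclose(a, b))                          # True
```

Because Jacobi reads only the previous iterate, the tiled loop computes bit-identical results; the payoff of tiling is purely in cache behavior, which is exactly what the paper measures.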

  20. Drawing-Based Procedural Modeling of Chinese Architectures.

    PubMed

    Fei Hou; Yue Qi; Hong Qin

    2012-01-01

    This paper presents a novel modeling framework to build 3D models of Chinese architectures from elevation drawings. Our algorithm integrates the capability of automatic drawing recognition with powerful procedural modeling to extract production rules from an elevation drawing. First, unlike previous symbol-based floor plan recognition, small horizontal repetitive regions of the elevation drawing are clustered in a bottom-up manner, based on the novel concept of repetitive pattern trees, to form architectural components with maximum repetition, which collectively serve as building blocks for 3D model generation. Second, to discover the global architectural structure and its components' interdependencies, the components are structured into a shape tree in a top-down subdivision manner and recognized hierarchically at each level of the shape tree based on Markov Random Fields (MRFs). Third, shape grammar rules can be derived to construct a 3D semantic model and its possible variations with the help of a 3D component repository. The salient contribution lies in the novel integration of procedural modeling with elevation drawings, with a unique application to Chinese architectures.

  1. Extending enterprise architecture modelling with business goals and requirements

    NASA Astrophysics Data System (ADS)

    Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten

    2011-02-01

    The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available, however, for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.

  2. Hierarchical decomposition model for reconfigurable architecture

    NASA Astrophysics Data System (ADS)

    Erdogan, Simsek; Wahab, Abdul

    1996-10-01

    This paper introduces a systematic approach for abstract modeling of VLSI digital systems using a hierarchical decomposition process and HDL. In particular, the modeling of the back propagation neural network on a massively parallel reconfigurable hardware is used to illustrate the design process rather than toy examples. Based on the design specification of the algorithm, a functional model is developed through successive refinement and decomposition for execution on the reconfiguration machine. First, a top-level block diagram of the system is derived. Then, a schematic sheet of the corresponding structural model is developed to show the interconnections of the main functional building blocks. Next, the functional blocks are decomposed iteratively as required. Finally, the blocks are modeled using HDL and verified against the block specifications.

  3. Numerical modeling of transformer inrush currents

    NASA Astrophysics Data System (ADS)

    Cardelli, E.; Faba, A.

    2014-02-01

    This paper presents an application of a vector hysteresis model to the prediction of the inrush current due to the arbitrary initial excitation of a transformer after a fault. The proposed approach seems promising for predicting the transient current overshoot and the optimal time to close the circuit after the fault.

  4. Improving Project Management Using Formal Models and Architectures

    NASA Technical Reports Server (NTRS)

    Kahn, Theodore; Sturken, Ian

    2011-01-01

    This talk discusses the advantages formal modeling and architecture brings to project management. These emerging technologies have both great potential and challenges for improving information available for decision-making. The presentation covers standards, tools and cultural issues needing consideration, and includes lessons learned from projects the presenters have worked on.

  5. Hybrid modeling, HMM/NN architectures, and protein applications.

    PubMed

    Baldi, P; Chauvin, Y

    1996-10-01

    We describe a hybrid modeling approach where the parameters of a model are calculated and modulated by another model, typically a neural network (NN), to avoid both overfitting and underfitting. We develop the approach for the case of Hidden Markov Models (HMMs), by deriving a class of hybrid HMM/NN architectures. These architectures can be trained with unified algorithms that blend HMM dynamic programming with NN backpropagation. In the case of complex data, mixtures of HMMs or modulated HMMs must be used. NNs can then be applied both to the parameters of each single HMM, and to the switching or modulation of the models, as a function of input or context. Hybrid HMM/NN architectures provide a flexible NN parameterization for the control of model structure and complexity. At the same time, they can capture distributions that, in practice, are inaccessible to single HMMs. The HMM/NN hybrid approach is tested, in its simplest form, by constructing a model of the immunoglobulin protein family. A hybrid model is trained, and a multiple alignment derived, with less than a fourth of the number of parameters used with previous single HMMs.
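
A toy rendering of the hybrid idea, with all sizes and weights invented: a single linear-plus-softmax "network" generates the HMM emission table, and the standard scaled forward algorithm then scores a sequence under those NN-produced parameters. Training (blending backpropagation with dynamic programming, as in the paper) is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_symbols = 3, 4

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

W = rng.standard_normal((n_states, n_symbols))          # "NN" weights
B = softmax(W)                                          # emissions P(symbol|state)
A = softmax(rng.standard_normal((n_states, n_states)))  # transitions P(j|i)
pi = np.full(n_states, 1.0 / n_states)                  # uniform start

def forward_loglik(obs):
    """Scaled forward algorithm: log P(obs) under (pi, A, B)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum()); alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum(); log_p += np.log(s); alpha = alpha / s
    return log_p

obs = rng.integers(0, n_symbols, size=20)
ll = forward_loglik(obs)
print(f"log-likelihood: {ll:.2f}")
```

In the actual hybrid architecture, gradients of this likelihood would flow back through `B` into the network weights `W`, which is what ties the HMM and NN training together.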

  6. System Architecture Modeling for Technology Portfolio Management using ATLAS

    NASA Technical Reports Server (NTRS)

    Thompson, Robert W.; O'Neil, Daniel A.

    2006-01-01

    Strategic planners and technology portfolio managers have traditionally relied on consensus-based tools, such as Analytical Hierarchy Process (AHP) and Quality Function Deployment (QFD), in planning the funding of technology development. While useful to a certain extent, these tools are limited in their ability to fully quantify the impact of a technology choice on system mass, system reliability, project schedule, and lifecycle cost. The Advanced Technology Lifecycle Analysis System (ATLAS) aims to provide strategic planners a decision support tool for analyzing technology selections within a Space Exploration Architecture (SEA). Using ATLAS, strategic planners can select physics-based system models from a library, configure the systems with technologies and performance parameters, and plan the deployment of a SEA. Key parameters for current and future technologies have been collected from subject-matter experts and other documented sources in the Technology Tool Box (TTB). ATLAS can be used to compare the technical feasibility and economic viability of a set of technology choices for one SEA, and compare it against another set of technology choices or another SEA. System architecture modeling in ATLAS is a multi-step process. First, the modeler defines the system-level requirements. Second, the modeler identifies technologies of interest whose impact on the SEA is to be assessed. Third, the system modeling team creates models of architecture elements (e.g. launch vehicles, in-space transfer vehicles, crew vehicles) if they are not already in the model library. Finally, the architecture modeler develops a script for the ATLAS tool to run, and the results for comparison are generated.

  7. Space station architectural elements model study

    NASA Technical Reports Server (NTRS)

    Taylor, T. C.; Spencer, J. S.; Rocha, C. J.; Kahn, E.; Cliffton, E.; Carr, C.

    1987-01-01

    The worksphere, a user controlled computer workstation enclosure, was expanded in scope to an engineering workstation suitable for use on the Space Station as a crewmember desk in orbit. The concept was also explored as a module control station capable of enclosing enough equipment to control the station from each module. The concept has commercial potential for the Space Station and surface workstation applications. The central triangular beam interior configuration was expanded and refined to seven different beam configurations. These included triangular on center, triangular off center, square, hexagonal small, hexagonal medium, hexagonal large and the H beam. Each was explored with some considerations as to the utilities and a suggested evaluation factor methodology was presented. Scale models of each concept were made. The models were helpful in researching the seven beam configurations and determining the negative residual (unused) volume of each configuration. A flexible hardware evaluation factor concept is proposed which could be helpful in evaluating interior space volumes from a human factors point of view. A magnetic version with all the graphics is available from the author or the technical monitor.

  8. A Model-Driven Architecture Approach for Modeling, Specifying and Deploying Policies in Autonomous and Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel

    2006-01-01

    Autonomic Computing (AC), self-management based on high-level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.
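
A toy model-to-code transformation in the MDA spirit: a declarative policy model is "transformed" into an executable rule, the way a platform-independent model is mapped to platform-specific code. The policy schema, field names, and metric are all invented for this sketch and are not the paper's UML models:

```python
# Declarative policy model (the platform-independent description).
policy_model = {
    "name": "cpu_guard",
    "condition": {"metric": "cpu_load", "op": ">", "threshold": 0.9},
    "action": "throttle",
}

OPS = {">": lambda a, b: a > b, "<": lambda a, b: a < b}

def transform(model):
    """Generate an executable policy function from its declarative model."""
    cond = model["condition"]
    op, th, metric = OPS[cond["op"]], cond["threshold"], cond["metric"]
    def policy(telemetry):
        return model["action"] if op(telemetry[metric], th) else None
    return policy

guard = transform(policy_model)
print(guard({"cpu_load": 0.95}))   # throttle
print(guard({"cpu_load": 0.30}))   # None
```

The point of the MDA discipline is that the model, not the generated function, is the artifact humans author and compose; regeneration keeps the running system in sync with the policy.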

  9. Modelling parallel programs and multiprocessor architectures with AXE

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Fineman, Charles E.

    1991-01-01

    AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user interface enables the user to model parallel programs and machines precisely and efficiently, and its quick turn-around time keeps the user interested and productive. AXE models multicomputers: the user may easily modify various architectural parameters, including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players, whose use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen, including CPU and message routing bottlenecks and the dynamic status of the software.

  10. Entity-Centric Abstraction and Modeling Framework for Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Lewe, Jung-Ho; DeLaurentis, Daniel A.; Mavris, Dimitri N.; Schrage, Daniel P.

    2007-01-01

    A comprehensive framework for representing transportation architectures is presented. After discussing a series of preceding perspectives and formulations, the intellectual underpinning of the novel framework, an entity-centric abstraction of transportation, is described. The entities include endogenous and exogenous factors, and functional expressions are offered that relate them and their evolution. The end result is a Transportation Architecture Field which permits analysis of future concepts from a holistic perspective. A simulation model which stems from the framework is presented and exercised, producing results which quantify improvements in air transportation due to advanced aircraft technologies. Finally, a modeling hypothesis and its accompanying criteria are proposed to test further use of the framework for evaluating new transportation solutions.

  11. Modeling of transformers using circuit simulators

    SciTech Connect

    Archer, W.E.; Deveney, M.F.; Nagel, R.L.

    1994-07-01

    Transformers of two different designs, an unencapsulated pot core and an encapsulated toroidal core, have been modeled for circuit analysis with circuit simulation tools. We selected MicroSim's PSPICE and Analogy's SABER as the simulation tools and used experimental B-H loop and network analyzer measurements to generate the needed input data. The models are compared for accuracy and convergence using the circuit simulators. Results are presented which demonstrate the effects on circuit performance of magnetic core losses, eddy currents, and mechanical stress on the magnetic cores.

  12. Columnar architecture improves noise robustness in a model cortical network.

    PubMed

    Bush, Paul C; Mainen, Zachary F

    2015-01-01

    Cortical columnar architecture was discovered decades ago, yet there is no agreed-upon explanation for its function; indeed, some have suggested that it has no function and is simply an epiphenomenon of developmental processes. To investigate this problem we have constructed a computer model of one square millimeter of layer 2/3 of the primary visual cortex (V1) of the cat. Model cells are connected according to data from recent paired-cell studies; in particular, the connection probability between pyramidal cells is inversely proportional both to the distance separating the cells and to the distance between the preferred parameters (features) of the cells. We find that these constraints, together with a columnar architecture, produce more tightly clustered populations of cells when compared to the random architecture seen in, for example, rodents. This causes the columnar network to converge more quickly and accurately on the pattern representing a particular stimulus in the presence of noise, suggesting that columnar connectivity functions to improve pattern recognition in cortical circuits. The model also suggests that synaptic failure, a phenomenon exhibited by weak synapses, may conserve metabolic resources by reducing transmitter release at connections that do not contribute to network function.
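
A drastically simplified attractor-network analogue of the noise-robustness claim (the paper simulates detailed V1 circuitry; here a plain Hopfield network recovers a stored pattern from a noisy cue, with network size, pattern count, and noise level chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_patterns = 200, 3
P = rng.choice([-1, 1], size=(n_patterns, n))   # stored +/-1 patterns
W = (P.T @ P) / n                               # Hebbian weight matrix
np.fill_diagonal(W, 0)                          # no self-connections

cue = P[0].copy()
flip = rng.choice(n, size=30, replace=False)    # corrupt 15% of the bits
cue[flip] *= -1

s = cue.astype(float)
for _ in range(10):                             # synchronous recurrent updates
    s = np.sign(W @ s)
    s[s == 0] = 1

overlap = (s @ P[0]) / n                        # 1.0 means perfect recall
print(f"overlap with stored pattern: {overlap:.2f}")
```

The abstract's argument is analogous: tighter clustering of recurrent connectivity deepens the attractor basins, so the network settles on the correct stimulus pattern despite noisy input.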

  13. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    NASA Astrophysics Data System (ADS)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
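
A toy projection of a collaborative process model onto one enterprise's interface process, in the spirit of the proposed transformation (the real method operates on full process models with control flow; the interaction-list representation and all names here are invented):

```python
# Global view: (sender, receiver, message) interactions.
collaborative = [
    ("Buyer", "Seller", "send_purchase_order"),
    ("Seller", "Buyer", "send_invoice"),
    ("Seller", "Carrier", "book_transport"),
    ("Carrier", "Seller", "confirm_transport"),
]

def interface_process(model, role):
    """Keep only the interactions the given role takes part in,
    tagged with the direction as seen from that role."""
    view = []
    for src, dst, msg in model:
        if src == role:
            view.append(("send", msg, dst))
        elif dst == role:
            view.append(("receive", msg, src))
    return view

print(interface_process(collaborative, "Seller"))
```

Because every enterprise's interface process is derived from the same global model, the resulting processes are interoperable by construction, which is the guarantee the abstract emphasizes.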

  14. Coaching Model + Clinical Playbook = Transformative Learning.

    PubMed

    Fletcher, Katherine A; Meyer, Mary

    2016-01-01

    Health care employers demand that workers be skilled in clinical reasoning, able to work within complex interprofessional teams to provide safe, quality patient-centered care in a complex evolving system. To this end, there have been calls for radical transformation of nursing education including the development of a baccalaureate generalist nurse. Based on recommendations from the American Association of Colleges of Nursing, faculty concluded that clinical education must change moving beyond direct patient care by applying the concepts associated with designer, manager, and coordinator of care and being a member of a profession. To accomplish this, the faculty utilized a system of focused learning assignments (FLAs) that present transformative learning opportunities that expose students to "disorienting dilemmas," alternative perspectives, and repeated opportunities to reflect and challenge their own beliefs. The FLAs collected in a "Playbook" were scaffolded to build the student's competencies over the course of the clinical experience. The FLAs were centered on the 6 Quality and Safety Education for Nurses competencies, with 2 additional concepts of professionalism and systems-based practice. The FLAs were competency-based exercises that students performed when not assigned to direct patient care or had free clinical time. Each FLA had a lesson plan that allowed the student and faculty member to see the competency addressed by the lesson, resources, time on task, student instructions, guide for reflection, grading rubric, and recommendations for clinical instructor. The major advantages of the model included (a) consistent implementation of structured learning experiences by a diverse teaching staff using a coaching model of instruction; (b) more systematic approach to present learning activities that build upon each other; (c) increased time for faculty to interact with students providing direct patient care; (d) guaranteed capture of selected transformative

  16. From Point Clouds to Architectural Models: Algorithms for Shape Reconstruction

    NASA Astrophysics Data System (ADS)

    Canciani, M.; Falcolini, C.; Saccone, M.; Spadafora, G.

    2013-02-01

    The use of terrestrial laser scanners in architectural survey applications has become more and more common. The complexity of the raw data produced by the scanner leads to several problems in designing and 3D-modelling from point clouds. In this context we present a study on architectural sections and mathematical algorithms for their shape reconstruction, according to known or definite geometrical rules, focusing on shapes of different complexity. Each step of the semi-automatic algorithm has been developed using Mathematica software and CAD, integrating both programs in order to reconstruct a geometrical CAD model of the object. Our study is motivated by the fact that, for architectural survey, most three-dimensional modelling procedures for point clouds produce superabundant, but often unnecessary, information and are also very expensive in terms of CPU time, using ever more sophisticated hardware and software. On the contrary, it is important to simplify and decimate the point cloud in order to recognize a particular form out of some definite geometric/architectonic shapes. Such a process consists of several steps: first, the definition of plane sections and characterization of their architecture; second, the construction of a continuous plane curve depending on some parameters. In the third step we allow the selection on the curve of some nodal points with given specific characteristics (symmetry, tangency conditions, shadowing exclusion, corners, … ). The fourth and last step is the construction of a best shape defined by comparison with an abacus of known geometrical elements, such as moulding profiles, leading to a precise architectural section. The algorithms have been developed and tested in very different situations and are presented in a case study of complex geometries, such as some moulding profiles in the Church of San Carlo alle Quattro Fontane.
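
A hedged sketch of one step in such a pipeline: fitting a known geometric primitive (here a circular arc, a common moulding-profile element) to noisy 2-D section points by linear least squares. The synthetic points stand in for a laser-scan section; the "true" arc parameters are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(3)
cx, cy, r = 2.0, -1.0, 3.0                     # assumed "true" arc
t = rng.uniform(0.2, 1.8, 200)
pts = np.c_[cx + r*np.cos(t), cy + r*np.sin(t)] + rng.normal(0, 0.01, (200, 2))

# Algebraic circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c is linear in (a, b, c),
# with center (a, b) and radius sqrt(c + a^2 + b^2).
x, y = pts[:, 0], pts[:, 1]
Amat = np.c_[2*x, 2*y, np.ones_like(x)]
a, b, c = np.linalg.lstsq(Amat, x**2 + y**2, rcond=None)[0]
radius = np.sqrt(c + a**2 + b**2)
print(f"center=({a:.2f}, {b:.2f})  radius={radius:.2f}")
```

Comparing fitted primitives like this against an abacus of known profile elements is the kind of matching the fourth step of the abstract describes.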

  17. A performance model of the OSI communication architecture

    NASA Astrophysics Data System (ADS)

    Kritzinger, P. S.

    1986-06-01

    An analytical model aimed at predicting the performance of software implementations built according to the OSI basic reference model is proposed. The model uses the peer protocol standard of a layer as the reference description of an implementation of that layer. The model is basically a closed multiclass multichain queueing network with a processor-sharing center, modeling process contention at the processor, and a delay center, modeling times spent waiting for responses from the corresponding peer processes. Each individual transition of the protocol constitutes a different class and each layer of the architecture forms a closed chain. Performance statistics include queue lengths and response times at the processor as a function of processor speed and the number of open connections. It is shown how to reduce the model should the protocol state space become very large. Numerical results based upon the derived formulas are given.
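
For flavor, here is a single-class exact Mean-Value Analysis sketch, the standard solution technique for closed queueing networks of this family. The paper's model is multiclass/multichain with a processor-sharing and a delay center, so this is a much-reduced cousin; service demands and population are made-up numbers, and for simplicity both stations are treated as queueing centers:

```python
def mva(demands, n_customers):
    """Exact MVA for a closed single-class network of queueing stations.
    Returns system throughput and per-station mean queue lengths."""
    q = [0.0] * len(demands)                 # queue lengths at population 0
    for n in range(1, n_customers + 1):
        # residence time at each station: demand * (1 + queue seen on arrival)
        resp = [d * (1 + qi) for d, qi in zip(demands, q)]
        x = n / sum(resp)                    # system throughput
        q = [x * r for r in resp]            # Little's law per station
    return x, q

throughput, queues = mva(demands=[0.02, 0.05], n_customers=10)
print(f"throughput={throughput:.1f}/s  queues={[round(v, 2) for v in queues]}")
```

Per Little's law the queue lengths always sum to the population, a quick sanity check on any MVA implementation.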

  18. Managing changes in the enterprise architecture modelling context

    NASA Astrophysics Data System (ADS)

    Khanh Dam, Hoa; Lê, Lam-Son; Ghose, Aditya

    2016-07-01

    Enterprise architecture (EA) models the whole enterprise in various aspects regarding both business processes and information technology resources. As the organisation grows, the architecture of its systems and processes must also evolve to meet the demands of the business environment. Evolving an EA model may involve making changes to various components across different levels of the EA. As a result, an important issue before making a change to an EA model is assessing the ripple effect of the change, i.e. change impact analysis. Another critical issue is change propagation: given a set of primary changes that have been made to the EA model, what additional secondary changes are needed to maintain consistency across multiple levels of the EA. There has, however, been limited work on supporting the maintenance and evolution of EA models. This article proposes an EA description language, namely ChangeAwareHierarchicalEA, integrated with an evolution framework to support both change impact analysis and change propagation within an EA model. The core part of our framework is a technique for computing the impact of a change and a new method for generating interactive repair plans from Alloy consistency rules that constrain the EA model.
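
A minimal sketch of change impact analysis on an EA model: given depends-on edges between EA elements, compute everything transitively affected by a primary change. The element names and edges are invented, and this ignores the consistency-rule machinery the article builds on top:

```python
from collections import deque

depends_on = {                      # edge: X depends on Y
    "billing_app": ["customer_db"],
    "crm_app":     ["customer_db"],
    "sales_proc":  ["crm_app"],
    "reporting":   ["billing_app", "crm_app"],
}

# Invert to "affects" edges, then breadth-first search from the change.
affects = {}
for src, deps in depends_on.items():
    for d in deps:
        affects.setdefault(d, []).append(src)

def impact(changed):
    """All elements transitively affected by changing `changed`."""
    seen, todo = set(), deque([changed])
    while todo:
        node = todo.popleft()
        for nxt in affects.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return sorted(seen)

print(impact("customer_db"))   # ['billing_app', 'crm_app', 'reporting', 'sales_proc']
```

Change propagation then asks the converse question: which of these impacted elements must themselves be edited to restore consistency, which is where the article's repair-plan generation comes in.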

  19. A new global GIS architecture based on STQIE model

    NASA Astrophysics Data System (ADS)

    Cheng, Chengqi; Guan, Li; Guo, Shide; Pu, Guoliang; Sun, Min

    2007-06-01

    Global GIS is a system which supports huge data processing and direct global manipulation on a global grid based on a spheroid or ellipsoid surface. A new Global GIS architecture based on the STQIE model is designed in this paper, according to computer cluster theory, space-time integration technology and virtual reality technology. The architecture comprises a four-level protocol framework and a three-layer data management pattern covering the organization, management and publication of spatial information. Following this design, a global 3D prototype system was developed in C++. The system integrates simulation with GIS and supports display of multi-resolution DEMs, imagery, and multi-dimensional static or dynamic 3D objects.

  20. SpaceWire model development technology for satellite architecture.

    SciTech Connect

    Eldridge, John M.; Leemaster, Jacob Edward; Van Leeuwen, Brian P.

    2011-09-01

    Packet switched data communications networks that use distributed processing architectures have the potential to simplify the design and development of new, increasingly more sophisticated satellite payloads. In addition, the use of reconfigurable logic may reduce the amount of redundant hardware required in space-based applications without sacrificing reliability. These concepts were studied using software modeling and simulation, and the results are presented in this report. Models of the commercially available, packet switched data interconnect SpaceWire protocol were developed and used to create network simulations of data networks containing reconfigurable logic with traffic flows for timing system distribution.

  1. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    SciTech Connect

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study comprised four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  2. Building energy modeling for green architecture and intelligent dashboard applications

    NASA Astrophysics Data System (ADS)

    DeBlois, Justin

    Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied a passive architectural technique, the roof solar chimney, for reducing the cooling load in homes. Three models of the chimney were created: a zonal building energy model, a computational fluid dynamics model, and a numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it for building simulation reliably. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high and low efficiency constructions, and three ventilation strategies. The results showed that in four US climates, the roof solar chimney results in significant cooling load energy savings of up to 90%. After developing this new method for the small scale representation of a passive architecture technique in BEM, the study expanded the scope to address a fundamental issue in modeling: the representation of uncertainty in, and improvement of, occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration into the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the

  3. A Distributed, Cross-Agency Software Architecture for Sharing Climate Models and Observational Data Sets (Invited)

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Mattmann, C. A.; Braverman, A. J.; Cinquini, L.

    2010-12-01

    The Jet Propulsion Laboratory (JPL) has been developing a distributed infrastructure to support access to and sharing of Earth Science observational data sets with climate models, enabling model-to-data intercomparison for climate research. The Climate Data Exchange (CDX), a framework for linking distributed repositories coupled with tailored distributed services to support the intercomparison, provides mechanisms to discover, access, transform and share observational and model output data [2]. These services are critical to allowing data to remain distributed, but be pulled together to support analysis. The architecture itself provides a services-based approach allowing for integrating and working with other computing infrastructures through well-defined software interfaces. Specifically, JPL has worked very closely with the Earth System Grid (ESG) and the Program for Climate Model Diagnosis and Intercomparison (PCMDI) at Lawrence Livermore National Laboratory (LLNL) to integrate NASA science data systems with the Earth System Grid to support federation across organizational and agency boundaries [1]. Of particular near-term interest is enabling access to NASA observational data alongside climate models for the Coupled Model Intercomparison Project known as CMIP5. CMIP5 is the protocol that will be used for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR5) on climate change. JPL and NASA are currently engaged in a project to ensure that observational data are available to the climate research community through the Earth System Grid. By both developing a software architecture and working with the key architects for the ESG, JPL has been successful at building a prototype for AR5. This presentation will review the software architecture including core principles, models and interfaces, the Climate Data Exchange project and specific goals to support access to both observational data and models for AR5. It will highlight the progress

  4. Probabilistic logic modeling of network reliability for hybrid network architectures

    SciTech Connect

    Wyss, G.D.; Schriner, H.K.; Gaylor, T.R.

    1996-10-01

    Sandia National Laboratories has found that the reliability and failure modes of current-generation network technologies can be effectively modeled using fault tree-based probabilistic logic modeling (PLM) techniques. We have developed fault tree models that include various hierarchical networking technologies and classes of components interconnected in a wide variety of typical and atypical configurations. In this paper we discuss the types of results that can be obtained from PLMs and why these results are of great practical value to network designers and analysts. After providing some mathematical background, we describe the "plug-and-play" fault tree analysis methodology that we have developed for modeling connectivity and the provision of network services in several current-generation network architectures. Finally, we demonstrate the flexibility of the method by modeling the reliability of a hybrid example network that contains several interconnected ethernet, FDDI, and token ring segments. 11 refs., 3 figs., 1 tab.
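The gate logic behind such fault tree models can be sketched in a few lines, assuming independent basic events. The failure probabilities and the example network structure below are invented for illustration, not taken from the Sandia models.

```python
def and_gate(*probs):
    """AND gate: the gate fails only if all inputs fail (independent events)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """OR gate: the gate fails if any input fails (independent events)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical top event: network service is lost if the router fails,
# or if both redundant links fail simultaneously.
p_router, p_link_a, p_link_b = 0.01, 0.05, 0.05
p_service_loss = or_gate(p_router, and_gate(p_link_a, p_link_b))
print(p_service_loss)
```

Composing such gates bottom-up over a component hierarchy is what lets the "plug-and-play" methodology swap network segments in and out of a larger model.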

  5. Architecture in motion: A model for music composition

    NASA Astrophysics Data System (ADS)

    Variego, Jorge Elias

    2011-12-01

    Speculations regarding the relationship between music and architecture go back to the very origins of these disciplines. Throughout history, these links have always reaffirmed that music and architecture are analogous art forms that only diverge in their object of study. In the 1 st c. BCE Vitruvius conceived Architecture as "one of the most inclusive and universal human activities" where the architect should be educated in all the arts, having a vast knowledge in history, music and philosophy. In the 18th c., the German thinker Johann Wolfgang von Goethe, described Architecture as "frozen music". More recently, in the 20th c., Iannis Xenakis studied the similar structuring principles between Music and Architecture creating his own "models" of musical composition based on mathematical principles and geometric constructions. The goal of this document is to propose a compositional method that will function as a translator between the acoustical properties of a room and music, to facilitate the creation of musical works that will not only happen within an enclosed space but will also intentionally interact with the space. Acoustical measurements of rooms such as reverberation time, frequency response and volume will be measured and systematically organized in correspondence with orchestrational parameters. The musical compositions created after the proposed model are evocative of the spaces on which they are based. They are meant to be performed in any space, not exclusively in the one where the acoustical measurements were obtained. The visual component of architectural design is disregarded; the room is considered a musical instrument, with its particular sound qualities and resonances. Compositions using the proposed model will not result as sonified shapes, they will be musical works literally "tuned" to a specific space. 
This Architecture in motion is an attempt to adopt scientific research to the service of a creative activity and to let the aural properties of

  6. A Functional Model of Sensemaking in a Neurocognitive Architecture

    PubMed Central

    Lebiere, Christian; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R.

    2013-01-01

    Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment. PMID:24302930
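The probability matching bias mentioned in this abstract has a simple quantitative illustration. The base rate p = 0.7 below is an arbitrary example, not a figure from the study: responding in proportion to the base rates yields lower expected accuracy than always choosing the more likely category, which is why matching counts as a bias.

```python
# One of two categories is correct with probability p on each trial.
p = 0.7

# Probability matching: respond with each category at its base rate.
matching_accuracy = p * p + (1 - p) * (1 - p)

# Maximizing: always choose the more likely category.
maximizing_accuracy = max(p, 1 - p)

print(matching_accuracy, maximizing_accuracy)
```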

  7. A functional model of sensemaking in a neurocognitive architecture.

    PubMed

    Lebiere, Christian; Pirolli, Peter; Thomson, Robert; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R

    2013-01-01

    Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment. PMID:24302930

  8. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed on a single timeline, with each event added to a queue managed by a planner. Progression along the timeline is guided by rules managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.
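The timeline-and-queue mechanism described above can be sketched as a toy discrete-event loop. The event names, times, and probabilities are hypothetical placeholders; the real IMM planner and scheduler, and the dependency rules between events, are far more elaborate.

```python
import heapq

def run_timeline(events):
    """Process (time, name, probability) events in time order.

    Returns the ordered history and the probability that at least one
    event occurs (assuming independence, for this toy illustration).
    """
    queue = list(events)
    heapq.heapify(queue)          # min-heap keyed on time
    history = []
    p_none = 1.0
    while queue:
        time, name, prob = heapq.heappop(queue)
        history.append((time, name))
        p_none *= (1.0 - prob)
    return history, 1.0 - p_none

# Hypothetical mission events, deliberately supplied out of order.
timeline = [(30.0, "medication-depleted", 0.02),
            (5.0, "minor-injury", 0.10),
            (12.0, "equipment-fault", 0.05)]
history, p_any = run_timeline(timeline)
print(history)
```

The heap guarantees time-ordered processing regardless of insertion order, which is the property that lets later events depend on earlier outcomes in a dynamic PRA.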

  10. Building Structure Design as an Integral Part of Architecture: A Teaching Model for Students of Architecture

    ERIC Educational Resources Information Center

    Unay, Ali Ihsan; Ozmen, Cengiz

    2006-01-01

    This paper explores the place of structural design within undergraduate architectural education. The role and format of lecture-based structure courses within an education system, organized around the architectural design studio is discussed with its most prominent problems and proposed solutions. The fundamental concept of the current teaching…

  11. Crystal Level Continuum Modeling of Phase Transformations: The (alpha) <--> (epsilon) Transformation in Iron

    SciTech Connect

    Barton, N R; Benson, D J; Becker, R; Bykov, Y; Caplan, M

    2004-10-18

    We present a crystal level model for thermo-mechanical deformation with phase transformation capabilities. The model is formulated to allow for large pressures (on the order of the elastic moduli) and makes use of a multiplicative decomposition of the deformation gradient. Elastic and thermal lattice distortions are combined into a single lattice stretch to allow the model to be used in conjunction with general equation of state relationships. Phase transformations change the mass fractions of the material constituents. The driving force for phase transformations includes terms arising from mechanical work, from the temperature dependent chemical free energy change on transformation, and from interaction energy among the constituents. Deformation results from both these phase transformations and elasto-viscoplastic deformation of the constituents themselves. Simulation results are given for the {alpha} to {epsilon} phase transformation in iron. Results include simulations of shock induced transformation in single crystals and of compression of polycrystals. Results are compared to available experimental data.
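The kinematics described in this abstract can be written compactly. The sketch below uses a common crystal-plasticity convention; the symbols are conventional choices for illustration, not necessarily the paper's own notation.

```latex
% Multiplicative decomposition: a single lattice part F* (combined elastic
% and thermal stretch) composed with an inelastic part F^p:
F = F^{\ast} F^{p}
% Schematic driving force for the alpha -> epsilon transformation, combining
% mechanical work, chemical free energy, and interaction energy terms:
g_{\alpha\rightarrow\varepsilon}
  = g^{\mathrm{mech}} + \Delta G^{\mathrm{chem}}(T) + g^{\mathrm{int}}
```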

  12. An architecture model for multiple disease management information systems.

    PubMed

    Chen, Lichin; Yu, Hui-Chu; Li, Hao-Chun; Wang, Yi-Van; Chen, Huang-Jen; Wang, I-Ching; Wang, Chiou-Shiang; Peng, Hui-Yu; Hsu, Yu-Ling; Chen, Chi-Huang; Chuang, Lee-Ming; Lee, Hung-Chang; Chung, Yufang; Lai, Feipei

    2013-04-01

    Disease management is a program which attempts to overcome the fragmentation of the healthcare system and improve the quality of care. Many studies have proven the effectiveness of disease management. However, case managers spend the majority of their time on documentation and on coordinating the members of the care team. They need a tool to support their daily practice and optimize the inefficient workflow. Several discussions have indicated that information technology plays an important role in the era of disease management. Although applications have been developed, it is inefficient to develop an information system for each disease management program individually. The aim of this research is to support the work of disease management, reform the inefficient workflow, and propose an architecture model that enhances the reusability and reduces the development time of information systems. The proposed architecture model has been successfully implemented in two disease management information systems, and the result was evaluated through reusability analysis, time-consumption analysis, pre- and post-implementation workflow analysis, and a user questionnaire survey. The reusability of the proposed model was high, less than half of the time was consumed, and the workflow had been improved. Overall user feedback is positive, and the system is rated as highly supportive of daily workflow. The system empowers the case managers with better information and leads to better decision making.

  13. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
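The Roofline model used above to assert kernel efficiency bounds attainable throughput by the minimum of peak compute and memory bandwidth times arithmetic intensity. A sketch with illustrative machine numbers (not measured figures from the paper):

```python
def roofline(peak_gflops, bandwidth_gbs, arithmetic_intensity):
    """Attainable GFLOP/s = min(peak compute, AI (flop/byte) x bandwidth (GB/s))."""
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# A stencil kernel at 0.5 flop/byte is bandwidth-bound on this hypothetical node...
print(roofline(peak_gflops=500.0, bandwidth_gbs=60.0, arithmetic_intensity=0.5))
# ...while a dense kernel at 20 flop/byte hits the compute ceiling.
print(roofline(peak_gflops=500.0, bandwidth_gbs=60.0, arithmetic_intensity=20.0))
```

Plotting measured kernel performance against this bound is how the authors judge whether further SIMD or NUMA optimisation can still pay off on each architecture.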

  14. Polygonal Shapes Detection in 3d Models of Complex Architectures

    NASA Astrophysics Data System (ADS)

    Benciolini, G. B.; Vitti, A.

    2015-02-01

    A sequential application of two global models defined on a variational framework is proposed for the detection of polygonal shapes in 3D models of complex architectures. As a first step, the procedure involves the use of the Mumford and Shah (1989) 1st-order variational model in dimension two (gridded height data are processed). In the Mumford-Shah model an auxiliary function detects the sharp changes, i.e., the discontinuities, of a piecewise smooth approximation of the data. The Mumford-Shah model requires the global minimization of a specific functional to simultaneously produce both the smooth approximation and its discontinuities. In the proposed procedure, the edges of the smooth approximation, derived by a specific processing of the auxiliary function, are then processed using the Blake and Zisserman (1987) 2nd-order variational model in dimension one (edges are processed in the plane). This second step makes it possible to describe the edges of an object by means of a piecewise almost-linear approximation of the input edges and to detect sharp changes in the first derivative of the edges, so as to detect corners. The Mumford-Shah variational model is used in two dimensions, accepting the original data as primary input. The Blake-Zisserman variational model is used in one dimension for the refinement of the description of the edges. The selection, among all the boundaries detected by the Mumford-Shah model, of those that present a shape close to a polygon is performed by considering only those boundaries for which the Blake-Zisserman model identified discontinuities in their first derivative. The outputs of the procedure are hence shapes, derived from 3D geometric data, that can be considered polygons. The application of the procedure is suitable for, but not limited to, the detection of objects such as foot-prints of polygonal buildings, building facade boundaries or window contours. The procedure is applied to a height model of the building of the Engineering
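As a greatly simplified stand-in for the corner-detection step, one can flag polyline vertices where the edge direction turns by more than a threshold. The actual Blake-Zisserman model instead minimises a 2nd-order variational functional, so this is only an intuition-building sketch; the sample outline and threshold are invented.

```python
import math

def corners(points, angle_threshold_deg=30.0):
    """Return interior vertices where the polyline direction turns sharply."""
    found = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)   # incoming edge direction
        a2 = math.atan2(y2 - y1, x2 - x1)   # outgoing edge direction
        # Wrap the turn angle into [-pi, pi] before taking its magnitude.
        turn = abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)
        if math.degrees(turn) > angle_threshold_deg:
            found.append(points[i])
    return found

# Three sides of a square footprint: the two 90-degree vertices are corners,
# while collinear sample points along each side are not.
outline = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
print(corners(outline))
```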

  15. Spatial Models for Architectural Heritage in Urban Database Context

    NASA Astrophysics Data System (ADS)

    Costamagna, E.; Spanò, A.

    2011-08-01

    Although GIS (Geographic Information Systems/Geospatial Information Systems) provide several applications to manage two-dimensional geometric information and arrange the topological relations among different spatial primitives, most of these systems have limited capabilities to manage three-dimensional space. Other tools, such as CAD systems, have already achieved a full capability of representing 3D data. Most of the research in the field of GIS has underlined the necessity of a full 3D management capability, which is not yet achieved by the available systems (Rahman, Pilouk 2008) (Zlatanova 2002). To reach this goal, it is first important to define the spatial data model, which is at once a geometric and a topological model, integrating these two aspects in relation to database management efficiency and documentation purposes. The application field on which these models can be tested is the spatial data management of Architectural Heritage documentation, to evaluate the suitability of these spatial models for the scale required by such documentation. Among the most important aspects are the integration of metric data originated from different sources and the representation and management of multiscale data. The issues connected with the representation of objects at higher LOD than the ones defined by CityGML will be taken into account. The aim of this paper is then to investigate favorable applications of a framework integrating two different approaches: architectural heritage spatial documentation and urban-scale spatial data management.

  16. 3D model tools for architecture and archaeology reconstruction

    NASA Astrophysics Data System (ADS)

    Vlad, Ioan; Herban, Ioan Sorin; Stoian, Mircea; Vilceanu, Clara-Beatrice

    2016-06-01

    The main objective of architectural and patrimonial survey is to provide a precise documentation of the status quo of the surveyed objects (monuments, buildings, archaeological objects and sites) for preservation and protection, for scientific studies and restoration purposes, and for presentation to the general public. Cultural heritage documentation involves an interdisciplinary approach whose purpose is an overall understanding of the object itself and an integration of the information which characterizes it. The accuracy and the precision of the model are directly influenced by the quality of the measurements realized in the field and by the quality of the software. The software is in a process of continuous development, which brings many improvements. On the other hand, compared to aerial photogrammetry, close range photogrammetry and particularly architectural photogrammetry is not limited to vertical photographs with special cameras. The methodology of terrestrial photogrammetry has changed significantly and various photographic acquisitions are widely in use. In this context, the present paper brings forward a comparative study of TLS (Terrestrial Laser Scanner) and digital photogrammetry for 3D modeling. The authors take into account the accuracy of the 3D models obtained, the overall costs involved for each technology and method, and the 4th dimension - time. The paper proves its applicability, as photogrammetric technologies are nowadays used at a large scale for obtaining the 3D models of cultural heritage objects, efficacious in their assessment and monitoring, thus contributing to historic conservation. Its importance also lies in highlighting the advantages and disadvantages of each method used - a very important issue for both the industrial and scientific segments when facing decisions such as in which technology to invest more research and funds.

  17. Hydrologic Modeling in a Service-Oriented Architecture

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.

    2008-12-01

    Service Oriented Architectures (SOA) offer an approach for creating hydrologic models whereby a model is decomposed into independent computational services that are geographically distributed yet accessible through the Internet. The advantage of this modeling approach is that diverse groups can contribute computational routines that are usable by a wide community, and these routines can be used across operating systems and languages with minimal requirements on the client computer. While the approach has clear benefits in building next generation hydrologic models, a number of challenges must be addressed in order for the approach to reach its full potential. One such challenge in achieving service-oriented hydrologic modeling is establishing standards for web service interfaces and for service-to-service data exchanges. This study presents a prototype service-oriented modeling system that leverages existing protocols and standards (OpenMI, WaterML, GML, etc.) to perform service-oriented hydrologic modeling. The goal of the research is to assess the completeness of these existing protocols and standards in achieving this objective, and to highlight shortcomings that should be addressed through future research and development efforts.
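One computational routine of the kind that could sit behind such a web service is sketched below: a single linear reservoir whose discharge is proportional to storage. The function name, units, and recession constant are assumptions for illustration, not details from the study; a real deployment would wrap this in a service interface exchanging WaterML-style time series.

```python
def linear_reservoir(rainfall_mm, storage_mm=0.0, k_hours=10.0, dt_hours=1.0):
    """Return simulated discharge (mm/h) for each time step of rainfall input.

    Linear reservoir: Q = S / k, with storage updated each step.
    """
    discharge = []
    for rain in rainfall_mm:
        storage_mm += rain * dt_hours          # add rainfall to storage
        q = storage_mm / k_hours               # outflow proportional to storage
        storage_mm -= q * dt_hours             # drain the reservoir
        discharge.append(q)
    return discharge

# A 5 mm pulse of rain followed by dry hours yields a classic recession curve.
flows = linear_reservoir([5.0, 0.0, 0.0, 0.0])
print(flows)
```

Because the routine is a pure function of its inputs, it is exactly the sort of component that can be published as an independent service and composed with others over the Internet.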

  18. Optimization of Forward Wave Modeling on Contemporary HPC Architectures

    SciTech Connect

    Krueger, Jens; Micikevicius, Paulius; Williams, Samuel

    2012-07-20

    Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI) and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both the single-node and distributed memory environments in order to satisfy industry’s demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations (both on-node computation) highlighting the importance of NUMA-aware approaches to MPI communication. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny Cours nodes while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.
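The forward wave modeling kernels discussed above build on finite-difference stencils. A minimal 1-D second-order sketch follows; the grid size, velocity, and time step are arbitrary demo values chosen to satisfy the CFL stability condition, not the paper's ISO/VTI/TTI kernels.

```python
def step_wave(prev, curr, c=1.0, dt=0.5, dx=1.0):
    """Advance a 1-D acoustic wavefield one step (2nd-order time and space)."""
    nxt = [0.0] * len(curr)
    r2 = (c * dt / dx) ** 2        # squared Courant number; must be <= 1
    for i in range(1, len(curr) - 1):
        lap = curr[i - 1] - 2.0 * curr[i] + curr[i + 1]   # discrete Laplacian
        nxt[i] = 2.0 * curr[i] - prev[i] + r2 * lap
    return nxt

# Point impulse in the middle of the grid, propagated for a few steps.
n = 11
prev = [0.0] * n
curr = [0.0] * n
curr[n // 2] = 1.0
for _ in range(3):
    prev, curr = curr, step_wave(prev, curr)
print(curr)
```

The inner loop's low flop-to-byte ratio is what makes such kernels bandwidth-bound, motivating the SIMD and common subexpression optimisations described in the abstract.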

  19. ARPENTEUR: a web-based photogrammetry tool for architectural modeling

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, Pierre; Drap, Pierre

    2000-12-01

    ARPENTEUR is a web application for digital photogrammetry mainly dedicated to architecture. ARPENTEUR has been developed since 1998 by two French research teams: the 'Photogrammetry and Geomatics' group of ENSAIS-LERGEC's laboratory and the MAP-gamsau CNRS laboratory located in the school of Architecture of Marseille. The software package is a web-based tool, since photogrammetric concepts are embedded in Web technology and the Java programming language. The aim of this project is to propose a photogrammetric software package and 3D modeling methods available on the Internet as applets through a simple browser. The use of Java and the Web platform is full of advantages. Distributing software on any platform, at any place connected to the Internet, is of course very promising. The updating is done directly on the server and the user always works with the latest release installed on the server. Three years ago the first prototype of ARPENTEUR was based on the Java Development Kit, at the time only available for some browsers. Nowadays, we are working with the JDK 1.3 plug-in enriched by the Java Advanced Imaging library.

  20. Genetic transformation of the model green alga Chlamydomonas reinhardtii.

    PubMed

    Neupert, Juliane; Shao, Ning; Lu, Yinghong; Bock, Ralph

    2012-01-01

    Over the past three decades, the single-celled green alga Chlamydomonas reinhardtii has become an invaluable model organism in plant biology and an attractive production host in biotechnology. The genetic transformation of Chlamydomonas is relatively simple and efficient, but achieving high expression levels of foreign genes has remained challenging. Here, we provide working protocols for algal cultivation and transformation as well as for selection and analysis of transgenic algal clones. We focus on two commonly used transformation methods for Chlamydomonas: glass bead-assisted transformation and particle gun-mediated (biolistic) transformation. In addition, we describe available tools for promoting efficient transgene expression and highlight important considerations for designing transformation vectors.

  1. Java Architecture for Detect and Avoid Extensibility and Modeling

    NASA Technical Reports Server (NTRS)

    Santiago, Confesor; Mueller, Eric Richard; Johnson, Marcus A.; Abramson, Michael; Snow, James William

    2015-01-01

    Unmanned aircraft will be equipped with a detect-and-avoid (DAA) system that enables them to comply with the requirement to "see and avoid" other aircraft, an important layer in the overall set of procedural, strategic and tactical separation methods designed to prevent mid-air collisions. This paper describes a capability called Java Architecture for Detect and Avoid Extensibility and Modeling (JADEM), developed to prototype and help evaluate various DAA technological requirements by providing a flexible and extensible software platform that models all major detect-and-avoid functions. Figure 1 illustrates JADEM's architecture. The surveillance module can be actual equipment on the unmanned aircraft or simulators that model the process by which on-board sensors detect other aircraft and provide track data to the traffic display. The track evaluation function evaluates each detected aircraft and decides whether to provide an alert to the pilot and its severity. Guidance is a combination of intruder track information, alerting, and avoidance/advisory algorithms behind the tools shown on the traffic display to aid the pilot in determining a maneuver to avoid a loss of well clear. All these functions are designed with a common interface and configurable implementation, which is critical in exploring DAA requirements. To date, JADEM has been utilized in three computer simulations of the National Airspace System, three pilot-in-the-loop experiments using a total of 37 professional UAS pilots, and two flight tests using NASA's Predator-B unmanned aircraft, named Ikhana. The data collected has directly informed the quantitative separation standard for "well clear", safety case, requirements development, and the operational environment for the DAA minimum operational performance standards. This work was performed by the Separation Assurance/Sense and Avoid Interoperability team under NASA's UAS Integration in the NAS project.

  2. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models by digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and is developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results for automatic texture mapping. To overcome the visibility problem, the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.
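The Z-buffer procedure mentioned at the end resolves which 3D points are actually visible from a given photograph before assigning texture. A minimal sketch of the idea follows; the pinhole projection and parameters here are illustrative assumptions, not the tmapper or ML3DImage implementation:

```python
import numpy as np

def visible_points(points_cam, width, height, f):
    """Project camera-space points onto a pixel grid and keep, per pixel,
    only the point nearest the camera (smallest depth z).

    A toy Z-buffer visibility test: points_cam is an (n, 3) array in camera
    coordinates, f a hypothetical focal length in pixels.
    """
    zbuf = np.full((height, width), np.inf)  # depth buffer, init to "far"
    owner = {}                               # pixel -> index of nearest point
    for i, (x, y, z) in enumerate(points_cam):
        if z <= 0:
            continue  # behind the camera
        u = int(f * x / z + width / 2)   # perspective projection
        v = int(f * y / z + height / 2)
        if 0 <= u < width and 0 <= v < height and z < zbuf[v, u]:
            zbuf[v, u] = z
            owner[(v, u)] = i
    return set(owner.values())
```

Only points surviving this depth test would receive texture from the corresponding image; occluded points must be textured from other photographs.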

  3. NASA Integrated Model-Centric Architecture (NIMA) model use and re-use

    NASA Astrophysics Data System (ADS)

    Conroy, Mike; Mazzone, Rebecca; Lin, Wei

    This whitepaper accepts the goals, needs and objectives of NASA's Integrated Model-centric Architecture (NIMA); adds experience and expertise from the Constellation program as well as NASA's architecture development efforts; and provides suggested concepts, practices and norms that nurture and enable model use and re-use across programs, projects and other complex endeavors. Key components include the ability to effectively move relevant information through a large community, process patterns that support model reuse and the identification of the necessary meta-information (e.g. history, credibility, and provenance) to safely use and re-use that information.

  4. An ontological model of the practice transformation process.

    PubMed

    Sen, Arun; Sinha, Atish P

    2016-06-01

    Patient-centered medical home is defined as an approach for providing comprehensive primary care that facilitates partnerships between individual patients and their personal providers. The current state of the practice transformation process is ad hoc and no methodological basis exists for transforming a practice into a patient-centered medical home. Practices and hospitals somehow accomplish the transformation and send the transformation information to a certification agency, such as the National Committee for Quality Assurance, completely ignoring the development and maintenance of the processes that keep the medical home concept alive. Many recent studies point out that such a transformation is hard as it requires an ambitious whole-practice reengineering and redesign. As a result, the practices suffer change fatigue in getting the transformation done. In this paper, we focus on the complexities of the practice transformation process and present a robust ontological model for practice transformation. The objective of the model is to create an understanding of the practice transformation process in terms of key process areas and their activities. We describe how our ontology captures the knowledge of the practice transformation process, elicited from domain experts, and also discuss how, in the future, that knowledge could be diffused across stakeholders in a healthcare organization. Our research is the first effort in practice transformation process modeling. To build an ontological model for practice transformation, we adopt the Methontology approach. Based on the literature, we first identify the key process areas essential for a practice transformation process to achieve certification status. Next, we develop the practice transformation ontology by creating key activities and precedence relationships among the key process areas using process maturity concepts. 
At each step, we employ a panel of domain experts to verify the intermediate representations of the

  6. An avionics scenario and command model description for Space Generic Open Avionics Architecture (SGOAA)

    NASA Technical Reports Server (NTRS)

    Stovall, John R.; Wray, Richard B.

    1994-01-01

    This paper presents a description of a model for a space vehicle operational scenario and the commands for avionics. This model will be used in developing a dynamic architecture simulation model using the Statemate CASE tool for validation of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA has been proposed as an avionics architecture standard to NASA through its Strategic Avionics Technology Working Group (SATWG) and has been accepted by the Society of Automotive Engineers (SAE) for conversion into an SAE Avionics Standard. This architecture was developed for the Flight Data Systems Division (FDSD) of the NASA Johnson Space Center (JSC) by the Lockheed Engineering and Sciences Company (LESC), Houston, Texas. The SGOAA includes a generic system architecture for the entities in spacecraft avionics, a generic processing external and internal hardware architecture, and a nine-class model of interfaces. The SGOAA is both scalable and recursive and can be applied to any hierarchical level of hardware/software processing systems.

  7. Fortran Transformational Tools in Support of Scientific Application Development for Petascale Computer Architectures

    SciTech Connect

    Sottille, Matthew

    2013-09-12

    This document is the final report for a multi-year effort building infrastructure to support tool development for Fortran programs. We also investigated static analysis and code transformation methods relevant to scientific programmers who are writing Fortran programs for petascale-class high performance computing systems. This report details our accomplishments, technical approaches, and provides information on where the research results and code may be obtained from an open source software repository. The report for the first year of the project that was performed at the University of Oregon prior to the PI moving to Galois, Inc. is included as an appendix.

  8. Developing a scalable modeling architecture for studying survivability technologies

    NASA Astrophysics Data System (ADS)

    Mohammad, Syed; Bounker, Paul; Mason, James; Brister, Jason; Shady, Dan; Tucker, David

    2006-05-01

    To facilitate interoperability of models in a scalable environment, and provide a relevant virtual environment in which Survivability technologies can be evaluated, the US Army Research Development and Engineering Command (RDECOM) Modeling Architecture for Technology Research and Experimentation (MATREX) Science and Technology Objective (STO) program has initiated the Survivability Thread, which will seek to address some of the many technical and programmatic challenges associated with the effort. In coordination with different Thread customers, such as the Survivability branches of various Army labs, a collaborative group has been formed to define the requirements for the simulation environment that would in turn provide them a value-added tool for assessing models and gauging system-level performance relevant to Future Combat Systems (FCS) and the Survivability requirements of other burgeoning programs. An initial set of customer requirements has been generated in coordination with the RDECOM Survivability IPT lead, through the Survivability Technology Area at RDECOM Tank-automotive Research Development and Engineering Center (TARDEC, Warren, MI). The results of this project are aimed at a culminating experiment and demonstration scheduled for September 2006, which will include a multitude of components from within RDECOM and provide the framework for future experiments to support Survivability research. This paper details the components with which the MATREX Survivability Thread was created and executed, and provides insight into the capabilities currently demanded by the Survivability community within RDECOM.

  9. Phase transformations in a model mesenchymal tissue

    NASA Astrophysics Data System (ADS)

    Newman, Stuart A.; Forgacs, Gabor; Hinner, Bernhard; Maier, Christian W.; Sackmann, Erich

    2004-06-01

    Connective tissues, the most abundant tissue type of the mature mammalian body, consist of cells suspended in complex microenvironments known as extracellular matrices (ECMs). In the immature connective tissues (mesenchymes) encountered in developmental biology and tissue engineering applications, the ECMs contain varying amounts of randomly arranged fibers, and the physical state of the ECM changes as the fibers secreted by the cells undergo fibril and fiber assembly and organize into networks. In vitro composites consisting of assembling solutions of type I collagen, containing suspended polystyrene latex beads (~6 µm in diameter) with collagen-binding surface properties, provide a simplified model for certain physical aspects of developing mesenchymes. In particular, assembly-dependent topological (i.e., connectivity) transitions within the ECM could change a tissue from one in which cell-sized particles (e.g., latex beads or cells) are mechanically unlinked to one in which the particles are part of a mechanical continuum. Any particle-induced alterations in fiber organization would imply that cells could similarly establish physically distinct microdomains within tissues. Here we show that the presence of beads above a critical number density accelerates the sol-gel transition that takes place during the assembly of collagen into a globally interconnected network of fibers. The presence of this suprathreshold number of beads also dramatically changes the viscoelastic properties of the collagen matrix, but only when the initial concentration of soluble collagen is itself above a critical value. Our studies provide a starting point for the analysis of phase transformations of more complex biomaterials including developing and healing tissues as well as tissue substitutes containing living cells.

  10. Emergence of a Common Modeling Architecture for Earth System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Deluca, C.

    2010-12-01

    Common modeling architecture can be viewed as a natural outcome of common modeling infrastructure. The development of model utility and coupling packages (ESMF, MCT, OpenMI, etc.) over the last decade represents the realization of a community vision for common model infrastructure. The adoption of these packages has led to increased technical communication among modeling centers and newly coupled modeling systems. However, adoption has also exposed aspects of interoperability that must be addressed before easy exchange of model components among different groups can be achieved. These aspects include common physical architecture (how a model is divided into components) and model metadata and usage conventions. The National Unified Operational Prediction Capability (NUOPC), an operational weather prediction consortium, is collaborating with weather and climate researchers to define a common model architecture that encompasses these advanced aspects of interoperability and looks to future needs. The nature and structure of the emergent common modeling architecture will be discussed along with its implications for future model development.

  11. Transform continental margins - part 1: Concepts and models

    NASA Astrophysics Data System (ADS)

    Basile, Christophe

    2015-10-01

    This paper reviews the geodynamic concepts and models related to transform continental margins, and their implications for the structure of these margins. Simple kinematic models of transform faulting associated with continental rifting and oceanic accretion make it possible to define three successive stages of evolution: intra-continental transform faulting, active transform margin, and passive transform margin. Each part of the transform margin experiences these three stages, but the evolution is diachronous along the margin. Both the duration of each stage and the cumulated strike-slip deformation increase from one extremity of the margin (inner corner) to the other (outer corner). Initiation of transform faulting is related to the obliquity between the trend of the lithospheric deformed zone and the relative displacement of the lithospheric plates involved in divergence. In this oblique setting, alternating transform and divergent plate boundaries correspond to spatial partitioning of the deformation. Both the obliquity and the timing of partitioning influence the shape of transform margins. An oblique margin can be defined when oblique rifting is followed by oblique oceanic accretion. In this case, no transform margin should exist in the prolongation of the oceanic fracture zones. Vertical displacements along transform margins were mainly studied to explain the formation of marginal ridges. Numerous models have been proposed, one of the most widely used being based on thermal exchanges between the oceanic and the continental lithospheres across the transform fault. But this model is compatible neither with numerical computation including flexural behavior of the lithosphere, nor with the timing of vertical displacements and the lack of heating related to the passing of the oceanic accretion axis as recorded by the Côte d'Ivoire-Ghana marginal ridge. Enhanced models are still needed. 
They should better take into account the erosion on the continental slope, and the level of coupling

  12. Policy improvement by a model-free Dyna architecture.

    PubMed

    Hwang, Kao-Shing; Lo, Chia-Yue

    2013-05-01

    The objective of this paper is to accelerate the process of policy improvement in reinforcement learning. The proposed Dyna-style system combines two learning schemes, one of which utilizes a temporal difference method for direct learning; the other uses relative values for indirect learning in planning between two successive direct learning cycles. Instead of establishing a complicated world model, the approach introduces a simple predictor of average rewards into the actor-critic architecture in the simulation (planning) mode. The relative value of a state, defined as the accumulated differences between immediate reward and average reward, is used to steer the improvement process in the right direction. The proposed learning scheme is applied to control a pendulum system for tracking a desired trajectory to demonstrate its adaptability and robustness. Through reinforcement signals from the environment, the system takes the appropriate action to drive an unknown dynamic system to track desired outputs in a few learning cycles. Comparisons are made between the proposed model-free method, a connectionist adaptive heuristic critic, and an advanced method of Dyna-Q learning in the experiments of labyrinth exploration. The proposed method outperforms its counterparts in terms of elapsed time and convergence rate. PMID:24808427
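The "relative value" idea above, the accumulated difference between immediate and average reward, can be sketched in a few lines. The running-average predictor and learning rate below are assumptions for illustration, not the authors' exact formulation:

```python
def relative_values(rewards, alpha=0.1):
    """Accumulate differences between immediate reward and a running
    estimate of the average reward.

    alpha is an assumed learning rate for the average-reward predictor.
    Returns the trace of the relative value after each reward.
    """
    avg = 0.0   # simple predictor of the average reward
    rel = 0.0   # relative value: accumulated (reward - average)
    trace = []
    for r in rewards:
        rel += r - avg            # above-average rewards push rel up
        avg += alpha * (r - avg)  # update the average-reward estimate
        trace.append(rel)
    return trace
```

A state whose rewards consistently exceed the running average accumulates a large positive relative value, signaling a promising direction for policy improvement during planning.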

  13. An 8×8/4×4 Adaptive Hadamard Transform Based FME VLSI Architecture for 4K×2K H.264/AVC Encoder

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Liu, Jialiang; Zhang, Dexue; Zeng, Xiaoyang; Chen, Xinhua

    Fidelity Range Extension (FRExt) (i.e. High Profile) was added to the H.264/AVC recommendation in the second version. One of the features included in FRExt is the Adaptive Block-size Transform (ABT). In order to conform to the FRExt, a Fractional Motion Estimation (FME) architecture is proposed to support the 8×8/4×4 adaptive Hadamard Transform (8×8/4×4 AHT). The 8×8/4×4 AHT circuit contributes to higher throughput and encoding performance. In order to increase the utilization of the SATD (Sum of Absolute Transformed Differences) Generator (SG) in unit time, the proposed architecture employs two 8-pel interpolators (IP) to time-share one SG. These two IPs work in turn to provide data continuously to the SG, which increases the data throughput and significantly reduces the cycles needed to process one Macroblock. Furthermore, this architecture exploits the linearity of the Hadamard Transform to generate the quarter-pel SATD. This method helps to shorten the long datapath in the second step of the two-iteration FME algorithm. Finally, experimental results show that this architecture can be used in applications requiring different performance levels by adjusting the supported modes and operation frequency. It can support the real-time encoding of seven-mode 4K×2K@24fps or six-mode 4K×2K@30fps video sequences.
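The SATD cost metric and the linearity property the architecture exploits can be sketched in software. The 4×4 case below uses the standard (unnormalized) Hadamard matrix; it illustrates the math, not the VLSI datapath:

```python
import numpy as np

# 4x4 Hadamard matrix (symmetric, entries ±1, unnormalized).
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def satd4x4(residual):
    """Sum of Absolute Transformed Differences for a 4x4 residual block:
    apply the 2D Hadamard transform, then sum absolute coefficients."""
    t = H4 @ residual @ H4
    return int(np.abs(t).sum())
```

Because the transform itself is linear, the Hadamard coefficients of an interpolated (e.g. averaged) residual equal the same combination of the coefficients of the source residuals; this is the property that lets quarter-pel SATD be derived from already-transformed half-pel data before the absolute-value step.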

  14. Five-factor model of personality and transformational leadership.

    PubMed

    Judge, T A; Bono, J E

    2000-10-01

    This study linked traits from the 5-factor model of personality (the Big 5) to transformational leadership behavior. Neuroticism, Extraversion, Openness to Experience, and Agreeableness were hypothesized to predict transformational leadership. Results based on 14 samples of leaders from over 200 organizations revealed that Extraversion and Agreeableness positively predicted transformational leadership; Openness to Experience was positively correlated with transformational leadership, but its effect disappeared once the influence of the other traits was controlled. Neuroticism and Conscientiousness were unrelated to transformational leadership. Results further indicated that specific facets of the Big 5 traits predicted transformational leadership less well than the general constructs. Finally, transformational leadership behavior predicted a number of outcomes reflecting leader effectiveness, controlling for the effect of transactional leadership. PMID:11055147

  15. Shallow architecture of the Wadi Araba fault (Dead Sea Transform) from high-resolution seismic investigations

    NASA Astrophysics Data System (ADS)

    Haberland, Ch.; Maercklin, N.; Kesten, D.; Ryberg, T.; Janssen, Ch.; Agnon, A.; Weber, M.; Schulze, A.; Qabbani, I.; El-Kelani, R.

    2007-03-01

    In a high-resolution small-scale seismic experiment we investigated the shallow structure of the Wadi Araba fault (WAF), the principal fault strand of the Dead Sea Transform System between the Gulf of Aqaba/Eilat and the Dead Sea. The experiment consisted of 8 sub-parallel 1 km long seismic lines crossing the WAF. The recording station spacing was 5 m and the source point distance was 20 m. The first-break tomography yields insight into the fault structure down to a depth of about 200 m. The velocity structure varies from one section to the other, the sections being 1 to 2 km apart, but distinct velocity variations along the fault are visible between several profiles. The reflection seismic images show positive flower structures and indications of different sedimentary layers on the two sides of the main fault. Often the superficial sedimentary layers are bent upward close to the WAF. Our results indicate that this section of the fault (at shallow depths) is characterized by a transpressional regime. We detected a 100 to 300 m wide heterogeneous zone of deformed and displaced material which, however, is not characterized by low seismic velocities at a larger scale. At greater depth the geophysical images indicate a blocked cross-fault structure. The revealed structure, with fault cores no wider than 10 m, is consistent with scaling from wear mechanics and with the low loading-to-healing ratio anticipated for the fault.

  16. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation on developing models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 11) Background: Titan Model-based Executive; 12) Model-based Execution Architecture; 13) Compatibility Analysis of MDS and Titan Architectures; 14) Integrating Model-based Programming and Execution into the Architecture; 15) State Analysis and Modeling; 16) IMU Subsystem State Effects Diagram; 17) Titan Subsystem Model: IMU Health; 18) Integrating Model-based Programming and Execution into the Software IMU; 19) Testing Program; 20) Computationally Tractable State Estimation & Fault Diagnosis; 21) Diagnostic Algorithm Performance; 22) Integration and Test Issues; 23) Demonstrated Benefits; and 24) Next Steps

  17. Quantum corrections of Abelian Duality Transformations in Sigma models

    NASA Astrophysics Data System (ADS)

    Balog, J.; Forgács, P.; Horváth, Z.; Palla, L.

    1997-07-01

    A review is given of a recently proposed modification of the Abelian Duality transformations guaranteeing that a (not necessarily conformally invariant) σ-model be quantum equivalent (at least up to two loops in perturbation theory) to its dual. This requires a somewhat non-standard perturbative treatment of the dual σ-model. Explicit formulae for the modified duality transformation are presented for a special class of block-diagonal, purely metric σ-models.

  18. Organoids as Models for Neoplastic Transformation | Office of Cancer Genomics

    Cancer.gov

    Cancer models strive to recapitulate the incredible diversity inherent in human tumors. A key challenge in accurate tumor modeling lies in capturing the panoply of homo- and heterotypic cellular interactions within the context of a three-dimensional tissue microenvironment. To address this challenge, researchers have developed organotypic cancer models (organoids) that combine the 3D architecture of in vivo tissues with the experimental facility of 2D cell lines.

  19. Connection and coordination: the interplay between architecture and dynamics in evolved model pattern generators.

    PubMed

    Psujek, Sean; Ames, Jeffrey; Beer, Randall D

    2006-03-01

    We undertake a systematic study of the role of neural architecture in shaping the dynamics of evolved model pattern generators for a walking task. First, we consider the minimum number of connections necessary to achieve high performance on this task. Next, we identify architectural motifs associated with high fitness. We then examine how high-fitness architectures differ in their ability to evolve. Finally, we demonstrate the existence of distinct parameter subgroups in some architectures and show that these subgroups are characterized by differences in neuron excitabilities and connection signs. PMID:16483415

  20. Modeling the Contribution of Enterprise Architecture Practice to the Achievement of Business Goals

    NASA Astrophysics Data System (ADS)

    van Steenbergen, Marlies; Brinkkemper, Sjaak

    Enterprise architecture is a young, but well-accepted discipline in information management. Establishing the effectiveness of an enterprise architecture practice, however, appears difficult. In this chapter we introduce an architecture effectiveness model (AEM) to express how enterprise architecture practices are meant to contribute to the business goals of an organization. We developed an AEM for three different organizations. These three instances show that the concept of the AEM is applicable in a variety of organizations. It also shows that the objectives of enterprise architecture are not to be restricted to financial goals. The AEM can be used by organizations to set coherent priorities for their architectural practices and to define KPIs for measuring the effectiveness of these practices.

  1. A Multiperspectival Conceptual Model of Transformative Meaning Making

    ERIC Educational Resources Information Center

    Freed, Maxine

    2009-01-01

    Meaning making is central to transformative learning, but little work has explored how meaning is constructed in the process. Moreover, no meaning-making theory adequately captures its characteristics and operations during radical transformation. The purpose of this dissertation was to formulate and specify a multiperspectival conceptual model of…

  2. Typical Phases of Transformative Learning: A Practice-Based Model

    ERIC Educational Resources Information Center

    Nohl, Arnd-Michael

    2015-01-01

    Empirical models of transformative learning offer important insights into the core characteristics of this concept. Whereas previous analyses were limited to specific social groups or topical terrains, this article empirically typifies the phases of transformative learning on the basis of a comparative analysis of various social groups and topical…

  3. An architectural model of conscious and unconscious brain functions: Global Workspace Theory and IDA.

    PubMed

    Baars, Bernard J; Franklin, Stan

    2007-11-01

    While neural net models have been developed to a high degree of sophistication, they have some drawbacks at a more integrative, "architectural" level of analysis. We describe a "hybrid" cognitive architecture that is implementable in neuronal nets, and which has uniform brainlike features, including activation-passing and highly distributed "codelets," implementable as small-scale neural nets. Empirically, this cognitive architecture accounts qualitatively for the data described by Baars' Global Workspace Theory (GWT) and Franklin's LIDA architecture, including state-of-the-art models of conscious contents in action-planning, Baddeley-style Working Memory, and working models of episodic and semantic long-term memory. These terms are defined both conceptually and empirically for the current theoretical domain. The resulting architecture meets four desirable goals for a unified theory of cognition: practical workability, autonomous agency, a plausible role for conscious cognition, and translatability into plausible neural terms. It also generates testable predictions, both empirical and computational.

  4. Plum (Prunus domestica) trees transformed with poplar FT1 result in altered architecture, dormancy requirement, and continuous flowering.

    PubMed

    Srinivasan, Chinnathambi; Dardick, Chris; Callahan, Ann; Scorza, Ralph

    2012-01-01

    The Flowering Locus T1 (FT1) gene from Populus trichocarpa under the control of the 35S promoter was transformed into European plum (Prunus domestica L). Transgenic plants expressing higher levels of FT flowered and produced fruits in the greenhouse within 1 to 10 months. FT plums did not enter dormancy after cold or short-day treatments, yet field-planted FT plums remained winter hardy down to at least -10°C. The plants also displayed pleiotropic phenotypes atypical for plum, including shrub-type growth habit and panicle flower architecture. The flowering and fruiting phenotype was found to be continuous in the greenhouse but limited to spring and fall in the field. The pattern of flowering in the field correlated with lower daily temperatures. This apparent temperature effect was subsequently confirmed in growth chamber studies. The pleiotropic phenotypes associated with FT1 expression in plum suggest a fundamental role of this gene in plant growth and development. This study demonstrates the potential for a single transgene event to markedly affect the vegetative and reproductive growth and development of an economically important temperate woody perennial crop. We suggest that FT1 may be a useful tool to adapt temperate plants to changing climates and/or to adapt these crops to new growing areas.

  5. Feature Matching with Affine-Function Transformation Models.

    PubMed

    Li, Hongsheng; Huang, Xiaolei; Huang, Junzhou; Zhang, Shaoting

    2014-12-01

    Feature matching is an important problem and has extensive uses in computer vision. However, existing feature matching methods support either a specific or a small set of transformation models. In this paper, we propose a unified feature matching framework which supports a large family of transformation models. We call the family of transformation models the affine-function family, in which all transformations can be expressed by affine functions with convex constraints. In this framework, the goal is to recover transformation parameters for every feature point in a template point set to calculate their optimal matching positions in an input image. Given pairwise feature dissimilarity values between all points in the template set and the input image, we create a convex dissimilarity function for each template point. Composition of such convex functions with any transformation model in the affine-function family is shown to have an equivalent convex optimization form that can be optimized efficiently. Four example transformation models in the affine-function family are introduced to show the flexibility of our proposed framework. Our framework achieves 0.0 percent matching errors for both CMU House and Hotel sequences following the experimental setup in [6]. PMID:26353148
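The affine-function form at the heart of this family can be illustrated with a plain least-squares fit (a hypothetical sketch, not the paper's convex matching framework, which optimizes per-point dissimilarity functions): solve for the matrix A and offset t of a 2-D affine map from point correspondences.

```python
import numpy as np

def fit_affine(src, dst):
    """Solve for A (2x2) and t (2,) minimizing sum ||A @ src_i + t - dst_i||^2."""
    n = len(src)
    # Linear system M p = b with p = [a11, a12, a21, a22, t1, t2].
    M = np.zeros((2 * n, 6))
    b = dst.reshape(-1)
    M[0::2, 0:2] = src   # rows for the x-coordinate equations
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src   # rows for the y-coordinate equations
    M[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p[:4].reshape(2, 2), p[4:]

# Synthetic correspondences generated from a known affine map.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A_true = np.array([[1.2, 0.1], [-0.2, 0.9]])
t_true = np.array([3.0, -1.0])
dst = src @ A_true.T + t_true
A, t = fit_affine(src, dst)
```

With noise-free correspondences the least-squares solution recovers the generating transform exactly; the paper's contribution is handling the harder case where correspondences themselves are unknown.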

  6. Canonical Transformation for Stiff Matter Models in Quantum Cosmology

    NASA Astrophysics Data System (ADS)

    Neves, C.; Monerat, G. A.; Corrêa Silva, E. V.; Ferreira Filho, L. G.; Oliveira-Neto, G.

    2011-06-01

    In the present work we consider Friedmann-Robertson-Walker models in the presence of a stiff matter perfect fluid and a cosmological constant. We write the superhamiltonian of these models using Schutz's variational formalism. We notice that the resulting superhamiltonians have terms that will lead to factor ordering ambiguities when they are written as quantum operators. In order to remove these ambiguities, we introduce appropriate coordinate transformations and prove that these transformations are canonical using the symplectic method.
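As a reminder of the criterion invoked here, a transformation (q, p) → (Q, P) is canonical precisely when it preserves the fundamental Poisson bracket (the standard definition, independent of the specific cosmological models above):

```latex
\{Q, P\}_{q,p} = \frac{\partial Q}{\partial q}\frac{\partial P}{\partial p}
               - \frac{\partial Q}{\partial p}\frac{\partial P}{\partial q} = 1
```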

  7. Model Assessment and Optimization Using a Flow Time Transformation

    NASA Astrophysics Data System (ADS)

    Smith, T. J.; Marshall, L. A.; McGlynn, B. L.

    2012-12-01

    Hydrologic modeling is a particularly complex problem that is commonly confronted with complications due to multiple dominant streamflow states, temporal switching of streamflow generation mechanisms, and dynamic responses to model inputs based on antecedent conditions. These complexities can inhibit the development of model structures and their fitting to observed data. As a result of these complexities and the heterogeneity that can exist within a catchment, optimization techniques are typically employed to obtain reasonable estimates of model parameters. However, when calibrating a model, the cost function itself plays a large role in determining the "optimal" model parameters. In this study, we introduce a transformation that allows for the estimation of model parameters in the "flow time" domain. The flow time transformation dynamically weights streamflows in the time domain, effectively stretching time during high streamflows and compressing time during low streamflows. Given the impact of cost functions on model optimization, such transformations focus on the hydrologic fluxes themselves rather than on equal time weighting common to traditional approaches. The utility of such a transform is of particular note to applications concerned with total hydrologic flux (water resources management, nutrient loading, etc.). The flow time approach can improve the predictive consistency of total fluxes in hydrologic models and provide insights into model performance by highlighting model strengths and deficiencies in an alternate modeling domain. Flow time transformations can also better remove positive skew from the streamflow time series, resulting in improved model fits, satisfaction of the normality assumption of model residuals, and enhanced uncertainty quantification. We illustrate the value of this transformation for two distinct sets of catchment conditions (snow-dominated and subtropical).
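The dynamic weighting idea can be sketched with a simple flow-proportional cost function (an assumed weighting scheme for illustration, not the authors' exact transform): each time step's squared error is weighted by that step's share of total observed flow, so high-flow periods dominate the objective, mirroring the "stretching time during high streamflows" effect.

```python
import numpy as np

def flow_time_sse(obs, sim):
    """Sum of squared errors weighted by each step's share of total flow."""
    w = obs / obs.sum()                      # flow-proportional weights
    return float(np.sum(w * (obs - sim) ** 2))

obs = np.array([1.0, 2.0, 10.0, 3.0])
err_uniform = flow_time_sse(obs, obs + 1.0)  # unit error at every step
peaky = obs.copy()
peaky[2] += 1.0                              # same-size error, at the peak only
err_peak = flow_time_sse(obs, peaky)
# A unit error at the peak carries 10/16 of the total weight, versus 1/16
# at the lowest flow, whereas ordinary SSE treats all steps identically.
```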

  8. Evaluating the Effectiveness of Reference Models in Federating Enterprise Architectures

    ERIC Educational Resources Information Center

    Wilson, Jeffery A.

    2012-01-01

    Agencies need to collaborate with each other to perform missions, improve mission performance, and find efficiencies. The ability of individual government agencies to collaborate with each other for mission and business success and efficiency is complicated by the different techniques used to describe their Enterprise Architectures (EAs).…

  9. Transforming teacher knowledge: Modeling instruction in physics

    NASA Astrophysics Data System (ADS)

    Cabot, Lloyd H.

    I show that the Modeling physics curriculum is readily accommodated by most teachers in preference to traditional didactic pedagogies. This is so, at least in part, because Modeling focuses on a small set of connected models embedded in a self-consistent theoretical framework, and thus is closely congruent with human cognition in this context, which generates mental models of physical phenomena as both predictive and explanatory devices. Whether a teacher fully implements the Modeling pedagogy depends on the depth of the teacher's commitment to inquiry-based instruction, specifically Modeling instruction, as a means of promoting student understanding of Newtonian mechanics. Moreover, this commitment trumps all other characteristics: teacher educational background, content coverage issues, student achievement data, district or state learning standards, and district or state student assessments. Indeed, distinctive differences exist in how Modeling teachers deliver their curricula, and some teachers are measurably more effective than others in their delivery, but they all share an unshakable belief in the efficacy of inquiry-based, constructivist-oriented instruction. The Modeling Workshops' pedagogy, duration, and social interactions impact teachers' self-identification as members of a professional community. Finally, I discuss the consequences my research may have for the Modeling Instruction program designers and for designers of professional development programs generally.

  10. An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures

    NASA Astrophysics Data System (ADS)

    Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.

    2009-07-01

    A range of complex challenges, but also potentially unique rewards, underlie the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.

  11. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    NASA Astrophysics Data System (ADS)

    Iacobucci, Joseph V.

    The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission dependent and mission independent metrics are considered. Mission dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used that reduce computer memory requirements. Lastly, a domain specific language is created to provide a reduction in the computational time of executing the system of systems models.
A domain specific language is a small, usually declarative language that offers expressive power focused on a particular

  13. New Models of Mechanisms for the Motion Transformation

    NASA Astrophysics Data System (ADS)

    Petrović, Tomislav; Ivanov, Ivan

    In this paper two new mechanisms for motion transformation are presented: a screw mechanism for transforming one-way circular motion into two-way linear motion with impulse control, and a worm-planetary gear train with an extremely high gear ratio. Both mechanisms represent new construction solutions for which patent protection has been obtained. These mechanisms are based on a differential gearbox with two degrees of freedom. They are characterized by a series of kinematic impacts during motion transformation and by the possibility of temporary or permanent changes in the structure through subtraction of the redundant degree of freedom; in this way the desired characteristic of the motion transformation is achieved. For each mechanism, the principles of motion and transformation are described and the basic equations relating the geometric, kinematic, and kinetic parameters of the system dynamics are given. The basic principles of controlling the new mechanisms are outlined, and the construction variants that may find practical application are indicated. Physical models of the new motion-transformation systems have been designed and their operation demonstrated. Experimental research confirmed the theoretical results and the very favorable kinematic characteristics of the mechanisms.
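For orientation, the kinematics of any planetary stage of the kind used in such gear trains obey the standard Willis relation (a textbook identity, not the authors' specific equations), relating the angular velocities of sun, ring, and carrier to the tooth counts:

```latex
\frac{\omega_s - \omega_c}{\omega_r - \omega_c} = -\frac{Z_r}{Z_s}
```

where \(\omega_s, \omega_r, \omega_c\) are the sun, ring, and carrier angular velocities and \(Z_s, Z_r\) the sun and ring tooth counts; very high overall ratios arise when the two terms nearly cancel.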

  14. Transforming Community Access to Space Science Models

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Hesse, Michael; Kuznetsova, Maria; Maddox, Marlo; Rastaetter, Lutz; Berrios, David; Pulkkinen, Antti

    2012-01-01

    Researching and forecasting the ever changing space environment (often referred to as space weather) and its influence on humans and their activities are model-intensive disciplines. This is true because the physical processes involved are complex, but, in contrast to terrestrial weather, the supporting observations are typically sparse. Models play a vital role in establishing a physically meaningful context for interpreting limited observations, testing theory, and producing both nowcasts and forecasts. For example, with accurate forecasting of hazardous space weather conditions, spacecraft operators can place sensitive systems in safe modes, and power utilities can protect critical network components from damage caused by large currents induced in transmission lines by geomagnetic storms.

  15. TRANSFORMATION

    SciTech Connect

    LACKS,S.A.

    2003-10-09

    Transformation, which alters the genetic makeup of an individual, is a concept that intrigues the human imagination. In Streptococcus pneumoniae such transformation was first demonstrated. Perhaps our fascination with genetics derived from our ancestors observing their own progeny, with its retention and assortment of parental traits, but such interest must have been accelerated after the dawn of agriculture. It was in pea plants that Gregor Mendel in the late 1800s examined inherited traits and found them to be determined by physical elements, or genes, passed from parents to progeny. In our day, the material basis of these genetic determinants was revealed to be DNA by the lowly bacteria, in particular, the pneumococcus. For this species, transformation by free DNA is a sexual process that enables cells to sport new combinations of genes and traits. Genetic transformation of the type found in S. pneumoniae occurs naturally in many species of bacteria (70), but, initially only a few other transformable species were found, namely, Haemophilus influenzae, Neisseria meningitidis, Neisseria gonorrhoeae, and Bacillus subtilis (96). Natural transformation, which requires a set of genes evolved for the purpose, contrasts with artificial transformation, which is accomplished by shocking cells either electrically, as in electroporation, or by ionic and temperature shifts. Although such artificial treatments can introduce very small amounts of DNA into virtually any type of cell, the amounts introduced by natural transformation are a million-fold greater, and S. pneumoniae can take up as much as 10% of its cellular DNA content (40).

  16. Negotiation Areas for "Transformation" and "Turnaround" Intervention Models

    ERIC Educational Resources Information Center

    Mass Insight Education (NJ1), 2011

    2011-01-01

    To receive School Improvement Grant (SIG) funding, districts must submit an application to the state that outlines their strategic plan to implement one of four intervention models in their persistently lowest-achieving schools. The four intervention models include: (1) School Closure; (2) Restart; (3) Turnaround; and (4) Transformation. The…

  17. NASA Integrated Model Centric Architecture (NIMA) Model Use and Re-Use

    NASA Technical Reports Server (NTRS)

    Conroy, Mike; Mazzone, Rebecca; Lin, Wei

    2012-01-01

    This whitepaper accepts the goals, needs and objectives of NASA's Integrated Model-centric Architecture (NIMA); adds experience and expertise from the Constellation program as well as NASA's architecture development efforts; and provides suggested concepts, practices and norms that nurture and enable model use and re-use across programs, projects and other complex endeavors. Key components include the ability to effectively move relevant information through a large community, process patterns that support model reuse and the identification of the necessary meta-information (e.g., history, credibility, and provenance) to safely use and re-use that information. In order to successfully use and re-use models and simulations we must define and meet key organizational and structural needs: 1. We must understand and acknowledge all the roles and players involved from the initial need identification through to the final product, as well as how they change across the lifecycle. 2. We must create the necessary structural elements to store and share NIMA-enabled information throughout the Program or Project lifecycle. 3. We must create the necessary organizational processes to stand up and execute a NIMA-enabled Program or Project throughout its lifecycle. NASA must meet all three of these needs to successfully use and re-use models. The ability to reuse models is a key component of NIMA, and the capabilities inherent in NIMA are key to accomplishing NASA's space exploration goals.

  18. TRANSFORMER

    DOEpatents

    Baker, W.R.

    1959-08-25

    Transformers of a type adapted for use with extreme high power vacuum tubes where current requirements may be of the order of 2,000 to 200,000 amperes are described. The transformer casing has the form of a re-entrant section being extended through an opening in one end of the cylinder to form a coaxial terminal arrangement. A toroidal multi-turn primary winding is disposed within the casing in coaxial relationship therein. In a second embodiment, means are provided for forming the casing as a multi-turn secondary. The transformer is characterized by minimized resistance heating, minimized external magnetic flux, and an economical construction.

  19. Locating PD in Transformers through Detailed Model and Neural Networks

    NASA Astrophysics Data System (ADS)

    Nafisi, Hamed; Abedi, Mehrdad; Gharehpetian, Gevorg B.

    2014-03-01

    In power transformers, which are among the major components of electric power networks, partial discharge (PD) is a major source of insulation failure. Accurate, high-speed techniques for locating PD sources are therefore required for repair and maintenance. This paper introduces novel methods based on two different artificial neural networks (ANNs) for identifying PD locations in power transformers. Fuzzy ARTMAP and Bayesian neural networks are employed for PD location, using a detailed model (DM) of the power transformer for simulation purposes. The PD phenomenon is simulated at different points along the transformer winding using a three-capacitor model. An impulse test is then applied to the transformer terminals, and the resulting neutral-point current is used for training and testing the ANNs. In practice the measured current signals include noise components, so the performance of the Fuzzy ARTMAP and Bayesian networks in correctly identifying the PD location under noisy conditions is also investigated. An RBF learning procedure is used for the Bayesian network, while a Markov chain Monte Carlo (MCMC) method is employed to train the Fuzzy ARTMAP network for locating PD in a power transformer winding, and the results are compared.

  20. Transformative leadership: an ethical stewardship model for healthcare.

    PubMed

    Caldwell, Cam; Voelker, Carolyn; Dixon, Rolf D; LeJeune, Adena

    2008-01-01

    The need for effective leadership is a compelling priority for those who would choose to govern in public, private, and nonprofit organizations, and applies as much to the healthcare profession as it does to other sectors of the economy (Moody, Horton-Deutsch, & Pesut, 2007). Transformative Leadership, an approach to leadership and governance that incorporates the best characteristics of six other highly respected leadership models, is an integrative theory of ethical stewardship that can help healthcare professionals to more effectively achieve organizational efficiencies, build stakeholder commitment and trust, and create valuable synergies to transform and enrich today's healthcare systems (cf. Caldwell, LeJeune, & Dixon, 2007). The purpose of this article is to introduce the concept of Transformative Leadership and to explain how this model applies within a healthcare context. We define Transformative Leadership and identify its relationship to Transformational, Charismatic, Level 5, Principle-Centered, Servant, and Covenantal Leadership--providing examples of each of these elements of Transformative Leadership within a healthcare leadership context. We conclude by identifying contributions of this article to the healthcare leadership literature.

  1. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
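The core frequency-domain step can be sketched as follows (a generic textbook setup with invented signals, not the paper's orthogonal modeling functions): Fourier-transform the input and output, take their pointwise ratio as the empirical frequency response, and return to the time domain to read off a short impulse response.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
x = rng.standard_normal(n)                    # broadband input
h = np.array([0.5, 0.3, 0.2])                 # "true" impulse response
# Circular convolution via the FFT, so the ratio below is exact.
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n)))

H = np.fft.fft(y) / np.fft.fft(x)             # empirical frequency response
h_est = np.real(np.fft.ifft(H))[: len(h)]     # back to the time domain
```

With noisy data this naive ratio becomes unreliable at frequencies where the input has little energy, which is one motivation for the regularized, orthogonal-function approach the abstract describes.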

  2. Plant Growth Modelling and Applications: The Increasing Importance of Plant Architecture in Growth Models

    PubMed Central

    Fourcaud, Thierry; Zhang, Xiaopeng; Stokes, Alexia; Lambers, Hans; Körner, Christian

    2008-01-01

    Background Modelling plant growth allows us to test hypotheses and carry out virtual experiments concerning plant growth processes that could otherwise take years in field conditions. The visualization of growth simulations allows us to see directly and vividly the outcome of a given model and provides us with an instructive tool useful for agronomists and foresters, as well as for teaching. Functional–structural (FS) plant growth models are nowadays particularly important for integrating biological processes with environmental conditions in 3-D virtual plants, and provide the basis for more advanced research in plant sciences. Scope In this viewpoint paper, we ask the following questions. Are we modelling the correct processes that drive plant growth, and is growth driven mostly by sink or source activity? In current models, is the importance of soil resources (nutrients, water, temperature and their interaction with meristematic activity) considered adequately? Do classic models account for architectural adjustment as well as integrating the fundamental principles of development? Whilst answering these questions with the available data in the literature, we put forward the opinion that plant architecture and sink activity must be pushed to the centre of plant growth models. In natural conditions, sinks will more often drive growth than source activity, because sink activity is often controlled by finite soil resources or developmental constraints. PMA06 This viewpoint paper also serves as an introduction to this Special Issue devoted to plant growth modelling, which includes new research covering areas stretching from cell growth to biomechanics. All papers were presented at the Second International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA06), held in Beijing, China, from 13–17 November, 2006. Although a large number of papers are devoted to FS models of agricultural and forest crop species, physiological and genetic

  3. Modeling of a 3DTV service in the software-defined networking architecture

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2014-11-01

    This article presents a newly developed concept for modeling a multimedia service offering stereoscopic motion imagery. The proposed model is based on the Software-Defined Networking (SDN) architecture. A definition of a 3D television service spanning the SDN concept is identified, exposing the basic characteristics of a 3DTV service in a modern networking layout. Furthermore, exemplary functionalities of the proposed 3DTV model are depicted. Modeling a 3DTV service in the Software-Defined Networking architecture leads to a multiplicity of improvements, especially in the flexibility of a service supporting heterogeneous end-user devices.

  4. A Service Oriented Architecture for Exploring High Performance Distributed Power Models

    SciTech Connect

    Liu, Yan; Chase, Jared M.; Gorton, Ian

    2012-11-12

    Power grids are increasingly incorporating high quality, high throughput sensor devices inside power distribution networks. These devices are driving an unprecedented increase in the volume and rate of available information. The real-time requirements for handling this data are beyond the capacity of conventional power models running in central utilities. Hence, we are exploring distributed power models deployed at the regional scale. The connection of these models for a larger geographic region is supported by a distributed system architecture. This architecture is built in a service oriented style, whereby distributed power models running on high performance clusters are exposed as services. Each service is semantically annotated and therefore can be discovered through a service catalog and composed into workflows. The overall architecture has been implemented as an integrated workflow environment useful for power researchers to explore newly developed distributed power models.

  5. Lie algebraic similarity transformed Hamiltonians for lattice model systems

    NASA Astrophysics Data System (ADS)

    Wahlen-Strothman, Jacob M.; Jiménez-Hoyos, Carlos A.; Henderson, Thomas M.; Scuseria, Gustavo E.

    2015-01-01

    We present a class of Lie algebraic similarity transformations generated by exponentials of two-body on-site Hermitian operators whose Hausdorff series can be summed exactly without truncation. The correlators are defined over the entire lattice and include the Gutzwiller factor n_{i↑}n_{i↓}, and two-site products of density (n_{i↑}+n_{i↓}) and spin (n_{i↑}-n_{i↓}) operators. The resulting non-Hermitian many-body Hamiltonian can be solved in a biorthogonal mean-field approach with polynomial computational cost. The proposed similarity transformation generates locally weighted orbital transformations of the reference determinant. Although the energy of the model is unbounded, projective equations in the spirit of coupled cluster theory lead to well-defined solutions. The theory is tested on the one- and two-dimensional repulsive Hubbard model where it yields accurate results for small and medium sized interaction strengths.

  6. Software architecture and design of the web services facilitating climate model diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform independence; (2) co-location of computation and big data on the server side, with only small results and plots downloaded to the client side, hence high data efficiency; (3) a multi-threaded implementation that achieves parallel performance on multi-core servers; and (4) cloud deployment, so each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a light-weight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. Our tool CMDA has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Student feedback was generally positive, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the upcoming 2015 Summer School.

  7. Bayesian spatial transformation models with applications in neuroimaging data

    PubMed Central

    Miranda, Michelle F.; Zhu, Hongtu; Ibrahim, Joseph G.

    2013-01-01

    Summary The aim of this paper is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. Our STMs include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov Random Field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. PMID:24128143
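For reference, the (non-spatial) Box-Cox transformation that the STM builds on has the standard form below; the paper's spatially varying version lets the exponent lambda differ across voxels. This is a generic sketch of the textbook definition, not the authors' code.

```python
import numpy as np

def box_cox(y, lam):
    """Standard Box-Cox transform: (y^lam - 1)/lam, with log(y) at lam = 0."""
    y = np.asarray(y, dtype=float)
    if lam == 0.0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

y = np.array([1.0, 2.0, 4.0])
z_log = box_cox(y, 0.0)        # lam = 0 is the log transform
z_id = box_cox(y, 1.0)         # lam = 1 is a pure shift, y - 1
z_near = box_cox(y, 1e-6)      # small lam approaches the log transform
```

The transform requires positive data, and the continuity at lambda = 0 (the limit of the power form is the logarithm) is what makes lambda a smoothly estimable parameter in models like the STM.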

  8. Estimation in a semi-Markov transformation model

    PubMed Central

    Dabrowska, Dorota M.

    2012-01-01

    Multi-state models provide a common tool for the analysis of longitudinal failure time data. In biomedical applications, models of this kind are often used to describe the evolution of a disease and assume that a patient may move among a finite number of states representing different phases in the disease progression. Several authors have developed extensions of the proportional hazard model for the analysis of multi-state models in the presence of covariates. In this paper, we consider a general class of censored semi-Markov and modulated renewal processes and propose the use of transformation models for their analysis. Special cases include modulated renewal processes with interarrival times specified using transformation models, and semi-Markov processes with one-step transition probabilities defined using copula-transformation models. We discuss estimation of finite and infinite dimensional parameters of the model, and develop an extension of the Gaussian multiplier method for setting confidence bands for transition probabilities. A transplant outcome data set from the Center for International Blood and Marrow Transplant Research is used for illustrative purposes. PMID:22740583

  9. Laguerre-Volterra model and architecture for MIMO system identification and output prediction.

    PubMed

    Li, Will X Y; Xin, Yao; Chan, Rosa H M; Song, Dong; Berger, Theodore W; Cheung, Ray C C

    2014-01-01

    A generalized mathematical model is proposed for predicting the behavior of biological causal systems with multiple inputs and multiple outputs (MIMO). The system properties are represented by a set of model parameters, which can be derived by probing the system with random input stimuli. The system calculates predicted outputs based on the estimated parameters and its novel inputs. An efficient hardware architecture is established for this mathematical model, and its circuitry has been implemented using field-programmable gate arrays (FPGAs). This architecture is scalable, and its functionality has been validated using experimental data gathered from real-world measurements. PMID:25571001

  10. Model Transformation for a System of Systems Dependability Safety Case

    NASA Technical Reports Server (NTRS)

    Murphy, Judy; Driskell, Steve

    2011-01-01

    The presentation reviews the dependability and safety effort of NASA's Independent Verification and Validation Facility. Topics include: safety engineering process, applications to non-space environment, Phase I overview, process creation, sample SRM artifact, Phase I end result, Phase II model transformation, fault management, and applying Phase II to individual projects.

  11. Transforming a School of Education via the Accelerated Schools Model.

    ERIC Educational Resources Information Center

    Mims, J. Sabrina; Slovacek, Simeon; Wong, Gay Yuen

    This paper describes how the Accelerated Schools Model has served as a catalyst for transforming the Charter School of Education at California State University, Los Angeles. The Accelerated Schools Project has been one of the largest and most comprehensive school restructuring movements of the last decade. The focus of Accelerated Schools is…

  12. Correctness, Completeness and Termination of Pattern-Based Model-to-Model Transformation

    NASA Astrophysics Data System (ADS)

    Orejas, Fernando; Guerra, Esther; de Lara, Juan; Ehrig, Hartmut

    Model-to-model (M2M) transformation consists in transforming models from a source to a target language. Many transformation languages exist, but few of them combine a declarative and relational style with a formal underpinning able to show properties of the transformation. Pattern-based transformation is an algebraic, bidirectional, and relational approach to M2M transformation. Specifications are made of patterns stating the allowed or forbidden relations between source and target models, and then compiled into low-level operational mechanisms to perform source-to-target or target-to-source transformations. In this paper, we study the compilation into operational triple graph grammar rules and show: (i) correctness of the compilation of a specification without negative patterns; (ii) termination of the rules; and (iii) completeness, in the sense that every model considered relevant can be built by the rules.

  13. To transform or not to transform: using generalized linear mixed models to analyse reaction time data

    PubMed Central

    Lo, Steson; Andrews, Sally

    2015-01-01

    Linear mixed-effect models (LMMs) are increasingly widely used in psychology to analyse multi-level research designs. Because they do not average across individual responses, LMMs can address some of the problems identified by Speelman and McGann (2013) with the use of mean data. However, recent guidelines for using LMMs to analyse the skewed reaction time (RT) data collected in many cognitive psychological studies recommend applying non-linear transformations to satisfy assumptions of normality. Uncritical adoption of this recommendation has important theoretical implications and can yield misleading conclusions. For example, Balota et al. (2013) showed that analyses of raw RT produced additive effects of word frequency and stimulus quality on word identification, which conflicted with the interactive effects observed in analyses of transformed RT. Generalized linear mixed-effect models (GLMMs) provide a solution to this problem by satisfying normality assumptions without the need for transformation. This allows differences between individuals to be properly assessed, using the metric most appropriate to the researcher's theoretical context. We outline the major theoretical decisions involved in specifying a GLMM and illustrate them by reanalysing Balota et al.'s datasets. We then consider the broader benefits of using GLMMs to investigate individual differences. PMID:26300841
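    Why the analysis scale matters can be shown with a toy computation (illustrative numbers, not Balota et al.'s data): when two effects combine multiplicatively, the raw scale shows an interaction that vanishes exactly on the log scale, so the transformation choice drives the theoretical conclusion.

```python
# Toy cell-means example: multiplicative effects of word frequency and
# stimulus quality on RT. The interaction contrast is nonzero on the raw
# scale but exactly zero on the log scale.
import math

base = 500.0                  # ms, hypothetical baseline RT
freq_effect = 1.10            # low-frequency words 10% slower
quality_effect = 1.20         # degraded stimuli 20% slower

def rt(low_freq, degraded):
    t = base
    if low_freq:
        t *= freq_effect
    if degraded:
        t *= quality_effect
    return t

# Interaction contrast on the raw scale: nonzero (here 10 ms).
raw_int = (rt(1, 1) - rt(1, 0)) - (rt(0, 1) - rt(0, 0))
# Same contrast on the log scale: exactly zero (purely additive).
log_int = ((math.log(rt(1, 1)) - math.log(rt(1, 0)))
           - (math.log(rt(0, 1)) - math.log(rt(0, 0))))
```

A GLMM sidesteps this dilemma by modeling the raw metric directly while accommodating the skewed error distribution.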

  14. Transitioning ISR architecture into the cloud

    NASA Astrophysics Data System (ADS)

    Lash, Thomas D.

    2012-06-01

    Emerging cloud computing platforms offer an ideal opportunity for Intelligence, Surveillance, and Reconnaissance (ISR) intelligence analysis. Cloud computing platforms help overcome challenges and limitations of traditional ISR architectures. Modern ISR architectures can benefit from examining commercial cloud applications, especially as they relate to user experience, usage profiling, and transformational business models. This paper outlines legacy ISR architectures and their limitations, presents an overview of cloud technologies and their applications to the ISR intelligence mission, and presents an idealized ISR architecture implemented with cloud computing.

  15. Assessing biocomputational modelling in transforming clinical guidelines for osteoporosis management.

    PubMed

    Thiel, Rainer; Viceconti, Marco; Stroetmann, Karl

    2011-01-01

    Biocomputational modelling as developed by the European Virtual Physiological Human (VPH) Initiative is the area of ICT most likely to revolutionise in the longer term the practice of medicine. Using the example of osteoporosis management, a socio-economic assessment framework is presented that captures how the transformation of clinical guidelines through VPH models can be evaluated. Applied to the Osteoporotic Virtual Physiological Human Project, a consequent benefit-cost analysis delivers promising results, both methodologically and substantially. PMID:21893787

  16. Understanding transparency perception in architecture: presentation of the simplified perforated model.

    PubMed

    Brzezicki, Marcin

    2013-01-01

    Issues of transparency perception are addressed from an architectural perspective, pointing out previously neglected factors that greatly influence this phenomenon at the scale of a building. The simplified perforated model of a transparent surface presented in the paper is based on previously developed theories and involves the balance of light reflected versus light transmitted. Its aim is to facilitate an understanding of non-intuitive phenomena related to transparency (e.g., dynamically changing reflectance) for readers without advanced knowledge of molecular physics. A verification of the presented model has been based on a comparison of the optical performance of the model with the results of Fresnel's equations for light-transmitting materials. The presented methodology is intended to be used both in the design and explanatory stages of architectural practice and vision research. Incorporation of architectural issues could enrich the perspective of scientists representing other disciplines.
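    The reflected-versus-transmitted balance against which such a model is verified reduces, at normal incidence, to a one-line Fresnel formula. A minimal sketch (n = 1.52 is ordinary window glass; the function name is our own):

```python
# Fresnel reflectance at normal incidence for a single interface between
# two non-absorbing media with refractive indices n1 and n2.
def fresnel_reflectance_normal(n1, n2):
    """Fraction of incident light reflected at normal incidence."""
    return ((n1 - n2) / (n1 + n2)) ** 2

r = fresnel_reflectance_normal(1.0, 1.52)  # air -> glass, about 4%
t = 1.0 - r                                # transmitted fraction
```

At oblique angles the full polarization-dependent Fresnel equations apply, which is where the dynamically changing reflectance the paper discusses becomes non-intuitive.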

  17. Multimodal electromechanical model of piezoelectric transformers by Hamilton's principle.

    PubMed

    Nadal, Clement; Pigache, Francois

    2009-11-01

    This work deals with a general energetic approach to establish an accurate electromechanical model of a piezoelectric transformer (PT). Hamilton's principle is used to obtain the equations of motion for free vibrations. The modal characteristics (mass, stiffness, primary and secondary electromechanical conversion factors) are also deduced. Then, to illustrate this general electromechanical method, the variational principle is applied to both homogeneous and nonhomogeneous Rosen-type PT models. A comparison of modal parameters, mechanical displacements, and electrical potentials are presented for both models. Finally, the validity of the electrodynamical model of nonhomogeneous Rosen-type PT is confirmed by a numerical comparison based on a finite elements method and an experimental identification.
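    The variational statement underlying this approach can be written in a generic textbook form (the symbol grouping below is a standard formulation for electromechanical systems, not necessarily the paper's exact notation):

```latex
% Hamilton's principle for an electromechanical system:
% T = kinetic energy, U = strain energy, W_e = electrical energy,
% W = work of external forces and charges.
\delta \int_{t_1}^{t_2} \left( T - U + W_e + W \right) dt = 0
```

Carrying out the variation for an assumed set of mode shapes yields the modal mass, stiffness, and electromechanical conversion factors mentioned in the abstract.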

  18. Cultural heritage conservation and communication by digital modeling tools. Case studies: minor architectures of the Thirties in the Turin area

    NASA Astrophysics Data System (ADS)

    Bruno, A., Jr.; Spallone, R.

    2015-08-01

    Between the end of the twenties and the beginning of the Second World War, Turin, like most Italian cities, was endowed by the fascist regime with many new buildings to guarantee its visibility and to control the territory: the fascist party's main houses and the local ones. The style adopted for these constructions was inspired by the guidelines of the Modern Movement, which were being spread by a generation of architects such as Le Corbusier, Gropius, and Mendelsohn. At the end of the war many buildings were converted to other functions, leading to heavy transformations not respectful of their original worth; others were demolished. Today it is possible to rebuild those lost architectures in their original form, as created by their architects on paper (and in their minds). This process can guarantee three-dimensional perception, the authenticity of the materials, and the placement within the Turin urban tissue, using static and dynamic digital representation systems. The "three-dimensional re-drawing" of the projects, conceived as a heuristic practice devoted to revealing the original idea of the project, is inserted into a digital model of the urban and natural context as we can experience it today, to simulate the perceptive effects that the building could stir up today. These modeling skills are the basis for producing videos able to explore the relationship between the environment and the "re-built architectures", describing, with synthetic movie techniques, the main formal and perceptive roots. The model represents a scientific product that can be included in a virtual archive of cultural goods to preserve the collective memory of the architectural and urban past image of Turin.

  19. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  20. Modeling solid-state transformations occurring in dissolution testing.

    PubMed

    Laaksonen, Timo; Aaltonen, Jaakko

    2013-04-15

    Changes in the solid-state form can occur during dissolution testing of drugs. This can often complicate interpretation of results. Additionally, there can be several mechanisms through which such a change proceeds, e.g. solvent-mediated transformation or crystal growth within the drug material itself. Here, a mathematical model was constructed to study the dissolution testing of a material, which undergoes such changes. The model consisted of two processes: the recrystallization of the drug from a supersaturated liquid state caused by the dissolution of the more soluble solid form and the crystal growth of the stable solid form at the surface of the drug formulation. Comparison to experimental data on theophylline dissolution showed that the results obtained with the model matched real solid-state changes and that it was able to distinguish between cases where the transformation was controlled either by solvent-mediated crystallization or solid-state crystal growth. PMID:23506958
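    The two competing processes described above can be sketched as a small rate model. This is a hedged illustration under our own assumptions, not the paper's actual equations: dissolution of the metastable form drives the solution toward its solubility C1, while crystallization of the stable form pulls it back toward the lower solubility Cs.

```python
# Assumed toy model: solvent-mediated transformation as two competing
# first-order processes, integrated with forward Euler. All solubilities
# and rate constants are illustrative, not fitted to theophylline data.
C1, Cs = 12.0, 4.0          # solubilities (arbitrary units), C1 > Cs
k_diss, k_cryst = 0.8, 0.5  # dissolution / crystallization rates (1/min)

C = 0.0                     # dissolved drug concentration
dt, steps = 0.01, 5000
history = []
for _ in range(steps):
    dC = k_diss * (C1 - C) - k_cryst * max(C - Cs, 0.0)
    C += dC * dt
    history.append(C)
# The concentration plateaus between Cs and C1: supersaturated with
# respect to the stable form, undersaturated with respect to the
# metastable one, the signature of solvent-mediated transformation.
```

In this sketch the steady state sits where the two rates balance, at (k_diss*C1 + k_cryst*Cs) / (k_diss + k_cryst); distinguishing solvent-mediated from solid-state-controlled kinetics amounts to asking which rate constant limits the approach to that plateau.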

  1. Tensor product model transformation based decoupled terminal sliding mode control

    NASA Astrophysics Data System (ADS)

    Zhao, Guoliang; Li, Hongxing; Song, Zhankui

    2016-06-01

    The main objective of this paper is to propose a tensor product model transformation based decoupled terminal sliding mode controller design methodology. The methodology is divided into two steps. In the first step, tensor product model transformation is applied to the single-input-multi-output system and a parameter-varying weighted linear time-invariant system is obtained. Then, a decoupled terminal sliding mode controller is designed based on the linear time-invariant systems. The main novelty of this paper is that the nonsingular terminal sliding mode control design is based on a numerical model rather than an analytical one. Finally, simulations are tested on a cart-pole system and a translational-oscillations-with-a-rotational-actuator system.
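    The numerical core of tensor product model transformation is a higher-order SVD of a system tensor sampled over the parameter grid. A minimal HOSVD sketch (our own generic implementation on a random tensor, standing in for sampled system matrices):

```python
# Minimal higher-order SVD: factor a tensor into an orthonormal basis per
# mode (the weighting functions in TP model transformation) and a core
# tensor (the vertex systems).
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """Return core tensor S and per-mode orthonormal factors U_n."""
    Us = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
          for m in range(T.ndim)]
    S = T
    for mode, U in enumerate(Us):
        S = mode_multiply(S, U.T, mode)
    return S, Us

# Hypothetical 4x5x3 tensor of system data sampled over two parameters.
rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5, 3))
S, Us = hosvd(T)
# Exact reconstruction: T = S x1 U1 x2 U2 x3 U3.
R = S
for mode, U in enumerate(Us):
    R = mode_multiply(R, U, mode)
```

In the TP framework, truncating the singular vectors per mode is what yields the finite parameter-varying weighting of LTI vertex systems on which the sliding mode controller is then designed.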

  2. Protein modeling with hybrid Hidden Markov Model/Neural network architectures

    SciTech Connect

    Baldi, P.; Chauvin, Y.

    1995-12-31

    Hidden Markov Models (HMMs) are useful in a number of tasks in computational molecular biology, and in particular to model and align protein families. We argue that HMMs are somewhat optimal within a certain modeling hierarchy. Single first order HMMs, however, have two potential limitations: a large number of unstructured parameters, and a built-in inability to deal with long-range dependencies. Hybrid HMM/Neural Network (NN) architectures attempt to overcome these limitations. In a hybrid HMM/NN, the HMM parameters are computed by an NN. This provides a reparametrization that allows for flexible control of model complexity, and incorporation of constraints. The approach is tested on the immunoglobulin family. A hybrid model is trained, and a multiple alignment derived, with less than a fourth of the number of parameters used with previous single HMMs. To capture dependencies, however, one must resort to a larger hybrid model class, where the data is modeled by multiple HMMs. The parameters of the HMMs, and their modulation as a function of input or context, are again calculated by an NN.
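    The reparametrization idea can be sketched in a few lines. This is an assumed toy version, not the paper's architecture: a softmax over a weight matrix stands in for the NN that produces the HMM's emission probabilities, which then feed a standard forward pass.

```python
# Toy hybrid HMM/NN: "network" parameters W are mapped through a softmax
# to valid emission probabilities; sequence likelihood comes from the
# standard HMM forward algorithm.
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

n_states, n_symbols = 3, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((n_states, n_symbols))   # NN parameters
emission = softmax(W, axis=1)                    # rows sum to 1
transition = softmax(rng.standard_normal((n_states, n_states)), axis=1)
start = np.full(n_states, 1.0 / n_states)

def forward_likelihood(obs):
    """P(observation sequence) under the HMM (forward algorithm)."""
    alpha = start * emission[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ transition) * emission[:, o]
    return alpha.sum()

p = forward_likelihood([0, 2, 1, 3])
```

Training the hybrid means backpropagating the likelihood gradient through the softmax into W, so constraints and parameter sharing live in the network rather than in unstructured HMM tables.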

  3. Phase Transformation Hysteresis in a Plutonium Alloy System: Modeling the Resistivity during the Transformation

    SciTech Connect

    Haslam, J J; Wall, M A; Johnson, D L; Mayhall, D J; Schwartz, A J

    2001-11-14

    We have induced, measured, and modeled the δ-α′ martensitic transformation in a Pu-Ga alloy by a resistivity technique on a 2.8-mm diameter disk sample. Our measurements of the resistance by a 4-probe technique were consistent with the expected resistance obtained from a finite element analysis of the 4-point measurement of resistivity in our round disk configuration. Analysis by finite element methods of the postulated configuration of α′ particles within model δ grains suggests that a considerable anisotropy in the resistivity may be obtained depending on the arrangement of the lens-shaped α′ particles within the grains. The resistivity of these grains departs from the series resistance model and can lead to significant errors in the predicted amount of the α′ phase present in the microstructure. An underestimation of the amount of α′ in the sample by 15% or more appears to be possible.

  4. VHDL Modeling and Simulation for a Digital Target Imaging Architecture for Multiple Large Targets Generation

    NASA Astrophysics Data System (ADS)

    Bergoen, Halkan

    2002-09-01

    The subject of this thesis is to model and verify the correctness of the architecture of the Digital Image Synthesizer (DIS). The DIS, a system-on-a-chip, is especially useful as a counter-targeting repeater. It synthesizes the characteristic echo signature of a pre-selected target. The VHDL (VHSIC (Very High Speed Integrated Circuit) Hardware Description Language) description of the DIS architecture was exported from Tanner S-Edit, modified, and simulated. Different software-oriented verification approaches were researched, and a white-box approach to functional verification was adopted. An algorithm based on the hardware functionality was developed to compare expected and simulated results. Initially, the architecture of one Range Bin Modulator was exported. Modifications to the VHDL source code included modeling the behavior of the N-FET (negative-channel) and P-FET (positive-channel) field-effect transistors, as well as Ground and Vdd (the voltages connected to the drains of the FETs). They also included renaming entities to comply with VHDL naming conventions. Simulation results were compared to manual calculations and Matlab programs to verify the architecture. The procedure was repeated for the architecture of an Eight-Range Bin Modulator with equally successful results. VHDL was then used to create a super class of a 32-Range Bin Modulator. Test vectors developed in Matlab were used to yet again verify correct functionality.

  5. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    DOE PAGES

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; Vetter, Jeffrey S.

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  6. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    SciTech Connect

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; Vetter, Jeffrey S.

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  7. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes, such as standby redundancy and repair, or renewal processes, such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the need to automate model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model, formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
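    The kind of model ARM would formulate automatically can be sketched by hand for a tiny case. This is an assumed example with illustrative rates, not ARM's output: a two-unit standby system with repair, solved via the matrix exponential of the transition-rate matrix.

```python
# Markov reliability model for a two-unit standby system with repair.
# States: 0 = both units up, 1 = one unit down (repair in progress),
#         2 = system failed (absorbing).
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1   # failure and repair rates per hour (illustrative)

Q = np.array([
    [-lam,         lam,  0.0],
    [  mu, -(mu + lam),  lam],
    [ 0.0,         0.0,  0.0],
])

def reliability(t):
    """Probability the system has not reached the failed state by time t."""
    p = np.array([1.0, 0.0, 0.0]) @ expm(Q * t)
    return 1.0 - p[2]

r = reliability(1000.0)  # survival probability over 1000 hours
```

Even this three-state example shows why automation helps: a realistic PMS structure multiplies states and transitions combinatorially, and hand-built rate matrices are where the human errors ARM targets creep in.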

  8. Research and development of the evolving architecture for beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Cho, Kihyeon; Kim, Jangho; Kim, Junghyun

    2015-12-01

    The Standard Model (SM) has been successfully validated with the discovery of Higgs boson. However, the model is not yet fully regarded as a complete description. There are efforts to develop phenomenological models that are collectively termed beyond the standard model (BSM). The BSM requires several orders of magnitude more simulations compared with those required for the Higgs boson events. On the other hand, particle physics research involves major investments in hardware coupled with large-scale theoretical and computational efforts along with experiments. These fields include simulation toolkits based on an evolving computing architecture. Using the simulation toolkits, we study particle physics beyond the standard model. Here, we describe the state of this research and development effort for evolving computing architecture of high throughput computing (HTC) and graphic processing units (GPUs) for searching beyond the standard model.

  9. A transformation model for Laminaria Japonica (Phaeophyta, Laminariales)

    NASA Astrophysics Data System (ADS)

    Qin, Song; Jiang, Peng; Li, Xin-Ping; Wang, Xi-Hua; Zeng, Cheng-Kui

    1998-03-01

    A genetic transformation model for the seaweed Laminaria japonica mainly includes the following aspects: 1. The method to introduce foreign genes into the kelp L. japonica. Biolistic bombardment has been proved to be an effective method for delivering foreign DNA through cell walls into the intact cells of both sporophytes and gametophytes. The expression of cat and lacZ was detected in regenerated sporophytes, which suggests that this method can induce random integration of foreign genes. 2. Promoters to drive gene expression

  10. Coupled modified baker's transformations for the Ising model.

    PubMed

    Sakaguchi, H

    1999-12-01

    An invertible coupled map lattice is proposed for the Ising model. Each elemental map is a modified baker's transformation, which is a two-dimensional map of X and Y. The time evolution of the spin variable is memorized in the binary representation of the Y variable. The temporal entropy and time correlation of the spin variable are calculated from the snapshot configuration of the Y variables.
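    The elemental map can be illustrated with the classic baker's transformation on the unit square (the paper's Ising map is a modified, coupled version): it is invertible, and the Y coordinate accumulates the history of the X dynamics bit by bit, which is exactly the memorization property the abstract describes.

```python
# Classic baker's transformation and its inverse on the unit square.
# Forward: stretch in x, compress in y, stack the two halves.
def baker(x, y):
    if x < 0.5:
        return 2.0 * x, 0.5 * y
    return 2.0 * x - 1.0, 0.5 * (y + 1.0)

def baker_inverse(x, y):
    if y < 0.5:
        return 0.5 * x, 2.0 * y
    return 0.5 * (x + 1.0), 2.0 * y - 1.0

# Invertibility: a forward step followed by the inverse recovers the
# point exactly for dyadic coordinates.
pt = (0.3125, 0.71875)
assert baker_inverse(*baker(*pt)) == pt
```

Each forward step shifts one bit of X's binary expansion into Y, so iterating the inverse replays the spin history backwards, which is what makes temporal entropy computable from a single snapshot of the Y variables.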

  11. Modeling Two-Channel Speech Processing With the EPIC Cognitive Architecture.

    PubMed

    Kieras, David E; Wakefield, Gregory H; Thompson, Eric R; Iyer, Nandini; Simpson, Brian D

    2016-01-01

    An important application of cognitive architectures is to provide human performance models that capture psychological mechanisms in a form that can be "programmed" to predict task performance of human-machine system designs. Although many aspects of human performance have been successfully modeled in this approach, accounting for multitalker speech task performance is a novel problem. This article presents a model for performance in a two-talker task that incorporates concepts from psychoacoustics, in particular, masking effects and stream formation. PMID:26748483

  12. Challenges in Materials Transformation Modeling for Polyolefins Industry

    NASA Astrophysics Data System (ADS)

    Lai, Shih-Yaw; Swogger, Kurt W.

    2004-06-01

    Unlike most published polymer processing and/or forming research, the transformation of polyolefins into fabricated articles often involves non-confined flow, or so-called free surface flow (e.g. fiber spinning, blown films, and cast films), in which elongational flow takes place during the fabrication process. Accordingly, the characterization and validation of extensional rheological parameters, and their use in developing rheological constitutive models, are the focus of polyolefin materials transformation research. Unfortunately, challenges remain, with limited validation of non-linear, non-isothermal constitutive models for polyolefins. Further complexity arises because the transformation of polyolefins in an elongational flow system involves a stress-induced crystallization process. The complicated nature of elongational, non-linear rheology and non-isothermal crystallization kinetics makes the development of numerical methods for polyolefin forming models very challenging. From the standpoint of a product-based company, the challenges of materials transformation research go beyond elongational rheology, crystallization kinetics, and numerical modeling. In order to make models useful for the polyolefin industry, it is critical to develop links between molecular parameters and both equipment and materials forming parameters. Recent advances in constrained geometry catalysis and materials science understanding (INSITE technology and molecular design capability) have made industrial polyolefinic materials forming modeling more viable, because the molecular structure of the polymer can be well predicted and controlled during polymerization. In this paper, we will discuss inter-relationships (models) among molecular parameters such as polymer molecular weight (Mw), molecular weight distribution (MWD), long chain branching (LCB), short chain branching (SCB, or comonomer types and distribution) and their effects on shear and

  13. Using UML Modeling to Facilitate Three-Tier Architecture Projects in Software Engineering Courses

    ERIC Educational Resources Information Center

    Mitra, Sandeep

    2014-01-01

    This article presents the use of a model-centric approach to facilitate software development projects conforming to the three-tier architecture in undergraduate software engineering courses. Many instructors intend that such projects create software applications for use by real-world customers. While it is important that the first version of these…

  14. Can diversity in root architecture explain plant water use efficiency? A modeling study

    PubMed Central

    Tron, Stefania; Bodner, Gernot; Laio, Francesco; Ridolfi, Luca; Leitner, Daniel

    2015-01-01

    Drought stress is a dominant constraint to crop production. Breeding crops with root systems adapted for effective uptake of water represents a novel strategy to increase crop drought resistance. Due to the complex interaction between root traits and the high diversity of hydrological conditions, modeling provides important information for trait-based selection. In this work we use a root architecture model combined with a soil-hydrological model to analyze whether there is a root system ideotype of general adaptation to drought, or whether the water uptake efficiency of a root system is a function of specific hydrological conditions. This was done by modeling transpiration of 48 root architectures in 16 drought scenarios with distinct soil textures, rainfall distributions, and initial soil moisture availability. We find that the water uptake efficiency of a root architecture is strictly dependent on the hydrological scenario. Even dense and deep root systems are not superior in water uptake under all hydrological scenarios. Our results demonstrate that a mere architectural description is insufficient to find root systems of optimum functionality. We find that in environments with sufficient rainfall before the growing season, root depth represents the key trait for the exploration of stored water, especially in fine soils. Root density, instead, especially near the soil surface, becomes the most relevant trait for exploiting soil moisture when plant water supply is mainly provided by rainfall events during root system development. We therefore conclude that trait-based root breeding has to consider root systems with specific adaptation to the hydrology of the target environment. PMID:26412932

  15. A Model Based Framework for Semantic Interpretation of Architectural Construction Drawings

    ERIC Educational Resources Information Center

    Babalola, Olubi Oluyomi

    2011-01-01

    The study addresses the automated translation of architectural drawings from 2D Computer Aided Drafting (CAD) data into a Building Information Model (BIM), with emphasis on the nature, possible role, and limitations of a drafting language Knowledge Representation (KR) on the problem and process. The central idea is that CAD to BIM translation is a…

  16. A data-driven parallel execution model and architecture for logic programs

    SciTech Connect

    Tseng, Chien-Chao.

    1989-01-01

    Logic programming has come to prominence in recent years after the decision of the Japanese Fifth Generation Project to adopt it as the kernel language. A significant number of research projects are attempting to implement different schemes to exploit the inherent parallelism in logic programs. The dataflow architectural model has been found to be attractive for parallel execution of logic programs. In this research, five dataflow execution models available in the literature have been critically reviewed. The primary aim of the review was to establish a set of design issues critical to efficient execution. Based on the established design issues, the abstract data-driven machine model, named LogDf, is developed for parallel execution of logic programs. The execution scheme supports OR-parallelism, restricted AND-parallelism, and stream parallelism. Multiple binding environments are represented using a stream-of-streams structure (S-stream). Eager evaluation is performed by passing binding environments between subgoal literals as S-streams, which are formed using non-strict constructors. The hierarchical multi-level stream structure provides a logical framework for distributing the streams to enhance parallelism in production/consumption as well as control of parallelism. The scheme for compiling the dataflow graphs, developed in this thesis, eliminates the necessity of any operand matching unit in the underlying dynamic dataflow architecture. An architecture for the abstract machine LogDf is also provided, and the performance evaluation of this model is based on this architecture.

  17. Transforming 2d Cadastral Data Into a Dynamic Smart 3d Model

    NASA Astrophysics Data System (ADS)

    Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.

    2013-08-01

    3D property registration has become an imperative need in order to optimally reflect all complex cases of the multilayer reality of property rights and restrictions, revealing their vertical component. This paper refers to the potentials and multiple applications of 3D cadastral systems and explores the current state of the art, especially the available software with which 3D visualization can be achieved. Within this context, the Hellenic Cadastre's current state is investigated, in particular its data modeling framework. Presenting the methodologies and specifications addressing the registration of 3D properties, the operating cadastral system's shortcomings and merits are pointed out. Nonetheless, current technological advances as well as the availability of sophisticated software packages (proprietary or open source) call for 3D modeling. In order to register and visualize the complex reality in 3D, Esri's CityEngine modeling software has been used, which is specialized in the generation of 3D urban environments, transforming 2D GIS data into smart 3D city models. The application of the 3D model concerns the Campus of the National Technical University of Athens, in which a complex ownership status is established along with approved special zoning regulations. The 3D model was built using different parameters based on input data derived from cadastral and urban planning datasets, as well as legal documents and architectural plans. The process resulted in a final 3D model, optimally describing the cadastral situation and built environment, and proved to be a good practice example of 3D visualization.

  18. Designing Capital-Intensive Systems with Architectural and Operational Flexibility Using a Screening Model

    NASA Astrophysics Data System (ADS)

    Lin, Jijun; de Weck, Olivier; de Neufville, Richard; Robinson, Bob; MacGowan, David

    Development of capital-intensive systems, such as offshore oil platforms or other industrial infrastructure, generally requires a significant amount of capital investment under various resource, technical, and market uncertainties. It is a very challenging task for development co-owners or joint ventures because important decisions, such as the choice of system architecture, have to be made while uncertainty remains high. This paper develops a screening model and a simulation framework to quickly explore the design space for complex engineering systems under uncertainty, allowing promising strategies or architectures to be identified. Flexibility in systems’ design and operation is proposed as a proactive means to enable systems to adapt to future uncertainty. Architectural and operational flexibility can improve systems’ lifecycle value by mitigating downside risks and capturing upside opportunities. In order to effectively explore different flexible strategies addressing a view of uncertainty which changes with time, a computational framework based on Monte Carlo simulation is proposed in this paper. This framework is applied to study flexible development strategies for a representative offshore petroleum project. The complexity of this problem comes from multi-domain uncertainties, a large architectural design space, and the structure of the flexibility decision rules. The results demonstrate that architectural and operational flexibility can significantly improve a project’s Expected Net Present Value (ENPV), reduce downside risks, and improve upside gains, compared to adopting an inflexible strategy appropriate to the view of uncertainty at the start of the project. In this particular case study, the most flexible strategy improves ENPV by 85% over an inflexible base case.
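
    A minimal Monte Carlo screening sketch of the kind of comparison the abstract describes: the same random price paths are played through an inflexible design and a design holding one expansion option, and the resulting ENPVs are compared. All numbers (capital cost, option cost, price dynamics, thresholds) are hypothetical.

```python
import random

def project_npv(flexible, n_years=10, rate=0.10):
    """One lifecycle simulation: price follows a random walk; a flexible
    architecture may exercise a capacity-expansion option once."""
    price, capacity, expanded = 50.0, 100.0, False
    npv = -5000.0                                   # upfront capital (hypothetical)
    for t in range(1, n_years + 1):
        price *= 1.0 + random.gauss(0.0, 0.15)      # market uncertainty
        if flexible and not expanded and price > 60.0:
            expanded = True
            capacity = 150.0
            npv -= 500.0 / (1 + rate) ** t          # cost of exercising the option
        npv += capacity * price / (1 + rate) ** t   # discounted annual revenue
    return npv

def enpv(flexible, n_runs=5000):
    random.seed(1)                                  # same price paths for both designs
    return sum(project_npv(flexible) for _ in range(n_runs)) / n_runs

print(enpv(True) > enpv(False))                     # flexibility raises expected NPV
```

Because the option is only exercised when it pays for itself on the same price path, the flexible design weakly dominates the inflexible one here; real screening models add many more uncertainty domains and decision rules.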

  19. Modelling a single phase voltage controlled rectifier using Laplace transforms

    NASA Technical Reports Server (NTRS)

    Kraft, L. Alan; Kankam, M. David

    1992-01-01

    The development of a 20 kHz AC power system by NASA for large space projects has spurred a need to develop models for the equipment which will be used on these single phase systems. To date, models for the AC source (i.e., inverters) have been developed. It is the intent of this paper to develop a method to model the single phase voltage controlled rectifiers which will be attached to the AC power grid as an interface for connected loads. A modified version of EPRI's HARMFLO program is used as the shell for these models. The results obtained from the model developed in this paper are quite adequate for the analysis of problems such as voltage resonance. The unique technique presented in this paper uses Laplace transforms, rather than a curve fitting technique, to determine the harmonic content of the load current of the rectifier. Laplace transforms yield the coefficients of the differential equations which model the line current to the rectifier directly.
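
    The Laplace-domain step the abstract relies on can be reproduced for a toy case with sympy: transforming a series R-L line model term by term gives the s-domain current directly, with no curve fitting. The circuit and symbols here are illustrative and much simpler than the paper's HARMFLO-based rectifier model.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
R, L, V0, w = sp.symbols('R L V0 omega', positive=True)

# Series R-L branch driven by v(t) = V0*sin(w*t):  L*di/dt + R*i = v(t), i(0) = 0.
# Laplace-transforming term by term gives (L*s + R)*I(s) = V(s).
Vs = sp.laplace_transform(V0 * sp.sin(w * t), t, s, noconds=True)
Is = Vs / (L * s + R)            # current in the s-domain
print(sp.simplify(Is))           # V0*omega / ((L*s + R)*(s**2 + omega**2))
```

The poles of `Is` identify the transient and steady-state (harmonic) components of the line current, which is the information the paper extracts for harmonic analysis.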

  20. Distributed model predictive control with hierarchical architecture for communication: application in automated irrigation channels

    NASA Astrophysics Data System (ADS)

    Farhadi, Alireza; Khodabandehlou, Ali

    2016-08-01

    This paper is concerned with a distributed model predictive control (DMPC) method that is based on a distributed optimisation method with a two-level architecture for communication. Feasibility (constraint satisfaction by the approximated solution), convergence and optimality of this distributed optimisation method are mathematically proved. For an automated irrigation channel, the satisfactory performance of the proposed DMPC method in attenuating the undesired upstream transient error propagation and amplification phenomenon is illustrated and compared with the performance of another DMPC method that exploits a single-level architecture for communication. It is shown that the DMPC method with a two-level communication architecture achieves better performance by managing the communication overhead more effectively.

  1. Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cole, Bjorn F.; Jenkins, J. Steven

    2015-01-01

    In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remain separate from engineering design tools. With the Europa Clipper project, efforts are being made to change the requirements development approach from a separate activity to one intimately embedded in the formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in the architecture definition.
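
    A toy rendering pipeline of the kind the paper describes, generating "shall" statements from constraints held in a model, might look as follows. The schema, example constraints, and sentence template are invented for illustration; they are not the project's actual tooling.

```python
# Hypothetical sketch: render English "shall" requirements from constraints
# captured in an architecture model (schema and values are illustrative).

constraints = [
    {"subject": "The telecom subsystem", "property": "downlink data rate",
     "relation": "at least", "value": 50, "unit": "kbps"},
    {"subject": "The flight system", "property": "total mass",
     "relation": "no more than", "value": 3500, "unit": "kg"},
]

def to_shall_statement(c):
    """Render one modeled constraint as a conventional 'shall statement'."""
    return (f'{c["subject"]} shall have a {c["property"]} of '
            f'{c["relation"]} {c["value"]} {c["unit"]}.')

for c in constraints:
    print(to_shall_statement(c))
```

Because the statements are generated rather than hand-written, they stay consistent with the architecture model from which the constraints are drawn.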

  2. A model to simulate the oxygen distribution in hypoxic tumors for different vascular architectures

    SciTech Connect

    Espinoza, Ignacio; Peschke, Peter; Karger, Christian P.

    2013-08-15

    Purpose: As hypoxic cells are more resistant to photon radiation, it is desirable to obtain information about the oxygen distribution in tumors prior to radiation treatment. Noninvasive techniques are currently not able to provide reliable oxygenation maps with sufficient spatial resolution; mathematical models may therefore help to simulate microvascular architectures and the resulting oxygen distributions in the surrounding tissue. Here, the authors present a new computer model, which uses the vascular fraction of tumor voxels, in principle measurable noninvasively in vivo, as an input parameter for simulating realistic PO2 histograms in tumors, assuming certain 3D vascular architectures. Methods: Oxygen distributions were calculated by solving a reaction-diffusion equation in a reference volume using the particle strength exchange method. Different types of vessel architectures as well as different degrees of vascular heterogeneity are considered. Two types of acute hypoxia (ischemic and hypoxemic), occurring in addition to diffusion-limited (chronic) hypoxia, were implemented as well. Results: No statistically significant differences were observed when comparing 2D and 3D vessel architectures (p > 0.79 in all cases), and highly heterogeneously distributed linear vessels show good agreement with published experimental intervessel distance distributions and PO2 histograms. It could be shown that, if information about additional acute hypoxia is available, its contribution to the hypoxic fraction (HF) can be simulated as well. Increases of 128% and 168% in the HF were obtained when representative cases of ischemic and hypoxemic acute hypoxia, respectively, were considered in the simulations. Conclusions: The presented model is able to simulate realistic microscopic oxygen distributions in tumors assuming reasonable vessel architectures and using the vascular fraction as a macroscopic input parameter. The model may be used to generate PO2 histograms
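
    As a rough illustration of the reaction-diffusion idea (not the particle strength exchange method the authors use), a 1-D finite-difference relaxation with Michaelis-Menten consumption reproduces the qualitative behaviour: PO2 falls with distance from a vessel and an anoxic region appears. All parameter values below are hypothetical, chosen only to show the effect.

```python
import numpy as np

# Hypothetical parameters (illustrative, not the paper's values)
D = 2.0e-9        # O2 diffusion coefficient [m^2/s]
M0 = 15.0         # max consumption rate [mmHg/s]
k = 1.0           # Michaelis constant [mmHg]
Pv = 40.0         # PO2 at the vessel wall [mmHg]
Lx = 200e-6       # tissue slab thickness [m]
n = 201
dx = Lx / (n - 1)

P = np.full(n, Pv)

# Jacobi relaxation of the steady reaction-diffusion equation D*P'' = M(P),
# with Michaelis-Menten consumption M(P) = M0*P/(P + k).
for _ in range(80000):
    cons = M0 * P / (P + k)
    Pnew = P.copy()
    Pnew[1:-1] = 0.5 * (P[2:] + P[:-2]) - 0.5 * dx**2 * cons[1:-1] / D
    Pnew[-1] = Pnew[-2]                  # no-flux far boundary
    Pnew[0] = Pv                         # vessel wall boundary
    np.clip(Pnew, 0.0, None, out=Pnew)   # PO2 cannot go negative
    if np.max(np.abs(Pnew - P)) < 1e-9:
        P = Pnew
        break
    P = Pnew

print(P[0], P[-1])   # PO2 falls with distance from the vessel
```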

  3. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    SciTech Connect

    Noonan, Nicholas James

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  4. Analysis of trabecular bone architectural changes induced by osteoarthritis in rabbit femur using 3D active shape model and digital topology

    NASA Astrophysics Data System (ADS)

    Saha, P. K.; Rajapakse, C. S.; Williams, D. S.; Duong, L.; Coimbra, A.

    2007-03-01

    Osteoarthritis (OA) is the most common chronic joint disease, which causes the cartilage between the bone joints to wear away, leading to pain and stiffness. Currently, progression of OA is monitored by measuring joint space width using x-ray or cartilage volume using MRI. However, OA affects all periarticular tissues, including cartilage and bone. It has been shown previously that in animal models of OA, trabecular bone (TB) architecture is particularly affected. Furthermore, relative changes in architecture are dependent on the depth of the TB region with respect to the bone surface and the main direction of load on the bone. The purpose of this study was to develop a new method for accurately evaluating 3D architectural changes induced by OA in TB. Determining the TB test domain that represents the same anatomic region across different animals is crucial for studying disease etiology, progression and response to therapy. It also represents a major technical challenge in analyzing architectural changes. Here, we solve this problem using a new active shape model (ASM)-based approach. A new and effective semi-automatic landmark selection approach has been developed for the rabbit distal femur surface that can easily be adopted for many other anatomical regions. It has been observed that, on average, a trained operator can complete the user interaction part of the landmark specification process in less than 15 minutes for each bone data set. Digital topological analysis and fuzzy distance transform derived parameters are used for quantifying TB architecture. The method has been applied to micro-CT data of excised rabbit femur joints from anterior cruciate ligament transected (ACLT) (n = 6) and sham (n = 9) operated groups collected at two and two-to-eight weeks post-surgery, respectively. An ASM of the rabbit right distal femur has been generated from the sham group micro-CT data.
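
    The distance-transform ingredient of such architectural analysis can be illustrated on a synthetic binary image with scipy: the Euclidean distance transform assigns each bone voxel its distance to the background, from which local thickness measures are derived. This sketch uses a crisp binary image; the paper's fuzzy distance transform generalizes the idea to fuzzy objects.

```python
import numpy as np
from scipy import ndimage

# Synthetic binary "trabecular" image: a plate and a rod (illustrative only)
img = np.zeros((64, 64), dtype=bool)
img[10:16, 5:60] = True    # plate-like structure, 6 voxels thick
img[30:60, 30:33] = True   # rod-like structure, 3 voxels thick

# Euclidean distance transform: distance from each bone voxel to background
dist = ndimage.distance_transform_edt(img)

# A simple architectural measure: local half-thickness at the medial axis
print(dist.max())   # half-thickness of the thickest (plate) structure
```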
The results suggest that, in conjunction with ASM, digital topological parameters are suitable for

  5. Research on mixed network architecture collaborative application model

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, rapid development of geographical business and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (whether based on the Client/Server or Browser/Server model) does not support this well. Collaborative application is one good resolution. Collaborative applications have four main problems to resolve: consistency and co-editing conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in mixed network structures and supports distributed collaboration. An agent is an autonomous, interactive, proactive and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation, and bring new methods for cooperation and for access to spatial data. The multi-level cache holds part of the full dataset; it reduces network load and improves the access and handling of spatial data, especially when editing. With agent technology, we make full use of agents' intelligence for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.

  6. Deep phenotyping of coarse root architecture in R. pseudoacacia reveals that tree root system plasticity is confined within its architectural model.

    PubMed

    Danjon, Frédéric; Khuder, Hayfa; Stokes, Alexia

    2013-01-01

    This study aims at assessing the influence of slope angle and multi-directional flexing, and their interaction, on the root architecture of Robinia pseudoacacia seedlings, with a particular focus on architectural model and trait plasticity. 36 trees were grown from seed in containers inclined at 0° (control) or 45° (slope) in a glasshouse. The shoots of half the plants were gently flexed for 5 minutes a day. After 6 months, root systems were excavated and digitized in 3D, and biomass measured. Over 100 root architectural traits were determined. Both slope and flexing significantly increased plant size. Non-flexed trees on 45° slopes developed shallow roots which were largely aligned perpendicular to the slope. Compared to the controls, flexed trees on 0° slopes possessed a shorter and thicker taproot held in place by regularly distributed long and thin lateral roots. Flexed trees on the 45° slope also developed a thick vertically aligned taproot, with more volume allocated to upslope surface lateral roots, due to the greater soil volume uphill. We show that there is an inherent root system architectural model, but that a certain number of traits are highly plastic. This plasticity will permit root architectural design to be modified depending on external mechanical signals perceived by young trees.

  7. Use of the Chemical Transformation Simulator as a Parameterization Tool for Modeling the Environmental Fate of Organic Chemicals and their Transformation Products

    EPA Science Inventory

    A Chemical Transformation Simulator is a web-based system for predicting transformation pathways and physicochemical properties of organic chemicals. Role in Environmental Modeling • Screening tool for identifying likely transformation products in the environment • Parameteri...

  8. Coupling root architecture and pore network modeling - an attempt towards better understanding root-soil interactions

    NASA Astrophysics Data System (ADS)

    Leitner, Daniel; Bodner, Gernot; Raoof, Amir

    2013-04-01

    Understanding root-soil interactions is of high importance for environmental and agricultural management. Root uptake is an essential component in water and solute transport modeling. The amount of groundwater recharge and solute leaching significantly depends on the demand-based plant extraction via its root system. Plant uptake, however, not only responds to the potential demand, but in most situations is limited by supply from the soil. The ability of the plant to access water and solutes in the soil is governed mainly by root distribution. Particularly under conditions of heterogeneous distribution of water and solutes in the soil, it is essential to capture the interaction between soil and roots. Root architecture models allow studying plant uptake from soil by describing growth and branching of root axes in the soil. Currently, root architecture models are able to respond dynamically to water and nutrient distribution in the soil by directed growth (tropism), modified branching and enhanced exudation. The porous soil medium as rooting environment in these models is generally described by classical macroscopic water retention and sorption models, averaged over the pore scale. In our opinion this simplified description of the root growth medium implies several shortcomings for better understanding root-soil interactions: (i) it is well known that roots grow preferentially in preexisting pores, particularly in more rigid/dry soil, so the pore network contributes to the architectural form of the root system; (ii) roots themselves can influence the pore network by creating preferential flow paths (biopores), which are an essential element of structural porosity with strong impact on transport processes; (iii) plant uptake depends on both the spatial location of water/solutes in the pore network and the spatial distribution of roots.
We therefore consider that for advancing our understanding in root-soil interactions, we need not only to extend our root models

  9. Implementation of Remaining Useful Lifetime Transformer Models in the Fleet-Wide Prognostic and Health Management Suite

    SciTech Connect

    Agarwal, Vivek; Lybeck, Nancy J.; Pham, Binh; Rusaw, Richard; Bickford, Randall

    2015-02-01

    Research and development efforts are required to address aging and reliability concerns of the existing fleet of nuclear power plants. As most plants continue to operate beyond the license life (i.e., towards 60 or 80 years), plant components are more likely to incur age-related degradation mechanisms. To assess and manage the health of aging plant assets across the nuclear industry, the Electric Power Research Institute has developed a web-based Fleet-Wide Prognostic and Health Management (FW-PHM) Suite for diagnosis and prognosis. FW-PHM is a set of web-based diagnostic and prognostic tools and databases, comprising the Diagnostic Advisor, the Asset Fault Signature Database, the Remaining Useful Life Advisor, and the Remaining Useful Life Database, that serves as an integrated health monitoring architecture. The main focus of this paper is the implementation of prognostic models for generator step-up transformers in the FW-PHM Suite. One prognostic model discussed is based on the functional relationship between degree of polymerization (the most commonly used metric for assessing the health of the winding insulation in a transformer) and furfural concentration in the insulating oil. The other model is based on thermally induced degradation of the transformer insulation. By utilizing transformer loading information, established thermal models are used to estimate the hot spot temperature inside the transformer winding. Both models are implemented in the Remaining Useful Life Database of the FW-PHM Suite. The Remaining Useful Life Advisor utilizes the implemented prognostic models to estimate the remaining useful life of the paper winding insulation in the transformer based on actual oil testing and operational data.
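
    A hedged sketch of the first model's core relationship: a commonly quoted Chendong-type correlation links furfural (2FAL) concentration in oil to the degree of polymerization, from which a simple life fraction can be read off. The exact coefficients and end-of-life convention vary across the literature; the values below are illustrative, not the FW-PHM Suite's implementation.

```python
import math

def dp_from_furfural(f_mg_per_l):
    """Chendong-type correlation (commonly quoted form, coefficients vary):
    log10(2FAL [mg/L]) = -0.0035*DP + 1.51
    =>  DP = (1.51 - log10(2FAL)) / 0.0035
    """
    return (1.51 - math.log10(f_mg_per_l)) / 0.0035

def remaining_life_fraction(dp, dp_new=1000.0, dp_end=200.0):
    """Life remaining, expressed linearly in DP between new (~1000)
    and end-of-life (~200) insulation; an illustrative convention."""
    return max(0.0, min(1.0, (dp - dp_end) / (dp_new - dp_end)))

dp = dp_from_furfural(0.5)   # e.g. 0.5 mg/L furfural measured in the oil
print(round(dp), round(remaining_life_fraction(dp), 2))
```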

  10. Architectural Improvements and New Processing Tools for the Open XAL Online Model

    SciTech Connect

    Allen, Christopher K; Pelaia II, Tom; Freed, Jonathan M

    2015-01-01

    The online model is the component of Open XAL providing accelerator modeling, simulation, and dynamic synchronization to live hardware. Significant architectural changes and feature additions have been recently made in two separate areas: 1) the managing and processing of simulation data, and 2) the modeling of RF cavities. Simulation data and data processing have been completely decoupled. A single class manages all simulation data while standard tools were developed for processing the simulation results. RF accelerating cavities are now modeled as composite structures where parameter and dynamics computations are distributed. The beam and hardware models both maintain their relative phase information, which allows for dynamic phase slip and elapsed time computation.

  11. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity

    PubMed Central

    Malinin, Laura H.

    2016-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to lack of suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first-person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person’s interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity. PMID:26779087

  12. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity.

    PubMed

    Malinin, Laura H

    2015-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to lack of suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first-person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person's interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity.

  13. Development and validation of a tokamak skin effect transformer model

    NASA Astrophysics Data System (ADS)

    Romero, J. A.; Moret, J.-M.; Coda, S.; Felici, F.; Garrido, I.

    2012-02-01

    A lumped parameter, state space model for a tokamak transformer including the slow flux penetration in the plasma (skin effect transformer model) is presented. The model does not require detailed or explicit information about plasma profiles or geometry. Instead, this information is lumped in system variables, parameters and inputs. The model has an exact mathematical structure built from energy and flux conservation theorems, predicting the evolution and non-linear interaction of plasma current and internal inductance as functions of the primary coil currents, plasma resistance, non-inductive current drive and the loop voltage at a specific location inside the plasma (equilibrium loop voltage). The loop voltage profile in the plasma is substituted by a three-point discretization, and ordinary differential equations are used to predict the equilibrium loop voltage as a function of the boundary and resistive loop voltages. This provides a model for equilibrium loop voltage evolution, which is reminiscent of the skin effect. The order and parameters of this differential equation are determined empirically using system identification techniques. Fast plasma current modulation experiments with random binary signals have been conducted in the TCV tokamak to generate the required data for the analysis. Plasma current was modulated under ohmic conditions between 200 and 300 kA with 30 ms rise time, several times faster than its time constant L/R ≈ 200 ms. A second-order linear differential equation for equilibrium loop voltage is sufficient to describe the plasma current and internal inductance modulation, with fit values of 70% and 38%, respectively. The model explains the most salient features of the plasma current transients, such as the inverse correlation between plasma current ramp rates and internal inductance changes, without requiring detailed or explicit information about resistivity profiles. This proves that a lumped parameter modelling approach can be used to
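
    The "fit" percentages quoted above are a standard system-identification measure, the normalized-RMS fit. A minimal implementation (assuming this conventional definition, since the abstract does not spell it out) is:

```python
import numpy as np

def fit_percent(y, y_model):
    """Normalized-RMS 'fit' figure common in system identification:
    100 * (1 - ||y - y_model|| / ||y - mean(y)||).
    100% is a perfect model; 0% is no better than the mean."""
    y, y_model = np.asarray(y, float), np.asarray(y_model, float)
    return 100.0 * (1.0 - np.linalg.norm(y - y_model)
                          / np.linalg.norm(y - y.mean()))

# Perfect model -> 100%; mean-only model -> 0%
y = np.sin(np.linspace(0, 6, 200))
print(fit_percent(y, y), fit_percent(y, np.full_like(y, y.mean())))
```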

  14. Operations Assessment of Launch Vehicle Architectures using Activity Based Cost Models

    NASA Technical Reports Server (NTRS)

    Ruiz-Torres, Alex J.; McCleskey, Carey

    2000-01-01

    The growing emphasis on affordability for space transportation systems requires the assessment of new space vehicles for all life cycle activities, from design and development, through manufacturing and operations. This paper addresses the operational assessment of launch vehicles, focusing on modeling the ground support requirements of a vehicle architecture, and estimating the resulting costs and flight rate. This paper proposes the use of Activity Based Costing (ABC) modeling for this assessment. The model uses expert knowledge to determine the activities, the activity times and the activity costs based on vehicle design characteristics. The approach provides several advantages to current approaches to vehicle architecture assessment including easier validation and allowing vehicle designers to understand the cost and cycle time drivers.
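
    The ABC idea reduces to summing expert-estimated activity times and cost rates over design-driven cost drivers. A minimal sketch with entirely hypothetical activities, drivers and rates:

```python
# Minimal ABC sketch: ground-processing activities with expert-estimated
# times and rates driven by vehicle design characteristics.
# All activity names and numbers are hypothetical.

ACTIVITIES = {
    # name: (hours per unit of driver, cost rate $/hour)
    "tile_inspection": (0.5, 120.0),    # driver: number of TPS tiles
    "engine_checkout": (40.0, 150.0),   # driver: number of engines
    "propellant_load": (0.02, 90.0),    # driver: propellant mass [kg]
}

def turnaround(vehicle):
    """Return (total hours, total cost) for one ground-processing cycle."""
    hours = cost = 0.0
    for name, (h_per_driver, rate) in ACTIVITIES.items():
        drivers = vehicle[name]
        hours += h_per_driver * drivers
        cost += h_per_driver * drivers * rate
    return hours, cost

vehicle = {"tile_inspection": 20000, "engine_checkout": 3,
           "propellant_load": 500000}
hours, cost = turnaround(vehicle)
flights_per_year = 8760.0 / hours     # cycle time bounds the flight rate
print(round(hours), round(cost, 2), round(flights_per_year, 2))
```

Because activity times and costs are tied to design characteristics, designers can see directly which vehicle features drive cycle time and cost.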

  15. Modelling Elastic Media With Arbitrary Shapes Using the Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Rosa, J. W.; Cardoso, F. A.; Rosa, J. W.; Aki, K.

    2004-12-01

    We extend the new method proposed by Rosa et al. (2001) for the study of elastic bodies with completely arbitrary shapes. The method was originally developed for modelling 2-D elastic media with the application of the wavelet transform, and was extended to cases where discontinuities simulated geologic faults between two different elastic media. In addition to extending the method to bodies with completely arbitrary shapes, we also test new transforms, applied to the most general case of the method, with the objective of making the related matrices more compact. The basic method consists of the discretization of the polynomial expansion for the boundary conditions of the 2-D problem involving the stress and strain relations for the media. This parameterization leads to a system of linear equations that is solved for the expansion coefficients, which are the model parameters; their determination yields the solution of the problem. Although the media we studied originally were 2-D bodies, the result of applying this new method can be viewed as an approximate solution to some specific 3-D problems. Among the motivations for developing this method are possible geological applications (that is, the study of tectonic plates and geologic faults) and simulations of the elastic behaviour of materials in several other fields of science. The wavelet transform is applied with two main objectives: to decrease the error related to the truncation of the polynomial expansion, and to make the system of linear equations more compact for computation. Having validated this method for the original 2-D elastic media, we expect that this extension to elastic bodies with completely arbitrary shapes will make it even more attractive for modelling real media. Reference Rosa, J. W. C., F. A. C. M. Cardoso, K. Aki, H. S. Malvar, F. A. V. Artola, and J. W. C. Rosa, Modelling elastic media with the
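
    The compaction effect the abstract aims for can be demonstrated with a single level of the 2-D Haar transform: applied to a smooth, dense kernel matrix, most detail coefficients become negligible while the Frobenius norm is preserved. This is a generic illustration of wavelet compaction, not the authors' specific transform; the kernel is invented for the example.

```python
import numpy as np

def haar2d(a):
    """One level of the orthonormal 2-D Haar transform."""
    h = (a[0::2] + a[1::2]) / np.sqrt(2)       # rows: averages
    g = (a[0::2] - a[1::2]) / np.sqrt(2)       # rows: details
    a = np.vstack([h, g])
    h = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2) # columns: averages
    g = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2) # columns: details
    return np.hstack([h, g])

# A smooth, dense kernel matrix, as arises from discretized boundary conditions
n = 64
x = np.linspace(0, 1, n)
A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))

W = haar2d(A)
# Fraction of coefficients that become negligibly small after the transform
sparsity = float(np.mean(np.abs(W) < 1e-3 * np.abs(W).max()))
print(sparsity)   # the dense matrix A has no such negligible entries
```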

  16. Culture models of human mammary epithelial cell transformation

    SciTech Connect

    Stampfer, Martha R.; Yaswen, Paul

    2000-11-10

    Human pre-malignant breast diseases, particularly ductal carcinoma in situ (DCIS), already display several of the aberrant phenotypes found in primary breast cancers, including chromosomal abnormalities, telomerase activity, inactivation of the p53 gene and overexpression of some oncogenes. Efforts to model early breast carcinogenesis in human cell cultures have largely involved studies of the in vitro transformation of normal finite-lifespan human mammary epithelial cells (HMEC) to immortality and malignancy. We present a model of HMEC immortal transformation consistent with the known in vivo data. This model includes a recently described, presumably epigenetic process, termed conversion, which occurs in cells that have overcome stringent replicative senescence and are thus able to maintain proliferation with critically short telomeres. The conversion process involves reactivation of telomerase activity and acquisition of good uniform growth in the absence and presence of TGFβ. We propose that overcoming the proliferative constraints set by senescence, and undergoing conversion, represent key rate-limiting steps in human breast carcinogenesis, and occur during early-stage breast cancer progression.

  17. Diagnostic and Prognostic Models for Generator Step-Up Transformers

    SciTech Connect

    Vivek Agarwal; Nancy J. Lybeck; Binh T. Pham

    2014-09-01

    In 2014, the online monitoring (OLM) of active components project under the Light Water Reactor Sustainability program at Idaho National Laboratory (INL) focused on diagnostic and prognostic capabilities for generator step-up transformers (GSUs). INL worked with subject matter experts from the Electric Power Research Institute (EPRI) to augment and revise the GSU fault signatures previously implemented in EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Two prognostic models were identified and implemented for GSUs in the FW-PHM Suite software, and INL and EPRI demonstrated the use of prognostic capabilities for GSUs. The complete set of fault signatures developed for GSUs in the Asset Fault Signature Database of the FW-PHM Suite is presented in this report. Two prognostic models are described for paper insulation: the Chendong model for degree of polymerization, and an IEEE model that uses a loading profile to calculate life consumption based on hot-spot winding temperatures. Both are life consumption models, which are examples of type II prognostic models. Use of the models in the FW-PHM Suite was successfully demonstrated at the August 2014 Utility Working Group Meeting in Idaho Falls, Idaho, to representatives from different utilities, EPRI, and the Halden Research Project.
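
    The IEEE-style thermal model mentioned above is typically built on an Arrhenius aging acceleration factor evaluated along the hot-spot temperature profile (in the manner of IEEE Std C57.91). A hedged sketch, using the activation constant and 110 °C reference temperature commonly quoted for thermally upgraded paper; the loading profile is invented:

```python
import math

def aging_acceleration(theta_hs_c, theta_ref_c=110.0, B=15000.0):
    """Arrhenius aging acceleration factor, IEEE C57.91 style:
    F_AA = exp(B/(theta_ref + 273) - B/(theta_hs + 273)).
    Equals 1 at the reference hot-spot temperature."""
    return math.exp(B / (theta_ref_c + 273.0) - B / (theta_hs_c + 273.0))

def life_consumed_hours(hot_spot_profile_c):
    """Equivalent aging hours for an hourly hot-spot temperature profile."""
    return sum(aging_acceleration(t) for t in hot_spot_profile_c)  # 1 h/sample

profile = [95.0] * 12 + [120.0] * 12   # half a day mild, half a day hot
print(round(aging_acceleration(110.0), 3),   # 1.0 at the reference temperature
      round(life_consumed_hours(profile), 1))
```

Twelve hot hours dominate the day's equivalent aging, which is the behaviour a loading-profile-driven life consumption model captures.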

  18. Modeling biochemical transformation processes and information processing with Narrator

    PubMed Central

    Mandel, Johannes J; Fuß, Hendrik; Palfreyman, Niall M; Dubitzky, Werner

    2007-01-01

    Background Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact with and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of support for an integrative representation of transport, transformation and biological information processing. Results Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development.
Conclusion Narrator is a flexible and intuitive systems
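Among Narrator's mapping targets is Gillespie's direct method, the standard stochastic simulation algorithm for reaction networks: sample an exponential waiting time from the total propensity, then pick which reaction fires in proportion to its propensity. A self-contained sketch of that target formalism — the two-reaction birth/death system and its rate constants are invented for the demo, and this is not Narrator's own code:

```python
import math
import random

def gillespie_direct(rates, stoich, state, t_end, seed=1):
    """Gillespie's direct method.
    rates:  list of functions state -> propensity
    stoich: list of state-change vectors, one per reaction
    """
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, tuple(state))]
    while t < t_end:
        a = [r(state) for r in rates]
        a0 = sum(a)
        if a0 == 0.0:          # no reaction can fire
            break
        # exponential waiting time to the next reaction event
        t += -math.log(1.0 - rng.random()) / a0
        pick, acc = rng.random() * a0, 0.0
        for j, aj in enumerate(a):   # choose which reaction fires
            acc += aj
            if pick <= acc:
                state = [s + d for s, d in zip(state, stoich[j])]
                break
        trajectory.append((t, tuple(state)))
    return trajectory

# Hypothetical system: constant synthesis (0 -> X) and first-order decay (X -> 0)
traj = gillespie_direct(
    rates=[lambda s: 2.0, lambda s: 0.1 * s[0]],
    stoich=[[+1], [-1]],
    state=[0],
    t_end=50.0,
)
```

The copy number fluctuates around the deterministic steady state (synthesis/decay = 20 here), which is the behaviour an ODE mapping of the same model would smooth out.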

  19. A scaleable architecture for the modeling and simulation of intelligent transportation systems.

    SciTech Connect

    Ewing, T.; Tentner, A.

    1999-03-17

    A distributed, scaleable architecture for the modeling and simulation of Intelligent Transportation Systems on a network of workstations or a parallel computer has been developed at Argonne National Laboratory. The resulting capability provides a modular framework supporting plug-in models, hardware, and live data sources; visually realistic graphics displays to support training and human factors studies; and a set of basic ITS models. The models and capabilities are described, along with a typical scenario involving dynamic rerouting of smart vehicles which send probe reports to and receive traffic advisories from a traffic management center capable of incident detection.

  20. Double images hiding by using joint transform correlator architecture adopting two-step phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Shi, Xiaoyan; Zhao, Daomu; Huang, Yinbo

    2013-06-01

    Based on the joint Fresnel transform correlator, a new system for double-image hiding is presented. In this security system, the dual secret images are encrypted and recorded as intensity patterns employing phase-shifting interference technology. To improve the system security, a dual-image hiding method is used. By digital means, the deduced encryption complex distribution is divided into two subparts. For each image, only one subpart is reserved and modulated by a phase factor. Then these modified results are combined together and embedded into the host image. With all correct keys, the secret images can be extracted by inverse Fresnel transform. Thanks to the phase modulation, the cross-talk caused by image superposition is reduced by their spatial separation. Theoretical analyses have shown the system's feasibility. Computer simulations are performed to show the encryption capacity of the proposed system. Numerical results are presented to verify the validity and the efficiency of the proposed method.
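The phase-shifting step has simple algebra behind it: with reference phases 0 and π/2, the two recorded intensities isolate the real and imaginary parts of the object field. A minimal one-dimensional sketch of that two-step recovery — the toy field, the constant reference amplitude, and the assumption that the dc term is known exactly are all simplifications of the paper's full double-image hiding scheme:

```python
import cmath

def recover_field(I1, I2, obj_intensity, A):
    """Two-step phase-shifting recovery with reference phases 0 and pi/2.
    I1[k] = |O + A|^2 isolates Re(O); I2[k] = |O + iA|^2 isolates Im(O).
    Assumes the dc term |O|^2 + A^2 is known (here: simulated exactly)."""
    out = []
    for i1, i2, oo in zip(I1, I2, obj_intensity):
        dc = oo + A * A
        out.append(complex(i1 - dc, i2 - dc) / (2 * A))
    return out

# Simulate a tiny 1-D "secret" complex field and its two interferograms
A = 2.0                                            # reference beam amplitude
O = [0.5 * cmath.exp(1j * 0.3 * k) for k in range(8)]
I1 = [abs(o + A) ** 2 for o in O]                  # reference phase 0
I2 = [abs(o + 1j * A) ** 2 for o in O]             # reference phase pi/2
O_rec = recover_field(I1, I2, [abs(o) ** 2 for o in O], A)
```

In the full system the recovered complex distribution would then be split, phase-modulated, and embedded into the host image as the abstract describes.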

  1. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory A.; Ingham, Michel D.; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    Innovative systems and software engineering solutions are required to meet the increasingly challenging demands of deep-space robotic missions. While recent advances in the development of an integrated systems and software engineering approach have begun to address some of these issues, they remain, at their core, highly manual and therefore error-prone. This paper describes a task aimed at infusing MIT's model-based executive, Titan, into JPL's Mission Data System (MDS), a unified state-based architecture, systems engineering process, and supporting software framework. Results of the task are presented, including a discussion of the benefits and challenges associated with integrating mature model-based programming techniques and technologies into a rigorously-defined domain-specific architecture.

  2. Causal Estimation using Semiparametric Transformation Models under Prevalent Sampling

    PubMed Central

    Cheng, Yu-Jen; Wang, Mei-Cheng

    2015-01-01

    Summary This paper develops methods and inference for causal estimation in semiparametric transformation models for prevalent survival data. Through estimation of the transformation models and covariate distribution, we propose analytical procedures to estimate the causal survival function. As the data are observational, the unobserved potential outcome (survival time) may be associated with the treatment assignment, and therefore there may exist a systematic imbalance between the data observed from each treatment arm. Further, due to prevalent sampling, subjects are observed only if they have not experienced the failure event when data collection began, causing the prevalent sampling bias. We propose a unified approach which simultaneously corrects the bias from the prevalent sampling and balances the systematic differences from the observational data. We illustrate in the simulation study that standard analysis without proper adjustment would result in biased causal inference. Large sample properties of the proposed estimation procedures are established by techniques of empirical processes and examined by simulation studies. The proposed methods are applied to the Surveillance, Epidemiology, and End Results (SEER) and Medicare linked data for women diagnosed with breast cancer. PMID:25715045

  3. Developing a reversible rapid coordinate transformation model for the cylindrical projection

    NASA Astrophysics Data System (ADS)

    Ye, Si-jing; Yan, Tai-lai; Yue, Yan-li; Lin, Wei-yan; Li, Lin; Yao, Xiao-chuang; Mu, Qin-yun; Li, Yong-qin; Zhu, De-hai

    2016-04-01

    Numerical models are widely used for coordinate transformations. However, in most numerical models, polynomials are generated to approximate "true" geographic coordinates or plane coordinates, and one polynomial is hard to make simultaneously appropriate for both forward and inverse transformations. As there is a transformation rule between geographic coordinates and plane coordinates, how accurate and efficient is the calculation of the coordinate transformation if we construct polynomials to approximate the transformation rule instead of the "true" coordinates? And how do models using such polynomials compare with traditional numerical models of even higher degree? Focusing on cylindrical projection, this paper reports on a grid-based rapid numerical transformation model - a linear rule approximation model (LRA-model) that constructs linear polynomials to approximate the transformation rule and uses a graticule to alleviate error propagation. Our experiments on cylindrical projection transformation between the WGS 84 Geographic Coordinate System (EPSG 4326) and the WGS 84 UTM ZONE 50N Plane Coordinate System (EPSG 32650) with simulated data demonstrate that the LRA-model exhibits high efficiency, high accuracy, and high stability; is simple and easy to use for both forward and inverse transformations; and can be applied to the transformation of a large amount of data with a requirement of high calculation efficiency. Furthermore, the LRA-model exhibits advantages in terms of calculation efficiency, accuracy and stability for coordinate transformations, compared to the widely used hyperbolic transformation model.
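The LRA-model idea — precompute the exact transformation rule on a graticule, then evaluate a cheap per-cell linear polynomial, with the same grid serving both forward and inverse directions — can be sketched in one dimension for a spherical Mercator northing. The 0.5° cell size and the spherical formula are illustrative assumptions, not the paper's exact EPSG 4326/32650 setup:

```python
import math
from bisect import bisect_right

R = 6378137.0  # WGS 84 semi-major axis (m), used here with a spherical model

def mercator_y(lat_deg):
    """Exact cylindrical (Mercator) northing - the 'transformation rule'."""
    phi = math.radians(lat_deg)
    return R * math.log(math.tan(math.pi / 4 + phi / 2))

# Graticule: exact values on a coarse grid; between nodes we replace the
# transcendental rule with a linear polynomial per cell.
grid_lat = [i * 0.5 for i in range(-160, 161)]   # -80..80 deg, 0.5 deg cells
grid_y = [mercator_y(lat) for lat in grid_lat]

def linear_forward(lat_deg):
    i = min(max(bisect_right(grid_lat, lat_deg) - 1, 0), len(grid_lat) - 2)
    t = (lat_deg - grid_lat[i]) / (grid_lat[i + 1] - grid_lat[i])
    return grid_y[i] + t * (grid_y[i + 1] - grid_y[i])

def linear_inverse(y):
    # The same graticule serves the inverse: interpolate latitude against y,
    # so forward followed by inverse round-trips exactly within a cell.
    i = min(max(bisect_right(grid_y, y) - 1, 0), len(grid_y) - 2)
    t = (y - grid_y[i]) / (grid_y[i + 1] - grid_y[i])
    return grid_lat[i] + t * (grid_lat[i + 1] - grid_lat[i])
```

With 0.5° cells the forward error stays on the order of tens of metres at mid-latitudes and shrinks quadratically as the graticule is refined, which is the error-propagation control the abstract attributes to the graticule.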

  4. Using two coefficients modeling of nonsubsampled Shearlet transform for despeckling

    NASA Astrophysics Data System (ADS)

    Jafari, Saeed; Ghofrani, Sedigheh

    2016-01-01

    Synthetic aperture radar (SAR) images are inherently affected by multiplicative speckle noise. Two approaches based on modeling the nonsubsampled Shearlet transform (NSST) coefficients are presented. The two-sided generalized Gamma distribution and the normal inverse Gaussian probability density function have been used to model the statistics of NSST coefficients. A Bayesian maximum a posteriori estimator is applied to the corrupted NSST coefficients in order to estimate the noise-free NSST coefficients. Finally, experimental results, according to objective and subjective criteria, carried out on both artificially speckled images and true SAR images, demonstrate that the proposed methods outperform other state-of-the-art references from two points of view: speckle noise reduction and image quality preservation.

  5. A functional–structural kiwifruit vine model integrating architecture, carbon dynamics and effects of the environment

    PubMed Central

    Cieslak, Mikolaj; Seleznyova, Alla N.; Hanan, Jim

    2011-01-01

    Background and Aims Functional–structural modelling can be used to increase our understanding of how different aspects of plant structure and function interact, identify knowledge gaps and guide priorities for future experimentation. By integrating existing knowledge of the different aspects of the kiwifruit (Actinidia deliciosa) vine's architecture and physiology, our aim is to develop conceptual and mathematical hypotheses on several of the vine's features: (a) plasticity of the vine's architecture; (b) effects of organ position within the canopy on its size; (c) effects of environment and horticultural management on shoot growth, light distribution and organ size; and (d) role of carbon reserves in early shoot growth. Methods Using the L-system modelling platform, a functional–structural plant model of a kiwifruit vine was created that integrates architectural development, mechanistic modelling of carbon transport and allocation, and environmental and management effects on vine and fruit growth. The branching pattern was captured at the individual shoot level by modelling axillary shoot development using a discrete-time Markov chain. An existing carbon transport resistance model was extended to account for several source/sink components of individual plant elements. A quasi-Monte Carlo path-tracing algorithm was used to estimate the absorbed irradiance of each leaf. Key Results Several simulations were performed to illustrate the model's potential to reproduce the major features of the vine's behaviour. The model simulated vine growth responses that were qualitatively similar to those observed in experiments, including the plastic response of shoot growth to local carbon supply, the branching patterns of two Actinidia species, the effect of carbon limitation and topological distance on fruit size and the complex behaviour of sink competition for carbon. Conclusions The model is able to reproduce differences in vine and fruit growth arising from various
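The axillary-shoot branching component of the kiwifruit model uses a discrete-time Markov chain, which is easy to sketch: each successive node's bud fate depends only on the previous node's fate. The states and transition probabilities below are invented placeholders, not the fitted values from the kiwifruit study:

```python
import random

# Hypothetical axillary bud fates along a parent shoot and a made-up
# first-order transition matrix; the paper fits such probabilities to
# observed branching data for each Actinidia species.
STATES = ["blind", "short", "long"]
P = {
    "blind": [0.6, 0.3, 0.1],   # P(next fate | current fate = blind)
    "short": [0.4, 0.4, 0.2],
    "long":  [0.2, 0.3, 0.5],
}

def simulate_shoot(n_nodes, seed=0):
    """Fate of each successive axillary bud along a parent shoot,
    modeled as a discrete-time Markov chain over bud states."""
    rng = random.Random(seed)
    state, fates = "blind", []
    for _ in range(n_nodes):
        state = rng.choices(STATES, weights=P[state])[0]
        fates.append(state)
    return fates

fates = simulate_shoot(20)
```

In an L-system setting, each sampled fate would then trigger the corresponding structural production (no branch, short shoot, or long shoot) at that node.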

  6. Development of Groundwater Modeling Support System Based on Service-Oriented Architecture

    NASA Astrophysics Data System (ADS)

    WANG, Y.; Tsai, J. P.; Hsiao, C. T.; Chang, L. C.

    2014-12-01

    Groundwater simulation has become an essential step in groundwater resources management and assessment. There are many stand-alone pre- and post-processing software packages that alleviate the model simulation workload, but these stand-alone packages neither provide centralized management of data and simulation results nor offer network sharing functions. Model building is still implemented independently, case by case, when using these packages. Hence, it is difficult to share and reuse the data and knowledge (simulation cases) systematically within or across companies. Therefore, this study develops a centralized, network-based groundwater model development system to assist model simulation. The system is based on service-oriented architecture and allows remote users to develop their modeling cases over the internet. The data and cases (knowledge) are thus easy to manage centrally. MODFLOW, the most widely used groundwater model in the world, is the modeling engine of the system. Other functions include database management and a variety of model-development web services, including automatic digitization of geology profile maps, assistance in recovering missing groundwater data, graphical data presentation, and automatic generation of MODFLOW input files from the database, the last being the most important function of the system. Since the system architecture is service-oriented, it is scalable and flexible. The system can easily be extended to include scenario analysis and knowledge management to facilitate the reuse of groundwater modeling knowledge.

  7. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.

  8. Characterization of Model-Based Reasoning Strategies for Use in IVHM Architectures

    NASA Technical Reports Server (NTRS)

    Poll, Scott; Iverson, David; Patterson-Hine, Ann

    2003-01-01

    Open architectures are gaining popularity for Integrated Vehicle Health Management (IVHM) applications due to the diversity of subsystem health monitoring strategies in use and the need to integrate a variety of techniques at the system health management level. The basic concept of an open architecture suggests that whatever monitoring or reasoning strategy a subsystem wishes to deploy, the system architecture will support the needs of that subsystem and will be capable of transmitting subsystem health status across subsystem boundaries and up to the system level for system-wide fault identification and diagnosis. There is a need to understand the capabilities of various reasoning engines and how they, coupled with intelligent monitoring techniques, can support fault detection and system level fault management. Researchers in IVHM at NASA Ames Research Center are supporting the development of an IVHM system for liquefying-fuel hybrid rockets. In the initial stage of this project, a few readily available reasoning engines were studied to assess candidate technologies for application in next generation launch systems. Three tools representing the spectrum of model-based reasoning approaches, from a quantitative simulation based approach to a graph-based fault propagation technique, were applied to model the behavior of the Hybrid Combustion Facility testbed at Ames. This paper summarizes the characterization of the modeling process for each of the techniques.

  9. Impact of plant shoot architecture on leaf cooling: a coupled heat and mass transfer model.

    PubMed

    Bridge, L J; Franklin, K A; Homer, M E

    2013-08-01

    Plants display a range of striking architectural adaptations when grown at elevated temperatures. In the model plant Arabidopsis thaliana, these include elongation of petioles, and increased petiole and leaf angles from the soil surface. The potential physiological significance of these architectural changes remains speculative. We address this issue computationally by formulating a mathematical model and performing numerical simulations, testing the hypothesis that elongated and elevated plant configurations may reflect a leaf-cooling strategy. This sets in place a new basic model of plant water use and interaction with the surrounding air, which couples heat and mass transfer within a plant to water vapour diffusion in the air, using a transpiration term that depends on saturation, temperature and vapour concentration. A two-dimensional, multi-petiole shoot geometry is considered, with added leaf-blade shape detail. Our simulations show that increased petiole length and angle generally result in enhanced transpiration rates and reduced leaf temperatures in well-watered conditions. Furthermore, our computations also reveal plant configurations for which elongation may result in decreased transpiration rate owing to decreased leaf liquid saturation. We offer further qualitative and quantitative insights into the role of architectural parameters as key determinants of leaf-cooling capacity.

  10. Impact of plant shoot architecture on leaf cooling: a coupled heat and mass transfer model

    PubMed Central

    Bridge, L. J.; Franklin, K. A.; Homer, M. E.

    2013-01-01

    Plants display a range of striking architectural adaptations when grown at elevated temperatures. In the model plant Arabidopsis thaliana, these include elongation of petioles, and increased petiole and leaf angles from the soil surface. The potential physiological significance of these architectural changes remains speculative. We address this issue computationally by formulating a mathematical model and performing numerical simulations, testing the hypothesis that elongated and elevated plant configurations may reflect a leaf-cooling strategy. This sets in place a new basic model of plant water use and interaction with the surrounding air, which couples heat and mass transfer within a plant to water vapour diffusion in the air, using a transpiration term that depends on saturation, temperature and vapour concentration. A two-dimensional, multi-petiole shoot geometry is considered, with added leaf-blade shape detail. Our simulations show that increased petiole length and angle generally result in enhanced transpiration rates and reduced leaf temperatures in well-watered conditions. Furthermore, our computations also reveal plant configurations for which elongation may result in decreased transpiration rate owing to decreased leaf liquid saturation. We offer further qualitative and quantitative insights into the role of architectural parameters as key determinants of leaf-cooling capacity. PMID:23720538

  11. Gait-based person recognition using arbitrary view transformation model.

    PubMed

    Muramatsu, Daigo; Shiraishi, Akira; Makihara, Yasushi; Uddin, Md Zasim; Yagi, Yasushi

    2015-01-01

    Gait recognition is a useful biometric trait for person authentication because it is usable even with low image resolution. One challenge is robustness to a view change (cross-view matching); view transformation models (VTMs) have been proposed to solve this. The VTMs work well if the target views are the same as their discrete training views. However, the gait traits are observed from an arbitrary view in a real situation. Thus, the target views may not coincide with discrete training views, resulting in recognition accuracy degradation. We propose an arbitrary VTM (AVTM) that accurately matches a pair of gait traits from an arbitrary view. To realize an AVTM, we first construct 3D gait volume sequences of training subjects, disjoint from the test subjects in the target scene. We then generate 2D gait silhouette sequences of the training subjects by projecting the 3D gait volume sequences onto the same views as the target views, and train the AVTM with gait features extracted from the 2D sequences. In addition, we extend our AVTM by incorporating a part-dependent view selection scheme (AVTM_PdVS), which divides the gait feature into several parts, and sets part-dependent destination views for transformation. Because appropriate destination views may differ for different body parts, the part-dependent destination view selection can suppress transformation errors, leading to increased recognition accuracy. Experiments using data sets collected in different settings show that the AVTM improves the accuracy of cross-view matching and that the AVTM_PdVS further improves the accuracy in many cases, in particular, verification scenarios. PMID:25423652

  12. The design, modeling and optimization of on-chip inductor and transformer circuits

    NASA Astrophysics Data System (ADS)

    Mohan, Sunderarajan Sunderesan

    2000-08-01

    On-chip inductors and transformers play a crucial role in radio frequency integrated circuits (RFICs). For gigahertz circuitry, these components are usually realized using bond-wires or planar on-chip spirals. Although bond wires exhibit higher quality factors (Q) than on-chip spirals, their use is constrained by the limited range of realizable inductances, large production fluctuations and large parasitic (bondpad) capacitances. On the other hand, spiral inductors exhibit good matching and are therefore attractive for commonly used differential architectures. Furthermore, they permit a large range of inductances to be realized. However, they possess smaller Q values and are more difficult to model. In this dissertation, we develop a current sheet theory based on fundamental electromagnetic principles that yields simple, accurate inductance expressions for a variety of geometries, including planar spirals that are square, hexagonal, octagonal or circular. When compared to field solver simulations and measurements over a wide design space, these expressions exhibit typical errors of 2-3%, making them ideal for use in circuit synthesis and optimization. When combined with a commonly used lumped π model, these expressions allow the engineer to explore trade-offs quickly and easily. These current sheet based expressions eliminate the need for using segmented summation methods (such as the Greenhouse approach) to evaluate the inductance of spirals. Thus, the design and optimization of on-chip spiral inductors and transformers can now be performed in a standard circuit design environment (such as SPICE). Field solvers (which are difficult to integrate into a circuit design environment) are now only needed to verify the final design. Using these newly developed inductance expressions, this thesis explores how on-chip inductors should be optimized for various circuit applications. In particular, a new design methodology is presented for enhancing the bandwidth of
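The current-sheet expressions referred to in this abstract take the compact form L = (μ₀ n² d_avg c₁ / 2)(ln(c₂/ρ) + c₃ρ + c₄ρ²), where ρ is the fill ratio and c₁–c₄ depend only on the spiral shape. A sketch using coefficient values as commonly tabulated for these expressions (the example dimensions are arbitrary, and exact coefficients are worth verifying against the original publication):

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (H/m)

# Geometry-dependent current-sheet coefficients (c1, c2, c3, c4)
COEFFS = {
    "square":    (1.27, 2.07, 0.18, 0.13),
    "hexagonal": (1.09, 2.23, 0.00, 0.17),
    "octagonal": (1.07, 2.29, 0.00, 0.19),
    "circular":  (1.00, 2.46, 0.00, 0.20),
}

def spiral_inductance(n, d_out, d_in, shape="square"):
    """Current-sheet inductance (H) of a planar spiral.
    n: number of turns; d_out, d_in: outer/inner diameters (m)."""
    c1, c2, c3, c4 = COEFFS[shape]
    d_avg = 0.5 * (d_out + d_in)
    rho = (d_out - d_in) / (d_out + d_in)   # fill ratio
    return (0.5 * MU0 * n**2 * d_avg * c1
            * (math.log(c2 / rho) + c3 * rho + c4 * rho**2))

# e.g. a 5-turn square spiral, 300 um outer and 100 um inner diameter,
# lands in the few-nanohenry range typical of RFIC spirals
L = spiral_inductance(5, 300e-6, 100e-6, "square")
```

Because the expression is closed-form, it can sit directly inside a circuit-optimization loop, which is exactly the advantage over segmented-summation (Greenhouse-style) evaluation that the dissertation emphasizes.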

  13. Dependence of physical and mechanical properties on polymer architecture for model polymer networks

    NASA Astrophysics Data System (ADS)

    Guo, Ruilan

    The effect of nanoscale architecture on the macroscopic properties of polymer materials has long been a field of major interest, as evidenced by inhomogeneities in networks, multimodal network topologies, etc. The primary purpose of this research is to establish the architecture-property relationship of polymer networks by studying the physical and mechanical responses of a series of topologically different PTHF networks. Monodispersed allyl-terminated PTHF precursors were synthesized through "living" cationic polymerization and functional end-capping. Model networks of various crosslink densities and inhomogeneity levels (unimodal, bimodal and clustered) were prepared by endlinking precursors via thiol-ene reaction. Thermal characteristics, i.e., glass transition, melting point, and heat of fusion, of model PTHF networks were investigated as functions of crosslink density and inhomogeneities, which showed different dependence on these two architectural parameters. Study of the freezing point depression (FPD) of solvent confined in swollen networks indicated that the size of solvent microcrystals is comparable to the mesh size formed by intercrosslink chains, depending on crosslink density and inhomogeneities. The relationship between crystal size and FPD provided a good reflection of the existing architectural features of the networks. Mechanical responses of elastic chains to uniaxial strains were studied through SANS. Spatial inhomogeneities in bimodal and clustered networks gave rise to "abnormal butterfly patterns", which became more pronounced as the elongation ratio increased. Radii of gyration of chains were analyzed at directions parallel and perpendicular to the stretching axis. The dependence of Rg on lambda was compared to three rubber elasticity models, and the molecular deformation mechanisms for unimodal, bimodal and clustered networks were explored. The thesis focused its last part on the investigation of evolution of free volume distribution of linear polymer (PE

  14. An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reid, Michael R.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.

  15. The NIST Real-Time Control System (RCS): A Reference Model Architecture for Computational Intelligence

    NASA Technical Reports Server (NTRS)

    Albus, James S.

    1996-01-01

    The Real-time Control System (RCS) developed at NIST and elsewhere over the past two decades defines a reference model architecture for design and analysis of complex intelligent control systems. The RCS architecture consists of a hierarchically layered set of functional processing modules connected by a network of communication pathways. The primary distinguishing feature of the layers is the bandwidth of the control loops. The characteristic bandwidth of each level is determined by the spatial and temporal integration window of filters, the temporal frequency of signals and events, the spatial frequency of patterns, and the planning horizon and granularity of the planners that operate at each level. At each level, tasks are decomposed into sequential subtasks, to be performed by cooperating sets of subordinate agents. At each level, signals from sensors are filtered and correlated with spatial and temporal features that are relevant to the control function being implemented at that level.

  16. From Tls to Hbim. High Quality Semantically-Aware 3d Modeling of Complex Architecture

    NASA Astrophysics Data System (ADS)

    Quattrini, R.; Malinverni, E. S.; Clini, P.; Nespeca, R.; Orlietti, E.

    2015-02-01

    In order to improve the framework for 3D modeling, a great challenge is to obtain the suitability of the Building Information Model (BIM) platform for historical architecture. A specific challenge in HBIM is to guarantee appropriate geometrical accuracy. The present work demonstrates the feasibility of a whole HBIM approach for complex architectural shapes, starting from TLS point clouds. A novelty of our method is to work in a 3D environment throughout the process and to develop semantics during the construction phase. This last feature of HBIM was analyzed in the present work by verifying the studied ontologies, enabling the enrichment of the model with non-geometrical information, such as historical notes, decay or deformation evidence, decorative elements, etc. The case study is the Church of Santa Maria at Portonovo, an abbey from the Romanesque period. Irregular or complex historical architecture, such as Romanesque, needs the construction of shared libraries starting from the survey of its already existing elements. This is another key aspect in delivering Building Information Modeling standards. In particular, we focus on the quality assessment of the obtained model, using open-source software and the point cloud as reference. The proposed work shows how it is possible to develop a high-quality, semantically-aware 3D model, capable of connecting a geometrical-historical survey with descriptive thematic databases. In this way, a centralized HBIM will serve as a comprehensive dataset of information about all disciplines, particularly for restoration and conservation. Moreover, the geometric accuracy will also ensure reliable visualization outputs.

  17. Combining Wavelet Transform and Hidden Markov Models for ECG Segmentation

    NASA Astrophysics Data System (ADS)

    Andreão, Rodrigo Varejão; Boudy, Jérôme

    2006-12-01

    This work aims at providing new insights on the electrocardiogram (ECG) segmentation problem using wavelets. The wavelet transform has been originally combined with a hidden Markov model (HMM) framework in order to carry out beat segmentation and classification. A group of five continuous wavelet functions commonly used in ECG analysis has been implemented and compared using the same framework. All experiments were realized on the QT database, which is composed of a representative number of ambulatory recordings of several individuals and is supplied with manual labels made by a physician. Our main contribution relies on the consistent set of experiments performed. Moreover, the results obtained in terms of beat segmentation and premature ventricular beat (PVC) detection are comparable to other works reported in the literature, independently of the type of the wavelet. Finally, through an original concept of combining two wavelet functions in the segmentation stage, we achieve our best performances.
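The HMM decoding step underlying beat segmentation is the Viterbi algorithm. A toy log-domain sketch follows; in the paper the observations are wavelet-transformed ECG samples, whereas here a two-state chain with invented Gaussian emissions over raw amplitudes stands in:

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely hidden-state path (log-domain dynamic programming)."""
    V = [{s: log_start[s] + log_emit[s](obs[0]) for s in states}]
    back = [{}]
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + log_trans[p][s])
            row[s] = V[-1][prev] + log_trans[prev][s] + log_emit[s](o)
            ptr[s] = prev
        V.append(row)
        back.append(ptr)
    path = [max(states, key=lambda s: V[-1][s])]   # best final state
    for ptr in reversed(back[1:]):                  # trace pointers backwards
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy two-state "beat detector": Gaussian log-likelihood emissions around a
# baseline level (0.0) and a QRS-like level (1.0); all numbers invented.
states = ["baseline", "qrs"]
log_start = {"baseline": math.log(0.95), "qrs": math.log(0.05)}
log_trans = {
    "baseline": {"baseline": math.log(0.9), "qrs": math.log(0.1)},
    "qrs":      {"baseline": math.log(0.3), "qrs": math.log(0.7)},
}
log_emit = {
    "baseline": lambda x: -0.5 * (x / 0.3) ** 2,
    "qrs":      lambda x: -0.5 * ((x - 1.0) / 0.3) ** 2,
}
obs = [0.1, 0.0, 1.2, 1.1, 0.9, 0.05, 0.1]
path = viterbi(obs, states, log_start, log_trans, log_emit)
```

The decoded path marks the three high-amplitude samples as the QRS-like state, which is the same boundary-labelling task the paper performs with P/QRS/T states over wavelet features.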

  18. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Tiger Team

    SciTech Connect

    de Supinski, Bronis R.; Alam, Sadaf; Bailey, David H.; Carrington, Laura; Daley, Chris; Dubey, Anshu; Gamblin, Todd; Gunter, Dan; Hovland, Paul D.; Jagode, Heike; Karavanic, Karen; Marin, Gabriel; Mellor-Crummey, John; Moore, Shirley; Norris, Boyana; Oliker, Leonid; Olschanowsky, Catherine; Roth, Philip C.; Schulz, Martin; Shende, Sameer; Snavely, Allan; Spear, Wyatt; Tikir, Mustafa; Vetter, Jeff; Worley, Pat; Wright, Nicholas

    2009-06-26

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.
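Performance projections of this kind are often anchored by simple bound-based estimates before detailed modeling. The sketch below uses a generic roofline-style lower bound; all machine numbers are hypothetical and are not PERI's actual models:

```python
def roofline_time(flops, bytes_moved, peak_flops, peak_bw):
    """Lower-bound execution time: the kernel is limited by whichever is
    slower, peak compute (flops/s) or peak memory bandwidth (bytes/s)."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Hypothetical kernel: 1e12 flops, 4e11 bytes of memory traffic.
current = roofline_time(1e12, 4e11, peak_flops=1e13, peak_bw=1e11)  # today's node
future = roofline_time(1e12, 4e11, peak_flops=1e14, peak_bw=4e11)   # projected node
print(current, future, current / future)
```

Note how the projected speedup (4x here) tracks the bandwidth improvement, not the 10x compute improvement, because the kernel is memory-bound on both machines; distinguishing such cases is exactly what makes application models useful for procurement guidance.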

  19. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Team

    SciTech Connect

    de Supinski, Bronis R.; Alam, Sadaf R; Bailey, David; Carrington, Laura; Daley, Christopher; Dubey, Anshu; Gamblin, Todd; Gunter, Dan; Hovland, Paul; Jagode, Heike; Karavanic, Karen; Marin, Gabriel; Mellor-Crummey, John; Moore, Shirley; Norris, Boyana; Oliker, Leonid; Olschanowsky, Cathy; Roth, Philip C; Schulz, Martin; Shende, Sameer; Snavely, Allan; Spear, Wyatt; Tikir, Mustafa; Vetter, Jeffrey S; Worley, Patrick H; Wright, Nicholas

    2009-01-01

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  20. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Tiger Team

    SciTech Connect

    de Supinski, B R; Alam, S R; Bailey, D H; Carrington, L; Daley, C

    2009-05-27

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort to the optimization of key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  1. The Transformative Individual School Counseling Model: An Accountability Model for Urban School Counselors

    ERIC Educational Resources Information Center

    Eschenauer, Robert; Chen-Hayes, Stuart F.

    2005-01-01

    The realities and needs of urban students, families, and educators have outgrown traditional individual counseling models. The American School Counselor Association's National Model and National Standards and the Education Trust's Transforming School Counseling Initiative encourage professional school counselors to shift roles toward implementing…

  2. State of the Art of the Landscape Architecture Spatial Data Model from a Geospatial Perspective

    NASA Astrophysics Data System (ADS)

    Kastuari, A.; Suwardhi, D.; Hanan, H.; Wikantika, K.

    2016-10-01

    Spatial data and information have been used for some time in planning and landscape design. For a long time, architects used spatial data in the form of topographic maps for their designs. This method is neither efficient nor as accurate as spatial analysis using GIS. Architects also sometimes emphasize only the aesthetic aspect of a design without taking landscape processes into account, which can render the design unsuitable for its use and purpose. Nowadays, the role of GIS in landscape architecture has been formalized by the emergence of the Geodesign terminology, which starts with a Representation Model and ends with a Decision Model. The development of GIS can be seen in several fields of science that now urgently need 3D GIS, such as 3D urban planning, flood modeling, and landscape planning. In these fields, 3D GIS can support modeling, analysis, management, and integration of related data that describe human activities and geophysical phenomena in a more realistic way. Also, by applying 3D GIS and Geodesign in landscape design, geomorphological information can be better presented and assessed. Some research notes that the development of 3D GIS is not yet established, either in its 3D data structures or in its spatial analysis functions. This literature study addresses those problems by providing information on the existing development of 3D GIS for landscape architecture: data modeling, data accuracy, and the representation of data needed for landscape architecture purposes, specifically in river areas.

  3. Phase-field-crystal methodology for modeling of structural transformations.

    PubMed

    Greenwood, Michael; Rottler, Jörg; Provatas, Nikolas

    2011-03-01

    We introduce and characterize free-energy functionals for modeling of solids with different crystallographic symmetries within the phase-field-crystal methodology. The excess free energy responsible for the emergence of periodic phases is inspired by classical density-functional theory, but uses only a minimal description for the modes of the direct correlation function to preserve computational efficiency. We provide a detailed prescription for controlling the crystal structure and introduce parameters for changing temperature and surface energies, so that phase transformations between body-centered-cubic (bcc), face-centered-cubic (fcc), hexagonal-close-packed (hcp), and simple-cubic (sc) lattices can be studied. To illustrate the versatility of our free-energy functional, we compute the phase diagram for fcc-bcc-liquid coexistence in the temperature-density plane. We also demonstrate that our model can be extended to include hcp symmetry by dynamically simulating hcp-liquid coexistence from a seeded crystal nucleus. We further quantify the dependence of the elastic constants on the model control parameters in two and three dimensions, showing how the degree of elastic anisotropy can be tuned from the shape of the direct correlation functions. PMID:21517507
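The dynamics behind such phase-field-crystal studies can be sketched in 1D with a standard semi-implicit spectral scheme for the PFC equation, d(psi)/dt = lap[(r + (1 + lap)^2) psi + psi^3]. The parameters below are illustrative, not the paper's multi-mode functional:

```python
import numpy as np

def pfc_step(psi, k2, r, dt):
    """One semi-implicit spectral step of the 1D phase-field-crystal
    equation: linear part implicit, psi^3 term explicit."""
    lin = -k2 * (r + (1.0 - k2) ** 2)        # linear operator in Fourier space
    nl = -k2 * np.fft.fft(psi ** 3)          # nonlinear term, explicit
    psi_hat = (np.fft.fft(psi) + dt * nl) / (1.0 - dt * lin)
    return np.fft.ifft(psi_hat).real

n, L = 256, 32 * np.pi
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
k2 = k ** 2
rng = np.random.default_rng(1)
psi = -0.25 + 0.01 * rng.standard_normal(n)  # mean density plus noise
m0 = psi.mean()
for _ in range(500):
    psi = pfc_step(psi, k2, r=-0.5, dt=0.1)
print(psi.std() > 0.05)  # a periodic (striped) phase has emerged
```

Because the k = 0 mode is untouched by both terms, the scheme conserves the mean density exactly, mirroring the conserved dynamics of the full model.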

  4. Kinetic Modeling of Damage Repair, Genome Instability, and Neoplastic Transformation

    SciTech Connect

    Stewart, Robert D

    2007-03-17

    Inducible repair and pathway interactions may fundamentally alter the shape of dose-response curves because different mechanisms may be important under low- and high-dose exposure conditions. However, the significance of these phenomena for risk assessment purposes is an open question. This project developed new modeling tools to study the putative effects of DNA damage induction and repair on higher-level biological endpoints, including cell killing, neoplastic transformation and cancer. The project scope included (1) the development of new approaches to simulate the induction and base excision repair (BER) of DNA damage using Monte Carlo methods and (2) the integration of data from the Monte Carlo simulations with kinetic models for higher-level biological endpoints. Methods of calibrating and testing such multiscale biological simulations were developed. We also developed models to aid in the analysis and interpretation of data from experimental assays, such as the pulsed-field gel electrophoresis (PFGE) assay used to quantify the amount of DNA damage caused by ionizing radiation.
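The simplest kinetic building block of such models is first-order damage induction and repair. A minimal sketch (rate constants and units are hypothetical, and real models in this project couple many such compartments):

```python
import numpy as np

def lesion_kinetics(dose_rate, k_repair, t, L0=0.0):
    """Mean number of unrepaired lesions under constant irradiation with
    first-order repair: dL/dt = dose_rate - k_repair * L (analytic solution)."""
    Lss = dose_rate / k_repair               # steady-state lesion burden
    return Lss + (L0 - Lss) * np.exp(-k_repair * t)

t = np.linspace(0, 10, 101)                  # time (hypothetical units)
L = lesion_kinetics(dose_rate=30.0, k_repair=2.0, t=t)
print(round(float(L[-1]), 3))                # approaches 30/2 = 15 lesions
```

Dose-rate effects fall out directly: halving the dose rate halves the steady-state burden, while inducible (dose-dependent) repair would make k_repair itself a function of damage, bending the dose-response curve.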

  5. Model for a transformer-coupled toroidal plasma source

    NASA Astrophysics Data System (ADS)

    Rauf, Shahid; Balakrishna, Ajit; Chen, Zhigang; Collins, Ken

    2012-01-01

    A two-dimensional fluid plasma model for a transformer-coupled toroidal plasma source is described. Ferrites are used in this device to improve the electromagnetic coupling between the primary coils carrying radio frequency (rf) current and a secondary plasma loop. Appropriate components of the Maxwell equations are solved to determine the electromagnetic fields and electron power deposition in the model. The effect of gas flow on species transport is also considered. The model is applied to 1 Torr Ar/NH3 plasma in this article. Rf electric field lines form a loop in the vacuum chamber and generate a plasma ring. Due to rapid dissociation of NH3, NHx+ ions are more prevalent near the gas inlet and Ar+ ions are the dominant ions farther downstream. NH3 and its by-products rapidly dissociate into small fragments as the gas flows through the plasma. With increasing source power, NH3 dissociates more readily and NHx+ ions are more tightly confined near the gas inlet. Gas flow rate significantly influences the plasma characteristics. With increasing gas flow rate, NH3 dissociation occurs farther from the gas inlet in regions with higher electron density. Consequently, more NH4+ ions are produced and dissociation by-products have higher concentrations near the outlet.

  6. Behavioral Model Architectures: A New Way Of Doing Real-Time Planning In Intelligent Robots

    NASA Astrophysics Data System (ADS)

    Cassinis, Riccardo; Biroli, Ernesto; Meregalli, Alberto; Scalise, Fabio

    1987-01-01

    Traditional hierarchical robot control systems, although well suited for manufacturing applications, appear to be inefficient for innovative applications such as mobile robots. The research we present aims at the development of a new architecture designed to overcome current limitations. The control system was named BARCS (Behavioral Architecture Robot Control System). It is composed of several modules that exchange information through a blackboard. The original point is that the functions of the modules were selected according to a behavioral rather than a functional decomposition model. Therefore, the system includes, among others, purpose, strategy, movement, sensor-handling, and safety modules. Both the hardware structure and the logical decomposition allow great freedom in the design of each module and of the connections between modules, which have to be as flexible and efficient as possible. In order to obtain "intelligent" behavior, a mixture of traditional programming, artificial intelligence techniques, and fuzzy logic is used, according to the needs of each module. The approach is particularly interesting because the robot can quite easily be "specialized", i.e., it can be given behaviors and problem-solving strategies that suit some applications better than others. Another interesting aspect of the proposed architecture is that sensor information handling and fusion can be dynamically tailored to the robot's situation, thus eliminating time-consuming useless processing.
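The blackboard pattern at the heart of BARCS is easy to sketch: modules never call each other, they only post to and read from shared storage. The module set and values below are a toy illustration, not the actual BARCS modules:

```python
class Blackboard:
    """Shared store through which control modules exchange information."""
    def __init__(self):
        self.data = {}
    def post(self, key, value):
        self.data[key] = value
    def read(self, key, default=None):
        return self.data.get(key, default)

class SensorModule:
    def step(self, bb):
        bb.post("obstacle_distance", 0.4)        # stub sensor reading (m)

class SafetyModule:
    def step(self, bb):
        d = bb.read("obstacle_distance", float("inf"))
        bb.post("emergency_stop", d < 0.5)       # veto motion when too close

class MovementModule:
    def step(self, bb):
        bb.post("velocity", 0.0 if bb.read("emergency_stop") else 1.0)

bb = Blackboard()
for module in (SensorModule(), SafetyModule(), MovementModule()):
    module.step(bb)
print(bb.read("velocity"))  # prints 0.0: the safety module vetoed motion
```

Because modules are decoupled through the blackboard, a behavior can be "specialized" by swapping one module without touching the others, which is the flexibility the abstract emphasizes.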

  7. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-04-01

    We analyze the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams that show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modeling groups. These diagrams offer insights into the similarities and differences in structure between climate models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  8. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-01-01

    We analyse the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams which show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modelling groups. These diagrams offer insights into the similarities and differences between models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  9. Investigating the genetic architecture of conditional strategies using the environmental threshold model.

    PubMed

    Buzatto, Bruno A; Buoro, Mathieu; Hazel, Wade N; Tomkins, Joseph L

    2015-12-22

    The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprises the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a 'half-sib common environment' and a 'family-level split environment' experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. From the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic 'proximate' cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions. PMID:26674955
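The core of a threshold model of this kind can be simulated directly: a dichotomous morph appears when a (possibly cryptic) proximate cue exceeds an individual's heritable switch point. All distributions and the cue correlation below are illustrative, not estimates from the earwig data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
# Observable environmental cue and a correlated cryptic "proximate" cue.
cue = rng.normal(0.0, 1.0, n)
rho = 0.9                                    # assumed cue correlation
proximate = rho * cue + np.sqrt(1 - rho**2) * rng.normal(0.0, 1.0, n)
# Heritable threshold (switch point), normally distributed among individuals.
threshold = rng.normal(0.0, 0.3, n)
morph = (proximate > threshold).astype(int)  # dichotomous phenotype

corr = np.corrcoef(cue, proximate)[0, 1]
print(round(float(corr), 2), float(morph.mean()))
```

The estimation difficulty the abstract describes is visible here: only `cue` and `morph` are observable, while `proximate` and `threshold` (whose variance carries the heritability) must be inferred through the dichotomous outcome.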

  10. Model-Driven Development of Reliable Avionics Architectures for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Borer, Nicholas; Claypool, Ian; Clark, David; West, John; Somervill, Kevin; Odegard, Ryan; Suzuki, Nantel

    2010-01-01

    This paper discusses a method used for the systematic improvement of NASA's Lunar Surface Systems avionics architectures in the area of reliability and fault-tolerance. This approach utilizes an integrated system model to determine the effects of component failure on the system's ability to provide critical functions. A Markov model of the potential degraded system modes is created to characterize the probability of these degraded modes, and the system model is run for each Markov state to determine its status (operational or system loss). The probabilistic results from the Markov model are first produced from state transition rates based on NASA heritage failure rate data for similar components. An additional set of probabilistic results is created from a representative set of failure rates developed for this study, for a variety of component quality grades (space-rated, mil-spec, ruggedized, and commercial). The results show that careful application of redundancy and selected component improvement should result in Lunar Surface Systems architectures that exhibit an appropriate degree of fault-tolerance, reliability, performance, and affordability.
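A Markov model of degraded modes reduces to a generator matrix of transition rates whose matrix exponential gives the state probabilities over time. A minimal three-state sketch (the rates are hypothetical, not NASA heritage data):

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = nominal, 1 = degraded (one redundant string lost), 2 = system loss.
lam, mu = 1e-4, 2e-4          # hypothetical failure rates per hour
Q = np.array([
    [-lam,  lam,  0.0],       # nominal -> degraded
    [ 0.0,  -mu,   mu],       # degraded -> system loss
    [ 0.0,  0.0,  0.0],       # system loss is absorbing
])
p0 = np.array([1.0, 0.0, 0.0])
p = p0 @ expm(Q * 1000.0)     # state probabilities after 1000 h
print(p.round(5))
```

Sweeping `lam` and `mu` over the failure rates of different component quality grades, and comparing the resulting loss probabilities, is the kind of trade the paper describes for redundancy and component-improvement decisions.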

  11. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The nonlinear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
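The predict/update cycle of an extended Kalman filter can be sketched with a scalar state: propagate through the nonlinear model, then linearize for the covariance and gain. The plant, sensor, and noise levels below are a toy example, not C-MAPSS40k:

```python
import numpy as np

def ekf_step(x, P, y, f, F, h, H, Q, R):
    """One EKF predict/update cycle (scalar state; matrix case is analogous)."""
    x_pred = f(x)                            # nonlinear state propagation
    P_pred = F(x) * P * F(x) + Q             # covariance via local linearization
    S = H(x_pred) * P_pred * H(x_pred) + R   # innovation covariance
    K = P_pred * H(x_pred) / S               # Kalman gain
    x_new = x_pred + K * (y - h(x_pred))     # measurement update
    P_new = (1 - K * H(x_pred)) * P_pred
    return x_new, P_new

# Hypothetical nonlinear plant: slow drift, squared-output sensor.
f = lambda x: x + 0.1 * np.sin(x)
F = lambda x: 1 + 0.1 * np.cos(x)            # df/dx
h = lambda x: x ** 2
H = lambda x: 2 * x                          # dh/dx

rng = np.random.default_rng(3)
x_true, x_est, P = 1.0, 0.5, 1.0
for _ in range(50):
    x_true = f(x_true) + rng.normal(0, 0.01)
    y = h(x_true) + rng.normal(0, 0.05)
    x_est, P = ekf_step(x_est, P, y, f, F, h, H, Q=0.01 ** 2, R=0.05 ** 2)
print(abs(x_est - x_true) < 0.3)             # estimate tracks the true state
```

The contrast with the piece-wise linear OTKF approach is in the first two lines of `ekf_step`: `f` and the Jacobians are re-evaluated at the current estimate every step rather than taken from off-line linearization points.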

  12. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.

  13. Second Annual Transformative Vertical Flight Concepts Workshop: Enabling New Flight Concepts Through Novel Propulsion and Energy Architectures

    NASA Technical Reports Server (NTRS)

    Dudley, Michael R. (Editor); Duffy, Michael; Hirschberg, Michael; Moore, Mark; German, Brian; Goodrich, Ken; Gunnarson, Tom; Petermaier, Korbinian; Stoll, Alex; Fredericks, Bill; Gibson, Andy; Newman, Aron; Ouellette, Richard; Antcliff, Kevin; Sinkula, Michael; Buettner-Garrett, Josh; Ricci, Mike; Keogh, Rory; Moser, Tim; Borer, Nick; Rizzi, Steve; Lighter, Gwen

    2015-01-01

    On August 3rd and 4th, 2015, a workshop was held at the NASA Ames Research Center, located at the Moffett Federal Airfield in California, to explore the aviation community's interest in Transformative Vertical Flight (TVF) Concepts. The Workshop was sponsored by AHS International (AHS), the American Institute of Aeronautics and Astronautics (AIAA), and the National Aeronautics and Space Administration (NASA), and hosted by the NASA Aeronautics Research Institute (NARI). This second annual workshop built on the success and enthusiasm generated by the first TVF Workshop held in Washington, DC in August of 2014. The previous Workshop identified the existence of a multi-disciplinary community interested in this topic and established a consensus among the participants that opportunities to establish further collaborations in this area are warranted. The desire to conduct a series of annual workshops, augmented by online virtual technical seminars, to strengthen the TVF community and continue planning for advocacy and collaboration was a direct outcome of the first Workshop. The second Workshop's organizers focused on four desired action-oriented outcomes. The first was to establish and document common stakeholder needs and areas of potential collaboration. This includes advocacy strategies to encourage the future success of unconventional vertiport-capable flight concept solutions that are enabled by emerging technologies. The second was to assemble a community that can collaborate on new conceptual design and analysis tools to permit novel configuration paths with far greater multi-disciplinary coupling (i.e., aero-propulsive-control) to be investigated. The third was to establish a community to develop and deploy regulatory guidelines.
This community would have the potential to initiate formation of an American Society for Testing and Materials (ASTM) F44 Committee Subgroup for the development of consensus-based certification standards for General Aviation scale vertiport

  14. The Reactive-Causal Architecture: Introducing an Emotion Model along with Theories of Needs

    NASA Astrophysics Data System (ADS)

    Aydin, Ali Orhan; Orgun, Mehmet Ali

    In the entertainment application area, one of the major aims is to develop believable agents. To achieve this aim, agents should be highly autonomous, situated, flexible, and display affect. The Reactive-Causal Architecture (ReCau) is proposed to simulate these core attributes. In its current form, ReCau cannot explain the effects of emotions on intelligent behaviour. This study aims to further improve the emotion model of ReCau to explain the effects of emotions on intelligent behaviour. This improvement allows ReCau to be emotional, supporting the development of believable agents.

  15. Structural Models that Manage IT Portfolio Affecting Business Value of Enterprise Architecture

    NASA Astrophysics Data System (ADS)

    Kamogawa, Takaaki

    This paper examines the structural relationships between Information Technology (IT) governance and Enterprise Architecture (EA), with the objective of enhancing business value in the enterprise society. Structural models consisting of four related hypotheses reveal the relationship between IT governance and EA in the improvement of business values. We statistically examined the hypotheses by analyzing validated questionnaire items from respondents within firms listed on the Japanese stock exchange who were qualified to answer them. We concluded that firms which have organizational ability controlled by IT governance are more likely to deliver business value based on IT portfolio management.

  16. Guiding Principles for Data Architecture to Support the Pathways Community HUB Model

    PubMed Central

    Zeigler, Bernard P.; Redding, Sarah; Leath, Brenda A.; Carter, Ernest L.; Russell, Cynthia

    2016-01-01

    Introduction: The Pathways Community HUB Model provides a unique strategy to effectively supplement health care services with social services needed to overcome barriers for those most at risk of poor health outcomes. Pathways are standardized measurement tools used to define and track health and social issues from identification through to a measurable completion point. The HUB uses Pathways to coordinate agencies and service providers in the community to eliminate the inefficiencies and duplication that exist among them. Pathways Community HUB Model and Formalization: Experience with the Model has brought out the need for better information technology solutions to support implementation of the Pathways themselves through decision-support tools for care coordinators and other users to track activities and outcomes, and to facilitate reporting. Here we provide a basis for discussing recommendations for such a data infrastructure by developing a conceptual model that formalizes the Pathway concept underlying current implementations. Requirements for Data Architecture to Support the Pathways Community HUB Model: The main contribution is a set of core recommendations as a framework for developing and implementing a data architecture to support implementation of the Pathways Community HUB Model. The objective is to present a tool for communities interested in adopting the Model to learn from and to adapt in their own development and implementation efforts. Problems with Quality of Data Extracted from the CHAP Database: Experience with the Community Health Access Project (CHAP) database system (the core implementation of the Model) has identified several issues and the remedies that have been developed to address them. Based on analysis of these issues and remedies, we present several key features for a data architecture meeting the aforementioned recommendations. 
Implementation of Features: Presentation of features is followed by a practical guide to their implementation
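The Pathway concept the article formalizes (a tracked issue moving from identification to a measurable completion point) can be sketched as a record type. All field and class names here are hypothetical illustrations, not the CHAP schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Pathway:
    """Minimal sketch of a Pathway record: one tracked health or social
    issue, from identification to a measurable completion point."""
    client_id: str
    issue: str
    opened: date
    steps: list = field(default_factory=list)     # intermediate actions
    completed: Optional[date] = None

    def complete(self, when: date) -> None:
        self.completed = when

    @property
    def open(self) -> bool:
        return self.completed is None

p = Pathway("c-001", "housing", date(2016, 1, 4))
p.steps.append("referral made")
p.complete(date(2016, 3, 1))
print(p.open)  # prints False: the pathway reached completion
```

A data architecture along the article's recommendations would standardize such records across agencies so that HUB-level reporting and care-coordinator decision support draw on one consistent structure.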

  17. High-Performance Work Systems: American Models of Workplace Transformation.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Batt, Rosemary

    Rising competition in world and domestic markets for the past 2 decades has necessitated that U.S. companies undergo significant transformations to improve their performance with respect to a wide array of efficiency and quality indicators. Research on the transformations recently undertaken by some U.S. companies to boost performance revealed two…

  18. Shedding new light on the molecular architecture of oocytes using a combination of synchrotron Fourier transform-infrared and Raman spectroscopic mapping.

    PubMed

    Wood, Bayden R; Chernenko, Tatyana; Matthäus, Christian; Diem, Max; Chong, Connie; Bernhard, Uditha; Jene, Cassandra; Brandli, Alice A; McNaughton, Don; Tobin, Mark J; Trounson, Alan; Lacham-Kaplan, Orly

    2008-12-01

    Synchrotron Fourier transform-infrared (FT-IR) and Raman microspectroscopy were applied to investigate changes in the molecular architecture of mouse oocytes and demonstrate the overall morphology of the maturing oocyte. Here we show that differences were identified between immature mouse oocytes at the germinal vesicle (GV) and mature metaphase II (MII) stage when using this technology, without the introduction of any extrinsic markers, labels, or dyes. GV mouse oocytes were found to have a small, centrally located lipid deposit and another larger polar deposit of similar composition. MII oocytes have very large, centrally located lipid deposits. Each lipid deposit for both cell types contains an inner and outer lipid environment that differs in composition. To assess interoocyte variability, line scans were recorded across the diameter of the oocytes and compared from three independent trials (GV, n = 91; MII, n = 172), and the data were analyzed with principal component analysis (PCA). The average spectra and PCA loading plots show distinct and reproducible changes in the CH stretching region that can be used as molecular maturation markers. The method paves the way for developing an independent assay to assess oocyte status during maturation providing new insights into lipid distribution at the single cell level.
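The PCA step described above (line-scan spectra reduced to components that separate maturation stages) can be sketched on synthetic band-shifted spectra. The band positions, widths, and group sizes below are illustrative, not the oocyte data:

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD on mean-centered spectra (rows = spectra)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    return scores, Vt[:n_components]

# Two synthetic groups differing by a CH-stretch-like band shift.
wn = np.linspace(2800, 3000, 200)               # wavenumber axis (cm^-1)
band = lambda center: np.exp(-((wn - center) / 15.0) ** 2)
rng = np.random.default_rng(4)
A = np.array([band(2850) + 0.02 * rng.standard_normal(200) for _ in range(30)])
B = np.array([band(2870) + 0.02 * rng.standard_normal(200) for _ in range(30)])
scores, loadings = pca(np.vstack([A, B]))
sep = abs(scores[:30, 0].mean() - scores[30:, 0].mean())
print(sep > 1.0)  # PC1 separates the two groups
```

As in the paper, the loading vector indicates which spectral region (here the shifted band) carries the group difference, which is how CH-stretch features can serve as maturation markers.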

  19. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
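The core trick behind FFT-based Kriging on a regular grid is that applying spatially invariant covariance weights is a convolution, which costs O(N log N) in Fourier space. A CPU-only sketch with NumPy (the exponential kernel is a generic stand-in, and cuFFT would replace `numpy.fft` on the GPU):

```python
import numpy as np

def fft_convolve2d(grid, kernel):
    """Circular 2-D convolution via FFT: O(N log N) instead of O(N^2)."""
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.fft.fft2(kernel)))

rng = np.random.default_rng(5)
grid = rng.random((64, 64))
# Hypothetical covariance kernel (exponential decay), wrapped on the grid.
y, x = np.meshgrid(*[np.fft.fftfreq(64, d=1 / 64)] * 2, indexing="ij")
kernel = np.exp(-np.hypot(x, y) / 4.0)
kernel /= kernel.sum()                       # normalize the weights

smooth = fft_convolve2d(grid, kernel)
print(abs(smooth.mean() - grid.mean()) < 1e-10)  # normalized kernel keeps the mean
```

Kriging additionally requires solving for the weights from the covariance model and handling the trend (the "regression" part); the FFT only accelerates the stationary convolution-like stages, which is why those three processes were the ones redesigned as GPU kernels.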

  1. View factor modeling of sputter-deposition on micron-scale-architectured surfaces exposed to plasma

    NASA Astrophysics Data System (ADS)

    Huerta, C. E.; Matlock, T. S.; Wirz, R. E.

    2016-03-01

    The sputter-deposition on surfaces exposed to plasma plays an important role in the erosion behavior and overall performance of a wide range of plasma devices. Plasma models in the low density, low energy plasma regime typically neglect micron-scale surface feature effects on the net sputter yield and erosion rate. The model discussed in this paper captures such surface architecture effects via a computationally efficient view factor model. The model compares well with experimental measurements of argon ion sputter yield from a nickel surface with a triangle wave geometry with peak heights in the hundreds of microns range. Further analysis with the model shows that increasing the surface pitch angle beyond about 45° can lead to significant decreases in the normalized net sputter yield for all simulated ion incident energies (i.e., 75, 100, 200, and 400 eV) for both smooth and roughened surfaces. At higher incident energies, smooth triangular surfaces exhibit a nonmonotonic trend in the normalized net sputter yield with surface pitch angle with a maximum yield above unity over a range of intermediate angles. The resulting increased erosion rate occurs because increased sputter yield due to the local ion incidence angle outweighs increased deposition due to the sputterant angular distribution. The model also compares well with experimentally observed radial expansion of protuberances (measuring tens of microns) in a nano-rod field exposed to an argon beam. The model captures the coalescence of sputterants at the protuberance sites and accurately illustrates the structure's expansion due to deposition from surrounding sputtering surfaces; these capabilities will be used for future studies into more complex surface architectures.
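A toy sketch of the competition the abstract describes for a V-groove surface under normal-incidence ions: steepening the facets raises the local incidence angle (which tends to raise the yield) but also lets the opposing facet intercept more of the sputtered flux. Both functional forms and all constants below are invented for illustration and are not the paper's view factor model:

```python
import math

def net_yield(pitch_deg, cap=10.0):
    """Crude normalized net yield for one facet of a V-groove, normal ion beam."""
    theta = math.radians(pitch_deg)  # local incidence angle equals the pitch angle
    # Simple 1/cos yield-vs-angle model, capped to avoid the grazing singularity.
    local_yield = min(1.0 / math.cos(theta), cap) if theta < math.pi / 2 else cap
    # Crude geometric stand-in for the view factor of the opposing facet:
    # the recaptured (redeposited) fraction grows from 0 (flat) toward 1 (deep groove).
    recapture = theta / (math.pi / 2)
    return local_yield * (1.0 - recapture)

flat = net_yield(1.0)    # nearly flat surface
steep = net_yield(80.0)  # steep groove: redeposition dominates
```

Even this toy reproduces the qualitative trend reported above: past a moderate pitch angle, redeposition outweighs the increased local yield and the net yield falls.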

  2. A Generic Model to Simulate Air-Borne Diseases as a Function of Crop Architecture

    PubMed Central

    Casadebaig, Pierre; Quesnel, Gauthier; Langlais, Michel; Faivre, Robert

    2012-01-01

    In a context of pesticide use reduction, alternatives to chemical-based crop protection strategies are needed to control diseases. Crop and plant architectures can be viewed as levers to control disease outbreaks by affecting microclimate within the canopy or pathogen transmission between plants. Modeling and simulation is a key approach to help analyze the behaviour of such systems where direct observations are difficult and tedious. Modeling permits the joining of concepts from ecophysiology and epidemiology to define structures and functions generic enough to describe a wide range of epidemiological dynamics. Additionally, this conception should minimize computing time by both limiting the complexity and setting an efficient software implementation. In this paper, our aim was to present a model that suited these constraints so it could first be used as a research and teaching tool to promote discussions about epidemic management in cropping systems. The system was modelled as a combination of individual hosts (population of plants or organs) and infectious agents (pathogens) whose contacts are restricted through a network of connections. The system dynamics were described at an individual scale. Additional attention was given to the identification of generic properties of host-pathogen systems to widen the model's applicability domain. Two specific pathosystems with contrasted crop architectures were considered: ascochyta blight on pea (homogeneously layered canopy) and potato late blight (lattice of individualized plants). The model behavior was assessed by simulation and sensitivity analysis and these results were discussed against the model ability to discriminate between the defined types of epidemics. Crop traits related to disease avoidance resulting in a low exposure, a slow dispersal or a de-synchronization of plant and pathogen cycles were shown to strongly impact the disease severity at the crop scale. PMID:23226209
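A minimal sketch of the modelling idea above (not the authors' implementation): individual hosts on a contact network, with infection passing only along network edges and dynamics described at the individual scale. The ring topology and all rates are illustrative assumptions:

```python
import random

def simulate_sir(adjacency, p_infect, p_recover, initial, steps, rng):
    """Discrete-time SIR on a contact network; states are 'S', 'I', 'R'."""
    state = {node: 'S' for node in adjacency}
    state[initial] = 'I'
    for _ in range(steps):
        new_state = dict(state)
        for node, status in state.items():
            if status == 'I':
                # Infection can only travel along the network of connections.
                for nbr in adjacency[node]:
                    if state[nbr] == 'S' and rng.random() < p_infect:
                        new_state[nbr] = 'I'
                if rng.random() < p_recover:
                    new_state[node] = 'R'
        state = new_state
    return state

# Ring of 20 plants: each host contacts only its two neighbours, a crude
# stand-in for a "lattice of individualized plants" architecture.
n = 20
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(42)
final = simulate_sir(ring, p_infect=0.6, p_recover=0.2, initial=0, steps=30, rng=rng)
n_susceptible = sum(1 for s in final.values() if s == 'S')
```

Swapping the ring for a denser graph is the network analogue of changing crop architecture: fewer connections means lower exposure and slower dispersal, the avoidance traits the abstract highlights.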

  4. Infra-Free® (IF) Architecture System as the Method for Post-Disaster Shelter Model

    NASA Astrophysics Data System (ADS)

    Chang, Huai-Chien; Anilir, Serkan

    Currently, the International Space Station (ISS) can support three to four astronauts onboard for at least six months using an integrated life support system. Waste from the crew's daily life is collected by waste recycling systems, electricity is generated from solar energy, and so on. Although this resembles the infrastructure we use on Earth, the ISS can be understood as a nearly self-reliant, integrated architecture, and it offers an important hint for terrestrial architecture, which depends on centralized urban infrastructure that is vulnerable to natural disasters. Increasingly, economic activities and communications rely on this enormous centralized infrastructure to support daily life, so a natural disaster can cut off infrastructure temporarily or permanently. To address this problem, we propose a temporary shelter capable of operating without any existing infrastructure. Using the Infra-Free® design framework, which integrates various life-supporting infrastructural elements into one closed system, we apply closed-life-cycle and integrated technologies inspired by the possibilities of space and other emerging technologies to everyday architecture. We develop a scenario for post-disaster housing that resolves lifeline problems, including solid and liquid waste, energy, water, and hygiene, within a single system, and work toward an Infra-Free® shelter model for disaster areas. The ultimate objective is to design a temporary Infra-Free® model that addresses sanitation and environmental preservation concerns in disaster areas.

  5. The Empirical Comparison of Coordinate Transformation Models and Distortion Modeling Methods Based on a Case Study of Croatia

    NASA Astrophysics Data System (ADS)

    Grgic, M.; Varga, M.; Bašić, T.

    2015-12-01

    Several coordinate transformation models enable coordinate transformations between the historical astro-geodetic datums, which were used before GNSS (Global Navigation Satellite System) technologies were developed, and datums related to the International Terrestrial Reference System (ITRS), which today are most often used to determine position. The choice of the most appropriate coordinate transformation model is influenced by many factors, such as the required accuracy, the available computational resources, the applicability of the model to the size and shape of the territory, and the coordinate distortion that very often exists in historical astro-geodetic datums. This study is based on geodetic data of the Republic of Croatia in both the historical and the ITRS-related datum. It investigates different transformation models, including the conformal Molodensky 3-parameter (p) and 5p (standard and abridged) models, 7p models (Bursa-Wolf and Molodensky-Badekas), affine models (8p, 9p, 12p), and the Multiple Regression Equation approach. It also investigates the 7p, 8p, 9p, and 12p models extended with distortion modeling, and the purely grid-based transformation model (NTv2). Furthermore, several distortion modeling methods were used to produce models of distortion shifts at different resolutions. Their performance and that of the transformation models were then evaluated using summary statistics derived from the remaining positional residuals computed for an independent control spatial data set. Finally, the most appropriate distortion modeling method(s) and coordinate transformation model(s) were identified with respect to the required accuracy for the Croatian case.
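The 7-parameter (Bursa-Wolf) model named above has a simple closed form: three translations, three small rotations, and a scale factor applied to Earth-centred Cartesian coordinates. A sketch of the common linearized (small-angle) form; sign conventions for the rotations vary between countries, and the parameter values below are illustrative, not the Croatian ones:

```python
import numpy as np

def bursa_wolf(xyz, tx, ty, tz, rx, ry, rz, scale_ppm):
    """Apply the linearized Bursa-Wolf similarity transform.

    xyz: Earth-centred Cartesian coordinates in metres.
    tx, ty, tz: translations (m); rx, ry, rz: rotations (radians, small);
    scale_ppm: scale change in parts per million.
    """
    t = np.array([tx, ty, tz])
    m = 1.0 + scale_ppm * 1e-6
    # Small-angle rotation matrix (one common geodetic sign convention).
    R = np.array([[1.0,  rz, -ry],
                  [-rz, 1.0,  rx],
                  [ ry, -rx, 1.0]])
    return t + m * (R @ xyz)

p = np.array([4000000.0, 1000000.0, 4700000.0])  # metres, Earth-centred
q = bursa_wolf(p, tx=550.0, ty=160.0, tz=460.0,
               rx=2e-5, ry=1e-5, rz=-3e-5, scale_ppm=4.0)

# With all seven parameters zero, the transform reduces to the identity.
identity = bursa_wolf(p, 0, 0, 0, 0, 0, 0, 0)
```

The affine 8p/9p/12p models generalize this by allowing a full (non-orthogonal) matrix in place of m·R; distortion modeling then corrects the residual pattern this similarity transform cannot absorb.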

  6. Lamb wave propagation modelling and simulation using parallel processing architecture and graphical cards

    NASA Astrophysics Data System (ADS)

    Paćko, P.; Bielak, T.; Spencer, A. B.; Staszewski, W. J.; Uhl, T.; Worden, K.

    2012-07-01

    This paper demonstrates new parallel computation technology and an implementation for Lamb wave propagation modelling in complex structures. A graphical processing unit (GPU) and the compute unified device architecture (CUDA), available in low-cost graphics cards in standard PCs, are used for Lamb wave propagation numerical simulations. The local interaction simulation approach (LISA) wave propagation algorithm has been implemented as an example; other algorithms suitable for parallel discretization can also be used in practice. The method is illustrated using examples related to damage detection. The results demonstrate good accuracy and effective computational performance for very large models. The wave propagation modelling presented in the paper can be used in many practical applications of science and engineering.
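The reason LISA-style algorithms map so well onto GPUs is that every grid point is updated from its immediate neighbours only, so all points can be computed in parallel. A 1-D finite-difference analogue of such a local-interaction update (the LISA algorithm itself is more elaborate; this is purely illustrative):

```python
import numpy as np

n, steps = 200, 100
c = 0.5  # Courant number, kept below 1 for stability
u = np.exp(-0.1 * (np.arange(n) - n // 2) ** 2)  # initial Gaussian pulse
u_prev = u.copy()                                # zero initial velocity

for _ in range(steps):
    u_next = np.zeros(n)  # fixed (zero) boundaries
    # Purely local leapfrog update: each interior point depends only on itself
    # and its two neighbours -- exactly the structure a GPU thread-per-cell
    # kernel exploits.
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + c ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

peak = np.max(np.abs(u))  # pulse splits into two half-amplitude travelling waves
```

In the CUDA version described above, the vectorized interior update becomes a kernel launched over all cells; the data dependencies are identical.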

  7. Hierarchical fiber bundle model to investigate the complex architectures of biological materials.

    PubMed

    Pugno, Nicola M; Bosia, Federico; Abdalrahman, Tamer

    2012-01-01

    The mechanics of fiber bundles has been widely studied in the literature, and fiber bundle models in particular have provided a wealth of useful analytical and numerical results for modeling ordinary materials. These models, however, are inadequate to treat bioinspired nanostructured materials, where hierarchy, multiscale, and complex properties play a decisive role in determining the overall mechanical characteristics. Here, we develop an ad hoc hierarchical theory designed to tackle these complex architectures, thus allowing the determination of the strength of macroscopic hierarchical materials from the properties of their constituents at the nanoscale. The roles of finite size, twisting angle, and friction are also included. Size effects on the statistical distribution of fiber strengths naturally emerge without invoking best-fit or unknown parameters. A comparison between the developed theory and various experimental results on synthetic and natural materials yields considerable agreement. PMID:22400587
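For contrast with the hierarchical theory developed in the paper, the classical equal-load-sharing fibre bundle model it builds on can be sketched in a few lines: fibres with random strengths share the load equally, and the bundle strength is the peak force before the failure cascade runs away. The Weibull strength distribution and bundle size here are illustrative:

```python
import numpy as np

def bundle_strength(strengths):
    """Peak force a bundle carries under equal load sharing.

    With strengths sorted ascending, if the k weakest fibres have failed the
    remaining (N - k) each carry the common load, so the bundle sustains a
    force of strengths[k] * (N - k); bundle strength is the maximum over k.
    """
    s = np.sort(strengths)
    n = len(s)
    return np.max(s * (n - np.arange(n)))

rng = np.random.default_rng(1)
strengths = rng.weibull(2.0, size=10_000)  # Weibull-distributed fibre strengths
sigma_bundle = bundle_strength(strengths) / len(strengths)  # per-fibre strength
sigma_mean_fibre = strengths.mean()
```

Disorder makes the bundle weaker than its average fibre (here roughly 0.43 vs 0.89 in these units); the hierarchical theory above stacks such bundles across scales so that size effects emerge without fitted parameters.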

  8. Service Oriented Architectural Model for Load Flow Analysis in Power Systems

    NASA Astrophysics Data System (ADS)

    Muthu, Balasingh Moses; Veilumuthu, Ramachandran; Ponnusamy, Lakshmi

    2011-07-01

    The main objective of this paper is to develop a Service Oriented Architecture (SOA) model for the representation of power systems, especially for computing load flow analysis of large interconnected power systems. The proposed SOA model has three elements: a load flow service provider, a power systems registry, and a client. The exchange of data using XML makes the power system services standardized and adaptable. The load flow service is provided by the service provider and published in the power systems registry, enabling universal visibility of and access to the service. The message-oriented style of SOA using the Simple Object Access Protocol (SOAP) allows the service provider and the power systems client to exist in a loosely coupled environment. The proposed model portrays load flow services as Web services in a service-oriented environment. To suit power system industry needs, it integrates easily with Web applications, enabling faster power system operations.

  9. Development of a Subcell Based Modeling Approach for Modeling the Architecturally Dependent Impact Response of Triaxially Braided Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Sorini, Chris; Chattopadhyay, Aditi; Goldberg, Robert K.; Kohlman, Lee W.

    2016-01-01

    Understanding the high velocity impact response of polymer matrix composites with complex architectures is critical to many aerospace applications, including engine fan blade containment systems where the structure must be able to completely contain fan blades in the event of a blade-out. Despite the benefits offered by these materials, the complex nature of textile composites presents a significant challenge for the prediction of deformation and damage under both quasi-static and impact loading conditions. The relatively large mesoscale repeating unit cell (in comparison to the size of structural components) causes the material to behave like a structure rather than a homogeneous material. Impact experiments conducted at NASA Glenn Research Center have shown the damage patterns to be a function of the underlying material architecture. Traditional computational techniques that involve modeling these materials using smeared homogeneous, orthotropic material properties at the macroscale result in simulated damage patterns that are a function of the structural geometry, but not the material architecture. In order to preserve heterogeneity at the highest length scale in a robust yet computationally efficient manner, and capture the architecturally dependent damage patterns, a previously-developed subcell modeling approach where the braided composite unit cell is approximated as a series of four adjacent laminated composites is utilized. This work discusses the implementation of the subcell methodology into the commercial transient dynamic finite element code LS-DYNA (Livermore Software Technology Corp.). Verification and validation studies are also presented, including simulation of the tensile response of straight-sided and notched quasi-static coupons composed of a T700/PR520 triaxially braided [0deg/60deg/-60deg] composite. 
Based on the results of the verification and validation studies, advantages and limitations of the methodology, as well as plans for future work, are discussed.

  10. Conversion of Highly Complex Faulted Hydrostratigraphic Architectures into MODFLOW Grid for Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Tsai, F. T.

    2013-12-01

    The USGS MODFLOW is widely used for groundwater modeling. Because it uses a structured grid, all layers must be continuous throughout the model domain, which makes it difficult to generate a computational grid for complex hydrostratigraphic architectures that include thin and discontinuous layers, interconnections of sand units, pinch-outs, and faults. In this study, we present a technique for automatically generating a MODFLOW grid for complex aquifer systems with strong sand-clay binary heterogeneity. An indicator geostatistical method is adopted to interpolate sand and clay distributions in a gridded two-dimensional plane along the structural dip for every one-foot vertical interval, and a three-dimensional gridded binary geological architecture is reconstructed by assembling all two-dimensional planes. The geological architecture is then converted to a MODFLOW computational grid as follows. First, we determine the bed boundary elevations of sand and clay units for each vertical column. Next, we determine the total number of bed boundaries for a vertical column by projecting the bed boundaries of its four adjacent vertical columns onto the column; this step is important for preserving flow pathways, especially narrow connections between sand units. Finally, we determine the number of MODFLOW layers and assign layer indices to bed boundaries. A MATLAB code was developed to implement the technique. The inputs to the code are bed boundary data from well logs, a structural dip, a minimal layer thickness, and the number of layers; the outputs are a MODFLOW grid of sand and clay indicators. The technique generates grids that preserve fault features in the geological architecture, and the code is very efficient for regenerating the MODFLOW grid at different resolutions. The technique was applied to MODFLOW grid generation for the fluvial aquifer system in Baton Rouge, Louisiana. The study area consists of the '1,200-foot' sand, the '1
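A toy sketch of the column-projection step described above: each vertical column's set of bed-boundary elevations is augmented with the boundaries of its four neighbours, so thin sand connections are not lost when layers are assigned. The data layout and helper name are illustrative, not the authors' MATLAB code:

```python
def augmented_boundaries(grid, i, j):
    """Union of a column's bed boundaries with those of its 4 neighbours."""
    n_rows, n_cols = len(grid), len(grid[0])
    merged = set(grid[i][j])
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < n_rows and 0 <= nj < n_cols:
            merged.update(grid[ni][nj])
    return sorted(merged, reverse=True)  # elevations, top-down

# 2x2 grid of columns; each column lists bed-boundary elevations (feet).
grid = [[[0.0, -50.0, -120.0], [0.0, -60.0, -120.0]],
        [[0.0, -55.0, -120.0], [0.0, -50.0, -60.0, -120.0]]]
bounds = augmented_boundaries(grid, 0, 0)
```

After projection, every column in a neighbourhood shares a common set of candidate layer interfaces, which is what lets a continuous-layer MODFLOW grid still represent a pinch-out or a narrow inter-sand connection.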

  11. Quantitative Analysis and Modeling of 3-D TSV-Based Power Delivery Architectures

    NASA Astrophysics Data System (ADS)

    He, Huanyu

    As 3-D technology enters the commercial production stage, it is critical to understand different 3-D power delivery architectures on the stacked ICs and packages with through-silicon vias (TSVs). Appropriate design, modeling, analysis, and optimization approaches of the 3-D power delivery system are of foremost significance and great practical interest to the semiconductor industry in general. Based on fundamental physics of 3-D integration components, the objective of this thesis work is to quantitatively analyze the power delivery for 3D-IC systems, develop appropriate physics-based models and simulation approaches, understand the key issues, and provide potential solutions for design of 3D-IC power delivery architectures. In this work, a hybrid simulation approach is adopted as the major approach along with analytical method to examine 3-D power networks. Combining electromagnetic (EM) tools and circuit simulators, the hybrid approach is able to analyze and model micrometer-scale components as well as centimeter-scale power delivery system with high accuracy and efficiency. The parasitic elements of the components on the power delivery can be precisely modeled by full-wave EM solvers. Stack-up circuit models for the 3-D power delivery networks (PDNs) are constructed through a partition and assembly method. With the efficiency advantage of the SPICE circuit simulation, the overall 3-D system power performance can be analyzed and the 3-D power delivery architectures can be evaluated in a short computing time. The major power delivery issues are the voltage drop (IR drop) and voltage noise. With a baseline of 3-D power delivery architecture, the on-chip PDNs of TSV-based chip stacks are modeled and analyzed for the IR drop and AC noise. The basic design factors are evaluated using the hybrid approach, such as the number of stacked chips, the number of TSVs, and the TSV arrangement. 
Analytical formulas are also developed to evaluate the IR drop in 3-D chip stacks.
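The IR-drop part of such an analysis reduces, in its simplest form, to nodal analysis of a resistive network: G v = i. An illustrative sketch with a power delivery path collapsed to a resistive ladder (supply pad, then a chain of nodes standing in for TSV-connected tiers); all resistances and load currents are invented:

```python
import numpy as np

def solve_ir_drop(r_links, i_loads, v_supply):
    """Nodal analysis of a chain: supply -R0- node0 -R1- node1 - ...

    r_links[k] connects node k to node k-1 (or to the supply for k=0);
    i_loads[k] is the current each node sinks to ground. Returns node voltages.
    """
    n = len(i_loads)
    G = np.zeros((n, n))
    rhs = -np.asarray(i_loads, dtype=float)  # load currents leave each node
    g0 = 1.0 / r_links[0]
    G[0, 0] += g0
    rhs[0] += g0 * v_supply                  # link from node 0 to the fixed supply
    for k in range(1, n):
        g = 1.0 / r_links[k]
        G[k - 1, k - 1] += g
        G[k, k] += g
        G[k - 1, k] -= g
        G[k, k - 1] -= g
    return np.linalg.solve(G, rhs)

v = solve_ir_drop(r_links=[0.01, 0.02, 0.02, 0.02],
                  i_loads=[0.5, 0.5, 0.5, 0.5], v_supply=1.0)
ir_drop = 1.0 - v  # voltage sag per node; the farthest node sags most
```

Stacking more chips lengthens the chain and deepens the sag at the top tier, which is exactly the effect the hybrid EM/SPICE approach above quantifies with realistic parasitics.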

  12. A computational model for sex-specific genetic architecture of complex traits in humans: Implications for mapping pain sensitivity

    PubMed Central

    Wang, Chenguang; Cheng, Yun; Liu, Tian; Li, Qin; Fillingim, Roger B; Wallace, Margaret R; Staud, Roland; Kaplan, Lee; Wu, Rongling

    2008-01-01

    Understanding differences in the genetic architecture of complex traits between the two sexes has significant implications for evolutionary studies and clinical diagnosis. However, our knowledge about sex-specific genetic architecture is limited largely because of a lack of analytical models that can detect and quantify the effects of sex on the complexity of quantitative genetic variation. Here, we derived a statistical model for mapping DNA sequence variants that contribute to sex-specific differences in allele frequencies, linkage disequilibria, and additive and dominance genetic effects due to haplotype diversity. This model allows a genome-wide search for functional haplotypes and the estimation and test of haplotype by sex interactions and sex-specific heritability. The model, validated by simulation studies, was used to detect sex-specific functional haplotypes that encode a pain sensitivity trait in humans. The model could have important implications for mapping complex trait genes and studying the detailed genetic architecture of sex-specific differences. PMID:18416828

  13. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions

    PubMed Central

    Potter, Gail E.; Smieszek, Timo; Sailer, Kerstin

    2015-01-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0–5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models. PMID:26634122

  14. Modeling and characterization of long term material behavior in polymer composites with woven fiber architecture

    NASA Astrophysics Data System (ADS)

    Gupta, Vikas

    The purpose of this research is to develop an analytical tool which, when coupled with accelerated material characterization, is capable of predicting long-term durability of polymers and their composites. Conducting creep test on each composite laminate with different fibers, fiber volume fractions, and weave architectures is impractical. Moreover, in case of thin laminates, accurately characterizing the out-of-plane matrix dominated viscoelastic response is not easily achievable. Therefore, the primary objective of this paper is to present a multi-scale modeling methodology to simulate the long-term interlaminar properties in polymer matrix woven composites and then predict the critical regions where failure is most likely to occur. A micromechanics approach towards modeling the out-of-plane viscoelastic behavior of a five-harness satin woven-fiber cross-ply composite laminate is presented, taking into consideration the weave architecture and time-dependent effects. Short-term creep tests were performed on neat resin at different test temperatures and stress levels to characterize physical aging of the resin matrix. In addition, creep and recovery experiments were conducted on un-aged resin specimens in order to characterize the pronounced stress-dependent nonlinear viscoelastic response of the PR500 resin. Two-dimensional micromechanics analysis was carried out using a test-bed finite element code, NOVA-3D, including interactions between non-linear material constitutive behavior, geometric nonlinearity, aging and environmental effects.

  15. Modelling skin penetration using the Laplace transform technique.

    PubMed

    Anissimov, Y G; Watkinson, A

    2013-01-01

    The Laplace transform is a convenient mathematical tool for solving ordinary and partial differential equations. The application of this technique to problems arising in drug penetration through the skin is reviewed in this paper.
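Laplace-domain solutions of the diffusion problems arising in skin penetration often have no convenient closed-form inverse, so numerical inversion is used. One common choice (an assumption here, not necessarily the method the review favours) is the Gaver-Stehfest algorithm, sketched below and checked on F(s) = 1/(s + a), whose inverse is exp(-a t); a skin model would supply its own F:

```python
import math

def stehfest_coefficients(n):
    """Stehfest weights V_1..V_n (n must be even)."""
    V = []
    for k in range(1, n + 1):
        total = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            total += (j ** (n // 2) * math.factorial(2 * j)
                      / (math.factorial(n // 2 - j) * math.factorial(j)
                         * math.factorial(j - 1) * math.factorial(k - j)
                         * math.factorial(2 * j - k)))
        V.append((-1) ** (k + n // 2) * total)
    return V

def invert_laplace(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s) by Gaver-Stehfest."""
    V = stehfest_coefficients(n)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, n + 1))

a = 2.0
approx = invert_laplace(lambda s: 1.0 / (s + a), t=0.5)
exact = math.exp(-a * 0.5)
```

Gaver-Stehfest only needs F at real s, which suits the membrane-diffusion transforms typical of this field, but it is ill-suited to oscillatory f(t) and loses accuracy if n is pushed too high in double precision.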

  16. A hybrid fast Hankel transform algorithm for electromagnetic modeling

    USGS Publications Warehouse

    Anderson, W.L.

    1989-01-01

    A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram (called HYBFHT) written in standard Fortran-77 provides a simple user interface to call either subalgorithm. The hybrid approach is an attempt to combine the best features of the two subalgorithms to minimize the user's coding requirements and to provide fast execution and good accuracy for a large class of electromagnetic problems involving various related Hankel transform sets with multiple arguments. Special cases of Hankel transforms of double order and double argument are discussed, where use of HYBFHT is shown to be advantageous for oscillatory kernel functions. -Author
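To make concrete what these algorithms compute: the order-zero Hankel transform is F(b) = integral from 0 to infinity of f(r) J0(b r) r dr. A plain brute-force quadrature sketch, checked against the known pair f(r) = exp(-a r) -> a / (a^2 + b^2)^(3/2); digital-filter FHT codes like the one above evaluate the same integral far faster by convolving with precomputed filter weights:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule along the last axis."""
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

def bessel_j0(x):
    """J0 via its integral representation (1/pi) * int_0^pi cos(x sin t) dt."""
    theta = np.linspace(0.0, np.pi, 501)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return trapezoid(np.cos(np.outer(x, np.sin(theta))), theta) / np.pi

def hankel0(f, b, r_max=40.0, n=10001):
    """Truncated quadrature for F(b) = int_0^inf f(r) J0(b r) r dr."""
    r = np.linspace(0.0, r_max, n)
    return trapezoid(f(r) * bessel_j0(b * r) * r, r)

a, b = 1.0, 0.5
numeric = hankel0(lambda r: np.exp(-a * r), b)
exact = a / (a * a + b * b) ** 1.5
```

The truncation at r_max is only safe because this kernel decays exponentially; for the slowly decaying, oscillatory kernels the abstract mentions, the quadrature-plus-continued-fraction machinery (Chave's subalgorithm) earns its keep.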

  17. Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures

    NASA Astrophysics Data System (ADS)

    Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.

    2014-12-01

    In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes, which will increase the size of current data archives by a factor of 10-100 times. For example, the next Climate Model Inter-comparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 Peta-bytes, while the upcoming next generation of NASA decadal Earth Observing instruments are expected to collect tens of Giga-bytes/day. In radio-astronomy, the Square Kilometre Array (SKA) will collect data in the Exa-bytes/day range, of which (after reduction and processing) around 1.5 Exa-bytes/year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that will allow system architects to model their expected data processing workflow, and determine the network, computational and storage resources needed to prepare their data for scientific analysis. In order to fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrary complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources), and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for 4 different use cases from distinct science disciplines: climate science, astronomy, hydrology and a generic cloud computing use case. This talk will present
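A back-of-envelope version of the kind of tradeoff such a simulator evaluates (all names, sizes, and rates below are invented for illustration, not DAWN's actual estimators): for each candidate placement of a processing task, total elapsed time is transfer time plus compute time, and the workflow planner picks the cheaper option:

```python
def elapsed_time(data_gb, transfer_gbps, flops_needed, flops_per_sec):
    """Crude elapsed-time estimator: move the data, then compute on it."""
    transfer_s = data_gb * 8 / transfer_gbps  # GB -> Gb, then divide by link rate
    compute_s = flops_needed / flops_per_sec
    return transfer_s + compute_s

# Option A: ship 1 PB to a fast remote cluster over a 100 Gbps link.
remote = elapsed_time(1e6, transfer_gbps=100, flops_needed=1e18, flops_per_sec=1e15)
# Option B: compute in place on slower local nodes, no data movement.
local = elapsed_time(0.0, transfer_gbps=100, flops_needed=1e18, flops_per_sec=2e14)
best = min(remote, local)
```

A real model like DAWN layers many more estimators on top of this (storage, task dependencies, uncertainty propagation), but the core decision it supports is of this shape.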

  18. Some approaches for modeling and analysis of a parallel mechanism with stewart platform architecture

    SciTech Connect

    V. De Sapio

    1998-05-01

    Parallel mechanisms represent a family of devices based on a closed kinematic architecture. This is in contrast to serial mechanisms, which are comprised of a chain-like series of joints and links in an open kinematic architecture. The closed architecture of parallel mechanisms offers certain benefits and disadvantages.
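One well-known consequence of the closed architecture is that the inverse kinematics of a Stewart platform are closed-form: each leg length is simply the distance between its base attachment point and its platform attachment point after the platform pose is applied. A sketch with an illustrative hexagonal geometry (the radii, attachment angles, and planar-rotation pose are assumptions, not from the paper):

```python
import numpy as np

def leg_lengths(base_pts, plat_pts, translation, yaw):
    """Leg lengths for a pose with translation plus rotation about z by `yaw`."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    moved = (R @ plat_pts.T).T + translation  # platform attachments in base frame
    return np.linalg.norm(moved - base_pts, axis=1)

angles = np.radians([0, 60, 120, 180, 240, 300])
base_pts = np.stack([2.0 * np.cos(angles), 2.0 * np.sin(angles), np.zeros(6)], axis=1)
plat_pts = np.stack([1.0 * np.cos(angles), 1.0 * np.sin(angles), np.zeros(6)], axis=1)

# Neutral pose: platform raised 1.5 units, no rotation -> all six legs equal.
L = leg_lengths(base_pts, plat_pts, np.array([0.0, 0.0, 1.5]), yaw=0.0)
```

The forward kinematics (pose from leg lengths), by contrast, has no general closed form and must be solved numerically, which is one of the disadvantages the closed architecture trades for its stiffness and load-sharing benefits.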

  19. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; Brandt, Steven R.; Ciznicki, Milosz; Kierzynka, Michal; Löffler, Frank; Schnetter, Erik; Tao, Jian

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  20. Publishing biomedical journals on the World-Wide Web using an open architecture model.

    PubMed Central

    Shareck, E. P.; Greenes, R. A.

    1996-01-01

    BACKGROUND: In many respects, biomedical publications are ideally suited for distribution via the World-Wide Web, but economic concerns have prevented the rapid adoption of an on-line publishing model. PURPOSE: We report on our experiences with assisting biomedical journals in developing an online presence, issues that were encountered, and methods used to address these issues. Our approach is based on an open architecture that fosters adaptation and interconnection of biomedical resources. METHODS: We have worked with the New England Journal of Medicine (NEJM), as well as five other publishers. A set of tools and protocols was employed to develop a scalable and customizable solution for publishing journals on-line. RESULTS: In March, 1996, the New England Journal of Medicine published its first World-Wide Web issue. Explorations with other publishers have helped to generalize the model. CONCLUSIONS: Economic and technical issues play a major role in developing World-Wide Web publishing solutions. PMID:8947685

  1. Analysis of optical near-field energy transfer by stochastic model unifying architectural dependencies

    SciTech Connect

    Naruse, Makoto; Akahane, Kouichi; Yamamoto, Naokatsu; Holmström, Petter; Thylén, Lars; Huant, Serge; Ohtsu, Motoichi

    2014-04-21

    We theoretically and experimentally demonstrate energy transfer mediated by optical near-field interactions in a multi-layer InAs quantum dot (QD) structure composed of a single layer of larger dots and N layers of smaller ones. We construct a stochastic model in which optical near-field interactions that follow a Yukawa potential, QD size fluctuations, and temperature-dependent energy level broadening are unified, enabling us to examine device-architecture-dependent energy transfer efficiencies. The model results are consistent with the experiments. This study provides an insight into optical energy transfer involving inherent disorders in materials and paves the way to systematic design principles of nanophotonic devices that will allow optimized performance and the realization of designated functions.

  2. A Grid-Based Architecture for Coupling Hydro-Meteorological Models

    NASA Astrophysics Data System (ADS)

    Schiffers, Michael; Straube, Christian; gentschen Felde, Nils; Clematis, Andrea; Galizia, Antonella; D'Agostino, Daniele; Danovaro, Emanuele

    2014-05-01

    Computational hydro-meteorological research (HMR) requires the execution of various meteorological, hydrological, hydraulic, and impact models, either standalone or as well-orchestrated chains (workflows). While the former approach is straightforward, the latter one is not, because consecutive models may depend on different execution environments, on organizational constraints, and on separate data formats and semantics to be bridged. Consequently, in order to gain the most benefit from HMR model chains, it is of paramount interest to a) seamlessly couple heterogeneous models; b) access models and data in various administrative domains; and c) execute models on the most appropriate resources available at the right time. In this contribution we present our experience in using a Grid-based computing infrastructure for HMR. In particular we will first explore various coupling mechanisms. We then specify an enabling Grid infrastructure to support dynamic model chains. Using the DRIHM project as an example we report on implementation details, especially in the context of the European Grid Infrastructure (EGI). Finally, we apply the architecture to hydro-meteorological disaster management and elaborate on the opportunities the Grid infrastructure approach offers in a worldwide context.

  3. Key Technology Research on Open Architecture for The Sharing of Heterogeneous Geographic Analysis Models

    NASA Astrophysics Data System (ADS)

    Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.

    2013-10-01

    In recent years, the increasing development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture is apt to provide centralized solutions to end users, while all the required resources are often offered by large enterprises or special agencies. Thus, it is a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can package and deploy their models into the cloud conveniently, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies, namely the unified cloud interface strategy, the sharing platform based on cloud services, and the computing platform based on cloud services, are discussed in detail, and related experiments are conducted for further verification.

  4. Continuous distribution model for the investigation of complex molecular architectures near interfaces with scattering techniques

    NASA Astrophysics Data System (ADS)

    Shekhar, Prabhanshu; Nanda, Hirsh; Lösche, Mathias; Heinrich, Frank

    2011-11-01

    Biological membranes are composed of a thermally disordered lipid matrix and therefore require non-crystallographic scattering approaches for structural characterization with x-rays or neutrons. Here we develop a continuous distribution (CD) model to refine neutron or x-ray reflectivity data from complex architectures of organic molecules. The new model is a flexible implementation of the composition-space refinement of interfacial structures to constrain the resulting scattering length density profiles. We show this model increases the precision with which molecular components may be localized within a sample, with a minimal use of free model parameters. We validate the new model by parameterizing all-atom molecular dynamics (MD) simulations of bilayers and by evaluating the neutron reflectivity of a phospholipid bilayer physisorbed to a solid support. The determination of the structural arrangement of a sparsely-tethered bilayer lipid membrane (stBLM) comprised of a multi-component phospholipid bilayer anchored to a gold substrate by a thiolated oligo(ethylene oxide) linker is also demonstrated. From the model we extract the bilayer composition and density of tether points, information which was previously inaccessible for stBLM systems. The new modeling strategy has been implemented into the ga_refl reflectivity data evaluation suite, available through the National Institute of Standards and Technology (NIST) Center for Neutron Research (NCNR).
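    The composition-space idea in the abstract can be sketched numerically: each molecular component contributes a volume-fraction distribution along the surface normal, and the scattering length density (SLD) profile is their weighted sum, with solvent filling the remaining volume. The component boundaries, roughness, and SLD values below are invented for illustration, not the paper's fitted parameters.

```python
# Schematic composition-space SLD profile: smoothed boxes for two lipid
# components, solvent elsewhere. All numerical values are illustrative.
import math

def box(z, z0, z1, sigma):
    """Volume fraction of a component occupying [z0, z1] (angstroms),
    smeared by interfacial roughness sigma."""
    return 0.5 * (math.erf((z - z0) / (sigma * 2 ** 0.5))
                  - math.erf((z - z1) / (sigma * 2 ** 0.5)))

components = [  # (z0, z1, sigma, SLD in 1e-6 / A^2) -- invented numbers
    (0.0, 15.0, 3.0, -0.4),   # hydrocarbon chains
    (15.0, 24.0, 3.0, 1.8),   # headgroups
]

def sld(z, solvent_sld=6.4):
    """SLD at depth z: component contributions plus solvent back-fill."""
    phi = [box(z, z0, z1, s) for z0, z1, s, _ in components]
    filled = sum(phi)
    return (sum(f * rho for f, (_, _, _, rho) in zip(phi, components))
            + (1 - filled) * solvent_sld)

profile = [sld(z) for z in range(-10, 40, 5)]
print(profile)
```

Constraining the profile through volume fractions in this way is what lets the refinement localize components with few free parameters.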

  5. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  6. PDS4: Meeting Big Data Challenges Via a Model-Driven Planetary Science Data Architecture and System

    NASA Astrophysics Data System (ADS)

    Law, E.; Hughes, J. S.; Crichton, D. J.; Hardman, S. H.; Joyner, R.; Ramirez, P.

    2014-12-01

    Big science data management entails cataloging, processing, distribution, multiple ways of analyzing and interpreting the data, long-term preservation, and international cooperation on massive amounts of scientific data. PDS4, the next generation of the Planetary Data System (PDS), uses an information model-driven architectural approach coupled with modern information technologies and standards to meet these challenges of big science data management. PDS4 is an operational example of the use of an explicit data system architecture and an ontology-based information model to drive the development, operations, and evolution of a scalable data system along the entire science data lifecycle, from ground systems to the archives. This overview of PDS4 will include a description of its model-driven approach and its overall systems architecture. It will illustrate how the system is being used to help meet the expectations of modern scientists for interoperable data systems and correlatable data in the Big Data era.

  7. PREDICTING SUBSURFACE CONTAMINANT TRANSPORT AND TRANSFORMATION: CONSIDERATIONS FOR MODEL SELECTION AND FIELD VALIDATION

    EPA Science Inventory

    Predicting subsurface contaminant transport and transformation requires mathematical models based on a variety of physical, chemical, and biological processes. The mathematical model is an attempt to quantitatively describe observed processes in order to permit systematic forecas...

  8. Integrating Model-Based Transmission Reduction into a multi-tier architecture

    NASA Astrophysics Data System (ADS)

    Straub, J.

    A multi-tier architecture consists of numerous craft in its orbital, aerial, and surface tiers. Each tier is able to collect progressively greater levels of information. Generally, craft from lower-level tiers are deployed to a target of interest based on its identification by a higher-level craft. While the architecture promotes significant amounts of science being performed in parallel, this may overwhelm the computational and transmission capabilities of higher-tier craft and links (particularly the deep space link back to Earth). Because of this, a new paradigm in in-situ data processing is required. Model-based transmission reduction (MBTR) is such a paradigm. Under MBTR, each node (whether a single spacecraft in orbit of the Earth or another planet, or a member of a multi-tier network) is given an a priori model of the phenomenon that it is assigned to study. It performs activities to validate this model. If the model is found to be erroneous, corrective changes are identified, assessed to ensure they are significant enough to be passed on, and prioritized for transmission. A limited amount of verification data is sent with each MBTR assertion message to allow those that might rely on the data to validate the correct operation of the spacecraft and the onboard MBTR engine. Integrating MBTR with a multi-tier framework creates an MBTR hierarchy. Higher levels of the MBTR hierarchy task lower levels with the data collection and assessment tasks required to validate or correct elements of their model. A model of the expected conditions is sent to the lower-level craft, which then engages its own MBTR engine to validate or correct the model. This may include tasking a yet lower level of craft to perform activities. When the MBTR engine at a given level receives all of its component data (whether directly collected or from delegation), it randomly chooses some to validate (by reprocessing the validation data), performs analysis and sends its own results (v
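    The core MBTR idea described above, comparing observations against an a priori model and transmitting only significant corrections, can be sketched in a few lines. This is an invented illustration of the concept, not the authors' implementation; the parameter names and values are hypothetical.

```python
# Illustrative sketch of model-based transmission reduction: a node transmits
# only those observations that deviate significantly from its a priori model.

def mbtr_messages(model, observations, threshold):
    """Yield (key, observed) pairs only where the model is significantly wrong."""
    for key, observed in observations.items():
        predicted = model.get(key)
        if predicted is None or abs(observed - predicted) > threshold:
            yield key, observed

model = {"albedo": 0.30, "temp_k": 210.0}   # a priori model (invented values)
obs   = {"albedo": 0.31, "temp_k": 198.0}   # new measurements (invented)

updates = dict(mbtr_messages(model, obs, threshold=5.0))
print(updates)  # -> {'temp_k': 198.0}: only the large deviation is transmitted
```

In the hierarchy described above, each level would apply this filter before passing corrections (plus a small sample of verification data) upward.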

  9. Architectural Methodology Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kinds of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment. It does not address an analog voice architecture.

  10. Equivalent circuit of radio frequency-plasma with the transformer model.

    PubMed

    Nishida, K; Mochizuki, S; Ohta, M; Yasumoto, M; Lettry, J; Mattei, S; Hatayama, A

    2014-02-01

    The LINAC4 H(-) source is a radio frequency (RF) driven source. In the RF system, the load impedance, which includes the H(-) source, must be matched to that of the final amplifier. We model the RF plasma inside the H(-) source as circuit elements using a transformer model, so that the characteristics of the load impedance become calculable. It has been shown that modeling based on the transformer model works well to predict the resistance and inductance of the plasma.
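    In the standard transformer model of an inductively coupled plasma, the antenna is the primary, the plasma acts as a lossy one-turn secondary, and the plasma impedance is reflected into the primary through the mutual inductance. The sketch below evaluates that textbook relation; the component values are invented and are not those of the LINAC4 source.

```python
# Transformer model of an RF-driven plasma load: the secondary (plasma)
# impedance R_p + jwL_p is reflected into the primary as (wM)^2 / Z_secondary.
# All component values are illustrative, not measured LINAC4 parameters.
import math

def load_impedance(f, R_coil, L_coil, M, R_plasma, L_plasma):
    """Complex impedance seen at the antenna terminals at drive frequency f (Hz)."""
    w = 2 * math.pi * f
    Z_secondary = complex(R_plasma, w * L_plasma)
    Z_reflected = (w * M) ** 2 / Z_secondary   # plasma reflected into the primary
    return complex(R_coil, w * L_coil) + Z_reflected

Z = load_impedance(f=2e6, R_coil=0.5, L_coil=5e-6, M=1e-6,
                   R_plasma=2.0, L_plasma=0.8e-6)
print(f"R = {Z.real:.2f} ohm, X = {Z.imag:.2f} ohm")
```

The real part of the result exceeds the bare coil resistance, which is how the plasma resistance becomes visible (and matchable) at the amplifier side.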

  11. Use of Dynamic Models and Operational Architecture to Solve Complex Navy Challenges

    NASA Technical Reports Server (NTRS)

    Grande, Darby; Black, J. Todd; Freeman, Jared; Sorber, Tim; Serfaty, Daniel

    2010-01-01

    The United States Navy established 8 Maritime Operations Centers (MOC) to enhance the command and control of forces at the operational level of warfare. Each MOC is a headquarters manned by qualified joint operational-level staffs, and enabled by globally interoperable C4I systems. To assess and refine MOC staffing, equipment, and schedules, a dynamic software model was developed. The model leverages pre-existing operational process architecture, joint military task lists that define activities and their precedence relations, as well as Navy documents that specify manning and roles per activity. The software model serves as a "computational wind-tunnel" in which to test a MOC on a mission, and to refine its structure, staffing, processes, and schedules. More generally, the model supports resource allocation decisions concerning Doctrine, Organization, Training, Material, Leadership, Personnel and Facilities (DOTMLPF) at MOCs around the world. A rapid prototype effort efficiently produced this software in less than five months, using an integrated process team consisting of MOC military and civilian staff, modeling experts, and software developers. The work reported here was conducted for Commander, United States Fleet Forces Command in Norfolk, Virginia, code N5-0LW (Operational Level of War), which facilitates the identification, consolidation, and prioritization of MOC capabilities requirements, and the implementation and delivery of MOC solutions.

  12. Coupling Multi-Component Models with MPH on Distributed MemoryComputer Architectures

    SciTech Connect

    He, Yun; Ding, Chris

    2005-03-24

    A growing trend in developing large and complex applications on today's Teraflop scale computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. One example is the Community Climate System Model which consists of atmosphere, ocean, land-surface and sea-ice components. Each component is semi-independent and has been developed at a different institution. We study how this multi-component, multi-executable application can run effectively on distributed memory architectures. For the first time, we clearly identify five effective execution modes and develop the MPH library to support application development utilizing these modes. MPH performs component-name registration, resource allocation and initial component handshaking in a flexible way.

  13. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally-conservative engine operating limits may be relaxed to increase the performance of the engine and overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters, and over the level of risk the engine will operate at. This will allow the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.
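    The abstract's optimal tuner Kalman Filter estimates unmeasured engine parameters from related measurements. The scalar filter below is only a generic illustration of that idea (estimating a hidden constant from noisy observations), not the CMAPSS40k/OTKF implementation; the "stall margin" quantity and all numbers are invented.

```python
# Minimal 1-D Kalman filter: estimate a constant hidden parameter from a
# stream of noisy measurements. Generic sketch, not the OTKF itself.
import random

def kalman_constant(measurements, meas_var, init_est=0.0, init_var=1e6):
    """Sequentially update the estimate x and its variance p."""
    x, p = init_est, init_var
    for z in measurements:
        k = p / (p + meas_var)   # Kalman gain
        x = x + k * (z - x)      # pull the estimate toward the measurement
        p = (1 - k) * p          # shrink the estimate variance
    return x

random.seed(0)
true_margin = 12.0                               # hypothetical margin, percent
zs = [true_margin + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_constant(zs, meas_var=0.25)
print(f"estimated margin: {est:.2f} %")
```

A full MBEC filter estimates a vector of coupled states from multiple sensors, but the gain/update/shrink cycle is the same.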

  14. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2015-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40,000) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally-conservative engine operating limits may be relaxed to increase the performance of the engine and overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters, and over the level of risk the engine will operate at. This will allow the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.

  15. Transforming an MFT Program: A Model for Enhancing Diversity

    ERIC Educational Resources Information Center

    McDowell, Teresa; Fang, Shi-Ruei; Brownlee, Kenya; Young, Cecilia Gomez; Khanna, Anchal

    2002-01-01

    Marriage and family therapy programs need to go beyond the typical practices of recruiting and retaining students of color. Marriage and family therapy educators must assume positions of leadership by transforming graduate programs to reflect a deep, active, systemic commitment to both diversity and social justice. In this article, we argue that…

  16. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model.

    PubMed

    Lo, Chiao-Ling; Lossie, Amy C; Liang, Tiebing; Liu, Yunlong; Xuei, Xiaoling; Lumeng, Lawrence; Zhou, Feng C; Muir, William M

    2016-08-01

    Investigations of the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view of potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally-selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), was classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169), with the least in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits. PMID:27490364
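    A "signature of selection" in this design is excessive allele-frequency differentiation between the high- and low-preference lines. The toy scan below flags variants whose between-line frequency difference exceeds a threshold; it is a conceptual sketch with invented frequencies, not the statistic the authors used.

```python
# Toy differentiation scan: flag SNPs whose allele frequency differs sharply
# between two selected lines. Illustrative only; frequencies are invented.

def selection_scan(freq_high, freq_low, threshold=0.8):
    """Return indices of SNPs with |p_high - p_low| above threshold."""
    return [i for i, (a, b) in enumerate(zip(freq_high, freq_low))
            if abs(a - b) > threshold]

high = [0.95, 0.50, 0.10, 0.99]   # allele frequencies in the high-AP line
low  = [0.05, 0.45, 0.90, 0.60]   # allele frequencies in the low-AP line
print(selection_scan(high, low))  # -> [0]
```

A real analysis would account for drift and replicate concordance rather than a fixed threshold, which is why the study's replicated 8-way cross design matters.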

  17. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model

    PubMed Central

    Lo, Chiao-Ling; Liang, Tiebing; Liu, Yunlong; Lumeng, Lawrence; Zhou, Feng C.; Muir, William M.

    2016-01-01

    Investigations of the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view of potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally-selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), was classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169), with the least in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits. PMID:27490364

  18. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model.

    PubMed

    Lo, Chiao-Ling; Lossie, Amy C; Liang, Tiebing; Liu, Yunlong; Xuei, Xiaoling; Lumeng, Lawrence; Zhou, Feng C; Muir, William M

    2016-08-01

    Investigations of the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view of potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally-selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a corresponding high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), was classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169), with the least in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal function involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits.

  19. Model-Based Systems Engineering With the Architecture Analysis and Design Language (AADL) Applied to NASA Mission Operations

    NASA Technical Reports Server (NTRS)

    Munoz Fernandez, Michela Miche

    2014-01-01

    The potential of Model-Based Systems Engineering (MBSE) using the Architecture Analysis and Design Language (AADL) applied to space systems will be described. AADL modeling is applicable to real-time embedded systems, the types of systems NASA builds. A case study with the Juno mission to Jupiter showcases how this work would enable future missions to benefit from using these models throughout their life cycle, from design to flight operations.

  20. A physically based model for the isothermal martensitic transformation in a maraging steel

    NASA Astrophysics Data System (ADS)

    Kruijver, S. O.; Blaauw, H. S.; Beyer, J.; Post, J.

    2003-10-01

    Isothermal transformation from austenite to martensite in steel products during or after the production process often produces residual stresses which can create unacceptable dimensional changes in the final product. In order to gain more insight into the effects influencing the isothermal transformation, the overall kinetics in a low carbon-nickel maraging steel is investigated. The influence of the austenitizing temperature, time and quenching rate on the transformation is measured magnetically and yields information about the transformation rate and the final amount of transformation. A physically based model describing the nucleation and growth of martensite is used to explain the observed effects. The results show a very good fit between the experimental values and the model description of the transformation, within the limitations of the inhomogeneities (carbides and intermetallics, their size and distribution in the material, and the stress state) and the experimental conditions.
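    The paper uses its own physically based nucleation-and-growth description; as a generic stand-in for the shape of isothermal transformation curves, the classical JMAK (Avrami) equation X(t) = 1 - exp(-k t^n) is sketched below. The rate constant k and exponent n are invented, not fitted to the paper's data.

```python
# Generic JMAK (Avrami) isothermal transformation kinetics: transformed
# fraction vs. time. Illustrative parameters only, not the paper's model.
import math

def jmak_fraction(t, k, n):
    """Transformed fraction after time t under JMAK kinetics."""
    return 1.0 - math.exp(-k * t ** n)

for t in (10, 100, 1000):  # seconds, illustrative
    print(t, round(jmak_fraction(t, k=1e-4, n=1.5), 3))
```

The sigmoidal rise this produces (slow start, acceleration, saturation) is the qualitative behavior a magnetic transformation measurement traces out.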

  1. Forecasting performance of denoising signal by Wavelet and Fourier Transforms using SARIMA model

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Tahir; Mamat, Siti Salwana; Hamzah, Firdaus Mohamad; Karim, Samsul Ariffin Abdul

    2014-07-01

    The goal of this research is to determine the forecasting performance of denoised signals. Monthly rainfall and the monthly number of rain days over a duration of 20 years (1990-2009) from the Bayan Lepas station are utilized as the case study. The Fast Fourier Transform (FFT) and the Wavelet Transform (WT) are used in this research to obtain the denoised signal. The denoised data obtained by the Fast Fourier Transform and the Wavelet Transform are then analyzed with a seasonal ARIMA model. The best fitted model is determined by the minimum value of the MSE. The result indicates that the Wavelet Transform is an effective method for denoising the monthly rainfall and number of rain days signals, compared to the Fast Fourier Transform.
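    Fourier denoising of the kind compared in the abstract amounts to transforming the series, zeroing the small coefficients, and transforming back. The toy below does this on a synthetic series (not the Bayan Lepas data) with a naive O(N^2) DFT so the sketch stays dependency-free; a real analysis would use a library FFT.

```python
# Toy Fourier denoising: keep only the largest-magnitude DFT coefficients.
# Synthetic signal; naive DFT for a dependency-free sketch.
import cmath
import math
import random

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]

def fourier_denoise(x, keep_ratio=0.05):
    """Zero all but the largest-magnitude fraction keep_ratio of coefficients."""
    X = dft(x)
    cutoff = sorted(abs(c) for c in X)[int(len(X) * (1 - keep_ratio))]
    return idft([c if abs(c) >= cutoff else 0 for c in X])

random.seed(1)
true_sig = [math.sin(2 * math.pi * 4 * n / 64) for n in range(64)]
xs = [s + random.gauss(0, 0.4) for s in true_sig]          # noisy series
den = fourier_denoise(xs)

noisy_err = sum((a - b) ** 2 for a, b in zip(xs, true_sig))
clean_err = sum((a - b) ** 2 for a, b in zip(den, true_sig))
print(f"squared error: noisy {noisy_err:.2f} -> denoised {clean_err:.2f}")
```

Wavelet denoising replaces the global Fourier basis with localized wavelet coefficients, which is why it handles non-stationary series such as rainfall better, consistent with the abstract's finding.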

  2. Hysteresis Loop for a No-loaded, Delta-connected Transformer Model Deduced from Measurements

    NASA Astrophysics Data System (ADS)

    Corrodi, Yves; Kamei, Kenji; Kohyama, Haruhiko; Ito, Hiroki

    At a transformer's steady-state condition, whereby the transformer and its load are constantly supplied by a sinusoidal source, the current-flux pair within the transformer core and its windings will cycle along a hysteresis loop. This nonlinear current-flux characteristic becomes important when a transformer is re-energized. A remaining residual flux, and the fact that a transformer is typically used up to its saturation level, can lead to high-amplitude magnetizing inrush currents and associated voltage disturbances. These disturbances can be reduced by controlled transformer switching. In order to pre-evaluate the effect of a specific controlled transformer energization, pre-simulations can be applied. In that case the hysteresis loop and its saturation characteristic become the most important model parameters. If the corresponding manufacturer specifications are not available, a standard hysteresis loop can be used, but it might yield an inaccurate simulation result. Therefore, this paper analyses the measured 3-phase currents from two delta-connected power transformers by “Fourier Series” in order to deduce a single-phase hysteresis loop, which can be implemented into a typical 3-phase transformer model. Additionally, the saturation behavior of a power transformer is estimated, and a comparison of ATP/EMTP simulations concludes this paper.

  3. Integrating mixed-effect models into an architectural plant model to simulate inter- and intra-progeny variability: a case study on oil palm (Elaeis guineensis Jacq.).

    PubMed

    Perez, Raphaël P A; Pallas, Benoît; Le Moguédec, Gilles; Rey, Hervé; Griffon, Sébastien; Caliman, Jean-Pierre; Costes, Evelyne; Dauzat, Jean

    2016-08-01

    Three-dimensional (3D) reconstruction of plants is time-consuming and involves considerable levels of data acquisition. This is possibly one reason why the integration of genetic variability into 3D architectural models has so far been largely overlooked. In this study, an allometry-based approach was developed to account for architectural variability in 3D architectural models of oil palm (Elaeis guineensis Jacq.) as a case study. Allometric relationships were used to model architectural traits from individual leaflets to the entire crown while accounting for ontogenetic and morphogenetic gradients. Inter- and intra-progeny variabilities were evaluated for each trait and mixed-effect models were used to estimate the mean and variance parameters required for complete 3D virtual plants. Significant differences in leaf geometry (petiole length, density of leaflets, and rachis curvature) and leaflet morphology (gradients of leaflet length and width) were detected between and within progenies and were modelled in order to generate populations of plants that were consistent with the observed populations. The application of mixed-effect models on allometric relationships highlighted an interesting trade-off between model accuracy and ease of defining parameters for the 3D reconstruction of plants while at the same time integrating their observed variability. Future research will be dedicated to sensitivity analyses coupling the structural model presented here with a radiative balance model in order to identify the key architectural traits involved in light interception efficiency. PMID:27302128

  5. Assessment of model uncertainty during the river export modelling of pesticides and transformation products

    NASA Astrophysics Data System (ADS)

    Gassmann, Matthias; Olsson, Oliver; Kümmerer, Klaus

    2013-04-01

    The modelling of organic pollutants in the environment is burdened by a load of uncertainties. Not only are parameter values uncertain, but often so are the mass and timing of pesticide application. Introducing transformation products (TPs) into modelling adds further uncertainty, arising from the dependence of these substances on their parent compounds and from the introduction of new model parameters. The purpose of this study was to investigate the behaviour of a parsimonious catchment-scale model for assessing river concentrations of the insecticide Chlorpyrifos (CP) and two of its TPs, Chlorpyrifos Oxon (CPO) and 3,5,6-trichloro-2-pyridinol (TCP), under the influence of uncertain input parameter values. Parameter uncertainty and pesticide application uncertainty in particular were investigated by Global Sensitivity Analysis (GSA) and the Generalized Likelihood Uncertainty Estimation (GLUE) method, based on Monte-Carlo sampling. GSA revealed that half-lives were correlated with sorption parameters and with transformation parameters, meaning that the concepts used to model sorption and degradation/transformation were themselves correlated. It may therefore be difficult in modelling studies to optimize parameter values for these modules. Furthermore, we could show that erroneous pesticide application mass and timing were compensated for during Monte-Carlo sampling by changes to the half-life of CP. However, introducing TCP into the calculation of the objective function enhanced the identifiability of pesticide application mass. The GLUE analysis showed that CP and TCP were modelled successfully, but CPO modelling failed, with high uncertainty and insensitive parameters. We assumed a structural error of the model which was especially important for CPO assessment. This shows that there is the possibility that a chemical and some of its TPs can be modelled successfully by a specific model structure, but for other TPs, the model
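A minimal GLUE sketch using a toy first-order decay model (not the catchment model of the study): Monte-Carlo sample the half-life, keep "behavioral" parameter sets whose Nash-Sutcliffe efficiency exceeds a threshold, and report prediction bounds from the behavioral ensemble. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(0, 30, 31)                  # time in days
true_half_life = 10.0                       # "unknown" truth (days)
obs = np.exp(-np.log(2) / true_half_life * t) * rng.lognormal(0, 0.05, t.size)

def model(half_life):
    """Toy first-order decay of the compound's river concentration."""
    return np.exp(-np.log(2) / half_life * t)

def nse(sim, observed):
    """Nash-Sutcliffe efficiency, used here as the GLUE likelihood measure."""
    return 1 - np.sum((sim - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Monte-Carlo sampling from a uniform prior on the half-life.
samples = rng.uniform(2.0, 30.0, 5000)
sims = np.array([model(h) for h in samples])
scores = np.array([nse(s, obs) for s in sims])

behavioral = scores > 0.9                   # GLUE behavioral threshold
lo, hi = np.percentile(sims[behavioral], [5, 95], axis=0)
print(f"{int(behavioral.sum())} behavioral sets; "
      f"90% band at day 10: [{lo[10]:.2f}, {hi[10]:.2f}]")
```

In the study the sampled parameter space is much larger (sorption, transformation, and application parameters), but the behavioral-threshold logic is the same.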

  6. Analysis of Terrestrial Planet Formation by the Grand Tack Model: System Architecture and Tack Location

    NASA Astrophysics Data System (ADS)

    Brasser, R.; Matsumura, S.; Ida, S.; Mojzsis, S. J.; Werner, S. C.

    2016-04-01

    The Grand Tack model of terrestrial planet formation has emerged in recent years as the premier scenario used to account for several observed features of the inner solar system. It relies on the early migration of the giant planets to gravitationally sculpt and mix the planetesimal disk down to ˜1 au, after which the terrestrial planets accrete from material remaining in a narrow circumsolar annulus. Here, we investigate how the model fares under a range of initial conditions and migration course-change (“tack”) locations. We run a large number of N-body simulations with tack locations of 1.5 and 2 au and test initial conditions using equal-mass planetary embryos and a semi-analytical approach to oligarchic growth. We make use of a recent model of the protosolar disk that takes into account viscous heating, includes the full effect of type 1 migration, and employs a realistic mass-radius relation for the growing terrestrial planets. Our results show that the canonical tack location of Jupiter at 1.5 au is inconsistent with the most massive planet residing at 1 au at greater than 95% confidence. This favors a tack farther out at 2 au for the disk model and parameters employed. Of the different initial conditions, we find that the oligarchic case is capable of statistically reproducing the orbital architecture and mass distribution of the terrestrial planets, while the equal-mass embryo case is not.

  8. A Kinetic Vlasov Model for Plasma Simulation Using Discontinuous Galerkin Method on Many-Core Architectures

    NASA Astrophysics Data System (ADS)

    Reddell, Noah

    Advances are reported in the three pillars of computational science achieving a new capability for understanding dynamic plasma phenomena outside of local thermodynamic equilibrium. A continuum kinetic model for plasma based on the Vlasov-Maxwell system for multiple particle species is developed. Consideration is added for boundary conditions in a truncated velocity domain and supporting wall interactions. A scheme to scale the velocity domain for multiple particle species with different temperatures and particle mass while sharing one computational mesh is described. A method for assessing the degree to which the kinetic solution differs from a Maxwell-Boltzmann distribution is introduced and tested on a thoroughly studied test case. The discontinuous Galerkin numerical method is extended for efficient solution of hyperbolic conservation laws in five or more particle phase-space dimensions using tensor-product hypercube elements with arbitrary polynomial order. A scheme for velocity moment integration is incorporated as required for coupling between the plasma species and electromagnetic waves. A new high performance simulation code WARPM is developed to efficiently implement the model and numerical method on emerging many-core supercomputing architectures. WARPM uses the OpenCL programming model for computational kernels and task parallelism to overlap computation with communication. WARPM single-node performance and parallel scaling efficiency are analyzed, with bottlenecks identified that guide future directions for the implementation. The plasma modeling capability is validated against physical problems with analytic solutions and well established benchmark problems.

  9. Modeling halotropism: a key role for root tip architecture and reflux loop remodeling in redistributing auxin.

    PubMed

    van den Berg, Thea; Korver, Ruud A; Testerink, Christa; Ten Tusscher, Kirsten H W J

    2016-09-15

    A key characteristic of plant development is its plasticity in response to various and dynamically changing environmental conditions. Tropisms contribute to this flexibility by allowing plant organs to grow from or towards environmental cues. Halotropism is a recently described tropism in which plant roots bend away from salt. During halotropism, as in most other tropisms, directional growth is generated through an asymmetric auxin distribution that generates differences in growth rate and hence induces bending. Here, we develop a detailed model of auxin transport in the Arabidopsis root tip and combine this with experiments to investigate the processes generating auxin asymmetry during halotropism. Our model points to the key role of root tip architecture in allowing the decrease in PIN2 at the salt-exposed side of the root to result in a re-routing of auxin to the opposite side. In addition, our model demonstrates how feedback of auxin on the auxin transporter AUX1 amplifies this auxin asymmetry, while a salt-induced transient increase in PIN1 levels increases the speed at which this occurs. Using AUX1-GFP imaging and pin1 mutants, we experimentally confirmed these model predictions, thus expanding our knowledge of the cellular basis of halotropism. PMID:27510970

  10. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

    One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products which are generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it with the point cloud. The reflection intensity of each type of material was also analyzed, in an attempt to determine which construction materials have the highest and which the lowest reflectance coefficients, and in turn how this variable changes for different scanning parameters. Additionally, measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions with those taken in field conditions.

  11. Modeling halotropism: a key role for root tip architecture and reflux loop remodeling in redistributing auxin

    PubMed Central

    van den Berg, Thea; Korver, Ruud A.; Testerink, Christa

    2016-01-01

    A key characteristic of plant development is its plasticity in response to various and dynamically changing environmental conditions. Tropisms contribute to this flexibility by allowing plant organs to grow from or towards environmental cues. Halotropism is a recently described tropism in which plant roots bend away from salt. During halotropism, as in most other tropisms, directional growth is generated through an asymmetric auxin distribution that generates differences in growth rate and hence induces bending. Here, we develop a detailed model of auxin transport in the Arabidopsis root tip and combine this with experiments to investigate the processes generating auxin asymmetry during halotropism. Our model points to the key role of root tip architecture in allowing the decrease in PIN2 at the salt-exposed side of the root to result in a re-routing of auxin to the opposite side. In addition, our model demonstrates how feedback of auxin on the auxin transporter AUX1 amplifies this auxin asymmetry, while a salt-induced transient increase in PIN1 levels increases the speed at which this occurs. Using AUX1-GFP imaging and pin1 mutants, we experimentally confirmed these model predictions, thus expanding our knowledge of the cellular basis of halotropism. PMID:27510970

  12. Influence of diffusive porosity architecture on kinetically-controlled reactions in mobile-immobile models

    NASA Astrophysics Data System (ADS)

    Babey, T.; Ginn, T. R.; De Dreuzy, J. R.

    2014-12-01

    Solute transport in porous media may be structured at various scales by geological features, from connectivity patterns of pores to fracture networks. This structure impacts solute repartition and consequently reactivity. Here we study numerically the influence of the organization of porous volumes within diffusive porosity zones on different reactions. We couple a mobile-immobile transport model, in which an advective zone exchanges with diffusive zones of variable structure, to the geochemical modeling software PHREEQC. We focus on two kinetically-controlled reactions, a linear sorption and a nonlinear dissolution of a mineral. We show that in both cases the structure of the immobile zones has an important impact on the overall reaction rates. Through the Multi-Rate Mass Transfer (MRMT) framework, we show that this impact is very well captured by residence-time-based models for the kinetic linear sorption, as it is mathematically equivalent to a modification of the initial diffusive structure; consequently, the overall reaction rate could easily be extrapolated from a conservative tracer experiment. The MRMT models, however, struggle to reproduce the non-linearity and the threshold effects associated with the kinetic dissolution. A slower reaction, by allowing more time for diffusion to smooth out the concentration gradients, tends to increase their relevance. Figure: Left: Representation of a mobile-immobile model with a complex immobile architecture. The mobile zone is indicated by an arrow. Right: Total remaining mass of mineral in mobile-immobile models and in their equivalent MRMT models during a flush by a highly under-saturated solution. The models only differ by the organization of their immobile porous volumes.
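A minimal mobile-immobile sketch in the MRMT spirit (illustrative parameters, not those of the study): one mobile concentration exchanges linearly with several immobile zones, each with its own rate and capacity, producing the long late-time tail that a single-rate model cannot.

```python
import numpy as np

alphas = np.array([1.0, 0.1, 0.01])   # exchange rates, one per immobile zone
betas = np.array([0.5, 0.3, 0.2])     # immobile/mobile capacity ratios
q = 0.2                               # flushing (dilution) rate of mobile zone

def flush(t_end=50.0, dt=1e-3):
    """Flush an initially saturated system with clean water (explicit Euler)."""
    cm, ci = 1.0, np.ones_like(alphas)
    history = []
    for _ in range(int(t_end / dt)):
        ex = alphas * (cm - ci)                  # first-order exchange fluxes
        cm += dt * (-q * cm - np.sum(betas * ex))
        ci += dt * ex
        history.append(cm)
    return np.array(history)

c = flush()
# The slow zones keep releasing mass, so the late-time tail decays far more
# slowly than the mobile flushing rate alone (exp(-q*t)) would predict.
print(f"c(t=50) = {c[-1]:.4f}  vs  exp(-q*50) = {np.exp(-0.2 * 50):.2e}")
```

In the MRMT literature the rates and capacities are derived from the geometry of the diffusive zones; here they are simply assumed.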

  13. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models.

    PubMed

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L; Huffman, Jennifer E; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F; Wilson, James F; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S

    2015-07-15

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL, sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge.
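The ensemble idea can be sketched with simulated data (standing in for a whole-genome predictor and a GWAMA risk score; this is not the authors' pipeline): learn stacking weights on a training half and compare accuracies on the held-out half.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 2000
y = rng.normal(size=n)                    # phenotype (standardized)
pred_a = y + rng.normal(0, 0.8, n)        # stand-in: whole-genome predictor
pred_b = y + rng.normal(0, 1.0, n)        # stand-in: GWAMA polygenic score

train, test = slice(0, 1000), slice(1000, None)

# Least-squares stacking weights fitted on the training half only.
X = np.column_stack([pred_a[train], pred_b[train]])
w, *_ = np.linalg.lstsq(X, y[train], rcond=None)
meta = w[0] * pred_a + w[1] * pred_b

def r2(pred, truth):
    """Squared correlation, a standard prediction-accuracy measure."""
    return float(np.corrcoef(pred, truth)[0, 1] ** 2)

for name, p in [("A alone", pred_a), ("B alone", pred_b), ("meta", meta)]:
    print(f"{name}: R^2 = {r2(p[test], y[test]):.3f}")
```

Because the two base predictors carry independent errors, the learned combination recovers more of the signal than either one alone, which is the mechanism the abstract's meta-model exploits.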

  14. Internet of Things: a possible change in the distributed modeling and simulation architecture paradigm

    NASA Astrophysics Data System (ADS)

    Riecken, Mark; Lessmann, Kurt; Schillero, David

    2016-05-01

    The Data Distribution Service (DDS) was started by the Object Management Group (OMG) in 2004. Currently, DDS is one of the contenders to support the Internet of Things (IoT) and the Industrial IoT (IIoT). DDS has also been used as a distributed simulation architecture. Given the anticipated proliferation of IoT and IIoT devices, along with the explosive growth of sensor technology, can we expect this to have an impact on the broader community of distributed simulation? If it does, what is the impact, and which distributed simulation domains will be most affected? DDS shares many of the goals and characteristics of distributed simulation, such as the need to support scale and an emphasis on Quality of Service (QoS) that can be tailored to meet the end user's needs. In addition, DDS has some built-in features, such as security, that are not present in traditional distributed simulation protocols. If the IoT and IIoT realize their potential, we predict a large base of technology will be built around this distributed data paradigm, much of which could directly benefit the distributed M&S community. In this paper we compare some of the perceived gaps and shortfalls of current distributed M&S technology to the emerging capabilities of DDS built around the IoT. Although some trial work has been conducted in this area, we propose a more focused examination of the potential of these new technologies and their applicability to current and future problems in distributed M&S. The Internet of Things (IoT) and its data communications mechanisms such as the Data Distribution Service (DDS) share properties in common with distributed modeling and simulation (M&S) and its protocols such as the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA).
This paper proposes a framework based on the sensor use case for how the two communities of practice (CoP) can benefit from one another and achieve greater capability in practical distributed

  15. Functional mapping of quantitative trait loci underlying growth trajectories using a transform-both-sides logistic model.

    PubMed

    Wu, Rongling; Ma, Chang-Xing; Lin, Min; Wang, Zuoheng; Casella, George

    2004-09-01

    The incorporation of developmental control mechanisms of growth has proven to be a powerful tool in mapping quantitative trait loci (QTL) underlying growth trajectories. A theoretical framework for implementing a QTL mapping strategy with growth laws has been established. This framework can be generalized to an arbitrary number of time points, where growth is measured, and becomes computationally more tractable, when the assumption of variance stationarity is made. In practice, however, this assumption is likely to be violated for age-specific growth traits due to a scale effect. In this article, we present a new statistical model for mapping growth QTL, which also addresses the problem of variance stationarity, by using a transform-both-sides (TBS) model advocated by Carroll and Ruppert (1984, Journal of the American Statistical Association 79, 321-328). The TBS-based model for mapping growth QTL can not only maintain the original biological properties of a growth model, but can also increase the accuracy and precision of parameter estimation and the power to detect a QTL responsible for growth differentiation. Using the TBS-based model, we successfully map a QTL governing growth trajectories to a linkage group in an example of forest trees. The statistical and biological properties of the estimates of this growth QTL position and effect are investigated using Monte Carlo simulation studies. The implications of our model for understanding the genetic architecture of growth are discussed.
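A toy version of the transform-both-sides idea (synthetic data, not the forest-tree dataset): when the error is multiplicative, fitting the logistic growth law after log-transforming both the observations and the model stabilizes the variance without changing the meaning of the growth parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def logistic(t, a, b, r):
    """Logistic growth law: asymptote a, shape b, intrinsic rate r."""
    return a / (1 + b * np.exp(-r * t))

ages = np.arange(1, 16)                        # measurement ages (e.g. years)
obs = logistic(ages, 30.0, 20.0, 0.5) * rng.lognormal(0, 0.05, ages.size)

# TBS objective: apply the same transform (log, i.e. Box-Cox lambda = 0)
# to both the data and the model, so a scale-type error becomes homoscedastic.
def tbs_sse(a, r, b=20.0):                     # b held fixed for brevity
    return float(np.sum((np.log(obs) - np.log(logistic(ages, a, b, r))) ** 2))

# Brute-force grid search over the two free parameters.
grid = [(a, r) for a in np.linspace(20, 40, 41)
               for r in np.linspace(0.2, 0.8, 61)]
a_hat, r_hat = min(grid, key=lambda p: tbs_sse(*p))
print(f"a = {a_hat:.1f}, r = {r_hat:.2f}")
```

The paper estimates all growth parameters jointly within a QTL mixture likelihood and treats the Box-Cox exponent itself as a parameter; the grid search here only illustrates the transformed objective.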

  16. Ivory Coast-Ghana margin: model of a transform margin

    SciTech Connect

    Mascle, J.; Blarez, E.

    1987-05-01

    The authors present a marine study of the eastern Ivory Coast-Ghana continental margin, which they consider one of the most spectacular extinct transform margins. This margin was created during Early-Lower Cretaceous time and has not undergone any major geodynamic reactivation since its formation. Based on this example, they propose four main, successive stages in the evolution of a transform margin. Shearing contact is first active between two probably thick continental crusts, and then between progressively thinning continental crusts. This leads to the creation of specific geological structures such as pull-apart grabens, elongated fault lineaments, major fault scarps, shear folds, and marginal ridges. After the final continental breakup, a hot center (the mid-oceanic ridge axis) progressively drifts along the newly created margin. The contact between two lithospheres of different nature should necessarily induce, by thermal exchange, vertical crustal readjustments. Finally, the transform margin remains directly adjacent to a hot but cooling oceanic lithosphere; its subsidence behavior should then progressively become comparable to the thermal subsidence of classic rifted margins.

  17. 4D/RCS: a reference model architecture for intelligent unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Albus, James S.

    2002-07-01

    4D/RCS consists of a multi-layered multi-resolutional hierarchy of computational nodes each containing elements of sensory processing (SP), world modeling (WM), value judgment (VJ), and behavior generation (BG). At the lower levels, these elements generate goal-seeking reactive behavior. At higher levels, they enable goal-defining deliberative behavior. At low levels, range in space and time is short and resolution is high. At high levels, distance and time are long and resolution is low. This enables high-precision fast-action response over short intervals of time and space at low levels, while long-range plans and abstract concepts are being formulated over broad regions of time and space at high levels. 4D/RCS closes feedback loops at every level. SP processes focus attention (i.e., window regions of space or time), group (i.e., segment regions into entities), compute entity attributes, estimate entity state, and assign entities to classes at every level. WM processes maintain a rich and dynamic database of knowledge about the world in the form of images, maps, entities, events, and relationships at every level. Other WM processes use that knowledge to generate estimates and predictions that support perception, reasoning, and planning at every level. 4D/RCS was developed for the Army Research Laboratory Demo III program. To date, only the lower levels of the 4D/RCS architecture have been fully implemented, but the results have been extremely positive. It seems clear that the theoretical basis of 4D/RCS is sound and the architecture is capable of being extended to support much higher levels of performance.

  18. How Plates Pull Transforms Apart: 3-D Numerical Models of Oceanic Transform Fault Response to Changes in Plate Motion Direction

    NASA Astrophysics Data System (ADS)

    Morrow, T. A.; Mittelstaedt, E. L.; Olive, J. A. L.

    2015-12-01

    Observations along oceanic fracture zones suggest that some mid-ocean ridge transform faults (TFs) previously split into multiple strike-slip segments separated by short (<~50 km) intra-transform spreading centers and then reunited to a single TF trace. This history of segmentation appears to correspond with changes in plate motion direction. Despite the clear evidence of TF segmentation, the processes governing its development and evolution are not well characterized. Here we use a 3-D, finite-difference / marker-in-cell technique to model the evolution of localized strain at a TF subjected to a sudden change in plate motion direction. We simulate the oceanic lithosphere and underlying asthenosphere at a ridge-transform-ridge setting using a visco-elastic-plastic rheology with a history-dependent plastic weakening law and a temperature- and stress-dependent mantle viscosity. To simulate the development of topography, a low density, low viscosity 'sticky air' layer is present above the oceanic lithosphere. The initial thermal gradient follows a half-space cooling solution with an offset across the TF. We impose an enhanced thermal diffusivity in the uppermost 6 km of lithosphere to simulate the effects of hydrothermal circulation. An initial weak seed in the lithosphere helps localize shear deformation between the two offset ridge axes to form a TF. For each model case, the simulation is run initially with TF-parallel plate motion until the thermal structure reaches a steady state. The direction of plate motion is then rotated either instantaneously or over a specified time period, placing the TF in a state of trans-tension. Model runs continue until the system reaches a new steady state. Parameters varied here include: initial TF length, spreading rate, and the rotation rate and magnitude of spreading obliquity. We compare our model predictions to structural observations at existing TFs and records of TF segmentation preserved in oceanic fracture zones.

  19. An efficient non linear transformer model and its application to ferroresonance study

    SciTech Connect

    Tran-Quoc, T.; Pierrat, L.

    1995-05-01

    This paper presents a new method for determining the instantaneous magnetization characteristics of transformers (saturation and hysteresis loop), taking into account only the rms values and no-load losses. The model is used to accurately study ferroresonance phenomena in a cable-connected transformer system.

  20. Designing a Component-Based Architecture for the Modeling and Simulation of Nuclear Fuels and Reactors

    SciTech Connect

    Billings, Jay Jay; Elwasif, Wael R; Hively, Lee M; Bernholdt, David E; Hetrick III, John M; Bohn, Tim T

    2009-01-01

    Concerns over the environment and energy security have recently prompted renewed interest in the U.S. in nuclear energy. Recognizing this, the U.S. Dept. of Energy has launched an initiative to revamp and modernize the role that modeling and simulation plays in the development and operation of nuclear facilities. This Nuclear Energy Advanced Modeling and Simulation (NEAMS) program represents a major investment in the development of new software, with one or more large multi-scale multi-physics capabilities in each of four technical areas associated with the nuclear fuel cycle, as well as additional supporting developments. In conjunction with this, we are designing a software architecture, computational environment, and component framework to integrate the NEAMS technical capabilities and make them more accessible to users. In this report of work very much in progress, we lay out the 'problem' we are addressing, describe the model-driven system design approach we are using, and compare them with several large-scale technical software initiatives from the past. We discuss how component technology may be uniquely positioned to address the software integration challenges of the NEAMS program, outline the capabilities planned for the NEAMS computational environment and framework, and describe some initial prototyping activities.

  1. GS3: A Knowledge Management Architecture for Collaborative Geologic Sequestration Modeling

    SciTech Connect

    Gorton, Ian; Black, Gary D.; Schuchardt, Karen L.; Sivaramakrishnan, Chandrika; Wurstner, Signe K.; Hui, Peter SY

    2010-01-10

    Modern scientific enterprises are inherently knowledge-intensive. In general, scientific studies in domains such as groundwater, climate, and other environmental modeling as well as fundamental research in chemistry, physics, and biology require the acquisition and manipulation of large amounts of experimental and field data in order to create inputs for large-scale computational simulations. The results of these simulations must then be analyzed, leading to refinements of inputs and models and further simulations. In this paper we describe our efforts in creating a knowledge management platform to support collaborative, wide-scale studies in the area of geologic sequestration. The platform, known as GS3 (Geologic Sequestration Software Suite), exploits and integrates off-the-shelf software components including semantic wikis, content management systems and open source middleware to create the core architecture. We then extend the wiki environment to support the capture of provenance, the ability to incorporate various analysis tools, and the ability to launch simulations on supercomputers. The paper describes the key components of GS3 and demonstrates its use through illustrative examples. We conclude by assessing the suitability of our approach for geologic sequestration modeling and generalization to other scientific problem domains.

  2. Rheology and friction along the Vema transform fault (Central Atlantic) inferred by thermal modeling

    NASA Astrophysics Data System (ADS)

    Cuffaro, Marco; Ligi, Marco

    2016-04-01

    We investigate with 3-D finite element simulations the temperature distribution beneath the Vema transform, which offsets the Mid-Atlantic Ridge by ~300 km in the Central Atlantic. The developed thermal model includes the effects of mantle flow beneath a ridge-transform-ridge geometry, of lateral heat conduction across the transform fault, and of the shear heating generated along the fault. Numerical solutions are presented for a 3-D domain, discretized with a non-uniform tetrahedral mesh, where relative plate kinematics is used as a boundary condition, providing passive mantle upwelling. The mantle is modelled as a temperature-dependent viscous fluid, and its dynamics can be described by the Stokes and advection-conduction heat equations. The results show that shear heating raises the temperature along the transform fault significantly. In order to test model results, we calculated the thermal structure simulating the mantle dynamics beneath an accretionary plate boundary geometry that duplicates the Vema transform fault, assuming the present-day spreading rate and direction of the Mid-Atlantic Ridge at 11 °N. The modelled heat flow at the surface was then compared with 23 heat flow measurements carried out along the Vema Transform valley. Laboratory studies on the frictional stability of olivine aggregates show that the depth extent of oceanic faulting is thermally controlled and limited by the 600 °C isotherm. The depths of the model isotherms were compared to the depths of earthquakes along transform faults. Slip on oceanic transform faults is primarily aseismic; only 15% of the tectonic offset is accommodated by earthquakes. Despite extensive fault areas, few large earthquakes occur on the fault and few aftershocks follow large events. Rheology constrained by the thermal model, combined with the geology and seismicity of the Vema Transform fault, allows a better understanding of friction and of the spatial distribution of strength along the fault and provides
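The 600 °C limit mentioned above can be illustrated with the standard half-space cooling solution (textbook parameter values, not the calibrated Vema model):

```python
import math

Ts, Tm = 0.0, 1300.0        # surface / mantle temperature (deg C), assumed
kappa = 1e-6                # thermal diffusivity (m^2/s), a common value

def isotherm_depth_km(T_iso, age_myr):
    """Depth of the T_iso isotherm under half-space cooling,
    T(z, t) = Ts + (Tm - Ts) * erf(z / (2 * sqrt(kappa * t)))."""
    t = age_myr * 1e6 * 365.25 * 24 * 3600      # age in seconds
    target = (T_iso - Ts) / (Tm - Ts)
    lo, hi = 0.0, 200e3                          # depth bracket in metres
    for _ in range(60):                          # bisection on monotone erf
        mid = 0.5 * (lo + hi)
        if math.erf(mid / (2 * math.sqrt(kappa * t))) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / 1e3

for age in (1, 10, 50):
    print(f"600 C isotherm at {age:3d} Myr: {isotherm_depth_km(600, age):5.1f} km")
```

The isotherm deepens with the square root of lithospheric age, which is why the seismogenic thickness of a transform fault grows with distance from the ridge axis.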

  3. A Model for the Epigenetic Switch Linking Inflammation to Cell Transformation: Deterministic and Stochastic Approaches

    PubMed Central

    Gérard, Claude; Gonze, Didier; Lemaigre, Frédéric; Novák, Béla

    2014-01-01

    Recently, a molecular pathway linking inflammation to cell transformation has been discovered. This pathway rests on a positive inflammatory feedback loop between NF-κB, Lin28, Let-7 microRNA and IL6, which leads to an epigenetic switch allowing cell transformation. A transient activation of an inflammatory signal, mediated by the oncoprotein Src, activates NF-κB, which elicits the expression of Lin28. Lin28 decreases the expression of Let-7 microRNA, which results in a higher level of IL6 than is achieved directly by NF-κB. In turn, IL6 can promote NF-κB activation. Finally, IL6 also elicits the synthesis of STAT3, which is a crucial activator for cell transformation. Here, we propose a computational model to account for the dynamical behavior of this positive inflammatory feedback loop. By means of a deterministic model, we show that an irreversible bistable switch between a transformed and a non-transformed state of the cell is at the core of the dynamical behavior of the positive feedback loop linking inflammation to cell transformation. The model indicates that inhibitors (tumor suppressors) or activators (oncogenes) of this positive feedback loop regulate the occurrence of the epigenetic switch by modulating the threshold of inflammatory signal (Src) needed to promote cell transformation. Both stochastic simulations and deterministic simulations of a heterogeneous cell population suggest that random fluctuations (due to molecular noise or cell-to-cell variability) are able to trigger cell transformation. Moreover, the model predicts that oncogenes and tumor suppressors respectively decrease and increase the robustness of the non-transformed state of the cell towards random fluctuations. Finally, the model accounts for the potential effect of competing endogenous RNAs, ceRNAs, on the dynamics of the epigenetic switch. Depending on their microRNA targets, the model predicts that ceRNAs could act as oncogenes or tumor suppressors by regulating the occurrence of
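
    The irreversible bistable switch described above can be sketched with a toy one-variable positive-feedback model. The rate constants are hypothetical, not the parameters of the published NF-κB/Lin28/Let-7/IL6 model; the point is only that a transient Src-like input flips the system into a high-activity state that persists after the stimulus is removed:

```python
# Toy irreversible bistable switch: x is the activity of the positive feedback
# loop, src the transient Src-like inflammatory input. Hypothetical constants.

def simulate(src_pulse, t_end=200.0, dt=0.01):
    """Euler integration of dx/dt = b + v*x^4/(k^4 + x^4) + src(t) - d*x."""
    b, v, k, d = 0.05, 1.0, 0.5, 1.0
    x = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        src = src_pulse if t < 20.0 else 0.0   # stimulus removed at t = 20
        feedback = v * x**4 / (k**4 + x**4)    # cooperative positive feedback
        x += dt * (b + feedback + src - d * x)
    return x  # state long after the stimulus is gone

low = simulate(src_pulse=0.0)    # never stimulated: non-transformed state
high = simulate(src_pulse=0.5)   # transient stimulus: switches and stays high
```

    Because the high fixed point is maintained by the feedback term alone, the switch does not reset when `src` returns to zero, mirroring the epigenetic irreversibility discussed in the abstract.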

  4. Bäcklund transformations for the elliptic Gaudin model and a Clebsch system

    NASA Astrophysics Data System (ADS)

    Zullo, Federico

    2011-07-01

    A two-parameter family of Bäcklund transformations for the classical elliptic Gaudin model is constructed. The maps are explicit, symplectic, preserve the same integrals as the continuous flows, and provide a time discretization of each of these flows. The transformations can map real variables into real variables, sending physical solutions of the equations of motion into physical solutions. The starting point of the analysis is the integrability structure of the model. It is shown how the analogous transformations for the rational and trigonometric Gaudin models arise as limiting cases of this one. An application to a particular case of the Clebsch system is given.

  5. Effects of Practice on Task Architecture: Combined Evidence from Interference Experiments and Random-Walk Models of Decision Making

    ERIC Educational Resources Information Center

    Kamienkowski, Juan E.; Pashler, Harold; Dehaene, Stanislas; Sigman, Mariano

    2011-01-01

    Does extensive practice reduce or eliminate central interference in dual-task processing? We explored the reorganization of task architecture with practice by combining interference analysis (delays in dual-task experiment) and random-walk models of decision making (measuring the decision and non-decision contributions to RT). The main delay…

  6. Change in the Pathologic Supraspinatus: A Three-Dimensional Model of Fiber Bundle Architecture within Anterior and Posterior Regions

    PubMed Central

    Kim, Soo Y.; Sachdeva, Rohit; Li, Zi; Lee, Dongwoon; Rosser, Benjamin W. C.

    2015-01-01

    Supraspinatus tendon tears are common and lead to changes in the muscle architecture. To date, these changes have not been investigated for the distinct regions and parts of the pathologic supraspinatus. The purpose of this study was to create a novel three-dimensional (3D) model of the muscle architecture throughout the supraspinatus and to compare the architecture between muscle regions and parts in relation to tear severity. Twelve cadaveric specimens with varying degrees of tendon tears were used. Three-dimensional coordinates of fiber bundles were collected in situ using serial dissection and digitization. Data were reconstructed and modeled in 3D using Maya. Fiber bundle length (FBL) and pennation angle (PA) were computed and analyzed. FBL was significantly shorter in specimens with large retracted tears compared to smaller tears, with the deeper fibers being significantly shorter than other parts in the anterior region. PA was significantly greater in specimens with large retracted tears, with the superficial fibers often demonstrating the largest PA. The posterior region was absent in two specimens with extensive tears. Architectural changes associated with tendon tears affect the regions and varying depths of supraspinatus differently. The results provide important insights on residual function of the pathologic muscle, and the 3D model includes detailed data that can be used in future modeling studies. PMID:26413533
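
    The two architectural measures analyzed above can be computed directly from digitized 3-D fiber coordinates: fiber bundle length (FBL) as the summed length of the digitized polyline, and pennation angle (PA) as the angle between the fiber's end-to-end direction and the tendon's line of action. A minimal sketch with illustrative coordinates, not study data:

```python
import numpy as np

# FBL and PA from digitized fiber bundle points (N x 3 arrays, units in mm).
# The sample fiber and tendon axis below are invented for illustration.

def fiber_bundle_length(points):
    """Sum of segment lengths along a digitized fiber bundle."""
    pts = np.asarray(points, dtype=float)
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

def pennation_angle(points, tendon_axis):
    """Angle in degrees between the fiber chord and the tendon axis."""
    pts = np.asarray(points, dtype=float)
    chord = pts[-1] - pts[0]
    axis = np.asarray(tendon_axis, dtype=float)
    cosang = np.dot(chord, axis) / (np.linalg.norm(chord) * np.linalg.norm(axis))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

fiber = [[0.0, 0.0, 0.0], [10.0, 5.0, 0.0], [20.0, 10.0, 0.0]]  # mm
fbl = fiber_bundle_length(fiber)
pa = pennation_angle(fiber, tendon_axis=[1.0, 0.0, 0.0])
```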

  7. Semantic Web-Driven LMS Architecture towards a Holistic Learning Process Model Focused on Personalization

    ERIC Educational Resources Information Center

    Kerkiri, Tania

    2010-01-01

    A comprehensive presentation is here made on the modular architecture of an e-learning platform with a distinctive emphasis on content personalization, combining advantages from semantic web technology, collaborative filtering and recommendation systems. Modules of this architecture handle information about both the domain-specific didactic…

  8. An Approach for Detecting Inconsistencies between Behavioral Models of the Software Architecture and the Code

    SciTech Connect

    Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir

    2012-07-16

    In practice, inconsistencies between architectural documentation and the code might arise due to improper implementation of the architecture or the separate, uncontrolled evolution of the code. Several approaches have been proposed to detect the inconsistencies between the architecture and the code but these tend to be limited for capturing inconsistencies that might occur at runtime. We present a runtime verification approach for detecting inconsistencies between the dynamic behavior of the architecture and the actual code. The approach is supported by a set of tools that implement the architecture and the code patterns in Prolog, and support the automatic generation of runtime monitors for detecting inconsistencies. We illustrate the approach and the toolset for a Crisis Management System case study.

  9. Phase field modeling of tetragonal to monoclinic phase transformation in zirconia

    NASA Astrophysics Data System (ADS)

    Mamivand, Mahmood

    Zirconia-based ceramics are strong, hard, inert, and smooth, with low thermal conductivity and good biocompatibility. Such properties make zirconia ceramics an ideal material for applications ranging from thermal barrier coatings (TBCs) to biomedical applications like femoral implants and dental bridges. However, these excellent properties are compromised by the transformation of the metastable tetragonal (or cubic) phase to the stable monoclinic phase after a certain exposure time at service temperatures. This transformation from tetragonal to monoclinic, known as low temperature degradation (LTD) in biomedical applications, proceeds by the propagation of martensite, which corresponds to transformation twinning. As such, the tetragonal-to-monoclinic transformation is highly sensitive to mechanical and chemomechanical stresses. It is in fact known that this transformation is the source of fracture toughening in stabilized zirconia, as it occurs in the stress concentration regions ahead of the crack tip. This dissertation is an attempt to provide a kinetics-based model for the tetragonal-to-monoclinic transformation in zirconia. We used the phase field technique to capture the temporal and spatial evolution of the monoclinic phase. In addition to morphological patterns, we were able to calculate the internal stresses developed during the tetragonal-to-monoclinic transformation. The model started from the two-dimensional single crystal, was then expanded to the two-dimensional polycrystal, and finally to the three-dimensional single crystal. The model is able to predict most physical properties associated with the tetragonal-to-monoclinic transformation in zirconia, including morphological patterns, transformation toughening, shape memory effect, pseudoelasticity, surface uplift, and variant impingement. The model was benchmarked against several experimental studies. The good agreement between simulation results and experimental data makes the model a reliable tool for
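
    The kind of kinetics a phase-field model captures can be illustrated by a minimal 1-D Allen-Cahn sketch with a generic double-well free energy and illustrative coefficients; the dissertation's actual model additionally couples the order parameter to microelasticity in 2-D and 3-D:

```python
import numpy as np

# 1-D Allen-Cahn sketch: eta = 0 is the parent (tetragonal) phase, eta = 1 the
# product (monoclinic) phase. Coefficients are illustrative, not fitted.

def evolve(eta, steps=2000, dt=0.01, dx=1.0, mobility=1.0, kappa=2.0, w=1.0):
    """Explicit Euler for d(eta)/dt = -M * (df/deta - kappa * laplacian(eta))."""
    for _ in range(steps):
        lap = (np.roll(eta, 1) + np.roll(eta, -1) - 2.0 * eta) / dx**2
        # Derivative of the double well f = w * eta^2 * (1 - eta)^2, tilted so
        # that the monoclinic well has the lower free energy.
        dfdeta = 2.0 * w * eta * (1 - eta) * (1 - 2 * eta) - 0.3 * eta * (1 - eta)
        eta = eta + dt * (-mobility * (dfdeta - kappa * lap))
    return eta

# Seed a small monoclinic nucleus in a tetragonal matrix and let it grow.
eta0 = np.zeros(200)
eta0[95:105] = 1.0
eta_final = evolve(eta0.copy())
```

    Because the tilt makes the monoclinic well energetically favored, the interfaces propagate outward and the product phase grows at the expense of the parent, the 1-D analogue of the plate growth the dissertation resolves in higher dimensions.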

  10. Model-driven methodology for rapid deployment of smart spaces based on resource-oriented architectures.

    PubMed

    Corredor, Iván; Bernardos, Ana M; Iglesias, Josué; Casar, José R

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have a solid understanding of specific platforms and web technologies. To ease this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims to enable the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.

  11. Application and project portfolio valuation using enterprise architecture and business requirements modelling

    NASA Astrophysics Data System (ADS)

    Quartel, Dick; Steen, Maarten W. A.; Lankhorst, Marc M.

    2012-05-01

    This article describes an architecture-based approach to IT valuation. This approach offers organisations an instrument to valuate their application and project portfolios and to make well-balanced decisions about IT investments. The value of a software application is assessed in terms of its contribution to a selection of business goals. Based on such assessments, the value of different applications can be compared, and requirements for innovation, development, maintenance and phasing out can be identified. IT projects are proposed to realise the requirements. The value of each project is assessed in terms of the value it adds to one or more applications. This value can be obtained by relating the 'as-is' application portfolio to the 'to-be' portfolio that is being proposed by the project portfolio. In this way, projects can be ranked according to their added value, given a certain selection of business goals. The approach uses ArchiMate to model the relationship between software applications, business processes, services and products. In addition, two language extensions are used to model the relationship of these elements to business goals and requirements and to projects and project portfolios. The approach is illustrated using the portfolio method of Bedell and has been implemented in BiZZdesign Architect.

  12. Model-Driven Methodology for Rapid Deployment of Smart Spaces Based on Resource-Oriented Architectures

    PubMed Central

    Corredor, Iván; Bernardos, Ana M.; Iglesias, Josué; Casar, José R.

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have a solid understanding of specific platforms and web technologies. To ease this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims to enable the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym. PMID:23012544

  13. Model Transformation for a System of Systems Dependability Safety Case

    NASA Technical Reports Server (NTRS)

    Murphy, Judy; Driskell, Stephen B.

    2010-01-01

    Software plays an increasingly large role in all aspects of NASA's science missions. This has been extended to the identification, management and control of faults which affect safety-critical functions and, by default, the overall success of the mission. Traditionally, the analysis of fault identification, management and control is hardware based. Due to the increasing complexity of systems, there has been a corresponding increase in the complexity of fault management software. The NASA Independent Verification and Validation (IV&V) program is creating processes and procedures to identify and incorporate safety-critical software requirements, along with corresponding software faults, so that potential hazards may be mitigated. This "Specific to Generic ... A Case for Reuse" paper describes the phases of a dependability and safety study which identifies a new process to create a foundation for reusable assets. These assets support the identification and management of specific software faults and their transformation from specific to generic software faults. This approach also has applications to systems outside of the NASA environment. This paper addresses how a mission-specific dependability and safety case is being transformed to a generic dependability and safety case which can be reused for any type of space mission, with an emphasis on software fault conditions.

  14. Performance of linear and nonlinear texture measures in 2D and 3D for monitoring architectural changes in osteoporosis using computer-generated models of trabecular bone

    NASA Astrophysics Data System (ADS)

    Boehm, Holger F.; Link, Thomas M.; Monetti, Roberto A.; Mueller, Dirk; Rummeny, Ernst J.; Raeth, Christoph W.

    2005-04-01

    Osteoporosis is a metabolic bone disease leading to demineralization and increased risk of fracture. The two major factors that determine the biomechanical competence of bone are the degree of mineralization and the micro-architectural integrity. Today, modern imaging modalities (high resolution MRI, micro-CT) are capable of depicting structural details of trabecular bone tissue. From the image data, structural properties obtained by quantitative measures are analysed with respect to the presence of osteoporotic fractures of the spine (in vivo) or correlated with biomechanical strength as derived from destructive testing (in vitro). Fairly well established are linear structural measures in 2D that were originally adopted from standard histo-morphometry. Recently, non-linear techniques in 2D and 3D based on the scaling index method (SIM), the standard Hough transform (SHT), and the Minkowski Functionals (MF) have been introduced, which show excellent performance in predicting bone strength and fracture risk. However, little is known about the performance of the various parameters with respect to monitoring structural changes due to progression of osteoporosis or as a result of medical treatment. In this contribution, we generate models of trabecular bone with pre-defined structural properties which are exposed to simulated osteoclastic activity. We apply linear and non-linear texture measures to the models and analyse their performance with respect to detecting architectural changes. This study demonstrates that the texture measures are capable of monitoring structural changes of complex model data. The diagnostic potential varies for the different parameters and is found to depend on the topological composition of the model and initial "bone density". In our models, non-linear texture measures tend to react more sensitively to small structural changes than linear measures. 
Best performance is observed for the 3rd and 4th Minkowski Functionals and for the scaling
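
    The 2-D Minkowski functionals mentioned above (area, perimeter, Euler characteristic) can be computed for a binary structure image from counts of the faces, edges and vertices of the pixel complex. This is a generic construction, not the study's specific implementation:

```python
import numpy as np

# Minkowski functionals of a binary image via the pixel complex:
# area = face count, Euler characteristic = V - E + F, perimeter = number of
# edges touching exactly one set pixel.

def minkowski_2d(img):
    img = np.asarray(img, dtype=bool)
    p = np.pad(img, 1)                       # surround with background
    faces = int(img.sum())
    # An edge belongs to the set if at least one adjacent pixel is set.
    n_edges = int(np.logical_or(p[:, 1:], p[:, :-1]).sum()
                  + np.logical_or(p[1:, :], p[:-1, :]).sum())
    # A lattice vertex belongs if any of its four surrounding pixels is set.
    n_verts = int((p[1:, 1:] | p[1:, :-1] | p[:-1, 1:] | p[:-1, :-1]).sum())
    # Boundary edges touch exactly one set pixel.
    perimeter = int(np.logical_xor(p[:, 1:], p[:, :-1]).sum()
                    + np.logical_xor(p[1:, :], p[:-1, :]).sum())
    euler = n_verts - n_edges + faces
    return faces, perimeter, euler

ring = np.ones((3, 3), dtype=bool)
ring[1, 1] = False                           # one hole
area, perim, euler = minkowski_2d(ring)
```

    For the hollow 3x3 test pattern this returns (8, 16, 0): eight pixels, an outer plus inner boundary totalling 16 edge lengths, and Euler characteristic 0 because of the single hole.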

  15. B-Transform and Its Application to a Fish-Hyacinth Model

    ERIC Educational Resources Information Center

    Oyelami, B. O.; Ale, S. O.

    2002-01-01

    A new transform proposed by Oyelami and Ale for impulsive systems is applied to an impulsive fish-hyacinth model. A biological policy regarding the growth of the fish and the hyacinth populations is formulated.

  16. Quantitative structure-activity relationship models of chemical transformations from matched pairs analyses.

    PubMed

    Beck, Jeremy M; Springer, Clayton

    2014-04-28

    The concepts of activity cliffs and matched molecular pairs (MMP) are recent paradigms for analysis of data sets to identify structural changes that may be used to modify the potency of lead molecules in drug discovery projects. Analysis of MMPs was recently demonstrated as a feasible technique for quantitative structure-activity relationship (QSAR) modeling of prospective compounds. However, within a small data set, the lack of matched pairs and the lack of knowledge about specific chemical transformations limit prospective applications. Here we present an alternative technique that determines pairwise descriptors for each matched pair and then uses a QSAR model to estimate the activity change associated with a chemical transformation. The descriptors effectively group similar transformations and incorporate information about the transformation and its local environment. Use of a transformation QSAR model allows one to estimate the activity change for novel transformations and therefore returns predictions for a larger fraction of test set compounds. Application of the proposed methodology to four public data sets results in increased model performance over a benchmark random forest and direct application of chemical transformations using QSAR-by-matched molecular pairs analysis (QSAR-by-MMPA).
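
    The baseline the authors compare against, direct QSAR-by-MMPA, can be caricatured as averaging observed activity changes per transformation class. The compounds, transformation labels, and pIC50 deltas below are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Toy QSAR-by-MMPA baseline: group observed activity changes by chemical
# transformation, predict the change for a new pair from its class average.
# All values are invented; a real workflow would derive pairs from structures.

observed_pairs = [
    # (transformation, delta_pIC50)
    ("H>>F", 0.30), ("H>>F", 0.22), ("H>>F", 0.35),
    ("H>>OMe", -0.41), ("H>>OMe", -0.38),
]

by_transform = defaultdict(list)
for transform, delta in observed_pairs:
    by_transform[transform].append(delta)

def predict_delta(transform):
    """Mean observed activity change for a known transformation, else 0."""
    deltas = by_transform.get(transform)
    return mean(deltas) if deltas else 0.0

pred = predict_delta("H>>F")
```

    The limitation the abstract targets is visible here: an unseen transformation (no matched pairs in the training set) gets no informed prediction, which is what the pairwise-descriptor QSAR model is designed to fix.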

  17. L1 Adaptive Control Law in Support of Large Flight Envelope Modeling Work

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.; Xargay, Enric; Cao, Chengyu; Hovakimyan, Naira

    2011-01-01

    This paper presents results of a flight test of the L1 adaptive control architecture designed to directly compensate for significant uncertain cross-coupling in nonlinear systems. The flight test was conducted on the subscale turbine powered Generic Transport Model that is an integral part of the Airborne Subscale Transport Aircraft Research system at the NASA Langley Research Center. The results presented are in support of nonlinear aerodynamic modeling and instrumentation calibration.

  18. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components. 
It also seems to support Marr's notions

  19. Modeling and optimization of multiple unmanned aerial vehicles system architecture alternatives.

    PubMed

    Qin, Dongliang; Li, Zhifei; Yang, Feng; Wang, Weiping; He, Lei

    2014-01-01

    Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although to a very limited extent. Confronted with different types of tasks, multiple UAVs usually need to be coordinated, which can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. The corresponding optimization problem is then formulated, and an efficient genetic algorithm with a refined crossover operator (GA-RX) is proposed to accomplish the architecting process iteratively. The availability and effectiveness of the overall method are validated using two simulations based on two different scenarios.
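
    The optimization loop can be sketched as a generic genetic algorithm over an architecture-selection chromosome (which UAV is assigned to which task). The paper's refined crossover operator (GA-RX) is not specified here, so plain single-point crossover stands in for it, and the fitness function rewarding a hypothetical optimal assignment is invented:

```python
import random

# Elitist GA skeleton: chromosome = task-to-UAV assignment vector.
# OPTIMAL, the fitness function, and all sizes are illustrative assumptions.

random.seed(0)
N_TASKS, N_UAVS, POP, GENS = 12, 4, 40, 60
OPTIMAL = [i % N_UAVS for i in range(N_TASKS)]   # hypothetical best assignment

def fitness(ind):
    return sum(a == b for a, b in zip(ind, OPTIMAL))

def crossover(p1, p2):
    cut = random.randint(1, N_TASKS - 1)         # single-point stand-in for GA-RX
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.1):
    return [random.randrange(N_UAVS) if random.random() < rate else g for g in ind]

pop = [[random.randrange(N_UAVS) for _ in range(N_TASKS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                      # elitist truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in elite]
best = max(pop, key=fitness)
```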

  20. Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives

    PubMed Central

    Wang, Weiping; He, Lei

    2014-01-01

    Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although to a very limited extent. Confronted with different types of tasks, multiple UAVs usually need to be coordinated, which can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. The corresponding optimization problem is then formulated, and an efficient genetic algorithm with a refined crossover operator (GA-RX) is proposed to accomplish the architecting process iteratively. The availability and effectiveness of the overall method are validated using two simulations based on two different scenarios. PMID:25140328

  1. A conceptual approach to approximate tree root architecture in infinite slope models

    NASA Astrophysics Data System (ADS)

    Schmaltz, Elmar; Glade, Thomas

    2016-04-01

    Vegetation-related properties - particularly tree root distribution and the coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main aspects of the tree root architecture: 1) type of rooting; 2) maximum growing distance from the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems, and the connected hydrological and mechanical attributes, sufficiently in a 3-dimensional slope stability model. A spatio-dynamic vegetation module must thereby cope with the demands of performance, computation time and significance. In this presentation, however, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that depends on age and species. We classified three main types of tree root systems and reproduced the species- and age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope setting. Thus, two solids in a Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, where the dimension of the latter defines the shape of a taproot system or a shallow-root system respectively; ii) elliptic
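
    The cylinder idealization described above reduces to a simple membership test: a point belongs to a tree's root volume if its horizontal distance from the stem is within the species- and age-dependent radius r and its depth is within h. The radius-age relation and all parameter values below are illustrative placeholders, not calibrated functions from the paper:

```python
import math

# Root-cylinder sketch: stem at the centroid, z measured downward from the
# ground surface. root_radius is a hypothetical saturating radius-age curve.

def root_radius(age_years, growth_rate=0.15, r_max=6.0):
    """Illustrative radius-age relation (metres)."""
    return r_max * (1.0 - math.exp(-growth_rate * age_years))

def point_in_root_cylinder(px, py, pz, stem_x, stem_y, r, h):
    """True if (px, py, pz) lies inside the root cylinder of radius r, depth h."""
    horizontal = math.hypot(px - stem_x, py - stem_y) <= r
    vertical = 0.0 <= pz <= h
    return horizontal and vertical

r = root_radius(age_years=30)
inside = point_in_root_cylinder(2.0, 1.0, 0.5, 0.0, 0.0, r, h=1.2)
```

    A slope-stability grid cell would then accumulate root reinforcement from every tree whose cylinder (or ellipsoid, for the second solid type) contains it.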

  2. Plant regeneration and genetic transformation of C. canadensis: a non-model plant appropriate for investigation of flower development in Cornus (Cornaceae).

    PubMed

    Liu, Xiang; Feng, Chun-Miao; Franks, Robert; Qu, Rongda; Xie, De-Yu; Xiang, Qiu-Yun Jenny

    2013-01-01

    KEY MESSAGE: Efficient Agrobacterium-mediated genetic transformation for investigation of the genetic and molecular mechanisms involved in inflorescence architectures in Cornus species. Cornus canadensis is a subshrub species in Cornus, Cornaceae. It has recently become a favored non-model plant species for studying genes involved in the development and evolution of inflorescence architectures in Cornaceae. Here, we report an effective protocol for plant regeneration and genetic transformation of C. canadensis. We use young inflorescence buds as explants to efficiently induce calli and multiple adventitious shoots on an optimized induction medium consisting of basal MS medium supplemented with 1 mg/l of 6-benzylaminopurine and 0.1 mg/l of 1-naphthaleneacetic acid. On the same medium, primary adventitious shoots can produce a large number of secondary adventitious shoots. Using leaves of 8-week-old secondary shoots as explants, GFP as a reporter gene controlled by the 35S promoter, and hygromycin B as the selection antibiotic, a standard procedure including pre-culture of explants, infection, co-cultivation, resting and selection has been developed to transform C. canadensis via Agrobacterium strain EHA105-mediated transformation. Under a strict selection condition using 14 mg/l hygromycin B, approximately 5% of explants infected by Agrobacterium produce resistant calli, from which clusters of adventitious shoots are induced. On an optimized rooting medium consisting of basal MS medium supplemented with 0.1 mg/l of indole-3-butyric acid and 7 mg/l hygromycin B, most of the resistant shoots develop adventitious roots to form complete transgenic plantlets, which can grow normally in soil. RT-PCR analysis demonstrates the expression of the GFP transgene. Green fluorescence emitted by GFP is observed in transgenic calli, roots and cells of transgenic leaves under both stereo fluorescence and confocal microscopes. The success of genetic transformation provides an appropriate

  3. Blind watermark algorithm on 3D motion model based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Qi, Hu; Zhai, Lang

    2013-12-01

    With the continuous development of 3D vision technology, digital watermarking, the method of choice for copyright protection, has gradually been integrated with it. This paper proposes a blind watermarking scheme for 3D motion models based on the wavelet transform, loaded into the Vega real-time visual simulation system. First, the 3D model undergoes an affine transform, and the distances from the center of gravity to the vertices of the 3D object are taken to generate a one-dimensional discrete signal; this signal is then wavelet-transformed, its frequency coefficients are modified to embed the watermark, and finally the watermarked 3D motion model is generated. In the fixed affine space, the scheme achieves robustness under translation, rotation and scaling transforms. The results show that this approach performs well not only in robustness but also in watermark invisibility.
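
    The embedding idea can be sketched with a single-level Haar wavelet transform of the centroid-to-vertex distance signal: perturb the detail coefficients to carry watermark bits, then invert. The additive rule and strength value are illustrative, not the paper's exact scheme, and the extraction shown is non-blind for brevity (the paper's scheme is blind):

```python
import numpy as np

# Haar DWT watermarking sketch on a 1-D distance signal (toy data).

def haar_forward(signal):
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def embed(signal, bits, strength=0.05):
    approx, detail = haar_forward(signal)
    # +strength for bit 1, -strength for bit 0 (illustrative additive rule).
    detail = detail + strength * (2 * np.asarray(bits) - 1)
    return haar_inverse(approx, detail)

def extract(marked, original):
    _, d_marked = haar_forward(marked)
    _, d_orig = haar_forward(original)
    return ((d_marked - d_orig) > 0).astype(int)

distances = np.linspace(1.0, 2.0, 8)   # centroid-to-vertex distances (toy)
marked = embed(distances, bits=[1, 0, 1, 0])
```

    Because the perturbation sits in the detail (high-frequency) band and is small relative to the signal, the marked distances stay close to the originals, which is the invisibility property the abstract reports.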

  4. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

    The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  5. Parallel eigenanalysis of finite element models in a completely connected architecture

    NASA Technical Reports Server (NTRS)

    Akl, F. A.; Morel, M. R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi) = (M)(phi)(omega), where (K) and (M) are of order N, and (omega) is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, the Cray X-MP. A finite element model is divided into m domains, each of which is assumed to contain n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effects of the number of domains, the number of degrees of freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.
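
    On a small dense problem, the generalized eigenproblem the abstract solves at scale can be reduced to a standard symmetric one by Cholesky-factoring the mass matrix. This reduction is a textbook illustration, not the paper's parallel multifrontal/subspace algorithm:

```python
import numpy as np

# 2-DOF spring-mass chain: solve K*phi = M*phi*omega^2 via M = L*L^T.

K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])    # stiffness matrix (illustrative)
M = np.eye(2)                  # lumped unit masses

L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
A = Linv @ K @ Linv.T          # standard symmetric problem A*y = omega^2 * y
omega2, Y = np.linalg.eigh(A)  # eigenvalues in ascending order
Phi = Linv.T @ Y               # mode shapes of the original problem

omegas = np.sqrt(omega2)       # natural frequencies (rad/s)
```

    Subspace iteration, as used in the paper, targets only the lowest q of these eigenpairs rather than computing the full spectrum, which is what makes it attractive for large N.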

  6. A phase-field model for incoherent martensitic transformations including plastic accommodation processes in the austenite

    NASA Astrophysics Data System (ADS)

    Kundin, J.; Raabe, D.; Emmerich, H.

    2011-10-01

    If alloys undergo an incoherent martensitic transformation, then plastic accommodation and relaxation accompany the transformation. To capture these mechanisms we develop an improved 3D microelastic-plastic phase-field model. It is based on the classical concepts of phase-field modeling of microelastic problems (Chen, L.Q., Wang Y., Khachaturyan, A.G., 1992. Philos. Mag. Lett. 65, 15-23). In addition to these it takes into account the incoherent formation of accommodation dislocations in the austenitic matrix, as well as their inheritance into the martensitic plates based on the crystallography of the martensitic transformation. We apply this new phase-field approach to the butterfly-type martensitic transformation in a Fe-30 wt%Ni alloy in direct comparison to recent experimental data (Sato, H., Zaefferer, S., 2009. Acta Mater. 57, 1931-1937). It is shown that the therein proposed mechanisms of plastic accommodation during the transformation can indeed explain the experimentally observed morphology of the martensitic plates as well as the orientation between the martensitic plates and the austenitic matrix. The developed phase-field model constitutes a general simulation approach for different kinds of phase transformation phenomena that inherently include dislocation-based accommodation processes. The approach not only predicts the final equilibrium topology, misfit, size, crystallography, and aspect ratio of the martensite-austenite ensembles resulting from a transformation, but also resolves the associated dislocation dynamics and the distribution and size of the crystals themselves.

  7. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    SciTech Connect

    Worley, P.H.; Foster, I.T.; Toonen, B.

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare across computers? In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  8. Evaluating radiative transfer schemes treatment of vegetation canopy architecture in land surface models

    NASA Astrophysics Data System (ADS)

    Braghiere, Renato; Quaife, Tristan; Black, Emily

    2016-04-01

    Incoming shortwave radiation is the primary source of energy driving the majority of the Earth's climate system. The partitioning of shortwave radiation by vegetation into absorbed, reflected, and transmitted terms is important for most biogeophysical processes, including leaf temperature changes and photosynthesis, and it is currently calculated by most land surface schemes (LSS) of climate and/or numerical weather prediction models. The most commonly used radiative transfer scheme in LSS is the two-stream approximation; however, it does not explicitly account for vegetation architectural effects on shortwave radiation partitioning. Detailed three-dimensional (3D) canopy radiative transfer schemes have been developed, but they are too computationally expensive to address large-scale studies over long time periods. Using a straightforward one-dimensional (1D) parameterisation proposed by Pinty et al. (2006), we modified a two-stream radiative transfer scheme by including a simple function of sun zenith angle, the so-called "structure factor", which does not require an explicit description and understanding of the complex phenomena arising from the presence of heterogeneous vegetation architecture, and which guarantees accurate simulations of the radiative balance consistent with 3D representations. In order to evaluate the ability of the proposed parameterisation to accurately represent the radiative balance of more complex 3D schemes, a comparison between the modified two-stream approximation with the "structure factor" parameterisation and state-of-the-art 3D radiative transfer schemes was conducted, following a set of virtual scenarios described in the RAMI4PILPS experiment. These experiments evaluate the radiative balance of several models under perfectly controlled conditions in order to eliminate uncertainties arising from an incomplete or erroneous knowledge of the structural, spectral and illumination related canopy characteristics typical
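
    The kind of correction described here can be sketched for the direct beam alone: Beer-Lambert transmission through an effective, structure-factor-reduced leaf area index. The function, parameter names, and default values below are illustrative assumptions, not the Pinty et al. (2006) parameterisation itself (where the factor also varies with sun zenith angle):

```python
import math

def canopy_transmission(lai, sza_deg, zeta=0.7, G=0.5):
    """Direct-beam transmission through a canopy via Beer-Lambert law.

    An effective leaf area index zeta * lai stands in for the 'structure
    factor' correction: zeta < 1 mimics canopy clumping, which lets more
    direct radiation through than a homogeneous 1D canopy (zeta = 1).
    G is the leaf projection coefficient (0.5 for a spherical leaf angle
    distribution). All values are illustrative.
    """
    mu = math.cos(math.radians(sza_deg))  # cosine of sun zenith angle
    return math.exp(-G * zeta * lai / mu)
```

    A clumped canopy transmits more radiation than the homogeneous two-stream assumption predicts, which is exactly the architectural effect the structure factor is meant to restore.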

  9. Marginal deformations of WZNW and coset models from O(d, d) transformations

    NASA Astrophysics Data System (ADS)

    Hassan, S. F.; Sen, Ashoke

    1993-09-01

    We show that the O(2, 2) transformation of the SU(2) WZNW model gives rise to a marginal deformation of this model by the operator ∫ d²z J(z) J̄(z̄), where J and J̄ are U(1) currents in the Cartan subalgebra. Generalization of this result to other WZNW theories is discussed. We also consider the O(3, 3) transformation of the product of an SU(2) WZNW model and a gauged SU(2) WZNW model. The three-parameter set of models obtained after the transformation is shown to be the result of first deforming the product of two SU(2) WZNW theories by marginal operators of the form ∑_{i,j=1,2} C_ij J_i J̄_j, and then gauging an appropriate U(1) subgroup of the theory. Our analysis leads to a general conjecture that O(d, d) transformations of any WZNW model correspond to marginal deformations of the WZNW theory by an appropriate combination of left- and right-moving currents belonging to the Cartan subalgebra; and O(d, d) transformations of a gauged WZNW model can be identified with the gauged version of such marginally deformed WZNW models.
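
    For clarity, the two deformation operators quoted in this record, typeset in standard notation (the proportionality constants are suppressed):

```latex
\delta S \;\propto\; \int d^2 z \, J(z)\,\bar{J}(\bar{z}),
\qquad\qquad
\delta S \;\propto\; \sum_{i,j=1}^{2} C_{ij}\, J_i(z)\,\bar{J}_j(\bar{z}).
```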

  10. Mentoring Resulting in a New Model: Affect-Centered Transformational Leadership

    ERIC Educational Resources Information Center

    Moffett, David W.; Tejeda, Armando R.

    2014-01-01

    The authors were professor and student, in a doctoral leadership course, during fall semester of 2013-2014. Across the term the professor mentored the mentee, guiding him to the creation of the next, needed model for leadership. The new model, known as The Affect-Centered Transformational Leadership Model, came about as the result. Becoming an…

  11. Modelling Transformations of Quadratic Functions: A Proposal of Inductive Inquiry

    ERIC Educational Resources Information Center

    Sokolowski, Andrzej

    2013-01-01

    This paper presents a study about using scientific simulations to enhance the process of mathematical modelling. The main component of the study is a lesson whose major objective is to have students mathematise a trajectory of a projected object and then apply the model to formulate other trajectories by using the properties of function…

  12. Coupling Thermo-Mechanical Simulation and Stratigraphic Modelling: Impact of Lithosphere Deformation on Stratigraphic Architecture of Passive Margin Basins

    NASA Astrophysics Data System (ADS)

    Rouby, D.; Huismans, R. S.; Braun, J.

    2013-12-01

    The aim of this study is to revise the view of the long-term stratigraphic trends of passive margins to include the impact of the coupling between lithosphere deformation and surface processes. However, models coupling lithosphere deformation and surface processes usually address large-scale deformation, i.e. they cannot resolve the stratigraphic trends of the simulated basins. On the other hand, models dedicated to stratigraphic simulation do not include these feedbacks of erosion/sedimentation on deformation processes. The recent development of a numerical modeling tool coupling the thermal and flexural evolution of the lithosphere and including the (un)loading effects of surface processes in 3D (Flex3D; J. Braun) allows us to propose a new procedure to investigate, in 3D, the evolution of passive margins, from the scale of the lithosphere to the detailed stratigraphic architecture, including syn- and post-rift phases and onshore and offshore domains. To do this, we first simulate the syn-rift phase of lithosphere stretching by thermo-mechanical modeling (Sopal, R. Huismans). We use the resulting lithosphere geometry as input to the 3D flexural modeling to simulate the post-rift evolution of the margin. We then use the resulting accumulation and subsidence histories as input to the stratigraphic simulation (Dionisos, D. Granjeon) to model the detailed stratigraphic architecture of the basin. Using this procedure, we evaluate the signature of various boundary conditions (lithosphere geometries and thermal states, stretching distributions, surface process efficiencies and drainage organization) in the uplift/subsidence and denudation histories as well as in the stratigraphic architecture of the associated sedimentary basins. We apply the procedure to the case study of the passive margins surrounding the West African craton, for which we have compiled data constraining the denudation and accumulation history, and the long term stratigraphic
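
    The sediment-loading feedback at the centre of this workflow can be illustrated at zeroth order with local (Airy) isostasy, the standard backstripping relation, rather than the full flexural calculation of Flex3D; the function and density defaults below are our own illustrative choices:

```python
def sediment_subsidence(sed_thickness, rho_s=2300.0, rho_m=3300.0, rho_w=1000.0):
    """Extra basement subsidence caused by a sediment pile of thickness
    sed_thickness (m) replacing water, under local (Airy) isostasy:

        s = h * (rho_s - rho_w) / (rho_m - rho_w)

    rho_s, rho_m, rho_w are sediment, mantle, and water densities (kg/m^3).
    A flexural model would spread this load over an elastic plate instead.
    """
    return sed_thickness * (rho_s - rho_w) / (rho_m - rho_w)
```

    With these densities a 1 km sediment pile adds roughly 565 m of load-driven subsidence, which is why accumulation histories cannot be decoupled from the deformation calculation.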

  13. A functional and structural Mongolian Scots pine (Pinus sylvestris var. mongolica) model integrating architecture, biomass and effects of precipitation.

    PubMed

    Wang, Feng; Letort, Véronique; Lu, Qi; Bai, Xuefeng; Guo, Yan; de Reffye, Philippe; Li, Baoguo

    2012-01-01

    Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal tree species in the network of the Three-North Shelterbelt for windbreak and sand stabilisation in China. The functions of shelterbelts are highly correlated with the architecture and eco-physiological processes of individual trees. Thus, model-assisted analysis of canopy architecture and functional dynamics in Mongolian Scots pine is of value for better understanding its role and behaviour within shelterbelt ecosystems in these arid and semiarid regions. We present here a single-tree functional and structural model, derived from the GreenLab model, which is adapted for young Mongolian Scots pines by incorporation of plant biomass production, allocation, allometric rules and soil water dynamics. The model is calibrated and validated based on experimental measurements taken on Mongolian Scots pines in 2007 and 2006 under local meteorological conditions. Measurements include plant biomass, topology and geometry, as well as soil attributes and standard meteorological data. After calibration, the model allows reconstruction of three-dimensional (3D) canopy architecture and biomass dynamics for trees from one to six years old at the same site using meteorological data for the six years from 2001 to 2006. Sensitivity analysis indicates that rainfall variation has more influence on biomass increment than on architecture, and that the internode and needle compartments and the aboveground biomass respond linearly to increases in precipitation. Sensitivity analysis also shows that the balance between internode and needle growth varies only slightly within the range of precipitations considered here. The model is expected to be used to investigate the growth of Mongolian Scots pines in other regions with different soils and climates. PMID:22927982
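
    The production-allocation cycle that GreenLab-type models iterate can be sketched in two small functions. The functional forms, names, and parameter values are illustrative stand-ins, not the calibrated model of this study:

```python
import math

def biomass_production(env, leaf_area, k=0.5, mu=2.0, sp=1.0):
    """One growth-cycle biomass production, GreenLab-style: a Beer-Lambert
    light-interception term in leaf area, scaled by an environmental factor
    `env` (e.g. rainfall-driven soil water availability). Production is
    linear in `env`, echoing the linear precipitation response reported."""
    return env * mu * sp * (1.0 - math.exp(-k * leaf_area / sp))

def allocate(q, sink_strengths):
    """Distribute produced biomass q among organ compartments (internodes,
    needles, ...) in proportion to their relative sink strengths."""
    total = sum(sink_strengths.values())
    return {organ: q * s / total for organ, s in sink_strengths.items()}
```

    Each cycle, the allocated internode and needle biomass updates the architecture, which in turn changes the leaf area driving the next cycle's production.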

  14. A Functional and Structural Mongolian Scots Pine (Pinus sylvestris var. mongolica) Model Integrating Architecture, Biomass and Effects of Precipitation

    PubMed Central

    Wang, Feng; Letort, Véronique; Lu, Qi; Bai, Xuefeng; Guo, Yan; de Reffye, Philippe; Li, Baoguo

    2012-01-01

    Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal tree species in the network of the Three-North Shelterbelt for windbreak and sand stabilisation in China. The functions of shelterbelts are highly correlated with the architecture and eco-physiological processes of individual trees. Thus, model-assisted analysis of canopy architecture and functional dynamics in Mongolian Scots pine is of value for better understanding its role and behaviour within shelterbelt ecosystems in these arid and semiarid regions. We present here a single-tree functional and structural model, derived from the GreenLab model, which is adapted for young Mongolian Scots pines by incorporation of plant biomass production, allocation, allometric rules and soil water dynamics. The model is calibrated and validated based on experimental measurements taken on Mongolian Scots pines in 2007 and 2006 under local meteorological conditions. Measurements include plant biomass, topology and geometry, as well as soil attributes and standard meteorological data. After calibration, the model allows reconstruction of three-dimensional (3D) canopy architecture and biomass dynamics for trees from one to six years old at the same site using meteorological data for the six years from 2001 to 2006. Sensitivity analysis indicates that rainfall variation has more influence on biomass increment than on architecture, and that the internode and needle compartments and the aboveground biomass respond linearly to increases in precipitation. Sensitivity analysis also shows that the balance between internode and needle growth varies only slightly within the range of precipitations considered here. The model is expected to be used to investigate the growth of Mongolian Scots pines in other regions with different soils and climates. PMID:22927982

  15. Characterization, Modeling, and Energy Harvesting of Phase Transformations in Ferroelectric Materials

    NASA Astrophysics Data System (ADS)

    Dong, Wenda

    Solid state phase transformations can be induced through mechanical, electrical, and thermal loading in ferroelectric materials that are compositionally close to morphotropic phase boundaries. Large changes in strain, polarization, compliance, permittivity, and coupling properties are typically observed across the phase transformation regions and are phenomena of interest for energy harvesting and transduction applications where increased coupling behavior is desired. This work characterized and modeled solid state phase transformations in ferroelectric materials and assessed the potential of phase-transforming materials for energy harvesting applications. Two types of phase transformations were studied. The first type was ferroelectric rhombohedral to ferroelectric orthorhombic, observed in lead indium niobate lead magnesium niobate lead titanate (PIN-PMN-PT) and driven by deviatoric stress, temperature, and electric field. The second type of phase transformation is ferroelectric to antiferroelectric, observed in lead zirconate titanate (PZT) and driven by pressure, temperature, and electric field. Experimental characterizations of the phase transformations were conducted in both PIN-PMN-PT and PZT in order to understand the thermodynamic characteristics of the phase transformations and map out the phase stability of both materials. The ferroelectric materials were characterized under combinations of stress, electric field, and temperature. Material models of phase-transforming materials were developed using a thermodynamics-based variant switching technique and thermodynamic observations of the phase transformations. These models replicate the phase transformation behavior of PIN-PMN-PT and PZT under mechanical and electrical loading conditions.
The switching model worked in conjunction with linear piezoelectric equations as ferroelectric/ferroelastic constitutive equations within a finite element framework that solved the mechanical and electrical field equations

  16. Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture.

    PubMed

    Knight, James C; Furber, Steve B

    2016-01-01

    While the adult human brain has approximately 8.8 × 10^10 neurons, this number is dwarfed by its 1 × 10^15 synapses. From the point of view of neuromorphic engineering and neural simulation in general this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously.
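
    The core constraint described here, that every synaptic event must be processed within the fixed simulation time step, sets an upper bound on how many neurons a core can host. A back-of-envelope sketch (the per-core event budget and all numbers are invented for illustration, not actual SpiNNaker figures):

```python
def neurons_per_core(rate_hz, synapses_per_neuron, events_per_timestep, dt_ms=1.0):
    """Estimate how many neurons one core can host if all synaptic events
    arriving in one simulation time step (dt_ms) must be processed before
    the step ends. `events_per_timestep` is the core's processing budget.
    All parameter values are hypothetical."""
    # Expected synaptic events per neuron per time step
    events_per_neuron = rate_hz * synapses_per_neuron * dt_ms / 1000.0
    return int(events_per_timestep // events_per_neuron)
```

    With 8000 synapses per neuron firing at 10 Hz, each neuron generates ~80 events per 1 ms step, so the synaptic workload, not the neuron update, dominates the per-core budget; this is the imbalance the synapse-centric mapping addresses.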

  17. Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture

    PubMed Central

    Knight, James C.; Furber, Steve B.

    2016-01-01

    While the adult human brain has approximately 8.8 × 10^10 neurons, this number is dwarfed by its 1 × 10^15 synapses. From the point of view of neuromorphic engineering and neural simulation in general this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously.

  20. Modeling organic transformations by microorganisms of soils in six contrasting ecosystems: Validation of the MOMOS model

    NASA Astrophysics Data System (ADS)

    Pansu, M.; Sarmiento, L.; Rujano, M. A.; Ablan, M.; Acevedo, D.; Bottner, P.

    2010-03-01

    The Modeling Organic Transformations by Microorganisms of Soils (MOMOS) model simulates the growth, respiration, and mortality of soil microorganisms as main drivers of the mineralization and humification processes of organic substrates. Originally built and calibrated using data from two high-altitude sites, the model is now validated with data from a 14C experiment carried out in six contrasting tropical ecosystems covering a large gradient of temperature, rainfall, vegetation, and soil types from 65 to 3968 m asl. MOMOS enabled prediction of a greater number of variables using a lower number of parameter values than for predictions previously published on this experiment. The measured 14C mineralization and transfer into microbial biomass (MB) and humified compartments were accurately modeled using (1) temperature and moisture response functions to daily adjust the model responses to weather conditions and (2) optimization of only one parameter, the respiration rate kresp of soil microorganisms at optimal temperature and moisture. This validates the parameterization and hypotheses of the previous calibration experiment. Climate and microbial respiratory activity, related to soil properties, appear as the main factors that regulate the C cycle. The kresp rate was found to be negatively related to the fine textural fraction of soil and positively related to soil pH, allowing us to propose two transfer functions that can help generalize MOMOS applications at regional or global scales.
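
    The central rate law described here, a single respiration rate constant modulated daily by temperature and moisture response functions, can be sketched as follows. The functional forms (a capped Q10 response and a linear moisture response) and all parameter values are generic stand-ins, not the published MOMOS functions:

```python
def momos_respiration(mb_c, k_resp, temp_c, wfps, t_opt=28.0, q10=2.0):
    """Daily CO2-C respired by microbial biomass carbon `mb_c`:

        respiration = k_resp * f(T) * g(W) * MB

    k_resp is the rate at optimal temperature and moisture; f and g are
    response functions in [0, 1]. `wfps` is water-filled pore space (0-1).
    Forms and parameters here are illustrative assumptions."""
    f_t = q10 ** ((temp_c - t_opt) / 10.0)   # Q10 temperature response
    f_t = min(f_t, 1.0)                      # capped at the optimum
    f_w = max(0.0, min(wfps, 1.0))           # linear moisture response
    return k_resp * f_t * f_w * mb_c
```

    Only k_resp would be fitted per site, mirroring the single-parameter optimization the validation relies on.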

  1. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
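
    The distinction between equation-error least squares and output-error estimation can be sketched on a generic first-order discrete top-oil model. The model form, the brute-force grid search, and all values below are illustrative stand-ins, not the actual MIT model or estimation procedure:

```python
import numpy as np

def simulate(a, b, u, y0):
    """Simulate the first-order model y[k+1] = a*y[k] + b*u[k]
    (e.g. top-oil temperature rise driven by load input u)."""
    y = np.empty(len(u) + 1)
    y[0] = y0
    for k in range(len(u)):
        y[k + 1] = a * y[k] + b * u[k]
    return y

def output_error_fit(u, y_meas, grid):
    """Output-error estimation: choose (a, b) minimising the error between
    the fully *simulated* trajectory and the measurements, rather than
    regressing one step ahead on the noisy measured outputs as ordinary
    least squares does. Grid search keeps the sketch dependency-free."""
    best = None
    for a in grid:
        for b in grid:
            y_sim = simulate(a, b, u, y_meas[0])
            sse = np.sum((y_sim - y_meas) ** 2)
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]
```

    Because the simulated output never feeds measurement noise back into the regressors, output-error estimates avoid the bias that afflicts least squares on noisy transformer data.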

  2. Foundational model of structural connectivity in the nervous system with a schema for wiring diagrams, connectome, and basic plan architecture.

    PubMed

    Swanson, Larry W; Bota, Mihail

    2010-11-30

    The nervous system is a biological computer integrating the body's reflex and voluntary environmental interactions (behavior) with a relatively constant internal state (homeostasis)--promoting survival of the individual and species. The wiring diagram of the nervous system's structural connectivity provides an obligatory foundational model for understanding functional localization at molecular, cellular, systems, and behavioral organization levels. This paper provides a high-level, downwardly extendible, conceptual framework--like a compass and map--for describing and exploring in neuroinformatics systems (such as our Brain Architecture Knowledge Management System) the structural architecture of the nervous system's basic wiring diagram. For this, the Foundational Model of Connectivity's universe of discourse is the structural architecture of nervous system connectivity in all animals at all resolutions, and the model includes two key elements--a set of basic principles and an internally consistent set of concepts (defined vocabulary of standard terms)--arranged in an explicitly defined schema (set of relationships between concepts) allowing automatic inferences. In addition, rules and procedures for creating and modifying the foundational model are considered. Controlled vocabularies with broad community support typically are managed by standing committees of experts that create and refine boundary conditions, and a set of rules that are available on the Web. PMID:21078980

  3. Transforming the Gray Factory: The Presidential Leadership of Charles M. Vest and the Architecture of Change at Massachusetts Institute of Technology

    ERIC Educational Resources Information Center

    Daas, Mahesh

    2013-01-01

    The single-site exemplar study presents an in-depth account of the presidential leadership of Charles M. Vest of MIT--the second longest presidency in the Institute's history--and his leadership team's journey between 1990 and 2004 into campus architectural changes that involved over a billion dollars, added a quarter of floor space to MIT's…

  4. [Job crisis and transformations in the new model of accumulation].

    PubMed

    Zerda-Sarmiento, Alvaro

    2012-06-01

    The general and structural crisis capitalism is going through is the token of the difficulties the accumulation model has faced since the 1970s in developed countries. This model has been trying to re-establish itself on the basis of neoliberal principles and a new technical-economical paradigm. The new accumulation pattern has had an effect on the employment sphere that is evident in all the elements that constitute work relationships. In Colombia, implementation of this model has been partial and segmented. However, its consequences (and those of the long-term current crisis) have been evident in unemployment, precarious work, segmentation, informal work, and restricted, private health insurance. Besides, financial accumulation makes labour profits flow at different levels. The economic model the current government has aimed to implement leads to strengthening exports, making population life conditions more difficult. In order to overcome the current state of affairs, the work sphere needs to become more creative, looking for new schemes for the expression and mobilization of its claims. This is supposed to be done by establishing a different economic model aimed at building a more inclusive future, with social justice. PMID:23258748

  6. Irreversibility of T-Cell Specification: Insights from Computational Modelling of a Minimal Network Architecture

    PubMed Central

    Manesso, Erica; Kueh, Hao Yuan; Freedman, George; Rothenberg, Ellen V.

    2016-01-01

    Background/Objectives A cascade of gene activations under the control of Notch signalling is required during T-cell specification, when T-cell precursors gradually lose the potential to undertake other fates and become fully committed to the T-cell lineage. We elucidate by computational means how the gene/protein dynamics of a core transcriptional module govern this important process. Methods We first assembled existing knowledge about transcription factors known to be important for T-cell specification to form a minimal core module, consisting of TCF-1, GATA-3, BCL11B, and PU.1, suitable for dynamical modeling. Model architecture was based on published experimental measurements of the effects on each factor when each of the others is perturbed. While several studies provided gene expression measurements at different stages of T-cell development, pure time series are not available, thus precluding a straightforward study of the dynamical interactions among these genes. We therefore translate stage-dependent data into time series. A feed-forward motif with multiple positive feed-backs can account for the observed delay between BCL11B versus TCF-1 and GATA-3 activation by Notch signalling. With a novel computational approach, all 32 possible interactions among Notch signalling, TCF-1, and GATA-3 are explored by translating combinatorial logic expressions into differential equations for the BCL11B production rate. Results Our analysis reveals that only 3 of the 32 possible configurations, in which GATA-3 works as a dimer, are able to explain not only the time delay, but, very importantly, also give rise to irreversibility. The winning models explain the data within the 95% confidence region and are consistent with regard to decay rates. Conclusions This first generation model for early T-cell specification has relatively few players. Yet it explains the gradual transition into a committed state with no return. Encoding logics in a rate equation setting allows determination of
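    The abstract's key step is translating a combinatorial logic expression into a production-rate term. As a purely hypothetical illustration (not the paper's fitted model), one of the 32 configurations might read as an AND gate over Notch signalling, TCF-1, and dimeric GATA-3:

    ```python
    def bcl11b_rate(notch, tcf1, gata3, k=1.0):
        """Hypothetical translation of one combinatorial logic configuration
        into a BCL11B production rate: an AND gate over Notch signalling,
        TCF-1, and GATA-3 acting as a dimer (hence the squared term).
        The paper enumerates 32 such configurations; this sketch is not
        its fitted model and k is an arbitrary rate constant."""
        return k * notch * tcf1 * gata3 ** 2
    ```

    The squared GATA-3 term is what distinguishes a dimeric from a monomeric input: doubling GATA-3 quadruples rather than doubles the rate.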

  7. A multistage differential transformation method for approximate solution of Hantavirus infection model

    NASA Astrophysics Data System (ADS)

    Gökdoğan, Ahmet; Merdan, Mehmet; Yildirim, Ahmet

    2012-01-01

    The goal of this study is to present a reliable algorithm based on the standard differential transformation method (DTM), called the multi-stage differential transformation method (MsDTM), for solving the Hantavirus infection model. The results obtained using MsDTM are compared with those obtained using the Runge-Kutta method (R-K method). The proposed technique is a promising tool for solving this kind of system over long time intervals.
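    For a simple linear test equation, the DTM recurrence and its multi-stage restart can be sketched as follows (a minimal illustration of the technique, not the authors' Hantavirus implementation):

    ```python
    def msdtm_decay(y0, lam, t_end, steps=10, order=8):
        """Multi-stage DTM sketch for y' = -lam * y. On each subinterval the
        DTM recurrence Y[k+1] = -lam * Y[k] / (k + 1) builds local Taylor
        coefficients; summing them at the local step h gives the value used
        to restart the next stage."""
        h = t_end / steps
        y = y0
        for _ in range(steps):
            Y = [y]
            for k in range(order):
                Y.append(-lam * Y[k] / (k + 1))  # DTM recurrence
            y = sum(c * h ** k for k, c in enumerate(Y))  # evaluate at h
        return y
    ```

    Restarting the series on each subinterval is what lets MsDTM remain accurate over long time intervals, where a single truncated series would diverge.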

  8. OFMspert - Inference of operator intentions in supervisory control using a blackboard architecture. [operator function model expert system

    NASA Technical Reports Server (NTRS)

    Jones, Patricia S.; Mitchell, Christine M.; Rubin, Kenneth S.

    1988-01-01

    The authors propose an architecture for an expert system that can function as an operator's associate in the supervisory control of a complex dynamic system. Called OFMspert (operator function model (OFM) expert system), the architecture uses the operator function modeling methodology as the basis for the design. The authors put emphasis on the understanding capabilities, i.e., the intent inferencing property, of an operator's associate. The authors define the generic structure of OFMspert, particularly those features that support intent inferencing. They also describe the implementation and validation of OFMspert in GT-MSOCC (Georgia Tech-Multisatellite Operations Control Center), a laboratory domain designed to support research in human-computer interaction and decision aiding in complex, dynamic systems.

  9. Time Domain Transformations to Improve Hydrologic Model Consistency: Parameterization in Flow-Corrected Time

    NASA Astrophysics Data System (ADS)

    Smith, T. J.; Marshall, L. A.; McGlynn, B. L.

    2015-12-01

    Streamflow modeling is highly complex. Beyond the identification and mapping of dominant runoff processes to mathematical models, additional challenges are posed by the switching of dominant streamflow generation mechanisms temporally and dynamic catchment responses to precipitation inputs based on antecedent conditions. As a result, model calibration is required to obtain parameter values that produce acceptable simulations of the streamflow hydrograph. Typical calibration approaches assign equal weight to all observations to determine the best fit over the simulation period. However, the objective function can be biased toward (i.e., implicitly weight) certain parts of the hydrograph (e.g., high streamflows). Data transformations (e.g., logarithmic or square root) scale the magnitude of the observations and are commonly used in the calibration process to reduce implicit weighting or better represent assumptions about the model residuals. Here, we consider a time domain data transformation rather than the more common data domain approaches. Flow-corrected time was previously employed in the transit time modeling literature. Conceptually, it stretches time during high streamflow and compresses time during low streamflow periods. Therefore, streamflow is dynamically weighted in the time domain, with greater weight assigned to periods with larger hydrologic flux. Here, we explore the utility of the flow-corrected time transformation in improving model performance of the Catchment Connectivity Model. Model process fidelity was assessed directly using shallow groundwater connectivity data collected at Tenderfoot Creek Experimental Forest. Our analysis highlights the impact of data transformations on model consistency and parameter sensitivity.
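    The core idea, stretching time during high flow and compressing it during low flow, can be sketched as a cumulative mapping from uniform clock time into flow-corrected time (an illustration of the concept, not the authors' exact formulation):

    ```python
    def flow_corrected_time(q):
        """Map uniform time steps into flow-corrected time: each step's
        duration is scaled by streamflow relative to the mean, so periods
        of large hydrologic flux occupy more of the transformed axis.
        `q` is a list of streamflow values on a uniform time grid."""
        qbar = sum(q) / len(q)
        tau, out = 0.0, []
        for qi in q:
            tau += qi / qbar  # high flow -> step stretched, low flow -> compressed
            out.append(tau)
        return out
    ```

    Because the increments are normalized by the mean flow, the total transformed duration equals the original one; only the weighting within it changes, which is what dynamically re-weights the calibration objective.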

  10. The Blended Advising Model: Transforming Advising with ePortfolios

    ERIC Educational Resources Information Center

    Ambrose, G. Alex; Williamson Ambrose, Laura

    2013-01-01

    This paper provides the rationale and framework for the blended advising model, a coherent approach to fusing technology--particularly the ePortfolio--into advising. The proposed term, "blended advising," is based on blended learning theory and incorporates the deliberate use of the strengths from both face-to-face and online…

  11. Modeling Transformations of Neurodevelopmental Sequences across Mammalian Species

    PubMed Central

    Workman, Alan D.; Charvet, Christine J.; Clancy, Barbara; Darlington, Richard B.

    2013-01-01

    A general model of neural development is derived to fit 18 mammalian species, including humans, macaques, several rodent species, and six metatherian (marsupial) mammals. The goal of this work is to describe heterochronic changes in brain evolution within its basic developmental allometry, and provide an empirical basis to recognize equivalent maturational states across animals. The empirical data generating the model comprises 271 developmental events, including measures of initial neurogenesis, axon extension, establishment, and refinement of connectivity, as well as later events such as myelin formation, growth of brain volume, and early behavioral milestones, to the third year of human postnatal life. The progress of neural events across species is sufficiently predictable that a single model can be used to predict the timing of all events in all species, with a correlation of modeled values to empirical data of 0.9929. Each species' rate of progress through the event scale, described by a regression equation predicting duration of development in days, is highly correlated with adult brain size. Neural heterochrony can be seen in selective delay of retinogenesis in the cat, associated with greater numbers of rods in its retina, and delay of corticogenesis in all species but rodents and the rabbit, associated with relatively larger cortices in species with delay. Unexpectedly, precocial mammals (those unusually mature at birth) delay the onset of first neurogenesis but then progress rapidly through remaining developmental events. PMID:23616543
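    The regression structure described above, a per-species rate through a shared event scale, can be sketched schematically. The log-linear form and any coefficients used with it are illustrative assumptions, not the paper's fitted values:

    ```python
    import math

    def predict_event_day(intercept, slope, event_score):
        """Sketch of a species-specific regression through a common event
        scale: each species has an intercept and a rate (slope), and the
        predicted developmental timing (in days) grows exponentially with
        the event score. Hypothetical form for illustration only."""
        return math.exp(intercept + slope * event_score)
    ```

    In this form, a species with a larger slope (e.g. one with a larger adult brain) spreads the same ordered event sequence over a longer developmental duration.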

  12. Teachers' Practices and Mental Models: Transformation through Reflection on Action

    ERIC Educational Resources Information Center

    Manrique, María Soledad; Sánchez Abchi, Verónica

    2015-01-01

    This contribution explores the relationship between teaching practices, teaching discourses and teachers' implicit representations and mental models and the way these dimensions change through teacher education (T.E). In order to study these relationships, and based on the assumptions that representations underlie teaching practices and that T.E…

  13. The Healing Web: A Transformative Model for Nursing.

    ERIC Educational Resources Information Center

    Bunkers, Sandra Schmidt

    1992-01-01

    A Navajo legend describes a web woven by Spider Woman that saved the people during a great flood. This article uses the imagery of the web to help education and service think more clearly about nursing's future. The Healing Web project seeks to educate nurses in a futuristic differentiated model. (Author/JOW)

  14. Modeling transformations of neurodevelopmental sequences across mammalian species.

    PubMed

    Workman, Alan D; Charvet, Christine J; Clancy, Barbara; Darlington, Richard B; Finlay, Barbara L

    2013-04-24

    A general model of neural development is derived to fit 18 mammalian species, including humans, macaques, several rodent species, and six metatherian (marsupial) mammals. The goal of this work is to describe heterochronic changes in brain evolution within its basic developmental allometry, and provide an empirical basis to recognize equivalent maturational states across animals. The empirical data generating the model comprises 271 developmental events, including measures of initial neurogenesis, axon extension, establishment, and refinement of connectivity, as well as later events such as myelin formation, growth of brain volume, and early behavioral milestones, to the third year of human postnatal life. The progress of neural events across species is sufficiently predictable that a single model can be used to predict the timing of all events in all species, with a correlation of modeled values to empirical data of 0.9929. Each species' rate of progress through the event scale, described by a regression equation predicting duration of development in days, is highly correlated with adult brain size. Neural heterochrony can be seen in selective delay of retinogenesis in the cat, associated with greater numbers of rods in its retina, and delay of corticogenesis in all species but rodents and the rabbit, associated with relatively larger cortices in species with delay. Unexpectedly, precocial mammals (those unusually mature at birth) delay the onset of first neurogenesis but then progress rapidly through remaining developmental events. PMID:23616543

  15. Transforming the Preparation of Leaders into a True Partnership Model

    ERIC Educational Resources Information Center

    Devin, Mary

    2016-01-01

    A former school superintendent who is now a university professor uses her experience in these partnership roles to describe how Kansas State University's collaboratively designed master's academy leadership preparation model, which merges theory and practice, came about over fifteen years ago, and how it has evolved since then.

  16. Modeling transformations of neurodevelopmental sequences across mammalian species.

    PubMed

    Workman, Alan D; Charvet, Christine J; Clancy, Barbara; Darlington, Richard B; Finlay, Barbara L

    2013-04-24

    A general model of neural development is derived to fit 18 mammalian species, including humans, macaques, several rodent species, and six metatherian (marsupial) mammals. The goal of this work is to describe heterochronic changes in brain evolution within its basic developmental allometry, and provide an empirical basis to recognize equivalent maturational states across animals. The empirical data generating the model comprises 271 developmental events, including measures of initial neurogenesis, axon extension, establishment, and refinement of connectivity, as well as later events such as myelin formation, growth of brain volume, and early behavioral milestones, to the third year of human postnatal life. The progress of neural events across species is sufficiently predictable that a single model can be used to predict the timing of all events in all species, with a correlation of modeled values to empirical data of 0.9929. Each species' rate of progress through the event scale, described by a regression equation predicting duration of development in days, is highly correlated with adult brain size. Neural heterochrony can be seen in selective delay of retinogenesis in the cat, associated with greater numbers of rods in its retina, and delay of corticogenesis in all species but rodents and the rabbit, associated with relatively larger cortices in species with delay. Unexpectedly, precocial mammals (those unusually mature at birth) delay the onset of first neurogenesis but then progress rapidly through remaining developmental events.

  17. Optimized transformation of the glottal motion into a mechanical model.

    PubMed

    Triep, M; Brücker, C; Stingl, M; Döllinger, M

    2011-03-01

    During phonation the human vocal folds exhibit a complex self-sustained oscillation which is a result of the transglottic pressure difference, of the characteristics of the tissue of the folds and of the flow in the gap between the vocal folds (Van den Berg J. Myoelastic-aerodynamic theory of voice production. J Speech Hearing Res 1958;1:227-44 [1]). Obviously, extensive experiments cannot be performed in vivo. Therefore, in literature a variety of model experiments that try to replicate the vocal folds kinematics for specific studies within the vocal tract can be found. Here, we present an experimental model to visualize the fluid dynamics which result from the complex motions of real human vocal folds. An existing up-scaled glottal cam model with approximate glottal kinematics is extended to replicate more realistically observed glottal closure types. This extension of the model is a further step in understanding the fluid dynamical mechanisms contributing to the quality of human voice during phonation, in particular the cause (changed glottal kinematics) and its effect (changed aero-acoustic field). For four typical glottal closure types cam geometries of varying profile are generated. Two counter rotating cams covered with a silicone membrane reproduce as well as possible the observed glottal movements.

  18. Population-Dynamic Modeling of Bacterial Horizontal Gene Transfer by Natural Transformation.

    PubMed

    Mao, Junwen; Lu, Ting

    2016-01-01

    Natural transformation is a major mechanism of horizontal gene transfer (HGT) and plays an essential role in bacterial adaptation, evolution, and speciation. Although its molecular underpinnings have been increasingly revealed, natural transformation is not well characterized in terms of its quantitative ecological roles. Here, by using Neisseria gonorrhoeae as an example, we developed a population-dynamic model for natural transformation and analyzed its dynamic characteristics with nonlinear tools and simulations. Our study showed that bacteria capable of natural transformation can display distinct population behaviors ranging from extinction to coexistence and to bistability, depending on their HGT rate and selection coefficient. With the model, we also illustrated the roles of environmental DNA sources-active secretion and passive release-in impacting population dynamics. Additionally, by constructing and utilizing a stochastic version of the model, we examined how noise shapes the steady and dynamic behaviors of the system. Notably, we found that distinct waiting time statistics for HGT events, namely a power-law distribution, an exponential distribution, and a mix of both, are associated with the dynamics in the regimes of extinction, coexistence, and bistability accordingly. This work offers a quantitative illustration of natural transformation by revealing its complex population dynamics and associated characteristics, therefore advancing our ecological understanding of natural transformation as well as HGT in general.
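    The role of the two key parameters, HGT rate and selection coefficient, can be illustrated with a toy two-type logistic competition. The rate forms and parameter values below are illustrative assumptions, not the paper's equations:

    ```python
    def simulate_hgt(h, s, x0=0.01, y0=0.99, dt=0.01, steps=100000):
        """Toy population-dynamic sketch of HGT by natural transformation.
        x: transformed cells with selection coefficient s; y: wild type;
        h converts wild-type cells to transformed ones on contact (x*y).
        Forward-Euler integration of the two logistic-competition ODEs."""
        x, y = x0, y0
        for _ in range(steps):
            n = x + y
            dx = (1.0 + s) * x * (1.0 - n) + h * x * y  # growth + conversion
            dy = y * (1.0 - n) - h * x * y              # growth - conversion
            x += dt * dx
            y += dt * dy
        return x, y
    ```

    With positive h and s the transformed type takes over; shrinking h or making s negative shifts the outcome toward coexistence or loss, which is the qualitative regime structure the abstract describes.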

  19. Population-Dynamic Modeling of Bacterial Horizontal Gene Transfer by Natural Transformation.

    PubMed

    Mao, Junwen; Lu, Ting

    2016-01-01

    Natural transformation is a major mechanism of horizontal gene transfer (HGT) and plays an essential role in bacterial adaptation, evolution, and speciation. Although its molecular underpinnings have been increasingly revealed, natural transformation is not well characterized in terms of its quantitative ecological roles. Here, by using Neisseria gonorrhoeae as an example, we developed a population-dynamic model for natural transformation and analyzed its dynamic characteristics with nonlinear tools and simulations. Our study showed that bacteria capable of natural transformation can display distinct population behaviors ranging from extinction to coexistence and to bistability, depending on their HGT rate and selection coefficient. With the model, we also illustrated the roles of environmental DNA sources-active secretion and passive release-in impacting population dynamics. Additionally, by constructing and utilizing a stochastic version of the model, we examined how noise shapes the steady and dynamic behaviors of the system. Notably, we found that distinct waiting time statistics for HGT events, namely a power-law distribution, an exponential distribution, and a mix of both, are associated with the dynamics in the regimes of extinction, coexistence, and bistability accordingly. This work offers a quantitative illustration of natural transformation by revealing its complex population dynamics and associated characteristics, therefore advancing our ecological understanding of natural transformation as well as HGT in general. PMID:26745428

  20. Application of Conjunctive Nonlinear Model Based on Wavelet Transforms and Artificial Neural Networks to Drought Forecasting

    NASA Astrophysics Data System (ADS)

    Abrishamchi, A.; Mehdikhani, H.; Tajrishy, M.; Marino, M. A.; Abrishamchi, A.

    2007-12-01

    Drought forecasting plays an important role in mitigating the economic, environmental and social impacts of drought. Traditional statistical time series methods have a limited ability to capture non-stationarities and nonlinearities in data. Artificial Neural Networks (ANNs), as highly flexible function estimators with self-learning and self-adaptive features, have shown great ability in forecasting nonlinear and nonstationary time series in hydrology. Recently, wavelet transforms have become a common tool for analyzing local variation in time series. Wavelet transforms provide a useful decomposition of a signal, or time series; therefore, hybrid models have been proposed for forecasting a time series based on wavelet transform preprocessing. Wavelet-transformed data aid in improving the ability of forecasting models by diagnosing a signal's main frequency components and abstracting local information of the original time series at various resolution levels. This paper presents a conjunctive nonlinear model using wavelet transforms and artificial neural networks. Application of the model in the Zayandeh-Rood River basin (Iran) shows that the conjunctive model significantly improves the ability of artificial neural networks for 1, 3, 6 and 9 months ahead forecasting of EDI (effective drought index) time series. Improved forecasts allow water resources decision makers to develop drought preparedness plans far in advance.
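    The wavelet preprocessing step can be sketched with one level of a Haar decomposition, the simplest mother wavelet; the paper does not state which wavelet family it used, so this is a stand-in for the idea:

    ```python
    def haar_step(x):
        """One level of a Haar wavelet decomposition of an even-length
        series: pairwise averages form the coarse approximation (low
        frequencies) and pairwise half-differences form the detail
        (local high-frequency variation). Repeating on the approximation
        yields the multi-resolution levels fed to a forecasting model."""
        a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]  # approximation
        d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]  # detail
        return a, d
    ```

    Each original pair is recoverable as (a + d, a - d), so the decomposition loses no information while separating scales for the downstream ANN.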

  1. Study on Information Management for the Conservation of Traditional Chinese Architectural Heritage - 3d Modelling and Metadata Representation

    NASA Astrophysics Data System (ADS)

    Yen, Y. N.; Weng, K. H.; Huang, H. Y.

    2013-07-01

    After over 30 years of practice and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. To document and digitize these unique heritages in their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study for digitization practice. The application of metadata representation and 3D modelling are the two key issues discussed. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp can, at this stage, only be used to model basic visual representations, and is ineffective in documenting the additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. The research recommends SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.

  2. Multi phase field model for solid state transformation with elastic strain

    NASA Astrophysics Data System (ADS)

    Steinbach, I.; Apel, M.

    2006-05-01

    A multi phase field model is presented for the investigation of the effect of transformation strain on the transformation kinetics, morphology and thermodynamic stability in multi phase materials. The model conserves homogeneity of stress in the diffuse interface between elastically inhomogeneous phases, in which respect it differs from previous models. The model is formulated consistently with the multi phase field model for diffusional and surface driven phase transitions [I. Steinbach, F. Pezzolla, B. Nestler, M. Seeßelberg, R. Prieler, G.J. Schmitz, J.L.L. Rezende, A phase field concept for multiphase systems, Physica D 94 (1996) 135-147; J. Tiaden, B. Nestler, H.J. Diepers, I. Steinbach, The multiphase-field model with an integrated concept for modeling solute diffusion, Physica D 115 (1998) 73-86; I. Steinbach, F. Pezzolla, A generalized field method for multiphase transformations using interface fields, Physica D 134 (1999) 385] and gives a consistent description of interfacial tension, multi phase thermodynamics and elastic stress balance in multiple junctions between an arbitrary number of grains and phases. Some aspects of the model are demonstrated with respect to numerical accuracy and the relation between transformation strain, external stress and thermodynamic equilibrium.

  3. The use of CMAC neural architectures in obstacle avoidance. [Cerebellar Model Articulated Controller

    NASA Technical Reports Server (NTRS)

    Peterson, James K.; Shelton, Robert O.

    1993-01-01

    In this paper, CMAC neural architectures are used in conjunction with a hierarchical planning approach to find collision free paths over two dimensional analog valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array.
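    The coarse dynamic-programming pass can be illustrated on a small 2D cost field. This is a simplified stand-in for the multipass scheme (moves restricted to right/down, and the transition costs are given directly rather than estimated by a CMAC network):

    ```python
    def dp_path_cost(grid):
        """Dynamic-programming sweep over a 2D analog cost field: the
        minimum accumulated cost of reaching the bottom-right cell from
        the top-left, moving only right or down. Each cell's cost is its
        own value plus the cheaper of its top/left predecessors."""
        rows, cols = len(grid), len(grid[0])
        cost = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                prev = []
                if i > 0:
                    prev.append(cost[i - 1][j])
                if j > 0:
                    prev.append(cost[i][j - 1])
                cost[i][j] = grid[i][j] + (min(prev) if prev else 0.0)
        return cost[-1][-1]
    ```

    In the hierarchical scheme described above, a path found this way on the coarse grid would then seed the fine-scale search.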

  4. Architecture & Environment

    ERIC Educational Resources Information Center

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  5. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  6. Laser Hardening Prediction Tool Based On a Solid State Transformations Numerical Model

    SciTech Connect

    Martinez, S.; Ukar, E.; Lamikiz, A.

    2011-01-17

    This paper presents a tool to predict the hardened layer in selective laser hardening processes, where the laser beam heats the part locally while the bulk acts as a heat sink. The tool used to accurately predict the temperature field in the workpiece is a numerical model that combines a three-dimensional transient numerical solution for heating in which it is possible to introduce different laser sources. The thermal field was modeled using a kinetic model based on the Johnson-Mehl-Avrami equation. Considering this equation, an experimental adjustment of transformation parameters was carried out to obtain the continuous heating transformation (CHT) diagrams. With the temperature field and CHT diagrams, the model predicts the percentage of base material converted into austenite. These two parameters are used as a first step to estimate the depth of the hardened layer in the part. The model has been adjusted and validated with experimental data for DIN 1.2379, a cold work tool steel typically used in the mold and die making industry. This steel presents solid state diffusive transformations at relatively low temperature. These transformations must be considered in order to get good accuracy of temperature field prediction during the heating phase. For model validation, surface temperature measured by pyrometry, the thermal field, as well as the hardened layer obtained from a metallographic study were compared with the model data, showing a good adjustment.
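    The Johnson-Mehl-Avrami kinetics the model builds on can be written down directly for an isothermal hold; the convention below is one common form of the equation, and the paper's fitted CHT parameters for DIN 1.2379 are not reproduced here:

    ```python
    import math

    def jma_fraction(k, n, t):
        """Johnson-Mehl-Avrami transformed fraction during an isothermal
        hold: X(t) = 1 - exp(-(k*t)**n), where k is a temperature-dependent
        rate constant and n the Avrami exponent. Values here are
        illustrative, not fitted material parameters."""
        return 1.0 - math.exp(-(k * t) ** n)
    ```

    The fraction rises monotonically from 0 toward 1, which is how the model converts a computed thermal history into the percentage of base material transformed to austenite.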

  7. Laser Hardening Prediction Tool Based On a Solid State Transformations Numerical Model

    NASA Astrophysics Data System (ADS)

    Martínez, S.; Ukar, E.; Lamikiz, A.; Liebana, F.

    2011-01-01

    This paper presents a tool to predict the hardened layer in selective laser hardening processes, where the laser beam heats the part locally while the bulk acts as a heat sink. The tool used to accurately predict the temperature field in the workpiece is a numerical model that combines a three-dimensional transient numerical solution for heating in which it is possible to introduce different laser sources. The thermal field was modeled using a kinetic model based on the Johnson-Mehl-Avrami equation. Considering this equation, an experimental adjustment of transformation parameters was carried out to obtain the continuous heating transformation (CHT) diagrams. With the temperature field and CHT diagrams, the model predicts the percentage of base material converted into austenite. These two parameters are used as a first step to estimate the depth of the hardened layer in the part. The model has been adjusted and validated with experimental data for DIN 1.2379, a cold work tool steel typically used in the mold and die making industry. This steel presents solid state diffusive transformations at relatively low temperature. These transformations must be considered in order to get good accuracy of temperature field prediction during the heating phase. For model validation, surface temperature measured by pyrometry, the thermal field, as well as the hardened layer obtained from a metallographic study were compared with the model data, showing a good adjustment.

  8. Animation Strategies for Smooth Transformations Between Discrete Lods of 3d Building Models

    NASA Astrophysics Data System (ADS)

    Kada, Martin; Wichmann, Andreas; Filippovska, Yevgeniya; Hermes, Tobias

    2016-06-01

    The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real-time and thus require advanced techniques to improve the quality and time response of dynamic scenes. The main focus of this article concentrates on the discussion of strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.
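    The simplest of the three strategies, ordering collapse operations by edge length, can be sketched in a few lines (an illustration of the ordering criterion only; the moving-plane strategies and the triangle-mesh bookkeeping are omitted):

    ```python
    import math

    def collapse_order(nodes, edges):
        """Order edge-collapse operations by edge length, shortest first,
        as in the simplest transformation strategy described above.
        `nodes` maps node ids to (x, y, z) coordinates; `edges` is a list
        of (id, id) pairs."""
        return sorted(edges, key=lambda e: math.dist(nodes[e[0]], nodes[e[1]]))
    ```

    Collapsing short edges first removes the least visually significant detail early, which is why this ordering is a natural baseline against which the moving-plane strategies are compared.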

  9. Microbial interactions affecting the natural transformation of Bacillus subtilis in a model aquatic ecosystem.

    PubMed

    Matsui, Kazuaki; Ishii, Nobuyoshi; Kawabata, Zen'ichiro

    2003-08-01

    The involvement of microbial interactions in natural transformation of bacteria was evaluated using an aquatic model system. For this purpose, the naturally transformable Bacillus subtilis was used as the model bacterium which was co-cultivated with the protist Tetrahymena thermophila (a consumer) and/or the photosynthetic alga Euglena gracilis (a producer). Co-cultivation with as few as 10(2) individuals ml(-1) of T. thermophila lowered the number of transformants to less than the detectable level (<1x10(0) ml(-1)), while co-cultivation with E. gracilis did not. Metabolites from co-cultures of T. thermophila and B. subtilis also decreased the number of transformants to less than the detectable level, while metabolites from co-culture of T. thermophila and B. subtilis with E. gracilis did not. Thus, the introduction of transformation inhibitory factor(s) by the grazing of T. thermophila and the attenuation of this inhibitory factor(s) by E. gracilis is indicated. These observations suggest that biological components do affect the natural transformation of B. subtilis. The study described is the first to suggest that ecological interactions are responsible not only for the carbon and energy cycles, but also for the processes governing horizontal transfer of genes, in microbial ecosystems.

  10. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    SciTech Connect

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M.

    2014-02-14

    To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k_c and k̄ and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties like the core thickness, and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J_0^m change sign with relevant parameter changes. Although typically k̄ < 0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs with membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J_0^m ≫ 0, especially at low ionic

  11. From PCK to TPACK: Developing a Transformative Model for Pre-Service Science Teachers

    NASA Astrophysics Data System (ADS)

    Jang, Syh-Jong; Chen, Kuan-Chung

    2010-12-01

    New science teachers should be equipped with the ability to integrate and design the curriculum and technology for innovative teaching. How to integrate technology into pre-service science teachers' pedagogical content knowledge is an important issue. This study examined the impact of a transformative model integrating technology and peer coaching on the development of technological pedagogical and content knowledge (TPACK) in pre-service science teachers. A transformative model and an online system were designed to restructure science teacher education courses. Participants of this study included an instructor and 12 pre-service teachers. The main sources of data included written assignments, online data, reflective journals, videotapes and interviews. This study used four views, namely the comprehensive, imitative, transformative and integrative views, to explore the impact of TPACK. The model could help pre-service teachers develop technological pedagogical methods and strategies for integrating subject-matter knowledge into science lessons, and further enhance their TPACK.

  12. Quantum equivalence of σ models related by non-Abelian duality transformations

    NASA Astrophysics Data System (ADS)

    Balázs, L. K.; Balog, J.; Forgács, P.; Mohammedi, N.; Palla, L.; Schnittger, J.

    1998-03-01

    Coupling constant renormalization is investigated in two-dimensional σ models related by non-Abelian duality transformations. It is shown that, at one loop in perturbation theory, the duals of a one-parameter family of models interpolating between the SU(2) principal model and the O(3) sigma model exhibit the same behavior as the original models. For the O(3) model, the two-loop equivalence is also investigated and found to be broken, just as in the already known example of the principal model.

  13. Voices of innovation: building a model for curriculum transformation.

    PubMed

    Phillips, Janet M; Resnick, Jerelyn; Boni, Mary Sharon; Bradley, Patricia; Grady, Janet L; Ruland, Judith P; Stuever, Nancy L

    2013-05-07

    Innovation in nursing education curriculum is critically needed to meet the demands of nursing leadership and practice while facing the complexities of today's health care environment. International nursing organizations, the Institute of Medicine, and our health care practice partners have called for curriculum reform to ensure the quality and safety of patient care. While innovation is occurring in schools of nursing, little is being researched or disseminated. The purposes of this qualitative study were to (a) describe what innovative curricula were being implemented, (b) identify challenges faced by the faculty, and (c) explore how the curricula were evaluated. Interviews were conducted with 15 exemplar schools from a variety of nursing programs throughout the United States. Exemplar innovative curricula were identified, and a model for approaching innovation was developed based on the findings related to conceptualizing, designing, delivering, evaluating, and supporting the curriculum. The results suggest implications for nursing education, research, and practice.

  14. Transformation of 3DP gypsum model to HA by treating in ammonium phosphate solution.

    PubMed

    Lowmunkong, Rungnapa; Sohmura, Taiji; Takahashi, Junzo; Suzuki, Yumiko; Matsuya, Shigeki; Ishikawa, Kunio

    2007-02-01

    Three-dimensional printing (3DP) is a CAD/CAM build-up method based on an ink-jet printing technique. Commercially available 3DP systems can form only gypsum models, not bioceramics. On the other hand, transformation of hardened gypsum into hydroxyapatite (HA) by treatment in ammonium phosphate solution has recently been reported. In the present study, transformation of a 3DP gypsum block to HA was attempted. However, the fabricated 3DP block was soluble in water. To insolubilize it, the block was heated at 300 degrees C for 10 min, whereby the gypsum was transformed to calcium sulfate hemihydrate, CaSO(4) x 0.5H(2)O. The 3DP block was immersed in 1M (NH(4))(3)PO(4) x 3H(2)O solution at 80 degrees C for 1-24 h, and transformation into HA within 4 h was ascertained. A heat-treated plaster of Paris (POP) block was also investigated for comparison. The unheated POP block, consisting of gypsum dihydrate, took 24 h to complete the transformation, while the heat-treated POP, consisting of calcium sulfate hemihydrate, showed accelerated transformation into HA; however, the transformed thickness in this block was less than in the 3DP block. This is probably due to the higher solubility of the hemihydrate compared with gypsum dihydrate. Accelerated transformation of the 3DP block was also caused by its porous structure, which enabled easy penetration of the phosphate solution. With the present method, it is possible to transform a gypsum form fabricated by 3D printing and adapted to an osseous defect into an HA prosthesis or scaffold.

  15. Finite field-dependent BRST-anti-BRST transformations: Jacobians and application to the Standard Model

    NASA Astrophysics Data System (ADS)

    Moshin, Pavel Yu.; Reshetnyak, Alexander A.

    2016-07-01

    We continue our research [1-4] and extend the class of finite BRST-anti-BRST transformations with odd-valued parameters λa, a = 1, 2, introduced in these works. In doing so, we evaluate the Jacobians induced by finite BRST-anti-BRST transformations linear in functionally-dependent parameters, as well as those induced by finite BRST-anti-BRST transformations with arbitrary functional parameters. The calculations cover the cases of gauge theories with a closed algebra, dynamical systems with first-class constraints, and general gauge theories. The resulting Jacobians in the case of linearized transformations are different from those in the case of polynomial dependence on the parameters. Finite BRST-anti-BRST transformations with arbitrary parameters induce an extra contribution to the quantum action, which cannot be absorbed into a change of the gauge. These transformations include an extended case of functionally-dependent parameters that implies a modified compensation equation, which admits nontrivial solutions leading to a Jacobian equal to unity. Finite BRST-anti-BRST transformations with functionally-dependent parameters are applied to the Standard Model, and an explicit form of functionally-dependent parameters λa is obtained, providing the equivalence of path integrals in any 3-parameter Rξ-like gauges. The Gribov-Zwanziger theory is extended to the case of the Standard Model, and a form of the Gribov horizon functional is suggested in the Landau gauge, as well as in Rξ-like gauges, in a gauge-independent way using field-dependent BRST-anti-BRST transformations, and in Rξ-like gauges using transverse-like non-Abelian gauge fields.

  16. PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices

    ERIC Educational Resources Information Center

    Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões

    2013-01-01

    This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…

  17. Project Integration Architecture: Application Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications is enabled.

  18. Using an Architecture-Centric Model-Driven Approach for Developing Service-Oriented Solutions: A Case Study

    NASA Astrophysics Data System (ADS)

    López-Sanz, Marcos; Acuña, César J.; de Castro, Valeria; Marcos, Esperanza; Cuesta, Carlos E.

    As services gain increasing importance in the development of software solutions for the Internet, software developers and researchers turn their attention to strategies based on the SOC (Service-Oriented Computing) paradigm. One of these development approaches is SOD-M, which is specifically designed for building service-oriented solutions. In this article we present the results of redesigning a real-world Web-based Information System, called MEDiWIS, using SOD-M. The main goal of the redesigned MEDiWIS system is to support the storage and management of digital medical images and related information by presenting its functionalities as software services. We analyze in detail the main challenges we have found using an ACMDA (Architecture-Centric Model-Driven Architecture) approach to achieve this goal.

  19. A transformative model for undergraduate quantitative biology education.

    PubMed

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. Initiating these academic changes required identifying barriers and implementing solutions.

  20. [Segmentation of medical images based on dyadic wavelet transform and active contour model].

    PubMed

    Li, Hong; Wang, Huinan; Chang, Linfeng; Shao, Xiaoli

    2008-12-01

    The interference of noise and the weak edge characteristics of symptom information in medical images prevent traditional segmentation methods from achieving good results. In this paper, a boundary detection method for the focus (lesion) based on the dyadic wavelet transform and the active contour model is proposed. In this method, the true edge points are detected by the dyadic wavelet transform and linked by an improved fast active contour model algorithm. The results of experiments on brain MRI show that the method can effectively remove the influence of noise and accurately detect the contour of a brain tumor. PMID:19166191
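
    A much-simplified sketch of the edge-detection half of such a pipeline: a single-scale Haar detail response stands in for the dyadic wavelet transform, and thresholding yields the candidate edge points that an active-contour step would then link into a closed boundary. The toy image and threshold are invented for illustration.

    ```python
    import numpy as np

    def haar_detail(img):
        """Single-scale Haar differences: a crude stand-in for one dyadic
        wavelet scale, giving a gradient-like edge response."""
        dx = np.zeros(img.shape)
        dy = np.zeros(img.shape)
        dx[:, :-1] = img[:, 1:] - img[:, :-1]
        dy[:-1, :] = img[1:, :] - img[:-1, :]
        return np.hypot(dx, dy)

    def edge_points(img, frac=0.5):
        """Responses above a fraction of the maximum; an active contour
        would link these candidates into a closed boundary."""
        m = haar_detail(img)
        return np.argwhere(m > frac * m.max())

    # Toy "image": a bright square (the lesion) on a dark background.
    img = np.zeros((8, 8))
    img[2:6, 2:6] = 1.0
    pts = edge_points(img)
    print(len(pts))
    ```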

  1. Penium margaritaceum: A Unicellular Model Organism for Studying Plant Cell Wall Architecture and Dynamics

    PubMed Central

    Domozych, David S.

    2014-01-01

    Penium margaritaceum is a new and valuable unicellular model organism for studying plant cell wall structure and developmental dynamics. This charophyte has a cell wall composition remarkably similar to the primary cell wall of many higher plants and clearly-defined inclusive zones containing specific polymers. Penium has a simple cylindrical phenotype with a distinct region of focused wall synthesis. Specific polymers, particularly pectins, can be identified using monoclonal antibodies raised against polymers of higher plant cell walls. Immunofluorescence-based labeling is easily performed using live cells that subsequently can be returned to culture and monitored. This feature allows for rapid assessment of wall expansion rates and identification of multiple polymer types in the wall microarchitecture during the cell cycle. Cryofixation by means of spray freezing provides excellent transmission electron microscopy imaging of the cell, including its elaborate endomembrane and cytoskeletal systems, both integral to cell wall development. Penium’s fast growth rate allows for convenient microarray screening of various agents that alter wall biosynthesis and metabolism. Finally, recent successful development of transformed cell lines has allowed for non-invasive imaging of proteins in cells and for RNAi reverse genetics that can be used for cell wall biosynthesis studies. PMID:27135519

  2. Modelling the self-assembly of elastomeric proteins provides insights into the evolution of their domain architectures.

    PubMed

    Song, Hongyan; Parkinson, John

    2012-01-01

    Elastomeric proteins have evolved independently multiple times through evolution. Produced as monomers, they self-assemble into polymeric structures that impart properties of stretch and recoil. They are composed of an alternating domain architecture of elastomeric domains interspersed with cross-linking elements. While the former provide the elasticity as well as help drive the assembly process, the latter serve to stabilise the polymer. Changes in the number and arrangement of the elastomeric and cross-linking regions have been shown to significantly impact their assembly and mechanical properties. However, to date, such studies are relatively limited. Here we present a theoretical study that examines the impact of domain architecture on polymer assembly and integrity. At the core of this study is a novel simulation environment that uses a model of diffusion limited aggregation to simulate the self-assembly of rod-like particles with alternating domain architectures. Applying the model to different domain architectures, we generate a variety of aggregates which are subsequently analysed by graph-theoretic metrics to predict their structural integrity. Our results show that the relative length and number of elastomeric and cross-linking domains can significantly impact the morphology and structural integrity of the resultant polymeric structure. For example, the most highly connected polymers were those constructed from asymmetric rods consisting of relatively large cross-linking elements interspersed with smaller elastomeric domains. In addition to providing insights into the evolution of elastomeric proteins, simulations such as those presented here may prove valuable for the tuneable design of new molecules that may be exploited as useful biomaterials. PMID:22396636
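
    The central claim, that rods with large cross-linking elements and small elastomeric domains connect best, can be illustrated with a toy stacking model. This is not the authors' diffusion-limited-aggregation simulator; the patterns, offsets, and bond counting below are invented for illustration.

    ```python
    import random

    def bonds_between(rod_a, rod_b, offset):
        """Count cross-link ('C') cells of rod_b that land on 'C' cells of
        rod_a when rod_b is deposited shifted by `offset`."""
        return sum(1 for i, d in enumerate(rod_b)
                   if d == 'C' and 0 <= i + offset < len(rod_a)
                   and rod_a[i + offset] == 'C')

    def stack_rods(pattern, n_rods, max_offset, rng):
        """Deposit identical rods with random offsets; total bonds formed
        is a crude proxy for the aggregate's connectivity."""
        return sum(bonds_between(pattern, pattern,
                                 rng.randint(-max_offset, max_offset))
                   for _ in range(n_rods - 1))

    def mean_bonds(pattern, trials, rng):
        return sum(stack_rods(pattern, 20, 3, rng) for _ in range(trials)) / trials

    rng = random.Random(0)
    large_crosslinks = "CCCECCCE"  # large cross-link blocks, small elastomeric domains
    small_crosslinks = "CEEECEEE"  # small cross-link blocks, large elastomeric domains
    print(mean_bonds(large_crosslinks, 200, rng))
    print(mean_bonds(small_crosslinks, 200, rng))
    ```

    With the same deposition statistics, the architecture with larger cross-linking elements forms far more bonds per stack, echoing the connectivity result in the abstract.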

  3. Modelling the Self-Assembly of Elastomeric Proteins Provides Insights into the Evolution of Their Domain Architectures

    PubMed Central

    Song, Hongyan; Parkinson, John

    2012-01-01

    Elastomeric proteins have evolved independently multiple times through evolution. Produced as monomers, they self-assemble into polymeric structures that impart properties of stretch and recoil. They are composed of an alternating domain architecture of elastomeric domains interspersed with cross-linking elements. While the former provide the elasticity as well as help drive the assembly process, the latter serve to stabilise the polymer. Changes in the number and arrangement of the elastomeric and cross-linking regions have been shown to significantly impact their assembly and mechanical properties. However, to date, such studies are relatively limited. Here we present a theoretical study that examines the impact of domain architecture on polymer assembly and integrity. At the core of this study is a novel simulation environment that uses a model of diffusion limited aggregation to simulate the self-assembly of rod-like particles with alternating domain architectures. Applying the model to different domain architectures, we generate a variety of aggregates which are subsequently analysed by graph-theoretic metrics to predict their structural integrity. Our results show that the relative length and number of elastomeric and cross-linking domains can significantly impact the morphology and structural integrity of the resultant polymeric structure. For example, the most highly connected polymers were those constructed from asymmetric rods consisting of relatively large cross-linking elements interspersed with smaller elastomeric domains. In addition to providing insights into the evolution of elastomeric proteins, simulations such as those presented here may prove valuable for the tuneable design of new molecules that may be exploited as useful biomaterials. PMID:22396636

  4. Information architecture. Volume 3: Guidance

    SciTech Connect

    1997-04-01

    The purpose of this document, as presented in Volume 1, The Foundations, is to assist the Department of Energy (DOE) in developing and promulgating information architecture guidance. This guidance is aimed at increasing the development of information architecture as a Departmentwide management best practice. This document describes departmental information architecture principles and minimum design characteristics for systems and infrastructures within the DOE Information Architecture Conceptual Model, and establishes a Departmentwide standards-based architecture program. The publication of this document fulfills the commitment to address guiding principles, promote standard architectural practices, and provide technical guidance. This document guides the transition from the baseline or de facto Departmental architecture through approved information management program plans and budgets to the future vision architecture. This document also represents another major step toward establishing a well-organized, logical foundation for the DOE information architecture.

  5. Properties of index transforms in modeling of nanostructures and plasmonic systems

    NASA Astrophysics Data System (ADS)

    Passian, A.; Koucheckian, S.; Yakubovich, S. B.; Thundat, T.

    2010-02-01

    In material structures with nanometer scale curvature or dimensions, electrons may be excited to oscillate in confined spaces. The consequence of such geometric confinement is of great importance in nano-optics and plasmonics. Furthermore, the geometric complexity of the probe-substrate/sample assemblies of many scanning probe microscopy experiments often poses a challenging modeling problem due to the high curvature of the probe apex or sample surface protrusions and indentations. Index transforms such as Mehler-Fock and Kontorovich-Lebedev, where integration occurs over the index of the function rather than over the argument, prove useful in solving the resulting differential equations when modeling optical or electronic response of such problems. By considering the scalar potential distribution of a charged probe in the presence of a dielectric substrate, we discuss certain implications and criteria of the index transform and prove the existence and the inversion theorems for the Mehler-Fock transform of order m ∈ N₀. The probe charged to a potential V0, measured at the apex, is modeled, in the noncontact case, as a one-sheeted hyperboloid of revolution, and in the contact case or in the limit of a very sharp probe, as a cone. Using the Mehler-Fock integral transform in the first case, and the Fourier integral transform in the second, we discuss the necessary conditions imposed on the potential distribution on the probe surface.

  6. Properties of Index Transforms in Modeling of Nanostructures and Plasmonic Systems

    SciTech Connect

    Passian, Ali

    2010-01-01

    In material structures with nanometer scale curvature or dimensions, electrons may be excited to oscillate in confined spaces. The consequence of such geometric confinement is of great importance in nano-optics and plasmonics. Furthermore, the geometric complexity of the probe-substrate/sample assemblies of many scanning probe microscopy experiments often poses a challenging modeling problem due to the high curvature of the probe apex or sample surface protrusions and indentations. Index transforms such as Mehler-Fock and Kontorovich-Lebedev, where integration occurs over the index of the function rather than over the argument, prove useful in solving the resulting differential equations when modeling optical or electronic response of such problems. By considering the scalar potential distribution of a charged probe in the presence of a dielectric substrate, we discuss certain implications and criteria of the index transform and prove the existence and the inversion theorems for the Mehler-Fock transform of order m ∈ N₀. The probe charged to a potential V0, measured at the apex, is modeled, in the non-contact case, as a one-sheeted hyperboloid of revolution, and in the contact case or in the limit of a very sharp probe, as a cone. Using the Mehler-Fock integral transform in the first case, and the Fourier integral transform in the second, we discuss the necessary conditions imposed on the potential distribution on the probe surface.

  7. An architecture for the development of real-time fault diagnosis systems using model-based reasoning

    NASA Technical Reports Server (NTRS)

    Hall, Gardiner A.; Schuetzle, James; Lavallee, David; Gupta, Uday

    1992-01-01

    Presented here is an architecture for implementing real-time telemetry based diagnostic systems using model-based reasoning. First, we describe Paragon, a knowledge acquisition tool for offline entry and validation of physical system models. Paragon provides domain experts with a structured editing capability to capture the physical component's structure, behavior, and causal relationships. We next describe the architecture of the run time diagnostic system. The diagnostic system, written entirely in Ada, uses the behavioral model developed offline by Paragon to simulate expected component states as reflected in the telemetry stream. The diagnostic algorithm traces causal relationships contained within the model to isolate system faults. Since the diagnostic process relies exclusively on the behavioral model and is implemented without the use of heuristic rules, it can be used to isolate unpredicted faults in a wide variety of systems. Finally, we discuss the implementation of a prototype system constructed using this technique for diagnosing faults in a science instrument. The prototype demonstrates the use of model-based reasoning to develop maintainable systems with greater diagnostic capabilities at a lower cost.
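
    A minimal sketch of the model-based reasoning loop described above: expected component states are simulated from a behavioural model, and causal relationships are traced to isolate the most upstream discrepant component. The three-component model and its numbers are invented for illustration and are not part of Paragon.

    ```python
    # Causal chain: power_supply -> amplifier -> sensor_output (hypothetical).
    MODEL = {
        "power_supply":  {"inputs": [], "behavior": lambda ins: 28.0},
        "amplifier":     {"inputs": ["power_supply"], "behavior": lambda ins: ins[0] * 0.5},
        "sensor_output": {"inputs": ["amplifier"], "behavior": lambda ins: ins[0] + 1.0},
    }

    def simulate(model):
        """Expected value of every component, evaluated along causal links."""
        values = {}
        def value_of(name):
            if name not in values:
                comp = model[name]
                values[name] = comp["behavior"]([value_of(i) for i in comp["inputs"]])
            return values[name]
        for name in model:
            value_of(name)
        return values

    def isolate_fault(model, telemetry, tol=1e-6):
        """Most upstream component whose telemetry disagrees with the model."""
        expected = simulate(model)
        discrepant = {n for n in model if abs(telemetry[n] - expected[n]) > tol}
        for n in discrepant:  # a fault has no discrepant components upstream of it
            if not any(i in discrepant for i in model[n]["inputs"]):
                return n
        return None

    telemetry = {"power_supply": 28.0, "amplifier": 10.0, "sensor_output": 11.0}
    print(isolate_fault(MODEL, telemetry))  # amplifier
    ```

    Because the diagnosis uses only the behavioural model and causal links, no fault-specific heuristic rules are needed, which is the point the abstract makes.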

  8. NitroScape: a model to integrate nitrogen transfers and transformations in rural landscapes.

    PubMed

    Duretz, S; Drouet, J L; Durand, P; Hutchings, N J; Theobald, M R; Salmon-Monviola, J; Dragosits, U; Maury, O; Sutton, M A; Cellier, P

    2011-11-01

    Modelling nitrogen transfer and transformation at the landscape scale is relevant to estimate the mobility of the reactive forms of nitrogen (N(r)) and the associated threats to the environment. Here we describe the development of a spatially and temporally explicit model to integrate N(r) transfer and transformation at the landscape scale. The model couples four existing models, to simulate atmospheric, farm, agro-ecosystem and hydrological N(r) fluxes and transformations within a landscape. Simulations were carried out on a theoretical landscape consisting of pig-crop farms interspersed with unmanaged ecosystems. Simulation results illustrated the effect of spatial interactions between landscape elements on N(r) fluxes and losses to the environment. More than 10% of the total N(2)O emissions were due to indirect emissions. The nitrogen budgets and transformations of the unmanaged ecosystems varied considerably, depending on their location within the landscape. The model represents a new tool for assessing the effect of changes in landscape structure on N(r) fluxes.

  9. Effect of Colorspace Transformation, the Illuminance Component, and Color Modeling on Skin Detection

    SciTech Connect

    Jayaram, S; Schmugge, S; Shin, M C; Tsap, L V

    2004-03-22

    Skin detection is an important preliminary process in human motion analysis. It is commonly performed in three steps: transforming the pixel color to a non-RGB colorspace, dropping the illumination component of skin color, and classifying by modeling the skin color distribution. In this paper, we evaluate the effect of these three steps on the skin detection performance. The importance of this study is a new comprehensive colorspace and color modeling testing methodology that would allow for making the best choices for skin detection. Combinations of nine colorspaces, the presence or absence of the illuminance component, and the two color modeling approaches are compared. The performance is measured by using a receiver operating characteristic (ROC) curve on a large dataset of 805 images with manual ground truth. The results reveal that (1) the absence of the illuminance component decreases performance, (2) skin color modeling has a greater impact than colorspace transformation, and (3) colorspace transformations can improve performance in certain instances. We found that the best performance was obtained by transforming the pixel color to the SCT, HSI, or CIELAB colorspaces, keeping the illuminance component, and modeling the color with the histogram approach.
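
    The three steps evaluated in the paper can be sketched as follows: a colorspace transformation (here RGB to HSV), an optional drop of the illuminance component (V), and a histogram colour model. The sample pixels, bin count, and threshold are illustrative assumptions, not the paper's setup.

    ```python
    import colorsys
    import numpy as np

    def to_hs(rgb):
        """Colorspace transformation RGB -> HSV, dropping the illuminance (V)."""
        h, s, _v = colorsys.rgb_to_hsv(*rgb)
        return h, s

    def build_histogram(samples, bins=8):
        """Histogram colour model over (H, S) from labelled skin pixels."""
        hs = np.array([to_hs(p) for p in samples])
        hist, _, _ = np.histogram2d(hs[:, 0], hs[:, 1],
                                    bins=bins, range=[[0, 1], [0, 1]])
        return hist / hist.sum()

    def is_skin(rgb, hist, bins=8, threshold=0.01):
        h, s = to_hs(rgb)
        i = min(int(h * bins), bins - 1)
        j = min(int(s * bins), bins - 1)
        return hist[i, j] > threshold

    # Toy "skin" training pixels (normalised RGB), reddish tones:
    skin_samples = [(0.9, 0.6, 0.5), (0.85, 0.55, 0.45), (0.8, 0.5, 0.4)]
    hist = build_histogram(skin_samples)
    print(is_skin((0.88, 0.58, 0.48), hist))  # True: similar tone
    print(is_skin((0.2, 0.4, 0.9), hist))     # False: blue
    ```

    Sweeping `threshold` and recording true/false positive rates against ground truth is what would trace out the ROC curve used for evaluation in the paper.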

  10. Phase Field Modeling of Cyclic Austenite-Ferrite Transformations in Fe-C-Mn Alloys

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Zhu, Benqiang; Militzer, Matthias

    2016-08-01

    Three different approaches for considering the effect of Mn on the austenite-ferrite interface migration in an Fe-0.1C-0.5Mn alloy have been coupled with a phase field model (PFM). In the first approach (PFM-I), only long-range C diffusion is considered while Mn is assumed to be immobile during the phase transformations. Both long-range C and Mn diffusions are considered in the second approach (PFM-II). In the third approach (PFM-III), long-range C diffusion is considered in combination with the Gibbs energy dissipation due to Mn diffusion inside the interface instead of solving for long-range diffusion of Mn. The three PFM approaches are first benchmarked with isothermal austenite-to-ferrite transformation at 1058.15 K (785 °C) before considering cyclic phase transformations. It is found that PFM-II can predict the stagnant stage and growth retardation experimentally observed during cycling transformations, whereas PFM-III can only replicate the stagnant stage but not the growth retardation and PFM-I predicts neither the stagnant stage nor the growth retardation. The results of this study suggest a significant role of Mn redistribution near the interface on reducing transformation rates, which should, therefore, be considered in future simulations of austenite-ferrite transformations in steels, particularly at temperatures in the intercritical range and above.

  11. A thermo-mechanical modelling of the Tribological Transformations of Surface

    NASA Astrophysics Data System (ADS)

    Antoni, Grégory; Désoyer, Thierry; Lebon, Frédéric

    2009-09-01

    The Tribological Transformations of Surface (TTS) are observed on samples of certain steels undergoing repeated compressive loadings. They correspond to a permanent, solid-solid phase transformation localized on the surfaces of the sample on which the loading is applied. The main hypothesis of the study is that TTS are due not only to the mechanical loading but also to the thermal loading associated with it. Thus, a thermo-mechanical model is first proposed in the present Note, inspired by previous works on TRansformation Induced Plasticity (TRIP). The potentialities of the model are also briefly illustrated by a simple 1D example. To cite this article: G. Antoni et al., C. R. Mecanique 337 (2009).

  12. Systems architecture: a new model for sustainability and the built environment using nanotechnology, biotechnology, information technology, and cognitive science with living technology.

    PubMed

    Armstrong, Rachel

    2010-01-01

    This report details a workshop held at the Bartlett School of Architecture, University College London, to initiate interdisciplinary collaborations for the practice of systems architecture, which is a new model for the generation of sustainable architecture that combines the discipline of the study of the built environment with the scientific study of complexity, or systems science, and adopts the perspective of systems theory. Systems architecture offers new perspectives on the organization of the built environment that enable architects to consider architecture as a series of interconnected networks with embedded links into natural systems. The public workshop brought together architects and scientists working with the convergence of nanotechnology, biotechnology, information technology, and cognitive science and with living technology to investigate the possibility of a new generation of smart materials that are implied by this approach.

  13. Business Collaborations in Grids: The BREIN Architectural Principals and VO Model

    NASA Astrophysics Data System (ADS)

    Taylor, Steve; Surridge, Mike; Laria, Giuseppe; Ritrovato, Pierluigi; Schubert, Lutz

    We describe the business-oriented architectural principles of the EC FP7 project “BREIN” for service-based computing. The architecture is founded on principles of how real businesses interact to mutual benefit, and we show how these can be applied to SOA and Grid computing. We present building blocks that can be composed in many ways to produce different value systems and supply chains for the provision of computing services over the Internet. We also introduce the complementary BREIN VO concept, which is centered on, and managed by, a main contractor who bears responsibility for the whole VO. The BREIN VO has an execution lifecycle for the creation and operation of the VO, and we have related this to an application-focused workflow involving steps that provide real end-user value. We show how this can be applied to an engineering simulation application and how the workflow can be adapted should the need arise.

  14. Modeling complex geological structures with elementary training images and transform-invariant distances

    NASA Astrophysics Data System (ADS)

    Mariethoz, Gregoire; Kelly, Bryce F. J.

    2011-07-01

    We present a new framework for multiple-point simulation involving small and simple training images. The use of transform-invariant distances (by applying random transformations) expands the range of structures available in the simple patterns of the training image. The training image is no longer regarded as a global conceptual geological model, but rather a basic structural element of the subsurface. Complex geological structures are obtained whose spatial structure can be parameterized by adjusting the statistics of the random transformations, on the basis of field data or geological context. In most cases, such parameterization is possible by adjusting two numbers only. This method allows us to build models that (1) reproduce shapes corresponding to a desired prior geological concept and (2) are in phase with different types of field observations such as orientation, hydrofacies, or geophysical measurements. The main advantage is that the training images are so simple that they can be easily built even in 3-D. We apply the method on a synthetic example involving seismic data where the transformation parameters are data-driven. We also show examples where realistic 2- and 3-D structures are built from simplistic training images, with transformation parameters inferred using a small number of orientation data.
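
    A sketch of the transform-invariant distance idea, assuming the random transformations are restricted to the eight axis-aligned rotations/reflections of a small 2-D pattern (the paper's transformations are more general and randomly parameterized):

    ```python
    import numpy as np

    def transforms(pattern):
        """All eight axis-aligned rotations/reflections of a 2-D pattern."""
        out = []
        p = pattern
        for _ in range(4):
            p = np.rot90(p)
            out.append(p)
            out.append(np.fliplr(p))
        return out

    def invariant_distance(pattern, data_event):
        """Minimum mismatch between the data event and any transform of the pattern."""
        return min(int(np.sum(t != data_event)) for t in transforms(pattern))

    # Elementary training-image pattern: a short horizontal channel segment.
    channel = np.array([[0, 0, 0],
                        [1, 1, 1],
                        [0, 0, 0]])
    # A vertically oriented observation: plain distance is 4, invariant distance 0.
    observed = np.array([[0, 1, 0],
                         [0, 1, 0],
                         [0, 1, 0]])
    print(invariant_distance(channel, observed))  # 0
    ```

    This is how a simple training pattern can match structures at orientations it does not itself contain, which is what lets small training images stand in for complex geology.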

  15. Planning intensive care unit design using computer simulation modeling: optimizing integration of clinical, operational, and architectural requirements.

    PubMed

    OʼHara, Susan

    2014-01-01

    Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate the clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merge of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach, a standard, for integrating the sciences with real client data, to offer solutions for improving patient care.

  16. Effect of ovariectomy on BMD, micro-architecture and biomechanics of cortical and cancellous bones in a sheep model.

    PubMed

    Wu, Zi-xiang; Lei, Wei; Hu, Yun-yu; Wang, Hai-qiang; Wan, Shi-yong; Ma, Zhen-sheng; Sang, Hong-xun; Fu, Suo-chao; Han, Yi-sheng

    2008-11-01

Osteoporotic/osteopenic fractures occur most frequently in trabeculae-rich skeletal sites. The purpose of this study was to use high-resolution micro-computed tomography (micro-CT) and dual-energy X-ray absorptiometry (DEXA) to investigate the changes in micro-architecture and bone mineral density (BMD) resulting from ovariectomy (OVX) in a sheep model. Biomechanical tests were performed to evaluate the strength of the trabecular bone. Twenty adult sheep were randomly divided into three groups: a sham group (n=8), group 1 (n=4) and group 2 (n=8). In groups 1 and 2, all sheep were ovariectomized; in the sham group, the ovaries were located and the oviducts were ligated. In all animals, lumbar spine BMD was obtained during the surgical procedure. BMD at the spine, femoral neck and femoral condyle was determined 6 months (group 1) and 12 months (group 2) post-OVX. Lumbar spines and femora were harvested and underwent BMD scanning and micro-CT analysis. Compressive mechanical properties were determined from biopsies of vertebral bodies and femoral condyles. BMD, micro-architectural parameters and mechanical properties of cancellous bone did not decrease significantly at 6 months post-OVX. Twelve months after OVX, BMD, micro-architectural parameters and mechanical properties decreased significantly. Linear regression analyses showed that trabecular thickness (Tb.Th) (r=0.945, R2=0.886) and bone volume fraction (BV/TV) (r=0.783, R2=0.586) correlated strongly (R2>0.5) with compression stress. In OVX sheep, the changes in the structural parameters of trabecular bone are comparable to those seen in humans during osteoporosis. The sheep model presented seems to meet the criteria for an osteopenia model for fracture treatment with respect to morphometric and mechanical properties, but the duration of OVX must be at least 12 months to ensure that the animal model is established successfully.

  17. A two-dimensional analytical model and experimental validation of garter stitch knitted shape memory alloy actuator architecture

    NASA Astrophysics Data System (ADS)

    Abel, Julianna; Luntz, Jonathan; Brei, Diann

    2012-08-01

Active knits are a unique architectural approach to meeting emerging smart structure needs for distributed high strain actuation with simultaneous force generation. This paper presents an analytical state-based model for predicting the actuation response of a shape memory alloy (SMA) garter knit textile. Garter knits generate significant contraction against moderate to large loads when heated, due to the continuous interlocked network of loops of SMA wire. For this knit architecture, the states of operation are defined on the basis of the thermal and mechanical loading of the textile, the resulting phase change of the SMA, and the load path followed to that state. Transitions between these operational states induce either stick or slip frictional forces depending upon the state and path, which affect the actuation response. A load-extension model of the textile is derived for each operational state using elastica theory and Euler-Bernoulli beam bending for the large deformations within a loop of wire based on the stress-strain behavior of the SMA material. This provides kinematic and kinetic relations which scale to form analytical transcendental expressions for the net actuation motion against an external load. This model was validated experimentally for an SMA garter knit textile over a range of applied forces with good correlation for both the load-extension behavior in each state and the net motion produced during the actuation cycle (250% recoverable strain and over 50% actuation). The two-dimensional analytical model of the garter stitch active knit provides the ability to predict the kinetic actuation performance, providing the basis for the design and synthesis of large stroke, large force distributed actuators that employ this novel architecture.

  18. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
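The unscented transform at the heart of the prediction step can be sketched as follows (a generic textbook formulation, not the authors' implementation; in the paper, the nonlinear function f would be the EOL simulation run from each sigma-point state):

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=0.0):
    """Approximate the mean and covariance of f(x) for x ~ N(mean, cov)
    using 2n+1 deterministically chosen sigma points (basic unscented
    transform). Exact for linear f."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)  # L @ L.T == (n+kappa)*cov
    sigma = ([mean] + [mean + L[:, i] for i in range(n)]
                    + [mean - L[:, i] for i in range(n)])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])       # one "simulation" per point
    y_mean = w @ y
    d = y - y_mean
    return y_mean, (w[:, None] * d).T @ d

# Demo with a simple nonlinear map standing in for the EOL simulation.
m, P = unscented_transform(lambda x: np.array([x[0] * x[1], x[0] ** 2]),
                           np.array([1.0, 2.0]),
                           np.diag([0.1, 0.1]))
print(m)
```

The computational saving is that only 2n+1 simulations are needed, rather than the hundreds or thousands a Monte Carlo estimate of the EOL distribution would require.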

  19. The Federal Transformation Intervention Model in Persistently Lowest Achieving High Schools: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Le Patner, Michelle B.

    2012-01-01

    This study examined the American Recovery and Reinvestment Act federal mandate of the Transformation Intervention Model (TIM) outlined by the School Improvement Grant, which was designed to turn around persistently lowest achieving schools. The study was conducted in four high schools in a large Southern California urban district that selected the…

  20. A Faculty-Development Model for Transforming Introductory Biology and Ecology Courses

    ERIC Educational Resources Information Center

    D'Avanzo, Charlene; Anderson, Charles W.; Hartley, Laurel M.; Pelaez, Nancy

    2012-01-01

    The Diagnostic Question Cluster (DQC) project integrates education research and faculty development to articulate a model for the effective transformation of introductory biology and ecology teaching. Over three years, faculty members from a wide range of institutions used active teaching and DQCs, a type of concept inventory, as pre- and…

  1. Educational Transformation in Upper-Division Physics: The Science Education Initiative Model, Outcomes, and Lessons Learned

    ERIC Educational Resources Information Center

    Chasteen, Stephanie V.; Wilcox, Bethany; Caballero, Marcos D.; Perkins, Katherine K.; Pollock, Steven J.; Wieman, Carl E.

    2015-01-01

    In response to the need for a scalable, institutionally supported model of educational change, the Science Education Initiative (SEI) was created as an experiment in transforming course materials and faculty practices at two institutions--University of Colorado Boulder (CU) and University of British Columbia. We find that this departmentally…

  2. Chemical Transformation System: Cloud Based Cheminformatic Services to Support Integrated Environmental Modeling

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) systems that account for the fate/transport of organics frequently require physicochemical properties as well as transformation products. A myriad of chemical property databases exist but these can be difficult to access and often do not co...

  3. Cultural Arts Education as Community Development: An Innovative Model of Healing and Transformation

    ERIC Educational Resources Information Center

    Archer-Cunningham, Kwayera

    2007-01-01

    This article discusses a three-tiered process of collective experiences of various artistic and cultural forms that fosters the healing and transformation of individuals, families, and communities of the African Diaspora. Ifetayo Cultural Arts, located in the Flatbush section of Brooklyn, espouses and practices a three-tiered model of community…

  4. The Efficacy of Ecological Macro-Models in Preservice Teacher Education: Transforming States of Mind

    ERIC Educational Resources Information Center

    Stibbards, Adam; Puk, Tom

    2011-01-01

    The present study aimed to describe and evaluate a transformative, embodied, emergent learning approach to acquiring ecological literacy through higher education. A class of teacher candidates in a bachelor of education program filled out a survey, which had them rate their level of agreement with 15 items related to ecological macro-models.…

  5. From PCK to TPACK: Developing a Transformative Model for Pre-Service Science Teachers

    ERIC Educational Resources Information Center

    Jang, Syh-Jong; Chen, Kuan-Chung

    2010-01-01

    New science teachers should be equipped with the ability to integrate and design the curriculum and technology for innovative teaching. How to integrate technology into pre-service science teachers' pedagogical content knowledge is the important issue. This study examined the impact on a transformative model of integrating technology and peer…

  6. A Transformational Curriculum Model: A Wilderness Travel Adventure Dog Sledding in Temagami.

    ERIC Educational Resources Information Center

    Leckie, Linda

    1996-01-01

    Personal narrative links elements of a dog sledding trip with the transformational curriculum model as applied to outdoor education. Describes the physical, mental, and spiritual challenges of a seven-day winter camping and dog sledding trip, during which students learned responsibility through experience and natural consequences and realized the…

  7. Mellin transforming the minimal model CFTs: AdS/CFT at strong curvature

    NASA Astrophysics Data System (ADS)

    Lowe, David A.

    2016-09-01

    Mack has conjectured that all conformal field theories are equivalent to string theories. We explore the example of the two-dimensional minimal model CFTs and confirm that the Mellin transformed amplitudes have the desired properties of string theory in three-dimensional anti-de Sitter spacetime.

  8. Green Architecture

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Ho

Today, owing to global warming, the environment has become a central subject in many scientific disciplines and in industrial development. This paper presents an analysis of trends in Green Architecture in France along three axes: Regulations and Approaches for Sustainable Architecture (Certificates and Standards), Renewable Materials (Green Materials), and Strategies (Equipment) of Sustainable Technology. The definition of 'Green Architecture' is given in the introduction, and the question of interdisciplinarity in the technological development of 'Green Architecture' is raised in the conclusion.

  9. Rates and probabilities in economic modelling: transformation, translation and appropriate application.

    PubMed

    Fleurence, Rachael L; Hollenbeak, Christopher S

    2007-01-01

Economic modelling is increasingly being used to evaluate the cost effectiveness of health technologies. One of the requirements for good practice in modelling is the appropriate application of rates and probabilities. In spite of previous descriptions of the appropriate use of rates and probabilities, confusion persists beyond a simple understanding of their definitions. The objective of this article is to provide a concise guide to understanding the issues surrounding the use of rates and probabilities reported in the literature in economic models, and an understanding of when and how to transform them appropriately. The article begins by defining rates and probabilities and shows the essential difference between the two measures. Appropriate conversions between rates and probabilities are discussed, and simple examples are provided to illustrate the techniques and pitfalls. How the transformed rates and probabilities may be used in economic models is then described and some recommendations are suggested.
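The standard conversions the article describes, assuming a constant rate and exponentially distributed event times, can be written directly:

```python
import math

def rate_to_prob(rate, t=1.0):
    """P(at least one event in time t) under a constant rate
    (exponential waiting times): p = 1 - exp(-rate * t)."""
    return 1.0 - math.exp(-rate * t)

def prob_to_rate(prob, t=1.0):
    """Inverse: recover the constant rate from a probability
    observed over a period of length t."""
    return -math.log(1.0 - prob) / t

# Pitfall check: a 5-year probability of 0.2 is NOT an annual
# probability of 0.2/5 = 0.04; convert through the rate instead.
annual_p = rate_to_prob(prob_to_rate(0.2, t=5.0), t=1.0)
print(round(annual_p, 4))  # → 0.0436
```

Dividing a multi-period probability by the number of periods is exactly the kind of confusion the article warns against; converting through the rate avoids it.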

  10. The Simulation Intranet Architecture

    SciTech Connect

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

The Simulation Intranet (SI) is a term which is being used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratory. The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high-fidelity full-physics simulations in a high performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved with the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include: the Web Client, the Business Objects, and Data Persistence.

  11. A study on thermal characteristics analysis model of high frequency switching transformer

    NASA Astrophysics Data System (ADS)

    Yoo, Jin-Hyung; Jung, Tae-Uk

    2015-05-01

    Recently, interest has been shown in research on the module-integrated converter (MIC) in small-scale photovoltaic (PV) generation. In an MIC, the voltage boosting high frequency transformer should be designed to be compact in size and have high efficiency. In response to the need to satisfy these requirements, this paper presents a coupled electromagnetic analysis model of a transformer connected with a high frequency switching DC-DC converter circuit while considering thermal characteristics due to the copper and core losses. A design optimization procedure for high efficiency is also presented using this design analysis method, and it is verified by the experimental result.

  12. Transformation from Spots to Waves in a Model of Actin Pattern Formation

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen; Bretschneider, Till; Burroughs, Nigel J.

    2009-05-01

    Actin networks in certain single-celled organisms exhibit a complex pattern-forming dynamics that starts with the appearance of static spots of actin on the cell cortex. Spots soon become mobile, executing persistent random walks, and eventually give rise to traveling waves of actin. Here we describe a possible physical mechanism for this distinctive set of dynamic transformations, by equipping an excitable reaction-diffusion model with a field describing the spatial orientation of its chief constituent (which we consider to be actin). The interplay of anisotropic actin growth and spatial inhibition drives a transformation at fixed parameter values from static spots to moving spots to waves.

  13. Transformation from spots to waves in a model of actin pattern formation.

    PubMed

    Whitelam, Stephen; Bretschneider, Till; Burroughs, Nigel J

    2009-05-15

    Actin networks in certain single-celled organisms exhibit a complex pattern-forming dynamics that starts with the appearance of static spots of actin on the cell cortex. Spots soon become mobile, executing persistent random walks, and eventually give rise to traveling waves of actin. Here we describe a possible physical mechanism for this distinctive set of dynamic transformations, by equipping an excitable reaction-diffusion model with a field describing the spatial orientation of its chief constituent (which we consider to be actin). The interplay of anisotropic actin growth and spatial inhibition drives a transformation at fixed parameter values from static spots to moving spots to waves.
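A generic excitable reaction-diffusion step can be sketched as below (FitzHugh-Nagumo kinetics as a stand-in; the paper's model additionally couples in a field for the spatial orientation of actin and anisotropic growth, which this sketch omits):

```python
import numpy as np

def rd_step(u, v, dt=0.05, dx=1.0, D=1.0, eps=0.08, a=0.7, b=0.8):
    """One explicit Euler step of a 1-D excitable reaction-diffusion
    system (FitzHugh-Nagumo kinetics, periodic boundaries).
    u: fast activator field, v: slow recovery field."""
    lap_u = (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u) / dx ** 2
    u_new = u + dt * (u - u ** 3 / 3.0 - v + D * lap_u)
    v_new = v + dt * eps * (u + a - b * v)
    return u_new, v_new

# Excite a few sites near the rest state and integrate; in excitable
# media such a perturbation launches travelling pulses.
u = -1.2 * np.ones(200)
v = -0.6 * np.ones(200)
u[95:105] = 1.5
for _ in range(200):
    u, v = rd_step(u, v)
```

Spot, mobile-spot, and wave regimes in such models correspond to different balances between local excitation and inhibition, which is the knob the paper turns via orientation-dependent growth.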

  14. Studies of transformational leadership: evaluating two alternative models of trust and satisfaction.

    PubMed

    Yang, Yi-Feng

    2014-06-01

This study evaluates the influence of leadership style and of employees' trust in their leaders on job satisfaction. 341 personnel (164 men, 177 women; M age = 33.5 yr., SD = 5.1) from four large insurance companies in Taiwan completed the transformational leadership behavior inventory, the leadership trust scale and a short version of the Minnesota (Job) Satisfaction Questionnaire. Bootstrapping mediation analysis and structural equation modeling revealed that the effect of transformational leadership on job satisfaction was mediated by leadership trust. This study highlights the importance of leadership trust in leadership-satisfaction relationships, and provides managers with practical ways to enhance job satisfaction.

  15. Conceptual Model Formalization in a Semantic Interoperability Service Framework: Transforming Relational Database Schemas to OWL.

    PubMed

    Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd

    2014-01-01

Healthcare information is distributed through multiple heterogeneous and autonomous systems. Access to, and sharing of, distributed information sources are challenging tasks. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from Relational Databases to Web Ontology Language. The proposed service makes use of an algorithm that allows the transformation of several data models from different domains, mainly by deploying inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
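A minimal sketch of the general idea (not the paper's algorithm, which also deploys inheritance rules; the table and column names are hypothetical examples) maps each table to an owl:Class and each column to an owl:DatatypeProperty in Turtle syntax:

```python
# Hypothetical SQL-to-XSD type mapping; covers only the types used here.
XSD = {"int": "xsd:integer", "varchar": "xsd:string", "date": "xsd:date"}

def table_to_owl(table, columns, base="ex:"):
    """Emit one owl:Class per table and one owl:DatatypeProperty per
    column, with rdfs:domain/rdfs:range assertions (Turtle syntax)."""
    lines = [f"{base}{table} a owl:Class ."]
    for col, sqltype in columns:
        lines.append(f"{base}{table}_{col} a owl:DatatypeProperty ; "
                     f"rdfs:domain {base}{table} ; "
                     f"rdfs:range {XSD[sqltype]} .")
    return "\n".join(lines)

print(table_to_owl("Patient", [("id", "int"), ("birth_date", "date")]))
```

A full transformation service would also handle primary/foreign keys (object properties) and the inheritance rules the abstract mentions.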

  16. Variational data assimilation schemes for transport and transformation models of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir; Tsvetova, Elena; Antokhin, Pavel

    2016-04-01

The work is devoted to a data assimilation algorithm for atmospheric chemistry transport and transformation models. A control function is introduced into the model source term (emission rate) to provide flexibility to adjust to data. This function is evaluated as the constrained minimum of a target functional that combines a control-function norm with a norm of the misfit between measured data and its model-simulated analog. The transport and transformation model acts as a constraint. The constrained minimization problem is solved with the Euler-Lagrange variational principle [1], which reduces it to a system of direct, adjoint and control-function estimate relations. This provides a physically plausible structure of the resulting analysis, without the model error covariance matrices that are sought within conventional approaches to data assimilation. The high dimensionality of atmospheric chemistry models and a real-time mode of operation demand computational efficiency of the data assimilation algorithms. Computational issues with complicated models can be solved by using a splitting technique. Within this approach a complex model is split into a set of relatively independent simpler models equipped with a coupling procedure. In a fine-grained approach, data assimilation is carried out quasi-independently on the separate splitting stages with shared measurement data [2]. In integrated schemes, data assimilation is carried out with respect to the split model as a whole. We compare the two approaches both theoretically and numerically. Data assimilation on the transport stage is carried out with a direct algorithm without iterations. Different algorithms to assimilate data on the nonlinear transformation stage are compared. We compare data assimilation results for both artificial and real measurement data. With these data we study the impact of transformation processes and data assimilation on the performance of the modeling system [3].
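For a linearized toy version of this scheme (our illustration: a hypothetical linear operator M stands in for the transport-transformation model composed with the observation operator), the control-function estimate has a closed form:

```python
import numpy as np

def estimate_control(M, y, alpha):
    """Closed-form minimiser of J(u) = alpha*||u||^2 + ||M u - y||^2,
    i.e. u = (alpha*I + M^T M)^{-1} M^T y. M is a LINEAR toy stand-in
    for the (generally nonlinear) model/observation operator."""
    n = M.shape[1]
    return np.linalg.solve(alpha * np.eye(n) + M.T @ M, M.T @ y)

# Hypothetical setup: 3 unknown emission-rate controls, 5 measurements.
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
u_true = np.array([1.0, -0.5, 2.0])
y = M @ u_true                       # perfect data for the illustration
u_est = estimate_control(M, y, alpha=1e-8)
print(np.round(u_est, 3))
```

The control-function norm term (weighted by alpha) plays the regularizing role that model-error covariances play in conventional assimilation, which is why no covariance matrices appear.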

  17. On Using SysML, DoDAF 2.0 and UPDM to Model the Architecture for the NOAA's Joint Polar Satellite System (JPSS) Ground System (GS)

    NASA Technical Reports Server (NTRS)

    Hayden, Jeffrey L.; Jeffries, Alan

    2012-01-01

The JPSS Ground System is a flexible system of systems responsible for telemetry, tracking & command (TT&C), data acquisition, routing and data processing services for a varied fleet of satellites to support weather prediction, modeling and climate modeling. To assist in this engineering effort, architecture modeling tools are being employed to translate the former NPOESS baseline to the new JPSS baseline. The paper will focus on the methodology for the system engineering process and the use of these architecture modeling tools within that process. The Department of Defense Architecture Framework version 2.0 (DoDAF 2.0) viewpoints and views that are being used to describe the JPSS GS architecture are discussed. The Unified Profile for DoDAF and MODAF (UPDM) and the Systems Modeling Language (SysML), as provided by extensions to the MagicDraw UML modeling tool, are used to develop the diagrams and tables that make up the architecture model. The model development process and structure are discussed, examples are shown, and details of handling the complexities of a large System of Systems (SoS), such as the JPSS GS, with an equally complex modeling tool, are described.

  18. Numerical Modeling of Arsenic Mobility during Reductive Iron-Mineral Transformations.

    PubMed

    Rawson, Joey; Prommer, Henning; Siade, Adam; Carr, Jackson; Berg, Michael; Davis, James A; Fendorf, Scott

    2016-03-01

    Millions of individuals worldwide are chronically exposed to hazardous concentrations of arsenic from contaminated drinking water. Despite massive efforts toward understanding the extent and underlying geochemical processes of the problem, numerical modeling and reliable predictions of future arsenic behavior remain a significant challenge. One of the key knowledge gaps concerns a refined understanding of the mechanisms that underlie arsenic mobilization, particularly under the onset of anaerobic conditions, and the quantification of the factors that affect this process. In this study, we focus on the development and testing of appropriate conceptual and numerical model approaches to represent and quantify the reductive dissolution of iron oxides, the concomitant release of sorbed arsenic, and the role of iron-mineral transformations. The initial model development in this study was guided by data and hypothesized processes from a previously reported,1 well-controlled column experiment in which arsenic desorption from ferrihydrite coated sands by variable loads of organic carbon was investigated. Using the measured data as constraints, we provide a quantitative interpretation of the processes controlling arsenic mobility during the microbial reductive transformation of iron oxides. Our analysis suggests that the observed arsenic behavior is primarily controlled by a combination of reductive dissolution of ferrihydrite, arsenic incorporation into or co-precipitation with freshly transformed iron minerals, and partial arsenic redox transformations. PMID:26835553

  20. Modeling of organic substrate transformation in the high-rate activated sludge process.

    PubMed

    Nogaj, Thomas; Randall, Andrew; Jimenez, Jose; Takacs, Imre; Bott, Charles; Miller, Mark; Murthy, Sudhir; Wett, Bernhard

    2015-01-01

    This study describes the development of a modified activated sludge model No.1 framework to describe the organic substrate transformation in the high-rate activated sludge (HRAS) process. New process mechanisms for dual soluble substrate utilization, production of extracellular polymeric substances (EPS), absorption of soluble substrate (storage), and adsorption of colloidal substrate were included in the modified model. Data from two HRAS pilot plants were investigated to calibrate and to validate the proposed model for HRAS systems. A subdivision of readily biodegradable soluble substrate into a slow and fast fraction were included to allow accurate description of effluent soluble chemical oxygen demand (COD) in HRAS versus longer solids retention time (SRT) systems. The modified model incorporates production of EPS and storage polymers as part of the aerobic growth transformation process on the soluble substrate and transformation processes for flocculation of colloidal COD to particulate COD. The adsorbed organics are then converted through hydrolysis to the slowly biodegradable soluble fraction. Two soluble substrate models were evaluated during this study, i.e., the dual substrate and the diauxic models. Both models used two state variables for biodegradable soluble substrate (SBf and SBs) and a single biomass population. The A-stage pilot typically removed 63% of the soluble substrate (SB) at an SRT <0.13 d and 79% at SRT of 0.23 d. In comparison, the dual substrate model predicted 58% removal at the lower SRT and 78% at the higher SRT, with the diauxic model predicting 32% and 70% removals, respectively. Overall, the dual substrate model provided better results than the diauxic model and therefore it was adopted during this study. The dual substrate model successfully described the higher effluent soluble COD observed in the HRAS systems due to the partial removal of SBs, which is almost completely removed in higher SRT systems.
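The dual-substrate idea can be sketched with a toy simulation (illustrative parameter values only, not the calibrated pilot-plant model): two readily biodegradable fractions are consumed simultaneously, each with its own Monod kinetics, so the fast fraction is depleted within a short HRAS contact time while much of the slow fraction remains:

```python
def simulate_dual_substrate(SBf=50.0, SBs=50.0,
                            q_f=5000.0, q_s=500.0,
                            Ks=5.0, dt=1e-4, t_end=0.02):
    """Explicit Euler integration of simultaneous Monod-type uptake of
    a fast (SBf) and a slow (SBs) readily biodegradable soluble COD
    fraction (mg COD/L) over one short contact period (t_end in days).
    q_f and q_s are maximum volumetric uptake rates (mg COD/L/d); all
    numbers are hypothetical, not the calibrated pilot values."""
    for _ in range(int(t_end / dt)):
        SBf = max(SBf - q_f * SBf / (Ks + SBf) * dt, 0.0)
        SBs = max(SBs - q_s * SBs / (Ks + SBs) * dt, 0.0)
    return SBf, SBs

SBf_end, SBs_end = simulate_dual_substrate()
# Fast fraction is nearly depleted; slow fraction largely remains.
print(round(SBf_end, 1), round(SBs_end, 1))
```

This residual slow fraction is what produces the elevated effluent soluble COD at very short SRTs that the dual-substrate model reproduces and the single-substrate and diauxic formulations miss.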

  1. Mature seed-derived callus of the model indica rice variety Kasalath is highly competent in Agrobacterium-mediated transformation.

    PubMed

    Saika, Hiroaki; Toki, Seiichi

    2010-12-01

    We previously established an efficient Agrobacterium-mediated transformation system using primary calli derived from mature seeds of the model japonica rice variety Nipponbare. We expected that the shortened tissue culture period would reduce callus browning--a common problem with the indica transformation system during prolonged tissue culture in the undifferentiated state. In this study, we successfully applied our efficient transformation system to Kasalath--a model variety of indica rice. The Luc reporter system is sensitive enough to allow quantitative analysis of the competency of rice callus for Agrobacterium-mediated transformation. We unexpectedly discovered that primary callus of Kasalath exhibits a remarkably high competency for Agrobacterium-mediated transformation compared to Nipponbare. Southern blot analysis and Luc luminescence showed that independent transformation events in primary callus of Kasalath occurred successfully at ca. tenfold higher frequency than in Nipponbare, and single copy T-DNA integration was observed in ~40% of these events. We also compared the competency of secondary callus of Nipponbare and Kasalath and again found superior competency in Kasalath, although the identification and subsequent observation of independent transformation events in secondary callus is difficult due to the vigorous growth of both transformed and non-transformed cells. An efficient transformation system in Kasalath could facilitate the identification of QTL genes, since many QTL genes are analyzed in a Nipponbare × Kasalath genetic background. The higher transformation competency of Kasalath could be a useful trait in the establishment of highly efficient systems involving new transformation technologies such as gene targeting.

  2. Modeling and simulation of high voltage and radio-frequency transformer

    NASA Astrophysics Data System (ADS)

    Salazar, Andrés O.; Barbosa, Giancarlos C.; Vieira, Madson A. A.; Quintaes, Filipe de O.; da Silva, Jacimário R.

    2012-04-01

This work presents a methodology for designing a 50 kW RF transformer operating at a frequency of 400 kHz, designed for minimal magnetic losses, for use in a project on the experimental treatment of industrial wastes and petrochemical effluents by thermal plasma. This innovative RF transformer model offers many advantages over traditional transformers, the main ones being its small size for this power level, high power density, low electromagnetic radiation level, and easy, economical manufacturing. The equivalent circuit was obtained both practically and theoretically at the university lab. From the design, simulations were made to evaluate the performance of different parameters as a function of magnetic induction, current density, and temperature.

  3. Focus on connections for successful organizational transformation to model based engineering

    NASA Astrophysics Data System (ADS)

    Babineau, Guy L.

    2015-05-01

Organizational transformation to a Model Based Engineering culture is a significant goal for Northrop Grumman Electronic Systems in order to achieve increased engineering performance. While organizational change is difficult, a focus on connections is creating success. Connections include model to model, program phase to program phase, and organization to organization, all through Model Based techniques. This presentation addresses the techniques employed by Northrop Grumman to achieve these results, as well as areas of continued focus and effort. Model-to-model connections are very effective in automating implicit linkages between models, both to ensure consistency across a set of models and to rapidly assess the impact of change. Program phase-to-phase connections are very important for reducing development time as well as reducing potential errors in moving from one program phase to another. Organization-to-organization communication is greatly facilitated by using model based techniques to eliminate ambiguity and drive consistency and reuse.

  4. Genetic transformation of Knufia petricola A95 - a model organism for biofilm-material interactions

    PubMed Central

    2014-01-01

    We established a protoplast-based system to transfer DNA to Knufia petricola strain A95, a melanised rock-inhabiting microcolonial fungus that is also a component of a model sub-aerial biofilm (SAB) system. To test whether the desiccation-resistant, highly melanised cell walls would hinder protoplast formation, we treated a melanin-minus mutant of A95 as well as the type-strain with a variety of cell-degrading enzymes. Of the different enzymes tested, lysing enzymes from Trichoderma harzianum were most effective in producing protoplasts. This mixture was equally effective on the melanin-minus mutant and the type-strain. Protoplasts produced using lysing enzymes were mixed with polyethyleneglycol (PEG) and plasmid pCB1004, which contains the hygromycin B (HmB) phosphotransferase (hph) gene under the control of the Aspergillus nidulans trpC promoter. Integration of hph into the A95 genome and its expression conferred hygromycin resistance upon the transformants. Two weeks after plating out on selective agar containing HmB, the protoplasts developed cell walls and formed colonies. Transformation frequencies were in the range of 36 to 87 transformants per 10 μg of vector DNA and 10⁶ protoplasts. Stability of transformation was confirmed by sub-culturing the putative transformants on selective agar containing HmB as well as by PCR detection of the hph gene in the colonies. The hph gene was stably integrated, as shown by five subsequent passages with and without selection pressure. PMID:25401079

  5. Genetic transformation of Knufia petricola A95 - a model organism for biofilm-material interactions.

    PubMed

    Noack-Schönmann, Steffi; Bus, Tanja; Banasiak, Ronald; Knabe, Nicole; Broughton, William J; Den Dulk-Ras, H; Hooykaas, Paul Jj; Gorbushina, Anna A

    2014-01-01

    We established a protoplast-based system to transfer DNA to Knufia petricola strain A95, a melanised rock-inhabiting microcolonial fungus that is also a component of a model sub-aerial biofilm (SAB) system. To test whether the desiccation-resistant, highly melanised cell walls would hinder protoplast formation, we treated a melanin-minus mutant of A95 as well as the type-strain with a variety of cell-degrading enzymes. Of the different enzymes tested, lysing enzymes from Trichoderma harzianum were most effective in producing protoplasts. This mixture was equally effective on the melanin-minus mutant and the type-strain. Protoplasts produced using lysing enzymes were mixed with polyethyleneglycol (PEG) and plasmid pCB1004, which contains the hygromycin B (HmB) phosphotransferase (hph) gene under the control of the Aspergillus nidulans trpC promoter. Integration of hph into the A95 genome and its expression conferred hygromycin resistance upon the transformants. Two weeks after plating out on selective agar containing HmB, the protoplasts developed cell walls and formed colonies. Transformation frequencies were in the range of 36 to 87 transformants per 10 μg of vector DNA and 10⁶ protoplasts. Stability of transformation was confirmed by sub-culturing the putative transformants on selective agar containing HmB as well as by PCR detection of the hph gene in the colonies. The hph gene was stably integrated, as shown by five subsequent passages with and without selection pressure.

  6. Examining Competing Models of Transformational Leadership, Leadership Trust, Change Commitment, and Job Satisfaction.

    PubMed

    Yang, Yi-Feng

    2016-08-01

    This study discusses the influence of transformational leadership on job satisfaction by assessing six alternative models involving the mediators of leadership trust and change commitment, using a sample (N = 341; M age = 32.5 years, SD = 5.2) of service promotion personnel in Taiwan. The bootstrap sampling technique was used to select the better-fitting model. Hierarchical nested model analysis was applied, along with bootstrapping mediation, PRODCLIN2, and structural equation modeling comparison. Overall, the results demonstrate that leadership is important and that leadership role identification (trust) and workgroup cohesiveness (commitment) form an ordered serial relationship.
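
The bootstrapping-mediation approach mentioned in the abstract can be sketched numerically: resample the data, re-estimate the indirect effect a*b on each resample, and take percentile confidence limits. The variable names and effect sizes below are invented for illustration, not taken from the study's data.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect a*b, where
    a = slope of mediator ~ predictor, and
    b = slope of outcome ~ mediator, controlling for the predictor."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                    # resample with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                   # mediator ~ predictor
        design = np.column_stack([np.ones(n), mb, xb]) # outcome ~ mediator + predictor
        b = np.linalg.lstsq(design, yb, rcond=None)[0][1]
        estimates.append(a * b)
    return np.percentile(estimates, [2.5, 97.5])

rng = np.random.default_rng(3)
n = 300
leadership = rng.standard_normal(n)                        # hypothetical predictor
trust = 0.6 * leadership + 0.5 * rng.standard_normal(n)    # hypothetical mediator
satisfaction = 0.7 * trust + 0.5 * rng.standard_normal(n)  # hypothetical outcome
lo, hi = bootstrap_indirect_effect(leadership, trust, satisfaction)
# a CI excluding zero supports mediation (true indirect effect here: 0.6 * 0.7 = 0.42)
```

A serial (ordered) mediation model of the kind the study supports would chain two such mediator regressions; the single-mediator version above shows only the core bootstrap logic.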

  7. Modeling the cure kinetics of crosslinking free radical polymerizations using the Avrami theory of phase transformation

    SciTech Connect

    Finnegan, G.R.; Shine, A.D.

    1995-12-01

    A model based on Avrami's theory of phase transformation has been developed to describe the cure kinetics of crosslinking free radical polymerizations. The model assumes that the growing polymer can be treated as a distinct phase and that the nucleation rate is proportional to the initiation rate of the polymerization. The Avrami time exponent was verified to be 4.0. This physically based, two-parameter model fits vinyl ester resin heat flow data as well as the empirical, four-parameter autocatalytic model does, and is capable of describing both neat and fiber-containing resins.

  8. Seismo-thermo-mechanical modeling of mature and immature transform faults

    NASA Astrophysics Data System (ADS)

    Preuss, Simon; Gerya, Taras; van Dinther, Ylona

    2016-04-01

    Transform faults (TF) are subdivided into continental and oceanic ones due to their markedly different tectonic position, structure, surface expression, dynamics and seismicity. Both continental and oceanic TFs are zones of rheological weakness, which is a pre-requisite for their existence and long-term stability. Compared to subduction zones, TFs are typically characterized by smaller earthquake magnitudes, as both their potential seismogenic width and length are reduced. However, a few very large magnitude (Mw>8) strike-slip events have been documented, which are presumably related to the generation of new transform boundaries and/or the sudden reactivation of pre-existing fossil structures. In particular, the 11 April 2012 Sumatra Mw 8.6 earthquake is challenging the general concept that such high magnitude events only occur at megathrusts. Hence, the processes of TF nucleation and propagation and their direct relation to the seismic cycle and long-term deformation at both oceanic and continental transforms need to be investigated jointly to overcome the restricted direct observations in time and space. To gain a fundamental understanding of the physical processes involved, the numerical seismo-thermo-mechanical (STM) modeling approach, validated in a subduction zone setting (Van Dinther et al. 2013), will be adapted for TFs. A simple 2D plane-view model geometry using visco-elasto-plastic material behavior will be adopted. We will study and compare seismicity patterns and evolution in two end-member TF setups, each with strain-dependent and rate-dependent brittle-plastic weakening processes: (1) a single weak and mature transform fault separating two strong plates (e.g., in between oceanic ridges) and (2) a nucleating or evolving (continental) TF system with disconnected predefined faults within a plate subjected to simple shear deformation (e.g., the San Andreas Fault system). The modeling of TFs provides a first tool to establish the STM model approach for transform faults in a

  9. Structural model of the dimeric Parkinson's protein LRRK2 reveals a compact architecture involving distant interdomain contacts.

    PubMed

    Guaitoli, Giambattista; Raimondi, Francesco; Gilsbach, Bernd K; Gómez-Llorente, Yacob; Deyaert, Egon; Renzi, Fabiana; Li, Xianting; Schaffner, Adam; Jagtap, Pravin Kumar Ankush; Boldt, Karsten; von Zweydorf, Felix; Gotthardt, Katja; Lorimer, Donald D; Yue, Zhenyu; Burgin, Alex; Janjic, Nebojsa; Sattler, Michael; Versées, Wim; Ueffing, Marius; Ubarretxena-Belandia, Iban; Kortholt, Arjan; Gloeckner, Christian Johannes

    2016-07-26

    Leucine-rich repeat kinase 2 (LRRK2) is a large, multidomain protein containing two catalytic domains: a Ras of complex proteins (Roc) G-domain and a kinase domain. Mutations associated with familial and sporadic Parkinson's disease (PD) have been identified in both catalytic domains, as well as in several of its multiple putative regulatory domains. Several of these mutations have been linked to increased kinase activity. Despite the role of LRRK2 in the pathogenesis of PD, little is known about its overall architecture and how PD-linked mutations alter its function and enzymatic activities. Here, we have modeled the 3D structure of dimeric, full-length LRRK2 by combining domain-based homology models with multiple experimental constraints provided by chemical cross-linking combined with mass spectrometry, negative-stain EM, and small-angle X-ray scattering. Our model reveals dimeric LRRK2 has a compact overall architecture with a tight, multidomain organization. Close contacts between the N-terminal ankyrin and C-terminal WD40 domains, and their proximity, together with the LRR domain, to the kinase domain suggest an intramolecular mechanism for LRRK2 kinase activity regulation. Overall, our studies provide, to our knowledge, the first structural framework for understanding the role of the different domains of full-length LRRK2 in the pathogenesis of PD. PMID:27357661

  10. Structural model of the dimeric Parkinson’s protein LRRK2 reveals a compact architecture involving distant interdomain contacts

    PubMed Central

    Guaitoli, Giambattista; Raimondi, Francesco; Gilsbach, Bernd K.; Gómez-Llorente, Yacob; Deyaert, Egon; Renzi, Fabiana; Li, Xianting; Schaffner, Adam; Jagtap, Pravin Kumar Ankush; Boldt, Karsten; von Zweydorf, Felix; Gotthardt, Katja; Lorimer, Donald D.; Yue, Zhenyu; Burgin, Alex; Janjic, Nebojsa; Sattler, Michael; Versées, Wim; Ueffing, Marius; Ubarretxena-Belandia, Iban; Kortholt, Arjan; Gloeckner, Christian Johannes

    2016-01-01

    Leucine-rich repeat kinase 2 (LRRK2) is a large, multidomain protein containing two catalytic domains: a Ras of complex proteins (Roc) G-domain and a kinase domain. Mutations associated with familial and sporadic Parkinson’s disease (PD) have been identified in both catalytic domains, as well as in several of its multiple putative regulatory domains. Several of these mutations have been linked to increased kinase activity. Despite the role of LRRK2 in the pathogenesis of PD, little is known about its overall architecture and how PD-linked mutations alter its function and enzymatic activities. Here, we have modeled the 3D structure of dimeric, full-length LRRK2 by combining domain-based homology models with multiple experimental constraints provided by chemical cross-linking combined with mass spectrometry, negative-stain EM, and small-angle X-ray scattering. Our model reveals dimeric LRRK2 has a compact overall architecture with a tight, multidomain organization. Close contacts between the N-terminal ankyrin and C-terminal WD40 domains, and their proximity—together with the LRR domain—to the kinase domain suggest an intramolecular mechanism for LRRK2 kinase activity regulation. Overall, our studies provide, to our knowledge, the first structural framework for understanding the role of the different domains of full-length LRRK2 in the pathogenesis of PD. PMID:27357661

  11. Analysis of Transformation Plasticity in Steel Using a Finite Element Method Coupled with a Phase Field Model

    PubMed Central

    Cho, Yi-Gil; Kim, Jin-You; Cho, Hoon-Hwe; Cha, Pil-Ryung; Suh, Dong-Woo; Lee, Jae Kon; Han, Heung Nam

    2012-01-01

    An implicit finite element model was developed to analyze the deformation behavior of low carbon steel during phase transformation. The finite element model was coupled hierarchically with a phase field model that could simulate the kinetics and micro-structural evolution during the austenite-to-ferrite transformation of low carbon steel. Thermo-elastic-plastic constitutive equations for each phase were adopted to confirm the transformation plasticity due to the weaker phase yielding that was proposed by Greenwood and Johnson. From the simulations under various possible plastic properties of each phase, a more quantitative understanding of the origin of transformation plasticity was attempted by a comparison with the experimental observation. PMID:22558295

  12. Analysis of transformation plasticity in steel using a finite element method coupled with a phase field model.

    PubMed

    Cho, Yi-Gil; Kim, Jin-You; Cho, Hoon-Hwe; Cha, Pil-Ryung; Suh, Dong-Woo; Lee, Jae Kon; Han, Heung Nam

    2012-01-01

    An implicit finite element model was developed to analyze the deformation behavior of low carbon steel during phase transformation. The finite element model was coupled hierarchically with a phase field model that could simulate the kinetics and micro-structural evolution during the austenite-to-ferrite transformation of low carbon steel. Thermo-elastic-plastic constitutive equations for each phase were adopted to confirm the transformation plasticity due to the weaker phase yielding that was proposed by Greenwood and Johnson. From the simulations under various possible plastic properties of each phase, a more quantitative understanding of the origin of transformation plasticity was attempted by a comparison with the experimental observation. PMID:22558295

  13. Micro-Tom Tomato as an Alternative Plant Model System: Mutant Collection and Efficient Transformation.

    PubMed

    Shikata, Masahito; Ezura, Hiroshi

    2016-01-01

    Tomato is a model plant for fruit development, a unique feature that classical model plants such as Arabidopsis and rice do not have. The tomato genome was sequenced in 2012 and tomato is becoming very popular as an alternative system for plant research. Among many varieties of tomato, Micro-Tom has been recognized as a model cultivar for tomato research because it shares some key advantages with Arabidopsis including its small size, short life cycle, and capacity to grow under fluorescent lights at a high density. Mutants and transgenic plants are essential materials for functional genomics research, and therefore, the availability of mutant resources and methods for genetic transformation are key tools to facilitate tomato research. Here, we introduce the Micro-Tom mutant database "TOMATOMA" and an efficient transformation protocol for Micro-Tom.

  14. A semiparametric approach for the nonparametric transformation survival model with multiple covariates.

    PubMed

    Song, Xiao; Ma, Shuangge; Huang, Jian; Zhou, Xiao-Hua

    2007-04-01

    The nonparametric transformation model makes no parametric assumptions on the forms of the transformation function and the error distribution. This model is appealing in its flexibility for modeling censored survival data. Current approaches for estimation of the regression parameters involve maximizing discontinuous objective functions, which are numerically infeasible to implement with multiple covariates. Based on the partial rank (PR) estimator (Khan and Tamer, 2004), we propose a smoothed PR estimator which maximizes a smooth approximation of the PR objective function. The estimator is shown to be asymptotically equivalent to the PR estimator but is much easier to compute when there are multiple covariates. We further propose using the weighted bootstrap, which is more stable than the usual sandwich technique with smoothing parameters, for estimating the standard error. The estimator is evaluated via simulation studies and illustrated with the Veterans Administration lung cancer data set.
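
The smoothing idea in the abstract, replacing the discontinuous pairwise rank indicator with a sigmoid of bandwidth h so the objective becomes differentiable in the regression coefficients, can be sketched numerically. The data-generating model, bandwidth, and grid search below are invented for illustration and are not the authors' estimator.

```python
import numpy as np

def smoothed_pr_objective(beta, x, y, delta, h=0.1):
    """Smoothed partial-rank objective (illustrative form).

    Pairs (i, j) with an observed, smaller outcome y[j] (delta[j] == 1)
    contribute a sigmoid-smoothed comparison of the linear predictors,
    approximating the discontinuous indicator 1{x_i' beta > x_j' beta}.
    """
    lp = x @ beta
    comparable = (y[:, None] > y[None, :]) & (delta[None, :] == 1)
    smooth = 1.0 / (1.0 + np.exp(-(lp[:, None] - lp[None, :]) / h))
    return np.sum(comparable * smooth)

rng = np.random.default_rng(1)
n = 200
x = rng.standard_normal((n, 2))
# Monotone transformation model: y increases with x' beta_true plus noise.
y = np.exp(x @ np.array([1.0, -0.5]) + 0.3 * rng.standard_normal(n))
delta = np.ones(n, dtype=int)          # no censoring in this toy example

# beta is identified only up to scale, so search over unit directions.
angles = np.linspace(-np.pi, np.pi, 361)
best = max(angles, key=lambda a: smoothed_pr_objective(
    np.array([np.cos(a), np.sin(a)]), x, y, delta))
beta_hat = np.array([np.cos(best), np.sin(best)])
```

A real implementation would maximize the smoothed objective with a gradient-based optimizer over multiple covariates (the point of the smoothing) and use the weighted bootstrap for standard errors, as the abstract describes.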

  15. A Modified Approach to Modeling of Diffusive Transformation Kinetics from Nonisothermal Data and Experimental Verification

    NASA Astrophysics Data System (ADS)

    Chen, Xiangjun; Xiao, Namin; Cai, Minghui; Li, Dianzhong; Li, Guangyao; Sun, Guangyong; Rolfe, Bernard F.

    2016-09-01

    An inverse model is proposed to construct the mathematical relationship between continuous cooling transformation (CCT) kinetics at constant cooling rates and isothermal kinetics. The kinetic parameters of the JMAK equations for isothermal kinetics can be deduced from the experimental CCT kinetics. Furthermore, a generalized model with a new additivity rule is developed for predicting the kinetics of nucleation and growth during diffusional phase transformation along arbitrary cooling paths, based only on the CCT curve. A generalized contribution coefficient is introduced into the new additivity rule to describe the influences of current temperature and cooling rate on the incubation time of nuclei. Finally, the reliability of the proposed model is validated using dilatometry experiments on a microalloyed steel with a fully bainitic microstructure under various cooling routes.
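
For context, the two building blocks the abstract generalizes, JMAK isothermal kinetics and the classical Scheil additivity rule for incubation under continuous cooling, can be sketched as follows. All parameter values (k0, Q, tau0, the cooling path) are made-up toy numbers, not the paper's fitted parameters, and the paper's contribution-coefficient extension is not reproduced here.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def jmak_fraction(t, T, k0=1e6, Q=1.2e5, n=2.5):
    """Illustrative JMAK isothermal transformed fraction X(t) = 1 - exp(-(k t)^n),
    with an Arrhenius-type rate constant k(T)."""
    k = k0 * np.exp(-Q / (R * T))
    return 1.0 - np.exp(-(k * t) ** n)

def incubation_time(T, tau0=1e-5, Q=1.2e5):
    """Assumed Arrhenius-type isothermal incubation time tau(T) (toy form)."""
    return tau0 * np.exp(Q / (R * T))

def scheil_start_time(T_of_t, dt=0.01, t_max=50.0):
    """Classical Scheil additivity rule: under a cooling path T(t),
    transformation starts when the accumulated sum of dt / tau(T(t)) reaches 1."""
    total, t = 0.0, 0.0
    while t < t_max:
        total += dt / incubation_time(T_of_t(t))
        t += dt
        if total >= 1.0:
            return t
    return None

# Linear cooling from 1100 K at 5 K/s (hypothetical path).
start = scheil_start_time(lambda t: 1100.0 - 5.0 * t)
```

The paper's generalized rule weights each increment dt/tau by a contribution coefficient depending on the current temperature and cooling rate; the uniform weighting above is the classical special case.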

  16. Statistical selection of multiple-input multiple-output nonlinear dynamic models of spike train transformation.

    PubMed

    Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; Hampson, Robert E; Deadwyler, Sam A; Berger, Theodore W

    2007-01-01

    A multiple-input multiple-output nonlinear dynamic model of spike-train to spike-train transformations was previously formulated for hippocampal-cortical prostheses. This paper further describes the statistical methods for selecting significant inputs (self-terms) and interactions between inputs (cross-terms) of this Volterra kernel-based model. In our approach, model structure was determined by progressively adding self-terms and cross-terms using a forward stepwise model selection technique. Model coefficients were then pruned based on the Wald test. Results showed that the reduced kernel models, which contain far fewer coefficients than the full Volterra kernel model, gave good fits to novel data. These models can be used to analyze the functional interactions between neurons during behavior.
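
The forward stepwise selection step can be illustrated with a minimal greedy routine over candidate regressor columns. This is a simple stand-in using a residual-sum-of-squares stopping rule, not the authors' exact procedure (which works on Volterra kernel terms and additionally prunes coefficients by the Wald test); the data and threshold are invented.

```python
import numpy as np

def forward_stepwise(candidates, y, max_terms=3, tol=0.01):
    """Greedy forward selection over candidate regressors (columns).

    At each step, the candidate giving the largest drop in residual sum
    of squares (RSS) is added; selection stops when the relative
    improvement falls below `tol` or `max_terms` is reached.
    """
    n, p = candidates.shape
    selected = []
    rss = float(y @ y)                    # RSS of the empty model
    while len(selected) < max_terms:
        best_j, best_rss = None, rss
        for j in range(p):
            if j in selected:
                continue
            Xj = candidates[:, selected + [j]]
            coef, *_ = np.linalg.lstsq(Xj, y, rcond=None)
            r = y - Xj @ coef
            rj = float(r @ r)
            if rj < best_rss:
                best_j, best_rss = j, rj
        if best_j is None or (rss - best_rss) / rss < tol:
            break                          # no candidate improves the fit enough
        selected.append(best_j)
        rss = best_rss
    return selected

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 8))          # 8 candidate terms, 2 truly active
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.standard_normal(300)
sel = forward_stepwise(X, y)               # should include columns 1 and 4
```

In the paper's setting the candidates would be self- and cross-term kernel regressors and the stopping criterion a statistical test rather than a fixed RSS threshold.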

  17. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of subconvolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of the very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
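
The overlap-and-save kernel at the heart of these architectures can be illustrated in a few lines of NumPy. This is a scalar, single-stream sketch of the block processing, not the parallel subconvolution/VLSI form the report describes; the FFT size and filter are arbitrary choices.

```python
import numpy as np

def overlap_save_filter(x, h, fft_size=None):
    """Filter signal x with FIR taps h using the overlap-save method.

    Each input block is transformed with an FFT, multiplied by the
    filter's frequency response, inverse-transformed, and the leading
    samples corrupted by circular wrap-around are discarded.
    """
    m = len(h)
    if fft_size is None:
        fft_size = 4 * m               # DFT size set by throughput goals, not filter order
    step = fft_size - (m - 1)          # new samples consumed per block
    H = np.fft.fft(h, fft_size)        # filter frequency response

    # Prepend m-1 zeros so the first block's discarded samples are padding.
    x_padded = np.concatenate([np.zeros(m - 1), x])
    out = []
    for start in range(0, len(x), step):
        block = x_padded[start:start + fft_size]
        if len(block) < fft_size:      # zero-pad the final partial block
            block = np.concatenate([block, np.zeros(fft_size - len(block))])
        conv = np.fft.ifft(np.fft.fft(block) * H).real
        out.append(conv[m - 1:])       # keep only the valid (linear convolution) samples
    return np.concatenate(out)[:len(x)]

x = np.random.default_rng(0).standard_normal(256)
h = np.array([0.25, 0.5, 0.25])        # simple smoothing filter
y = overlap_save_filter(x, h)          # matches direct linear convolution
```

Note how `step`, not the filter order, sets the per-block output rate; the report's subconvolution decomposition extends this so that filters far longer than the DFT can be handled by summing subfilter outputs.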

  18. Data Warehouse Design from HL7 Clinical Document Architecture Schema.

    PubMed

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2015-01-01

    This paper proposes a semi-automatic approach to extract clinical information structured in an HL7 Clinical Document Architecture (CDA) and transform it into a data warehouse dimensional model schema. It is based on a conceptual framework, published in a previous work, that maps the dimensional model primitives to CDA elements. Its feasibility is demonstrated through a case study based on the analysis of vital signs gathered during laboratory tests.

  19. Data Warehouse Design from HL7 Clinical Document Architecture Schema.

    PubMed

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2015-01-01

    This paper proposes a semi-automatic approach to extract clinical information structured in an HL7 Clinical Document Architecture (CDA) and transform it into a data warehouse dimensional model schema. It is based on a conceptual framework, published in a previous work, that maps the dimensional model primitives to CDA elements. Its feasibility is demonstrated through a case study based on the analysis of vital signs gathered during laboratory tests. PMID:26152975

  20. Computational Nanophotonics: Model Optical Interactions and Transport in Tailored Nanosystem Architectures

    SciTech Connect

    Stockman, Mark; Gray, Steven

    2014-02-21

    The program is directed toward development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on the development of computational methods and prediction and computational theory of new phenomena of optical energy transfer and transformation on the extreme nanoscale (down to a few nanometers).

  1. Development of a mercury transformation model in coal combustion flue gas.

    PubMed

    Zhuang, Ye; Thompson, Jeffrey S; Zygarlicke, Christopher J; Pavlish, John H

    2004-11-01

    A bench-scale entrained-flow reactor was used to extract flue gas produced by burning a subbituminous Belle Ayr coal in a 580-MJ/h combustion system. The reactor was operated at 400, 275, and 150 degrees C with a flow rate corresponding to residence times of 0-7 s. Transformations of elemental mercury (Hg0) and total gas-phase mercury (Hg(gas)) in the reactor were evaluated as functions of temperature and residence time. The most significant mercury transformations (Hg0 to Hg(p) and Hg0 to Hg2+) occurred at 150 degrees C, while virtually no mercury transformation was observed at 275 and 400 degrees C. Approximately 30% of the total mercury had been oxidized at temperatures higher than 400 degrees C. A mass transfer-capacity limit model was developed to quantify in-flight mercury sorption on fly ash in flue gas at different temperatures. A more sophisticated model was developed to demonstrate not only the temperature and residence time effects but also to account for the effective surface area of the fly ash and the dependence of mercury transformations on mercury vapor concentration in the flue gas. The reaction orders were 0.02 and 0.55 for Hg0 and Hg(gas), respectively. Only a few percent of the total surface area of the fly ash, in the range of 1%-3%, can effectively adsorb mercury vapor.

  2. Monthly river flow forecasting using artificial neural network and support vector regression models coupled with wavelet transform

    NASA Astrophysics Data System (ADS)

    Kalteh, Aman Mohammad

    2013-04-01

    Reliable and accurate forecasts of river flow are needed in many water resources planning, design, development, operation, and maintenance activities. In this study, the relative accuracy of artificial neural network (ANN) and support vector regression (SVR) models coupled with the wavelet transform in monthly river flow forecasting is investigated and compared to that of regular ANN and SVR models, respectively. The relative performance of regular ANN and SVR models is also compared. For this, monthly river flow data from the Kharjegil and Ponel stations in Northern Iran are used. The comparison of the results reveals that both ANN and SVR models coupled with the wavelet transform are able to provide more accurate forecasts than the regular ANN and SVR models. However, SVR models coupled with the wavelet transform provide better forecasting results than ANN models coupled with the wavelet transform. The results also indicate that regular SVR models perform slightly better than regular ANN models.
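
The wavelet-coupled forecasting idea, decompose the series into smooth and detail subseries, model each separately, then recombine, can be sketched with a one-level Haar transform and a least-squares autoregression standing in for the ANN/SVR component models. The toy seasonal series below is illustrative, not the Kharjegil or Ponel data.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: approximation (trend)
    and detail (fluctuation) coefficients; series length must be even."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass
    return a, d

def ar_forecast(series, lags=3):
    """Least-squares AR(lags) one-step-ahead forecast; a minimal
    stand-in for the ANN/SVR component models in the abstract."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(series[-lags:] @ coef)

# Wavelet-coupled forecast: model each subseries separately, then
# recombine via the inverse Haar step.
flow = 10 + np.sin(np.arange(120) * 2 * np.pi / 12)   # toy monthly series
a, d = haar_dwt(flow)
a_next, d_next = ar_forecast(a), ar_forecast(d)
forecast = (a_next + d_next) / np.sqrt(2)             # next even-indexed sample
```

A multi-level decomposition with a proper wavelet family (e.g., Daubechies) and ANN or SVR component models, as in the paper, follows the same decompose-model-recombine pattern.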

  3. Recent Developments of the Local Effect Model (LEM) - Implications of clustered damage on cell transformation

    NASA Astrophysics Data System (ADS)

    Elsässer, Thilo

    Exposure to radiation from high-energy, highly charged ions (HZE) poses a major risk to human beings, since in long-term space exploration about 10 protons per month and about one HZE particle per month hit each cell nucleus (1). Despite the larger number of light ions, the high ionisation power of HZE particles and the correspondingly more complex damage they cause represent a major hazard for astronauts. Therefore, in order to obtain a reasonable risk estimate, it is necessary to take into account the entire mixed radiation field. Frequently, neoplastic cell transformation serves as an indicator of the oncogenic potential of radiation exposure. It can be measured for a small number of ion and energy combinations. However, due to the complexity of the radiation field, it is necessary to know the contribution of each ion species to the radiation damage over the entire range of energies. Therefore, a model is required that transfers the few experimental data to other particles with different LETs. We use the Local Effect Model (LEM) (2) with its cluster extension (3) to calculate the relative biological effectiveness (RBE) of neoplastic transformation. It was originally developed in the framework of hadrontherapy and is applicable to a large range of ions and energies. The input parameters for the model include the linear-quadratic parameters for the induction of lethal events as well as for the induction of transformation events per surviving cell. Both processes, cell inactivation and neoplastic transformation per viable cell, are combined to eventually yield the RBE for cell transformation. We show that the Local Effect Model is capable of predicting the RBE of neoplastic cell transformation for a broad range of ions and energies. The comparison of experimental data (4) with model calculations shows reasonable agreement. We find that the cluster extension results in a better representation of the measured RBE values. With this model it should be possible to better

  4. Logical-Rule Models of Classification Response Times: A Synthesis of Mental-Architecture, Random-Walk, and Decision-Bound Approaches

    ERIC Educational Resources Information Center

    Fific, Mario; Little, Daniel R.; Nosofsky, Robert M.

    2010-01-01

    We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli…

  5. The Quantum Arnold Transformation for the damped harmonic oscillator: from the Caldirola-Kanai model toward the Bateman model

    NASA Astrophysics Data System (ADS)

    López-Ruiz, F. F.; Guerrero, J.; Aldaya, V.; Cossío, F.

    2012-08-01

    Using a quantum version of the Arnold transformation of classical mechanics, all quantum dynamical systems whose classical equations of motion are non-homogeneous linear second-order ordinary differential equations (LSODE), including systems with friction linear in velocity such as the damped harmonic oscillator, can be related to the quantum free-particle dynamical system. This implies that symmetries and simple computations in the free particle can be exported to the LSODE-system. The quantum Arnold transformation is given explicitly for the damped harmonic oscillator, and an algebraic connection between the Caldirola-Kanai model for the damped harmonic oscillator and the Bateman system will be sketched out.

  6. The CMIP5 archive architecture: A system for petabyte-scale distributed archival of climate model data

    NASA Astrophysics Data System (ADS)

    Pascoe, Stephen; Cinquini, Luca; Lawrence, Bryan

    2010-05-01

    The Phase 5 Coupled Model Intercomparison Project (CMIP5) will produce a petabyte-scale archive of climate data relevant to future international assessments of climate science (e.g., the IPCC's 5th Assessment Report, scheduled for publication in 2013). The infrastructure for the CMIP5 archive must meet many challenges to support this ambitious international project. We describe here the distributed software architecture being deployed worldwide to meet these challenges. The CMIP5 architecture extends the Earth System Grid (ESG) distributed architecture of Datanodes, providing data access and visualisation services, and Gateways, providing the user interface including registration, search, and browse services. Additional features developed for CMIP5 include a publication workflow incorporating quality control and metadata submission, data replication, version control, update notification, and production of citable metadata records. Implementation of these features has been driven by the requirements of reliable global access to over 1 PB of data and consistent citability of data and metadata. Central to the implementation is the concept of Atomic Datasets that are identifiable through a Data Reference Syntax (DRS). Atomic Datasets are immutable, allowing them to be replicated and tracked while maintaining data consistency. However, since occasional errors in data production and processing are inevitable, new versions can be published and users notified of these updates. As deprecated datasets may be the target of existing citations, they remain visible in the system. Replication of Atomic Datasets is designed to improve regional access and provide fault tolerance. Several datanodes in the system are designated replicating nodes and hold replicas of a portion of the archive expected to be of broad interest to the community. Gateways provide a system-wide interface to users where they can track the version history and location of replicas to select the most appropriate

  7. Brachypodium sylvaticum, a model for perennial grasses: transformation and inbred line development.

    PubMed

    Steinwand, Michael A; Young, Hugh A; Bragg, Jennifer N; Tobias, Christian M; Vogel, John P

    2013-01-01

    Perennial species offer significant advantages as crops, including reduced soil erosion, lower energy inputs after the first year, deeper root systems that access more soil moisture, and decreased fertilizer inputs due to the remobilization of nutrients at the end of the growing season. These advantages are particularly relevant for emerging biomass crops, and it is projected that perennial grasses will be among the most important dedicated biomass crops. The advantages offered by perennial crops could also prove favorable for incorporation into annual grain crops like wheat, rice, sorghum, and barley, especially under the drier and more variable climate conditions projected for many grain-producing regions. Thus, it would be useful to have a perennial model system to test biotechnological approaches to crop improvement and for fundamental research. The perennial grass Brachypodium sylvaticum is a candidate for such a model because it is diploid, has a small genome, is self-fertile, has a modest stature, and has a short generation time. Its close relationship to the annual model Brachypodium distachyon will facilitate comparative studies and allow researchers to leverage the resources developed for B. distachyon. Here we report on the development of two keystone resources that are essential for a model plant: high-efficiency transformation and inbred lines. Using Agrobacterium tumefaciens-mediated transformation, we achieved an average transformation efficiency of 67%. We also surveyed the genetic diversity of 19 accessions from the National Plant Germplasm System using SSR markers and created 15 inbred lines. PMID:24073248

  8. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform

    PubMed Central

    Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong

    2016-01-01

    We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features. PMID:27304979

  9. Application of chaotic prediction model based on wavelet transform on water quality prediction

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Zou, Z. H.; Zhao, Y. F.

    2016-08-01

    Dissolved oxygen (DO) is closely related to water self-purification capacity. In order to better forecast its concentration, the chaotic prediction model, based on the wavelet transform, is proposed and applied to a certain monitoring section of the Mentougou area of the Haihe River Basin. The result is compared with the simple application of the chaotic prediction model. The study indicates that the new model aligns better with the real data and has a higher accuracy. Therefore, it will provide significant decision support for water protection and water environment treatment.
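    The record above gives no implementation details, but the general recipe of wavelet preprocessing followed by chaotic (phase-space) prediction can be sketched as below: a one-level Haar transform smooths the series, and the next value is predicted from the successors of the nearest past states in a delay embedding. The DO series, embedding parameters, and neighbor count are all illustrative assumptions.

```python
import numpy as np

def haar_denoise(x):
    # One-level Haar DWT: keep the approximation band, zero the detail band.
    n = len(x) - len(x) % 2
    a = (x[:n:2] + x[1:n:2]) / 2.0        # approximation coefficients
    smooth = np.repeat(a, 2)              # inverse transform with details zeroed
    return np.concatenate([smooth, x[n:]])

def chaotic_predict(x, m=3, tau=1, k=3):
    # Local nearest-neighbour prediction in a delay-embedded phase space.
    emb = np.array([x[i:i + m * tau:tau] for i in range(len(x) - m * tau)])
    targets = x[m * tau:]                 # value following each embedded state
    query = x[-m * tau::tau]              # most recent state
    d = np.linalg.norm(emb[:-1] - query, axis=1)
    idx = np.argsort(d)[:k]               # k nearest past states
    return targets[idx].mean()            # average of their successors

rng = np.random.default_rng(0)
# Illustrative noisy periodic series standing in for a DO record (mg/L).
t = np.arange(400)
do = 8 + np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(400)
pred = chaotic_predict(haar_denoise(do))
```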

  10. A condensed variational model for thermo-mechanically coupled phase transformations in polycrystalline shape memory alloys

    NASA Astrophysics Data System (ADS)

    Junker, Philipp; Hackl, Klaus

    2013-11-01

    We derive an energy-based material model for thermomechanically coupled phase transformations in polycrystalline shape memory alloys. For the variational formulation of the model, we use the principle of the minimum of the dissipation potential for nonisothermal processes for which only a minimal number of constitutive assumptions has to be made. By introducing a condensed formulation for the representative orientation distribution function, the resulting material model is numerically highly efficient. For a first analysis, we present the results of material point calculations, where the evolution of temperature as well as its influence on the mechanical material response is investigated.

  11. The ecological model web concept: A consultative infrastructure for researchers and decision makers using a Service Oriented Architecture

    NASA Astrophysics Data System (ADS)

    Geller, Gary

    2010-05-01

    Rapid climate and socioeconomic changes may be outrunning society's ability to understand, predict, and respond to change effectively. Decision makers such as natural resource managers want better information about what these changes will be and how the resources they are managing will be affected. Researchers want better understanding of the components and processes of ecological systems, how they interact, and how they respond to change. Nearly all these activities require computer models to make ecological forecasts that can address "what if" questions. However, despite many excellent models in ecology and related disciplines, there is no coordinated model system--that is, a model infrastructure--that researchers or decision makers can consult to gain insight on important ecological questions or help them make decisions. While this is partly due to the complexity of the science, to a lack of critical observations, and other issues, limited access to and sharing of models and model outputs is a factor as well. An infrastructure that increased access to and sharing of models and model outputs would benefit researchers, decision makers of all kinds, and modelers. One path to such a "consultative infrastructure" for ecological forecasting is called the Model Web, a concept for an open-ended system of interoperable computer models and databases communicating using a Service Oriented Architecture (SOA). Initially, it could consist of a core of several models, perhaps made interoperable retroactively, and then it could grow gradually as new models or databases were added. Because some models provide basic information of use to many other models, such as simple physical parameters, these "keystone" models are of particular importance in a model web. In the long run, a model web would not be rigidly planned and built--instead, like the World Wide Web, it would grow largely organically, with limited central control, within a framework of broad goals and data exchange

  12. Project Integration Architecture: Architectural Overview

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2001-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. By being a single, self-revealing architecture, the ability to develop single tools, for example a single graphical user interface, to span all applications is enabled. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications becomes possible. Object-encapsulation further allows information to become, in a sense, self-aware, knowing things such as its own dimensionality and providing functionality appropriate to its kind.

  13. L-py: an L-system simulation framework for modeling plant architecture development based on a dynamic language.

    PubMed

    Boudon, Frédéric; Pradal, Christophe; Cokelaer, Thomas; Prusinkiewicz, Przemyslaw; Godin, Christophe

    2012-01-01

    The study of plant development requires increasingly powerful modeling tools to help understand and simulate the growth and functioning of plants. In the last decade, the formalism of L-systems has emerged as a major paradigm for modeling plant development. Previous implementations of this formalism were made based on static languages, i.e., languages that require explicit definition of variable types before using them. These languages are often efficient but involve quite a lot of syntactic overhead, thus restricting the flexibility of use for modelers. In this work, we present an adaptation of L-systems to the Python language, a popular and powerful open-license dynamic language. We show that the use of dynamic language properties makes it possible to enhance the development of plant growth models: (i) by keeping a simple syntax while allowing for high-level programming constructs, (ii) by making code execution easy and avoiding compilation overhead, (iii) by allowing a high level of model reusability and the building of complex modular models, and (iv) by providing powerful solutions to integrate MTG data-structures (that are a common way to represent plants at several scales) into L-systems, thus enabling the use of a wide spectrum of computer tools based on MTGs developed for plant architecture. We then illustrate the use of L-Py in real applications to build complex models or to teach plant modeling in the classroom.
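    The rewriting core that underlies any L-system implementation fits in a few lines of plain Python; L-Py layers parametric modules, a turtle interpretation, and MTG integration on top of this idea. A minimal sketch:

```python
# Minimal context-free L-system rewriter: repeatedly replace each symbol
# by its production, leaving symbols without a rule unchanged.
def derive(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's classic algae system: A -> AB, B -> A.
algae = derive("A", {"A": "AB", "B": "A"}, 5)
# A simple bracketed system for a branching plant axis.
plant = derive("X", {"X": "F[+X][-X]FX", "F": "FF"}, 2)
```

Strings like `plant` are then fed to a turtle interpreter (F = draw segment, +/- = turn, brackets = push/pop state) to produce geometry.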

  15. Transform Faults and Lithospheric Structure: Insights from Numerical Models and Shipboard and Geodetic Observations

    NASA Astrophysics Data System (ADS)

    Takeuchi, Christopher S.

    In this dissertation, I study the influence of transform faults on the structure and deformation of the lithosphere, using shipboard and geodetic observations as well as numerical experiments. I use marine topography, gravity, and magnetics to examine the effects of the large age-offset Andrew Bain transform fault on accretionary processes within two adjacent segments of the Southwest Indian Ridge. I infer from morphology, high gravity, and low magnetization that the extremely cold and thick lithosphere associated with the Andrew Bain strongly suppresses melt production and crustal emplacement to the west of the transform fault. These effects are counteracted by enhanced temperature and melt production near the Marion Hotspot, east of the transform fault. I use numerical models to study the development of lithospheric shear zones underneath continental transform faults (e.g. the San Andreas Fault in California), with a particular focus on thermomechanical coupling and shear heating produced by long-term fault slip. I find that these processes may give rise to long-lived localized shear zones, and that such shear zones may in part control the magnitude of stress in the lithosphere. Localized ductile shear participates in both interseismic loading and postseismic relaxation, and predictions of models including shear zones are within observational constraints provided by geodetic and surface heat flow data. I numerically investigate the effects of shear zones on three-dimensional postseismic deformation. I conclude that the presence of a thermally-activated shear zone minimally impacts postseismic deformation, and that thermomechanical coupling alone is unable to generate sufficient localization for postseismic relaxation within a ductile shear zone to kinematically resemble that by aseismic fault creep (afterslip). I find that the current record of geodetic observations of postseismic deformation does not provide robust discriminating power between candidate linear and

  16. Transformer modeling for low- and mid-frequency electromagnetic transients simulation

    NASA Astrophysics Data System (ADS)

    Lambert, Mathieu

    In this work, new models are developed for single-phase and three-phase shell-type transformers for the simulation of low-frequency transients, with the use of the coupled leakage model. This approach has the advantage that it avoids the use of fictitious windings to connect the leakage model to a topological core model, while giving the same response in short-circuit as the indefinite admittance matrix (BCTRAN) model. To further increase the model sophistication, it is proposed to divide windings into coils in the new models. However, short-circuit measurements between coils are never available. Therefore, a novel analytical method is elaborated for this purpose, which allows the calculation in 2-D of short-circuit inductances between coils of rectangular cross-section. The results of this new method are in agreement with the results obtained from the finite element method in 2-D. Furthermore, the assumption that the leakage field is approximately 2-D in shell-type transformers is validated with a 3-D simulation. The outcome of this method is used to calculate the self and mutual inductances between the coils of the coupled leakage model and the results are showing good correspondence with terminal short-circuit measurements. Typically, leakage inductances in transformers are calculated from short-circuit measurements and the magnetizing branch is calculated from no-load measurements, assuming that leakages are unimportant for the unloaded transformer and that magnetizing current is negligible during a short-circuit. While the core is assumed to have an infinite permeability to calculate short-circuit inductances, and it is a reasonable assumption since the core's magnetomotive force is negligible during a short-circuit, the same reasoning does not necessarily hold true for leakage fluxes in no-load conditions. This is because the core starts to saturate when the transformer is unloaded. To take this into account, a new analytical method is developed in this

  17. A biologically inspired neural network model to transformation invariant object recognition

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Li, Yaqin; Siddiqui, Faraz

    2007-09-01

    Transformation invariant image recognition has been an active research area due to its widespread applications in a variety of fields such as military operations, robotics, medical practices, geographic scene analysis, and many others. The primary goal for this research is detection of objects in the presence of image transformations such as changes in resolution, rotation, translation, scale and occlusion. We investigate a biologically-inspired neural network (NN) model for such transformation-invariant object recognition. In a classical training-testing setup for NN, the performance is largely dependent on the range of transformation or orientation involved in training. However, an even more serious dilemma is that there may not be enough training data available for successful learning or even no training data at all. To alleviate this problem, a biologically inspired reinforcement learning (RL) approach is proposed. In this paper, the RL approach is explored for object recognition with different types of transformations such as changes in scale, size, resolution and rotation. The RL is implemented in an adaptive critic design (ACD) framework, which approximates neuro-dynamic programming using an action network and a critic network. Two ACD algorithms, Heuristic Dynamic Programming (HDP) and Dual Heuristic Dynamic Programming (DHP), are investigated to obtain transformation invariant object recognition. The two learning algorithms are evaluated statistically using simulated transformations in images as well as with a large-scale UMIST face database with pose variations. In the face database authentication case, the 90° out-of-plane rotation of faces from 20 different subjects in the UMIST database is used. Our simulations show promising results for both designs for transformation-invariant object recognition and authentication of faces. Comparing the two algorithms, DHP outperforms HDP in learning capability, as DHP takes fewer steps to

  18. A 3-D constitutive model for pressure-dependent phase transformation of porous shape memory alloys.

    PubMed

    Ashrafi, M J; Arghavani, J; Naghdabadi, R; Sohrabpour, S

    2015-02-01

    Porous shape memory alloys (SMAs) exhibit the interesting characteristics of porous metals together with the shape memory effect and pseudo-elasticity of SMAs that make them appropriate for biomedical applications. In this paper, a 3-D phenomenological constitutive model for the pseudo-elastic behavior and shape memory effect of porous SMAs is developed within the framework of irreversible thermodynamics. Compared to micromechanical and computational models, the proposed model is computationally cost effective and predicts the behavior of porous SMAs under proportional and non-proportional multiaxial loadings. Considering the pressure dependency of phase transformation in porous SMAs, proper internal variables, free energy and limit functions are introduced. With the aim of numerical implementation, time discretization and solution algorithm for the proposed model are also presented. Due to lack of enough experimental data on multiaxial loadings of porous SMAs, we employ a computational simulation method (CSM) together with available experimental data to validate the proposed constitutive model. The method is based on a 3-D finite element model of a representative volume element (RVE) with random pores pattern. Good agreement between the numerical predictions of the model and CSM results is observed for elastic and phase transformation behaviors in various thermomechanical loadings.

  19. Sacrificial template-directed synthesis of mesoporous magnesium oxide architectures with superior performance for organic dye adsorption [corrected].

    PubMed

    Ai, Lunhong; Yue, Haitao; Jiang, Jing

    2012-09-01

    Mesoporous MgO architectures were successfully synthesized by the direct thermal transformation of the sacrificial oxalate template. The as-prepared mesoporous architectures were characterized by X-ray diffraction (XRD), scanning electronic microscopy (SEM), transmission electron microscopy (TEM), X-ray energy dispersive spectroscopy (EDS), Fourier transform infrared spectroscopy (FTIR), and nitrogen adsorption-desorption techniques. The MgO architectures showed extraordinary adsorption capacity and rapid adsorption rate for removal of Congo red (CR) from water. The maximum adsorption capacity of the MgO architectures toward CR reached 689.7 mg g⁻¹, much higher than most of the previously reported hierarchical adsorbents. The CR removal process was found to obey the Langmuir adsorption model and its kinetics followed pseudo-second-order rate equation. The superior adsorption performance of the mesoporous MgO architectures could be attributed to the unique mesoporous structure, high specific surface area as well as strong electrostatic interaction.
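    The isotherm fit reported above can be illustrated with the common linearized form of the Langmuir model, Ce/qe = Ce/qm + 1/(KL·qm). The equilibrium data below are synthetic, generated from the paper's reported capacity (689.7 mg/g) and an assumed KL, purely to demonstrate the fitting procedure.

```python
import numpy as np

def langmuir_fit(Ce, qe):
    # Linearized Langmuir isotherm: Ce/qe = Ce/qm + 1/(KL*qm).
    # A straight-line fit of Ce/qe vs Ce yields slope = 1/qm and
    # intercept = 1/(KL*qm).
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    qm = 1.0 / slope
    KL = slope / intercept
    return qm, KL

# Synthetic equilibrium data (NOT the paper's measurements), generated from
# qm = 689.7 mg/g and an assumed KL = 0.05 L/mg.
qm_true, KL_true = 689.7, 0.05
Ce = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 400.0])   # equilibrium conc. (mg/L)
qe = qm_true * KL_true * Ce / (1 + KL_true * Ce)          # adsorbed amount (mg/g)
qm_fit, KL_fit = langmuir_fit(Ce, qe)
```

With noise-free data the linearized fit recovers both parameters; with real measurements one would typically compare this against a nonlinear least-squares fit.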

  20. Phase-field modeling of the beta to omega phase transformation in Zr–Nb alloys

    DOE PAGES

    Yeddu, Hemantha Kumar; Lookman, Turab

    2015-05-01

    A three-dimensional elastoplastic phase-field model is developed, using the Finite Element Method (FEM), for modeling the athermal beta to omega phase transformation in Zr–Nb alloys by including plastic deformation and strain hardening of the material. The microstructure evolution during athermal transformation as well as under different stress states, e.g. uni-axial tensile and compressive, bi-axial tensile and compressive, shear and tri-axial loadings, is studied. The effects of plasticity, stress states and the stress loading direction on the microstructure evolution as well as on the mechanical properties are studied. The input data corresponding to a Zr – 8 at.% Nb alloy are acquired from experimental studies as well as by using the CALPHAD method. Our simulations show that the four different omega variants grow as ellipsoidal shaped particles. Our results show that due to stress relaxation, the athermal phase transformation occurs slightly more readily in the presence of plasticity compared to that in its absence. The evolution of omega phase is different under different stress states, which leads to the differences in the mechanical properties of the material. The variant selection mechanism, i.e. formation of different variants under different stress loading directions, is also nicely captured by our model.
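    The full elastoplastic FEM model is beyond a short example, but its phase-field core, time-dependent Ginzburg-Landau (Allen-Cahn) relaxation of a structural order parameter in a double-well free energy, can be sketched in 1-D. Everything here is dimensionless and illustrative; the actual model adds elasticity, plasticity, and four omega variants in 3-D.

```python
import numpy as np

def evolve(eta, steps=500, dt=0.05, dx=1.0, M=1.0, kappa=1.0):
    # Allen-Cahn / TDGL relaxation: d(eta)/dt = -M*(df/d(eta) - kappa*lap(eta)),
    # with double-well free energy f = eta^2 (1-eta)^2 (eta: 0 = beta, 1 = omega).
    for _ in range(steps):
        lap = np.zeros_like(eta)
        lap[1:-1] = (eta[2:] - 2 * eta[1:-1] + eta[:-2]) / dx**2
        dfdeta = 2 * eta * (1 - eta) * (1 - 2 * eta)
        eta = eta + dt * M * (kappa * lap - dfdeta)
    return eta

x = np.linspace(-20, 20, 128)
eta = evolve(0.5 * (1 + np.tanh(x / 2)))   # a diffuse beta/omega interface relaxes
```

The interface settles toward its equilibrium tanh profile; coupling eta to mechanics (the paper's contribution) adds an elastic/plastic driving force to `dfdeta`.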

  1. Educational transformation in upper-division physics: The Science Education Initiative model, outcomes, and lessons learned

    NASA Astrophysics Data System (ADS)

    Chasteen, Stephanie V.; Wilcox, Bethany; Caballero, Marcos D.; Perkins, Katherine K.; Pollock, Steven J.; Wieman, Carl E.

    2015-12-01

    [This paper is part of the Focused Collection on Upper Division Physics Courses.] In response to the need for a scalable, institutionally supported model of educational change, the Science Education Initiative (SEI) was created as an experiment in transforming course materials and faculty practices at two institutions—University of Colorado Boulder (CU) and University of British Columbia. We find that this departmentally focused model of change, which includes an explicit focus on course transformation as supported by a discipline-based postdoctoral education specialist, was generally effective in impacting courses and faculty across the institution. In CU's Department of Physics, the SEI effort focused primarily on upper-division courses, creating high-quality course materials, approaches, and assessments, and demonstrating an impact on student learning. We argue that the SEI implementation in the CU Physics Department, as compared to that in other departments, achieved more extensive impacts on specific course materials, and high-quality assessments, due to guidance by the physics education research group—but with more limited impact on the departmental faculty as a whole. We review the process and progress of the SEI Physics at CU and reflect on lessons learned in the CU Physics Department in particular. These results are useful in considering both institutional and faculty-led models of change and course transformation.

  2. Phase-field modeling of the beta to omega phase transformation in Zr–Nb alloys

    SciTech Connect

    Yeddu, Hemantha Kumar; Lookman, Turab

    2015-05-01

    A three-dimensional elastoplastic phase-field model is developed, using the Finite Element Method (FEM), for modeling the athermal beta to omega phase transformation in Zr–Nb alloys by including plastic deformation and strain hardening of the material. The microstructure evolution during athermal transformation as well as under different stress states, e.g. uni-axial tensile and compressive, bi-axial tensile and compressive, shear and tri-axial loadings, is studied. The effects of plasticity, stress states and the stress loading direction on the microstructure evolution as well as on the mechanical properties are studied. The input data corresponding to a Zr – 8 at.% Nb alloy are acquired from experimental studies as well as by using the CALPHAD method. Our simulations show that the four different omega variants grow as ellipsoidal shaped particles. Our results show that due to stress relaxation, the athermal phase transformation occurs slightly more readily in the presence of plasticity compared to that in its absence. The evolution of omega phase is different under different stress states, which leads to the differences in the mechanical properties of the material. The variant selection mechanism, i.e. formation of different variants under different stress loading directions, is also nicely captured by our model.

  3. Modeling phase transformation behavior during thermal cycling in the heat-affected zone of stainless steel welds

    SciTech Connect

    Vitek, J.M.; Iskander, Y.S.; David, S.A.

    1995-12-31

    An implicit finite-difference analysis was used to model the diffusion-controlled transformation behavior in a ternary system. The present analysis extends earlier work by examining the transformation behavior under the influence of multiple thermal cycles. The analysis was applied to the Fe-Cr-Ni ternary system to simulate the microstructural development in austenitic stainless steel welds. The ferrite-to-austenite transformation was studied in an effort to model the response of the heat-affected zone to multiple thermal cycles experienced during multipass welding. Results show that under some conditions, a transformation "inertia" exists that delays the system's response when changing from cooling to heating. Conditions under which this "inertia" is most influential were examined. It was also found that under some conditions, the transformation behavior does not follow the equilibrium behavior as a function of temperature. Results also provide some insight into the effect of composition distribution on transformation behavior.
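    A minimal sketch of the implicit finite-difference machinery: one backward-Euler step of 1-D diffusion solved with the Thomas algorithm. This generic single-component example is only illustrative; the paper's analysis treats a ternary Fe-Cr-Ni system with a moving phase boundary.

```python
import numpy as np

def implicit_diffusion_step(c, D, dt, dx):
    # One backward-Euler step of 1-D Fickian diffusion dc/dt = D d2c/dx2,
    # solving the tridiagonal system (I - dt*D*L) c_new = c_old with the
    # Thomas algorithm; zero-flux ends mimic a closed region.
    n = len(c)
    r = D * dt / dx**2
    lower = np.full(n, -r)
    diag = np.full(n, 1 + 2 * r)
    upper = np.full(n, -r)
    diag[0] = diag[-1] = 1 + r            # Neumann (zero-flux) boundaries
    d = c.astype(float).copy()
    for i in range(1, n):                 # forward elimination
        w = lower[i] / diag[i - 1]
        diag[i] -= w * upper[i - 1]
        d[i] -= w * d[i - 1]
    out = np.empty(n)
    out[-1] = d[-1] / diag[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        out[i] = (d[i] - upper[i] * out[i + 1]) / diag[i]
    return out

c = np.where(np.arange(50) < 25, 1.0, 0.0)   # step profile in solute concentration
for _ in range(10):
    c = implicit_diffusion_step(c, D=1.0, dt=0.5, dx=1.0)
```

Being implicit, the step is unconditionally stable, so the thermal-cycle time step is not limited by the diffusion CFL condition; with zero-flux boundaries total solute is conserved exactly.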

  4. Modeling the coupling between martensitic phase transformation and plasticity in shape memory alloys

    NASA Astrophysics Data System (ADS)

    Manchiraju, Sivom

    The thermo-mechanical response of NiTi shape memory alloys (SMAs) is predominantly dictated by two inelastic deformation processes---martensitic phase transformation and plastic deformation. This thesis presents a new microstructural finite element (MFE) model that couples these processes and anisotropic elasticity. The coupling occurs via the stress redistribution induced by each mechanism. The approach includes three key improvements to the literature. First, transformation and plasticity are modeled at a crystallographic level and can occur simultaneously. Second, a rigorous large-strain finite element formulation is used, thereby capturing texture development (crystal rotation). Third, the formulation adopts recent first principle calculations of monoclinic martensite stiffness. The model is calibrated to experimental data for polycrystalline NiTi (49.9 at% Ni). Inputs include anisotropic elastic properties, texture, and DSC data as well as a subset of pseudoelastic and load-biased thermal cycling data. This calibration process provides updated material values---namely, larger self-hardening between similar martensite plates. It is then assessed against additional pseudoelastic and load-biased thermal cycling experimental data and neutron diffraction measurements of martensite texture evolution. Several experimental trends are captured---in particular, the transformation strain during thermal cycling monotonically increases with increasing bias stress, reaching a peak and then decreasing due to intervention of plasticity---a trend which existing MFE models are unable to capture. Plasticity is also shown to enhance stress-induced martensite formation during loading and generate retained martensite upon unloading. The simulations even enable a quantitative connection between deformation processing and two-way shape memory effect. Some experimental trends are not captured---in particular, the ratcheting of macrostrain with repeated thermal cycling. This may

  5. Collaborative Proposal: Transforming How Climate System Models are Used: A Global, Multi-Resolution Approach

    SciTech Connect

    Estep, Donald

    2013-04-15

    Despite the great interest in regional modeling for both weather and climate applications, regional modeling is not yet at the stage that it can be used routinely and effectively for climate modeling of the ocean. The overarching goal of this project is to transform how climate models are used by developing and implementing a robust, efficient, and accurate global approach to regional ocean modeling. To achieve this goal, we will use theoretical and computational means to resolve several basic modeling and algorithmic issues. The first task is to develop techniques for transitioning between parameterized and high-fidelity regional ocean models as the discretization grid transitions from coarse to fine regions. The second task is to develop estimates for the error in scientifically relevant quantities of interest that provide a systematic way to automatically determine where refinement is needed in order to obtain accurate simulations of dynamic and tracer transport in regional ocean models. The third task is to develop efficient, accurate, and robust time-stepping schemes for variable spatial resolution discretizations used in regional ocean models of dynamics and tracer transport. The fourth task is to develop frequency-dependent eddy viscosity finite element and discontinuous Galerkin methods and study their performance and effectiveness for simulation of dynamics and tracer transport in regional ocean models. These four tasks share common difficulties and will be approached using a common computational and mathematical toolbox. This is a multidisciplinary project involving faculty and postdocs from Colorado State University, Florida State University, and Penn State University along with scientists from Los Alamos National Laboratory. The completion of the tasks listed within the discussion of the four sub-projects will go a long way towards meeting our goal of developing superior regional ocean models that will transform how climate system models are used.

  6. 3D modeling of fault-zone architecture and hydraulic structure along a major Alpine wrench lineament: the Pusteria Fault

    NASA Astrophysics Data System (ADS)

    Bistacchi, A.; Massironi, M.; Menegon, L.

    2007-05-01

    The E-W Pusteria (Pustertal) line is the eastern segment of the Periadriatic lineament, the > 600 km tectonic boundary between the Europe and Adria-vergent portions of the Alpine Collisional Orogen. The lithospheric-scale Periadriatic lineament is characterized by a transcurrent polyphase activity of Tertiary age, and is marked by an array of calcalkaline to shoshonitic magmatic bodies. At the map scale, the western edge of the Pusteria line is characterized by a complex network of generally transcurrent brittle fault zones, interconnected by a full spectrum of transtensional and transpressional features related to releasing and restraining bands respectively. An older ductile/brittle sinistral activity can be recognized in some segments of the fault thanks to their relationships with a strongly tectonized Oligocene tonalite/diorite body (Mules tonalitic "lamella"), emplaced along the Pusteria line, and minor related dikes. A late dextral activity involved the whole Pusteria Fault network and is consistent with the Eastward escape of a major lithospheric block of the Eastern Alps towards the Pannonian basin. During its polyphase activity, the fault network developed a complex architecture, showing different kinds of damage and core zones. Here we report the first results of a detailed mapping project in which, in addition to a traditional structural geology work, the spatial distribution of fault rocks in core zones and the degree and characteristics of fracturing (e.g. joint spacing and number of joint sets) in damage zones are taken into account. As regards the quantitative characterization of damage zones, a new description schema, partly inspired by engineering geology classifications, is proposed. The results of this work are implemented in a 3D structural model (developed with gOcad), allowing the study of the complex relationships among the various structural, mechanical and lithological parameters which concur in the development of the fault

  7. Transformation-induced plasticity in high-temperature shape memory alloys: a one-dimensional continuum model

    NASA Astrophysics Data System (ADS)

    Sakhaei, Amir Hosein; Lim, Kian-Meng

    2016-07-01

    A constitutive model based on isotropic plasticity consideration is presented in this work to model the thermo-mechanical behavior of high-temperature shape memory alloys. In high-temperature shape memory alloys (HTSMAs), both martensitic transformation and rate-dependent plasticity (creep) occur simultaneously at high temperatures. Furthermore, transformation-induced plasticity is another deformation mechanism during martensitic transformation. All these phenomena are considered as dissipative processes to model the mechanical behavior of HTSMAs in this study. The constitutive model was implemented for one-dimensional cases, and the results have been compared with experimental data from thermal cycling tests for actuator applications.

  8. Experimental Architecture.

    ERIC Educational Resources Information Center

    Alter, Kevin

    2003-01-01

    Describes the design of the Centre for Architectural Structures and Technology at the University of Manitoba, including the educational context and design goals. Includes building plans and photographs. (EV)

  9. Development of the Architectural Simulation Model for Future Launch Systems and its Application to an Existing Launch Fleet

    NASA Technical Reports Server (NTRS)

    Rabadi, Ghaith

    2005-01-01

    A significant portion of lifecycle costs for launch vehicles are generated during the operations phase. Research indicates that operations costs can account for a large percentage of the total life-cycle costs of reusable space transportation systems. These costs are largely determined by decisions made early during conceptual design. Therefore, operational considerations are an important part of vehicle design and concept analysis process that needs to be modeled and studied early in the design phase. However, this is a difficult and challenging task due to uncertainties of operations definitions, the dynamic and combinatorial nature of the processes, and lack of analytical models and the scarcity of historical data during the conceptual design phase. Ultimately, NASA would like to know the best mix of launch vehicle concepts that would meet the missions launch dates at the minimum cost. To answer this question, we first need to develop a model to estimate the total cost, including the operational cost, to accomplish this set of missions. In this project, we have developed and implemented a discrete-event simulation model using ARENA (a simulation modeling environment) to determine this cost assessment. Discrete-event simulation is widely used in modeling complex systems, including transportation systems, due to its flexibility, and ability to capture the dynamics of the system. The simulation model accepts manifest inputs including the set of missions that need to be accomplished over a period of time, the clients (e.g., NASA or DoD) who wish to transport the payload to space, the payload weights, and their destinations (e.g., International Space Station, LEO, or GEO). A user of the simulation model can define an architecture of reusable or expendable launch vehicles to achieve these missions. Launch vehicles may belong to different families where each family may have it own set of resources, processing times, and cost factors. 
The goal is to capture the required
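
    The ARENA model itself is not reproduced in the abstract; as a minimal illustration of the discrete-event idea (missions queuing for a limited pool of launch-vehicle resources), here is a sketch in Python. All parameters and the simple cost structure are hypothetical, not the authors' model:

```python
import heapq

def simulate_campaign(arrivals, n_pads, proc_days, cost_per_pad_day, cost_per_launch):
    """Each mission waits for a free pad/vehicle, occupies it for proc_days,
    then launches. Returns launch completion times and a toy ops cost."""
    pads = [0.0] * n_pads          # earliest free time of each pad
    heapq.heapify(pads)
    launches = []
    for t in sorted(arrivals):     # process missions in manifest order
        free_at = heapq.heappop(pads)
        start = max(t, free_at)    # wait for both the mission and a free pad
        done = start + proc_days
        heapq.heappush(pads, done)
        launches.append(done)
    ops_cost = len(arrivals) * (proc_days * cost_per_pad_day + cost_per_launch)
    return launches, ops_cost
```

    With two pads, five processing days, and four missions, the third and fourth launches are delayed by resource contention, which is exactly the dynamic, combinatorial effect the abstract argues analytical cost models miss.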

  10. Application of Distribution Transformer Thermal Life Models to Electrified Vehicle Charging Loads Using Monte-Carlo Method: Preprint

    SciTech Connect

    Kuss, M.; Markel, T.; Kramer, W.

    2011-01-01

    Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis reviews the distribution transformer load models developed in the IEC 60076 standard and applies them to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. PHEVs are expected to be charged primarily at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy and thus increasing costs to utilities and consumers. A Monte-Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate is developed, based on the energy needs of plug-in vehicles loading a residential transformer.
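
    The thermal-life models referenced above use an Arrhenius aging-acceleration factor referenced to a 110 °C hot-spot temperature. A minimal Monte-Carlo sketch of the idea follows; the hot-spot estimate and all load constants are illustrative placeholders, not values from the IEC 60076 model:

```python
import math, random

def aging_factor(hotspot_c):
    # Arrhenius insulation aging-acceleration factor, reference hot-spot 110 C (383 K)
    return math.exp(15000.0 / 383.0 - 15000.0 / (hotspot_c + 273.0))

def mc_expected_aging(n_ev, kw_per_ev, base_kw, rated_kw, trials=1000, seed=1):
    """Average aging factor over random EV-charging scenarios."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        charging = sum(random.random() < 0.5 for _ in range(n_ev))  # EVs plugged in tonight
        load_pu = (base_kw + charging * kw_per_ev) / rated_kw
        # crude steady-state hot-spot estimate (illustrative constants only)
        hotspot = 30.0 + 50.0 * load_pu ** 2 + 20.0 * load_pu
        total += aging_factor(hotspot)
    return total / trials
```

    At 110 °C the factor is 1 (nominal aging); adding vehicles to the same transformer raises the expected aging rate, which is the overload effect the abstract quantifies.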

  11. Improved spread transform dither modulation based on robust perceptual just noticeable distortion model

    NASA Astrophysics Data System (ADS)

    Wan, Wenbo; Liu, Ju; Sun, Jiande; Ge, Chuan; Nie, Xiushan; Gao, Di

    2015-03-01

    In the quantization-based watermarking framework, the perceptual just noticeable distortion (JND) model has been widely used to determine the quantization step size, as it can provide a better tradeoff between fidelity and robustness. However, the calculated JND values can vary due to changes introduced by watermark embedding. As a result, the mismatch problem will lead to watermark extraction errors in the absence of attacks. We present an improved spread transform dither modulation (STDM) watermarking scheme. Performance improvement with respect to the existing algorithm is obtained by a discrete cosine transform (DCT)-based perceptual JND model that is highly compatible with the STDM watermarking algorithm. The proposed scheme not only incorporates various masking effects of human visual perception, but also avoids the mismatch problem by utilizing a new measurement of the pixel intensity and edge strength. In contrast to conventional JND models, the proposed model is theoretically invariant to changes introduced by watermark embedding and is therefore better suited to quantization-based watermarking. Experimental results confirm the improved robustness performance of the JND model in the STDM watermarking framework. Simulation results show that the proposed scheme is more robust than the existing JND model-based watermarking algorithms with uniform fidelity. Furthermore, our proposed scheme has a superior performance compared with previously proposed perceptual STDM schemes.
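
    For readers unfamiliar with STDM: the host signal is projected onto a spread vector and the projection is quantized with one of two dithered quantizer cosets, one per bit. A bare-bones sketch (without the paper's JND-adaptive step size, which is the actual contribution):

```python
import numpy as np

def stdm_embed(x, u, bit, delta):
    """Embed one bit by quantizing the projection of x onto u with step delta."""
    u = u / np.linalg.norm(u)
    p = x @ u
    d = bit * delta / 2.0                          # coset offset for bit 0/1
    pq = delta * np.round((p - d) / delta) + d     # nearest lattice point of that coset
    return x + (pq - p) * u                        # move x minimally along u

def stdm_extract(y, u, delta):
    """Decode by finding which coset lattice is closer to the projection."""
    u = u / np.linalg.norm(u)
    p = y @ u
    dists = []
    for bit in (0, 1):
        d = bit * delta / 2.0
        pq = delta * np.round((p - d) / delta) + d
        dists.append(abs(p - pq))
    return int(np.argmin(dists))
```

    The mismatch problem the abstract describes arises when delta is derived from a JND model whose value changes after embedding; the paper's fix is a JND measurement that stays (theoretically) invariant under embedding.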

  12. Modelling and Investigating Dune Transformations Driven by Vegetation and Environmental Change

    NASA Astrophysics Data System (ADS)

    Yan, Na; Baas, Andreas

    2013-04-01

    Despite growing perception of the significant role of vegetation in shaping distinct landscapes in aeolian systems, the complex eco-geomorphic interrelationships between vegetation and dune landforms are not well understood. Projections of future climatic change, meanwhile, in particular increased temperature and drought severity, raise concerns that widespread aeolian activity may intensify as a result of semi-stabilised dunes transforming to highly mobile forms. Computer modelling of aeolian landscapes and sand transport processes has been in wide use in the past decade, due to its capability of bridging the gap between different temporal and spatial scales. Numerical simulations serve as an important tool to investigate and explore theoretical foundations underlying distinctive landscape patterns and their response to perturbations arising from both natural and anthropogenic impacts. This research focuses on modelling and understanding the transformation of a semi-fixed parabolic dunefield with shrubs and nebkhas into a highly mobile barchanoid dunefield, and tries to clarify the fundamental mechanisms underlying dunefield reactivation and transformation driven by vegetation and environmental change in Inner Mongolia, China. Vegetation distribution and topography maps of a number of parabolic dunes on the Ordos Plateau were acquired using quadrat surveys and d-GPS. Sampling transects were established along longitudinal sections, cross sections and lee slopes. Historical trajectories of vegetation and morphologic change of two active parabolic dunes were determined by analysing three satellite RS images in 2005, 2007 and 2010. Vegetation density maps and potential sand transport rates were estimated by combining the DEM acquired from the field and the migration rate determined from the remote sensing image interpretation. Based on this fieldwork investigation, remote sensing image interpretation, and local climatic context analysis, the DECAL (Discrete Eco

  13. Problems in mechanistic theoretical models for cell transformation by ionizing radiation

    SciTech Connect

    Chatterjee, A.; Holley, W.R.

    1991-10-01

    A mechanistic model based on yields of double strand breaks has been developed to determine the dose response curves for cell transformation frequencies. At its present stage the model is applicable to immortal cell lines and to various qualities (X-rays, Neon and Iron) of ionizing radiation. Presently, we have considered four types of processes which can lead to activation phenomena: (1) point mutation events on a regulatory segment of selected oncogenes, (2) inactivation of suppressor genes, through point mutation, (3) deletion of a suppressor gene by a single track, and (4) deletion of a suppressor gene by two tracks.
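
    Of the four activation processes listed, (1)-(3) are single-track events (linear in dose) while (4) requires two tracks (quadratic in dose), so the overall dose response takes a linear-quadratic form at low doses. A toy sketch with hypothetical coefficients, not the authors' fitted model:

```python
def transformation_frequency(dose_gy, a_single=1e-5, b_two_track=2e-6):
    """Illustrative dose response: single-track events (point mutation,
    suppressor inactivation, single-track deletion) scale linearly with
    dose; two-track suppressor deletion scales quadratically."""
    return a_single * dose_gy + b_two_track * dose_gy ** 2
```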

  14. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

    PubMed Central

    Zhao, Guoliang; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897

  15. Tensor product model transformation based adaptive integral-sliding mode controller: equivalent control method.

    PubMed

    Zhao, Guoliang; Sun, Kaibiao; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model.
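
    The core idea in both versions of this paper, a sliding-mode control gain that adapts online because the disturbance bound is unknown, can be illustrated on a scalar plant (this toy example omits the tensor product model transformation and integral surface that are the papers' actual machinery):

```python
import math

def simulate_asmc(x0=1.0, gamma=50.0, dt=1e-4, t_end=2.0):
    """First-order uncertain plant x' = d(t) + u with unknown |d| <= 1.
    Sliding surface s = x; adaptive gain K' = gamma*|s|; control u = -K*sign(s)."""
    x, K, t = x0, 0.0, 0.0
    while t < t_end:
        d = math.sin(5.0 * t)                       # unknown bounded disturbance
        s = x
        u = -K * (1 if s > 0 else -1 if s < 0 else 0)
        x += (d + u) * dt                           # Euler step of the plant
        K += gamma * abs(s) * dt                    # gain grows until sliding starts
        t += dt
    return x, K
```

    The gain ramps up only while the state is off the surface, so it settles at a value just large enough to dominate the disturbance, the "reasonable adaptive gain" property claimed in the abstract.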

  16. Modification of Yoshida-Uemori Model with Consideration of Transformation-Induced Plasticity Effect

    NASA Astrophysics Data System (ADS)

    Hu, Jun; Knoerr, Lay; Abu-Farha, Fadi

    2016-08-01

    Transformation-induced plasticity (TRIP) assisted steels possess improved strain hardening behavior and resistance to necking that are favorable for automotive body applications. However, the TRIP effect causes complex springback behavior of these steels that can hardly be predicted by existing constitutive models for other steels. In this work, the functions in the original Yoshida-Uemori model describing isotropic and kinematic hardening were modified by adding new parameters that can represent the TRIP effect. Cyclic tension/compression experiments were performed on a selected TRIP-steel grade, and the results were used to calibrate the modified model. The modified model was coded via user subroutine into a commercial FE solver. The springback predictions were compared with actual try-out stamping experimental results to highlight the improvement in predictions with the modified model.
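
    The modified Yoshida-Uemori functions are not given in the abstract. As a generic illustration of how a TRIP term augments a hardening law, here is a Voce-type flow stress plus a contribution proportional to a strain-induced martensite fraction; all constants and the saturation form are hypothetical:

```python
import math

def flow_stress(eps_p, sigma0=400.0, Q=250.0, b=20.0, dH=300.0, k=8.0):
    """Illustrative flow stress (MPa): Voce isotropic hardening plus a TRIP
    term proportional to the martensite fraction f_m = 1 - exp(-k*eps_p)."""
    voce = Q * (1.0 - math.exp(-b * eps_p))
    f_m = 1.0 - math.exp(-k * eps_p)       # strain-induced martensite fraction
    return sigma0 + voce + dH * f_m
```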

  17. Micro CT Analysis of Spine Architecture in a Mouse Model of Scoliosis

    PubMed Central

    Gao, Chan; Chen, Brian P.; Sullivan, Michael B.; Hui, Jasmine; Ouellet, Jean A.; Henderson, Janet E.; Saran, Neil

    2015-01-01

    Objective: Mice homozygous for targeted deletion of the gene encoding fibroblast growth factor receptor 3 (FGFR3−/−) develop kyphoscoliosis by 2 months of age. The first objective of this study was to use high resolution X-ray to characterize curve progression in vivo and micro CT to quantify spine architecture ex vivo in FGFR3−/− mice. The second objective was to determine if slow release of the bone anabolic peptide parathyroid hormone related protein (PTHrP-1-34) from a pellet placed adjacent to the thoracic spine could inhibit progressive kyphoscoliosis. Materials and methods: Pellets loaded with placebo or PTHrP-1-34 were implanted adjacent to the thoracic spine of 1-month-old FGFR3−/− mice obtained from in-house breeding. X-rays were captured at monthly intervals up to 4 months to quantify curve progression using the Cobb method. High resolution post-mortem scans of FGFR3−/− and FGFR3+/+ spines, from C5/6 to L4/5, were captured to evaluate the 3D structure, rotation, and micro-architecture of the affected vertebrae. Un-decalcified and decalcified histology were performed on the apical and adjacent vertebrae of FGFR3−/− spines, and the corresponding vertebrae from FGFR3+/+ spines. Results: The mean Cobb angle was significantly greater at all ages in FGFR3−/− mice compared with wild type mice and appeared to stabilize around skeletal maturity at 4 months. 3D reconstructions of the thoracic spine of 4-month-old FGFR3−/− mice treated with PTHrP-1-34 revealed correction of left/right asymmetry, vertebral rotation, and lateral displacement compared with mice treated with placebo. Histologic analysis of the apical vertebrae confirmed correction of the asymmetry in PTHrP-1-34 treated mice, in the absence of any change in bone volume, and a significant reduction in the wedging of intervertebral disks (IVD) seen in placebo treated mice. Conclusion: Local treatment of the thoracic spine of juvenile FGFR3−/− mice with a bone anabolic
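
    The Cobb method used above measures curvature as the angle between the endplate lines of the two end vertebrae of a curve. A small sketch of that geometry (2D endpoint coordinates; a simplification of how it would be measured on a radiograph):

```python
import math

def cobb_angle(p1, p2, q1, q2):
    """Cobb angle (degrees) between the line through the superior endplate
    of the upper end vertebra (p1-p2) and the line through the inferior
    endplate of the lower end vertebra (q1-q2)."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    ang = abs(math.degrees(a1 - a2)) % 180.0
    return min(ang, 180.0 - ang)   # lines have no direction, so fold to [0, 90]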

  18. Architecture and morphology of coral reef sequences. Modeling and observations from uplifting islands of SE Sulawesi, Indonesia

    NASA Astrophysics Data System (ADS)

    Pastier, Anne-Morwenn; Husson, Laurent; Bezos, Antoine; Pedoja, Kevin; Elliot, Mary; Hafidz, Abdul; Imran, Muhammad; Lacroix, Pascal; Robert, Xavier

    2016-04-01

    During the Late Neogene, sea level oscillations have profoundly shaped the morphology of the coastlines of intertropical zones, wherein relative sea level simultaneously controlled reef expansion and erosion of earlier reef bodies. In uplifted domains like SE Sulawesi, the sequences of fossil reefs display a variety of fossil morphologies. Similarly, the morphologies of the modern reefs are highly variable, including cliff notches, narrow fringing reefs, wide flat terraces, and barrier reefs. In this region, where uplift rates vary rapidly laterally, the entire set of morphologies is displayed within short distances. We developed a numerical model that predicts the architecture of fossil reef sequences and apply it to observations from SE Sulawesi, accounting -amongst other parameters- for reef growth, coastal erosion, and uplift rates. The observations that we use to calibrate our models are mostly the morphology of both the onshore (dGPS and high-resolution Pleiades DEM) and offshore (sonar) coast, as well as U-Th radiometrically dated coral samples. Our method allows unravelling the spatial and temporal evolution of large domains in map view. Our analysis indicates that the architecture and morphology of uplifting coastlines is almost systematically polyphased (as attested by samples of different ages within a unique terrace), which assigns a primordial role to erosion, comparable to reef growth. Our models also reproduce the variety of modern morphologies, which are chiefly dictated by the uplift rates and the pre-existing morphology of the substratum, itself responding to the joint effects of reef building and subsequent erosion. In turn, we find that fossil and modern morphologies can be inverted for uplift rates rather precisely, as the parametric window of each specific morphology is often narrow.
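
    Models of this family typically track a coastal profile under depth-limited (light-limited) reef growth, eustatic sea-level oscillation, and tectonic uplift. A deliberately crude 0-D sketch of those three ingredients (all rates and the sinusoidal sea-level curve are toy values, not the authors' parameterization, and erosion is omitted):

```python
import math

def reef_elevation(t_end_kyr=100.0, dt=0.1, uplift_m_per_kyr=0.5,
                   g_max=8.0, k_ext=0.1, z0=-30.0):
    """Track reef-crest elevation z (m): growth decays exponentially with
    water depth, uplift raises the substrate, emergence stops growth."""
    z, t = z0, 0.0
    while t < t_end_kyr:
        sea = 10.0 * math.sin(2.0 * math.pi * t / 40.0)   # toy eustatic cycle
        depth = sea - z
        if depth > 0.0:                                    # submerged: accrete
            z += g_max * math.exp(-k_ext * depth) * dt
        z += uplift_m_per_kyr * dt                         # tectonic uplift
        t += dt
    return z
```

    Even this caricature shows the key behaviour: the crest catches up to sea level, is then carried above it by uplift, and each sea-level cycle leaves a distinct (here unrecorded) terrace, the polyphased architecture described above.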

  19. Designing an architectural style for Pervasive Healthcare systems.

    PubMed

    Rafe, Vahid; Hajvali, Masoumeh

    2013-04-01

    Nowadays, Pervasive Healthcare (PH) systems are considered an important research area. These systems have a dynamic structure and configuration; therefore, an appropriate method for designing such systems is necessary. The Publish/Subscribe Architecture (pub/sub) is one of the convenient architectures to support such systems. PH systems are safety critical; hence, errors can bring disastrous results. To prevent such problems, a powerful analytical tool is required, so using a proper formal language like graph transformation systems for developing these systems seems necessary. But even if software engineers use such high level methodologies, errors may occur in the system under design. Hence, whether the system model satisfies all requirements should be investigated automatically and formally. In this paper, a dynamic architectural style for developing PH systems is presented. Then, the behavior of these systems is modeled and evaluated using the GROOVE toolset. The results of the analysis show its high reliability.
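
    The publish/subscribe style mentioned above decouples components: producers publish events to topics without knowing the consumers. A minimal broker sketch (generic pub/sub, not the paper's graph-transformation-based style; the topic name is invented):

```python
from collections import defaultdict

class Broker:
    """Minimal publish/subscribe broker: components register callbacks
    for topics and receive every event later published on them."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, event):
        for cb in self._subs[topic]:   # deliver to all current subscribers
            cb(event)
```

    In a PH setting, a vital-signs sensor would publish on a "vitals" topic and an alarm service would subscribe, and either side can be added or removed at runtime, which is the dynamic reconfiguration the architectural style must support.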

  20. A functional–structural model for radiata pine (Pinus radiata) focusing on tree architecture and wood quality

    PubMed Central

    Fernández, M. Paulina; Norero, Aldo; Vera, Jorge R.; Pérez, Eduardo

    2011-01-01

    Background and Aims Functional–structural models are interesting tools to relate environmental and management conditions with forest growth. Their three-dimensional images can reveal important characteristics of wood used for industrial products. Like virtual laboratories, they can be used to evaluate relationships among species, sites and management, and to support silvicultural design and decision processes. Our aim was to develop a functional–structural model for radiata pine (Pinus radiata) given its economic importance in many countries. Methods The plant model uses the L-system language. The structure of the model is based on operational units, which obey particular rules, and execute photosynthesis, respiration and morphogenesis, according to their particular characteristics. Plant allometry is adhered to so that harmonic growth and plant development are achieved. Environmental signals for morphogenesis are used. Dynamic turnover guides the normal evolution of the tree. Monthly steps allow for detailed information of wood characteristics. The model is independent of traditional forest inventory relationships and is conceived as a mechanistic model. For model parameterization, three databases which generated new information relating to P. radiata were analysed and incorporated. Key Results Simulations under different and contrasting environmental and management conditions were run and statistically tested. The model was validated against forest inventory data for the same sites and times and against true crown architectural data. The performance of the model for 6-year-old trees was encouraging. Total height, diameter and lengths of growth units were adequately estimated. Branch diameters were slightly overestimated. Wood density values were not satisfactory, but the cyclical pattern and increase of growth rings were reasonably well modelled. Conclusions The model was able to reproduce the development and growth of the species based on mechanistic
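
    The L-system language mentioned in the Methods is a parallel string-rewriting formalism: every symbol of the current string is replaced simultaneously by its production rule. A minimal interpreter (the rules shown are the classic Lindenmayer algae example, not the radiata pine model):

```python
def lsystem(axiom, rules, n):
    """Apply the parallel rewriting rules n times to the axiom string.
    Symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(c, c) for c in s)
    return s
```

    Tree architecture models extend this with bracketed symbols, e.g. a rule like "F" -> "F[+F]F" where brackets push/pop branch state, which is how crown topology is generated before the functional (photosynthesis/respiration) layer is evaluated on each unit.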

  1. Development of a Time-Dependent 3-PARAMETER Helmert Datum Transformation Model: a Case Study for Malaysia

    NASA Astrophysics Data System (ADS)

    Gill, J.; Shariff, N. S.; Omar, K. M.; Din, A. H. M.; Amin, Z. M.

    2016-09-01

    This paper aims to develop a time-dependent 3-parameter Helmert datum transformation model for Malaysia as a proposed solution to the current non-geocentric issue of the Geocentric Datum of Malaysia 2000 (GDM2000). Methodologically, the datum transformation model is developed in three parts. Firstly, the time-dependent aspect of the datum transformation model is determined using the tectonic motion velocities computed from linear least squares regression of the long-term time series of MyRTKnet station positions from December 2004 to 2014, whereby the station positions are obtained from high-precision daily double-difference processing of MyRTKnet and IGS stations via Bernese 5.0. Secondly, the 3 translation-only Helmert parameters are derived between the original GDM2000 and GDM2000@2013 - the new datum coordinates which refer to ITRF2008 at epoch 3/7/2013 - via the Bernese 5.0 software. Thirdly, a distortion model is computed in order to minimise the coordinate residuals between the `processed' and `transformed' new datum. The datum transformation model is then validated to determine the reliability of the model. The validation results show that the datum transformation model is within centimetre-level accuracy, i.e., below 3 cm, over Malaysia for forward transformations to year 2014 and 2015. Therefore, this study anticipates that it will contribute as a feasible solution for the GDM2000 issue with consideration of the core concern: the complex tectonic motion of Malaysia.
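
    A translation-only Helmert transformation with a linear time-dependent term has the form X' = X + T + V·(t − t0). A small sketch (the example translations and velocities are made up; the paper's actual parameters and distortion model are not reproduced here):

```python
def helmert_3p_time(xyz, t, translations, velocities, ref_epoch):
    """Forward 3-parameter (translation-only) Helmert transformation with a
    linear time-dependent term: X' = X + T + V * (t - ref_epoch).
    Units: metres for xyz/T, metres per year for V, decimal years for t."""
    dt = t - ref_epoch
    return tuple(x + T + v * dt for x, T, v in zip(xyz, translations, velocities))
```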

  2. Application of a partnership model for transformative and sustainable international development.

    PubMed

    Powell, Dorothy L; Gilliss, Catherine L; Hewitt, Hermi H; Flint, Elizabeth P

    2010-01-01

    There are differences of intent and impact between short-term and long-term engagement of U.S. academic institutions with communities of need in developing nations. Global health programs that produce long-term transformative change rather than transient relief are more likely to be sustainable and in ethical harmony with expressed needs of a region or community. This article explores characteristics of successful ethical partnerships in global health and the challenges that threaten them, introducing a consensus community engagement model as a framework for building relationships, evolving an understanding of needs, and collaboratively developing solutions and responses to priority health needs in underserved regions of the world. The community engagement model is applied to a case study of an initiative by a U.S. school of nursing to establish long-term relationships with the nursing community in the Caribbean region with the goal of promoting transformative change through collaborative development of programs and services addressing health care needs of the region's growing elderly population and the increasing prevalence of noncommunicable chronic diseases. Progress of this ongoing long-term relationship is analyzed in the context of the organizational, philosophical, ethical, and resource commitments embodied in this approach to initiation of transformative and sustainable improvements in public health.

  3. Policy insights from the nutritional food market transformation model: the case of obesity prevention.

    PubMed

    Struben, Jeroen; Chan, Derek; Dubé, Laurette

    2014-12-01

    This paper presents a system dynamics policy model of nutritional food market transformation, tracing over-time interactions between the nutritional quality of supply, consumer food choice, population health, and governmental policy. Applied to the Canadian context and with body mass index as the primary outcome, we examine policy portfolios for obesity prevention, including (1) industry self-regulation efforts, (2) health- and nutrition-sensitive governmental policy, and (3) efforts to foster health- and nutrition-sensitive innovation. This work provides novel theoretical and practical insights on drivers of nutritional market transformations, highlighting the importance of integrative policy portfolios to simultaneously shift food demand and supply for successful and self-sustaining nutrition and health sensitivity. We discuss model extensions for deeper and more comprehensive linkages of nutritional food market transformation with supply, demand, and policy in agrifood and health/health care. These aim toward system design and policy that can proactively, and with greater impact, scale, and resilience, address single as well as double malnutrition in varying country settings.

  4. Input-to-output transformation in a model of the rat hippocampal CA1 network.

    PubMed

    Olypher, Andrey V; Lytton, William W; Prinz, Astrid A

    2012-01-01

    Here we use computational modeling to gain new insights into the transformation of inputs in hippocampal field CA1. We considered input-output transformation in CA1 principal cells of the rat hippocampus, with activity synchronized by population gamma oscillations. Prior experiments have shown that such synchronization is especially strong for cells within one millimeter of each other. We therefore simulated a one-millimeter patch of CA1 with 23,500 principal cells. We used morphologically and biophysically detailed neuronal models, each with more than 1000 compartments and thousands of synaptic inputs. Inputs came from binary patterns of spiking neurons from field CA3 and entorhinal cortex (EC). On average, each presynaptic pattern initiated action potentials in the same number of CA1 principal cells in the patch. We considered pairs of similar and pairs of distinct patterns. In all the cases CA1 strongly separated input patterns. However, CA1 cells were considerably more sensitive to small alterations in EC patterns compared to CA3 patterns. Our results can be used for comparison of input-to-output transformations in normal and pathological hippocampal networks.
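
    "Pattern separation" here means output patterns overlap less than the input patterns that produced them. A common toy demonstration (a fixed random projection followed by top-k winner-take-all, not the paper's multicompartment model; all sizes are arbitrary):

```python
import numpy as np

def overlap(a, b):
    """Normalized overlap of two binary patterns (1 = identical supports)."""
    return (a & b).sum() / np.sqrt(a.sum() * b.sum())

def separate(patterns, n_out=500, k=25, seed=0):
    """Map each binary input through a fixed random projection and keep the
    top-k most activated output units (winner-take-all)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_out, patterns.shape[1]))
    outs = []
    for p in patterns:
        h = W @ p
        winners = np.zeros(n_out, dtype=int)
        winners[np.argsort(h)[-k:]] = 1
        outs.append(winners)
    return np.array(outs)
```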

  5. Modeling along-axis variations in fault architecture in the Main Ethiopian Rift: implications for Nubia-Somalia kinematics

    NASA Astrophysics Data System (ADS)

    Erbello, Asfaw; Corti, Giacomo; Sani, Federico; Kidane, Tesfaye

    2016-04-01

    The Main Ethiopian Rift (MER), at the northern termination of the East African Rift, is an ideal locale to gain insights into the long-term motion between Nubia and Somalia. The rift is indeed one of the few places along the plate boundary where the deformation is narrow: its evolution is thus strictly related to the kinematics of the two major plates, whereas south of the Turkana depression a two-plate model for the EARS is too simplistic as extension occurs both along the Western and Eastern branches and different microplates are present between the two major plates. Despite its importance, the kinematics responsible for development and evolution of the MER is still a matter of debate: indeed, whereas the Quaternary-present kinematics of rifting is rather well constrained, the plate kinematics driving the initial, Mio-Pliocene stages of extension is still not clear, and different hypotheses have been put forward, including: polyphase rifting, with a change in direction of extension from NW-SE extension to E-W extension; constant Miocene-recent NW-SE extension; constant Miocene-recent NE-SW extension; constant, post-11 Ma extension consistent with the GPS-derived kinematics (i.e., roughly E-W to ESE-WNW). To shed additional light on this controversy and to test these different hypotheses, in this contribution we use new crustal-scale analogue models to analyze the along-strike variations in fault architecture in the MER and their relations with the rift trend, plate motion and the resulting Miocene-recent kinematics of rifting. The extension direction is indeed one of the most important parameters controlling the architecture of continental rifts and the relative abundance and orientation of different fault sets that develop during oblique rifting is typically a function of the angle between the extension direction and the orthogonal to the rift trend (i.e., the obliquity angle). Since the trend of the MER varies along strike, and consequently it is
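
    The obliquity angle defined above (the angle between the extension direction and the normal to the rift trend) is a simple azimuth calculation. A small sketch, with azimuths in degrees measured clockwise from north and treated as undirected lines:

```python
def obliquity(rift_trend_deg, extension_dir_deg):
    """Obliquity angle in degrees: 0 means orthogonal rifting (extension
    perpendicular to the rift trend), larger values mean more oblique."""
    normal = (rift_trend_deg + 90.0) % 180.0          # rift-normal azimuth
    diff = abs(extension_dir_deg % 180.0 - normal) % 180.0
    return min(diff, 180.0 - diff)                    # lines are undirected
```

    Because the MER trend rotates along strike while the plate-motion direction is fixed, the obliquity, and hence the predicted fault-set orientations, varies along the rift, which is what the analogue models exploit to discriminate the kinematic hypotheses.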

  6. Position-specific isotope modeling of organic micropollutants transformation through different reaction pathways.

    PubMed

    Jin, Biao; Rolle, Massimo

    2016-03-01

    The degradation of organic micropollutants occurs via different reaction pathways. Compound specific isotope analysis is a valuable tool to identify such degradation pathways in different environmental systems. We propose a mechanism-based modeling approach that provides a quantitative framework to simultaneously evaluate concentration as well as bulk and position-specific multi-element isotope evolution during the transformation of organic micropollutants. The model explicitly simulates position-specific isotopologues for those atoms that experience isotope effects and, thereby, provides a mechanistic description of isotope fractionation occurring at different molecular positions. To demonstrate specific features of the modeling approach, we simulated the degradation of three selected organic micropollutants: dichlorobenzamide (BAM), isoproturon (IPU) and diclofenac (DCF). The model accurately reproduces the multi-element isotope data observed in previous experimental studies. Furthermore, it precisely captures the dual element isotope trends characteristic of different reaction pathways as well as their range of variation consistent with observed bulk isotope fractionation. It was also possible to directly validate the model capability to predict the evolution of position-specific isotope ratios with available experimental data. Therefore, the approach is useful both for a mechanism-based evaluation of experimental results and as a tool to explore transformation pathways in scenarios for which position-specific isotope data are not yet available.

  7. Position-specific isotope modeling of organic micropollutants transformation through different reaction pathways.

    PubMed

    Jin, Biao; Rolle, Massimo

    2016-03-01

    The degradation of organic micropollutants occurs via different reaction pathways. Compound specific isotope analysis is a valuable tool to identify such degradation pathways in different environmental systems. We propose a mechanism-based modeling approach that provides a quantitative framework to simultaneously evaluate concentration as well as bulk and position-specific multi-element isotope evolution during the transformation of organic micropollutants. The model explicitly simulates position-specific isotopologues for those atoms that experience isotope effects and, thereby, provides a mechanistic description of isotope fractionation occurring at different molecular positions. To demonstrate specific features of the modeling approach, we simulated the degradation of three selected organic micropollutants: dichlorobenzamide (BAM), isoproturon (IPU) and diclofenac (DCF). The model accurately reproduces the multi-element isotope data observed in previous experimental studies. Furthermore, it precisely captures the dual element isotope trends characteristic of different reaction pathways as well as their range of variation consistent with observed bulk isotope fractionation. It was also possible to directly validate the model capability to predict the evolution of position-specific isotope ratios with available experimental data. Therefore, the approach is useful both for a mechanism-based evaluation of experimental results and as a tool to explore transformation pathways in scenarios for which position-specific isotope data are not yet available. PMID:26708763
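
    Bulk isotope evolution during degradation is commonly described by the Rayleigh model, and the "dual element isotope trend" mentioned above is the slope between two elements' isotope shifts. A minimal sketch of those two relationships (the paper's position-specific isotopologue bookkeeping is more elaborate; ε values below are placeholders):

```python
import math

def delta_after(delta0_permil, eps_permil, f_remaining):
    """Rayleigh model: delta value (per mil) of the remaining substrate
    fraction f, for a pathway with enrichment factor eps (per mil)."""
    return delta0_permil + eps_permil * math.log(f_remaining)

def dual_element_slope(eps_c, eps_n):
    """Slope of the dual-element (e.g. delta15N vs delta13C) trend, which
    is characteristic of the reaction pathway."""
    return eps_n / eps_c
```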

  8. An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles

    NASA Technical Reports Server (NTRS)

    Brown, Clifford; Bridges, James

    2003-01-01

    Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.
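
    The basic model-to-full-scale transformation holds the Strouhal number St = f·D/U fixed, so frequencies scale down by the geometric scale factor, while levels are adjusted for source size and observer distance. A hedged sketch of the common first-order corrections (this is the textbook scaling, not the paper's specific procedure):

```python
import math

def model_to_full_scale(freq_hz, spl_db, scale, r_model_m, r_full_m):
    """Map one model-scale spectral point to full scale:
    - Strouhal scaling: f_full = f_model / scale (same jet conditions)
    - level: +20*log10(scale) for source size, spherical spreading to r_full."""
    f_full = freq_hz / scale
    spl_full = spl_db + 20.0 * math.log10(scale * r_model_m / r_full_m)
    return f_full, spl_full
```

    The paper's finding, that spectra transformed this way disagree with flight data in shape and absolute level, is precisely a statement that these first-order corrections are not sufficient.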

  9. In search of improving the numerical accuracy of the k - ɛ model by a transformation to the k - τ model

    NASA Astrophysics Data System (ADS)

    Dijkstra, Yoeri M.; Uittenbogaard, Rob E.; van Kester, Jan A. Th. M.; Pietrzak, Julie D.

    2016-08-01

    This study presents a detailed comparison between the k - ɛ and k - τ turbulence models. It is demonstrated that the numerical accuracy of the k - ɛ turbulence model can be improved in geophysical and environmental high Reynolds number boundary layer flows. This is achieved by transforming the k - ɛ model to the k - τ model, so that both models use the same physical parametrisation. The models therefore only differ in numerical aspects. A comparison between the two models is carried out using four idealised one-dimensional vertical (1DV) test cases. The advantage of a 1DV model is that it is feasible to carry out convergence tests with grids containing 5 to several thousands of vertical layers. It is shown that the k - τ model is more accurate than the k - ɛ model in stratified and non-stratified boundary layer flows for grid resolutions between 10 and 100 layers. The k - τ model also shows a more monotonous convergence behaviour than the k - ɛ model. The price for the improved accuracy is about 20% more computational time for the k - τ model, which is due to additional terms in the model equations. The improved performance of the k - τ model is explained by the linearity of τ in the boundary layer and the better defined boundary condition.
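
    The "linearity of τ" argument can be seen from the standard log-layer relations: with k = u*²/√Cμ and ε = u*³/(κz), the turbulence time scale τ = k/ε = κz/(√Cμ·u*) grows linearly with wall distance z, whereas ε varies like 1/z and is therefore hard to resolve on coarse grids. A quick numerical check of that relation:

```python
import math

C_MU, KAPPA = 0.09, 0.41   # standard k-epsilon model constants

def log_layer_tau(z, u_star):
    """Turbulence time scale tau = k/eps in the logarithmic layer,
    using k = u*^2/sqrt(C_mu) and eps = u*^3/(kappa*z)."""
    k = u_star ** 2 / math.sqrt(C_MU)
    eps = u_star ** 3 / (KAPPA * z)
    return k / eps
```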

  10. Note: Tesla transformer damping

    NASA Astrophysics Data System (ADS)

    Reed, J. L.

    2012-07-01

    Unexpected heavy damping in the two winding Tesla pulse transformer is shown to be due to small primary inductances. A small primary inductance is a necessary condition of operability, but is also a refractory inefficiency. A 30% performance loss is demonstrated using a typical "spiral strip" transformer. The loss is investigated by examining damping terms added to the transformer's governing equations. A significant alteration of the transformer's architecture is suggested to mitigate these losses. Experimental and simulated data comparing the 2 and 3 winding transformers are cited to support the suggestion.

  11. [Study on nitrogen cycling and transformations in a duckweed pond by means of modeling analysis].

    PubMed

    Peng, Jian-feng; Song, Yong-hui; Yuan, Peng; Wang, Bao-zhen

    2006-10-01

    Based on simulated results from a nitrogen cycling and transformation model of a duckweed pond, the influences of the major transfer pathways on nitrogen removal performance are investigated, and the effects of seasonal variations in water conditions on nitrogen transformations are determined. The simulation shows that nitrification and denitrification were the major nitrogen removal pathways in the duckweed pond, while organic nitrogen sedimentation and ammonia volatilization together contributed less than 2.1% of total nitrogen removal. Furthermore, nitrification and denitrification determined the removal efficiencies of ammonia and NOx, respectively; algae decay and organic nitrogen ammonification primarily controlled organic nitrogen removal; and organic nitrogen sedimentation together with mineralization of sedimentary nitrogen determined the variations in sedimentary nitrogen. Vigorous duckweed growth in the pond sharply increases algae mortality and keeps the algae content of the effluent low. In addition, by accelerating the nitrification and denitrification rates, duckweed markedly improves total nitrogen removal efficiency.
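    The pathway bookkeeping described above can be illustrated with a toy first-order mass balance; the rate constants and initial concentrations below are illustrative assumptions, not the calibrated values of the duckweed-pond model:

    ```python
    # Toy nitrogen mass balance: first-order nitrification (NH4 -> NO3)
    # followed by denitrification (NO3 -> N2 gas, i.e. removal from the pond).
    k_nit, k_den = 0.3, 0.2      # rate constants [1/day], assumed values
    nh4, no3 = 10.0, 1.0         # initial concentrations [mg N / L], assumed
    dt, days = 0.01, 30          # explicit Euler time step and horizon [days]

    for _ in range(int(days / dt)):
        nit = k_nit * nh4 * dt   # nitrification flux in this step
        den = k_den * no3 * dt   # denitrification flux (true N removal)
        nh4 -= nit
        no3 += nit - den

    total_n = nh4 + no3          # remaining dissolved inorganic nitrogen
    ```

    In this sketch the nitrification/denitrification chain removes most of the inorganic nitrogen, mirroring the paper's conclusion that these two pathways dominate removal.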

  12. Array CGH data modeling and smoothing in Stationary Wavelet Packet Transform domain

    PubMed Central

    Huang, Heng; Nguyen, Nha; Oraintara, Soontorn; Vo, An

    2008-01-01

    Background Array-based comparative genomic hybridization (array CGH) is a highly efficient technique, allowing the simultaneous measurement of genomic DNA copy number at hundreds or thousands of loci and the reliable detection of local one-copy-level variations. Characterization of these DNA copy number changes is important for both the basic understanding of cancer and its diagnosis. To develop effective methods for identifying aberration regions in array CGH data, much recent research has focused on both smoothing-based and segmentation-based data processing. In this paper, we propose a stationary wavelet packet transform based approach to smooth array CGH data. Our purpose is to remove CGH noise across the whole frequency range while preserving the true signal, by using a bivariate model. Results On both synthetic and real CGH data, the Stationary Wavelet Packet Transform (SWPT) is the best wavelet transform for analyzing the CGH signal across the whole frequency range. We also introduce a new bivariate shrinkage model which captures the relationship between noisy CGH coefficients at two scales of the SWPT. Before smoothing, symmetric extension is applied as a preprocessing step to preserve information at the borders. Conclusion We have designed the SWPT and SWPT-Bi methods, which use the stationary wavelet packet transform with hard thresholding and with the new bivariate shrinkage estimator, respectively, to smooth array CGH data. We demonstrate the effectiveness of our approach through theoretical and experimental exploration of a set of array CGH data, including both synthetic and real data. The comparison results show that our method outperforms previous approaches. PMID:18831782
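    A parent-child bivariate shrinkage rule of the kind described, in the style of the Sendur-Selesnick estimator (the paper's exact estimator may differ), can be sketched as:

    ```python
    import numpy as np

    def bivariate_shrink(y_child, y_parent, sigma_n, sigma):
        """Shrink a wavelet coefficient jointly with its parent at the next
        coarser scale. sigma_n is the noise std and sigma the (local) signal
        std; both are assumed to be estimated elsewhere."""
        mag = np.sqrt(y_child**2 + y_parent**2)
        thresh = np.sqrt(3.0) * sigma_n**2 / sigma
        # Soft joint threshold: zero when the child-parent magnitude is
        # below the threshold, nearly identity when far above it.
        gain = np.maximum(mag - thresh, 0.0) / np.maximum(mag, 1e-12)
        return gain * y_child
    ```

    Small coefficients whose parents are also small are set to zero (likely noise), while coefficients supported by a strong parent are kept nearly intact, which is what lets the method denoise across all frequencies without flattening true copy-number steps.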

  13. Chain architecture and micellization: A mean-field coarse-grained model for poly(ethylene oxide) alkyl ether surfactants

    NASA Astrophysics Data System (ADS)

    García Daza, Fabián A.; Colville, Alexander J.; Mackie, Allan D.

    2015-03-01

    Microscopic modeling of surfactant systems is expected to be an important tool to describe, understand, and take full advantage of the micellization process for different molecular architectures. Here, we implement a single-chain mean-field theory to study the relevant equilibrium properties, such as the critical micelle concentration (CMC) and aggregation number, for three sets of surfactants with different geometries while keeping the number of hydrophobic and hydrophilic monomers constant. The results demonstrate the direct effect of block organization for the surfactants under study by means of an analysis of the excess energy and entropy, which can be accurately determined within the mean-field scheme. Our analysis reveals that the CMC values are sensitive to branching in the hydrophilic head of the surfactant, as observed in the entropy-enthalpy balance, while the aggregation numbers are also affected by splitting the hydrophobic tail of the surfactant, manifested as slight changes in the packing entropy.
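    For scale, the link between a micellization free energy and a CMC can be sketched with the closed-association approximation; the ΔG value below is an assumed typical order of magnitude for nonionic CnEm surfactants, not a result from the paper's mean-field calculations:

    ```python
    import math

    # Closed-association estimate: ln(x_cmc) ~ dG_mic / (R*T), where x_cmc
    # is the CMC in mole fraction and dG_mic the free energy of
    # micellization per mole of surfactant (illustrative value).
    R, T = 8.314, 298.15           # gas constant [J/(mol K)], temperature [K]
    dG_mic = -35e3                 # J/mol, assumed typical order for nonionics

    x_cmc = math.exp(dG_mic / (R * T))
    cmc_molar = x_cmc * 55.5       # mole fraction -> mol/L via water molarity
    ```

    A more negative ΔG (e.g. a longer or unsplit hydrophobic tail) lowers the CMC exponentially, which is why the entropy-enthalpy balance discussed above translates directly into measurable CMC shifts.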

  14. Chain architecture and micellization: A mean-field coarse-grained model for poly(ethylene oxide) alkyl ether surfactants

    SciTech Connect

    García Daza, Fabián A.; Mackie, Allan D.; Colville, Alexander J.

    2015-03-21

    Microscopic modeling of surfactant systems is expected to be an important tool to describe, understand, and take full advantage of the micellization process for different molecular architectures. Here, we implement a single-chain mean-field theory to study the relevant equilibrium properties, such as the critical micelle concentration (CMC) and aggregation number, for three sets of surfactants with different geometries while keeping the number of hydrophobic and hydrophilic monomers constant. The results demonstrate the direct effect of block organization for the surfactants under study by means of an analysis of the excess energy and entropy, which can be accurately determined within the mean-field scheme. Our analysis reveals that the CMC values are sensitive to branching in the hydrophilic head of the surfactant, as observed in the entropy-enthalpy balance, while the aggregation numbers are also affected by splitting the hydrophobic tail of the surfactant, manifested as slight changes in the packing entropy.

  15. Model-based system-of-systems engineering for space-based command, control, communication, and information architecture design

    NASA Astrophysics Data System (ADS)

    Sindiy, Oleg V.

    This dissertation presents a model-based system-of-systems engineering (SoSE) approach as a design philosophy for architecting in system-of-systems (SoS) problems. SoS refers to a special class of systems in which numerous systems with operational and managerial independence interact to generate new capabilities that satisfy societal needs. Design decisions are more complicated in an SoS setting. A revised Process Model for SoSE is presented to support three phases in SoS architecting: defining the scope of the design problem, abstracting key descriptors and their interrelations in a conceptual model, and implementing computer-based simulations for architectural analyses. The Process Model enables improved decision support considering multiple SoS features and develops computational models capable of highlighting configurations of organizational, policy, financial, operational, and/or technical features. Further, processes for the verification and validation of SoS models and simulations are addressed, owing to their potential impact on critical decision-making. Two research questions frame the research efforts described in this dissertation. The first concerns how the four key sources of SoS complexity (heterogeneity of systems, connectivity structure, multi-layer interactions, and evolutionary nature) influence the formulation of SoS models and simulations, the trade space, and the metrics for evaluating solution performance and structure. The second pertains to the implementation of SoSE architecting processes to inform decision-making for a subset of SoS problems concerning the design of information exchange services in the space-based operations domain. These questions motivate and guide the dissertation's contributions. A formal methodology for drawing relationships within a multi-dimensional trade space, forming simulation case studies from applications of candidate architecture solutions to a campaign of notional mission use cases, and

  16. Examining Competing Models of Transformational Leadership, Leadership Trust, Change Commitment, and Job Satisfaction.

    PubMed

    Yang, Yi-Feng

    2016-08-01

    This study examines the influence of transformational leadership on job satisfaction by assessing six alternative models involving the mediators of leadership trust and change commitment, using a sample (N = 341; M age = 32.5 years, SD = 5.2) of service promotion personnel in Taiwan. The bootstrap sampling technique was used to select the better-fitting model. Hierarchical nested model analysis was applied, along with bootstrapped mediation, PRODCLIN2, and structural equation modeling comparisons. Overall, the results demonstrate that leadership is important and that leadership role identification (trust) and workgroup cohesiveness (commitment) form an ordered serial relationship. PMID:27381411
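    Bootstrapped mediation of the kind mentioned estimates the indirect effect a·b (X → mediator → outcome) by resampling; the sketch below uses synthetic data of the same sample size, not the study's survey data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: x (e.g. transformational leadership), m (mediator,
    # e.g. leadership trust), y (job satisfaction). Effect sizes are assumed.
    n = 341
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(size=n)               # X -> M, true a = 0.5
    y = 0.4 * m + 0.2 * x + rng.normal(size=n)     # M -> Y | X, true b = 0.4

    boots = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)                # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]               # slope of X -> M
        b = np.linalg.lstsq(np.column_stack([mb, xb, np.ones(n)]),
                            yb, rcond=None)[0][0]  # slope of M -> Y controlling X
        boots.append(a * b)

    lo, hi = np.percentile(boots, [2.5, 97.5])
    # Mediation is supported when the percentile CI for a*b excludes zero.
    ```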

  17. Sweet Pepper (Capsicum annuum L.) Canopy Photosynthesis Modeling Using 3D Plant Architecture and Light Ray-Tracing.

    PubMed

    Kim, Jee Hoon; Lee, Joon Woo; Ahn, Tae In; Shin, Jong Hwa; Park, Kyung Sub; Son, Jung Eek

    2016-01-01

    Canopy photosynthesis has typically been estimated using mathematical models that assume that light interception inside the canopy declines exponentially with canopy depth and that photosynthetic capacity is shaped by light interception through acclimation. In reality, however, light interception within the canopy is quite heterogeneous, depending on factors such as location, microclimate, leaf area index, and canopy architecture, and it is important to account for these factors in the analysis. The objective of the current study is to estimate the canopy photosynthesis of paprika (Capsicum annuum L.) by simulating the irradiation intercepted by the canopy with 3D ray-tracing and by measuring the photosynthetic capacity of each layer. By inputting the structural data of an actual plant, the 3D architecture of paprika was reconstructed using graphic software (Houdini FX, SideFX, Canada). The light curves and A/Ci curve of each layer were measured to parameterize the Farquhar, von Caemmerer, and Berry (FvCB) model, and differences in photosynthetic capacity within the canopy were observed. With the intercepted irradiation data and the photosynthetic parameters of each layer, the whole-plant photosynthesis rate was estimated by integrating the photosynthesis rates calculated for each layer. The estimated whole-plant photosynthesis rate showed good agreement with measurements made on the plant in a closed chamber for validation. These results indicate that the method is a reliable tool for predicting canopy photosynthesis from light interception and can be extended to analyze canopy photosynthesis under actual greenhouse conditions. PMID:27667994
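    The layer-wise integration can be sketched with a simplified FvCB-style minimum rule; all parameter values and the per-layer light data below are illustrative assumptions, not the fitted paprika parameters or ray-tracing output from the paper:

    ```python
    import numpy as np

    def fvcb_net(ci, par, vcmax=60.0, jmax=120.0, rd=1.0,
                 kc=270.0, ko=165.0, o=210.0, gamma=40.0):
        """Simplified FvCB net assimilation [umol m-2 s-1]: the minimum of
        the Rubisco-limited (Ac) and RuBP-regeneration-limited (Aj) rates,
        minus dark respiration. Constants are assumed illustrative values."""
        kmm = kc * (1.0 + o / ko)                         # effective Michaelis constant
        ac = vcmax * (ci - gamma) / (ci + kmm)            # Rubisco-limited rate
        j = jmax * par / (par + 2.0 * jmax)               # simple light response for J
        aj = j * (ci - gamma) / (4.0 * ci + 8.0 * gamma)  # RuBP-limited rate
        return np.minimum(ac, aj) - rd

    # Per-layer intercepted PAR (would come from the 3D ray tracer) and
    # leaf area per layer; whole-plant rate = sum of layer rates x areas.
    par_per_layer = np.array([1200.0, 600.0, 250.0, 80.0])  # umol m-2 s-1
    leaf_area = np.array([0.3, 0.5, 0.4, 0.2])              # m2 per layer
    canopy_a = np.sum(fvcb_net(300.0, par_per_layer) * leaf_area)
    ```

    Lower layers receive less light and contribute correspondingly less, which is why replacing the exponential-decline assumption with ray-traced per-layer light changes the whole-plant estimate.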

  19. Sweet Pepper (Capsicum annuum L.) Canopy Photosynthesis Modeling Using 3D Plant Architecture and Light Ray-Tracing

    PubMed Central

    Kim, Jee Hoon; Lee, Joon Woo; Ahn, Tae In; Shin, Jong Hwa; Park, Kyung Sub; Son, Jung Eek

    2016-01-01

    Canopy photosynthesis has typically been estimated using mathematical models that assume that light interception inside the canopy declines exponentially with canopy depth and that photosynthetic capacity is shaped by light interception through acclimation. In reality, however, light interception within the canopy is quite heterogeneous, depending on factors such as location, microclimate, leaf area index, and canopy architecture, and it is important to account for these factors in the analysis. The objective of the current study is to estimate the canopy photosynthesis of paprika (Capsicum annuum L.) by simulating the irradiation intercepted by the canopy with 3D ray-tracing and by measuring the photosynthetic capacity of each layer. By inputting the structural data of an actual plant, the 3D architecture of paprika was reconstructed using graphic software (Houdini FX, SideFX, Canada). The light curves and A/Ci curve of each layer were measured to parameterize the Farquhar, von Caemmerer, and Berry (FvCB) model, and differences in photosynthetic capacity within the canopy were observed. With the intercepted irradiation data and the photosynthetic parameters of each layer, the whole-plant photosynthesis rate was estimated by integrating the photosynthesis rates calculated for each layer. The estimated whole-plant photosynthesis rate showed good agreement with measurements made on the plant in a closed chamber for validation. These results indicate that the method is a reliable tool for predicting canopy photosynthesis from light interception and can be extended to analyze canopy photosynthesis under actual greenhouse conditions. PMID:27667994
