Science.gov

Sample records for architectural model transformations

  1. A Concept Transformation Learning Model for Architectural Design Learning Process

    ERIC Educational Resources Information Center

    Wu, Yun-Wu; Weng, Kuo-Hua; Young, Li-Ming

    2016-01-01

    Generally, in the foundation course of architectural design, much emphasis is placed on teaching of the basic design skills without focusing on teaching students to apply the basic design concepts in their architectural designs or promoting students' own creativity. Therefore, this study aims to propose a concept transformation learning model to…

  2. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware, and the parameters capturing the functional transformation were downloaded onto the system. This neuroprocessor, capable of 10^9 operations per second, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.

  3. Fourier transform spectrometer controller for partitioned architectures

    NASA Astrophysics Data System (ADS)

    Tamas-Selicean, D.; Keymeulen, D.; Berisford, D.; Carlson, R.; Hand, K.; Pop, P.; Wadsworth, W.; Levy, R.

    The current trend in spacecraft computing is to integrate applications of different criticality levels on the same platform using no separation. This approach increases the complexity of the development, verification and integration processes, with an impact on the whole system life cycle. Researchers at ESA and NASA advocated for the use of partitioned architecture to reduce this complexity. Partitioned architectures rely on platform mechanisms to provide robust temporal and spatial separation between applications. Such architectures have been successfully implemented in several industries, such as avionics and automotive. In this paper we investigate the challenges of developing and the benefits of integrating a scientific instrument, namely a Fourier Transform Spectrometer, in such a partitioned architecture.

  4. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

The goal of MDA is to produce software systems from abstract models in a way that restricts human interaction to a minimum. These abstract models are based on the UML language; however, the semantics of UML models are defined in natural language. Consequently, the consistency of these diagrams must be verified in order to identify requirements errors at an early stage of the development process. Such verification is difficult because of the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams derived from abstract models and implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Our method can therefore be used to check the practicability (feasibility) of software architecture models.

  5. Avionics Architecture Modelling Language

    NASA Astrophysics Data System (ADS)

    Alana, Elena; Naranjo, Hector; Valencia, Raul; Medina, Alberto; Honvault, Christophe; Rugina, Ana; Panunzia, Marco; Dellandrea, Brice; Garcia, Gerald

    2014-08-01

    This paper presents the ESA AAML (Avionics Architecture Modelling Language) study, which aimed at advancing the avionics engineering practices towards a model-based approach by (i) identifying and prioritising the avionics-relevant analyses, (ii) specifying the modelling language features necessary to support the identified analyses, and (iii) recommending/prototyping software tooling to demonstrate the automation of the selected analyses based on a modelling language and compliant with the defined specification.

  6. Supramolecular transformations within discrete coordination-driven supramolecular architectures.

    PubMed

    Wang, Wei; Wang, Yu-Xuan; Yang, Hai-Bo

    2016-05-01

    In this review, a comprehensive summary of supramolecular transformations within discrete coordination-driven supramolecular architectures, including helices, metallacycles, metallacages, etc., is presented. Recent investigations have demonstrated that coordination-driven self-assembled architectures provide an ideal platform to study supramolecular transformations mainly due to the relatively rigid yet dynamic nature of the coordination bonds. Various stimuli have been extensively employed to trigger the transformation processes of metallosupramolecular architectures, such as solvents, concentration, anions, guests, change in component fractions or chemical compositions, light, and post-modification reactions, which allowed for the formation of new structures with specific properties and functions. Thus, it is believed that supramolecular transformations could serve as another highly efficient approach for generating diverse metallosupramolecular architectures. Classified by the aforementioned various stimuli used to induce the interconversion processes, the emphasis in this review will be on the transformation conditions, structural changes, mechanisms, and the output of specific properties and functions upon induction of structural transformations. PMID:27009833

  7. Protocol Architecture Model Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

NASA's Glenn Research Center (GRC) defines and develops advanced technology for high-priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: multicast communications ("multicasting"), one spacecraft to N ground receivers, and N ground transmitters to one ground receiver via a spacecraft.

  8. Transforming Space Missions into Service Oriented Architectures

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Frye, Stuart; Cappelaere, Pat

    2006-01-01

This viewgraph presentation reviews the vision of sensor web enablement via a Service Oriented Architecture (SOA). A generic example is given of a user finding a service through the Web and initiating a request for the desired observation. The parts that comprise this system and how they interact are reviewed. The advantages of using an SOA are discussed.

  9. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for the improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  10. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm that Bruns found in 1903. This is shown to yield an algorithm of lower complexity and improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent relative to the direct method.

  11. Transformation of legacy network management system to service oriented architecture

    NASA Astrophysics Data System (ADS)

    Sathyan, Jithesh; Shenoy, Krishnananda

    2007-09-01

Service providers today face the challenge of operating and maintaining multiple networks based on multiple technologies. Network Management System (NMS) solutions are used to manage these networks. However, the NMS is tightly coupled with the element or core network components, and hence there are multiple NMS solutions for heterogeneous networks. Current network management solutions are targeted at a variety of independent networks. The widespread popularity of the IP Multimedia Subsystem (IMS) is a clear indication that all of these independent networks will be integrated into a single IP-based infrastructure, referred to as Next Generation Networks (NGN), in the near future. The services, network architectures and traffic patterns in NGN will differ dramatically from those of current networks. The heterogeneity and complexity of NGN, including concepts like Fixed Mobile Convergence, will bring a number of challenges to network management. The high degree of complexity accompanying network element technology necessitates network management systems (NMS) that can use this technology to provide more service interfaces while hiding the inherent complexity. As operators begin to add new networks and expand existing networks to support new technologies and products, the need for scalable, flexible and functionally rich NMS systems arises. Another important factor influencing NMS architecture is mergers and acquisitions among the key vendors. Ease of integration is a key impediment in the traditional hierarchical NMS architecture. These requirements trigger the need for an architectural framework that will address NGNM (Next Generation Network Management) issues seamlessly. This paper presents a unique perspective on bringing service oriented architecture (SOA) to legacy network management systems (NMS). It advocates a staged approach to transforming a legacy NMS to SOA. The architecture at each stage is detailed along with the technical advantages and

  12. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOEpatents

    Chao, Tian-Jy; Kim, Younghun

    2015-01-06

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.
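
    The flow the patent abstract describes (extract, wrap in interoperability objects, map through a Model View Definition, transform, and emit) can be pictured with the structural sketch below. Every name here is hypothetical; this is not the patented implementation, and it omits all IFC parsing and geometry handling.

```python
# Hypothetical names throughout; a structural sketch of the described translation flow.
from dataclasses import dataclass

@dataclass
class InteropObject:
    entity_type: str           # e.g. "IfcWall"
    attributes: dict           # extracted data plus metadata

def extract(ifc_entities, needed_types):
    """Keep only the data and metadata the target simulation tool actually uses."""
    return [InteropObject(e["type"], e["attrs"]) for e in ifc_entities
            if e["type"] in needed_types]

def translate(objects, mvd_mapping, transform_fn):
    """Apply the Model View Definition mapping, then the geometry transformation."""
    return [transform_fn({mvd_mapping[k]: v for k, v in o.attributes.items()
                          if k in mvd_mapping}) for o in objects]

walls = [{"type": "IfcWall", "attrs": {"Height": 3.0, "Width": 0.2}}]
records = translate(extract(walls, {"IfcWall"}),
                    {"Height": "height_m", "Width": "thickness_m"},
                    lambda rec: rec)      # identity transform as a placeholder
print(records)
```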

  13. Interoperability format translation and transformation between IFC architectural design file and simulation file formats

    DOEpatents

    Chao, Tian-Jy; Kim, Younghun

    2015-02-03

    Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.

  14. HRST architecture modeling and assessments

    SciTech Connect

    Comstock, D.A.

    1997-01-01

This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented. © 1997 American Institute of Physics.

  15. HRST architecture modeling and assessments

    NASA Astrophysics Data System (ADS)

    Comstock, Douglas A.

    1997-01-01

    This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented.

  16. A VLSI architecture for lifting-based wavelet transform with power efficient

    NASA Astrophysics Data System (ADS)

    Xiong, Chengyi; Zheng, Sheng; Tian, Jinwen; Liu, Jian

    2003-09-01

In this paper, an efficient VLSI architecture for the biorthogonal 9/7 wavelet transform based on the lifting scheme is presented. The proposed architecture has several advantages: the forward and inverse wavelet transforms are symmetrical as a result of the adopted pipelined parallel technique, and the design is area- and power-efficient because the embedded boundary data-extension technique decreases the amount of memory required and reduces the number of read/write accesses. We have developed a behavioral Verilog HDL model of the proposed architecture whose simulation results match those of the MATLAB code exactly. The design has been synthesized for a Xilinx XCV50E-CS144-8 FPGA, and the estimated operating frequency is 100 MHz.
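
    For readers unfamiliar with the lifting factorization mentioned above, a minimal software sketch of one level of the biorthogonal 9/7 forward transform is given below. It assumes the commonly cited lifting coefficients and whole-sample symmetric boundary extension; scaling-constant conventions vary between references, and a hardware design such as the one in this record additionally pipelines and folds these steps.

```python
import numpy as np

# Commonly cited CDF 9/7 lifting coefficients (conventions for the final scaling differ
# between references; any nonzero scaling keeps the transform invertible).
ALPHA, BETA, GAMMA, DELTA = -1.586134342, -0.052980118, 0.882911076, 0.443506852
ZETA = 1.149604398

def dwt97_forward(x):
    """One decomposition level of the lifting-based 9/7 DWT on an even-length signal."""
    x = np.asarray(x, dtype=float)
    s, d = x[0::2].copy(), x[1::2].copy()            # even (approx) / odd (detail) samples

    d += ALPHA * (s + np.append(s[1:], s[-1]))       # predict 1 (mirror right edge)
    s += BETA  * (np.insert(d[:-1], 0, d[0]) + d)    # update 1 (mirror left edge)
    d += GAMMA * (s + np.append(s[1:], s[-1]))       # predict 2
    s += DELTA * (np.insert(d[:-1], 0, d[0]) + d)    # update 2
    return s * ZETA, d / ZETA                        # low-pass and high-pass subbands

lo, hi = dwt97_forward(np.sin(np.linspace(0, 4 * np.pi, 64)))
```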

  17. Wavelength multiplexing encryption using joint transform correlator architecture.

    PubMed

    Amaya, Dafne; Tebaldi, Myrian; Torroba, Roberto; Bolognini, Néstor

    2009-04-10

    We show that multiple secure data recording under a wavelength multiplexing technique is possible in a joint transform correlator (JTC) arrangement. We evaluate both the performance of the decrypting procedure and the influence of the input image size when decrypting with a wavelength different from that employed in the encryption step. This analysis reveals that the wavelength is a valid parameter to conduct image multiplexing encoding with the JTC architecture. In addition, we study the influence of the minimum wavelength change that prevents decoding cross talk. Computer simulations confirm the performance of the proposed technique. PMID:19363548

  18. Predicting and Modeling RNA Architecture

    PubMed Central

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  19. Combined treatment with a transforming growth factor beta inhibitor (1D11) and bortezomib improves bone architecture in a mouse model of myeloma-induced bone disease.

    PubMed

    Nyman, Jeffry S; Merkel, Alyssa R; Uppuganti, Sasidhar; Nayak, Bijaya; Rowland, Barbara; Makowski, Alexander J; Oyajobi, Babatunde O; Sterling, Julie A

    2016-10-01

Multiple myeloma (MM) patients frequently develop tumor-induced bone destruction, yet no therapy completely eliminates the tumor or fully reverses bone loss. Transforming growth factor-β (TGF-β) activity often contributes to tumor-induced bone disease, and pre-clinical studies have indicated that TGF-β inhibition improves bone volume and reduces tumor growth in bone metastatic breast cancer. We hypothesized that inhibition of TGF-β signaling also reduces tumor growth, increases bone volume, and improves vertebral body strength in MM-bearing mice. We treated myeloma tumor-bearing (immunocompetent KaLwRij and immunocompromised Rag2-/-) mice with a TGF-β inhibitory (1D11) or control (13C4) antibody, with or without the anti-myeloma drug bortezomib, for 4 weeks after inoculation of murine 5TGM1 MM cells. TGF-β inhibition increased trabecular bone volume, improved trabecular architecture, increased tissue mineral density of the trabeculae as assessed by ex vivo micro-computed tomography, and was associated with significantly greater vertebral body strength in biomechanical compression tests. Serum monoclonal paraprotein titers and spleen weights showed that 1D11 monotherapy did not reduce overall MM tumor burden. Combination therapy with 1D11 and bortezomib increased vertebral body strength, reduced tumor burden, and reduced cortical lesions in the femoral metaphysis, although it did not significantly improve cortical bone strength in three-point bending tests of the mid-shaft femur. Overall, our data provide a rationale for evaluating inhibition of TGF-β signaling in combination with existing anti-myeloma agents as a potential therapeutic strategy to improve outcomes in patients with myeloma bone disease. PMID:27423464

  20. Models of novel battery architectures

    NASA Astrophysics Data System (ADS)

    Haney, Paul; Ruzmetov, Dmitry; Talin, Alec

    2013-03-01

We use a 1-dimensional model of electronic and ionic transport, coupled with experimental data, to extract the interfacial electrochemical parameters for LiCoO2-LIPON-Si thin film batteries. TEM imaging of batteries has shown that charge/discharge cycles can lead to breakdown of the interfaces, which reduces the effective area through which further Li ion transfer can occur. This is modeled phenomenologically by changing the effective cross-sectional area, in order to correlate this structural change with the change in charge/discharge I-V curves. Finally, by adapting the model to radial coordinates, the geometrical effect of nanowire architectures for batteries is investigated.

  1. Formalism Challenges of the Cougaar Model Driven Architecture

    NASA Technical Reports Server (NTRS)

    Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.

    2004-01-01

The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed today. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.

  2. Experimental color encryption in a joint transform correlator architecture

    NASA Astrophysics Data System (ADS)

    Tebaldi, Myrian; Horrillo, Sergi; Pérez-Cabré, Elisabet; Millán, María S.; Amaya, Dafne; Torroba, Roberto; Bolognini, Néstor

    2011-01-01

We present an experimental color image encryption using a photorefractive crystal and a joint transform correlator (JTC) architecture. Color storage is achieved by changing the illumination wavelength. One JTC aperture contains the input image information for a given color channel bonded to a random phase mask (object aperture), and the other JTC aperture contains the key code mask. The joint power spectrum is stored in a photorefractive crystal, with the data for each color stored as a modulation of birefringence in this photosensitive medium. An adequate wavelength change produces a corresponding modification of the power spectrum that avoids image encryption cross talk in the read-out step. An analysis in terms of the sensitivity of the photorefractive sillenite crystal for different recording wavelengths is carried out. It should be highlighted that the multiplexed power spectrum reveals neither the multiplexing operation nor the amount of stored information, increasing the system security. We present experimental results that support our approach.

  3. A pipelined IC architecture for radon transform computations in a multiprocessor array

    SciTech Connect

Agi, I.; Hurst, P.J.; Current, K.W. (Dept. of Electrical Engineering and Computer Science)

    1990-05-25

The amount of data generated by CT scanners is enormous, making the reconstruction operation slow, especially for 3-D and limited-data scans requiring iterative algorithms. The Radon transform and its inverse, commonly used for CT image reconstruction from projections, are computationally burdensome for today's single-processor computer architectures. If the processing times for the forward and inverse Radon transforms were comparatively small, a large set of new CT algorithms would become feasible, especially those for 3-D and iterative tomographic image reconstructions. In addition to image reconstruction, a fast "Radon Transform Computer" could be naturally applied in other areas of multidimensional signal processing, including 2-D power spectrum estimation, modeling of human perception, Hough transforms, image representation, synthetic aperture radar processing, and others. A high-speed processor for this operation is likely to motivate new algorithms for general multidimensional signal processing using the Radon transform. In the proposed workshop paper, we will first describe interpolation schemes useful in computing the discrete Radon transform and backprojection and compare their errors and hardware complexities. We will then evaluate, through statistical means, the fixed-point number system required to accept and generate 12-bit input and output data with acceptable error using the selected linear interpolation scheme. These results set some of the requirements that must be met by our new VLSI chip architecture. Finally, we will present a new unified architecture for a single-chip processor for computing both the forward Radon transform and backprojection at high data rates. 3 refs., 2 figs.
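
    As a point of reference for the interpolation discussion above, the following sketch computes a sinogram and an unfiltered backprojection in software using linear (order-1) resampling. It is only a behavioral reference built on scipy's image rotation, not the fixed-point pipelined design described in this record.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_sinogram(image, angles_deg):
    """Forward Radon transform: rotate with linear interpolation, then sum along rows."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def backproject(sinogram, angles_deg):
    """Unfiltered backprojection: smear each projection and rotate it back."""
    size = sinogram.shape[1]
    recon = np.zeros((size, size))
    for proj, a in zip(sinogram, angles_deg):
        recon += rotate(np.tile(proj, (size, 1)), -a, reshape=False, order=1)
    return recon / len(angles_deg)

phantom = np.zeros((64, 64))
phantom[24:40, 28:36] = 1.0
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
recon = backproject(radon_sinogram(phantom, angles), angles)
```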

  4. Building Paradigms: Major Transformations in School Architecture (1798-2009)

    ERIC Educational Resources Information Center

    Gislason, Neil

    2009-01-01

    This article provides an historical overview of significant trends in school architecture from 1798 to the present. I divide the history of school architecture into two major phases. The first period falls between 1798 and 1921: the modern graded classroom emerged as a standard architectural feature during this period. The second period, which…

  5. Extracellular Polymeric Substance Architecture Influences Natural Genetic Transformation of Acinetobacter baylyi in Biofilms

    PubMed Central

    Merod, Robin T.

    2014-01-01

    Genetic exchange by natural transformation is an important mechanism of horizontal gene transfer in biofilms. Thirty-two biofilm metrics were quantified in a heavily encapsulated Acinetobacter baylyi strain and a miniencapsulated mutant strain, accounting for cellular architecture, extracellular polymeric substances (EPS) architecture, and their combined biofilm architecture. In general, transformation location, abundance, and frequency were more closely correlated to EPS architecture than to cellular or combined architecture. Transformation frequency and transformant location had the greatest correlation with the EPS metric surface area-to-biovolume ratio. Transformation frequency peaked when EPS surface area-to-biovolume ratio was greater than 3 μm2/μm3 and less than 5 μm2/μm3. Transformant location shifted toward the biofilm-bulk fluid interface as the EPS surface area-to-biovolume ratio increased. Transformant biovolume was most closely correlated with EPS biovolume and peaked when transformation occurred in close proximity to the substratum. This study demonstrates that biofilm architecture influences A. baylyi transformation frequency and transformant location and abundance. The major role of EPS may be to facilitate the binding and stabilization of plasmid DNA for cellular uptake. PMID:25304505

  6. Unified transform architecture for AVC, AVS, VC-1 and HEVC high-performance codecs

    NASA Astrophysics Data System (ADS)

    Dias, Tiago; Roma, Nuno; Sousa, Leonel

    2014-12-01

A unified architecture for fast and efficient computation of the set of two-dimensional (2-D) transforms adopted by the most recent state-of-the-art digital video standards is presented in this paper. In contrast to other designs with similar functionality, the presented architecture is supported by a scalable, modular and completely configurable processing structure. This flexible structure not only allows the architecture to be easily reconfigured to support different transform kernels, but also permits its resizing to efficiently support transforms of different orders (e.g. order-4, order-8, order-16 and order-32). Consequently, it is not only highly suitable for realizing high-performance multi-standard transform cores, but it also offers highly efficient implementations of specialized processing structures addressing only the reduced subset of transforms used by a specific video standard. The experimental results obtained by prototyping several configurations of this processing structure in a Xilinx Virtex-7 FPGA show the superior performance and hardware efficiency levels provided by the proposed unified architecture for the implementation of transform cores for the Advanced Video Coding (AVC), Audio Video coding Standard (AVS), VC-1 and High Efficiency Video Coding (HEVC) standards. In addition, such results also demonstrate the ability of this processing structure to realize multi-standard transform cores supporting all the standards mentioned above, capable of processing the 8k Ultra High Definition Television (UHDTV) video format (7,680 × 4,320 at 30 fps) in real time.
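
    To make the notion of a configurable transform kernel concrete, the sketch below applies a separable 2-D integer transform Y = A X A^T with a pluggable kernel matrix A; the 4-point HEVC core matrix is used as the example, and kernels for other standards or orders would be swapped in the same way. This is a behavioral model only (it omits the per-stage scaling and rounding the standards define), not the pipelined hardware datapath of the paper.

```python
import numpy as np

# 4-point HEVC core transform matrix (integer approximation of a scaled DCT-II).
HEVC_T4 = np.array([[64,  64,  64,  64],
                    [83,  36, -36, -83],
                    [64, -64, -64,  64],
                    [36, -83,  83, -36]], dtype=np.int64)

def transform_2d(block, kernel):
    """Separable 2-D forward transform Y = A @ X @ A.T for any square kernel A."""
    return kernel @ block @ kernel.T

block = np.arange(16, dtype=np.int64).reshape(4, 4)
coeffs = transform_2d(block, HEVC_T4)   # swap in AVC/AVS/VC-1 kernels the same way
```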

  7. The Software Architecture of Global Climate Models

    NASA Astrophysics Data System (ADS)

    Alexander, K. A.; Easterbrook, S. M.

    2011-12-01

    It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.

  8. Heteroscedastic transformation cure regression models.

    PubMed

    Chen, Chyong-Mei; Chen, Chen-Hsin

    2016-06-30

Cure models have been applied to analyze clinical trials with cures and age-at-onset studies with nonsusceptibility. Lu and Ying (On semiparametric transformation cure model. Biometrika 2004; 91:331-343. DOI: 10.1093/biomet/91.2.331) developed a general class of semiparametric transformation cure models, which assumes that the failure times of uncured subjects, after an unknown monotone transformation, follow a regression model with homoscedastic residuals. However, it cannot deal with frequently encountered heteroscedasticity, which may result from dispersed ranges of failure time span among strata of uncured subjects. To tackle this phenomenon, this article presents semiparametric heteroscedastic transformation cure models. The cure status and the failure time of an uncured subject are fitted by a logistic regression model and a heteroscedastic transformation model, respectively. Unlike the approach of Lu and Ying, we derive score equations from the full likelihood for estimating the regression parameters in the proposed model. A martingale difference function similar to their proposal is used to estimate the infinite-dimensional transformation function. Our proposed estimating approach is intuitively applicable and can be conveniently extended to other complicated models when maximization of the likelihood may be too tedious to implement. We conduct simulation studies to validate the large-sample properties of the proposed estimators and to compare with the approach of Lu and Ying in terms of relative efficiency. The estimating method and two relevant goodness-of-fit graphical procedures are illustrated using breast cancer data and melanoma data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26887342
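
    For orientation, a schematic of the mixture-cure setup the abstract describes (a logistic model for the cure probability combined with a transformation model whose error scale depends on covariates for the uncured) can be written as follows. This is a generic form for illustration only; the paper's exact parameterization and estimating equations should be taken from the article itself.

\[
S_{\mathrm{pop}}(t \mid x, z) = 1 - \pi(z) + \pi(z)\, S_u(t \mid x),
\qquad \operatorname{logit} \pi(z) = \gamma^{\top} z,
\]
\[
H(T) = -\beta^{\top} x + e^{\theta^{\top} x}\,\varepsilon \quad \text{for an uncured subject},
\]

    where \(H\) is an unknown monotone transformation and \(\varepsilon\) follows a known error distribution; setting \(\theta = 0\) recovers a homoscedastic model of the Lu and Ying type.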

  9. Utilizing Rapid Prototyping for Architectural Modeling

    ERIC Educational Resources Information Center

    Kirton, E. F.; Lavoie, S. D.

    2006-01-01

    This paper will discuss our approach to, success with and future direction in rapid prototyping for architectural modeling. The premise that this emerging technology has broad and exciting applications in the building design and construction industry will be supported by visual and physical evidence. This evidence will be presented in the form of…

  10. Modeling Operations Costs for Human Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2013-01-01

Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA Headquarters supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the Moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  11. Efficient architectures for two-dimensional discrete wavelet transform using lifting scheme.

    PubMed

    Xiong, Chengyi; Tian, Jinwen; Liu, Jian

    2007-03-01

Novel architectures for the 1-D and 2-D discrete wavelet transform (DWT) using lifting schemes are presented in this paper. An embedded decimation technique is exploited to optimize the architecture for the 1-D DWT, which is designed to receive an input and generate an output with the low- and high-frequency components of the original data available alternately. Based on this 1-D DWT architecture, an efficient line-based architecture for the 2-D DWT is further proposed by employing parallel and pipeline techniques; it is mainly composed of two horizontal filter modules and one vertical filter module, working in parallel and pipeline fashion with 100% hardware utilization. This 2-D architecture, called the fast architecture (FA), can perform J levels of decomposition of an N × N image in approximately 2N^2(1 - 4^(-J))/3 internal clock cycles. Moreover, another efficient generic line-based 2-D architecture is proposed by exploiting the parallelism among the four subband transforms in lifting-based 2-D DWT; it can perform J levels of decomposition of an N × N image in approximately N^2(1 - 4^(-J))/3 internal clock cycles and is hence called the high-speed architecture. The throughput rate of the latter is twice that of the former 2-D architecture, at only a small additional hardware cost. Compared with works reported in the previous literature, the proposed architectures for the 2-D DWT are efficient alternatives in the tradeoff among hardware cost, throughput rate, output latency and control complexity. PMID:17357722
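
    The quoted cycle counts can be read as a geometric series over decomposition levels: if the fast architecture consumes roughly half an internal clock cycle per input sample at each level (two samples accepted per cycle), then for an N × N image

\[
\sum_{j=1}^{J} \frac{1}{2}\left(\frac{N}{2^{\,j-1}}\right)^{2}
= \frac{N^{2}}{2}\cdot\frac{1-4^{-J}}{1-\tfrac{1}{4}}
= \frac{2N^{2}\bigl(1-4^{-J}\bigr)}{3},
\]

    and doubling the per-level throughput (the high-speed architecture) gives \(N^{2}(1-4^{-J})/3\). This reading is an interpretation of the figures in the abstract, not a derivation taken from the paper.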

  12. Multichanneled encryption via a joint transform correlator architecture.

    PubMed

    Amaya, Dafne; Tebaldi, Myrian; Torroba, Roberto; Bolognini, Néstor

    2008-11-01

    We propose a multichanneling encryption method by using multiple random-phase mask apertures in the input plane based on a joint transform correlation scheme. In the proposal, this multiple aperture arrangement is changed as different input objects are inserted and stored. Then, during the decryption step, the appropriate use of the random-phase mask apertures can ensure the retrieval of different information. This approach provides different access levels. Computer simulations show the potential of the technique and experimental results verify the feasibility of this method. PMID:19122732

  13. Parameter estimation for transformer modeling

    NASA Astrophysics Data System (ADS)

    Cho, Sung Don

    Large Power transformers, an aging and vulnerable part of our energy infrastructure, are at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead time of 12 months. Transient overvoltages can cause great damage and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be one of the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Thus, transformer modeling is not a mature field and newer improved models must be made available. In this work, improved topologically-correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where available information is incomplete. The transformer nameplate data is required and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, lambda-i saturation characteristic, capacitive effects, and frequency dependency of winding resistance and core loss. Steady-state excitation, and de-energization and re-energization transients

  14. A parallel VLSI architecture for a digital filter of arbitrary length using Fermat number transforms

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1982-01-01

    A parallel architecture for computation of the linear convolution of two sequences of arbitrary lengths using the Fermat number transform (FNT) is described. In particular a pipeline structure is designed to compute a 128-point FNT. In this FNT, only additions and bit rotations are required. A standard barrel shifter circuit is modified so that it performs the required bit rotation operation. The overlap-save method is generalized for the FNT to compute a linear convolution of arbitrary length. A parallel architecture is developed to realize this type of overlap-save method using one FNT and several inverse FNTs of 128 points. The generalized overlap save method alleviates the usual dynamic range limitation in FNTs of long transform lengths. Its architecture is regular, simple, and expandable, and therefore naturally suitable for VLSI implementation.
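
    As a behavioral reference for the number-theoretic machinery involved, the sketch below performs a 16-point Fermat number transform modulo F3 = 257 (where 2 is a 16th root of unity) and uses it for circular convolution. A software model necessarily uses general modular multiplications; the point of the hardware in this record is that multiplication by powers of 2 modulo a Fermat number reduces to bit rotations, and the generalized overlap-save method handles arbitrary lengths within the dynamic-range limit.

```python
# 16-point Fermat number transform over GF(257): F3 = 2**8 + 1, and 2 has order 16 mod 257.
P, ROOT, N = 257, 2, 16

def fnt(x, root=ROOT):
    """Naive O(N^2) forward FNT: X[k] = sum_n x[n] * root**(n*k) mod P."""
    return [sum(x[n] * pow(root, n * k, P) for n in range(N)) % P for k in range(N)]

def ifnt(X):
    inv_root = pow(ROOT, P - 2, P)      # modular inverses via Fermat's little theorem
    inv_n = pow(N, P - 2, P)
    return [inv_n * sum(X[k] * pow(inv_root, n * k, P) for k in range(N)) % P
            for n in range(N)]

def circular_convolution(a, b):
    """Exact as long as every output stays below the modulus (the dynamic-range limit)."""
    return ifnt([(xa * xb) % P for xa, xb in zip(fnt(a), fnt(b))])

a = [1, 2, 3] + [0] * 13
b = [4, 5] + [0] * 14
print(circular_convolution(a, b))       # [4, 13, 22, 15, 0, ...]
```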

  15. Architecture Models and Data Flows in Local and Group Datawarehouses

    NASA Astrophysics Data System (ADS)

    Bogza, R. M.; Zaharie, Dorin; Avasilcai, Silvia; Bacali, Laura

Architecture models and possible data flows for local and group data warehouses are presented, together with some data processing models. The architecture models consist of several layers and the data flows between them. The chosen architecture of a data warehouse depends on the type and volume of the source data, and it influences the analysis, data mining and reporting performed on the data in the DWH.

  16. Anisotropic analysis of trabecular architecture in human femur bone radiographs using quaternion wavelet transforms.

    PubMed

    Sangeetha, S; Sujatha, C M; Manamalli, D

    2014-01-01

In this work, the anisotropy of the compressive and tensile strength regions of femur trabecular bone is analysed using quaternion wavelet transforms. Normal and abnormal femur trabecular bone radiographic images are considered for this study. The sub-anatomic regions, which include the compressive and tensile regions, are delineated using pre-processing procedures. These delineated regions are subjected to quaternion wavelet transforms, and statistical parameters are derived from the transformed images. These parameters are correlated with apparent porosity, which is derived from the strength regions. Anisotropy is also calculated from the transformed images and analysed. Results show that the anisotropy values derived from the second and third phase components of the quaternion wavelet transform are distinct for normal and abnormal samples, with high statistical significance for both compressive and tensile regions. These investigations demonstrate that architectural anisotropy derived from QWT analysis is able to differentiate normal and abnormal samples. PMID:25571265

  17. Performance and Architecture Lab Modeling Tool

    SciTech Connect

    2014-06-19

Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior.

  18. Performance and Architecture Lab Modeling Tool

    Energy Science and Technology Software Center (ESTSC)

    2014-06-19

Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program

  19. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.

  20. The Fermilab Central Computing Facility architectural model

    SciTech Connect

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs.

  1. Ocean general circulation models for parallel architectures

    SciTech Connect

    Smith, R.D.

    1993-05-01

    The authors report continuing work in developing ocean general circulation models for parallel architectures. In earlier work, they began with the widely-used Bryan-Cox ocean model, but reformulated the barotropic equations (which describe the vertically integrated flow) to solve for the surface-pressure field rather than the volume-transport streamfunction as in the original model. This had the advantage of being more easily parallelized and allowed for a more realistic representation of coastal and bottom topography. Both streamfunction and surface-pressure formulations use a rigid-lid approximation to eliminate fast surface waves. They have now replaced the rigid-lid with a free surface, and solve the barotropic equations implicitly to overcome the timestep restriction associated with the fast waves. This method has several advantages, including: (1) a better physical representation of the barotropic mode, and (2) a better-conditioned operator matrix, which leads to much faster convergence in the conjugate-gradient solver. They have also extended the model to allow use of arbitrary orthogonal curvilinear coordinates for the horizontal grid. The original model uses a standard polar grid that has a singularity at each pole, making it difficult to include the Arctic basin, which plays an important role in global ocean circulation. They can now include the Arctic (while still using an explicit time-integration scheme without high-latitude filtering) by using a distorted grid with a displaced pole for the North Atlantic - Arctic region of the ocean. The computer code, written in Fortran 90 and developed on the Connection Machine, has been substantially restructured so that all communication occurs in low-level stencil routines. The idea is that the stencil routines may be rewritten to optimize communication costs on a particular architecture, while the remainder of the code is for the most part machine-independent, involving only the simplest Fortran 90 constructs.

  2. Architectural approach for semantic EHR systems development based on Detailed Clinical Models.

    PubMed

    Bernal, Juan G; Lopez, Diego M; Blobel, Bernd

    2012-01-01

The integrative approach to health information in general, and the development of pHealth systems in particular, requires an integrated approach to formally modeled system architectures. Detailed Clinical Models (DCM) is one of the most promising modeling efforts for clinical concept representation in EHR system architectures. Although the feasibility of the DCM modeling methodology has been demonstrated through examples, there is no formal, generic and automatic model transformation technique to ensure a semantically lossless transformation of clinical concepts expressed in DCM to clinical concept representations based on either ISO 13606/openEHR Archetypes or HL7 Templates. The objective of this paper is to propose a generic model transformation method and tooling for transforming DCM clinical concepts into ISO/EN 13606/openEHR Archetypes or HL7 Template models. The automation of the transformation process is supported by Model-Driven Development (MDD) transformation mechanisms and tools. The availability of processes, techniques and tooling for automatic DCM transformation would enable the development of the intelligent, adaptive information systems demanded for pHealth solutions. PMID:22942049
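
    A toy illustration of what such a model-to-model transformation looks like in code is sketched below: a DCM-style clinical concept, represented here simply as a dictionary, is mapped by declarative rules to an archetype-like structure. All names and structures are hypothetical and greatly simplified; real MDD tool chains operate on formal metamodels and transformation languages rather than dictionaries.

```python
# Hypothetical, greatly simplified model-to-model transformation sketch.
DCM_CONCEPT = {
    "name": "BodyWeight",
    "data_item": {"type": "Quantity", "unit": "kg", "range": (0, 500)},
}

# Declarative mapping rules: DCM construct -> archetype-like construct (assumed names).
RULES = {"Quantity": "DV_QUANTITY"}

def dcm_to_archetype(concept):
    """Map one DCM-style concept to an archetype-like record via the rule table."""
    item = concept["data_item"]
    return {
        "archetype_id": f"openEHR-EHR-OBSERVATION.{concept['name'].lower()}.v1",
        "rm_type": RULES[item["type"]],
        "units": item["unit"],
        "magnitude_range": item["range"],
    }

print(dcm_to_archetype(DCM_CONCEPT))
```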

  3. A Framework and Model for Evaluating Clinical Decision Support Architectures

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2008-01-01

In this paper, we develop a four-phase model for evaluating architectures for clinical decision support that focuses on: defining a set of desirable features for a decision support architecture; building a proof-of-concept prototype; demonstrating that the architecture is useful by showing that it can be integrated with existing decision support systems; and comparing its coverage to that of other architectures. We apply this framework to several well-known decision support architectures, including Arden Syntax, GLIF, SEBASTIAN, and SAGE. PMID:18462999

  4. Toward a Framework for Modeling Space Systems Architectures

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Skipper, Joseph

    2006-01-01

In this paper we will describe this extended RASDS/RAMSS methodology and the set of viewpoints that we have derived, and describe their relationship to RM-ODP. While this methodology may be used directly, in a variety of document-driven ways, to describe space system architecture, its real power will come when tools are available that support full descriptions of system architectures, captured electronically in a way that permits their analysis, verification, and transformation.

  5. Efficient Algorithm and Architecture of Critical-Band Transform for Low-Power Speech Applications

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Gan, Woon-Seng

    2007-12-01

An efficient algorithm and its corresponding VLSI architecture for the critical-band transform (CBT) are developed to approximate the critical-band filtering of the human ear. The CBT consists of a constant-bandwidth transform in the lower frequency range and a Brown constant-Q transform (CQT) in the higher frequency range. The corresponding VLSI architecture is proposed to achieve significant power efficiency by reducing the computational complexity, using pipeline and parallel processing, and applying the supply voltage scaling technique. A 21-band Bark-scale CBT processor with a sampling rate of 16 kHz is designed and simulated. Simulation results verify its suitability for performing short-time spectral analysis on speech. It provides a better fit to the critical-band analysis of the human ear, requires significantly fewer computations, and is therefore more energy efficient than other methods. With a 0.35 μm CMOS technology, it processes a 160-point speech frame in 4.99 ms at 234 kHz. The power dissipation is 15.6 μW at 1.1 V, an 82.1% power reduction compared to a benchmark 256-point FFT processor.
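
    A rough software analogue of critical-band analysis, grouping the FFT bins of a 16 kHz speech frame into roughly 21 Bark bands using a Zwicker-style approximation, is sketched below. This is only an illustration of the filtering the CBT approximates; the paper's CBT uses a constant-bandwidth transform at low frequencies and a constant-Q transform at high frequencies rather than an FFT regrouping.

```python
import numpy as np

def hz_to_bark(f):
    """Zwicker-style approximation of the Bark scale."""
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

def bark_band_energies(frame, fs=16000, n_bands=21):
    """Group power-spectrum bins of one speech frame into ~21 critical (Bark) bands."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    band_index = np.minimum(hz_to_bark(freqs).astype(int), n_bands - 1)
    return np.bincount(band_index, weights=spectrum, minlength=n_bands)

frame = np.random.randn(160)                 # a 10 ms frame at 16 kHz
energies = bark_band_energies(frame)
```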

  6. Advancing Software Architecture Modeling for Large Scale Heterogeneous Systems

    SciTech Connect

    Gorton, Ian; Liu, Yan

    2010-11-07

    In this paper we describe how incorporating technology-specific modeling at the architecture level can help reduce risks and produce better designs for large, heterogeneous software applications. We draw an analogy with established modeling approaches in scientific domains, using groundwater modeling as an example, to help illustrate gaps in current software architecture modeling approaches. We then describe the advances in modeling, analysis and tooling that are required to bring sophisticated modeling and development methods within reach of software architects.

  7. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
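
    A hedged, illustrative sketch of the Euclidean (Sugiyama) step described above for the errors-only case, written over a prime field GF(P) for readability; a real decoder works in GF(2^m) and, for erasures, starts from the Forney syndromes and the erasure locator polynomial as the abstract notes:

        P = 929   # illustrative prime modulus; real RS decoders use GF(2^m)

        def trim(a):                       # drop leading zero coefficients (ascending order)
            while len(a) > 1 and a[-1] == 0:
                a = a[:-1]
            return a

        def poly_sub(a, b):
            n = max(len(a), len(b))
            return trim([((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % P
                         for i in range(n)])

        def poly_mul(a, b):
            out = [0] * (len(a) + len(b) - 1)
            for i, x in enumerate(a):
                for j, y in enumerate(b):
                    out[i + j] = (out[i + j] + x * y) % P
            return trim(out)

        def poly_divmod(a, b):
            q, r = [0] * max(1, len(a) - len(b) + 1), trim(a[:])
            inv = pow(b[-1], P - 2, P)     # inverse of the leading coefficient of b
            while len(r) >= len(b) and any(r):
                d = len(r) - len(b)
                c = r[-1] * inv % P
                q[d] = c
                r = poly_sub(r, [0] * d + [c * x % P for x in b])
            return trim(q), r

        def euclidean_decode_step(syndromes, t):
            # run Euclid on x^(2t) and S(x) until deg(remainder) < t
            r_prev, r = [0] * (2 * t) + [1], trim(list(syndromes))
            loc_prev, loc = [0], [1]
            while len(r) - 1 >= t:
                q, rem = poly_divmod(r_prev, r)
                loc_prev, loc = loc, poly_sub(loc_prev, poly_mul(q, loc))
                r_prev, r = r, rem
            return loc, r      # error locator and error evaluator, up to a common scale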

  8. HL7 document patient record architecture: an XML document architecture based on a shared information model.

    PubMed

    Dolin, R H; Alschuler, L; Behlen, F; Biron, P V; Boyer, S; Essin, D; Harding, L; Lincoln, T; Mattison, J E; Rishel, W; Sokolowski, R; Spinosa, J; Williams, J P

    1999-01-01

    The HL7 SGML/XML Special Interest Group is developing the HL7 Document Patient Record Architecture. This draft proposal strives to create a common data architecture for the interoperability of healthcare documents. Key components are that it is under the umbrella of HL7 standards, it is specified in Extensible Markup Language, the semantics are drawn from the HL7 Reference Information Model, and the document specifications form an architecture that, in aggregate, defines the semantics and structural constraints necessary for the exchange of clinical documents. The proposal is a work in progress and has not yet been submitted to HL7's formal balloting process. PMID:10566319

  9. Modelling the pulse transformer in SPICE

    NASA Astrophysics Data System (ADS)

    Godlewska, Malgorzata; Górecki, Krzysztof; Górski, Krzysztof

    2016-01-01

    The paper is devoted to modelling pulse transformers in SPICE. It describes the character of the selected models of this element, points out their advantages and disadvantages, and presents the results of experimental verification of the considered models. These models are characterized by varying degrees of complexity, from linearly coupled linear coils to nonlinear electrothermal models. The study was conducted for transformers with ring cores made of a variety of ferromagnetic materials, excited with a sinusoidal signal at a frequency of 100 kHz and different values of load resistance. The transformer operating conditions under which the considered models ensure acceptable accuracy of the calculations are indicated.
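
    For reference, the simplest model class mentioned above (linearly coupled linear coils) reduces to the textbook two-winding equations, which SPICE's linear coupled-inductor element implements; this is a standard relation, not taken from the paper:

        \[ v_1 = L_1\frac{di_1}{dt} + M\frac{di_2}{dt}, \qquad v_2 = M\frac{di_1}{dt} + L_2\frac{di_2}{dt}, \qquad M = k\sqrt{L_1 L_2},\;\; 0 \le k \le 1, \]

    where k is the coupling coefficient; the nonlinear and electrothermal models in the paper go beyond this by making the core characteristics field- and temperature-dependent.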

  10. Space Generic Open Avionics Architecture (SGOAA) reference model technical guide

    NASA Technical Reports Server (NTRS)

    Wray, Richard B.; Stovall, John R.

    1993-01-01

    This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.

  11. A Transformation Model of Engineering Education

    ERIC Educational Resources Information Center

    Owens, Camelia L.; Fortenberry, Norman L.

    2007-01-01

    A transformation model of engineering education at the undergraduate level is constructed to define the human and technical resources that contribute to the production of a university-trained engineer. The theory of technical systems is applied in the development of the model to transform a graduating pre-university pupil into a university-trained…

  12. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected as the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared, and the relationships among the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. The generation of software artifacts is based on a metamodel. Each component maps to a UML structured component, which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.
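
    As a toy illustration of the model-to-artifact idea (a hypothetical example, not the Cougaar MDA toolchain), a platform-independent component description can be mechanically rendered into a platform-specific code skeleton:

        component = {                      # toy platform-independent model (PIM)
            "name": "InventoryPlanner",
            "operations": [("planOrder", [("String", "item"), ("int", "qty")], "Order")],
        }

        def to_java_skeleton(comp):        # trivial PIM -> platform-specific artifact
            lines = ["public class %s {" % comp["name"]]
            for op, params, ret in comp["operations"]:
                args = ", ".join("%s %s" % (t, n) for t, n in params)
                lines += ["    public %s %s(%s) {" % (ret, op, args),
                          "        // TODO: generated stub",
                          "        return null;",
                          "    }"]
            lines.append("}")
            return "\n".join(lines)

        print(to_java_skeleton(component))

    A full MDA pipeline would drive the same model description through further generators to produce the documentation and test-case artifacts mentioned above.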

  13. Origin and models of oceanic transform faults

    NASA Astrophysics Data System (ADS)

    Gerya, Taras

    2012-02-01

    Mid-ocean ridges sectioned by transform faults represent prominent surface expressions of plate tectonics. A fundamental problem of plate tectonics is how this pattern has formed and why it is maintained. Gross-scale geometry of mid-ocean ridges is often inherited from the respective rifted margins. Indeed, transform faults seem to nucleate after the beginning of oceanic spreading and can spontaneously form at a single straight ridge. Both analog and numerical models of transform faults have been investigated since the 1970s. Two main groups of analog models were developed: thermomechanical (freezing wax) models with accreting and cooling plates and mechanical models with non-accreting lithosphere. Freezing wax models reproduced ridge-ridge transform faults, inactive fracture zones, rotating microplates, overlapping spreading centers and other features of oceanic ridges. However, these models often produced open spreading centers that are dissimilar to nature. Mechanical models, on the other hand, do not accrete the lithosphere and their results are thus only applicable for relatively small amounts of spreading. Three main types of numerical models were investigated: models of stress and displacement distribution around transforms, models of their thermal structure and crustal growth, and models of nucleation and evolution of ridge-transform fault patterns. It was shown that a limited number of spreading modes can form: transform faults, microplates, overlapping spreading centers, zigzag ridges and oblique connecting spreading centers. However, controversy exists over whether these patterns always result from pre-existing ridge offsets or can also form spontaneously at a single straight ridge during millions of years of accretion. Therefore, two types of transform fault interpretation exist: plate fragmentation structures vs. plate accretion structures. Models of transform faults are yet relatively scarce and partly controversial. Consequently, a number of first order

  14. Modeling the Evolution of Protein Domain Architectures Using Maximum Parsimony

    PubMed Central

    Fong, Jessica H.; Geer, Lewis Y.; Panchenko, Anna R.; Bryant, Stephen H.

    2007-01-01

    Domains are basic evolutionary units of proteins and most proteins have more than one domain. Advances in domain modeling and collection are making it possible to annotate a large fraction of known protein sequences by a linear ordering of their domains, yielding their architecture. Protein domain architectures link evolutionarily related proteins and underscore their shared functions. Here, we attempt to better understand this association by identifying the evolutionary pathways by which extant architectures may have evolved. We propose a model of evolution in which architectures arise through rearrangements of inferred precursor architectures and acquisition of new domains. These pathways are ranked using a parsimony principle, whereby scenarios requiring the fewest number of independent recombination events, namely fission and fusion operations, are assumed to be more likely. Using a data set of domain architectures present in 159 proteomes that represent all three major branches of the tree of life allows us to estimate the history of over 85% of all architectures in the sequence database. We find that the distribution of rearrangement classes is robust with respect to alternative parsimony rules for inferring the presence of precursor architectures in ancestral species. Analyzing the most parsimonious pathways, we find 87% of architectures to gain complexity over time through simple changes, among which fusion events account for 5.6 times as many architectures as fission. Our results may be used to compute domain architecture similarities, for example, based on the number of historical recombination events separating them. Domain architecture “neighbors” identified in this way may lead to new insights about the evolution of protein function. PMID:17166515

  15. A Model of Transformative Collaboration

    ERIC Educational Resources Information Center

    Swartz, Ann L.; Triscari, Jacqlyn S.

    2011-01-01

    Two collaborative writing partners sought to deepen their understanding of transformative learning by conducting several spirals of grounded theory research on their own collaborative relationship. Drawing from adult education, business, and social science literature and including descriptive analysis of their records of activity and interaction…

  16. Modeling Techniques for High Dependability Protocols and Architecture

    NASA Technical Reports Server (NTRS)

    LaValley, Brian; Ellis, Peter; Walter, Chris J.

    2012-01-01

    This report documents an investigation into modeling high dependability protocols and some specific challenges that were identified as a result of the experiments. The need for an approach was established and foundational concepts proposed for modeling different layers of a complex protocol and capturing the compositional properties that provide high dependability services for a system architecture. The approach centers around the definition of an architecture layer, its interfaces for composability with other layers and its bindings to a platform specific architecture model that implements the protocols required for the layer.

  17. E-Governance and Service Oriented Computing Architecture Model

    NASA Astrophysics Data System (ADS)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

    E-Governance is the effective application of information and communication technology (ICT) in government processes to accomplish safe and reliable information lifecycle management. The information lifecycle involves various processes such as capturing, preserving, manipulating and delivering information. E-Governance is meant to transform governance so that it is transparent, reliable, participatory and accountable from the citizens' point of view. The purpose of this paper is to propose an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines the information and services provided by the government, supports innovation, identifies ways of delivering services to citizens optimally, and supports implementation in a transparent and accountable practice. The paper also focuses on the E-government Service Manager as an essential, key factor in a service-oriented computing model that provides a dynamically extensible structural design in which every area or branch can introduce innovative services. At its heart, the paper examines a conceptual model that enables e-government communication among trade and business, citizens and government, and autonomous bodies.

  18. Framework for the Parametric System Modeling of Space Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Komar, David R.; Hoffman, Jim; Olds, Aaron D.; Seal, Mike D., II

    2008-01-01

    This paper presents a methodology for performing architecture definition and assessment prior to, or during, program formulation that utilizes a centralized, integrated architecture modeling framework operated by a small, core team of general space architects. This framework, known as the Exploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), enables: 1) a significantly larger fraction of an architecture trade space to be assessed in a given study timeframe; and 2) the complex element-to-element and element-to-system relationships to be quantitatively explored earlier in the design process. Discussion of the methodology advantages and disadvantages with respect to the distributed study team approach typically used within NASA to perform architecture studies is presented along with an overview of EXAMINE's functional components and tools. An example Mars transportation system architecture model is used to demonstrate EXAMINE's capabilities in this paper. However, the framework is generally applicable for exploration architecture modeling with destinations to any celestial body in the solar system.

  19. Vibrational testing of trabecular bone architectures using rapid prototype models.

    PubMed

    Mc Donnell, P; Liebschner, M A K; Tawackoli, Wafa; Mc Hugh, P E

    2009-01-01

    The purpose of this study was to investigate if standard analysis of the vibrational characteristics of trabecular architectures can be used to detect changes in the mechanical properties due to progressive bone loss. A cored trabecular specimen from a human lumbar vertebra was microCT scanned and a three-dimensional, virtual model in stereolithography (STL) format was generated. Uniform bone loss was simulated using a surface erosion algorithm. Rapid prototype (RP) replicas were manufactured from these virtualised models with 0%, 16% and 42% bone loss. Vibrational behaviour of the RP replicas was evaluated by performing a dynamic compression test through a frequency range using an electro-dynamic shaker. The acceleration and dynamic force responses were recorded and fast Fourier transform (FFT) analyses were performed to determine the response spectrum. Standard resonant frequency analysis and damping factor calculations were performed. The RP replicas were subsequently tested in compression beyond failure to determine their strength and modulus. It was found that the reductions in resonant frequency with increasing bone loss corresponded well with reductions in apparent stiffness and strength. This suggests that structural dynamics has the potential to be an alternative diagnostic technique for osteoporosis, although significant challenges must be overcome to determine the effect of the skin/soft tissue interface, the cortex and variabilities associated with in vivo testing. PMID:18555727

  20. Hough transform has O(N) complexity on SIMD N x N mesh array architectures. Technical report

    SciTech Connect

    Cypher, R.E.; Sanz, J.L.; Snyder, L.

    1987-07-01

    This paper reports on new algorithms for computing the Hough transform on mesh-array architectures. The mesh is fine-grained, consisting of an N x N array of processors, each holding a single pixel of the image. The mesh array operates in an SIMD mode. Several algorithms, differing in the techniques they use, their asymptotic complexity, or the architectural resources required, are presented for computing the Hough transform. The main algorithm computes any P angles of the Hough transform in O(N + P) time and uses only a constant amount of memory per processor. All the algorithms apply to the more general problem of computing the Radon transform of gray-level images.
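
    The report's contribution is the O(N + P) SIMD mesh algorithm; purely as a point of reference, a plain sequential accumulation of the same transform (P angles over an N x N binary image) can be sketched as follows:

        import numpy as np

        def hough_accumulate(image, angles):
            # accumulate votes for lines rho = x*cos(theta) + y*sin(theta)
            n = image.shape[0]
            rho_max = int(np.ceil(np.sqrt(2) * n))
            acc = np.zeros((len(angles), 2 * rho_max + 1), dtype=np.int64)
            ys, xs = np.nonzero(image)                     # coordinates of set pixels
            for k, theta in enumerate(angles):
                rho = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
                np.add.at(acc[k], rho + rho_max, 1)        # offset so indices are non-negative
            return acc

        # Example: votes for P = 180 angles over a random 64 x 64 binary image.
        img = (np.random.rand(64, 64) > 0.95).astype(np.uint8)
        acc = hough_accumulate(img, np.linspace(0, np.pi, 180, endpoint=False))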

  1. Model based analysis of piezoelectric transformers.

    PubMed

    Hemsel, T; Priya, S

    2006-12-22

    Piezoelectric transformers are increasingly getting popular in electrical devices owing to several advantages such as small size, high efficiency, absence of electromagnetic noise, and non-flammability. In addition to conventional applications such as the ballast for the backlight inverter in notebook computers, camera flash, and fuel ignition, several new applications have emerged, such as AC/DC converters, battery chargers and automobile lighting. These new applications demand high power density and a wide range of voltage gain. Currently, the transformer power density is limited to 40 W/cm(3) obtained at low voltage gain. The purpose of this study was to investigate a transformer design that has the potential of providing higher power density and a wider range of voltage gain. The new transformer design utilizes the radial mode at both the input and output ports and has unidirectional polarization in the ceramics. This design was found to provide 30 W power with an efficiency of 98% and a 30 degrees C temperature rise from room temperature. An electro-mechanical equivalent circuit model was developed to describe the characteristics of the piezoelectric transformer. The model was found to successfully predict the characteristics of the transformer. Excellent matching was found between the computed and experimental results. The results of this study will allow unipoled piezoelectric transformers with specified performance to be designed deterministically. It is expected that in the near future the unipoled transformer will gain significant importance in various electrical components. PMID:16808951

  2. Generalized transformation for decorated spin models

    NASA Astrophysics Data System (ADS)

    Rojas, Onofre; Valverde, J. S.; de Souza, S. M.

    2009-04-01

    The paper discusses the transformation of decorated Ising models into an effective undecorated spin model, using the most general Hamiltonian for interacting Ising models including long-range and high-order interactions. The inverse of a Vandermonde matrix with equidistant nodes [-s,s] is used to obtain an analytical expression for the transformation. This kind of transformation is very useful for obtaining the partition function of decorated systems. The method presented by Fisher is also extended, in order to obtain the correlation functions of decorated Ising models transformed into an effective undecorated Ising model. We apply this transformation to a particular mixed spin-(1/2, 1) and (1/2, 2) square lattice with only nearest-site interaction. This model could be transformed into an effective uniform spin-S square lattice with nearest and next-nearest interaction; furthermore, the effective Hamiltonian also includes combinations of three-body and four-body interactions. In particular, we considered spins 1 and 2.
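
    For orientation, the basic decoration transformation has the schematic form below (generic spin-1/2 notation for the nodal spins, not the paper's exact notation; the paper generalizes this to higher spins and higher-order couplings, which is where the inverse Vandermonde matrix enters):

        \[ \sum_{\mu}\exp\!\big[-\beta H_{\mathrm{dec}}(\mu;\sigma_1,\sigma_2)\big] \;=\; A\,\exp\!\big[K_{\mathrm{eff}}\,\sigma_1\sigma_2 + L_{\mathrm{eff}}(\sigma_1+\sigma_2) + C\big], \]

    where the decorated (inner) degrees of freedom \(\mu\) are traced out and the constants \(A\), \(K_{\mathrm{eff}}\), \(L_{\mathrm{eff}}\), \(C\) are fixed by matching both sides at the distinct values of \((\sigma_1,\sigma_2)\).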

  3. Systolic architectures for the computation of the discrete Hartley and the discrete cosine transforms based on prime factor decomposition

    SciTech Connect

    Chakrabarti, C. (Dept. of Electrical Engineering); Ja Ja, J. (Dept. of Electrical Engineering)

    1990-11-01

    This paper proposes two-dimensional systolic array implementations for computing the discrete Hartley transform (DHT) and the discrete cosine transform (DCT) when the transform size N is decomposable into mutually prime factors. The existing two-dimensional formulations for the DHT and DCT are modified and the corresponding algorithms are mapped onto two-dimensional systolic arrays. The resulting architecture is fully pipelined with no control units. The hardware design is based on bit-serial, left-to-right, MSB-to-LSB binary arithmetic.

  4. Development of a small single-ring OpenPET prototype with a novel transformable architecture.

    PubMed

    Tashima, Hideaki; Yoshida, Eiji; Inadama, Naoko; Nishikido, Fumihiko; Nakajima, Yasunori; Wakizaka, Hidekatsu; Shinaji, Tetsuya; Nitta, Munetaka; Kinouchi, Shoko; Suga, Mikio; Haneishi, Hideaki; Inaniwa, Taku; Yamaya, Taiga

    2016-02-21

    The single-ring OpenPET (SROP), for which the detector arrangement has a cylinder shape cut by two parallel planes at a slant angle to form an open space, is our original proposal for in-beam PET. In this study, we developed a small prototype of an axial-shift type SROP (AS-SROP) with a novel transformable architecture for a proof-of-concept. In the AS-SROP, detectors originally forming a cylindrical PET are axially shifted little by little. We designed the small AS-SROP prototype for 4-layer depth-of-interaction detectors arranged in a ring diameter of 250 mm. The prototype had two modes: open and closed. The open mode formed the SROP with the open space of 139 mm and the closed mode formed a conventional cylindrical PET. The detectors were simultaneously moved by a rotation handle allowing them to be transformed between the two modes. We evaluated the basic performance of the developed prototype and carried out in-beam imaging tests in the HIMAC using (11)C radioactive beam irradiation. As a result, we found the open mode enabled in-beam PET imaging at a slight cost of imaging performance; the spatial resolution and sensitivity were 2.6 mm and 5.1% for the open mode and 2.1 mm and 7.3% for the closed mode. We concluded that the AS-SROP can minimize the decrease of resolution and sensitivity, for example, by transforming into the closed mode immediately after the irradiation while maintaining the open space only for the in-beam PET measurement. PMID:26854528

  5. Development of a small single-ring OpenPET prototype with a novel transformable architecture

    NASA Astrophysics Data System (ADS)

    Tashima, Hideaki; Yoshida, Eiji; Inadama, Naoko; Nishikido, Fumihiko; Nakajima, Yasunori; Wakizaka, Hidekatsu; Shinaji, Tetsuya; Nitta, Munetaka; Kinouchi, Shoko; Suga, Mikio; Haneishi, Hideaki; Inaniwa, Taku; Yamaya, Taiga

    2016-02-01

    The single-ring OpenPET (SROP), for which the detector arrangement has a cylinder shape cut by two parallel planes at a slant angle to form an open space, is our original proposal for in-beam PET. In this study, we developed a small prototype of an axial-shift type SROP (AS-SROP) with a novel transformable architecture for a proof-of-concept. In the AS-SROP, detectors originally forming a cylindrical PET are axially shifted little by little. We designed the small AS-SROP prototype for 4-layer depth-of-interaction detectors arranged in a ring diameter of 250 mm. The prototype had two modes: open and closed. The open mode formed the SROP with the open space of 139 mm and the closed mode formed a conventional cylindrical PET. The detectors were simultaneously moved by a rotation handle allowing them to be transformed between the two modes. We evaluated the basic performance of the developed prototype and carried out in-beam imaging tests in the HIMAC using 11C radioactive beam irradiation. As a result, we found the open mode enabled in-beam PET imaging at a slight cost of imaging performance; the spatial resolution and sensitivity were 2.6 mm and 5.1% for the open mode and 2.1 mm and 7.3% for the closed mode. We concluded that the AS-SROP can minimize the decrease of resolution and sensitivity, for example, by transforming into the closed mode immediately after the irradiation while maintaining the open space only for the in-beam PET measurement.

  6. Demand Activated Manufacturing Architecture (DAMA) model for supply chain collaboration

    SciTech Connect

    Chapman, Leon D.; Petersen, Marjorie B.

    2000-03-13

    The Demand Activated Manufacturing Architecture (DAMA) project during the last five years of work with the U.S. Integrated Textile Complex (retail, apparel, textile, and fiber sectors) has developed an inter-enterprise architecture and collaborative model for supply chains. This model will enable improved collaborative business across any supply chain. The DAMA Model for Supply Chain Collaboration is a high-level model for collaboration to achieve Demand Activated Manufacturing. The five major elements of the architecture to support collaboration are (1) activity or process, (2) information, (3) application, (4) data, and (5) infrastructure. These five elements are tied to the application of the DAMA architecture to three phases of collaboration - prepare, pilot, and scale. There are six collaborative activities that may be employed in this model: (1) Develop Business Planning Agreements, (2) Define Products, (3) Forecast and Plan Capacity Commitments, (4) Schedule Product and Product Delivery, (5) Expedite Production and Delivery Exceptions, and (6) Populate Supply Chain Utility. The Supply Chain Utility is a set of applications implemented to support collaborative product definition, forecast visibility, planning, scheduling, and execution. The DAMA architecture and model will be presented along with the process for implementing this DAMA model.

  7. Towards a Framework for Modeling Space Systems Architectures

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Skipper, Joseph

    2006-01-01

    Topics covered include: 1) Statement of the problem: a) Space system architecture is complex; b) Existing terrestrial approaches must be adapted for space; c) Need a common architecture methodology and information model; d) Need appropriate set of viewpoints. 2) Requirements on a space systems model. 3) Model Based Engineering and Design (MBED) project: a) Evaluated different methods; b) Adapted and utilized RASDS & RM-ODP; c) Identified useful set of viewpoints; d) Did actual model exchanges among selected subset of tools. 4) Lessons learned & future vision.

  8. Bayesian Transformation Models for Multivariate Survival Data

    PubMed Central

    DE CASTRO, MÁRIO; CHEN, MING-HUI; IBRAHIM, JOSEPH G.; KLEIN, JOHN P.

    2014-01-01

    In this paper we propose a general class of gamma frailty transformation models for multivariate survival data. The transformation class includes the commonly used proportional hazards and proportional odds models. The proposed class also includes a family of cure rate models. Under an improper prior for the parameters, we establish propriety of the posterior distribution. A novel Gibbs sampling algorithm is developed for sampling from the observed data posterior distribution. A simulation study is conducted to examine the properties of the proposed methodology. An application to a data set from a cord blood transplantation study is also reported. PMID:24904194
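
    For orientation, a common way to generate this transformation class is to integrate a gamma frailty out of a proportional hazards model; in generic notation (not necessarily the paper's):

        \[ S(t \mid x) \;=\; \exp\!\Big\{-G_{\theta}\big(\Lambda_0(t)\,e^{x^{\top}\beta}\big)\Big\}, \qquad G_{\theta}(u) \;=\; \frac{\log(1+\theta u)}{\theta}, \]

    so that \(\theta \to 0\) recovers the proportional hazards model (\(G_0(u)=u\)) and \(\theta = 1\) gives the proportional odds model.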

  9. Non-linear transformer modeling and simulation

    SciTech Connect

    Archer, W.E.; Deveney, M.F.; Nagel, R.L.

    1994-08-01

    Transformer models for simulation with PSpice and Analogy's Saber are being developed using experimental B-H loop and network analyzer measurements. The models are evaluated for accuracy and convergence using several test circuits. Results are presented which demonstrate the effects on circuit performance of magnetic core losses, eddy currents, and mechanical stress on the magnetic cores.

  10. Optimizing transformations of stencil operations for parallel cache-based architectures

    SciTech Connect

    Bassetti, F.; Davis, K.

    1999-06-28

    This paper describes a new technique for optimizing serial and parallel stencil- and stencil-like operations for cache-based architectures. This technique takes advantage of the semantic knowledge implicit in stencil-like computations. The technique is implemented as a source-to-source program transformation; because of its specificity it could not be expected of a conventional compiler. Empirical results demonstrate a uniform factor-of-two speedup. The experiments clearly show the benefits of this technique to be a consequence, as intended, of the reduction in cache misses. The test codes are based on a 5-point stencil obtained by the discretization of the Poisson equation and applied to a two-dimensional uniform grid using the Jacobi method as an iterative solver. Results are presented for 1-D tiling on a single processor, and in parallel using a 1-D data partition. For the parallel case both blocking and non-blocking communication are tested. The same scheme of experiments has been performed for the 2-D tiling case; however, for the parallel case the 2-D partitioning is not discussed here, so the parallel case handled for 2-D is 2-D tiling with 1-D data partitioning.
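
    A minimal sketch of the kind of loop restructuring involved, assuming a 5-point Jacobi sweep and 1-D tiling over columns (illustrative Python only; the paper's source-to-source transformation targets compiled codes, where the reduction in cache misses actually pays off):

        import numpy as np

        def jacobi_sweep_tiled(u, f, h2, tile=64):
            # one Jacobi sweep for a Poisson-type problem, processed one column
            # block (tile) at a time so each block's working set stays cache resident
            n, m = u.shape
            new = u.copy()
            for j0 in range(1, m - 1, tile):
                j1 = min(j0 + tile, m - 1)
                new[1:n-1, j0:j1] = 0.25 * (u[0:n-2, j0:j1] + u[2:n, j0:j1] +
                                            u[1:n-1, j0-1:j1-1] + u[1:n-1, j0+1:j1+1] -
                                            h2 * f[1:n-1, j0:j1])
            return new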

  11. Transformation model selection by multiple hypotheses testing

    NASA Astrophysics Data System (ADS)

    Lehmann, Rüdiger

    2014-12-01

    Transformations between different geodetic reference frames are often performed such that first the transformation parameters are determined from control points. If in the first place we do not know which of the numerous transformation models is appropriate then we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance, to the case that also constraints for such parameters are tested. This provides more flexibility when setting up such a test. One can formulate a general model with a maximum number of transformation parameters and specialize it by adding constraints to those parameters, which need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier. They are shown to perform superior to the more intuitive test statistics derived from misclosures. It is shown how model selection by multiple hypotheses testing relates to the use of information criteria like AICc and Mallows' Cp, which are based on an information theoretic approach. Nevertheless, whenever comparable, the results of an exemplary computation almost coincide.
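
    For reference, the information criteria mentioned have the standard forms below (general definitions, not specific to the geodetic setting):

        \[ \mathrm{AIC_c} \;=\; -2\ln \hat L + 2k + \frac{2k(k+1)}{n-k-1}, \qquad C_p \;=\; \frac{\mathrm{SSE}_p}{\hat\sigma^2} - n + 2p, \]

    with \(k\) (respectively \(p\)) the number of estimated parameters, \(n\) the number of observations, \(\mathrm{SSE}_p\) the residual sum of squares of the candidate model, and \(\hat\sigma^2\) an error-variance estimate from the full model.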

  12. Hierarchical decomposition model for reconfigurable architecture

    NASA Astrophysics Data System (ADS)

    Erdogan, Simsek; Wahab, Abdul

    1996-10-01

    This paper introduces a systematic approach for abstract modeling of VLSI digital systems using a hierarchical decomposition process and HDL. In particular, the modeling of a back-propagation neural network on massively parallel reconfigurable hardware is used to illustrate the design process, rather than toy examples. Based on the design specification of the algorithm, a functional model is developed through successive refinement and decomposition for execution on the reconfigurable machine. First, a top-level block diagram of the system is derived. Then, a schematic sheet of the corresponding structural model is developed to show the interconnections of the main functional building blocks. Next, the functional blocks are decomposed iteratively as required. Finally, the blocks are modeled using HDL and verified against the block specifications.

  13. Improving Project Management Using Formal Models and Architectures

    NASA Technical Reports Server (NTRS)

    Kahn, Theodore; Sturken, Ian

    2011-01-01

    This talk discusses the advantages formal modeling and architecture brings to project management. These emerging technologies have both great potential and challenges for improving information available for decision-making. The presentation covers standards, tools and cultural issues needing consideration, and includes lessons learned from projects the presenters have worked on.

  14. Modeling of Euclidean braided fiber architectures to optimize composite properties

    NASA Technical Reports Server (NTRS)

    Armstrong-Carroll, E.; Pastore, C.; Ko, F. K.

    1992-01-01

    Three-dimensional braided fiber reinforcements are a very effective toughening mechanism for composite materials. The integral yarn path inherent to this fiber architecture allows for effective multidirectional dispersion of strain energy and negates delamination problems. In this paper a geometric model of Euclidean braid fiber architectures is presented. This information is used to determine the degree of geometric isotropy in the braids. This information, when combined with candidate material properties, can be used to quickly generate an estimate of the available load-carrying capacity of Euclidean braids at any arbitrary angle.

  15. Parallel, iterative solution of sparse linear systems: Models and architectures

    NASA Technical Reports Server (NTRS)

    Reed, D. A.; Patrick, M. L.

    1984-01-01

    A model of a general class of asynchronous, iterative solution methods for linear systems is developed. In the model, the system is solved by creating several cooperating tasks that each compute a portion of the solution vector. A data transfer model predicting both the probability that data must be transferred between two tasks and the amount of data to be transferred is presented. This model is used to derive an execution time model for predicting parallel execution time and an optimal number of tasks given the dimension and sparsity of the coefficient matrix and the costs of computation, synchronization, and communication. The suitability of different parallel architectures for solving randomly sparse linear systems is discussed. Based on the complexity of task scheduling, one parallel architecture, based on a broadcast bus, is presented and analyzed.

  16. Modeling Virtual Organization Architecture with the Virtual Organization Breeding Methodology

    NASA Astrophysics Data System (ADS)

    Paszkiewicz, Zbigniew; Picard, Willy

    While Enterprise Architecture Modeling (EAM) methodologies become more and more popular, an EAM methodology tailored to the needs of virtual organizations (VO) is still to be developed. Among the most popular EAM methodologies, TOGAF has been chosen as the basis for a new EAM methodology taking into account the characteristics of VOs presented in this paper. In this new methodology, referred to as the Virtual Organization Breeding Methodology (VOBM), concepts developed within the ECOLEAD project, e.g. the concept of a Virtual Breeding Environment (VBE) or the VO creation schema, serve as fundamental elements for its development. VOBM is a generic methodology that should be adapted to a given VBE. VOBM defines the structure of VBE and VO architectures in a service-oriented environment, as well as an architecture development method for virtual organizations (ADM4VO). Finally, a preliminary set of tools and methods for VOBM is given in this paper.

  17. Architecture, modeling, and analysis of a plasma impedance probe

    NASA Astrophysics Data System (ADS)

    Jayaram, Magathi

    Variations in ionospheric plasma density can cause large amplitude and phase changes in the radio waves passing through this region. Ionospheric weather can have detrimental effects on several communication systems, including radars, navigation systems such as the Global Positioning System (GPS), and high-frequency communications. As a result, creating models of the ionospheric density is of paramount interest to scientists working in the field of satellite communication. Numerous empirical and theoretical models have been developed to study the upper atmosphere climatology and weather. Multiple measurements of plasma density over a region are of marked importance while creating these models. The lack of spatially distributed observations in the upper atmosphere is currently a major limitation in space weather research. A constellation of CubeSat platforms would be ideal to take such distributed measurements. The use of miniaturized instruments that can be accommodated on small satellites, such as CubeSats, would be key to achieving these science goals for space weather. The accepted instrumentation techniques for measuring the electron density are the Langmuir probes and the Plasma Impedance Probe (PIP). While Langmuir probes are able to provide higher resolution measurements of relative electron density, the Plasma Impedance Probes provide absolute electron density measurements irrespective of spacecraft charging. The central goal of this dissertation is to develop an integrated architecture for the PIP that will enable space weather research from CubeSat platforms. The proposed PIP chip integrates all of the major analog and mixed-signal components needed to perform swept-frequency impedance measurements. The design's primary innovation is the integration of matched Analog-to-Digital Converters (ADC) on a single chip for sampling the probe's current and voltage signals. A Fast Fourier Transform (FFT) is performed by an off-chip Field-Programmable Gate Array (FPGA
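
    A simplified, hypothetical sketch of the measurement principle described above: with matched ADCs sampling the probe voltage and current, the complex impedance at each excitation frequency follows from the ratio of their spectra (illustration only, not the PIP chip's on-board processing chain):

        import numpy as np

        def impedance_spectrum(v_samples, i_samples, fs):
            w = np.hanning(len(v_samples))                 # taper to reduce spectral leakage
            V = np.fft.rfft(v_samples * w)
            I = np.fft.rfft(i_samples * w)
            freqs = np.fft.rfftfreq(len(v_samples), d=1.0 / fs)
            Z = np.full(V.shape, np.nan + 0j)              # undefined where no current energy
            ok = np.abs(I) > 1e-12 * max(np.abs(I).max(), 1e-30)
            Z[ok] = V[ok] / I[ok]
            return freqs, Z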

  18. Self-organizing spiking neural model for learning fault-tolerant spatio-motor transformations.

    PubMed

    Srinivasa, Narayan; Cho, Youngkwan

    2012-10-01

    In this paper, we present a spiking neural model that learns spatio-motor transformations. The model is in the form of a multilayered architecture consisting of integrate and fire neurons and synapses that employ spike-timing-dependent plasticity learning rule to enable the learning of such transformations. We developed a simple 2-degree-of-freedom robot-based reaching task which involves the learning of a nonlinear function. Computer simulations demonstrate the capability of such a model for learning the forward and inverse kinematics for such a task and hence to learn spatio-motor transformations. The interesting aspect of the model is its capacity to be tolerant to partial absence of sensory or motor inputs at various stages of learning. We believe that such a model lays the foundation for learning other complex functions and transformations in real-world scenarios. PMID:24807999

  19. Multiscale Modeling of Phase Transformations in Steels

    NASA Astrophysics Data System (ADS)

    Militzer, M.; Hoyt, J. J.; Provatas, N.; Rottler, J.; Sinclair, C. W.; Zurob, H. S.

    2014-05-01

    Multiscale modeling tools have great potential to aid the development of new steels and processing routes. Currently, industrial process models are at least in part based on empirical material parameters to describe microstructure evolution and the resulting material properties. Modeling across different length and time scales is a promising approach to develop next-generation process models with enhanced predictive capabilities for the role of alloying elements. The status and challenges of this multiscale modeling approach are discussed for microstructure evolution in advanced low-carbon steels. First-principle simulations of solute segregation to a grain boundary and an austenite-ferrite interface in iron confirm trends of important alloying elements (e.g., Nb, Mo, and Mn) on grain growth, recrystallization, and phase transformation in steels. In particular, the linkage among atomistic simulations, phase-field modeling, and classic diffusion models is illustrated for the effects of solute drag on the austenite-to-ferrite transformation as observed in dedicated experimental studies for iron model alloys and commercial steels.

  20. System Architecture Modeling for Technology Portfolio Management using ATLAS

    NASA Technical Reports Server (NTRS)

    Thompson, Robert W.; O'Neil, Daniel A.

    2006-01-01

    Strategic planners and technology portfolio managers have traditionally relied on consensus-based tools, such as Analytical Hierarchy Process (AHP) and Quality Function Deployment (QFD) in planning the funding of technology development. While useful to a certain extent, these tools are limited in the ability to fully quantify the impact of a technology choice on system mass, system reliability, project schedule, and lifecycle cost. The Advanced Technology Lifecycle Analysis System (ATLAS) aims to provide strategic planners a decision support tool for analyzing technology selections within a Space Exploration Architecture (SEA). Using ATLAS, strategic planners can select physics-based system models from a library, configure the systems with technologies and performance parameters, and plan the deployment of a SEA. Key parameters for current and future technologies have been collected from subject-matter experts and other documented sources in the Technology Tool Box (TTB). ATLAS can be used to compare the technical feasibility and economic viability of a set of technology choices for one SEA, and compare it against another set of technology choices or another SEA. System architecture modeling in ATLAS is a multi-step process. First, the modeler defines the system level requirements. Second, the modeler identifies technologies of interest whose impact on the SEA is to be assessed. Third, the system modeling team creates models of architecture elements (e.g. launch vehicles, in-space transfer vehicles, crew vehicles) if they are not already in the model library. Finally, the architecture modeler develops a script for the ATLAS tool to run, and the results for comparison are generated.

  1. Space station architectural elements model study

    NASA Technical Reports Server (NTRS)

    Taylor, T. C.; Spencer, J. S.; Rocha, C. J.; Kahn, E.; Cliffton, E.; Carr, C.

    1987-01-01

    The worksphere, a user controlled computer workstation enclosure, was expanded in scope to an engineering workstation suitable for use on the Space Station as a crewmember desk in orbit. The concept was also explored as a module control station capable of enclosing enough equipment to control the station from each module. The concept has commercial potential for the Space Station and surface workstation applications. The central triangular beam interior configuration was expanded and refined to seven different beam configurations. These included triangular on center, triangular off center, square, hexagonal small, hexagonal medium, hexagonal large and the H beam. Each was explored with some considerations as to the utilities and a suggested evaluation factor methodology was presented. Scale models of each concept were made. The models were helpful in researching the seven beam configurations and determining the negative residual (unused) volume of each configuration. A flexible hardware evaluation factor concept is proposed which could be helpful in evaluating interior space volumes from a human factors point of view. A magnetic version with all the graphics is available from the author or the technical monitor.

  2. A Model-Driven Architecture Approach for Modeling, Specifying and Deploying Policies in Autonomous and Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel

    2006-01-01

    Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.

  3. On Estimation of Partially Linear Transformation Models.

    PubMed

    Lu, Wenbin; Zhang, Hao Helen

    2010-06-01

    We study a general class of partially linear transformation models, which extend linear transformation models by incorporating nonlinear covariate effects in survival data analysis. A new martingale-based estimating equation approach, consisting of both global and kernel-weighted local estimation equations, is developed for estimating the parametric and nonparametric covariate effects in a unified manner. We show that with a proper choice of the kernel bandwidth parameter, one can obtain the consistent and asymptotically normal parameter estimates for the linear effects. Asymptotic properties of the estimated nonlinear effects are established as well. We further suggest a simple resampling method to estimate the asymptotic variance of the linear estimates and show its effectiveness. To facilitate the implementation of the new procedure, an iterative algorithm is developed. Numerical examples are given to illustrate the finite-sample performance of the procedure. PMID:20802823
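
    The model class studied has the generic form below (up to sign and notation conventions, shown here for orientation only):

        \[ H(T) \;=\; -\beta^{\top} Z \;-\; g(W) \;+\; \varepsilon, \]

    where \(H\) is an unspecified monotone increasing transformation, \(g\) an unknown smooth function of the covariate \(W\), and \(\varepsilon\) has a known distribution; extreme-value errors give a Cox-type model and logistic errors a proportional-odds-type model.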

  4. On Estimation of Partially Linear Transformation Models

    PubMed Central

    Lu, Wenbin; Zhang, Hao Helen

    2010-01-01

    We study a general class of partially linear transformation models, which extend linear transformation models by incorporating nonlinear covariate effects in survival data analysis. A new martingale-based estimating equation approach, consisting of both global and kernel-weighted local estimation equations, is developed for estimating the parametric and nonparametric covariate effects in a unified manner. We show that with a proper choice of the kernel bandwidth parameter, one can obtain the consistent and asymptotically normal parameter estimates for the linear effects. Asymptotic properties of the estimated nonlinear effects are established as well. We further suggest a simple resampling method to estimate the asymptotic variance of the linear estimates and show its effectiveness. To facilitate the implementation of the new procedure, an iterative algorithm is developed. Numerical examples are given to illustrate the finite-sample performance of the procedure. PMID:20802823

  5. Modelling parallel programs and multiprocessor architectures with AXE

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Fineman, Charles E.

    1991-01-01

    AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players. Their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen. These include CPU and message routing bottlenecks, and the dynamic status of the software.

  6. Modeling of transformers using circuit simulators

    SciTech Connect

    Archer, W.E.; Deveney, M.F.; Nagel, R.L.

    1994-07-01

    Transformers of two different designs, an unencapsulated pot core and an encapsulated toroidal core, have been modeled for circuit analysis with circuit simulation tools. We selected MicroSim's PSPICE and Analogy's SABER as the simulation tools and used experimental B-H loop and network analyzer measurements to generate the needed input data. The models are compared for accuracy and convergence using the circuit simulators. Results are presented which demonstrate the effects on circuit performance from magnetic core losses, eddy currents, and mechanical stress on the magnetic cores.

  7. A kinematic model of ridge-transform geometry evolution

    NASA Technical Reports Server (NTRS)

    Stoddard, Paul R.; Stein, Seth

    1988-01-01

    A simple kinematic model is used to study the effects of various parameters on the evolution of zero-offset transforms and very-long-offset transforms. Consideration is given to the effects of initial configuration, degree of asymmetry, and degree of bias in asymmetry on the generation of these ridge-transform geometries and on the possible steady-state nature of the transform length spectra. Of the parameters tested, only lack of 'memory' of zero-offset transforms affects the transform length distribution.

  8. Coaching Model + Clinical Playbook = Transformative Learning.

    PubMed

    Fletcher, Katherine A; Meyer, Mary

    2016-01-01

    Health care employers demand that workers be skilled in clinical reasoning, able to work within complex interprofessional teams to provide safe, quality patient-centered care in a complex evolving system. To this end, there have been calls for radical transformation of nursing education including the development of a baccalaureate generalist nurse. Based on recommendations from the American Association of Colleges of Nursing, faculty concluded that clinical education must change moving beyond direct patient care by applying the concepts associated with designer, manager, and coordinator of care and being a member of a profession. To accomplish this, the faculty utilized a system of focused learning assignments (FLAs) that present transformative learning opportunities that expose students to "disorienting dilemmas," alternative perspectives, and repeated opportunities to reflect and challenge their own beliefs. The FLAs collected in a "Playbook" were scaffolded to build the student's competencies over the course of the clinical experience. The FLAs were centered on the 6 Quality and Safety Education for Nurses competencies, with 2 additional concepts of professionalism and systems-based practice. The FLAs were competency-based exercises that students performed when not assigned to direct patient care or had free clinical time. Each FLA had a lesson plan that allowed the student and faculty member to see the competency addressed by the lesson, resources, time on task, student instructions, guide for reflection, grading rubric, and recommendations for clinical instructor. The major advantages of the model included (a) consistent implementation of structured learning experiences by a diverse teaching staff using a coaching model of instruction; (b) more systematic approach to present learning activities that build upon each other; (c) increased time for faculty to interact with students providing direct patient care; (d) guaranteed capture of selected transformative

  9. Entity-Centric Abstraction and Modeling Framework for Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Lewe, Jung-Ho; DeLaurentis, Daniel A.; Mavris, Dimitri N.; Schrage, Daniel P.

    2007-01-01

    A comprehensive framework for representing transportation architectures is presented. After discussing a series of preceding perspectives and formulations, the intellectual underpinning of the novel framework, an entity-centric abstraction of transportation, is described. The entities include endogenous and exogenous factors, and functional expressions are offered that relate these and their evolution. The end result is a Transportation Architecture Field which permits analysis of future concepts from a holistic perspective. A simulation model which stems from the framework is presented and exercised, producing results which quantify improvements in air transportation due to advanced aircraft technologies. Finally, a modeling hypothesis and its accompanying criteria are proposed to test further use of the framework for evaluating new transportation solutions.

  10. From Point Clouds to Architectural Models: Algorithms for Shape Reconstruction

    NASA Astrophysics Data System (ADS)

    Canciani, M.; Falcolini, C.; Saccone, M.; Spadafora, G.

    2013-02-01

    The use of terrestrial laser scanners in architectural survey applications has become more and more common. Raw data complexity, as given by scanner restitution, leads to several problems in design and 3D modelling starting from point clouds. In this context we present a study on architectural sections and mathematical algorithms for their shape reconstruction, according to known or definite geometrical rules, focusing on shapes of different complexity. Each step of the semi-automatic algorithm has been developed using Mathematica software and CAD, integrating both programs in order to reconstruct a geometrical CAD model of the object. Our study is motivated by the fact that, for architectural survey, most three-dimensional modelling procedures concerning point clouds produce superabundant, but often unnecessary, information and are also very expensive in terms of CPU time, using more and more sophisticated hardware and software. On the contrary, it is important to simplify/decimate the point cloud in order to recognize a particular form out of some definite geometric/architectonic shapes. Such a process consists of several steps: first, the definition of plane sections and characterization of their architecture; secondly, the construction of a continuous plane curve depending on some parameters. In the third step we allow the selection on the curve of some nodal points with given specific characteristics (symmetry, tangency conditions, shadowing exclusion, corners, … ). The fourth and last step is the construction of a best shape defined by comparison with an abacus of known geometrical elements, such as moulding profiles, leading to a precise architectural section. The algorithms have been developed and tested in very different situations and are presented in a case study of complex geometries such as the moulding profiles in the Church of San Carlo alle Quattro Fontane.

  11. A performance model of the OSI communication architecture

    NASA Astrophysics Data System (ADS)

    Kritzinger, P. S.

    1986-06-01

    An analytical model aiming at predicting the performance of software implementations which would be built according to the OSI basic reference model is proposed. The model uses the peer protocol standard of a layer as the reference description of an implementation of that layer. The model is basically a closed multiclass multichain queueing network with a processor-sharing center, modeling process contention at the processor, and a delay center, modeling times spent waiting for responses from the corresponding peer processes. Each individual transition of the protocol constitutes a different class and each layer of the architecture forms a closed chain. Performance statistics include queue lengths and response times at the processor as a function of processor speed and the number of open connections. It is shown how to reduce the model should the protocol state space become very large. Numerical results based upon the derived formulas are given.

  12. Managing changes in the enterprise architecture modelling context

    NASA Astrophysics Data System (ADS)

    Khanh Dam, Hoa; Lê, Lam-Son; Ghose, Aditya

    2016-07-01

    Enterprise architecture (EA) models the whole enterprise in various aspects regarding both business processes and information technology resources. As the organisation grows, the architecture of its systems and processes must also evolve to meet the demands of the business environment. Evolving an EA model may involve making changes to various components across different levels of the EA. As a result, an important issue before making a change to an EA model is assessing the ripple effect of the change, i.e. change impact analysis. Another critical issue is change propagation: given a set of primary changes that have been made to the EA model, what additional secondary changes are needed to maintain consistency across multiple levels of the EA. There has been however limited work on supporting the maintenance and evolution of EA models. This article proposes an EA description language, namely ChangeAwareHierarchicalEA, integrated with an evolution framework to support both change impact analysis and change propagation within an EA model. The core part of our framework is a technique for computing the impact of a change and a new method for generating interactive repair plans from Alloy consistency rules that constrain the EA model.
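
    As a generic illustration of the change impact analysis idea (not the ChangeAwareHierarchicalEA technique itself), the impact set of a primary change can be taken as everything reachable from the changed element along reversed depends-on edges:

        from collections import deque

        def impact_set(dependents, changed):
            # dependents maps an element to the elements that directly depend on it
            seen, queue = {changed}, deque([changed])
            while queue:
                node = queue.popleft()
                for d in dependents.get(node, ()):
                    if d not in seen:
                        seen.add(d)
                        queue.append(d)
            return seen - {changed}

        ea = {"CustomerDB": ["BillingService"], "BillingService": ["InvoicePortal"]}
        print(impact_set(ea, "CustomerDB"))    # {'BillingService', 'InvoicePortal'}

    Change propagation then amounts to proposing secondary edits for the elements in this set so that the consistency rules hold again.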

  13. A new global GIS architecture based on STQIE model

    NASA Astrophysics Data System (ADS)

    Cheng, Chengqi; Guan, Li; Guo, Shide; Pu, Guoliang; Sun, Min

    2007-06-01

    Global GIS is a system which supports the processing of huge data volumes and direct global manipulation on a global grid based on a spheroid or ellipsoid surface. A new Global GIS architecture based on the STQIE model is designed in this paper, according to computer cluster theory, space-time integration technology and virtual reality technology. The architecture comprises a four-level protocol framework and a three-layer data management pattern for the organization, management and publication of spatial information. Following this design, a global 3D prototype system was developed in C++. This system integrates simulation with GIS and supports the display of multi-resolution DEMs, imagery and multi-dimensional static or dynamic 3D objects.

  14. SpaceWire model development technology for satellite architecture.

    SciTech Connect

    Eldridge, John M.; Leemaster, Jacob Edward; Van Leeuwen, Brian P.

    2011-09-01

    Packet switched data communications networks that use distributed processing architectures have the potential to simplify the design and development of new, increasingly more sophisticated satellite payloads. In addition, the use of reconfigurable logic may reduce the amount of redundant hardware required in space-based applications without sacrificing reliability. These concepts were studied using software modeling and simulation, and the results are presented in this report. Models of the commercially available, packet switched data interconnect SpaceWire protocol were developed and used to create network simulations of data networks containing reconfigurable logic with traffic flows for timing system distribution.

  15. Plant growth and architectural modelling and its applications

    PubMed Central

    Guo, Yan; Fourcaud, Thierry; Jaeger, Marc; Zhang, Xiaopeng; Li, Baoguo

    2011-01-01

    Over the last decade, a growing number of scientists around the world have invested in research on plant growth and architectural modelling and applications (often abbreviated to plant modelling and applications, PMA). By combining physical and biological processes, spatially explicit models have shown their ability to help in understanding plant–environment interactions. This Special Issue on plant growth modelling presents new information within this topic, which is summarized in this preface. Research results for a variety of plant species growing in the field, in greenhouses and in natural environments are presented. Various models and simulation platforms are developed in this field of research, opening new features to a wider community of researchers and end users. New modelling technologies relating to the structure and function of plant shoots and root systems are explored from the cellular to the whole-plant and plant-community levels. PMID:21638797

  16. Semiparametric transformation models for semicompeting survival data.

    PubMed

    Lin, Huazhen; Zhou, Ling; Li, Chunhong; Li, Yi

    2014-09-01

    Semicompeting risk outcome data (e.g., time to disease progression and time to death) are commonly collected in clinical trials. However, analysis of these data is often hampered by a scarcity of available statistical tools. As such, we propose a novel semiparametric transformation model that improves the existing models in the following two ways. First, it estimates regression coefficients and association parameters simultaneously. Second, the measure of surrogacy, for example, the proportion of the treatment effect that is mediated by the surrogate and the ratio of the overall treatment effect on the true endpoint over that on the surrogate endpoint, can be directly obtained. We propose an estimation procedure for inference and show that the proposed estimator is consistent and asymptotically normal. Extensive simulations demonstrate the valid usage of our method. We apply the method to a multiple myeloma trial to study the impact of several biomarkers on patients' semicompeting outcomes--namely, time to progression and time to death. PMID:24749525
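
    For context, the generic form of a linear transformation model for the two semicompeting endpoints is sketched below; this is the standard family to which such models belong, not necessarily the exact specification of Lin et al., and the notation is introduced here for illustration only.

```latex
% Generic linear transformation model for the semicompeting endpoints
% T_1 (time to progression) and T_2 (time to death), with covariates X,
% unknown monotone increasing transformations H_k, regression coefficients
% beta_k, and errors epsilon_k from a specified distribution.
\begin{equation}
  H_k(T_k) = -\beta_k^{\top} X + \varepsilon_k , \qquad k = 1, 2 .
\end{equation}
% Extreme-value errors recover the proportional hazards model; standard
% logistic errors recover the proportional odds model.
```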

  17. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Shao, Howard M. (Inventor); Truong, Trieu-Kie (Inventor); Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor)

    1989-01-01

    Two pipeline (255,233) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and the transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps, prior to adding the received RS coded message to produce a decoded output message.
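
    As a reminder of where both decoders start, the sketch below computes Reed-Solomon syndromes over GF(2^8); the field polynomial, generator element, block length and data are illustrative assumptions, not the parameters of the patented pipeline.

```python
# Minimal GF(2^8) arithmetic and narrow-sense RS syndrome computation.
PRIM_POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1, a commonly used field polynomial

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^8) reduced modulo PRIM_POLY."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM_POLY
        b >>= 1
    return result

def gf_pow(a, n):
    result = 1
    for _ in range(n):
        result = gf_mul(result, a)
    return result

def syndromes(received, n_parity, alpha=0x02):
    """S_i = r(alpha^i) for i = 1..n_parity; all zero means no detected error."""
    synd = []
    for i in range(1, n_parity + 1):
        x = gf_pow(alpha, i)
        acc = 0
        for coeff in received:          # received[0] is the highest-degree term
            acc = gf_mul(acc, x) ^ coeff
        synd.append(acc)
    return synd

toy_block = [0x40, 0xD2, 0x75, 0x47, 0x76, 0x17, 0x32, 0x06]  # not a valid codeword
print(syndromes(toy_block, n_parity=4))
```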

  18. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    SciTech Connect

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  19. Building energy modeling for green architecture and intelligent dashboard applications

    NASA Astrophysics Data System (ADS)

    DeBlois, Justin

    Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied the roof solar chimney, a passive architectural technique for reducing the cooling load in homes. Three models of the chimney were created: a zonal building energy model, a computational fluid dynamics model, and a numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it for building simulation reliably. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high- and low-efficiency constructions and three ventilation strategies. The results showed that in four US climates, the roof solar chimney yields significant cooling load energy savings of up to 90%. After developing this new method for the small-scale representation of a passive architecture technique in BEM, the study expanded its scope to address a fundamental issue in modeling - the implementation of the uncertainty from and improvement of occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy-efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration into the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the

  20. Probabilistic logic modeling of network reliability for hybrid network architectures

    SciTech Connect

    Wyss, G.D.; Schriner, H.K.; Gaylor, T.R.

    1996-10-01

    Sandia National Laboratories has found that the reliability and failure modes of current-generation network technologies can be effectively modeled using fault tree-based probabilistic logic modeling (PLM) techniques. We have developed fault tree models that include various hierarchical networking technologies and classes of components interconnected in a wide variety of typical and atypical configurations. In this paper we discuss the types of results that can be obtained from PLMs and why these results are of great practical value to network designers and analysts. After providing some mathematical background, we describe the "plug-and-play" fault tree analysis methodology that we have developed for modeling connectivity and the provision of network services in several current-generation network architectures. Finally, we demonstrate the flexibility of the method by modeling the reliability of a hybrid example network that contains several interconnected ethernet, FDDI, and token ring segments. 11 refs., 3 figs., 1 tab.
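
    To illustrate the kind of computation a fault-tree PLM performs (a toy example with made-up probabilities, not Sandia's models), the sketch below evaluates a two-level fault tree for loss of connectivity in a small hybrid network.

```python
def and_gate(*probabilities):
    """Failure probability of an AND gate (all independent inputs must fail)."""
    p = 1.0
    for q in probabilities:
        p *= q
    return p

def or_gate(*probabilities):
    """Failure probability of an OR gate (any independent input failing suffices)."""
    p_ok = 1.0
    for q in probabilities:
        p_ok *= (1.0 - q)
    return 1.0 - p_ok

# Hypothetical annual failure probabilities for the components.
p_fddi_ring = 0.02
p_ethernet_backup = 0.05
p_access_switch = 0.01

# Top event: loss of connectivity = (both backbones fail) OR (access switch fails).
p_backbone_loss = and_gate(p_fddi_ring, p_ethernet_backup)
p_top = or_gate(p_backbone_loss, p_access_switch)
print(f"P(loss of connectivity) = {p_top:.4f}")
```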

  1. Crystal Level Continuum Modeling of Phase Transformations: The α ↔ ε Transformation in Iron

    SciTech Connect

    Barton, N R; Benson, D J; Becker, R; Bykov, Y; Caplan, M

    2004-10-18

    We present a crystal level model for thermo-mechanical deformation with phase transformation capabilities. The model is formulated to allow for large pressures (on the order of the elastic moduli) and makes use of a multiplicative decomposition of the deformation gradient. Elastic and thermal lattice distortions are combined into a single lattice stretch to allow the model to be used in conjunction with general equation of state relationships. Phase transformations change the mass fractions of the material constituents. The driving force for phase transformations includes terms arising from mechanical work, from the temperature dependent chemical free energy change on transformation, and from interaction energy among the constituents. Deformation results from both these phase transformations and elasto-viscoplastic deformation of the constituents themselves. Simulation results are given for the α to ε phase transformation in iron. Results include simulations of shock induced transformation in single crystals and of compression of polycrystals. Results are compared to available experimental data.

  2. Architecture in motion: A model for music composition

    NASA Astrophysics Data System (ADS)

    Variego, Jorge Elias

    2011-12-01

    Speculations regarding the relationship between music and architecture go back to the very origins of these disciplines. Throughout history, these links have always reaffirmed that music and architecture are analogous art forms that only diverge in their object of study. In the 1st c. BCE Vitruvius conceived Architecture as "one of the most inclusive and universal human activities" where the architect should be educated in all the arts, having a vast knowledge in history, music and philosophy. In the 18th c., the German thinker Johann Wolfgang von Goethe described Architecture as "frozen music". More recently, in the 20th c., Iannis Xenakis studied the similar structuring principles between Music and Architecture, creating his own "models" of musical composition based on mathematical principles and geometric constructions. The goal of this document is to propose a compositional method that will function as a translator between the acoustical properties of a room and music, to facilitate the creation of musical works that will not only happen within an enclosed space but will also intentionally interact with the space. Acoustical measurements of rooms such as reverberation time, frequency response and volume will be measured and systematically organized in correspondence with orchestrational parameters. The musical compositions created after the proposed model are evocative of the spaces on which they are based. They are meant to be performed in any space, not exclusively in the one where the acoustical measurements were obtained. The visual component of architectural design is disregarded; the room is considered a musical instrument, with its particular sound qualities and resonances. Compositions using the proposed model will not result in sonified shapes; they will be musical works literally "tuned" to a specific space. This Architecture in motion is an attempt to adopt scientific research to the service of a creative activity and to let the aural properties of

  3. A Functional Model of Sensemaking in a Neurocognitive Architecture

    PubMed Central

    Lebiere, Christian; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R.

    2013-01-01

    Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment. PMID:24302930

  4. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.
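
    A minimal sketch of the static Monte Carlo step described above, with hypothetical event names and probabilities; the actual IMM/dPRA treatment of time lines, event queues and schedulers is far richer than this.

```python
import random

def simulate_missions(p_events, n_trials=100_000, seed=1):
    """Toy static PRA: estimate P(at least one medical event occurs per mission)."""
    rng = random.Random(seed)
    missions_with_event = 0
    for _ in range(n_trials):
        if any(rng.random() < p for p in p_events.values()):
            missions_with_event += 1
    return missions_with_event / n_trials

# Hypothetical per-mission event probabilities (illustrative only).
p_events = {"dental emergency": 0.01, "kidney stone": 0.002, "minor injury": 0.05}
print(f"P(any event) ~= {simulate_missions(p_events):.3f}")
```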

  5. Building Structure Design as an Integral Part of Architecture: A Teaching Model for Students of Architecture

    ERIC Educational Resources Information Center

    Unay, Ali Ihsan; Ozmen, Cengiz

    2006-01-01

    This paper explores the place of structural design within undergraduate architectural education. The role and format of lecture-based structure courses within an education system, organized around the architectural design studio is discussed with its most prominent problems and proposed solutions. The fundamental concept of the current teaching…

  6. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    NASA Astrophysics Data System (ADS)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
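
    The Roofline correlation mentioned above can be reproduced in miniature: attainable performance is the lesser of the compute roof and the bandwidth roof scaled by arithmetic intensity. The peak figures below are stand-ins, not measurements from the paper.

```python
def roofline_gflops(arithmetic_intensity, peak_gflops, peak_bandwidth_gbs):
    """Attainable performance under the Roofline model.

    arithmetic_intensity: flops performed per byte moved from memory.
    Returns min(compute roof, bandwidth roof * intensity) in GFLOP/s.
    """
    return min(peak_gflops, peak_bandwidth_gbs * arithmetic_intensity)

# Hypothetical machine balance for a two-socket multicore node.
peak_gflops = 300.0   # double-precision compute peak
peak_bw = 80.0        # sustained memory bandwidth in GB/s
for ai in (0.25, 1.0, 4.0, 8.0):   # e.g. a variable update vs. a flux kernel
    print(f"AI={ai:>4}: attainable {roofline_gflops(ai, peak_gflops, peak_bw):.0f} GFLOP/s")
```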

  7. The Application of Architecture Frameworks to Modelling Exploration Operations Costs

    NASA Technical Reports Server (NTRS)

    Shishko, Robert

    2006-01-01

    Developments in architectural frameworks and system-of-systems thinking have provided useful constructs for systems engineering. DoDAF concepts, language, and formalisms, in particular, provide a natural way of conceptualizing an operations cost model applicable to NASA's space exploration vision. Not all DoDAF products have meaning or apply to a DoDAF inspired operations cost model, but this paper describes how such DoDAF concepts as nodes, systems, and operational activities relate to the development of a model to estimate exploration operations costs. The paper discusses the specific implementation to the Mission Operations Directorate (MOD) operational functions/activities currently being developed and presents an overview of how this powerful representation can apply to robotic space missions as well.

  8. Polygonal Shapes Detection in 3d Models of Complex Architectures

    NASA Astrophysics Data System (ADS)

    Benciolini, G. B.; Vitti, A.

    2015-02-01

    A sequential application of two global models defined on a variational framework is proposed for the detection of polygonal shapes in 3D models of complex architectures. As a first step, the procedure involves the use of the Mumford and Shah (1989) 1st-order variational model in dimension two (gridded height data are processed). In the Mumford-Shah model an auxiliary function detects the sharp changes, i.e., the discontinuities, of a piecewise smooth approximation of the data. The Mumford-Shah model requires the global minimization of a specific functional to simultaneously produce both the smooth approximation and its discontinuities. In the proposed procedure, the edges of the smooth approximation, derived by a specific processing of the auxiliary function, are then processed using the Blake and Zisserman (1987) 2nd-order variational model in dimension one (edges are processed in the plane). This second step makes it possible to describe the edges of an object by means of a piecewise almost-linear approximation of the input edges themselves and to detect sharp changes in the first derivative of the edges, so as to detect corners. The Mumford-Shah variational model is used in two dimensions, accepting the original data as primary input. The Blake-Zisserman variational model is used in one dimension for the refinement of the description of the edges. The selection, among all the boundaries detected by the Mumford-Shah model, of those that present a shape close to a polygon is performed by considering only those boundaries for which the Blake-Zisserman model identified discontinuities in their first derivative. The output of the procedure is hence a set of shapes, coming from 3D geometric data, that can be considered as polygons. The application of the procedure is suitable for, but not limited to, the detection of objects such as footprints of polygonal buildings, building facade boundaries or window contours. The procedure is applied to a height model of the building of the Engineering
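
    A greatly simplified stand-in for the edge-refinement step (not the Blake-Zisserman functional itself) is a recursive split of each boundary into almost-linear pieces, with the breakpoints acting as candidate corners; the tolerance and the sample edge below are assumptions.

```python
import numpy as np

def split_into_segments(edge, tol=0.05):
    """Piecewise almost-linear approximation of an ordered 2D edge:
    recursively split at the point farthest from the chord until every
    point lies within `tol` of a segment; breakpoints are candidate corners."""
    def farthest(i, j):
        p, q = edge[i], edge[j]
        chord = q - p
        length = np.linalg.norm(chord)
        diffs = edge[i:j + 1] - p
        # Perpendicular distance of each point from the chord.
        dist = np.abs(chord[0] * diffs[:, 1] - chord[1] * diffs[:, 0]) / max(length, 1e-12)
        k = int(np.argmax(dist))
        return i + k, dist[k]

    breaks = {0, len(edge) - 1}
    stack = [(0, len(edge) - 1)]
    while stack:
        i, j = stack.pop()
        if j - i < 2:
            continue
        k, d = farthest(i, j)
        if d > tol:
            breaks.add(k)
            stack.extend([(i, k), (k, j)])
    return sorted(breaks)

# Hypothetical L-shaped facade boundary with a little noise.
edge = np.r_[np.c_[np.linspace(0, 4, 40), np.zeros(40)],
             np.c_[np.full(40, 4.0), np.linspace(0, 3, 40)]]
edge += np.random.normal(0, 0.005, edge.shape)
print(split_into_segments(edge))
```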

  9. 3D model tools for architecture and archaeology reconstruction

    NASA Astrophysics Data System (ADS)

    Vlad, Ioan; Herban, Ioan Sorin; Stoian, Mircea; Vilceanu, Clara-Beatrice

    2016-06-01

    The main objective of architectural and patrimonial survey is to provide a precise documentation of the status quo of the surveyed objects (monuments, buildings, archaeological object and sites) for preservation and protection, for scientific studies and restoration purposes, for the presentation to the general public. Cultural heritage documentation includes an interdisciplinary approach having as purpose an overall understanding of the object itself and an integration of the information which characterize it. The accuracy and the precision of the model are directly influenced by the quality of the measurements realized on field and by the quality of the software. The software is in the process of continuous development, which brings many improvements. On the other side, compared to aerial photogrammetry, close range photogrammetry and particularly architectural photogrammetry is not limited to vertical photographs with special cameras. The methodology of terrestrial photogrammetry has changed significantly and various photographic acquisitions are widely in use. In this context, the present paper brings forward a comparative study of TLS (Terrestrial Laser Scanner) and digital photogrammetry for 3D modeling. The authors take into account the accuracy of the 3D models obtained, the overall costs involved for each technology and method and the 4th dimension - time. The paper proves its applicability as photogrammetric technologies are nowadays used at a large scale for obtaining the 3D model of cultural heritage objects, efficacious in their assessment and monitoring, thus contributing to historic conservation. Its importance also lies in highlighting the advantages and disadvantages of each method used - very important issue for both the industrial and scientific segment when facing decisions such as in which technology to invest more research and funds.

  10. Simple example of an SADMT (SDI (Strategic Defense Initiative) Architecture Dataflow Modeling Technique) architecture specification. Version 1.5. Final report

    SciTech Connect

    Linn, C.J.; Linn, J.L.; Edwards, S.H.; Kappel, M.R.; Ardoin, C.D.

    1988-04-21

    This report presents a simple architecture specification in the SDI Architecture Dataflow Modeling Technique (SADMT). The example code is given in the SADMT Generator (SAGEN) Language. This simple architecture includes (1) an informal description of the architecture, (2) the main program that creates the components of the simulation, (3) the specification of the BM/C3 logical processes of the architecture, (4) the specification of the Technology Modules (TMs) of the architecture, and (5) the specification of the Battle Management/Command, Control and Communications (BM/C3) and TMs of the threat.

  11. ARPENTEUR: a web-based photogrammetry tool for architectural modeling

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, Pierre; Drap, Pierre

    2000-12-01

    ARPENTEUR is a web application for digital photogrammetry mainly dedicated to architecture. ARPENTEUR has been developed since 1998 by two French research teams: the 'Photogrammetry and Geomatics' group of ENSAIS-LERGEC's laboratory and the MAP-gamsau CNRS laboratory located in the school of Architecture of Marseille. The software package is a web-based tool since photogrammetric concepts are embedded in Web technology and the Java programming language. The aim of this project is to propose a photogrammetric software package and 3D modeling methods available on the Internet as applets through a simple browser. The use of Java and the Web platform is full of advantages. Distributing software on any platform, at any place connected to the Internet, is of course very promising. The updating is done directly on the server and the user always works with the latest release installed on the server. Three years ago the first prototype of ARPENTEUR was based on the Java Development Kit, at the time only available for some browsers. Nowadays, we are working with the JDK 1.3 plug-in enriched by the Java Advanced Imaging library.

  12. Optimization of Forward Wave Modeling on Contemporary HPC Architectures

    SciTech Connect

    Krueger, Jens; Micikevicius, Paulius; Williams, Samuel

    2012-07-20

    Reverse Time Migration (RTM) is one of the main approaches in the seismic processing industry for imaging the subsurface structure of the Earth. While RTM provides qualitative advantages over its predecessors, it has a high computational cost warranting implementation on HPC architectures. We focus on three progressively more complex kernels extracted from RTM: for isotropic (ISO), vertical transverse isotropic (VTI) and tilted transverse isotropic (TTI) media. In this work, we examine performance optimization of forward wave modeling, which describes the computational kernels used in RTM, on emerging multi- and manycore processors and introduce a novel common subexpression elimination optimization for TTI kernels. We compare attained performance and energy efficiency in both the single-node and distributed memory environments in order to satisfy industry’s demands for fidelity, performance, and energy efficiency. Moreover, we discuss the interplay between architecture (chip and system) and optimizations (both on-node computation) highlighting the importance of NUMA-aware approaches to MPI communication. Ultimately, our results show we can improve CPU energy efficiency by more than 10× on Magny Cours nodes while acceleration via multiple GPUs can surpass the energy-efficient Intel Sandy Bridge by as much as 3.6×.

  13. Optical image encryption based on joint fractional transform correlator architecture and digital holography

    NASA Astrophysics Data System (ADS)

    Wang, Qu; Guo, Qing; Lei, Liang; Zhou, Jinyun

    2013-04-01

    We present a hybrid configuration of a joint transform correlator (JTC) and a joint fractional transform correlator (JFTC) for encryption purposes. The original input is encoded in the joint fractional power spectrum distribution of the JFTC. In our experimental arrangement, an additional random phase mask (master key) is holographically generated beforehand by a Mach-Zehnder interferometer with a JTC as the object arm. The fractional order of the JFTC, together with the master key, can remarkably strengthen the safety level of the encryption. Different from many previous digital-holography-based encryption schemes, the stability and alignment requirements for our system are not high, since the interferometric operation is only performed in the generation procedure of the master key. The advantages and feasibility of the proposed scheme have been verified by the experimental results. By combining the system with a multiplexing technique, an application for multiple-image encryption is also described in detail.

  14. Java Architecture for Detect and Avoid Extensibility and Modeling

    NASA Technical Reports Server (NTRS)

    Santiago, Confesor; Mueller, Eric Richard; Johnson, Marcus A.; Abramson, Michael; Snow, James William

    2015-01-01

    Unmanned aircraft will be equipped with a detect-and-avoid (DAA) system that enables them to comply with the requirement to "see and avoid" other aircraft, an important layer in the overall set of procedural, strategic and tactical separation methods designed to prevent mid-air collisions. This paper describes a capability called Java Architecture for Detect and Avoid Extensibility and Modeling (JADEM), developed to prototype and help evaluate various DAA technological requirements by providing a flexible and extensible software platform that models all major detect-and-avoid functions. Figure 1 illustrates JADEM's architecture. The surveillance module can be actual equipment on the unmanned aircraft or simulators that model the process by which on-board sensors detect other aircraft and provide track data to the traffic display. The track evaluation function evaluates each detected aircraft and decides whether to provide an alert to the pilot and its severity. Guidance is a combination of intruder track information, alerting, and avoidance/advisory algorithms behind the tools shown on the traffic display to aid the pilot in determining a maneuver to avoid a loss of well clear. All these functions are designed with a common interface and configurable implementation, which is critical in exploring DAA requirements. To date, JADEM has been utilized in three computer simulations of the National Airspace System, three pilot-in-the-loop experiments using a total of 37 professional UAS pilots, and two flight tests using NASA's Predator-B unmanned aircraft, named Ikhana. The data collected has directly informed the quantitative separation standard for "well clear", the safety case, requirements development, and the operational environment for the DAA minimum operational performance standards. This work was performed by the Separation Assurance/Sense and Avoid Interoperability team under NASA's UAS Integration in the NAS project.

  15. An ontological model of the practice transformation process.

    PubMed

    Sen, Arun; Sinha, Atish P

    2016-06-01

    Patient-centered medical home is defined as an approach for providing comprehensive primary care that facilitates partnerships between individual patients and their personal providers. The current state of the practice transformation process is ad hoc and no methodological basis exists for transforming a practice into a patient-centered medical home. Practices and hospitals somehow accomplish the transformation and send the transformation information to a certification agency, such as the National Committee for Quality Assurance, completely ignoring the development and maintenance of the processes that keep the medical home concept alive. Many recent studies point out that such a transformation is hard as it requires an ambitious whole-practice reengineering and redesign. As a result, the practices suffer change fatigue in getting the transformation done. In this paper, we focus on the complexities of the practice transformation process and present a robust ontological model for practice transformation. The objective of the model is to create an understanding of the practice transformation process in terms of key process areas and their activities. We describe how our ontology captures the knowledge of the practice transformation process, elicited from domain experts, and also discuss how, in the future, that knowledge could be diffused across stakeholders in a healthcare organization. Our research is the first effort in practice transformation process modeling. To build an ontological model for practice transformation, we adopt the Methontology approach. Based on the literature, we first identify the key process areas essential for a practice transformation process to achieve certification status. Next, we develop the practice transformation ontology by creating key activities and precedence relationships among the key process areas using process maturity concepts. At each step, we employ a panel of domain experts to verify the intermediate representations of the

  16. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models with digital photographs using software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. Procedures for automatic texture mapping of 3D models are therefore in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures via web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and is developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  17. Fortran Transformational Tools in Support of Scientific Application Development for Petascale Computer Architectures

    SciTech Connect

    Sottille, Matthew

    2013-09-12

    This document is the final report for a multi-year effort building infrastructure to support tool development for Fortran programs. We also investigated static analysis and code transformation methods relevant to scientific programmers who are writing Fortran programs for petascale-class high performance computing systems. This report details our accomplishments, technical approaches, and provides information on where the research results and code may be obtained from an open source software repository. The report for the first year of the project that was performed at the University of Oregon prior to the PI moving to Galois, Inc. is included as an appendix.

  18. Phase transformations in a model mesenchymal tissue

    NASA Astrophysics Data System (ADS)

    Newman, Stuart A.; Forgacs, Gabor; Hinner, Bernhard; Maier, Christian W.; Sackmann, Erich

    2004-06-01

    Connective tissues, the most abundant tissue type of the mature mammalian body, consist of cells suspended in complex microenvironments known as extracellular matrices (ECMs). In the immature connective tissues (mesenchymes) encountered in developmental biology and tissue engineering applications, the ECMs contain varying amounts of randomly arranged fibers, and the physical state of the ECM changes as the fibers secreted by the cells undergo fibril and fiber assembly and organize into networks. In vitro composites consisting of assembling solutions of type I collagen, containing suspended polystyrene latex beads (~6 µm in diameter) with collagen-binding surface properties, provide a simplified model for certain physical aspects of developing mesenchymes. In particular, assembly-dependent topological (i.e., connectivity) transitions within the ECM could change a tissue from one in which cell-sized particles (e.g., latex beads or cells) are mechanically unlinked to one in which the particles are part of a mechanical continuum. Any particle-induced alterations in fiber organization would imply that cells could similarly establish physically distinct microdomains within tissues. Here we show that the presence of beads above a critical number density accelerates the sol-gel transition that takes place during the assembly of collagen into a globally interconnected network of fibers. The presence of this suprathreshold number of beads also dramatically changes the viscoelastic properties of the collagen matrix, but only when the initial concentration of soluble collagen is itself above a critical value. Our studies provide a starting point for the analysis of phase transformations of more complex biomaterials including developing and healing tissues as well as tissue substitutes containing living cells.

  19. A high frequency transformer model for the EMTP

    SciTech Connect

    Morched, A.; Marti, L.; Ottevangers, J. )

    1993-07-01

    A model to simulate the high frequency behavior of a power transformer is presented. This model is based on the frequency characteristics of the transformer admittance matrix between its terminals over a given range of frequencies. The transformer admittance characteristics can be obtained from measurements or from detailed internal models based on the physical layout of the transformer. The elements of the nodal admittance matrix are approximated with rational functions consisting of real as well as complex conjugate poles and zeros. These approximations are realized in the form of an RLC network in a format suitable for direct use with EMTP. The high frequency transformer model can be used as a stand-alone linear model or as an add-on module of a more comprehensive model where iron core nonlinearities are represented in detail.

  20. An avionics scenario and command model description for Space Generic Open Avionics Architecture (SGOAA)

    NASA Technical Reports Server (NTRS)

    Stovall, John R.; Wray, Richard B.

    1994-01-01

    This paper presents a description of a model for a space vehicle operational scenario and the commands for avionics. This model will be used in developing a dynamic architecture simulation model using the Statemate CASE tool for validation of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA has been proposed as an avionics architecture standard to NASA through its Strategic Avionics Technology Working Group (SATWG) and has been accepted by the Society of Automotive Engineers (SAE) for conversion into an SAE Avionics Standard. This architecture was developed for the Flight Data Systems Division (FDSD) of the NASA Johnson Space Center (JSC) by the Lockheed Engineering and Sciences Company (LESC), Houston, Texas. This SGOAA includes a generic system architecture for the entities in spacecraft avionics, a generic processing external and internal hardware architecture, and a nine class model of interfaces. The SGOAA is both scalable and recursive and can be applied to any hierarchical level of hardware/software processing systems.

  1. A discrete dislocation transformation model for austenitic single crystals

    NASA Astrophysics Data System (ADS)

    Shi, J.; Turteltaub, S.; Van der Giessen, E.; Remmers, J. J. C.

    2008-07-01

    A discrete model for analyzing the interaction between plastic flow and martensitic phase transformations is developed. The model is intended for simulating the microstructure evolution in a single crystal of austenite that transforms non-homogeneously into martensite. The plastic flow in the untransformed austenite is simulated using a plane-strain discrete dislocation model. The phase transformation is modeled via the nucleation and growth of discrete martensitic regions embedded in the austenitic single crystal. At each instant during loading, the coupled elasto-plasto-transformation problem is solved using the superposition of analytical solutions for the discrete dislocations and discrete transformation regions embedded in an infinite homogeneous medium and the numerical solution of a complementary problem used to enforce the actual boundary conditions and the heterogeneities in the medium. In order to describe the nucleation and growth of martensitic regions, a nucleation criterion and a kinetic law suitable for discrete regions are specified. The constitutive rules used in discrete dislocation simulations are supplemented with additional evolution rules to account for the phase transformation. To illustrate the basic features of the model, simulations of specimens under plane-strain uniaxial extension and contraction are analyzed. The simulations indicate that plastic flow reduces the average stress at which transformation begins, but it also reduces the transformation rate when compared with benchmark simulations without plasticity. Furthermore, due to local stress fluctuations caused by dislocations, martensitic systems can be activated even though transformation would not appear to be favorable based on the average stress. Conversely, the simulations indicate that the plastic hardening behavior is influenced by the reduction in the effective austenitic grain size due to the evolution of transformation. During cyclic simulations, the coupled plasticity-transformation

  2. Developing a scalable modeling architecture for studying survivability technologies

    NASA Astrophysics Data System (ADS)

    Mohammad, Syed; Bounker, Paul; Mason, James; Brister, Jason; Shady, Dan; Tucker, David

    2006-05-01

    To facilitate interoperability of models in a scalable environment, and provide a relevant virtual environment in which Survivability technologies can be evaluated, the US Army Research Development and Engineering Command (RDECOM) Modeling Architecture for Technology Research and Experimentation (MATREX) Science and Technology Objective (STO) program has initiated the Survivability Thread which will seek to address some of the many technical and programmatic challenges associated with the effort. In coordination with different Thread customers, such as the Survivability branches of various Army labs, a collaborative group has been formed to define the requirements for the simulation environment that would in turn provide them a value-added tool for assessing models and gauge system-level performance relevant to Future Combat Systems (FCS) and the Survivability requirements of other burgeoning programs. An initial set of customer requirements has been generated in coordination with the RDECOM Survivability IPT lead, through the Survivability Technology Area at RDECOM Tank-automotive Research Development and Engineering Center (TARDEC, Warren, MI). The results of this project are aimed at a culminating experiment and demonstration scheduled for September, 2006, which will include a multitude of components from within RDECOM and provide the framework for future experiments to support Survivability research. This paper details the components with which the MATREX Survivability Thread was created and executed, and provides insight into the capabilities currently demanded by the Survivability faculty within RDECOM.

  3. An 8×8/4×4 Adaptive Hadamard Transform Based FME VLSI Architecture for 4K×2K H.264/AVC Encoder

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Liu, Jialiang; Zhang, Dexue; Zeng, Xiaoyang; Chen, Xinhua

    Fidelity Range Extension (FRExt) (i.e. High Profile) was added to the H.264/AVC recommendation in the second version. One of the features included in FRExt is the Adaptive Block-size Transform (ABT). In order to conform to the FRExt, a Fractional Motion Estimation (FME) architecture is proposed to support the 8×8/4×4 adaptive Hadamard Transform (8×8/4×4 AHT). The 8×8/4×4 AHT circuit contributes to higher throughput and encoding performance. In order to increase the utilization of SATD (Sum of Absolute Transformed Difference) Generator (SG) in unit time, the proposed architecture employs two 8-pel interpolators (IP) to time-share one SG. These two IPs can work in turn to provide the available data continuously to the SG, which increases the data throughput and significantly reduces the cycles that are needed to process one Macroblock. Furthermore, this architecture also exploits the linear feature of Hadamard Transform to generate the quarter-pel SATD. This method could help to shorten the long datapath in the second-step of two-iteration FME algorithm. Finally, experimental results show that this architecture could be used in the applications requiring different performances by adjusting the supported modes and operation frequency. It can support the real-time encoding of the seven-mode 4K×2K@24fps or six-mode 4K×2K@30fps video sequences.
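
    For reference, the quantity an SATD generator produces can be written in a few lines of Python (a software sketch of the metric only, not the VLSI datapath; H.264 implementations usually also apply a normalization factor, and the sample block is made up).

```python
import numpy as np

# 4x4 Hadamard matrix used for the transformed-difference cost.
H4 = np.array([[1,  1,  1,  1],
               [1,  1, -1, -1],
               [1, -1, -1,  1],
               [1, -1,  1, -1]])

def satd_4x4(original, predicted):
    """Sum of Absolute Transformed Differences for one 4x4 block.

    Applies the 2D Hadamard transform to the residual and sums the
    magnitudes of the coefficients, as an FME cost metric does.
    """
    residual = original.astype(int) - predicted.astype(int)
    coeffs = H4 @ residual @ H4.T
    return int(np.abs(coeffs).sum())

# Hypothetical 4x4 luma block and its motion-compensated prediction.
orig = np.array([[52, 55, 61, 66], [70, 61, 64, 73],
                 [63, 59, 55, 90], [67, 61, 68, 104]])
pred = np.full((4, 4), 64)
print("SATD =", satd_4x4(orig, pred))
```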

  4. Policy improvement by a model-free Dyna architecture.

    PubMed

    Hwang, Kao-Shing; Lo, Chia-Yue

    2013-05-01

    The objective of this paper is to accelerate the process of policy improvement in reinforcement learning. The proposed Dyna-style system combines two learning schemes, one of which utilizes a temporal difference method for direct learning; the other uses relative values for indirect learning in planning between two successive direct learning cycles. Instead of establishing a complicated world model, the approach introduces a simple predictor of average rewards to the actor-critic architecture in the simulation (planning) mode. The relative value of a state, defined as the accumulated difference between immediate reward and average reward, is used to steer the improvement process in the right direction. The proposed learning scheme is applied to control a pendulum system for tracking a desired trajectory to demonstrate its adaptability and robustness. Through reinforcement signals from the environment, the system takes the appropriate action to drive an unknown dynamic system to track desired outputs in a few learning cycles. Comparisons are made between the proposed model-free method, a connectionist adaptive heuristic critic, and an advanced method of Dyna-Q learning in experiments on labyrinth exploration. The proposed method outperforms its counterparts in terms of elapsed time and convergence rate. PMID:24808427
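
    For contrast with the model-free variant proposed in the paper, here is a compact sketch of the textbook Dyna-Q baseline it is compared against; the toy corridor environment and all hyperparameters are assumptions made for illustration.

```python
import random

def dyna_q(env_step, n_states, n_actions, episodes=200, planning_steps=10,
           alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Textbook Dyna-Q: direct temporal-difference updates plus replayed
    updates ("planning") drawn from a learned deterministic model."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}  # (state, action) -> (reward, next_state, done)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            best = max(Q[state])
            greedy = [a for a in range(n_actions) if Q[state][a] == best]
            action = (rng.randrange(n_actions) if rng.random() < epsilon
                      else rng.choice(greedy))
            reward, next_state, done = env_step(state, action)
            target = reward + (0.0 if done else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (target - Q[state][action])  # direct RL
            model[(state, action)] = (reward, next_state, done)
            for _ in range(planning_steps):  # indirect learning from the model
                (s, a), (r, s2, d) = rng.choice(list(model.items()))
                t = r + (0.0 if d else gamma * max(Q[s2]))
                Q[s][a] += alpha * (t - Q[s][a])
            state = next_state
    return Q

def corridor_step(state, action, length=6):
    """Toy corridor: action 1 moves right, 0 moves left; reward 1 at the end."""
    next_state = min(max(state + (1 if action == 1 else -1), 0), length - 1)
    done = next_state == length - 1
    return (1.0 if done else 0.0), next_state, done

Q = dyna_q(corridor_step, n_states=6, n_actions=2)
print([max(range(2), key=lambda a: Q[s][a]) for s in range(6)])
```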

  5. Modeling of martensitic transformation in adaptive composites

    NASA Astrophysics Data System (ADS)

    Slutsker, J.; Artemev, A.; Roitburd, A. L.

    2003-10-01

    The formation of elastic domains in transforming constrained films is a mechanism of relaxation of internal stresses caused by the misfit between a film and a substrate. The formation and evolution of a polydomain microstructure as a result of the cubic-tetragonal transformation in a constrained layer are investigated by phase-field simulation. It has been shown that a three-domain hierarchical structure can be formed in epitaxial films. As the fraction of the out-of-plane domain changes, there are two types of morphological transitions: from the three-domain structure to the two-domain one, and from the hierarchical three-domain structure to the cellular three-domain structure. The results of the phase-field simulation are compared with available experimental data.

  6. A Type-Theoretic Framework for Certified Model Transformations

    NASA Astrophysics Data System (ADS)

    Calegari, Daniel; Luna, Carlos; Szasz, Nora; Tasistro, Álvaro

    We present a framework based on the Calculus of Inductive Constructions (CIC) and its associated tool, the Coq proof assistant, to allow certification of model transformations in the context of Model-Driven Engineering (MDE). The approach is based on a semi-automatic translation process from metamodels, models and transformations of the MDE technical space into types, propositions and functions of the CIC technical space. We describe this translation and illustrate its use in a standard case study.

  7. Typical Phases of Transformative Learning: A Practice-Based Model

    ERIC Educational Resources Information Center

    Nohl, Arnd-Michael

    2015-01-01

    Empirical models of transformative learning offer important insights into the core characteristics of this concept. Whereas previous analyses were limited to specific social groups or topical terrains, this article empirically typifies the phases of transformative learning on the basis of a comparative analysis of various social groups and topical…

  8. A Multiperspectival Conceptual Model of Transformative Meaning Making

    ERIC Educational Resources Information Center

    Freed, Maxine

    2009-01-01

    Meaning making is central to transformative learning, but little work has explored how meaning is constructed in the process. Moreover, no meaning-making theory adequately captures its characteristics and operations during radical transformation. The purpose of this dissertation was to formulate and specify a multiperspectival conceptual model of…

  9. Organoids as Models for Neoplastic Transformation | Office of Cancer Genomics

    Cancer.gov

    Cancer models strive to recapitulate the incredible diversity inherent in human tumors. A key challenge in accurate tumor modeling lies in capturing the panoply of homo- and heterotypic cellular interactions within the context of a three-dimensional tissue microenvironment. To address this challenge, researchers have developed organotypic cancer models (organoids) that combine the 3D architecture of in vivo tissues with the experimental facility of 2D cell lines.

  10. Plum (Prunus domestica) trees transformed with poplar FT1 result in altered architecture, dormancy requirement, and continuous flowering.

    PubMed

    Srinivasan, Chinnathambi; Dardick, Chris; Callahan, Ann; Scorza, Ralph

    2012-01-01

    The Flowering Locus T1 (FT1) gene from Populus trichocarpa under the control of the 35S promoter was transformed into European plum (Prunus domestica L). Transgenic plants expressing higher levels of FT flowered and produced fruits in the greenhouse within 1 to 10 months. FT plums did not enter dormancy after cold or short-day treatments, yet field-planted FT plums remained winter hardy down to at least -10°C. The plants also displayed pleiotropic phenotypes atypical for plum, including a shrub-type growth habit and panicle flower architecture. The flowering and fruiting phenotype was found to be continuous in the greenhouse but limited to spring and fall in the field. The pattern of flowering in the field correlated with lower daily temperatures. This apparent temperature effect was subsequently confirmed in growth chamber studies. The pleiotropic phenotypes associated with FT1 expression in plum suggest a fundamental role of this gene in plant growth and development. This study demonstrates the potential for a single transgene event to markedly affect the vegetative and reproductive growth and development of an economically important temperate woody perennial crop. We suggest that FT1 may be a useful tool to adapt temperate plants to changing climates and/or to new growing areas. PMID:22859952

  11. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 10) Background: Titan Model-based Executive; 11) Model-based Execution Architecture; 12) Compatibility Analysis of MDS and Titan Architectures; 13) Integrating Model-based Programming and Execution into the Architecture; 14) State Analysis and Modeling; 15) IMU Subsystem State Effects Diagram; 16) Titan Subsystem Model: IMU Health; 17) Integrating Model-based Programming and Execution into the Software IMU; 18) Testing Program; 19) Computationally Tractable State Estimation & Fault Diagnosis; 20) Diagnostic Algorithm Performance; 21) Integration and Test Issues; 22) Demonstrated Benefits; and 23) Next Steps

  12. Strategic Defense Initiative Architecture Dataflow Modeling Technique, Version 1. 5. Final report

    SciTech Connect

    Linn, J.L.; Ardoin, C.D.; Linn, C.J.; Edwards, S.E.; Kappel, M.R.

    1988-04-22

    This report presents the SDI Architecture Dataflow Modeling Technique (SADMT), a uniform formal notation for the description of SDI system architectures and Battle Management and Command, Control, and Communications (BM/C3) architectures. SADMT is a technique for thinking about and describing architectural processes and structures that use the typing and functional facilities of the Ada programming language. This document defines SADMT and the programming interface to the SADMT Simulation Facility (SADMT/SF). The issues addressed here are those relevant to providing formal descriptions of system structure and behavior for interface consistency checking, system simulation, and system evaluation.

  13. IDENTIFICATION AND EVALUATION OF FUNDAMENTAL TRANSPORT AND TRANSFORMATION PROCESS MODELS

    EPA Science Inventory

    Chemical fate models require explicit algorithms for computing the effects of transformation and transport processes on the spatial and temporal distribution of chemical concentrations. Transport processes in aquatic systems are driven by physical characteristics on the system an...
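
    A toy instance of such an algorithm combines one transport process (upwind advection) with one transformation process (first-order decay) in a 1D reach; the grid, rate constant, flow velocity and initial condition below are made-up values for illustration.

```python
import numpy as np

def advect_decay(c0, velocity, k_decay, dx, dt, n_steps):
    """Toy chemical fate model: 1D upwind advection plus first-order
    transformation (decay). Assumes velocity >= 0 and the CFL condition
    velocity * dt / dx <= 1."""
    c = np.array(c0, dtype=float)
    courant = velocity * dt / dx
    for _ in range(n_steps):
        upstream = np.roll(c, 1)
        upstream[0] = 0.0                 # clean water entering the reach
        c = c - courant * (c - upstream)  # upwind advection
        c *= np.exp(-k_decay * dt)        # first-order transformation
    return c

# Hypothetical spill: 100 mg/L in the first cell of a 10-cell reach.
profile = advect_decay([100] + [0] * 9, velocity=0.2, k_decay=1e-4,
                       dx=100.0, dt=400.0, n_steps=20)
print(np.round(profile, 2))
```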

  14. The comparison study among several data transformations in autoregressive modeling

    NASA Astrophysics Data System (ADS)

    Setiyowati, Susi; Waluyo, Ramdhani Try

    2015-12-01

    In finance, the adjusted close prices of stocks are used to observe the performance of a company. Extreme prices, which may increase or decrease drastically, are often of particular concern since they can lead to bankruptcy. As a preventive action, investors have to forecast future stock prices comprehensively. For that purpose, time series analysis is one of the statistical methods that can be implemented, for both stationary and non-stationary processes. Since the variability of stock prices tends to be large and extreme values frequently exist, it is necessary to transform the data so that time series models, i.e. autoregressive models, can be applied appropriately. One popular data transformation in finance is the return model, in addition to the logarithm ratio and other Tukey ladder transformations. In this paper these transformations are applied to stationary AR models and non-stationary ARCH and GARCH models through simulations with varying parameters. As a result, this work presents a suggestion table that shows the behavior of the transformations under various parameter and model conditions. It is confirmed that the best transformation depends on the type of data distribution. The parameter conditions also have a significant influence.
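
    The two transformations named above are short to state in code; the sketch below shows the log-return transformation and a generic Tukey ladder-of-powers transform applied to a made-up adjusted-close series.

```python
import numpy as np

def log_returns(prices):
    """Log-return transformation commonly applied before fitting AR-type models."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))

def tukey_ladder(x, lam):
    """Tukey ladder-of-powers transformation: x**lam, with log at lam == 0."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else x ** lam

# Hypothetical adjusted-close series.
prices = [101.2, 103.4, 102.8, 108.9, 104.1, 105.6]
print(np.round(log_returns(prices), 4))
print(np.round(tukey_ladder(prices, 0.5), 2))
```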

  15. Model Assessment and Optimization Using a Flow Time Transformation

    NASA Astrophysics Data System (ADS)

    Smith, T. J.; Marshall, L. A.; McGlynn, B. L.

    2012-12-01

    Hydrologic modeling is a particularly complex problem that is commonly confronted with complications due to multiple dominant streamflow states, temporal switching of streamflow generation mechanisms, and dynamic responses to model inputs based on antecedent conditions. These complexities can inhibit the development of model structures and their fitting to observed data. As a result of these complexities and the heterogeneity that can exist within a catchment, optimization techniques are typically employed to obtain reasonable estimates of model parameters. However, when calibrating a model, the cost function itself plays a large role in determining the "optimal" model parameters. In this study, we introduce a transformation that allows for the estimation of model parameters in the "flow time" domain. The flow time transformation dynamically weights streamflows in the time domain, effectively stretching time during high streamflows and compressing time during low streamflows. Given the impact of cost functions on model optimization, such transformations focus on the hydrologic fluxes themselves rather than on equal time weighting common to traditional approaches. The utility of such a transform is of particular note to applications concerned with total hydrologic flux (water resources management, nutrient loading, etc.). The flow time approach can improve the predictive consistency of total fluxes in hydrologic models and provide insights into model performance by highlighting model strengths and deficiencies in an alternate modeling domain. Flow time transformations can also better remove positive skew from the streamflow time series, resulting in improved model fits, satisfaction of the normality assumption of model residuals, and enhanced uncertainty quantification. We illustrate the value of this transformation for two distinct sets of catchment conditions (snow-dominated and subtropical).
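
    One way to picture the idea is a cost function in which each time step contributes in proportion to its share of the observed flow volume, so high-flow periods occupy more of the "flow time" axis; this is an illustrative weighting, not necessarily the exact transformation used in the study, and the hydrographs below are invented.

```python
import numpy as np

def flow_time_objective(q_obs, q_sim):
    """Illustrative flow-time-weighted error: each time step is weighted by its
    share of the observed flow volume, emphasizing high-flow periods."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    weights = q_obs / q_obs.sum()                 # increments of "flow time"
    return float(np.sqrt(np.sum(weights * (q_sim - q_obs) ** 2)))

q_obs = [0.5, 0.6, 12.0, 8.0, 2.0, 0.8]           # hypothetical hydrograph (m^3/s)
q_sim = [0.7, 0.9, 9.0, 8.5, 2.5, 0.6]
print(f"flow-time weighted RMSE: {flow_time_objective(q_obs, q_sim):.3f} m^3/s")
```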

  16. Transforming teacher knowledge: Modeling instruction in physics

    NASA Astrophysics Data System (ADS)

    Cabot, Lloyd H.

    I show that the Modeling physics curriculum is readily accommodated by most teachers in preference to traditional didactic pedagogies. This is so, at least in part, because Modeling focuses on a small set of connected models embedded in a self-consistent theoretical framework and thus is closely congruent with human cognition in this context, which is to generate mental models of physical phenomena as both predictive and explanatory devices. Whether a teacher fully implements the Modeling pedagogy depends on the depth of the teacher's commitment to inquiry-based instruction, specifically Modeling instruction, as a means of promoting student understanding of Newtonian mechanics. Moreover, this commitment trumps all other characteristics: teacher educational background, content coverage issues, student achievement data, district or state learning standards, and district or state student assessments. Indeed, distinctive differences exist in how Modeling teachers deliver their curricula, and some teachers are measurably more effective than others in their delivery, but they all share an unshakable belief in the efficacy of inquiry-based, constructivist-oriented instruction. The Modeling Workshops' pedagogy, duration, and social interactions impact teachers' self-identification as members of a professional community. Finally, I discuss the consequences my research may have for the designers of the Modeling Instruction program and for designers of professional development programs generally.

  17. Lithospheric Architecture, Heterogeneities, Instabilities, Melting - insight from numerical modelling

    NASA Astrophysics Data System (ADS)

    Gorczyk, Weronika; Hobbs, Bruce; Ord, Alison; Gessner, Klaus; Gerya, Taras V.

    2010-05-01

    The seismological structure of the Earth's lithosphere is identified as strongly heterogeneous in terms of thermal and rheological structure. Lithospheric discontinuities (sharp changes in the thermal and/or compositional structure) are thought to be long lived and are mostly correlated with major tectonic boundaries that commonly have been reactivated and which subsequently are the foci of magma intrusion and major mineralization. Recent studies have shown that mantle metasomatism is also controlled by such boundaries. This paper explores the control that lithospheric heterogeneity exerts on the thermal and chemical evolution during deformation subsequent to the development of the heterogeneity. We explore the behaviour of a rheologically heterogeneous lithosphere in a compressional regime. Such variations may be caused, for instance, by the amalgamation of micro-continents, as is thought to be characteristic of the Yilgarn, Western Australia, or South Africa. These micro-continents, due to their diverse histories, may be characterised by various thermal and rheological structures. The models are simplistic but illustrate the basic principles. The code used in this study is based on a conservative finite-difference, multi-grid, marker-in-cell method. Devolatilisation reactions and melting can affect the physical properties of rocks and are incorporated in a self-consistent manner. We use a petrological-thermomechanical modelling approach with all rock properties, including mechanical properties, calculated in the Lagrangian scheme for rock markers at every time step based on Gibbs free energy minimization as a function of the local pressure, temperature and rock composition. The results illustrate that initial structural complexity is necessary for, and has a dramatic effect on, fault development, the growth of deep basins, core complex formation, melting and devolatilisation within the lithosphere. The horizontal and vertical variation in plastic

  18. URBAN AEROSOL TRANSFORMATION AND TRANSPORT MODELING

    EPA Science Inventory

    Modules for secondary aerosol formation have been included in the urban scale K-theory aerosol model, AROSOL. These are: (1) an empirical first-order SO2 conversion scheme due to Meagher, termed EMM; (2) the lumped parameter kinetic model termed the Carbon Bond Mechanism, in the ...

  19. Evaluating the Effectiveness of Reference Models in Federating Enterprise Architectures

    ERIC Educational Resources Information Center

    Wilson, Jeffery A.

    2012-01-01

    Agencies need to collaborate with each other to perform missions, improve mission performance, and find efficiencies. The ability of individual government agencies to collaborate with each other for mission and business success and efficiency is complicated by the different techniques used to describe their Enterprise Architectures (EAs).…

  20. Analysis of new position and height transformation models in Poland

    NASA Astrophysics Data System (ADS)

    Andrasik, Ewa; Ryczywolski, Marcin

    2014-05-01

    In January 2014 the Head Office of Geodesy and Cartography, the Polish authority for geodesy and cartography, released transformation models for position and height. The appearance of the models is related to changes in the legal acts concerning the introduction of the new reference system and frames used in Poland. The transformation models link the old reference frames PL-ETRF89-GRS80h (also called EUREF-89) and PL-KRON86-NH with the new realizations PL-ETRF2000-GRS80h and PL-EVRS2007-NH. The reference frames for position are expressed in the same reference system, ETRS89. In the case of the height system, Poland is currently switching from the Kronstadt normal height system to EVRS, the European height system referred to the Normaal Amsterdams Peil. The transformation models are based on grids covering the territory of Poland with a node spacing of 0.01 degree. The model for transformation between the previous and current ETRS89 realizations is based on the results of a GNSS calibration campaign conducted between 2008 and 2011, covering over 500 points (permanent reference stations and 1st order ground control points) regularly distributed over the area of interest. The above transformation model has been analyzed in the context of differences with respect to the previous frame realizations and to the approach based on parameter transformation. In the context of the implementation of EVRF2007, the new local quasi-geoid model PL-geoid-2011 has been compared to the latest geopotential model, the European quasi-geoid models EGG and the local quasi-geoid models used so far. In addition, the new model has been confronted with undulations based on the existing satellite levelling data, including the results of the fourth levelling campaign.

  1. TRANSFORMATION

    SciTech Connect

    LACKS,S.A.

    2003-10-09

    Transformation, which alters the genetic makeup of an individual, is a concept that intrigues the human imagination. It was in Streptococcus pneumoniae that such transformation was first demonstrated. Perhaps our fascination with genetics derived from our ancestors observing their own progeny, with its retention and assortment of parental traits, but such interest must have been accelerated after the dawn of agriculture. It was in pea plants that Gregor Mendel in the late 1800s examined inherited traits and found them to be determined by physical elements, or genes, passed from parents to progeny. In our day, the material basis of these genetic determinants was revealed to be DNA by the lowly bacteria, in particular, the pneumococcus. For this species, transformation by free DNA is a sexual process that enables cells to sport new combinations of genes and traits. Genetic transformation of the type found in S. pneumoniae occurs naturally in many species of bacteria (70), but, initially, only a few other transformable species were found, namely, Haemophilus influenzae, Neisseria meningitidis, Neisseria gonorrhoeae, and Bacillus subtilis (96). Natural transformation, which requires a set of genes evolved for the purpose, contrasts with artificial transformation, which is accomplished by shocking cells either electrically, as in electroporation, or by ionic and temperature shifts. Although such artificial treatments can introduce very small amounts of DNA into virtually any type of cell, the amounts introduced by natural transformation are a million-fold greater, and S. pneumoniae can take up as much as 10% of its cellular DNA content (40).

  2. An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures

    NASA Astrophysics Data System (ADS)

    Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.

    2009-07-01

    A range of complex challenges, but also potentially unique rewards, underlie the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.

  3. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    NASA Astrophysics Data System (ADS)

    Iacobucci, Joseph V.

    The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission dependent and mission independent metrics are considered. Mission dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used that reduce computer memory requirements. Lastly, a domain specific language is created to provide a reduction in the computational time of executing the system of systems models. A domain specific language is a small, usually declarative language that offers expressive power focused on a particular

  4. Rapid architecture alternative modeling (RAAM): A framework for capability-based analysis of system of systems architectures

    NASA Astrophysics Data System (ADS)

    Iacobucci, Joseph V.

    The research objective for this manuscript is to develop a Rapid Architecture Alternative Modeling (RAAM) methodology to enable traceable Pre-Milestone A decision making during the conceptual phase of design of a system of systems. Rather than following current trends that place an emphasis on adding more analysis which tends to increase the complexity of the decision making problem, RAAM improves on current methods by reducing both runtime and model creation complexity. RAAM draws upon principles from computer science, system architecting, and domain specific languages to enable the automatic generation and evaluation of architecture alternatives. For example, both mission dependent and mission independent metrics are considered. Mission dependent metrics are determined by the performance of systems accomplishing a task, such as Probability of Success. In contrast, mission independent metrics, such as acquisition cost, are solely determined and influenced by the other systems in the portfolio. RAAM also leverages advances in parallel computing to significantly reduce runtime by defining executable models that are readily amenable to parallelization. This allows the use of cloud computing infrastructures such as Amazon's Elastic Compute Cloud and the PASTEC cluster operated by the Georgia Institute of Technology Research Institute (GTRI). Also, the amount of data that can be generated when fully exploring the design space can quickly exceed the typical capacity of computational resources at the analyst's disposal. To counter this, specific algorithms and techniques are employed. Streaming algorithms and recursive architecture alternative evaluation algorithms are used that reduce computer memory requirements. Lastly, a domain specific language is created to provide a reduction in the computational time of executing the system of systems models. A domain specific language is a small, usually declarative language that offers expressive power focused on a particular

  5. Velocity-density twin transforms in the thin disc model

    NASA Astrophysics Data System (ADS)

    Bratek, Łukasz; Sikora, Szymon; Jałocha, Joanna; Kutschera, Marek

    2015-08-01

    The ring mass density and the corresponding circular velocity in the thin disc model are known to be integral transforms of one another. It may be less familiar that the transforms can be reduced to one-fold integrals with identical weight functions. It may be of practical value that the integral for the surface density does not involve the velocity derivative, unlike the equivalent and widely known Toomre formula.

  6. Transforming Community Access to Space Science Models

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Hesse, Michael; Kuznetsova, Maria; Maddox, Marlo; Rastaetter, Lutz; Berrios, David; Pulkkinen, Antti

    2012-01-01

    Researching and forecasting the ever changing space environment (often referred to as space weather) and its influence on humans and their activities are model-intensive disciplines. This is true because the physical processes involved are complex, but, in contrast to terrestrial weather, the supporting observations are typically sparse. Models play a vital role in establishing a physically meaningful context for interpreting limited observations, testing theory, and producing both nowcasts and forecasts. For example, with accurate forecasting of hazardous space weather conditions, spacecraft operators can place sensitive systems in safe modes, and power utilities can protect critical network components from damage caused by large currents induced in transmission lines by geomagnetic storms.

  7. Transforming community access to space science models

    NASA Astrophysics Data System (ADS)

    MacNeice, Peter; Hesse, Michael; Kuznetsova, Maria; Maddox, Marlo; Rastaetter, Lutz; Berrios, David; Pulkkinen, Antti

    2012-04-01

    Researching and forecasting the ever changing space environment (often referred to as space weather) and its influence on humans and their activities are model-intensive disciplines. This is true because the physical processes involved are complex, but, in contrast to terrestrial weather, the supporting observations are typically sparse. Models play a vital role in establishing a physically meaningful context for interpreting limited observations, testing theory, and producing both nowcasts and forecasts. For example, with accurate forecasting of hazardous space weather conditions, spacecraft operators can place sensitive systems in safe modes, and power utilities can protect critical network components from damage caused by large currents induced in transmission lines by geomagnetic storms.

  8. The Transformation of the Getzels Model.

    ERIC Educational Resources Information Center

    McPherson, R. Bruce

    The author describes the model of social behavior in a social system first framed by Jacob Getzels, with the assistance of Egon Guba, in the middle 1950s. Significant changes in the conceptualization of organizational functioning have occurred in the years since then, though the methodological processes for studying that functioning have remained…

  9. A model for heterogeneous materials including phase transformations

    SciTech Connect

    Addessio, F.L.; Clements, B.E.; Williams, T.O.

    2005-04-15

    A model is developed for particulate composites, which includes phase transformations in one or all of the constituents. The model is an extension of the method of cells formalism. Representative simulations for a single-phase, brittle particulate (SiC) embedded in a ductile material (Ti), which undergoes a solid-solid phase transformation, are provided. Also, simulations for a tungsten heavy alloy (WHA) are included. In the WHA analyses a particulate composite, composed of tungsten particles embedded in a tungsten-iron-nickel alloy matrix, is modeled. A solid-liquid phase transformation of the matrix material is included in the WHA numerical calculations. The example problems also demonstrate two approaches for generating free energies for the material constituents. Simulations for volumetric compression, uniaxial strain, biaxial strain, and pure shear are used to demonstrate the versatility of the model.

  10. TRANSFORMER

    DOEpatents

    Baker, W.R.

    1959-08-25

    Transformers of a type adapted for use with extremely high power vacuum tubes, where current requirements may be of the order of 2,000 to 200,000 amperes, are described. The transformer casing has the form of a cylinder with a re-entrant section extending through an opening in one end to form a coaxial terminal arrangement. A toroidal multi-turn primary winding is disposed within the casing in coaxial relationship therein. In a second embodiment, means are provided for forming the casing as a multi-turn secondary. The transformer is characterized by minimized resistance heating, minimized external magnetic flux, and economical construction.

  11. Negotiation Areas for "Transformation" and "Turnaround" Intervention Models

    ERIC Educational Resources Information Center

    Mass Insight Education (NJ1), 2011

    2011-01-01

    To receive School Improvement Grant (SIG) funding, districts must submit an application to the state that outlines their strategic plan to implement one of four intervention models in their persistently lowest-achieving schools. The four intervention models include: (1) School Closure; (2) Restart; (3) Turnaround; and (4) Transformation. The…

  12. Transformative leadership: an ethical stewardship model for healthcare.

    PubMed

    Caldwell, Cam; Voelker, Carolyn; Dixon, Rolf D; LeJeune, Adena

    2008-01-01

    The need for effective leadership is a compelling priority for those who would choose to govern in public, private, and nonprofit organizations, and applies as much to the healthcare profession as it does to other sectors of the economy (Moody, Horton-Deutsch, & Pesut, 2007). Transformative Leadership, an approach to leadership and governance that incorporates the best characteristics of six other highly respected leadership models, is an integrative theory of ethical stewardship that can help healthcare professionals to more effectively achieve organizational efficiencies, build stakeholder commitment and trust, and create valuable synergies to transform and enrich today's healthcare systems (cf. Caldwell, LeJeune, & Dixon, 2007). The purpose of this article is to introduce the concept of Transformative Leadership and to explain how this model applies within a healthcare context. We define Transformative Leadership and identify its relationship to Transformational, Charismatic, Level 5, Principle-Centered, Servant, and Covenantal Leadership--providing examples of each of these elements of Transformative Leadership within a healthcare leadership context. We conclude by identifying contributions of this article to the healthcare leadership literature. PMID:18839754

  13. Modeling interface-controlled phase transformation kinetics in thin films

    NASA Astrophysics Data System (ADS)

    Pang, E. L.; Vo, N. Q.; Philippe, T.; Voorhees, P. W.

    2015-05-01

    The Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation is widely used to describe phase transformation kinetics. This description, however, is not valid in finite size domains, in particular, thin films. A new computational model incorporating the level-set method is employed to study phase evolution in thin film systems. For both homogeneous (bulk) and heterogeneous (surface) nucleation, nucleation density and film thickness were systematically adjusted to study finite-thickness effects on the Avrami exponent during the transformation process. Only site-saturated nucleation with isotropic interface-kinetics controlled growth is considered in this paper. We show that the observed Avrami exponent is not constant throughout the phase transformation process in thin films with a value that is not consistent with the dimensionality of the transformation. Finite-thickness effects are shown to result in reduced time-dependent Avrami exponents when bulk nucleation is present, but not necessarily when surface nucleation is present.
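
    The JMAK relation itself is standard, so a short sketch can show how an apparent Avrami exponent is read off a transformed-fraction curve via the linearization ln(-ln(1-X)) = n ln t + n ln k; the rate constant, exponent, and time grid below are illustrative, and the time-dependent, thickness-limited exponents studied in the paper are not reproduced.

        # Sketch: recover the Avrami exponent n from a JMAK curve X(t) = 1 - exp(-(k t)^n).
        import numpy as np

        n_true, k = 3.0, 0.05
        t = np.linspace(1.0, 100.0, 200)
        X = 1.0 - np.exp(-(k * t) ** n_true)

        # Keep points away from 0 and 1, where the linearization is well behaved.
        mask = (X > 0.01) & (X < 0.99)
        y = np.log(-np.log(1.0 - X[mask]))
        A = np.column_stack([np.log(t[mask]), np.ones(mask.sum())])
        n_est, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
        print(f"estimated Avrami exponent: {n_est:.2f} (true {n_true})")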

  14. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
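
    The sketch below illustrates only the general frequency-domain idea underlying the method, i.e. forming a frequency response from Fourier-transformed input/output records and fitting a transfer function to it by least squares; the orthogonal modeling functions of the paper are not implemented, and the first-order system, band limits, and sample rate are assumptions.

        # Sketch: frequency-response estimate from FFTs, then a first-order fit K/(tau*s + 1).
        import numpy as np

        dt = 0.01
        t = np.arange(0, 20, dt)
        rng = np.random.default_rng(1)
        u = rng.standard_normal(t.size)                 # broadband input

        K_true, tau_true = 2.0, 0.5                     # simulated "true" system
        y = np.zeros_like(u)                            # forward-Euler: y' = (K u - y)/tau
        for i in range(1, t.size):
            y[i] = y[i-1] + dt * (K_true * u[i-1] - y[i-1]) / tau_true

        U, Y = np.fft.rfft(u), np.fft.rfft(y)
        f = np.fft.rfftfreq(t.size, dt)
        band = (f > 0) & (f < 5.0)                      # fit over a low-frequency band
        H = Y[band] / U[band]
        s = 2j * np.pi * f[band]

        # Rearranged model H*(tau*s + 1) = K  ->  H = K - H*s*tau (linear in K, tau).
        A = np.column_stack([np.ones_like(s), -H * s])
        K_est, tau_est = np.linalg.lstsq(A, H, rcond=None)[0]
        print(f"K ~ {K_est.real:.2f}, tau ~ {tau_est.real:.2f}")  # should land near 2 and 0.5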

  15. NASA Integrated Model Centric Architecture (NIMA) Model Use and Re-Use

    NASA Technical Reports Server (NTRS)

    Conroy, Mike; Mazzone, Rebecca; Lin, Wei

    2012-01-01

    This whitepaper accepts the goals, needs and objectives of NASA's Integrated Model-centric Architecture (NIMA); adds experience and expertise from the Constellation program as well as NASA's architecture development efforts; and provides suggested concepts, practices and norms that nurture and enable model use and re-use across programs, projects and other complex endeavors. Key components include the ability to effectively move relevant information through a large community, process patterns that support model re-use, and the identification of the necessary meta-information (e.g. history, credibility, and provenance) to safely use and re-use that information. In order to successfully use and re-use models and simulations, we must define and meet key organizational and structural needs: 1. We must understand and acknowledge all the roles and players involved from the initial need identification through to the final product, as well as how they change across the lifecycle. 2. We must create the necessary structural elements to store and share NIMA-enabled information throughout the Program or Project lifecycle. 3. We must create the necessary organizational processes to stand up and execute a NIMA-enabled Program or Project throughout its lifecycle. NASA must meet all three of these needs to successfully use and re-use models. The ability to re-use models is a key component of NIMA, and the capabilities inherent in NIMA are key to accomplishing NASA's space exploration goals.

  16. The role of technology and engineering models in transforming healthcare.

    PubMed

    Pavel, Misha; Jimison, Holly Brugge; Wactlar, Howard D; Hayes, Tamara L; Barkis, Will; Skapik, Julia; Kaye, Jeffrey

    2013-01-01

    The healthcare system is in crisis due to challenges including escalating costs, the inconsistent provision of care, an aging population, and high burden of chronic disease related to health behaviors. Mitigating this crisis will require a major transformation of healthcare to be proactive, preventive, patient-centered, and evidence-based with a focus on improving quality-of-life. Information technology, networking, and biomedical engineering are likely to be essential in making this transformation possible with the help of advances, such as sensor technology, mobile computing, machine learning, etc. This paper has three themes: 1) motivation for a transformation of healthcare; 2) description of how information technology and engineering can support this transformation with the help of computational models; and 3) a technical overview of several research areas that illustrate the need for mathematical modeling approaches, ranging from sparse sampling to behavioral phenotyping and early detection. A key tenet of this paper concerns complementing prior work on patient-specific modeling and simulation by modeling neuropsychological, behavioral, and social phenomena. The resulting models, in combination with frequent or continuous measurements, are likely to be key components of health interventions to enhance health and wellbeing and the provision of healthcare. PMID:23549108

  17. Modelling of internal architecture of kinesin nanomotor as a machine language.

    PubMed

    Khataee, H R; Ibrahim, M Y

    2012-09-01

    Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. The kinesin nanomotor is considered a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make decisions internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of the internal decision-making process of the kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of the kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was accepted by the architectural DFA model of the nanomotor and was also in good agreement with its natural behaviour. The internal agent-based architectural model of the kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor's interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling of the internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation towards the concept of bio-nanoswarms and the next phases of bio-nanorobotic systems development. PMID:22894532
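
    A DFA over motor states can be sketched directly; the states, input symbols, and transition table below are hypothetical placeholders chosen for illustration and are not the automaton derived in the paper.

        # Minimal DFA sketch: internal decision states of a walking motor as a finite automaton.
        START, ACCEPT = "detached", {"detached"}

        # Transition table: (state, input symbol) -> next state.
        # Symbols: 'c' = cargo/microtubule binding, 'a' = ATP hydrolysis, 'r' = release.
        DELTA = {
            ("detached", "c"): "bound",
            ("bound", "a"): "stepping",
            ("stepping", "a"): "stepping",   # processive stepping
            ("stepping", "r"): "detached",
        }

        def accepts(word):
            """Return True if the symbol sequence is a valid walk of the automaton."""
            state = START
            for sym in word:
                key = (state, sym)
                if key not in DELTA:
                    return False
                state = DELTA[key]
            return state in ACCEPT

        print(accepts("caaar"))   # bind, hydrolyse three times, release -> True
        print(accepts("ar"))      # hydrolysis before binding -> False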

  18. Möbius transformational high dimensional model representation on multi-way arrays

    NASA Astrophysics Data System (ADS)

    Özay, Evrim Korkmaz

    2012-09-01

    Transformational High Dimensional Model Representation has previously been used for continuous structures with different transformations. This work is novel not only in the type of transformation but also in its usage. Möbius Transformational High Dimensional Model Representation is applied to multi-way arrays; by using a truncated approximant and the inverse transformation, an approximation to the original multi-way array is obtained.

  19. Bayesian spatial transformation models with applications in neuroimaging data

    PubMed Central

    Miranda, Michelle F.; Zhu, Hongtu; Ibrahim, Joseph G.

    2013-01-01

    The aim of this paper is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. Our STMs include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov Random Field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. PMID:24128143
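
    As a small, marginal illustration of the Box-Cox ingredient (the spatially varying transformation, the Gaussian Markov random field prior, and the MCMC sampler of the paper are not reproduced), the sketch below applies a per-voxel maximum-likelihood Box-Cox transformation to synthetic skewed data; the voxel count, sample size, and lognormal generator are assumptions.

        # Sketch: voxel-wise Box-Cox transformation of skewed, non-Gaussian measures.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        voxels = rng.lognormal(mean=0.0, sigma=0.6, size=(5, 1000))  # 5 voxels x 1000 subjects

        for v, y in enumerate(voxels):
            y_bc, lam = stats.boxcox(y)   # transformed data and the MLE of lambda
            print(f"voxel {v}: lambda = {lam:+.2f}, "
                  f"skew {stats.skew(y):+.2f} -> {stats.skew(y_bc):+.2f}")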

  20. Simplified three-phase transformer model for electromagnetic transient studies

    SciTech Connect

    Chimklai, S.; Marti, J.R.

    1995-07-01

    This paper presents a simplified high-frequency model for three-phase, two- and three-winding transformers. The model is based on the classical 60-Hz equivalent circuit, extended to high frequencies by the addition of the winding capacitances and the synthesis of the frequency-dependent short-circuit branch by an RLC equivalent network. By retaining the T-form of the classical model, it is possible to separate the frequency-dependent series branch from the constant-valued shunt capacitances. Since the short-circuit branch can be synthesized by a minimum-phase-shift rational approximation, the mathematical complications of fitting mutual impedance or admittance functions are avoided and the model is guaranteed to be numerically absolutely stable. Experimental tests were performed on actual power transformers to determine the parameters of the model. EMTP simulation results are also presented.

  1. Modeling of a 3DTV service in the software-defined networking architecture

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2014-11-01

    In this article a newly developed concept for modeling a multimedia service offering stereoscopic motion imagery is presented. The proposed model is based on utilization of the Software-Defined Networking (SDN) architecture. The definition of a 3D television service spanning the SDN concept is identified, exposing the basic characteristics of a 3DTV service in a modern networking organization layout. Furthermore, exemplary functionalities of the proposed 3DTV model are depicted. It is indicated that modeling of a 3DTV service in the Software-Defined Networking architecture leads to a multiplicity of improvements, especially regarding the flexibility of a service supporting heterogeneous end-user devices.

  2. Fractional brownian functions as mathematical models of natural rhythm in architecture.

    PubMed

    Cirovic, Ivana M

    2014-10-01

    Carl Bovill suggested and described a method of generating rhythm in architecture with the help of fractional Brownian functions, as they are mathematical models of natural rhythm. A relationship established in the stated procedure between fractional Brownian functions as models of rhythm, and the observed group of architectural elements, is recognized as an analogical relationship, and the procedure of generating rhythm as a process of analogical transfer from the natural domain to the architectural domain. Since analogical transfer implies relational similarity of two domains, and the establishment of one-to-one correspondence, this paper is trying to determine under which conditions such correspondence could be established. For example, if the values of the observed visual feature of architectural elements are not similar to each other in a way in which they can form a monotonically increasing, or a monotonically decreasing bounded sequence, then the structural alignment and the one-to-one correspondence with a single fractional Brownian function cannot be established, hence, this function is deemed inappropriate as a model for the architectural rhythm. In this case we propose overlapping of two or more functions, so that each of them is an analog for one subset of mutually similar values of the visual feature of architectural elements. PMID:25196709

  3. A Service Oriented Architecture for Exploring High Performance Distributed Power Models

    SciTech Connect

    Liu, Yan; Chase, Jared M.; Gorton, Ian

    2012-11-12

    Power grids are increasingly incorporating high quality, high throughput sensor devices inside power distribution networks. These devices are driving an unprecedented increase in the volume and rate of available information. The real-time requirements for handling this data are beyond the capacity of conventional power models running in central utilities. Hence, we are exploring distributed power models deployed at the regional scale. The connection of these models for a larger geographic region is supported by a distributed system architecture. This architecture is built in a service oriented style, whereby distributed power models running on high performance clusters are exposed as services. Each service is semantically annotated and therefore can be discovered through a service catalog and composed into workflows. The overall architecture has been implemented as an integrated workflow environment useful for power researchers to explore newly developed distributed power models.

  4. Transformative Professional Development: A Model for Urban Science Education Reform

    ERIC Educational Resources Information Center

    Johnson, Carla C.; Marx, Sherry

    2009-01-01

    This study presents a model of Transformative Professional Development (TPD) for use in sustained, collaborative, professional development of teachers in urban middle school science. TPD focuses on urban science teacher change and is responsive to school climate, teacher needs, and teacher beliefs with the intention of promoting change in…

  5. Model Transformation for a System of Systems Dependability Safety Case

    NASA Technical Reports Server (NTRS)

    Murphy, Judy; Driskell, Steve

    2011-01-01

    The presentation reviews the dependability and safety effort of NASA's Independent Verification and Validation Facility. Topics include: safety engineering process, applications to non-space environment, Phase I overview, process creation, sample SRM artifact, Phase I end result, Phase II model transformation, fault management, and applying Phase II to individual projects.

  6. Software architecture and design of the web services facilitating climate model diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform compatibility; (2) co-location of computation and big data on the server side, with only small results and plots downloaded to the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment so each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a light-weight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. Our tool CMDA has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School soon.
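
    In the spirit of the Python-wrapper approach mentioned above, the sketch below wraps a stand-in analysis routine as a small Flask web service; the route name, JSON payload format, port, and the compute_anomaly() routine are hypothetical placeholders, not CMDA's actual interface.

        # Sketch: exposing an existing analysis routine as a web service with Flask.
        from flask import Flask, request, jsonify
        import numpy as np

        app = Flask(__name__)

        def compute_anomaly(values):
            """Stand-in for an existing science code: anomalies w.r.t. the mean."""
            arr = np.asarray(values, dtype=float)
            return (arr - arr.mean()).tolist()

        @app.route("/diagnostics/anomaly", methods=["POST"])
        def anomaly_service():
            payload = request.get_json(force=True)        # e.g. {"values": [...]}
            return jsonify(anomaly=compute_anomaly(payload["values"]))

        if __name__ == "__main__":
            # In production this would sit behind Gunicorn; app.run() is for local testing.
            app.run(host="0.0.0.0", port=8080)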

  7. Laguerre-Volterra model and architecture for MIMO system identification and output prediction.

    PubMed

    Li, Will X Y; Xin, Yao; Chan, Rosa H M; Song, Dong; Berger, Theodore W; Cheung, Ray C C

    2014-01-01

    A generalized mathematical model is proposed for behavior prediction of biological causal systems with multiple inputs and multiple outputs (MIMO). The system properties are represented by a set of model parameters, which can be derived by probing the system with random input stimuli. The system calculates predicted outputs based on the estimated parameters and its novel inputs. An efficient hardware architecture is established for this mathematical model and its circuitry has been implemented using field-programmable gate arrays (FPGAs). This architecture is scalable and its functionality has been validated using experimental data gathered from real-world measurement. PMID:25571001
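
    The sketch below shows only the Laguerre-expansion idea that underlies such models, for a single input and a linear (first-order) kernel: the input is passed through a cascade of discrete Laguerre filters and the expansion coefficients are estimated by least squares. The pole value, basis order, simulated kernel, and noise level are assumptions; the paper's MIMO estimator and FPGA mapping are not reproduced.

        # Sketch: discrete Laguerre basis via a filter cascade, then a least-squares kernel fit.
        import numpy as np
        from scipy.signal import lfilter

        def laguerre_outputs(x, a=0.9, order=5):
            """Convolve input x with the first `order` discrete Laguerre functions."""
            outs = []
            v = lfilter([np.sqrt(1.0 - a * a)], [1.0, -a], x)   # low-pass first stage
            outs.append(v)
            for _ in range(order - 1):
                v = lfilter([-a, 1.0], [1.0, -a], v)            # all-pass stages
                outs.append(v)
            return np.column_stack(outs)

        rng = np.random.default_rng(3)
        x = rng.standard_normal(2000)
        h = np.exp(-np.arange(50) / 10.0)                       # simulated linear kernel
        y = np.convolve(x, h)[: x.size] + 0.05 * rng.standard_normal(x.size)

        V = laguerre_outputs(x)
        coef, *_ = np.linalg.lstsq(V, y, rcond=None)
        y_hat = V @ coef
        print("relative fit error:", np.linalg.norm(y - y_hat) / np.linalg.norm(y))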

  8. Transitioning ISR architecture into the cloud

    NASA Astrophysics Data System (ADS)

    Lash, Thomas D.

    2012-06-01

    Emerging cloud computing platforms offer an ideal opportunity for Intelligence, Surveillance, and Reconnaissance (ISR) intelligence analysis. Cloud computing platforms help overcome challenges and limitations of traditional ISR architectures. Modern ISR architectures can benefit from examining commercial cloud applications, especially as they relate to user experience, usage profiling, and transformational business models. This paper outlines legacy ISR architectures and their limitations, presents an overview of cloud technologies and their applications to the ISR intelligence mission, and presents an idealized ISR architecture implemented with cloud computing.

  9. The mathematical modeling of phase transformation of steel during quenching

    SciTech Connect

    Jahanian, S.; Mosleh, M.

    1999-02-01

    In the heat treatment of steel, uneven cooling invariably introduces residual stresses in the workpiece. These residual stresses can combine with the thermomechanical stresses encountered in operation to cause premature fatigue failure of the material. A prediction of the residual and thermoelastoplastic stresses developed during heat treatment would be beneficial for component design. In this article a numerical model is developed to predict the thermoelastoplastic and residual stresses during rapid cooling of a long solid cylinder. The total strains developed during cooling of the cylinder comprise elastic, thermal, and plastic strains and strains due to phase transformation. For plastic deformation an extension of Jiang's constitutive equations developed by Jahanian is adopted. The properties of the material are assumed to be temperature dependent and characterized by nonlinear strain hardening. For phase transformation two parts are considered: nucleation according to Scheil's method and phase growth according to Johnson and Mehl's law. For martensitic transformation, a law established by Koistinen and Marburger is used. Non-additivity of pearlitic and bainitic nucleation, suggested by Manning and Lorig, is taken into account by means of a correction factor to Scheil's summation for the transition from pearlitic to bainitic transformation. The effect of phase transformation and of the temperature dependence of material properties is investigated. It is shown that by neglecting the temperature dependency and phase transformation in numerical calculations, the results are underestimated. The numerical results are compared with the available experimental data in the literature, and good agreement is observed.
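
    Two of the named ingredients can be sketched directly: Scheil's additivity rule for the onset of diffusional transformation and the Koistinen-Marburger relation for the martensite fraction. The cooling path, the hypothetical C-curve tau(T), the Ms temperature, and the rate constant below are illustrative values, not the paper's material data.

        # Sketch: Scheil additivity and the Koistinen-Marburger relation during a quench.
        import numpy as np

        # Piecewise cooling path: temperature (C) versus time (s).
        t = np.linspace(0.0, 60.0, 601)
        T = 850.0 - 12.0 * t                       # constant cooling rate of 12 C/s

        # Scheil additivity: transformation starts when sum(dt / tau(T)) reaches 1.
        def tau_incubation(T_C):
            """Hypothetical C-curve: shortest incubation near the 'nose' at 550 C."""
            return 2.0 + 0.002 * (T_C - 550.0) ** 2

        valid = (T < 720.0) & (T > 400.0)          # region where the C-curve applies
        scheil_sum = np.cumsum(np.where(valid, (t[1] - t[0]) / tau_incubation(T), 0.0))
        start_idx = np.argmax(scheil_sum >= 1.0) if np.any(scheil_sum >= 1.0) else None
        print("diffusional transformation starts at t =",
              None if start_idx is None else round(float(t[start_idx]), 1), "s")

        # Koistinen-Marburger: martensite fraction below the Ms temperature.
        Ms, k = 350.0, 0.011
        f_mart = np.where(T < Ms, 1.0 - np.exp(-k * (Ms - T)), 0.0)
        print("martensite fraction at end of quench:", round(float(f_mart[-1]), 2))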

  10. A Unitary Transformation in the Contracted Symplectic Model Approach

    NASA Astrophysics Data System (ADS)

    Castaños, Octavio; López-Moreno, Enrique

    1997-04-01

    In recent years a contracted version of the symplectic shell model scheme has been used to describe the energy spectra and electromagnetic transitions of light and heavy rotational nuclei. In these works a model Hamiltonian was used that takes into account the shell structure, couplings to major shells through a quadrupole-quadrupole interaction, and a residual rotor term. In the present contribution a unitary transformation is introduced, which gives rise to a simpler Hamiltonian whose matrix elements for the different component terms with respect to the U_b(3) × U_s(3) basis states are easily calculated. The quadrupole electromagnetic transitions can also be easily determined. At the same time, in the boson approximation limit, this unitary transformation yields new insights into the shell model interpretation of the quantum rotor Hamiltonian.

  11. Modeling solid-state transformations occurring in dissolution testing.

    PubMed

    Laaksonen, Timo; Aaltonen, Jaakko

    2013-04-15

    Changes in the solid-state form can occur during dissolution testing of drugs. This can often complicate interpretation of results. Additionally, there can be several mechanisms through which such a change proceeds, e.g. solvent-mediated transformation or crystal growth within the drug material itself. Here, a mathematical model was constructed to study the dissolution testing of a material, which undergoes such changes. The model consisted of two processes: the recrystallization of the drug from a supersaturated liquid state caused by the dissolution of the more soluble solid form and the crystal growth of the stable solid form at the surface of the drug formulation. Comparison to experimental data on theophylline dissolution showed that the results obtained with the model matched real solid-state changes and that it was able to distinguish between cases where the transformation was controlled either by solvent-mediated crystallization or solid-state crystal growth. PMID:23506958
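
    A minimal sketch of the two coupled processes described above is given below as a small ODE system solved with scipy; the rate laws, rate constants, and solubilities are hypothetical placeholders rather than the paper's equations.

        # Sketch: dissolution of a metastable form feeding recrystallization of a stable form.
        import numpy as np
        from scipy.integrate import solve_ivp

        k_diss, k_cryst = 0.8, 0.5        # illustrative rate constants
        Cs_meta, Cs_stable = 2.0, 1.0     # illustrative solubilities, metastable > stable

        def rhs(time, state):
            m_meta, dissolved, m_stable = state
            diss = k_diss * m_meta * max(Cs_meta - dissolved, 0.0)      # dissolution flux
            cryst = k_cryst * max(dissolved - Cs_stable, 0.0)           # recrystallization flux
            return [-diss, diss - cryst, cryst]

        sol = solve_ivp(rhs, (0.0, 24.0), [5.0, 0.0, 0.0], dense_output=True)
        t_grid = np.linspace(0.0, 24.0, 7)
        for tt, (m, c, s) in zip(t_grid, sol.sol(t_grid).T):
            print(f"t={tt:4.0f} h  metastable={m:5.2f}  dissolved={c:5.2f}  stable={s:5.2f}")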

  12. Tensor product model transformation based decoupled terminal sliding mode control

    NASA Astrophysics Data System (ADS)

    Zhao, Guoliang; Li, Hongxing; Song, Zhankui

    2016-06-01

    The main objective of this paper is to propose a tensor product model transformation based decoupled terminal sliding mode controller design methodology. The methodology is divided into two steps. In the first step, tensor product model transformation is applied to the single-input-multi-output system and a parameter-varying weighted linear time-invariant system is obtained. Then, a decoupled terminal sliding mode controller is designed based on the linear time-invariant systems. The main novelty of this paper is that the nonsingular terminal sliding mode control design is based on a numerical model rather than an analytical one. Finally, the approach is tested in simulation on a cart-pole system and on a translational oscillations with a rotational actuator system.
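
    The core step of the tensor product model transformation can be sketched with a plain SVD of the unfolded system tensor: a parameter-varying system matrix is sampled over a grid, and constant vertex systems plus parameter-dependent weighting functions are extracted. The example LPV matrix and grid below are assumptions, and the convex-hull manipulation and sliding mode design of the paper are omitted.

        # Sketch: TP-style decomposition of a sampled LPV system matrix via the SVD.
        import numpy as np

        def A_of_p(p):
            """Illustrative single-parameter LPV system matrix."""
            return np.array([[0.0, 1.0],
                             [-2.0 - np.sin(p), -0.5 - 0.1 * p]])

        grid = np.linspace(-np.pi, np.pi, 101)
        tensor = np.stack([A_of_p(p) for p in grid])          # shape (N, 2, 2)

        # Unfold along the parameter dimension and take a truncated SVD.
        M = tensor.reshape(len(grid), -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = int(np.sum(s > 1e-10 * s[0]))                     # numerical rank
        weights = U[:, :r]                                    # weighting functions over the grid
        vertices = (np.diag(s[:r]) @ Vt[:r]).reshape(r, 2, 2) # constant vertex systems

        recon = np.einsum("nr,rij->nij", weights, vertices)
        print("number of vertex systems:", r)
        print("max reconstruction error:", np.abs(recon - tensor).max())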

  13. Thermo-mechanical Model of the Dead Sea Transform

    NASA Astrophysics Data System (ADS)

    Sobolev, S. V.; Babeyko, A. Y.; Garfunkel, Z.

    2002-12-01

    The Dead Sea transform system (DST) is the boundary between the Arabian and African plates, where left-lateral transform motion has largely accommodated the opening of the Red Sea basin during the last 15-20 My. One of the key questions related to this plate boundary is whether the DST crosses the crust and mantle lithosphere, and how the rheologically different units composing the lithosphere interact during strong deformation. Another major question is how important the rifting (transform-perpendicular extension) component of deformation is at the DST. We address these questions using internally consistent finite element thermo-mechanical modelling of lithospheric deformation constrained by high-resolution geophysical observations and especially by the recent geophysical data of the DESERT Project. From our modelling, we conclude that the DST lithospheric structure is controlled by the plate-scale transform displacement within a relatively cold and strong lithosphere. In such a lithosphere, shear strain is localized in a narrow (20-40 km wide) vertical decoupling zone (VDZ), which crosses the entire lithosphere and even continues into the asthenosphere. In the upper crust the deformation localizes at one or two major faults located at the top of this zone. The location of the VDZ is controlled by the temperature of the uppermost mantle prior to the transform motion. Most of the lithospheric structures imaged along the DESERT seismic line are explained by the 105 km transform motion combined with less than 4 km of transform-perpendicular extension. Uplift of the Arabian Shield adjacent to the DST can be explained by young (<20 Ma) thinning of the lithosphere at and east of the plate boundary. Such lithospheric thinning is consistent with seismological observations, with the low present-day surface heat flow and with the high temperatures derived from mantle xenoliths brought up by Neogene-Quaternary basalts. Taking into account the timing of the onset of the

  14. Phase Transformation Hysteresis in a Plutonium Alloy System: Modeling the Resistivity during the Transformation

    SciTech Connect

    Haslam, J J; Wall, M A; Johnson, D L; Mayhall, D J; Schwartz, A J

    2001-11-14

    We have induced, measured, and modeled the δ-α' martensitic transformation in a Pu-Ga alloy by a resistivity technique on a 2.8-mm diameter disk sample. Our measurements of the resistance by a 4-probe technique were consistent with the expected resistance obtained from a finite element analysis of the 4-point measurement of resistivity in our round disk configuration. Analysis by finite element methods of the postulated configuration of α' particles within model δ grains suggests that a considerable anisotropy in the resistivity may be obtained depending on the arrangement of the α' lens shaped particles within the grains. The resistivity of these grains departs from the series resistance model and can lead to significant errors in the predicted amount of the α' phase present in the microstructure. An underestimation of the amount of α' in the sample by 15%, or more, appears to be possible.

  15. Cultural heritage conservation and communication by digital modeling tools. Case studies: minor architectures of the Thirties in the Turin area

    NASA Astrophysics Data System (ADS)

    Bruno, A., Jr.; Spallone, R.

    2015-08-01

    Between the end of the twenties and the beginning of World War Two, Turin, like most Italian cities, was endowed by the fascist regime with many new buildings intended to guarantee its visibility and to control the territory: the fascist party's main houses and the local ones. The style adopted for these constructions was inspired by the guidelines of the Modern Movement, which were being spread by a generation of architects such as Le Corbusier, Gropius, and Mendelsohn. At the end of the war many buildings were converted to other functions, which led to heavy transformations not respectful of their original worth; others were demolished. Today it is possible to rebuild those lost architectures in their original form, as they were created by their architects on paper (and in their minds). This process can guarantee three-dimensional perception, the authenticity of the materials and the placement within the Turin urban tissue, using static and dynamic digital representation systems. The "three-dimensional re-drawing" of the projects, understood as a heuristic practice devoted to revealing the original idea of the project, is inserted into a digital model of the urban and natural context as we can experience it today, to simulate the perceptive effects that the building could stir up today. The modeling skills are the basis for producing videos able to explore the relationship between the environment and the "re-built architectures", describing with synthetic movie techniques the main formal and perceptive roots. The model represents a scientific product that can be included in a virtual archive of cultural goods to preserve the collective memory of the architectural and urban past image of Turin.

  16. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  17. Information Model Driven Semantic Framework Architecture and Design for Distributed Data Repositories

    NASA Astrophysics Data System (ADS)

    Fox, P. A.; Semantic eScience Framework Team

    2011-12-01

    In Earth and space science, the steady evolution away from isolated and single-purpose data 'systems' toward systems of systems, data ecosystems, or data frameworks that provide access to highly heterogeneous data repositories is picking up pace. As a result, common informatics approaches are being sought for how newer architectures are developed and/or implemented. In particular, a clear need has emerged for a repeatable method for modeling, implementing and evolving information architectures, one that goes beyond traditional software design. This presentation outlines new component design approaches based on sets of information models and semantic encodings for mediation.

  18. Protein modeling with hybrid Hidden Markov Model/Neural network architectures

    SciTech Connect

    Baldi, P.; Chauvin, Y.

    1995-12-31

    Hidden Markov Models (HMMs) are useful in a number of tasks in computational molecular biology, and in particular to model and align protein families. We argue that HMMs are somewhat optimal within a certain modeling hierarchy. Single first order HMMs, however, have two potential limitations: a large number of unstructured parameters, and a built-in inability to deal with long-range dependencies. Hybrid HMM/Neural Network (NN) architectures attempt to overcome these limitations. In hybrid HMM/NN, the HMM parameters are computed by a NN. This provides a reparametrization that allows for flexible control of model complexity, and incorporation of constraints. The approach is tested on the immunoglobulin family. A hybrid model is trained, and a multiple alignment derived, with less than a fourth of the number of parameters used with previous single HMMs. To capture dependencies, however, one must resort to a larger hybrid model class, where the data is modeled by multiple HMMs. The parameters of the HMMs, and their modulation as a function of input or context, is again calculated by a NN.

  19. Evaluation of a server-client architecture for accelerator modeling and simulation

    SciTech Connect

    Bowling, B.A.; Akers, W.; Shoaee, H.; Watson, W.; van Zeijts, J.; Witherspoon, S.

    1997-02-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution using file-formatted input/output. This method of code implementation was generally chosen for several factors, including CPU throughput and availability, complexity of the required modeling problem, and presentation of computation results. With the advent of faster computer hardware and the advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing a query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and server/client architectures, focusing on design and implementation issues, performance, and general utility towards accelerator modeling demands. © 1997 American Institute of Physics.
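
    A query-based client/server exchange of the kind described can be sketched with Python's standard xmlrpc modules; the served quantity (a thin-lens quadrupole focal length), the method name, and the port are illustrative placeholders, not the Jefferson Lab interface.

        # Sketch: a minimal query-based model server and client using xmlrpc.
        import threading
        from xmlrpc.server import SimpleXMLRPCServer
        from xmlrpc.client import ServerProxy

        def quad_focal_length(k, length):
            """Toy optics query: thin-lens quadrupole focal length f = 1 / (k * L)."""
            return 1.0 / (k * length)

        server = SimpleXMLRPCServer(("localhost", 8765), logRequests=False)
        server.register_function(quad_focal_length, "quad_focal_length")
        threading.Thread(target=server.serve_forever, daemon=True).start()

        # A client issues a query instead of reading a batch output file.
        client = ServerProxy("http://localhost:8765")
        print("focal length:", client.quad_focal_length(0.8, 1.2))
        server.shutdown()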

  20. Evaluation of a server-client architecture for accelerator modeling and simulation

    SciTech Connect

    Bowling, B. A.; Akers, W.; Shoaee, H.; Watson, W.; Zeijts, J. van; Witherspoon, S.

    1997-02-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution using file-formatted input/output. This method of code implementation was generally chosen for several factors, including CPU throughput and availability, complexity of the required modeling problem, and presentation of computation results. With the advent of faster computer hardware and the advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing a query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and server/client architectures, focusing on design and implementation issues, performance, and general utility towards accelerator modeling demands.

  1. Evaluation of a server-client architecture for accelerator modeling and simulation

    SciTech Connect

    Bowling, B.A.; Akers, W.; Shoaee, H.; Watson, W.; Zeijts, J. van; Witherspoon, S.

    1997-11-01

    Traditional approaches to computational modeling and simulation often utilize a batch method for code execution using file-formatted input/output. This method of code implementation was generally chosen for several factors, including CPU throughput and availability, complexity of the required modeling problem, and presentation of computation results. With the advent of faster computer hardware and the advances in networking and software techniques, other program architectures for accelerator modeling have recently been employed. Jefferson Laboratory has implemented a client/server solution for accelerator beam transport modeling utilizing a query-based I/O. The goal of this code is to provide modeling information for control system applications and to serve as a computation engine for general modeling tasks, such as machine studies. This paper performs a comparison between the batch execution and server/client architectures, focusing on design and implementation issues, performance, and general utility towards accelerator modeling demands.

  2. Rasch family models in e-learning: analyzing architectural sketching with a digital pen.

    PubMed

    Scalise, Kathleen; Cheng, Nancy Yen-Wen; Oskui, Nargas

    2009-01-01

    Since architecture students studying design drawing are usually assessed qualitatively on the basis of their final products, the challenges and stages of their learning have remained masked. To clarify the challenges in design drawing, we have been using the BEAR Assessment System and Rasch family models to measure levels of understanding for individuals and groups, in order to correct pedagogical assumptions and tune teaching materials. This chapter discusses the analysis of 81 drawings created by architectural students to solve a space layout problem, collected and analyzed with digital pen-and-paper technology. The approach allows us to map developmental performance criteria and perceive achievement overlaps in learning domains assumed separate, and then re-conceptualize a three-part framework to represent learning in architectural drawing. Results and measurement evidence from the assessment and Rasch modeling are discussed. PMID:19671990
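    To make the measurement idea concrete, the sketch below shows the dichotomous Rasch model behind this family of analyses: the probability that a student reaches a scoring criterion is logistic in the difference between student ability and criterion difficulty. This is an illustration only, not the authors' BEAR/Rasch implementation, and the ability and difficulty values are assumed.

```python
# A minimal sketch of the dichotomous Rasch model (illustrative numbers only).
import numpy as np

def rasch_prob(theta, delta):
    """P(success | ability theta, criterion difficulty delta)."""
    return 1.0 / (1.0 + np.exp(-(theta - delta)))

item_difficulties = np.array([-1.0, 0.0, 1.5])   # assumed drawing criteria
for theta in (-0.5, 0.5, 1.5):                   # assumed student abilities
    probs = rasch_prob(theta, item_difficulties)
    print(f"ability {theta:+.1f}: expected score = {probs.sum():.2f}, probs = {np.round(probs, 2)}")
```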

  3. A transformation model for Laminaria Japonica (Phaeophyta, Laminariales)

    NASA Astrophysics Data System (ADS)

    Qin, Song; Jiang, Peng; Li, Xin-Ping; Wang, Xi-Hua; Zeng, Cheng-Kui

    1998-03-01

    A genetic transformation model for the seaweed Laminaria japonica mainly includes the following aspects: 1. The method to introduce foreign genes into the kelp, L. japonica. Biolistic bombardment has been proved to be an effective method to bombard foreign DNA through cell walls into intact cells of both sporophytes and gametophytes. The expression of cat and lacZ was detected in regenerated sporophytes, which suggests that this method could induce random integration of foreign genes. 2. Promoters to drive gene expression

  4. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    SciTech Connect

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; Vetter, Jeffrey S.

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  5. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    DOE PAGES Beta

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; Vetter, Jeffrey S.

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  6. Three real-time architectures - A study using reward models

    NASA Technical Reports Server (NTRS)

    Sjogren, J. A.; Smith, R. M.

    1990-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the evolutionary behavior of the computer system by a continuous-time Markov chain, and a reward rate is associated with each state. In reliability/availability models, upstates have reward rate 1, and down states have reward rate zero associated with them. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Steady-state expected reward rate and expected instantaneous reward rate are clearly useful measures which can be extracted from the Markov reward model. The diversity of areas where Markov reward models may be used is illustrated with a comparative study of three examples of interest to the fault tolerant computing community.
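    As an illustration of the reward formalism described above (not one of the paper's three architectures), the sketch below builds a small continuous-time Markov chain for a hypothetical duplex system, attaches a reward rate to each state, and computes the steady-state expected reward rate; the failure and repair rates and the reward values are assumed.

```python
# A minimal sketch: steady-state expected reward rate for a 3-state
# Markov reward model of a fault-tolerant system (illustrative rates).
import numpy as np

# Generator matrix Q (rows sum to zero); states: 0 = both units up,
# 1 = one unit up (degraded), 2 = system down.
lam, mu = 1e-3, 1e-1            # assumed failure and repair rates (per hour)
Q = np.array([[-2*lam,  2*lam,      0.0],
              [    mu, -(mu+lam),   lam],
              [   0.0,     mu,      -mu]])

reward = np.array([1.0, 0.5, 0.0])   # e.g. relative computational capacity

# Solve pi Q = 0 with sum(pi) = 1 for the steady-state distribution.
A = np.vstack([Q.T, np.ones(len(reward))])
b = np.zeros(len(reward) + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state distribution:", pi)
print("steady-state expected reward rate:", pi @ reward)
```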

  7. Transforming 2d Cadastral Data Into a Dynamic Smart 3d Model

    NASA Astrophysics Data System (ADS)

    Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.

    2013-08-01

    3D property registration has become an imperative need in order to optimally reflect all complex cases of the multilayer reality of property rights and restrictions, revealing their vertical component. This paper refers to the potentials and multiple applications of 3D cadastral systems and explores the current state-of-the art, especially the available software with which 3D visualization can be achieved. Within this context, the Hellenic Cadastre's current state is investigated, in particular its data modeling frame. Presenting the methodologies and specifications addressing the registration of 3D properties, the operating cadastral system's shortcomings and merits are pointed out. Nonetheless, current technological advances as well as the availability of sophisticated software packages (proprietary or open source) call for 3D modeling. In order to register and visualize the complex reality in 3D, Esri's CityEngine modeling software has been used, which is specialized in the generation of 3D urban environments, transforming 2D GIS Data into Smart 3D City Models. The application of the 3D model concerns the Campus of the National Technical University of Athens, in which a complex ownership status is established along with approved special zoning regulations. The 3D model was built using different parameters based on input data, derived from cadastral and urban planning datasets, as well as legal documents and architectural plans. The process resulted in a final 3D model, optimally describing the cadastral situation and built environment and proved to be a good practice example of 3D visualization.

  8. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.

  9. Modelling a single phase voltage controlled rectifier using Laplace transforms

    NASA Technical Reports Server (NTRS)

    Kraft, L. Alan; Kankam, M. David

    1992-01-01

    The development of a 20 kHz AC power system by NASA for large space projects has spurred a need to develop models for the equipment which will be used on these single phase systems. To date, models for the AC source (i.e., inverters) have been developed. It is the intent of this paper to develop a method to model the single phase voltage controlled rectifiers which will be attached to the AC power grid as an interface for connected loads. A modified version of EPRI's HARMFLO program is used as the shell for these models. The results obtained from the model developed in this paper are quite adequate for the analysis of problems such as voltage resonance. The unique technique presented in this paper uses Laplace transforms to determine the harmonic content of the load current of the rectifier rather than a curve fitting technique. Laplace transforms yield the coefficients of the differential equations which model the line current to the rectifier directly.
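    As a hedged illustration of the general idea (not the HARMFLO-based model in the paper), the sketch below uses the fact that the Fourier coefficients of a periodic current follow from the Laplace transform of a single period evaluated at s = j·k·ω0; the pulse waveform, decay rate, and fundamental frequency are assumed stand-ins for the rectifier line current.

```python
# A minimal sketch: harmonic content of a periodic current from the Laplace
# transform of one period, c_k = X_T(s)/T evaluated at s = j*k*w0.
import sympy as sp

t, s = sp.symbols('t s')
T = sp.Rational(1, 60)        # assumed 60 Hz fundamental period
w0 = 2 * sp.pi / T
a = 200                       # assumed decay rate of the current pulse

# Laplace transform of one period of a hypothetical line-current pulse
# i_T(t) = exp(-a*t) for 0 <= t < T/2, zero elsewhere (stand-in waveform).
X_T = (1 - sp.exp(-(s + a) * T / 2)) / (s + a)

# Harmonic (Fourier) coefficients of the periodic continuation of i_T(t).
for k in range(1, 6):
    c_k = (X_T / T).subs(s, sp.I * k * w0)
    print(k, abs(complex(c_k.evalf())))
```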

  10. Can diversity in root architecture explain plant water use efficiency? A modeling study

    PubMed Central

    Tron, Stefania; Bodner, Gernot; Laio, Francesco; Ridolfi, Luca; Leitner, Daniel

    2015-01-01

    Drought stress is a dominant constraint to crop production. Breeding crops with adapted root systems for effective uptake of water represents a novel strategy to increase crop drought resistance. Due to complex interaction between root traits and high diversity of hydrological conditions, modeling provides important information for trait based selection. In this work we use a root architecture model combined with a soil-hydrological model to analyze whether there is a root system ideotype of general adaptation to drought or water uptake efficiency of root systems is a function of specific hydrological conditions. This was done by modeling transpiration of 48 root architectures in 16 drought scenarios with distinct soil textures, rainfall distributions, and initial soil moisture availability. We find that the efficiency in water uptake of root architecture is strictly dependent on the hydrological scenario. Even dense and deep root systems are not superior in water uptake under all hydrological scenarios. Our results demonstrate that mere architectural description is insufficient to find root systems of optimum functionality. We find that in environments with sufficient rainfall before the growing season, root depth represents the key trait for the exploration of stored water, especially in fine soils. Root density, instead, especially near the soil surface, becomes the most relevant trait for exploiting soil moisture when plant water supply is mainly provided by rainfall events during the root system development. We therefore concluded that trait based root breeding has to consider root systems with specific adaptation to the hydrology of the target environment. PMID:26412932

  11. A data-driven parallel execution model and architecture for logic programs

    SciTech Connect

    Tseng, Chien-Chao.

    1989-01-01

    Logic Programming has come to prominence in recent years after the decision of the Japanese Fifth Generation Project to adopt it as the kernel language. A significant number of research projects are attempting to implement different schemes to exploit the inherent parallelism in logic programs. The dataflow architectural model has been found to be attractive for parallel execution of logic programs. In this research, five dataflow execution models available in the literature have been critically reviewed. The primary aim of the critical review was to establish a set of design issues critical to efficient execution. Based on the established design issues, the abstract data-driven machine model, named LogDf, is developed for parallel execution of logic programs. The execution scheme supports OR-parallelism, restricted AND-parallelism, and stream parallelism. Multiple binding environments are represented using a stream-of-streams structure (S-stream). Eager evaluation is performed by passing binding environments between subgoal literals as S-streams, which are formed using non-strict constructors. The hierarchical multi-level stream structure provides a logical framework for distributing the streams to enhance parallelism in production/consumption as well as control of parallelism. The scheme for compiling the dataflow graphs, developed in this thesis, eliminates the necessity of any operand matching unit in the underlying dynamic dataflow architecture. In this thesis, an architecture for the abstract machine LogDf is also provided and the performance evaluation of this model is based on this architecture.

  12. A Model Based Framework for Semantic Interpretation of Architectural Construction Drawings

    ERIC Educational Resources Information Center

    Babalola, Olubi Oluyomi

    2011-01-01

    The study addresses the automated translation of architectural drawings from 2D Computer Aided Drafting (CAD) data into a Building Information Model (BIM), with emphasis on the nature, possible role, and limitations of a drafting language Knowledge Representation (KR) on the problem and process. The central idea is that CAD to BIM translation is a…

  13. Using UML Modeling to Facilitate Three-Tier Architecture Projects in Software Engineering Courses

    ERIC Educational Resources Information Center

    Mitra, Sandeep

    2014-01-01

    This article presents the use of a model-centric approach to facilitate software development projects conforming to the three-tier architecture in undergraduate software engineering courses. Many instructors intend that such projects create software applications for use by real-world customers. While it is important that the first version of these…

  14. A multi-scale strength model with phase transformation

    NASA Astrophysics Data System (ADS)

    Barton, Nathan; Arsenlis, Athanasios; Rhee, Moono; Marian, Jaime; Bernier, Joel V.; Tang, Meijie; Yang, Lin

    2012-03-01

    We present a multi-scale strength model that includes phase transformation. In each phase, strength depends on pressure, strain rate, temperature, and evolving dislocation density descriptors. A donor cell type of approach is used for the transfer of dislocation density between phases. While the shear modulus can be modeled as smooth through the BCC to rhombohedral transformation in vanadium, the multi-phase strength model predicts abrupt changes in the material strength due to changes in dislocation kinetics. In the rhombohedral phase, the dislocation density is decomposed into populations associated with short and long Burgers vectors. Strength model construction employs an information passing paradigm to span from the atomistic level to the continuum level. Simulation methods in the overall hierarchy include density functional theory, molecular statics, molecular dynamics, dislocation dynamics, and continuum based approaches. We demonstrate the behavior of the model through simulations of Rayleigh Taylor instability growth experiments of the type used to assess material strength at high pressure and strain rate.

  15. A multi-scale strength model with phase transformation

    NASA Astrophysics Data System (ADS)

    Barton, N.; Arsenlis, A.; Rhee, M.; Marian, J.; Bernier, J.; Tang, M.; Yang, L.

    2011-06-01

    We present a multi-scale strength model that includes phase transformation. In each phase, strength depends on pressure, strain rate, temperature, and evolving dislocation density descriptors. A donor cell type of approach is used for the transfer of dislocation density between phases. While the shear modulus can be modeled as smooth through the BCC to rhombohedral transformation in vanadium, the multi-phase strength model predicts abrupt changes in the material strength due to changes in dislocation kinetics. In the rhombohedral phase, the dislocation density is decomposed into populations associated with short and long Burgers vectors. Strength model construction employs an information passing paradigm to span from the atomistic level to the continuum level. Simulation methods in the overall hierarchy include density functional theory, molecular statics, molecular dynamics, dislocation dynamics, and continuum based approaches. We demonstrate the behavior of the model through simulations of Rayleigh Taylor instability growth experiments of the type used to assess material strength at high pressure and strain rate. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-464695).

  16. Designing Capital-Intensive Systems with Architectural and Operational Flexibility Using a Screening Model

    NASA Astrophysics Data System (ADS)

    Lin, Jijun; de Weck, Olivier; de Neufville, Richard; Robinson, Bob; MacGowan, David

    Development of capital intensive systems, such as offshore oil platforms or other industrial infrastructure, generally requires a significant amount of capital investment under various resource, technical, and market uncertainties. It is a very challenging task for development co-owners or joint ventures because important decisions, such as system architectures, have to be made while uncertainty remains high. This paper develops a screening model and a simulation framework to quickly explore the design space for complex engineering systems under uncertainty allowing promising strategies or architectures to be identified. Flexibility in systems’ design and operation is proposed as a proactive means to enable systems to adapt to future uncertainty. Architectural and operational flexibility can improve systems’ lifecycle value by mitigating downside risks and capturing upside opportunities. In order to effectively explore different flexible strategies addressing a view of uncertainty which changes with time, a computational framework based on Monte Carlo simulation is proposed in this paper. This framework is applied to study flexible development strategies for a representative offshore petroleum project. The complexity of this problem comes from multi-domain uncertainties, large architectural design space, and structure of flexibility decision rules. The results demonstrate that architectural and operational flexibility can significantly improve projects’ Expected Net Present Value (ENPV), reduce downside risks, and improve upside gains, compared to adopting an inflexible strategy appropriate to the view of uncertainty at the start of the project. In this particular case study, the most flexible strategy improves ENPV by 85% over an inflexible base case.
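    A minimal sketch of the kind of screening computation described above, comparing an inflexible design with a flexible strategy under price uncertainty via Monte Carlo ENPV; the capital costs, price process, decision threshold, and expansion timing are all made-up illustrative values, not the paper's offshore petroleum model.

```python
# A minimal Monte Carlo ENPV sketch: inflexible design vs. a flexible strategy
# that expands capacity in year 3 only if early prices look favourable.
import numpy as np

rng = np.random.default_rng(0)
n_sims, years, rate = 10_000, 10, 0.10
capex_base, capex_expand = 500.0, 150.0            # assumed capital costs

price = rng.lognormal(mean=0.0, sigma=0.35, size=(n_sims, years))
revenue_base = 120.0 * price                        # assumed yearly revenue

def npv(cash):
    disc = 1.0 / (1.0 + rate) ** np.arange(1, years + 1)
    return cash @ disc

# Inflexible: full capex up front, fixed capacity.
npv_inflex = npv(revenue_base) - capex_base

# Flexible: smaller initial plant; expand in year 3 if observed prices are high.
expand = price[:, :3].mean(axis=1) > 1.1            # simple decision rule
revenue_flex = 0.7 * revenue_base
revenue_flex[expand, 3:] = 1.2 * revenue_base[expand, 3:]
npv_flex = (npv(revenue_flex) - 0.7 * capex_base
            - expand * capex_expand / (1 + rate) ** 3)

print("ENPV inflexible:", npv_inflex.mean())
print("ENPV flexible  :", npv_flex.mean())
print("P5 downside    :", np.percentile(npv_inflex, 5), np.percentile(npv_flex, 5))
```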

  17. Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cole, Bjorn F.; Jenkins, J. Steven

    2015-01-01

    In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remained separated from engineering design tools. With the Europa Clipper project, efforts are being taken to change the requirements development approach from a separate activity to one intimately embedded in formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in architecture definition.

  18. Use of the Chemical Transformation Simulator as a Parameterization Tool for Modeling the Environmental Fate of Organic Chemicals and their Transformation Products

    EPA Science Inventory

    A Chemical Transformation Simulator is a web-based system for predicting transformation pathways and physicochemical properties of organic chemicals. Role in Environmental Modeling • Screening tool for identifying likely transformation products in the environment • Parameteri...

  19. A model to simulate the oxygen distribution in hypoxic tumors for different vascular architectures

    SciTech Connect

    Espinoza, Ignacio; Peschke, Peter; Karger, Christian P.

    2013-08-15

    Purpose: As hypoxic cells are more resistant to photon radiation, it is desirable to obtain information about the oxygen distribution in tumors prior to the radiation treatment. Noninvasive techniques are currently not able to provide reliable oxygenation maps with sufficient spatial resolution; therefore mathematical models may help to simulate microvascular architectures and the resulting oxygen distributions in the surrounding tissue. Here, the authors present a new computer model, which uses the vascular fraction of tumor voxels, in principle measurable noninvasively in vivo, as input parameter for simulating realistic PO2 histograms in tumors, assuming certain 3D vascular architectures. Methods: Oxygen distributions were calculated by solving a reaction-diffusion equation in a reference volume using the particle strength exchange method. Different types of vessel architectures as well as different degrees of vascular heterogeneities are considered. Two types of acute hypoxia (ischemic and hypoxemic) occurring additionally to diffusion-limited (chronic) hypoxia were implemented as well. Results: No statistically significant differences were observed when comparing 2D- and 3D-vessel architectures (p > 0.79 in all cases) and highly heterogeneously distributed linear vessels show good agreement, when comparing with published experimental intervessel distance distributions and PO2 histograms. It could be shown that, if information about additional acute hypoxia is available, its contribution to the hypoxic fraction (HF) can be simulated as well. Increases of 128% and 168% in the HF were obtained when representative cases of ischemic and hypoxemic acute hypoxia, respectively, were considered in the simulations. Conclusions: The presented model is able to simulate realistic microscopic oxygen distributions in tumors assuming reasonable vessel architectures and using the vascular fraction as macroscopic input parameter. The model may be used to generate PO2 histograms
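    For intuition about the underlying computation, the sketch below relaxes a one-dimensional steady-state diffusion-consumption balance between two parallel vessels. It is a deliberately simplified stand-in, not the authors' 3D particle-strength-exchange solver, and the diffusivity, consumption kinetics, and vessel PO2 are assumed values.

```python
# A minimal 1-D sketch: steady-state tissue PO2 between two vessels, with
# diffusion balanced by Michaelis-Menten consumption, by Jacobi relaxation.
import numpy as np

n, L = 101, 200e-6                 # grid points, 200 micron inter-vessel distance
dx = L / (n - 1)
D = 2e-9                           # assumed oxygen diffusivity in tissue, m^2/s
M0, Pk = 15.0, 2.0                 # assumed max consumption (mmHg/s), half-saturation (mmHg)

P = np.full(n, 40.0)
P[0] = P[-1] = 40.0                # assumed vessel PO2 as boundary condition

for _ in range(100_000):           # Jacobi relaxation towards the steady state
    consumption = M0 * P[1:-1] / (P[1:-1] + Pk)
    P[1:-1] = 0.5 * (P[:-2] + P[2:]) - 0.5 * dx**2 * consumption / D
    P = np.clip(P, 0.0, None)      # PO2 cannot become negative

print("minimum tissue PO2 (mmHg):", P.min())
print("hypoxic fraction (PO2 < 5 mmHg):", np.mean(P < 5.0))
```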

  20. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    SciTech Connect

    Noonan, Nicholas James

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  1. Implementation of Remaining Useful Lifetime Transformer Models in the Fleet-Wide Prognostic and Health Management Suite

    SciTech Connect

    Agarwal, Vivek; Lybeck, Nancy J.; Pham, Binh; Rusaw, Richard; Bickford, Randall

    2015-02-01

    Research and development efforts are required to address aging and reliability concerns of the existing fleet of nuclear power plants. As most plants continue to operate beyond the license life (i.e., towards 60 or 80 years), plant components are more likely to incur age-related degradation mechanisms. To assess and manage the health of aging plant assets across the nuclear industry, the Electric Power Research Institute has developed a web-based Fleet-Wide Prognostic and Health Management (FW-PHM) Suite for diagnosis and prognosis. FW-PHM is a set of web-based diagnostic and prognostic tools and databases, comprised of the Diagnostic Advisor, the Asset Fault Signature Database, the Remaining Useful Life Advisor, and the Remaining Useful Life Database, that serves as an integrated health monitoring architecture. The main focus of this paper is the implementation of prognostic models for generator step-up transformers in the FW-PHM Suite. One prognostic model discussed is based on the functional relationship between degree of polymerization (the most commonly used metric to assess the health of the winding insulation in a transformer) and furfural concentration in the insulating oil. The other model is based on thermal-induced degradation of the transformer insulation. By utilizing transformer loading information, established thermal models are used to estimate the hot spot temperature inside the transformer winding. Both models are implemented in the Remaining Useful Life Database of the FW-PHM Suite. The Remaining Useful Life Advisor utilizes the implemented prognostic models to estimate the remaining useful life of the paper winding insulation in the transformer based on actual oil testing and operational data.

  2. Analysis of trabecular bone architectural changes induced by osteoarthritis in rabbit femur using 3D active shape model and digital topology

    NASA Astrophysics Data System (ADS)

    Saha, P. K.; Rajapakse, C. S.; Williams, D. S.; Duong, L.; Coimbra, A.

    2007-03-01

    Osteoarthritis (OA) is the most common chronic joint disease, which causes the cartilage between the bone joints to wear away, leading to pain and stiffness. Currently, progression of OA is monitored by measuring joint space width using x-ray or cartilage volume using MRI. However, OA affects all periarticular tissues, including cartilage and bone. It has been shown previously that in animal models of OA, trabecular bone (TB) architecture is particularly affected. Furthermore, relative changes in architecture are dependent on the depth of the TB region with respect to the bone surface and main direction of load on the bone. The purpose of this study was to develop a new method for accurately evaluating 3D architectural changes induced by OA in TB. Determining the TB test domain that represents the same anatomic region across different animals is crucial for studying disease etiology, progression and response to therapy. It also represents a major technical challenge in analyzing architectural changes. Here, we solve this problem using a new active shape model (ASM)-based approach. A new and effective semi-automatic landmark selection approach has been developed for rabbit distal femur surface that can easily be adopted for many other anatomical regions. It has been observed that, on average, a trained operator can complete the user interaction part of landmark specification process in less than 15 minutes for each bone data set. Digital topological analysis and fuzzy distance transform derived parameters are used for quantifying TB architecture. The method has been applied on micro-CT data of excised rabbit femur joints from anterior cruciate ligament transected (ACLT) (n = 6) and sham (n = 9) operated groups collected at two and two-to-eight week post-surgery, respectively. An ASM of the rabbit right distal femur has been generated from the sham group micro-CT data. The results suggest that, in conjunction with ASM, digital topological parameters are suitable for

  3. Research on mixed network architecture collaborative application model

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, rapid development of geographical business, and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (such as Client/Server or Browser/Server models) does not support this well. Collaborative applications are one good solution. Collaborative applications have four main problems to resolve: consistency and co-editing conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM, based on agents and a multi-level cache, is put forward. AMCM can be used in mixed network structures and supports distributed collaboration. An agent is an autonomous, interactive, proactive and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation. Agents bring new methods for cooperation and for accessing spatial data. A multi-level cache holds a subset of the full dataset. It reduces network load and improves the access to and handling of spatial data, especially when editing spatial data. With agent technology, we make full use of agents' intelligent characteristics for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.

  4. Deep Phenotyping of Coarse Root Architecture in R. pseudoacacia Reveals That Tree Root System Plasticity Is Confined within Its Architectural Model

    PubMed Central

    Danjon, Frédéric; Khuder, Hayfa; Stokes, Alexia

    2013-01-01

    This study aims at assessing the influence of slope angle and multi-directional flexing and their interaction on the root architecture of Robinia pseudoacacia seedlings, with a particular focus on architectural model and trait plasticity. 36 trees were grown from seed in containers inclined at 0° (control) or 45° (slope) in a glasshouse. The shoots of half the plants were gently flexed for 5 minutes a day. After 6 months, root systems were excavated and digitized in 3D, and biomass measured. Over 100 root architectural traits were determined. Both slope and flexing increased significantly plant size. Non-flexed trees on 45° slopes developed shallow roots which were largely aligned perpendicular to the slope. Compared to the controls, flexed trees on 0° slopes possessed a shorter and thicker taproot held in place by regularly distributed long and thin lateral roots. Flexed trees on the 45° slope also developed a thick vertically aligned taproot, with more volume allocated to upslope surface lateral roots, due to the greater soil volume uphill. We show that there is an inherent root system architectural model, but that a certain number of traits are highly plastic. This plasticity will permit root architectural design to be modified depending on external mechanical signals perceived by young trees. PMID:24386227

  5. Deep phenotyping of coarse root architecture in R. pseudoacacia reveals that tree root system plasticity is confined within its architectural model.

    PubMed

    Danjon, Frédéric; Khuder, Hayfa; Stokes, Alexia

    2013-01-01

    This study aims at assessing the influence of slope angle and multi-directional flexing and their interaction on the root architecture of Robinia pseudoacacia seedlings, with a particular focus on architectural model and trait plasticity. 36 trees were grown from seed in containers inclined at 0° (control) or 45° (slope) in a glasshouse. The shoots of half the plants were gently flexed for 5 minutes a day. After 6 months, root systems were excavated and digitized in 3D, and biomass measured. Over 100 root architectural traits were determined. Both slope and flexing increased significantly plant size. Non-flexed trees on 45° slopes developed shallow roots which were largely aligned perpendicular to the slope. Compared to the controls, flexed trees on 0° slopes possessed a shorter and thicker taproot held in place by regularly distributed long and thin lateral roots. Flexed trees on the 45° slope also developed a thick vertically aligned taproot, with more volume allocated to upslope surface lateral roots, due to the greater soil volume uphill. We show that there is an inherent root system architectural model, but that a certain number of traits are highly plastic. This plasticity will permit root architectural design to be modified depending on external mechanical signals perceived by young trees. PMID:24386227

  6. Objective Evaluation of Sensor Web Modeling and Data System Architectures

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Atlas, R. M.; Ardizzone, J.; Kemp, E. M.; Talabac, S.

    2013-12-01

    We discuss the recent development of an end-to-end simulator designed to quantitatively assess the scientific value of incorporating model- and event-driven "sensor web" capabilities into future NASA Earth Science missions. The intent is to provide an objective analysis tool for performing engineering and scientific trade studies in which new technologies are introduced. In the case study presented here we focus on meteorological applications in which a numerical model is used to intelligently schedule data collection by space-based assets. Sensor web observing systems that enable dynamic targeting by various observing platforms have the potential to significantly improve our ability to monitor, understand, and predict the evolution of rapidly evolving, transient, or variable meteorological events. The use case focuses on landfalling hurricanes and was selected due to the obvious societal impact and the ongoing need to improve warning times. Although hurricane track prediction has improved over the past several decades, further improvement is necessary in the prediction of hurricane intensity. We selected a combination of future observing platforms to apply sensor web measurement techniques: global 3D lidar winds, next-generation scatterometer ocean vector winds, and high resolution cloud motion vectors from GOES-R. Targeting of the assets by a numerical model would allow the spacecraft to change its attitude by performing a roll maneuver to enable off-nadir measurements to be acquired. In this study, synthetic measurements were derived through Observing System Simulation Experiments (OSSEs) and enabled in part through the Doppler Lidar Simulation Model developed by Simpson Weather Associates. We describe the capabilities of the simulator through three different sensor web configurations of the wind lidar: winds obtained from a nominal "survey mode" operation, winds obtained with a reduced duty cycle of the lidar (designed for preserving the life of the instrument

  7. Coupling root architecture and pore network modeling - an attempt towards better understanding root-soil interactions

    NASA Astrophysics Data System (ADS)

    Leitner, Daniel; Bodner, Gernot; Raoof, Amir

    2013-04-01

    Understanding root-soil interactions is of high importance for environmental and agricultural management. Root uptake is an essential component in water and solute transport modeling. The amount of groundwater recharge and solute leaching significantly depends on the demand-based plant extraction via its root system. Plant uptake, however, not only responds to the potential demand but in most situations is limited by supply from the soil. The ability of the plant to access water and solutes in the soil is governed mainly by root distribution. Particularly under conditions of heterogeneous distribution of water and solutes in the soil, it is essential to capture the interaction between soil and roots. Root architecture models allow studying plant uptake from soil by describing growth and branching of root axes in the soil. Currently root architecture models are able to respond dynamically to water and nutrient distribution in the soil by directed growth (tropism), modified branching and enhanced exudation. The porous soil medium as rooting environment in these models is generally described by classical macroscopic water retention and sorption models, averaged over the pore scale. In our opinion this simplified description of the root growth medium implies several shortcomings for better understanding root-soil interactions: (i) It is well known that roots grow preferentially in preexisting pores, particularly in more rigid/dry soil. Thus the pore network contributes to the architectural form of the root system; (ii) roots themselves can influence the pore network by creating preferential flow paths (biopores) which are an essential element of structural porosity with strong impact on transport processes; (iii) plant uptake depends on both the spatial location of water/solutes in the pore network and the spatial distribution of roots. We therefore consider that for advancing our understanding in root-soil interactions, we need not only to extend our root models

  8. Modelling Elastic Media With Arbitrary Shapes Using the Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Rosa, J. W.; Cardoso, F. A.; Rosa, J. W.; Aki, K.

    2004-12-01

    We extend the new method proposed by Rosa et al. (2001) for the study of elastic bodies with complete arbitrary shapes. The method was originally developed for modelling 2-D elastic media with the application of the wavelet transform, and was extended to cases where discontinuities simulated geologic faults between two different elastic media. In addition to extending the method for the study of bodies with complete arbitrary shapes, we also test new transforms with the objective of making the related matrices more compact, which are also applied to the most general case of the method. The basic method consists of the discretization of the polynomial expansion for the boundary conditions of the 2-D problem involving the stress and strain relations for the media. This parameterization leads to a system of linear equations that should be solved for the determination of the expansion coefficients, which are the model parameters, and their determination leads to the solution of the problem. Despite the fact that the media we studied originally were 2-D bodies, the result of the application of this new method can be viewed as an approximate solution to some specific 3-D problems. Among the motivations for developing this method are possible geological applications (that is, the study of tectonic plates and geologic faults) and simulations of the elastic behaviour of materials in several other fields of science. The wavelet transform is applied with two main objectives, namely to decrease the error related to the truncation of the polynomial expansion and to make the system of linear equations more compact for computation. Having validated this method for the original 2-D elastic media, we plan that this extension to elastic bodies with complete arbitrary shapes will enable it to be even more attractive for modelling real media. Reference Rosa, J. W. C., F. A. C. M. Cardoso, K. Aki, H. S. Malvar, F. A. V. Artola, and J. W. C. Rosa, Modelling elastic media with the

  9. Transforming a care delivery model to increase breastfeeding.

    PubMed

    Magri, Eileen P; Hylton-McGuire, Karen

    2013-01-01

    This article describes the process of changing the care delivery model for maternity practice in a New York State Regional Perinatal Center to support exclusive breastfeeding, defined as providing nothing other than human milk feedings. Barriers exist in hospitals that inhibit exclusive breastfeeding of newborns at the time of discharge and fail to meet the recommendations outlined by the World Health Organization and New York State Department of Health. All aspects of mother/baby care were evaluated to meet the recommendations and increase exclusive breastfeeding. Transforming the care delivery model for mothers and babies began in 2010 with an invitation to participate in the New York State Breastfeeding Quality Improvement in Hospitals Learning Collaborative. Twelve hospitals were selected to participate with the following objectives: increase exclusive breastfeeding; improve hospital breastfeeding policies, practices, and systems that are consistent with New York State hospital regulations, laws and recommended best practices; increase staff skills and knowledge of breastfeeding and lactation support through education; empower, educate, and support new mothers to successfully breastfeed and change the culture and social norm relative to breastfeeding. The transformation of the care delivery model resulted in an increase in exclusive breastfeeding from 6% to 44%. PMID:23399862

  10. Culture models of human mammary epithelial cell transformation

    SciTech Connect

    Stampfer, Martha R.; Yaswen, Paul

    2000-11-10

    Human pre-malignant breast diseases, particularly ductal carcinoma in situ (DCIS), already display several of the aberrant phenotypes found in primary breast cancers, including chromosomal abnormalities, telomerase activity, inactivation of the p53 gene and overexpression of some oncogenes. Efforts to model early breast carcinogenesis in human cell cultures have largely involved studies of in vitro transformation of normal finite lifespan human mammary epithelial cells (HMEC) to immortality and malignancy. We present a model of HMEC immortal transformation consistent with the known in vivo data. This model includes a recently described, presumably epigenetic process, termed conversion, which occurs in cells that have overcome stringent replicative senescence and are thus able to maintain proliferation with critically short telomeres. The conversion process involves reactivation of telomerase activity, and acquisition of good uniform growth in the absence and presence of TGFβ. We propose that overcoming the proliferative constraints set by senescence, and undergoing conversion, represent key rate-limiting steps in human breast carcinogenesis, and occur during early stage breast cancer progression.

  11. Architectural Improvements and New Processing Tools for the Open XAL Online Model

    SciTech Connect

    Allen, Christopher K; Pelaia II, Tom; Freed, Jonathan M

    2015-01-01

    The online model is the component of Open XAL providing accelerator modeling, simulation, and dynamic synchronization to live hardware. Significant architectural changes and feature additions have been recently made in two separate areas: 1) the managing and processing of simulation data, and 2) the modeling of RF cavities. Simulation data and data processing have been completely decoupled. A single class manages all simulation data while standard tools were developed for processing the simulation results. RF accelerating cavities are now modeled as composite structures where parameter and dynamics computations are distributed. The beam and hardware models both maintain their relative phase information, which allows for dynamic phase slip and elapsed time computation.

  12. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity

    PubMed Central

    Malinin, Laura H.

    2016-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to lack of suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person’s interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity. PMID:26779087

  13. Creative Practices Embodied, Embedded, and Enacted in Architectural Settings: Toward an Ecological Model of Creativity.

    PubMed

    Malinin, Laura H

    2015-01-01

    Memoirs by eminently creative people often describe architectural spaces and qualities they believe instrumental for their creativity. However, places designed to encourage creativity have had mixed results, with some found to decrease creative productivity for users. This may be due, in part, to lack of suitable empirical theory or model to guide design strategies. Relationships between creative cognition and features of the physical environment remain largely uninvestigated in the scientific literature, despite general agreement among researchers that human cognition is physically and socially situated. This paper investigates what role architectural settings may play in creative processes by examining documented first person and biographical accounts of creativity with respect to three central theories of situated cognition. First, the embodied thesis argues that cognition encompasses both the mind and the body. Second, the embedded thesis maintains that people exploit features of the physical and social environment to increase their cognitive capabilities. Third, the enaction thesis describes cognition as dependent upon a person's interactions with the world. Common themes inform three propositions, illustrated in a new theoretical framework describing relationships between people and their architectural settings with respect to different cognitive processes of creativity. The framework is intended as a starting point toward an ecological model of creativity, which may be used to guide future creative process research and architectural design strategies to support user creative productivity. PMID:26779087

  14. Diagnostic and Prognostic Models for Generator Step-Up Transformers

    SciTech Connect

    Vivek Agarwal; Nancy J. Lybeck; Binh T. Pham

    2014-09-01

    In 2014, the online monitoring (OLM) of active components project under the Light Water Reactor Sustainability program at Idaho National Laboratory (INL) focused on diagnostic and prognostic capabilities for generator step-up (GSU) transformers. INL worked with subject matter experts from the Electric Power Research Institute (EPRI) to augment and revise the GSU fault signatures previously implemented in EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Two prognostic models were identified and implemented for GSUs in the FW-PHM Suite software. INL and EPRI demonstrated the use of prognostic capabilities for GSUs. The complete set of fault signatures developed for GSUs in the Asset Fault Signature Database of the FW-PHM Suite is presented in this report. Two prognostic models are described for paper insulation: the Chendong model for degree of polymerization, and an IEEE model that uses a loading profile to calculate life consumption based on hot spot winding temperatures. Both models are life consumption models, which are examples of type II prognostic models. Use of the models in the FW-PHM Suite was successfully demonstrated at the 2014 August Utility Working Group Meeting, Idaho Falls, Idaho, to representatives from different utilities, EPRI, and the Halden Research Project.
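    A hedged sketch of the two kinds of relationship named above: a Chendong-style log-linear mapping from 2-furfural concentration to degree of polymerization, and an IEEE C57.91-style aging-acceleration factor driven by hot-spot temperature. The numeric coefficients and the daily loading profile are illustrative placeholders, not the values implemented in the FW-PHM Suite.

```python
# A minimal sketch of transformer paper-insulation prognostic relationships.
# All coefficients below are illustrative placeholders.
import numpy as np

def dp_from_furfural(fal_ppm, a=1.51, b=0.0035):
    """Chendong-style log-linear estimate of degree of polymerization (DP)
    from 2-furfural concentration in oil (ppm); a and b are assumed constants."""
    return (a - np.log10(fal_ppm)) / b

def aging_acceleration(hot_spot_c):
    """IEEE C57.91-style aging acceleration factor relative to a 110 degC
    reference hot-spot temperature (thermally upgraded paper)."""
    return np.exp(15000.0 / 383.0 - 15000.0 / (hot_spot_c + 273.0))

# Life consumption from a loading profile: hourly hot-spot temperatures.
hot_spot = np.array([95.0] * 16 + [118.0] * 8)      # assumed daily profile, degC
equivalent_hours = aging_acceleration(hot_spot).sum()
print("equivalent aging hours per day:", round(equivalent_hours, 2))
print("estimated DP at 0.8 ppm 2-FAL :", round(dp_from_furfural(0.8)))
```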

  15. Modeling biochemical transformation processes and information processing with Narrator

    PubMed Central

    Mandel, Johannes J; Fuß, Hendrik; Palfreyman, Niall M; Dubitzky, Werner

    2007-01-01

    Background: Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of supporting an integrative representation of transport, transformation as well as biological information processing. Results: Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation together with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Conclusion: Narrator is a flexible and intuitive systems
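    To illustrate the kind of target formalism mentioned above (ordinary differential equations), the sketch below hand-codes a single transformation S → P whose rate is gated by an exogenous signal, i.e. transport/transformation combined with a simple piece of information processing. The species, rate constant, and signal are assumptions for illustration, not output generated by Narrator.

```python
# A minimal ODE sketch: transformation S -> P modulated by an exogenous signal.
import numpy as np
from scipy.integrate import solve_ivp

def signal(t):
    """Exogenous information input: the signal drops at t = 20 (assumed event)."""
    return 1.0 if t < 20.0 else 0.2

def rhs(t, y, k=0.3):
    s, p = y
    rate = k * s * signal(t)        # transformation rate gated by the signal
    return [-rate, rate]

sol = solve_ivp(rhs, (0.0, 60.0), [10.0, 0.0], max_step=0.5)
print("final S, P:", sol.y[0, -1], sol.y[1, -1])
```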

  16. Assessment of Mechanical Performance of Bone Architecture Using Rapid Prototyping Models

    NASA Astrophysics Data System (ADS)

    Saparin, Peter; Woesz, Alexander; Thomsen, Jasper S.; Fratzl, Peter

    2008-06-01

    The aim of this on-going research project is to assess the influence of bone microarchitecture on the mechanical performance of trabecular bone. A testing chain consisting of three steps was established: 1) micro computed tomography (μCT) imaging of human trabecular bone; 2) building of models of the bone from a light-sensitive polymer using Rapid Prototyping (RP); 3) mechanical testing of the models in a material testing machine. A direct resampling procedure was developed to convert μCT data into the format of the RP machine. Standardized parameters for production and testing of the plastic models were established by use of regular cellular structures. Next, normal, osteoporotic, and extreme osteoporotic vertebral trabecular bone architectures were reproduced by RP and compression tested. We found that the normal architecture of vertebral trabecular bone exhibits behaviour characteristic of a cellular structure. In normal bone the fracture occurs at much higher strain values than in osteoporotic bone. After the fracture a normal trabecular architecture is able to carry much higher loads than an osteoporotic architecture. However, no statistically significant differences were found in maximal stress during uniaxial compression of the central part of normal, osteoporotic, and extreme osteoporotic vertebral trabecular bone. This supports the hypothesis that osteoporotic trabecular bone can compensate for a loss of trabeculae by thickening the remaining trabeculae in the loading direction (compensatory hypertrophy). The developed approach could be used for mechanical evaluation of structural data acquired non-invasively and assessment of changes in performance of bone architecture.

  17. Optimizing transformations of stencil operations for parallel object-oriented scientific frameworks on cache-based architectures

    SciTech Connect

    Bassetti, F.; Davis, K.; Quinlan, D.

    1998-12-31

    High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks--a parallel or serial array-class library--provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes two optimizing transformations suitable for certain classes of numerical algorithms, one for reducing the cost of inter-processor communication, and one for improving cache utilization; demonstrates and analyzes the resulting performance gains; and indicates how these transformations are being automated.
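    As a hedged illustration of the cache-utilization transformation (the paper targets C++ array-class code; this NumPy version only mirrors the loop structure, not the compile-time machinery), the sketch below traverses a 5-point Jacobi stencil tile by tile so that each tile's working set can stay resident in cache.

```python
# A minimal sketch of cache blocking (tiling) for a 5-point stencil sweep.
import numpy as np

def stencil_blocked(a, block=64):
    """One Jacobi sweep of a 5-point stencil, traversed in cache-sized tiles."""
    n, m = a.shape
    out = a.copy()
    for ib in range(1, n - 1, block):          # visit the interior tile by tile
        for jb in range(1, m - 1, block):      # so each tile stays in cache
            i1, j1 = min(ib + block, n - 1), min(jb + block, m - 1)
            out[ib:i1, jb:j1] = 0.25 * (a[ib-1:i1-1, jb:j1] + a[ib+1:i1+1, jb:j1]
                                        + a[ib:i1, jb-1:j1-1] + a[ib:i1, jb+1:j1+1])
    return out

a = np.random.default_rng(0).random((512, 512))
b = stencil_blocked(a)
```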

  18. Operations Assessment of Launch Vehicle Architectures using Activity Based Cost Models

    NASA Technical Reports Server (NTRS)

    Ruiz-Torres, Alex J.; McCleskey, Carey

    2000-01-01

    The growing emphasis on affordability for space transportation systems requires the assessment of new space vehicles for all life cycle activities, from design and development, through manufacturing and operations. This paper addresses the operational assessment of launch vehicles, focusing on modeling the ground support requirements of a vehicle architecture, and estimating the resulting costs and flight rate. This paper proposes the use of Activity Based Costing (ABC) modeling for this assessment. The model uses expert knowledge to determine the activities, the activity times and the activity costs based on vehicle design characteristics. The approach provides several advantages to current approaches to vehicle architecture assessment including easier validation and allowing vehicle designers to understand the cost and cycle time drivers.

  19. Recognizing architectural distortion in mammogram: a multiscale texture modeling approach with GMM.

    PubMed

    Biswas, Sujoy Kumar; Mukherjee, Dipti Prasad

    2011-07-01

    We propose a generative model for constructing an efficient set of distinctive textures for recognizing architectural distortion in digital mammograms. In the first layer of the proposed two-layer architecture, the mammogram is analyzed by a multiscale oriented filter bank to form texture descriptor of vectorized filter responses. Our model presumes that every mammogram can be characterized by a "bag of primitive texture patterns" and the set of textural primitives (or textons) is represented by a mixture of Gaussians which builds up the second layer of the proposed model. The observed textural descriptor in the first layer is assumed to be a stochastic realization of one (hard mapping) or more (soft mapping) textural primitive(s) from the second layer. The results obtained on two publicly available datasets, namely Mammographic Image Analysis Society (MIAS) and Digital Database for Screening Mammography (DDSM), demonstrate the efficacy of the proposed approach. PMID:21421429
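    A minimal sketch of the two-layer idea, with an assumed pipeline and assumed parameters rather than the authors' implementation: pixel-wise multiscale filter responses form the first layer, a Gaussian mixture fitted to those responses plays the role of the texton dictionary, and the hard texton assignment yields a bag-of-textures histogram per image.

```python
# A minimal sketch: multiscale filter responses -> Gaussian mixture textons.
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture

def filter_bank_responses(img, sigmas=(1.0, 2.0, 4.0)):
    """Stack of derivative-of-Gaussian and Laplacian responses per pixel."""
    feats = []
    for s in sigmas:
        feats.append(ndimage.gaussian_filter(img, s, order=(0, 1)))  # horizontal edges
        feats.append(ndimage.gaussian_filter(img, s, order=(1, 0)))  # vertical edges
        feats.append(ndimage.gaussian_laplace(img, s))               # blobs/lines
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

rng = np.random.default_rng(0)
img = rng.random((128, 128))                     # stand-in for a mammogram ROI

X = filter_bank_responses(img)
gmm = GaussianMixture(n_components=8, covariance_type='diag', random_state=0).fit(X)

labels = gmm.predict(X)                          # hard mapping: pixel -> texton
hist = np.bincount(labels, minlength=8) / labels.size
print("texton histogram:", np.round(hist, 3))
```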

  20. Thermodynamic modeling of martensitic transformations in shape memory alloys

    NASA Astrophysics Data System (ADS)

    Guthikonda, Venkata Suresh Reddy

    The unusual properties of shape memory alloys (SMAs) are due to solid-to-solid martensitic transformations (MTs) which correspond to a lattice level instability of the crystal structure. Currently, there exists a shortage of material models that can capture the details of lattice level MTs occurring in SMAs. In the first part of this work, an effective interaction potential (EIP) model is developed for the SMA AuCd. EIPs are atomic interaction potentials that are explicit functions of temperature. In particular, the Morse pair potential is used and its adjustable coefficients are taken to be temperature dependent. A hysteretic temperature-induced MT between the B2 cubic and B19 orthorhombic crystal structures is predicted. This is the behavior that is observed in the real material. The model predicts, to reasonable accuracy, the transformation strain tensor and captures the latent heat and thermal hysteresis to within an order of magnitude. The second part of this work consists of developing a lattice dynamics model to simulate the MTs. The atomic interactions are modeled using temperature independent Morse pair potentials. The effects of atomic vibrations on the material properties are captured using the first-order self-consistent approach which consists of renormalizing the frequencies of atomic vibration using self-consistent equations. These renormalized frequencies are dependent on both configuration and temperature. The model is applied to the case of a one-dimensional bi-atomic chain. The constant Morse pair potential parameters are chosen to demonstrate the usefulness of the current model. The resulting model is evaluated by generating equilibrium paths with temperature and mechanical load as the loading parameters. In both types of loading, a first-order MT is predicted indicating that the current model is able to capture the first-order MTs that occur in SMAs. This qualitative prediction of a first-order MT indicates the likelihood that the current
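    A minimal sketch of an effective interaction potential of the kind described: a Morse pair potential whose coefficients are explicit functions of temperature. The Morse form is taken from the abstract; the coefficient values and their linear temperature dependence are placeholders, not the fitted AuCd parameters.

```python
# A minimal sketch of a temperature-dependent Morse pair potential (EIP).
import numpy as np

def morse(r, D, alpha, r0):
    """Morse pair potential energy at separation r."""
    return D * (1.0 - np.exp(-alpha * (r - r0)))**2 - D

def eip_parameters(T, D0=0.35, dD=-2e-5, a0=1.6, da=5e-5, r00=3.0, dr=1e-5):
    """Illustrative temperature-dependent Morse coefficients (eV, 1/Angstrom,
    Angstrom); the values and linear T-dependence are placeholders."""
    return D0 + dD * T, a0 + da * T, r00 + dr * T

r = np.linspace(2.0, 6.0, 400)
for T in (300.0, 600.0):
    D, alpha, r0 = eip_parameters(T)
    E = morse(r, D, alpha, r0)
    print(f"T = {T:.0f} K: well depth = {E.min():.4f} eV at r = {r[E.argmin()]:.2f} A")
```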

  1. A latent variable transformation model approach for exploring dysphagia.

    PubMed

    Snavely, Anna C; Harrington, David P; Li, Yi

    2014-11-10

    Multiple outcomes are often collected in applications where the quantity of interest cannot be measured directly or is difficult or expensive to measure. In a head and neck cancer study conducted at Dana-Farber Cancer Institute, the investigators wanted to determine the effect of clinical and treatment factors on unobservable dysphagia through collected multiple outcomes of mixed types. Latent variable models are commonly adopted in this setting. These models stipulate that multiple collected outcomes are conditionally independent given the latent factor. Mixed types of outcomes (e.g., continuous vs. ordinal) and censored outcomes present statistical challenges, however, as a natural analog of the multivariate normal distribution does not exist for mixed data. Recently, Lin et al. proposed a semiparametric latent variable transformation model for mixed outcome data; however, it may not readily accommodate event time outcomes where censoring is present. In this paper, we extend the work of Lin et al. by proposing both semiparametric and parametric latent variable models that allow for the estimation of the latent factor in the presence of measurable outcomes of mixed types, including censored outcomes. Both approaches allow for a direct estimate of the treatment (or other covariate) effect on the unobserved latent variable, greatly enhancing the interpretability of the models. The semiparametric approach has the added advantage of allowing the relationship between the measurable outcomes and latent variables to be left unspecified, rendering the inference more robust. The parametric and semiparametric models can also be used together, providing a comprehensive modeling strategy for complicated latent variable problems. PMID:24974798

  2. Photo-Modeling and Cloud Computing. Applications in the Survey of Late Gothic Architectural Elements

    NASA Astrophysics Data System (ADS)

    Casu, P.; Pisu, C.

    2013-02-01

    This work proposes the application of the latest methods of photo-modeling to the study of Gothic architecture in Sardinia. The aim is to consider the versatility and ease of use of such documentation tools in order to study architecture and its ornamental details. The paper illustrates a procedure of integrated survey and restitution, with the purpose of obtaining an accurate 3D model of some Gothic portals. We combined the contact survey with the photographic survey oriented to photo-modelling. The software used is 123D Catch by Autodesk, an Image Based Modelling (IBM) system available free of charge. It is a web-based application that requires a few simple steps to produce a mesh from a set of unoriented photos. We tested the application on four portals, working at different scales of detail: first the whole portal and then the different architectural elements that compose it. We were able to model all the elements and to quickly extrapolate simple sections, in order to make a comparison between the moldings, highlighting similarities and differences. Working on different sites at different scales of detail allowed us to test the procedure under different conditions of exposure, sunlight, accessibility, surface degradation and material type, and with different equipment and operators, showing whether the final result could be affected by these factors. We tested a procedure, articulated in a few repeatable steps, that can be applied, with the right corrections and adaptations, to similar cases and/or larger or smaller elements.

  3. Imputation for semiparametric transformation models with biased-sampling data

    PubMed Central

    Liu, Hao; Qin, Jing; Shen, Yu

    2012-01-01

    Widely recognized in many fields including economics, engineering, epidemiology, health sciences, technology and wildlife management, length-biased sampling generates biased and right-censored data but often provides the best information available for statistical inference. Different from traditional right-censored data, length-biased data have unique aspects resulting from their sampling procedures. We exploit these unique aspects and propose a general imputation-based estimation method for analyzing length-biased data under a class of flexible semiparametric transformation models. We present new computational algorithms that can jointly estimate the regression coefficients and the baseline function semiparametrically. The imputation-based method under the transformation model provides an unbiased estimator regardless of whether the censoring is independent of the covariates. We establish large-sample properties using empirical process methods. Simulation studies show that under small to moderate sample sizes, the proposed procedure has smaller mean square errors than two existing estimation procedures. Finally, we demonstrate the estimation procedure with a real data example. PMID:22903245

  4. A scaleable architecture for the modeling and simulation of intelligent transportation systems.

    SciTech Connect

    Ewing, T.; Tentner, A.

    1999-03-17

    A distributed, scaleable architecture for the modeling and simulation of Intelligent Transportation Systems on a network of workstations or a parallel computer has been developed at Argonne National Laboratory. The resulting capability provides a modular framework supporting plug-in models, hardware, and live data sources; visually realistic graphics displays to support training and human factors studies; and a set of basic ITS models. The models and capabilities are described, along with a typical scenario involving dynamic rerouting of smart vehicles which send probe reports to, and receive traffic advisories from, a traffic management center capable of incident detection.

  5. A model for the analysis of fault-tolerant signal processing architectures

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1989-01-01

    This paper develops a new model, using matrices, for the analysis of fault-tolerant multiprocessor systems. The relationship between processors computing useful data, the output data, and the check processors is defined in terms of matrix entries. Unlike the matrix-based models proposed previously for the analysis of digital systems, this model uses only numerical computations rather than logical operations for the analysis of a system. Algorithms to evaluate the fault detection and location capability of the system are proposed which are much less complex than the existing ones. The new model is used to analyze some fault-tolerant architectures proposed for signal-processing applications.
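
    The matrix formulation can be illustrated with a simple checksum-style sketch: columns of a check matrix act as signatures of the data processors, and the pattern of violated checks (the syndrome) locates a single faulty processor. The particular matrix and injected fault below are invented for illustration and are not the paper's model.

```python
# Checksum-style sketch of the matrix view: each column of the check matrix H
# is the signature of a data processor, and the syndrome of violated checks
# locates a single faulty processor. The matrix and the injected fault are
# invented for illustration and are not the model from the paper.

import numpy as np

# 4 data processors, 3 check processors; column i is processor i's signature.
H = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
])

def syndrome(outputs, reference_checks):
    """Mark violated checks (with a numerical tolerance for floating point)."""
    return (np.abs(H @ outputs - reference_checks) > 1e-9).astype(int)

def locate_fault(s):
    """Match the syndrome against processor signatures (single-fault assumption)."""
    for i in range(H.shape[1]):
        if np.array_equal(s, H[:, i]):
            return i
    return None

if __name__ == "__main__":
    data = np.array([2.0, 5.0, 1.0, 7.0])
    checks = H @ data            # fault-free check-processor values
    data[2] += 0.5               # inject an error into processor 2
    s = syndrome(data, checks)
    print("syndrome:", s, "-> faulty processor:", locate_fault(s))
```

    Evaluating detection and location capability then reduces to numerical tests on the matrix columns (distinctness, non-zero syndromes) rather than logical analysis.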

  6. Developing a reversible rapid coordinate transformation model for the cylindrical projection

    NASA Astrophysics Data System (ADS)

    Ye, Si-jing; Yan, Tai-lai; Yue, Yan-li; Lin, Wei-yan; Li, Lin; Yao, Xiao-chuang; Mu, Qin-yun; Li, Yong-qin; Zhu, De-hai

    2016-04-01

    Numerical models are widely used for coordinate transformations. However, in most numerical models, polynomials are generated to approximate "true" geographic coordinates or plane coordinates, and one polynomial is hard to make simultaneously appropriate for both forward and inverse transformations. As there is a transformation rule between geographic coordinates and plane coordinates, how accurate and efficient is the calculation of the coordinate transformation if we construct polynomials to approximate the transformation rule instead of the "true" coordinates? In addition, how do models built from such polynomials compare with traditional numerical models of even higher degree? Focusing on cylindrical projection, this paper reports on a grid-based rapid numerical transformation model - a linear rule approximation model (LRA-model) - that constructs linear polynomials to approximate the transformation rule and uses a graticule to alleviate error propagation. Our experiments on cylindrical projection transformation between the WGS 84 Geographic Coordinate System (EPSG 4326) and the WGS 84 UTM ZONE 50N Plane Coordinate System (EPSG 32650) with simulated data demonstrate that the LRA-model exhibits high efficiency, high accuracy, and high stability; is simple and easy to use for both forward and inverse transformations; and can be applied to the transformation of a large amount of data with a requirement of high calculation efficiency. Furthermore, the LRA-model exhibits advantages in terms of calculation efficiency, accuracy and stability for coordinate transformations, compared to the widely used hyperbolic transformation model.
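
    The sketch below illustrates the grid-based linear-rule idea in one dimension, assuming a spherical Mercator rule as a stand-in for the paper's projection and EPSG pair: within each graticule cell the rule is replaced by a linear fit, so the same coefficients serve the forward and inverse transformations. The radius, cell size, and test latitudes are arbitrary.

```python
# One-dimensional sketch of the grid-based linear rule approximation: within
# each graticule cell the projection rule is replaced by a linear fit, and the
# same coefficients serve the forward and inverse transformations. A spherical
# Mercator rule stands in for the paper's projection; radius, cell size, and
# test latitudes are illustrative.

import numpy as np

R = 6378137.0  # sphere radius in metres (illustrative)

def mercator_y(lat_deg):
    """Exact cylindrical (Mercator) northing -- the 'transformation rule'."""
    return R * np.log(np.tan(np.pi / 4.0 + np.radians(lat_deg) / 2.0))

class LatitudeLRA:
    """Piecewise-linear approximation of the latitude -> northing rule."""

    def __init__(self, lat_min=-80.0, lat_max=80.0, cell_deg=0.5):
        self.lat_edges = np.arange(lat_min, lat_max + cell_deg, cell_deg)
        self.y_edges = mercator_y(self.lat_edges)

    def forward(self, lat_deg):
        return np.interp(lat_deg, self.lat_edges, self.y_edges)  # linear in each cell

    def inverse(self, y):
        return np.interp(y, self.y_edges, self.lat_edges)        # same coefficients, inverted

if __name__ == "__main__":
    lra = LatitudeLRA()
    lats = np.array([22.337, 39.900, 51.050])
    y_exact, y_approx = mercator_y(lats), lra.forward(lats)
    print("max forward error (m):", np.abs(y_exact - y_approx).max())
    print("round-trip error (deg):", np.abs(lra.inverse(y_approx) - lats).max())
```

    Since longitude maps linearly in a cylindrical projection, only the latitude-to-northing rule needs a piecewise approximation in this toy setting.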

  7. Modeling human endothelial cell transformation in vascular neoplasias

    PubMed Central

    Wen, Victoria W.; MacKenzie, Karen L.

    2013-01-01

    Endothelial cell (EC)-derived neoplasias range from benign hemangioma to aggressive metastatic angiosarcoma, which responds poorly to current treatments and has a very high mortality rate. The development of treatments that are more effective for these disorders will be expedited by insight into the processes that promote abnormal proliferation and malignant transformation of human ECs. The study of primary endothelial malignancy has been limited by the rarity of the disease; however, there is potential for carefully characterized EC lines and animal models to play a central role in the discovery, development and testing of molecular targeted therapies for vascular neoplasias. This review describes molecular alterations that have been identified in EC-derived neoplasias, as well as the processes that underpin the immortalization and tumorigenic conversion of ECs. Human EC lines, established through the introduction of defined genetic elements or by culture of primary tumor tissue, are catalogued and discussed in relation to their relevance as models of vascular neoplasia. PMID:24046386

  8. Using two coefficients modeling of nonsubsampled Shearlet transform for despeckling

    NASA Astrophysics Data System (ADS)

    Jafari, Saeed; Ghofrani, Sedigheh

    2016-01-01

    Synthetic aperture radar (SAR) images are inherently affected by multiplicative speckle noise. Two approaches based on modeling the nonsubsampled Shearlet transform (NSST) coefficients are presented. The two-sided generalized Gamma distribution and the normal inverse Gaussian probability density function have been used to model the statistics of NSST coefficients. A Bayesian maximum a posteriori estimator is applied to the corrupted NSST coefficients in order to estimate the noise-free NSST coefficients. Finally, experimental results, according to objective and subjective criteria, carried out on both artificially speckled images and true SAR images, demonstrate that the proposed methods outperform other state-of-the-art reference methods in terms of both speckle noise reduction and image quality preservation.
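
    As a hedged stand-in for the coefficient-domain MAP step, the sketch below uses a zero-mean Gaussian prior, for which the MAP estimate reduces to Wiener-style shrinkage; the paper's two-sided generalized Gamma and normal inverse Gaussian priors and the NSST subband machinery are not reproduced here.

```python
# Stand-in for the coefficient-domain Bayesian MAP step: with a zero-mean
# Gaussian signal prior and additive Gaussian noise, the MAP estimate reduces
# to Wiener-style shrinkage. The paper's heavier-tailed priors and the NSST
# subband decomposition are not reproduced here; the synthetic "subband" is a
# Laplacian sample chosen only to mimic heavy-tailed coefficients.

import numpy as np

def map_shrink(coeffs, noise_var):
    """MAP estimate of noise-free coefficients under a zero-mean Gaussian prior."""
    signal_var = max(coeffs.var() - noise_var, 1e-12)  # crude moment-matched prior variance
    gain = signal_var / (signal_var + noise_var)
    return gain * coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = rng.laplace(scale=0.5, size=10_000)
    noisy = clean + rng.normal(scale=0.3, size=clean.size)
    denoised = map_shrink(noisy, noise_var=0.3 ** 2)
    print("MSE noisy   :", round(float(np.mean((noisy - clean) ** 2)), 4))
    print("MSE denoised:", round(float(np.mean((denoised - clean) ** 2)), 4))
```

    With heavier-tailed priors such as those used in the paper, the shrinkage rule is no longer a constant gain, but the overall structure - estimate the prior parameters per subband, then apply the MAP rule coefficient-wise - is the same.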

  9. Modeling of Solid State Transformer for the FREEDM System Demonstration

    NASA Astrophysics Data System (ADS)

    Jiang, Youyuan

    The Solid State Transformer (SST) is an essential component in the FREEDM system. This research focuses on the modeling of the SST and the controller hardware in the loop (CHIL) implementation of the SST for the support of the FREEDM system demonstration. The energy based control strategy for a three-stage SST is analyzed and applied. A simplified average model of the three-stage SST that is suitable for simulation in a real time digital simulator (RTDS) has been developed in this study. The model is also useful for general time-domain power system analysis and simulation. The proposed simplified average model has been validated in MATLAB and PLECS. The accuracy of the model has been verified through comparison with the cycle-by-cycle average (CCA) model and the detailed switching model. These models are also implemented in PSCAD, and a special strategy to implement the phase shift modulation has been proposed to enable the switching model simulation in PSCAD. The implementation of the CHIL test environment of the SST in RTDS is described in this report. The parameter setup of the model has been discussed in detail. One of the difficulties, discussed in this paper, is the choice of the damping factor. Also, the grounding of the system has a large impact on the RTDS simulation. Another issue is that the performance of the system is highly dependent on the switch parameters such as voltage and current ratings. Finally, the functionalities of the SST have been realized on the platform. The distributed energy storage interface power injection and reverse power flow have been validated. Some limitations are noticed and discussed through the simulation on RTDS.

  10. Sagen (SADMT (Strategic Defense Initiative Architecture Dataflow Modeling Technique) generator) user's guide, Version 1.5. Final report

    SciTech Connect

    Kappel, M.R.; Ardoin, C.D.; Linn, C.J.; Linn, J.L.; Salasin, J.

    1988-04-01

    IDA Paper P-2028 documents a tool that can facilitate the description of processes for the Strategic Defense System (SDS) and Battle Management/Command, Control and Communications (BM/C3) architectures. The process descriptions generated by this tool conform to the Strategic Defense Initiative (SDI) Architecture Dataflow Modeling Technique (SADMT).

  11. Gait-based person recognition using arbitrary view transformation model.

    PubMed

    Muramatsu, Daigo; Shiraishi, Akira; Makihara, Yasushi; Uddin, Md Zasim; Yagi, Yasushi

    2015-01-01

    Gait recognition is a useful biometric trait for person authentication because it is usable even with low image resolution. One challenge is robustness to a view change (cross-view matching); view transformation models (VTMs) have been proposed to solve this. The VTMs work well if the target views are the same as their discrete training views. However, the gait traits are observed from an arbitrary view in a real situation. Thus, the target views may not coincide with discrete training views, resulting in recognition accuracy degradation. We propose an arbitrary VTM (AVTM) that accurately matches a pair of gait traits from an arbitrary view. To realize an AVTM, we first construct 3D gait volume sequences of training subjects, disjoint from the test subjects in the target scene. We then generate 2D gait silhouette sequences of the training subjects by projecting the 3D gait volume sequences onto the same views as the target views, and train the AVTM with gait features extracted from the 2D sequences. In addition, we extend our AVTM by incorporating a part-dependent view selection scheme (AVTM_PdVS), which divides the gait feature into several parts, and sets part-dependent destination views for transformation. Because appropriate destination views may differ for different body parts, the part-dependent destination view selection can suppress transformation errors, leading to increased recognition accuracy. Experiments using data sets collected in different settings show that the AVTM improves the accuracy of cross-view matching and that the AVTM_PdVS further improves the accuracy in many cases, in particular, verification scenarios. PMID:25423652

  12. Architecture-dependent signal conduction in model networks of endothelial cells

    NASA Astrophysics Data System (ADS)

    Deymier, Pierre A.; Eray, Mete; Deymier, Martin J.; Runge, Keith; Hoying, James B.; Vasseur, Jérome O.

    2010-04-01

    Signal conduction between endothelial cells along the walls of vessels appears to play an important role in circulatory function. A recently developed approach to calculate analytically the spectrum of propagating compositional waves in models of multicellular architectures is extended to study putative signal conduction dynamics across networks of endothelial cells. Here, compositional waves originate from negative feedback loops, such as between Ca2+ and inositol triphosphate (IP3) in endothelial cells, and are shaped by their connection topologies. We consider models of networks constituted of a main chain of endothelial cells and multiple side chains. The resulting transmission spectra encode information concerning the position and size of the side branches in the form of gaps. This observation suggests that endothelial cell networks may be able to “communicate” information regarding long-range order in their architecture.

  13. Practical Application of Model-based Programming and State-based Architecture to Space Missions

    NASA Technical Reports Server (NTRS)

    Horvath, Gregory A.; Ingham, Michel D.; Chung, Seung; Martin, Oliver; Williams, Brian

    2006-01-01

    Innovative systems and software engineering solutions are required to meet the increasingly challenging demands of deep-space robotic missions. While recent advances in the development of an integrated systems and software engineering approach have begun to address some of these issues, they are still at the core highly manual and, therefore, error-prone. This paper describes a task aimed at infusing MIT's model-based executive, Titan, into JPL's Mission Data System (MDS), a unified state-based architecture, systems engineering process, and supporting software framework. Results of the task are presented, including a discussion of the benefits and challenges associated with integrating mature model-based programming techniques and technologies into a rigorously-defined domain specific architecture.

  14. A functional–structural kiwifruit vine model integrating architecture, carbon dynamics and effects of the environment

    PubMed Central

    Cieslak, Mikolaj; Seleznyova, Alla N.; Hanan, Jim

    2011-01-01

    Background and Aims Functional–structural modelling can be used to increase our understanding of how different aspects of plant structure and function interact, identify knowledge gaps and guide priorities for future experimentation. By integrating existing knowledge of the different aspects of the kiwifruit (Actinidia deliciosa) vine's architecture and physiology, our aim is to develop conceptual and mathematical hypotheses on several of the vine's features: (a) plasticity of the vine's architecture; (b) effects of organ position within the canopy on its size; (c) effects of environment and horticultural management on shoot growth, light distribution and organ size; and (d) role of carbon reserves in early shoot growth. Methods Using the L-system modelling platform, a functional–structural plant model of a kiwifruit vine was created that integrates architectural development, mechanistic modelling of carbon transport and allocation, and environmental and management effects on vine and fruit growth. The branching pattern was captured at the individual shoot level by modelling axillary shoot development using a discrete-time Markov chain. An existing carbon transport resistance model was extended to account for several source/sink components of individual plant elements. A quasi-Monte Carlo path-tracing algorithm was used to estimate the absorbed irradiance of each leaf. Key Results Several simulations were performed to illustrate the model's potential to reproduce the major features of the vine's behaviour. The model simulated vine growth responses that were qualitatively similar to those observed in experiments, including the plastic response of shoot growth to local carbon supply, the branching patterns of two Actinidia species, the effect of carbon limitation and topological distance on fruit size and the complex behaviour of sink competition for carbon. Conclusions The model is able to reproduce differences in vine and fruit growth arising from various
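
    One building block named above, axillary shoot development as a discrete-time Markov chain, can be sketched directly; the states and transition probabilities below are invented for illustration rather than calibrated to kiwifruit data.

```python
# Sketch of axillary shoot development as a discrete-time Markov chain along
# the nodes of a parent shoot. States and transition probabilities are
# invented; in the paper they are estimated from kiwifruit branching data.

import numpy as np

STATES = ["dormant", "vegetative", "floral"]
P = np.array([            # row = state at node i, column = state at node i + 1
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.3, 0.6],
])

def simulate_shoot(n_nodes, start="dormant", rng=None):
    """Generate the axillary bud fate along one shoot."""
    rng = rng if rng is not None else np.random.default_rng()
    state = STATES.index(start)
    fates = [STATES[state]]
    for _ in range(n_nodes - 1):
        state = rng.choice(len(STATES), p=P[state])
        fates.append(STATES[state])
    return fates

if __name__ == "__main__":
    print(simulate_shoot(10, rng=np.random.default_rng(42)))
```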

  15. Modelling chromosomal aberration induction by ionising radiation: The influence of interphase chromosome architecture

    NASA Astrophysics Data System (ADS)

    Ottolenghi, A.; Ballarini, F.; Biaggi, M.

    Several advances have been achieved in the knowledge of nuclear architecture and functions during the last decade, thus allowing the identification of interphase chromosome territories and sub-chromosomal domains (e.g. arm and band domains). This is an important step in the study of radiation-induced chromosome aberrations; indeed, the coupling between track-structure simulations and reliable descriptions of the geometrical properties of the target is one of the main tasks in modelling aberration induction by radiation, since it allows one to clarify the role of the initial positioning of two DNA lesions in determining their interaction probability. In the present paper, the main recent findings on nuclear and chromosomal architecture are summarised. A few examples of models based on different descriptions of interphase chromosome organisation (random-walk models, domain models and static models) are presented, focussing on how the approach adopted in modelling the target nuclei and chromosomes can influence the simulation of chromosomal aberration yields. Each model is discussed by taking into account available experimental data on chromosome aberration induction and/or interphase chromatin organisation. Preliminary results from a mechanistic model based on a coupling between radiation track-structure features and explicitly-modelled, non-overlapping chromosome territories are presented.

  16. Development of Groundwater Modeling Support System Based on Service-Oriented Architecture

    NASA Astrophysics Data System (ADS)

    WANG, Y.; Tsai, J. P.; Hsiao, C. T.; Chang, L. C.

    2014-12-01

    Groundwater simulation has become an essential step in groundwater resources management and assessment. There are many stand-alone pre- and post-processing software packages that alleviate the model simulation workload, but these stand-alone packages neither provide centralized management of data and simulation results nor offer network sharing functions. Model building is still carried out independently, case by case, when using these packages. Hence, it is difficult to share and reuse the data and knowledge (simulation cases) systematically within or across companies. Therefore, this study develops a centralized, network-based groundwater model development system to assist model simulation. The system is based on a service-oriented architecture and allows remote users to develop their modeling cases over the internet. The data and cases (knowledge) are thus easy to manage centrally. MODFLOW, the most widely used groundwater model in the world, is the modeling engine of the system. Other functions include database management and a variety of web services that assist model development, including automatic digitization of geology profile maps, recovery of missing groundwater data, graphical data presentation, and automatic generation of MODFLOW input files from the database, which is the most important function of the system. Since the system architecture is service-oriented, it is scalable and flexible. The system can be easily extended to include scenario analysis and knowledge management to facilitate the reuse of groundwater modeling knowledge.

  17. HYBRID FAST HANKEL TRANSFORM ALGORITHM FOR ELECTROMAGNETIC MODELING

    EPA Science Inventory

    A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram ...

  18. Characterization of Model-Based Reasoning Strategies for Use in IVHM Architectures

    NASA Technical Reports Server (NTRS)

    Poll, Scott; Iverson, David; Patterson-Hine, Ann

    2003-01-01

    Open architectures are gaining popularity for Integrated Vehicle Health Management (IVHM) applications due to the diversity of subsystem health monitoring strategies in use and the need to integrate a variety of techniques at the system health management level. The basic concept of an open architecture suggests that whatever monitoring or reasoning strategy a subsystem wishes to deploy, the system architecture will support the needs of that subsystem and will be capable of transmitting subsystem health status across subsystem boundaries and up to the system level for system-wide fault identification and diagnosis. There is a need to understand the capabilities of various reasoning engines and how they, coupled with intelligent monitoring techniques, can support fault detection and system level fault management. Researchers in IVHM at NASA Ames Research Center are supporting the development of an IVHM system for liquefying-fuel hybrid rockets. In the initial stage of this project, a few readily available reasoning engines were studied to assess candidate technologies for application in next generation launch systems. Three tools representing the spectrum of model-based reasoning approaches, from a quantitative simulation based approach to a graph-based fault propagation technique, were applied to model the behavior of the Hybrid Combustion Facility testbed at Ames. This paper summarizes the characterization of the modeling process for each of the techniques.

  19. Impact of plant shoot architecture on leaf cooling: a coupled heat and mass transfer model

    PubMed Central

    Bridge, L. J.; Franklin, K. A.; Homer, M. E.

    2013-01-01

    Plants display a range of striking architectural adaptations when grown at elevated temperatures. In the model plant Arabidopsis thaliana, these include elongation of petioles, and increased petiole and leaf angles from the soil surface. The potential physiological significance of these architectural changes remains speculative. We address this issue computationally by formulating a mathematical model and performing numerical simulations, testing the hypothesis that elongated and elevated plant configurations may reflect a leaf-cooling strategy. This sets in place a new basic model of plant water use and interaction with the surrounding air, which couples heat and mass transfer within a plant to water vapour diffusion in the air, using a transpiration term that depends on saturation, temperature and vapour concentration. A two-dimensional, multi-petiole shoot geometry is considered, with added leaf-blade shape detail. Our simulations show that increased petiole length and angle generally result in enhanced transpiration rates and reduced leaf temperatures in well-watered conditions. Furthermore, our computations also reveal plant configurations for which elongation may result in decreased transpiration rate owing to decreased leaf liquid saturation. We offer further qualitative and quantitative insights into the role of architectural parameters as key determinants of leaf-cooling capacity. PMID:23720538
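
    A toy steady-state energy balance conveys the coupling described above: transpiration driven by the leaf-to-air vapour pressure deficit carries latent heat away and lowers leaf temperature. The conductances and absorbed radiation below are illustrative placeholders; the paper solves a full coupled heat and mass transfer problem over a two-dimensional shoot geometry.

```python
# Toy steady-state leaf energy balance illustrating the coupling above:
# transpiration driven by the leaf-to-air vapour pressure deficit removes
# latent heat and lowers leaf temperature. All coefficients are illustrative;
# the paper solves a coupled 2-D heat and mass transfer problem instead.

import numpy as np

def saturation_vapour_pressure(T_c):
    """Tetens approximation, in kPa."""
    return 0.6108 * np.exp(17.27 * T_c / (T_c + 237.3))

def leaf_temperature(T_air=30.0, rel_humidity=0.4, absorbed=300.0,
                     g_heat=20.0, g_water=1.0e-5, latent_heat=2.45e6):
    """Fixed-point solve of absorbed = sensible + latent (illustrative units)."""
    T_leaf = T_air
    for _ in range(50):
        vpd = saturation_vapour_pressure(T_leaf) - rel_humidity * saturation_vapour_pressure(T_air)
        latent = latent_heat * g_water * max(vpd, 0.0)   # transpiration cooling, ~W m-2
        sensible = absorbed - latent                     # remainder heats the leaf
        T_leaf = T_air + sensible / g_heat
    return T_leaf

if __name__ == "__main__":
    for rh in (0.2, 0.5, 0.8):
        print(f"relative humidity {rh:.1f}: leaf at {leaf_temperature(rel_humidity=rh):.2f} C")
```

    Even this zero-dimensional balance reproduces the qualitative trend that drier air permits stronger transpirational cooling, which is the effect the architectural changes are hypothesized to exploit.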

  20. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of the system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.

  1. Integrating microarray gene expression object model and clinical document architecture for cancer genomics research.

    PubMed

    Park, Yu Rang; Lee, Hye Won; Kim, Ju Han

    2005-01-01

    Systematic integration of genomic-scale expression profiles with clinical information may facilitate cancer genomics research. MAGE-OM (Microarray Gene Expression Object Model) defines standard objects for genomic but not for clinical data. HL7 CDA (Clinical Document Architecture) is a document model for clinical information, describing syntax (generic structure) but not semantics. We designed a document template in XML Schema with additional constraints for CDA to define content semantics, enabling data model-level integration of MAGE-OM and CDA for cancer genomics research. PMID:16779360

  2. OBSERVATIONS RELATED TO THE USE OF THE SIGMA COORDINATE TRANSFORMATION FOR ESTUARIES AND COASTAL MODELING STUDIES

    EPA Science Inventory

    One of the common techniques used in the application of time-dependent, three-dimensional models addressing estuarine and coastal environmental problems is the sigma coordinate transformation. This transformation has proven useful in applications with highly irregular bottom topograph...

  3. A dynamic object-oriented architecture approach to ecosystem modeling and simulation.

    SciTech Connect

    Dolph, J. E.; Majerus, K. A.; Sydelko, P. J.; Taxon, T. N.

    1999-04-09

    Modeling and simulation in support of adaptive ecosystem management can be better accomplished through a dynamic, integrated, and flexible approach that incorporates scientific and technological components into a comprehensive ecosystem-modeling framework. The Integrated Dynamic Landscape Analysis and Modeling System (IDLAMS) integrates ecological models and decision support techniques, through a geographic information system (GIS)-based framework. The Strategic Environmental Research and Development Program (SERDP) sponsored the development of IDLAMS. Initially built upon a GIS framework, IDLAMS is migrating to an object-oriented (OO) architectural framework. An object-oriented architecture is more flexible and modular. It allows disparate applications and dynamic models to be integrated in a manner that minimizes (or eliminates) the need to rework or recreate the system as new models are added to the suite. In addition, an object-oriented design makes it easier to provide run-time feedback among models, thereby making it a more dynamic tool for exploring and providing insight into the interactions among ecosystem processes. Finally, an object-oriented design encourages the reuse of existing technology because OO-IDLAMS is able to integrate disparate models, databases, or applications executed in their native languages. Reuse is also accomplished through a structured approach to building a consistent and reusable object library. This reusability can substantially reduce the time and effort needed to develop future integrated ecosystem simulations.

  4. Combining Wavelet Transform and Hidden Markov Models for ECG Segmentation

    NASA Astrophysics Data System (ADS)

    Andreão, Rodrigo Varejão; Boudy, Jérôme

    2006-12-01

    This work aims at providing new insights on the electrocardiogram (ECG) segmentation problem using wavelets. The wavelet transform was originally combined with a hidden Markov model (HMM) framework in order to carry out beat segmentation and classification. A group of five continuous wavelet functions commonly used in ECG analysis has been implemented and compared using the same framework. All experiments were carried out on the QT database, which is composed of a representative number of ambulatory recordings of several individuals and is supplied with manual labels made by a physician. Our main contribution relies on the consistent set of experiments performed. Moreover, the results obtained in terms of beat segmentation and premature ventricular contraction (PVC) detection are comparable to other works reported in the literature, independently of the type of wavelet. Finally, through an original concept of combining two wavelet functions in the segmentation stage, we achieve our best performances.
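
    A rough sketch of such a pipeline: continuous wavelet coefficients as per-sample observations, segmented by a Gaussian HMM whose hidden states stand in for beat waveform segments. The synthetic signal, wavelet, scales, and the use of the third-party pywt and hmmlearn packages are assumptions of this sketch, not the paper's setup.

```python
# Rough sketch of the pipeline: continuous wavelet coefficients as per-sample
# observations, segmented by a Gaussian HMM whose hidden states stand in for
# beat waveform segments. The synthetic signal, wavelet, scales, and the use
# of the third-party pywt and hmmlearn packages are assumptions of this
# sketch, not the paper's setup.

import numpy as np
import pywt
from hmmlearn.hmm import GaussianHMM

def cwt_features(signal, scales=(2, 4, 8, 16), wavelet="mexh"):
    """Observation vectors: one row per sample, one column per wavelet scale."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    return coeffs.T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 4.0, 1000)
    ecg_like = np.sin(2 * np.pi * 1.2 * t) ** 15 + 0.05 * rng.normal(size=t.size)
    X = cwt_features(ecg_like)
    hmm = GaussianHMM(n_components=4, covariance_type="diag", n_iter=20).fit(X)
    states = hmm.predict(X)        # per-sample segment labels
    print("samples per hidden state:", np.bincount(states))
```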

  5. The NIST Real-Time Control System (RCS): A Reference Model Architecture for Computational Intelligence

    NASA Technical Reports Server (NTRS)

    Albus, James S.

    1996-01-01

    The Real-time Control System (RCS) developed at NIST and elsewhere over the past two decades defines a reference model architecture for design and analysis of complex intelligent control systems. The RCS architecture consists of a hierarchically layered set of functional processing modules connected by a network of communication pathways. The primary distinguishing feature of the layers is the bandwidth of the control loops. The characteristic bandwidth of each level is determined by the spatial and temporal integration window of filters, the temporal frequency of signals and events, the spatial frequency of patterns, and the planning horizon and granularity of the planners that operate at each level. At each level, tasks are decomposed into sequential subtasks, to be performed by cooperating sets of subordinate agents. At each level, signals from sensors are filtered and correlated with spatial and temporal features that are relevant to the control function being implemented at that level.

  6. Model of the reliability analysis of the distributed computer systems with architecture "client-server"

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Zelenkov, P. V.; Karaseva, M. V.; Tsarev, M. Yu; Tsarev, R. Yu

    2015-01-01

    The paper considers the problem of analyzing the reliability of distributed computer systems with a client-server architecture. A distributed computer system is a set of hardware and software for implementing the following main functions: processing, storage, transmission and data protection. This paper discusses distributed computer systems with the "client-server" architecture. The paper presents a scheme of distributed computer system functioning represented as a graph whose vertices are the functional states of the system and whose arcs are transitions from one state to another depending on the prevailing conditions. In the reliability analysis we consider reliability indicators such as the probability of the system transitioning into the stopping state or an accident state, as well as the intensities of these transitions. The proposed model allows us to obtain correlations for the reliability parameters of the distributed computer system without any assumptions about the distribution laws of random variables or the number of elements in the system.
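
    The kind of quantity described, the probability of ending up in the stopping or accident state, can be illustrated with an absorbing Markov chain over a small invented state graph; the states and transition probabilities below are placeholders, not values derived from the paper.

```python
# Absorbing Markov chain sketch of the reliability question posed above.
# States 0 (operating) and 1 (degraded) are transient; 2 (stopped) and
# 3 (accident) are absorbing. The transition probabilities are invented.

import numpy as np

P = np.array([
    [0.90, 0.08, 0.015, 0.005],
    [0.30, 0.60, 0.080, 0.020],
    [0.00, 0.00, 1.000, 0.000],
    [0.00, 0.00, 0.000, 1.000],
])

Q = P[:2, :2]                          # transient -> transient block
R = P[:2, 2:]                          # transient -> absorbing block
N = np.linalg.inv(np.eye(2) - Q)       # fundamental matrix: expected visits
B = N @ R                              # absorption probabilities by starting state

print("expected steps before absorption:", N.sum(axis=1))
print("P(stop), P(accident) starting from 'operating':", B[0].round(3))
```

    The fundamental matrix also yields the expected number of steps before absorption, a rough proxy for time to failure under this discrete-step view.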

  7. An Evaluation of the High Level Architecture (HLA) as a Framework for NASA Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reid, Michael R.; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The High Level Architecture (HLA) is a current US Department of Defense and an industry (IEEE-1516) standard architecture for modeling and simulations. It provides a framework and set of functional rules and common interfaces for integrating separate and disparate simulators into a larger simulation. The goal of the HLA is to reduce software costs by facilitating the reuse of simulation components and by providing a runtime infrastructure to manage the simulations. In order to evaluate the applicability of the HLA as a technology for NASA space mission simulations, a Simulations Group at Goddard Space Flight Center (GSFC) conducted a study of the HLA and developed a simple prototype HLA-compliant space mission simulator. This paper summarizes the prototyping effort and discusses the potential usefulness of the HLA in the design and planning of future NASA space missions with a focus on risk mitigation and cost reduction.

  8. Avionic Architecture for Model Predictive Control Application in Mars Sample & Return Rendezvous Scenario

    NASA Astrophysics Data System (ADS)

    Saponara, M.; Tramutola, A.; Creten, P.; Hardy, J.; Philippe, C.

    2013-08-01

    Optimization-based control techniques such as Model Predictive Control (MPC) are considered extremely attractive for space rendezvous, proximity operations and capture applications that require high level of autonomy, optimal path planning and dynamic safety margins. Such control techniques require high-performance computational needs for solving large optimization problems. The development and implementation in a flight representative avionic architecture of a MPC based Guidance, Navigation and Control system has been investigated in the ESA R&T study “On-line Reconfiguration Control System and Avionics Architecture” (ORCSAT) of the Aurora programme. The paper presents the baseline HW and SW avionic architectures, and verification test results obtained with a customised RASTA spacecraft avionics development platform from Aeroflex Gaisler.

  9. The Transformative Individual School Counseling Model: An Accountability Model for Urban School Counselors

    ERIC Educational Resources Information Center

    Eschenauer, Robert; Chen-Hayes, Stuart F.

    2005-01-01

    The realities and needs of urban students, families, and educators have outgrown traditional individual counseling models. The American School Counselor Association's National Model and National Standards and the Education Trust's Transforming School Counseling Initiative encourage professional school counselors to shift roles toward implementing…

  10. Phase-field-crystal methodology for modeling of structural transformations.

    PubMed

    Greenwood, Michael; Rottler, Jörg; Provatas, Nikolas

    2011-03-01

    We introduce and characterize free-energy functionals for modeling of solids with different crystallographic symmetries within the phase-field-crystal methodology. The excess free energy responsible for the emergence of periodic phases is inspired by classical density-functional theory, but uses only a minimal description for the modes of the direct correlation function to preserve computational efficiency. We provide a detailed prescription for controlling the crystal structure and introduce parameters for changing temperature and surface energies, so that phase transformations between body-centered-cubic (bcc), face-centered-cubic (fcc), hexagonal-close-packed (hcp), and simple-cubic (sc) lattices can be studied. To illustrate the versatility of our free-energy functional, we compute the phase diagram for fcc-bcc-liquid coexistence in the temperature-density plane. We also demonstrate that our model can be extended to include hcp symmetry by dynamically simulating hcp-liquid coexistence from a seeded crystal nucleus. We further quantify the dependence of the elastic constants on the model control parameters in two and three dimensions, showing how the degree of elastic anisotropy can be tuned from the shape of the direct correlation functions. PMID:21517507

  11. Model for a transformer-coupled toroidal plasma source

    SciTech Connect

    Rauf, Shahid; Balakrishna, Ajit; Chen Zhigang; Collins, Ken

    2012-01-15

    A two-dimensional fluid plasma model for a transformer-coupled toroidal plasma source is described. Ferrites are used in this device to improve the electromagnetic coupling between the primary coils carrying radio frequency (rf) current and a secondary plasma loop. Appropriate components of the Maxwell equations are solved to determine the electromagnetic fields and electron power deposition in the model. The effect of gas flow on species transport is also considered. The model is applied to 1 Torr Ar/NH{sub 3} plasma in this article. Rf electric field lines form a loop in the vacuum chamber and generate a plasma ring. Due to rapid dissociation of NH{sub 3}, NH{sub x}{sup +} ions are more prevalent near the gas inlet and Ar{sup +} ions are the dominant ions farther downstream. NH{sub 3} and its by-products rapidly dissociate into small fragments as the gas flows through the plasma. With increasing source power, NH{sub 3} dissociates more readily and NH{sub x}{sup +} ions are more tightly confined near the gas inlet. Gas flow rate significantly influences the plasma characteristics. With increasing gas flow rate, NH{sub 3} dissociation occurs farther from the gas inlet in regions with higher electron density. Consequently, more NH{sub 4}{sup +} ions are produced and dissociation by-products have higher concentrations near the outlet.

  12. Kinetic Modeling of Damage Repair, Genome Instability, and Neoplastic Transformation

    SciTech Connect

    Stewart, Robert D

    2007-03-17

    Inducible repair and pathway interactions may fundamentally alter the shape of dose-response curves because different mechanisms may be important under low- and high-dose exposure conditions. However, the significance of these phenomena for risk assessment purposes is an open question. This project developed new modeling tools to study the putative effects of DNA damage induction and repair on higher-level biological endpoints, including cell killing, neoplastic transformation and cancer. The project scope included (1) the development of new approaches to simulate the induction and base excision repair (BER) of DNA damage using Monte Carlo methods and (2) the integration of data from the Monte Carlo simulations with kinetic models for higher-level biological endpoints. Methods of calibrating and testing such multiscale biological simulations were developed. We also developed models to aid in the analysis and interpretation of data from experimental assays, such as the pulsed-field gel electrophoresis (PFGE) assay used to quantify the amount of DNA damage caused by ionizing radiation.

  13. From Tls to Hbim. High Quality Semantically-Aware 3d Modeling of Complex Architecture

    NASA Astrophysics Data System (ADS)

    Quattrini, R.; Malinverni, E. S.; Clini, P.; Nespeca, R.; Orlietti, E.

    2015-02-01

    In order to improve the framework for 3D modeling, a great challenge is to establish the suitability of the Building Information Model (BIM) platform for historical architecture. A specific challenge in HBIM is to guarantee the appropriateness of geometrical accuracy. The present work demonstrates the feasibility of a complete HBIM approach for complex architectural shapes, starting from TLS point clouds. A novelty of our method is to work in a 3D environment throughout the process and to develop semantics during the construction phase. This last feature of HBIM was analyzed in the present work by verifying the studied ontologies, enabling enrichment of the model with non-geometrical information, such as historical notes, decay or deformation evidence, decorative elements, etc. The case study is the Church of Santa Maria at Portonovo, an abbey from the Romanesque period. Irregular or complex historical architecture, such as Romanesque, needs the construction of shared libraries starting from the survey of its already existing elements. This is another key aspect in delivering Building Information Modeling standards. In particular, we focus on the quality assessment of the obtained model, using open-source software and the point cloud as reference. The proposed work shows how it is possible to develop a high-quality, semantically aware 3D model capable of connecting the geometrical-historical survey with descriptive thematic databases. In this way, a centralized HBIM will serve as a comprehensive dataset of information about all disciplines, particularly for restoration and conservation. Moreover, the geometric accuracy will also ensure reliable visualization outputs.

  14. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Team

    SciTech Connect

    de Supinski, Bronis R.; Alam, Sadaf R; Bailey, David; Carrington, Laura; Daley, Christopher; Dubey, Anshu; Gamblin, Todd; Gunter, Dan; Hovland, Paul; Jagode, Heike; Karavanic, Karen; Marin, Gabriel; Mellor-Crummey, John; Moore, Shirley; Norris, Boyana; Oliker, Leonid; Olschanowsky, Cathy; Roth, Philip C; Schulz, Martin; Shende, Sameer; Snavely, Allan; Spea, Wyatt; Tikir, Mustafa; Vetter, Jeffrey S; Worley, Patrick H; Wright, Nicholas

    2009-01-01

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort at optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  15. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Tiger Team

    SciTech Connect

    de Supinski, B R; Alam, S R; Bailey, D H; Carrington, L; Daley, C

    2009-05-27

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort to the optimization of key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  16. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Tiger Team

    SciTech Connect

    de Supinski, Bronis R.; Alam, Sadaf; Bailey, David H.; Carrington, Laura; Daley, Chris; Dubey, Anshu; Gamblin, Todd; Gunter, Dan; Hovland, Paul D.; Jagode, Heike; Karavanic, Karen; Marin, Gabriel; Mellor-Crummey, John; Moore, Shirley; Norris, Boyana; Oliker, Leonid; Olschanowsky, Catherine; Roth, Philip C.; Schulz, Martin; Shende, Sameer; Snavely, Allan; Spear, Wyatt; Tikir, Mustafa; Vetter, Jeff; Worley, Pat; Wright, Nicholas

    2009-06-26

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  17. Fault zone architecture and fluid flow: Insights from field data and numerical modeling

    NASA Astrophysics Data System (ADS)

    Caine, Jonathan Saul; Forster, Craig B.

    Fault zones in the upper crust are typically composed of complex fracture networks and discrete zones of comminuted and geochemically altered fault rocks. Determining the patterns and rates of fluid flow in these distinct structural discontinuities is a three-dimensional problem. A series of numerical simulations of fluid flow in a set of three-dimensional discrete fracture network models aids in identifying the primary controlling parameters of fault-related fluid flow, and their interactions, throughout episodic deformation. Four idealized, but geologically realistic, fault zone architectural models are based on fracture data collected along exposures of the Stillwater Fault Zone in Dixie Valley, Nevada and geometric data from a series of normal fault zones in east Greenland. The models are also constrained by an Andersonian model for mechanically compatible fracture networks associated with normal faulting. Fluid flow in individual fault zone components, such as a fault core and damage zone, and full outcrop scale model domains are simulated using a finite element routine. Permeability contrasts between components and permeability anisotropy within components are identified as the major controlling factors in fault-related fluid flow. Additionally, the structural and hydraulic variations in these components are also major controls of flow at the scale of the full model domains. The four models can also be viewed as a set of snapshots in the mechanical evolution of a single fault zone. Changes in the hydraulic parameters within the models mimic the evolution of the permeability structure of each model through a single deformation cycle. The model results demonstrate that small changes in the architecture and hydraulic parameters of individual fault zone components can have very large impacts, up to five orders of magnitude, on the permeability structure of the full model domains. Closure of fracture apertures in each fault zone magnifies the magnitude and

  18. Integrating water quality models in the High Level Architecture (HLA) environment

    NASA Astrophysics Data System (ADS)

    Lindenschmidt, K.-E.; Hesser, F. B.; Rode, M.

    2005-08-01

    HLA (High Level Architecture) is a computer architecture for constructing distributed simulations. It facilitates interoperability among different simulations and simulation types and promotes reuse of simulation software modules. The core of the HLA is the Run-Time Infrastructure (RTI) that provides services to start and stop a simulation execution, to transfer data between interoperating simulations, to control the amount and routing of data that is passed, and to co-ordinate the passage of simulated time among the simulations. The authors are not aware of any HLA applications in the field of water resources management. The development of such a system is underway at the UFZ Centre for Environmental Research, Germany, in which the simulations of a hydrodynamic model (DYNHYD), eutrophication model (EUTRO) and sediment and micro-pollutant transport model (TOXI) are interlinked and co-ordinated by the HLA RTI environment. This configuration enables extensions such as (i) "cross-model" uncertainty analysis with Monte Carlo Analysis: time synchronisation allows EUTRO and TOXI simulations to be made after each successive simulation time step in DYNHYD, (ii) information transfer from EUTRO to TOXI to compute organic carbon fractions of particulate matter in TOXI, (iii) information transfer from TOXI to EUTRO to compute extinction coefficients in EUTRO and (iv) feedback from water quality simulations to the hydrodynamic modeling.

  19. Behavioral Model Architectures: A New Way Of Doing Real-Time Planning In Intelligent Robots

    NASA Astrophysics Data System (ADS)

    Cassinis, Riccardo; Biroli, Ernesto; Meregalli, Alberto; Scalise, Fabio

    1987-01-01

    Traditional hierarchical robot control systems, although well suited for manufacturing applications, appear to be inefficient for innovative applications such as mobile robots. The research we present aims at the development of a new architecture, designed to overcome current limitations. The control system was named BARCS (Behavioral Architecture Robot Control System). It is composed of several modules that exchange information through a blackboard. The original point is that the functions of the modules were selected according to a behavioral rather than a functional decomposition model. Therefore, the system includes, among others, purpose, strategy, movement, sensor handling and safety modules. Both the hardware structure and the logical decomposition allow great freedom in the design of each module and of the connections between modules, which have to be as flexible and efficient as possible. In order to obtain an "intelligent" behavior, a mixture of traditional programming, artificial intelligence techniques and fuzzy logic is used, according to the needs of each module. The approach is particularly interesting because the robot can be quite easily "specialized", i.e., it can be given behaviors and problem-solving strategies that suit some applications better than others. Another interesting aspect of the proposed architecture is that sensor information handling and fusion can be dynamically tailored to the robot's situation, thus eliminating time-consuming, unnecessary processing.

  20. Multi-scale modeling of the iron bcc → hcp martensitic phase transformation

    NASA Astrophysics Data System (ADS)

    Caspersen, Kyle; Carter, Emily; Lew, Adrian; Ortiz, Michael

    2004-03-01

    Pressures exceeding 10 GPa induce a martensitic phase transformation in iron, where ferromagnetic bcc transforms into non-magnetic hcp. The transition pressure is not known precisely, but is thought to depend strongly on shear. To investigate the properties of this transformation and the role of shear, we have developed a multi-scale iron model. This model contains a free energy derived from an ab-initio based non-linear elastic expansion, a kinematically compatible spinodal decomposition of phases, ab-initio based interfacial energies, and a dependence on the bcc → hcp transformation path(s). The model shows spinodal decomposition behavior (with a slight expected deviation) and predicts a transformation pressure of 10 GPa. Additionally, the model predicts that the inclusion of shear facilitates the transformation, causing the transformation pressure to decrease.

  1. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-04-01

    We analyze the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams that show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modeling groups. These diagrams offer insights into the similarities and differences in structure between climate models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  2. The software architecture of climate models: a graphical comparison of CMIP5 and EMICAR5 configurations

    NASA Astrophysics Data System (ADS)

    Alexander, K.; Easterbrook, S. M.

    2015-01-01

    We analyse the source code of eight coupled climate models, selected from those that participated in the CMIP5 (Taylor et al., 2012) or EMICAR5 (Eby et al., 2013; Zickfeld et al., 2013) intercomparison projects. For each model, we sort the preprocessed code into components and subcomponents based on dependency structure. We then create software architecture diagrams which show the relative sizes of these components/subcomponents and the flow of data between them. The diagrams also illustrate several major classes of climate model design; the distribution of complexity between components, which depends on historical development paths as well as the conscious goals of each institution; and the sharing of components between different modelling groups. These diagrams offer insights into the similarities and differences between models, and have the potential to be useful tools for communication between scientists, scientific institutions, and the public.

  3. Investigating the genetic architecture of conditional strategies using the environmental threshold model.

    PubMed

    Buzatto, Bruno A; Buoro, Mathieu; Hazel, Wade N; Tomkins, Joseph L

    2015-12-22

    The threshold expression of dichotomous phenotypes that are environmentally cued or induced comprises the vast majority of phenotypic dimorphisms in colour, morphology, behaviour and life history. Modelled as conditional strategies under the framework of evolutionary game theory, the quantitative genetic basis of these traits is a challenge to estimate. The challenge exists firstly because the phenotypic expression of the trait is dichotomous and secondly because the apparent environmental cue is separate from the biological signal pathway that induces the switch between phenotypes. It is the cryptic variation underlying the translation of cue to phenotype that we address here. With a 'half-sib common environment' and a 'family-level split environment' experiment, we examine the environmental and genetic influences that underlie male dimorphism in the earwig Forficula auricularia. From the conceptual framework of the latent environmental threshold (LET) model, we use pedigree information to dissect the genetic architecture of the threshold expression of forceps length. We investigate for the first time the strength of the correlation between observable and cryptic 'proximate' cues. Furthermore, in support of the environmental threshold model, we found no evidence for a genetic correlation between cue and the threshold between phenotypes. Our results show strong correlations between observable and proximate cues and less genetic variation for thresholds than previous studies have suggested. We discuss the importance of generating better estimates of the genetic variation for thresholds when investigating the genetic architecture and heritability of threshold traits. By investigating genetic architecture by means of the LET model, our study supports several key evolutionary ideas related to conditional strategies and improves our understanding of environmentally cued decisions. PMID:26674955

  4. Using Three-dimensional Plant Root Architecture in Models of Shallow-slope Stability

    PubMed Central

    Danjon, Frédéric; Barker, David H.; Drexhage, Michael; Stokes, Alexia

    2008-01-01

    Background The contribution of vegetation to shallow-slope stability is of major importance in landslide-prone regions. However, existing slope stability models use only limited plant root architectural parameters. This study aims to provide a chain of tools useful for determining the contribution of tree roots to soil reinforcement. Methods Three-dimensional digitizing in situ was used to obtain accurate root system architecture data for mature Quercus alba in two forest stands. These data were used as input to tools developed, which analyse the spatial position of roots, topology and geometry. The contribution of roots to soil reinforcement was determined by calculating additional soil cohesion using the limit equilibrium model, and the factor of safety (FOS) using an existing slope stability model, Slip4Ex. Key Results Existing models may incorrectly estimate the additional soil cohesion provided by roots, as the spatial position of roots crossing the potential slip surface is usually not taken into account. However, most soil reinforcement by roots occurs close to the tree stem and is negligible at a distance >1·0 m from the tree, and therefore global values of FOS for a slope do not take into account local slippage along the slope. Conclusions Within a forest stand on a landslide-prone slope, soil fixation by roots can be minimal between uniform rows of trees, leading to local soil slippage. Therefore, staggered rows of trees would improve overall slope stability, as trees would arrest the downward movement of soil. The chain of tools consisting of both software (free for non-commercial use) and functions available from the first author will enable a more accurate description and use of root architectural parameters in standard slope stability analyses. PMID:17766845

  5. A Canopy Architectural Model to Study the Competitive Ability of Chickpea with Sowthistle

    PubMed Central

    Cici, S-Zahra-Hosseini; Adkins, Steve; Hanan, Jim

    2008-01-01

    Background and Aims Improving the competitive ability of crops is a sustainable method of weed management. This paper shows how a virtual plant model of competition between chickpea (Cicer arietinum) and sowthistle (Sonchus oleraceus) can be used as a framework for discovering and/or developing more competitive chickpea cultivars. Methods The virtual plant models were developed using the L-systems formalism, parameterized according to measurements taken on plants at intervals during their development. A quasi-Monte Carlo light-environment model was used to model the effect of chickpea canopy on the development of sowthistle. The chickpea–light environment–sowthistle model (CLES model) captured the hypothesis that the architecture of chickpea plants modifies the light environment inside the canopy and determines sowthistle growth and development pattern. The resulting CLES model was parameterized for different chickpea cultivars (viz. ‘Macarena’, ‘Bumper’, ‘Jimbour’ and ‘99071-1001’) to compare their competitive ability with sowthistle. To validate the CLES model, an experiment was conducted using the same four chickpea cultivars as different treatments with a sowthistle growing under their canopy. Results and Conclusions The growth of sowthistle, both in silico and in glasshouse experiments, was reduced most by ‘99071-1001’, a cultivar with a short phyllochron. The second rank of competitive ability belonged to ‘Macarena’ and ‘Bumper’, while ‘Jimbour’ was the least competitive cultivar. The architecture of virtual chickpea plants modified the light inside the canopy, which influenced the growth and development of the sowthistle plants in response to different cultivars. This is the first time that a virtual plant model of a crop–weed interaction has been developed. This virtual plant model can serve as a platform for a broad range of applications in the study of chickpea–weed interactions and their environment. PMID:18375962

  6. Historic Building Information Modelling - Adding intelligence to laser and image based surveys of European classical architecture

    NASA Astrophysics Data System (ADS)

    Murphy, Maurice; McGovern, Eugene; Pavia, Sara

    2013-02-01

    Historic Building Information Modelling (HBIM) is a novel prototype library of parametric objects, based on historic architectural data and a system of cross platform programmes for mapping parametric objects onto point cloud and image survey data. The HBIM process begins with remote collection of survey data using a terrestrial laser scanner combined with digital photo modelling. The next stage involves the design and construction of a parametric library of objects, which are based on the manuscripts ranging from Vitruvius to 18th century architectural pattern books. In building parametric objects, the problem of file format and exchange of data has been overcome within the BIM ArchiCAD software platform by using geometric descriptive language (GDL). The plotting of parametric objects onto the laser scan surveys as building components to create or form the entire building is the final stage in the reverse engineering process. The final HBIM product is the creation of full 3D models including detail behind the object's surface concerning its methods of construction and material make-up. The resultant HBIM can automatically create cut sections, details and schedules in addition to the orthographic projections and 3D models (wire frame or textured) for both the analysis and conservation of historic objects, structures and environments.

  7. Model-Driven Development of Reliable Avionics Architectures for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Borer, Nicholas; Claypool, Ian; Clark, David; West, John; Somervill, Kevin; Odegard, Ryan; Suzuki, Nantel

    2010-01-01

    This paper discusses a method used for the systematic improvement of NASA's Lunar Surface Systems avionics architectures in the area of reliability and fault-tolerance. This approach utilizes an integrated system model to determine the effects of component failure on the system's ability to provide critical functions. A Markov model of the potential degraded system modes is created to characterize the probability of these degraded modes, and the system model is run for each Markov state to determine its status (operational or system loss). The probabilistic results from the Markov model are first produced from state transition rates based on NASA heritage failure rate data for similar components. An additional set of probabilistic results is created from a representative set of failure rates developed for this study, for a variety of component quality grades (space-rated, mil-spec, ruggedized, and commercial). The results show that careful application of redundancy and selected component improvement should result in Lunar Surface Systems architectures that exhibit an appropriate degree of fault-tolerance, reliability, performance, and affordability.
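
    The kind of degraded-mode analysis described above can be sketched with a small continuous-time Markov model. The three-state chain, the failure rates and the repair-free assumption below are illustrative only (they are not the NASA architecture or its heritage data); the sketch simply shows how state probabilities, and hence the probability of system loss, follow from a transition-rate matrix.

      # Toy continuous-time Markov reliability model (illustrative rates, not NASA data):
      # state 0 = fully operational, state 1 = degraded (one redundant unit failed),
      # state 2 = system loss. Probabilities evolve as p(t) = p(0) @ expm(Q t).
      import numpy as np
      from scipy.linalg import expm

      lam = 1e-4        # assumed failure rate of each of two redundant units [1/h]
      Q = np.array([
          [-2 * lam, 2 * lam, 0.0],   # either of two units can fail
          [0.0,      -lam,    lam],   # remaining unit fails -> system loss
          [0.0,       0.0,    0.0],   # absorbing loss state
      ])

      p0 = np.array([1.0, 0.0, 0.0])
      for t in (1e3, 1e4, 8.76e4):                 # hours (~6 weeks, ~1 yr, ~10 yr)
          p = p0 @ expm(Q * t)
          print(f"t = {t:8.0f} h  P(operational) = {p[0]:.4f}  P(loss) = {p[2]:.6f}")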

  8. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
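
    For readers unfamiliar with the recursion referred to above, the sketch below shows a generic extended Kalman filter predict/update cycle applied to a made-up scalar nonlinear system. It is a minimal illustration of the standard EKF equations, not the C-MAPSS40k or MBEC implementation; the dynamics, noise levels and gains are assumptions chosen only for the example.

      # Generic extended Kalman filter predict/update step (standard EKF recursion;
      # the toy system below is illustrative, not the engine model from the paper).
      import numpy as np

      def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
          """One EKF cycle: nonlinear predict with f(), linearised update with h()."""
          # Predict
          x_pred = f(x, u)
          F = F_jac(x, u)
          P_pred = F @ P @ F.T + Q
          # Update
          H = H_jac(x_pred)
          y = z - h(x_pred)                              # innovation
          S = H @ P_pred @ H.T + R                       # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
          x_new = x_pred + K @ y
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Example: estimate the state of a toy nonlinear system x_{k+1} = x + dt*(-0.1*x^2 + u).
      dt = 0.1
      f = lambda x, u: x + dt * (-0.1 * x**2 + u)
      F_jac = lambda x, u: np.array([[1.0 + dt * (-0.2 * x[0])]])
      h = lambda x: x                                    # direct (noisy) state measurement
      H_jac = lambda x: np.eye(1)

      x, P = np.array([2.0]), np.eye(1)
      Q, R = 1e-4 * np.eye(1), 0.04 * np.eye(1)
      rng = np.random.default_rng(0)
      truth = np.array([2.5])
      for k in range(50):
          truth = f(truth, u=1.0)
          z = truth + rng.normal(0.0, 0.2, size=1)
          x, P = ekf_step(x, P, 1.0, z, f, h, F_jac, H_jac, Q, R)
      print("final estimate:", x, "truth:", truth)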

  9. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The nonlinear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.

  10. Structured Analysis Tool interface to the Strategic Defense Initiative architecture dataflow modeling technique. Master's thesis

    SciTech Connect

    Austin, K.A.

    1989-12-01

    A software interface was designed and implemented that extends the use of Structured Analysis (SA) Tool (SAtool) as a graphical front-end to the Strategic Defense Initiative Architecture Dataflow Modeling Technique (SADMT). SAtool is a computer-aided software engineering tool developed at the Air Force Institute of Technology that automates the requirements analysis phase of software development using a graphics editor. The tool automates two approaches for documenting software requirements analysis: SA diagrams and data dictionaries. SADMT is an Ada based simulation framework that enables users to model real-world architectures for simulation purposes. This research was accomplished in three phases. During the first phase, entity-relationship (E-R) models of each software package were developed. From these E-R models, relationships between the two software packages were identified and used to develop a mapping from SAtool to SADMT. The next phase of the research was the development of a software interface in Ada based on the mapping developed in the first phase. A combination of a top-down and a bottom-up approach was used in developing the software.

  11. High-Performance Work Systems: American Models of Workplace Transformation.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Batt, Rosemary

    Rising competition in world and domestic markets for the past 2 decades has necessitated that U.S. companies undergo significant transformations to improve their performance with respect to a wide array of efficiency and quality indicators. Research on the transformations recently undertaken by some U.S. companies to boost performance revealed two…

  12. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
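
    The reason the FFT helps in this setting is that applying a stationary covariance or weight kernel to every node of a regular grid is a convolution, which the FFT evaluates in O(N log N) rather than O(N^2). The sketch below demonstrates that idea with a CPU-only NumPy FFT and a placeholder Gaussian kernel on a random residual field; it is not the paper's cuFFT/GPU regression Kriging code, and the kernel and data are assumptions for illustration.

      # FFT-based 2-D convolution of a gridded residual field with a stationary kernel,
      # the core trick behind FFT-accelerated Kriging-type interpolation on regular grids.
      import numpy as np

      def fft_convolve2d(field, kernel):
          """Circular 2-D convolution of `field` with `kernel` via the FFT."""
          F = np.fft.rfft2(field)
          K = np.fft.rfft2(np.fft.ifftshift(kernel), s=field.shape)
          return np.fft.irfft2(F * K, s=field.shape)

      n = 512
      y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
      kernel = np.exp(-(x**2 + y**2) / (2 * 10.0**2))     # Gaussian weights, range ~10 cells
      kernel /= kernel.sum()

      rng = np.random.default_rng(1)
      residuals = rng.normal(size=(n, n))                 # e.g. regression residuals on the grid
      smoothed = fft_convolve2d(residuals, kernel)        # kernel-weighted field in one pass
      print(smoothed.shape, smoothed.std())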

  13. Orion Flight Test 1 Architecture: Observed Benefits of a Model Based Engineering Approach

    NASA Technical Reports Server (NTRS)

    Simpson, Kimberly A.; Sindiy, Oleg V.; McVittie, Thomas I.

    2012-01-01

    This paper details how a NASA-led team is using a model-based systems engineering approach to capture, analyze and communicate the end-to-end information system architecture supporting the first unmanned orbital flight of the Orion Multi-Purpose Crew Exploration Vehicle. Along with a brief overview of the approach and its products, the paper focuses on the observed program-level benefits, challenges, and lessons learned, all of which may be applied to improve systems engineering tasks for characteristically similar challenges.

  14. The Reactive-Causal Architecture: Introducing an Emotion Model along with Theories of Needs

    NASA Astrophysics Data System (ADS)

    Aydin, Ali Orhan; Orgun, Mehmet Ali

    In the entertainment application area, one of the major aims is to develop believable agents. To achieve this aim, agents should be highly autonomous, situated, flexible, and display affect. The Reactive-Causal Architecture (ReCau) is proposed to simulate these core attributes. In its current form, ReCau cannot explain the effects of emotions on intelligent behaviour. This study aims to further improve the emotion model of ReCau to explain the effects of emotions on intelligent behaviour. This improvement allows ReCau to display emotion, supporting the development of believable agents.

  15. Structural Models that Manage IT Portfolio Affecting Business Value of Enterprise Architecture

    NASA Astrophysics Data System (ADS)

    Kamogawa, Takaaki

    This paper examines the structural relationships between Information Technology (IT) governance and Enterprise Architecture (EA), with the objective of enhancing business value in the enterprise society. Structural models consisting of four related hypotheses reveal the relationship between IT governance and EA in the improvement of business values. We statistically examined the hypotheses by analyzing validated questionnaire items from respondents within firms listed on the Japanese stock exchange who were qualified to answer them. We concluded that firms which have organizational ability controlled by IT governance are more likely to deliver business value based on IT portfolio management.

  16. Guiding Principles for Data Architecture to Support the Pathways Community HUB Model

    PubMed Central

    Zeigler, Bernard P.; Redding, Sarah; Leath, Brenda A.; Carter, Ernest L.; Russell, Cynthia

    2016-01-01

    Introduction: The Pathways Community HUB Model provides a unique strategy to effectively supplement health care services with social services needed to overcome barriers for those most at risk of poor health outcomes. Pathways are standardized measurement tools used to define and track health and social issues from identification through to a measurable completion point. The HUB uses Pathways to coordinate agencies and service providers in the community to eliminate the inefficiencies and duplication that exist among them. Pathways Community HUB Model and Formalization: Experience with the Model has brought out the need for better information technology solutions to support implementation of the Pathways themselves through decision-support tools for care coordinators and other users to track activities and outcomes, and to facilitate reporting. Here we provide a basis for discussing recommendations for such a data infrastructure by developing a conceptual model that formalizes the Pathway concept underlying current implementations. Requirements for Data Architecture to Support the Pathways Community HUB Model: The main contribution is a set of core recommendations as a framework for developing and implementing a data architecture to support implementation of the Pathways Community HUB Model. The objective is to present a tool for communities interested in adopting the Model to learn from and to adapt in their own development and implementation efforts. Problems with Quality of Data Extracted from the CHAP Database: Experience with the Community Health Access Project (CHAP) database system (the core implementation of the Model) has identified several issues, and remedies have been developed to address them. Based on analysis of these issues and remedies, we present several key features for a data architecture meeting the recommendations above. Implementation of Features: Presentation of features is followed by a practical guide to their implementation

  17. Model-Based Systems Engineering for Capturing Mission Architecture System Processes with an Application Case Study - Orion Flight Test 1

    NASA Technical Reports Server (NTRS)

    Bonanne, Kevin H.

    2011-01-01

    Model-based Systems Engineering (MBSE) is an emerging methodology that can be leveraged to enhance many system development processes. MBSE allows for the centralization of an architecture description that would otherwise be stored in various locations and formats, thus simplifying communication among the project stakeholders, inducing commonality in representation, and expediting report generation. This paper outlines the MBSE approach taken to capture the processes of two different, but related, architectures by employing the Systems Modeling Language (SysML) as a standard for architecture description and the modeling tool MagicDraw. The overarching goal of this study was to demonstrate the effectiveness of MBSE as a means of capturing and designing a mission systems architecture. The first portion of the project focused on capturing the necessary system engineering activities that occur when designing, developing, and deploying a mission systems architecture for a space mission. The second part applies activities from the first to an application problem - the system engineering of the Orion Flight Test 1 (OFT-1) End-to-End Information System (EEIS). By modeling the activities required to create a space mission architecture and then implementing those activities in an application problem, the utility of MBSE as an approach to systems engineering can be demonstrated.

  18. A Kinetics Model for Martensite Transformation in Plain Carbon and Low-Alloyed Steels

    NASA Astrophysics Data System (ADS)

    Lee, Seok-Jae; van Tyne, Chester J.

    2012-02-01

    An empirical martensite kinetics model is proposed that both captures the sigmoidal transformation behavior for alloy steels and remains computationally efficient. The model improves on the Koistinen and Marburger model and the van Bohemen and Sietsma model with a function that better represents the transformation rate, especially during the early stages. When compared with existing models, the proposed model exhibits better predictions of volume fraction of martensite. The proposed model also predicts various other transformation properties accurately, such as M90 temperatures and retained austenite.
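
    For context, the classical Koistinen-Marburger relation that the proposed model builds on expresses the martensite fraction as an exponential function of undercooling below the martensite-start temperature. The sketch below evaluates that baseline form with the commonly quoted rate constant; the Ms value is an arbitrary example, not a value from the paper, and the sketch does not include the paper's improved rate function.

      # Koistinen-Marburger martensite fraction: f = 1 - exp(-alpha * (Ms - T)) for T < Ms.
      # alpha = 0.011 1/K is the commonly quoted value; Ms below is an arbitrary example.
      import numpy as np

      def km_fraction(T, Ms, alpha=0.011):
          """Volume fraction of martensite after quenching to temperature T
          (T and Ms must be on the same temperature scale)."""
          return np.where(T < Ms, 1.0 - np.exp(-alpha * (Ms - T)), 0.0)

      Ms = 400.0                                  # assumed martensite-start temperature [C]
      for T in (380.0, 300.0, 200.0, 100.0):
          print(f"T = {T:5.1f} C  ->  f_martensite = {km_fraction(T, Ms):.3f}")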

  19. The influence of inhomogeneity in architecture on the modelled force-length relationship of muscles.

    PubMed

    Savelberg, H H; Schamhardt, H C

    1995-02-01

    In this study a model has been developed to assess the length-force relationship of skeletal muscle, accounting for inhomogeneity in the muscular architecture. The muscle was modelled as a number of parallelepipeds parallel to but geometrically different from each other. For each of the parallelepipeds the force at a given length was calculated. The force-length relationship of the whole muscle was obtained by summing the forces of each of the parallelepipeds. Four kinds of inhomogeneities were simulated using this model: (i) variation of the aponeuroses lengths over the parallel parallelepipeds; (ii) variation of the muscle fibre lengths; (iii) variation of the pennation angles; (iv) variation of the fibre lengths keeping the pennation angle constant. By varying geometry parameters, a total of 1000 inhomogeneous muscles was created. The force-length relationships were also calculated neglecting the inhomogeneities, by taking the extreme or the median value of the inhomogeneous parameter as descriptive of a homogeneous model. It was found that an inhomogeneity of the aponeuroses length has only minimal influence on the calculated force-length relationship. For the three other simulated inhomogeneities, considerable differences were found. Especially when the model neglecting inhomogeneities used one of the extreme architectural values, unacceptably large differences from the model considering the inhomogeneities occurred. The model was also applied to the equine M. flexor carpi radialis. Considerable differences in the calculated force-length relationship were found after incorporating inhomogeneity. These appeared to depend largely on the pennation angles, either being kept constant or changing with varying fibre lengths over the parallel parallelepipeds. PMID:7896861

  20. View factor modeling of sputter-deposition on micron-scale-architectured surfaces exposed to plasma

    NASA Astrophysics Data System (ADS)

    Huerta, C. E.; Matlock, T. S.; Wirz, R. E.

    2016-03-01

    The sputter-deposition on surfaces exposed to plasma plays an important role in the erosion behavior and overall performance of a wide range of plasma devices. Plasma models in the low density, low energy plasma regime typically neglect micron-scale surface feature effects on the net sputter yield and erosion rate. The model discussed in this paper captures such surface architecture effects via a computationally efficient view factor model. The model compares well with experimental measurements of argon ion sputter yield from a nickel surface with a triangle wave geometry with peak heights in the hundreds of microns range. Further analysis with the model shows that increasing the surface pitch angle beyond about 45° can lead to significant decreases in the normalized net sputter yield for all simulated ion incident energies (i.e., 75, 100, 200, and 400 eV) for both smooth and roughened surfaces. At higher incident energies, smooth triangular surfaces exhibit a nonmonotonic trend in the normalized net sputter yield with surface pitch angle with a maximum yield above unity over a range of intermediate angles. The resulting increased erosion rate occurs because increased sputter yield due to the local ion incidence angle outweighs increased deposition due to the sputterant angular distribution. The model also compares well with experimentally observed radial expansion of protuberances (measuring tens of microns) in a nano-rod field exposed to an argon beam. The model captures the coalescence of sputterants at the protuberance sites and accurately illustrates the structure's expansion due to deposition from surrounding sputtering surfaces; these capabilities will be used for future studies into more complex surface architectures.

  1. A Generic Model to Simulate Air-Borne Diseases as a Function of Crop Architecture

    PubMed Central

    Casadebaig, Pierre; Quesnel, Gauthier; Langlais, Michel; Faivre, Robert

    2012-01-01

    In a context of pesticide use reduction, alternatives to chemical-based crop protection strategies are needed to control diseases. Crop and plant architectures can be viewed as levers to control disease outbreaks by affecting microclimate within the canopy or pathogen transmission between plants. Modeling and simulation is a key approach to help analyze the behaviour of such systems where direct observations are difficult and tedious. Modeling permits the joining of concepts from ecophysiology and epidemiology to define structures and functions generic enough to describe a wide range of epidemiological dynamics. Additionally, this conception should minimize computing time by both limiting the complexity and setting an efficient software implementation. In this paper, our aim was to present a model that suited these constraints so it could first be used as a research and teaching tool to promote discussions about epidemic management in cropping systems. The system was modelled as a combination of individual hosts (population of plants or organs) and infectious agents (pathogens) whose contacts are restricted through a network of connections. The system dynamics were described at an individual scale. Additional attention was given to the identification of generic properties of host-pathogen systems to widen the model's applicability domain. Two specific pathosystems with contrasted crop architectures were considered: ascochyta blight on pea (homogeneously layered canopy) and potato late blight (lattice of individualized plants). The model behavior was assessed by simulation and sensitivity analysis and these results were discussed against the model ability to discriminate between the defined types of epidemics. Crop traits related to disease avoidance resulting in a low exposure, a slow dispersal or a de-synchronization of plant and pathogen cycles were shown to strongly impact the disease severity at the crop scale. PMID:23226209
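
    The host-pathogen structure described above, individual hosts whose infectious contacts are restricted by a network of connections, can be illustrated with a very small stochastic susceptible-infected simulation on a lattice of plants. The lattice neighbourhood, infection probability and time horizon below are arbitrary assumptions for illustration, not the model's parameterisation for ascochyta blight or potato late blight.

      # Toy stochastic SI epidemic on a lattice of plants: infection can only pass along
      # the contact network, so crop architecture (here, the lattice neighbourhood)
      # constrains spread. Parameters are arbitrary, not the paper's pathosystems.
      import random

      random.seed(42)
      ROWS, COLS = 20, 20
      P_TRANSMIT = 0.2                       # per-contact, per-step infection probability

      def neighbours(r, c):
          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              rr, cc = r + dr, c + dc
              if 0 <= rr < ROWS and 0 <= cc < COLS:
                  yield rr, cc

      infected = {(ROWS // 2, COLS // 2)}    # single initial focus in the middle of the field
      for step in range(30):
          new_cases = set()
          for plant in infected:
              for nb in neighbours(*plant):
                  if nb not in infected and random.random() < P_TRANSMIT:
                      new_cases.add(nb)
          infected |= new_cases
          print(f"step {step:2d}: {len(infected):3d} / {ROWS * COLS} plants infected")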

  2. A generic model to simulate air-borne diseases as a function of crop architecture.

    PubMed

    Casadebaig, Pierre; Quesnel, Gauthier; Langlais, Michel; Faivre, Robert

    2012-01-01

    In a context of pesticide use reduction, alternatives to chemical-based crop protection strategies are needed to control diseases. Crop and plant architectures can be viewed as levers to control disease outbreaks by affecting microclimate within the canopy or pathogen transmission between plants. Modeling and simulation is a key approach to help analyze the behaviour of such systems where direct observations are difficult and tedious. Modeling permits the joining of concepts from ecophysiology and epidemiology to define structures and functions generic enough to describe a wide range of epidemiological dynamics. Additionally, this conception should minimize computing time by both limiting the complexity and setting an efficient software implementation. In this paper, our aim was to present a model that suited these constraints so it could first be used as a research and teaching tool to promote discussions about epidemic management in cropping systems. The system was modelled as a combination of individual hosts (population of plants or organs) and infectious agents (pathogens) whose contacts are restricted through a network of connections. The system dynamics were described at an individual scale. Additional attention was given to the identification of generic properties of host-pathogen systems to widen the model's applicability domain. Two specific pathosystems with contrasted crop architectures were considered: ascochyta blight on pea (homogeneously layered canopy) and potato late blight (lattice of individualized plants). The model behavior was assessed by simulation and sensitivity analysis and these results were discussed against the model ability to discriminate between the defined types of epidemics. Crop traits related to disease avoidance resulting in a low exposure, a slow dispersal or a de-synchronization of plant and pathogen cycles were shown to strongly impact the disease severity at the crop scale. PMID:23226209

  3. Infra-Free® (IF) Architecture System as the Method for Post-Disaster Shelter Model

    NASA Astrophysics Data System (ADS)

    Chang, Huai-Chien; Anilir, Serkan

    Currently, the International Space Station (ISS) is capable of supporting 3 to 4 astronauts onboard for at least 6 months, using an integrated life support system to meet the needs of the crew. Waste from the crew members' daily life is collected by waste recycling systems, electricity is obtained from solar energy, and so on. Although this resembles the infrastructure we use on Earth, the ISS can be regarded as a nearly self-reliant integrated architecture, and this offers an important hint for current terrestrial architecture, which relies on centralized urban infrastructure to support our daily lives but can be vulnerable to natural disasters. Increasingly, economic activities and communications depend on this enormous centralized urban infrastructure, and a natural disaster may cut the infrastructure off temporarily or permanently. To address this problem, we propose to design a temporary shelter that is capable of working without depending on any existing infrastructure. We propose to bring closed-life-cycle and integrated technologies, inspired by the possibilities of space and other emerging technologies, into everyday architecture using the Infra-free® design framework, which integrates various life-supporting infrastructural elements into one closed system. We work on a scenario for post-disaster management housing as a method for combining solutions to lifeline problems such as solid and liquid waste, energy, water and hygiene into one system, and aim to establish an Infra-free® shelter model for disaster areas. The ultimate objective is to design a Temp Infra-free® model dealing with the sanitation and environmental preservation concerns of disaster areas.

  4. Modelling Dynamic Decision Making with the ACT-R Cognitive Architecture

    NASA Astrophysics Data System (ADS)

    Peebles, David; Banks, Adrian

    2010-12-01

    This paper describes a model of dynamic decision making in the Dynamic Stocks and Flows (DSF) task, developed using the ACT-R cognitive architecture. This task is a simple simulation of a water tank in which the water level must be kept constant whilst the inflow and outflow changes at varying rates. The basic functions of the model are based around three steps. Firstly, the model predicts the water level in the next cycle by adding the current water level to the predicted net inflow of water. Secondly, based on this projection, the net outflow of the water is adjusted to bring the water level back to the target. Thirdly, the predicted net inflow of water is adjusted to improve its accuracy in the future. If the prediction has overestimated net inflow then it is reduced, if it has underestimated net inflow it is increased. The model was entered into a model comparison competition—the Dynamic Stocks and Flows Challenge—to model human performance on four conditions of the DSF task and then subject the model to testing on five unseen transfer conditions. The model reproduced the main features of the development data reasonably well but did not reproduce human performance well under the transfer conditions. This suggests that the principles underlying human performance across the different conditions differ considerably despite their apparent similarity. Further lessons for the future development of our model and model comparison challenges are considered.
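
    The three steps described in the abstract (predict the next water level, adjust outflow toward the target, correct the inflow prediction) map directly onto a small control loop. The sketch below implements that loop with made-up inflow dynamics, target and correction gain; it is a schematic illustration of the described logic, not the authors' ACT-R model.

      # Three-step controller for a stocks-and-flows tank, following the abstract's logic:
      # (1) predict the next level from the predicted net inflow, (2) set outflow to steer
      # the level back to the target, (3) correct the inflow prediction from observed error.
      # Inflow profile and correction gain are illustrative, not the ACT-R model itself.

      TARGET = 4.0
      level = 2.0
      predicted_inflow = 0.0
      correction_gain = 0.5

      def environment_inflow(t):
          return 1.0 + 0.5 * t                     # made-up ramping inflow

      for t in range(15):
          inflow = environment_inflow(t)
          # Step 1: predict where the level would drift with no outflow adjustment.
          predicted_level = level + predicted_inflow
          # Step 2: choose outflow so the predicted level returns to the target.
          outflow = predicted_level - TARGET
          # Environment update with the *actual* inflow.
          level = level + inflow - outflow
          # Step 3: correct the inflow prediction from the observed prediction error.
          predicted_inflow += correction_gain * (inflow - predicted_inflow)
          print(f"t={t:2d} inflow={inflow:5.2f} outflow={outflow:5.2f} level={level:5.2f}")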

  5. Nonlinear model of a distribution transformer appropriate for evaluating the effects of unbalanced loads

    NASA Astrophysics Data System (ADS)

    Toman, Matej; Štumberger, Gorazd; Štumberger, Bojan; Dolinar, Drago

    Power packages for calculation of power system transients are often used when studying and designing electromagnetic power systems. An accurate model of a distribution transformer is needed in order to obtain realistic values from these calculations. This transformer model must be derived in such a way that it is applicable when calculating those operating conditions appearing in practice. Operation conditions where transformers are loaded with nonlinear and unbalanced loads are especially challenging. The purpose of this work is to derive a three-phase transformer model that is appropriate for evaluating the effects of nonlinear and unbalanced loads. A lumped parameter model instead of a finite element (FE) model is considered in order to ensure that the model can be used in power packages for the calculation of power system transients. The transformer model is obtained by coupling electric and magnetic equivalent circuits. The magnetic equivalent circuit contains only three nonlinear reluctances, which represent nonlinear behaviour of the transformer. They are calculated by the inverse Jiles-Atherton (J-A) hysteresis model, while parameters of hysteresis are identified using differential evolution (DE). This considerably improves the accuracy of the derived transformer model. Although the obtained transformer model is simple, the simulation results show good agreement between measured and calculated results.

  6. Hierarchical fiber bundle model to investigate the complex architectures of biological materials.

    PubMed

    Pugno, Nicola M; Bosia, Federico; Abdalrahman, Tamer

    2012-01-01

    The mechanics of fiber bundles has been widely studied in the literature, and fiber bundle models in particular have provided a wealth of useful analytical and numerical results for modeling ordinary materials. These models, however, are inadequate to treat bioinspired nanostructured materials, where hierarchy, multiscale, and complex properties play a decisive role in determining the overall mechanical characteristics. Here, we develop an ad hoc hierarchical theory designed to tackle these complex architectures, thus allowing the determination of the strength of macroscopic hierarchical materials from the properties of their constituents at the nanoscale. The roles of finite size, twisting angle, and friction are also included. Size effects on the statistical distribution of fiber strengths naturally emerge without invoking best-fit or unknown parameters. A comparison between the developed theory and various experimental results on synthetic and natural materials yields considerable agreement. PMID:22400587
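
    For readers unfamiliar with the baseline that the hierarchical theory generalises, the sketch below simulates the classical single-level, equal-load-sharing fiber bundle model: fibers with random strength thresholds share the applied load equally, and each failure redistributes load and can trigger a cascade. The Weibull strength distribution and load levels are assumptions for illustration; this is the standard textbook model, not the paper's hierarchical formulation.

      # Classical equal-load-sharing fiber bundle model: quasi-static response of N fibers
      # with random strength thresholds under a prescribed load per fiber.
      import numpy as np

      rng = np.random.default_rng(7)
      N = 10_000
      thresholds = np.sort(rng.weibull(a=2.0, size=N))      # fiber strengths (Weibull, shape 2)

      def surviving_fraction(applied_load_per_fiber):
          """Iterate load redistribution until no more fibers fail (or all have failed)."""
          intact = N
          while True:
              load_per_fiber = applied_load_per_fiber * N / intact   # equal load sharing
              still_intact = int(np.sum(thresholds > load_per_fiber))
              if still_intact == intact or still_intact == 0:
                  return still_intact / N
              intact = still_intact

      for F in (0.2, 0.4, 0.5, 0.6):
          print(f"load/fiber = {F:.2f} -> surviving fraction = {surviving_fraction(F):.3f}")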

  7. A hybrid fast Hankel transform algorithm for electromagnetic modeling

    USGS Publications Warehouse

    Anderson, W.L.

    1989-01-01

    A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram (called HYBFHT) written in standard Fortran-77 provides a simple user interface to call either subalgorithm. The hybrid approach is an attempt to combine the best features of the two subalgorithms to minimize the user's coding requirements and to provide fast execution and good accuracy for a large class of electromagnetic problems involving various related Hankel transform sets with multiple arguments. Special cases of Hankel transforms of double-order and double-argument are discussed, where use of HYBFHT is shown to be advantageous for oscillatory kernel functions. -Author

  8. Transform-both-sides nonlinear models for in vitro pharmacokinetic experiments.

    PubMed

    Latif, A H M Mahbub; Gilmour, Steven G

    2015-06-01

    Transform-both-sides nonlinear models have proved useful in many experimental applications including those in pharmaceutical sciences and biochemistry. The maximum likelihood method is commonly used to fit transform-both-sides nonlinear models, where the regression and transformation parameters are estimated simultaneously. In this paper, an analysis of variance-based method is described in detail for estimating transform-both-sides nonlinear models from randomized experiments. It estimates the transformation parameter from the full treatment model and then the regression parameters are estimated conditionally on this estimate of the transformation parameter. The analysis of variance method is computationally simpler compared with the maximum likelihood method of estimation and allows a more natural separation of different sources of lack of fit. Simulation studies show that the analysis of variance method can provide unbiased estimators of complex transform-both-sides nonlinear models, such as transform-both-sides random coefficient nonlinear regression models and transform-both-sides fixed coefficient nonlinear regression models with random block effects. PMID:25038072
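
    As a point of comparison for the analysis-of-variance estimator described above, the sketch below fits a transform-both-sides model by maximum likelihood: both the response and a Michaelis-Menten mean function are Box-Cox transformed with the same lambda, and the regression and transformation parameters are estimated jointly from the profile likelihood. The data, the mean function and the starting values are made up for illustration and are not taken from the paper.

      # Transform-both-sides (Box-Cox) nonlinear fit by maximum likelihood on synthetic data.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      x = np.repeat(np.array([0.5, 1, 2, 5, 10, 20.0]), 4)   # e.g. substrate concentrations
      true_vmax, true_km = 10.0, 2.0
      y = true_vmax * x / (true_km + x) * rng.lognormal(0.0, 0.15, size=x.size)

      def boxcox(z, lam):
          return np.log(z) if abs(lam) < 1e-8 else (z**lam - 1.0) / lam

      def neg_profile_loglik(params):
          vmax, km, lam = params
          if vmax <= 0 or km <= 0:
              return np.inf
          mu = vmax * x / (km + x)
          resid = boxcox(y, lam) - boxcox(mu, lam)
          n = y.size
          # Concentrated (profile) negative log-likelihood with the Box-Cox Jacobian term.
          return 0.5 * n * np.log(np.sum(resid**2) / n) - (lam - 1.0) * np.sum(np.log(y))

      fit = minimize(neg_profile_loglik, x0=[8.0, 1.0, 0.5], method="Nelder-Mead")
      print("Vmax, Km, lambda =", np.round(fit.x, 3))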

  9. Development of a Subcell Based Modeling Approach for Modeling the Architecturally Dependent Impact Response of Triaxially Braided Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Sorini, Chris; Chattopadhyay, Aditi; Goldberg, Robert K.; Kohlman, Lee W.

    2016-01-01

    Understanding the high velocity impact response of polymer matrix composites with complex architectures is critical to many aerospace applications, including engine fan blade containment systems where the structure must be able to completely contain fan blades in the event of a blade-out. Despite the benefits offered by these materials, the complex nature of textile composites presents a significant challenge for the prediction of deformation and damage under both quasi-static and impact loading conditions. The relatively large mesoscale repeating unit cell (in comparison to the size of structural components) causes the material to behave like a structure rather than a homogeneous material. Impact experiments conducted at NASA Glenn Research Center have shown the damage patterns to be a function of the underlying material architecture. Traditional computational techniques that involve modeling these materials using smeared homogeneous, orthotropic material properties at the macroscale result in simulated damage patterns that are a function of the structural geometry, but not the material architecture. In order to preserve heterogeneity at the highest length scale in a robust yet computationally efficient manner, and capture the architecturally dependent damage patterns, a previously-developed subcell modeling approach where the braided composite unit cell is approximated as a series of four adjacent laminated composites is utilized. This work discusses the implementation of the subcell methodology into the commercial transient dynamic finite element code LS-DYNA (Livermore Software Technology Corp.). Verification and validation studies are also presented, including simulation of the tensile response of straight-sided and notched quasi-static coupons composed of a T700/PR520 triaxially braided [0deg/60deg/-60deg] composite. Based on the results of the verification and validation studies, advantages and limitations of the methodology as well as plans for future work

  10. Conversion of Highly Complex Faulted Hydrostratigraphic Architectures into MODFLOW Grid for Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Pham, H. V.; Tsai, F. T.

    2013-12-01

    The USGS MODFLOW is widely used for groundwater modeling. Because of using structured grid, all layers have to be continuous throughout the model domain. This makes it difficult to generate computational grid for complex hydrostratigraphic architectures including thin and discontinuous layers, interconnections of sand units, pinch-outs, and faults. In this study, we present a technique for automatically generating MODFLOW grid for complex aquifer systems of strongly sand-clay binary heterogeneity. To do so, an indicator geostatistical method is adopted to interpolate sand and clay distributions in a gridded two-dimensional plane along the structural dip for every one-foot vertical interval. A three-dimensional gridded binary geological architecture is reconstructed by assembling all two-dimensional planes. Then, the geological architecture is converted to MODFLOW computational grid by the procedures as follows. First, we determine bed boundary elevation of sand and clay units for each vertical column. Then, we determine the total number of bed boundaries for a vertical column by projecting the bed boundaries of its adjacent four vertical columns to the column. This step is of importance to preserve flow pathways, especially for narrow connections between sand units. Finally, we determine the number of MODFLOW layers and assign layer indices to bed boundaries. A MATLAB code was developed to implement the technique. The inputs for the code are bed boundary data from well logs, a structural dip, minimal layer thickness, and the number of layers. The outputs are MODFLOW grid of sand and clay indicators. The technique is able to generate grid that preserves fault features in the geological architecture. Moreover, the code is very efficient for regenerating MODFLOW grid with different grid resolutions. The technique was applied to MODFLOW grid generation for the fluvial aquifer system in Baton Rouge, Louisiana. The study area consists of the '1,200-foot' sand, the '1
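
    The projection step described above, giving each vertical column not only its own bed boundaries but also those of its four lateral neighbours so that thin or discontinuous units still receive explicit layer interfaces, can be sketched as follows. The tiny grid, the invented elevations, the minimum-gap merging rule and the function names are assumptions for illustration; this is a simplified reading of the abstract, not the authors' MATLAB implementation.

      # Sketch of projecting neighbouring columns' bed boundaries onto each column before
      # assigning MODFLOW layers, so narrow sand connections keep explicit interfaces.
      import numpy as np

      # bed-boundary elevations (ft) per column of a tiny 3 x 3 grid (top to bottom)
      boundaries = {
          (0, 0): [0, -40, -120], (0, 1): [0, -50, -120], (0, 2): [0, -60, -120],
          (1, 0): [0, -45, -120], (1, 1): [0, -55, -80, -120], (1, 2): [0, -65, -120],
          (2, 0): [0, -50, -120], (2, 1): [0, -60, -120], (2, 2): [0, -70, -120],
      }

      def projected_boundaries(cell, min_gap=5.0):
          """Union of a column's boundaries with those of its 4 neighbours, merging
          boundaries closer than `min_gap` to avoid vanishingly thin layers."""
          r, c = cell
          elevs = set(boundaries[cell])
          for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
              elevs.update(boundaries.get(nb, []))
          merged = []
          for e in sorted(elevs, reverse=True):          # top-down
              if not merged or merged[-1] - e >= min_gap:
                  merged.append(e)
          return merged

      for cell in sorted(boundaries):
          print(cell, projected_boundaries(cell))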

  11. Quantitative Analysis and Modeling of 3-D TSV-Based Power Delivery Architectures

    NASA Astrophysics Data System (ADS)

    He, Huanyu

    As 3-D technology enters the commercial production stage, it is critical to understand different 3-D power delivery architectures on the stacked ICs and packages with through-silicon vias (TSVs). Appropriate design, modeling, analysis, and optimization approaches of the 3-D power delivery system are of foremost significance and great practical interest to the semiconductor industry in general. Based on fundamental physics of 3-D integration components, the objective of this thesis work is to quantitatively analyze the power delivery for 3D-IC systems, develop appropriate physics-based models and simulation approaches, understand the key issues, and provide potential solutions for design of 3D-IC power delivery architectures. In this work, a hybrid simulation approach is adopted as the major approach along with analytical method to examine 3-D power networks. Combining electromagnetic (EM) tools and circuit simulators, the hybrid approach is able to analyze and model micrometer-scale components as well as centimeter-scale power delivery system with high accuracy and efficiency. The parasitic elements of the components on the power delivery can be precisely modeled by full-wave EM solvers. Stack-up circuit models for the 3-D power delivery networks (PDNs) are constructed through a partition and assembly method. With the efficiency advantage of the SPICE circuit simulation, the overall 3-D system power performance can be analyzed and the 3-D power delivery architectures can be evaluated in a short computing time. The major power delivery issues are the voltage drop (IR drop) and voltage noise. With a baseline of 3-D power delivery architecture, the on-chip PDNs of TSV-based chip stacks are modeled and analyzed for the IR drop and AC noise. The basic design factors are evaluated using the hybrid approach, such as the number of stacked chips, the number of TSVs, and the TSV arrangement. Analytical formulas are also developed to evaluate the IR drop in 3-D chip stack in

  12. Model of a DNA-protein complex of the architectural monomeric protein MC1 from Euryarchaea.

    PubMed

    Paquet, Françoise; Delalande, Olivier; Goffinont, Stephane; Culard, Françoise; Loth, Karine; Asseline, Ulysse; Castaing, Bertrand; Landon, Celine

    2014-01-01

    In Archaea the two major modes of DNA packaging are wrapping by histone proteins or bending by architectural non-histone proteins. To supplement our knowledge about the binding mode of the different DNA-bending proteins observed across the three domains of life, we present here the first model of a complex in which the monomeric Methanogen Chromosomal protein 1 (MC1) from Euryarchaea binds to the concave side of a strongly bent DNA. In laboratory growth conditions MC1 is the most abundant architectural protein present in Methanosarcina thermophila CHTI55. Like most proteins that strongly bend DNA, MC1 is known to bind in the minor groove. Interaction areas for MC1 and DNA were mapped by Nuclear Magnetic Resonance (NMR) data. The polarity of protein binding was determined using paramagnetic probes attached to the DNA. The first structural model of the DNA-MC1 complex we propose here was obtained by two complementary docking approaches and is in good agreement with the experimental data previously provided by electron microscopy and biochemistry. Residues essential to DNA-binding and -bending were highlighted and confirmed by site-directed mutagenesis. It was found that the Arg25 side-chain was essential to neutralize the negative charge of two phosphates that come very close in response to a dramatic curvature of the DNA. PMID:24558431

  13. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions

    PubMed Central

    Potter, Gail E.; Smieszek, Timo; Sailer, Kerstin

    2015-01-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0–5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models. PMID:26634122

  14. The effect of geometrical assumptions in modeling solid-state transformation kinetics

    SciTech Connect

    Leeuwen, Y. van; Vooijs, S.; Sietsma, J.; Zwaag, S. van der

    1998-12-01

    In the quest for the ideal transformation model describing the austenite decomposition in steel, emphasis shifts from empirical to physical models. This has resulted in the widely used description of the transformation by means of the interface velocity between the parent phase and the newly formed phase, a description which yields reliable predictions of the transformation behavior only when combined with a realistic austenite geometry. This article deals with a single-grain austenite geometry model applied to transformations in which the interface velocity is constant throughout the transformation, e.g., certain types of massive transformations. The selected geometry is a regular tetrakaidecahedron, combining topological features of a random Voronoi distribution with the advantages of single-grain calculations. The simulations show the influence of the ferrite-nucleus density, the relative positions of the ferrite nuclei inside the austenite grain, and the grain-size distribution. From simulations with a constant interface velocity, the transformation behavior for a tetrakaidecahedron is in agreement with transformation kinetics in terms of the Johnson-Mehl-Avrami (JMA) model. Using the tetrakaidecahedron geometry, one can simulate transformation curves that can be experimentally obtained by calorimetry or dilatometry, in order to study the quantities affecting the transformation behavior.
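
    The Johnson-Mehl-Avrami kinetics against which the tetrakaidecahedron simulations are compared can be written in the common form X(t) = 1 - exp(-(kt)^n). The sketch below evaluates that expression with an illustrative rate constant and Avrami exponent; the parameter values are assumptions, not fitted to the article's simulations.

      # Johnson-Mehl-Avrami (JMA) transformed fraction, one common form of the equation.
      import numpy as np

      def jma_fraction(t, k, n):
          return 1.0 - np.exp(-(k * t) ** n)

      k, n = 0.05, 3.0          # assumed rate constant [1/s] and Avrami exponent
      for t in (5.0, 10.0, 20.0, 40.0, 80.0):
          print(f"t = {t:5.1f} s  ->  X = {jma_fraction(t, k, n):.3f}")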

  15. Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures

    NASA Astrophysics Data System (ADS)

    Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.

    2014-12-01

    In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes, which will increase the size of current data archives by a factor of 10-100 times. For example, the next Climate Model Inter-comparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 Peta-bytes, while the upcoming next generation of NASA decadal Earth Observing instruments are expected to collect tens of Giga-bytes/day. In radio-astronomy, the Square Kilometre Array (SKA) will collect data in the Exa-bytes/day range, of which (after reduction and processing) around 1.5 Exa-bytes/year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that will allow system architects to model their expected data processing workflow, and determine the network, computational and storage resources needed to prepare their data for scientific analysis. In order to fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrary complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources), and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for 4 different use cases from distinct science disciplines: climate science, astronomy, hydrology and a generic cloud computing use case. This talk will present

  16. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES Beta

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; Brandt, Steven R.; Ciznicki, Milosz; Kierzynka, Michal; Löffler, Frank; Schnetter, Erik; Tao, Jian

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  17. Analysis of optical near-field energy transfer by stochastic model unifying architectural dependencies

    SciTech Connect

    Naruse, Makoto; Akahane, Kouichi; Yamamoto, Naokatsu; Holmström, Petter; Thylén, Lars; Huant, Serge; Ohtsu, Motoichi

    2014-04-21

    We theoretically and experimentally demonstrate energy transfer mediated by optical near-field interactions in a multi-layer InAs quantum dot (QD) structure composed of a single layer of larger dots and N layers of smaller ones. We construct a stochastic model in which optical near-field interactions that follow a Yukawa potential, QD size fluctuations, and temperature-dependent energy level broadening are unified, enabling us to examine device-architecture-dependent energy transfer efficiencies. The model results are consistent with the experiments. This study provides an insight into optical energy transfer involving inherent disorders in materials and paves the way to systematic design principles of nanophotonic devices that will allow optimized performance and the realization of designated functions.
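
    For context, a Yukawa-type interaction potential of the kind invoked above is conventionally written (with interaction strength U_0 and interaction range a; symbols chosen here for illustration, not taken from the paper) as

        U(r) = U_0\,\frac{\exp(-r/a)}{r}.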

  18. Publishing biomedical journals on the World-Wide Web using an open architecture model.

    PubMed Central

    Shareck, E. P.; Greenes, R. A.

    1996-01-01

    BACKGROUND: In many respects, biomedical publications are ideally suited for distribution via the World-Wide Web, but economic concerns have prevented the rapid adoption of an on-line publishing model. PURPOSE: We report on our experiences with assisting biomedical journals in developing an online presence, issues that were encountered, and methods used to address these issues. Our approach is based on an open architecture that fosters adaptation and interconnection of biomedical resources. METHODS: We have worked with the New England Journal of Medicine (NEJM), as well as five other publishers. A set of tools and protocols was employed to develop a scalable and customizable solution for publishing journals on-line. RESULTS: In March, 1996, the New England Journal of Medicine published its first World-Wide Web issue. Explorations with other publishers have helped to generalize the model. CONCLUSIONS: Economic and technical issues play a major role in developing World-Wide Web publishing solutions. PMID:8947685

  19. Wind Evaluation Breadboard: mechanical design and analysis, control architecture, dynamic model, and performance simulation

    NASA Astrophysics Data System (ADS)

    Reyes García-Talavera, Marcos; Viera, Teodora; Núñez, Miguel; Zuluaga, Pablo; Ronquillo, Bernardo; Ronquillo, Mariano; Brunetto, Enzo; Quattri, Marco; Castro, Javier; Hernández, Elvio

    2008-07-01

    The Wind Evaluation Breadboard (WEB) for the European Extremely Large Telescope (ELT) is a primary mirror and telescope simulator formed by seven segment simulators, including position sensors, electromechanical support systems and support structures. The purpose of the WEB is to evaluate the performance of the control of wind buffeting disturbance on ELT segmented mirrors using an electro-mechanical set-up which simulates the real operational constraints applied to large segmented mirrors. The instrument has been designed and developed by IAC, ALTRAN, JUPASA and ESO, with FOGALE responsible for the Edge Sensors and TNO for the Position Actuators. This paper describes the mechanical design and analysis, the control architecture, the dynamic model generated from the Finite Element Model, and the closed-loop performance achieved in simulations. A comparison of control performance between segment modal control and local actuator control is also presented.

  20. Evaluation of minor hysteresis loops using Langevin transforms in modified inverse Jiles-Atherton model

    NASA Astrophysics Data System (ADS)

    Hamimid, M.; Mimoune, S. M.; Feliachi, M.

    2013-11-01

    In this paper, we present a Langevin transforms model that accurately evaluates minor hysteresis loops for the modified inverse Jiles-Atherton model, using appropriate expressions to improve the minor hysteresis loop characteristics. The parameters of the minor hysteresis loops are related to the parameters of the major hysteresis loop, according to each level of maximal induction, through Langevin transforms expressions. The stochastic optimization method “simulated annealing” is used to determine the Langevin transforms coefficients. The model needs only two experimental tests to generate all hysteresis loops. The validity of the Langevin transforms model is confirmed by comparing calculated minor hysteresis loops with measured ones; good agreement is obtained, with better results than the exponential transforms model (Hamimid et al., 2011 [4]).
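
    For reference, the Langevin function at the core of these transforms, and the Jiles-Atherton anhysteretic magnetization built on it, take the standard forms below (saturation magnetization M_s, shape parameter a, inter-domain coupling α; standard Jiles-Atherton symbols assumed here, not quoted from the paper):

        L(x) = \coth(x) - \frac{1}{x}, \qquad M_{an} = M_s\,L\!\left(\frac{H + \alpha M}{a}\right).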

  1. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  2. Key Technology Research on Open Architecture for The Sharing of Heterogeneous Geographic Analysis Models

    NASA Astrophysics Data System (ADS)

    Yue, S. S.; Wen, Y. N.; Lv, G. N.; Hu, D.

    2013-10-01

    In recent years, the increasing development of cloud computing technologies has laid a critical foundation for efficiently solving complicated geographic issues. However, it is still difficult to realize the cooperative operation of massive heterogeneous geographical models. Traditional cloud architecture tends to provide centralized solutions to end users, while all the required resources are often offered by large enterprises or special agencies. It is thus a closed framework from the perspective of resource utilization. Solving comprehensive geographic issues requires integrating multifarious heterogeneous geographical models and data. In this case, an open computing platform is needed, with which model owners can conveniently package and deploy their models into the cloud, while model users can search, access and utilize those models with cloud facilities. Based on this concept, open cloud service strategies for the sharing of heterogeneous geographic analysis models are studied in this article. The key technologies, namely a unified cloud interface strategy, a sharing platform based on cloud services, and a computing platform based on cloud services, are discussed in detail, and related experiments are conducted for further verification.

  3. A Grid-Based Architecture for Coupling Hydro-Meteorological Models

    NASA Astrophysics Data System (ADS)

    Schiffers, Michael; Straube, Christian; gentschen Felde, Nils; Clematis, Andrea; Galizia, Antonella; D'Agostino, Daniele; Danovaro, Emanuele

    2014-05-01

    Computational hydro-meteorological research (HMR) requires the execution of various meteorological, hydrological, hydraulic, and impact models, either standalone or as well-orchestrated chains (workflows). While the former approach is straightforward, the latter is not, because consecutive models may depend on different execution environments, on organizational constraints, and on separate data formats and semantics to be bridged. Consequently, in order to gain the most benefit from HMR model chains, it is of paramount interest a) to seamlessly couple heterogeneous models; b) to access models and data in various administrative domains; c) to execute models on the most appropriate resources available at the right time. In this contribution we present our experience in using a Grid-based computing infrastructure for HMR. In particular, we first explore various coupling mechanisms. We then specify an enabling Grid infrastructure to support dynamic model chains. Using the DRIHM project as an example, we report on implementation details, especially in the context of the European Grid Infrastructure (EGI). Finally, we apply the architecture to hydro-meteorological disaster management and elaborate on the opportunities the Grid infrastructure approach offers in a worldwide context.

  4. PREDICTING SUBSURFACE CONTAMINANT TRANSPORT AND TRANSFORMATION: CONSIDERATIONS FOR MODEL SELECTION AND FIELD VALIDATION

    EPA Science Inventory

    Predicting subsurface contaminant transport and transformation requires mathematical models based on a variety of physical, chemical, and biological processes. The mathematical model is an attempt to quantitatively describe observed processes in order to permit systematic forecas...

  5. Equivalent circuit of radio frequency-plasma with the transformer model.

    PubMed

    Nishida, K; Mochizuki, S; Ohta, M; Yasumoto, M; Lettry, J; Mattei, S; Hatayama, A

    2014-02-01

    The LINAC4 H(-) source is a radio frequency (RF) driven source. In the RF system, the load impedance, which includes the H(-) source, must be matched to that of the final amplifier. We model the RF plasma inside the H(-) source as circuit elements using a transformer model so that the characteristics of the load impedance become calculable. It has been shown that modeling based on the transformer model works well to predict the resistance and inductance of the plasma. PMID:24593557
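
    In a transformer model of an inductively coupled plasma, the plasma is commonly treated as a lossy single-turn secondary coupled to the antenna coil; the load impedance seen at the antenna terminals then takes the standard reflected-impedance form below (antenna inductance L_1, mutual inductance M, plasma resistance R_2 and inductance L_2; symbols assumed here, with the antenna's own resistance neglected), not necessarily the exact formulation of this paper:

        Z_{in} = j\omega L_1 + \frac{(\omega M)^2}{R_2 + j\omega L_2}.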

  6. Multiresolution modeling with a JMASS-JWARS high-level architecture (HLA) federation

    NASA Astrophysics Data System (ADS)

    Plotz, Gary A.; Prince, John

    2003-09-01

    Traditionally, acquisition analyses require a hierarchical suite of simulation models to address engineering, engagement, mission and theater/campaign measures of performance, measures of effectiveness and measures of merit. Configuring and running this suite of simulations and transferring the appropriate data between each model are both time consuming and error prone. The ideal solution would be a single simulation with the requisite resolution and fidelity to perform all four levels of acquisition analysis. However, current computer hardware technologies cannot deliver the runtime performance necessary to support the resulting "extremely large" simulation. One viable alternative is to "integrate" the current hierarchical suite of simulation models using the DoD's High Level Architecture (HLA) in order to support multi-resolution modeling. An HLA integration -- called a federation -- eliminates the problem of "extremely large" models, provides a well-defined and manageable mixed resolution simulation and minimizes Verification, Validation, and Accreditation (VV&A) issues. This paper describes the process and results of integrating the Joint Modeling and Simulation System (JMASS) and the Joint Warfare System (JWARS) simulations -- two of the Department of Defense's (DoD) next-generation simulations -- using a HLA federation.

  7. Forecasting performance of denoising signal by Wavelet and Fourier Transforms using SARIMA model

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Tahir; Mamat, Siti Salwana; Hamzah, Firdaus Mohamad; Karim, Samsul Ariffin Abdul

    2014-07-01

    The goal of this research is to determine the forecasting performance of denoised signals. Monthly rainfall and the monthly number of rain days over 20 years (1990-2009) from the Bayan Lepas station are used as the case study. The Fast Fourier Transform (FFT) and the Wavelet Transform (WT) are used to obtain the denoised signals. The denoised data obtained from the Fast Fourier Transform and the Wavelet Transform are then analysed with a seasonal ARIMA (SARIMA) model. The best-fitted model is determined by the minimum MSE. The results indicate that the Wavelet Transform is more effective than the Fast Fourier Transform at denoising the monthly rainfall and number of rain days signals.
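
    The sketch below illustrates the general workflow described above: wavelet-denoise a monthly series, then fit a SARIMA model to the denoised signal. The synthetic data, wavelet, threshold rule and (S)ARIMA orders are illustrative assumptions, not values taken from the paper; pywt and statsmodels are assumed to be available.

```python
# Hedged sketch of the workflow described above: wavelet-denoise a monthly
# series, then fit a SARIMA model to the denoised signal and report its
# in-sample MSE.
import numpy as np
import pywt
from statsmodels.tsa.statespace.sarimax import SARIMAX

def wavelet_denoise(x, wavelet="db4", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

rng = np.random.default_rng(0)
months = np.arange(240)  # 20 years of monthly data
rainfall = 200 + 80 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 30, months.size)

denoised = wavelet_denoise(rainfall)
fit = SARIMAX(denoised, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
print("in-sample MSE:", np.mean(fit.resid ** 2))
```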

  8. A physically based model for the isothermal martensitic transformation in a maraging steel

    NASA Astrophysics Data System (ADS)

    Kruijver, S. O.; Blaauw, H. S.; Beyer, J.; Post, J.

    2003-10-01

    Isothermal transformation from austenite to martensite in steel products during or after the production process often produces residual stresses which can create unacceptable dimensional changes in the final product. In order to gain more insight into the effects influencing the isothermal transformation, the overall kinetics in a low carbon-nickel maraging steel is investigated. The influence of the austenitizing temperature, time and quenching rate on the transformation is measured magnetically and yields information about the transformation rate and the final amount of transformation. A physically based model describing the nucleation and growth of martensite is used to explain the observed effects. The results show a very good fit between the experimental values and the model description of the transformation, within the limitations of the inhomogeneities (carbides and intermetallics, their size and distribution in the material, and the stress state) and the experimental conditions.

  9. Architectural Methodology Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    The establishment of conventions between two communicating entities in the end systems is essential for communications. Examples of the kinds of decisions that need to be made in establishing a protocol convention include the nature of the data representation, the format and the speed of the data representation over the communications path, and the sequence of control messages (if any) which are sent. One of the main functions of a protocol is to establish a standard path between the communicating entities. This is necessary to create a virtual communications medium with certain desirable characteristics. In essence, it is the function of the protocol to transform the characteristics of the physical communications environment into a more useful virtual communications model. The final function of a protocol is to establish standard data elements for communications over the path; that is, the protocol serves to create a virtual data element for exchange. Other systems may be constructed in which the transferred element is a program or a job. Finally, there are special purpose applications in which the element to be transferred may be a complex structure such as all or part of a graphic display. NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to describe the methodologies used in developing a protocol architecture for an in-space Internet node. The node would support NASA's four mission areas: Earth Science; Space Science; Human Exploration and Development of Space (HEDS); Aerospace Technology. This report presents the methodology for developing the protocol architecture. The methodology addresses the architecture for a computer communications environment. It does not address an analog voice architecture.

  10. Use of Dynamic Models and Operational Architecture to Solve Complex Navy Challenges

    NASA Technical Reports Server (NTRS)

    Grande, Darby; Black, J. Todd; Freeman, Jared; Sorber, Tim; Serfaty, Daniel

    2010-01-01

    The United States Navy established 8 Maritime Operations Centers (MOC) to enhance the command and control of forces at the operational level of warfare. Each MOC is a headquarters manned by qualified joint operational-level staffs, and enabled by globally interoperable C4I systems. To assess and refine MOC staffing, equipment, and schedules, a dynamic software model was developed. The model leverages pre-existing operational process architecture, joint military task lists that define activities and their precedence relations, as well as Navy documents that specify manning and roles per activity. The software model serves as a "computational wind-tunnel" in which to test a MOC on a mission, and to refine its structure, staffing, processes, and schedules. More generally, the model supports resource allocation decisions concerning Doctrine, Organization, Training, Materiel, Leadership, Personnel and Facilities (DOTMLPF) at MOCs around the world. A rapid prototype effort efficiently produced this software in less than five months, using an integrated process team consisting of MOC military and civilian staff, modeling experts, and software developers. The work reported here was conducted for Commander, United States Fleet Forces Command in Norfolk, Virginia, code N5-OLW (Operational Level of War), which facilitates the identification, consolidation, and prioritization of MOC capabilities requirements, and the implementation and delivery of MOC solutions.

  11. Assessment of model uncertainty during the river export modelling of pesticides and transformation products

    NASA Astrophysics Data System (ADS)

    Gassmann, Matthias; Olsson, Oliver; Kümmerer, Klaus

    2013-04-01

    The modelling of organic pollutants in the environment is burdened by a load of uncertainties. Not only are parameter values uncertain, but often so are the mass and timing of pesticide application. Introducing transformation products (TPs) into the modelling is likely to add further uncertainty, arising from the dependence of these substances on their parent compounds and from the introduction of new model parameters. The purpose of this study was to investigate the behaviour of a parsimonious catchment-scale model for the assessment of river concentrations of the insecticide Chlorpyrifos (CP) and two of its TPs, Chlorpyrifos Oxon (CPO) and 3,5,6-trichloro-2-pyridinol (TCP), under the influence of uncertain input parameter values. Parameter uncertainty and pesticide application uncertainty in particular were investigated by Global Sensitivity Analysis (GSA) and the Generalized Likelihood Uncertainty Estimation (GLUE) method, based on Monte-Carlo sampling. GSA revealed that half-lives and sorption parameters as well as half-lives and transformation parameters were correlated with each other. This means that the concepts used to model sorption and degradation/transformation were correlated. Thus, it may be difficult in modelling studies to optimize parameter values for these modules. Furthermore, we could show that erroneous pesticide application mass and timing were compensated for during Monte-Carlo sampling by changing the half-life of CP. However, the introduction of TCP into the calculation of the objective function was able to enhance the identifiability of pesticide application mass. The GLUE analysis showed that CP and TCP were modelled successfully, but CPO modelling failed, with high uncertainty and insensitive parameters. We assumed a structural error of the model which was especially important for CPO assessment. This shows that there is the possibility that a chemical and some of its TPs can be modelled successfully by a specific model structure, but for other TPs, the model

  12. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2015-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40,000) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally conservative engine operating limits may be relaxed to increase the performance of the engine and the overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters, and over the level of risk at which the engine will operate. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.

  13. Enhanced Engine Performance During Emergency Operation Using a Model-Based Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally conservative engine operating limits may be relaxed to increase the performance of the engine and the overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters, and over the level of risk at which the engine will operate. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.

  14. Coupling Multi-Component Models with MPH on Distributed Memory Computer Architectures

    SciTech Connect

    He, Yun; Ding, Chris

    2005-03-24

    A growing trend in developing large and complex applications on today's Teraflop scale computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. One example is the Community Climate System Model which consists of atmosphere, ocean, land-surface and sea-ice components. Each component is semi-independent and has been developed at a different institution. We study how this multi-component, multi-executable application can run effectively on distributed memory architectures. For the first time, we clearly identify five effective execution modes and develop the MPH library to support application development utilizing these modes. MPH performs component-name registration, resource allocation and initial component handshaking in a flexible way.
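
    MPH's own interfaces are not described in this abstract. As an illustrative sketch only (not the MPH API), the mpi4py fragment below shows the general idea of component-name registration and handshaking for a multi-component application: each rank registers under a component name, obtains a per-component communicator, and learns where the other components live; the component names and rank-assignment rule are invented for illustration.

```python
# Illustrative sketch only (not the MPH API): register each MPI rank under a
# component name, derive a per-component communicator, and do a simple
# handshake so every rank learns where the other components live.
# Requires mpi4py; run e.g. with `mpirun -n 8 python coupling_sketch.py`.
from mpi4py import MPI

COMPONENTS = ["atmosphere", "ocean", "land", "seaice"]

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

# Toy "registration": assign components to contiguous blocks of ranks.
component = COMPONENTS[rank * len(COMPONENTS) // size]
color = COMPONENTS.index(component)

# Per-component communicator, analogous to each semi-independent model
# running on its own subset of processors.
comp_comm = world.Split(color=color, key=rank)

# Handshake: gather the (component, world rank) pairs of all ranks.
directory = world.allgather((component, rank))
if rank == 0:
    print("component layout:", directory)
```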

  15. How plant architecture affects light absorption and photosynthesis in tomato: towards an ideotype for plant architecture using a functional–structural plant model

    PubMed Central

    Sarlikioti, V.; de Visser, P. H. B.; Buck-Sorlin, G. H.; Marcelis, L. F. M.

    2011-01-01

    Background and Aims Manipulation of plant structure can strongly affect light distribution in the canopy and photosynthesis. The aim of this paper is to find a plant ideotype for optimization of light absorption and canopy photosynthesis. Using a static functional structural plant model (FSPM), a range of different plant architectural characteristics was tested for two different seasons in order to find the optimal architecture with respect to light absorption and photosynthesis. Methods Simulations were performed with an FSPM of a greenhouse-grown tomato crop. Sensitivity analyses were carried out for leaf elevation angle, leaf phyllotaxis, leaflet angle, leaf shape, leaflet arrangement and internode length. From the results of this analysis two possible ideotypes were proposed. Four different vertical light distributions were also tested, while light absorption cumulated over the whole canopy was kept the same. Key Results Photosynthesis was augmented by 6 % in winter and reduced by 7 % in summer, when light absorption in the top part of the canopy was increased by 25 %, while not changing light absorption of the canopy as a whole. The measured plant structure was already optimal with respect to leaf elevation angle, leaflet angle and leaflet arrangement for both light absorption and photosynthesis while phyllotaxis had no effect. Increasing the length : width ratio of leaves by 1·5 or increasing internode length from 7 cm to 12 cm led to an increase of 6–10 % for light absorption and photosynthesis. Conclusions At high light intensities (summer) deeper penetration of light in the canopy improves crop photosynthesis, but not at low light intensities (winter). In particular, internode length and leaf shape affect the vertical distribution of light in the canopy. A new plant ideotype with more spacious canopy architecture due to long internodes and long and narrow leaves led to an increase in crop photosynthesis of up to 10 %. PMID:21865217

  16. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model.

    PubMed

    Lo, Chiao-Ling; Lossie, Amy C; Liang, Tiebing; Liu, Yunlong; Xuei, Xiaoling; Lumeng, Lawrence; Zhou, Feng C; Muir, William M

    2016-08-01

    Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view of potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a correspondingly high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), was classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169), with the fewest in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal functions involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits. PMID:27490364

  17. High Resolution Genomic Scans Reveal Genetic Architecture Controlling Alcohol Preference in Bidirectionally Selected Rat Model

    PubMed Central

    Lo, Chiao-Ling; Liang, Tiebing; Liu, Yunlong; Lumeng, Lawrence; Zhou, Feng C.; Muir, William M.

    2016-01-01

    Investigations on the influence of nature vs. nurture on Alcoholism (Alcohol Use Disorder) in humans have yet to provide a clear view of potential genomic etiologies. To address this issue, we sequenced a replicated animal model system bidirectionally selected for alcohol preference (AP). This model is uniquely suited to map genetic effects with high reproducibility and resolution. The origin of the rat lines (an 8-way cross) resulted in small haplotype blocks (HB) with a correspondingly high level of resolution. We sequenced DNAs from 40 samples (10 per line of each replicate) to determine allele frequencies and HB. We achieved ~46X coverage per line and replicate. Excessive differentiation in the genomic architecture between lines, across replicates, termed signatures of selection (SS), was classified according to gene and region. We identified SS in 930 genes associated with AP. The majority (50%) of the SS were confined to single gene regions, the greatest numbers of which were in promoters (284) and intronic regions (169), with the fewest in exons (4), suggesting that differences in AP were primarily due to alterations in regulatory regions. We confirmed previously identified genes and found many new genes associated with AP. Of those newly identified genes, several demonstrated neuronal functions involved in synaptic memory and reward behavior, e.g. ion channels (Kcnf1, Kcnn3, Scn5a), excitatory receptors (Grin2a, Gria3, Grip1), neurotransmitters (Pomc), and synapses (Snap29). This study not only reveals the polygenic architecture of AP, but also emphasizes the importance of regulatory elements, consistent with other complex traits. PMID:27490364

  18. Model-Based Systems Engineering With the Architecture Analysis and Design Language (AADL) Applied to NASA Mission Operations

    NASA Technical Reports Server (NTRS)

    Munoz Fernandez, Michela Miche

    2014-01-01

    The potential of Model-Based Systems Engineering (MBSE) using the Architecture Analysis and Design Language (AADL) applied to space systems will be described. AADL modeling is applicable to real-time embedded systems, the types of systems NASA builds. A case study with the Juno mission to Jupiter showcases how this work would enable future missions to benefit from using these models throughout their life cycle, from design to flight operations.

  19. Research on transformation and optimization of large scale 3D modeling for real time rendering

    NASA Astrophysics Data System (ADS)

    Yan, Hu; Yang, Yongchao; Zhao, Gang; He, Bin; Shen, Guosheng

    2011-12-01

    In real-time three-dimensional scene simulation, popular modeling software and real-time rendering platforms are often not compatible. The common solution is to create the three-dimensional scene model with modeling software and then convert it to a format supported by the rendering platform. This paper takes digital campus scene simulation as an example, and analyzes and solves the problems of surface loss, texture distortion and loss, and model flicker that arise during the transformation from 3ds Max to MultiGen Creator. It also proposes an optimization strategy for the transformed model. The results show that this strategy solves the various problems encountered in the transformation and speeds up the rendering of the model.

  20. Ivory Coast-Ghana margin: model of a transform margin

    SciTech Connect

    Mascle, J.; Blarez, E.

    1987-05-01

    The authors present a marine study of the eastern Ivory Coast-Ghana continental margin, which they consider one of the most spectacular extinct transform margins. This margin was created during Early (Lower) Cretaceous time and has not been subjected to any major geodynamic reactivation since its formation. Based on this example, they propose to consider four main successive stages during the evolution of a transform margin. Shearing contact is first active between two probably thick continental crusts and then between progressively thinning continental crusts. This leads to the creation of specific geological structures such as pull-apart grabens, elongated fault lineaments, major fault scarps, shear folds, and marginal ridges. After the final continental breakup, a hot center (the mid-oceanic ridge axis) drifts progressively along the newly created margin. The contact between two lithospheres of different nature should necessarily induce, through thermal exchanges, vertical crustal readjustments. Finally, the transform margin remains directly adjacent to a hot but cooling oceanic lithosphere; its subsidence behavior should then progressively become comparable to the thermal subsidence of classic rifted margins.

  1. Modelling of Nb influence on phase transformation behaviours from austenite to ferrite in low carbon steels

    NASA Astrophysics Data System (ADS)

    Wang, L.; Parker, S. V.; Rose, A. J.; West, G. D.; Thomson, R. C.

    2016-03-01

    In this paper, a new model has been developed to predict the phase transformation behaviour from austenite to ferrite in Nb-containing low carbon steels. The new model builds on previous work and incorporates the effects of Nb on phase transformation behaviour in order to make it applicable to Nb-containing steels. Dissolved Nb atoms segregated at prior austenite grain boundaries increase the critical energy for ferrite nucleation, and thus the ferrite nucleation rate is decreased. Dissolved Nb atoms also exert a solute drag effect on the moving transformation interface, so the ferrite grain growth rate is decreased as well. The overall transformation kinetics is then calculated according to the classic Johnson-Mehl-Avrami-Kolmogorov (JMAK) theory. The new model predictions are consistent with experimental results for various steels during isothermal transformations and continuous cooling.
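
    For reference, the classic JMAK expression for the transformed fraction X as a function of time t, with effective rate constant k and Avrami exponent n (standard symbols, not taken from the paper), is

        X(t) = 1 - \exp\!\left(-k\,t^{\,n}\right).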

  2. How Plates Pull Transforms Apart: 3-D Numerical Models of Oceanic Transform Fault Response to Changes in Plate Motion Direction

    NASA Astrophysics Data System (ADS)

    Morrow, T. A.; Mittelstaedt, E. L.; Olive, J. A. L.

    2015-12-01

    Observations along oceanic fracture zones suggest that some mid-ocean ridge transform faults (TFs) previously split into multiple strike-slip segments separated by short (<~50 km) intra-transform spreading centers and then reunited to a single TF trace. This history of segmentation appears to correspond with changes in plate motion direction. Despite the clear evidence of TF segmentation, the processes governing its development and evolution are not well characterized. Here we use a 3-D, finite-difference / marker-in-cell technique to model the evolution of localized strain at a TF subjected to a sudden change in plate motion direction. We simulate the oceanic lithosphere and underlying asthenosphere at a ridge-transform-ridge setting using a visco-elastic-plastic rheology with a history-dependent plastic weakening law and a temperature- and stress-dependent mantle viscosity. To simulate the development of topography, a low density, low viscosity 'sticky air' layer is present above the oceanic lithosphere. The initial thermal gradient follows a half-space cooling solution with an offset across the TF. We impose an enhanced thermal diffusivity in the uppermost 6 km of lithosphere to simulate the effects of hydrothermal circulation. An initial weak seed in the lithosphere helps localize shear deformation between the two offset ridge axes to form a TF. For each model case, the simulation is run initially with TF-parallel plate motion until the thermal structure reaches a steady state. The direction of plate motion is then rotated either instantaneously or over a specified time period, placing the TF in a state of trans-tension. Model runs continue until the system reaches a new steady state. Parameters varied here include: initial TF length, spreading rate, and the rotation rate and magnitude of spreading obliquity. We compare our model predictions to structural observations at existing TFs and records of TF segmentation preserved in oceanic fracture zones.
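
    The half-space cooling profile referred to above has the standard form below (surface temperature T_0, mantle temperature T_m, thermal diffusivity κ, lithospheric age t; symbols assumed here for illustration):

        T(z, t) = T_0 + (T_m - T_0)\,\operatorname{erf}\!\left(\frac{z}{2\sqrt{\kappa t}}\right).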

  3. Integrating mixed-effect models into an architectural plant model to simulate inter- and intra-progeny variability: a case study on oil palm (Elaeis guineensis Jacq.).

    PubMed

    Perez, Raphaël P A; Pallas, Benoît; Le Moguédec, Gilles; Rey, Hervé; Griffon, Sébastien; Caliman, Jean-Pierre; Costes, Evelyne; Dauzat, Jean

    2016-08-01

    Three-dimensional (3D) reconstruction of plants is time-consuming and involves considerable levels of data acquisition. This is possibly one reason why the integration of genetic variability into 3D architectural models has so far been largely overlooked. In this study, an allometry-based approach was developed to account for architectural variability in 3D architectural models of oil palm (Elaeis guineensis Jacq.) as a case study. Allometric relationships were used to model architectural traits from individual leaflets to the entire crown while accounting for ontogenetic and morphogenetic gradients. Inter- and intra-progeny variabilities were evaluated for each trait and mixed-effect models were used to estimate the mean and variance parameters required for complete 3D virtual plants. Significant differences in leaf geometry (petiole length, density of leaflets, and rachis curvature) and leaflet morphology (gradients of leaflet length and width) were detected between and within progenies and were modelled in order to generate populations of plants that were consistent with the observed populations. The application of mixed-effect models on allometric relationships highlighted an interesting trade-off between model accuracy and ease of defining parameters for the 3D reconstruction of plants while at the same time integrating their observed variability. Future research will be dedicated to sensitivity analyses coupling the structural model presented here with a radiative balance model in order to identify the key architectural traits involved in light interception efficiency. PMID:27302128

  4. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

    One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products which are generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it with the point cloud. The reflection intensity of each type of material was also analyzed, trying to determine which construction materials have the highest reflectance coefficients, and which have the lowest reflection coefficients, and in turn how this variable changes for different scanning parameters. Additionally measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions, with those taken in field conditions.

  5. Analysis of Terrestrial Planet Formation by the Grand Tack Model: System Architecture and Tack Location

    NASA Astrophysics Data System (ADS)

    Brasser, R.; Matsumura, S.; Ida, S.; Mojzsis, S. J.; Werner, S. C.

    2016-04-01

    The Grand Tack model of terrestrial planet formation has emerged in recent years as the premier scenario used to account for several observed features of the inner solar system. It relies on the early migration of the giant planets to gravitationally sculpt and mix the planetesimal disk down to ∼1 au, after which the terrestrial planets accrete from material remaining in a narrow circumsolar annulus. Here, we investigate how the model fares under a range of initial conditions and migration course-change (“tack”) locations. We run a large number of N-body simulations with tack locations of 1.5 and 2 au and test initial conditions using equal-mass planetary embryos and a semi-analytical approach to oligarchic growth. We make use of a recent model of the protosolar disk that takes into account viscous heating, includes the full effect of type 1 migration, and employs a realistic mass–radius relation for the growing terrestrial planets. Our results show that the canonical tack location of Jupiter at 1.5 au is inconsistent with the most massive planet residing at 1 au at greater than 95% confidence. This favors a tack farther out at 2 au for the disk model and parameters employed. Of the different initial conditions, we find that the oligarchic case is capable of statistically reproducing the orbital architecture and mass distribution of the terrestrial planets, while the equal-mass embryo case is not.

  6. Influence of diffusive porosity architecture on kinetically-controlled reactions in mobile-immobile models

    NASA Astrophysics Data System (ADS)

    Babey, T.; Ginn, T. R.; De Dreuzy, J. R.

    2014-12-01

    Solute transport in porous media may be structured at various scales by geological features, from connectivity patterns of pores to fracture networks. This structure affects solute repartition and consequently reactivity. Here we study numerically the influence of the organization of porous volumes within diffusive porosity zones on different reactions. We couple a mobile-immobile transport model, in which an advective zone exchanges with diffusive zones of variable structure, to the geochemical modeling software PHREEQC. We focus on two kinetically-controlled reactions, a linear sorption and a nonlinear dissolution of a mineral. We show that in both cases the structure of the immobile zones has an important impact on the overall reaction rates. Through the Multi-Rate Mass Transfer (MRMT) framework, we show that this impact is very well captured by residence-time-based models for the kinetic linear sorption, as it is mathematically equivalent to a modification of the initial diffusive structure; consequently, the overall reaction rate could be easily extrapolated from a conservative tracer experiment. The MRMT models, however, struggle to reproduce the non-linearity and the threshold effects associated with the kinetic dissolution. A slower reaction, by allowing more time for diffusion to smooth out the concentration gradients, tends to increase their relevance. Figure caption: Left: Representation of a mobile-immobile model with a complex immobile architecture; the mobile zone is indicated by an arrow. Right: Total remaining mass of mineral in mobile-immobile models and in their equivalent MRMT models during a flush by a highly under-saturated solution. The models differ only in the organization of their immobile porous volumes.
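
    The MRMT framework mentioned above is commonly written as a one-dimensional mobile equation coupled to first-order exchanges with a set of immobile zones; a standard form (capacity ratios β_i and exchange rates α_i; symbols follow the usual MRMT literature rather than this abstract) is

        \frac{\partial c_m}{\partial t} + \sum_i \beta_i \frac{\partial c_{im,i}}{\partial t} = -v\,\frac{\partial c_m}{\partial x} + D\,\frac{\partial^2 c_m}{\partial x^2}, \qquad \frac{\partial c_{im,i}}{\partial t} = \alpha_i\,(c_m - c_{im,i}).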

  7. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    PubMed Central

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167

  8. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models.

    PubMed

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L; Huffman, Jennifer E; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F; Wilson, James F; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S

    2015-07-15

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167

  9. Rheology and friction along the Vema transform fault (Central Atlantic) inferred by thermal modeling

    NASA Astrophysics Data System (ADS)

    Cuffaro, Marco; Ligi, Marco

    2016-04-01

    We investigate with 3-D finite element simulations the temperature distribution beneath the Vema transform, which offsets the Mid-Atlantic Ridge by ~300 km in the Central Atlantic. The thermal model includes the effects of mantle flow beneath a ridge-transform-ridge geometry, of lateral heat conduction across the transform fault, and of the shear heating generated along the fault. Numerical solutions are presented for a 3-D domain, discretized with a non-uniform tetrahedral mesh, where relative plate kinematics is used as the boundary condition, providing passive mantle upwelling. The mantle is modelled as a temperature-dependent viscous fluid, and its dynamics can be described by the Stokes and advection-conduction heat equations. The results show that shear heating raises the temperature along the transform fault significantly. In order to test the model results, we calculated the thermal structure by simulating the mantle dynamics beneath an accretionary plate boundary geometry that duplicates the Vema transform fault, assuming the present-day spreading rate and direction of the Mid-Atlantic Ridge at 11 °N. The modelled heat flow at the surface was then compared with 23 heat flow measurements carried out along the Vema transform valley. Laboratory studies on the frictional stability of olivine aggregates show that the depth extent of oceanic faulting is thermally controlled and limited by the 600 °C isotherm. The depths of the isotherms of the thermal model were compared to the depths of earthquakes along transform faults. Slip on oceanic transform faults is primarily aseismic; only 15% of the tectonic offset is accommodated by earthquakes. Despite extensive fault areas, few large earthquakes occur on the fault and few aftershocks follow large events. The rheology constrained by the thermal model, combined with the geology and seismicity of the Vema transform fault, allows a better understanding of friction and of the spatial distribution of strength along the fault and provides
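
    The advection-conduction heat balance solved in such models, with a volumetric source term for shear heating, can be written in the standard form below (density ρ, heat capacity c_p, conductivity k, mantle velocity u, heating rate H; symbols assumed here, not quoted from the paper):

        \rho c_p \left( \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T \right) = \nabla \cdot \left( k\,\nabla T \right) + H.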

  10. Internet of Things: a possible change in the distributed modeling and simulation architecture paradigm

    NASA Astrophysics Data System (ADS)

    Riecken, Mark; Lessmann, Kurt; Schillero, David

    2016-05-01

    The Data Distribution Service (DDS) was started by the Object Management Group (OMG) in 2004. Currently, DDS is one of the contenders to support the Internet of Things (IoT) and the Industrial IoT (IIoT). DDS has also been used as a distributed simulation architecture. Given the anticipated proliferation of IoT and IIoT devices, along with the explosive growth of sensor technology, can we expect this to have an impact on the broader community of distributed simulation? If it does, what is the impact, and which distributed simulation domains will be most affected? DDS shares many of the goals and characteristics of distributed simulation, such as the need to support scale and an emphasis on Quality of Service (QoS) that can be tailored to meet the end user's needs. In addition, DDS has some built-in features, such as security, that are not present in traditional distributed simulation protocols. If the IoT and IIoT realize their potential application, we predict a large base of technology will be built around this distributed data paradigm, much of which could be directly beneficial to the distributed M&S community. In this paper we compare some of the perceived gaps and shortfalls of current distributed M&S technology to the emerging capabilities of DDS built around the IoT. Although some trial work has been conducted in this area, we propose a more focused examination of the potential of these new technologies and their applicability to current and future problems in distributed M&S. The Internet of Things (IoT) and its data communications mechanisms, such as the Data Distribution Service (DDS), share properties in common with distributed modeling and simulation (M&S) and its protocols, such as the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA). This paper proposes a framework, based on the sensor use case, for how the two communities of practice (CoP) can benefit from one another and achieve greater capability in practical distributed

  11. 4D/RCS: a reference model architecture for intelligent unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Albus, James S.

    2002-07-01

    4D/RCS consists of a multi-layered multi-resolutional hierarchy of computational nodes each containing elements of sensory processing (SP), world modeling (WM), value judgment (VJ), and behavior generation (BG). At the lower levels, these elements generate goal-seeking reactive behavior. At higher levels, they enable goal-defining deliberative behavior. At low levels, range in space and time is short and resolution is high. At high levels, distance and time are long and resolution is low. This enables high-precision fast-action response over short intervals of time and space at low levels, while long-range plans and abstract concepts are being formulated over broad regions of time and space at high levels. 4D/RCS closes feedback loops at every level. SP processes focus attention (i.e., window regions of space or time), group (i.e., segment regions into entities), compute entity attributes, estimate entity state, and assign entities to classes at every level. WM processes maintain a rich and dynamic database of knowledge about the world in the form of images, maps, entities, events, and relationships at every level. Other WM processes use that knowledge to generate estimates and predictions that support perception, reasoning, and planning at every level. 4D/RCS was developed for the Army Research Laboratory Demo III program. To date, only the lower levels of the 4D/RCS architecture have been fully implemented, but the results have been extremely positive. It seems clear that the theoretical basis of 4D/RCS is sound and the architecture is capable of being extended to support much higher levels of performance.
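
    As a purely illustrative data-structure sketch (not NIST's reference implementation), a 4D/RCS-style node can be thought of as a record bundling sensory processing, world modeling, value judgment and behavior generation, with nodes composed into a hierarchy whose range and resolution change with level; all field names, the resolution rule and the stub methods below are assumptions for illustration.

```python
# Illustrative sketch of a 4D/RCS-style node hierarchy (names and fields assumed).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    level: int                      # 0 = lowest (servo) level
    planning_horizon_s: float       # grows with level
    spatial_resolution_m: float     # coarsens with level
    children: List["Node"] = field(default_factory=list)

    def step(self, sensed):
        estimate = self.sensory_processing(sensed)   # SP: focus, group, classify
        self.update_world_model(estimate)            # WM: maintain knowledge, predict
        plan = self.behavior_generation()            # BG + VJ: evaluate and pick a plan
        for child in self.children:                  # decompose goals down the hierarchy
            child.step(plan)
        return plan

    def sensory_processing(self, sensed): return sensed
    def update_world_model(self, estimate): pass
    def behavior_generation(self): return {"goal": f"level-{self.level} task"}

# Three levels, each roughly an order of magnitude coarser and slower than the one below.
servo = Node(level=0, planning_horizon_s=0.05, spatial_resolution_m=0.01)
subsystem = Node(level=1, planning_horizon_s=0.5, spatial_resolution_m=0.1, children=[servo])
vehicle = Node(level=2, planning_horizon_s=5.0, spatial_resolution_m=1.0, children=[subsystem])
vehicle.step(sensed={"range_image": []})
```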

  12. Phase field modeling of tetragonal to monoclinic phase transformation in zirconia

    NASA Astrophysics Data System (ADS)

    Mamivand, Mahmood

    Zirconia-based ceramics are strong, hard, inert, and smooth, with low thermal conductivity and good biocompatibility. Such properties have made zirconia ceramics an ideal material for applications ranging from thermal barrier coatings (TBCs) to biomedical uses such as femoral implants and dental bridges. However, this unusual versatility of excellent properties is undermined by the transformation of the metastable tetragonal (or cubic) phase to the stable monoclinic phase after a certain exposure at service temperatures. This transformation from tetragonal to monoclinic, known as LTD (low temperature degradation) in biomedical applications, proceeds by propagation of martensite, which corresponds to transformation twinning. As such, the tetragonal to monoclinic transformation is highly sensitive to mechanical and chemomechanical stresses. It is in fact known that this transformation is the source of fracture toughening in stabilized zirconia, as it occurs at the stress concentration regions ahead of the crack tip. This dissertation is an attempt to provide a kinetics-based model for the tetragonal to monoclinic transformation in zirconia. We used the phase field technique to capture the temporal and spatial evolution of the monoclinic phase. In addition to morphological patterns, we were able to calculate the internal stresses developed during the tetragonal to monoclinic transformation. The model started from the two-dimensional single crystal, was then expanded to the two-dimensional polycrystal, and finally to the three-dimensional single crystal. The model is able to predict most of the physical properties associated with the tetragonal to monoclinic transformation in zirconia, including: morphological patterns, transformation toughening, shape memory effect, pseudoelasticity, surface uplift, and variant impingement. The model was benchmarked against several experimental works. The good agreement between simulation results and experimental data makes the model a reliable tool for
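
    Phase-field models of martensitic transformations typically evolve a set of non-conserved order parameters η_p (one per monoclinic variant) with a time-dependent Ginzburg-Landau (Allen-Cahn) equation; a standard form (kinetic coefficient L, total free energy functional F; symbols assumed here, not quoted from the dissertation) is

        \frac{\partial \eta_p}{\partial t} = -L\,\frac{\delta F}{\delta \eta_p}, \qquad p = 1, \dots, n.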

  13. GS3: A Knowledge Management Architecture for Collaborative Geologic Sequestration Modeling

    SciTech Connect

    Gorton, Ian; Black, Gary D.; Schuchardt, Karen L.; Sivaramakrishnan, Chandrika; Wurstner, Signe K.; Hui, Peter SY

    2010-01-10

    Modern scientific enterprises are inherently knowledge-intensive. In general, scientific studies in domains such as groundwater, climate, and other environmental modeling as well as fundamental research in chemistry, physics, and biology require the acquisition and manipulation of large amounts of experimental and field data in order to create inputs for large-scale computational simulations. The results of these simulations must then be analyzed, leading to refinements of inputs and models and further simulations. In this paper we describe our efforts in creating a knowledge management platform to support collaborative, wide-scale studies in the area of geologic sequestration. The platform, known as GS3 (Geologic Sequestration Software Suite), exploits and integrates off-the-shelf software components including semantic wikis, content management systems and open source middleware to create the core architecture. We then extend the wiki environment to support the capture of provenance, the ability to incorporate various analysis tools, and the ability to launch simulations on supercomputers. The paper describes the key components of GS3 and demonstrates its use through illustrative examples. We conclude by assessing the suitability of our approach for geologic sequestration modeling and generalization to other scientific problem domains.

  14. Systems modeling of space medical support architecture: topological mapping of high level characteristics and constraints.

    PubMed

    Musson, David M; Doyle, Thomas E; Saary, Joan

    2012-01-01

    The challenges associated with providing medical support to astronauts on long duration lunar or planetary missions are significant. Experience to date in space has included short duration missions to the lunar surface and both short and long duration stays on board spacecraft and space stations in low Earth orbit. Live actor, terrestrial analogue setting simulation provides a means of studying multiple aspects of the medical challenges of exploration class space missions, though few if any published models exist upon which to construct systems-simulation test beds. Current proposed and projected moon mission scenarios were analyzed from a systems perspective to construct such a model. A resulting topological mapping of high-level architecture for a reference lunar mission with presumed EVA excursion and international mission partners is presented. High-level descriptions of crew operational autonomy, medical support related to crew-member status, and communication characteristics within and between multiple teams are presented. It is hoped this modeling will help guide future efforts to simulate medical support operations for research purposes, such as in the use of live actor simulations in terrestrial analogue environments. PMID:23367318

  15. Product toxicity and cometabolic competitive inhibition modeling of chloroform and trichloroethylene transformation by methanotrophic resting cells.

    PubMed Central

    Alvarez-Cohen, L; McCarty, P L

    1991-01-01

    The rate and capacity for chloroform (CF) and trichloroethylene (TCE) transformation by a mixed methanotrophic culture of resting cells (no exogenous energy source) and formate-fed cells were measured. As reported previously for TCE, formate addition resulted in an increased CF transformation rate (0.35 day⁻¹ for resting cells and 1.5 day⁻¹ for formate-fed cells) and transformation capacity (0.0065 mg of CF per mg of cells for resting cells and 0.015 mg of CF per mg of cells for formate-fed cells), suggesting that depletion of energy stores affects transformation behavior. The observed finite transformation capacity, even with an exogenous energy source, suggests that toxicity was also a factor. CF transformation capacity was significantly lower than that for TCE, suggesting a greater toxicity from CF transformation. The toxicity of CF, TCE, and their transformation products to whole cells was evaluated by comparing the formate oxidation activity of acetylene-treated cells to that of non-acetylene-treated cells with and without prior exposure to CF or TCE. Acetylene arrests the activity of methane monooxygenase in CF and TCE oxidation without halting cell activity toward formate. Significantly diminished formate oxidation by cells exposed to either CF or TCE without acetylene compared with that with acetylene suggests that the solvents themselves were not toxic under the experimental conditions but their transformation products were. The concurrent transformation of CF and TCE by resting cells was measured, and results were compared with predictions from a competitive-inhibition cometabolic transformation model. The reasonable fit between model predictions and experimental observations was supportive of model assumptions. PMID:1905516
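
    The paper's full cometabolic model is not reproduced above; the sketch below only illustrates the two ingredients it names, mutual competitive inhibition between the two substrates and a finite transformation capacity that consumes active cell mass. All rate constants, half-saturation constants and capacities are placeholder values, not the reported ones.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Minimal sketch of concurrent CF/TCE transformation by resting cells with mutual
      # competitive inhibition and a finite transformation capacity Tc (mg substrate per
      # mg cells). All constants below are placeholders, not the values from the paper.
      k  = {"CF": 0.35, "TCE": 1.5}       # maximum specific transformation rates (assumed)
      Ks = {"CF": 1.0,  "TCE": 1.0}       # half-saturation constants, mg/L (assumed)
      Tc = {"CF": 0.0065, "TCE": 0.015}   # transformation capacities, mg/mg (assumed)

      def rates(t, y):
          cf, tce, X = y                  # substrate concentrations and active cell mass
          r_cf  = k["CF"]  * X * cf  / (Ks["CF"]  * (1.0 + tce / Ks["TCE"]) + cf)
          r_tce = k["TCE"] * X * tce / (Ks["TCE"] * (1.0 + cf  / Ks["CF"])  + tce)
          dX = -(r_cf / Tc["CF"] + r_tce / Tc["TCE"])   # product toxicity consumes active cells
          return [-r_cf, -r_tce, dX]

      sol = solve_ivp(rates, (0.0, 5.0), [0.05, 0.05, 1.0])   # 5 days, initial CF, TCE, cells
      print("final CF, TCE, active cells:", sol.y[:, -1])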

  16. Model Transformation for a System of Systems Dependability Safety Case

    NASA Technical Reports Server (NTRS)

    Murphy, Judy; Driskell, Stephen B.

    2010-01-01

    Software plays an increasingly large role in all aspects of NASA's science missions. This has been extended to the identification, management and control of faults which affect safety-critical functions and, by extension, the overall success of the mission. Traditionally, the analysis of fault identification, management and control is hardware based. Due to the increasing complexity of systems, there has been a corresponding increase in the complexity of fault management software. The NASA Independent Verification & Validation (IV&V) program is creating processes and procedures to identify and incorporate safety-critical software requirements, along with corresponding software faults, so that potential hazards may be mitigated. This "Specific to Generic ... A Case for Reuse" paper describes the phases of a dependability and safety study which identifies a new process to create a foundation for reusable assets. These assets support the identification and management of specific software faults and their transformation from specific to generic software faults. This approach also has applications to other systems outside of the NASA environment. This paper addresses how a mission-specific dependability and safety case is being transformed to a generic dependability and safety case which can be reused for any type of space mission, with an emphasis on software fault conditions.

  17. Change in the Pathologic Supraspinatus: A Three-Dimensional Model of Fiber Bundle Architecture within Anterior and Posterior Regions.

    PubMed

    Kim, Soo Y; Sachdeva, Rohit; Li, Zi; Lee, Dongwoon; Rosser, Benjamin W C

    2015-01-01

    Supraspinatus tendon tears are common and lead to changes in the muscle architecture. To date, these changes have not been investigated for the distinct regions and parts of the pathologic supraspinatus. The purpose of this study was to create a novel three-dimensional (3D) model of the muscle architecture throughout the supraspinatus and to compare the architecture between muscle regions and parts in relation to tear severity. Twelve cadaveric specimens with varying degrees of tendon tears were used. Three-dimensional coordinates of fiber bundles were collected in situ using serial dissection and digitization. Data were reconstructed and modeled in 3D using Maya. Fiber bundle length (FBL) and pennation angle (PA) were computed and analyzed. FBL was significantly shorter in specimens with large retracted tears compared to smaller tears, with the deeper fibers being significantly shorter than other parts in the anterior region. PA was significantly greater in specimens with large retracted tears, with the superficial fibers often demonstrating the largest PA. The posterior region was absent in two specimens with extensive tears. Architectural changes associated with tendon tears affect the regions and varying depths of supraspinatus differently. The results provide important insights on residual function of the pathologic muscle, and the 3D model includes detailed data that can be used in future modeling studies. PMID:26413533

  18. Change in the Pathologic Supraspinatus: A Three-Dimensional Model of Fiber Bundle Architecture within Anterior and Posterior Regions

    PubMed Central

    Kim, Soo Y.; Sachdeva, Rohit; Li, Zi; Lee, Dongwoon; Rosser, Benjamin W. C.

    2015-01-01

    Supraspinatus tendon tears are common and lead to changes in the muscle architecture. To date, these changes have not been investigated for the distinct regions and parts of the pathologic supraspinatus. The purpose of this study was to create a novel three-dimensional (3D) model of the muscle architecture throughout the supraspinatus and to compare the architecture between muscle regions and parts in relation to tear severity. Twelve cadaveric specimens with varying degrees of tendon tears were used. Three-dimensional coordinates of fiber bundles were collected in situ using serial dissection and digitization. Data were reconstructed and modeled in 3D using Maya. Fiber bundle length (FBL) and pennation angle (PA) were computed and analyzed. FBL was significantly shorter in specimens with large retracted tears compared to smaller tears, with the deeper fibers being significantly shorter than other parts in the anterior region. PA was significantly greater in specimens with large retracted tears, with the superficial fibers often demonstrating the largest PA. The posterior region was absent in two specimens with extensive tears. Architectural changes associated with tendon tears affect the regions and varying depths of supraspinatus differently. The results provide important insights on residual function of the pathologic muscle, and the 3D model includes detailed data that can be used in future modeling studies. PMID:26413533

  19. Effects of Practice on Task Architecture: Combined Evidence from Interference Experiments and Random-Walk Models of Decision Making

    ERIC Educational Resources Information Center

    Kamienkowski, Juan E.; Pashler, Harold; Dehaene, Stanislas; Sigman, Mariano

    2011-01-01

    Does extensive practice reduce or eliminate central interference in dual-task processing? We explored the reorganization of task architecture with practice by combining interference analysis (delays in dual-task experiment) and random-walk models of decision making (measuring the decision and non-decision contributions to RT). The main delay…

  20. Semantic Web-Driven LMS Architecture towards a Holistic Learning Process Model Focused on Personalization

    ERIC Educational Resources Information Center

    Kerkiri, Tania

    2010-01-01

    A comprehensive presentation is here made on the modular architecture of an e-learning platform with a distinctive emphasis on content personalization, combining advantages from semantic web technology, collaborative filtering and recommendation systems. Modules of this architecture handle information about both the domain-specific didactic…

  1. B-Transform and Its Application to a Fish-Hyacinth Model

    ERIC Educational Resources Information Center

    Oyelami, B. O.; Ale, S. O.

    2002-01-01

    A new transform proposed by Oyelami and Ale for impulsive systems is applied to an impulsive fish-hyacinth model. A biological policy regarding the growth of the fish and the hyacinth populations is formulated.

  2. A Model-Based Study of On-Board Data Processing Architecture for Balloon-Borne Aurora Observation

    NASA Technical Reports Server (NTRS)

    Lim, Chester

    2011-01-01

    This paper discusses an application of the ISAAC design methodology to a balloon-borne payload electronic system for aurora observation. The methodology is composed of two phases, high-level design and low-level implementation; the focus of this paper is on the high-level design. The paper puts the system architecture in the context of a balloon-based application, but it can be generalized to any airborne/space-borne application. The system architecture includes a front-end detector, its corresponding data processing unit, and a controller. VisualSim has been used to perform modeling and simulations to explore the entire design space, finding optimal solutions that meet system requirements.

  3. An Approach for Detecting Inconsistencies between Behavioral Models of the Software Architecture and the Code

    SciTech Connect

    Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir

    2012-07-16

    In practice, inconsistencies between architectural documentation and the code might arise due to improper implementation of the architecture or the separate, uncontrolled evolution of the code. Several approaches have been proposed to detect the inconsistencies between the architecture and the code but these tend to be limited for capturing inconsistencies that might occur at runtime. We present a runtime verification approach for detecting inconsistencies between the dynamic behavior of the architecture and the actual code. The approach is supported by a set of tools that implement the architecture and the code patterns in Prolog, and support the automatic generation of runtime monitors for detecting inconsistencies. We illustrate the approach and the toolset for a Crisis Management System case study.

  4. A micromechanics constitutive model of transformation plasticity with shear and dilatation effect

    NASA Astrophysics Data System (ADS)

    Sun, Q. P.; Hwang, K. C.; Yu, S. W.

    BASED on micromechanics, thermodynamics and microscale t → m transformation mechanism considerations, a micromechanics constitutive model which takes into account both the dilatation and shear effects of the transformation is proposed to describe the plastic, pseudoelastic and shape memory behaviors of structural ceramics during transformation under different temperatures. In the derivation, a constitutive element (representative material sample) was used which contains many of the transformed m-ZrO2 grains or precipitates as the second phase inclusions embedded in an elastic matrix. Under some basic assumptions, analytic expressions for the Helmholtz and complementary free energy of the constitutive element are derived in a self-consistent manner by using the Mori-Tanaka method which takes into account the interaction between the transformed inclusions. The derived free energy is a function of externally applied macroscopic stress (or strain), temperature, volume fraction of transformed phase and the averaged stress-free transformation strain (eigenstrain) of all the transformed inclusions in the constitutive element, the latter two quantities being considered to be the internal variables describing the micro-structural rearrangement in the constitutive element. In the framework of the Hill-Rice internal variable constitutive theory, the transformation yield function and incremental stress-strain relations, in analogy to the theory of metal plasticity, for proportional and non-proportional loading histories are derived, respectively. The theoretical predictions are compared with the available experimental data of Mg-PSZ and Ce-TZP polycrystalline toughening ceramics.

  5. Application and project portfolio valuation using enterprise architecture and business requirements modelling

    NASA Astrophysics Data System (ADS)

    Quartel, Dick; Steen, Maarten W. A.; Lankhorst, Marc M.

    2012-05-01

    This article describes an architecture-based approach to IT valuation. This approach offers organisations an instrument to valuate their application and project portfolios and to make well-balanced decisions about IT investments. The value of a software application is assessed in terms of its contribution to a selection of business goals. Based on such assessments, the value of different applications can be compared, and requirements for innovation, development, maintenance and phasing out can be identified. IT projects are proposed to realise the requirements. The value of each project is assessed in terms of the value it adds to one or more applications. This value can be obtained by relating the 'as-is' application portfolio to the 'to-be' portfolio that is being proposed by the project portfolio. In this way, projects can be ranked according to their added value, given a certain selection of business goals. The approach uses ArchiMate to model the relationship between software applications, business processes, services and products. In addition, two language extensions are used to model the relationship of these elements to business goals and requirements and to projects and project portfolios. The approach is illustrated using the portfolio method of Bedell and has been implemented in BiZZdesign Architect.

  6. Model-Driven Methodology for Rapid Deployment of Smart Spaces Based on Resource-Oriented Architectures

    PubMed Central

    Corredor, Iván; Bernardos, Ana M.; Iglesias, Josué; Casar, José R.

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym. PMID:23012544

  7. Model-driven methodology for rapid deployment of smart spaces based on resource-oriented architectures.

    PubMed

    Corredor, Iván; Bernardos, Ana M; Iglesias, Josué; Casar, José R

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym. PMID:23012544

  8. Building an Online Wisdom Community: A Transformational Design Model

    ERIC Educational Resources Information Center

    Gunawardena, Charlotte N.; Jennings, Barbara; Ortegano-Layne, Ludmila C.; Frechette, Casey; Carabajal, Kayleigh; Lindemann, Ken; Mummert, Julia

    2004-01-01

    This paper discusses the development of a new instructional design model based on socioconstructivist learning theories and distance education principles for the design of online wisdom communities and the efficacy of the model drawing on evaluation results from its implementation in Fall 2002. The model, Final Outcome Centered Around Learner…

  9. Mental Models and Transformative Learning: The Key to Leadership Development?

    ERIC Educational Resources Information Center

    Johnson, Homer H.

    2008-01-01

    What separates successful leaders from unsuccessful ones is their mental models or meaning structures, not their knowledge, information, training, or experience per se. Thus the development of leaders should focus on acquisition of new mental models, models that offer more valid and useful ways for effectively dealing with the complex challenges…

  10. L1 Adaptive Control Law in Support of Large Flight Envelope Modeling Work

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.; Xargay, Enric; Cao, Chengyu; Hovakimyan, Naira

    2011-01-01

    This paper presents results of a flight test of the L1 adaptive control architecture designed to directly compensate for significant uncertain cross-coupling in nonlinear systems. The flight test was conducted on the subscale turbine powered Generic Transport Model that is an integral part of the Airborne Subscale Transport Aircraft Research system at the NASA Langley Research Center. The results presented are in support of nonlinear aerodynamic modeling and instrumentation calibration.

  11. Performance of linear and nonlinear texture measures in 2D and 3D for monitoring architectural changes in osteoporosis using computer-generated models of trabecular bone

    NASA Astrophysics Data System (ADS)

    Boehm, Holger F.; Link, Thomas M.; Monetti, Roberto A.; Mueller, Dirk; Rummeny, Ernst J.; Raeth, Christoph W.

    2005-04-01

    Osteoporosis is a metabolic bone disease leading to de-mineralization and increased risk of fracture. The two major factors that determine the biomechanical competence of bone are the degree of mineralization and the micro-architectural integrity. Today, modern imaging modalities (high resolution MRI, micro-CT) are capable of depicting structural details of trabecular bone tissue. From the image data, structural properties obtained by quantitative measures are analysed with respect to the presence of osteoporotic fractures of the spine (in-vivo) or correlated with biomechanical strength as derived from destructive testing (in-vitro). Fairly well established are linear structural measures in 2D that are originally adopted from standard histo-morphometry. Recently, non-linear techniques in 2D and 3D based on the scaling index method (SIM), the standard Hough transform (SHT), and the Minkowski Functionals (MF) have been introduced, which show excellent performance in predicting bone strength and fracture risk. However, little is known about the performance of the various parameters with respect to monitoring structural changes due to progression of osteoporosis or as a result of medical treatment. In this contribution, we generate models of trabecular bone with pre-defined structural properties which are exposed to simulated osteoclastic activity. We apply linear and non-linear texture measures to the models and analyse their performance with respect to detecting architectural changes. This study demonstrates that the texture measures are capable of monitoring structural changes of complex model data. The diagnostic potential varies for the different parameters and is found to depend on the topological composition of the model and initial "bone density". In our models, non-linear texture measures tend to react more sensitively to small structural changes than linear measures. Best performance is observed for the 3rd and 4th Minkowski Functionals and for the scaling
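
    As a concrete illustration of one family of the nonlinear measures mentioned above, the sketch below computes the three 2D Minkowski functionals (area, perimeter, Euler characteristic) of a binarized trabecular-like test pattern; scikit-image is an assumed dependency, the synthetic pattern is only a stand-in for the computer-generated bone models, and the study's 3D functionals and scaling index method are not reproduced.

      import numpy as np
      from scipy import ndimage
      from skimage import measure

      # Sketch of the three 2D Minkowski functionals (area, perimeter, Euler characteristic)
      # on a binarized trabecular-like test pattern. scikit-image is an assumed dependency;
      # the synthetic pattern only stands in for the computer-generated bone models.
      rng = np.random.default_rng(0)
      noise = rng.random((128, 128))
      smooth = ndimage.uniform_filter(noise, size=5)       # crude spatial correlation
      binary = smooth > np.percentile(smooth, 60)          # keep the densest 40% as "bone"

      area = int(binary.sum())                             # M0: area (pixel count)
      perimeter = measure.perimeter(binary)                # M1: boundary length estimate
      euler = measure.euler_number(binary)                 # M2: components minus holes
      print(f"area={area}, perimeter={perimeter:.1f}, euler={euler}")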

  12. Blind watermark algorithm on 3D motion model based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Qi, Hu; Zhai, Lang

    2013-12-01

    With the continuous development of 3D vision technology, digital watermarking, as the preferred means of copyright protection, has gradually been integrated with it. This paper proposes a blind watermarking scheme for 3D motion models based on the wavelet transform and loads it into the Vega real-time visual simulation system. First, the 3D model is affine-transformed and the distances from the center of gravity to the vertices of the 3D object are used to generate a one-dimensional discrete signal; this signal is then wavelet-transformed, its frequency coefficients are modified to embed the watermark, and finally the watermarked 3D motion model is generated. In a fixed affine space, the scheme is robust to translation, rotation and scaling transforms. The results show that this approach performs well not only in robustness but also in watermark invisibility.
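
    The paper's exact embedding algorithm and parameters are not given above; the sketch below only illustrates the described pipeline (centroid-to-vertex distances as a 1D signal, a wavelet transform, and perturbation of the detail coefficients), using PyWavelets and an assumed embedding strength alpha.

      import numpy as np
      import pywt

      # Sketch of the embedding pipeline described above (illustrative, not the paper's
      # exact algorithm): centroid-to-vertex distances form a 1D signal whose wavelet
      # detail coefficients are perturbed by the watermark. alpha is an assumed strength.
      rng = np.random.default_rng(0)
      vertices = rng.random((256, 3))                          # stand-in 3D model vertices
      centroid = vertices.mean(axis=0)
      signal = np.linalg.norm(vertices - centroid, axis=1)     # 1D discrete signal

      watermark = rng.integers(0, 2, size=signal.size // 2) * 2 - 1   # +/-1 watermark bits
      alpha = 0.01                                                    # embedding strength (assumed)

      cA, cD = pywt.dwt(signal, "haar")                               # single-level wavelet transform
      signal_marked = pywt.idwt(cA, cD + alpha * watermark, "haar")   # embed and reconstruct

      # Mapping the marked distances back onto the vertices (scaling each vertex along
      # its centroid direction) is omitted here.
      print("max distortion:", float(np.abs(signal_marked - signal).max()))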

  13. Transformation properties of dynamic subgrid-scale models in a frame of reference undergoing rotation

    NASA Astrophysics Data System (ADS)

    Horiuti, Kiyosi

    Theoretical consideration is presented for the transformation properties of the subgrid-scale (SGS) models for the SGS stress tensor in a non-inertial frame of reference undergoing rotation. As was previously shown (Speziale, C.G., Geophys. Astrophys. Fluid Dynamics 33, 199 (1985)), an extra correction term is yielded for the SGS stress tensor in the transformation of a rotating frame relative to an inertial framing. We derived the exact expression for the correction term for the spherical Gaussian filter function. Certain transformation rules are imposed on the SGS stress by the derived correction term, namely the SGS stress is not indifferent to a frame rotation, but the divergence of the SGS stress is frame indifferent. Conformity of the modelled SGS stress tensor estimated using the previous dynamic SGS models (the dynamic Smagorinsky, dynamic mixed and nonlinear models) with these transformation rules is examined. It is shown that values for certain model parameters contained in the mixed models can be theoretically determined by imposing these rules. We have conducted the a priori and a posteriori numerical assessments of the SGS models in decaying homogeneous turbulence which is subjected to rotation. All of the previous dynamic models were found to violate the rules except for the nonlinear model. The nonlinear model is form invariant, but the result obtained using the nonlinear model showed significant deviation from the DNS data. Failure of previous models was attributable to insufficient accuracy in approximating the modified cross term in the decomposition of the SGS stress tensor. A dynamic mixed model is proposed to eliminate the truncation error for the modelled correction term, in which multilevel filtering of the velocity field was utilized. The proposed model obeyed the transformation rules when the level of the multifiltering operation was large. It was shown that the defiltered model is derived in the limit of the infinite level of

  14. Marginal deformations of WZNW and coset models from O( d, d) transformations

    NASA Astrophysics Data System (ADS)

    Hassan, S. F.; Sen, Ashoke

    1993-09-01

    We show that the O(2, 2) transformation of the SU(2) WZNW model gives rise to marginal deformation of this model by the operator ∫ d²z J(z) J̄(z̄), where J and J̄ are U(1) currents in the Cartan subalgebra. Generalization of this result to other WZNW theories is discussed. We also consider the O(3, 3) transformation of the product of an SU(2) WZNW model and a gauged SU(2) WZNW model. The three-parameter set of models obtained after the transformation is shown to be the result of first deforming the product of two SU(2) WZNW theories by marginal operators of the form Σ_{i,j=1}^{2} C_{ij} J^i J̄^j, and then gauging an appropriate U(1) subgroup of the theory. Our analysis leads to a general conjecture that O(d, d) transformations of any WZNW model correspond to marginal deformation of the WZNW theory by an appropriate combination of left and right moving currents belonging to the Cartan subalgebra; and O(d, d) transformations of a gauged WZNW model can be identified to the gauged version of such marginally deformed WZNW models.

  15. Mentoring Resulting in a New Model: Affect-Centered Transformational Leadership

    ERIC Educational Resources Information Center

    Moffett, David W.; Tejeda, Armando R.

    2014-01-01

    The authors were professor and student, in a doctoral leadership course, during fall semester of 2013-2014. Across the term the professor mentored the mentee, guiding him to the creation of the next, needed model for leadership. The new model, known as The Affect-Centered Transformational Leadership Model, came about as the result. Becoming an…

  16. Modeling and optimization of multiple unmanned aerial vehicles system architecture alternatives.

    PubMed

    Qin, Dongliang; Li, Zhifei; Yang, Feng; Wang, Weiping; He, Lei

    2014-01-01

    Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although very limitedly. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. Then, in the rest of this paper, the corresponding optimization problem and an efficient genetic algorithm with a refined crossover operator (GA-RX) are proposed to accomplish the architecting process iteratively. The availability and effectiveness of the overall method are validated using 2 simulations based on 2 different scenarios. PMID:25140328
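
    The GA-RX refined crossover operator and the real architecture fitness model are not described above in enough detail to reproduce; the generic genetic-algorithm sketch below (bit-string encoding, fitness-proportional selection, one-point crossover, bit-flip mutation, toy fitness) only illustrates the kind of iterative architecting loop the paper automates.

      import numpy as np

      # Generic GA sketch for choosing among architecture alternatives encoded as bit
      # strings (illustrative only; the paper's GA-RX operator and fitness model are
      # not reproduced). The "target" architecture below is a toy stand-in.
      rng = np.random.default_rng(1)
      n_genes, pop_size, n_gen, p_mut = 16, 30, 50, 0.02
      target = rng.integers(0, 2, n_genes)             # toy "best architecture" (assumed)

      def fitness(pop):
          return (pop == target).sum(axis=1)           # toy fitness: bits matching the target

      pop = rng.integers(0, 2, (pop_size, n_genes))
      for _ in range(n_gen):
          f = fitness(pop)
          parents = pop[rng.choice(pop_size, pop_size, p=f / f.sum())]   # fitness-proportional selection
          cut = rng.integers(1, n_genes, pop_size // 2)
          children = parents.copy()
          for i, c in enumerate(cut):                                    # one-point crossover per pair
              children[2*i, c:], children[2*i + 1, c:] = parents[2*i + 1, c:], parents[2*i, c:]
          mutate = rng.random(children.shape) < p_mut
          pop = np.where(mutate, 1 - children, children)                 # bit-flip mutation

      print("best fitness:", fitness(pop).max(), "of", n_genes)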

  17. Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives

    PubMed Central

    Wang, Weiping; He, Lei

    2014-01-01

    Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although very limitedly. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. Then, in the rest of this paper, the corresponding optimization problem and an efficient genetic algorithm with a refined crossover operator (GA-RX) are proposed to accomplish the architecting process iteratively. The availability and effectiveness of the overall method are validated using 2 simulations based on 2 different scenarios. PMID:25140328

  18. Modelling Transformations of Quadratic Functions: A Proposal of Inductive Inquiry

    ERIC Educational Resources Information Center

    Sokolowski, Andrzej

    2013-01-01

    This paper presents a study about using scientific simulations to enhance the process of mathematical modelling. The main component of the study is a lesson whose major objective is to have students mathematise a trajectory of a projected object and then apply the model to formulate other trajectories by using the properties of function…

  19. A conceptual approach to approximate tree root architecture in infinite slope models

    NASA Astrophysics Data System (ADS)

    Schmaltz, Elmar; Glade, Thomas

    2016-04-01

    Vegetation-related properties - particularly tree root distribution and coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear to be difficult to be reproduced reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main issues of the tree root architecture: 1) Type of rooting; 2) maximum growing distance to the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems and the connected hydrological and mechanical attributes sufficiently in a 3-dimensional slope stability model. Hereby, a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. However, in this presentation, we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that is dependent on age and species. We classified three main types of tree root systems and reproduced the species-age-related root distribution with three respective mathematical solids in a synthetic 3-dimensional hillslope ambience. Thus, two solids in a Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, whereby the dimension of the latter defines the shape of a taproot system or a shallow-root system respectively; ii) elliptic
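
    The abstract's geometric idea can be illustrated with a small sketch that tests whether a soil point falls inside a root zone approximated by one of the solids mentioned above; the half-ellipsoid stands in for the truncated item ii), and all radii and depths are assumed illustrative values rather than species-specific data.

      import numpy as np

      # Sketch of the geometric approximation described above: a tree's root zone is
      # represented by a simple solid centred on the stem (x0, y0) at the ground surface,
      # with depth z positive downward. All dimensions are illustrative assumptions.
      def in_cylinder(p, stem, r, h):
          """Taproot- or shallow-root-type zone: vertical cylinder, radius r, depth h."""
          dx, dy = p[0] - stem[0], p[1] - stem[1]
          return dx * dx + dy * dy <= r * r and 0.0 <= p[2] <= h

      def in_half_ellipsoid(p, stem, r, h):
          """Elliptic-solid zone (stand-in for item ii): half-ellipsoid, radius r, depth h."""
          dx, dy, dz = p[0] - stem[0], p[1] - stem[1], p[2]
          return dz >= 0.0 and (dx / r) ** 2 + (dy / r) ** 2 + (dz / h) ** 2 <= 1.0

      stem = (0.0, 0.0)
      point = (0.5, 0.2, 0.8)                                # a soil point (m), depth 0.8 m
      print(in_cylinder(point, stem, r=1.5, h=0.5),          # shallow-root system (assumed dims)
            in_cylinder(point, stem, r=0.6, h=2.0),          # taproot system (assumed dims)
            in_half_ellipsoid(point, stem, r=1.5, h=1.2))    # elliptic solid (assumed dims)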

  20. Modeling of converter transformers using frequency domain terminal impedance measurements

    SciTech Connect

    Liu, Yilu; Sebo, S.A.; Caldecott, R.; Kasten, D.G. ); Wright, S.E. )

    1993-01-01

    HVDC converter stations generate radio frequency (RF) electromagnetic (EM) noise which could interfere with adjacent communication and computer equipment, and carrier system operations. In order to calculate and predict the RF EM noise produced by the valve ignition of a converter station, it is essential to develop accurate models of station equipment over a broad frequency range. Models of all station equipment can be characterized by frequency-dependent impedances. The paper describes the frequency-dependent node-to-node impedance function (NIF) models of power system equipment based on systematic broad frequency range (50 Hz to 1 MHz) external driving point impedance measurements, sponsored by the Electric Power Research Institute (EPRI). The regular structure, high accuracy, and virtually unlimited frequency range are important features of the NIF model. Examples of NIF model application in converter station RF EM noise calculations are presented.

  1. Architecture-based force-velocity models of load-moving skeletal muscles.

    PubMed

    Baratta, R V; Solomonow, M; Best, R; Zembo, M; D'Ambrosia, R

    1995-04-01

    A predictive model of muscle force-velocity relationships is presented based on functional architectural variables. The parameters of Hill's equation describing the muscle force-velocity relationship of nine muscles were estimated by their relationships with variables extracted from the whole-muscle length-force relationship and the percentage of slow-twitch fibres. Specifically, the maximal unloaded velocity (Vo) was estimated through multiple linear regression, from each muscle's fibre composition and the shortening range through which each muscle could produce active force. The maximal isometric force (Po) was also extracted from each muscle's length-force relationship. The ratios of Hill's dynamic constants a to Po and b to Vo, which determine the degree of curvature of the relation, were determined solely by the percent of slow-twitch fibres. This model was verified by fitting it to experimental force-velocity curves of nine different muscles in the cat's hindlimb. It was found that reasonable fits of force-velocity curves would be obtained with correlation coefficients in the range of 0.61 to 0.92, with an average of 0.82. The model predicted that muscles with relatively long shortening ranges would achieve higher maximal velocity, and that muscles with a higher percentage of slow-twitch fibres had less pronounced curvature and lower maximal velocity in their force-velocity relationships. RELEVANCE: The results have direct implications in the design of neuroprosthetic limb control systems, which use electrical stimulation to restore function to muscles paralysed from spinal cord injury. The designer is enabled to optimally calibrate the controller according to the predicted individual force-velocity curves of different muscles by using the length-tension curves and fibre composition data available in the literature. PMID:11415546
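
    The abstract relies on Hill's classic hyperbolic force-velocity relation; a minimal sketch of that relation is given below with illustrative values for Po, Vo and the curvature ratio a/Po (= b/Vo), not the cat-hindlimb values from the study.

      import numpy as np

      # Sketch of Hill's hyperbolic force-velocity relation referenced above:
      #   (P + a) * (v + b) = (Po + a) * b,   so   v(P) = b * (Po - P) / (P + a).
      # The curvature is set by the ratio a/Po (= b/Vo); all values are illustrative.
      Po = 100.0                    # maximal isometric force (N), assumed
      Vo = 5.0                      # maximal unloaded shortening velocity (lengths/s), assumed
      curvature = 0.25              # a/Po = b/Vo (smaller -> more curved), assumed

      a, b = curvature * Po, curvature * Vo
      P = np.linspace(0.0, Po, 11)
      v = b * (Po - P) / (P + a)

      for load, vel in zip(P, v):
          print(f"load {load:6.1f} N -> shortening velocity {vel:5.2f} lengths/s")

    At zero load the expression recovers v = b*Po/a = Vo, and at the isometric force Po the velocity is zero, which is why the single ratio a/Po suffices to describe the curvature.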

  2. Semiparametric Transformation Models with Random Effects for Joint Analysis of Recurrent and Terminal Events

    PubMed Central

    Zeng, Donglin; Lin, D. Y.

    2011-01-01

    Summary We propose a broad class of semiparametric transformation models with random effects for the joint analysis of recurrent events and a terminal event. The transformation models include proportional hazards/intensity and proportional odds models. We estimate the model parameters by the nonparametric maximum likelihood approach. The estimators are shown to be consistent, asymptotically normal, and asymptotically efficient. Simple and stable numerical algorithms are provided to calculate the parameter estimators and to estimate their variances. Extensive simulation studies demonstrate that the proposed inference procedures perform well in realistic settings. Applications to two HIV/AIDS studies are presented. PMID:18945267
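
    The paper's exact specification is not reproduced above; as a general reminder (not necessarily in the authors' notation), the class of linear transformation models that contains the proportional hazards and proportional odds models as special cases is often written

      H(T) = -\beta^{\top} Z + \varepsilon ,

    where H is an unspecified, strictly increasing transformation and the error \varepsilon has a fully specified distribution: an extreme-value \varepsilon gives the proportional hazards model, and a standard logistic \varepsilon gives the proportional odds model. Equivalently, in terms of the cumulative hazard,

      \Lambda(t \mid Z) = G\Big( \int_0^t e^{\beta^{\top} Z(s)} \, d\Lambda_0(s) \Big) ,

    with G(x) = x recovering proportional hazards/intensity and G(x) = \log(1 + x) recovering proportional odds; in the joint model for recurrent and terminal events described above, subject-specific random effects additionally enter the linear predictors.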

  3. Characterization, Modeling, and Energy Harvesting of Phase Transformations in Ferroelectric Materials

    NASA Astrophysics Data System (ADS)

    Dong, Wenda

    Solid state phase transformations can be induced through mechanical, electrical, and thermal loading in ferroelectric materials that are compositionally close to morphotropic phase boundaries. Large changes in strain, polarization, compliance, permittivity, and coupling properties are typically observed across the phase transformation regions and are phenomena of interest for energy harvesting and transduction applications where increased coupling behavior is desired. This work characterized and modeled solid state phase transformations in ferroelectric materials and assessed the potential of phase transforming materials for energy harvesting applications. Two types of phase transformations were studied. The first type was ferroelectric rhombohedral to ferroelectric orthorhombic observed in lead indium niobate lead magnesium niobate lead titanate (PIN-PMN-PT) and driven by deviatoric stress, temperature, and electric field. The second type of phase transformation is ferroelectric to antiferroelectric observed in lead zirconate titanate (PZT) and driven by pressure, temperature, and electric field. Experimental characterizations of the phase transformations were conducted in both PIN-PMN-PT and PZT in order to understand the thermodynamic characteristics of the phase transformations and map out the phase stability of both materials. The ferroelectric materials were characterized under combinations of stress, electric field, and temperature. Material models of phase transforming materials were developed using a thermodynamic based variant switching technique and thermodynamic observations of the phase transformations. These models replicate the phase transformation behavior of PIN-PMN-PT and PZT under mechanical and electrical loading conditions. The switching model worked in conjunction with linear piezoelectric equations as ferroelectric/ferroelastic constitutive equations within a finite element framework that solved the mechanical and electrical field equations

  4. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

    The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  5. Parallel eigenanalysis of finite element models in a completely connected architecture

    NASA Technical Reports Server (NTRS)

    Akl, F. A.; Morel, M. R.

    1989-01-01

    A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, [K][Φ] = [M][Φ][Ω], where [K] and [M] are of order N, and [Ω] is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.
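
    The paper's parallel multifrontal/modified subspace solver is not reproduced here; as a serial point of reference, the same generalized eigenproblem restricted to the lowest q modes can be sketched with SciPy's sparse eigensolver on a toy stiffness/mass pair (the matrices below are illustrative, not a real finite element model).

      import numpy as np
      from scipy import sparse
      from scipy.sparse.linalg import eigsh

      # Serial sketch of the generalized eigenproblem [K][Phi] = [M][Phi][Omega] solved
      # for the lowest q eigenpairs (the paper's parallel multifrontal/subspace scheme
      # is not reproduced). K and M below are a toy 1-D stiffness/mass pair.
      N, q = 200, 4
      main = 2.0 * np.ones(N)
      off = -1.0 * np.ones(N - 1)
      K = sparse.diags([off, main, off], [-1, 0, 1], format="csc")   # toy stiffness matrix
      M = sparse.identity(N, format="csc")                           # toy (lumped) mass matrix

      # Shift-invert about sigma = 0 targets the smallest eigenvalues of K x = lambda M x.
      vals, vecs = eigsh(K, k=q, M=M, sigma=0.0, which="LM")
      print("lowest eigenvalues:", vals)

    In the paper's setting the model is instead split into m domains and iterated in a subspace across processors; the sketch only fixes the problem being solved.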

  6. Evaluating radiative transfer schemes treatment of vegetation canopy architecture in land surface models

    NASA Astrophysics Data System (ADS)

    Braghiere, Renato; Quaife, Tristan; Black, Emily

    2016-04-01

    Incoming shortwave radiation is the primary source of energy driving the majority of the Earth's climate system. The partitioning of shortwave radiation by vegetation into absorbed, reflected, and transmitted terms is important for most biogeophysical processes, including leaf temperature changes and photosynthesis, and it is currently calculated by most land surface schemes (LSS) of climate and/or numerical weather prediction models. The most commonly used radiative transfer scheme in LSS is the two-stream approximation; however, it does not explicitly account for vegetation architectural effects on shortwave radiation partitioning. Detailed three-dimensional (3D) canopy radiative transfer schemes have been developed, but they are too computationally expensive to address large-scale studies over long time periods. Using a straightforward one-dimensional (1D) parameterisation proposed by Pinty et al. (2006), we modified a two-stream radiative transfer scheme by including a simple function of Sun zenith angle, the so-called "structure factor", which does not require an explicit description and understanding of the complex phenomena arising from the presence of heterogeneous vegetation architecture, and it guarantees accurate simulations of the radiative balance consistently with 3D representations. In order to evaluate the ability of the proposed parameterisation to accurately represent the radiative balance of more complex 3D schemes, a comparison between the modified two-stream approximation with the "structure factor" parameterisation and state-of-the-art 3D radiative transfer schemes was conducted, following a set of virtual scenarios described in the RAMI4PILPS experiment. These experiments evaluated the radiative balance of several models under perfectly controlled conditions in order to eliminate uncertainties arising from an incomplete or erroneous knowledge of the structural, spectral and illumination related canopy characteristics typical

  7. An architecturally constrained model of random number generation and its application to modeling the effect of generation rate

    PubMed Central

    Sexton, Nicholas J.; Cooper, Richard P.

    2014-01-01

    Random number generation (RNG) is a complex cognitive task for human subjects, requiring deliberative control to avoid production of habitual, stereotyped sequences. Under various manipulations (e.g., speeded responding, transcranial magnetic stimulation, or neurological damage) the performance of human subjects deteriorates, as reflected in a number of qualitatively distinct, dissociable biases. For example, the intrusion of stereotyped behavior (e.g., counting) increases at faster rates of generation. Theoretical accounts of the task postulate that it requires the integrated operation of multiple, computationally heterogeneous cognitive control (“executive”) processes. We present a computational model of RNG, within the framework of a novel, neuropsychologically-inspired cognitive architecture, ESPro. Manipulating the rate of sequence generation in the model reproduced a number of key effects observed in empirical studies, including increasing sequence stereotypy at faster rates. Within the model, this was due to time limitations on the interaction of supervisory control processes, namely, task setting, proposal of responses, monitoring, and response inhibition. The model thus supports the fractionation of executive function into multiple, computationally heterogeneous processes. PMID:25071644

  8. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
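
    MIT's top-oil model and the in-service transformer data are not reproduced above; the sketch below only illustrates the output-error idea the paper argues for: simulate the model forward from the measured inputs and fit the parameters to the simulated-minus-measured output, instead of regressing the model equation directly on the noisy measurements. The first-order model form, parameter names and all numbers are assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      # Output-error estimation sketch for a first-order top-oil temperature rise model
      # (illustrative, not MIT's formulation):  tau * d(theta)/dt = -theta + K * load(t)**2.
      # Parameters (tau, K) are fit to the *simulated* output, not to measured derivatives.
      def simulate(params, load, dt):
          tau, K = params
          theta = np.zeros_like(load)
          for i in range(1, load.size):                  # forward-Euler simulation
              theta[i] = theta[i - 1] + dt / tau * (-theta[i - 1] + K * load[i - 1] ** 2)
          return theta

      rng = np.random.default_rng(0)
      dt = 0.1
      t = np.arange(0.0, 50.0, dt)
      load = 0.8 + 0.2 * np.sin(0.2 * t)                 # synthetic per-unit load profile
      measured = simulate((6.0, 30.0), load, dt) + rng.normal(0.0, 0.3, t.size)   # synthetic "data"

      fit = least_squares(lambda p: simulate(p, load, dt) - measured,
                          x0=[3.0, 20.0], bounds=([0.5, 1.0], [50.0, 100.0]))
      print("estimated tau, K:", fit.x)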

  9. Transforming the Gray Factory: The Presidential Leadership of Charles M. Vest and the Architecture of Change at Massachusetts Institute of Technology

    ERIC Educational Resources Information Center

    Daas, Mahesh

    2013-01-01

    The single-site exemplar study presents an in-depth account of the presidential leadership of Charles M. Vest of MIT--the second longest presidency in the Institute's history--and his leadership team's journey between 1990 and 2004 into campus architectural changes that involved over a billion dollars, added a quarter of floor space to MIT's…

  10. [Job crisis and transformations in the new model of accumulation].

    PubMed

    Zerda-Sarmiento, Alvaro

    2012-06-01

    The general and structural crisis capitalism is going through is a token of the difficulties the accumulation model has been dealing with since the 1970s in developed countries. This model has been trying to settle down again on the basis of neoliberal principles and a new technical-economic paradigm. The new accumulation pattern has had an effect on the employment sphere which has been made evident in all the elements that constitute work relationships. In Colombia, the implementation of this model has been partial and segmented. However, its consequences (and the long-term current crisis) have been evident in unemployment, precarious work, segmentation, informal work and restricted and private health insurance. Besides, financial accumulation makes labour profits flow at different levels. The economic model the current government has aimed to implement leads to strengthening exports, making the population's living conditions more difficult. In order to overcome the current state of affairs, the work sphere needs to become more creative. This creative approach should look for new schemes for the expression and mobilization of the work sphere's claims. This is to be done by establishing a different economic model aimed at building a more inclusive future, with social justice. PMID:23258748

  11. A Functional and Structural Mongolian Scots Pine (Pinus sylvestris var. mongolica) Model Integrating Architecture, Biomass and Effects of Precipitation

    PubMed Central

    Wang, Feng; Letort, Véronique; Lu, Qi; Bai, Xuefeng; Guo, Yan; de Reffye, Philippe; Li, Baoguo

    2012-01-01

    Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal tree species in the network of Three-North Shelterbelt for windbreak and sand stabilisation in China. The functions of shelterbelts are highly correlated with the architecture and eco-physiological processes of individual tree. Thus, model-assisted analysis of canopy architecture and function dynamic in Mongolian Scots pine is of value for better understanding its role and behaviour within shelterbelt ecosystems in these arid and semiarid regions. We present here a single-tree functional and structural model, derived from the GreenLab model, which is adapted for young Mongolian Scots pines by incorporation of plant biomass production, allocation, allometric rules and soil water dynamics. The model is calibrated and validated based on experimental measurements taken on Mongolian Scots pines in 2007 and 2006 under local meteorological conditions. Measurements include plant biomass, topology and geometry, as well as soil attributes and standard meteorological data. After calibration, the model allows reconstruction of three-dimensional (3D) canopy architecture and biomass dynamics for trees from one- to six-year-old at the same site using meteorological data for the six years from 2001 to 2006. Sensitivity analysis indicates that rainfall variation has more influence on biomass increment than on architecture, and the internode and needle compartments and the aboveground biomass respond linearly to increases in precipitation. Sensitivity analysis also shows that the balance between internode and needle growth varies only slightly within the range of precipitations considered here. The model is expected to be used to investigate the growth of Mongolian Scots pines in other regions with different soils and climates. PMID:22927982

  12. A functional and structural Mongolian Scots pine (Pinus sylvestris var. mongolica) model integrating architecture, biomass and effects of precipitation.

    PubMed

    Wang, Feng; Letort, Véronique; Lu, Qi; Bai, Xuefeng; Guo, Yan; de Reffye, Philippe; Li, Baoguo

    2012-01-01

    Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal tree species in the network of Three-North Shelterbelt for windbreak and sand stabilisation in China. The functions of shelterbelts are highly correlated with the architecture and eco-physiological processes of individual tree. Thus, model-assisted analysis of canopy architecture and function dynamic in Mongolian Scots pine is of value for better understanding its role and behaviour within shelterbelt ecosystems in these arid and semiarid regions. We present here a single-tree functional and structural model, derived from the GreenLab model, which is adapted for young Mongolian Scots pines by incorporation of plant biomass production, allocation, allometric rules and soil water dynamics. The model is calibrated and validated based on experimental measurements taken on Mongolian Scots pines in 2007 and 2006 under local meteorological conditions. Measurements include plant biomass, topology and geometry, as well as soil attributes and standard meteorological data. After calibration, the model allows reconstruction of three-dimensional (3D) canopy architecture and biomass dynamics for trees from one- to six-year-old at the same site using meteorological data for the six years from 2001 to 2006. Sensitivity analysis indicates that rainfall variation has more influence on biomass increment than on architecture, and the internode and needle compartments and the aboveground biomass respond linearly to increases in precipitation. Sensitivity analysis also shows that the balance between internode and needle growth varies only slightly within the range of precipitations considered here. The model is expected to be used to investigate the growth of Mongolian Scots pines in other regions with different soils and climates. PMID:22927982

  13. A multistage differential transformation method for approximate solution of Hantavirus infection model

    NASA Astrophysics Data System (ADS)

    Gökdoğan, Ahmet; Merdan, Mehmet; Yildirim, Ahmet

    2012-01-01

    The goal of this study is to present a reliable algorithm based on the standard differential transformation method (DTM), called the multi-stage differential transformation method (MsDTM), for solving the Hantavirus infection model. The results obtained by using MsDTM are compared to those obtained by using the Runge-Kutta method (R-K method). The proposed technique is a promising tool for solving this kind of system over long time intervals.
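
    The Hantavirus infection model itself is a system of ODEs and is not reproduced above; the mechanics of MsDTM can be sketched on a single logistic equation, where the differential transform recurrence yields local Taylor coefficients on each stage and the series value at the end of a stage restarts the next one. Step size h and order K below are arbitrary choices.

      import numpy as np

      # Multi-stage differential transformation method (MsDTM) sketch for a single
      # logistic ODE y' = r*y*(1 - y) (illustrative only; the Hantavirus model is a
      # system of ODEs and is not reproduced here). On each stage the DTM recurrence
      #   (k+1)*Y[k+1] = r*( Y[k] - sum_{l=0..k} Y[l]*Y[k-l] )
      # gives local Taylor coefficients; the series value at the end of the stage
      # becomes the initial condition of the next stage.
      def msdtm_logistic(y0, r, t_end, h=0.1, K=8):
          ys, y = [y0], y0
          for _ in range(int(round(t_end / h))):
              Y = np.zeros(K + 1)
              Y[0] = y
              for k in range(K):
                  conv = np.dot(Y[:k + 1], Y[k::-1])        # sum_l Y[l]*Y[k-l]
                  Y[k + 1] = r * (Y[k] - conv) / (k + 1)
              y = sum(Y[k] * h**k for k in range(K + 1))    # evaluate local series at t + h
              ys.append(y)
          return np.array(ys)

      approx = msdtm_logistic(y0=0.1, r=1.0, t_end=5.0)
      t = np.linspace(0.0, 5.0, approx.size)
      exact = 0.1 * np.exp(t) / (0.9 + 0.1 * np.exp(t))     # analytic logistic solution
      print("max error vs analytic solution:", np.abs(approx - exact).max())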

  14. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    NASA Astrophysics Data System (ADS)

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M.

    2014-02-01

    To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as Helfrich's mean and Gaussian bending moduli k_c and k̄ and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties like the core thickness and head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for k̄ and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both k̄ and J_0^m change sign with relevant parameter changes. Although typically k̄ < 0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs with membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J_0^m ≫ 0, especially at low ionic strengths. We anticipate that these changes lead to unstable membranes

  15. Overelaborated synaptic architecture and reduced synaptomatrix glycosylation in a Drosophila classic galactosemia disease model

    PubMed Central

    Jumbo-Lucioni, Patricia; Parkinson, William; Broadie, Kendal

    2014-01-01

    Classic galactosemia (CG) is an autosomal recessive disorder resulting from loss of galactose-1-phosphate uridyltransferase (GALT), which catalyzes conversion of galactose-1-phosphate and uridine diphosphate (UDP)-glucose to glucose-1-phosphate and UDP-galactose, immediately upstream of UDP–N-acetylgalactosamine and UDP–N-acetylglucosamine synthesis. These four UDP-sugars are essential donors for driving the synthesis of glycoproteins and glycolipids, which heavily decorate cell surfaces and extracellular spaces. In addition to acute, potentially lethal neonatal symptoms, maturing individuals with CG develop striking neurodevelopmental, motor and cognitive impairments. Previous studies suggest that neurological symptoms are associated with glycosylation defects, with CG recently being described as a congenital disorder of glycosylation (CDG), showing defects in both N- and O-linked glycans. Here, we characterize behavioral traits, synaptic development and glycosylated synaptomatrix formation in a GALT-deficient Drosophila disease model. Loss of Drosophila GALT (dGALT) greatly impairs coordinated movement and results in structural overelaboration and architectural abnormalities at the neuromuscular junction (NMJ). Dietary galactose and mutation of galactokinase (dGALK) or UDP-glucose dehydrogenase (sugarless) genes are identified, respectively, as critical environmental and genetic modifiers of behavioral and cellular defects. Assaying the NMJ extracellular synaptomatrix with a broad panel of lectin probes reveals profound alterations in dGALT mutants, including depletion of galactosyl, N-acetylgalactosamine and fucosylated horseradish peroxidase (HRP) moieties, which are differentially corrected by dGALK co-removal and sugarless overexpression. Synaptogenesis relies on trans-synaptic signals modulated by this synaptomatrix carbohydrate environment, and dGALT-null NMJs display striking changes in heparan sulfate proteoglycan (HSPG) co-receptor and Wnt ligand

  16. Overelaborated synaptic architecture and reduced synaptomatrix glycosylation in a Drosophila classic galactosemia disease model.

    PubMed

    Jumbo-Lucioni, Patricia; Parkinson, William; Broadie, Kendal

    2014-12-01

    Classic galactosemia (CG) is an autosomal recessive disorder resulting from loss of galactose-1-phosphate uridyltransferase (GALT), which catalyzes conversion of galactose-1-phosphate and uridine diphosphate (UDP)-glucose to glucose-1-phosphate and UDP-galactose, immediately upstream of UDP-N-acetylgalactosamine and UDP-N-acetylglucosamine synthesis. These four UDP-sugars are essential donors for driving the synthesis of glycoproteins and glycolipids, which heavily decorate cell surfaces and extracellular spaces. In addition to acute, potentially lethal neonatal symptoms, maturing individuals with CG develop striking neurodevelopmental, motor and cognitive impairments. Previous studies suggest that neurological symptoms are associated with glycosylation defects, with CG recently being described as a congenital disorder of glycosylation (CDG), showing defects in both N- and O-linked glycans. Here, we characterize behavioral traits, synaptic development and glycosylated synaptomatrix formation in a GALT-deficient Drosophila disease model. Loss of Drosophila GALT (dGALT) greatly impairs coordinated movement and results in structural overelaboration and architectural abnormalities at the neuromuscular junction (NMJ). Dietary galactose and mutation of galactokinase (dGALK) or UDP-glucose dehydrogenase (sugarless) genes are identified, respectively, as critical environmental and genetic modifiers of behavioral and cellular defects. Assaying the NMJ extracellular synaptomatrix with a broad panel of lectin probes reveals profound alterations in dGALT mutants, including depletion of galactosyl, N-acetylgalactosamine and fucosylated horseradish peroxidase (HRP) moieties, which are differentially corrected by dGALK co-removal and sugarless overexpression. Synaptogenesis relies on trans-synaptic signals modulated by this synaptomatrix carbohydrate environment, and dGALT-null NMJs display striking changes in heparan sulfate proteoglycan (HSPG) co-receptor and Wnt ligand levels

  17. Time Domain Transformations to Improve Hydrologic Model Consistency: Parameterization in Flow-Corrected Time

    NASA Astrophysics Data System (ADS)

    Smith, T. J.; Marshall, L. A.; McGlynn, B. L.

    2015-12-01

    Streamflow modeling is highly complex. Beyond the identification and mapping of dominant runoff processes to mathematical models, additional challenges are posed by the switching of dominant streamflow generation mechanisms temporally and dynamic catchment responses to precipitation inputs based on antecedent conditions. As a result, model calibration is required to obtain parameter values that produce acceptable simulations of the streamflow hydrograph. Typical calibration approaches assign equal weight to all observations to determine the best fit over the simulation period. However, the objective function can be biased toward (i.e., implicitly weight) certain parts of the hydrograph (e.g., high streamflows). Data transformations (e.g., logarithmic or square root) scale the magnitude of the observations and are commonly used in the calibration process to reduce implicit weighting or better represent assumptions about the model residuals. Here, we consider a time domain data transformation rather than the more common data domain approaches. Flow-corrected time was previously employed in the transit time modeling literature. Conceptually, it stretches time during high streamflow and compresses time during low streamflow periods. Therefore, streamflow is dynamically weighted in the time domain, with greater weight assigned to periods with larger hydrologic flux. Here, we explore the utility of the flow-corrected time transformation in improving model performance of the Catchment Connectivity Model. Model process fidelity was assessed directly using shallow groundwater connectivity data collected at Tenderfoot Creek Experimental Forest. Our analysis highlights the impact of data transformations on model consistency and parameter sensitivity.
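
    As an illustration of the transformation itself (a minimal sketch under the common transit-time-literature definition, not code from the study; names and values are hypothetical), flow-corrected time accumulates streamflow relative to its mean, so transformed time passes faster during high-flow periods:

      import numpy as np

      def flow_corrected_time(q, dt=1.0):
          # Map clock time to flow-corrected time: tau accumulates Q / mean(Q),
          # stretching time when streamflow is high and compressing it when low.
          q = np.asarray(q, dtype=float)
          return np.cumsum(q / q.mean()) * dt

      # Illustrative daily hydrograph (arbitrary units)
      q = [0.2, 0.3, 2.5, 1.4, 0.6, 0.3, 0.2]
      print(flow_corrected_time(q))  # high-flow days advance tau the fastest

    Calibrating a model against observations indexed in this transformed time weights residuals by hydrologic flux without altering the data values themselves.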

  18. The Blended Advising Model: Transforming Advising with ePortfolios

    ERIC Educational Resources Information Center

    Ambrose, G. Alex; Williamson Ambrose, Laura

    2013-01-01

    This paper provides the rationale and framework for the blended advising model, a coherent approach to fusing technology--particularly the ePortfolio--into advising. The proposed term, "blended advising," is based on blended learning theory and incorporates the deliberate use of the strengths from both face-to-face and online…

  19. Modeling Transformations of Neurodevelopmental Sequences across Mammalian Species

    PubMed Central

    Workman, Alan D.; Charvet, Christine J.; Clancy, Barbara; Darlington, Richard B.

    2013-01-01

    A general model of neural development is derived to fit 18 mammalian species, including humans, macaques, several rodent species, and six metatherian (marsupial) mammals. The goal of this work is to describe heterochronic changes in brain evolution within its basic developmental allometry, and provide an empirical basis to recognize equivalent maturational states across animals. The empirical data generating the model comprises 271 developmental events, including measures of initial neurogenesis, axon extension, establishment, and refinement of connectivity, as well as later events such as myelin formation, growth of brain volume, and early behavioral milestones, to the third year of human postnatal life. The progress of neural events across species is sufficiently predictable that a single model can be used to predict the timing of all events in all species, with a correlation of modeled values to empirical data of 0.9929. Each species' rate of progress through the event scale, described by a regression equation predicting duration of development in days, is highly correlated with adult brain size. Neural heterochrony can be seen in selective delay of retinogenesis in the cat, associated with greater numbers of rods in its retina, and delay of corticogenesis in all species but rodents and the rabbit, associated with relatively larger cortices in species with delay. Unexpectedly, precocial mammals (those unusually mature at birth) delay the onset of first neurogenesis but then progress rapidly through remaining developmental events. PMID:23616543
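
    The "regression equation predicting duration of development in days" can be pictured as a species-specific linear fit of log event timing against a shared event scale; the sketch below illustrates that idea with invented numbers and is not the published model or its fitted parameters.

      import numpy as np

      # Hypothetical shared event scale (0..1) and observed post-conception days
      # for one species; in the published model all species share the event scale.
      event_scale = np.array([0.15, 0.30, 0.45, 0.60, 0.80, 0.95])
      days = np.array([11.0, 14.0, 18.0, 23.0, 32.0, 41.0])

      # Fit log(days) = intercept + slope * event_scale for this species
      slope, intercept = np.polyfit(event_scale, np.log(days), 1)

      # Predict the timing of an unmeasured event located at 0.7 on the scale
      print(np.exp(intercept + slope * 0.7))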

  20. Teachers' Practices and Mental Models: Transformation through Reflection on Action

    ERIC Educational Resources Information Center

    Manrique, María Soledad; Sánchez Abchi, Verónica

    2015-01-01

    This contribution explores the relationship between teaching practices, teaching discourses and teachers' implicit representations and mental models and the way these dimensions change through teacher education (T.E). In order to study these relationships, and based on the assumptions that representations underlie teaching practices and that T.E…

  1. The Healing Web: A Transformative Model for Nursing.

    ERIC Educational Resources Information Center

    Bunkers, Sandra Schmidt

    1992-01-01

    A Navajo legend describes a web woven by Spider Woman that saved the people during a great flood. This article uses the imagery of the web to help education and service think more clearly about nursing's future. The Healing Web project seeks to educate nurses in a futuristic differentiated model. (Author/JOW)

  2. Modeling transformations of neurodevelopmental sequences across mammalian species.

    PubMed

    Workman, Alan D; Charvet, Christine J; Clancy, Barbara; Darlington, Richard B; Finlay, Barbara L

    2013-04-24

    A general model of neural development is derived to fit 18 mammalian species, including humans, macaques, several rodent species, and six metatherian (marsupial) mammals. The goal of this work is to describe heterochronic changes in brain evolution within its basic developmental allometry, and provide an empirical basis to recognize equivalent maturational states across animals. The empirical data generating the model comprises 271 developmental events, including measures of initial neurogenesis, axon extension, establishment, and refinement of connectivity, as well as later events such as myelin formation, growth of brain volume, and early behavioral milestones, to the third year of human postnatal life. The progress of neural events across species is sufficiently predictable that a single model can be used to predict the timing of all events in all species, with a correlation of modeled values to empirical data of 0.9929. Each species' rate of progress through the event scale, described by a regression equation predicting duration of development in days, is highly correlated with adult brain size. Neural heterochrony can be seen in selective delay of retinogenesis in the cat, associated with greater numbers of rods in its retina, and delay of corticogenesis in all species but rodents and the rabbit, associated with relatively larger cortices in species with delay. Unexpectedly, precocial mammals (those unusually mature at birth) delay the onset of first neurogenesis but then progress rapidly through remaining developmental events. PMID:23616543

  3. Population-Dynamic Modeling of Bacterial Horizontal Gene Transfer by Natural Transformation.

    PubMed

    Mao, Junwen; Lu, Ting

    2016-01-01

    Natural transformation is a major mechanism of horizontal gene transfer (HGT) and plays an essential role in bacterial adaptation, evolution, and speciation. Although its molecular underpinnings have been increasingly revealed, natural transformation is not well characterized in terms of its quantitative ecological roles. Here, by using Neisseria gonorrhoeae as an example, we developed a population-dynamic model for natural transformation and analyzed its dynamic characteristics with nonlinear tools and simulations. Our study showed that bacteria capable of natural transformation can display distinct population behaviors ranging from extinction to coexistence and to bistability, depending on their HGT rate and selection coefficient. With the model, we also illustrated the roles of environmental DNA sources (active secretion and passive release) in impacting population dynamics. Additionally, by constructing and utilizing a stochastic version of the model, we examined how noise shapes the steady and dynamic behaviors of the system. Notably, we found that distinct waiting time statistics for HGT events, namely a power-law distribution, an exponential distribution, and a mix of both, are associated with the dynamics in the regimes of extinction, coexistence, and bistability, respectively. This work offers a quantitative illustration of natural transformation by revealing its complex population dynamics and associated characteristics, thereby advancing our ecological understanding of natural transformation as well as HGT in general. PMID:26745428
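
    To make the kind of model concrete, the sketch below integrates a caricature of a population-dynamic formulation: logistic growth of non-transformed and transformed subpopulations coupled through an extracellular DNA pool. The equations, parameter names, and rates are illustrative assumptions, not those of the paper.

      from scipy.integrate import solve_ivp

      def hgt_model(t, y, r=1.0, s=0.1, beta=0.5, alpha=0.2, delta=0.3, K=1.0):
          # y = [N, T, D]: non-transformed cells, transformed cells, free DNA.
          # Uptake of free DNA converts N to T at rate beta; transformed cells
          # pay a selection cost s and release DNA at rate alpha; DNA decays at delta.
          N, T, D = y
          growth = 1.0 - (N + T) / K
          dN = r * N * growth - beta * N * D
          dT = r * (1.0 - s) * T * growth + beta * N * D
          dD = alpha * T - delta * D
          return [dN, dT, dD]

      sol = solve_ivp(hgt_model, (0.0, 50.0), [0.5, 0.01, 0.0], max_step=0.1)
      print(sol.y[:, -1])  # long-run composition under these illustrative rates

    Varying the uptake rate and the selection cost in such a toy system is what moves it between extinction, coexistence, and bistable regimes of the sort the abstract describes.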

  4. Modeling and identification of Rosen-type transformer in nonlinear behavior.

    PubMed

    Pigache, François; Nadal, Clément

    2011-12-01

    This paper addresses the modeling of a piezoelectric transformer under nonlinear operating conditions. For applications with high output loads, nonlinear behavior becomes non-negligible. First, the origins of the nonlinearities and theoretical approaches are discussed. Then, the model is developed for a typical Rosen-type transformer and experimental investigations are presented. The results are used to confirm the validity of the analytical model and of the methodology used to express the terms added to the standard constitutive piezoelectric relations. PMID:23443692

  5. Foundational model of structural connectivity in the nervous system with a schema for wiring diagrams, connectome, and basic plan architecture.

    PubMed

    Swanson, Larry W; Bota, Mihail

    2010-11-30

    The nervous system is a biological computer integrating the body's reflex and voluntary environmental interactions (behavior) with a relatively constant internal state (homeostasis)--promoting survival of the individual and species. The wiring diagram of the nervous system's structural connectivity provides an obligatory foundational model for understanding functional localization at molecular, cellular, systems, and behavioral organization levels. This paper provides a high-level, downwardly extendible, conceptual framework--like a compass and map--for describing and exploring in neuroinformatics systems (such as our Brain Architecture Knowledge Management System) the structural architecture of the nervous system's basic wiring diagram. For this, the Foundational Model of Connectivity's universe of discourse is the structural architecture of nervous system connectivity in all animals at all resolutions, and the model includes two key elements--a set of basic principles and an internally consistent set of concepts (defined vocabulary of standard terms)--arranged in an explicitly defined schema (set of relationships between concepts) allowing automatic inferences. In addition, rules and procedures for creating and modifying the foundational model are considered. Controlled vocabularies with broad community support typically are managed by standing committees of experts that create and refine boundary conditions, and a set of rules that are available on the Web. PMID:21078980

  6. Irreversibility of T-Cell Specification: Insights from Computational Modelling of a Minimal Network Architecture

    PubMed Central

    Manesso, Erica; Kueh, Hao Yuan; Freedman, George; Rothenberg, Ellen V.

    2016-01-01

    Background/Objectives A cascade of gene activations under the control of Notch signalling is required during T-cell specification, when T-cell precursors gradually lose the potential to undertake other fates and become fully committed to the T-cell lineage. We elucidate how the gene/protein dynamics for a core transcriptional module governs this important process by computational means. Methods We first assembled existing knowledge about transcription factors known to be important for T-cell specification to form a minimal core module consisting of TCF-1, GATA-3, BCL11B, and PU.1 aiming at dynamical modeling. Model architecture was based on published experimental measurements of the effects on each factor when each of the others is perturbed. While several studies provided gene expression measurements at different stages of T-cell development, pure time series are not available, thus precluding a straightforward study of the dynamical interactions among these genes. We therefore translate stage dependent data into time series. A feed-forward motif with multiple positive feed-backs can account for the observed delay between BCL11B versus TCF-1 and GATA-3 activation by Notch signalling. With a novel computational approach, all 32 possible interactions among Notch signalling, TCF-1, and GATA-3 are explored by translating combinatorial logic expressions into differential equations for BCL11B production rate. Results Our analysis reveals that only 3 of 32 possible configurations, where GATA-3 works as a dimer, are able to explain not only the time delay, but very importantly, also give rise to irreversibility. The winning models explain the data within the 95% confidence region and are consistent with regard to decay rates. Conclusions This first generation model for early T-cell specification has relatively few players. Yet it explains the gradual transition into a committed state with no return. Encoding logics in a rate equation setting allows determination of

  7. Multi phase field model for solid state transformation with elastic strain

    NASA Astrophysics Data System (ADS)

    Steinbach, I.; Apel, M.

    2006-05-01

    A multi phase field model is presented for the investigation of the effect of transformation strain on the transformation kinetics, morphology and thermodynamic stability in multi phase materials. The model conserves homogeneity of stress in the diffuse interface between elastically inhomogeneous phases, in which respect it differs from previous models. The model is formulated consistently with the multi phase field model for diffusional and surface driven phase transitions [I. Steinbach, F. Pezzolla, B. Nestler, M. Seeßelberg, R. Prieler, G.J. Schmitz, J.L.L. Rezende, A phase field concept for multiphase systems, Physica D 94 (1996) 135-147; J. Tiaden, B. Nestler, H.J. Diepers, I. Steinbach, The multiphase-field model with an integrated concept for modeling solute diffusion, Physica D 115 (1998) 73-86; I. Steinbach, F. Pezzolla, A generalized field method for multiphase transformations using interface fields, Physica D 134 (1999) 385] and gives a consistent description of interfacial tension, multi phase thermodynamics and elastic stress balance in multiple junctions between an arbitrary number of grains and phases. Some aspects of the model are demonstrated with respect to numerical accuracy and the relation between transformation strain, external stress and thermodynamic equilibrium.

  8. Climate change induced transformations of agricultural systems: insights from a global model

    NASA Astrophysics Data System (ADS)

    Leclère, D.; Havlík, P.; Fuss, S.; Schmid, E.; Mosnier, A.; Walsh, B.; Valin, H.; Herrero, M.; Khabarov, N.; Obersteiner, M.

    2014-12-01

    Climate change might impact crop yields considerably and anticipated transformations of agricultural systems are needed in the coming decades to sustain affordable food provision. However, decision-making on transformational shifts in agricultural systems is plagued by uncertainties concerning the nature and geography of climate change, its impacts, and adequate responses. Locking agricultural systems into inadequate transformations that are costly to adjust is a significant risk, and this acts as an incentive to delay action. It is crucial to gain insight into how much transformation is required from agricultural systems, how robust such strategies are, and how we can defuse the associated challenge for decision-making. Implementing a definition related to large changes in resource use into a global impact assessment modelling framework, we find transformational adaptations to be required of agricultural systems in most regions by the 2050s in order to cope with climate change. However, these transformations differ widely across climate change scenarios: uncertainties in the large-scale development of irrigation span all continents from the 2030s on and affect two-thirds of regions by the 2050s. Meanwhile, a significant but uncertain reduction of major agricultural areas affects the Northern Hemisphere's temperate latitudes, while increases to non-agricultural zones could be large but uncertain in one-third of regions. To help reduce the associated challenge for decision-making, we propose a methodology that uses scenario analysis to explore which transformations could be required, when and where they could be required, and why they are uncertain.

  9. Time transformation for random walks in the quenched trap model.

    PubMed

    Burov, S; Barkai, E

    2011-04-01

    We investigate subdiffusion in the quenched trap model by mapping the problem onto a new stochastic process: Brownian motion stopped at the operational time S_α = Σ_{x=-∞}^{+∞} (n_x)^α, where n_x is the visitation number at site x and α is a measure of the disorder. In the limit of zero temperature we recover the renormalization group solution found by Monthus. Our approach is an alternative to the renormalization group and is capable of dealing with any disorder strength. PMID:21561177
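
    A minimal numerical sketch of this mapping (our own illustration; the parameter choices and names are not the authors'): simulate a one-dimensional simple random walk, count the visits to each site, and sum the visitation numbers raised to the power α.

      import numpy as np
      from collections import Counter

      def operational_time(n_steps, alpha=0.5, seed=0):
          # S_alpha = sum over sites x of n_x**alpha, where n_x counts visits to x.
          rng = np.random.default_rng(seed)
          positions = np.cumsum(rng.choice([-1, 1], size=n_steps))
          visits = Counter(positions)
          visits[0] += 1  # include the starting site
          return sum(n ** alpha for n in visits.values())

      print(operational_time(10_000, alpha=0.5))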

  10. Box–Cox Transformation and Random Regression Models for Fecal egg Count Data

    PubMed Central

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P.; Sonstegard, Tad S.; Cobuci, Jaime Araujo; Gasbarre, Louis C.

    2012-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box–Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) fitted the FEC data best. Results indicated that transformation of the FEC data using the Box–Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained between weeks 12 and 26 of the 26-week experimental challenge period are genetically correlated. PMID:22303406
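
    For reference, the one-parameter Box–Cox transformation underlying the family referred to above is y(λ) = (y^λ - 1)/λ for λ ≠ 0 and log y for λ = 0. The snippet below (an illustration with invented counts, not the study's data or its extended transformation) estimates λ by maximum likelihood and checks the reduction in skewness:

      import numpy as np
      from scipy import stats

      fec = np.array([0, 3, 12, 150, 800, 2400], dtype=float)  # illustrative egg counts
      shifted = fec + 1.0  # Box-Cox needs strictly positive values; shift the zero counts
      transformed, lam = stats.boxcox(shifted)
      print(lam, stats.skew(shifted), stats.skew(transformed))  # skewness shrinks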

  11. Animation Strategies for Smooth Transformations Between Discrete Lods of 3d Building Models

    NASA Astrophysics Data System (ADS)

    Kada, Martin; Wichmann, Andreas; Filippovska, Yevgeniya; Hermes, Tobias

    2016-06-01

    The cartographic 3D visualization of urban areas has experienced tremendous progress over the last years. An increasing number of applications operate interactively in real-time and thus require advanced techniques to improve the quality and time response of dynamic scenes. This article concentrates on the discussion of strategies for smooth transformation between two discrete levels of detail (LOD) of 3D building models that are represented as restricted triangle meshes. Because the operation order determines the geometrical and topological properties of the transformation process as well as its visual perception by a human viewer, three different strategies are proposed and subsequently analyzed. The simplest one orders transformation operations by the length of the edges to be collapsed, while the other two strategies introduce a general transformation direction in the form of a moving plane. This plane either pushes the nodes that need to be removed, e.g. during the transformation of a detailed LOD model to a coarser one, towards the main building body, or triggers the edge collapse operations used as transformation paths for the cartographic generalization.

  12. Laser Hardening Prediction Tool Based On a Solid State Transformations Numerical Model

    SciTech Connect

    Martinez, S.; Ukar, E.; Lamikiz, A.

    2011-01-17

    This paper presents a tool to predict the hardened layer in selective laser hardening processes, where the laser beam heats the part locally while the bulk acts as a heat sink. The temperature field in the workpiece is predicted accurately by a numerical model that provides a three-dimensional transient solution for heating and makes it possible to introduce different laser sources. Coupled to the thermal field, a kinetic model based on the Johnson-Mehl-Avrami equation was used; considering this equation, an experimental adjustment of the transformation parameters was carried out to obtain the continuous heating transformation (CHT) diagrams. With the temperature field and the CHT diagrams, the model predicts the percentage of base material converted into austenite. These two quantities are used as a first step to estimate the depth of the hardened layer in the part. The model has been adjusted and validated with experimental data for DIN 1.2379, a cold-work tool steel typically used in the mold and die making industry. This steel presents solid-state diffusive transformations at relatively low temperatures. These transformations must be considered in order to obtain good accuracy of the temperature field prediction during the heating phase. For model validation, the surface temperature measured by pyrometry, the thermal field, as well as the hardened layer obtained from a metallographic study, were compared with the model data, showing good agreement.
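
    The Johnson-Mehl-Avrami kinetics mentioned above relate the transformed fraction to time at temperature. In the usual isothermal form (quoted as background; k and n are the experimentally adjusted parameters, with an Arrhenius-type temperature dependence assumed for k),

        X(t) = 1 - \exp(-k(T)\, t^{n}), \qquad k(T) = k_0 \exp(-Q / (R T))

    so that a predicted temperature history can be converted, point by point, into a fraction of base material austenitized during heating.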

  13. Modeling the stress-induced transformation behavior of shape memory alloy single crystals

    SciTech Connect

    Buchheit, T.E.; Kumpf, S.L.; Wert, J.A.

    1995-11-01

    The phenomenological theory of martensite crystallography has been used to determine habit plane/shear direction combinations for stress-induced transformation of NiTi, Cu-Ni-Al and NiAl shape memory alloys (SMA) to twin-related martensite correspondence variant pairs. By considering the habit plane/shear direction combinations as unidirectional shear systems, generalized Schmid's law is then used to predict the mechanical response of unconstrained single crystals of each SMA. Model results include axial transformation strain and plane stress transformation surfaces as a function of crystal orientation. Comparison of the predicted mechanical response results with the habit plane/shear direction combinations reveals a link between the anisotropy and asymmetry of the mechanical response of SMA single crystals and the crystallography of the martensitic transformation.
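
    As background on the criterion, generalized Schmid's law treats each habit-plane/shear-direction combination as a unidirectional transformation system that activates when its resolved shear stress reaches a critical value,

        \tau_{RSS} = \sigma : (m \otimes n) = \sigma \cos\phi \cos\lambda \ge \tau_{cr}

    where n is the habit-plane normal, m the transformation shear direction and, for a uniaxial stress of magnitude \sigma, \phi and \lambda are the angles between the loading axis and n and m, respectively. Because the transformation shear acts in one direction only, a system is active for only one sign of \tau_{RSS}, which is one source of the asymmetry of the mechanical response noted in the abstract.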

  14. Three-Dimensional Numerical Model Considering Phase Transformation in Friction Stir Welding of Steel

    NASA Astrophysics Data System (ADS)

    Cho, Hoon-Hwe; Kim, Dong-Wan; Hong, Sung-Tae; Jeong, Yong-Ha; Lee, Keunho; Cho, Yi-Gil; Kang, Suk Hoon; Han, Heung Nam

    2015-12-01

    A three-dimensional (3D) thermo-mechanical model is developed considering the phase transformation occurring during the friction stir welding (FSW) of steel, and the simulated result is compared with both the measured temperature distribution during FSW and the microstructural changes after FSW. The austenite grain size (AGS) decreases significantly because of the frictional heat and severe plastic deformation generated during FSW, and the decreased AGS accelerates the diffusional phase transformation during FSW. The ferrite phase, one of the diffusional phases, is developed mainly in mild steel, whereas the bainite phase transformation occurs significantly in high-strength steel with large hardenability. Additionally, transformation-induced heat is observed mainly in the stir zone during FSW. The measured temperature distribution and phase fraction agree fairly well with the predicted data.

  15. From PCK to TPACK: Developing a Transformative Model for Pre-Service Science Teachers

    NASA Astrophysics Data System (ADS)

    Jang, Syh-Jong; Chen, Kuan-Chung

    2010-12-01

    New science teachers should be equipped with the ability to integrate and design curriculum and technology for innovative teaching. How to integrate technology into pre-service science teachers' pedagogical content knowledge is therefore an important issue. This study examined the impact of a transformative model that integrates technology and peer coaching on the development of technological pedagogical and content knowledge (TPACK) in pre-service science teachers. A transformative model and an online system were designed to restructure science teacher education courses. Participants in this study included an instructor and 12 pre-service teachers. The main sources of data included written assignments, online data, reflective journals, videotapes and interviews. The study drew on four views, namely the comprehensive, imitative, transformative and integrative views, to explore the development of TPACK. The model helped pre-service teachers develop technological pedagogical methods and strategies for integrating subject-matter knowledge into science lessons, and further enhanced their TPACK.

  16. Study on Information Management for the Conservation of Traditional Chinese Architectural Heritage - 3d Modelling and Metadata Representation

    NASA Astrophysics Data System (ADS)

    Yen, Y. N.; Weng, K. H.; Huang, H. Y.

    2013-07-01

    After over 30 years of practice and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. Documenting and digitizing these unique heritages across their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, an octagonal structure with 333 elements of 8 types, as a case study in digitization practice. The application of metadata representation and 3D modelling are the two key issues discussed. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations aid the general understanding of architectural heritage, software such as Revit and SketchUp can, at this stage, only be used to model basic visual representations and is ineffective in documenting the additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for use in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. The research recommends SketchUp as a tool for present modelling needs and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.

  17. Architecture & Environment

    ERIC Educational Resources Information Center

    Erickson, Mary; Delahunt, Michael

    2010-01-01

    Most art teachers would agree that architecture is an important form of visual art, but they do not always include it in their curriculums. In this article, the authors share core ideas from "Architecture and Environment," a teaching resource that they developed out of a long-term interest in teaching architecture and their fascination with the…

  18. Robotic Intelligence Kernel: Architecture

    Energy Science and Technology Software Center (ESTSC)

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  19. Analysis of central enterprise architecture elements in models of six eHealth projects.

    PubMed

    Virkanen, Hannu; Mykkänen, Juha

    2014-01-01

    Large-scale initiatives for eHealth services have been established in many countries on regional or national level. The use of Enterprise Architecture has been suggested as a methodology to govern and support the initiation, specification and implementation of large-scale initiatives including the governance of business changes as well as information technology. This study reports an analysis of six health IT projects in relation to Enterprise Architecture elements, focusing on central EA elements and viewpoints in different projects. PMID:25160162

  20. A transformative model for undergraduate quantitative biology education.

    PubMed

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949

  1. Compositional Specification of Software Architecture

    NASA Technical Reports Server (NTRS)

    Penix, John; Lau, Sonie (Technical Monitor)

    1998-01-01

    This paper describes our experience using parameterized algebraic specifications to model properties of software architectures. The goal is to model the decomposition of requirements independent of the style used to implement the architecture. We begin by providing an overview of the role of architecture specification in software development. We then describe how architecture specifications are built up from component and connector specifications and give an overview of insights gained from a case study used to validate the method.

  2. On the Extended Lorentz Transformation Model and Its Application to Superluminal Neutrinos

    NASA Astrophysics Data System (ADS)

    Hamieh, Salah D.

    2012-09-01

    In this paper, we consider the apparent superluminal speed of neutrinos in their travel from CERN to Gran Sasso, as measured by the OPERA experiment, within the framework of the Extended Lorentz Transformation Model. The model is based on a natural extension of the Lorentz transformation by Wick rotation. Scalar and Dirac fields are considered and invariance under the new Lorentz group is discussed. Moreover, an extension of quantum mechanics to accommodate new particles is considered using the newly proposed Generalized-C quantum mechanics. A two-dimensional representation of the new Dirac equation is then formulated and its solution is calculated.

  3. PCB/transformer techno-economic analysis model: User manual, volume 2

    NASA Astrophysics Data System (ADS)

    Plum, Martin M.; Geimer, Ray M.

    1989-02-01

    This model is designed to evaluate the economic viability of replacing or retrofilling a PCB transformer with numerous non-PCB transformer options. Replacement options include conventional, amorphous, amorphous-liquid filled, or amorphous-liquid filled-high performance transformers. The retrofill option is the process that removes and disposes of the PCB dielectric and refills the transformer with a non-PCB dielectric. Depending on the data available, the skills of the user, and the intent of the analysis, there are three model options available to the user. For practical purposes, Level 1 requires the least input data from the user while Level 3 requires the greatest quantity of data. This manual is designed for people who have minimal experience with Lotus 123 and are familiar with electric transformers. It covers system requirements, how to install the model on your system, how to get started, how to move around in the model, how to make changes to the model data, how to print information, how to save your work, and how to exit from the model.

  4. Linking lipid architecture to bilayer structure and mechanics using self-consistent field modelling

    SciTech Connect

    Pera, H.; Kleijn, J. M.; Leermakers, F. A. M.

    2014-02-14

    To understand how lipid architecture determines the lipid bilayer structure and its mechanics, we implement a molecularly detailed model that uses the self-consistent field theory. This numerical model accurately predicts parameters such as the Helfrich mean and Gaussian bending moduli k_c and bar{k} and the preferred monolayer curvature J_0^m, and also delivers structural membrane properties like the core thickness and the head group position and orientation. We studied how these mechanical parameters vary with system variations, such as lipid tail length, membrane composition, and those parameters that control the lipid tail and head group solvent quality. For the membrane composition, negatively charged phosphatidylglycerol (PG) or zwitterionic phosphatidylcholine (PC) and -ethanolamine (PE) lipids were used. In line with experimental findings, we find that the values of k_c and the area compression modulus k_A are always positive. They respond similarly to parameters that affect the core thickness, but differently to parameters that affect the head group properties. We found that the trends for bar{k} and J_0^m can be rationalised by the concept of Israelachvili's surfactant packing parameter, and that both bar{k} and J_0^m change sign with relevant parameter changes. Although typically bar{k} < 0, membranes can form stable cubic phases when the Gaussian bending modulus becomes positive, which occurs with membranes composed of PC lipids with long tails. Similarly, negative monolayer curvatures appear when a small head group such as PE is combined with long lipid tails, which hints towards the stability of inverse hexagonal phases at the cost of the bilayer topology. To prevent the destabilisation of bilayers, PG lipids can be mixed into these PC or PE lipid membranes. Progressive loading of bilayers with PG lipids leads to highly charged membranes, resulting in J_0^m ≫ 0, especially at low ionic strengths

  5. A three-phase cylinder model for residual and transformational stresses in SMA composites

    SciTech Connect

    Berman, J.B.; White, S.R.

    1994-12-31

    SMA composites are a class of smart materials in which shape memory alloy (SMA) actuators are embedded in a polymer matrix composite. The difference in thermal expansion between the SMA and the host material leads to residual stresses during processing. Similarly, the SMA transformations from martensite to austenite, or the reverse, also generate stresses. These stresses acting in combination can lead to SMA/epoxy interfacial debonding. In this study the residual and transformational stresses are investigated for an SMA wire embedded in a graphite/epoxy composite. A three-phase micromechanical model is developed. The SMA wire is assumed to behave as a thermoelastic material. Nitinol™ SMA austenitic and martensitic transformations are modeled using linear piecewise interpolation of the experimental data. The interphase is modeled as a thermoelastic polymer. A transversely isotropic thermoelastic composite is used for the outer phase. Stress-free conditions are assumed immediately before cool down from the cure temperature. The effect of SMA and coating properties on residual and transformational stresses is evaluated. A decrease in stresses at the composite/coating interface is predicted through the use of thick, compliant coatings. Reducing the recovery strain and moving the transformation to higher temperatures are also effective in reducing residual stresses.

  6. Project Integration Architecture: Application Architecture

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications is enabled.

  7. PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices

    ERIC Educational Resources Information Center

    Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões

    2013-01-01

    This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…

  8. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability

    PubMed Central

    Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.

    2008-01-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259

  9. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability.

    PubMed

    Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A

    2008-02-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG). PMID:17512259

  10. Influence of parameter values and variances and algorithm architecture in ConsExpo model on modeled exposures.

    PubMed

    Arnold, Susan F; Ramachandran, Gurumurthy

    2014-01-01

    This study evaluated the influence of parameter values and variances and model architecture on modeled exposures, and identified important data gaps that influence lack-of-knowledge-related uncertainty, using ConsExpo 4.1 as an illustrative case study. Understanding the influential determinants in exposure estimates enables more informed and appropriate use of this model and the resulting exposure estimates. In exploring the influence of parameter placement in an algorithm and of the values and variances chosen to characterize the parameters within ConsExpo, "sensitive" and "important" parameters were identified: product amount, weight fraction, exposure duration, exposure time, and ventilation rate were deemed "important," or "always sensitive." With this awareness, exposure assessors can strategically focus on acquiring the most robust estimates for these parameters. ConsExpo relies predominantly on three algorithms to assess the default scenarios: the inhalation vapor evaporation equation using Langmuir mass transfer, the dermal instant application with diffusion through the skin, and the oral ingestion by direct uptake algorithm. These algorithms, which do not necessarily render health conservative estimates, account for 87, 89 and 59% of the inhalation, dermal and oral default scenario assessments, respectively, according them greater influence relative to the less frequently used algorithms. Default data provided in ConsExpo may be useful to initiate assessments, but are insufficient for determining exposure acceptability or setting policy, as parameters defined by highly uncertain values produce biased estimates that may not be health conservative. Furthermore, this lack-of-knowledge uncertainty makes the magnitude of this bias uncertain. Significant data gaps persist for product amount, exposure time, and exposure duration. These "important" parameters exert influence in requiring broad values and variances to account for their uncertainty. Prioritizing

  11. Phase Field Modeling of Cyclic Austenite-Ferrite Transformations in Fe-C-Mn Alloys

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Zhu, Benqiang; Militzer, Matthias

    2016-08-01

    Three different approaches for considering the effect of Mn on the austenite-ferrite interface migration in an Fe-0.1C-0.5Mn alloy have been coupled with a phase field model (PFM). In the first approach (PFM-I), only long-range C diffusion is considered while Mn is assumed to be immobile during the phase transformations. Both long-range C and Mn diffusions are considered in the second approach (PFM-II). In the third approach (PFM-III), long-range C diffusion is considered in combination with the Gibbs energy dissipation due to Mn diffusion inside the interface instead of solving for long-range diffusion of Mn. The three PFM approaches are first benchmarked with isothermal austenite-to-ferrite transformation at 1058.15 K (785 °C) before considering cyclic phase transformations. It is found that PFM-II can predict the stagnant stage and growth retardation experimentally observed during cycling transformations, whereas PFM-III can only replicate the stagnant stage but not the growth retardation and PFM-I predicts neither the stagnant stage nor the growth retardation. The results of this study suggest a significant role of Mn redistribution near the interface on reducing transformation rates, which should, therefore, be considered in future simulations of austenite-ferrite transformations in steels, particularly at temperatures in the intercritical range and above.

  12. Phase Field Modeling of Cyclic Austenite-Ferrite Transformations in Fe-C-Mn Alloys

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Zhu, Benqiang; Militzer, Matthias

    2016-06-01

    Three different approaches for considering the effect of Mn on the austenite-ferrite interface migration in an Fe-0.1C-0.5Mn alloy have been coupled with a phase field model (PFM). In the first approach (PFM-I), only long-range C diffusion is considered while Mn is assumed to be immobile during the phase transformations. Both long-range C and Mn diffusions are considered in the second approach (PFM-II). In the third approach (PFM-III), long-range C diffusion is considered in combination with the Gibbs energy dissipation due to Mn diffusion inside the interface instead of solving for long-range diffusion of Mn. The three PFM approaches are first benchmarked with isothermal austenite-to-ferrite transformation at 1058.15 K (785 °C) before considering cyclic phase transformations. It is found that PFM-II can predict the stagnant stage and growth retardation experimentally observed during cycling transformations, whereas PFM-III can only replicate the stagnant stage but not the growth retardation and PFM-I predicts neither the stagnant stage nor the growth retardation. The results of this study suggest a significant role of Mn redistribution near the interface on reducing transformation rates, which should, therefore, be considered in future simulations of austenite-ferrite transformations in steels, particularly at temperatures in the intercritical range and above.

  13. Effect of Colorspace Transformation, the Illuminance Component, and Color Modeling on Skin Detection

    SciTech Connect

    Jayaram, S; Schmugge, S; Shin, M C; Tsap, L V

    2004-03-22

    Skin detection is an important preliminary process in human motion analysis. It is commonly performed in three steps: transforming the pixel color to a non-RGB colorspace, dropping the illumination component of skin color, and classifying by modeling the skin color distribution. In this paper, we evaluate the effect of these three steps on the skin detection performance. The importance of this study lies in a new comprehensive colorspace and color modeling testing methodology that allows the best choices for skin detection to be made. Combinations of nine colorspaces, the presence or absence of the illuminance component, and the two color modeling approaches are compared. The performance is measured by using a receiver operating characteristic (ROC) curve on a large dataset of 805 images with manual ground truth. The results reveal that (1) the absence of the illuminance component decreases performance, (2) skin color modeling has a greater impact than colorspace transformation, and (3) colorspace transformations can improve performance in certain instances. We found that the best performance was obtained by transforming the pixel color to the SCT, HSI, or CIELAB colorspaces, keeping the illuminance component, and modeling the color with the histogram approach.
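
    A minimal sketch of the histogram-based color modeling step (our own illustration; the chroma coordinates, bin count, and synthetic data below are assumptions, not the paper's implementation): build normalized two-dimensional histograms for skin and non-skin pixels with the illuminance component dropped, then classify by their likelihood ratio. Sweeping the decision threshold yields the ROC curve.

      import numpy as np

      def fit_chroma_histogram(pixels, bins=32):
          # pixels: N x 2 array of chroma coordinates in [0, 1] (illuminance dropped)
          hist, _ = np.histogramdd(pixels, bins=bins, range=[(0, 1), (0, 1)], density=True)
          return hist

      def skin_likelihood_ratio(pixels, skin_hist, nonskin_hist, bins=32, eps=1e-9):
          # Look up each pixel's bin and return P(color | skin) / P(color | non-skin)
          idx = np.clip((pixels * bins).astype(int), 0, bins - 1)
          return (skin_hist[idx[:, 0], idx[:, 1]] + eps) / (nonskin_hist[idx[:, 0], idx[:, 1]] + eps)

      rng = np.random.default_rng(0)
      skin = rng.normal([0.55, 0.35], 0.05, size=(5000, 2)).clip(0, 1)  # synthetic skin chroma
      nonskin = rng.uniform(0, 1, size=(5000, 2))                       # synthetic background
      s_hist, n_hist = fit_chroma_histogram(skin), fit_chroma_histogram(nonskin)
      test = np.array([[0.56, 0.34], [0.10, 0.90]])
      print(skin_likelihood_ratio(test, s_hist, n_hist) > 1.0)  # expected: [True, False]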

  14. Modelling static 3-D spatial background error covariances - the effect of vertical and horizontal transform order

    NASA Astrophysics Data System (ADS)

    Wlasak, M. A.; Cullen, M. J. P.

    2014-06-01

    A major difference in the formulation of the univariate part of static background error covariance models for use in global operational 4DVAR arises from the order in which the horizontal and vertical transforms are applied. This is because the atmosphere is non-separable, with large horizontal scales generally tied to large vertical scales and small horizontal scales tied to small vertical scales. Also, horizontal length scales increase dramatically as one enters the stratosphere. A study is presented which evaluates the strengths and weaknesses of each approach with the Met Office Unified Model. It is shown that if the vertical transform is applied as a function of horizontal wavenumber then the horizontal globally-averaged variance and the homogeneous, isotropic length scale on each model level for each control variable of the training data is preserved by the covariance model. In addition the wind variance and associated length scales are preserved as the scheme preserves the variances and length scales of horizontal derivatives. If the vertical transform is applied in physical space, it is possible to make it a function of latitude at the cost of not preserving the variances and length scales of the horizontal derivatives. Summer and winter global 4DVAR trials have been run with both background error covariance models. A clear benefit is seen in the fit to observations when the vertical transform is in spectral space and is a function of total horizontal wavenumber.

  15. Transform-invariant feature based functional MR image registration and neural activity modelling.

    PubMed

    Gong, Jiaqi; Hao, Qi; Hu, Fei

    2013-01-01

    In this paper, a set of non-rigid image registration and neural activity modelling methods using functional MR images (fMRI) is proposed based on transform-invariant feature representations. Our work makes two contributions. First, we propose to use a transform-invariant feature to improve the image registration performance of Iterative Closest Point (ICP) based methods. The proposed feature utilises Gaussian Mixture Models (GMM) to describe the local topological structure of fMRI data. Second, we propose to use a 3-dimensional Scale-Invariant Feature Transform (SIFT) based descriptor to represent neural activities related to drinking behaviour. As a result, neural activity patterns of different subjects drinking water or ingesting glucose can be recognised, with strong robustness against various artefacts. PMID:23900434

  16. Construction of a dc-dc transformer - A model of transitory behavior under load

    NASA Astrophysics Data System (ADS)

    Louail, G.

    A numerical model is presented for the construction of high-performance dc-dc transformers for industrial applications, taking into account a variety of control techniques. Control logic to minimize fluctuations during load-dumping intervals is defined. Problems linked to the demagnetization of the core are investigated and solutions are proposed. Attention is given to the selection of a commutator for a given transformer application, and the functional characteristics of bipolar and MOS transistors are described. The principles are applied to the construction of a prototype second-order transformer which is amenable to modular use. Finally, two methods of numerical modeling are presented: the first with simplified hypotheses for use with a hand calculator, and the second, more rigorous, using discretized equations in a static regime. It is shown that a sudden power surge is the most critical phase for a power commutator. A progressive loading logic is devised, and the fabrication of 150 A commutators is indicated.

  17. Information architecture. Volume 3: Guidance

    SciTech Connect

    1997-04-01

    The purpose of this document, as presented in Volume 1, The Foundations, is to assist the Department of Energy (DOE) in developing and promulgating information architecture guidance. This guidance is aimed at increasing the development of information architecture as a Departmentwide management best practice. This document describes departmental information architecture principles and minimum design characteristics for systems and infrastructures within the DOE Information Architecture Conceptual Model, and establishes a Departmentwide standards-based architecture program. The publication of this document fulfills the commitment to address guiding principles, promote standard architectural practices, and provide technical guidance. This document guides the transition from the baseline or de facto Departmental architecture through approved information management program plans and budgets to the future vision architecture. This document also represents another major step toward establishing a well-organized, logical foundation for the DOE information architecture.

  18. Modelling the self-assembly of elastomeric proteins provides insights into the evolution of their domain architectures.

    PubMed

    Song, Hongyan; Parkinson, John

    2012-01-01

    Elastomeric proteins have evolved independently multiple times through evolution. Produced as monomers, they self-assemble into polymeric structures that impart properties of stretch and recoil. They are composed of an alternating domain architecture of elastomeric domains interspersed with cross-linking elements. While the former provide the elasticity as well as help drive the assembly process, the latter serve to stabilise the polymer. Changes in the number and arrangement of the elastomeric and cross-linking regions have been shown to significantly impact their assembly and mechanical properties. However, to date, such studies are relatively limited. Here we present a theoretical study that examines the impact of domain architecture on polymer assembly and integrity. At the core of this study is a novel simulation environment that uses a model of diffusion limited aggregation to simulate the self-assembly of rod-like particles with alternating domain architectures. Applying the model to different domain architectures, we generate a variety of aggregates which are subsequently analysed by graph-theoretic metrics to predict their structural integrity. Our results show that the relative length and number of elastomeric and cross-linking domains can significantly impact the morphology and structural integrity of the resultant polymeric structure. For example, the most highly connected polymers were those constructed from asymmetric rods consisting of relatively large cross-linking elements interspersed with smaller elastomeric domains. In addition to providing insights into the evolution of elastomeric proteins, simulations such as those presented here may prove valuable for the tuneable design of new molecules that may be exploited as useful biomaterials. PMID:22396636

  19. An architecture for the development of real-time fault diagnosis systems using model-based reasoning

    NASA Technical Reports Server (NTRS)

    Hall, Gardiner A.; Schuetzle, James; Lavallee, David; Gupta, Uday

    1992-01-01

    Presented here is an architecture for implementing real-time telemetry based diagnostic systems using model-based reasoning. First, we describe Paragon, a knowledge acquisition tool for offline entry and validation of physical system models. Paragon provides domain experts with a structured editing capability to capture the physical component's structure, behavior, and causal relationships. We next describe the architecture of the run time diagnostic system. The diagnostic system, written entirely in Ada, uses the behavioral model developed offline by Paragon to simulate expected component states as reflected in the telemetry stream. The diagnostic algorithm traces causal relationships contained within the model to isolate system faults. Since the diagnostic process relies exclusively on the behavioral model and is implemented without the use of heuristic rules, it can be used to isolate unpredicted faults in a wide variety of systems. Finally, we discuss the implementation of a prototype system constructed using this technique for diagnosing faults in a science instrument. The prototype demonstrates the use of model-based reasoning to develop maintainable systems with greater diagnostic capabilities at a lower cost.
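
    The toy sketch below illustrates the core idea of tracing causal relationships in a behavioral model to isolate faults from telemetry discrepancies. The component names, causal graph, expected values, and tolerances are hypothetical; this is not the Paragon/Ada implementation described in the record.

```python
# Compare model-predicted telemetry with observed values, then intersect the
# causal predecessors of every discrepant point to find candidate faults
# (a single-fault assumption is made for simplicity).
causes = {                                   # telemetry point -> influencing components
    "detector_temp": ["cooler", "power_bus"],
    "bus_voltage": ["power_bus"],
}
expected = {"detector_temp": 80.0, "bus_voltage": 28.0}    # from the behavioral model
observed = {"detector_temp": 95.0, "bus_voltage": 28.1}    # from the telemetry stream
tolerance = {"detector_temp": 2.0, "bus_voltage": 0.5}

def isolate_faults(expected, observed, causes, tolerance):
    discrepant = [p for p in expected if abs(observed[p] - expected[p]) > tolerance[p]]
    if not discrepant:
        return set()
    candidates = set(causes[discrepant[0]])
    for point in discrepant[1:]:
        candidates &= set(causes[point])     # a fault must explain every discrepancy
    return candidates

print(isolate_faults(expected, observed, causes, tolerance))   # e.g. {'cooler', 'power_bus'}
```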

  20. A new multiplicative denoising variational model based on mth root transformation.

    PubMed

    Yun, Sangwoon; Woo, Hyenkyun

    2012-05-01

    In coherent imaging systems, such as the synthetic aperture radar (SAR), the observed images are contaminated by multiplicative noise. Due to the edge-preserving feature of the total variation (TV), variational models with TV regularization have attracted much interest in removing multiplicative noise. However, the fidelity term of the variational model, based on maximum a posteriori estimation, is not convex, and so it is usually difficult to find a global solution. Hence, the logarithmic function is used to transform the nonconvex variational model into a convex one. In this paper, instead of using the log, we exploit the mth root function to relax the nonconvexity of the variational model. An algorithm based on the augmented Lagrangian function, which has been applied to solve the log-transformed convex variational model, can be applied to solve our proposed model. However, this algorithm requires solving a subproblem, which does not have a closed-form solution, at each iteration. Hence, we propose to adapt the linearized proximal alternating minimization algorithm, which does not require inner iterations for solving the subproblems. In addition, the proposed method is very simple and highly parallelizable; thus, it is efficient for removing multiplicative noise in huge SAR images. The proposed model for multiplicative noise removal shows overall better performance than the convex model based on the log transformation. PMID:22287244
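
    For context, a schematic of the variational models involved: the standard nonconvex MAP fidelity for Gamma-distributed multiplicative noise and the classical log substitution that convexifies it. The paper's approach instead relaxes the nonconvexity through an mth-root substitution of the intensity variable; the expressions below are schematic and not the paper's exact functional.

```latex
% TV-regularized MAP model for multiplicative (Gamma) noise, observation f = u\,\eta:
\min_{u>0}\;\int_\Omega \Big(\log u + \frac{f}{u}\Big)\,dx \;+\; \lambda\,\mathrm{TV}(u)
% Classical log substitution w = \log u yields a convex fidelity term:
\min_{w}\;\int_\Omega \big(w + f\,e^{-w}\big)\,dx \;+\; \lambda\,\mathrm{TV}(w)
% The mth-root approach instead substitutes u = v^m (m > 1), relaxing the
% nonconvexity while staying closer to the original intensity domain.
```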

  1. Modeling the Ferrite-Austenite Transformation in the Heat-Affected Zone of Stainless Steel Welds

    SciTech Connect

    Vitek, J.M.; David, S.A.

    1997-12-01

    The diffusion-controlled ferrite-austenite transformation in stainless steel welds was modeled. An implicit finite-difference analysis that considers multi-component diffusion was used. The model was applied to the Fe-Cr-Ni system to investigate the ferrite-austenite transformation in the heat-affected zone of stainless steel weld metal. The transformation was followed as a function of time as the heat-affected zone was subjected to thermal cycles comparable to those experienced during gas-tungsten arc welding. The results showed that the transformation behavior and the final microstructural state are very sensitive to the maximum temperature that is experienced by the heat-affected zone. For high maximum exposure temperatures (approximately 1300 °C), the ferrite formation that occurs at the highest temperatures is not completely offset by the reverse ferrite dissolution at lower temperatures. As a result, for high temperature exposures there is a net increase in the amount of ferrite in the microstructure. It was also found that if compositional gradients are present in the initial ferrite and austenite phases, the extent of the transformation is impacted.

  2. A Faculty-Development Model for Transforming Introductory Biology and Ecology Courses

    ERIC Educational Resources Information Center

    D'Avanzo, Charlene; Anderson, Charles W.; Hartley, Laurel M.; Pelaez, Nancy

    2012-01-01

    The Diagnostic Question Cluster (DQC) project integrates education research and faculty development to articulate a model for the effective transformation of introductory biology and ecology teaching. Over three years, faculty members from a wide range of institutions used active teaching and DQCs, a type of concept inventory, as pre- and…

  3. Mellin transforming the minimal model CFTs: AdS/CFT at strong curvature

    NASA Astrophysics Data System (ADS)

    Lowe, David A.

    2016-09-01

    Mack has conjectured that all conformal field theories are equivalent to string theories. We explore the example of the two-dimensional minimal model CFTs and confirm that the Mellin transformed amplitudes have the desired properties of string theory in three-dimensional anti-de Sitter spacetime.
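
    For reference, the basic Mellin transform that underlies Mack's Mellin amplitudes; for CFT correlators the transform is applied with respect to the conformal cross-ratios, with conventions as defined in the paper.

```latex
% Mellin transform of a function f on (0, \infty):
\mathcal{M}[f](s) \;=\; \int_0^{\infty} x^{\,s-1}\, f(x)\, dx .
```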

  4. Educational Transformation in Upper-Division Physics: The Science Education Initiative Model, Outcomes, and Lessons Learned

    ERIC Educational Resources Information Center

    Chasteen, Stephanie V.; Wilcox, Bethany; Caballero, Marcos D.; Perkins, Katherine K.; Pollock, Steven J.; Wieman, Carl E.

    2015-01-01

    In response to the need for a scalable, institutionally supported model of educational change, the Science Education Initiative (SEI) was created as an experiment in transforming course materials and faculty practices at two institutions--University of Colorado Boulder (CU) and University of British Columbia. We find that this departmentally…

  5. School Counseling Leadership Team: A Statewide Collaborative Model to Transform School Counseling

    ERIC Educational Resources Information Center

    Kaffenberger, Carol J.; Murphy, Sally; Bemak, Fred

    2006-01-01

    The School Counseling Leadership Team (SCLT) is a model of a collaborative team formed to advocate for the transformed role of professional school counselors. The members of the SCLT included school district counseling supervisors, counselor educators, and leaders of statewide school counselor organizations. This article reviews the need for and…

  6. A Transformational Curriculum Model: A Wilderness Travel Adventure Dog Sledding in Temagami.

    ERIC Educational Resources Information Center

    Leckie, Linda

    1996-01-01

    Personal narrative links elements of a dog sledding trip with the transformational curriculum model as applied to outdoor education. Describes the physical, mental, and spiritual challenges of a seven-day winter camping and dog sledding trip, during which students learned responsibility through experience and natural consequences and realized the…

  7. On the Formal Componential Structure of the Transformational-Generative Model of Grammar.

    ERIC Educational Resources Information Center

    Brew, P. J.

    1970-01-01

    This paper examines the relationship that exists between the syntactic and phonological components of the transformational-generative model insofar as their formal structures are concerned. It is demonstrated that the number and importance of the structural similarities between the syntax and the phonology make it necessary to provide for them in…

  8. Chemical Transformation System: Cloud Based Cheminformatic Services to Support Integrated Environmental Modeling

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) systems that account for the fate/transport of organics frequently require physicochemical properties as well as transformation products. A myriad of chemical property databases exist but these can be difficult to access and often do not co...

  9. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
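
    A minimal sketch of the unscented transform described above: generate sigma points from the current state distribution, propagate each through a nonlinear map, and recover the approximate mean and covariance of the output. The scaling parameters and the toy end-of-life function are illustrative placeholders, not the paper's valve model.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f using the
    standard 2n+1 sigma-point construction; returns the approximate mean and
    covariance of f(x)."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)             # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    d = y - y_mean
    return y_mean, (wc[:, None] * d).T @ d

# Toy "end-of-life" map standing in for a full damage-propagation simulation.
eol = lambda x: np.array([100.0 / (x[0] + 0.1) + 5.0 * x[1] ** 2])
m, P = np.array([1.0, 0.2]), np.diag([0.04, 0.01])
print(unscented_transform(m, P, eol))
```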

  10. From PCK to TPACK: Developing a Transformative Model for Pre-Service Science Teachers

    ERIC Educational Resources Information Center

    Jang, Syh-Jong; Chen, Kuan-Chung

    2010-01-01

    New science teachers should be equipped with the ability to integrate and design the curriculum and technology for innovative teaching. How to integrate technology into pre-service science teachers' pedagogical content knowledge is the important issue. This study examined the impact on a transformative model of integrating technology and peer…

  11. Cultural Arts Education as Community Development: An Innovative Model of Healing and Transformation

    ERIC Educational Resources Information Center

    Archer-Cunningham, Kwayera

    2007-01-01

    This article discusses a three-tiered process of collective experiences of various artistic and cultural forms that fosters the healing and transformation of individuals, families, and communities of the African Diaspora. Ifetayo Cultural Arts, located in the Flatbush section of Brooklyn, espouses and practices a three-tiered model of community…

  12. The Federal Transformation Intervention Model in Persistently Lowest Achieving High Schools: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Le Patner, Michelle B.

    2012-01-01

    This study examined the American Recovery and Reinvestment Act federal mandate of the Transformation Intervention Model (TIM) outlined by the School Improvement Grant, which was designed to turn around persistently lowest achieving schools. The study was conducted in four high schools in a large Southern California urban district that selected the…

  13. The Efficacy of Ecological Macro-Models in Preservice Teacher Education: Transforming States of Mind

    ERIC Educational Resources Information Center

    Stibbards, Adam; Puk, Tom

    2011-01-01

    The present study aimed to describe and evaluate a transformative, embodied, emergent learning approach to acquiring ecological literacy through higher education. A class of teacher candidates in a bachelor of education program filled out a survey, which had them rate their level of agreement with 15 items related to ecological macro-models.…

  14. Enterprise-Wide Technological Transformation in Higher Education: The LASO Model

    ERIC Educational Resources Information Center

    Uys, Philip

    2007-01-01

    Purpose: This paper seeks to discuss the Leadership, Academic and Student Ownership and Readiness (LASO) model for enterprise-wide technological transformation in higher education developed by the writer as part of his PhD research. Design/methodology/approach: The article uses a comparative analysis of three case studies of the implementation of…

  15. Business Collaborations in Grids: The BREIN Architectural Principals and VO Model

    NASA Astrophysics Data System (ADS)

    Taylor, Steve; Surridge, Mike; Laria, Giuseppe; Ritrovato, Pierluigi; Schubert, Lutz

    We describe the business-oriented architectural principles of the EC FP7 project “BREIN” for service-based computing. The architecture is founded on principles of how real businesses interact to mutual benefit, and we show how these can be applied to SOA and Grid computing. We present building blocks that can be composed in many ways to produce different value systems and supply chains for the provision of computing services over the Internet. We also introduce the complementary BREIN VO concept, which is centred on, and managed by, a main contractor who bears the responsibility for the whole VO. The BREIN VO has an execution lifecycle for the creation and operation of the VO, and we have related this to an application-focused workflow involving steps that provide real end-user value. We show how this can be applied to an engineering simulation application and how the workflow can be adapted should the need arise.

  16. Studies of transformational leadership: evaluating two alternative models of trust and satisfaction.

    PubMed

    Yang, Yi-Feng

    2014-06-01

    This study evaluates the influence of leadership style and employee trust in their leaders on job satisfaction. 341 personnel (164 men, 177 women; M age = 33.5 yr., SD = 5.1) from four large insurance companies in Taiwan completed the transformational leadership behavior inventory, the leadership trust scale and a short version of the Minnesota (Job) Satisfaction Questionnaire. Bootstrapped mediation analysis and structural equation modeling revealed that the effect of transformational leadership on job satisfaction was mediated by leadership trust. This study highlights the importance of leadership trust in leadership-satisfaction relationships, and provides managers with practical ways to enhance job satisfaction. PMID:25074300

  17. Variational data assimilation schemes for transport and transformation models of atmospheric chemistry

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir; Tsvetova, Elena; Antokhin, Pavel

    2016-04-01

    The work is devoted to data assimilation algorithm for atmospheric chemistry transport and transformation models. In the work a control function is introduced into the model source term (emission rate) to provide flexibility to adjust to data. This function is evaluated as the constrained minimum of the target functional combining a control function norm with a norm of the misfit between measured data and its model-simulated analog. Transport and transformation processes model is acting as a constraint. The constrained minimization problem is solved with Euler-Lagrange variational principle [1] which allows reducing it to a system of direct, adjoint and control function estimate relations. This provides a physically-plausible structure of the resulting analysis without model error covariance matrices that are sought within conventional approaches to data assimilation. High dimensionality of the atmospheric chemistry models and a real-time mode of operation demand for computational efficiency of the data assimilation algorithms. Computational issues with complicated models can be solved by using a splitting technique. Within this approach a complex model is split to a set of relatively independent simpler models equipped with a coupling procedure. In a fine-grained approach data assimilation is carried out quasi-independently on the separate splitting stages with shared measurement data [2]. In integrated schemes data assimilation is carried out with respect to the split model as a whole. We compare the two approaches both theoretically and numerically. Data assimilation on the transport stage is carried out with a direct algorithm without iterations. Different algorithms to assimilate data on nonlinear transformation stage are compared. In the work we compare data assimilation results for both artificial and real measurement data. With these data we study the impact of transformation processes and data assimilation to the performance of the modeling system [3]. The
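
    A schematic of the type of constrained functional described above, with the transport-transformation model acting as a constraint and the control function entering the source term; the notation is illustrative rather than the authors' exact formulation.

```latex
% State \varphi, control (source correction) u, observation operator H,
% measured data d, model operator L and prescribed sources Q:
J(u,\varphi) \;=\; \alpha\,\|u\|^{2} \;+\; \|H\varphi - d\|^{2}
\qquad \text{subject to} \qquad
\frac{\partial \varphi}{\partial t} + L\varphi \;=\; Q + u .
% Stationarity of the associated Lagrangian yields the coupled system of
% direct, adjoint, and control-estimate relations referred to in the abstract.
```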

  18. A two-dimensional analytical model and experimental validation of garter stitch knitted shape memory alloy actuator architecture

    NASA Astrophysics Data System (ADS)

    Abel, Julianna; Luntz, Jonathan; Brei, Diann

    2012-08-01

    Active knits are a unique architectural approach to meeting emerging smart structure needs for distributed high strain actuation with simultaneous force generation. This paper presents an analytical state-based model for predicting the actuation response of a shape memory alloy (SMA) garter knit textile. Garter knits generate significant contraction against moderate to large loads when heated, due to the continuous interlocked network of loops of SMA wire. For this knit architecture, the states of operation are defined on the basis of the thermal and mechanical loading of the textile, the resulting phase change of the SMA, and the load path followed to that state. Transitions between these operational states induce either stick or slip frictional forces depending upon the state and path, which affect the actuation response. A load-extension model of the textile is derived for each operational state using elastica theory and Euler-Bernoulli beam bending for the large deformations within a loop of wire based on the stress-strain behavior of the SMA material. This provides kinematic and kinetic relations which scale to form analytical transcendental expressions for the net actuation motion against an external load. This model was validated experimentally for an SMA garter knit textile over a range of applied forces with good correlation for both the load-extension behavior in each state as well as the net motion produced during the actuation cycle (250% recoverable strain and over 50% actuation). The two-dimensional analytical model of the garter stitch active knit provides the ability to predict the kinetic actuation performance, providing the basis for the design and synthesis of large stroke, large force distributed actuators that employ this novel architecture.

  19. Joint transform correlator based on CIELAB model with encoding technique for color pattern recognition

    NASA Astrophysics Data System (ADS)

    Lin, Tiengsheng; Chen, Chulung; Liu, Chengyu; Chen, Yuming

    2010-10-01

    The CIELAB standard color vision model, instead of the traditional RGB color model, is utilized for polychromatic pattern recognition. An image encoding technique is introduced, and a joint transform correlator is used as the optical configuration. To achieve distortion invariance in the discrimination process, the minimum average correlation energy approach is used to yield sharp correlation peaks. The numerical results show that the recognition ability based on the CIELAB color specification system is acceptable.

  20. The Syrian hamster embryo (SHE) cell transformation system: a biologically relevant in vitro model--with carcinogen predicting capabilities--of in vivo multistage neoplastic transformation.

    PubMed

    Isfort, R J; LeBoeuf, R A

    1995-01-01

    Neoplastic transformation is a multistep process that can be modeled in vitro using Syrian hamster embryo (SHE) cells. SHE cell multistage transformation involves several intermediate stages, including morphological transformation, immortality, acquisition of tumorigenicity, and malignant progression. Analysis of the molecular alterations that occur at each stage indicated that morphological transformation results from both carcinogen-induced irreversible chromosomal/genetic mutations and reversible genetic events, including altered DNA methylation. Morphological transformation results from a block in the cellular differentiation of progenitor and determined stem-like cells in the SHE cell population via alterations in the expression of the H19 tumor suppressor gene and other genes. Immortality results from genetic mutations in growth factor responsiveness, including loss of growth suppression by TGF beta and autocrine growth factor production, and genomic stability, resulting in genomic instability and an increased mutation rate. Acquisition of tumorigenicity involves loss of tumor suppressor gene function, altered mitogenic signal transduction, mutation of oncogenes, acquisition of anchorage independent growth, and chromosomal aberrations. Malignant progression is associated with alterations in extracellular matrix growth characteristics, alterations in cytoskeleton structure, elevated fibrinolytic activity, secretion of proteases, and changes in extracellular matrix protein secretion. Together, these changes model the alterations observed during in vivo neoplastic transformation and possibly explain why the SHE assay, as a carcinogen screening tool, is able to identify carcinogens with 80 to 85% accuracy. PMID:9012585

  1. Numerical Modeling of Arsenic Mobility during Reductive Iron-Mineral Transformations.

    PubMed

    Rawson, Joey; Prommer, Henning; Siade, Adam; Carr, Jackson; Berg, Michael; Davis, James A; Fendorf, Scott

    2016-03-01

    Millions of individuals worldwide are chronically exposed to hazardous concentrations of arsenic from contaminated drinking water. Despite massive efforts toward understanding the extent and underlying geochemical processes of the problem, numerical modeling and reliable predictions of future arsenic behavior remain a significant challenge. One of the key knowledge gaps concerns a refined understanding of the mechanisms that underlie arsenic mobilization, particularly under the onset of anaerobic conditions, and the quantification of the factors that affect this process. In this study, we focus on the development and testing of appropriate conceptual and numerical model approaches to represent and quantify the reductive dissolution of iron oxides, the concomitant release of sorbed arsenic, and the role of iron-mineral transformations. The initial model development in this study was guided by data and hypothesized processes from a previously reported, well-controlled column experiment [1] in which arsenic desorption from ferrihydrite-coated sands by variable loads of organic carbon was investigated. Using the measured data as constraints, we provide a quantitative interpretation of the processes controlling arsenic mobility during the microbial reductive transformation of iron oxides. Our analysis suggests that the observed arsenic behavior is primarily controlled by a combination of reductive dissolution of ferrihydrite, arsenic incorporation into or co-precipitation with freshly transformed iron minerals, and partial arsenic redox transformations. PMID:26835553

  2. Green Architecture

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Ho

    Today, the environment has become a central subject in many scientific disciplines and in industrial development due to global warming. This paper presents an analysis of the trends in Green Architecture in France along three axes: Regulations and Approaches for Sustainable Architecture (Certificates and Standards), Renewable Materials (Green Materials), and Strategies (Equipment) of Sustainable Technology. The definition of 'Green Architecture' is cited in the introduction, and the question of interdisciplinarity in technological development for 'Green Architecture' is raised in the conclusion.

  3. Mature seed-derived callus of the model indica rice variety Kasalath is highly competent in Agrobacterium-mediated transformation.

    PubMed

    Saika, Hiroaki; Toki, Seiichi

    2010-12-01

    We previously established an efficient Agrobacterium-mediated transformation system using primary calli derived from mature seeds of the model japonica rice variety Nipponbare. We expected that the shortened tissue culture period would reduce callus browning--a common problem with the indica transformation system during prolonged tissue culture in the undifferentiated state. In this study, we successfully applied our efficient transformation system to Kasalath--a model variety of indica rice. The Luc reporter system is sensitive enough to allow quantitative analysis of the competency of rice callus for Agrobacterium-mediated transformation. We unexpectedly discovered that primary callus of Kasalath exhibits a remarkably high competency for Agrobacterium-mediated transformation compared to Nipponbare. Southern blot analysis and Luc luminescence showed that independent transformation events in primary callus of Kasalath occurred successfully at ca. tenfold higher frequency than in Nipponbare, and single copy T-DNA integration was observed in ~40% of these events. We also compared the competency of secondary callus of Nipponbare and Kasalath and again found superior competency in Kasalath, although the identification and subsequent observation of independent transformation events in secondary callus is difficult due to the vigorous growth of both transformed and non-transformed cells. An efficient transformation system in Kasalath could facilitate the identification of QTL genes, since many QTL genes are analyzed in a Nipponbare × Kasalath genetic background. The higher transformation competency of Kasalath could be a useful trait in the establishment of highly efficient systems involving new transformation technologies such as gene targeting. PMID:20853107

  4. Focus on connections for successful organizational transformation to model based engineering

    NASA Astrophysics Data System (ADS)

    Babineau, Guy L.

    2015-05-01

    Organizational Transformation to a Model Based Engineering Culture is a significant goal for Northrop Grumman Electronic Systems in order to achieve objectives of increased engineering performance. While organizational change is difficult, a focus on connections is creating success. Connections include model to model, program phase to program phase and organization to organization all through Model Based techniques. This presentation will address the techniques employed by Northrop Grumman to achieve these results as well as address continued focus and efforts. Model to model connections are very effective in automating implicit linkages between models for the purpose of ensuring consistency across a set of models and also for rapidly assessing impact of change. Program phase to phase connections are very important for reducing development time as well as reducing potential errors in moving from one program phase to another. Organization to organization communication is greatly facilitated using model based techniques to eliminate ambiguity and drive consistency and reuse.

  5. The Simulation Intranet Architecture

    SciTech Connect

    Holmes, V.P.; Linebarger, J.M.; Miller, D.J.; Vandewart, R.L.

    1998-12-02

    The Simulation Intranet (SI) is a term which is being used to describe one element of a multidisciplinary distributed and distance computing initiative known as DisCom2 at Sandia National Laboratory. The Simulation Intranet is an architecture for satisfying Sandia's long-term goal of providing an end-to-end set of services for high-fidelity, full-physics simulations in a high performance, distributed, and distance computing environment. The Intranet Architecture group was formed to apply current distributed object technologies to this problem. For the hardware architectures and software models involved with the current simulation process, a CORBA-based architecture is best suited to meet Sandia's needs. This paper presents the initial design and implementation of this Intranet based on a three-tier Network Computing Architecture (NCA). The major parts of the architecture include the Web Client, the Business Objects, and Data Persistence.

  6. Genetic transformation of Knufia petricola A95 - a model organism for biofilm-material interactions

    PubMed Central

    2014-01-01

    We established a protoplast-based system to transfer DNA to Knufia petricola strain A95, a melanised rock-inhabiting microcolonial fungus that is also a component of a model sub-aerial biofilm (SAB) system. To test whether the desiccation-resistant, highly melanised cell walls would hinder protoplast formation, we treated a melanin-minus mutant of A95 as well as the type-strain with a variety of cell-degrading enzymes. Of the different enzymes tested, lysing enzymes from Trichoderma harzianum were most effective in producing protoplasts. This mixture was equally effective on the melanin-minus mutant and the type-strain. Protoplasts produced using lysing enzymes were mixed with polyethyleneglycol (PEG) and plasmid pCB1004, which contains the hygromycin B (HmB) phosphotransferase (hph) gene under the control of the Aspergillus nidulans trpC promoter. Integration and expression of hph into the A95 genome conferred hygromycin resistance upon the transformants. Two weeks after plating out on selective agar containing HmB, the protoplasts developed cell walls and formed colonies. Transformation frequencies were in the range 36 to 87 transformants per 10 μg of vector DNA and 10⁶ protoplasts. Stability of transformation was confirmed by sub-culturing the putative transformants on selective agar containing HmB as well as by PCR-detection of the hph gene in the colonies. The hph gene was stably integrated as shown by five subsequent passages with and without selection pressure. PMID:25401079

  7. Seismo-thermo-mechanical modeling of mature and immature transform faults

    NASA Astrophysics Data System (ADS)

    Preuss, Simon; Gerya, Taras; van Dinther, Ylona

    2016-04-01

    Transform faults (TF) are subdivided into continental and oceanic ones due to their markedly different tectonic position, structure, surface expression, dynamics and seismicity. Both continental and oceanic TFs are zones of rheological weakness, which is a pre-requisite for their existence and long-term stability. Compared to subduction zones, TFs are typically characterized by smaller earthquake magnitudes as both their potential seismogenic width and length are reduced. However, a few very large magnitude (Mw>8) strike-slip events were documented, which are presumably related to the generation of new transform boundaries and/or sudden reactivation of pre-existing fossil structures. In particular, the 11 April 2012 Sumatra Mw 8.6 earthquake is challenging the general concept that such high magnitude events only occur at megathrusts. Hence, the processes of TF nucleation, propagation and their direct relation to the seismic cycle and long-term deformation at both oceanic and continental transforms needs to be investigated jointly to overcome the restricted direct observations in time and space. To gain fundamental understanding of involved physical processes the numerical seismo-thermo-mechanical (STM) modeling approach, validated in a subduction zone setting (Van Dinther et al. 2013), will be adapted for TFs. A simple 2D plane view model geometry using visco-elasto-plastic material behavior will be adopted. We will study and compare seismicity patterns and evolution in two end member TF setups, each with strain-dependent and rate-dependent brittle-plastic weakening processes: (1) A single weak and mature transform fault separating two strong plates (e.g., in between oceanic ridges) and (2) A nucleating or evolving (continental) TF system with disconnected predefined faults within a plate subjected to simple shear deformation (e.g., San Andreas Fault system). The modeling of TFs provides a first tool to establish the STM model approach for transform faults in a

  8. Modeling the cure kinetics of crosslinking free radical polymerizations using the Avrami theory of phase transformation

    SciTech Connect

    Finnegan, G.R.; Shine, A.D.

    1995-12-01

    A model, based on Avrami's theory of phase transformation, has been developed to describe the cure kinetics of crosslinking free radical polymerizations. The model assumes the growing polymer can be treated as a distinct phase and the nucleation rate is proportional to the initiation rate of the polymerization. The Avrami time exponent was verified to be 4.0. This physically-based, two-parameter model fits vinyl ester resin heat flow data as well as the empirical, four-parameter autocatalytic model, and is capable of describing both neat and fiber-containing resin.
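
    The Avrami (JMAK) expression underlying the model, for reference; a time exponent of 4 corresponds to a constant nucleation rate with three-dimensional growth, consistent with the value reported above.

```latex
% Avrami (JMAK) expression for the transformed (cured) fraction at time t:
X(t) \;=\; 1 - \exp\!\left(-k\,t^{\,n}\right),
% with n = 4 corresponding to a constant nucleation rate and three-dimensional
% growth, consistent with the exponent reported in the abstract.
```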

  9. Analysis of transformation plasticity in steel using a finite element method coupled with a phase field model.

    PubMed

    Cho, Yi-Gil; Kim, Jin-You; Cho, Hoon-Hwe; Cha, Pil-Ryung; Suh, Dong-Woo; Lee, Jae Kon; Han, Heung Nam

    2012-01-01

    An implicit finite element model was developed to analyze the deformation behavior of low carbon steel during phase transformation. The finite element model was coupled hierarchically with a phase field model that could simulate the kinetics and micro-structural evolution during the austenite-to-ferrite transformation of low carbon steel. Thermo-elastic-plastic constitutive equations for each phase were adopted to confirm the transformation plasticity due to the weaker phase yielding that was proposed by Greenwood and Johnson. From the simulations under various possible plastic properties of each phase, a more quantitative understanding of the origin of transformation plasticity was attempted by a comparison with the experimental observation. PMID:22558295

  10. Analysis of Transformation Plasticity in Steel Using a Finite Element Method Coupled with a Phase Field Model

    PubMed Central

    Cho, Yi-Gil; Kim, Jin-You; Cho, Hoon-Hwe; Cha, Pil-Ryung; Suh, Dong-Woo; Lee, Jae Kon; Han, Heung Nam

    2012-01-01

    An implicit finite element model was developed to analyze the deformation behavior of low carbon steel during phase transformation. The finite element model was coupled hierarchically with a phase field model that could simulate the kinetics and micro-structural evolution during the austenite-to-ferrite transformation of low carbon steel. Thermo-elastic-plastic constitutive equations for each phase were adopted to confirm the transformation plasticity due to the weaker phase yielding that was proposed by Greenwood and Johnson. From the simulations under various possible plastic properties of each phase, a more quantitative understanding of the origin of transformation plasticity was attempted by a comparison with the experimental observation. PMID:22558295

  11. On Using SysML, DoDAF 2.0 and UPDM to Model the Architecture for the NOAA's Joint Polar Satellite System (JPSS) Ground System (GS)

    NASA Technical Reports Server (NTRS)

    Hayden, Jeffrey L.; Jeffries, Alan

    2012-01-01

    The JPSS Ground System is a flexible system of systems responsible for telemetry, tracking & command (TT&C), data acquisition, routing and data processing services for a varied fleet of satellites to support weather prediction, modeling and climate modeling. To assist in this engineering effort, architecture modeling tools are being employed to translate the former NPOESS baseline to the new JPSS baseline. The paper will focus on the methodology for the system engineering process and the use of these architecture modeling tools within that process. The Department of Defense Architecture Framework version 2.0 (DoDAF 2.0) viewpoints and views that are being used to describe the JPSS GS architecture are discussed. The Unified Profile for DoDAF and MODAF (UPDM) and Systems Modeling Language (SysML), as provided by extensions to the MagicDraw UML modeling tool, are used to develop the diagrams and tables that make up the architecture model. The model development process and structure are discussed, examples are shown, and details of handling the complexities of a large System of Systems (SoS), such as the JPSS GS, with an equally complex modeling tool, are described.

  12. Architecture of Chinese Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Cui, Chen-Zhou; Zhao, Yong-Heng

    2004-06-01

    The Virtual Observatory (VO) has emerged against the background of advances in astronomical and information technologies, and VO architecture design embodies the combination of these two technologies. As an introduction to the VO, the principles and workflow of the Virtual Observatory are given first, followed by the latest progress on VO architecture. Based on Grid technology, a layered architecture model and a service-oriented architecture model are given for the Chinese Virtual Observatory. In the last part of the paper, some problems in architecture design are discussed in detail.

  13. A Modified Approach to Modeling of Diffusive Transformation Kinetics from Nonisothermal Data and Experimental Verification

    NASA Astrophysics Data System (ADS)

    Chen, Xiangjun; Xiao, Namin; Cai, Minghui; Li, Dianzhong; Li, Guangyao; Sun, Guangyong; Rolfe, Bernard F.

    2016-09-01

    An inverse model is proposed to construct the mathematical relationship between continuous cooling transformation (CCT) kinetics with constant rates and the isothermal one. The kinetic parameters in JMAK equations of isothermal kinetics can be deduced from the experimental CCT kinetics. Furthermore, a generalized model with a new additive rule is developed for predicting the kinetics of nucleation and growth during diffusional phase transformation with arbitrary cooling paths based only on the CCT curve. A generalized contribution coefficient is introduced into the new additivity rule to describe the influences of current temperature and cooling rate on the incubation time of nuclei. Finally, the reliability of the proposed model is validated using dilatometry experiments of a microalloy steel with fully bainitic microstructure based on various cooling routes.
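
    For context, the isothermal JMAK kinetics and the classical Scheil additivity rule that the generalized contribution coefficient extends; the schematic below is the textbook form, not the paper's modified rule.

```latex
% Isothermal JMAK kinetics with temperature-dependent parameters:
X(t,T) \;=\; 1 - \exp\!\left[-k(T)\,t^{\,n(T)}\right].
% Classical Scheil additivity rule for incubation under an arbitrary cooling
% path T(t): transformation starts at the time t_s satisfying
\int_0^{t_s} \frac{dt}{\tau\!\left(T(t)\right)} \;=\; 1,
% where \tau(T) is the isothermal incubation time; the paper weights this
% integral with a contribution coefficient depending on temperature and cooling rate.
```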

  14. A Modified Approach to Modeling of Diffusive Transformation Kinetics from Nonisothermal Data and Experimental Verification

    NASA Astrophysics Data System (ADS)

    Chen, Xiangjun; Xiao, Namin; Cai, Minghui; Li, Dianzhong; Li, Guangyao; Sun, Guangyong; Rolfe, Bernard F.

    2016-06-01

    An inverse model is proposed to construct the mathematical relationship between continuous cooling transformation (CCT) kinetics with constant rates and the isothermal one. The kinetic parameters in JMAK equations of isothermal kinetics can be deduced from the experimental CCT kinetics. Furthermore, a generalized model with a new additive rule is developed for predicting the kinetics of nucleation and growth during diffusional phase transformation with arbitrary cooling paths based only on the CCT curve. A generalized contribution coefficient is introduced into the new additivity rule to describe the influences of current temperature and cooling rate on the incubation time of nuclei. Finally, the reliability of the proposed model is validated using dilatometry experiments of a microalloy steel with fully bainitic microstructure based on various cooling routes.

  15. Multiphase Resistivity Model for Magnetic Nanocomposites Developed for High Frequency, High Power Transformation

    SciTech Connect

    DeGeorge, V; Shen, S; Ohodnicki, P; Andio, M; Mchenry, ME

    2013-12-05

    New power conversion systems that offer promise to transform electricity grids into unified interactive supply networks require high-resistivity soft-magnetic materials to allow for switching of magnetic materials at frequencies approaching 100 kHz for power transformation in the megawatt range. Amorphous and nanocomposite soft-magnetic materials, which represent the state of the art in terms of high power densities and low losses at high frequencies, have resistivities that depend on the structures and spatial distributions of multiple phases in thin ribbons. We present a multiphase resistivity model applicable to nanocomposite materials by considering an equivalent circuit approach considering paths through an amorphous, crystalline, and growth inhibitor shell phase. We detail: (a) identification of amorphous, crystalline, and shell phases; (b) consideration of the role of the morphology of each phase in an equivalent circuit model for the resistance; (c) a two-band model for the Fe/Co composition dependence of the resistivity in crystalline and amorphous phases; (d) a virtual bound state model for resistivity to explain increased resistivity due to early transition-metal growth inhibitors in the shell surrounding the nanocrystalline phase; and (e) disorder effects on amorphous phase resistivity. Experimental design and results for systems of interest in high-frequency power transformation are discussed in the context of our model including: (a) techniques for measurements of cross-section and density, (b) four-point probe and surface resistivity measurements, and (c) measurements in Fe- and Co-rich systems comparing amorphous and nanocomposite materials.
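
    As a rough illustration of the equivalent-circuit idea, the sketch below combines the phases as parallel conduction paths weighted by volume fraction. The simple parallel mixing rule and all numerical values are placeholders, not the calibrated multiphase model of the paper.

```python
# Treat the amorphous matrix, nanocrystalline grains, and growth-inhibitor
# shell as parallel conduction paths: 1/rho_eff = sum_i f_i / rho_i.
def parallel_resistivity(fractions, resistivities):
    """Effective resistivity of parallel paths weighted by volume fraction."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(f / r for f, r in zip(fractions, resistivities))

fractions = [0.55, 0.40, 0.05]            # amorphous, nanocrystalline, shell (hypothetical)
resistivities = [1.4e-6, 0.5e-6, 2.5e-6]  # ohm*m (hypothetical)
print(f"rho_eff = {parallel_resistivity(fractions, resistivities):.2e} ohm*m")
```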

  16. Multiphase Resistivity Model for Magnetic Nanocomposites Developed for High Frequency, High Power Transformation

    NASA Astrophysics Data System (ADS)

    DeGeorge, V.; Shen, S.; Ohodnicki, P.; Andio, M.; McHenry, M. E.

    2014-01-01

    New power conversion systems that offer promise to transform electricity grids into unified interactive supply networks require high-resistivity soft-magnetic materials to allow for switching of magnetic materials at frequencies approaching 100 kHz for power transformation in the megawatt range. Amorphous and nanocomposite soft-magnetic materials, which represent the state of the art in terms of high power densities and low losses at high frequencies, have resistivities that depend on the structures and spatial distributions of multiple phases in thin ribbons. We present a multiphase resistivity model applicable to nanocomposite materials by considering an equivalent circuit approach considering paths through an amorphous, crystalline, and growth inhibitor shell phase. We detail: (a) identification of amorphous, crystalline, and shell phases; (b) consideration of the role of the morphology of each phase in an equivalent circuit model for the resistance; (c) a two-band model for the Fe/Co composition dependence of the resistivity in crystalline and amorphous phases; (d) a virtual bound state model for resistivity to explain increased resistivity due to early transition-metal growth inhibitors in the shell surrounding the nanocrystalline phase; and (e) disorder effects on amorphous phase resistivity. Experimental design and results for systems of interest in high-frequency power transformation are discussed in the context of our model including: (a) techniques for measurements of cross-section and density, (b) four-point probe and surface resistivity measurements, and (c) measurements in Fe- and Co-rich systems comparing amorphous and nanocomposite materials.

  17. Ac loss modelling and measurement of superconducting transformers with coated-conductor Roebel-cable in low-voltage winding

    NASA Astrophysics Data System (ADS)

    Pardo, Enric; Staines, Mike; Jiang, Zhenan; Glasson, Neil

    2015-11-01

    Power transformers using a high temperature superconductor (HTS) ReBCO coated conductor and liquid nitrogen dielectric have many potential advantages over conventional transformers. The ac loss in the windings complicates the cryogenics and reduces the efficiency, and hence it needs to be predicted in its design, usually by numerical calculations. This article presents detailed modelling of superconducting transformers with Roebel cable in the low-voltage (LV) winding and a high-voltage (HV) winding with more than 1000 turns. First, we model a 1 MVA 11 kV/415 V 3-phase transformer. The Roebel cable solenoid forming the LV winding is also analyzed as a stand-alone coil. Agreement between calculations and experiments of the 1 MVA transformer supports the model validity for a larger tentative 40 MVA 110 kV/11 kV 3-phase transformer design. We found that the ac loss in each winding is much lower when it is inserted in the transformer than as a stand-alone coil. The ac loss in the 1 and 40 MVA transformers is dominated by the LV and HV windings, respectively. Finally, the ratio of total loss over rated power of the 40 MVA transformer is reduced below 40% of that of the 1 MVA transformer. In conclusion, the modelling tool in this work can reliably predict the ac loss in real power applications.

  18. Structural model of the dimeric Parkinson’s protein LRRK2 reveals a compact architecture involving distant interdomain contacts

    PubMed Central

    Guaitoli, Giambattista; Raimondi, Francesco; Gilsbach, Bernd K.; Gómez-Llorente, Yacob; Deyaert, Egon; Renzi, Fabiana; Li, Xianting; Schaffner, Adam; Jagtap, Pravin Kumar Ankush; Boldt, Karsten; von Zweydorf, Felix; Gotthardt, Katja; Lorimer, Donald D.; Yue, Zhenyu; Burgin, Alex; Janjic, Nebojsa; Sattler, Michael; Versées, Wim; Ueffing, Marius; Ubarretxena-Belandia, Iban; Kortholt, Arjan; Gloeckner, Christian Johannes

    2016-01-01

    Leucine-rich repeat kinase 2 (LRRK2) is a large, multidomain protein containing two catalytic domains: a Ras of complex proteins (Roc) G-domain and a kinase domain. Mutations associated with familial and sporadic Parkinson’s disease (PD) have been identified in both catalytic domains, as well as in several of its multiple putative regulatory domains. Several of these mutations have been linked to increased kinase activity. Despite the role of LRRK2 in the pathogenesis of PD, little is known about its overall architecture and how PD-linked mutations alter its function and enzymatic activities. Here, we have modeled the 3D structure of dimeric, full-length LRRK2 by combining domain-based homology models with multiple experimental constraints provided by chemical cross-linking combined with mass spectrometry, negative-stain EM, and small-angle X-ray scattering. Our model reveals dimeric LRRK2 has a compact overall architecture with a tight, multidomain organization. Close contacts between the N-terminal ankyrin and C-terminal WD40 domains, and their proximity—together with the LRR domain—to the kinase domain suggest an intramolecular mechanism for LRRK2 kinase activity regulation. Overall, our studies provide, to our knowledge, the first structural framework for understanding the role of the different domains of full-length LRRK2 in the pathogenesis of PD. PMID:27357661

  19. Structural model of the dimeric Parkinson's protein LRRK2 reveals a compact architecture involving distant interdomain contacts.

    PubMed

    Guaitoli, Giambattista; Raimondi, Francesco; Gilsbach, Bernd K; Gómez-Llorente, Yacob; Deyaert, Egon; Renzi, Fabiana; Li, Xianting; Schaffner, Adam; Jagtap, Pravin Kumar Ankush; Boldt, Karsten; von Zweydorf, Felix; Gotthardt, Katja; Lorimer, Donald D; Yue, Zhenyu; Burgin, Alex; Janjic, Nebojsa; Sattler, Michael; Versées, Wim; Ueffing, Marius; Ubarretxena-Belandia, Iban; Kortholt, Arjan; Gloeckner, Christian Johannes

    2016-07-26

    Leucine-rich repeat kinase 2 (LRRK2) is a large, multidomain protein containing two catalytic domains: a Ras of complex proteins (Roc) G-domain and a kinase domain. Mutations associated with familial and sporadic Parkinson's disease (PD) have been identified in both catalytic domains, as well as in several of its multiple putative regulatory domains. Several of these mutations have been linked to increased kinase activity. Despite the role of LRRK2 in the pathogenesis of PD, little is known about its overall architecture and how PD-linked mutations alter its function and enzymatic activities. Here, we have modeled the 3D structure of dimeric, full-length LRRK2 by combining domain-based homology models with multiple experimental constraints provided by chemical cross-linking combined with mass spectrometry, negative-stain EM, and small-angle X-ray scattering. Our model reveals dimeric LRRK2 has a compact overall architecture with a tight, multidomain organization. Close contacts between the N-terminal ankyrin and C-terminal WD40 domains, and their proximity-together with the LRR domain-to the kinase domain suggest an intramolecular mechanism for LRRK2 kinase activity regulation. Overall, our studies provide, to our knowledge, the first structural framework for understanding the role of the different domains of full-length LRRK2 in the pathogenesis of PD. PMID:27357661

  20. Monthly river flow forecasting using artificial neural network and support vector regression models coupled with wavelet transform

    NASA Astrophysics Data System (ADS)

    Kalteh, Aman Mohammad

    2013-04-01

    Reliable and accurate forecasts of river flow are needed in many water resources planning, design, development, operation and maintenance activities. In this study, the relative accuracy of artificial neural network (ANN) and support vector regression (SVR) models coupled with the wavelet transform in monthly river flow forecasting is investigated and compared to regular ANN and SVR models, respectively. The relative performance of regular ANN and SVR models is also compared. For this, monthly river flow data of the Kharjegil and Ponel stations in Northern Iran are used. The comparison of the results reveals that both ANN and SVR models coupled with the wavelet transform are able to provide more accurate forecasting results than the regular ANN and SVR models. However, SVR models coupled with the wavelet transform provide better forecasting results than ANN models coupled with the wavelet transform. The results also indicate that regular SVR models perform slightly better than regular ANN models.
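
    The sketch below illustrates, on synthetic data, the wavelet-coupled SVR scheme described above: decompose the monthly series into approximation and detail components, use lagged component values as predictors, and fit a support vector regression model. The wavelet, number of lags, and SVR hyperparameters are generic choices, not those of the study. Note that reconstructing the components over the whole record, as done here for brevity, lets future values leak into the decomposition; an operational forecast would decompose only the data available at forecast time.

```python
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(360)                                        # 30 years of monthly flow
flow = 50 + 20 * np.sin(2 * np.pi * t / 12) + 5 * rng.standard_normal(t.size)

# Decompose the series and reconstruct each component at full length.
coeffs = pywt.wavedec(flow, "db4", level=2)
components = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, "db4")[: flow.size])

# Lagged component values at months t-3..t-1 predict the flow at month t.
lags = 3
X = np.column_stack([comp[lag: flow.size - lags + lag]
                     for comp in components for lag in range(lags)])
y = flow[lags:]

split = 300 - lags                                        # train on the first 25 years
model = SVR(C=10.0, epsilon=0.5).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"test RMSE: {np.sqrt(np.mean((pred - y[split:]) ** 2)):.2f}")
```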

  1. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
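
    A compact sketch of the DFT-IDFT overlap-and-save method on which these architectures are built, without the sub-filter partitioning or VLSI-specific details; the block length and example filter are arbitrary.

```python
import numpy as np

def overlap_save(x, h, nfft=256):
    """FFT-based FIR filtering by overlap-save; equivalent to
    np.convolve(x, h)[:len(x)] for an nfft-point DFT/IDFT pair."""
    m = len(h)
    step = nfft - (m - 1)                        # new samples consumed per block
    H = np.fft.rfft(h, nfft)
    xp = np.concatenate([np.zeros(m - 1), x])    # prepend zero history
    y = np.empty(0)
    for start in range(0, len(x), step):
        block = xp[start: start + nfft]
        if len(block) < nfft:
            block = np.pad(block, (0, nfft - len(block)))
        yb = np.fft.irfft(np.fft.rfft(block) * H, nfft)
        y = np.concatenate([y, yb[m - 1:]])      # discard the aliased prefix
    return y[: len(x)]

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = np.ones(32) / 32                             # 32-tap moving-average filter
assert np.allclose(overlap_save(x, h), np.convolve(x, h)[: len(x)])
```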

  2. Data Warehouse Design from HL7 Clinical Document Architecture Schema.

    PubMed

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2015-01-01

    This paper proposes a semi-automatic approach to extract clinical information structured in an HL7 Clinical Document Architecture (CDA) document and transform it into a data warehouse dimensional model schema. It is based on a conceptual framework published in a previous work that maps the dimensional model primitives to CDA elements. Its feasibility is demonstrated through a case study based on the analysis of vital signs gathered during laboratory tests. PMID:26152975
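
    A simplified sketch of the kind of extraction step such an approach implies: pull coded observation entries out of a CDA XML document and flatten them into rows suitable for loading into a fact table. The element paths and row layout are simplifying assumptions, not the paper's mapping rules; only the CDA R2 namespace is standard.

```python
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}                     # standard CDA R2 namespace

def cda_observations_to_rows(cda_xml: str):
    """Flatten coded observation entries of a CDA document into fact-table rows."""
    root = ET.fromstring(cda_xml)
    rows = []
    for obs in root.iter("{urn:hl7-org:v3}observation"):
        code = obs.find("hl7:code", NS)
        value = obs.find("hl7:value", NS)
        time = obs.find("hl7:effectiveTime", NS)
        if code is None or value is None:
            continue
        rows.append({
            "measure_code": code.get("code"),      # dimension: what was measured
            "value": value.get("value"),           # fact: the numeric measure
            "unit": value.get("unit"),
            "observed_at": time.get("value") if time is not None else None,
        })
    return rows
```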

  3. Brachypodium sylvaticum, a Model for Perennial Grasses: Transformation and Inbred Line Development

    PubMed Central

    Steinwand, Michael A.; Young, Hugh A.; Bragg, Jennifer N.; Tobias, Christian M.; Vogel, John P.

    2013-01-01

    Perennial species offer significant advantages as crops including reduced soil erosion, lower energy inputs after the first year, deeper root systems that access more soil moisture, and decreased fertilizer inputs due to the remobilization of nutrients at the end of the growing season. These advantages are particularly relevant for emerging biomass crops and it is projected that perennial grasses will be among the most important dedicated biomass crops. The advantages offered by perennial crops could also prove favorable for incorporation into annual grain crops like wheat, rice, sorghum and barley, especially under the drier and more variable climate conditions projected for many grain-producing regions. Thus, it would be useful to have a perennial model system to test biotechnological approaches to crop improvement and for fundamental research. The perennial grass Brachypodium sylvaticum is a candidate for such a model because it is diploid, has a small genome, is self-fertile, has a modest stature, and short generation time. Its close relationship to the annual model Brachypodium distachyon will facilitate comparative studies and allow researchers to leverage the resources developed for B. distachyon. Here we report on the development of two keystone resources that are essential for a model plant: high-efficiency transformation and inbred lines. Using Agrobacterium tumefaciens-mediated transformation we achieved an average transformation efficiency of 67%. We also surveyed the genetic diversity of 19 accessions from the National Plant Germplasm System using SSR markers and created 15 inbred lines. PMID:24073248

  4. Modeling the transformation stress of constrained shape memory alloy single crystals

    SciTech Connect

    Comstock, R.J. Jr.; Buchheit, T.E.; Somerday, M.; Wert, J.A.

    1996-09-01

    Shape memory alloys (SMA) are a unique class of engineering materials that can be further exploited with accurate polycrystal constitutive models. Previous investigators have modeled stress-induced martensite formation in unconstrained single crystals. Understanding stress-induced martensite formation in constrained single crystals is the next step towards the development of a constitutive model for textured polycrystalline SMA. Such models have been previously developed for imposition of axisymmetric strain on a polycrystal with random crystal orientation; the present paper expands the constrained single crystal SMA model to encompass arbitrary imposed strains. To evaluate the model, axisymmetric tension and compression strains and pure shear strain are imposed on three SMA: NiTi, Cu-Al-Ni (β₁ → γ′₁) and Ni-Al. Model results are then used to understand the anisotropy and asymmetry of transformation stress in the three SMA considered. Finally, the impact of the present results on polycrystal behavior is addressed.

  5. Current transformer model with hysteresis for improving the protection response in electrical transmission systems

    NASA Astrophysics Data System (ADS)

    Matussek, Robert; Dzienis, Cezary; Blumschein, Jörg; Schulte, Horst

    2014-12-01

    In this paper, a generic enhanced protection current transformer (CT) model with saturation effects and transient behavior is presented. The model is used for the analysis and design of power system protection algorithms. Three major classes of protection CT have been modeled, all of which take into account the nonlinear inductance with remanence effects. The transient short-circuit currents in power systems are simulated under CT saturation conditions. The response of a common power system protection algorithm, with respect to robustness to nominal parameter variations and sensitivity to maloperation, is demonstrated by simulation studies.
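
    The record above describes a protection CT with a saturable magnetizing branch. Below is a minimal sketch of that idea, assuming a purely resistive burden, a simple odd-power saturation curve, and illustrative parameter values; it is not the generic model of the paper.

    ```python
    import numpy as np

    # Minimal saturable current-transformer sketch (all parameter values are illustrative).
    N       = 240.0         # turns ratio (e.g. 1200 A / 5 A)
    R_b     = 4.0           # burden resistance [ohm]
    R_s     = 0.5           # secondary winding resistance [ohm]
    L_m     = 50.0          # unsaturated magnetizing inductance [H]
    lam_sat = 1.2           # flux linkage where saturation sets in [V*s]

    def i_mag(lam):
        """Magnetizing current: linear branch plus a steep odd-power saturation term."""
        return lam / L_m + 0.5 * (lam / lam_sat) ** 9

    f, dt = 50.0, 1e-5
    t = np.arange(0.0, 0.2, dt)
    tau_dc = 0.05
    i1 = 20e3 * (np.cos(2 * np.pi * f * t) - np.exp(-t / tau_dc))   # fully offset fault current

    lam, i2 = 0.0, np.zeros_like(t)
    for k in range(t.size):
        i2[k] = i1[k] / N - i_mag(lam)       # ideal ratio current minus magnetizing current
        lam += (R_s + R_b) * i2[k] * dt      # d(lambda)/dt equals the secondary loop voltage

    # i2 now shows the clipped, distorted secondary waveform a protection algorithm must cope with.
    ```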

  6. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform.

    PubMed

    Wu, Hau-Tieng; Wu, Han-Kuei; Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong

    2016-01-01

    We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features. PMID:27304979
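
    As a rough illustration of the adaptive non-harmonic model referenced above, the sketch below synthesizes a pulse-like signal as a slowly varying amplitude times a 1-periodic, non-sinusoidal wave-shape function evaluated at a slowly varying phase. The SST decomposition itself is not reproduced, and every parameter value is invented.

    ```python
    import numpy as np

    # Sketch of the adaptive non-harmonic model: s(t) = A(t) * shape(phi(t)),
    # where 'shape' is a 1-periodic wave-shape function (here a crude pulse-like shape).
    fs = 500.0                          # sampling rate [Hz] (assumed)
    t  = np.arange(0.0, 30.0, 1.0 / fs)

    def wave_shape(u):
        """1-periodic, non-sinusoidal wave-shape built from a few harmonics."""
        return (np.cos(2 * np.pi * u)
                + 0.6 * np.cos(4 * np.pi * u - 0.8)
                + 0.3 * np.cos(6 * np.pi * u - 1.9))

    A      = 1.0 + 0.1 * np.sin(2 * np.pi * 0.05 * t)      # slowly varying amplitude
    inst_f = 1.1 + 0.1 * np.sin(2 * np.pi * 0.03 * t)      # slowly varying "heart rate" [Hz]
    phi    = np.cumsum(inst_f) / fs                         # phase = integral of instantaneous frequency
    s      = A * wave_shape(phi) + 0.05 * np.random.randn(t.size)

    # 's' is the kind of oscillatory signal the SST is then used to decompose; the spectral
    # pulse signature would be extracted from the time-frequency representation of such a signal.
    ```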

  7. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform

    PubMed Central

    Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong

    2016-01-01

    We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features. PMID:27304979

  8. Computational Nanophotonics: Model Optical Interactions and Transport in Tailored Nanosystem Architectures

    SciTech Connect

    Stockman, Mark; Gray, Steven

    2014-02-21

    The program is directed toward development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on the development of computational methods and prediction and computational theory of new phenomena of optical energy transfer and transformation on the extreme nanoscale (down to a few nanometers).

  9. Fictitious Reference Iterative Tuning for Non-Minimum Phase Systems in the IMC Architecture: Simultaneous Attainment of Controllers and Models

    NASA Astrophysics Data System (ADS)

    Kaneko, Osamu; Nguyen, Hien Thi; Wadagaki, Yusuke; Yamamoto, Shigeru

    This paper provides a practical and meaningful application of controller parameter tuning. Here, we propose a simultaneous attainment of a desired controller and a mathematical model of a plant by utilizing the fictitious reference iterative tuning (FRIT), which is a useful method of controller parameter tuning with only one-shot experimental data, in the internal model control (IMC) architecture. Particularly, this paper focuses on systems with unstable zeros which cannot be eliminated in many applications. We explain how the utilization of the FRIT is effective for obtaining not only the desired control parameter values but also an appropriate mathematical model of the plant. In order to show the effectiveness and the validity of the proposed method, we give illustrative examples.

  10. Logical-Rule Models of Classification Response Times: A Synthesis of Mental-Architecture, Random-Walk, and Decision-Bound Approaches

    ERIC Educational Resources Information Center

    Fific, Mario; Little, Daniel R.; Nosofsky, Robert M.

    2010-01-01

    We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli…

  11. A biologically inspired neural network model to transformation invariant object recognition

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Li, Yaqin; Siddiqui, Faraz

    2007-09-01

    Transformation invariant image recognition has been an active research area due to its widespread applications in a variety of fields such as military operations, robotics, medical practices, geographic scene analysis, and many others. The primary goal for this research is detection of objects in the presence of image transformations such as changes in resolution, rotation, translation, scale and occlusion. We investigate a biologically-inspired neural network (NN) model for such transformation-invariant object recognition. In a classical training-testing setup for NN, the performance is largely dependent on the range of transformation or orientation involved in training. However, an even more serious dilemma is that there may not be enough training data available for successful learning or even no training data at all. To alleviate this problem, a biologically inspired reinforcement learning (RL) approach is proposed. In this paper, the RL approach is explored for object recognition with different types of transformations such as changes in scale, size, resolution and rotation. The RL is implemented in an adaptive critic design (ACD) framework, which approximates neuro-dynamic programming using an action network and a critic network. Two ACD algorithms, Heuristic Dynamic Programming (HDP) and Dual Heuristic Dynamic Programming (DHP), are investigated to obtain transformation invariant object recognition. The two learning algorithms are evaluated statistically using simulated transformations in images as well as with a large-scale UMIST face database with pose variations. In the face database authentication case, the 90° out-of-plane rotation of faces from 20 different subjects in the UMIST database is used. Our simulations show promising results for both designs for transformation-invariant object recognition and authentication of faces. Comparing the two algorithms, DHP outperforms HDP in learning capability, as DHP takes fewer steps to

  12. Transformer modeling for low- and mid-frequency electromagnetic transients simulation

    NASA Astrophysics Data System (ADS)

    Lambert, Mathieu

    In this work, new models are developed for single-phase and three-phase shell-type transformers for the simulation of low-frequency transients, with the use of the coupled leakage model. This approach has the advantage that it avoids the use of fictitious windings to connect the leakage model to a topological core model, while giving the same response in short-circuit as the indefinite admittance matrix (BCTRAN) model. To further increase the model sophistication, it is proposed to divide windings into coils in the new models. However, short-circuit measurements between coils are never available. Therefore, a novel analytical method is elaborated for this purpose, which allows the calculation in 2-D of short-circuit inductances between coils of rectangular cross-section. The results of this new method are in agreement with the results obtained from the finite element method in 2-D. Furthermore, the assumption that the leakage field is approximately 2-D in shell-type transformers is validated with a 3-D simulation. The outcome of this method is used to calculate the self and mutual inductances between the coils of the coupled leakage model, and the results show good correspondence with terminal short-circuit measurements. Typically, leakage inductances in transformers are calculated from short-circuit measurements and the magnetizing branch is calculated from no-load measurements, assuming that leakages are unimportant for the unloaded transformer and that magnetizing current is negligible during a short-circuit. While the core is assumed to have infinite permeability to calculate short-circuit inductances, which is a reasonable assumption since the core's magnetomotive force is negligible during a short-circuit, the same reasoning does not necessarily hold true for leakage fluxes in no-load conditions. This is because the core starts to saturate when the transformer is unloaded. To take this into account, a new analytical method is developed in this

  13. The CMIP5 archive architecture: A system for petabyte-scale distributed archival of climate model data

    NASA Astrophysics Data System (ADS)

    Pascoe, Stephen; Cinquini, Luca; Lawrence, Bryan

    2010-05-01

    The Phase 5 Coupled Model Intercomparison Project (CMIP5) will produce a petabyte-scale archive of climate data relevant to future international assessments of climate science (e.g., the IPCC's 5th Assessment Report scheduled for publication in 2013). The infrastructure for the CMIP5 archive must meet many challenges to support this ambitious international project. We describe here the distributed software architecture being deployed worldwide to meet these challenges. The CMIP5 architecture extends the Earth System Grid (ESG) distributed architecture of Datanodes, providing data access and visualisation services, and Gateways, providing the user interface including registration, search and browse services. Additional features developed for CMIP5 include a publication workflow incorporating quality control and metadata submission, data replication, version control, update notification and production of citable metadata records. Implementation of these features has been driven by the requirements of reliable global access to over 1 PB of data and consistent citability of data and metadata. Central to the implementation is the concept of Atomic Datasets that are identifiable through a Data Reference Syntax (DRS). Atomic Datasets are immutable to allow them to be replicated and tracked whilst maintaining data consistency. However, since occasional errors in data production and processing are inevitable, new versions can be published and users notified of these updates. As deprecated datasets may be the target of existing citations they can remain visible in the system. Replication of Atomic Datasets is designed to improve regional access and provide fault tolerance. Several datanodes in the system are designated replicating nodes and hold replicas of a portion of the archive expected to be of broad interest to the community. Gateways provide a system-wide interface to users where they can track the version history and location of replicas to select the most appropriate
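
    To make the Data Reference Syntax idea concrete, the sketch below parses a dot-separated, DRS-like dataset identifier into named components. The component list and the example identifier are assumptions for illustration only; the authoritative component order is defined in the CMIP5 DRS specification.

    ```python
    # Hypothetical illustration only: the real CMIP5 Data Reference Syntax is defined in the
    # DRS specification; the component names below are assumptions made for this sketch.
    DRS_COMPONENTS = [
        "activity", "product", "institute", "model",
        "experiment", "frequency", "realm", "variable", "ensemble",
    ]

    def parse_drs(dataset_id: str) -> dict:
        """Split a dot-separated DRS-like dataset identifier into named components."""
        parts = dataset_id.split(".")
        if len(parts) != len(DRS_COMPONENTS):
            raise ValueError(f"expected {len(DRS_COMPONENTS)} components, got {len(parts)}")
        return dict(zip(DRS_COMPONENTS, parts))

    # Example (identifier values invented for illustration):
    record = parse_drs("cmip5.output1.SOME-INSTITUTE.some-model.historical.mon.atmos.tas.r1i1p1")
    print(record["experiment"], record["variable"])   # -> historical tas
    ```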

  14. A 3-D constitutive model for pressure-dependent phase transformation of porous shape memory alloys.

    PubMed

    Ashrafi, M J; Arghavani, J; Naghdabadi, R; Sohrabpour, S

    2015-02-01

    Porous shape memory alloys (SMAs) exhibit the interesting characteristics of porous metals together with the shape memory effect and pseudo-elasticity of SMAs that make them appropriate for biomedical applications. In this paper, a 3-D phenomenological constitutive model for the pseudo-elastic behavior and shape memory effect of porous SMAs is developed within the framework of irreversible thermodynamics. Compared to micromechanical and computational models, the proposed model is computationally cost-effective and predicts the behavior of porous SMAs under proportional and non-proportional multiaxial loadings. Considering the pressure dependency of phase transformation in porous SMAs, proper internal variables, free energy and limit functions are introduced. With the aim of numerical implementation, time discretization and a solution algorithm for the proposed model are also presented. Due to the lack of experimental data on multiaxial loadings of porous SMAs, we employ a computational simulation method (CSM) together with available experimental data to validate the proposed constitutive model. The method is based on a 3-D finite element model of a representative volume element (RVE) with a random pore pattern. Good agreement between the numerical predictions of the model and CSM results is observed for elastic and phase transformation behaviors in various thermomechanical loadings. PMID:25528691

  15. Project Integration Architecture: Architectural Overview

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2001-01-01

    The Project Integration Architecture (PIA) implements a flexible, object-oriented, wrapping architecture which encapsulates all of the information associated with engineering applications. The architecture allows the progress of a project to be tracked and documented in its entirety. By being a single, self-revealing architecture, the ability to develop single tools, for example a single graphical user interface, to span all applications is enabled. Additionally, by bringing all of the information sources and sinks of a project into a single architectural space, the ability to transport information between those applications becomes possible. Object-encapsulation further allows information to become, in a sense, self-aware, knowing things such as its own dimensionality and providing functionality appropriate to its kind.

  16. Phase-field modeling of the beta to omega phase transformation in Zr–Nb alloys

    SciTech Connect

    Yeddu, Hemantha Kumar; Lookman, Turab

    2015-05-01

    A three-dimensional elastoplastic phase-field model is developed, using the Finite Element Method (FEM), for modeling the athermal beta to omega phase transformation in Zr–Nb alloys by including plastic deformation and strain hardening of the material. The microstructure evolution during athermal transformation as well as under different stress states, e.g. uni-axial tensile and compressive, bi-axial tensile and compressive, shear and tri-axial loadings, is studied. The effects of plasticity, stress states and the stress loading direction on the microstructure evolution as well as on the mechanical properties are studied. The input data corresponding to a Zr – 8 at.% Nb alloy are acquired from experimental studies as well as by using the CALPHAD method. Our simulations show that the four different omega variants grow as ellipsoidal shaped particles. Our results show that due to stress relaxation, the athermal phase transformation occurs slightly more readily in the presence of plasticity compared to that in its absence. The evolution of omega phase is different under different stress states, which leads to the differences in the mechanical properties of the material. The variant selection mechanism, i.e. formation of different variants under different stress loading directions, is also nicely captured by our model.

  17. Educational transformation in upper-division physics: The Science Education Initiative model, outcomes, and lessons learned

    NASA Astrophysics Data System (ADS)

    Chasteen, Stephanie V.; Wilcox, Bethany; Caballero, Marcos D.; Perkins, Katherine K.; Pollock, Steven J.; Wieman, Carl E.

    2015-12-01

    [This paper is part of the Focused Collection on Upper Division Physics Courses.] In response to the need for a scalable, institutionally supported model of educational change, the Science Education Initiative (SEI) was created as an experiment in transforming course materials and faculty practices at two institutions—University of Colorado Boulder (CU) and University of British Columbia. We find that this departmentally focused model of change, which includes an explicit focus on course transformation as supported by a discipline-based postdoctoral education specialist, was generally effective in impacting courses and faculty across the institution. In CU's Department of Physics, the SEI effort focused primarily on upper-division courses, creating high-quality course materials, approaches, and assessments, and demonstrating an impact on student learning. We argue that the SEI implementation in the CU Physics Department, as compared to that in other departments, achieved more extensive impacts on specific course materials, and high-quality assessments, due to guidance by the physics education research group—but with more limited impact on the departmental faculty as a whole. We review the process and progress of the SEI Physics at CU and reflect on lessons learned in the CU Physics Department in particular. These results are useful in considering both institutional and faculty-led models of change and course transformation.

  18. Phase-field modeling of the beta to omega phase transformation in Zr–Nb alloys

    DOE PAGES Beta

    Yeddu, Hemantha Kumar; Lookman, Turab

    2015-05-01

    A three-dimensional elastoplastic phase-field model is developed, using the Finite Element Method (FEM), for modeling the athermal beta to omega phase transformation in Zr–Nb alloys by including plastic deformation and strain hardening of the material. The microstructure evolution during athermal transformation as well as under different stress states, e.g. uni-axial tensile and compressive, bi-axial tensile and compressive, shear and tri-axial loadings, is studied. The effects of plasticity, stress states and the stress loading direction on the microstructure evolution as well as on the mechanical properties are studied. The input data corresponding to a Zr – 8 at.% Nb alloy are acquired from experimental studies as well as by using the CALPHAD method. Our simulations show that the four different omega variants grow as ellipsoidal shaped particles. Our results show that due to stress relaxation, the athermal phase transformation occurs slightly more readily in the presence of plasticity compared to that in its absence. The evolution of omega phase is different under different stress states, which leads to the differences in the mechanical properties of the material. The variant selection mechanism, i.e. formation of different variants under different stress loading directions, is also nicely captured by our model.

  19. Phase-field modeling of the beta to omega phase transformation in Zr-Nb alloys

    SciTech Connect

    Yeddu, Hemantha Kumar; Lookman, Turab

    2015-03-17

    A three-dimensional elastoplastic phase-field model is developed, using the finite element method (FEM), for modeling the athermal beta to omega phase transformation in Zr–Nb alloys by including plastic deformation and strain hardening of the material. The microstructure evolution during athermal transformation as well as under different stress states, e.g. uni-axial tensile and compressive, bi-axial tensile and compressive, shear and tri-axial loadings, is studied. The effects of plasticity, stress states and the stress loading direction on the microstructure evolution as well as on the mechanical properties are studied. The input data corresponding to a Zr – 8 at% Nb alloy are acquired from experimental studies as well as by using the CALPHAD method. Our simulations show that the four different omega variants grow as ellipsoidal shaped particles. Our results show that due to stress relaxation, the athermal phase transformation occurs slightly more readily in the presence of plasticity compared to that in its absence. The evolution of omega phase is different under different stress states, which leads to the differences in the mechanical properties of the material. As a result, the variant selection mechanism, i.e. formation of different variants under different stress loading directions, is also nicely captured by our model.

  20. The ecological model web concept: A consultative infrastructure for researchers and decision makers using a Service Oriented Architecture

    NASA Astrophysics Data System (ADS)

    Geller, Gary

    2010-05-01

    Rapid climate and socioeconomic changes may be outrunning society's ability to understand, predict, and respond to change effectively. Decision makers such as natural resource managers want better information about what these changes will be and how the resources they are managing will be affected. Researchers want better understanding of the components and processes of ecological systems, how they interact, and how they respond to change. Nearly all these activities require computer models to make ecological forecasts that can address "what if" questions. However, despite many excellent models in ecology and related disciplines, there is no coordinated model system (that is, a model infrastructure) that researchers or decision makers can consult to gain insight on important ecological questions or help them make decisions. While this is partly due to the complexity of the science, to the lack of critical observations, and to other issues, limited access to and sharing of models and model outputs is a factor as well. An infrastructure that increased access to and sharing of models and model outputs would benefit researchers, decision makers of all kinds, and modelers. One path to such a "consultative infrastructure" for ecological forecasting is called the Model Web, a concept for an open-ended system of interoperable computer models and databases communicating using a Service Oriented Architecture (SOA). Initially, it could consist of a core of several models, perhaps made interoperable retroactively, and then it could grow gradually as new models or databases were added. Because some models provide basic information of use to many other models, such as simple physical parameters, these "keystone" models are of particular importance in a model web. In the long run, a model web would not be rigidly planned and built; instead, like the World Wide Web, it would grow largely organically, with limited central control, within a framework of broad goals and data exchange

  1. Modeling phase transformation behavior during thermal cycling in the heat-affected zone of stainless steel welds

    SciTech Connect

    Vitek, J.M.; Iskander, Y.S.; David, S.A.

    1995-12-31

    An implicit finite-difference analysis was used to model the diffusion-controlled transformation behavior in a ternary system. The present analysis extends earlier work by examining the transformation behavior under the influence of multiple thermal cycles. The analysis was applied to the Fe-Cr-Ni ternary system to simulate the microstructural development in austenitic stainless steel welds. The ferrite-to-austenite transformation was studied in an effort to model the response of the heat-affected zone to multiple thermal cycles experienced during multipass welding. Results show that under some conditions, a transformation "inertia" exists that delays the system's response when changing from cooling to heating. Conditions under which this "inertia" is most influential were examined. It was also found that under some conditions, the transformation behavior does not follow the equilibrium behavior as a function of temperature. Results also provide some insight into the effect of composition distribution on transformation behavior.
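
    The building block behind such an analysis is an implicit finite-difference step for the diffusion equation. The sketch below shows a backward-Euler step for a single 1-D solute with zero-flux boundaries; it is a simplification of the ternary, moving-interface problem treated in the record above, and all numbers are illustrative.

    ```python
    import numpy as np

    def implicit_diffusion_step(c, D, dx, dt):
        """One backward-Euler step of dc/dt = D d2c/dx2 with zero-flux boundaries.

        Solves (I - dt*D*L) c_new = c_old, where L is the 1-D Laplacian stencil.
        """
        n = c.size
        r = D * dt / dx**2
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 1.0 + 2.0 * r
            if i > 0:
                A[i, i - 1] = -r
            if i < n - 1:
                A[i, i + 1] = -r
        # Zero-flux (Neumann) boundaries: one-sided stencil at the two end nodes.
        A[0, 0] = 1.0 + r
        A[-1, -1] = 1.0 + r
        return np.linalg.solve(A, c)

    # Example: a Cr-like concentration step relaxing across a ferrite/austenite-type interface.
    x = np.linspace(0.0, 10e-6, 101)            # 10 micrometres, 101 nodes
    c = np.where(x < 5e-6, 0.25, 0.18)          # step profile in mole fraction (illustrative)
    for _ in range(200):
        c = implicit_diffusion_step(c, D=1e-16, dx=x[1] - x[0], dt=1.0)
    ```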

  2. Generic Distributed Simulation Architecture

    SciTech Connect

    Booker, C.P.

    1999-05-14

    A Generic Distributed Simulation Architecture is described that allows a simulation to be automatically distributed over a heterogeneous network of computers and executed with very little human direction. A prototype Framework is presented that implements the elements of the Architecture and demonstrates the feasibility of the concepts. It provides a basis for a future, improved Framework that will support legacy models. Because the Framework is implemented in Java, it may be installed on almost any modern computer system.

  3. Collaborative Proposal: Transforming How Climate System Models are Used: A Global, Multi-Resolution Approach

    SciTech Connect

    Estep, Donald

    2013-04-15

    Despite the great interest in regional modeling for both weather and climate applications, regional modeling is not yet at the stage that it can be used routinely and effectively for climate modeling of the ocean. The overarching goal of this project is to transform how climate models are used by developing and implementing a robust, efficient, and accurate global approach to regional ocean modeling. To achieve this goal, we will use theoretical and computational means to resolve several basic modeling and algorithmic issues. The first task is to develop techniques for transitioning between parameterized and high-fidelity regional ocean models as the discretization grid transitions from coarse to fine regions. The second task is to develop estimates for the error in scientifically relevant quantities of interest that provide a systematic way to automatically determine where refinement is needed in order to obtain accurate simulations of dynamic and tracer transport in regional ocean models. The third task is to develop efficient, accurate, and robust time-stepping schemes for variable spatial resolution discretizations used in regional ocean models of dynamics and tracer transport. The fourth task is to develop frequency-dependent eddy viscosity finite element and discontinuous Galerkin methods and study their performance and effectiveness for simulation of dynamics and tracer transport in regional ocean models. These four projects share common difficulties and will be approached using a common computational and mathematical toolbox. This is a multidisciplinary project involving faculty and postdocs from Colorado State University, Florida State University, and Penn State University along with scientists from Los Alamos National Laboratory. The completion of the tasks listed within the discussion of the four sub-projects will go a long way towards meeting our goal of developing superior regional ocean models that will transform how climate system models are used.

  4. Transformation-induced plasticity in high-temperature shape memory alloys: a one-dimensional continuum model

    NASA Astrophysics Data System (ADS)

    Sakhaei, Amir Hosein; Lim, Kian-Meng

    2016-07-01

    A constitutive model based on isotropic plasticity consideration is presented in this work to model the thermo-mechanical behavior of high-temperature shape memory alloys. In high-temperature shape memory alloys (HTSMAs), both martensitic transformation and rate-dependent plasticity (creep) occur simultaneously at high temperatures. Furthermore, transformation-induced plasticity is another deformation mechanism during martensitic transformation. All these phenomena are considered as dissipative processes to model the mechanical behavior of HTSMAs in this study. The constitutive model was implemented for one-dimensional cases, and the results have been compared with experimental data from thermal cycling test for actuator applications.

  5. L-Py: An L-System Simulation Framework for Modeling Plant Architecture Development Based on a Dynamic Language

    PubMed Central

    Boudon, Frédéric; Pradal, Christophe; Cokelaer, Thomas; Prusinkiewicz, Przemyslaw; Godin, Christophe

    2012-01-01

    The study of plant development requires increasingly powerful modeling tools to help understand and simulate the growth and functioning of plants. In the last decade, the formalism of L-systems has emerged as a major paradigm for modeling plant development. Previous implementations of this formalism were based on static languages, i.e., languages that require explicit definition of variable types before using them. These languages are often efficient but involve quite a lot of syntactic overhead, thus restricting the flexibility of use for modelers. In this work, we present an adaptation of L-systems to the Python language, a popular and powerful open-license dynamic language. We show that the use of dynamic language properties makes it possible to enhance the development of plant growth models: (i) by keeping a simple syntax while allowing for high-level programming constructs, (ii) by making code execution easy and avoiding compilation overhead, (iii) by allowing a high level of model reusability and the building of complex modular models, and (iv) by providing powerful solutions to integrate MTG data-structures (that are a common way to represent plants at several scales) into L-systems, thus enabling the use of a wide spectrum of computer tools based on MTGs developed for plant architecture. We then illustrate the use of L-Py in real applications to build complex models or to teach plant modeling in the classroom. PMID:22670147
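
    For readers unfamiliar with the formalism, the sketch below is a minimal plain-Python L-system rewriter: a string of modules is rewritten in parallel by production rules at each derivation step. It is not the L-Py API, which adds parametric modules, Python-based rule bodies and MTG integration; the rules here are invented.

    ```python
    # Minimal, plain-Python L-system rewriter (a sketch; L-Py itself offers a far richer
    # rule syntax, parametric modules and MTG integration).
    RULES = {
        "A": "I[+A][-A]IA",   # apex produces an internode, two lateral apices and a new apex
        "I": "II",            # internodes elongate
    }

    def rewrite(axiom: str, steps: int) -> str:
        """Apply the production rules in parallel to every symbol, 'steps' times."""
        s = axiom
        for _ in range(steps):
            s = "".join(RULES.get(symbol, symbol) for symbol in s)
        return s

    # Example: three derivation steps from a single apex.
    print(rewrite("A", 3))
    ```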

  6. L-py: an L-system simulation framework for modeling plant architecture development based on a dynamic language.

    PubMed

    Boudon, Frédéric; Pradal, Christophe; Cokelaer, Thomas; Prusinkiewicz, Przemyslaw; Godin, Christophe

    2012-01-01

    The study of plant development requires increasingly powerful modeling tools to help understand and simulate the growth and functioning of plants. In the last decade, the formalism of L-systems has emerged as a major paradigm for modeling plant development. Previous implementations of this formalism were based on static languages, i.e., languages that require explicit definition of variable types before using them. These languages are often efficient but involve quite a lot of syntactic overhead, thus restricting the flexibility of use for modelers. In this work, we present an adaptation of L-systems to the Python language, a popular and powerful open-license dynamic language. We show that the use of dynamic language properties makes it possible to enhance the development of plant growth models: (i) by keeping a simple syntax while allowing for high-level programming constructs, (ii) by making code execution easy and avoiding compilation overhead, (iii) by allowing a high level of model reusability and the building of complex modular models, and (iv) by providing powerful solutions to integrate MTG data-structures (that are a common way to represent plants at several scales) into L-systems, thus enabling the use of a wide spectrum of computer tools based on MTGs developed for plant architecture. We then illustrate the use of L-Py in real applications to build complex models or to teach plant modeling in the classroom. PMID:22670147

  7. Application of Distribution Transformer Thermal Life Models to Electrified Vehicle Charging Loads Using Monte-Carlo Method: Preprint

    SciTech Connect

    Kuss, M.; Markel, T.; Kramer, W.

    2011-01-01

    Concentrated purchasing patterns of plug-in vehicles may result in localized distribution transformer overload scenarios. Prolonged periods of transformer overloading cause service life decrements and, in worst-case scenarios, result in tripped thermal relays and residential service outages. This analysis will review distribution transformer load models developed in the IEC 60076 standard, and apply the model to a neighborhood with plug-in hybrids. Residential distribution transformers are sized such that night-time cooling provides thermal recovery from heavy load conditions during the daytime utility peak. It is expected that PHEVs will primarily be charged at night in a residential setting. If not managed properly, some distribution transformers could become overloaded, leading to a reduction in transformer life expectancy, thus increasing costs to utilities and consumers. A Monte-Carlo scheme simulated each day of the year, evaluating 100 load scenarios as it swept through the following variables: number of vehicles per transformer, transformer size, and charging rate. A general method for determining the expected transformer aging rate will be developed, based on the energy needs of plug-in vehicles loading a residential transformer.
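
    A minimal Monte-Carlo sketch of the kind of study described above is given below. It assumes an IEC 60076-7-style relative ageing rate of 2^((θh − 98)/6) for non-thermally-upgraded paper and a toy steady-state hot-spot model; the transformer rating, household load shape, charger power and charging windows are invented placeholders, not values from the report.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def relative_ageing_rate(theta_h):
        """IEC 60076-7 style relative ageing rate for non-thermally-upgraded paper
        (equal to 1.0 at a hot-spot temperature of 98 C)."""
        return 2.0 ** ((theta_h - 98.0) / 6.0)

    def hotspot_temperature(load_pu, theta_ambient=25.0, d_theta_rated=65.0, exponent=1.6):
        """Toy steady-state hot-spot model: ambient plus a rise scaling with load^exponent."""
        return theta_ambient + d_theta_rated * load_pu ** exponent

    def simulate_day(n_vehicles, kva_rating=25.0, base_kw=12.0, charger_kw=3.3, trials=100):
        """Monte-Carlo estimate of the day-average relative ageing rate for one transformer."""
        hours = np.arange(24)
        ageing = []
        for _ in range(trials):
            load = np.full(24, base_kw, dtype=float)
            load += 4.0 * np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)   # evening household peak
            starts = rng.integers(17, 23, size=n_vehicles)            # random charging start hours
            for s in starts:
                load[s:s + 4] += charger_kw                           # roughly 4 h charge window
            theta = hotspot_temperature(load / kva_rating)
            ageing.append(relative_ageing_rate(theta).mean())
        return float(np.mean(ageing))

    for n in (0, 2, 4, 6):
        print(n, "vehicles -> mean relative ageing rate", round(simulate_day(n), 2))
    ```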

  8. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

    PubMed Central

    Zhao, Guoliang; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of the proposed controllers lies in having a dynamic adaptive control gain that establishes a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, the efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897

  9. Problems in mechanistic theoretical models for cell transformation by ionizing radiation

    SciTech Connect

    Chatterjee, A.; Holley, W.R.

    1991-10-01

    A mechanistic model based on yields of double strand breaks has been developed to determine the dose response curves for cell transformation frequencies. At its present stage the model is applicable to immortal cell lines and to various qualities (X-rays, Neon and Iron) of ionizing radiation. Presently, we have considered four types of processes which can lead to activation phenomena: (1) point mutation events on a regulatory segment of selected oncogenes, (2) inactivation of suppressor genes, through point mutation, (3) deletion of a suppressor gene by a single track, and (4) deletion of a suppressor gene by two tracks.

  10. Note: Tesla transformer damping

    NASA Astrophysics Data System (ADS)

    Reed, J. L.

    2012-07-01

    Unexpected heavy damping in the two winding Tesla pulse transformer is shown to be due to small primary inductances. A small primary inductance is a necessary condition of operability, but is also a refractory inefficiency. A 30% performance loss is demonstrated using a typical "spiral strip" transformer. The loss is investigated by examining damping terms added to the transformer's governing equations. A significant alteration of the transformer's architecture is suggested to mitigate these losses. Experimental and simulated data comparing the 2 and 3 winding transformers are cited to support the suggestion.

  11. Phase-field modelling and synchrotron validation of phase transformations in martensitic dual-phase steel

    SciTech Connect

    Thiessen, R.G.; Sietsma, J.; Palmer, T.A.; Elmer, J.W.; Richardson, I.M.

    2008-11-12

    A thermodynamically based method to describe the phase transformations during heating and cooling of martensitic dual-phase steel has been developed, and in situ synchrotron measurements of phase transformations have been undertaken to support the model experimentally. Nucleation routines are governed by a novel implementation of the classical nucleation theory in a general phase-field code. Physically-based expressions for the temperature-dependent interface mobility and the driving forces for transformation have also been constructed. Modelling of martensite was accomplished by assuming a carbon supersaturation of the body-centred-cubic ferrite lattice. The simulations predict kinetic aspects of the austenite formation during heating and ferrite formation upon cooling. Simulations of partial austenitising thermal cycles predicted peak and retained austenite percentages of 38.2% and 6.7%, respectively, while measurements yielded peak and retained austenite percentages of 31.0% and 7.2% ({+-}1%). Simulations of a complete austenitisation thermal cycle predicted the measured complete austenitisation and, upon cooling, a retained austenite percentage of 10.3% while 9.8% ({+-}1%) retained austenite was measured.

  12. Development of the Architectural Simulation Model for Future Launch Systems and its Application to an Existing Launch Fleet

    NASA Technical Reports Server (NTRS)

    Rabadi, Ghaith

    2005-01-01

    A significant portion of lifecycle costs for launch vehicles is generated during the operations phase. Research indicates that operations costs can account for a large percentage of the total life-cycle costs of reusable space transportation systems. These costs are largely determined by decisions made early during conceptual design. Therefore, operational considerations are an important part of the vehicle design and concept analysis process that needs to be modeled and studied early in the design phase. However, this is a difficult and challenging task due to uncertainties of operations definitions, the dynamic and combinatorial nature of the processes, the lack of analytical models, and the scarcity of historical data during the conceptual design phase. Ultimately, NASA would like to know the best mix of launch vehicle concepts that would meet the missions' launch dates at minimum cost. To answer this question, we first need to develop a model to estimate the total cost, including the operational cost, to accomplish this set of missions. In this project, we have developed and implemented a discrete-event simulation model using ARENA (a simulation modeling environment) to determine this cost assessment. Discrete-event simulation is widely used in modeling complex systems, including transportation systems, due to its flexibility and ability to capture the dynamics of the system. The simulation model accepts manifest inputs including the set of missions that need to be accomplished over a period of time, the clients (e.g., NASA or DoD) who wish to transport the payload to space, the payload weights, and their destinations (e.g., International Space Station, LEO, or GEO). A user of the simulation model can define an architecture of reusable or expendable launch vehicles to achieve these missions. Launch vehicles may belong to different families, where each family may have its own set of resources, processing times, and cost factors. The goal is to capture the required
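
    The record above uses ARENA; as a language-neutral illustration of the discrete-event idea, the sketch below runs a tiny event-queue simulation of launch-vehicle processing on a single pad. The vehicle families, processing times and cost figures are invented placeholders.

    ```python
    import heapq
    import random

    random.seed(1)

    # Minimal discrete-event sketch of launch-vehicle ground processing (illustrative only;
    # the study described above used an ARENA model driven by a mission manifest).
    FLEET = {
        # name: (mean processing time [days], cost per flight [$M])  -- invented placeholders
        "HeavyLifter": (90, 140.0),
        "MediumLifter": (45, 70.0),
    }

    def simulate(missions):
        """Process (due_day, vehicle) missions on one pad; return total cost, lateness, launches."""
        events = [(due, "arrive", vehicle) for due, vehicle in missions]
        heapq.heapify(events)
        pad_free_at = 0.0
        total_cost = total_late = 0.0
        launches = 0
        while events:
            time, kind, vehicle = heapq.heappop(events)
            if kind == "arrive":
                start = max(time, pad_free_at)                 # wait for the single pad resource
                mean_t, cost = FLEET[vehicle]
                duration = random.expovariate(1.0 / mean_t)    # stochastic processing time
                pad_free_at = start + duration
                total_cost += cost
                total_late += max(0.0, pad_free_at - time)     # days past the requested date
                heapq.heappush(events, (pad_free_at, "launch", vehicle))
            else:                                              # "launch": vehicle leaves the system
                launches += 1
        return total_cost, total_late, launches

    missions = [(180 * k, "HeavyLifter" if k % 3 == 0 else "MediumLifter") for k in range(1, 15)]
    print(simulate(missions))
    ```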

  13. Decoding leaf hydraulics with a spatially explicit model: principles of venation architecture and implications for its evolution.

    PubMed

    McKown, Athena D; Cochard, Hervé; Sack, Lawren

    2010-04-01

    Leaf venation architecture is tremendously diverse across plant species. Understanding the hydraulic functions of given venation traits can clarify the organization of the vascular system and its adaptation to environment. Using a spatially explicit model (the program K_leaf), we subjected realistic simulated leaves to modifications and calculated the impacts on xylem and leaf hydraulic conductance (K(x) and K(leaf), respectively), important traits in determining photosynthesis and growth. We tested the sensitivity of leaves to altered vein order conductivities (1) in the absence or (2) presence of hierarchical vein architecture, (3) to major vein tapering, and (4) to modification of vein densities (length/leaf area). The K(x) and K(leaf) increased with individual vein order conductivities and densities; for hierarchical venation systems, the greatest impact was from increases in vein conductivity for lower vein orders and increases in density for higher vein orders. Individual vein order conductivities were colimiting of K(x) and K(leaf), as were their densities, but the effects of vein conductivities and densities were orthogonal. Both vein hierarchy and vein tapering increased K(x) relative to xylem construction cost. These results highlight the important consequences of venation traits for the economics, ecology, and evolution of plant transport capacity. PMID:20178410

  14. First evaluation of the CPU, GPGPU and MIC architectures for real time particle tracking based on Hough transform at the LHC

    NASA Astrophysics Data System (ADS)

    Halyo, V.; LeGresley, P.; Lujan, P.; Karpusenko, V.; Vladimirov, A.

    2014-04-01

    Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on multi-core Intel i7-3770 and Intel Xeon E5-2697v2 CPUs, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi 7120 coprocessor. Preliminary time performance will be presented.
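
    As a minimal illustration of the algorithmic core (not the CMS/ATLAS implementation or its GPU/Xeon Phi ports), the sketch below accumulates straight-line votes in a (θ, ρ) Hough space over 2-D hit positions using numpy; real trigger code votes in detector-specific track-parameter space.

    ```python
    import numpy as np

    def hough_lines(hits_x, hits_y, n_theta=180, n_rho=200, rho_max=120.0):
        """Vote each (x, y) hit into a (theta, rho) accumulator for straight-line candidates."""
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        rho_edges = np.linspace(-rho_max, rho_max, n_rho + 1)
        acc = np.zeros((n_theta, n_rho), dtype=np.int32)
        cos_t, sin_t = np.cos(thetas), np.sin(thetas)
        for x, y in zip(hits_x, hits_y):
            rho = x * cos_t + y * sin_t                   # one rho value per theta bin
            idx = np.digitize(rho, rho_edges) - 1
            ok = (idx >= 0) & (idx < n_rho)
            acc[np.arange(n_theta)[ok], idx[ok]] += 1
        return acc, thetas, rho_edges

    # Toy event: hits from one straight "track" plus random noise hits.
    rng = np.random.default_rng(3)
    xs = np.linspace(0, 100, 20)
    ys = 0.5 * xs + 10 + rng.normal(0, 0.5, xs.size)
    noise = rng.uniform(0, 100, (30, 2))
    acc, thetas, rho_edges = hough_lines(np.r_[xs, noise[:, 0]], np.r_[ys, noise[:, 1]])
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    print("best theta (deg):", np.degrees(thetas[i]), "votes:", acc[i, j])
    ```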

  15. Position-specific isotope modeling of organic micropollutants transformation through different reaction pathways.

    PubMed

    Jin, Biao; Rolle, Massimo

    2016-03-01

    The degradation of organic micropollutants occurs via different reaction pathways. Compound specific isotope analysis is a valuable tool to identify such degradation pathways in different environmental systems. We propose a mechanism-based modeling approach that provides a quantitative framework to simultaneously evaluate concentration as well as bulk and position-specific multi-element isotope evolution during the transformation of organic micropollutants. The model explicitly simulates position-specific isotopologues for those atoms that experience isotope effects and, thereby, provides a mechanistic description of isotope fractionation occurring at different molecular positions. To demonstrate specific features of the modeling approach, we simulated the degradation of three selected organic micropollutants: dichlorobenzamide (BAM), isoproturon (IPU) and diclofenac (DCF). The model accurately reproduces the multi-element isotope data observed in previous experimental studies. Furthermore, it precisely captures the dual element isotope trends characteristic of different reaction pathways as well as their range of variation consistent with observed bulk isotope fractionation. It was also possible to directly validate the model capability to predict the evolution of position-specific isotope ratios with available experimental data. Therefore, the approach is useful both for a mechanism-based evaluation of experimental results and as a tool to explore transformation pathways in scenarios for which position-specific isotope data are not yet available. PMID:26708763
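
    The bulk-isotope bookkeeping that such pathway comparisons rest on can be sketched with the classical Rayleigh model, R/R0 = f^(α−1). The snippet below compares two hypothetical pathways with different (invented) carbon enrichment factors; the paper's position-specific treatment, which tracks individual isotopologues, is not reproduced here.

    ```python
    import numpy as np

    def rayleigh_delta(delta0_permil, f, epsilon_permil):
        """Bulk isotope signature of the remaining substrate when a fraction f is left.

        Classical Rayleigh model: R/R0 = f**(alpha - 1), with alpha = 1 + epsilon/1000.
        """
        alpha = 1.0 + epsilon_permil / 1000.0
        r0 = 1.0 + delta0_permil / 1000.0
        r = r0 * f ** (alpha - 1.0)
        return (r - 1.0) * 1000.0

    # Two hypothetical pathways with different carbon enrichment factors (illustrative values).
    f = np.linspace(1.0, 0.05, 50)            # remaining fraction of the micropollutant
    for label, eps_c in (("pathway A", -2.0), ("pathway B", -6.0)):
        d13c = rayleigh_delta(-28.0, f, eps_c)
        print(label, "delta13C at 95% degradation:", round(d13c[-1], 1), "permil")
    ```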

  16. An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles

    NASA Technical Reports Server (NTRS)

    Brown, Clifford; Bridges, James

    2003-01-01

    Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.

  17. Linear models for assessing mechanisms of sperm competition: the trouble with transformations.

    PubMed

    Eggert, Anne-Katrin; Reinhardt, Klaus; Sakaluk, Scott K

    2003-01-01

    Although sperm competition is a pervasive selective force shaping the reproductive tactics of males, the mechanisms underlying different patterns of sperm precedence remain obscure. Parker et al. (1990) developed a series of linear models designed to identify two of the more basic mechanisms: sperm lotteries and sperm displacement; the models can be tested experimentally by manipulating the relative numbers of sperm transferred by rival males and determining the paternity of offspring. Here we show that tests of the model derived for sperm lotteries can result in misleading inferences about the underlying mechanism of sperm precedence because the required inverse transformations may lead to a violation of fundamental assumptions of linear regression. We show that this problem can be remedied by reformulating the model using the actual numbers of offspring sired by each male, and log-transforming both sides of the resultant equation. Reassessment of data from a previous study (Sakaluk and Eggert 1996) using the corrected version of the model revealed that we should not have excluded a simple sperm lottery as a possible mechanism of sperm competition in decorated crickets, Gryllodes sigillatus. PMID:12643579

  18. In search of improving the numerical accuracy of the k - ɛ model by a transformation to the k - τ model

    NASA Astrophysics Data System (ADS)

    Dijkstra, Yoeri M.; Uittenbogaard, Rob E.; van Kester, Jan A. Th. M.; Pietrzak, Julie D.

    2016-08-01

    This study presents a detailed comparison between the k - ɛ and k - τ turbulence models. It is demonstrated that the numerical accuracy of the k - ɛ turbulence model can be improved in geophysical and environmental high Reynolds number boundary layer flows. This is achieved by transforming the k - ɛ model to the k - τ model, so that both models use the same physical parametrisation. The models therefore only differ in numerical aspects. A comparison between the two models is carried out using four idealised one-dimensional vertical (1DV) test cases. The advantage of a 1DV model is that it is feasible to carry out convergence tests with grids containing 5 to several thousand vertical layers. It is shown that the k - τ model is more accurate than the k - ɛ model in stratified and non-stratified boundary layer flows for grid resolutions between 10 and 100 layers. The k - τ model also shows more monotonic convergence behaviour than the k - ɛ model. The price for the improved accuracy is about 20% more computational time for the k - τ model, which is due to additional terms in the model equations. The improved performance of the k - τ model is explained by the linearity of τ in the boundary layer and the better defined boundary condition.
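
    The variable change behind the comparison is simply τ = k/ε. The sketch below uses the standard equilibrium log-layer relations k = u*²/√Cμ and ε = u*³/(κy) to show that τ is linear in the wall distance while ε varies over orders of magnitude, which is the property credited above for the better near-wall numerical behaviour; the 1DV model itself is not reproduced.

    ```python
    import numpy as np

    # Standard log-layer relations (equilibrium boundary layer):
    #   k   = u_tau**2 / sqrt(C_mu)
    #   eps = u_tau**3 / (kappa * y)
    # so the transformed variable tau = k / eps = kappa * y / (sqrt(C_mu) * u_tau)
    # is linear in the wall distance y, whereas eps is strongly nonlinear near the wall.
    C_mu, kappa, u_tau = 0.09, 0.41, 0.05     # u_tau in m/s (illustrative)

    y   = np.linspace(1e-4, 1.0, 10)          # wall distance [m]
    k   = np.full_like(y, u_tau**2 / np.sqrt(C_mu))
    eps = u_tau**3 / (kappa * y)
    tau = k / eps

    print("eps spans %.1e .. %.1e" % (eps.max(), eps.min()))
    print("tau is linear in y:", np.allclose(tau, kappa * y / (np.sqrt(C_mu) * u_tau)))
    ```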

  19. Array CGH data modeling and smoothing in Stationary Wavelet Packet Transform domain

    PubMed Central

    Huang, Heng; Nguyen, Nha; Oraintara, Soontorn; Vo, An

    2008-01-01

    Background Array-based comparative genomic hybridization (array CGH) is a highly efficient technique, allowing the simultaneous measurement of genomic DNA copy number at hundreds or thousands of loci and the reliable detection of local one-copy-level variations. Characterization of these DNA copy number changes is important for both the basic understanding of cancer and its diagnosis. In order to develop effective methods to identify aberration regions from array CGH data, much recent research has focused on both smoothing-based and segmentation-based data processing. In this paper, we propose a stationary wavelet packet transform based approach to smooth array CGH data. Our purpose is to remove CGH noise across the whole frequency range while keeping the true signal, by using a bivariate model. Results In both synthetic and real CGH data, the Stationary Wavelet Packet Transform (SWPT) is the best wavelet transform to analyze the CGH signal across the whole frequency range. We also introduce a new bivariate shrinkage model which captures the relationship between noisy CGH coefficients at two scales of the SWPT. Before smoothing, symmetric extension is applied as a preprocessing step to preserve information at the border. Conclusion We have designed the SWPT and SWPT-Bi methods, which use the stationary wavelet packet transform with hard thresholding and with the new bivariate shrinkage estimator, respectively, to smooth array CGH data. We demonstrate the effectiveness of our approach through theoretical and experimental exploration of a set of array CGH data, including both synthetic data and real data. The comparison results show that our method outperforms the previous approaches. PMID:18831782
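
    A minimal sketch of a bivariate shrinkage rule of the Şendur-Selesnick type is shown below: each detail coefficient is shrunk jointly with its parent-scale coefficient. The surrounding pipeline (stationary wavelet packet decomposition, symmetric extension, reconstruction) is not reproduced, and the child/parent arrays here are crude stand-ins built from finite differences of a toy CGH-like signal.

    ```python
    import numpy as np

    def bivariate_shrink(w_child, w_parent, sigma_noise, sigma_signal):
        """Bivariate shrinkage of the Sendur-Selesnick type: shrink a coefficient jointly
        with its parent-scale coefficient."""
        magnitude = np.sqrt(w_child**2 + w_parent**2)
        threshold = np.sqrt(3.0) * sigma_noise**2 / sigma_signal
        gain = np.maximum(magnitude - threshold, 0.0) / np.maximum(magnitude, 1e-12)
        return gain * w_child

    # Toy CGH-like signal: piecewise-constant copy-number levels plus Gaussian noise.
    rng = np.random.default_rng(7)
    truth = np.concatenate([np.zeros(200), 0.6 * np.ones(100), np.zeros(200)])
    noisy = truth + 0.25 * rng.standard_normal(truth.size)

    # Stand-ins for two adjacent-scale coefficient arrays (a real pipeline would take
    # these from a stationary wavelet packet transform of 'noisy').
    child  = np.diff(noisy, prepend=noisy[0])
    parent = np.convolve(child, np.ones(3) / 3.0, mode="same")
    sigma_n = np.median(np.abs(child)) / 0.6745          # robust (MAD-based) noise estimate
    sigma_s = max(np.sqrt(max(np.var(child) - sigma_n**2, 0.0)), 1e-6)
    denoised_detail = bivariate_shrink(child, parent, sigma_n, sigma_s)
    ```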

  20. Micro CT Analysis of Spine Architecture in a Mouse Model of Scoliosis

    PubMed Central

    Gao, Chan; Chen, Brian P.; Sullivan, Michael B.; Hui, Jasmine; Ouellet, Jean A.; Henderson, Janet E.; Saran, Neil

    2015-01-01

    Objective: Mice homozygous for targeted deletion of the gene encoding fibroblast growth factor receptor 3 (FGFR3−/−) develop kyphoscoliosis by 2 months of age. The first objective of this study was to use high-resolution X-ray imaging to characterize curve progression in vivo and micro CT to quantify spine architecture ex vivo in FGFR3−/− mice. The second objective was to determine if slow release of the bone anabolic peptide parathyroid hormone related protein (PTHrP-1-34) from a pellet placed adjacent to the thoracic spine could inhibit progressive kyphoscoliosis. Materials and methods: Pellets loaded with placebo or PTHrP-1-34 were implanted adjacent to the thoracic spine of 1-month-old FGFR3−/− mice obtained from in-house breeding. X-rays were captured at monthly intervals up to 4 months to quantify curve progression using the Cobb method. High-resolution post-mortem scans of FGFR3−/− and FGFR3+/+ spines, from C5/6 to L4/5, were captured to evaluate the 3D structure, rotation, and micro-architecture of the affected vertebrae. Un-decalcified and decalcified histology were performed on the apical and adjacent vertebrae of FGFR3−/− spines, and the corresponding vertebrae from FGFR3+/+ spines. Results: The mean Cobb angle was significantly greater at all ages in FGFR3−/− mice compared with wild-type mice and appeared to stabilize around skeletal maturity at 4 months. 3D reconstructions of the thoracic spine of 4-month-old FGFR3−/− mice treated with PTHrP-1-34 revealed correction of left/right asymmetry, vertebral rotation, and lateral displacement compared with mice treated with placebo. Histologic analysis of the apical vertebrae confirmed correction of the asymmetry in PTHrP-1-34 treated mice, in the absence of any change in bone volume, and a significant reduction in the wedging of intervertebral disks (IVD) seen in placebo treated mice. Conclusion: Local treatment of the thoracic spine of juvenile FGFR3−/− mice with a bone anabolic
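
    The Cobb measurement itself reduces to the angle between two endplate lines. The sketch below computes it from two pairs of landmark points picked on a radiograph; the coordinates are invented for illustration.

    ```python
    import numpy as np

    def cobb_angle(endplate_upper, endplate_lower):
        """Cobb angle [degrees] between two endplate lines, each given as two (x, y) points."""
        v1 = np.asarray(endplate_upper[1], float) - np.asarray(endplate_upper[0], float)
        v2 = np.asarray(endplate_lower[1], float) - np.asarray(endplate_lower[0], float)
        cos_a = abs(np.dot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

    # Invented landmark coordinates (pixels) for the two end vertebrae of a curve.
    upper = [(112.0, 40.0), (168.0, 52.0)]    # superior endplate of the upper end vertebra
    lower = [(110.0, 180.0), (170.0, 166.0)]  # inferior endplate of the lower end vertebra
    print("Cobb angle:", round(cobb_angle(upper, lower), 1), "degrees")
    ```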